• just_another_person@lemmy.world · ↑58 · 3 days ago

    Remember when everybody was making “smart” toasters and fridges and shit with cameras or WiFi for absolutely no reason?

    This is that all over again.

    Nobody needs “AI” in their water kettles, dryers, or dildos.

    • GeeDubHayduke@lemmy.dbzer0.com · ↑2 · 2 days ago

      “This is just plain fuckin’ stupid. Your neighbor gets a dildo that plays ‘O Come, All Ye Faithful’ and you wanna get one too!”

    • InternetCitizen2@lemmy.world · ↑6 ↓1 · 3 days ago

      But if I don’t have a toilet AI how will I remember what I had to eat the other day for less than $4.99/mo?

    • anus@lemmy.world (OP) · ↑1 ↓4 · 2 days ago

      This is a pretty good take imo

      Like AI, IoT is an important and lasting technology

      But too many businesses and products jumped on a misguided bandwagon to pull stupid, uninformed VC money

  • Phen@lemmy.eco.br · ↑55 · 3 days ago

    AI is not the new NFT, but it’s also not the new Internet. It’s the new touchscreen: amazing in some contexts, but forced onto every other.

  • owenfromcanada@lemmy.ca · ↑26 ↓1 · 3 days ago

    Developers are resentful toward AI for the same reason they resented blockchain: it becomes a buzzword that every middle manager is convinced will improve productivity, and it gets forced on teams whether it’s actually helpful or not.

    I work on safety-critical code. AI is useless here, but we have to “use” it to appease clueless shareholders.

    • Lka1988@lemmy.dbzer0.com · ↑1 · 2 days ago

      What would happen if you all collectively put your foot down with management and insisted on zero “AI code”, given such critical applications?

      • owenfromcanada@lemmy.ca · ↑1 · 2 days ago

        I’m a senior with a good boss, so I pretty much just ignore it. And fortunately, at least in my company, most people have done the same (especially with the safety-critical stuff). But management still has a way of making your life miserable when you stand your ground on this kind of thing, so it’s also common to just tell them some bullshit and go about your job.

  • AstralPath@lemmy.ca · ↑23 · 3 days ago

    Developers all love to preen about code. They worry LLMs lower the “ceiling” for quality. Maybe. But they also raise the “floor”.

    And this is how the human element of this industry dies. This dev is the last of a dying breed, the senior dev. He’s also loading more bullets into the gun that’s pointed at the heart of the role that got him to where he is in the first place.

    You don’t get to become a junior dev if that role is occupied by AI, and you don’t ever get to ascend to senior dev unless you start as a junior dev.

    For as analytical as this person seems to be, he has a massive blind spot about the path he himself trod to get where he is. He’s pulling the ladder up behind him and condemning the people on the path behind him to find another way or give up entirely.

    AI is brain rot. It’s actively and aggressively atrophying humanity’s ability to reason and problem solve.

    • anus@lemmy.world (OP) · ↑1 ↓3 · 3 days ago

      If this dev doesn’t do it, the next one will

      This dev is analytical enough to understand basic incentive modeling and game theory. Capitalism is a race to the bottom, no less now than it has always been.
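
      That “incentives” point is just a dominant-strategy game; a toy payoff matrix makes it concrete (all numbers invented for illustration, not from the article):

      ```python
      # Two competing devs/firms choose whether to adopt AI tooling.
      # The payoffs are made up; the structure is the point: whatever
      # the other side does, "adopt" pays better, so everyone adopts
      # even though (hold, hold) would be collectively better.
      payoffs = {  # (my_choice, their_choice) -> my payoff
          ("hold", "hold"): 3,    # quality preserved, nobody undercut
          ("hold", "adopt"): 0,   # I get undercut on speed and cost
          ("adopt", "hold"): 4,   # I undercut them
          ("adopt", "adopt"): 1,  # race to the bottom
      }

      for theirs in ("hold", "adopt"):
          best = max(("hold", "adopt"), key=lambda mine: payoffs[(mine, theirs)])
          print(f"If they {theirs}, my best response is {best}")
      # Prints "adopt" both times: a plain prisoner's dilemma.
      ```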

  • Beej Jorgensen@lemmy.sdf.org · ↑16 · 3 days ago

    This guy’s argument is that he’s a 10xer because he’s using AI effectively, i.e. just proofreading its output and deleting the comments. (Also, why hire juniors when you can get the same work for $20/mo?)

    I think this is a losing strategy unless all senior devs never retire and are immortal. (Or unless AGI happens, in which case the world economy will collapse and who cares about strategy.)

    It looks like what’s happening is that way fewer companies are willing to invest in juniors now, leading to falling enrollment in university, leading to a shortage of seniors, leading to very high dev pay, leading to increased enrollment. Eventually.
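
    A throwaway sketch of that pipeline lag (every number here is invented; only the delay structure matters):

    ```python
    # Toy model: cutting junior hiring today only shows up as a senior
    # shortage years later, after which the market overcorrects.
    YEARS_TO_SENIOR = 6              # assumed junior -> senior lag
    hired = [100] * YEARS_TO_SENIOR  # steady hiring before the cut

    for year in range(12):
        new_seniors = hired[-YEARS_TO_SENIOR]  # cohort hired years ago
        juniors = 20 if year < 6 else 120      # cut now, overcorrect later
        hired.append(juniors)
        print(f"year {year:2d}: juniors hired {juniors:3d}, new seniors {new_seniors:3d}")
    ```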

  • technocrit@lemmy.dbzer0.com · ↑14 ↓2 · edited · 3 days ago

    “AI” is not the new NFT because “AI” doesn’t even exist. It’s a far bigger and far worse grift. Sure, some dummies wasted their money on jpgs of monkeys. But nobody used NFTs to murder Palestinian kids, spy on society, steal our data, outlaw regulation, etc. No amount of shitty generated code will redeem that. Ofc this delusional, myopic article has nothing to say about this.

    “AI” is a far worse grift than NFTs.

    • anus@lemmy.world (OP) · ↑1 ↓10 · 3 days ago

      Replace AI with Excel in your argument and repeat it again. Do you see how silly you sound?

  • Kokesh@lemmy.world · ↑3 ↓1 · 2 days ago

    Copilot in Code is hell. It pops up code suggestions after almost every keystroke. Mostly idiotic suggestions.

    • Nighed@feddit.uk · ↑3 ↓1 · 2 days ago

      You can configure that, I think? (The every-keystroke part, not the stupidity.)
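
      For anyone looking: a minimal settings.json sketch for VS Code (this assumes GitHub Copilot; the setting names are worth double-checking against the current docs):

      ```jsonc
      {
        // Stop ghost-text suggestions from appearing on every keystroke;
        // a suggestion can still be triggered manually (Alt+\ by default).
        "editor.inlineSuggest.enabled": false,

        // Or disable Copilot per language instead of globally.
        "github.copilot.enable": {
          "*": true,
          "markdown": false,
          "plaintext": false
        }
      }
      ```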

  • drkt@scribe.disroot.org · ↑19 ↓14 · 3 days ago

    There is a very loud population of AI-haters who don’t hate AI so much as corporate AI, but they don’t know the difference; they can be led to water but won’t drink it.

    If they wanted to stick it to the AI companies, they’d be all in on the open source LLMs. They’re not, though, because they don’t understand it. They’re just angry at this nebulous concept of AI because a few companies pissed in the well. Nobody was upset at AI Dungeon when that came out.
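
    For what it’s worth, “all in on the open source LLMs” can be as small as this sketch (it assumes a local Ollama server on its default port with a model like llama3.2 already pulled; both are assumptions, not anything from this thread):

    ```python
    # Query a locally hosted open-weights model instead of a corporate API.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=json.dumps({
            "model": "llama3.2",
            "prompt": "In one sentence: what is an open-weights model?",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```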

    • taladar@sh.itjust.works · ↑25 ↓3 · 3 days ago

      I can’t speak for others, but I simply hate that people keep telling us how amazing AI is, yet not a single one of them can ever point to a task completed by AI on its own that is actually of decent quality, never mind enough tasks that I would trust AI to do anything without supervision. I mean actual tasks, e.g. PRs on an open source repository or a video showing some realistic everyday task done from start to finish by AI alone, not hand-wavy “I use it every day” abstract claims.

      People like OP seem to be completely oblivious to the fact that reading code takes a lot of time and effort, even when there was an actual human thought process behind it, never mind when it might be totally random garbage. Writing code is also not nearly as much of a bottleneck as AI proponents seem to think it is. Reading code to verify it is not total garbage is actually much more effort than writing the same code yourself. That might not be apparent in a low-expressiveness language like Go or Java, where you read or write many lines for every high-level action the code takes, but it becomes obvious in more expressive languages, where the same action can be expressed closer to 1:1 in lines per high-level action (see the sketch below).
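
      A minimal illustration of that last point, with the same high-level action (“sum the squares of the even numbers”) written both ways; reviewing the verbose form means checking several lines per idea:

      ```python
      def total_verbose(numbers):
          # Low-expressiveness style: one idea spread over many lines,
          # each of which a reviewer has to read and verify.
          total = 0
          for n in numbers:
              if n % 2 == 0:
                  total += n * n
          return total

      def total_expressive(numbers):
          # Expressive style: the idea maps roughly 1:1 onto one line.
          return sum(n * n for n in numbers if n % 2 == 0)

      assert total_verbose(range(10)) == total_expressive(range(10)) == 120
      ```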

      • auraithx@lemmy.dbzer0.com · ↑2 ↓3 · edited · 3 days ago

        Why does it need to complete it on its own?

        With a human reviewer you can still do things a lot quicker. Code is complex, so it’s more the exception to the rule.

        Next time you’re stuck on an issue for hours, stick it into deep research and go for a walk

        • BombOmOm@lemmy.world · ↑16 · edited · 3 days ago

          Any code reviewer will tell you code review is harder than writing code. And the lower the quality of the code, the harder it gets: the more revisions and research the reviewer needs to do to get the final product to a high standard.

          One must consider how humans will interact with this part of the program (often this throws all kinds of spanners in the works), what happens when data comes in differently than expected, how other parts of the system work with this one, etc, etc, etc. Code that merely achieves the stated goals of a ticket can easily produce a dozen tickets later if not done right.
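
          As a toy example (hypothetical ticket and field names, not from any real codebase): the first version “achieves the stated goals of the ticket”, the second is what review exists to catch:

          ```python
          # Hypothetical ticket: "show the user's age in years".
          from datetime import date

          def age_naive(user: dict) -> int:
              # Passes the ticket's happy-path test...
              return date.today().year - user["birth_year"]

          def age_reviewed(user: dict) -> int | None:
              # ...but review asks: missing field? malformed value? a year
              # in the future? Each unasked question is a future ticket.
              birth_year = user.get("birth_year")
              if not isinstance(birth_year, int):
                  return None  # missing or malformed data
              if birth_year > date.today().year:
                  return None  # data came in differently than expected
              return date.today().year - birth_year
          ```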

    • HiddenLayer555@lemmy.ml · ↑8 ↓1 · edited · 2 days ago

      The issue with AI is not that it’s not an impressive technology, it’s that it’s built on stolen data and is incredibly wasteful of resources. It’s a lot like cars in that regard, sure it solves some problems and is more convenient than the alternatives, but its harmful externalities vastly outweigh the benefits.

      LLMs are amazing because they steal the amazing work of humans. Encyclopedias, scientific papers, open source projects, fiction, news, etc. Every time the LLM gets something right, it’s because a human figured it out, their work was published, and some company scraped it without permission. Yet it’s the LLM that gets the credit and not the person. Their very existence is unjust because they profit off humanity’s collective labour and give nothing in return.

      No matter how good the technology is, if it’s made through unethical means, it doesn’t deserve to exist. You’re not entitled to AI any more than content creators are entitled to their intellectual property.

      • auraithx@lemmy.dbzer0.com · ↑2 ↓4 · edited · 2 days ago

        It’s built on publicly available data, the same way that humans learn, by reading and observing what is accessible. Many are also now trained on licensed, opt-in and synthetic data.

        They don’t erase credit; they amplify access to human ideas.

        Training consumes energy, but ongoing queries are vastly cheaper than most industrial processes. You’re assuming it cannot reduce our energy usage by improving efficiency and removing manual labour.

        “If something is made unethically, it shouldn’t exist”

        By that logic, nearly all modern technology (from smartphones to pharmaceuticals) would be invalidated.

        And fyi I am an anarchist and do not think intellectual property is a valid thing to start with.

        I think you’re also underestimating the benefits cars have ushered in; you’d be hard-pressed to find anyone serious who can show that the harm has ‘outweighed their benefits’.

    • auraithx@lemmy.dbzer0.com · ↑3 ↓3 · 3 days ago

      • Self-reported reductions in cognitive effort do not equal reduced critical thinking; efficiency isn’t cognitive decline.
      • The study relies on subjective perception, not objective performance or longitudinal data.
      • Trust in AI may reflect appropriate tool use, not overreliance or diminished judgment.
      • Users often shift critical thinking to higher-level tasks like verifying and editing, not abandoning it.
      • Routine task delegation is intentional and rational, not evidence of skill loss.
      • The paper describes perceptions, but overstates risks without proving causation.