- cross-posted to:
- technology@lemmy.ml
Remember when everybody was making “smart” toasters and fridges and shit with cameras or WiFi for absolutely no reason?
This is that all over again.
Nobody needs “AI” in their water kettles, dryers, or dildos.
“This is just plain fuckin’ stupid. Your neighbor gets a dildo that plays ‘O Come, All Ye Faithful’ and you wanna get one too!”
Dildo-as-a-Service
But if I don’t have a toilet AI how will I remember what I had to eat the other day for less than $4.99/mo?
This is a pretty good take imo
Like AI, IoT is an important and lasting technology
But too many businesses and products jumped on a misguided bandwagon to pull in stupid, uninformed VC money
AI is not the new NFT, but it’s also not the new Internet. It’s the new touchscreen: amazing in some contexts, but forced onto every other.
This is a brilliant take. Whoever designed my car’s screen system can kick rocks
I think this is a very good analogy
man, I’m going to steal that analogy. it’s perfect
Developers are resentful toward AI for the same reason they resented blockchain: it becomes a buzzword that every middle manager is convinced will improve productivity, and it gets forced on you whether it’s actually helpful or not.
I work on safety-critical code. AI is useless here, but we have to “use” it to appease clueless shareholders.
What would happen if you collectively put your foot down on zero “AI code” to management, with such critical applications?
I’m a senior with a good boss, I pretty much just ignore it. And fortunately, at least in my company, most people have done that (especially with the safety critical stuff). But management still has a way of making your life miserable when you stand your ground on this kind of thing, so it’s also common to just tell them some bullshit and go about your job.
Developers all love to preen about code. They worry LLMs lower the “ceiling” for quality. Maybe. But they also raise the “floor”.
And this is how the human element of this industry dies. This dev is the last of a dying breed, the senior dev. He’s also loading more bullets into the gun that’s pointed at the heart of the role that got him to where he is in the first place.
You don’t get to become a junior dev if that role is occupied by AI and you don’t get to ever ascend to senior dev unless you start as a junior dev.
For as analytical as this person seems to be, he has a massive blind spot about the path he himself trod to get where he is. He’s pulling the ladder up behind him and condemning the people coming up behind to find another way or give up entirely.
AI is brain rot. It’s actively and aggressively atrophying humanity’s ability to reason and problem solve.
If this dev doesn’t do it, the next one will
This dev is analytical enough to understand basic incentive modeling and game theory. Capitalism is a race to the bottom no less now than it always was.
This guy’s argument is that he’s a 10xer because he’s using AI effectively, i.e. just proofreading its output and deleting the comments. (Also, why hire juniors when you can get the same work for $20/mo?)
I think this is a losing strategy unless all senior devs never retire and are immortal. (Or unless AGI happens, in which case the world economy will collapse and who cares about strategy.)
It looks like what’s happening is that way fewer companies are willing to invest in juniors now, leading to falling enrollment in university, leading to a shortage of seniors, leading to very high dev pay, leading to increased enrollment. Eventually.
“AI” is not the new NFT because “AI” doesn’t even exist. It’s a far bigger and far worse grift. Sure, some dummies wasted their money on jpgs of monkeys. But nobody used NFTs to murder Palestinian kids, spy on society, steal our data, outlaw regulation, etc. No amount of shitty generated code will redeem that. Ofc this delusional, myopic article has nothing to say about this.
“AI” is a far worse grift than NFTs.
Replace AI with Excel in your argument and repeat it again. Do you see how silly you sound?
Copilot in Code is hell. It pops up code suggestions after almost every keystroke. Idiotic suggestions, mostly.
You can configure that I think? (The every keystroke, not the stupidity)
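For what it’s worth, something like this in VS Code’s settings.json should tame it; the exact key names are assumptions and may vary across Copilot versions:

```jsonc
{
  // Stop suggestions from appearing on every keystroke
  // (you can still trigger them manually with Alt+\).
  "editor.inlineSuggest.enabled": false,
  // Or disable Copilot only for specific languages.
  "github.copilot.enable": {
    "*": true,
    "markdown": false,
    "plaintext": false
  }
}
```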
There is a very loud population of AI-haters who don’t actually hate AI, just corporate AI, but they don’t know the difference. They can be led to water but won’t drink it.
If they wanted to stick it to the AI companies, they’d be all in on the open source LLMs. They’re not, though, because they don’t understand it. They’re just angry at this nebulous concept of AI because a few companies pissed in the well. Nobody was upset at AI Dungeon when that came out.
I can’t speak for others but I simply hate that people keep telling us how amazing AI is yet not a single one of them can ever point to a single task completed by AI on its own that is actually of decent quality, never mind enough tasks that I would trust AI to do anything without supervision. I mean actual tasks, e.g. PRs on an open source repository or a video showing some realistic every-day task done from start to finish by AI alone, not hand-wavy “I use it every day” abstract claims.
People like OP seem completely oblivious to the fact that reading code takes a lot of time and effort, even when an actual human thought process produced it, never mind when it might be totally random garbage. Writing code is also not nearly as much of a bottleneck as AI proponents seem to think; reading code closely enough to verify it isn’t garbage is actually much more effort than writing the same code yourself.

It might not feel that way in a low-expressiveness language like Go or Java, where you read and write many lines for every high-level action you actually need to think about, but it becomes obvious in more expressive languages, where each high-level action maps closer to 1:1 onto lines of code.
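To make the expressiveness point concrete, here’s a small hypothetical example: the same high-level action (“keep the even numbers, squared”) written once so it reads 1:1 with the intent, and once in a low-level style where a reviewer must verify each line separately.

```python
data = [1, 2, 3, 4, 5, 6]

# Expressive style: one line maps 1:1 onto the high-level action,
# so a reviewer verifies one thing.
squares_of_evens = [n * n for n in data if n % 2 == 0]

# Low-level style: the same action spread across several lines,
# each of which the reviewer has to read and check individually.
result = []
for n in data:
    if n % 2 == 0:
        result.append(n * n)

assert squares_of_evens == result == [4, 16, 36]
```

The behavior is identical; what changes is how much text stands between the reviewer and the intent.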
deleted by creator
Why does it need to complete it on its own?
With a human reviewer in the loop you can still do things a lot quicker. Code is complex, so it’s more the exception to that rule.
Next time you’re stuck on an issue for hours, stick it into deep research and go for a walk
You’re listening to hype bros when you should be listening to developers.
Any code reviewer will tell you code review is harder than writing code. And it gets harder and harder the lower the quality the code is; the more revisions and research the code reviewer needs to do to get the final product to a high quality.
One must consider how humans will interact with this part of the program (often this throws all kinds of spanners in the works), what happens when data comes in differently than expected, how other parts of the system work with this one, etc, etc, etc. Code that merely achieves the stated goals of a ticket can easily produce a dozen tickets later if not done right.
Whatever, die mad. I’ll keep being more productive than I’ve ever been
Every developer I know also hates AI
The issue with AI is not that it’s not an impressive technology, it’s that it’s built on stolen data and is incredibly wasteful of resources. It’s a lot like cars in that regard, sure it solves some problems and is more convenient than the alternatives, but its harmful externalities vastly outweigh the benefits.
LLMs are amazing because they steal the amazing work of humans. Encyclopedias, scientific papers, open source projects, fiction, news, etc. Every time the LLM gets something right, it’s because a human figured it out, their work was published, and some company scraped it without permission. Yet it’s the LLM that gets the credit and not the person. Their very existence is unjust because they profit off humanity’s collective labour and give nothing in return.
No matter how good the technology is, if it’s made through unethical means, it doesn’t deserve to exist. You’re not entitled to AI more than content creators are entitled to their intellectual property.
It’s built on publicly available data, the same way that humans learn, by reading and observing what is accessible. Many are also now trained on licensed, opt-in and synthetic data.
They don’t erase credit; they amplify access to human ideas.
Training consumes energy, but the ongoing cost per query is vastly cheaper than most industrial processes. You’re assuming it cannot reduce our energy usage by improving efficiency and removing manual labour.
“If something is made unethically, it shouldn’t exist”
By that logic, nearly all modern technology (from smartphones to pharmaceuticals) would be invalidated.
And fyi I am an anarchist and do not think intellectual property is a valid thing to start with.
I think you’re also underestimating the benefits cars have ushered in; you’d be hard pressed to find anyone serious who can show that the harm has outweighed the benefits
Already showing signs of spreading.
- Self-reported reductions in cognitive effort do not equal reduced critical thinking; efficiency isn’t cognitive decline.
- The study relies on subjective perception, not objective performance or longitudinal data.
- Trust in AI may reflect appropriate tool use, not overreliance or diminished judgment.
- Users often shift critical thinking to higher-level tasks like verifying and editing, not abandoning it.
- Routine task delegation is intentional and rational, not evidence of skill loss.
- The paper describes perceptions, but overstates risks without proving causation.