

Yeah I mean the taxpayers have literally already paid for all of both SpaceX and Starlink. The public paid for it, the public should own it.
There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.
Incorrect. You might want to take an information theory class before speaking on subjects like this.
I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago.
Lmao yup totally, it’s not like this type of research currently gets huge funding at universities and institutions or anything like that 😂 it’s a dead research field because it’s already “settled”. (You’re wrong 🤭)
LLMs are just tools, not sentient or verging on sentient
Correct. No one claimed they are “sentient” (you actually mean “sapient”, not “sentient”, but it’s fine because people commonly mix these terms up. Sentience is about the physical senses: if you can respond to stimuli from your environment, you’re sentient; if you can “I think, therefore I am”, you’re sapient). And no, LLMs are not sapient either, and sapience has nothing to do with neural networks’ ability to mathematically reason or use logic; you’re just moving the goalpost. But at least you moved it far enough to be actually correct?
LOL you didn’t really make the point you thought you did. It isn’t an “improper comparison” (it’s called a false equivalency FYI), because there isn’t a real distinction between information and this thing you just made up called “basic action on data”, but anyway have it your way:
Your comment is still exactly like saying an audio pipeline isn’t really playing music because it’s actually just doing basic math.
To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with “grab it”), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.
Instead, we found that Claude plans ahead. Before starting the second line, it began “thinking” of potential on-topic words that would rhyme with “grab it”. Then, with these plans in mind, it writes a line to end with the planned word.
🙃 actually read the research?
Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn’t post a relevant or complete thought
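
To be concrete about what “implemented with matrix operations” means, here’s a toy sketch (my own illustration, not from any of the linked research; the shapes and random weights are arbitrary placeholders). Naming the primitive operation says nothing about what the composed function can or can’t compute:

```python
# Toy sketch: one feed-forward layer of a neural network is "just" matrix math.
# The shapes and random weights below are arbitrary placeholders for illustration.
import numpy as np

def layer(x, W, b):
    # y = ReLU(W @ x + b): a matrix multiply, a vector add, and an element-wise max
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)        # input vector
W = rng.standard_normal((16, 8))  # weights (learned in a real network)
b = rng.standard_normal(16)       # biases (learned in a real network)
print(layer(x, W, b))             # stacking many of these layers is the whole model
```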
Your comment is like saying an audio file isn’t really music because it’s just a series of numbers.
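
To make the analogy concrete, here’s a toy example (my own, using only Python’s standard library; the 440 Hz tone and the tone.wav filename are just placeholders): the “music” really is nothing but a series of numbers until something plays it back.

```python
# Toy example: an audio file is literally a series of numbers.
# One second of a 440 Hz sine tone, written as 16-bit samples to a WAV file.
import math
import struct
import wave

RATE = 44100
samples = [math.sin(2 * math.pi * 440 * n / RATE) for n in range(RATE)]

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```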
You’re confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it cannot reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behavior like planning ahead.
I don’t want to brigade, so I’ll put my thoughts here. The linked comment is making the same mistake about self-preservation that people make when they ask an LLM to “show its work” or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.
Just as it’s not actually an AI assistant, but is trained and prompted to output text matching what an AI assistant would be expected to respond with, if it’s expected to pursue self-preservation, then it will output text that matches that. Its output is always “fake”
That doesn’t mean there isn’t a real potential element of self-preservation, though, but you’d need to dig and trace through the network to show it, not use the text output.
No, you’re misunderstanding the findings. The research does show that LLMs do not explain their actual reasoning when asked, which makes sense and is expected: they do not have access to their inner workings and generate a response that “sounds” right, but tracing their internal logic shows they operate differently from what they claim when asked. You can’t ask an LLM to explain its own reasoning. But the article shows how they’ve made progress with tracing under the hood, and the surprising results they found about how it is able to do things like plan ahead, which defeats the misconception that it is just “autocomplete”
https://www.anthropic.com/research/tracing-thoughts-language-model for one, the exact article OP was asking for
it’s completing the next word.
Facts disagree, but you’ve decided to live in a reality that matches your biases despite real evidence, so whatever 👍
It’s true that LLMs aren’t “aware” of what internal steps they are taking, so asking an LLM how it reasoned out an answer will just output text that statistically sounds right based on its training set, but to say something like “they can never reason” is provably false.
It’s obvious that you have a bias and desperately want reality to confirm it, but there’s been significant research and progress in tracing the internals of LLMs, which shows logic, planning, and reasoning.
EDIT: lol you can downvote me but it doesn’t change evidence based research
It’d be impressive if the environmental toll of making the matrices and using them weren’t critically bad.
Developing a AAA video game has a higher carbon footprint than training an LLM, and running inference uses significantly less power than playing that same video game.
They’re required to do this and are following a script 🙃
Good try lol
Good thing that’s not how evidence or the justice system works 😝
Your username is ironic lol
There’s no reason to deny invalid evidence
Factually, they illegally searched his bag without a warrant at the McDonald’s, repacked the bag, put the bag in a police vehicle and drove to the police station without bodycam, and then turned the bodycam back on to search the bag again and instantly “find” the ghost gun in his bag, which, having no serial number, makes it conveniently impossible to prove it was not planted.
The motion goes on to state that once that officer’s body cam footage resumes, it shows her immediately re-opening and closing the backpack compartments she already searched and then opening the front compartment of the backpack “as if she was specifically looking for something. Instantly, she ‘found’ a handgun in the front compartment.”
nobody wants to wait 5-60 minutes for their package to install unless they’re making modifications to it
glances at my Gentoo system uhhh, do USE flags count as making modifications?
American exceptionalism definitely sucks, but this is not an example of American exceptionalism. The source is an article from an American magazine, published for an American audience.