That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.
If you would like to link some abstracts you find in a DuckDuckGo search that’s fine.
Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.
Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.
And none of these tech companies even pretend that they’ve invented a caring machine that they just haven’t inspired yet. Don’t ascribe further moral and intellectual capabilities to server racks than do the people who advertise them.
I’ve only ever seen the legal “right to be forgotten” concept applied to search engines and news publications. The closest case I know of was in the Delhi High Court, which ordered some social media “news” posts deleted. But that’s far different from having platforms erase things you’ve said and may regret. Email adds yet another degree of separation, since it’s a semi-private form of communication.
I am not speaking authoritatively, so anyone who knows more than me, jump right in.