In other news: My toaster makes better toast than my vacuum.
Attempting to badly quote someone on another post: « How can people honestly think a glorified word autocomplete function could possibly understand what a logarithm is? »
You can make external tools available to the LLM and then provide it with instructions for when/how to use them.
So, for example, you’d describe to it that if someone asks it about math or chess, it should generate JSON text according to a given schema, and that JSON is used as the command text to parametrize a script. The script can then e.g. make an API call to Wolfram Alpha or call into Stockfish or whatever.

This isn’t going to be 100% reliable. For example, there’s a decent chance of the LLM fucking up when generating the relatively big JSON you need for describing the entire state of the chessboard, especially with general-purpose LLMs which are configured to introduce some amount of randomness in their output.
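For a rough idea, here’s a minimal sketch of what that plumbing could look like for the chess case. It assumes the python-chess package and a local Stockfish binary; the tool schema and function names are just made up for illustration, not any particular vendor’s API:

```python
# Rough sketch of hooking an LLM up to Stockfish as an external tool.
# Assumes the python-chess package and a "stockfish" binary on PATH;
# the schema/function names below are invented for illustration.
import json
import chess
import chess.engine

# A JSON-schema-style tool description you'd hand to the LLM, so it knows
# that chess questions should produce arguments matching this shape.
CHESS_TOOL = {
    "name": "best_chess_move",
    "description": "Return the engine's best move for a position given in FEN.",
    "parameters": {
        "type": "object",
        "properties": {
            "fen": {"type": "string", "description": "Board state in FEN notation"},
            "think_time": {"type": "number", "description": "Seconds the engine may search"},
        },
        "required": ["fen"],
    },
}

def dispatch_tool_call(raw_json: str) -> str:
    """Parse the JSON the LLM generated and hand it to Stockfish."""
    args = json.loads(raw_json)  # the step a general-purpose LLM can easily fumble
    board = chess.Board(args["fen"])
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    try:
        result = engine.play(board, chess.engine.Limit(time=args.get("think_time", 0.5)))
    finally:
        engine.quit()
    return result.move.uci()

# e.g. if the LLM emits this for the starting position:
llm_output = '{"fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"}'
print(dispatch_tool_call(llm_output))  # something like "e2e4"
```

Using FEN for the board state keeps the JSON small, which helps with exactly the reliability problem above: the fewer tokens the model has to get exactly right, the less often it garbles the position.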
That said, ChatGPT in particular just won’t have instructions like this built in for calling a chess API/program, so for this particular case it’s likely as dumb as auto-complete. It will likely have a math API hooked up, though, so it should be able to calculate a logarithm through such an external tool. Of course, it might still not understand when to use a logarithm, for example.
This is so stupid and pointless…
“Thing not made to solve a specific task fails against thing made for it…”
This is like saying that a really old hand-pushed lawn mower is better than an SUV at cutting grass…
SUVs aren’t marketed as grass mowers. LLMs are marketed as AI with all the answers.
I’d be interested in seeing marketing of ChatGPT as a competitive board game player. Is there any?
Not necessarily that AI is marketed as a competitive board game player, but that AI is marketed as intelligence. This helps illustrate how clueless it really is.
There are plenty of geniuses out there who aren’t great at board games. Using a tool not fit for task is more of an issue with the person using the wrong tool than an issue with the tool itself.
I do get where you’re coming from though. There are definitely people who don’t understand why a ChatBot wouldn’t be good at chess.
Source?
Made people click though, didn’t it.
Is this just because gibbity couldn’t recognize the chess pieces? I’d love to believe this is true otherwise, love my 2600 haha.
At first it blamed its poor performance on the icons used, but then they switched to chess notation and it still failed hard
That is baffling
in other words, a hammer “got absolutely wrecked” by a handsaw in a board-halving competition
When all you have (or you try to convince others that all they need) is a hammer, everything looks like a nail. I guess this shows that it isn’t.
One of those Fisher-Price plastic hammers with the hole in the handle?
What happens if you ask ChatGPT to code you a chess AI though?
It probably consumes as much energy as a family house for a day just to come up with that program. That’s what happens.
In fact, I did a Google search and had no choice but to get an “AI” answer, even though I didn’t want one. Here’s what it says:
Each ChatGPT query is estimated to use around 10 times more electricity than a traditional Google search, with a single query consuming approximately 3 watt-hours, compared to 0.3 watt-hours for a Google search. This translates to a daily energy consumption of over half a million kilowatts, equivalent to the power used by 180,000 US households.
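For what it’s worth, here’s a quick back-of-the-envelope check of those numbers, reading “half a million kilowatts” as kilowatt-hours per day and assuming roughly 29 kWh/day for an average US household (in the ballpark of the EIA figure; that part is my assumption, not from the quote):

```python
# Sanity-checking the quoted AI answer, taking its own numbers at face value.
WH_PER_CHATGPT_QUERY = 3.0      # from the quoted answer
WH_PER_GOOGLE_SEARCH = 0.3      # from the quoted answer
DAILY_TOTAL_KWH = 500_000       # "over half a million", read as kWh/day
KWH_PER_US_HOUSEHOLD_DAY = 29   # assumption: rough EIA average, ~10,500 kWh/year

ratio = WH_PER_CHATGPT_QUERY / WH_PER_GOOGLE_SEARCH
implied_queries = DAILY_TOTAL_KWH * 1000 / WH_PER_CHATGPT_QUERY
implied_households = DAILY_TOTAL_KWH / KWH_PER_US_HOUSEHOLD_DAY

print(f"{ratio:.0f}x a Google search per query")       # 10x, matches the claim
print(f"{implied_queries:,.0f} queries/day implied")   # ~167 million
print(f"{implied_households:,.0f} households covered") # ~17,000, not 180,000
```

If that household assumption is roughly right, the quoted figures don’t quite agree with each other, which feels fitting for this thread.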
It doesn’t work without 200 hours of un-fucking
clop - clop - clop - clop - clop - clop
. . .
*bloop*
. . .
[screen goes black for 20 minutes]
. . .
Hmmmmm.
clop - clop - clop - clop - clop - clop - clop - clop - clop - clop
*bloop*
Hey I don’t mean to ruin your day, but maybe you should Google what you just commented…
There is 100% no chance Google knows what that is