Computers have been beating the best human Go players since 2016. The Go world champion retired in part because AI is “an entity that cannot be defeated.”

But a human just trounced one of the world’s best Go AIs 14 games to 1: arstechnica.com/information-te

I think this news story is more interesting than it might first appear (I don't know all the details yet, so grain of salt). It isn't just a gaming curiosity; it points to a fundamental flaw in “deep learning” approaches in general.
1/

Ars Technica · Man beats machine at Go in human victory over AI
Amateur exploited weakness in systems that have otherwise dominated grandmasters.

The Go AI was trained by feeding it a huge number of Go games. It built a model based on w̶h̶a̶t̶ ̶h̶u̶m̶a̶n̶s̶ ̶d̶o̶. CORRECTION: KataGo is trained by playing against itself; the model input is “past AI games.”

The human beat it with a strategy so obvious, so easily countered, that no human (and no sensible AI) would ever try it in a serious game. So it never appeared in the training data, and the AI never learned to respond to it.

(Basically, the human forms a conspicuous giant capture ring while distracting the AI with tactical battles the AI knows how to counter.)
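(If it helps to see the shape of the problem, here's a toy self-play loop. It has nothing to do with KataGo's real code; the game, the moves, and the preference table are all invented for the illustration. The point is just that every training example comes from games the policy plays against itself, so a move it never makes, and never faces, generates zero training signal.)

```python
import random
from collections import defaultdict

MOVES = ["attack", "defend", "ring"]   # "ring" stands in for the weird human exploit

def play_game(prefs):
    """Both players sample from the same preference table; return the moves used."""
    return [random.choices(MOVES, weights=[prefs[m] for m in MOVES])[0]
            for _ in range(10)]

prefs = defaultdict(lambda: 1.0)
prefs["ring"] = 0.0                    # the self-play policy never tries this move

seen = defaultdict(int)
for _ in range(1000):                  # "training data" = the policy's own games
    for move in play_game(prefs):
        seen[move] += 1

print(dict(seen))   # "ring" never shows up, so nothing is ever learned about it
```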

#ai
2/

Why is this interesting?

The recent eye-popping advances in AI have come from models that scan huge datasets, either generated by humans (e.g. “all competitive Go games” or “all the text we could find on the web”) or by computer (“the AI plays itself a billion times”), and imitate the patterns in those datasets: no underlying model of meaning, no experience to check against, no underlying theory formation, just parroting.
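(To make “imitating the patterns in a dataset” concrete, here's a deliberately dumb sketch: a word-level bigram model. The tiny corpus is made up. It only ever replays transitions it has seen, which is enough to produce fluent-looking output and nothing like understanding.)

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which word follows which in the "training data".
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:            # nothing like this in the data: the model is stuck
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))     # plausible-sounding word salad, stitched from seen patterns
print(generate("zebra"))   # a word outside the data gets no continuation at all
```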
3/

These systems produce such striking results that it's natural to ask whether our brains are any different. Are we also just fancy pattern mimics?

Would an LLM gain human-like intelligence if only we had more processing power?

This result suggests the answer is no. The tactic the AI missed would be painfully obvious to a human player. There's still something our brains do that these AIs don't: theorizing, generalizing, reasoning through the unfamiliar.
4/

As usual on Ars, there are actually some good comments.

This person wisely reminds us that the history of AI is littered with bold predictions that human-like AI is just around the corner, and every time, we realized that we'd failed to understand what the hard part even was. The Marvin Minsky quote here is eye-popping:
arstechnica.com/information-te
5/

But does this matter? Sure, the AI did a faceplant on some bizarro strategy that would never fool a competent human. So what? It’s just a board game.

OK, what if it’s a self-driving car?

Have you ever encountered a traffic situation that was just totally bizarre, but had a common-sense solution like “wait” or “just go around”? What would an AI do in that situation?

Think of the reports of self-driving Teslas suddenly swerving or accelerating straight into an obvious crash.
6/

Or as this comment remarks: what if it’s an autonomous killbot? (arguably a superset of the previous item, I know) arstechnica.com/information-te

Several comments point out the parallels to the recent story about Marines defeating an AI with Jim-Carrey-style nonsense antics that bore no relationship to the training data: arstechnica.com/information-te

The linked article: taskandpurpose.com/news/marine
7/

This is probably what’s going on with the hilarious ChatGPT faceplants making the rounds on social media.

People try to fool GPT with esoteric questions, but those are easy for it: if anybody anywhere on the web already answered the question, no problem — and making it esoteric just narrows the search space.

But give it an arbitrary three-digit addition problem, and there may be no specific example out there to match. And GPT can't turn all the addition examples it has seen into a generalized theory of how to do addition.
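(A rough way to see the distinction, with made-up numbers and no claim about GPT's internals: a lookup table that has memorized a pile of addition examples, versus the general rule.)

```python
import random

random.seed(0)
# "Training data": 50,000 random three-digit addition examples, memorized verbatim.
examples = {(a, b): a + b
            for a, b in ((random.randint(100, 999), random.randint(100, 999))
                         for _ in range(50_000))}

def memorized_add(a, b):
    return examples.get((a, b))    # None whenever this exact pair was never seen

def rule_add(a, b):
    return a + b                   # the general procedure covers every pair

covered = sum(memorized_add(a, b) is not None
              for a in range(100, 1000) for b in range(100, 1000))
print(f"memorized examples cover {covered / 810_000:.1%} of three-digit problems")
print(rule_add(417, 588))          # the rule has no such gaps
```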
8/

Matt McIrvin

@inthehands I recall the Bing one faceplanted on not recognizing that a movie's release date was in the past, even though it could tell you today's correct date if you asked. And then it doubled down with seemingly angry language when challenged on the point.

@mattmcirvin @inthehands My favourite part is that, once it started acting arrogant and obtuse, it decided that "the date is actually 2022" was the most likely next token. The mistake it made about the movie's release date damaged its ability to output the correct date.

@wizzwizz4 @inthehands It's an interesting problem for this kind of approach--you have certain categories of text in the corpus describing specific events in time, that were appropriate things to say at the time that text was written, but it's now possible to deduce that the text is obsolete. But only if you have an understanding of how time works.

@mattmcirvin @wizzwizz4 The lack of embodied experience as a frame of reference is what undoes a lot of these AIs, even just as thought experiments. Many have argued (I think persuasively) that attempts to sever embodiment from intelligence are nonsense, that the latter as we know it requires the former.

@inthehands @wizzwizz4 I could maybe imagine a mind "embodied" in a virtual world very unlike ours; it would probably develop into a very otherworldly intelligence. But these language models don't even have that--their world is just language interactions severed from direct experience of any of the things the language is referencing.