The dark side of the moon is a nickname given to the region of our celestial friend that is never seen from Earth. This is due to a phenomenon called tidal locking: the moon rotates exactly once per orbit, so the same face is permanently pointed towards the Earth as it circles us. This gives the far side the unique property of being perfectly shielded from all Earth-bound communications. Radio waves, light waves, digital signals, none of it reaches the dark side of the moon directly. This is where you have to go in order to stop hearing about the wonders of “AI”.

Headlines and comments proliferate and pontificate about the miracles of both current and pending artificial intelligence systems. How this brand-new technology has seen a recent surge in development, leading to spectacular breakthroughs like passing the Turing test (but only in very limited circumstances). How AI is going to replace artists, and that’s apparently a good thing for some reason? How OpenAI is secretly hiding a sentient AI behind the scenes, or how GPT-5 will have the intelligence of a PhD graduate.

Well, I hate to derail this hype train, but not only are we likely nowhere near a sentient AI, if that’s even possible, it isn’t really artificial intelligence in the strictest sense. It’s much closer to machine learning than Google, Microsoft, or OpenAI care to let on, and here’s why.

In 2016, Lee Sedol, a South Korean Go player who was and is considered one of the best players in the world, if not in history, sat down to play his fourth match against AlphaGo, a Go-playing “AI” developed by Google DeepMind that had been in development since 2014. Lee had lost his previous three matches despite having 18 international titles to his name, and this match seemed to be going the same way.

Go is an ancient Chinese board game believed to be the oldest continuously played board game in the world. It is extremely simple in premise but extremely complex in practice. Each player takes one of two colours of stone, black or white, and the players place them one by one, taking turns, to capture territory or the other player’s stones. To capture something, you have to surround it with your stones, and at the end of the game the player who has captured the most territory wins.
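
To make the capture rule concrete, here is a toy Python sketch (purely illustrative, nothing to do with how AlphaGo or any real engine represents the board) that checks whether a group of stones still has liberties, meaning adjacent empty points; a group with none is captured.

```python
# Toy illustration of Go's capture rule: a connected group of stones is
# captured when it has no liberties (no adjacent empty points).
# A simplified sketch, not how any real Go engine represents the board.

def group_and_liberties(board, row, col):
    """Flood-fill the group containing (row, col); return its stones and liberties."""
    colour = board[row][col]
    size = len(board)
    group, liberties, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == ".":        # empty point = a liberty
                    liberties.add((nr, nc))
                elif board[nr][nc] == colour:   # same colour = part of the group
                    stack.append((nr, nc))
    return group, liberties

# Tiny 5x5 example: the white stone at (1, 1) is surrounded by black on all
# four sides, so it has zero liberties and would be captured.
board = [list(row) for row in [
    ".B...",
    "BWB..",
    ".B...",
    ".....",
    ".....",
]]
group, libs = group_and_liberties(board, 1, 1)
print(f"group size {len(group)}, liberties {len(libs)}")  # group size 1, liberties 0
```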

AlphaGo played black and Lee played white, which traditionally moves second in Go. The game started much the same way as before, with Lee playing conservatively and very much on the back foot against AlphaGo’s single-minded directive of placing one of two coloured rocks on a 19×19 board. To clarify, AlphaGo had 1,920 CPUs, 280 GPUs, and Google’s own proprietary TPUs (application-specific integrated circuits built for machine learning, used to help run the neural networks) to accomplish this task. Everyone watching, including Lee himself, thought the same pattern of slowly losing to the machine was emerging, until, that is, move 78.

You see, the way AlphaGo worked was to assign a statistical probability to a stone being placed on any one of the available spots on the board. This was based on around a hundred thousand games played by high-ranking Go players, and then on millions of games it played against itself, learning from its own mistakes and improving each time. The higher the probability of a stone being placed on any particular spot, the surer the machine was of that happening and the better it could respond with its own moves, with the ultimate goal of winning the game.
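
As a rough illustration of that idea, and only that (a toy sketch, not DeepMind’s actual architecture; the candidate moves and scores below are invented), you can think of the policy side as a function that scores every candidate point and turns those scores into a probability distribution:

```python
import math

# Toy sketch of a "policy" over board points: every candidate point gets a
# score, the scores become probabilities via a softmax, and the engine
# favours high-probability moves. The real AlphaGo used deep neural networks
# plus tree search; this only illustrates the probability-assignment idea.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidate_moves = ["D4", "Q16", "K10", "L11", "C3"]   # hypothetical points
scores = [2.1, 1.9, 0.4, -3.5, 1.2]                   # made-up network outputs

for point, p in sorted(zip(candidate_moves, softmax(scores)), key=lambda x: -x[1]):
    print(f"{point}: {p:.4f}")

# A move like L11 here gets a vanishingly small probability, the sketch
# equivalent of AlphaGo rating Lee's wedge at roughly 1 in 10,000.
```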

On move 78 it was Lee’s turn, and he placed a stone on L11, right in between two black stones, creating a wedge and stopping the machine from consolidating more territory in the centre of the board. AlphaGo followed up with K10 and suffered an instant 8% drop in win rate, the biggest drop so far in any of the games. By move 180 AlphaGo’s win rate had dropped to 20% and it resigned. Lee had won.

AlphaGo had predicted that Lee had a 1-in-10,000 chance of placing a stone on L11 at move 78. It was later dubbed the divine move. For a brief moment, Lee Sedol out-thought and outmatched a computational monster with only one purpose, winning at Go, using nothing more than his human ingenuity. He beat teams of mathematicians, computer scientists, and AI experts, years of research, and several tonnes of raw processing power with a 1-in-10,000 move. To the machine it was divine; to Lee it was all he could see.

The reason this story is so important is that it perfectly demonstrates the fundamental flaw in our so-called artificially intelligent systems. Both AlphaGo and ChatGPT may run on complex neural networks, but don’t be fooled: these are essentially statistical probability calculators. They can’t think, and they aren’t true artificial intelligence.

In the same way that AlphaGo assigns a probability to a square being the optimal place to put a stone, ChatGPT assigns a probability to words. The higher the probability, the more likely that word is the “correct” one to respond with to whatever the user said in the first place. So, for example, if I asked ChatGPT “what is the meaning of the word defunct”, it will respond with the definition of defunct, because in its training data the words “what is the meaning of the word defunct” most often preceded the words that describe defunct’s definition. It does not think, it does not learn or understand, it does not know what the state of being defunct actually is. It only knows statistics and probability, based on the data it was fed.
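
Stripped of the enormous neural network that does the scoring, the core loop looks something like this toy sketch (the vocabulary and probabilities are invented; a real model assigns a probability to tens of thousands of tokens at every step):

```python
import random

# Toy sketch of next-word prediction. A real language model produces a
# probability for every token in its vocabulary at every step; here the
# "model" is a hard-coded table of invented probabilities.

def next_word_distribution(context):
    # Hypothetical output after a context like "Defunct means":
    return {
        "no": 0.82,        # as in "no longer existing or functioning"
        "not": 0.09,
        "dead": 0.05,
        "banana": 0.0001,  # junk the sampler will almost never pick
    }

def generate(prompt, steps=1):
    words = []
    for _ in range(steps):
        dist = next_word_distribution(prompt + " " + " ".join(words))
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return prompt + " " + " ".join(words)

print(generate("Defunct means"))   # almost always "Defunct means no" in this toy
```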

If you are still not convinced, let me put it another way by explaining the famous Chinese room thought experiment. In 1980 the philosopher John Searle proposed that a machine or programme that appears to think could just be an illusion, and that when you get down to it, it doesn’t truly understand anything. The thought experiment goes a bit like this. Say I place myself in a sealed room with a fluent Chinese speaker outside. In front of me is a very large book containing millions of Chinese phrases, each with an “if-then” instruction: “if” I receive message “x”, “then” respond with message “y”. The fluent Chinese speaker outside could pass in any message they wanted and I, using the book, would be able to respond with perfect accuracy every time without being able to speak or read a single word of Chinese.
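
In code, the room is nothing more than a lookup table, which is exactly why it falls apart the moment a phrase is missing from the book (a toy sketch; the phrases are arbitrary placeholders):

```python
import random

# The Chinese room as a lookup table: "if" the incoming message matches a key,
# "then" reply with the stored response. No understanding is involved, and
# there is no graceful behaviour when the message is not in the book.

rule_book = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice today."
}

def respond(message):
    if message in rule_book:              # "if" I receive message x ...
        return rule_book[message]         # ... "then" respond with message y
    # Phrase not in the book: the person in the room can only guess.
    return random.choice(list(rule_book.values()))

print(respond("你好吗？"))      # sensible answer, zero comprehension
print(respond("你养猫吗？"))    # not in the book, so essentially a random reply
```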

However, say the Chinese speaker outside passed in a phrase that wasn’t in the book. I would have no choice but to respond at random, and the person outside would know instantly that I couldn’t speak Chinese, since I had responded with a nonsensical answer. Anyone who has seen ChatGPT get something so fundamentally, so confidently wrong knows what I’m talking about, and therein lies the fundamental flaw with so-called “AI” systems.

When faced with a novel situation that isn’t in its training data, like a phrase not appearing in my book, the AI breaks down and “hallucinates”. That’s what we saw with AlphaGo on move 78: it wasn’t expecting that move at that time in that game because nothing like it was in its training data, so it didn’t know how to respond and it lost. ChatGPT does the same thing when you ask it something very specific or obtuse, or even, occasionally, something not so complicated, like when it tells you there are only two r’s in strawberry.

Now, how image-generating “AI” works is slightly different. Say I wanted DALL-E to make an image of a tulip for me. It starts from what it has learned from its reference images of tulips, which could number in the millions. It then generates an image of random noise, which is an image where differently coloured pixels of varying intensity are scattered randomly across a canvas. It then sweeps over the image, “de-noising” it to find the tulip inside, based on the patterns learned from the reference images in its data set. After a number of passes it spits out a “new” image of a tulip, in theory.
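
A heavily simplified sketch of that denoising loop might look like the following (purely illustrative: the hypothetical denoise_step function stands in for the trained network, which is the part that actually carries everything learned from the training images):

```python
import random

# Heavily simplified sketch of diffusion-style generation: start from pure
# noise and repeatedly "denoise" towards something matching the prompt.
# The hypothetical denoise_step() stands in for the trained neural network;
# no real model works on a 4x4 greyscale grid like this.

SIZE = 4
STEPS = 10

def random_noise():
    return [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

def denoise_step(image, prompt, strength):
    # Stand-in for the network: nudge every pixel towards a fixed "target"
    # value representing what the model associates with the prompt.
    target = 1.0 if prompt == "tulip" else 0.0   # laughably crude placeholder
    return [[p + strength * (target - p) for p in row] for row in image]

image = random_noise()
for _ in range(STEPS):
    image = denoise_step(image, "tulip", strength=0.3)

print([round(p, 2) for p in image[0]])   # pixels have drifted towards the "tulip"
```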

However, in the same way that AlphaGo doesn’t truly understand what a group of stones is, DALL-E doesn’t know what a tulip actually is. Sure, it has millions of reference images, but does it know that there are over one hundred different species? Can it tell what angle each image was taken or painted at? What about time of day, cloud cover, or environment, which all affect the way light bounces and shines on the tulip and can give the same flower a completely different look with each change? That’s why you can end up with weird artifacts, especially in the case of hands, which machines still don’t get right. Even today machines can generate hands with more than five fingers, or several hands on one arm. And if you ask the machine to make a novel image which doesn’t directly correspond to any of its training data, it will generally make a horrific abomination that looks closer to an amalgamation of images than one cohesive image. Something that should be very simple for a multi-billion-euro “AI” machine to do, yet it gets routinely outperformed by your average human artist.

This is despite the fact that humans don’t rely on millions of reference images to paint a tulip. We fundamentally understand what a tulip is, and we can produce multiple angles, in different styles, under an array of lighting conditions, of the same flower, with an internal consistency not possible with so-called “AI”. From this we can extrapolate that what machines are doing when they create these images is fundamentally different from the creative process going on inside a human’s brain. Which, by the way, no one fully understands yet, and if someone claims they do, or that “AI” is doing the same thing a human does, they should be questioned thoroughly, because they are either lying, not telling the full truth, or don’t realise that they don’t know.

That fundamental difference brings up another very uncomfortable question for “AI” companies: the question of copyright. Let’s say, for example, you don’t agree with me, and you believe generative “AI” makes new images which don’t infringe on anyone’s copyright. Even if you believe that, there are still two huge problems. Let’s start with an analogy. Say I, without telling anyone, make perfect scans of a hundred paintings in a gallery. I then take those scans and give them to an artist who works for me, and I tell him to produce art in their styles but to change the subject. So instead of a clock, do a chair. Finally, I take these “new” paintings and sell them for profit (feeling uncomfortable yet?). Now replace the human I employ with a machine, so no one gets paid for their work. Is that ok? Is that really the world we want to live in? Let me be clear: I’m not asking whether or not this process constitutes copyright infringement, I’m asking you whether or not you want to live in this world. History is full of examples of laws being changed when new technology comes out, why stop now?

There is also the question of power consumption: generating one “AI” image can take roughly as much energy as fully charging a smartphone. And according to former OpenAI employee Leopold Aschenbrenner, by 2028 the most advanced “AI” models will run on 10 GW of power at a price tag of hundreds of billions of dollars each. This is on top of the already inflated cost of “AI”, which Sequoia Capital, a venture capital firm based in the US, forecasts at around 600 billion dollars this year. That puts the price in the same league as the entire US military budget. You know, the one with 12 aircraft carriers, the world’s largest and second-largest air forces (the second largest being its navy’s), and 2.07 million personnel. Yeah, that one. Meanwhile, a human artist can run on a bit of toast and maybe a cup of tea.

Someone should really tell these guys that maybe we should be putting that money, and sparing those greenhouse gas emissions, towards a more important problem, like global warming, for example. Or maybe I’m wrong, and a perfectly generated “AI” cat video is a far more noble accomplishment than saving the planet.

Getting back to the question of “AI” generation and copyright, the first problem is the artwork being stolen (for lack of a better term) by “AI” companies to be used in their training data. The second problem is that it is then sold for a profit by those same “AI” companies. So if you’re an artist out there and you suspect that your art has been used by one of these “AI” companies without permission, I suggest you take action. Whatever you decide to do, for the love of God, don’t go down without a fight.

Speaking of not going down without a fight, remember our friend Lee Sedol? Well, according to some commentators, his move 78 wasn’t as spectacular as it was first perceived. Michael Redmond, a US-based professional Go player, was one of the English-speaking commentators at the AlphaGo match back in 2016. In a post-match analysis, he pointed out the many ways that AlphaGo could have responded to Lee’s wedge, making his move ineffective. What Lee did was “he chose the one move where the result seemed to be uncertain”, a novel approach.

However, “AI” Go players continued to develop and seemed to snuff out any hope of a human player reclaiming the throne. Not when all you have is your 3-pound brain versus several tonnes of machinery that can practice the equivalent of hundreds of years’ worth of Go playing in a single month. And that would be the end of the story if it weren’t for human ingenuity, that plucky little thing that let us survive an extinction event, an ice age, and two world wars.

KataGo is a superhuman-level “AI” Go player that was released in 2019. It is at least an order of magnitude more powerful than AlphaGo. It would stand to reason, then, that not a single human player, not even the best among us, would ever be able to beat it, at least not in our lifetimes. However, what if I told you that not only is that false, it took just four years for an amateur-level human player to beat KataGo?

As most ingenious plans go, it was surprisingly simple. FAR AI, a California-based research firm, first designed a programme to probe KataGo for weaknesses. Then, after playing millions of games against KataGo, the programme suggested a number of tactics to an amateur-level human player by the name of Kellin Pelrine. Kellin, without any computer assistance, was then able to use the tactics he had learned from the programme to beat KataGo decisively, winning 14 games to 1. The winning tactic was dubbed the double sandwich: Kellin would surround stones in the middle of the board and allow KataGo to surround them, but he would then surround KataGo once more, which KataGo would simply ignore for seemingly no reason, allowing Kellin to win the game. Any human player with an average ability at the game would instantly see the trap and adapt to it, but not this so-called “AI”. Even when FAR AI gave KataGo defences against this tactic, it still lost.
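
As a caricature of that probing process (a toy sketch, nothing like FAR AI’s actual adversarial setup; the tactic names and loss rates are invented), all you really need is a frozen opponent and a lot of games:

```python
import random

# Caricature of weakness-probing: play a frozen opponent many times with
# different candidate tactics and keep whichever one it mishandles most.
# The "opponent" here is a trivial stand-in with an invented blind spot.

TACTICS = ["standard_play", "double_sandwich", "corner_rush"]

# Hypothetical loss rates of the frozen model against each tactic.
LOSS_RATE = {"standard_play": 0.001, "double_sandwich": 0.93, "corner_rush": 0.002}

def frozen_opponent_loses_to(tactic):
    return random.random() < LOSS_RATE[tactic]

def probe(games_per_tactic=100_000):
    wins = {t: 0 for t in TACTICS}
    for tactic in TACTICS:
        for _ in range(games_per_tactic):
            if frozen_opponent_loses_to(tactic):
                wins[tactic] += 1
    return wins

print(probe())   # the blind spot stands out once you play enough games

# Because the frozen model never updates after training, the same exploit
# keeps working, which is the whole point: the weakness is baked in.
```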

Now you might be asking yourself: well, they just used an even more powerful AI to beat KataGo and then copied it. Well, it’s not that simple. The programme was specifically designed to find faults with KataGo, and therefore didn’t need to be more intelligent; it just needed to play a few million games to find those weaknesses. And just to hammer home this point, KataGo and other so-called “AI” will never adapt to these weaknesses. They are permanently baked in once the model is complete, because these systems can’t think or adapt; they are fundamentally no different from machine learning algorithms and statistical probability calculators. Just like AlphaGo, ChatGPT, and DALL-E, they all have flaws which can be exploited. But most importantly, they have no concept of death.

Let me lay out an absurd scenario for a moment. Say mankind as a whole had to defeat a superhuman “AI” Go player to survive an extinction event. I would suggest it would take no longer than two hours to do. First, there are a little over eight billion humans on the planet, roughly seven billion of whom are over the age of ten and can reasonably be taught the basic rules of Go. In the time it takes to play one game of Go, roughly an hour, the humans would have played seven billion games against this “AI”. Now remember, it only took a few million games to find KataGo’s weakness. So potentially thousands of people, not one or two, thousands, would have stumbled across this “AI”’s weakness. They then share this revelation online for everyone to see, and now the humans have evolved and adapted. By the time the second game is played, billions of people across the globe are beating this “superhuman” “AI”, and the day is won for mankind.
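
For what it’s worth, here is the back-of-the-envelope arithmetic behind that scenario, using the rough figures above:

```python
# Back-of-the-envelope version of the scenario above, using the rough figures
# from the text: about seven billion people old enough to learn the rules,
# one game each per hour, and only a few million games needed to stumble on a
# baked-in weakness.

players = 7_000_000_000        # humans over the age of ten, roughly
games_per_hour = players * 1   # one game each in the first hour
games_needed = 5_000_000       # order of magnitude FAR AI needed against KataGo

print(f"games played in the first hour: {games_per_hour:,}")
print(f"independent discoveries of the weakness: {games_per_hour // games_needed:,}")
# Roughly 1,400 separate discoveries within the first hour alone.
```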

I’m reminded of the penultimate scene from Django Unchained, after Jamie Foxx’s character Django kills most of the white slavers. He confronts Stephen, played by Samuel L. Jackson, a black man who sided with the white slavers and helped manage the slaves. Django says the following iconic line: “76 years, Stephen. How many n*****s you think you see come and go? Seven thousand? Eight thousand? Nine thousand? Nine thousand nine hundred and ninety-nine? Every single word that came out of Calvin Candie’s mouth was nothing but horse shit, but he was right about one thing: I am that one n****r in ten thousand.” A moment later Django lights a fuse and walks out the door.

I am, however, acutely aware that maybe one day a true AI will emerge. One that can adapt, one that can truly understand what a tulip is, what a group of stones is, or indeed fully grasp the concept of death. Maybe. Or maybe, just maybe, life is a little bit more complicated than placing one of two coloured stones on a 19×19 board.

David Toolan.

Picture credits, The Jupiter Moon conjunction photo by David Toolan, Go board game photo by Elena Popova on Unsplash, Tulip Photo by Pixabay, and cowboy silhouette photo by Cemrecan Yurtman on Unsplash.
