When typing “Klikauer+AI” into an AI-based website called lexica.art, the picture shown above appeared. Yet, the man in the picture looks nothing like me. This failure leads to an almost inevitable question: how intelligent is artificial intelligence (AI)?
Apparently, the supposedly intelligent AI can’t even find a picture of me on the Internet – something that is actually saved, for example, on my very own website under my very own name.
If one looks, for example, at a rather common concept of what intelligence is – the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, problem-solving, and the ability to perceive or infer information – AI doesn’t seem to do all that well.
Worse for AI, intelligence can also be seen as the ability to retain newly learned information – not misinformation and disinformation, and not made-up stuff generated by ChatGPT.
Intelligence is knowledge applied towards adaptive behaviors within an environment or context. From the standpoint of what intelligence actually is, AI seems to be miles away from actually being intelligent.
Given our understanding of intelligence, the allegedly intelligent AI failed even to find a simple picture of me and to create a reasonably close approximation of me. Worse, the much-famed ChatGPT – in another self-test – got four facts about me wrong. Rather than being intelligent, AI seems to just make stuff up.
Yet, despite this, the apostles of AI – including media capitalism – have a very serious incentive to play down AI’s known limitations. Hyped up by the media, AI has become big business in recent months.
Even through corporate media – and this is quite apart from ChatGPT failing the Turing test and from other rather incapable AI image-creation websites – AI is set to become increasingly dominant in our global online, and not so online, culture. To arrive where it is today, AI had to travel a long way.
While the term “artificial intelligence” may have been used as early as 1894, it was turbo-charged in 1956. Today, AI continues to be popularized and, most recently, sensationalized. In reality, AI has no human-like intelligence. Its limited machine intelligence is radically different from what we know intelligence to be.
The prevailing myth of AI tells us that AI can – or will in the future – do almost everything. Yet, AI’s incredible success rests on narrow applications like board games. It can also predict the next set of sleepwear purchased on Amazon. However, all this brings us not one step closer to general intelligence – an AI system that can do more than play games and sell things.
For the most part, today’s AI is rather successful at applying a simple and rather narrow version of something that might be called functional crypto-intelligence or machine intelligence. Quite apart from this, current AI still benefits from much faster computers than those of previous decades and, most importantly, from fast and cheap access – via the Internet – to lots and lots of data.
While AI is great for those kinds of things, overall it is making only incremental progress towards being more than that. In other words, today’s AI is picking low-hanging fruit. Despite making quantitative progress – by showing some improvements – AI does not, however, make much qualitative progress towards human-like intelligence – including, for example, understanding irony, sarcasm, cynicism, inference, and intuition.
In other words, even if AI engineers could program “intuition” into an AI machine, it remains rather doubtful whether AI can ever reach the level of human intelligence. In short, your AI-driven home robot will follow your command, get the orange juice out of the fridge, and bring it to you. But it might not “intuitively” check the expiry date of the juice. We do. And worse for AI, we do hundreds of such things every day without even thinking (much) about it.
To escape the all too obvious demonstration that artificial intelligence isn’t really that intelligent, the apostles of AI like to frame “intelligence” as simple and highly reductive problem-solving.
Yet, the much-loved problem-solving covers only a narrow part of the human world. The great news for AI is that problem-solving is, rather unsurprisingly, an area in which AI is very good and extremely successful. The good news for AI continues when solving problems is sold as intelligence.
On the downside, and worse for AI, there seems to be an inverse correlation between an AI machine’s success in learning one thing and its success in learning another.
In other words, an AI system that has learned how to play a winning game of Go won’t also learn how to play a winning game of chess. In machine intelligence, one does not lead to the other – sadly. AI remains, so far, trapped inside its own machine intelligence of simple puzzle solving.
Even more problematic for AI, success in puzzle solving and the narrowness of AI programs are two sides of the same coin. This problématique alone casts very serious doubt on the currently much hyped-up prospect of easy progress from today’s (narrow) AI to tomorrow’s human-level AI – and on AI’s imagined next steps: moving towards so-called super AI (ASI), whether speed-ASI, collective-ASI, or quality-ASI.
Quite apart from grandiose claims about ASI, it is, at least historically, almost self-evident that AI has focused on engineering programs for narrow puzzle- and problem-solving applications. This is still the predominant form of AI today. Focusing on puzzle solving and winning games virtually assures global media hype.
Besides all the hype, even so-called general AI (AGI), which (so the belief goes) will be non-narrow problem-solving intelligence, is something that we, as human beings, already display every day. But true intelligence is not, never has been, and never will be a pre-programmed algorithm running in our heads.
Inside the current fanfare, super-AI is also called the ultra-intelligent machine. This is the belief that AI can far surpass all the intellectual activities of any one person – however clever. Since the design of such an ultra-AI machine is itself an intellectual activity, an ultra-intelligent machine could then self-design even better and even cleverer AI machines.
The apostles of AI think that this will unquestionably lead to what they have termed an intelligence explosion. They also imagine that this exploded intelligence would leave us behind. The propaganda behind all this is relatively obvious: don’t dispute whether super-intelligence is coming – instead, get ready for it!
Yet, one of the key snags for AI remains this: adding more RAM to your MacBook does not actually make it more intelligent. At times, it even appears as if there is an untold assumption of ever-increasing intelligence in AI. But there also seems to be a certain circularity: it looks like it will take general intelligence to increase general intelligence.
In other words, AI without real intelligence just gets the wrong answers more quickly. This is where we are today. But this is also, according to MIT computer scientist Marvin Minsky – “one” of the Godfathers of AI, often actually called “the” Godfather of AI – where we should not be today. He declared in 1967,
within a generation, the problem of creating artificial intelligence will be substantially solved.
Minsky’s “within a generation” is usually taken to mean twenty to thirty years. Since 1967, two generations have passed (56 years). Contrary to Minsky’s statement, the problem of creating artificial intelligence is still not substantially solved.
Beyond such utterly unreachable and unachievable goals, the idée fixe that AI computers could be programmed with the knowledge of human beings remains quixotic. Actually, it shouldn’t deserve any serious discussion.
Remaining inside the fantasy world of AI, there are, of course, Hollywood-style tales of fearsome and even apocalyptic AI. These are scary campfire horror yarns that do not reflect the reality of AI. At the other extreme are utopian FALC-like dreams about AI – solving global warming, ending world poverty, etc. Both are equally trivial and unwarranted.
Meanwhile, many of AI’s fairytales live and breathe through a shift from human wisdom and knowledge towards technology, computers, and algorithmic programming. Yet, this also marks a move toward what the Greek philosopher Aristotle called techne (the making of things) and away from what he called episteme, the knowledge of natural phenomena.
In other words, the focus on programming, algorithms, and AI moves us further away from sapientia, the human wisdom relating to human values, morality, and society. As a consequence, it will become ever more difficult to develop a meaningful idea of human uniqueness. Ultimately, placing techne at the center makes it possible to view a human being as something that can be built – even inside a computer or an algorithm.
All this very quickly leads to the idée fixe of a computational mind – the belief, wrongly held, that the human mind is nothing more than an information-processing system. Yet, paralleling the human mind with a computer is not scientific – it is a rather unhelpful illusion. And this is before even adding Werner Heisenberg’s uncertainty principle to the AI computational mix.
This leads to one of Heisenberg’s contemporaries, the Hungarian-British philosopher Michael Polanyi, who unequivocally rejected the idea that machines could capture all of human intelligence. He also argued that machine intelligence would necessarily leave out the tacit constituents of human intelligence – elements of human thinking that cannot be precisely described by writing down symbols, i.e. coded computer programs and algorithms.
This explains why, for example, many human talents, skills, and crafts, like cooking, can’t be mastered by simply reading recipes. This applies even more so to the writing of literature. Just imagine coding an AI computer program to write a novel like, say, James Joyce’s Ulysses.
Much of this suggests that the human mind, on the one hand, and machines and machine intelligence, on the other, have fundamental, very deep, and extremely serious dissimilarities. All this also means – by “inference” (something AI finds impossible to do) – that equating the human mind with AI machines is problematic. And this is quite apart from the fact that the human brain, which AI prefers to talk about, is not the same as the human mind.
All of this, almost inevitably, leads to an over-simplification and misunderstanding of what the human mind and intelligence actually are. It suggests that true (read: not mere puzzle-solving) AI is unachievable. And this is quite apart from the fact that the word “intelligence” in artificial intelligence is quite a misnomer. True human-like intelligence remains unachievable for current AI.