Machine Intelligence: Real or Oxymoron?

It was 1996, and the world waited with bated breath for an extraordinary chess match between Garry Kasparov and IBM’s supercomputer ‘Deep Blue’. By then, the fastest human calculators like Shakuntala Devi had long been overtaken by machines, so computers beating the fastest humans in a test of sheer computing power was a well-accepted reality. But the belief was that chess needed a different kind of thinking that went beyond brute computing power, and hence a victory of Deep Blue over the reigning chess champion would herald the arrival of artificial intelligence (AI), a term that the computer scientist John McCarthy had coined in 1955.

Kasparov won the first match, held in Philadelphia, by 4–2. The IBM team quickly learnt from the defeat, made modifications to the program, and a rematch was held in New York City a year later in 1997.

Deep Blue won by 3½–2½.

In 1949, nearly half a century before the advent of Deep Blue, Claude Shannon, a mathematician, had written a paper describing how a computer could play chess. He had described two types of algorithms which, probably for want of imagination, he called Type ‘A’ and Type ‘B’. 

The Type ‘A’ algorithm involved evaluating all possible moves from any position, ranking them from best to worst using game theory, and making the top-ranked move. This needed brute computing power, and in 1949, when Shannon wrote the paper, humanity did not have access to that kind of computing power, so Shannon did not give Type ‘A’ algorithms much of a chance. However, nearly five decades of exponential growth in computing power, fueled by Moore’s law, gave brute computing, the Type ‘A’ approach, a shot at competing with the chess champion. In 1997, Deep Blue’s brute computing power was good enough for its minimax search (a Type ‘A’ algorithm) to beat Kasparov and create the excitement, and the illusion, of ‘intelligence’.
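
For readers curious about what such a Type ‘A’ search looks like, the sketch below is a minimal, hypothetical illustration in Python, not Deep Blue’s actual program, which added alpha-beta pruning, handcrafted evaluation and custom hardware. The game-specific pieces (evaluate, legal_moves, apply_move) are assumed helpers supplied by the caller.

```python
# A minimal sketch of Shannon's Type 'A' idea: search the game tree
# exhaustively to a fixed depth with minimax and pick the best-scoring move.
# (Illustrative only; evaluate/legal_moves/apply_move are assumed helpers.)

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Return the minimax value of `state`, looking `depth` plies ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static evaluation at the search horizon
    values = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves)
    return max(values) if maximizing else min(values)

def best_move(state, depth, evaluate, legal_moves, apply_move):
    """Rank every legal move by its minimax value and return the top one."""
    return max(legal_moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     evaluate, legal_moves, apply_move))
```

The point of the sketch is that nothing here resembles thinking: it is ranking by exhaustive evaluation, which is exactly why more computing power translated directly into stronger play.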

Humans versus Machines 

It is common to underestimate the long-term transformational power of technology, but it is equally common to witness insanely high short-term hype around any new technology. Generative AI tools have created such a spike in the chatter around AI that all of AI’s past disappointments have been forgotten.

This piece is only partly about whether the hype is real. Primarily, it addresses a more fundamental question that keeps popping up from time to time: will machines, in some form or shape, outsmart humans? In other words, can machines do better than humans at ‘Type B’ algorithms?

Hans Moravec, a roboticist, wrote in 1988: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility". This is referred to as Moravec’s Paradox. Complex computation is child’s play for an intelligent machine, but what humans and animals do ‘without thinking’ has proved exceedingly hard for machines and AI.

Atul Jalan writes that, in his brief and tormented life, Alan Turing asked more questions about logic, mathematics, computing, biology, metaphysics, and human consciousness than any of his genius brethren, and that these questions would become the foundation of the questions experts in all fields would go on to ask.

Could a machine, however smart, ever ask such questions?

Real intelligence, as we understand it, is an outcome of hundreds of millions of years of evolution. Only evolution could have created the non-trivial diversity needed to produce thinking as varied as that of Turing and Darwin. The uncanny ability to seamlessly combine intuition and logical thinking is again a product of evolution. Some of this intuition has been hardcoded into our DNA and does not need training on even minimal data sets to be operational; the ability of a newborn to distinguish a loving face from a threatening one is just one example. As for diversity, two individuals, say Nandan Nilekani and Sanjeev Bikhchandani, could both have perspectives on scaling a business, but their perspectives and insights are likely to be very different, and whom you reach out to for a specific problem would depend on the context. Neither of their specific insights could be replaced by a machine that has crawled the net or been fed training data sets by programmers. Asking an AI tool for insights on scaling a business is like asking Hanuman from the Ramayana to bring the Sanjeevani herb: Hanuman brought the whole mountain, and you are no better off, because you still have to find the herb on it.

Everyone’s Turing Test Moment

The spectacular spike in interest around ChatGPT is not so much a result of the promise of transformation as of its having created a breathtaking Turing test moment for most humans. ChatGPT had a lot of novelty value, and everyone across the world was suddenly having a field day getting it to do interesting and funny things, like rewriting an Alfred Tennyson poem in the style of the American Declaration of Independence! This was probably the first time a machine was able to have human-like conversations, and most users were intuitively experiencing their ‘Turing test’ moment. The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Ted Chiang uses a powerful analogy to crush all attempts to connect ChatGPT’s ability to hold elementary conversations to near-human intelligence. Lossy compression algorithms have long been used to store large data files, especially those heavy with image and video content, using far less space; some information is lost when the data is reproduced. Now imagine compressing the entire data on the internet onto just one server: the compression would have to be enormous, and the loss of information would be significant. To reconstruct the original, you would obviously have to interpolate for all the missing pieces. ChatGPT, in Chiang’s analogy, does just that, and it is now quite clear that this interpolation is one reason ChatGPT tends to ‘hallucinate’.
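
To make the analogy concrete, here is a toy sketch in Python; it is purely an illustration of lossy compression followed by interpolation, not anything from Chiang’s essay or from how ChatGPT actually works. A signal is ‘compressed’ by discarding most samples and then ‘reconstructed’ by interpolating between the survivors.

```python
# Toy illustration of lossy compression + interpolation (illustrative only).
def compress(samples, keep_every=4):
    """Keep only every `keep_every`-th sample; the rest is thrown away."""
    return samples[::keep_every]

def decompress(kept, keep_every=4):
    """Fill the gaps back in by linear interpolation between kept samples."""
    out = []
    for a, b in zip(kept, kept[1:]):
        for i in range(keep_every):
            out.append(a + (b - a) * i / keep_every)  # interpolated guesses
    out.append(kept[-1])
    return out

original = [0, 1, 4, 9, 16, 25, 36, 49, 64]   # a non-linear signal (squares)
restored = decompress(compress(original))
print(restored)  # [0.0, 4.0, 8.0, 12.0, 16.0, 28.0, 40.0, 52.0, 64]
```

The reconstruction is close enough to look right, but the in-between values are invented, which is the sense in which a heavily compressed copy of the web would have to interpolate, and hence could ‘hallucinate’.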

Humans Are Not the Epitome of Intelligence

Evolution has programmed birds and whales, and even some insects, to use the Earth’s weak magnetic field to navigate thousands of miles across the globe with almost pinpoint accuracy. Humans have acquired this ability only recently, with the help of GPS.

There are roughly ten million ants in the world for every human, and the combined complexity of their brains is similar to the combined complexity of all human brains. Randall Munroe makes an interesting observation that we have caught up to ants and they don’t seem too concerned! If one had to guess which of us would still be around a million years from now – humans, ants or computers – the answer is pretty much a no-brainer. So it is difficult to logically dispute the claim that ants are more ‘intelligent’ than both computers and humans!

Tools or Rivals

Man-made machines and programmes have almost always served as productivity enhancement tools, though from time to time AI enthusiasts, including popular writers and CEOs of large tech companies, have talked about AI transforming or taking over the world! Technology has always been transformational in the long term, but the way new-age tech evangelists talk about it is almost as if they were messiahs. This is like the claim made by Ginni Rometty, the CEO of IBM, in 2019 that “IBM artificial intelligence can predict with 95% accuracy which workers are about to quit their jobs". She would not explain “the secret sauce" that allowed the AI to work so effectively in identifying workers about to jump, saying only that its success came from analysing many data points. This claim was in violation of ‘chaos theory’ and has never been borne out by reality. Such claims are marketing gimmicks with the sole motive of driving up sales. We have seen some of this in the months since OpenAI launched ChatGPT.

AI and Job Losses

Every technological breakthrough, from dynamite to the internal combustion engine to the humble spreadsheet, has been a tool that served the purpose of enhancing productivity, and in the process propelled economic growth. The natural question is: what happened to the armies of accountants and business analysts who were there before the advent of the spreadsheet? Were they all laid off?

The answer to this question offers a deep insight into human nature. The armies of accountants and business analysts were not laid off. They got busy creating pivot tables and charts; they sliced and diced data along every possible dimension. As a result, the quality of some decisions certainly became better, but the long tail of activity traps that the tool generated merely gave the illusion of better decision-making. This is true of other inventions like the photocopier: every document now needed multiple copies to be made, transmitted and stored! The story with AI is unlikely to be any different.

Misplaced Frankenstein Worries

No one would deny that generative AI has its uses, but to argue that it is a step towards replacing humans is no different from the argument, at the time dynamite was invented, that it would replace labour, or that spreadsheets would replace accountants. Dynamite actually amplified the need for labour, because humans could now dream of much bigger man-made structures.

Every technological breakthrough has had positive as well as negative impacts on our lives, and social media and AI won’t be exceptions. Google Search changed our lives for both good and bad: on the one hand, we were able to get any information we needed in seconds; on the other, page-ranking algorithms were written by humans, and their biases crept into the search results. The use to which we put these technologies has depended entirely on human nature. If human nature is to dominate and eliminate anyone seen as competition, then the nature of wars will change, but wars won’t go away; new weapons will replace old weapons. We should therefore concern ourselves more with human nature than with the dangers of AI.

Prometheus and Frankenstein have personified humans’ eternal fear of being destroyed by their own creations, and that fear will not easily go away. At different points in history, there have been different Frankensteins. The Frankenstein of this century has been intelligent machines that could outsmart humans.

In conclusion

A quote attributed to Einstein, though its attribution is unverified, goes: ‘No problem can be solved from the same level of consciousness that created it.’ An equivalent statement in the context of man versus AI would probably go something like: ‘No intelligence can be designed with the same capabilities as the intelligence that designed it.’ Self-learning algorithms are getting better every day, but their quality still depends on the programmer.

If Deep Blue had been designed to think and play like humans, but had lost to Kasparov, it wouldn’t have made news. Its victory over the world’s best chess player was what made news. Humans will always be better than machines on their own turf, and machines, however well they are programmed to learn on their own, will always be subservient to humans. For instance, while machines are getting really good at image recognition, humans will always be better at, say, looking at a picture and imagining what had just happened, what sequence of events led up to it, and what the next set of outcomes could be. Programming computers to figure out something like this isn’t easy, whereas human brains have had the benefit of millions of years of evolution, and interpreting situations quickly has been part of the survival kit.

We should all be excited by the prospect of AI triggering the next round of productivity gains and freeing up our time to do what we do best. Anyone who is less excited and more worried about where AI will lead has been watching too many doomsday science fiction films from Hollywood.

T.N. Hari is an author and co-founder of Artha School of Entrepreneurship.
