Saturday, May 12, 2018

Relax! The real AI is not coming anytime soon!


FYI: If you would like to find a publication with exactly the same statement, but coming from an AI guru, you may want to check this link: https://www.theatlantic.com/amp/article/560675/
Some quotes: “All the impressive achievements of deep learning amount to just curve fitting,” and “artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.”

Lately, Google Assistant mimicking a person has given a new boost to speculations about the role of AI, and how close we are to the point when “AI will take over the world”, well, at least the business world.
This development pushed me to write this piece.
Currently I am on the fence between diving deep into teacher professional development and entering the field of AI (2-minute video) on a professional basis, but in any case, I think this will be my last popular publication on the matter of artificial intelligence.
All major ideas, except one :) (see the Appendix), have been covered already.
In “On a definition of AI”, I presented the definition of intelligence.
“Intelligence is the property of a system; the mission, the reason for its existence, and the core ability of intelligence is creating solutions to problems which have never been solved before (by that system).”
Or, in other words, intelligence is the property of a system (the host of intelligence) which allows that system to create solutions to problems which have never been solved before (by the host).
All other aspects of intelligence play their roles, and take their places as devices, components, abilities, organs, functions required for intelligence to exist, perform, and achieve its goals, fulfill its mission - creating, again and again, a solution to a problem which has never been solved before. 
Artificial intelligence is just an artificially manufactured system which possesses intelligence.
In “The Dawn of The New AI Era.”, and then in Who will train our “artificial puppies”? But more importantly – who will train the trainers?, I stated that we have entered a new phase in the development of the AI field. Now, in parallel with the continuing basic (a.k.a. fundamental) research in AI, the field of practical applications of AI will be growing rapidly. In that new field, AI developers will no longer be as important as experienced AI trainers.
In “Is Artificial Intelligence Already Actual Intelligence?” and then in “Will Artificial Intelligence Save, Replace or even Affect Education Practices? (a venture capitalist’s view)”, I discussed the current state of AI, and why some people in the field overestimate its potential influence on education (because they, like many others, have a very primitive view of education; BTW, why do we expect that someone who wrote a successful app or sold a successful product is an expert in everything else? Especially someone who surrounds himself with people who always agree with him?).
In this publication I would like to offer an additional and specific reason for why AI, in its current state of development, is not even close to the intelligence of a human baby (and what should be done to bridge the gap).
But first, a little bit of history.
Initially, the term “Artificial Intelligence” was used to describe machines that could act like people. One of the best representations of AI in its original sense is the two terminators in the “Terminator 2” movie (the roots of which can be traced to many science fiction stories, e.g. “I, Robot”).
When scientists started developing systems which could have some features similar to those of humans, they were coding complicated algorithms which would let a computer carry out some human-type actions: recognizing an object, recognizing a sound, moving artificial limbs in a specific way, producing human-like sounds. Everything a computer did was first designed and then encoded by a team of professionals, and that team was absolutely and completely responsible for every action of a computer/device/robot/system. Everything a computer did right was initially envisioned and programmed by humans, and anything unexpected done by a computer was wrong.
No one would call those systems “intelligent”.
At some point, scientists realized that writing more and more complicated algorithms, which would check more and more possibilities before making a decision, has its limits.
The next idea was “simple”.
In order to mimic the intelligent actions of a human, an artificial system needs to mimic the structure of the organ which is responsible for the intelligent actions of a human, i.e. a brain.
Hence, attention shifted to neural networks, but artificially made ones.
Initially, the behavior of those networks was also absolutely defined by people.
The breakthrough came when an artificial neural network became capable of learning without human interference (after the initial training).
This is where we are today.
Existing methods, approaches, and technologies, like “deep learning”, “machine learning”, “supervised learning”, “unsupervised learning”, and “reinforcement learning”, are all based on various types of “data mining”, followed by a training process (they also all have large overlapping areas).
The progress in AI development is astonishing, no doubt.
However, on the road from the 1956 meeting at Dartmouth College to the latest Google presentation, the meaning of the term “Artificial Intelligence” has changed.
Initially, the “study of artificial intelligence” meant the study of “every aspect of learning or any other feature of intelligence” […] “that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” This study will need “to discuss computers, natural language processing, neural networks, theory of computation, abstraction and creativity” (all highlighting in the quotes above is mine, V.V.).
However, over time, such concepts as “abstractions”, “concepts”, and “creativity” have been excluded from the meaning of AI. What remained – and still remains – is just “pattern recognition”.
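The “curve fitting” quote at the top of this piece can be made concrete with a toy sketch. The tiny NumPy network below is entirely my own illustration (the sizes, the learning rate, and the target curve are arbitrary choices, not anything from the article): it “learns” only in the sense that it adjusts its parameters until its output curve passes near the sample points.

```python
import numpy as np

# Toy illustration of "deep learning amounts to just curve fitting":
# a one-hidden-layer network is trained by plain gradient descent to
# reproduce sample points of a fixed curve. All sizes and constants
# are arbitrary choices for the demonstration.

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(3.0 * x)                      # the "pattern" to be fitted

W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)   # layer 1 -> 16
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)    # layer 16 -> 1
lr, losses = 0.05, []

for _ in range(2000):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # network output
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagation, written out by hand
    g_pred = 2.0 * err / len(x)
    g_W2, g_b2 = h.T @ g_pred, g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    g_W1, g_b1 = x.T @ g_h, g_h.sum(axis=0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The error shrinks and the network ends up tracing the curve, yet nothing in the loop resembles forming abstractions or concepts; it is interpolation between observed points, which is exactly the point being made here.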
Imagine that you decided to improve a basic corkscrew. And you did! And you named your new model “The Ultimate Drilling Machine”.
This is not exactly what happened in the field of Artificial Intelligence, but it is very similar (I hope you get the idea, which is a gross exaggeration of reality).
Currently, the most impressive features of AI are those which imitate human actions.
AI can recognize sounds, symbols, objects, and can mimic speech, movements, even some emotions.
But so can animals.
Currently, AI is on its way to becoming a mouse, or a dog. Soon, what AI will be able to do in homes or businesses will be more or less what a properly trained mouse or dog can do.
Following this road, current AI will eventually become as powerful as a cat, then a chimpanzee, then a lion, or a gorilla (just to illustrate the idea).
Current AI may become very powerful and helpful, or very powerful and dangerous.
But NOT because it will decide to be good or bad.
No.
Because the people who will be using it will be using it for good or for bad.
Like a hammer that can be used as a blunt weapon.
Like a bulldozer that can be converted into a tank.
Or like a dog, or a bear, or a lion, or a gorilla, which can be trained to hunt and hurt people, or to find and help them.
But don’t expect that artificial human-like intelligence, or AHLI, will be hanging around anytime soon.
Naturally, two questions immediately arise after this statement.
1. Why?
And.
2. What can be done about it?
Short answers to these questions are:
1. Because no one knows how the structure of a human brain is related to the existence of intelligence; no one knows how human-level intelligence developed, or what evolutionary path led to its existence. The whole science of intelligence is in its infancy. Scientists do not even have a commonly accepted definition of intelligence yet!
And.
2. We have to intensify the research in the field of human intelligence (maybe another workshop at Dartmouth College?).
Basically, we need to learn, as deeply as possible:
1. How, from the moment they are born, do humans become intelligent?
2. What factors affect human intelligence (positively and negatively)?
3. How does the development of the structure of a human brain correlate with the development of human intelligence?
4. What structural difference between the brain of an intelligent human being and that of an animal is responsible for the existence of human-level intelligence, or HLI?
And that is just the bare minimum (however, having a research entity devoted to those studies could significantly advance the research).
In parallel, we will need to study how to build artificial systems whose structure would resemble the structure of an intelligent brain, how to make such a system function, and how to train it.
Scientists tried to invent a system which would resemble a bird. They failed. Instead, they invented a system which merely imitated a bird (an airplane), and succeeded.
Following this approach, scientists tried to invent a system whose structure would not be similar to the structure of a brain, but which would function as one.
They failed.
Eventually they turned their focus toward systems mimicking a brain – neural networks – and they succeeded.
But current neural networks – compared with even the brain of a mouse – are absolutely trivial.
The question is: in order to achieve the abilities of human intelligence, should new, more advanced neural networks “simply” have more elements, or should the whole structure of those networks be very different from the current one?
This situation should not be an issue for any religious person, because if God had wanted to create an artificial intelligence, he or she would most probably have done it already. And if God did not do it, then humans cannot, no matter how complicated future neural networks may become.
Everyone who does not believe in God should turn to Charles Darwin.
The answers to questions like “How did human intelligence come about?” and “What difference in structure between a human brain and an animal brain makes humans humanly intelligent?” should lie in the theory of evolution.
A very helpful hint lies in the fact that the development of an individual human being resembles the development of the whole human race (or, “human ontogenesis recapitulates phylogenesis in abbreviated form”; the quote is from https://link.springer.com/chapter/10.1007/978-1-4612-6347-0_4).
That is why the future of AHLI is in the hands of the experts in human intelligence, not in the hands of the experts in (current) AI. The vast majority of experts in the field of “artificial intelligence” are walking (or racing) toward the development of an “artificial gorilla”. That is where the money goes.

No one really works on the development of AHLI (or that research is highly classified).
So, for example, when you listen to someone talking about how to make AI ethical, know that he or she is telling you a lot of BS, or is talking about science fiction. That type of AI will require decades of development, hence it is decades ahead of us. What may be worth a discussion is the ethics of developing and exploiting current AI in a non-harmful way – just like anything else: knives, guns, tanks, bulldozers, dogs, gorillas, missiles.
Now, let’s circle back to Google Assistant mimicking a person. This demonstration generated a loud buzz in the media. I even read that it showed how a machine had passed the Turing Test. I do understand the desire to see a miracle, or to live in a fairy tale, which is built into our genetic code (hence the success of the Marvel Universe blockbusters). But we also need to use our ability to do a reality check to balance that fairy-tale desire.
Put yourself in the shoes of a phone operator working 9 to 5, answering numerous phone calls every day. A caller can use ANY voice, and the operator will never even think that it might be a machine; maybe someone old, or ill, or wearing braces, or a joker, or someone with a broken jaw is calling. The operator knows that in 2018 only people can call on a phone. When a person simply CANNOT EXPECT that a specific event may even happen, saying that that person did not recognize that event is just, well, misleading.
A tribe of Australian aborigines had never known about the existence of airplanes. Every time an airplane flew above them in the sky, they thought it was the devil. Only after the tribe was discovered did the aborigines eventually learn what an airplane was. Before that, anyone could say, “Look, I will show the aborigines an airplane, but they will not recognize it!” And? But we do not have to travel to Australia for such a misleading example; parents do the same to their little kids all the time.
The Turing Test requires that the evaluator know that the hidden partner may be a person OR a machine. Without that knowledge, the whole demonstration is meaningless. Exciting (hence the buzz)! But meaningless (I wonder why not a single professional in the field has pointed at that fact).
I am not sure whether AI professionals do not know human psychology or deliberately ignore it, but it seems that their craving for putting on a good show overrides the standards of scientific conduct.
So, ladies and gentlemen, let’s see the difference between a show and the reality, let’s cool down and relax! 
The real AI is not coming anytime soon!
Appendix I: The path toward AHLI
For me, based on all my professional experience in the field, it is absolutely obvious that in order to achieve the abilities of human intelligence, new, more advanced neural networks should not “simply” have more elements; the whole structure of those networks should be very different from the current one.
For me, it is also absolutely obvious that there is a structural difference between the brain of an intelligent human being and that of an animal which is responsible for the existence of HLI.
When just born, human babies behave not much differently from most animal cubs. But with time, humans develop a quality which represents one of the most significant differences between them and other animals: humans learn to recognize themselves in a mirror, while most animals never acquire that ability. The ability to recognize oneself in a mirror is one of the most important steps toward HLI.
When growing up, a human brain learns to differentiate and analyze signals coming from all the sensors responsible for all the senses. The signals come from the organs at the boundary between the body and the world, as well as from the organs inside the body. Those signals lead to various reactions – some pleasant, some painful, some neutral. And eventually the learning process leads to the development of humanly intelligent actions.
In the first sentence of the paragraph above, we can replace the word “human” with “animal”, and the sentence will still remain correct. So will the second. And the third. But the last sentence will become wrong.
Why? What would be the natural reason for this difference?
We should conclude that, most probably, having all the signals coming from all the sensors responsible for all the senses, from the organs at the boundary between the body and the world, as well as from the organs inside the body, is not enough for the development of HLI.
What’s left is a brain itself.
In a human brain, there has to be a part which is constantly “feeling” the state of that brain, and which does not exist in most animals, or exists only in a very underdeveloped state.
That part of the brain is, in part, responsible for knowing the difference between “I” and “that” and “them”.
When a toddler begins recognizing himself or herself in a mirror, it is a sign that the initial phase of the development of that part of the brain has come to an end.
It is natural to assume that a brain has its own “organs” responsible for all activities of a body, but also an “organ” responsible for the functioning of the brain itself. Those parts of a brain interact and communicate, and we can even see the traces of those interactions when observing human behavior. For example, when asked an unexpected question, a person may take a very brief moment to first let his or her brain generate an “idea of the answer”, and then to translate that idea into a conscious construction of words and sentences.
Just look closely at the face of Jordan Peterson right after he is asked a question, and a second later (the pictures are from https://youtu.be/8wLCmDtCDAM; the video, of course, is much more informative: you can see the moment when the idea of the answer is formed, while the rest of the time the brain spends on verbalizing that idea).
Those two different mental actions – generating an “idea of the answer” and delivering its verbal representation – happen in different parts of a brain. And there should be an additional part of the brain which oversees the transition from one to the other.

No current neural network has such a structure.
No current neural network can mimic this mental behavior.
I know of no researcher who even tries to study it.
As I mentioned above, AI professionals are working hard on the development of an “artificial gorilla”, and they are very good at doing that. But the development of an “artificial human” requires first and foremost the fundamental research in the field of human intelligence.
Appendix II: a couple more details
Every mental ability has its own physiological basis.
Physiological basis of intelligence is a brain.
Intelligence is the result of the development of the ability to delay a reaction to a stimulus in a brain.
When a stimulus travels from a sensor into a brain and generates an immediate reaction, such a process as “reasoning” simply cannot happen.
When a stimulus reaches a brain, but then a brain pauses, there is a chance that the following reaction will be one of several possible reactions to the original stimulus (or to a stimulus similar to the original one).
That pausing may have developed as the result of reacting to several competing stimuli (and mutations).
That pausing eventually let the brain develop an additional “organ” which could oversee the selection among various possible reactions, and that eventually led to the development of the ability to reason.
That additional part of a brain was able to develop signals which would model/imitate various stimuli and possible reactions to them, and then would select the optimal behavior for the host of the brain.
That mental activity has eventually become what we now call “human-level intelligence”, or HLI.
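The pause-then-select mechanism described above can be caricatured in a few lines of code. Everything here (the function names, the “internal model”, the numeric “benefit” scores) is my own illustrative assumption, a sketch of the idea rather than an implementation of anything that exists:

```python
# A reflex system maps a stimulus straight to a reaction:
# no pause, hence no room for reasoning.
def reflex_agent(stimulus, reaction_table):
    return reaction_table[stimulus]

# A deliberative system pauses, "imagines" the outcome of each candidate
# reaction with an internal model of the world, and only then selects
# the reaction the model predicts to be most beneficial.
def deliberative_agent(stimulus, candidate_reactions, internal_model):
    return max(candidate_reactions, key=lambda r: internal_model(stimulus, r))

# Hypothetical example: the stimulus is the distance to food, a reaction
# is a step of some length, and the model scores how close to the food
# a given step would leave the agent.
def model(distance, step):
    return -abs(distance - step)

best = deliberative_agent(5, [0, 2, 4, 8], model)
print(best)  # the step of length 4 leaves the agent closest to the food
```

The point of the sketch is purely structural: the selection loop acts as a separate “organ” sitting between stimulus and reaction, which is the piece that, as argued here, current neural networks do not model.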

Without modeling that additional part of a brain no AI will ever become HLI.
Without modeling that additional part of a brain, no AI will even be able to exhibit “common sense” behavior (Paul Allen’s dream).
A “common sense” behavior is more than just a set of our everyday actions.
First, “common sense” is relative. What is “common” for people in one culture or tradition may be absolutely “out of the ordinary” for people in a different culture or tradition.
Second, different cultures even use different linguistic structures to express the meaning of a “common sense”. In English, “common sense” means “a meaning which is common, the same, for many”, “communal meaning”, i.e. something naturally accepted by many people. But in Russian the literal translation of “common sense” would be “healthy meaning”, or “robust meaning”. 
And “healthy” or “robust” does not necessarily mean “communal”; it rather means “natural for that person”, “obvious and does not need an explanation”, “just because”.
The meaning of “common sense” as “just because” can be understood from the neurological point of view.
“Logical sense” represents a decision for which we can provide some kind of reasoning; we can say “we do that because …” and list the logical steps which have led us to the decision.
But “common sense” as “just because” represents a decision for which we cannot provide any specific reasoning; it is just obvious to us. However, the reason it is obvious to us lies in the numerous stimulus-reaction interactions which, over a certain period of time, our brain has learned as beneficial (or not) without our conscious realization. We “know” this is the right decision because our brain has built a cause-and-effect chain between the initial stimulus and the final reaction. But we are not aware of the elements of that chain.
And that means that our brain has two separate parts: one responsible for the generation of our intentions, and another responsible for the verbalization of our intentions; and those parts may or may not “talk” to each other.

And that means that there is a third part of the brain which can regulate the communication between the first two.
The road to AHLI requires the development and study of neural nets which model this complicated structure of a brain.

Finally, I want to ask a question of every AI professional who has had the patience to finish this piece: “Do you want to spend your professional life on the development of an ‘artificial gorilla’, or is your goal the development of actual AHLI?”
Thank you for visiting, 
Dr. Valentin Voroshilov
Education Advancement Professionals 
GoMars.xyz
To learn more about my professional experience: