© Dr. Valentin Voroshilov
Appendix I: The path toward AHLI
Appendix II: What is common sense?
Lately, Google Assistant mimicking a person has given a new boost to speculations about the role of AI, and how close we are to the point when “AI will take over the world”, well, at least the business world.
The physiological or technological basis of intelligence is the existence of a hierarchy of interconnected pattern recognition systems.
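The idea of a hierarchy of interconnected pattern recognizers can be sketched in a few lines of Python. Everything below (the level names, the toy "pixels" and rules) is a hypothetical illustration, not a model of any real system; the point is only that each level recognizes patterns in the output of the level below it.

```python
# A toy hierarchy of pattern recognizers: each level consumes the
# patterns recognized by the level below it. All names and rules
# here are invented for illustration.

def recognize_strokes(pixels):
    # Level 1: map raw "pixels" to primitive strokes.
    return ["vertical" if p == "|" else "horizontal" for p in pixels]

def recognize_letter(strokes):
    # Level 2: map a combination of strokes to a letter.
    if strokes == ["vertical", "horizontal"]:
        return "L"
    if strokes == ["horizontal", "vertical"]:
        return "T"
    return "?"

def recognize_word(letters):
    # Level 3: map a sequence of letters to a known word.
    word = "".join(letters)
    return word if word in {"LT", "TL"} else "unknown"

# Raw input flows up the hierarchy, gaining abstraction at each level.
first = recognize_letter(recognize_strokes(["|", "-"]))   # "L"
second = recognize_letter(recognize_strokes(["-", "|"]))  # "T"
print(recognize_word([first, second]))                    # "LT"
```

The same three-level shape (features, parts, wholes) is the skeleton of most pattern-recognition stacks, however many levels a real system has.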
No one is really close yet to the development of AHLI (top AI developers/promoters have reported to Congress that AHLI will not be achieved for at least 20 more years).
The ethics of AI is currently a hot discussion. But that discussion is greatly over-hyped. The ethics of using AI is no different from the ethics of using a hammer, a bulldozer, genetic editing, or an atomic bomb. It is based on two simple rules:
1. When developing ... (fill in the blank), do not intend to use it to harm anyone.
2. When developing ... (fill in the blank), also work on preparations to counteract any attempts to harm people by means of ... (fill in the blank).
That's it. The ethical part of the discussion is over. The technical part begins, including the discussion of relevant policies.
The proliferation of AI-based technologies in all aspects of business and society is inevitable. But the question "how will AI affect people?" does not belong to the field of AI; it belongs to the field of political philosophy, starting with the question: what makes a human human?
There is no doubt in my mind that some time in the future real AI (a.k.a. AGI, AHLI) will be developed. Not in 5 or even 10 years, though, and not even in 20. But sometime in the far future it will. That means only one thing: people will cease to be the only intelligent species. We can see this situation as if aliens had finally introduced themselves, but aliens of a kind that grow up together with humans. And the fact that they will be growing up together is the most important one, because those "artificial humans", when fully developed, will act in accordance with the culture they grew up in and the education they received - exactly like humans! There will be, though, one big difference between humans and AI. Every human has to spend years learning how to live his or her life. AI will be able to do it much faster. After one AI grows up in a specially constructed environment (an "AI schooling facility") and learns how to behave, its memory can be replicated in all other AIs (well, maybe with some unpredictable fluctuations - "AI mental mutations").
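The replication step described above can be sketched as follows. The dictionary of learned behaviors and the "mutation" rule are purely hypothetical placeholders for whatever a real trained system would actually store:

```python
import copy
import random

# Hypothetical "memory" of one AI that went through the schooling
# facility: learned associations between situations and behaviors.
schooled_ai = {
    "greeting": "respond politely",
    "request_for_help": "offer assistance",
    "insult": "de-escalate",
}

def replicate(memory, mutation_rate=0.0):
    """Copy a schooled AI's memory into a new instance, optionally with
    small random 'mental mutations' (here: dropping a learned rule)."""
    clone = copy.deepcopy(memory)
    for situation in list(clone):
        if random.random() < mutation_rate:
            del clone[situation]  # a fluctuation: this lesson failed to copy
    return clone

# A faithful copy behaves exactly like the original, but is a new instance.
exact_copy = replicate(schooled_ai)
print(exact_copy == schooled_ai)  # True
```

The contrast with human learning is in that one call: the years spent in the "schooling facility" are paid once, and every subsequent copy is nearly free.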
This is why the field of AI training will become much more important than it is today. Not many AI professionals see it yet, despite the fact that the need for professional AI trainers will grow much faster than advances in the development of human-level AI.
Now, let’s circle back to Google Assistant mimicking a person. This demonstration generated a loud buzz in the media. I even read that it had shown how a machine passed the Turing Test. I do understand the desire to see a miracle, or to live in a fairy tale; it is built into our genetic code (hence the success of the Marvel Universe blockbusters). But we also need to use our ability to do a reality check to balance our fairy-tale desire.
For me, based on all my professional experience in the field, it is absolutely obvious that in order to achieve the abilities of human intelligence, new, more advanced neural networks should not just “simply” have more elements; the whole structure of those networks should be very different from the current ones.
The structure of a neural network of a highly intelligent host has to have a physical/spatial/physiological correspondence to the functions that neural network must carry out.
No current neural network has such a structure.
A high level of intelligence simply cannot be achieved without achieving consciousness. This, of course, is just my belief (an axiom, or a postulate), because, technically, this statement is impossible to prove or disprove. But starting from this axiom we must conclude that no matter how advanced pattern recognition methods become, until artificial consciousness is achieved, there will be no human-level artificial intelligence.
Intelligence is the result of the development of the brain’s ability to delay a reaction to a stimulus.
An immediate stimulus-reaction effect is genetically built in (or trained) and does not require intelligent actions. When a stimulus travels from a sensor into a brain and generates an immediate reaction, a process such as “reasoning” simply cannot happen. Reasoning required an additional part of the brain, one able to hold the incoming signal instead of acting on it at once.
That additional part of the brain was able to develop signals which model/imitate various external stimuli and possible reactions to them, and then select the optimal behavior for the host of the brain.
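The difference between an immediate stimulus-reaction loop and a delayed, model-based one can be shown in a minimal sketch. All stimuli, reactions, and outcome scores below are invented for illustration:

```python
# Immediate reflex: a stimulus maps straight to a hard-wired reaction.
REFLEXES = {"loud_noise": "flinch", "bright_light": "blink"}

def react_immediately(stimulus):
    return REFLEXES.get(stimulus, "freeze")

def predicted_outcome(stimulus, reaction):
    # A hypothetical internal model scoring (stimulus, reaction) pairs:
    # higher means a better predicted result for the host.
    scores = {
        ("predator_nearby", "run"): 0.9,
        ("predator_nearby", "hide"): 0.7,
        ("predator_nearby", "approach"): 0.1,
    }
    return scores.get((stimulus, reaction), 0.5)

def react_after_modeling(stimulus, candidate_reactions):
    # Delayed reaction: imagine each candidate reaction on the internal
    # model first, then act on the best imagined future. This comparison
    # of imagined futures is the "reasoning" the reflex loop cannot do.
    return max(candidate_reactions,
               key=lambda r: predicted_outcome(stimulus, r))

print(react_immediately("loud_noise"))                    # flinch
print(react_after_modeling("predator_nearby",
                           ["run", "hide", "approach"]))  # run
```

The reflex path never considers alternatives; the modeling path pays a delay in exchange for choosing among them.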
Without modeling that additional part of the brain, no AI will ever become HLI.
And that means that our brain has two separate parts: one responsible for the generation of our intentions, and another responsible for the verbalization of our intentions; and those parts may or may not “talk” to each other.
And that means that there is another, third part of the brain, which can regulate the communication between the first two.
The road to AHLI requires the development and study of neural nets which model this complicated structure of the brain (including a net which studies the patterns occurring in another net while that net makes a decision: "is this a banana or not?").
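That two-net arrangement, one network making the "banana or not" decision while a second network watches the first network's internal activity, can be sketched like this. The features, weights, thresholds, and monitoring rule are all hypothetical:

```python
# Primary net: a toy one-neuron classifier deciding "banana or not"
# from two made-up features (yellowness, elongation), both in [0, 1].
def primary_net(features):
    weights = [0.8, 0.6]          # hypothetical learned weights
    activation = sum(w * f for w, f in zip(weights, features))
    decision = activation > 0.7   # arbitrary decision threshold
    return decision, activation   # expose internal activity to the observer

# Observer net: does not see the banana at all. It watches the primary
# net's internal activation and reports how the decision was reached -
# here, whether the activation landed far from the threshold.
def observer_net(activation, threshold=0.7):
    margin = abs(activation - threshold)
    return "confident" if margin > 0.3 else "uncertain"

decision, activation = primary_net([0.9, 0.9])  # very banana-like input
print(decision, observer_net(activation))       # True confident
```

A borderline input such as `[0.5, 0.5]` lands at the threshold, and the observer reports "uncertain": the second net is recognizing patterns in the first net's activity, not in the outside world.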
Finally, I want to ask a question of every AI professional who has had the patience to finish this piece: “Do you want to spend your professional life on the development of an “artificial gorilla”, or is your goal the development of actual AHLI?”
BTW: Everyone who uses a smartphone, a computer, or a tablet knows what “a bug” is. Bugs are ubiquitous and present themselves every day, in different forms, to millions of users.
Thinking that AI will be developed bug-free is a delusion.
So, it is not just about how to make an AI which does what is right and doesn’t do what is wrong. It is also about who will be responsible for the mistakes made by a buggy AI. And those mistakes will never disappear. The second issue is where to find good AI trainers.
If a bad teacher screws up a child’s education, a bad trainer will screw up an AI’s training (and that is on top of the bugs it will have).
They say that AI is only as good as the data used to train it.
But it is not just about the data. It is about how good the trainer is at using those data. A bad teacher can have the best physics textbook and still do students no good. These are the questions I have not seen discussed anywhere, except here: