
Saturday, May 12, 2018

Relax! The real AI is not coming anytime soon!

If a process has patterns, it CAN be computerized.
For that: 1. an intelligent subject develops an algorithm; 2. then writes the code.
Today, AI can partially replace part 2 by being trained, but still ALL the intelligent work – pattern analytics, algorithm development, including the algorithm for training the AI – falls on humans. AI is no more than a dog trained to do tricks.
Imagine an AI as intelligent as Einstein developed and then left in a jungle with monkeys. All it will learn is "monkey business".
AI developers deliberately ignore this fact.
(https://www.cognisity.how/2017/12/AIfuture.html)
© Dr. Valentin Voroshilov
So far, my "interventions" in the field of AI development have been sporadic; the full picture emerges only after reading all the major pieces on the matter, collected on the designated page:
Artificial Intelligence
FYI: If you would like to find a publication with exactly the same statement, but coming from an AI guru, you may want to check this link: https://www.theatlantic.com/amp/article/560675/
Some quotes: “All the impressive achievements of deep learning amount to just curve fitting,” and “artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.”

Relax! The real AI is not coming anytime soon!

Appendix I: The path toward AHLI
Click here to jump directly to Appendix I.

Appendix II: What is common sense?
Click here to jump directly to Appendix II.

Lately, Google Assistant mimicking a person has given a new boost to speculations about the role of AI and about how close we are to the point when "AI will take over the world" (well, at least the business world).
In “On a definition of AI”, I presented the definition of intelligence.
All other aspects of intelligence play their roles and take their places as devices, components, abilities, organs, and functions required for intelligence to exist, perform, achieve its goals, and fulfill its mission: creating, again and again, a solution to a problem which has never been solved before.
The physiological or technological basis of intelligence is the existence of hierarchies of interconnected pattern recognition systems.
Artificial intelligence is just an artificially manufactured system which possesses intelligence.
In "The Dawn of The New AI Era", and then in "Who will train our 'artificial puppies'? But more importantly, who will train the trainers?", I stated that we have entered a new phase in the development of the AI field. Now, in parallel with the continuing basic (a.k.a. fundamental) research in AI, the field of practical applications of AI will be growing rapidly. In that new field, AI developers will no longer be as important as experienced AI trainers.
In "Is Artificial Intelligence Already Actual Intelligence?" and then in "Will Artificial Intelligence Save, Replace or even Affect Education Practices? (a venture capitalist's view)", I discussed the current state of AI and why some people in the field overestimate its potential influence on education (because they, like many others, have a very primitive view of education; BTW, why do we expect that someone who wrote a successful app or sold a successful product is an expert in everything else? Especially someone who surrounds themselves with people who always agree with them?).
In this publication, I would like to offer an additional, specific reason why AI, in its current state of development, is not even close to the intelligence of a human baby (and what should be done to bridge the gap). Currently, and for decades to come, there is no such thing as actual artificial intelligence; there is only various computer software with limited self-adjusting abilities. If someone points at a gorilla and says "It has two legs, two arms, two eyes - it is a human!", we know it is a joke, at best. But that is exactly what is happening now with AI: "This system can recognize and emulate some patterns, and even makes some adjustments to how it does it - it is intelligent!" Well, in the same sense as an amoeba - yes, it is intelligent.
But first, a little bit of history.
Initially, the term "Artificial Intelligence" was used to describe machines that could act like people. One of the best representations of AI in its original sense is the pair of terminators in the movie "Terminator 2" (the roots of which can be traced to many science fiction stories, e.g. "I, Robot").
When scientists started developing systems which could have some features similar to those which humans have, they were coding complicated algorithms which would let a computer conduct some human-type actions, like recognizing an object, recognizing a sound, moving artificial limbs in a specific way, or producing human-like sounds. Everything a computer did was first designed and then encoded by a team of professionals, and that team was absolutely and completely responsible for every action of the computer/device/robot/system. Everything a computer did right was initially envisioned and programmed by humans, and anything unexpected done by a computer was wrong.
No one would call those systems “intelligent”.
At some point, scientists realized that writing more and more complicated algorithms, which would check more and more possibilities before making a decision, has its limits.
The next idea was “simple”.
In order to mimic the intelligent actions of a human, an artificial system needs to mimic the structure of the organ which is responsible for the intelligent actions of a human, i.e. a brain.
Hence, the attention shifted to neural networks - made artificially.
Initially, the behavior of those networks was also absolutely defined by people.
The breakthrough came when an artificial neural network became capable of learning without the involvement of a human (after the initial training).
This is where we are today.
Existing methods, approaches, and technologies, like "deep learning", "machine learning", "supervised learning", "unsupervised learning", and "reinforcement learning", are all based on various types of "data mining" followed by a training process (and they all have large overlapping areas).
The progress in AI development is astonishing, no doubt.
However, on the road from the 1956 meeting at Dartmouth College to the latest Google presentation, the meaning of the term "Artificial Intelligence" has changed.
Initially, the “study of artificial intelligence” meant the study of “every aspect of learning or any other feature of intelligence” […] “that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” This study will need “to discuss computers, natural language processing, neural networks, theory of computation, abstraction and creativity” (all highlighting in the quotes above is mine, V.V.).
However, over time, such concepts as "abstractions", "concepts", and "creativity" have been excluded from the meaning of AI. What was left was – and still is – just "pattern recognition".
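To make the "just curve fitting" remark quoted above concrete, here is a minimal sketch (my own toy illustration, not code from any system discussed here): the "learning" step is nothing more than fitting a function to examples, and the "intelligent" step is reusing that fitted function on new inputs.

import numpy as np

# Training "experience": noisy samples of a hidden pattern y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, size=x.shape)

# "Learning" = curve fitting: find the line that best matches the samples
slope, intercept = np.polyfit(x, y, deg=1)

# "Intelligent behavior" = evaluating the fitted curve on an unseen input
print(slope, intercept)          # close to 2 and 1
print(slope * 12 + intercept)    # "prediction" for x = 12

The fitted line reproduces the pattern it was shown; it forms no abstraction, no concept, and no goal of its own.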
Imagine that you decided to improve a basic corkscrew. And you did! And you named your new model "The Ultimate Drilling Machine".
This is not exactly what happened in the field of Artificial Intelligence, but it is very similar (I hope you get the idea; this is, of course, a gross exaggeration of the reality).
Currently, the most impressive features of AI are those which imitate human actions.
AI can recognize sounds, symbols, objects, and can mimic speech, movements, even some emotions.
But so can animals.
Currently, AI is on its way to becoming a mouse, or a dog. Soon, what AI will be able to do in homes or businesses will be more or less what a mouse or a dog could do, when properly trained.
Following this road, current AI will eventually become as powerful as a cat, then a chimpanzee, then a lion, or a gorilla (just to illustrate the idea).
Current AI may become very powerful and helpful, or very powerful and dangerous.
But NOT because it will decide to be good or bad.
No.
Because the people who will be using it will be using it for good or for bad.
Like a hammer that can be used as a blunt weapon.
Like a bulldozer that can be converted into a tank.
Or like a dog, or a bear, or a lion, or a gorilla, which can be trained to hunt and hurt people, or to find and help them.
But don't expect that artificial human-like intelligence, or AHLI, will be around anytime soon.
Naturally, two questions immediately arise after this statement.
1. Why?
And.
2. What can be done about it?
Short answers to these questions are:
1. Because no one knows how the structure of a human brain is related to the existence of human-level intelligence; no one knows how that intelligence developed, or what evolutionary path led to its existence. The whole science of intelligence is in its infancy. Scientists do not even have a commonly accepted definition of intelligence yet!
And.
2. We have to intensify the research in the field of human intelligence (maybe another workshop at Dartmouth College?).
Basically, we need to learn, as deeply as possible:
1. How, from the moment they are born, do humans become intelligent?
2. What factors affect human intelligence (positively and negatively)?
3. How does the development of the structure of a human brain correlate with the development of human intelligence?
4. What is the structural difference between the brain of an intelligent human being and that of an animal which is responsible for the existence of human-level intelligence, or HLI?
And that is just the bare minimum (however, having a research entity devoted to those studies could significantly advance the research).
In parallel, we will need to study how to build artificial systems whose structure would resemble the structure of an intelligent brain, how to make such a system function, and how to train it.
Scientists tried to invent a system which would resemble a bird. They failed. Instead, they invented a system which just imitated a bird (an airplane), and succeeded.
Following this approach, scientists tried to invent a system whose structure would not be similar to the structure of a brain, but which would function like one.
They failed.
Eventually they turned their focus toward systems mimicking a brain – neural networks – and they succeeded.
But the current neural networks – when compared even with the brain of a mouse – are absolutely trivial.
The question is: in order to achieve the abilities of human intelligence, should new, future, more advanced neural networks "simply" have more elements, or should the whole structure of those networks be very different from the current ones?
This situation should not be an issue for any religious person, because if God had wanted to create an artificial intelligence, he or she would most probably have done it already. And if God didn't do it, then humans can't, no matter how complicated future neural networks may become.
Everyone who does not believe in God should turn to Charles Darwin.
The answers to questions like "How did human intelligence come about?" and "What structural difference between a human brain and an animal brain makes humans humanly intelligent?" should lie in the theory of evolution.
A very helpful hint lies in the fact that the development of an individual human being resembles the development of the whole human race (or, "human ontogenesis recapitulates phylogenesis in abbreviated form": the quote is from https://link.springer.com/chapter/10.1007/978-1-4612-6347-0_4).
That is why the future of AHLI is in the hands of the experts in human intelligence, and not in the hands of the experts in (current) AI. The vast majority of the experts in the field of "artificial intelligence" are walking (or racing) toward the development of an "artificial gorilla". That is where the money goes.
No one is really close yet to the development of AHLI (top AI developers/promoters have reported to Congress that AHLI will not be achieved for at least 20 more years).
Hence, for example, when you listen to someone talking about how to make AI ethical, know that he or she is telling you a lot of "BS" (beyond sense-making), or is talking about science fiction. Today, and for many more years, AI will not be much different from an advanced computer. A human-like type of AI will require decades of development, hence it is decades ahead of us. What may be worth discussing is the ethics of developing and exploiting current AI in a non-harmful way – just like anything else: knives, guns, tanks, bulldozers, dogs, gorillas, missiles.
This is currently a hot discussion. But it is greatly over-hyped. The ethics of using AI is no different from the ethics of using a hammer, a bulldozer, genetic editing, or an atomic bomb. It is based on two simple rules:
1. When developing ... (fill in the blank), do not intend to use it to harm anyone.
2. When developing ... (fill in the blank), also work on preparations to counteract any attempts to harm people by means of ... (fill in the blank).
That's it. The ethical part of the discussion is over. The technical part begins, including the discussion of relevant policies.
The proliferation of AI-based technologies in all aspects of business and society is inevitable. But the question "how will AI affect people?" does not belong to the field of AI; it belongs to the field of political philosophy, starting with answering the question: what makes a human human?
There is no doubt in my mind that some time in the future the real AI (a.k.a. AGI, AHLI) will be developed. Not in 5 or even 10 years, though, and not even in 20. But sometime in the far future it will. That means only one thing: people will cease to be the single intelligent species. We can see this situation as if aliens had finally introduced themselves, but aliens of the kind that grow up together with humans. And the fact that they will be growing up together is the most important one, because those "artificial humans", when fully developed, will act in accordance with the culture they grew up in and with the education they received - exactly like humans! There will be, though, one big difference between humans and AI. Every human has to spend years learning how to live his or her life. AI will be able to do it much faster. After one AI grows up in a specifically constructed environment (an "AI schooling facility") and learns how to behave, its memory can be replicated in all other AIs (well, maybe with some unpredictable fluctuations - "AI mental mutations").
This is why the field of AI training will become much more important than it is today. Not many AI professionals see it yet, despite the fact that the need for professional AI trainers will grow much faster than advances in the development of human-level AI.
Now, let's circle back to Google Assistant mimicking a person. This demonstration generated a loud buzz in the media. I even read that it showed how a machine had passed the Turing Test. I do understand the desire to see a miracle, or to live in a fairy tale; it is built into our genetic code (hence the success of the Marvel Universe blockbusters). But we also need to use our ability to do a reality check to balance our fairy-tale desires.
Put yourself in the shoes of a phone operator working 9 to 5, every day, answering numerous phone calls. You can use ANY voice you like when talking to that operator; he or she will never even suspect that it might be a machine. Maybe the caller is someone old, or ill, or wearing braces, or a joker, or someone with a broken jaw. Because the operator knows that in 2018 only people call on the phone. When a person simply CANNOT EXPECT that a specific event may even happen, saying that that person did not recognize that event is just, well, misleading.
A tribe of Australian aborigines had never known about the existence of airplanes. Every time an airplane flew above them in the sky, they thought it was the devil. Only after the tribe was discovered did the aborigines eventually learn what an airplane was. Before that, anyone could say, "Look, I will show the aborigines an airplane, but they will not recognize it!" And? But we don't have to travel to Australia for such a misleading example. Parents do the same to their little kids all the time (e.g., by telling fairy tales)!
The Turing Test requires that the evaluator know that the hidden partner may be a person OR a machine. Without such knowledge, the whole demonstration is meaningless. Exciting (hence the buzz)! But meaningless (I wonder why not a single professional in the field has pointed out that fact).
I am not sure whether AI professionals do not know human psychology or deliberately ignore it, but it seems that their craving for putting on a good show overrides the standards of scientific conduct.
So, ladies and gentlemen, let's see the difference between a show and reality; let's cool down and relax!
The real AI is not coming anytime soon!

Appendix I: The path toward AHLI
First, I would strongly recommend reading "On The Definition of AI". That piece has more insights into how AHLI can be built.
For me, based on all my professional experience in the field, it is absolutely obvious that, in order to achieve the abilities of human intelligence, new, future, more advanced neural networks should not just "simply" have more elements; the whole structure of those networks should be very different from the current ones.
For me, it is also absolutely obvious that there is a structural difference between the brain of an intelligent human being and that of an animal, a difference which is responsible for the existence of HLI.
When just born, human babies behave not much differently from most animal cubs. But with time, humans develop a quality which represents one of the most significant differences between them and other animals: humans learn to recognize themselves in a mirror (consciousness), while most animals do not have that ability. Human intelligence grows from consciousness, as in "I have to solve this problem and achieve this goal" (human intelligence has a vector: from "I now" to "I tomorrow"). Hence, the ability to recognize oneself in a mirror is one of the most important steps toward HLI.
(1) When growing up, a human brain learns to differentiate and analyze signals coming from all the sensors responsible for all the senses. (2) The signals come from the organs inside the body, as well as from the organs at the boundary between the body and the world. (3) Those signals lead to various reactions – some pleasant, some painful, some neutral. (4) And eventually the learning process leads to the development of highly intelligent actions (in accordance with my definition of intelligence: the ability to create solutions to problems which have never been solved before).
In the first sentence of the paragraph above, we can replace the word "human" with the word "animal" and the sentence will still remain correct. So will the second. And the third. But the last sentence (#4) will become wrong.
Why? What would be the natural reason for this difference?
We should conclude that having all the signals coming from all the sensors responsible for all the senses - from the sensors at the boundary between the body and the world, as well as from the organs inside the body - is not enough for the development of HLI.
What's left is the brain itself.
In a developed human brain, there has to be a part which is constantly "feeling" the state of that brain, and which does not exist in most animals, or exists in a very underdeveloped state. That part of the brain is, in part, responsible for knowing the difference between "I" and "that" and "them". The development and subsequent functioning of this part of the brain is responsible for consciousness (hence, for human-level intelligence).
When a toddler begins recognizing himself or herself in a mirror, it is a sign that the initial phase of the development of that part of the brain has come to an end.
It is natural to assume that a brain has its own "organs" responsible for all the activities of the body, but also an "organ" responsible for the functioning of the brain itself. Those parts of the brain interact and communicate, and we can even see the existence of those interactions when observing human behavior. For example, when asked an unexpected question, a person may need a very brief moment to first let his or her brain generate an "idea of the answer", and then to transform that idea into the conscious/cognizant/intelligent construction of words and sentences.
Just look closely at the face of Jordan Peterson right after he is asked a question, and then a second later (the pictures are from https://youtu.be/8wLCmDtCDAM; the video, of course, is much more informative - you can see the moment when the idea of the answer has been formed, with the rest of the time spent by the brain on the verbalization of that idea).
Those two different mental actions - generating an "idea of the answer" and delivering the verbal representation of that idea - must happen in different (but, obviously, connected) parts of the brain. And there should be an additional part of the brain which oversees the interaction between one part and another.
The structure of the neural network of a highly intelligent host has to have a physical/spatial/physiological correspondence to the functions that the neural network must enact/fulfill/carry out.
No current neural network has such a structure.
No current neural network can mimic this mental behavior.
As I mentioned above, AI professionals are working hard on the development of an "artificial gorilla", and they are very good at doing that. But the development of an "artificial human" requires, first and foremost, fundamental research in the field of human intelligence and consciousness. The fact that only humans have both a high level of intelligence and consciousness tells me that a high level of intelligence simply cannot be achieved without achieving consciousness. This, of course, is just my belief, or an axiom, or a postulate, because, technically, this statement is impossible to prove or disprove. But starting from this axiom, we must conclude that no matter how advanced pattern recognition methods become, until artificial consciousness is achieved there will be no human-level artificial intelligence.
Appendix II: What is common sense?
Every mental ability has its own physiological basis.
The physiological basis of intelligence is the brain.
Intelligence is the result of the development of the brain's ability to delay a reaction to a stimulus.
An immediate stimulus-reaction effect is genetically built in (or trained) and does not require intelligent actions. When a stimulus travels from a sensor into the brain and generates an immediate reaction, such a process as "reasoning" simply cannot happen.
When a stimulus reaches the brain, but then the brain pauses (for whatever reason), there is a chance that the following reaction will be one of several possible reactions to the original stimulus (or to a stimulus similar to the original one).
That pausing may have developed as the result of reacting to several competing stimuli (and mutations).
That pausing eventually let the brain develop an additional "organ" which could oversee the selection among various possible reactions, which in turn led to the development of the ability to reason.
When the source of a stimulus is not a physiological sensor but a signal generated in the brain, and the brain then selects a reaction from several possible reactions, and that reaction is not necessarily translated into a physical motion but becomes a stimulus for the next selection - that is reasoning. And it can happen only when there is a part of the brain which oversees the process of the brain selecting a reaction to a stimulus in the brain.
That additional part of the brain was able to develop signals which would model/imitate various external stimuli and possible reactions to them, and then would select the optimal behavior for the host of the brain.
That mental activity has eventually become what we now call “human-level intelligence”, or HLI.
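The distinction drawn above (an immediate reflex versus a delayed selection among candidate reactions, where the selected reaction can itself become the next internal stimulus) can be sketched in a few lines of toy code. This is purely my own illustration; the dictionaries, names, and selection rule are assumptions made for the example, not a model of real neural machinery.

# Reflex: a hard-wired stimulus -> reaction mapping, fired immediately, no pause.
REFLEX = {"hot surface": "pull hand away", "loud noise": "startle"}

def reflex(stimulus):
    return REFLEX.get(stimulus)

# "Reasoning": the reaction is not acted out immediately; several candidate
# reactions are generated, one is selected by an "overseeing" part, and the
# selected reaction is fed back as the next internal stimulus.
CANDIDATES = {
    "I am hungry": ["open the fridge", "finish the report first"],
    "open the fridge": ["eat now", "cook something"],
    "finish the report first": ["eat after the deadline"],
}

def reason(stimulus, prefer, steps=3):
    chain = [stimulus]
    for _ in range(steps):
        options = CANDIDATES.get(stimulus, [])
        if not options:
            break
        stimulus = prefer(options)   # the "overseer" selects one reaction...
        chain.append(stimulus)       # ...which becomes the next internal stimulus
    return chain

print(reflex("hot surface"))
print(reason("I am hungry", prefer=lambda options: options[-1]))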
Without modeling that additional part of the brain, no AI will ever reach HLI.
Without modeling that additional part of the brain, no AI will even be able to exhibit "common sense" behavior (Paul Allen's dream).
A “common sense” behavior is more than just a set of our everyday actions.
First, "common sense" is relative. What is "common" for some people in one culture or tradition may be absolutely "out of the ordinary" for other people in a different culture or tradition.
Second, different cultures even use different linguistic structures to express the meaning of "common sense". In English, "common sense" means "a meaning which is common, the same, for many", a "communal meaning", i.e. something naturally accepted by many people. But in Russian, the literal translation of "common sense" would be "healthy meaning", or "robust meaning".
And "healthy" or "robust" does not necessarily mean "communal"; it rather means "natural for that person", "obvious and not in need of an explanation", "just because".
The meaning of "common sense" as "just because" can be understood from a neurological point of view.
A "logical sense" represents a decision for which we can provide some kind of reasoning to support it; we can say "we do that because …" and list the logical steps which have led us to the decision.
But "common sense" as "just because" represents a decision for which we cannot provide any specific reasoning; it is just obvious (another term that AI does not know) to us. However, the reason it is obvious to us lies in the numerous stimulus-reaction interactions which, over a certain period of time, our brain learned to be beneficial (or not) without our conscious realization. We "know" (meaning, we feel like we know) this is the right decision because our brain has built a cause-and-effect chain between the initial stimulus and the final reaction. But we are not aware of the elements of that chain.
"Common sense", or "obvious", behavior is a reaction of the brain to a certain stimulus that was developed without the conscious participation of the host of the brain. That is why the host cannot explain why he or she reacts in that certain way.
And that means that our brain has two separate parts: one responsible for the generation of our intentions, and another responsible for the verbalization of our intentions; and those parts may or may not "talk" to each other.
And that means that there is another, a third part of the brain, which can regulate the communication between the first two.
The situation becomes even more complicated when we begin to analyze such a human ability as imagination. What is the mechanism behind it? What additional structures of the brain are responsible for its existence? AI professionals deliberately omit this topic, pretending that imagination has nothing to do with intelligence. Well, there is a very smart person who disagrees.
The road to AHLI requires the development and study of neural nets which model this complicated structure of the brain (including a net which studies the patterns happening inside another net while that other net makes a decision like "is this a banana or not?").
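For what it is worth, that last idea in the parentheses can at least be sketched with today's tools. The fragment below is my own illustration, assuming PyTorch is available; the layer sizes and the interpretation of the second net's output are arbitrary choices made for the example. One net makes the object-level decision, and the second net's only input is the internal activations of the first.

import torch
import torch.nn as nn

# First net: makes the object-level decision ("is this a banana or not?").
decider = nn.Sequential(
    nn.Linear(16, 8),   # 16 input features -> 8 hidden units
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid(),       # probability that the input is a banana
)

# Second net: never sees the input features; it only reads the hidden
# activations of the first net and judges that internal pattern
# (say, "how consistent does the decider look right now?").
observer = nn.Sequential(
    nn.Linear(8, 4),
    nn.ReLU(),
    nn.Linear(4, 1),
    nn.Sigmoid(),
)

x = torch.randn(1, 16)                  # one fake input
hidden = decider[1](decider[0](x))      # hidden activations of the decider
decision = decider[3](decider[2](hidden))
meta_judgement = observer(hidden)       # the observer reads the decider's state
print(decision.item(), meta_judgement.item())

Of course, wiring two nets together this way is trivial; the open question raised in this piece is what such an "overseeing" net would have to learn, and on what structure, to amount to anything like consciousness.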
Finally, I want to ask a question of every AI professional who had the patience to finish this piece: "Do you want to spend your professional life on the development of an 'artificial gorilla', or is your goal the development of actual AHLI?"
BTW: Everyone who uses a smartphone, or a computer, or a tablet knows what "a bug" is. Bugs are ubiquitous and present themselves every day, in different forms, to millions of users.
Thinking that AI development will be free of bugs is a delusion.
So, it is not just about how to make an AI which does what is right and does not do what is wrong. It is also about who will be responsible for the mistakes made by a buggy AI - and those mistakes will never disappear. The second issue is where to find good AI trainers.
Just as a bad teacher screws up a child's education, a bad trainer will screw up an AI's training (and that is on top of the bugs it will have).
They say that an AI is only as good as the data used to train it.
But it is not just about the data. It is about how good the trainer is at using those data. A bad teacher can have the best physics textbook and yet do no good for the students. These are the questions I have not seen discussed anywhere - except here:
https://www.cognisity.how/2018/04/aipuppies.html

Thank you for visiting, 
Dr. Valentin Voroshilov Education Advancement Professionals 
GoMars.xyz
To learn more about my professional experience:
Dear Visitor, please feel free to use the buttons below to share your feelings (ANY!) about this post with your Twitter or Facebook followers.

Tuesday, May 8, 2018

Anti-Atheism; or The Struggle Between God and Science.



(the first piece on the matter: "What is the difference between science and religion?")
When people invoke God, it only means that they have reached the limit of their knowledge.
Phrases like "God knows why", or "That's God's will", or "That's in God's hands" simply mean "I don't know", "I have no idea why this is happening", "I don't know what to do", "I don't know why I did that".
The idea of God represents people's awareness of the limits of their own knowledge, and the ensuing frustration they feel and want to escape.
This awareness is as old as humans are.
At first, people created the idea of spirits (demons, angels, trolls, …). Spirits lived around people: in a forest and in a lake, in trees and in caves, etc. Spirits were the governors of the forces of nature; they were responsible for rain, for fire, for thunder, for the Sun, for fish and animals, and for everything else that people could not control. Spirits were molded after the people who lived in a tribe, but stronger, more powerful, and always hungry (exactly like the people who created them) – hence, for everything they were asked to do, they always demanded some sacrifice.
The next, and the next, and the next … generation of spirits was eventually replaced by gods, who usually did not live around people anymore (for example, they lived at the top of mountains, or in the ocean). Gods were stronger than spirits (do not confuse them with wizards, or sorcerers, …, who were also powerful, but came from people, following the line of shamans). In time, the gods "created" an internal hierarchy. Gods did what spirits did not do: they competed with each other for power, deceived each other, fought with each other – exactly like the people who created them.
But eventually, someone very smart and wise asked: why have so many gods when one God can do exactly the same? That would make the whole story much simpler. More importantly, it would make the rituals simpler and more unified, hence better suited to a wider audience. And the first monotheistic religion was born. And then the second, and the third. Because people wanted to believe in the existence of a Superior Being.
Why do people want to believe in God?
Because people feel the need for an explanation of the things happening around them and to them, and because people feel the need for guidance in their actions.
This need is genetically built into the human code.
Why?
Because it helps the human species to survive, be fruitful, and multiply. Because it brings some structure into human life, which helps in fighting the chaos brought by natural forces.
"God" fills the gap between reality and the knowledge about that reality.
"God" is the "substance" between everyday life and science.
"God" is the need to fill the gaps in our everyday life between our knowledge and the things we cannot explain.
The more science people do and the more knowledge people have, the less they need a God. Science erodes the control over human minds which religions have enjoyed for thousands of years. Science is the worst enemy of religion.
This is the main reason why every religion is inherently anti-knowledge, anti-science.
A scientist, or a scientifically raised person, does not need to invoke God. He or she plainly accepts the fact of not knowing the reason for whatever they cannot explain.
He or she simply says: "I don't know".
For example, when asked "Why does the Universe exist?", a religious person says "Because God created it", and a scientist says "I don't know, it just does".
Both answers are absolutely equally informative (i.e. both give 0 actual information).
However, the former answer may bring some feeling of closure ("Someone stronger and smarter than I told me so, so I don't have to think about it"), while the latter answer may leave one frustrated (and we don't like that feeling).
But ask a religious person "Why does God exist? Who created God?" and he or she will become as frustrated as the scientist who said "I don't know".
Well, maybe at first the religious person would say something like "God has always existed; he/she does not need to be created". But when asked "How is that different from saying that the Universe has always existed and does not need to be created, hence there is no need to appeal to a Superior Being?", the religious person starts feeling frustrated (BTW: frustration is a common sign of reaching the limit of our knowledge).
Believing in God is a timeless tradition.
Exactly like believing in the Tooth Fairy, or Santa Claus – until you are 8.
Infants and toddlers simply cannot know the difference between reality and fantasy, or between the truth and a lie. Everything little children feel through their receptors, everything they perceive via their human senses, is equally real to them. Cartoon characters on TV are as real as the people in the room. Until a certain age, children simply do not differentiate the truth from a lie (including fairy tales).
Of course, eventually they learn the difference.
The key word is “learn”.
A grown-up person is only as knowledgeable as his or her learning was good.
And the quality of learning is directly proportional to the quality of teaching, to the quality of the education that person received while growing up (for some, including self-education).
A low-quality education leads to grown-ups who are biologically adults, but who mentally are still children believing in "fairy tales for adults", like God.
Naturally, religion has played and still plays a very important social role.
It brings people together. It offers moral guidance and ethical rules. It brings emotional relief and peace of mind.
But nowadays, in order to fulfill the same social functions, people just do not need the idea of God anymore. More and more people are capable of doing "the right thing" because it is the right thing to do, and not because of the fear of "eternal damnation" (by trying the best they can to follow the Golden Rule).
This realization has led to the development of atheism.
But simple trivial atheism denies any beliefs.
Simple trivial atheism makes every rule relative, hence equal in its importance or unimportance for all people.
Simple trivial atheism equates religion and beliefs, and by denying the need for religion it also denies the need for some universal (absolute) beliefs.
And that makes simple trivial atheism absolutely wrong.
There are absolute beliefs. There is the absolute truth. That truth and those beliefs just do not include the existence of any Supreme Being because they do not need it anymore.
Instead of starting from "God exists", we can start from "the World exists", because it does. It is just a fact.
Instead of saying "God created the rules which govern the world", we can say "the World is governed by rules" (we call them "laws", though).
Instead of saying "God gave us rules to follow and will punish us if we do not", we can say "Our World, including our social world, will only continue to exist and reward us for our actions if we follow the laws governing the World".
Etc.
When we reach the limit of our knowledge and cannot explain why something happens, we just say “We don’t know why that happens” (and sigh in frustration).
The difference between a scientific view of the world and a religious view is not the absence or existence of beliefs. It is (a) the beliefs themselves, and (b) what we do when some of our beliefs begin to contradict our own experience (more on this in "What Is The Difference Between Science and Religion? Really."; at http://www.cognisity.how/2017/12/religion.html).
In conclusion: “anti-atheism” does not mean religion. It means replacing simple trivial atheism with a smarter, deeper, scientific version of it.
And “Thank you, World, for our existence!”
P.S. I am perfectly aware of the fact that religions occupy a large and important place in human cultures and are not going anywhere anytime soon. I am also perfectly aware of the fact that my piece represents an extreme simplification of the matter. But my goal is not to convert anyone or to convince someone that I am correct. My goal is merely to offer some food for thought to people who may still have no strong opinion on the matter, and also to people who do have a strong opinion but could say to me: "I disagree, but I see your point."
We may disagree on many things, but let's agree on one: let's not force each other into our way of thinking. You have your beliefs and go to your church, and I have mine. I call my church "The Church of Common Sense".
N.B. I am sorry for my rough English; I learned it myself, mostly from radio and TV shows, so it reflects the language of mass media.
Thank you for visiting, 
Dr. Valentin Voroshilov
Education Advancement Professionals 
GoMars.xyz

To learn more about my professional experience: