Will AI Take Over The World?
I remember my first experiments with artificial intelligence algorithms as if they were yesterday. I was astonished by the results: the machine learning models I used automatically selected the variables they would use for prediction and automatically calculated the impact of each one on the result. I was so impressed by this small-scale miracle unfolding before my eyes that I began reading science fiction more intensively, and then writing science fiction stories and novels. In the meantime, I also investigated the mathematical background of how this miracle happens. At that time, we could only build models from numerical and categorical data; with the deep learning revolution that followed, artificial intelligence algorithms began to produce impressive results on sound, images, and free text.
We are familiar with the threat posed by artificial intelligence from cult sci-fi films such as 2001: A Space Odyssey, The Terminator, The Matrix, and Ex Machina. But they are, after all, just movies. Or are they? Could artificial intelligence reach a cognitive capacity equivalent to that of humans? To form a sound opinion on this question, it helps first to understand what exactly artificial intelligence is.
Artificial Intelligence: A Brief Introduction
Before discussing what artificial intelligence might achieve in the future, it is worth a brief look at its history. Alan Turing put forward the idea of computer-based artificial intelligence in 1950. Turing proposed a test to determine whether a computer possesses intelligence equivalent to a human's: could a computer program, chatting with a person in a text-based environment, convince that person it was human? To date, no chat program has convincingly passed the Turing test, but the rapid progress we see in conversational systems suggests it could happen in the near future. Shortly after Turing's paper, Princeton University students built one of the first artificial neural network machines using some 3,000 vacuum tubes. The term "artificial intelligence" was coined in 1955 to name the conference on the subject held at Dartmouth College. In the same year, researchers at the Carnegie Institute of Technology developed Logic Theorist, the first artificial intelligence computer program.
Progress continued through the 1950s. Marvin Lee Minsky founded the artificial intelligence lab at MIT, Cambridge worked on machine translation, and IBM worked on self-learning algorithms. At the time, researchers were greatly optimistic about the future of artificial intelligence: it was thought that programs able to think at a human level could be written within 10 years. By the 1970s, the funders of artificial intelligence research, above all the US government, cut funding because practical results had failed to materialize, and a decade of stagnation known as the "AI winter" began. In the 1980s, artificial intelligence was revived with "expert systems" that operated according to detailed rules written by human experts. Artificial neural networks were brought to a level of maturity close to that of today's, and the first computer-controlled autonomous vehicle trials began. But when practical results remained confined to narrow domains, a second AI winter set in. In the 21st century, artificial intelligence has become popular again thanks to faster computers, more data, and advances in deep learning. Today, artificial intelligence systems have become part of our daily lives: thanks to these algorithms, we can search Google by voice, translate between languages, filter out spam email, and avoid traffic with navigation applications.
Three key components determine the performance of an artificial intelligence system: data, processing power, and the learning algorithm. By looking closely at the developments in each, we can form a clearer idea of the future of artificial intelligence systems.
In modeling studies with traditional artificial intelligence algorithms, the size of the data and the amount of learning were not proportional: after the data reached a certain size, learning almost came to a standstill. With the deep learning revolution of the 2010s, artificial intelligence systems began to keep learning from ever larger data. This made possible breakthroughs such as machine translation and object recognition, which seem to have jumped straight out of sci-fi movies.
The amount of data produced in the world is growing like an avalanche due to digitalization: more data has been produced in the last two years than in all of prior history. This process continues unabated thanks to social media, mobile technologies, messaging applications, and many other digital technologies. Artificial intelligence algorithms can learn about the world from this ever-growing pool of data.
Unit costs for data storage, processing, and transfer are falling rapidly. The work of Google's director of engineering and futurist Ray Kurzweil suggests that in five years, the same money that buys a computer today will buy one that processes roughly ten times as much per unit time; in ten years, the same money will buy a computer with roughly a hundred times today's capacity.
Such confident projections are possible because information technology capabilities have grown exponentially and steadily for more than 40 years. According to Kurzweil, a computer that can be purchased for $1,000 in 2028 will have processing power comparable to that of a human brain.
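The arithmetic behind these projections is simple exponential growth. As a minimal sketch (the function names are illustrative, not from any library), a tenfold improvement every five years implies a doubling time of about 1.5 years:

```python
import math

# Kurzweil-style assumption: price-performance improves 10x every 5 years.
GROWTH_PER_PERIOD = 10.0
PERIOD_YEARS = 5.0

# Doubling time implied by 10x per 5 years: 5 * log(2) / log(10)
doubling_time = PERIOD_YEARS * math.log(2) / math.log(GROWTH_PER_PERIOD)

def relative_performance(years):
    """Processing power per dollar after `years`, relative to today."""
    return GROWTH_PER_PERIOD ** (years / PERIOD_YEARS)

print(f"doubling time: about {doubling_time:.2f} years")
print(f"after 5 years:  {relative_performance(5):.0f}x")
print(f"after 10 years: {relative_performance(10):.0f}x")
```

The same compounding explains why even a brief slowdown in circuit miniaturization matters: a longer doubling time pushes milestones like brain-scale computing years further out.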
Compared with the other two components, the slowest progress has been in algorithms. Thanks to LSTM, a deep learning architecture, it became possible to learn words and groups of words in natural language processing; in other words, to create a short-term memory. Research is now under way on storing what a model has learned in a long-term memory and recalling it when needed. Despite all this research, the absence of a clear picture of how the human brain works shows that progress on algorithms is not easy. Still, the rapid growth of processing power and of the data pool makes me think that artificial intelligence will overcome much more complex problems in the coming years. If artificial intelligence keeps developing at this rate, will we humans become inert, useless beings after a while?
The rapid development of information technology has been possible thanks to the constant shrinking of electronic circuits, and here several limits arising from the laws of physics come into play. For example, growth in processor speeds has slowed markedly in recent years. Another problem with computers is high energy consumption. Today's fastest supercomputers have reached roughly the processing capacity of the human brain, but while the brain consumes about as much energy as an energy-saving light bulb, these computers consume enough energy to power a town.
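The "short-term memory" idea behind LSTM can be illustrated with a toy sketch. An LSTM cell carries a cell state from step to step and uses three gates to decide what to forget, what to write, and what to expose. The single-unit, scalar version below is a simplification for illustration only (real implementations use vectors and learned weights; all names here are mine, not from any library):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell. `w` holds illustrative
    scalar weights and biases for the four gate computations."""
    # Forget gate: how much of the old cell state (memory) to keep
    f = sigmoid(w["wf_x"] * x + w["wf_h"] * h_prev + w["bf"])
    # Input gate: how much new information to write into memory
    i = sigmoid(w["wi_x"] * x + w["wi_h"] * h_prev + w["bi"])
    # Candidate value proposed for the memory update
    g = math.tanh(w["wg_x"] * x + w["wg_h"] * h_prev + w["bg"])
    # Output gate: how much of the memory to expose as output
    o = sigmoid(w["wo_x"] * x + w["wo_h"] * h_prev + w["bo"])
    c = f * c_prev + i * g      # updated cell state: the "memory"
    h = o * math.tanh(c)        # new hidden state: the visible output
    return h, c

# Toy run over a short input sequence with fixed (untrained) weights
w = {k: 0.5 for k in ["wf_x", "wf_h", "bf", "wi_x", "wi_h", "bi",
                      "wg_x", "wg_h", "bg", "wo_x", "wo_h", "bo"]}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.5]:
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

Because the cell state `c` is carried forward and only partially overwritten at each step, information from earlier words can influence later ones, which is exactly the short-term memory the paragraph above describes.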
If artificial intelligence poses an existential risk to us, why do we keep developing it? Given the competition between countries and companies, meaningful restrictions do not seem possible. Moreover, we are not even sure such restrictions would be right: artificial intelligence has already begun to help us at work and to make our daily lives easier.
Will artificial intelligence get out of control, as in the movies? And if so, can we simply pull the plug? If these systems become smart enough, we can be fairly sure they will slip out of our control; anything else would go against the nature of things. They will be so woven into our lives, and we will be so dependent on them, that I do not think we will dare pull the plug. In Europe, the aristocracy could not prevent the bourgeoisie, which controlled the new means of production, from rising and gradually seizing power; applying the same Marxist reasoning, it would not be surprising if artificial intelligence took over the world through its growing command of productive power.
There are predictions that by 2030-2050, humanity may be on its way to becoming a mixed human-machine entity. Perhaps it is best not to make ambitious predictions when it comes to the future. As the thinker Louis Althusser put it, "the future lasts a long time." Who knows what surprises we will encounter during this long future?
Thanks for reading.
Image Source: pixabay.com and bilimkurgukulubu.com