Photo: Alex B via Flickr
Many scientists have already warned of a possible threat from AI. In 2015, dozens of well-known scientists, investors, and entrepreneurs whose work is connected with the development of AI signed an open letter calling for more attention to safety and public benefit in the field of artificial intelligence. Among the signatories were astrophysicist Stephen Hawking and Tesla and SpaceX founder Elon Musk. The letter, along with an accompanying document from the Future of Life Institute (FLI), was written amid growing concern about the impact of AI on the labor market. According to these papers, the signatories are also worried about the long-term survival of humanity if the capabilities of robots and machines keep growing almost uncontrollably.
The scientists acknowledge that modern AI systems have enormous potential. The FLI letter argues that humanity needs to explore how to make the best use of AI while avoiding the associated pitfalls: AI systems must do exactly what we want them to do.
The Future of Life Institute was founded in 2014 by a group of enthusiasts in the field, including Skype co-founder Jaan Tallinn. Its founders set out to "minimize risks that humanity faces" and to stimulate research with "an optimistic view of the future", above all regarding the risks posed by the development of AI and robotics. FLI's advisory board includes Musk and Hawking, along with actor Morgan Freeman and other well-known figures. According to Elon Musk, uncontrolled development of artificial intelligence is potentially more dangerous than nuclear weapons.
At the end of 2015, the renowned British astrophysicist Stephen Hawking explained his misgivings about AI technologies. In his view, super-intelligent machines will eventually regard people as expendable, like ants that merely get in the way of their tasks. Answering questions from Reddit users, Hawking said he does not believe such machines would be "evil creatures" bent on destroying mankind out of intellectual superiority; most likely, they simply would not notice humanity at all.
Modern life already includes numerous applications of artificial intelligence, from image and speech recognition to self-driving vehicles and more. By some Silicon Valley estimates, more than 150 AI start-ups are operating today. At the same time, development in this area is attracting ever more investment, and large companies such as Google are building their own AI-based projects. The authors of the letter therefore believe that now is the time to pay special attention to the economic, social, and legal consequences of the current boom.
When it comes to the increasing automation of production processes, the future is already here. The World Economic Forum (WEF) earlier presented a report estimating that, by 2020, automation driven by robots and robotic systems will eliminate around 5 million jobs across various fields. To prepare the report, the WEF used data on 13.5 million employees from countries around the world.
source: techcrunch.com