Recently I was surfing YouTube and came across a clip from Elon Musk titled: “Do You Trust This Computer?”
It seemed like a weird question: which computer are we talking about? Enter Google DeepMind. DeepMind was founded a mere seven years ago, was acquired by Google in 2014 for a neat £400 million, and can already beat many of the original Atari games. What's scary is that it taught itself how to beat them.
DeepMind's artificial intelligence (AI), dubbed a "digital super intelligence" by some, is built on a form of reinforcement learning called Q-learning. In reinforcement learning, an agent learns from experience by trial and error: it takes actions, observes the rewards that follow, and updates its estimates of which actions pay off. The problem is typically framed as a Markov Decision Process, and Q-learning incrementally learns the expected long-term value of taking each action in each state.
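To make that concrete, here is a minimal, illustrative sketch of tabular Q-learning on a toy environment (a five-state corridor I've invented for the example; this is nothing like DeepMind's actual Atari-playing system, which uses a deep neural network instead of a table):

```python
import random

# Toy environment (an assumption for illustration): a 5-state corridor.
# The agent starts at state 0 and earns a reward of 1 for reaching state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Deterministic transition; returns (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
            target = reward + (0.0 if done else GAMMA * max(q[next_state]))
            q[state][action] += ALPHA * (target - q[state][action])
            state = next_state
    return q

q = train()
# After training, the greedy policy should point right (toward the reward) everywhere.
policy = [0 if row[0] > row[1] else 1 for row in q]
print(policy)
```

The same trial-and-error loop, scaled up with a neural network approximating the Q-table and raw screen pixels as the state, is broadly how the Atari-playing agent learned.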
What's rather scary in all of this is that DeepMind has had administrative-level access to Google's data centres. Yes, this is the same Google that processes over a billion web searches daily and effectively owns our online DNA. How many movies can you think of in which robots harness collective pools of knowledge to advance themselves?
DeepMind has progressed considerably beyond beating games (and, at times, the humans behind them). It has more recently learned to model the thoughts of others, to speak like a human, and, as of the end of last month, to show aggression in stressful situations. That is somewhat alarming, considering we'd like to think a logic machine couldn't be manipulated or swayed by a frustrating situation.
As with any artificial intelligence, ethical questions come into play. While DeepMind has been deployed across Google's data centres, there has also been controversy around its deployment over NHS health data. By bringing intelligent machines into contact with personal data, we've opened something of a Pandora's box, irrespective of any perceived malice.
Musk's suggestion is to merge with artificial intelligence: then there is no case of "them and us", but rather a symbiotic relationship. This resonates with the ideas Ray Kurzweil has shared in his various books since the 1990s.
Considering the dates of the wagers Kurzweil has made about the singularity (the point in time when AI and man become one), might we have something to worry about in this lifetime? On the other hand, if we look at the capabilities of current AI, such as Cleverbot talking to itself in AI versus AI, perhaps we shouldn't sweat it!
Image Source: Time
While AI may currently be wielded in a safe and productive manner, there are always others with nefarious intent. Besides, a machine-learning system will learn from whatever input it is given. Take Chappie:
A number of industry leaders, such as Jack Ma (Alibaba) and Elon Musk (Tesla), have expressed concern about the future ramifications of AI. However, a digital super intelligence will probably know what the future looks like before we do.
Rapid, exponential growth in AI has been noted across numerous countries and industries; one can only hope it leads to a benign result. As digital super intelligence takes shape, do global leaders need to focus on regulation? Do they even understand the implications, or the core concepts, hanging over the world?
One thing is for sure – time will tell.
Thomas – MacGyver of code