By Andrew Shifren
Google’s artificial intelligence recently beat the reigning champion in the ancient East Asian strategy game, Go. This is a familiar story. In 1997, IBM’s artificial intelligence, Deep Blue, beat the world chess champion, Garry Kasparov. In that case, the computer won by sheer computation, evaluating millions of possible moves and positions on the chessboard and then executing its program. So who cares about some dusty East Asian board game?
The CEO of Google DeepMind, Demis Hassabis, claims, “There’s more configurations of the [Go] board than there are atoms in the universe.” The implication is that no computer, no matter how powerful, could compute every move. Instead, the AI had to use intuition akin to a human’s to beat the champion, Lee Sedol. Google’s AlphaGo program learned, much as we do, by practicing. It played thousands of games against itself until it worked out successful strategies and tactics. Observers of the match credited the program with being “aggressive” and “inventive,” challenging the assumptions of many players.
Google insists that this is a breakthrough. Whereas Deep Blue, IBM’s chess program, had “narrow intelligence,” AlphaGo has “general intelligence.” Narrow intelligence is the equivalent of being told to do a math problem that you know how to do, and doing it. General intelligence, on the other hand, is more like being told to write a paper on the effect of artificial intelligence on international relations. There are certain rules, like grammar and syntax, but there is room for intuition and creativity… like this metaphor. It does not take an enormous leap of the imagination to picture how this kind of machine learning might affect world affairs.
In a Council on Foreign Relations interview, DARPA’s Paul Cohen described a program he is working on that does more than crunch numbers. “Big Mechanism” would “read the primary literature in cancer biology, assemble models of cell signaling pathways in cancer biology that are much bigger or more detailed than any human can comprehend, and then figure out from those models how to attack and suppress cancer.” That’s an ambitious goal, and it sounds like science fiction, but it’s not that far off.
On February 9, 2016, the US Intelligence Community’s report to the Senate Select Committee on Intelligence named AI as a major source of global risk. AI could perform sophisticated cyber attacks and make it harder to attribute those attacks to specific actors. It could also be incorporated into “weapons and intelligence systems.” Interestingly enough, the report lists “unemployment” among the global dangers stemming from AI.
Consider that software writes many of the news articles, particularly in sports and business, that you read today. The Associated Press uses a program to generate “more than 3,000 financial reports per quarter.” Try reading one of these reports and ask yourself whether you could really tell it was not written by a human: Valeant Reports 4Q Loss.
So far, the Associated Press has claimed that the program has replaced no jobs; rather, it is freeing up journalists to pursue different stories. But as programs get better, jobs will inevitably be replaced. Andrew McAfee, an MIT researcher, predicted in the same interview, “I don’t think a lot of employers are going to be willing to pay a lot of people for doing a lot of what they’re currently doing these days. It’s pretty clear that tech progress has been one of the main drivers behind the polarization of the economy, the hollowing out of the middle class.” Capital will be concentrated in the hands of those who control cheap programs that do the jobs of payroll clerks and financial advisers. In other words, artificial intelligence could drive unemployment, and unemployment could breed instability around the world.
Artificial intelligence raises problems with a truly global reach. How might it solve problems that we cannot solve ourselves? How might it undercut the middle class and limit the spread of capital throughout the world? And how might its promise push governments to increase surveillance and restrict people’s freedoms?