
Review and comment on Nick Bostrom’s book Superintelligence


Back in the 1990s I spent considerable computer time training and optimizing artificial neural networks. They were hot then. Around the year 2000 artificial neural networks became unfashionable, with Gaussian processes and support vector machines taking over. During the 2000s computers got faster, and some engineers began to explore what graphics processing units (GPUs) could do besides rendering graphics for computer games. GPUs are fast at the matrix computations that are central to artificial neural networks. Oh and Jung’s 2004 paper “GPU implementation of neural networks” seems, according to Jürgen Schmidhuber, to be the first to describe the use of GPUs for neural network computation, but it was perhaps only when Dan Ciresan from the Politehnica University of Timisoara began using GPUs that the interesting advances started: in Schmidhuber’s lab he trained a GPU-based deep neural network system for traffic sign classification and achieved superhuman performance in 2011.
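
As a rough illustration of why GPUs matter for neural networks, the sketch below times the same dense matrix product on the CPU and on the GPU. It assumes PyTorch and a CUDA-capable GPU are available; neither the library nor the matrix size comes from the post.

```python
# A minimal sketch (assuming PyTorch and a CUDA-capable GPU) of the dense
# matrix products that dominate neural network training, timed on CPU and GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
c_cpu = a @ b                      # matrix product on the CPU
print("CPU:", time.time() - start, "seconds")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    start = time.time()
    c_gpu = a_gpu @ b_gpu          # the same product on the GPU
    torch.cuda.synchronize()       # wait for the asynchronous kernel to finish
    print("GPU:", time.time() - start, "seconds")
```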

Deep learning, i.e., computation with many-layered neural network systems, was already taking off then and is now broadly applied; the training of a system to play computer games (classic Atari 2600 games) is perhaps the most illustrative example of how flexible and powerful modern neural networks are. So in limited domains deep neural networks are presently taking large steps.
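
To make “many-layered” concrete, here is a minimal sketch of a small deep feedforward network in Keras; the input size, layer widths and number of output classes are arbitrary illustration values, not taken from the post.

```python
# A minimal sketch of a "many-layered" (deep) neural network using Keras.
# The 784-dimensional input, layer widths and 10 output classes are
# arbitrary illustration values.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(784,)))  # hidden layer 1
model.add(Dense(256, activation='relu'))                      # hidden layer 2
model.add(Dense(128, activation='relu'))                      # hidden layer 3
model.add(Dense(10, activation='softmax'))                    # output layer

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```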

A question is whether this will continue and whether we will see artificial intelligence systems with more general superhuman capabilities. Nick Bostrom’s book ‘Superintelligence’ presupposes so and then starts to discuss “what then”.

Bostrom’s book, written from the standpoint of an academic philosopher, can be regarded as an elaboration of Vernor Vinge’s classic “The coming technological singularity: how to survive in the post-human era” from 1993. It is generally thought that if or when artificial intelligence becomes near-human intelligent, the artificial intelligence system will be able to improve itself, and once improved it will be able to improve itself yet more, resulting in a quick escalation (Vinge’s ‘singularity’) with the artificial intelligence system becoming much more intelligent than humans (Bostrom’s ‘superintelligence’). Bostrom cites surveys among experts showing that the median estimate for human-level intelligence is around the years 2040 to 2050; a share of experts even believe the singularity will appear in the 2020s.

The book lacks solid empirical work on the singularity. The changes around the industrial revolution are discussed a bit, and the role of the horse in 20th-century society is mentioned: from being in widespread use for transport, its function for humans was taken over by human-constructed machines and the horses were sent to the butcher. Horses in the developed world are now mostly used for entertainment purposes. There are various examples in history where a more ‘advanced’ society competes with an established, less developed one: Neanderthals versus modern humans, the age of colonization. It is possible, though, that a superintelligence/human encounter will be quite different.

The book discusses a number of issues from a theoretical and philosophical point of view: ‘the control problem’, ‘singleton’, equality, strategies for loading values into the superintelligent entity. It is unclear to me whether a singleton is what we should aim at. In capitalism a monopoly is not necessarily good for society, and market-economy societies put up regulation against monopolies. Even with a superintelligent singleton it appears to me that the system can run into problems when it tries to handle incompatible subgoals; e.g., an ordinary desktop computer, as a singleton, may have individual processes that require a resource which is not available because another process is using it.
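
As a toy illustration of that last point, the sketch below uses plain Python threads and locks to stand in for the processes and resources; none of it comes from the book. Two “subgoals” each hold one resource while waiting for the resource the other holds.

```python
# A toy sketch of incompatible subgoals competing for shared resources:
# two threads each grab one lock and then wait for the lock the other holds.
# A timeout is used so the example terminates instead of deadlocking forever.
import threading
import time

resource_a = threading.Lock()
resource_b = threading.Lock()

def subgoal(first, second, name):
    with first:                              # grab the first resource
        time.sleep(0.1)                      # let the other subgoal grab its resource
        if second.acquire(timeout=1):        # try to grab the second resource
            print(f"{name}: obtained both resources")
            second.release()
        else:
            print(f"{name}: blocked, the other process holds the resource")

t1 = threading.Thread(target=subgoal, args=(resource_a, resource_b, "process 1"))
t2 = threading.Thread(target=subgoal, args=(resource_b, resource_a, "process 2"))
t1.start(); t2.start()
t1.join(); t2.join()
```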

Even if the singularity is avoided there are numerous problems facing us in the future: warbots as autonomous machines with killing capability, do-it-yourself kitchen-table bioterrorism, and generally intelligent programs and robots taking our jobs. Major problems with IT security already occur nowadays with nasty ransomware. The development of intelligent technologies may foster further inequality where a winner-takes-all company reaps all the benefits.

Bostrom’s take-home message is that superintelligence is a serious issue that we do not know how to tackle, so please send more money to superintelligence researchers. It is worth alerting society to the issue. There is general awareness of some long-term issues in the evolution of society, such as demographics, future retirement benefits, natural resource depletion and climate change. It seems that developments in information technology might be much more profound and require much more attention than, say, climate change. I found Bostrom’s book a bit academically verbose, but I think it has considerable merit as a coherent work setting out the issue for the major task we have at hand.

 

(Review also published on LibraryThing).


Status on humans vs. machines


Are computers beating humans? In simple number crunching, yes, but also in more complex tasks.

Year Domain Description
2017 Dota 2 1v1: OpenAI reported “We’ve created a bot which beats the world’s top professionals at 1v1 matches of Dota 2 under standard tournament rules”, August 2017.
2017 Poker (heads-up no-limit Texas Hold’em): According to Andrew Ng, “AI beats top humans”, January 2017. The program was Libratus, a reinforcement learning-based algorithm from Carnegie Mellon University; see Poker pros vs the machines.
2016 Lipreading: The paper Lip Reading Sentences in the Wild writes “… we demonstrate lip reading performance that beats a professional lip reader on videos from BBC television.”
2016 Conversational speech recognition: Microsoft Research reports surpassing human performance on benchmark datasets in Achieving human parity in conversational speech recognition.
2016 Geoguessing: Google’s PlaNet: “In total, PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km”, according to Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image.
2016 Go: DeepMind’s AlphaGo beat the best European Go player, as reported in January in Mastering the game of Go with deep neural networks and tree search.
2015 Closed-world image classification: ImageNet classification by Microsoft Research researchers with a deep neural network; see Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Already in 2014 Google was close to human performance, see ImageNet Large Scale Visual Recognition Challenge. The human error rate on ImageNet has been reported to be 5.1%, and that was Andrej Karpathy, a dedicated human labeler. Microsoft reported 4.94% in February 2015. Google won one of the competitions in 2014 with “GoogLeNet”, which had a classification error of 6.66%. Baidu reported an error rate of 5.98% in January 2015 and 5.33% later in February. The initial reports were, however, on the ImageNet dataset with a limited number of classes (1000). A straight out-of-the-box application of Keras-distributed ImageNet-based classifiers (a sketch of such an application is shown after the table) does not seem to perform on par with humans; see “Washing machine” in Linking ImageNet WordNet Synsets with Wikidata.
2015 Atari game playing: Google DeepMind’s deep neural network with reinforcement learning; see Human-level control through deep reinforcement learning: “We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games”. See also Playing Atari with Deep Reinforcement Learning.
2014 Personality judgement: Reported in Computer-based personality judgments are more accurate than those made by humans. The computer used Facebook Likes.
2014 Deceptive pain expression detection: See Automatic Decoding of Facial Movements Reveals Deceptive Pain Expressions: “…and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy.”
2013 Age estimation: Estimation of a person’s age from a photo of the face; see Age Estimation from Face Images: Human vs. Machine Performance. A considerable improvement came with the winner of the ChaLearn LAP 2015 challenge: DEX: Deep EXpectation of apparent age from a single image.
2013 Smooth car driving: The head of Google’s robotic car project, Chris Urmson, claimed that their self-driving car “is driving more smoothly and more safely than our trained professional drivers.” For general car driving the Google car may, as of 2014, not be better than humans, e.g., because of problems with road obstacles; see Hidden Obstacles for Google’s Self-Driving Cars.
2011 Traffic sign reading: Dan Ciresan used a convolutional neural network on the German Traffic Sign Recognition Benchmark to beat the best human. Results are reported in Man vs. Computer: Benchmarking Machine Learning Algorithms for Traffic Sign Recognition.
2011 Jeopardy!: In January 2011 the IBM Watson system beat two human contestants in the open-domain question-answering television quiz show. An introduction to the technique in Watson is Introduction to “This is Watson”.
2008 Poker: Michael Bowling; see the news report Battle of chips: Computer beats human experts at poker. In 2015 a heads-up limit hold’em poker program was reported not just to be better than humans, but to have “essentially weakly solved” the game; see Heads-up limit hold’em poker is solved.
2007 Face recognition: See Face Recognition Algorithms Surpass Humans Matching Faces over Changes in Illumination.
2005 Single character recognition: See Computers beat Humans at Single Character Recognition in Reading based Human Interaction Proofs (HIPs).
1997 Chess: See Deep Blue versus Garry Kasparov.
1979 Backgammon: See Backgammon Computer Program Beats World Champion.
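
As referenced in the closed-world image classification row above, a straight out-of-the-box application of a Keras-distributed ImageNet classifier can look roughly like the sketch below; the choice of ResNet50 and the file name ‘washing_machine.jpg’ are illustrative assumptions, not details from the post.

```python
# A minimal sketch (not the exact code behind the blog's experiment) of using
# a pretrained, Keras-distributed ImageNet classifier straight out of the box.
# 'washing_machine.jpg' is a placeholder file name.
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

model = ResNet50(weights='imagenet')            # downloads pretrained ImageNet weights

img = image.load_img('washing_machine.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

predictions = model.predict(x)
print(decode_predictions(predictions, top=5)[0])  # (synset, label, probability) triples
```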

Still waiting…

Year Domain Description
2014 University entry examination: A Japanese system was reported in 2014 to score 95 on the English section of the entrance exam to the University of Tokyo. The average for a prospective student was 93.1. See also, e.g., The Most Uncreative Examinee: A First Step toward Wide Coverage Natural Language Math Problem Solving.
2015 Conversation: Machines can hold conversations and might fool humans into thinking the machine is a human, but they are probably not yet better at conversing. See, e.g., Bruce Wilcox and A Neural Conversational Model.
2015 Music: Most of what I have heard of RNN-generated music is from Bob Sturm. His “Lisl’s Stis” is quite good, though it contains only the melody. In 2016 Manuel Araoz showed examples with harmony: Composed by Recurrent Neural Network. These are fairly tedious.
2016 Natural speech: Speech samples from DeepMind’s WaveNet are not far from the level of natural speech.
2017 Drone flight over a fixed course: NASA’s Jet Propulsion Laboratory in Pasadena, California, reported in November 2017 that world-class drone pilot Ken Loo won over an AI-controlled drone.

Thanks to Jakob Eg Larsen and Lars Kai Hansen for providing links.