Back in the 1990s I spent considerable computer time training and optimizing artificial neural networks. It was hot then. Around the year 2000, artificial neural networks became unfashionable, with Gaussian processes and support vector machines taking over. During the 2000s computers got faster, and some engineers turned to see what graphics processing units (GPUs) could do besides rendering graphics for computer games. GPUs are fast at matrix computations, which are central to artificial neural network computation. The 2004 paper by Oh and Jung, “GPU implementation of neural networks”, seems, according to Jürgen Schmidhuber, to be the first to describe the use of GPUs for neural network computation, but it was perhaps only when Dan Ciresan from Politehnica University of Timisoara began using GPUs that the interesting advances began: in Schmidhuber’s lab he trained a GPU-based deep neural network system for traffic sign classification and achieved superhuman performance in 2011.
Deep learning, i.e., computation with many-layered neural network systems, was already taking off then and is now broadly applied; the training of a system to play computer games (classic Atari 2600 games) is perhaps the most illustrative example of how flexible and powerful modern neural networks are. So in limited domains deep neural networks are presently taking large steps.
A question is whether this will continue and whether we will see artificial intelligence systems with more general superhuman capabilities. Nick Bostrom’s book ‘Superintelligence’ presupposes so and proceeds to discuss “what then”.
Bostrom’s book, written from the standpoint of an academic philosopher, can be regarded as an elaboration on Vernor Vinge’s classic “The coming technological singularity: how to survive in the post-human era” from 1993. It is generally thought that if or when artificial intelligence becomes near-human intelligent, the artificial intelligence system will be able to improve itself, and once improved it will be able to improve itself yet further, resulting in a quick escalation (Vinge’s ‘singularity’) with the artificial intelligence system becoming much more intelligent than humans (Bostrom’s ‘superintelligence’). Bostrom cites surveys among experts showing that the median estimate for human-level intelligence lies between the years 2040 and 2050; a share of experts even believe the singularity will arrive in the 2020s.
The book lacks solid empirical work on the singularity. The changes around the industrial revolution are discussed a bit, and the role of the horse in 20th-century society is mentioned: from widespread use in transport, its function for humans was taken over by human-constructed machines, and the horses were sent to the butcher. Horses in the developed world are now mostly used for entertainment purposes. There are various examples in history where a more ‘advanced’ society competed with an established, less developed one: Neanderthals versus modern humans, the age of colonization. It is possible that a superintelligence/human encounter will be quite different, though.
The book discusses a number of issues from a theoretical and philosophical point of view: ‘the control problem’, the ‘singleton’, equality, and strategies for loading values into a superintelligent entity. It is unclear to me whether a singleton is what we should aim for. In capitalism, a monopoly is not necessarily good for society, and market economies put up regulations against monopolies. Even with a superintelligent singleton, it appears to me that the system can run into problems when it tries to handle incompatible subgoals; e.g., an ordinary desktop computer, as a singleton, may have individual processes that require a resource which is unavailable because another process is using it.
Even if the singularity is avoided, numerous problems face us in the future: warbots as autonomous machines with killing capability, do-it-yourself kitchen-table bioterrorism, and generally intelligent programs and robots taking our jobs. Major problems with IT security already occur nowadays with nasty ransomware. The development of intelligent technologies may foster further inequality, where a winner-takes-all company reaps all the benefits.
Bostrom’s take-home message is that superintelligence is a serious issue that we do not know how to tackle, so please send more money to superintelligence researchers. It is worth alerting society to the issue. There is general awareness of some long-term issues in the evolution of society, such as demographics, future retirement benefits, natural resource depletion and climate change. It seems that development in information technology might be much more profound and require much more attention than, say, climate change. I found Bostrom’s book a bit academically verbose, but I think it has considerable merit as a coherent work setting up the issue for the major task we have at hand.