Superintelligence Control Problem
Quoting singinst.org (2005): Human neurons operate by sending electrochemical signals that propagate at a top speed of 150 meters per second along the fastest neurons. By comparison, the speed of light is 300,000,000 meters per second, two million times greater. Similarly, most human neurons can spike a maximum of 200 times per second; even this may overstate the information-processing capability of neurons, since most modern theories of neural information-processing call for information to be carried by the frequency of the spike train rather than individual signals. By comparison, speeds in modern computer chips are currently at around 2GHz - a ten millionfold difference - and still increasing exponentially. At the very least it should be physically possible to achieve a million-to-one speedup in thinking, at which rate a subjective year would pass in 31 physical seconds. At this rate the entire subjective timespan from Socrates in ancient Greece to modern-day humanity would pass in under twenty-two hours.
Humans also face an upper limit on the size of their brains. The current estimate is that the typical human brain contains something like a hundred billion neurons and a hundred trillion synapses. That's an enormous amount of sheer brute computational force by comparison with today's computers - although if we had to write programs that ran on 200Hz CPUs we'd also need massive parallelism to do anything in realtime. However, in the computing industry, benchmarks increase exponentially, typically with a doubling time of one to two years. The original Moore's Law says that the number of transistors in a given area of silicon doubles every eighteen months; today there is Moore's Law for chip speeds, Moore's Law for computer memory, Moore's Law for disk storage per dollar, Moore's Law for Internet connectivity, and a dozen other variants.
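For concreteness, here is a minimal sketch of the arithmetic behind the quoted comparisons. The constants come from the quote itself; the 2,400-year gap back to Socrates, the 365.25-day year, and the sample horizons for the Moore's Law doubling are illustrative assumptions, not figures from the source.

```python
# Arithmetic behind the quoted speed and scale comparisons.

SIGNAL_SPEED_NEURON = 150              # m/s, fastest human neurons (from the quote)
SPEED_OF_LIGHT = 300_000_000           # m/s (from the quote)
print(SPEED_OF_LIGHT / SIGNAL_SPEED_NEURON)      # 2,000,000 -> "two million times greater"

NEURON_SPIKE_RATE = 200                # Hz, max spikes per second (from the quote)
CHIP_CLOCK = 2_000_000_000             # Hz, ~2 GHz circa 2005 (from the quote)
print(CHIP_CLOCK / NEURON_SPIKE_RATE)            # 10,000,000 -> "ten millionfold difference"

# A million-to-one subjective speedup: how long does a subjective year take?
SPEEDUP = 1_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600            # ~31.6 million seconds (assumed calendar year)
print(SECONDS_PER_YEAR / SPEEDUP)                # ~31.6 physical seconds per subjective year

# Socrates to the present (~2,400 years, an assumed round figure) at that rate:
YEARS_SINCE_SOCRATES = 2400
physical_seconds = YEARS_SINCE_SOCRATES * SECONDS_PER_YEAR / SPEEDUP
print(physical_seconds / 3600)                   # ~21 hours -> "under twenty-two hours"

# Moore's Law framing from the second paragraph: a doubling every eighteen months.
DOUBLING_TIME_YEARS = 1.5
for years in (3, 6, 15):
    print(years, 2 ** (years / DOUBLING_TIME_YEARS))  # growth factor after `years` years: 4x, 16x, 1024x
```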
The question arises naturally: how do you control a superintelligence?
One proposed answer: prioritize the use of AIs that empower humans to communicate and decide together, so that no AI is positioned to take over the world.
(See also the paper "Superintelligence Cannot be Contained: Lessons from Computability Theory", Alfonseca et al., 2021.)