Existential risks and extreme opportunities
What are existential risks?
They are the risks that threaten the very survival of the human species, or that could dramatically curtail its potential. There are many, from asteroid impacts to engineered pandemics to artificial intelligence (AI), and almost all of them are understudied. AI risk is the least understood, but potentially the deadliest of all, as AIs could be extremely powerful agents with insufficiently safe motivations and goals. The problem is very difficult, both philosophically and programmatically. If these obstacles are overcome, however, humanity can look forward to a world of dramatic abundance in health, wealth, and happiness.
Philosophy of Science,
University of Oxford
Stuart Armstrong’s current research focuses on the design of safe Artificial Intelligence, formal decision theory (including “anthropic” probabilities), the long-term potential of intelligent life, and the limitations of prediction. In particular, he wants to establish which large risks are the most dangerous and most worth working on, and how accurate predictions in those areas are likely to be. He has a background in pure mathematics (parabolic geometries) and computational biochemistry (virtual screening).