Nick Bostrom

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. He holds a PhD from the London School of Economics. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University.
Nationality: Swedish
Profession: Philosopher
Date of Birth: 10 March 1973
Country: Sweden
Machine intelligence is the last invention that humanity will ever need to make.
When we are headed the wrong way, the last thing we need is progress.
Knowledge about limitations of your data collection process affects what inferences you can draw from the data.
The challenge presented by the prospect of superintelligence, and how we might best respond to it, is quite possibly the most important and most daunting challenge humanity has ever faced. And, whether we succeed or fail, it is probably the last challenge we will ever face.
We would want the solution to the safety problem before somebody figures out the solution to the AI problem.
Are you living in a computer simulation?
We should not be confident in our ability to keep a super-intelligent genie locked up in its bottle forever.
It’s unlikely that any of those natural hazards will do us in within the next 100 years if we’ve already survived 100,000. By contrast, we are introducing, through human activity, entirely new types of dangers by developing powerful new technologies. We have no record of surviving those.
The Internet is a big boon to academic research. Gone are the days spent in dusty library stacks digging for journal articles. Many articles are available free to the public in open-access journals or as preprints on the authors' websites.
The cognitive functioning of a human brain depends on a delicate orchestration of many factors, especially during the critical stages of embryo development, and it is much more likely that this self-organizing structure, to be enhanced, needs to be carefully balanced, tuned, and cultivated rather than simply flooded with some extraneous potion.
Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization - a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
There are some problems that technology can't solve.
Human nature is a work in progress.
Nanotechnology has been moving a little faster than I expected, virtual reality a little slower.