Nick Bostrom, the philosopher of Silicon Valley and AI
Never as much as in the last year has it been so essential to understand philosophical and scientific thought on artificial intelligence.
In fact, with the advent of advanced computer systems, such as OpenAI's ChatGPT, the social repercussions of artificial intelligence have become a fundamental theme. There is talk not only of the possibility that artificial intelligence will supplant human intellectual work, thus reducing employment, but also of the ways AI can negatively affect society, both in terms of the control of social interactions and in terms of the humanistic evolution of society.
Of course, there are also views that go far beyond this realist framing and posit an outright computer takeover of humanity.
I recently came across a scholar who is much discussed, and controversial, but who is worth getting to know.
Nick Bostrom (born March 10, 1973) is a Swedish philosopher, known for his reflections on the so-called existential risk to humanity and on the anthropic principle.
He received a PhD from the London School of Economics in 2000 and is director of the Future of
Humanity Institute at the University of Oxford.
In addition to his studies and writings, both popular and academic, Bostrom has made frequent media appearances, dealing above all with issues pertinent to transhumanism and related topics, such as cloning, artificial intelligence, superintelligence, the possibility of transferring consciousness onto technological substrates, nanotechnology, and theses on simulated reality.
He is known as a proponent of the thesis that the probability that the human species lives inside a simulated reality is far from negligible.
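His 2003 paper "Are You Living in a Computer Simulation?" makes this claim quantitative. As a rough sketch of the reasoning, simplifying the notation of the paper, the expected fraction of observers with human-type experiences who live in simulations can be written as:

$$f_{\text{sim}} = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}$$

where $f_P$ is the fraction of human-level civilizations that survive to reach a "posthuman" stage and $\bar{N}$ is the average number of ancestor simulations such a civilization runs. If posthuman civilizations are not vanishingly rare and each runs many simulations, $f_{\text{sim}}$ approaches 1, which is why Bostrom treats the simulation hypothesis as probabilistically serious rather than as mere science fiction.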
Leaving aside for now this riskiest and most controversial thought, that of humanity being simulated by higher entities, let us turn to his work on artificial intelligence.
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that the creation of a superintelligence represents a possible path to the extinction of mankind. He argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion; in one of his scenarios, the resulting superintelligence might collaterally cause nanotechnology manufacturing facilities to sprout over the entire Earth's surface and cover it within days. He believes the existential risk posed by a superintelligence would be immediate once it is brought into being, which creates the exceedingly difficult problem of working out how to control such an entity before it actually exists.
In January 2015, Bostrom joined Stephen Hawking, among others, in signing the Future of Life Institute's open letter warning of the potential dangers of AI. The signatories "... believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today".
Cutting-edge AI researcher Demis Hassabis then met with Hawking, subsequent to which Hawking did not mention "anything inflammatory about AI", which Hassabis took as 'a win'. Along with Google, Microsoft, and various tech firms, Hassabis, Bostrom, Hawking, and others subscribed to 23 principles for the safe development of AI. Hassabis suggested that the main safety measure would be an agreement for whichever AI research team began to make strides toward an artificial general intelligence to halt its project for a complete solution to the control problem before proceeding. Bostrom has pointed out that even if the crucial advances required the resources of a state, such a halt by a leading project might well motivate a lagging country to launch a catch-up crash program, or even to physically destroy the project suspected of being on the verge of success.
He has suggested that technology policy aimed at reducing existential risk should seek to influence the order
in which various technological capabilities are attained, proposing the principle of differential technological
development. This principle states that we ought to retard the development of dangerous technologies,
particularly ones that raise the level of existential risk, and accelerate the development of beneficial
technologies, particularly those that protect against the existential risks posed by nature or by other
technologies.