
AI Doomsday Can Be Avoided If We Establish ‘World Government’: Nick Bostrom

Image: View of the UN on the day the Security Council officially nominated a candidate for the position of Secretary-General to replace outgoing SG Kofi Annan.

The progress of artificial intelligence and advanced automation will pose big questions for governments and global institutions. There are still policymakers who draft policies without taking the technology and its implications into consideration. In fact, noted Israeli author-historian Yuval Noah Harari says, “If politicians don’t want to talk about AI, don’t vote for them.”

Progress in technology and science will also change people’s incentives and capabilities. A paper published by futurist and researcher Nick Bostrom of the Future of Humanity Institute, University of Oxford, looks at the future of governance and how advanced technology can affect the world order and governance structures. Experts say that AI may cause a huge “split” in the world: between developed countries like the US, which will upskill their citizens for new-age jobs, and countries like India and Mexico, which will struggle with unemployment caused by automation and AI.

Nick Bostrom, the author of Superintelligence, has published new research in which he tries to work out what issues policymakers might face if the earth is a “vulnerable world”. The paper describes a vulnerable world as one in which there is some level of technological development at which civilization almost certainly gets devastated by default. Bostrom says, “The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or unipolar world order.”

The Vulnerable World Hypothesis

Bostrom compares inventing new technologies to pulling balls out of an urn. The balls can be white or black: white-ball technologies do good things for the world, while black-ball technologies devastate it. Artificial intelligence can currently be seen as a grey ball, which over time may turn out to be a white ball or a black one. The paper introduces the hypothesis that the urn of creativity contains at least one black ball, i.e. at least one civilisation-devastating technology. This is referred to as the vulnerable world hypothesis.
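
The urn metaphor is essentially a statement about repeated sampling, and a small simulation makes the intuition concrete. The sketch below is not from Bostrom’s paper: the ball proportions, function names and draw counts are illustrative assumptions. It only shows that if even a tiny fraction of the balls in the urn are black, the chance of eventually drawing one approaches certainty as the number of draws grows.

```python
import random

# Illustrative (assumed) composition of the "urn of creativity":
# most technologies are benign (white), some are ambiguous (grey),
# and a tiny fraction are civilization-devastating (black).
# These weights are NOT taken from the paper; they only make the
# metaphor concrete.
P_BLACK = 0.001
URN = {"white": 0.900, "grey": 0.099, "black": P_BLACK}


def exact_probability_of_black(draws: int) -> float:
    """Closed form: chance of drawing at least one black ball in `draws` draws."""
    return 1.0 - (1.0 - P_BLACK) ** draws


def simulated_probability_of_black(draws: int, trials: int = 2_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the same quantity, as a sanity check."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        balls = rng.choices(list(URN), weights=list(URN.values()), k=draws)
        hits += "black" in balls
    return hits / trials


if __name__ == "__main__":
    for n in (100, 500, 2_000):
        print(f"{n:>5} draws: exact={exact_probability_of_black(n):.3f}  "
              f"simulated={simulated_probability_of_black(n):.3f}")
```

With the assumed one-in-a-thousand share of black balls, roughly 10% of runs hit a black ball within 100 draws and about 86% do within 2,000 draws, which is the arithmetic behind the idea that continued technological development keeps civilisation sampling from the urn.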

The hypothesis essentially says that some technologies, once invented, could end the world as we know it, and hence some form of world governance and preventive policing needs to be in place. This kind of governance has never existed, and the move to have a global governing body would be “historically unprecedented”.

Bostrom summarises it as follows:

“VWH: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.”

The semi-anarchic default condition is characterised by three key features:

  1. Limited capacity for preventive policing.
  2. Limited capacity for global governance.
  3. Diverse motivations.

Kinds Of Vulnerabilities And Gaining Stability

Bostrom says we can identify four types of civilizational vulnerability:

  1. Type-1 vulnerability: There is some technology which is so destructive and so easy to use (like a cheaply and easily made nuclear weapon) that the actions of actors in the apocalyptic residual, the small fraction of people who would devastate civilization if given the means, make civilizational devastation extremely likely.
  2. Type-2a vulnerability: There is some level of technology at which powerful actors have the ability to produce civilization-devastating harms and, in the semi-anarchic default condition, face incentives to use that ability.
  3. Type-2b vulnerability: There is some level of technology at which, by default, a great many actors face incentives to take some slightly damaging action such that the combined effect of those actions is civilizational devastation.
  4. Type-0 vulnerability: There is some technology that carries a hidden risk such that the default outcome when it is discovered is inadvertent civilizational devastation.

Bostrom also proposes some solutions, theoretical in nature, for achieving stabilisation even if such vulnerabilities exist:

  1. Figure out ways to restrict technological development.
  2. Ensure there is not a large number of actors who have a wide and recognizably human distribution of motives.
  3. Figure out ways to establish extremely effective preventive policing.
  4. Figure out ways to establish effective global governance.


