
Can Morality Be Engineered In Artificial General Intelligence Systems?

Image source: https://linablade.deviantart.com/art/Starry-Night-shirt-181141991

 

In the not-too-distant future, when AI possesses true intelligence, AGI morality will become a critical issue. Given the curve of technological progress and futurist Ray Kurzweil’s singularity predictions, AGI appears within reach, and researchers are already working through questions of morality, in particular whether an AGI can be made to conform to our moral system, as discussed in a paper authored by Sophia’s maker Ben Goertzel.

This brings us to the question: can morality be engineered into an AGI system through design and code? Can moral rules be programmed into AGI systems? According to Goertzel, “AGI is a highly complex, highly dynamic self-organizing system”. As a result, such a system may end up deleting the moral rules that were programmed into it, or reinterpreting their terms.

Defining Machine Intelligence

Of late, academic research has focused on issues such as computational ethics, machine ethics (ME), machine morality, artificial morality and building friendly AI. By definition, machine morality extends the traditional engineering concern with safety to domains where the machines themselves can make moral decisions. According to Wallach & Allen (2009), when designers and engineers can no longer predict how a system will perform in new situations and on new inputs, mechanisms will be required that let the agent itself evaluate its options in functionally moral terms.

Researchers Shane Legg & Marcus Hutter proposed a formal definition of intelligence based on algorithmic information theory and the AIXI theory. They defined intelligence as “a measure of an agent’s ability to achieve goals in a wide range of environments.” The definition is non-anthropomorphic, meaning that it can be applied equally to humans and artificial agents. Clearly the researchers expect that a future super-human AGI will be able to react, respond and achieve goals across a far wider range of environments than humans can. It follows that the more intelligent an agent is, the more control it will have over the aspects of its environment that relate to its goals. This points to the central risk of intelligent systems: if their goals are not aligned with ours, there will likely come a point where their goals are achieved at the expense of ours.
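
For reference, the formal measure behind this definition weights an agent’s expected performance in each computable environment by the simplicity of that environment. A rough sketch of the formula (notation follows Legg and Hutter’s paper; treat the details as a paraphrase rather than an exact quotation):

```latex
% Sketch of Legg & Hutter's "universal intelligence" measure.
% \Upsilon(\pi): intelligence of agent \pi
% E: the class of computable environments considered
% K(\mu): Kolmogorov complexity of environment \mu (simpler environments get more weight)
% V^{\pi}_{\mu}: expected total reward agent \pi achieves in environment \mu
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```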

Some Of The Core Questions Addressed In AGI Research

Experts predict that a technological singularity caused by an AGI may lead to existential risks as well as risks of substantial suffering for the human race. Some clusters of problems appear in multiple research agendas and can be used as design guides for the development of AGI:

  • Value specification: How do we get an AGI to work towards the right goals?
  • Reliability: How can we make an agent that keeps pursuing the goals we have designed it with?
  • Corrigibility: If we get something wrong in the design or construction of an agent, will the agent cooperate in us trying to fix it?
  • Security: How do we design AGIs that are robust to adversaries and adversarial environments?
  • Safe learning: How do we ensure an AGI avoids making fatal mistakes during its learning phase?
  • Intelligibility: How can we build agents whose decisions we can understand?
  • Societal consequences: AGI will have substantial legal, economic, political, and military consequences. How should we manage the societal consequences?

Approaches To Engineering Artificial Morality

The report Engineering Moral Agents – from Human Morality to Artificial Morality discusses the challenges in engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide range of backgrounds, including philosophy. AGI-focused research is evolving towards the formalization of moral theories to serve as a basis for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland described a project on teaching formal ethics to computer-science students, in which the group built a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.
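
The report does not describe a concrete data format for such a benchmark, but a single entry might be structured along the following lines; the Python dataclass and all field names here are illustrative assumptions, not taken from the Saarland project:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MoralDilemma:
    """One illustrative benchmark entry for a moral-dilemma database.

    All field names are hypothetical; the project described in the report
    may structure its data very differently.
    """
    dilemma_id: str                 # e.g. "trolley-switch-01"
    description: str                # natural-language statement of the dilemma
    options: List[str]              # the actions an agent may choose between
    human_judgements: List[str] = field(default_factory=list)  # verdicts reported in the literature
    source: str = ""                # where the dilemma appears in the literature

# A machine-ethics system could then be scored by comparing its chosen option
# against the recorded human judgements for each entry.
example = MoralDilemma(
    dilemma_id="trolley-switch-01",
    description="A runaway trolley will hit five people unless diverted onto a side track with one person.",
    options=["divert the trolley", "do nothing"],
    human_judgements=["most respondents in the literature favour diverting"],
    source="classic trolley-problem literature (illustrative citation)",
)
```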

Another study, titled Towards Moral Autonomous Systems, from a group of European researchers, states that there is now a real need for a functional system of ethical reasoning, as AI systems that operate as part of our society are ready to be deployed. One of its suggestions is that every assisted-living AI system should have a “Why did you do that?” button which, when pressed, causes the robot to explain why it carried out its previous action.
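
The study describes the button only at a conceptual level. As a minimal sketch of the idea, assuming the system keeps a log of its actions together with the reasons it acted on, the mechanism could look roughly like this (class and method names are invented for illustration):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Decision:
    action: str               # what the robot did
    reasons: List[str]        # the considerations it acted on
    alternatives: List[str]   # options it considered but rejected

class ExplainableRobot:
    """Toy sketch of the 'Why did you do that?' idea: every action is recorded
    with the reasons behind it, so the most recent one can be explained on
    demand. Names and structure are illustrative assumptions."""

    def __init__(self) -> None:
        self._log: List[Decision] = []

    def act(self, action: str, reasons: List[str], alternatives: List[str]) -> None:
        # Record the decision as it is carried out.
        self._log.append(Decision(action, reasons, alternatives))

    def why_did_you_do_that(self) -> str:
        # The "button": explain the most recent action in plain language.
        if not self._log:
            return "I have not taken any actions yet."
        last = self._log[-1]
        return (f"I chose to {last.action} because {'; '.join(last.reasons)}. "
                f"I considered but rejected: {', '.join(last.alternatives) or 'nothing else'}.")

robot = ExplainableRobot()
robot.act("remind the patient to take their medication",
          reasons=["the scheduled dose was missed",
                   "the care plan prioritises medication adherence"],
          alternatives=["wait another hour", "alert a caregiver immediately"])
print(robot.why_did_you_do_that())
```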

Outlook

Researchers from the Swedish university Chalmers made a philosophical analysis of the brain-emulation argument for AI and concluded that AGI is a question squarely within the reach of research. They were less optimistic about the timing of AGI, but predicted that it will happen within this century. Surveys of AI researchers suggest human-level AGI could arrive around 2045; one such survey, taken among 21 attendees at the 2009 Artificial General Intelligence conference, found a median estimate of 2045 for superhuman AGI.

According to an industry expert, given how recent discussions on AI ethics are being politicized by companies for economic gain, two distinct issues, AI ethics and building a robust, human-aligned AGI system, are being conflated. American AI researcher Eliezer Yudkowsky believes that the technical task of building a minimal AGI system that is “well-aligned with its operators’ intentions” is vastly different from “AI ethics”.

 

 

The post Can Morality Be Engineered In Artificial General Intelligence Systems? appeared first on Analytics India Magazine.


Viewing all articles
Browse latest Browse all 21267

Trending Articles