Eliezer Yudkowsky

SIAI Co-Founder & Research Fellow

The Challenge of Friendly AI
56 minutes, 25.9 MB, recorded 2007-09-09

Will future Artificial Intelligence be friendly or hostile? If the objective is to develop ethical Artificial Intelligences, the most straightforward approach would be to program in a fixed set of ethical rules. Mr. Yudkowsky contends, however, that the better path is to create an artificial mind with an ethical sense of direction. Such a self-improving AI would have the capacity to learn and grow, and could continue along a moral trajectory similar to the one humans would follow over time, rather than remain limited by the fixed cultural values of its creators.


Eliezer Yudkowsky is a Co-Founder & Research Fellow of the Singularity Institute for Artificial Intelligence and one of the world's foremost researchers on Friendly AI and recursive self-improvement. He created the Friendly AI approach to AGI, which emphasizes the structure of an ethical optimization process and its supergoal, in contrast to the common trend of seeking the right fixed enumeration of ethical rules a moral agent should follow. In 2001, he published the first technical analysis of motivationally stable goal systems in his book-length Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. In 2002, he wrote "Levels of Organization in General Intelligence," a paper on the evolutionary psychology of human general intelligence, published in the edited volume Artificial General Intelligence (Springer, 2006).

Resources

This free podcast is from our Singularity Summit series.
