Eliezer Yudkowsky
AI safety researcher, co-founder of MIRI, known for his work on friendly AI
Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American artificial intelligence researcher, decision theorist, and writer who has dedicated his career to studying the existential risks posed by advanced artificial intelligence. He is best known as a co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit research organization focused on AI safety and reducing existential risk. Yudkowsky has authored numerous influential essays and papers exploring how to ensure that artificial superintelligence aligns with human values. Before focusing on AI safety full-time, he was involved with the Alcor Life Extension Foundation and other transhumanist organizations. His work has shaped contemporary discussions of AI alignment, instrumental convergence, and the importance of solving the "friendly AI" problem before superintelligent systems are created. Yudkowsky's writings, including the Sequences on LessWrong (the community blog he founded) and his detailed explorations of decision theory, have influenced AI safety researchers worldwide and contributed to the growth of the field of AI alignment research.
Science & Technology
American
1979
About the name
Eliezer
Hebrew origin
A Hebrew name meaning 'my God has helped,' Eliezer appears prominently in biblical tradition as Abraham's faithful servant and as a son of Moses. The name carries both spiritual gravitas and practical warmth, and has been honored in Jewish tradition for centuries. It remains popular in contemporary Jewish communities and appeals to families seeking biblical authenticity.