Promise, Progress, and Challenges in Open-Ended Machine Learning

Joel Lehman / OpenAI

Nov 23, 2020

Abstract: Researchers in open-ended machine learning draw inspiration from natural and human processes of innovation (such as biological evolution and science itself), and aim to uncover the engineering principles underlying boundlessly creative search algorithms, i.e., algorithms capable of continually producing useful and interesting innovations. The speculative promise of such open-ended search for machine learning includes the automated discovery of curricula for reinforcement learning, new neural architectures, and, most ambitiously, AGI itself. While this most ambitious potential is far from being realized, conceptual and algorithmic progress has been made. I will review progress in open-ended search and highlight open challenges.

Bio: Joel Lehman is a research scientist at OpenAI; previously he was a founding member of Uber AI Labs and an assistant professor at the IT University of Copenhagen. His research focuses on open-endedness, reinforcement learning, evolutionary computation, and AI safety. His PhD dissertation introduced novelty search, an influential method within evolutionary computation, which in turn inspired “Why Greatness Cannot Be Planned,” a popular science book co-written with Ken Stanley on what search algorithms imply for individual and societal objectives.