Stuart Russell
Professor of Computer Science, UC Berkeley
November 30, 2018
Location: E2 Simularium
10:40 a.m. – 11:45 a.m. (includes Q&A and discussion)
Abstract:
I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that its technical aspects are solvable if we replace current definitions of AI with a version based on provable benefit to humans.