As AI-related technologies penetrate ever more aspects of society, responsible integration and management of AI will be a societal challenge as well as a technical one. For example, if left unchecked, AI technologies can promote the spread of disinformation and group polarization, amplify societal biases, exacerbate wealth inequality, and pose the risk of automating decisions that require human judgment. Appropriate AI governance ensures that the societal benefits of AI outweigh these risks, while also ensuring that evolving legal and regulatory systems do not needlessly impede AI innovation. While several high-profile gatherings and committees have addressed the challenges posed by AI, they have tended to focus on high-level principles and ethical guidelines rather than tangible governance solutions and field-building efforts.
As a first step in this effort, this grant supported a virtual symposium, "Innovating AI Governance: Shaping the Agenda for a Responsible Future," which convened experts and leaders on December 4, 2020. The initiative, led by the Rockefeller Foundation, was carried out in partnership with the Schwartz Reisman Institute (SRI) for Technology and Society at the University of Toronto.
The CASBS team included CASBS Director Margaret Levi, CASBS fellow Jim Guszcza, Program Director Zachary Ugolnik, consultant Şerife Wong, and Research Assistant Jeff Sheng. As preparation materials for the event, CASBS produced three briefs: "AI Level Set," "Historical Antecedents for AI Governance," and "Trustworthiness in the Context of AI Technologies." In addition, we created a short video introducing these materials and updated the Fluxus Landscape website, which features current stakeholders in the field of AI, ethics, and governance. Following the event, the Rockefeller Foundation published the report "AI + Governance: Bold Action and Novel Approaches."
This project helped launch Toward a Theory of AI Practice, a continuation of CASBS’s collaboration with the Rockefeller Foundation.
For more information, please contact CASBS program director Zachary Ugolnik (email@example.com).
Fluxus Landscape: An Expansive View of AI Ethics and Governance is an art and research project by Şerife Wong, created in partnership with the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, with support from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Rockefeller Foundation. The project maps and categorizes about 500 AI ethics and governance stakeholders and actors. Its goals are both practical and artistic: to help the global community interested in AI ethics and governance discover new organizations, and to encourage a broader, more nuanced perspective on the AI ethics and governance landscape. You can read about the initial launch here.