
CASBS Panel Discussion Focuses on AI, Automation, Society

When it comes to artificial intelligence (AI), automation, robotics, and their implications for governance, the economy, jobs, warfare, and much more, there is no shortage of people – and probably bots – talking and writing about them. 

In 2017 there was a crescendo of anxiety, hype, anticipation, optimism, pessimism, and either embrace or fear of the knowns and unknowns. How do we begin to make sense of it all?

One good way was on display in November, as CASBS hosted a symposium on “AI, Automation, and Society,” featuring a multi-disciplinary panel of 2017-18 CASBS fellows – John Markoff, Arati Prabhakar, and The Venerable Tenzin Priyadarshi – who are among the world’s deepest thinkers on these issues.

All three fellows are supported by the Berggruen Institute, a key CASBS partner in undertaking inquiry that explores technological innovation and the human condition. The institute’s founder and namesake, Nicolas Berggruen, was in attendance.

The event was the first of a CASBS three-symposium series focused broadly on the consequences of technological advances for our society and values, as well as those of future generations. The co-sponsor for the first two symposia is Stanford’s Catalyst for Collaborative Solutions, a campus-wide interdisciplinary initiative based at the Stanford School of Engineering.

Engineering school dean Jennifer Widom, senior associate dean Laura Breyfogle, and professor and Catalyst director Mark Horowitz, as well as Stanford Graduate School of Business professor and Catalyst advisory board member Garth Saloner were on hand for the November panel discussion. CASBS director Margaret Levi served as moderator.

The public symposium series complements the selection of Levi to lead a university-wide effort to engage and coordinate with stakeholders, including those in government and industry, in areas involving AI, automation, and society.

And what better place than CASBS? The issues under consideration did not emerge in a vacuum; several intellectual titans who performed some of the most critical antecedent work and thinking had spent time at CASBS as fellows, laying a foundation for those to follow. Among them were W. Ross Ashby (CASBS fellow 1955-56), a cybernetics pioneer who wrote the landmark texts Introduction to Cybernetics and Design for a Brain; Claude Shannon (1957-58), acknowledged by many as the founder of information theory and a principal developer of digital circuit design theory; John Tukey (1957-58), a towering figure in early computer design and data analysis; George Dantzig (1978-79), an early innovator in linear programming algorithms; and John McCarthy (1979-80), a “founding father” who, among other things, actually coined the term “artificial intelligence” in 1956.


Throughout the symposium the panelists outlined themes and issues that offer windows into how they approach, frame, and conceptualize questions of AI, automation, and society. John Markoff, the author and Pulitzer Prize-winning technology journalist recently retired from the New York Times, discussed a “puzzle of pace”: the perceived consensus that machine learning is accelerating and transforming society is not matched by events on the ground.

“I’m repeatedly struck by how much stuff actually remains the same,” he said. “[Silicon] Valley has this religious belief in exponential change but the reality is very different. And if you go back and look at the history of AI, it’s mostly been about over-promise and under-deliver.”

Though Markoff observes significant change and dislocation, overall he doesn’t see “the kind of whirlwind of attack of technology that is going to make humans obsolescent.”

“There’s a lot of task automation; there’s not so much job automation,” he said.

In comparative terms, though, Markoff is astonished by the pace of China’s commitment to AI. Beijing, he noted, is looking more and more like Silicon Valley.

"You feel like you’re in the midst of an entrepreneurial frenzy. They have a national commitment to catching us in AI in three years, in being the dominant global force by 2030…you feel they’re completely serious.”

Arati Prabhakar, an applied engineer who has spent most of her career in Silicon Valley or in federal government service, drew upon her most recent experience as director of the Defense Advanced Research Projects Agency (DARPA) under President Obama. DARPA projects employing advances in neurotechnology-based machine mediation and augmentation intrigue her for what they reveal about “how we think of ourselves as humans and what authenticity is as we become more and more interconnected with our technologies.”

Prabhakar is acutely aware that none of the issues brought forth by AI and automation will be solved just by technologists or companies or even regulations. The path forward is an iterative, evolutionary process, and “how we end up using these powerful tools is going to be an expression of what our society values…”

Moreover, as she astutely observed, many of the evening’s topics – from potential problems emerging from AI deployment in warfare to anxieties about AI-driven job displacement and AI-amplified inequality, among others – do not necessarily raise questions that are solely technology-based. Rather, in many cases we are “layering” technology questions onto “fundamental social questions that we’re going to have to grapple with as a country.”

Tenzin Priyadarshi, the philosopher-ethicist and Buddhist monk who directs the Ethics Initiative at the MIT Media Lab as well as the MIT-wide Dalai Lama Center for Ethics and Transformative Values, used a series of illuminating examples to advocate approaching AI and automation through an ethics framework. He emphasized, however, not an “outdated” conception of ethics as a restraining mechanism, as one might expect. Rather, he views ethics as optimization – inserting ethics early, during the design process itself, to account for human biases and anticipate issues of more “distributed moral agency.” The aim, ultimately, is to devise processes and tools that make life better at both the civic and individual levels.

Being based at MIT, Priyadarshi is far from anti-technology; he declared himself a “big fan of [AI] deployment” at modest scales. In fact, he and his Media Lab colleagues prefer to call it “extended intelligence,” particularly in the realms of medicine and education at this early stage.

But we must “expand the bandwidth of critical thinking” to ask the right questions, he argued, precisely to avoid AI’s potential to exacerbate inequalities.

“AI is [the] digital divide on steroids,” he said.

That’s why, according to Priyadarshi, we have to examine AI and automation not just in the context of better efficiency, task management, and productivity, but also in terms of strengthening what humans are actually good at doing and what gives them contentment.

More fundamentally, and in light of recent strains on democratic institutions and dilemmas faced by social media companies, he argued for a new framework geared toward establishing a more moral economy.

Both concerns – shared by all the panelists – tee up upcoming CASBS activities. The questions, issues, and “limits of democratic practice” that technology is laying bare, as CASBS director and moderator Margaret Levi put it, will be explored in the third symposium in the 2017-18 CASBS series, taking place at the Center on April 24.

Levi also is spearheading a CASBS-based project on “Creating the Moral Political Economy of the Future.” The inaugural project workshop will take place in spring 2018.

We have to create a new kind of political-economic framework, she said, with new “leverage points” to counterbalance the current “problem of incentives” aligned almost wholly with the profit motive, especially for tech companies. What we have largely seen until now are shortcomings, if not failures, of policy interventions, regulations, and governance in general.

“That’s something we can actually do something about,” said Levi. “That’s a solvable problem. It just requires imagination and effort.”

Watch the entire panel discussion on the CASBS YouTube channel.

 
