The Science of Where Magazine meets Michael Littman and Peter Stone. Michael, a professor of computer science at Brown University, is the lead author of the 2021 AI100 report on artificial intelligence, “Gathering Strength, Gathering Storms”. Peter, founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, is chair of the AI100 Standing Committee.
Michael briefly explains to our readers the thesis of the artificial intelligence report that Stanford University publishes every five years: We were charged with answering a set of questions about how the field of artificial intelligence has progressed over the past five years. After taking our snapshot of where things are, we decided to call our report “Gathering Strength, Gathering Storms”. Essentially, AI technology is becoming more useful, powerful, and widespread, but we are starting to see ways in which it can negatively impact society.
The choice to publish the report every five years is interesting. Artificial intelligence is a complex phenomenon with medium- and long-term repercussions in all of our lives, and it must be analyzed in context and in a transdisciplinary way.
Michael says: I think the choice of producing a report every five years was a good one. The interval is long enough that real shifts are visible from one report to the next, while still providing a consistent through line so that the trajectory of the field is evident. When we completed the 2021 report, I reread the 2016 report and was surprised by how much had changed in the perspective of people in the field.
Peter points out: I agree with Michael that five years is a good cadence. The field of AI changes rapidly, and historically there have been several “up” and “down” cycles in public perceptions. It will be interesting to see how this effort evolves if and when the field goes through another “down” period (sometimes called an “AI winter”). One of my favorite parts of the 2021 report is the set of annotations on the 2016 report, in which Michael and his study panel comment directly on what has changed since the first report. Those are best read in the online format. As for the choice of producing some output every five years, that decision was made by the original Standing Committee as a way of establishing a regular cadence for a longitudinal study of the field, something meant to persist over a long period of time.
Some details about the history are in the preface of the 2021 report; see the part labeled “About AI100”: https://ai100.stanford.edu/2021-report/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence. Additional details are here: https://ai100.stanford.edu/history
Many people seem to have a great fear that artificial intelligence, and emerging technologies more generally, may one day replace human intelligence. Is that fear justified, or is it instead necessary, as we believe, for humanity to establish global rules of governance for the phenomenon without spreading “technological terrorism”?
Michael says: The report argues that replacing human intelligence is not a trivial task. We lack a foundational understanding of how human intelligence works, and existing attempts to create generally competent artificial agents have not produced anything remotely capable of navigating the complexity of the unconstrained real world. Indeed, within the field, replacing human intelligence isn’t really a goal. We are much more interested in finding ways to build machines that can work with us on the problems we identify as important. Of course, as with any other technology capable of large-scale influence, global rules of governance are essential. We were happy to see that countries all over the world recognize the importance of wisely cultivating the technology and are taking steps to invest in it, study it, and try to maximize its benefits while minimizing its risks.