
Francesca Rossi Interview

Published: January 26, 2017
Author: Ariel Conn


The following is an interview with Francesca Rossi about the Beneficial AI 2017 conference and the Asilomar Principles that it produced. Rossi is a research scientist at the IBM T.J. Watson Research Center and a professor of computer science at the University of Padova, Italy, currently on leave.

Q: From your perspective, what were the highlights of the conference?

“In general, what was great about this conference was the very diverse mix of people. It’s important that this was done on a very multidisciplinary level, with experts from many different disciplines – not just AI people.
“But, of course, the days of the conference and workshop that I liked the most were the ones devoted to the research agenda – what people have been doing to make AI more ethical and to have an even more beneficial impact on the real world.
“I think that it’s very important that more and more people in AI and in general understand that these issues of beneficial AI, of ethical implications, and of moral agency are not just the subject of discussion, but are really the subject of concrete research efforts.
“And then, I think the first day of the conference – the one devoted to economics – was great. They had great talks and great panels. And that’s, of course, a very important issue that everybody is reflecting on.”

Q: And why did you choose to sign the AI Principles?

“I think it was a very interesting and very big step forward to put together all these principles. Then during the conference, people had the chance to say which ones they agreed or didn’t agree with. And we realized how much consensus there is among people in so many different disciplines, with so many different backgrounds, on a large set of principles.
“So signing at least the principles that basically everybody unanimously agreed on and giving support to that is very important in trying to guide the community. The principles were already there, but they were not – until now – explicitly written down. I think it is very good that people can read them.
“In Puerto Rico, two years ago, the discussion was not happening around these issues, and things were not as clear as they are now. We could not have made this kind of effort with the principles then, but now I think it’s the right time. And it’s a big step forward in the discussion.”

Q: And then, why do you think it’s important for AI researchers to weigh in on issues like these?

“Like I said, I think it’s important that AI research is at the center of the discussion. But I would not say that these are two contradictory things: discussing these issues as opposed to doing technical work. Because I think one of the main accomplishments of FLI is that it has made clear that these issues are also the subject of concrete technical research efforts. And before, this was not clear.
“With the FLI grant program, we realized that actually addressing those issues, and possibly moving toward resolving them, is also a matter of doing concrete, technical work.
“So, I think that we should make it clear to AI researchers that they can do research and publish scientific papers on these issues as well.”

Q: And then, going into some of the individual principles, were there a few that you were specifically interested in?

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
“I think it’s very important. And it also ties in with the general effort and commitment by IBM to work a lot on education and on re-skilling people to engage with the new technologies in the best way, so that they will be better able to take advantage of all the potential benefits of AI technology.
“That also ties in with the impact of AI on the job market and all the other things that are being discussed. These issues are very dear to IBM as well, in really helping people get the most benefit out of AI technology and all its applications.”

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“I personally am for building AI systems that augment human intelligence instead of replacing it. And I think that in that space of augmenting human intelligence there really is huge potential for AI to make the personal and professional lives of everybody much better. I don’t think there are upper limits on future AI capabilities in that respect.
“I think that, more and more, AI systems working together with humans will enhance our kind of intelligence – which is complementary to the kind of intelligence that machines have – and will help us make better decisions, live better, and solve problems that we don’t know how to solve right now. I don’t see any upper limit to that.”

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“That’s definitely a principle that I think is very important. People should really have the right to own their privacy, and companies like IBM, or any others that provide AI capabilities and systems, should protect the data of their clients.
“The quality and amount of data is essential for many AI systems to work well, especially in machine learning. But the developers and providers of AI capabilities should really take care of this data in the best way. This is fundamental in order to build trust between the users of AI systems and those who develop and deploy them, like IBM or any other company.
“It’s also very important that these companies don’t just assure people that they are taking care of their data, but that they are transparent about how it is used. Without this transparency and trust, people will resist giving their data, which would be detrimental to AI’s capabilities and to the help AI can offer in solving their health problems, or whatever the AI is designed to solve.”

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“The one closest to my heart. I definitely agree with this principle. AI systems should behave in a way that is aligned with human values.
“But actually, I would be even more general than what is written in this principle, because it applies not only to autonomous AI systems: I think it is also very important and essential for systems that work tightly with humans in the loop, where the human is the final decision maker. When you have human and machine working tightly together, you want this to be a real team. So you want the human to be really sure that the AI system works with values aligned with that person’s. It takes a lot of discussion to understand those values.
“For every job, for every task, we need to write down the values or the principles we want to focus on in that scenario. And then, again, there is scientific research that can be undertaken to actually understand how to go from these values that we all agree on to embedding them into the AI system that’s working with humans. So this principle – value alignment – is a very important principle and a big technical challenge as well.”

ARIEL: “I have a quick follow-up question on that one. I was talking to Toby Walsh earlier today, and we were talking about value alignment. And one of his comments was that the way we’ve been approaching the principles is that this is something we have to deal with in the future, and that value alignment is something we need to worry about as machines get more intelligent and we get closer to a superintelligent AI or something like that. But his comment was that it’s an issue now; that we’re having these issues today.”

FRANCESCA: “I agree completely that value alignment is something to be solved not because we think there will be some sort of superintelligence, but right now. With the machines and AI systems that work right now with doctors or other professionals, we want these machines to behave in a way that is aligned with what we would expect from a human. So I agree that value alignment is a very big challenge and should be solved as soon as possible.”

Q: Assuming that all goes well and we achieve the advanced beneficial AI that we’re hoping for, how do you envision that world? What does that look like to you?

“Well, first, I think that we don’t have to wait long to actually see a new world. I mean, the new world is already here, with AI systems becoming more and more pervasive in the professional and private lives of people.
“I don’t know if we are able to predict the far future, but I think in a very short time there will be a world where the human-machine relationship is tighter and tighter – and hopefully in the most beneficial way, so that the quality of people’s work and lives will be much higher, and people will be able to trust these machines at the correct level.
“And with the effort we are making to ensure these machines really get the best capabilities for interacting with humans and explaining what they do to humans… all this will really help in making this relationship better and better.”

Q: And was there anything else that you thought was important that you wanted to add, that wasn’t part of the questions?

“I think that at this stage of the conversation it is very important that everybody engages in the discussion and also in educational efforts. I don’t just mean graduate and undergraduate curricula, but also efforts aimed at selected target groups like business people, policymakers, and the media – telling them and educating them about what AI really is, what the state of the art is, what the current capabilities and limitations are, what the potential is, and what the concerns are. I think this will be very helpful in shaping the discussion and guiding it in the right way.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles


