A 'Godfather Of AI' Calls For An Organization To Defend Humanity
I started thinking, what if, within a year, we bridge that gap, and then it's scaled up? What's going to happen?
What did you do once you realized this?
At the end of March, before the first [Future of Life Institute] letter [calling on AI labs to immediately pause giant AI experiments] came out, I reached out to Geoff [Hinton]. I tried to convince him to sign the letter. I was surprised to see that we had independently arrived at the same conclusion.
This reminds me of when Isaac Newton and Gottfried Leibniz independently discovered calculus at the same time. Was the moment ripe for multiple, independent discovery?
Don't forget, we had realized something that others had already discovered.
Also, Geoff argued that digital computing technologies have fundamental advantages over brains. In other words, even if we only figure out the principles that are sufficient to explain most of our intelligence and put that in machines, the machines would automatically be smarter than us because of technical things like the ability to read huge quantities of text and integrate that much faster than a human could—like tens of thousands or millions of times faster.
If we were to bridge that gap, we would have machines that were smarter than us. What would that mean in practice? Nobody knows. But you could easily imagine they would be better than us at things like programming, launching cyberattacks, or designing things that biologists or chemists currently design by hand.
I've been working for the past three years on machine learning for science, particularly applied to chemistry and biology. The goal was to help design better medicines and materials, respectively for fighting pandemics and climate change. But the same techniques could be used to design something lethal. That realization slowly accumulated, and I signed the letter.
Your reckoning drew a lot of attention, including a BBC article. How did you fare?
The media forced me to articulate all these thoughts. That was a good thing. More recently, in the past few months, I've been thinking more about what we should do in terms of policy. How do we mitigate the risks? I've also been thinking about countermeasures.
Some might say, "Oh, Yoshua is trying to scare." But I'm a positive person. I'm not a doomer like people may call me. There's a problem, and I'm thinking about solutions. I want to discuss them with others who may have something to bring. The research on improving AI's capabilities is racing ahead because there's now a lot—a lot—more money invested in this. It means mitigating the largest risks is urgent.
Reith Lectures 2021 - Living With Artificial Intelligence
Stuart Russell, Professor of Computer Science and founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, will be the 2021 BBC Reith Lecturer. Russell will deliver four lectures this autumn, which will explore the impact of AI on our lives and discuss how we can retain power over machines more powerful than ourselves.

The lectures will examine what Russell will argue is the most profound change in human history as the world becomes increasingly reliant on super-powerful AI. Examining the impact of AI on jobs, military conflict and human behaviour, Russell will argue that our current approach to AI is wrong and that if we continue down this path, we will have less and less control over AI at the same time as it has an increasing impact on our lives. How can we ensure machines do the right thing? The lectures will suggest a way forward based on a new model for AI, one based on machines that learn about and defer to human preferences.
The series of lectures will be held in four locations across the UK: Newcastle, Edinburgh, Manchester and London. They will be broadcast on Radio 4 and the World Service, as well as being available on BBC Sounds. Accompanying the lectures, Adam Rutherford and Hannah Fry will explore their themes in a complementary Radio 4 series.
The lectures will be chaired by presenter, journalist and author, Anita Anand.
Audiences can apply to attend the recordings via the BBC tickets website.
LECTURE 1, LONDON – THE BIGGEST EVENT IN HUMAN HISTORY
What is AI and should we fear it?

In the first lecture Stuart J. Russell reflects on the birth of AI, tracing our thinking about it back to Aristotle. He will outline the definition of AI, its successes and failures, and potential risks for the future. Why do we often fear the potential of AI? Referencing the representation of AI systems in film and popular culture, Russell will examine whether our fears are well founded. As previous Reith Lecturer Professor Stephen Hawking said in 2014, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Russell will ask how those risks arise and whether they can be avoided, allowing humanity and AI to coexist successfully.
LECTURE 2, MANCHESTER – AI IN WARFARE
From drones to robots, what should be the role of AI in military operations?

Weapons that locate, select, and engage human targets without human supervision are already available for use in warfare, so what role will AI play in the future of military conflict? Will AI reduce collateral damage and civilian casualties, or will autonomous weapons kill on a scale not seen since Hiroshima and Nagasaki? Will future wars be fought entirely by machines, or will one side surrender only when its real losses, military or civilian, become unacceptable? Stuart Russell will examine the motivation of major powers developing these types of weapons, the morality of creating algorithms that decide to kill humans, and possible ways forward for the international community as it struggles with these questions.
LECTURE 3, EDINBURGH – AI IN THE ECONOMY
What is the future of work?

In lecture three, Russell explores one of the most concerning issues around AI: the threat to jobs. How will the economy adapt as work is increasingly done by machines? Economists' forecasts range from rosy scenarios of human-AI teamwork to dystopic visions in which most people are excluded from the economy altogether. Russell will try to untangle these competing predictions and to pinpoint the comparative advantages that humans may retain over machines. Perhaps counterintuitively, he will suggest that greater investment in the humanities and the arts could lead to increased status and pay for professions based on interpersonal services.
LECTURE 4, NEWCASTLE – AI: A FUTURE FOR HUMANS?
A new way to think about AI systems and human-AI coexistence

In the fourth and final lecture, Russell returns to the question of human control over increasingly capable AI systems. He will argue for the abandonment of the current "standard model" of AI, proposing instead a new model based on three principles—chief among them the idea that machines should know that they don't know what humans' true objectives are. Echoes of the new model are already found in phenomena as diverse as menus, market research, and democracy. Machines designed according to the new model are, Russell suggests, deferential to humans, cautious and minimally invasive in their behaviour and, crucially, willing to be switched off. He will conclude by exploring further the consequences of success in AI for our future as a species.
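To see why uncertainty about human objectives can make a machine willing to be switched off, here is a minimal numerical sketch. The scenario and numbers are my own illustration, not code or figures from the lectures: a robot is unsure whether a proposed action helps or harms the human, and the human approves good actions but uses the off switch otherwise.

```python
# Toy "off-switch" calculation (illustrative assumptions, not Russell's figures).
# The robot is unsure whether its proposed action helps (+10) or harms (-10)
# the human. If it defers, the human approves good actions and switches the
# robot off (value 0) when the action would be harmful.

def expected_value_act(p_good, gain=10.0, loss=-10.0):
    """Robot acts unilaterally, ignoring the off switch."""
    return p_good * gain + (1.0 - p_good) * loss

def expected_value_defer(p_good, gain=10.0, off_value=0.0):
    """Robot proposes the action and lets the human decide."""
    return p_good * gain + (1.0 - p_good) * off_value

for p in (0.9, 0.6, 0.3):
    print(f"P(action is good) = {p:.1f}: "
          f"act = {expected_value_act(p):+5.1f}, "
          f"defer = {expected_value_defer(p):+5.1f}")

# Deferring is never worse, and it is strictly better whenever the robot might
# be wrong -- so uncertainty about the human's true objective gives the machine
# a positive incentive to remain correctable.
```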
About Stuart Russell

Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is a recipient of the IJCAI Computers and Thought Award and held the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.
Nine Things You Should Know About AI
1. AI is already a big part of your life
"Every time you use a credit card or debit card, there's an AI system deciding 'is this a real transaction or a fraudulent one?'," explains Prof. Russell. "Every time you ask Siri a question on your iPhone there's an AI system there that has to understand your speech and then understand the question and then figure out how to answer it."
2. AI could give us so much more
General-purpose AI could – theoretically – have universal access to all the knowledge and skills of the human race, meaning that a multitude of tasks could be carried out more effectively, at far less cost and on a far greater scale. This potentially means that we could raise the living standard of everyone on earth. Professor Russell estimates that this equates to a GDP of around ten times the current level – that's equivalent to a cash value of 14 quadrillion dollars.
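As a rough sanity check on the scale of that number, here is a back-of-the-envelope calculation. The inputs are my own assumptions (roughly $100 trillion per year for current world GDP and a 6% discount rate), not Russell's exact figures; the increase is valued as a perpetuity.

```python
# Back-of-the-envelope check of the "quadrillions" figure.
# Assumptions are illustrative: ~$100 trillion/year current world GDP,
# a roughly tenfold uplift from general-purpose AI, a 6% annual discount rate.

current_gdp = 100e12      # dollars per year (assumed)
uplift_factor = 10        # "around ten times the current level"
discount_rate = 0.06      # assumed

annual_increase = (uplift_factor - 1) * current_gdp
net_present_value = annual_increase / discount_rate   # value of a perpetuity

print(f"Annual GDP increase: ${annual_increase / 1e12:,.0f} trillion")
print(f"Net present value:   ${net_present_value / 1e15:,.1f} quadrillion")

# With these assumptions the present value of the increase comes out around
# $15 quadrillion -- the same order of magnitude as the figure quoted above.
```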
3. AI can harm us
There are already a number of negative consequences from the misuse of AI, including racial and gender bias, disinformation, deepfakes and cybercrime. However, Professor Russell says that even the normal way AI is programmed is potentially harmful.
The "standard model" for AI involves specifying a fixed objective, for which the AI system is supposed to find and execute the best solution. So, the problem for AI when it moves into the real world is that we can't specify objectives completely and correctly.
Having fixed but imperfect objectives could lead to an uncontrollable AI that stops at nothing to achieve its aim. Professor Russell gives a number of examples of this including a domestic robot programmed to look after children. In this scenario, the robot tries to feed the children but sees nothing in the fridge.
"And then… the robot sees the cat… Unfortunately, the robot lacks the understanding that the cat's sentimental value is far more important than its nutritional value. So, you can imagine what happens next!"