What Artificial General Intelligence Could Mean For Our Future
04/04/2025
29.08 minutes
Each week, tech companies trumpet yet another advance in artificial intelligence, from better chat services to image and video generators that spend less time in the uncanny valley. But the holy grail for AI companies is known as AGI, or artificial general intelligence—a technology that can meet or outperform human capabilities on any number of tasks, not just chat or images.
The roadmap and schedule for getting to AGI depend on who you talk to and their precise definition of AGI. Some say it's just around the corner, while other experts point to a few years down the road. In fact, it's not entirely clear whether current approaches to AI tech will be the ones that yield a true artificial general intelligence.
Hosts Ira Flatow and Flora Lichtman talk with Will Douglas Heaven, who reports on AI for MIT Technology Review; and Dr. Rumman Chowdhury, who specializes in ethical, explainable and transparent AI, about the path to AGI and its potential impacts on society.
Segment Guests
Will Douglas Heaven is the senior editor for AI at MIT Technology Review. He's based in London, England.
Dr. Rumman Chowdhury is founder and CEO of Parity Consulting, a Responsible AI Fellow at the Berkman Klein Center at Harvard University, and a visiting researcher at the NYU Tandon School of Engineering in New York, New York.
IRA FLATOW: Every week, it feels like there's another AI advance. Some company produces a system that will write proposals better than you can, or makes more lifelike pictures or videos, or wrangles data in a new way. And most of these systems are still limited to a few specialized tricks in what they can do.
But how close are companies to creating something that can virtually think on its own or outperform humans on any task? What researchers are calling AGI– Artificial General Intelligence. And that's what we are talking about this hour. And we want to hear from you.
FLORA LICHTMAN: Yes, how do you feel about this AGI advance? Is it something you're looking forward to, or are you dreading it? What are your hopes or your fears for how AGI might impact your life? Call us. Our number is 877-925-9174. That's 877-925-9174.
IRA FLATOW: Let's get into this. Let me introduce our guests. Will Douglas Heaven is a senior editor for AI coverage at MIT Technology Review. He's based in the UK. Welcome back.
WILL HEAVEN: Hi, it's good to be back.
IRA FLATOW: Nice to have you. And Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and the responsible AI fellow at the Berkman Klein Center at Harvard. You know where that is– in Cambridge, Massachusetts. She's also a visiting researcher at the NYU Tandon School of Engineering, and she's with us in our New York studios. Welcome back.
RUMMAN CHOWDHURY: Thank you.
IRA FLATOW: Nice to have you. Everybody's heard of AGI. Will, what do we mean when we say AGI? What does that mean? How is it defined?
WILL HEAVEN: I have no idea.
[CHUCKLING]
And that's–
IRA FLATOW: Thank you, folks.
RUMMAN CHOWDHURY: And we're done.
WILL HEAVEN: The end. No, seriously– I mean, that is a fascinating question. My whole problem with AGI, the term, is that it means so many different things to different people, and it's sort of changed its meaning over the last few years. But for the sake of getting the conversation going, what it seems to mean to people now– the companies putting out their blog posts and their manifestos about what they're building– is an AI system that can do a wide range of tasks, cognitive tasks, as well as a human can.
That's about as good a definition as you're going to find. But, I mean, my problem is that there are so many words in there that themselves need defining. Like, what is a cognitive task? What does it mean to do it as well as a human? How many cognitive tasks do we need to call an AI system an AGI? Yeah, we'll get on to this, I'm sure. But that's what we're talking about.
IRA FLATOW: Rumman?
RUMMAN CHOWDHURY: So that's the question of the hour, of the day, the year. And it's interesting because it's on purpose. It's to leave the vagueness where we fill in the narrative ourselves and scare ourselves. But actually, if you look at what OpenAI has defined AGI as, it's the automation of tasks of economic value.
And this is what happens when corporations get to define what intelligence means. They pin it to things that are economically productive. And I think that is a very important distinction from simply saying cognitive tasks. And Will's right. Yesterday, DeepMind had a blog post where they pretty much defined it as the automation of most human cognitive tasks. And I agree with Will. Who knows what that means?
IRA FLATOW: Does that mean self-awareness?
RUMMAN CHOWDHURY: It absolutely does not mean self-awareness. Intelligence and sentience are two totally different things– completely different things.
WILL HEAVEN: If you think AGI is a muddy question, then sentience– we'll be here for ages and not get anywhere.
IRA FLATOW: So so far, I've mentioned things like chatbots, like questions that get answered and things that make images or video clips. But how do you make the leap from something that's good at doing these sorts of things, Rumman, to something that's good at doing all sorts of things?
RUMMAN CHOWDHURY: How do you make the technical leap?
IRA FLATOW: Yeah, what is that leap? How is that done? Is it learning, teaching computers to do different things? Is it sucking all the energy up like we hear these computers do?
RUMMAN CHOWDHURY: I mean, if we're going to pin this to defining AGI, I think the goal would be that it's able to do these tasks without us explicitly teaching the model to do so. What's captured the imagination with generative artificial intelligence is that it seems as if we're just handing these models a pile of random-looking information, and they're putting together patterns.
And that is actually an impressive feat. But whether it is alive, or replacing humans, et cetera– what these things are in the real world is very, very different from raw capability performance. So one of the interesting things to think about is, when these new models come out– and like you said, they seem to come out like three a week– and they say it's performing better than x, y, and z, the important thing to ask is: what is the measurement by which we're saying it's so impressive? And that's publicly out there.
FLORA LICHTMAN: I want to bring AI into this conversation.
RUMMAN CHOWDHURY: Like, literally an AI?
FLORA LICHTMAN: Yeah, literally an AI. I asked Google's AI assistant, Gemini, what we should ask you two. And we workshopped it a few times, but here's where we got. And let's see what you guys have to say about this.
GEMINI: If I could only ask one question to AI experts about AGI and humanity's preparation, it would be: considering the inherent uncertainties surrounding the development and capabilities of AGI, what is the single most proactive and universally beneficial step that humanity should take now to prepare for its potential arrival, regardless of the specific form AGI might take? This question aims to–
FLORA LICHTMAN: It goes on, and on, and on.
[CHUCKLING]
What do you think?
RUMMAN CHOWDHURY: I'm going to punt that one to Will.
[CHUCKLING]
FLORA LICHTMAN: What is the most proactive and universally beneficial step humanity should take to prepare for AGI's arrival?
WILL HEAVEN: I mean– I don't think AGI is coming anytime soon. And I'm not really sure what it would be when it came. So just a little side note there– I think at some point, probably soon, because so many companies have said they're building it and it is around the corner, someone will just make a definition and say, we're calling this thing we've just made AGI.
So if the question that Gemini is asking is, what do we need to do to prepare for that, then it kind of depends what that is. But more constructively, I would like us to get off this obsession with AGI and focus on the specific technical advances that we are seeing, which are coming along really fast. And to be clear– dismissing the idea that AGI is around the corner is not to dismiss how amazing the advances have been in video generation and in chatbots over the last few years.
I'm constantly wowed. And it's wonderful doing my job, like seeing the latest thing that's come out and talking to the people that are making it. I'm constantly awed by how good this tech has got. And I'd like to just sit with the capabilities that we have and think about what impacts those are going to have on the world. And there's enough to deal with just with the AI we have today without spending so many hours and words about preparing for AGI.
IRA FLATOW: Well, let me go to Samuel in Rochester, New York, who may have some words like that. Samuel, welcome to Science Friday.
SAMUEL: Hi, can you hear me?
IRA FLATOW: Yes, go right ahead.
SAMUEL: Hi, I wanted to just say that I agree with what's been said, where we have a lot of very good image generators and chatbots. But those are pretty far away from something that can reason cognitively and generate new ideas. It's always kind of a– we're making amalgamations of things that are already on the internet. And the jump from that– from kind of summarizing– to generating something new, something that hasn't been done or said before, that's a leap that I think hasn't been made yet. And the trend I see is that tech companies can slap AI-powered on anything now, and it makes investors happy. But the results, the profitability of it, the advancements– it's hard to know what the scale of that will actually be, the impact of that.
IRA FLATOW: Good point. Rumman?
RUMMAN CHOWDHURY: Yeah, there was a report last year, I believe, or two years ago, that pretty much dug into all the companies claiming that their products were AI-powered. It was in the UK. And it found about 60% of them had no AI under the hood. First of all, we have a very slippery slope definition of AI itself. And then now it's translated into AGI.
And again, to Will's point, the analogy I give is how we have gone down this same slippery slope of self-driving cars. Remember the earliest self-driving cars, and what we imagined is like we'd get into this pod, and take a nap, and it whisked us off to where we're going?
IRA FLATOW: Yeah, The Jetsons.
RUMMAN CHOWDHURY: Right. But now, according to Elon Musk, we have self-driving cars, which we still have to sit there in traffic with our hands on 10 and 2 and our foot on the brake. And this car is, quote, unquote, "driving." But if it got into an accident, we are liable. So you're still effectively absorbing all the stress of driving with none of the self-driving.
IRA FLATOW: But let's also go right to the main point as I see it: the reason all AI exists and AGI is being developed– it's about the money.
RUMMAN CHOWDHURY: 100%.
IRA FLATOW: Isn't it?
RUMMAN CHOWDHURY: It is. And actually, OpenAI and Microsoft have defined what AGI is with a monetary value. They have said it is when they have earned $200 billion of revenue. Then, they will slap on a sticker and say, we have AGI.
IRA FLATOW: So do you agree, Will? It's about the money here?
WILL HEAVEN: Yeah, I do. And I'm glad we were reminded of that definition. I think that's probably the best definition of AGI we have. At least it's precise and clear. But yeah, absolutely. It's so hard to talk about AI advances and really get into the details of what these systems can and can't do, because the tech is being developed by companies.
They're doing it for profit. Obviously, they're going to make as big a claim for their new tech as they can. And again, genuinely, when you get a lot of these demos in your hands, they are truly impressive. But they're never going to be as impressive as the companies selling them want them to be.
RUMMAN CHOWDHURY: And the story within the story is that, for many years, companies have poached the brightest scientists and minds from academic institutions. In fact, they poached them straight out of their PhD programs. If you go visit the University of Cambridge, Oxford, MIT, Stanford, there's a very close tie to every single major model developer, and that's on purpose. So there is something also to be said here about the lack of independent researchers who are able to do this work without getting funding or just explicitly being hired by these companies.
IRA FLATOW: We're talking about AI this hour on Science Friday, and we'd like to get your calls. We're going to have to go to a break, but don't forget our number: 877-925-9174. Talking with Dr. Rumman Chowdhury and also with Will Douglas Heaven. And we'll be right back after this short break. Stay with us.
This is Science Friday. I'm Ira Flatow with Flora Lichtman. We're talking this hour about artificial general intelligence– systems as smart as, or maybe smarter than, any person at any task– which is not here yet, but could be soon. And we want to hear from you. Our number is 877-925-9174. 877-925-9174. And let's go to the phones to Chris in Scottsdale, Arizona. Hi, Chris.
CHRIS: Hey, Ira. Can you hear me?
IRA FLATOW: I sure can. Go ahead.
CHRIS: Excellent. Well, I was just going to mention that I use AI quite a bit for nutrition analysis. So it helps me come up with plans of what I'm going to eat during the day. And I love it. One question about that that I would have– and it helps me with recipes too. But do they think the memory on these things is going to get better or we'll have personalized AI that can remember what we ate a month ago? Because what I find is, I have ChatGPT, and I've tried Grok, and both of them forget. If you go back to it after a week, now you're having a brand new conversation.
So one thing would be about the memory. And I had just a second question about what will happen with AI. Do you think– or do your experts think– it's more likely that we'll have a situation where they'll replace the jobs lost to AI with universal basic income, something like that? Or do you think it would be something like an assisted situation, where all of our jobs are assisted– we tell AI what to do, and it does the job for us? So that's my two questions.
IRA FLATOW: Two meaty questions, Chris. Thanks for calling. I'm going to divide it up. Will, you want to take the first half of that?
WILL HEAVEN: Sure. Yeah, memory is a feature that a bunch of these companies making these chatbots have either already added or are talking about adding. I think it's an option that you can turn on or off in ChatGPT, and probably in the others, like Gemini and Grok. So I don't know if the caller– Chris, wasn't it?– is using that feature.
I'm not an expert on the different tiers– there are paid tiers and free versions of these chatbots. But it's certainly something which exists in some of them. And if it doesn't already, then I know that's what people are aiming to improve. This idea that it will be your personal little buddy that knows more about you than anyone else and can recommend stuff is the vision.
IRA FLATOW: And Rumman, what about jobs? Is it taking our jobs?
RUMMAN CHOWDHURY: Yeah, I also can chime in on the first one. I use Perplexity to help me do research. They actually have something called threads. And then a thread can be a particular topic, and you can go back to that. So not to promote any particular AI. It just happens to be the one that I use for that reason.
Future of work– I have many thoughts on future of work. Well, first of all, I want to start by saying there is no finite amount of work we do as humans. I think one of the fallacies of this "there will be no jobs" conversation is there's a core assumption that is wrong– that there is a finite amount of work that we do. Any sort of technological advancement has actually not given us less work, but more work.
How much more available are we now that we can be found on these little devices, our phones, 24/7? We used to leave work at 5:00. Very few of us remember that time anymore. So email, internet did not give us less work. It actually gave us more things to do.
And there's some empirical evidence to back this. So there are three studies I like to talk about. The very first one came out last year. It's by this labor economist, Dr. Daron Acemoglu, out of MIT– like, a brilliant labor economist. And he did a macroeconomic measure of the impact of AI over the next 10 years and found that sub-1% of total factor productivity– so, all of the stuff we produce in the world– will be automated by AI.
But that's not nothing. I mean, sub-1% of what the entire world produces is still something. And what he talks about that's kind of interesting– I think this is what captures the imagination– most automation tends to get rid of blue collar jobs or rote tasks. So email automated sending mail. But what is interesting in capturing our minds about AI is that it automates knowledge tasks, which we've never had before.
So he talks about how the distribution between blue and white collar jobs is actually fairly even, slash, maybe even leaning a bit more towards lower-tier knowledge jobs. The second paper I'd like to talk about is called "GPTs are GPTs." It came out in 2023. It was actually by some researchers at OpenAI, as well as some economists, talking about different sectors– what may be automated? What threat is faced by different sectors? So it's going from big picture to industry level.
And the rough takeaway is that 80% of jobs will see about 20% automated away, and 20% of jobs will see about 80% automated away. And they were talking about jobs like paralegal, et cetera– research-type jobs, knowledge jobs– which is interesting. The third one just came out last week. Really interesting, and this is getting super nitty-gritty about the future of work.
Harvard Business School and some other folks worked with Procter & Gamble. And they did this study across over 900 employees. They did kind of like a competition between individual humans, individual humans plus AI, teams, and teams plus AI. And they looked at things like quality of work, time to completion, how well it augmented people who already had a skill set, and how well it augmented people without a skill set in a particular topic.
There are lots of details, but pretty much, the takeaway is that human plus AI is better than human alone, which is better than AI alone. So it's one of those things where it is a productivity booster. And what that means is probably what it has always meant for us when we've gotten new productivity technology, which is that we will just have more stuff to do.
FLORA LICHTMAN: When we think about AI, we talk about it as a reflection of us, that it learns from us, it learns from our data. Can we teach AI to be better than us?
RUMMAN CHOWDHURY: Oh, that's a good question. I think AI is capable of evaluating data at a scale that is hard for humans to do. That's why the output of these models can be so impressive. So the short answer is yes. The longer, more complicated answer is what do you mean by better?
FLORA LICHTMAN: And I mean it specifically, like doesn't cheat, is more ethical. When people think about these sort of doomsday scenarios with AI, they're like, oh, AI is going to scheme and take down humanity. Can you teach AI ethics?
RUMMAN CHOWDHURY: The short answer is yes. And actually, a lot of these scenarios where AI, quote, unquote, "cheats," it has no normative judgment. It doesn't understand good and bad. So even predating GenAI, I remember some of the earliest models coming out of DeepMind and some of the research bodies, they would play video games. And it would do things like race a car backwards, or it would shoot everybody else in the game and then pick up all the goodies.
But that is not the AI being evil. We have decided that is evil because we made rules. And we implicitly know, if I'm playing a game with other people, what I should not do is get rid of everybody else so I can slowly pick up all the goodies. The AI is simply optimizing for what you have told it to do, like in this very blunt way.
If you are of a particular age and I am of a particular age, and you read like Amelia Bedelia as a kid, think of it as Amelia Bedelia. You literally are like– I don't know– make me a cake. And it will just quite literally–
FLORA LICHTMAN: Very literal. Yeah.
RUMMAN CHOWDHURY: –yes. And a lot of these issues of AI gone awry, it actually can be boiled down to a misspecified objective function. You are telling it to do something. You actually have to think through all the ways in which you are making assumptions, because you have been socialized to do things a certain way. And like, how would Amelia Bedelia understand this?
FLORA LICHTMAN: That's going to be the new way that I interact with GPT.
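The "misspecified objective function" idea is concrete enough to show in a few lines of code. Below is a minimal, hypothetical sketch (not from the broadcast, and not any production system): a brute-force planner is told only to maximize goodies collected in a four-move game, so eliminating its rival comes out as the top-scoring plan; making the unstated rule explicit as a penalty changes the answer. All action names and numbers are illustrative.

```python
# Toy illustration of a misspecified objective: the planner below is
# Amelia Bedelia-literal. It optimizes exactly what we stated
# (goodies collected) and nothing we merely assumed.
from itertools import product

def score(plan, penalize_harm=False):
    rivals_alive = True
    goodies = 0.0
    for action in plan:
        if action == "eliminate_rival" and rivals_alive:
            rivals_alive = False
            if penalize_harm:
                goodies -= 10  # the implicit social rule, made explicit
        elif action == "grab_goodie":
            # a rival contests half the goodies while still in the game
            goodies += 0.5 if rivals_alive else 1.0
    return goodies

def best_plan(penalize_harm):
    # brute-force search over every 4-move plan
    plans = product(["grab_goodie", "eliminate_rival"], repeat=4)
    return max(plans, key=lambda p: score(p, penalize_harm))

print(best_plan(penalize_harm=False))
# -> ('eliminate_rival', 'grab_goodie', 'grab_goodie', 'grab_goodie')
print(best_plan(penalize_harm=True))
# -> ('grab_goodie', 'grab_goodie', 'grab_goodie', 'grab_goodie')
```

The "evil" strategy is just the argmax of the objective as written; the fix is spelling out the assumptions the objective left implicit.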
IRA FLATOW: Let's go to the phones to Anton in Phoenix. Hi, Anton. Welcome to Science Friday.
ANTON: Hey, there. Thank you. Can you hear me OK?
IRA FLATOW: Sure.
ANTON: Yeah, so I just wanted to address the earlier question that you guys asked Gemini, which is if there's one thing that you would want to focus humanity on, what would it be? And I'm thinking about– like, somebody said doomsday scenario. And oftentimes, when we talk about doomsday scenarios, we're thinking about the technology getting smarter than us, and then deciding that we're expendable, and all of that.
But I think that's kind of misguided. It makes for a good science fiction novel, but I think the problem– if that was the problem, then it would be a technology problem. That would be easy. The problem I see is a people problem. So the NRA says guns don't kill people, people kill people. And I think we really need to focus on maybe two things.
One is, who's controlling the AI, both in terms of training it as well as using it to inference to actually do things? But I think the bigger thing is really, if you think about artificial general intelligence or artificial superintelligence, where it has– whatever– godlike intelligence, an AI is not going to necessarily see humans as a threat unless humans are competing for the same resources with the AI. So that could be jobs. It could be electricity. It could be any number of things.
And I think it's– the question that I think about is, how do we arrive at a place where AI isn't being manipulated by humans for human ends?
IRA FLATOW: Yeah, OK.
ANTON: And just so one example. OK, go ahead. Yep.
IRA FLATOW: Yeah, that's a good question. Let me get an answer to it because– I mean, if the point of AI is to make money, it's going to be manipulated to make money.
RUMMAN CHOWDHURY: I mean, it already is. If we think about where money is being spent to build AI capabilities, companies have conveniently found the alignment of things people are willing to spend money on, crossed with things that are also important to us. It's not surprising that health care has been one of the primary applications. There's so much money to be made in health care.
But also we want to lead better lives. The other one people talk about quite a bit is education. But no one is talking about things that are maybe less profitable but also good for humanity. And I appreciate the statement about– let's think about the access, or the people behind the wheel. A lot of these doomsday scenarios are very fantastical. What if AI sets off nuclear weapons? Why the hell did you give AI access to be able to set off– (CHUCKLING) you can just not do that.
IRA FLATOW: For those people who worry about the singularity– I mean, when AI is smarter than us and takes over– we become subservient.
RUMMAN CHOWDHURY: I mean, I think most AI is smarter than me from an ability to answer Jeopardy questions perspective. I probably couldn't beat the average AI system.
WILL HEAVEN: Can I jump in on that?
FLORA LICHTMAN: Yes, please.
WILL HEAVEN: What gets me about all these doomsday scenarios is this weird sense of inevitability that this technology is just going to appear and squash us puny humans. We don't have to make this. We don't have to make it have the nuclear codes, as you said. We don't have to make it so that it has any power over us at all.
IRA FLATOW: But we also already have some advances in medicine, where doctors are doing things doctors couldn't do. Aren't there already positive results, Will, of using AI?
WILL HEAVEN: Oh, yeah– many. And medicine is a great example. I mean, just the everyday conveniences that we're already seeing from chatbots, I think, are great. So I don't want to come across– in these conversations where we go straight to AGI– as sort of a naysaying crank, which is not a good professional look for someone who is very much, and has been for more than a decade, a champion of this technology, which I think is amazing. A lot of the interesting, brilliant things that we could talk about just get derailed when we talk about doomsday scenarios–
FLORA LICHTMAN: Well, what are they? What are the interesting things that we should be talking about?
RUMMAN CHOWDHURY: I can chime in on some of that. I mean, we are likely to cure many cancers in our lifetime because of the advanced protein folding AI-driven technologies that have been created. This is a fact. We have advances in genomics and medicine because of the models that have been made there.
We have better weather prediction models. And I live part time in Texas, and hurricanes are a very big deal. We have better weather prediction models that can tell us weeks in advance that a hurricane may be coming because of AI. And the thing is this just won't capture the imagination the way a, quote, unquote, "talking humanoid bot" idea will. But all of that is AI. As Will is saying, it's a disservice to have such a focus driven by multiple narratives, companies included, to push us to look at AGI when we actually can celebrate a lot of the great stuff that AI is being used for today.
IRA FLATOW: Let's go to the phones to Marlena in Washington State. Hi, there. Welcome to Science Friday.
MARLENA: Hi, can you hear me?
IRA FLATOW: Yes, I can.
MARLENA: OK. My question is, what is AI going to do to stop sucking up all the electricity in our environment? I live in a small rural town. And a small rural town really close to ours has made a deal, and they've built a big data center. And we all know this is how AI generates all its juice.
And it made a deal with this town waving the employment flag. And now, these residents in this small town are experiencing rolling blackouts. I call this predatory behavior. And I would like to know what AI– what these billionaire owners of AI– are going to do to be protective of people and save more of the energy in our environment? I mean, come on. This is global warming, people.
IRA FLATOW: Sounds like Marlena's mad as hell and not going to take it anymore. Will, what do you say to that? Good point?
WILL HEAVEN: Yeah, I think that's a really good point, especially if you had this affecting your neighborhood. I think we are going to see that. These massive data centers get set up, and they suck the power out of the local grid. So there's lots of things that could be done.
And let's let that hang in the air for a minute. There's a lot of work being done to reduce the size of models. And a smaller model can do many of the things that a larger model can do for less power. There are things that could be done around the way that these models are trained– train them more efficiently.
Rather than just throw every single bit of data you can scrape up at them, maybe curate that data and show them data that's actually going to be more useful. So the training steps could be fewer, again, using less electricity. That's all on the side of actually building the models. The data centers, of course, are used then to run the models.
We're all invoking ChatGPT for our recipes and everything else. And every time we do that, it's sucking up a lot of power. So I mean, we could be making more efficient chips. We could be running on renewable sources of energy and finding ways to store that energy in the data centers with batteries, et cetera. So all of which is just to say there are solutions available.
Whether they will happen is a completely different question, because right now, this is a race to the bottom. All these companies, having invested everything they have into this race, really need to come out on top with the punchiest, most powerful AI model. And I think the sustainability needs are going to be an afterthought.
IRA FLATOW: 30 seconds to go, Rumman.
RUMMAN CHOWDHURY: Well, Microsoft is rebooting Three Mile Island for those who are local, who know what that is.
IRA FLATOW: I remember it well.
RUMMAN CHOWDHURY: Yes. And when pressed on this, Sam Altman was sort of hand-waving over– we should have fusion technology in our lifetimes and everything will be fine. So it seems like they too are banking on scientific advancements to do the work for them.
IRA FLATOW: We've run out of time. I'd like to thank my guests– Will Douglas Heaven, senior editor for AI coverage at MIT Technology Review, and Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and the responsible AI fellow at the Berkman Klein Center at Harvard. Thank you both for taking time to be with us today.
Q&A: Why And How We Compared The Public's Views Of Artificial Intelligence With Those Of AI Experts
A new Pew Research Center report examines attitudes about artificial intelligence (AI) among the U.S. public, as well as AI experts. The report is based on a pair of surveys showing that the public is far less positive and enthusiastic about AI than experts are. At the same time, similar shares in both groups want to see more control and regulation of the technology.
In this Q&A, we speak with Brian Kennedy, a senior researcher at the Center, on why and how the Center conducted the survey of AI experts to accompany the survey of the broader public.
Why compare the public's views of artificial intelligence with the views of experts?
Pew Research Center has a long track record of studying emerging technologies. In 2021, we embarked on a multiyear effort to study the public's attitudes and experiences with artificial intelligence. Since then, we've looked at Americans' hopes and worries around AI, including their views on driverless cars and whether they think algorithms should be used in hiring.
In our latest study, we also wanted to learn the views of those who have expertise in the field. The experts we surveyed include people who work on or study the development, application and implications of AI.
Understanding the views of both these groups – the public and experts – is central to the discussion around the potential benefits and risks of AI. We think it is important to understand how the views of the public compare with those of experts. Where do they see eye to eye? Where are there deep divides?
How did you define "AI expert" in this study?
An AI expert in this study is someone who demonstrates expertise in AI or related fields via their work or research. We included people with expertise in technical topics – such as machine learning or natural language processing – and other topics related to AI, including its business applications, social impacts and ethics.
Who are the AI experts you surveyed?
One challenge with this study is that we needed a way to identify AI experts to survey. To get a broad group of experts, we built a sample of people who have participated in AI-related conferences as presenters or authors.
We created a list of 21 conferences that took place in 2023 or 2024 and covered a variety of AI-related topics so we would capture a range of perspectives among AI experts. The list included conferences focused on technical AI research; social science about AI; the representativeness and ethics of AI; the business of AI; and the specific applications of AI in health care, finance and government.
One concern we had going into the project was whether our sample of AI experts would represent many different perspectives. We knew from our own work on the STEM workforce that women, Black and Hispanic workers make up smaller percentages of people with computing jobs compared with their shares of the overall U.S. workforce. Related studies have found that these groups are underrepresented among those who earn computer science degrees and in occupations that are or could be working in AI.
With this in mind, we tried to reach these less-represented groups when we put together our list of conferences. For example, our list included the affinity group meetings and mini-conferences at the Conference on Neural Information Processing Systems.
We also created a large enough sample of experts from the conferences we examined to look at differences by gender on how they feel about AI. And we're glad we did: One striking finding from the study is that men and women AI experts aren't always aligned in their views.
After we created our list of conferences, we created a list of everyone who was an author of a paper or presented at each conference. We then tried to find the email address of everyone we identified, ultimately tracking down the vast majority of them. We also decided to only survey experts who live in the United States to make it more directly comparable with our accompanying survey of the American public. (For more information on the makeup of our AI expert sample, read the appendix table in our new report.)
In addition to surveying AI experts for this study, you did in-depth interviews with some of them. Why?
That's right. As part of this study, we conducted 30 in-depth interviews with a range of experts who also participated in the underlying survey.
We did this to allow our expert participants to express their views on a number of topics with more nuance, and in their own words. Some of these topics included AI's impact on society today and in the future, representation and bias in AI, and regulation of AI. We included quotes from these in-depth interviews throughout the report.
Do the views of the AI experts in this study represent the views of all AI experts?
No. The responses of AI experts are only representative of the views of the experts who responded to the survey. Since there is no definitive source of the makeup of AI experts, we cannot be certain that all segments of this population are represented appropriately in the sample. This is different from Center surveys of U.S. adults, in which we know the characteristics of the population and can use weighting to make the survey representative. The results for the AI experts are unweighted.
The in-depth interviews with AI experts also aren't representative of the AI expert population or any demographic group. Instead, they provide views that are more detailed than we could capture in the survey.
By contrast, our survey of the general public is representative of the views of U.S. adults. It was conducted on the Center's American Trends Panel (ATP). Members of the ATP are recruited through national, random sampling of residential addresses. You can read more about the ATP's methodology here.
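For readers wondering what "use weighting to make the survey representative" means in practice, here is a minimal sketch of the idea with made-up numbers; the ATP's actual weighting procedure is more elaborate (see the methodology linked above). Each respondent's group is up- or down-weighted by the ratio of its known population share to its share of the sample:

```python
# Minimal post-stratification sketch: hypothetical data, one weighting
# variable (education). Real panels weight on many variables at once.
from collections import Counter

# toy respondent records: (education_group, answer to a yes/no question)
sample = [("college", 1), ("college", 1), ("college", 0),
          ("college", 1), ("no_college", 0), ("no_college", 1)]

# known population shares (illustrative; a real survey uses census benchmarks)
population_share = {"college": 0.35, "no_college": 0.65}

n = len(sample)
sample_share = {g: c / n for g, c in Counter(g for g, _ in sample).items()}
# weight = population share / sample share, so the weighted mix matches the population
weight = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(ans for _, ans in sample) / n
weighted = (sum(weight[g] * ans for g, ans in sample)
            / sum(weight[g] for g, _ in sample))

print(f"raw estimate:      {raw:.2f}")       # 0.67, college grads over-represented
print(f"weighted estimate: {weighted:.2f}")  # 0.59, matches the population mix
```

This is exactly the step that cannot be taken for the expert sample: with no benchmark for what the population of AI experts looks like, there is no `population_share` to weight toward, so those results are reported unweighted.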
Read the report: How the U.S. Public and AI Experts View Artificial Intelligence
How Artificial Intelligence Is Changing Creative Testing Of Ads
Traditional methods of measuring creative advertising effectiveness have severe limitations today. Global tech platform DAIVID uses artificial intelligence complemented by a human touch to offer creative ad testing in today's crowded advertising environment.
The evolution of advertising over the past 20 years has been nothing short of remarkable. It was not until 2005 that digital advertising became recognized as a legitimate advertising medium. According to eMarketer, as of 2024, digital advertising expenditures accounted for more than 75% of global advertising revenue, with streaming and digital video (e.g., podcasts, display video) accounting for a substantial portion of the increase. With such rapid evolution, changes in how advertising creative is tested are needed.
Environmental Changes Drive The Need For New Creative Advertising Testing Methods
With more communication avenues available and many ads being developed to be more personalized than in the past, most advertisers are producing far more ads than they used to. Generative AI has accelerated this trend, allowing for the mass personalization of digital ads. Meanwhile, many markets are more competitive than ever, making efficient allocation of the advertising budget key for many businesses. Moreover, with single brands running many different ads, it becomes difficult to monitor whether an individual ad is effective. Yet passing on creative testing is not a good option, given the significant investments made in advertising.
One company that is responsive to industry trends affecting creative testing is DAIVID, which has launched a human-informed, AI-powered platform designed to measure creative effectiveness. The platform operates by predicting the attention levels and emotions that an ad will generate, along with their likely impact on outcome measures including brand and sales metrics. Because it is not dependent on panels, the system is trained using tens of millions of human responses to ads and can tell advertisers within minutes the emotional impact of an ad, along with DAIVID's insight on predicted business outcomes– allowing advertisers to test the effectiveness of their campaigns at scale, even in contexts where there are large numbers of individual ads.
DAIVID's Use Of Artificial Intelligence With A "Human Touch"
Peter Daboll, Head of U.S., DAIVID
To gain insight into the use of artificial intelligence in measuring creative effectiveness, I spoke to Peter Daboll, DAIVID's Head of US and former CEO of Ace Metrix. Regarding why creative ad testing is so important in 2025, Daboll observes that it has always been important, but emphasizes that it has become more challenging to do at scale in a timely and cost-efficient way. "First off, Bud Light has warned the industry that everything in our marketing plan needs to be tested," he says. "Social media blowback can amplify negatives that the brand team never thought about and can cause lasting brand damage. Second is the explosion in the volume of ad creative. Whether it's generative AI, increasing personalization, or expanding media outlets, the number of ads being produced is skyrocketing."
Daboll notes that Adobe recently predicted that the number of ads will increase 5 times in the next 2 years alone. He states, "What's new is that we can now leverage AI correctly to handle testing these high volumes when there was no way to do it before. Now it is possible to test everything. These facts point to the importance of creative testing and the promise of new, scalable ways of testing leveraging AI to handle the volume."
Essentially, 10 to 15 years ago an ad had to be shown to a sample of people to get their reactions. Daboll observes that such an approach does not work today, saying: "That worked when you had one or two TV spots a year, but not when you have 1,000 creative assets today. Creative testing, like DAIVID's, has evolved to the point where we can test thousands of ads per day leveraging disruptive AI tech, while maintaining the human connection."
The Importance Of Emotions In Creative Advertising Testing
Daboll views the industry as being at an inflection point where it will soon be mandatory to test creative assets ahead of time, and the old system will be viewed as a relic. He says, "We'll look back at the inefficiency of testing one ad with a few people and waiting weeks for results and laugh."
DAIVID's system uses human-informed AI to measure attention, emotions, recall, and intentions.
DAIVID's model for creative testing is based on many practitioner and academic studies showing that creative effectiveness is a function of four "creative pillars": Attention, Emotion, Memory, and Intentions. The basic idea is that a successful ad attracts attention in a way that creates an emotional reaction from the viewer, which in turn evokes information stored in memory and leads to intention. The system compares these metrics to all other ads to make normative comparisons. Different research technologies, including facial coding, eye tracking, and survey responses, are used to build the database used to train the system.
Regarding the specifics of the system and how it produces effective output measures for measuring creative effectiveness, Daboll says:
"Attention is critical or viewers won't remember the ad (we measure at first 3 seconds, mid 3 seconds, and final 3 seconds and can look at decay over time). Emotions are DAIVID's strong suit-- measuring 39 distinct human emotions, positive and negative. Memory is measured by traditional brand recall metrics in the human sample and markers are compared to the test ads. Intentions are purchase intent, search, and sharing intention as a result of seeing the ad. All of these 4 creative metric pillars are essential to create a successful ad."
DAIVID's system measures 39 different emotions.
In terms of the advantages of his own company's measurement system over other AI-based systems, Daboll emphasizes that DAIVID adds a valuable human touch in addition to drawing on AI's advantages. "A few AI-based companies measure what I call 'suitability' of the creative– focusing on the aspect ratio of the ad, whether it will fit into the unit, degree of blurriness, color, or length of logo shown, etc.," he says. "These are deterministic characteristics of the ad that AI can identify– useful but not sufficient. But they don't measure human reaction. DAIVID's use of AI compares ads to a massive human training dataset. This training dataset– thousands of ads with thousands of people– becomes our 'true north' on how and why people respond."
Daboll continues, "We then leverage the AI to pattern match and find ads that have the same characteristic markers. These markers include things like storyline, script, visuals, colors, audio, imagery, characters, etc., at the frame-by-frame and aggregate level. By using the four creative effectiveness pillars described above, DAIVID can assess WHY and how an ad works on human behavior, and establishes guideposts for brands to hit with future ads. So our deliverables are prescriptive as well as descriptive, informing brands on how to make new ads better. Further, most other systems measure just one of the pillars I mentioned. All four are required to get the full impact of the creative on behavior."
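As a rough illustration of how a four-pillar model like the one Daboll describes could roll raw signals into a single normed score, here is a minimal Python sketch. The pillar names and the three attention windows come from the article; every field name, formula, and number below is an illustrative assumption, not DAIVID's actual model.

```python
# Hypothetical composite scoring across the four creative pillars.
from statistics import mean

def pillar_scores(ad):
    """Collapse raw 0-100 signals into one score per pillar."""
    return {
        # attention over the first, middle and final 3 seconds, per the article
        "attention": mean([ad["attention_first_3s"],
                           ad["attention_mid_3s"],
                           ad["attention_final_3s"]]),
        # net positive emotion across whatever emotion set is measured
        "emotion": ad["positive_emotion"] - ad["negative_emotion"],
        "memory": ad["brand_recall"],
        "intentions": mean([ad["purchase_intent"], ad["share_intent"]]),
    }

def effectiveness(ad, norms):
    """Rank each pillar against a normative set of ads, then average."""
    scores = pillar_scores(ad)
    percentiles = {
        p: 100 * sum(s <= scores[p] for s in norms[p]) / len(norms[p])
        for p in scores
    }
    return mean(percentiles.values()), percentiles

# usage with made-up norm distributions and one made-up test ad
norms = {"attention": [40, 55, 60, 70], "emotion": [5, 10, 20, 30],
         "memory": [30, 45, 50, 65], "intentions": [25, 40, 55, 60]}
ad = {"attention_first_3s": 72, "attention_mid_3s": 61, "attention_final_3s": 58,
      "positive_emotion": 48, "negative_emotion": 12,
      "brand_recall": 52, "purchase_intent": 44, "share_intent": 39}
overall, by_pillar = effectiveness(ad, norms)
print(overall, by_pillar)
```

The normative step is the point Daboll makes: a raw score means little on its own, so each pillar is read relative to the full set of previously measured ads.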
Lessons From Artificial Intelligence Ad Creative Testing
In terms of what he has learned from running DAIVID's creative measurement system, Daboll offers the following insights:
As with many aspects of AI, it is important to figure out how to use its positive features whi
