
It's Time To Think About Generally Intelligent AI

Q&A

A conversation with Shane Legg, the co-founder of Google DeepMind, and Mira Lane, senior director of Technology & Society at Google, about the progress being made toward the goal of artificial general intelligence

By Nick Thompson • Portrait by Uli Knörzer

Mira Lane Where do we stand today in terms of AI development, and how close are we to realizing the possibility of AGI?

Shane Legg We've all seen that AI is progressing quickly. I think we could be living with artificial general intelligence in five years. I think the probability is even higher in 10 years. When I made the first AGI timeline predictions back in 2009, I predicted a 50 percent possibility of AGI by 2028. So, let's just take seriously, for a moment, the possibility that this might happen in the next 10 years. AGI affects all these different areas, and you can't be an expert in all these fields. The advent of AGI is actually something that will require deep expertise in all human endeavors. What we really need would be for all the different departments and all the different faculties in universities to be thinking about the arrival of AGI. What does medicine look like in a post-AGI world? What does accounting look like in the post-AGI world? What does education look like in the post-AGI world? What does research look like in a post-AGI world? What does economics look like in a post-AGI world?

Lane How are you defining AGI right now? I feel like that's a big open question.

Legg I define an AGI to be an artificial agent that can do the kinds of cognitive things that people can typically do. I see this as the natural minimum bar. For some high levels of AGI capability, see the paper "Levels of AGI," written by a group of us at Google DeepMind last year.

Lane What are the most significant open questions in AI research today, and what breakthroughs are still needed to propel the field forward?

Legg We are working on AI in different areas, and many people think that achieving AGI is probably a refinement and an improvement and working on the sorts of things we already know and combining some of the methods we already know in the right way. It's not guaranteed. Maybe there is really a big breakthrough that is required, which we don't know yet. But there probably doesn't need to be a breakthrough as big as transformers—which were developed at Google—to get to AGI. There will be some advances. There are all sorts of low-hanging fruit at the moment to make our models better, like advances in datasets, memory, planning, and reasoning. And as we work on all of them, we see progress in all the different areas. So we are confident that, at least for the next few years, we can make these models—that are already getting very good—much better. And then as we start making agents out of these models, those agents will generate data as they interact with different kinds of environments and try to achieve goals in those environments. We'll then train new foundation models on that data. The resulting models will then be much better for building agents. This process may get us to AGI in as soon as five years.


Lane How do you envision measuring the level of understanding in AI systems, and what frameworks inform your thinking on this complex issue?

Legg Yeah, it's hard. Not all aspects of intelligent behavior by AI agents are easy to measure. And if there are aspects that are hard to measure, you don't know how well you're doing on them, and if you don't know how well you're doing on them, you may not even realize that you need to do better on them or are already doing well on them.

The other problem is that even if you can measure something, there are just so many things that you can measure. Because if what you are measuring is an AGI, it has generality—it's not that it does a specific thing. If it was doing a very specific thing, you could measure it thoroughly on that aspect. But if it's very, very general, it can do everything from writing code to understanding 20 languages, to making music, to making pictures to poems to legal work, to all kinds of things. That's a lot of things to try to measure. So the measurement problem is very difficult and it's very important.

It's also difficult because it's not a glamorous thing to do. Think about it this way: The most glamorous Olympics event is the 100-meter sprint, right? But you're not going to have a 100-meter sprint event if somebody doesn't build the track and get the start guns and the photo finish equipment all set up. As you know, you're not going to have a good event [without those people], but the glory goes to the runners.

Lane Not the designers of the track.

Legg The design doesn't attract the same kind of attention, but if you don't have a good track and the photographs and all that, it's just not really going to go very well. You need a good track that has to be level and the right surface and all these sorts of things. It's been a problem with machine learning for a long time that, psychologically, people are drawn to building the agent or being state-of-the-art on the benchmark rather than building the benchmark itself.

Lane What governance models do you consider important to ensure this positive transformation? Where do you think public understanding needs to grow?

Legg I mean, they're all enormous questions. I think the biggest thing is how advanced public understanding has recently become around LLMs, in that it's not a technology that you just read about; you can get it on your mobile phone and you can talk and interact with it. And so you can at least start to get some grounding in terms of what this thing is. Members of the public are doing that en masse. Does it mean that people understand that powerful AGI is coming? Weirdly, I think many people do. And I actually think that, sometimes, lay people that have some technology interest have a better mental model of this than some experts who tend to be very skeptical and come at this with a lot of long-standing beliefs and biases. I've seen this throughout my career.

Lane How so?

Legg When we started DeepMind, everybody said it was ridiculous that we thought machine learning was going to be huge. People thought it was ridiculous that we were going to go and do this and that we were going to get Nature papers and be in academic journals or win awards. But no, AI keeps delivering the goods, and it keeps getting better and better.

Lane We were talking about how highly capable AI systems will transform every single industry and human endeavor. I use LLMs for so many things, and they are remarkably good. I have shared emails, documents, and text messages with a model and have had it help me examine different perspectives.

Legg I did something like that recently. I received a few messages and wanted to understand better what this person was trying to say. It seemed like they were hinting at something, so I put it in an LLM and I asked, "What is this person really trying to say?"

Lane Fascinating, isn't it?

Legg It's a whole new world, really.

Lane Looking ahead 50 years, what do you think will be the most profound ways in which AGI will have transformed society, and what are your biggest hopes and concerns for the future of AI? What would it look like if we got it right?

Legg Reductions in poverty and increased access to various kinds of resources and education. I think medicine could be advanced significantly, as well as scientific research. It could be good for the environment. We might have new types of clean energy sources or new types of materials and products. I see potential for improvements in every aspect of society.


Marc Benioff Says AI's Future Is All About Agents, Not Chatbots

Salesforce CEO Marc Benioff at Dreamforce on September 17, 2024 in San Francisco, California. - Photo: Justin Sullivan (Getty Images)


Despite the popularity of artificial intelligence-powered chatbots, one tech leader says the future of AI advancement is in agents that can work autonomously.

Salesforce chief executive Marc Benioff told the Wall Street Journal's "The Future of Everything" podcast that he thinks "we all got drunk on the ChatGPT Kool-Aid."

"It's a tool," Benioff said about AI, "and I hope that we're using it to improve humanity and making things better."

Unlike AI chatbots, which work alongside users, AI agents can complete complex tasks autonomously. With AI agents, users can delegate work to the tool, then check to see if it needs assistance or if it has finished, instead of repeatedly prompting it.
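
The difference is essentially one of control flow: a chatbot returns a reply each time it is prompted, while an agent keeps working toward a goal and only surfaces when it is blocked or finished. Below is a minimal sketch of that delegate-then-check pattern; the Task class, run_step function, and three-step job are illustrative assumptions, not any vendor's actual agent API.

# Minimal sketch of the delegate-and-check pattern described above.
# All names (Task, run_step, the "needs_human" status) are illustrative,
# not a real agent framework's API.
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    steps_done: list = field(default_factory=list)
    status: str = "running"          # "running", "needs_human", or "done"

def run_step(task: Task) -> None:
    """Advance the task by one autonomous step (stubbed out here)."""
    task.steps_done.append(f"worked on: {task.goal}")
    if len(task.steps_done) >= 3:    # pretend the job takes three steps
        task.status = "done"

# The user delegates once, then only checks in, instead of prompting repeatedly.
task = Task(goal="summarize Q3 support tickets")
while task.status == "running":
    run_step(task)                   # the agent works without further prompting

print(task.status, task.steps_done)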

In September, Salesforce launched its Agentforce suite of AI agents that can handle service, sales, marketing, and commerce tasks. Meanwhile, Microsoft released its purpose-built AI agents in Microsoft 365 Copilot earlier this month, which it said can work on simple or complex, multi-step tasks with, or on behalf of, a team or organization.

Still, Benioff was adamant about AI's current limitations. "There is a huge demand for AI products in the enterprise," he told the Wall Street Journal, "but this idea that Microsoft has hypnotized the industry, that this is the panacea, this is the Messiah of AI for the enterprise, is a false prophecy."

He added that he's talked to "a lot of people," including Microsoft chief executive Satya Nadella, about "this idea that these AI priests and priestesses are out there telling the world things about AI that are not true is a huge disservice to these enterprise customers who can, number one, increase their margins, increase their revenues, augment their employees, improve their customer relationships."

While AI can improve work, Benioff said "we're not there" — with "there" being the dystopian future in which AI surpasses the abilities of humans.

But Benioff said he thinks "we're hitting the upper limits of the LLMs [large language models]" that currently power AI chatbots and some AI agents, and that new models with other abilities may replace them in the future.

"We just have to be careful how we think about these things," Benioff said. "So we have to get back to reality."



AI Improvements Are Slowing Down. Companies Have A Plan To Break Through The Wall.

  • The rate of AI-model improvement appears to be slowing, but some tech leaders say there's no wall.
  • It's prompted a debate over how companies can overcome AI bottlenecks.
  • Business Insider spoke with 12 people at the forefront of the AI boom to find out the path forward.

    Silicon Valley leaders all in on the artificial-intelligence boom have a message for critics: Their technology has not hit a wall.

    A fierce debate over whether improvements in AI models have hit their limit has taken hold in recent weeks, forcing several CEOs to respond. OpenAI's boss, Sam Altman, was among the first to speak out, posting on X this month that "there is no wall."

    Dario Amodei, CEO of the rival firm Anthropic, and Jensen Huang, CEO of Nvidia, have also disputed reports that AI progress has slowed. Others, including Marc Andreessen, say AI models aren't getting noticeably better and are all converging to perform at roughly similar levels.

    This is a trillion-dollar question for the tech industry. If tried-and-tested AI-model training methods are providing diminishing returns, it could undermine the core reason for an unprecedented investment cycle that's funding new startups, products, and data centers — and even rekindling idled nuclear power plants.

    Business Insider spoke with 12 people at the forefront of the AI industry, including startup founders, investors, and current and former insiders at Google DeepMind and OpenAI, about the challenges and opportunities ahead in the quest for superintelligent AI.

    Together, they said that tapping into new types of data, building reasoning into systems, and creating smaller but more specialized models were some of the ways to keep the wheels of AI progress turning.

    The pre-training dilemma

    Researchers point to two key blocks that companies may encounter in an early phase of AI development, known as pre-training. The first is access to computing power. More specifically, this means getting hold of specialist chips called GPUs. It's a market dominated by the Santa Clara-based chip giant Nvidia, which has battled with supply constraints in the face of nonstop demand.

    "If you have $50 million to spend on GPUs, but you're on the bottom of Nvidia's list — we don't have enough kimchi to throw at this, and it will take time," said Henri Tilloy, a partner at the French venture-capital firm Singular.

    Jensen Huang's Nvidia became the world's most valuable company off the back of the AI boom. Justin Sullivan/Getty

    There's another supply problem, too: training data. AI companies have run into limits on the quantity of public data they can secure to feed into their large language models in pre-training.

    This phase involves training an LLM on a vast corpus of data, typically scraped from the internet and then processed by GPUs. That information is then broken down into "tokens," which form the fundamental units of data processed by a model.
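
As a rough illustration of that tokenization step, the toy example below splits a sentence into sub-word pieces using a tiny hand-written vocabulary and greedy longest-match lookup. Production systems instead learn vocabularies of tens of thousands of pieces (for example via byte-pair encoding), so treat this purely as a sketch of the idea.

# Toy illustration of breaking text into "tokens" before pre-training.
# The vocabulary is hand-written here; real models learn one from data.
vocab = ["the", "intern", "et", "is", "only", "so", "large", " "]

def tokenize(text: str) -> list[str]:
    """Greedy longest-match split into known pieces, single characters as fallback."""
    tokens, i = [], 0
    text = text.lower()
    while i < len(text):
        piece = next((v for v in sorted(vocab, key=len, reverse=True)
                      if text.startswith(v, i)), text[i])
        tokens.append(piece)
        i += len(piece)
    return tokens

print(tokenize("The internet is only so large"))
# -> ['the', ' ', 'intern', 'et', ' ', 'is', ' ', 'only', ' ', 'so', ' ', 'large']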

    While throwing more data and GPUs at a model has reliably produced smarter models year after year, companies have been exhausting the supply of publicly available data on the internet. The research firm Epoch AI predicted usable textual data could be squeezed dry by 2028.

    "The internet is only so large," Matthew Zeiler, the founder and CEO of Clarifai, told BI.

    Multimodal and private data

    Eric Landau, cofounder and CEO of the data startup Encord, said this was where other data sources would offer a path forward in the scramble to overcome the bottleneck in public data.

    One example is multimodal data, which involves feeding AI systems visual and audio sources of information, such as photos or podcast recordings. "That's one part of the picture," Landau said. "Just adding more modalities of data." AI labs have already started using multimodal data as a tool, but Landau said it remained "very underutilized."

    Sharon Zhou, cofounder and CEO of the LLM platform Lamini, sees another vastly untapped area: private data. Companies have been securing licensing agreements with publishers to gain access to their vast troves of information. OpenAI, for instance, has struck partnerships with organizations such as Vox Media and Stack Overflow, a Q&A platform for developers, to bring copyrighted data into their models.

    "We are not even close to using all of the private data in the world to supplement the data we need for pre-training," Zhou said. "From work with our enterprise and even startup customers, there's a lot more signal in that data that is very useful for these models to capture."

    A data quality problem

    A great deal of research effort is now focused on enhancing the quality of data that an LLM is trained on rather than just the quantity. Researchers could previously afford to be "pretty lazy about the data" in pre-training, Zhou said, by just chucking as much as possible at a model to see what stuck. "This isn't totally true anymore," she said.

    One solution that companies are exploring is synthetic data, an artificial form of data generated by AI.

    Daniele Panfilo, CEO of the startup Aindo AI, said synthetic data could be a "powerful tool to improve data quality" as it could "help researchers construct datasets that meet their exact information needs." This is particularly useful in a phase of AI development known as post-training, in which techniques such as fine-tuning can be used to give a pre-trained model a smaller dataset that has been carefully crafted with specific domain expertise, such as law or medicine.
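
To make the pre-training/post-training split concrete, here is a minimal fine-tuning sketch: a model that is assumed to already be pre-trained gets a few low-learning-rate passes over a small, curated "domain" dataset. The tiny network and random tensors are stand-ins for an LLM and real legal or medical text, not anyone's actual training setup.

# Minimal sketch of post-training: fine-tune an already pre-trained model
# on a small, carefully curated domain dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)
pretrained = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
# ...imagine these weights came from large-scale pre-training...

# A small "domain" dataset: a few hundred curated examples, not billions of tokens.
domain_x = torch.randn(256, 16)
domain_y = torch.randint(0, 4, (256,))

optimizer = torch.optim.AdamW(pretrained.parameters(), lr=1e-4)  # small LR: nudge, don't retrain
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                       # a handful of passes is typical for fine-tuning
    for i in range(0, len(domain_x), 32):
        batch_x, batch_y = domain_x[i:i + 32], domain_y[i:i + 32]
        loss = loss_fn(pretrained(batch_x), batch_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")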

    One former employee at Google DeepMind, the search giant's AI lab, told BI that "Gemini has shifted its strategy" from going bigger to being more efficient. "I think they've realized that it is actually very expensive to serve such large models, and it is better to specialize them for various tasks through better post-training," the former employee said.

    Google launched Gemini, formerly known as Bard, in 2023. Google

    In theory, synthetic data offers a useful way to hone a model's knowledge and make it smaller and more efficient. In practice, there's no full consensus on how effective synthetic data can be in making models smarter.

    "What we discovered this year with our synthetic data, called Cosmopedia, is that it can help for some things, but it's not the silver bullet that's going to solve our data problem," Thomas Wolf, cofounder and chief science officer at the open-source platform Hugging Face, told BI.

    Jonathan Frankle, the chief AI scientist at Databricks, said there was no "free lunch" when it came to synthetic data and emphasized the need for human oversight. "If you don't have any human insight, and you don't have any process of filtering and choosing which synthetic data is most relevant, then all the model is doing is reproducing its own behavior because that's what the model is intended to do," he said.

    Concerns about synthetic data came to a head after a paper published in July in the journal Nature said there was a risk of "model collapse" with "indiscriminate use" of synthetic data. The message was to tread carefully.

    Building a reasoning machine

    For some, simply focusing on the training portion won't cut it.

    Ilya Sutskever, the former OpenAI chief scientist and Safe Superintelligence cofounder, told Reuters this month that results from scaling models in pre-training had plateaued and that "everyone is looking for the next thing."

    That "next thing" looks to be reasoning. Industry attention has increasingly turned to an area of AI known as inference, which focuses on the ability of a trained model to respond to queries and information it may not have seen before with reasoning capabilities.

    At Microsoft's Ignite event this month, CEO Satya Nadella said that instead of seeing so-called AI scaling laws hit a wall, he was seeing the emergence of a new paradigm for "test-time compute," which is when a model has the ability to take longer to respond to more-complex prompts from users. Nadella pointed to a new "think harder" feature for Copilot — Microsoft's AI agent — that boosts test time to "solve even harder problems."

    Aymeric Zhuo, cofounder and CEO of the AI startup Agemo, said AI reasoning "has been an active area of research," particularly as "the industry faces a data wall." He told BI that improving reasoning required increasing test-time or inference-time compute.

    Typically, the longer a model takes to process a dataset, the more accurate the outcomes it generates. Right now, models are being queried in milliseconds. "It doesn't quite make sense," Sivesh Sukumar, an investor at the investment firm Balderton, told BI. "If you think about how the human brain works, even the smartest people take time to come up with solutions to problems."
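
One generic way of spending that extra inference time (offered here only as an illustration, not as any lab's actual method) is to sample several candidate answers and keep the one the samples agree on, often called self-consistency or best-of-N. The noisy_solver stub below stands in for a real model.

# Generic illustration of test-time compute: sample several candidate answers
# and keep the majority vote. More samples = more compute = more reliability.
import random
from collections import Counter

def noisy_solver(question: str) -> int:
    """Stand-in for one model sample: right 60% of the time, wrong otherwise."""
    return 42 if random.random() < 0.6 else random.randint(0, 100)

def answer(question: str, samples: int = 1) -> int:
    votes = Counter(noisy_solver(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

random.seed(0)
q = "toy question"
print("1 sample  :", answer(q, samples=1))    # fast, often wrong
print("25 samples:", answer(q, samples=25))   # slower, usually settles on 42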

    In September, OpenAI released a new model, o1, which tries to "think" about an issue before responding. One OpenAI employee, who asked not to be named, told BI that "reasoning from first principles" isn't the forte of LLMs as they work based on "a statistical probability of which words come next," but if we "want them to think and solve novel problem areas, they have to reason."

    Noam Brown, a researcher at OpenAI, thinks the impact of a model with greater reasoning capabilities can be extraordinary. "It turned out that having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer," he said during a talk at TED AI last month.

    Google and OpenAI didn't respond to requests for comment.

    The AI boom meets its tipping point

    These efforts give researchers reasons to remain hopeful, even if current signs point to a slower rate of performance leaps. As a separate former DeepMind employee who worked on Gemini told BI, people are constantly "trying to find all sorts of different kinds of improvements."

    That said, the industry may need to adjust to a slower pace of improvement.

    "I just think we went through this crazy period of the models getting better really fast, like, a year or two ago. It's never been like that before," the former DeepMind employee told BI. "I don't think the rate of improvement has been as fast this year, but I don't think that's like some slowdown."

    Lamini's Zhou echoed this point. Scaling laws — an observation that AI models improve with size, more data, and greater computing power — work on a logarithmic scale rather than a linear one, she said. In other words, think of AI advances as a curve rather than a straight upward line on a graph. That makes development far more expensive "than we'd expect for the next substantive step in this technology," Zhou said.
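
Zhou's point can be made concrete with a toy power-law curve relating compute to loss: each additional 10x of compute buys a smaller absolute improvement, which is why the next substantive step costs so much more. The constants below are invented purely for illustration.

# Toy power-law scaling curve: loss falls with compute, but with diminishing returns.
# The constants (3.0, 0.05) are made up for illustration only.
def loss(compute: float) -> float:
    return 3.0 * compute ** -0.05

previous = None
for compute in [1e21, 1e22, 1e23, 1e24]:      # each step is 10x more compute...
    current = loss(compute)
    drop = "" if previous is None else f"  (improvement {previous - current:.4f})"
    print(f"compute {compute:.0e} -> loss {current:.4f}{drop}")
    previous = current
# ...yet each extra 10x buys a smaller absolute improvement,
# so progress looks like a flattening curve rather than a straight line.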

    She added: "That's why I think our expectations are just not going to be met at the timeline we want, but also why we'll be more surprised by capabilities when they do appear."

    Amazon Web Services CEO Adam Selipsky with Anthropic CEO Dario Amodei during a 2023 conference. Noah Berger/Getty

    Companies will also need to consider how much more expensive it will be to create the next versions of their highly prized models. Anthropic's Amodei said a training run could one day cost $100 billion. These costs include GPUs, energy needs, and data processing.


    Whether investors and customers are willing to wait around longer for the superintelligence they've been promised remains to be seen. Issues with Microsoft's Copilot, for instance, are leading some customers to wonder whether the much-hyped tool is worth the money.

    For now, AI leaders maintain that there are plenty of levers to pull — including new data sources and a focus on inference — to ensure models continue improving. Investors and customers just might have to be prepared for improvements to arrive at a slower pace than the breakneck one set by OpenAI when it launched ChatGPT two years ago.

    Bigger problems lie ahead if they don't.





