



Google DeepMind AI Reveals Potential For Thousands Of New Materials

By Martin Coulter

LONDON (Reuters) - Google DeepMind has used artificial intelligence (AI) to predict the structure of more than 2 million new materials, a breakthrough it said could soon be used to improve real-world technologies.

In a research paper published on Wednesday in the science journal Nature, the Alphabet-owned AI firm said almost 400,000 of its hypothetical material designs could soon be produced in lab conditions.

Potential applications for the research include the production of better-performing batteries, solar panels and computer chips.

The discovery and synthesis of new materials can be a costly and time-consuming process. For example, it took around two decades of research before lithium-ion batteries – today used to power everything from phones and laptops to electric vehicles – were made commercially available.

"We're hoping that big improvements in experimentation, autonomous synthesis, and machine learning models will significantly shorten that 10 to 20-year timeline to something that's much more manageable," said Ekin Dogus Cubuk, a research scientist at DeepMind.

DeepMind's AI was trained on data from the Materials Project, an international research group founded at Lawrence Berkeley National Laboratory in 2011, whose database comprises existing research on around 50,000 already-known materials.

The company said it would now share its data with the research community, in the hopes of accelerating further breakthroughs in material discovery.

"Industry tends to be a little risk-averse when it comes to cost increases, and new materials typically take a bit of time before they become cost-effective," said Kristin Persson, director of the Materials Project.

"If we can shrink that even a bit more, it would be considered a real breakthrough."

Having used AI to predict the stability of these new materials, DeepMind said it would now turn its focus to predicting how easily they can be synthesised in the lab.
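
For readers curious what "stability" means in practice, materials researchers commonly judge a candidate compound by its energy above the convex hull of known competing phases: the closer to zero, the more likely it can exist. The short Python sketch below illustrates that calculation with the pymatgen library and entirely made-up energies; it is a generic illustration of the concept, not DeepMind's actual pipeline.

```python
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry

# Reference entries: (composition, total energy in eV). The numbers are
# fictitious and chosen only to make the example run.
entries = [
    PDEntry(Composition("Li"), -1.9),
    PDEntry(Composition("O2"), -9.8),
    PDEntry(Composition("Li2O"), -14.2),
]
pd = PhaseDiagram(entries)

# A hypothetical candidate material. An energy above the hull near 0 eV/atom
# would indicate the structure is predicted to be thermodynamically stable.
candidate = PDEntry(Composition("Li2O2"), -17.0)
e_hull = pd.get_e_above_hull(candidate)
print(f"Energy above hull: {e_hull:.3f} eV/atom")
```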

(Reporting by Martin Coulter; Editing by Jan Harvey)


The GAIA Benchmark: Next-gen AI Faces Off Against Real-world Challenges


A new artificial intelligence benchmark called GAIA aims to evaluate whether chatbots like ChatGPT can demonstrate human-like reasoning and competence on everyday tasks. 

Created by researchers from Meta, Hugging Face, AutoGPT and GenAI, the benchmark "proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency," the researchers wrote in a paper published on arXiv.

The researchers said GAIA questions are "conceptually simple for humans yet challenging for most advanced AIs." They tested the benchmark on human respondents and GPT-4, finding that humans scored 92 percent while GPT-4 with plugins scored only 15 percent.

"This notable performance disparity contrasts with the recent trend of LLMs [large language models] outperforming humans on tasks requiring professional skills in e.G. Law or chemistry," the paper states.


GAIA focuses on human-like competence, not expertise

Rather than focusing on tasks difficult for humans, the researchers suggest benchmarks should target tasks that demonstrate an AI system has similar robustness to the average human.

The GAIA methodology led the researchers to devise 466 real-world questions with unambiguous answers. Three hundred of the answers are held privately to power a public GAIA leaderboard, while 166 questions and answers were released as a development set.
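
As a rough illustration of how such a question set can be scored: because every GAIA question is designed to have a single unambiguous answer, accuracy reduces to a normalized exact-match comparison against the reference. The Python sketch below is a simplified stand-in for the official evaluation harness; the record layout and the answer_fn callable are assumptions made for the example.

```python
def normalize(text: str) -> str:
    """Lowercase, trim, and strip trailing periods for a forgiving comparison."""
    return text.strip().strip(".").lower()

def score(dev_set: list[dict], answer_fn) -> float:
    """Return exact-match accuracy of answer_fn over a list of
    {'question': ..., 'answer': ...} records."""
    correct = 0
    for record in dev_set:
        prediction = answer_fn(record["question"])
        correct += normalize(prediction) == normalize(record["answer"])
    return correct / len(dev_set)

# Illustrative usage with a made-up record resembling the paper's examples.
dev_set = [{"question": "Which city hosted the 2022 Eurovision Song Contest "
                        "according to the official website?",
            "answer": "Turin"}]
print(score(dev_set, lambda q: "Turin"))  # 1.0
```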

"Solving GAIA would represent a milestone in AI research," said lead author Grégoire Mialon of Meta AI. "We believe the successful resolution of GAIA would be an important milestone towards the next generation of AI systems."

The human vs. AI performance gap

So far, the leading GAIA score belongs to GPT-4 with manually selected plugins, at 30% accuracy. The benchmark creators said a system that solves GAIA could be considered an artificial general intelligence within a reasonable timeframe.

"Tasks that are difficult for humans are not necessarily difficult for recent systems," the paper states, critiquing the common practice of testing AIs on complex math, science and law exams. 

Instead, GAIA focuses on questions like, "Which city hosted the 2022 Eurovision Song Contest according to the official website?" and "How many images are there in the latest 2022 Lego Wikipedia article?"

"We posit that the advent of Artificial General Intelligence (AGI) hinges on a system's capability to exhibit similar robustness as the average human does on such questions," the researchers wrote.

GAIA could shape the future trajectory of AI 

The release of GAIA represents an exciting new direction for AI research that could have broad implications. By focusing on human-like competence at everyday tasks rather than specialized expertise, GAIA pushes the field beyond narrower AI benchmarks.

If future systems can demonstrate human-level common sense, adaptability and reasoning as measured by GAIA, that would suggest they have achieved artificial general intelligence (AGI) in a practical sense. This could accelerate the deployment of AI assistants, services and products.

However, the authors caution that today's chatbots still have a long way to go to solve GAIA. Their performance shows current limitations in reasoning, tool use and handling diverse real-world situations.

As researchers rise to the GAIA challenge, their results will reveal progress in making AI systems more capable, general and trustworthy. But benchmarks like GAIA also lead to reflection on how to shape AI that benefits humanity.

"We believe the successful resolution of GAIA would be an important milestone towards the next generation of AI systems," the researchers wrote. So in addition to driving technical advances, GAIA could help guide AI in a direction that emphasizes shared human values like empathy, creativity and ethical judgment.

You can view the GAIA benchmark leaderboard to see which next-generation LLM is currently performing best on this evaluation.



AI Simplified: Real-world Tactics For Transforming Your Financial Workflow Today

Are you ready to see what artificial intelligence can really do for your Credit Union or Community Bank, beyond the buzzwords and the hype?

Forget the jargon and the tech-speak; this episode is tailored for Credit Union and Community Bank professionals who want to harness the power of AI without getting bogged down by complexities.

We're cutting through the noise to focus on what AI really means for you, the backbone of our local financial institutions, and the communities you serve. It's time to uncover the practical implications of AI and explore how these breakthroughs can enhance your operational efficiency and service delivery.

You don't need a tech background to get the most out of this conversation. All you need is to tune in as we break down the opportunities that AI is bringing to your doorstep. Whether you're looking to streamline processes, enhance customer experience, or leverage data like a pro, this episode is your playbook for the digital age.




