
20 Real-World Applications Of Quantum Computing To Watch


Quantum computing has long been the domain of theoretical physics and academic labs, but it's starting to move from concept to experimentation in the real world. Industries from logistics and energy to AI and cybersecurity are beginning to explore how quantum capabilities could solve—or cause—complex problems that classical computers struggle with.

Early use cases suggest surprising applications for—and challenges from—quantum computing may arrive sooner than many people expect. Below, members of Forbes Technology Council detail some of the ways quantum may soon be making a real-world, widespread impact.

1. Communication Security

Quantum computing is poised to rapidly transform cybersecurity, likely altering information exchange sooner than organizations expect. It is critical for organizations to explore quantum communication technologies, such as quantum key distribution and quantum networks, to defend against threats and level the playing field by integrating quantum-era defense strategies into their security frameworks. - Mandy Andress, Elastic
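To give a flavor of how quantum key distribution works, here is a toy classical simulation of the sifting step of the BB84 protocol, in which the two parties keep only the bits where their randomly chosen measurement bases happen to match. This is a sketch for intuition only, not a secure implementation, and all names are illustrative:

```python
import random

def bb84_sift(n_bits: int, seed: int = 0) -> list[int]:
    """Toy BB84 sifting: Alice encodes random bits in randomly chosen
    bases; Bob measures in his own random bases; both keep only the
    bits where their bases agree (on average, about half)."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases = [rng.choice("+x") for _ in range(n_bits)]
    # When bases match, Bob's measurement reproduces Alice's bit;
    # mismatched-basis results are random and are discarded.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(32)
print(len(key), key[:8])
```

In the real protocol, the quantum no-cloning theorem means an eavesdropper measuring in the wrong basis disturbs the qubits, which the parties can detect by comparing a sample of the sifted key.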

2. Simulations For Autonomous Vehicle Testing

Accelerated road testing demands simulating millions of scenarios related to weather, traffic and terrain to train and validate autonomous systems. This involves optimization of scenarios to ensure maximum coverage, risk modeling and detecting anomalies in high-dimensional data obtained from LiDAR, radar and cameras. Quantum computing will be instrumental in performing these simulations much faster. - Ajay Parihar, Fluid Codes


3. Rapid Data Analysis

Quantum computing promises to revolutionize data analysis—for example, helping scientists simulate molecules and gene pools and rapidly unlock life-saving cures. However, the same power that accelerates progress also breaks existing data-protection techniques, putting global digital security at risk. It's a double-edged future: Quantum is miraculous for analyzing data, but it's also dangerous for protecting data—unless we prepare now. - Srinivas Shekar, Pantherun Technologies

4. Drug Discovery And Materials Design

One surprising area where quantum computing could help soon is drug discovery and designing new materials. Quantum computers can study molecules in ways normal computers can't. This can help scientists develop new medicines or better batteries faster. Big companies are already working on this, so real-world use may come sooner than people think. - Jay Krishnan, NAIB IT Consultancy Solutions WLL

5. Logistics Optimization

Logistics optimization represents an unexpected area of impact. Quantum computing shows promise for transforming complex routing problems that affect delivery networks and supply chains. The technology could optimize shipping and traffic routes in real time across the globe, which would reduce costs and emissions at a pace that's beyond current supercomputers. - Raju Dandigam, Navan

6. Telecom Network Optimization

Quantum computing could make a real-world impact sooner than expected in telecom network optimization. Quantum computing can revolutionize telecom networks by significantly enhancing their resilience and delivering richer user experiences. Additionally, by exploiting principles like superposition and entanglement, quantum natural language processing (QNLP) could address current NLP challenges, including nuanced understanding and bias. - Anil Pantangi, Capgemini America Inc.

7. Food Waste Reduction

World hunger is one unique challenge where quantum could have an immediate impact. Roughly one-third of all food produced is lost across the supply chain, from farm to table. Quantum algorithms could be applied to optimize the food supply chain, improving demand forecasting, logistics and resource allocation. They could determine the best delivery paths and help ensure far less food goes to waste. - Usman Javaid, Orange Business

8. Synthetic Biology Innovation

Entropy-based quantum computing using nanophotonics is optimized for solving very complex polynomial mathematics. This type of quantum computing can be performed at room temperature and could accelerate the development of low-energy protein configurations and synthetic amino acids. That, in turn, may give synthetic biology a boost in biochip and biosensor development. Products using biochips could elevate patient diagnostics, monitoring and drug delivery to a new level. - John Cho, Tria Federal

9. Smarter Energy Grids

Quantum computing will revolutionize energy systems by enabling real-time monitoring and modeling of electric grids. This will be critical as today's grids transition to match distributed sources of renewable energy, with growing demand from EVs, electric heating and data centers. I expect quantum will be a key technology to create smarter grids that deliver reliable, clean and affordable energy. - Steve Smith, National Grid Partners

10. Breaking Of Current Identity And Encryption Systems

Attackers are now harvesting internet data for the time when quantum computers are ready to break today's identity and encryption systems. CEOs and boards are asking, "What's our risk? How do we defend ourselves?" It's one reason why lifetimes for TLS certificates—the identity system for the internet—will drop to 47 days, as demanded by Google, Apple and Microsoft. - Kevin Bocek, Venafi, a CyberArk Company

11. AI Training

Quantum computing could soon transform large language model training by accelerating matrix operations and optimization, potentially breaking today's cost barrier. With skyrocketing demand for AI and breakthroughs like DeepSeek, quantum-accelerated AI may arrive faster than expected, as the extremely well-funded AI industry considers this its most urgent problem. - Reuven Aronashvili, CYE

12. Smarter Water Systems

Municipal and industrial water systems lose an estimated 20% to 30% of the water they pump through undetected leaks, pressure miscalibration and energy-hungry pumps. Finding the optimal combination of where to place sensors, how to set valve pressures and when to run pumps is a classic combinatorial-optimization headache; the search space explodes as a network expands. It's a perfect use case for quantum. - Jon Latshaw, Advizex
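The combinatorial blow-up can be made concrete with a toy sensor-placement problem (the network, site names, and coverage sets below are all hypothetical). A classical brute force must enumerate every subset of candidate sites; quantum optimizers aim to search such spaces without exhaustive enumeration:

```python
from itertools import combinations

# Hypothetical network: coverage[s] = pipe segments sensor site s can monitor.
coverage = {
    "A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}, "E": {2, 5},
}

def best_placement(budget: int) -> tuple[set[str], int]:
    """Brute-force sensor placement: choose `budget` sites maximizing the
    number of covered segments. The search space grows as C(n, k), which
    is the combinatorial blow-up quantum annealers aim to tame."""
    best_sites, best_cov = set(), -1
    for sites in combinations(coverage, budget):
        covered = set().union(*(coverage[s] for s in sites))
        if len(covered) > best_cov:
            best_sites, best_cov = set(sites), len(covered)
    return best_sites, best_cov

sites, n_covered = best_placement(2)
print(sites, n_covered)  # two sites that together cover the most segments
```

With 5 candidate sites and a budget of 2 there are only 10 subsets, but a city-scale network with thousands of candidate sites makes this enumeration infeasible, which is why such problems are often recast as QUBO instances for quantum annealers.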

13. Generation Of Specialized AI Training Data

Quantum computers could impact AI by generating high-fidelity training data for domains like pharmaceuticals, chemistry and materials design, where real-world training data is scarce. They can accurately simulate the complex molecular structures needed for training generative AI algorithms. The synergy of quantum computing and AI is poised to be more transformative than either technology alone. - Stephanie Simmons, Photonic Inc.

14. Cybersecurity Threat Detection

Most of us focus on the risks of quantum in relation to breaking public key cryptography. Quantum will also have a positive impact by preventing and detecting attacks early through its ability to solve complex problems related to pattern recognition and anomaly detection (especially in complex ecosystems). As cybersecurity becomes a priority, investments in quantum are expected sooner rather than later. - Chris Dimitriadis, ISACA

15. Quantum-Enhanced Retirement Plans

By combining AI with quantum computing, we could see quantum-enhanced 401(k) plans that deliver hyper-personalized portfolios. These plans would offer real-time rebalancing based on quantum simulations analyzing millions of combinations. The result is a shield against unexpected market turmoil, providing workers with consistent retirement plans that adapt throughout their careers. - Chris Willis, Domo

16. Financial Risk Modeling

Quantum algorithms enhance financial simulations—such as Monte Carlo methods, used for risk evaluation and scenario building—by reducing the number of qubits required and lowering associated costs. Key applications include improving efficiency, calculating value at risk and modeling market dynamics for traders. Managing these advancements will be essential to prevent unfair monopolization of data and to ensure equitable access to the benefits of quantum computing. - Jeff Schmidt, ECI
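As a classical baseline for the kind of simulation quantum amplitude estimation is expected to accelerate, here is a minimal Monte Carlo value-at-risk estimate. The return distribution and parameters are illustrative assumptions, not a trading model:

```python
import random

def monte_carlo_var(n_paths: int, mu: float, sigma: float,
                    confidence: float = 0.95, seed: int = 1) -> float:
    """Estimate one-period value at risk by simulating normally
    distributed portfolio returns and reading off the loss quantile."""
    rng = random.Random(seed)
    returns = sorted(rng.gauss(mu, sigma) for _ in range(n_paths))
    # VaR is the loss that is not exceeded at the given confidence level:
    idx = int((1 - confidence) * n_paths)
    return -returns[idx]

var95 = monte_carlo_var(100_000, mu=0.0005, sigma=0.02)
print(f"95% one-period VaR: {var95:.4f}")
```

Classical Monte Carlo error shrinks as 1/sqrt(N) in the number of simulated paths; quantum amplitude estimation promises a quadratic improvement, which is where the cost and qubit-count reductions mentioned above would matter.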

17. Agricultural Supply Chain Modeling

One unexpected application of quantum is optimizing supply chains in agriculture. Based on my experience with AI in agri-tech, quantum computing could transform how we model weather, predict yields and optimize commodity logistics, performing much faster than traditional systems. This could bring real-world impact—sooner than most anticipate—in terms of food security and sustainability. - Suri Nuthalapati, Cloudera

18. Renewable Energy Innovation

Quantum computers have a significant advantage over classical computing in terms of simulating complex molecular interactions. This can lead to accelerated research in the area of sustainable and renewable energy development. This is especially critical given the proliferation of EVs and high-energy AI applications. - Arun Kumar, Material

19. Optimized Patient Care Strategies

Quantum computing could accelerate value-based care by solving optimization problems that current AI and cloud systems can only approximate. Even with today's technology, care plan design across thousands of patients requires extensive manual work. Quantum systems can evaluate all possible interventions and constraints in parallel, enabling faster, more precise and globally optimized care strategies. - David Snow, Jr., Cedar Gate Technologies

20. Greener Cloud Solutions

Quantum computing could significantly impact cloud solutions by transforming how providers optimize resource scheduling, load balancing and traffic routing. Today's cloud systems rely on classical algorithms that struggle with the complexity of real-time global workloads. Quantum algorithms could dramatically improve efficiency and data center energy use, ensuring greener cloud operations. - Rahul Bhatia, HCL Tech


A Deep Learning Alternative Can Help AI Agents Gameplay The Real World

A new machine learning approach that draws inspiration from the way the human brain seems to model and learn about the world has proven capable of mastering a number of simple video games with impressive efficiency.

The new system, called Axiom, offers an alternative to the artificial neural networks that are dominant in modern AI. Axiom, developed by a software company called Verses AI, is equipped with prior knowledge about the way objects physically interact with each other in the game world. It then uses an algorithm to model how it expects the game to act in response to input, which is updated based on what it observes—a process dubbed active inference.
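The predict-observe-update loop that active inference describes can be sketched in a few lines. This is an illustrative toy, not Verses' Axiom algorithm; the constant-velocity prior and learning rate are assumptions:

```python
# Minimal predict-observe-update loop in the spirit of active inference:
# the agent holds an internal model (prior: constant velocity), predicts
# the next observation, then nudges its model by the prediction error.

def run_agent(observations: list[float], lr: float = 0.5) -> list[float]:
    pos, vel = observations[0], 0.0
    errors = []
    for obs in observations[1:]:
        predicted = pos + vel      # predict from the internal model
        error = obs - predicted    # "surprise": the prediction error
        vel += lr * error          # update beliefs to reduce future error
        pos = obs
        errors.append(abs(error))
    return errors

# An object moving at constant speed: prediction error shrinks as the
# model's velocity estimate converges on the true dynamics.
errs = run_agent([0, 1, 2, 3, 4, 5, 6])
print(errs)  # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

The point of the sketch is the shape of the loop: rather than learning parameters from reward alone, the agent continually minimizes the mismatch between what its model predicts and what it observes.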

The approach draws inspiration from the free energy principle, a theory that seeks to explain intelligence using principles drawn from math, physics, and information theory as well as biology. The free energy principle was developed by Karl Friston, a renowned neuroscientist who is chief scientist at "cognitive computing" company Verses.

Friston told me over video from his home in London that the approach may be especially important for building AI agents. "They have to support the kind of cognition that we see in real brains," he said. "That requires a consideration, not just of the ability to learn stuff but actually to learn how you act in the world."

The conventional approach to learning to play games involves training neural networks through what is known as deep reinforcement learning, which involves experimenting and tweaking their parameters in response to either positive or negative feedback. The approach can produce superhuman game-playing algorithms, but it requires a great deal of experimentation to work. Axiom masters various simplified versions of popular video games called drive, bounce, hunt, and jump using far fewer examples and less computing power.

"The general goals of the approach and some of its key features track with what I see as the most important problems to focus on to get to AGI," says François Chollet, an AI researcher who developed ARC 3, a benchmark designed to test the capabilities of modern AI algorithms. Chollet is also exploring novel approaches to machine learning, and is using his benchmark to test models' abilities to learn how to solve unfamiliar problems rather than simply mimic previous examples.

"The work strikes me as very original, which is great," he says. "We need more people trying out new ideas away from the beaten path of large language models and reasoning language models."

Modern AI relies on artificial neural networks that are roughly inspired by the wiring of the brain but work in a fundamentally different way. Over the past decade and a bit, deep learning, an approach that uses neural networks, has enabled computers to do all sorts of impressive things including transcribe speech, recognize faces, and generate images. Most recently, of course, deep learning has led to the large language models that power garrulous and increasingly capable chatbots.

Axiom, in theory, promises a more efficient approach to building AI from scratch. It might be especially effective for creating agents that need to learn efficiently from experience, says Gabe René, the CEO of Verses. René says one finance company has begun experimenting with the company's technology as a way of modeling the market. "It is a new architecture for AI agents that can learn in real time and is more accurate, more efficient, and much smaller," René says. "They are literally designed like a digital brain."

Somewhat ironically, given that Axiom offers an alternative to modern AI and deep learning, the free energy principle was originally influenced by the work of British Canadian computer scientist Geoffrey Hinton, who was awarded both the Turing award and the Nobel Prize for his pioneering work on deep learning. Hinton was a colleague of Friston's at University College London for years.

For more on Friston and the free energy principle, I highly recommend this 2018 WIRED feature article. Friston's work also influenced an exciting new theory of consciousness, described in a book WIRED reviewed in 2021.


How RAFT Is Making AI Smarter, Faster, And More Accurate Than Ever


What if artificial intelligence could think beyond its training, pulling in fresh insights from the vast expanse of human knowledge? Imagine an AI model that doesn't just rely on static datasets but actively retrieves the latest medical research, legal precedents, or financial trends to inform its decisions. This is no longer a futuristic dream—it's the promise of Retrieval-Augmented Fine-Tuning (RAFT). By blending the precision of fine-tuning with the adaptability of retrieval systems, RAFT redefines how AI learns and evolves, making it a compelling option for industries where accuracy and context are non-negotiable. But with such potential comes a critical question: how does this hybrid approach actually work, and what makes it so effective?

In this exploration of RAFT, the IBM Technology team uncovers the mechanics behind this innovative technique and its ability to bridge the gap between static training data and the ever-changing real world. You'll discover how RAFT enables AI to handle complex, domain-specific challenges with unprecedented accuracy, from diagnosing rare medical conditions to navigating intricate legal frameworks. Along the way, we'll delve into its core components, practical applications, and the challenges that lie ahead. Whether you're curious about the future of machine learning or seeking innovative solutions for your field, RAFT offers a glimpse into a smarter, more adaptable AI. After all, what could be more powerful than an AI that learns not just from the past, but also from the present?

Overview of RAFT

TL;DR Key Takeaways:

  • Retrieval-Augmented Fine-Tuning (RAFT) combines retrieval systems and fine-tuning to integrate external knowledge into AI training, enhancing accuracy and contextual relevance.
  • RAFT dynamically retrieves up-to-date information during training, bridging the gap between static datasets and evolving real-world knowledge.
  • Key components of RAFT include retrieval systems, fine-tuning techniques, external knowledge integration, contextual reasoning, and domain-specific expertise.
  • RAFT is highly effective in specialized fields like medicine, law, and finance, as well as applications such as NLP, customer support, and scientific research.
  • While RAFT offers significant advantages, challenges include computational demands and ensuring the quality of retrieved information, with ongoing research aimed at improving efficiency and adaptability.

The Mechanism Behind RAFT

RAFT functions as a dynamic and adaptive training process, improving upon traditional fine-tuning by incorporating retrieval systems. These systems enable AI models to access and retrieve relevant external knowledge during training, rather than relying solely on static datasets. This dynamic retrieval ensures that the model remains aligned with the most current and accurate information available.

For example, consider training an AI model to address complex medical queries. With RAFT, the model can retrieve the latest medical research, guidelines, or case studies during its training phase. This ensures that the model's responses are not only accurate but also reflective of the most up-to-date knowledge in the field. By integrating external data sources, RAFT bridges the gap between static training data and the ever-evolving nature of real-world information.
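A minimal sketch of how such a training example might be assembled, with a toy keyword retriever standing in for a production vector search (the document contents, field names, and retriever are all hypothetical):

```python
# Hypothetical sketch of building one RAFT-style training example: the
# query is paired with retrieved documents, so fine-tuning teaches the
# model to ground its answer in retrieved context rather than relying
# only on what it memorized during pretraining.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever standing in for real vector search."""
    words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: -len(words & set(corpus[d].lower().split())))
    return scored[:k]

corpus = {
    "doc1": "aspirin inhibits platelet aggregation",
    "doc2": "statins lower cholesterol levels",
    "doc3": "aspirin can reduce fever and pain",
}

query = "how does aspirin affect platelet aggregation"
context = retrieve(query, corpus)
training_example = {
    "prompt": "\n".join(corpus[d] for d in context) + f"\n\nQ: {query}",
    "answer": "It inhibits platelet aggregation.",  # supervised target
}
print(context)  # ['doc1', 'doc3']
```

In published RAFT variants, the retrieved context deliberately mixes relevant ("oracle") documents with distractors, so the fine-tuned model also learns to ignore irrelevant passages at inference time.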

Core Components Driving RAFT

The effectiveness of RAFT lies in its integration of several critical components, each contributing to its ability to generate precise and context-aware outputs:

  • Retrieval Systems: These systems are designed to identify and extract relevant information from extensive datasets or databases, ensuring the model has access to the most pertinent knowledge.
  • Fine-Tuning Techniques: Fine-tuning adjusts the model's internal parameters based on the retrieved knowledge, enhancing its ability to produce accurate and contextually appropriate outputs.
  • External Knowledge Integration: By incorporating external data sources, RAFT ensures that models are not limited to static training datasets, allowing them to adapt to dynamic, real-world information.
  • Contextual Reasoning: RAFT improves the model's capacity to understand and process complex relationships within data, resulting in nuanced and precise outputs.
  • Domain-Specific Knowledge: This approach is particularly effective in specialized fields where accurate and context-aware information is essential for success.

Practical Applications of RAFT

The versatility of RAFT makes it applicable across a wide range of industries and use cases. In natural language processing (NLP), RAFT enhances tasks such as question answering, text summarization, and conversational AI. For instance, customer support chatbots equipped with RAFT can retrieve real-time product information, allowing them to provide more precise and contextually relevant responses to user queries.

In the realm of scientific research, RAFT can analyze vast datasets by retrieving relevant studies or data, helping researchers draw accurate and insightful conclusions. Similarly, in legal and regulatory fields, RAFT ensures that AI models remain updated with the latest laws, regulations, and guidelines, thereby improving compliance and decision-making accuracy. These applications highlight RAFT's ability to adapt to the specific needs of various domains, making it a valuable tool for tackling complex challenges.

Advantages and Potential of RAFT

RAFT offers a range of benefits that extend beyond traditional fine-tuning approaches. By integrating external knowledge retrieval, RAFT enables AI models to:

  • Handle Complex Queries: RAFT equips models to process intricate and multi-faceted queries that require deep contextual understanding.
  • Adapt to Evolving Information: By incorporating up-to-date knowledge during training, RAFT ensures that models remain relevant in dynamic environments.
  • Excel in Specialized Fields: RAFT is particularly effective in domains such as medicine, law, and finance, where static training data often falls short of capturing the complexity of real-world scenarios.
  • Produce Contextually Relevant Outputs: By retrieving and integrating external knowledge, RAFT ensures that the outputs generated are tailored to the specific context of a given query or task.

Challenges and Future Prospects

While RAFT offers significant advantages, it also presents certain challenges. The retrieval process can be computationally intensive, requiring robust infrastructure to manage and process large-scale data efficiently. Additionally, ensuring the quality and relevance of the retrieved information is critical to maintaining the accuracy and reliability of the model's outputs.

Looking ahead, ongoing research aims to optimize retrieval mechanisms and incorporate more diverse data sources into the RAFT framework. These advancements are expected to enhance the efficiency and adaptability of RAFT, allowing AI models to tackle increasingly complex tasks with greater precision. As the field of machine learning continues to evolve, RAFT's ability to integrate external knowledge and improve contextual reasoning will play a pivotal role in addressing the growing demands of AI applications.

Media Credit: IBM Technology




