AI Examples, Applications & Use Cases




Five AI Trends To Expect In 2025: Beyond ChatGPT And Friends

Bipedal robots in a testing phase move containers during a mobile-manipulation demonstration at Amazon's "Delivering the Future" event at the company's BFI1 Fulfillment Center and Robotics Research and Development Hub in Sumner, Washington, on October 18, 2023. (Photo by Jason Redmond/AFP via Getty Images)

As I was composing this article, I wondered if it would be easier to write what NOT to expect in AI in 2025, rather than what to expect! For a technology that appears to be advancing in almost every area at the same time, how does one even pick five areas? To narrow the scope a bit and hopefully make it more interesting, I decided to select trends that are not directly about the growth of ChatGPT or its competitors. It is safe to say these will grow, and the companies behind them will do their best to position their applications as the solution to every possible problem.

So what else remains? Here are 5 trends, in no particular order:

AI Trend 1: Agents Everywhere

You may have heard the term Agentic AI. While AI was always about learning patterns, the stages AI has evolved through are (a) learning patterns from data, (b) generating new content based on those patterns, and (c) taking actions based on both. It is when all three of these come together that you have an AI agent: a piece of software capable of learning, generating actions, and executing them. Expect to see a lot more development in this area in 2025.
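The three stages can be sketched as a minimal loop. Everything here is an illustrative stand-in, not a real agent framework: the function names, the toy "pattern learning," and the allow-list guardrail are all invented for the example.

```python
# Toy sketch of the three stages of an AI agent:
# (a) learn patterns from data, (b) generate something new from those
# patterns, (c) take an action based on it.

def learn(history):
    """(a) 'Learn' a pattern: here, simply the most frequent past request."""
    return max(set(history), key=history.count)

def generate(pattern):
    """(b) Generate a candidate action from the learned pattern."""
    return f"draft_reply_for:{pattern}"

def execute(action, allowed):
    """(c) Execute the action, gated by a simple allow-list guardrail."""
    return action if action in allowed else "escalate_to_human"

history = ["refund", "refund", "shipping", "refund"]
action = generate(learn(history))
print(execute(action, allowed={"draft_reply_for:refund"}))  # draft_reply_for:refund
```

The point of the sketch is the composition: an agent is not any one of these functions but the pipeline that chains them, with a guardrail before anything is actually executed.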

AI Trend 2: Transformation Of The Education System

Much has been made of whether AI encourages cheating, replaces teachers, or otherwise fundamentally affects how students learn. While all of this is critical, another force is emerging that is as fundamental, if not more so. This year has seen increasing evidence of new graduates being unable to find jobs in an AI-driven skills and economic landscape. This raises the question of not just how students learn, but what they learn. Economic pressure from the job market will force graduates, and eventually the institutions that produce them, to face the new realities of what businesses want in workers. Students will need to adapt first, via any means available to upskill, and institutions will need to follow. I expect we will start seeing these changes in 2025.

AI Trend 3: AI In Science

Two science Nobel Prizes this year were for AI. This should be a wake-up call that AI in Science is here to stay. It is also worth noting that while much of the world's attention and imagination has been focused on Generative AI, billions in funding are pouring into AI use for scientific applications, with new announcements coming in daily for everything from space exploration to medical advances. It is also worth noting that, for all of this investment and progress, the data suggests that success in Phase II clinical trials for AI-discovered drugs is about the same as other drugs, with caveats that some of these drugs were already "known" in some form. As of this writing, I have not seen news of any AI-generated drug receiving FDA approval. What does this combination tell us? It tells us that the potential is immense and not yet fully realized.

AI Trend 4: Running Out Of (Easy) Data

Skeptics have been predicting that AI will run out of data, while others have countered. What appears consistent across these predictions is not the exhaustion of data itself, but the increasing difficulty of accessing data that is both high quality and ethically appropriate. I expect this to be a trend in 2025. Untapped data, particularly about our physical environment, is still massive. However, Large Language Models have already scraped most of the data that is easily available. Expect 2025 to show increasing efforts to obtain data, whether through business contracts to acquire it, labeling systems to curate yet-untapped sources, deployment of more sensors, and so on. Couple this with the above trend of AI in Science, and we can expect efforts to tap scientific data to accelerate.

AI Trend 5: Robots

AI has already made inroads in all fields where problems can be solved with software (think emails, content creation, MRI analysis, etc.), and it has already driven cost savings and employment disruption in these areas. Robotics brings AI access to the physical domain, whether it is manufacturing, surgery, agriculture, or space exploration. The applications of AI combined with physical automation are nearly endless. In 2025 we should expect to see existing trends in this space expand and reach wider public consciousness.

Summary

The past year saw Large Language Models and Generative AI improve at massive speeds, seemingly able to tackle any basic task. Next year we should expect to see the next wave, as deeper impacts on specific domains and institutions, and integrations with other technology waves, come into focus.


12 AI Predictions For 2025

Generative AI has seen faster and more widespread adoption than any other technology today, with many companies already seeing ROI and scaling up use cases into wide adoption.

Vendors are adding gen AI across the board to enterprise software products, and AI developers haven't been idle this year either. We've also seen the emergence of agentic AI, multi-modal AI, reasoning AI, and open-source AI projects that rival those of the biggest commercial vendors.

According to a Bank of America survey of global research analysts and strategists released in September, 2024 was the year of ROI determination, and 2025 will be the year of enterprise AI adoption.

"Over the next five to 10 years, BofA Global Research expects gen AI to catalyze an evolution in corporate efficiency and productivity that may transform the global economy, as well as our lives," says Vanessa Cook, content strategist for Bank of America Institute.

Small language models and edge computing

Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic's Claude and Meta's Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.

"Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses," says Andrew Rabinovich, head of AI and ML at Upwork. LLMs aren't just expensive, they're also very broad, and not always relevant to specific industries, he says.

"Smaller models, on the other hand, are more tailored, allowing businesses to create AI systems that are precise, efficient, robust, and built around their unique needs," he adds. Plus, they can be more easily trained on a company's own data, so Upwork is starting to embrace this shift, training its own small language models on more than 20 years of interactions and behaviors on its platform. "Our custom models are already starting to power experiences that aid freelancers in creating better proposals, or businesses in evaluating candidates," he says.

Small language models are also better for edge and mobile deployments, as with Apple's recent mobile AI announcements. Anshu Bhardwaj, SVP and COO at Walmart Global Technology says that consumers aren't the only ones who stand to benefit from mobile AI.

"Enterprises, especially those with large employee and customer bases, will set the standard for on-device AI adoption," she says. "And we're likely to see an increase of tech providers keeping large enterprises top of mind when developing the on-device technologies."

AI will approach human reasoning ability

In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.

"It's extremely good at reasoning through logic-types of problems," says Sheldon Monteiro, chief product officer at Publicis Sapient. That means companies can use it on tough code problems, or large-scale project planning where risks have to be compared against each other.

If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. "Reasoning also helps us use AI as more of a decision support system," he adds. "I'm not suggesting that all of this will happen in 2025, but it's the long-term direction."

According to Gartner's most recent hype cycle for AI, artificial general intelligence is still more than a decade away.

Massive growth in proven use cases

This year, we've seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.

"The fields of customer service, marketing, and customer development are going to see massive adoption," he says. "In these uses case, we have enough reference implementations to point to and say, 'There's value to be had here.'"

He expects the same to happen in all areas of software development, starting with user requirements research through project management and all the way to testing and quality assurance. "We've seen so many reference implementations, and we've done so many reference implementations, that we're going to see massive adoption."

The evolution of agile development

The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.

"For the last 15 years or so, it's been the de-facto standard for how modern software development works," says Monteiro. But agile is organized around human limitations — not just limitations on how fast we can code, but in how teams are organized and managed, and how dependencies are scheduled.

Today, gen AI is an adjunct, used to boost productivity of individual team members. But the entire process will need to be reinvented in order to make full use of the technology, says Monteiro. "We have to look at how we interact with colleagues and how we interact with AI," he adds. "There's too much attention on AI for code development, which is actually just a fraction of the whole software development process."

Increased regulation

At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026, meaning they'll have a little over a year to put systems in place to track the provenance of their training data.

"As a practical matter, a lot of people do have a nexus in California, particularly in AI," says Vivek Mohan, co-chair of the AI practice at law firm Gibson, Dunn & Crutcher LLP. "Many of the world's leading technology companies are headquartered here, and many of them make their tools available here," he says. But there are already many other regulations on the books, both in the US and abroad, that touch on issues like data privacy and algorithmic decision making that would also apply to gen AI.

Take, for example, the use of AI in deciding whether to approve a loan or a medical procedure, pay an insurance claim, or make employment recommendations. "That's an area where there's a reasonably broad consensus that this is something we should think critically about," says Mohan. "Nobody wants to be hired or fired by a machine that has no accountability. That's one use case you probably want to run by your lawyers."

There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU's AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.

"There's a high probability that the EU AI act will lead to more regulations in other parts of the world," says Gartner's Chandrasekaran. "It's a step forward in terms of governance, trying to make sure AI is being used in a socially beneficial way."

AI will become accessible and ubiquitous

When the internet first arrived, early adopters needed to learn HTML if they wanted to have a website, recalls Rakesh Malhotra, principal at Ernst & Young. Users needed modems and special software and accounts with internet providers. "Now you just type in the word you're looking for," he says. With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.

"There's going to be a lot less of that," he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.

Agents will begin replacing services

Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. "Agents are the next phase," he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.

Today, AI agents are relatively expensive, and inference costs can add up quickly for companies looking to deploy massive systems. "But that's going to shift," he says. "And as this gets less expensive, the use cases will explode."

The rise of agentic assistants

In addition to agents replacing software components, we'll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations. Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.

"But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws," says Malhotra. "This isn't science fiction. We're doing this work for our clients now — a less advanced version of it, but next year it becomes a very normal thing."

And it's not just keeping up with regulatory changes. Say a vendor releases a new software product. Enterprise customers need to be sure it complies with their requirements. That could happen in an automated way, with the vendor's agent talking to the customer's agent. "Today this happens with meetings and reports," says Malhotra. "But soon it's all going to happen digitally once we get past some of this newness."

Soon, showing up to a meeting without an AI assistant will be like an accountant trying to do their work without Excel, he adds. "If you're not using the proper tools, that's your first indication you aren't the right person for the job."

It's still early days for AI agents, says Carmen Fontana, IEEE member, and cloud and emerging tech practice lead at Augment Therapy, a digital health company. "But I've found them immensely useful in trimming down busy work." The next step for agents, she says, is pulling together communications from all the different channels, including email, chat, texts, social media, and more.

"Making better spreadsheets doesn't make for great headlines, but the reality is that productivity gains from workplace AI agents can have a bigger impact than some of the more headline-grabbing AI applications," she says.

Multi-agent systems

Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won't happen overnight, of course, and companies will need to be careful that these agentic systems don't go off the rails.

First, an agent has to be able to recognize whether it's capable of carrying out a task, and whether a task is within its purview. Today's AIs often fail in this regard, but companies can build guardrails, supplemented with human oversight, to ensure agents only do what they're allowed to do, and only when they can do it well. Second, companies will need systems in place to monitor the execution of those tasks, so they stay within legal and ethical boundaries. Third, companies will need to be able to measure how confident the agents are in their performance, so that other systems, or humans, can be brought in when confidence is low.

"If it goes through all of those gates, only then do you let the agent do it autonomously," says Hodjat. He recommends that companies keep each individual agent as small as possible. "If you have one agent and tell it to do everything in the sales department, it'll fail a lot," he adds. "But if you have lots of agents, and give them smaller responsibilities, you'll see more work being automated."

Companies such as Sailes and Salesforce are already developing multi-agent workflows, says Rahul Desai, GM at Chief of Staff Network, a professional development organization. "Combine this with chain-of-thought reasoning, or the ability for an AI agent to reason through a problem in multiple steps — recently incorporated into the new ChatGPT-o1 model — and we'll likely see the rise of domain expert AI that's available to everyone," he says.

Multi-modal AI

Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today's AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.

"When people want to do speech generation, they go to a specialized model that does text to speech," says Chandrasekaran. "Or a specialized model for image generation." To have a full understanding of how the world works, for true general intelligence, an AI has to function across all the different modalities. Some of this is available today, though usually the multi-modality is an illusion and the actual work is handled behind the scenes by different specialized, single-mode models.

"Architecturally, these models are separate and the vendor is using a mixture-of-experts architecture," says Chandrasekaran. Next year, however, he expects multi-modality to be an important trend. Multi-modal AI can be more accurate and more resilient to noise and missing data, and can enhance human-computer interaction. Gartner, in fact, predicts that 40% of gen AI solutions will be multi-modal by 2027, up from 1% in 2023.

Multi-model routing

Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others or have lower latency. And then there's the risk of having all your eggs in one basket.

"A number of CIOs I've spoken with recently are thinking about the old ERP days of vendor lock," says Brett Barton, global AI practice leader at Unisys. "And it's top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities."

Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change. Today, most companies building AI systems in-house tend to start with just one vendor, since juggling multiple providers is much more difficult. But as they build out scalable architecture next year, having "model gardens" with a selection of vetted, customized, and fine-tuned systems of different sizes and capabilities will be critical to getting maximum performance and highest price efficiency out of their AI.

Jeffrey Hammond, head of WW ISV product management transformation at AWS says he expects to see more companies build internal platforms that provide a common set of services to their development teams, including multi-model routing.

"It helps developers quickly test different LLMs to find the best combination of performance, low-cost, and accuracy for the particular task they're trying to automate," he says.

Mass customization of enterprise software

Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It's just not economically feasible to build large systems for small use cases.

"Right now, people are all using the same version of Teams or Slack or what have you," says Ernst & Young's Malhotra. "Microsoft can't make a custom version just for me." But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

"Imagine an agent watching you work for a couple of weeks and designing a custom desktop just for you," he says. "Companies build custom software all the time, but now AI is making this accessible to everyone. We're going to start seeing it. Having the ability to get custom software made for me without having to hire someone to do it is awesome."


AI Agents May Lead The Next Wave Of Cyberattacks

While artificial intelligence agents are expected to lead the next wave of AI innovation, they'll also empower cyberattackers with a more potent set of tools to probe for and exploit vulnerabilities in enterprise defenses.

That's according to Reed McGinley-Stempel, chief executive officer of identity platform startup Stytch Inc. OpenAI LLC's GPT-4 large language model, which debuted early this year, appears to be far more effective than its predecessors at identifying weaknesses in website security. "AI should improve cybersecurity if you use it for the right reasons, but we're seeing it move much faster on the other end, with attackers realizing that they can use agentic AI to gain an advantage," he said.

He pointed to a paper published in April by researchers at the University of Illinois Urbana-Champaign that found that GPT-4 can write complex malicious scripts to find vulnerabilities in Mitre Corp.'s list of Common Vulnerabilities and Exposures with an 87% success rate. A comparable experiment using GPT-3.5 had a success rate of 0%. The paper said GPT-4 was able to follow up to 50 steps at one time in its probe for weaknesses.

That raises the specter of armies of AI agents pounding on firewalls constantly looking for cracks. "GPT-4 now can effectively be an automated penetration tester for hackers," McGinley-Stempel said. "You could easily start to see agentic actions being chained together, with one agent recognizing the vulnerabilities and another focused on exploitation."

Defenders overmatched

That kind of constant penetration testing is beyond the scope of most cybersecurity organizations to combat, he said. "Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year," he said. "Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing."

Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image.

Stytch's technology creates a unique, persistent fingerprint for every visitor. It claims its software can detect automated visitors such as bots and headless browsers with 99.99% accuracy without requiring user interaction. A headless browser is a browser without a graphical user interface that is used primarily to speed up automated tasks such as testing but can also be exploited to confuse authentication systems about whether the visitor is a human or a machine.
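The gap between crude and real detection is worth seeing concretely. A server can spot the most careless automation from request headers alone, as in the toy heuristic below; the header names are standard HTTP, but the checks are invented for illustration and say nothing about how Stytch's or any commercial fingerprinting product actually works, since those rely on many more signals (JavaScript properties, canvas rendering, timing, and so on).

```python
def crude_bot_signals(headers):
    """Return a list of crude signals that a visitor may be automated.

    Toy heuristic only: real fingerprinting gathers far richer signals
    than these header checks, and careful bots spoof all of this easily.
    """
    ua = headers.get("User-Agent", "")
    signals = []
    if not ua:
        signals.append("missing User-Agent")
    if "HeadlessChrome" in ua:            # default headless Chrome UA token
        signals.append("headless Chrome UA")
    if "Accept-Language" not in headers:  # real browsers almost always send this
        signals.append("no Accept-Language")
    return signals

print(crude_bot_signals({"User-Agent": "Mozilla/5.0 HeadlessChrome/119",
                         "Accept-Language": "en-US"}))  # ['headless Chrome UA']
```

Because these giveaways are trivially spoofed, detection vendors move to persistent fingerprints built from dozens of harder-to-fake characteristics, which is the approach described above.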

A recent increase in the percentage of headless browser automation traffic Stytch has detected on customer websites is one indication that bad actors are already using generative AI to automate attacks. Since the release of GPT-4, the volume of website traffic coming from headless browsers has nearly tripled from 3% to 8%, McGinley-Stempel said.

AI will further diminish the value of captchas, he said. A combination of generative AI vision and headless browsers can defeat schemes that require visitors to identify objects and images, a popular use case. Even sophisticated automation detection technology can be foiled by services like Acaptcha Development LP's Anti-Captcha, which farms out captcha solutions to human workers.

"Putting someone in front of a captcha raises the cost of attack but isn't necessarily a true test," he said.

AI arms race

Ultimately, the use of AI and models to solve cybersecurity challenges will be mostly ineffective, he said. "If you're just going to fight machine learning models on the attacking side with ML models on the defensive side, you're going to get into some bad probabilistic situations that are not going to necessarily be effective," he said.

Probabilistic security provides protections based on probabilities but assumes that absolute security can't be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.

The most effective prevention enterprises can employ with current technology is a combination of distributed denial-of-service attack prevention, fingerprinting, multifactor authentication, and observability. The last technique is often overlooked, he said.

"If you embedded our device fingerprinting JavaScript snippet on your website, you'd get a lot of interesting data on what percentage of your traffic was bots, headless browsers and real humans within an hour,"' he said. Information technology executives are often alarmed to discover what Imperva Inc. Reported earlier this year: Almost half of internet traffic now comes from nonhuman sources.






