All AI May Be Created Equal, But Its Uses Are Not: From AI Enhancements to Systems of Execution
Artificial intelligence is not one thing. That simple observation, too often overlooked, is at the heart of why so many companies struggle to gain consistent value from their AI investments.
As I talk with leaders across large enterprises, I find a persistent source of frustration: the tendency to lump all AI into a single category. This conflation muddles expectations, clouds strategies, and ultimately slows meaningful adoption. To make real progress, we must begin by segmenting AI not by its type but by how it is used.
Let me propose a framework with three distinct categories of AI use:
AI as Enhancements to Existing Platforms
This is the most incremental and least disruptive of the three AI use cases. In this model, AI is embedded into existing platforms as an added feature or module. Nearly every software company is enhancing its offerings with AI, either by integrating new capabilities directly into core products or by offering complementary add-ons. In some cases, enterprises are also building their own AI solutions to strengthen their tech stacks, fill functional gaps, or improve the overall customer and user experience.
In these cases, the adoption barrier is relatively low. These tools typically come from trusted providers, are supported with standard implementation processes, and pose manageable risks. The enhancements can be evaluated with a straightforward cost-benefit analysis: do they deliver enough value to justify the cost of implementing them and of either building or buying them?
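To make that evaluation concrete, here is a minimal back-of-the-envelope sketch; the figures and the three-year window are purely illustrative assumptions, not benchmarks.

```python
# Hypothetical back-of-the-envelope check: does an AI enhancement pay for itself?
# All figures are illustrative assumptions, not benchmarks.

def enhancement_roi(annual_value: float, build_or_license_cost: float,
                    annual_run_cost: float, years: int = 3) -> float:
    """Return net value over the evaluation window divided by total cost."""
    total_cost = build_or_license_cost + annual_run_cost * years
    total_value = annual_value * years
    return (total_value - total_cost) / total_cost

# Example: a $150k/yr productivity gain against a $120k license plus $60k/yr to run.
print(f"3-year ROI: {enhancement_roi(150_000, 120_000, 60_000):.0%}")
```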
Early versions of this approach have helped companies set expectations and identify what needs to be tested. Still, these enhancements often carry hidden technical debt, such as data reformatting, system integrations, or the retirement of legacy modules. And while they can deliver real value, the returns are typically incremental, not transformative.
AI as a Toolset for the Workforce
Perhaps the best illustration of AI as a toolset is ChatGPT or any of the easily accessible LLMs that employees adopt to boost their productivity. The market is replete with new tools designed to help your existing employees do things better. These are tools with horizontal utility – used across functions and teams, often with little or no official sanction.
If this feels familiar, it should. It echoes the arrival of PCs and cell phones in the enterprise. Yes, they created security problems, and yes, they created training problems, but they were so useful and offered such advanced capabilities that they proved unstoppable. Enterprises quickly figured out that they needed to shape the phenomenon rather than fight it. They purchased PCs for their employees and standardized the software. They allowed employees to use their cell phones for work and pushed employees out from behind the desk. They provided training, and yes, the attack surface for security and other issues expanded greatly, but over time they purchased and developed cybersecurity and other controls to manage the attack vectors that had been opened up. With AI toolsets, we will have to take the same approach.
You cannot prevent employees from using generative AI. But you can shape how it's used. Provide them with enterprise-grade, secure alternatives. Invest in training. Create communities of practice. Offer recognition for innovation. In other words, get behind it and push because trying to stand in front and stop it is a losing game.
The challenge here is governance – not just of data security but of use cases, knowledge-sharing, and effectiveness. Organizations that lean into structured enablement will find themselves moving faster and more securely than those that try to hold back the tide.
AI as Systems of Execution
This is the most complex, most disruptive, and potentially most rewarding category of AI use.
Enterprises have traditionally built Systems of Record, such as ERPs, alongside Systems of Engagement, which leverage internet technologies to interact with customers and employees through digital channels. Now, we are entering a new phase: Systems of Execution.
Systems of Execution are AI-based architectures that execute decisions without human intervention. They require reimagining processes, not just automating steps. You don't get there by layering AI on top of what exists. You build them together with your current systems, just as you did when engagement systems were introduced alongside record systems.
Organizations have increasingly adopted end-to-end processes, for example, in areas like accounting, cash management, and procurement. These processes rely on Systems of Record and Systems of Engagement to deliver the insights and data that employees use to make decisions and take action. Systems of Execution are now stepping in to assume those roles, effectively replacing the human actors in these workflows.
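As a rough illustration of what "replacing the human actor" can look like, here is a minimal sketch of a single execution step in a hypothetical invoice workflow; the decision rule, thresholds, and names are assumptions standing in for whatever model or agent logic an enterprise would actually deploy.

```python
# Minimal sketch of a "System of Execution" step: an agent that reads from the
# System of Record, applies a decision policy, and executes the action that a
# human approver would otherwise take. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float
    matches_purchase_order: bool

def decide(invoice: Invoice) -> str:
    """Decision policy standing in for the model/agent logic."""
    if invoice.matches_purchase_order and invoice.amount < 10_000:
        return "approve_and_pay"        # executed with no human in the loop
    return "escalate_to_human"          # exceptions still route to people

def execute(invoice: Invoice) -> None:
    action = decide(invoice)
    # In a real deployment these calls would hit the ERP / payment APIs.
    print(f"{invoice.vendor}: {action}")

for inv in [Invoice("Acme", 4_200.0, True), Invoice("Globex", 48_000.0, True)]:
    execute(inv)
```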
The impact? In areas where we've seen Systems of Execution deployed, labor requirements have dropped by 60–80%. That's not a small change; it's a fundamental operational pivot. And beyond labor savings, we find benefits in quality, impact, customer experience, and more.
You have to reimagine how you're going to operate your processes and, hence, your company. There is a mindset change. These initiatives require executive sponsorship, cross-functional collaboration, and a rethinking of how value is created and delivered. The willingness and ability of senior leaders to champion this change is the essential starting point for building a System of Execution. Without that leadership, progress is likely to be limited, given the scale of disruption involved and the level of vision and support required to successfully reimagine core business processes.
In our conversations and research, we are often asked which processes are the best place to start and whether some are better candidates than others. The answer is simply to start the journey of putting agents in place: define the AI agent first, then do the data work that enables it. That's how you avoid overinvesting in data projects with no line of sight to ROI.
Accuracy concerns, as with generative AI, can be managed just as you manage human error: through supervision, oversight, and layers of review. In the same way, we can have agents supervising agents, and we may also have people directing agents. This layered approach allows us to be thoughtful about where to introduce AI agents and where not to.
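A minimal sketch of that layered oversight might look like the following; the worker and supervisor agents are placeholder functions with an assumed confidence score, not any particular framework or product.

```python
# Sketch of layered oversight: a "supervisor" agent reviews a worker agent's
# output before it is executed, and low-confidence cases route to a person.
# The agents here are placeholder functions, not a specific framework.
import random

def worker_agent(task: str) -> dict:
    # Stand-in for an LLM-backed agent; returns an answer plus a confidence score.
    return {"task": task, "answer": f"draft response to '{task}'",
            "confidence": random.uniform(0.5, 1.0)}

def supervisor_agent(result: dict) -> str:
    # A second agent (or rule set) that reviews the first agent's work.
    if result["confidence"] >= 0.8:
        return "execute"
    return "route_to_human"

for task in ["reconcile account 1042", "draft vendor reply"]:
    result = worker_agent(task)
    print(task, "->", supervisor_agent(result))
```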
Too often, discussions about AI focus on the technology itself. But as leaders, we must shift our focus to use. Not all AI is equal in how it's deployed, the impact it delivers, or the change it demands.
Segmenting your AI journey based on use not only clarifies priorities but enables more effective communication across your leadership team.
Microsoft Copilot Scores Low On AI IQ Tests — But That's Not The Full Story
Not all brains are created equally, and that is equally true of artificial intelligence.
As big tech companies from Apple to Meta desperately try to figure out how to profit from the big AI boom, one player we're all too familiar with continues to scheme somewhat outside of the limelight.
While Meta reportedly drops tens of millions in bonuses to poach OpenAI researchers, and Apple spooks shareholders by not having a clear plan for its own AI efforts, Microsoft seems relatively content with powering the back-end for a lot of these services.
Indeed, Microsoft's mass layoffs throughout 2025, to the tune of over 15,000 employees, are reportedly designed to fund a splurge on new AI-focused data centers for Azure, as Microsoft bets big on powering AI for other companies.
Microsoft has its own home-grown AI efforts, of course. Microsoft Copilot, for example, is the firm's answer to ChatGPT and other similar AI assistant apps. Microsoft has also baked "AI" features into the Photos app, Microsoft Paint, and even Notepad. So far, few people seem to care, though. And this might be at least partially why.
I recently noticed a website called TrackingAI, which stacks up different models in a variety of IQ-oriented challenges. The site runs LLMs through both Mensa Norway's notoriously difficult reasoning test and fully offline tests designed to prevent AI from surfing the web for the answers. How did Microsoft Copilot do? Well ... not particularly well (at least on paper).
In the offline tests, Microsoft Copilot languished at the very bottom of the barrel, with a score of 67. By comparison, OpenAI o3 Pro is the current frontrunner, hitting 117. Copilot fared a little better in the Mensa Norway test, hitting 84. Elon Musk's Grok-4 won out on the Mensa Norway test, hitting 136. OpenAI o3 Pro was close behind with 135.
But does it matter? It should be noted that Microsoft Copilot is based on GPT-4o, which prioritizes versatility, speed, and cost-effectiveness over reasoning. OpenAI's o3 models aren't generally available because they're several times more expensive to run than GPT-4o. Most of the models that beat out Microsoft Copilot are "pro" models that cost more to run. Copilot is free, and its performance generally reflects that fact.
One of Microsoft's core research areas revolves around figuring out how to make more powerful models more cost-effective. Microsoft itself reported a huge surge in carbon emissions, driven almost entirely by AI power use. Microsoft's own Phi models, which also aren't as widely available, are so-called Small Language Models that prioritize efficient performance, potentially designed to run on-device and at minimal cost. Sadly, from what I could tell, Phi models aren't listed on TrackingAI yet.
The reality with these tests is that each language model, at least as of today, is purpose-built for specific tasks. GPT-4o and Copilot are designed to be more consumer-friendly, dare I say, "fun" to use, even if they suffer when it comes to raw academic reasoning. Copilot has deep research modes that boost its accuracy if you're willing to subscribe.
There's a ton of hype in the AI arena right now, with players like Google Gemini, X's Grok, and OpenAI's ChatGPT frequently leapfrogging each other in certain parameters. It's perhaps a bit sad that Microsoft itself, for all its investments, rarely seems to be in the conversation — unless it's negatively so, with the flop of Copilot+ PCs and the privacy backlash of Windows Recall.
Perhaps Microsoft is simply content being in the background, powering the future with Azure instead of being in the limelight.
AI Won't Solve All Of Your Business's Problems—But Your Engineers Might
Guillaume Aymé is CEO of Lenses.Io, a pioneer in the streaming market that has transformed the way engineers work with real-time data.
I'm a glass-pretty-much-full kind of guy when it comes to technology. I love innovation. If there's a new technology, I want to try it; a big idea, I want to chase it.
The pace of technological change is both staggering and exciting. But what worries me is how our industry is treating AI like a magic solution to all of our problems. Consider AI agents, which companies hope will automate away entire processes and departments. It feels like everyone is rushing to deploy AI and agents as broadly and in as many different ways as possible because they're afraid of being left behind.
That sentiment makes sense, but in the rush, we run the risk of deprioritizing the basics of good data management and software delivery practices. That could be very costly.
AI is a force multiplier for businesses. However, at the end of the day, success isn't going to come without very good foundations based on company culture and business alignment, engineering practices, tooling and good quality data—and how well they come together.
Why Companies Still Struggle With Data Access
Integrating AI agents into a business is already complex. Trying to do it with poor or inaccessible data makes it nearly impossible.
Good data isn't just about quality—it's also about access. Yet many companies still make it difficult for engineers and product teams to work with the data they need. This is especially true with streaming data, which holds the greatest potential to enable real-time, AI-driven automation.
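As a sketch of what self-service access to streaming data can look like in practice, the snippet below reads events from a hypothetical Kafka topic named "orders" using the kafka-python client; the topic, fields, and broker address are assumptions to adapt to your own stack.

```python
# Illustrative only: reading a real-time stream so an automation/agent can act on it.
# Assumes a Kafka topic named "orders" and the kafka-python client; adapt to your stack.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    event = message.value
    # Hand the event to whatever downstream logic or agent needs it.
    if event.get("status") == "failed":
        print("flagging for automated follow-up:", event.get("order_id"))
```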
Without access to this data, companies are likely to fall behind. The divide is growing clearer: businesses are either leveraging AI successfully or struggling to keep up—there's little middle ground.
When teams lack self-service access to data, it drags down:
• Productivity: Teams waste time chasing down information instead of building and improving systems.
• Collaboration: Silos grow, making it harder for teams to share insights and coordinate.
Most importantly, the AI systems you've already deployed can start to fail. Without the right data foundation, teams get stuck debugging broken automation rather than moving forward.
Using AI To Improve Data First
Too often, businesses focus on AI agents that replace human work rather than those that enhance foundational processes. One way organizations looking to adopt AI agents can start—or advance—their journey is by applying agents to support the engineers who manage and govern data.
These data-focused agents are often easier to scope, operate more predictably and carry less risk than those embedded directly into business operations. Just as important, they help establish the foundation of clean, well-governed data that future AI systems will depend on.
For example, agents that classify data sources can assist both humans and other AI systems in quickly locating relevant datasets. At the same time, they can identify duplicates, improving data quality and reducing storage and processing costs. A well-maintained data catalog makes it far easier for downstream agents to discover and use the right data.
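Here is a minimal sketch of such a data-focused agent; the datasets, field names, and the simple rule-based classifier and schema-overlap heuristic are illustrative stand-ins for whatever classification logic a team would actually use.

```python
# Sketch of the "data-focused agent" idea: classify data sources into a catalog
# and flag likely duplicates by comparing schemas. Heuristic stand-in for an
# LLM-based classifier; dataset names and fields are hypothetical.
from itertools import combinations

datasets = {
    "crm_contacts":    {"fields": {"email", "name", "phone"},        "tag": None},
    "mkt_contacts":    {"fields": {"email", "name", "phone"},        "tag": None},
    "payments_ledger": {"fields": {"invoice_id", "amount", "date"},  "tag": None},
}

def classify(name: str, fields: set) -> str:
    if "email" in fields:
        return "customer_data"
    if "amount" in fields:
        return "finance_data"
    return "uncategorized"

for name, meta in datasets.items():
    meta["tag"] = classify(name, meta["fields"])

# Flag pairs whose schemas overlap heavily as duplicate candidates.
for (a, ma), (b, mb) in combinations(datasets.items(), 2):
    overlap = len(ma["fields"] & mb["fields"]) / len(ma["fields"] | mb["fields"])
    if overlap > 0.8:
        print(f"possible duplicate: {a} <-> {b} (overlap {overlap:.0%})")
```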
How The Industry Can Promote The Basics
To this day, few businesses have successfully introduced AI, let alone agents, because both are difficult to operationalize. Doing it right requires strong fundamentals in data and software engineering. To get back to basics, businesses should prioritize empowering their engineers by providing them with the following:
• Simplicity: Enable teams to focus on innovative problem solving by doing away with unnecessary complexities.
• Autonomy: Trust them to take ownership when making decisions.
• Access: Provide them with the real-time data needed to gain insights that eliminate decision-making bottlenecks and drive better outcomes.
• Tools: Give teams modern solutions that streamline their workflows and foster collaboration.
• Guidance: Keep everyone aligned by providing governance that enforces clear and sensible guardrails without inhibiting innovation.
These aren't just best practices. Rather, I see them as prerequisites for any company with ambitions in today's data-driven business environment.
Prioritizing these principles will go a long way toward helping businesses see a return on their AI investments and paving the way for success further down the road.