Latent semantic indexing: Why marketers don’t need to worry about it
DeepSeek And The Future Of Enterprise AI
Fabio Caversan is Vice President of Digital Business and Innovation at Stefanini, driving new product offerings and digital transformation.
gettyIn January 2025, the Chinese startup DeepSeek released R1, a sophisticated, open-source reasoning model. In a field dominated by proprietary models from companies like OpenAI and Google, R1 quickly gained traction as an accessible alternative.
The implications for enterprise AI are significant. Until recently, most leading systems were only available through closed APIs or expensive licensing agreements. With its open-source approach, DeepSeek broadened access to cutting-edge AI capabilities while enabling organizations to better understand, audit and customize the systems they deploy.
Efficiency, Energy And Enterprise ImpactThe market was quick to respond to R1's surprise debut. Within days, OpenAI and Google had announced new, lower pricing structures, and Microsoft began testing deployments through Azure.
However, despite the competitive threat, some industry leaders saw the launch as a step forward. Meta's chief AI scientist, Yann LeCun, praised DeepSeek for accelerating the push toward open-source AI. Meanwhile, Microsoft CEO Satya Nadella called the development "good news," arguing that increased access drives broader adoption.
The launch of R1 also brought benefits for companies focused on energy consumption. Historically, running AI models on enterprise infrastructure has required tremendous energy, so much so that in 2024, Microsoft announced plans to revive the Three Mile Island nuclear power plant in Pennsylvania to supply its data centers.
By enabling high-output performance on even mid-tier machines, the R1 model allows organizations to scale AI capabilities without the major infrastructure or energy costs typically associated with AI operations.
A Model That Does More With LessWith R1, high-performance models are showing up in places they couldn't before—on modest infrastructure, under tighter budgets and in organizations previously priced out of advanced AI solutions entirely.
Key strategic advantages include:
• Flexible Implementation Without Cloud Dependency: DeepSeek can be deployed and tested on local infrastructure. That reduces reliance on third-party APIs and provides more direct control over how systems are built and managed.
• Lower Total Cost Of Ownership: Because it's open source and runs on modest hardware, DeepSeek reduces costs associated with licensing fees and infrastructure.
• Stronger Data Governance And Regulatory Fit: On-premises deployment gives organizations more control over data handling, making it easier to meet internal policies and regional privacy laws.
• Efficient Performance With Less Energy Draw: R1's architecture allows for advanced capabilities without the heavy energy draw typically associated with large-scale AI.
• Enhanced Market Agility: Teams that adopt open-source models early will be able to move quickly and test new ideas in-house.
Auditability And AssuranceDeepSeek's open-source architecture provides enterprises with transparency. As Grammarly CEO Rahul Roy-Chowdhury argued in an article for the World Economic Forum, transparency is a foundational strength of open-source systems. Because the underlying code and model weights are publicly available, organizations can audit and adapt open-source technology to meet their own security and ethical standards.
Barriers To AdoptionDespite these strengths, DeepSeek hasn't yet reached mainstream enterprise adoption. Running a state-of-the-art open-source AI model on-premises requires expertise across DevOps, machine learning (ML) operations and AI. Many organizations lack that level of in-house capability.
Geopolitical tensions also muddy the waters. Because DeepSeek is headquartered in China, some organizations remain cautious. These organizations will need visible, ongoing assurance of data security, regulatory alignment and long-term technological autonomy to overcome this hesitation. Beyond the technology, companies need to understand how well a system runs, how easily it will integrate with existing workflows and whether it will introduce any compliance risks.
The Next Step For Enterprise AIWinning in the next era of enterprise AI will require trust, agility and the ability to meet businesses where they are. As an open-source project, DeepSeek is in a position to outperform competitors in priority areas such as transparency and cost efficiency.
However, any provider looking to compete for enterprise adoption will need to invest in six key areas:
• Explainability And Fairness: For AI decisions to be trusted, especially in scenarios where they impact people, they need to be explainable and fair. Providers should build out or integrate interpretation tools, support external audits and share bias metrics. Clear documentation and audit pathways must be part of any enterprise offering.
• Scaling Open Source And Community Trust: Open-source projects succeed when they're backed by active, well-supported communities. For providers, that means investing in developer experience, strong documentation and ongoing engagement to keep users and contributors connected to their core team.
• Security And Adversarial Risks: Wider deployment will make large AI models more attractive to attackers. Providers should implement "security by design" across the stack, run third-party audits and red team exercises, maintain rapid patch cycles and give self-hosted users detailed, actionable security guidance.
• Interoperability And Integration: Mainstream enterprise adoption will depend on seamless compatibility with legacy, cloud and hybrid IT environments. Providers should prioritize a mature SDK/API layer, build plug-ins for top enterprise platforms (such as Microsoft and Salesforce) and offer onboarding materials and "solution blueprints" for common enterprise use cases.
• Enterprise Support And Sustainability: For mainstream adoption, open source alone isn't enough. Enterprises need support contracts, SLAs and deployment options that fit their infrastructure. Providers should build or enable commercial packages that give companies a choice between total self-hosting and managed or fully supported deployments.
• Continuous Innovation And Talent Retention: Falling behind on model quality or deployment features kills momentum quickly. Providers need strong internal R&D, active collaboration with outside researchers and a culture that prioritizes open peer review and innovation.
ConclusionThe release of R1 has shown that companies can deploy sophisticated AI with more speed and confidence than ever before. However, delivering a technically strong model is only part of the equation. For now, DeepSeek offers a rare combination of performance, flexibility and autonomy, and that puts it ahead of the curve. Whether it will stay there will depend on how quickly it can operationalize support and security at scale.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Evolving AI Security Into Enterprise Risk Strategy
Metin Kortak is the Chief Information Security Officer at Rhymetec, an industry-leading cybersecurity firm for SaaS companies.
gettyAI is exciting, but it's being deployed across enterprises at a pace beyond the effective governance capability of most organizations. While regulatory frameworks such as the EU AI Act represent meaningful progress, they fall short of addressing the full scope of risks, particularly in areas like cybersecurity, misinformation and internal misuse. At the same time, global security standards remain fragmented or nonexistent, leaving organizations without a consistent road map for implementing responsible AI.
For CISOs and technology leaders, this situation demands urgent action. Relying on future regulation to catch up means accepting exposure to operational, reputational and legal risks. The path forward requires proactive, anticipatory leadership: securing AI systems, embedding oversight and establishing governance structures now, not waiting for compliance mandates to dictate the response.
Why AI Security Currently Falls ShortAI is now embedded across enterprise functions—from fraud detection to HR automation. Yet universal security standards do not exist. Developers continue to release large-scale models, often open-source or API-accessible, without consistent safeguards governing their behavior, inputs or decision pathways.
As reliance on AI grows, so do the attack surfaces. Adversaries are already using AI to bypass intrusion detection systems, dynamically adjust tactics and exploit algorithmic blind spots. These threats are evolving faster than conventional cybersecurity defenses can adapt, creating an "arms race" in cybersecurity. In many cases, attackers are using the same tools as defenders—only more aggressively and creatively.
Despite the risks, most companies have yet to establish formal governance around AI systems. These tools are deployed as part of the digital stack but are often excluded from the rigorous security assessments applied to other infrastructure. The result: Organizations are expected to innovate with AI, but they do so without the guidance of coherent or enforceable standards.
What The EU AI Act Still LacksThe European Union's Artificial Intelligence Act (EU AI Act) is the most comprehensive regulatory initiative for generative AI to date. It introduces a tiered risk framework, bans certain harmful AI applications and imposes compliance obligations for high-risk systems. Use cases in hiring, health care or critical infrastructure require transparent data governance, human oversight and explainability.
Penalties are steep—up to 7% of global revenue or 35 million euros for noncompliance. Still, important areas remain uncovered. General-purpose models, such as LLMs, often escape scrutiny unless they are deployed in regulated contexts. And generative misinformation or synthetic content remains blanketed under overly broad categories. Deepfakes, in particular, elude straightforward classification and regulation, despite their growing threat profile.
The EU AI Act may serve as a global benchmark—a Brussels Effect in motion—but enforcement across borders will be difficult, and the speed of AI innovation is unlikely to slow.
The Cost Of Waiting For Regulations To Kick InThe AI threat environment is evolving in real time. Generative tools are being used to create synthetic identities, launch phishing campaigns and impersonate executives with increasing ease. Organizations lacking clear AI controls are vulnerable to a wide range of risks, including external attacks, internal misuse and system misalignment. Many continue to deploy these tools without fully considering their downstream impact.
Security's role is to reduce risk, not eliminate it. That principle is especially critical in the context of unpredictable AI system behavior. AI must shift from risk minimization theory to real-time implementation, particularly as generative threats continue to evolve. Waiting for regulation is not a viable strategy. The risks are immediate and the tools are already in the hands of both innovators and adversaries. AI models, if left unchecked, can reinforce bias, spread misinformation and cause reputational harm.
The cost of inaction is not just financial—it's operational, reputational and legal. Organizations that start aligning technical safeguards with business strategy today will be better positioned to meet the moment—and whatever comes next.
Six Steps Businesses Can Take NowAI security must evolve from a compliance exercise into a core pillar of enterprise risk management. For CISOs and technology leaders, this means anticipating not just today's threats but tomorrow's regulations and adversarial tactics.
1. Conduct A Comprehensive AI InventoryStart by mapping every AI system in use by the organization—internally developed or third-party integrated. This includes tools built into SaaS products, chatbots and decision engines. Capture their functions, input sources, training data, business impact and any reliance on external APIs or models.
2. Apply Zero-Trust Principles To AI WorkflowsPerimeter trust is dead. The same applies to AI. Validate all inputs and continuously monitor outputs. Access to models should be restricted based on functional roles and integrated with identity and access management policies.
3. Adopt Privacy-Preserving TechnologiesTechniques like differential privacy, federated learning and confidential computing allow protection of sensitive data even during training and inference. These strategies help maintain control without sacrificing insight.
4. Implement Targeted Training And GovernanceEvery employee interacting with AI must understand its boundaries. That includes knowing what data can be shared, how to evaluate outputs and where to report anomalies. Define usage policies that govern prompt safety, data sharing and vendor evaluation.
5. Simulate Adversarial ScenariosInclude AI in red team exercises. Explore how systems respond to crafted prompts, poisoned data or prompt injections. This surfaces vulnerabilities before bad actors do.
6. Build Human Oversight Into The AI Life CycleNo AI system should operate without meaningful human-in-the-loop supervision. These systems are inherently fallible—subject to errors, blind spots and unpredictable behaviors. Review layers, escalation paths and audit capabilities must be embedded from the start.
Securing AI is not just a technical task; it's a strategic one. Businesses that move beyond the checklist approach and design for resilience, ethics and transparency will be best equipped to lead.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Udemy Is Powering Enterprise AI Transformation Through Skills
This content was provided by Acumen Media for Udemy.
This advertiser content was paid for and created by Acumen. Neither CBS News nor CBS News Brand Studio, the brand marketing arm of CBS News, were involved in the creation of this content.
Executives across industries are urgently working to build AI fluency in their organizations as digital transformation becomes a business imperative. From finance to manufacturing, companies have doubled down on training employees in artificial intelligence (AI) and other tech skills, making workforce upskilling a necessity across the board.
Udemy, a global AI-powered skills development platform known for its real-world expert instructors and unique marketplace model, is at the forefront of this skills revolution.
"AI is the big demand, and every one of our clients is asking, 'Help me reskill the workforce,'" said Udemy CEO Hugo Sarrazin, underscoring the unprecedented pressure companies face to upskill their people.
Udemy, used by over 17,000 businesses worldwide, is helping companies answer that call by developing AI fluency across their ranks.
AI-Personalized learning at scaleUdemy is leveraging AI to tailor the learning experience for each employee. Traditional online courses have been one-size-fits-all, following a fixed curriculum regardless of a learner's prior knowledge or style. Now, AI-driven features can assess each learner's skills, curate a personalized learning path, and feature role play experiences to maximize engagement.
"What we're doing right now with our AI platform is fundamentally changing the learner experience," Sarrazin explained.
More personalized training yields a much better return on investment (ROI) for companies. Employees can even upskill in the flow of work, as AI guides practice and reinforces learning without pulling them away from day-to-day roles.
Reskilling strategies to drive transformationOrganizations care about outcomes, not just course completion. Employers want employees to gain specific new skills they can apply to strategic initiatives and to retain talent by supporting their growth. To that end, Udemy is enabling organizations to customize learning content to align with their goals. Enterprises now have access to the same AI-powered content creation tools used by Udemy's instructors, allowing them to quickly develop bespoke courses and training pathways. By tailoring learning journeys to company needs, businesses can keep their teams ahead of quickly evolving skill requirements.
By blending their dynamic expert marketplace with AI-powered delivery, Udemy offers a scalable way to foster continuous learning. And by turning employee training into a strategic asset, companies can better maintain a competitive edge. The ability to quickly reskill at scale is becoming essential for staying ahead. Udemy's success shows how the right learning platform can transform not only a workforce, but also an entire organization for the digital age.

Comments
Post a Comment