



Microsoft's Custom AI Chip Hits Delays, Giving Nvidia More Runway

Microsoft's push into custom artificial intelligence hardware has hit a serious snag. Its next-generation Maia chip, code-named Braga, won't enter mass production until 2026 – at least six months behind schedule. The Information reports that the delay raises fresh doubts about Microsoft's ability to challenge Nvidia's dominance in the AI chip market and underscores the steep technical and organizational hurdles of building competitive silicon.

Microsoft launched its chip program to reduce its heavy reliance on Nvidia's high-performance GPUs, which power most AI data centers worldwide. Like cloud rivals Amazon and Google, it has invested heavily in custom silicon for AI workloads. However, the latest delay means Braga will likely lag behind Nvidia's Blackwell chips in performance by the time it ships, widening the gap between the two companies.

The Braga chip's development has faced numerous setbacks. Sources familiar with the project told The Information that unexpected design changes, staffing shortages, and high turnover have repeatedly delayed the timeline.

One setback came when OpenAI, a key Microsoft partner, requested new features late in development. These changes reportedly destabilized the chip during simulations, causing further delays. Meanwhile, pressure to meet deadlines has driven significant attrition, with some teams losing up to 20 percent of their members.

The Maia series, including Braga, reflects Microsoft's push to vertically integrate its AI infrastructure by designing chips tailored for Azure cloud workloads. Announced in late 2023, the Maia 100 uses advanced 5-nanometer technology and features custom rack-level power management and liquid cooling to manage AI's intense thermal demands.

Microsoft optimized the chips for inference, not the more demanding training phase. That design choice aligns with the company's plan to deploy them in data centers powering services like Copilot and Azure OpenAI. However, the Maia 100 has seen limited use beyond internal testing because Microsoft designed it before the recent surge in generative AI and large language models.

"What's the point of building an ASIC if it's not going to be better than the one you can buy?" – Nvidia CEO Jensen Huang

In contrast, Nvidia's Blackwell chips, which began rolling out in late 2024, are designed for both training and inference at a massive scale. Featuring over 200 billion transistors and built on a custom TSMC process, these chips deliver exceptional speed and energy efficiency. This technological advantage has solidified Nvidia's position as the preferred supplier for AI infrastructure worldwide.

The stakes in the AI chip race are high. Microsoft's delay means Azure customers will rely on Nvidia hardware for longer, potentially driving up costs and limiting Microsoft's ability to differentiate its cloud services. Meanwhile, Amazon and Google continue to advance their own silicon designs, with Amazon's Trainium 3 and Google's seventh-generation Tensor Processing Units gaining traction in data centers.

Team Green, for its part, appears unfazed by the competition. Nvidia CEO Jensen Huang recently acknowledged that major tech companies are investing in custom AI chips but questioned the rationale for doing so if Nvidia's products already set the standard for performance and efficiency.


Nvidia Becomes World's Most Valuable Company After AI Surge Lifts Stock To Record High



Nvidia Taps 2 Young Chinese AI Experts To Strengthen Research

US chip giant Nvidia has hired two prominent artificial intelligence (AI) experts who hail from China, underscoring the rising global recognition of mainland Chinese talent and its key contributions to the field's advancement.

Zhu Banghua and Jiao Jiantao, both alumni of China's Tsinghua University, said on their respective social media accounts that they joined Nvidia, sharing photos of themselves with Jensen Huang, the founder and CEO of the company.

Zhu, who received his bachelor's degree in electrical and electronics engineering from Tsinghua in 2018 and a PhD in electrical engineering and computer science from the University of California, Berkeley, in 2024, joined Nvidia's Nemotron team as a principal research scientist, according to a post Zhu made on X over the weekend.

Zhu's LinkedIn profile showed that he has also been an assistant professor at the University of Washington since September 2024.

"We'll be joining forces on efforts in [AI] model post-training, evaluation, agents, and building better AI infrastructure – with a strong emphasis on collaboration with developers and academia," Zhu said, adding that the team was committed to open-sourcing its work and sharing it with the world.

Nemotron is a group at Nvidia dedicated to building enterprise-level AI agents, according to the team's official website. The team's Nemotron multimodal models power AI agents for sophisticated text and visual reasoning, coding and tool-use capabilities.

Jiao, who received a PhD in electrical, electronics and communications engineering from Stanford University in 2018 after graduating from Tsinghua with a bachelor's degree in electrical engineering, said on LinkedIn over the weekend that he joined Nvidia to "help push the frontier of artificial general intelligence (AGI) and artificial super intelligence (ASI)."





