Artificial Intelligence

The Rise of Reasoning Models

By Alban Cousin | 5/16/2025 | 2 Minute Read

AI will continue reshaping semiconductor demand over the next two years and beyond, shifting from training toward inference, with growing importance for edge applications across the automotive and mobile sectors. Dominant GPUs now face competition from application-specific integrated circuits (ASICs) tailored for AI workloads.

A diverse group of disruptors challenges Nvidia's hegemony. Established innovators — Ampere, Cerebras, SambaNova, and Graphcore — target extensive infrastructure deployments. Ampere leverages ARM-based architectures for efficient cloud inference, while Cerebras bets on wafer-scale engines, minimizing chip-to-chip communication for massive model training.

Newer specialized firms like Groq, Tenstorrent, d-Matrix, Furiosa, Recogni, and Lambda Labs each tackle distinct aspects of AI acceleration. Groq focuses on deterministic, ultra-low latency inference. Tenstorrent develops programmable AI processors. d-Matrix leverages digital in-memory computing to minimize data movement, maximizing efficiency in inference tasks.

The integration of photonics into semiconductor designs promises faster, more energy-efficient AI hardware by moving data optically rather than electrically, easing the bandwidth and power limits of today's interconnects. This fundamental shift in semiconductor architecture not only addresses AI's voracious data demands but also sets a new direction for semiconductor innovation, reshaping the industry's long-term landscape.
