Nvidia Rubin AI Architecture Launch: Faster Training and Inference
Nvidia has unveiled its next major leap in artificial intelligence hardware with the official launch of its Rubin computing architecture at the Consumer Electronics Show. Announced by CEO Jensen Huang, Rubin is positioned as Nvidia’s most advanced AI platform to date, designed to confront the rapidly growing computational demands of modern AI systems. Production of the architecture is already underway, with further ramp-up expected in the second half of the year…
Covertly AI
Jan 10 · 3 min read


Google Launches Gemini 3 Flash: Default AI Model for Search and Gemini
Google has officially launched Gemini 3 Flash, a faster and more cost-efficient version of its Gemini 3 large language model, and is making it the default AI model across the Gemini app and Google Search’s AI Mode worldwide. Built on the same foundation as Gemini 3 Pro, Gemini 3 Flash is designed to deliver strong reasoning and multimodal performance while prioritizing speed, lower latency, and affordability. The release comes six months after Gemini 2.5 Flash…
Covertly AI
Dec 18, 2025 · 3 min read


Zyphra Breakthrough: Can ZAYA1 Prove AMD Is Ready to Rival NVIDIA?
Zyphra’s unveiling of ZAYA1 marks a significant moment in the AI hardware landscape, proving that large-scale model training can succeed outside NVIDIA’s well-established ecosystem. After a year of collaboration with AMD and IBM, Zyphra trained ZAYA1 entirely on AMD Instinct MI300X GPUs, Pensando networking, and the ROCm software stack, demonstrating that AMD’s platform can support major foundation models without exotic configurations or performance compromises…
Covertly AI
Nov 26, 2025 · 4 min read


Google Unveils Ironwood TPU to Power the New Age of AI Inference
Google’s latest innovation in artificial intelligence hardware marks a significant step toward the future of computing, as the company unveils its seventh-generation Tensor Processing Unit, dubbed Ironwood. These custom-built chips are designed specifically for artificial intelligence workloads and signal a major shift toward what Google calls the “age of inference.” Unlike Nvidia’s general-purpose GPUs, Google’s TPUs are application-specific integrated circuits…
Covertly AI
Nov 10, 2025 · 3 min read