BBS:      TELESC.NET.BR
Subject:  Meta, Nvidia partner for AI project
From:     Mike Powell
Date:     Sun, 22 Feb 2026 11:05:53 -0500
-----------------------------------------------------------
'No one deploys AI at Meta's scale': Meta signs up Nvidia to power its next
big AI projects - so what exactly do Mark Zuckerberg and Jensen Huang have
planned?

By Efosa Udinmwen, published 20 hours ago

Initiative will combine Meta's production workloads with Nvidia's hardware
and software ecosystem

    Meta and Nvidia launch multiyear partnership for hyperscale AI
infrastructure
    Millions of Nvidia's GPUs and Arm-based CPUs will handle extreme
workloads
    Unified architecture spans data centers and Nvidia cloud partner
deployments

Meta has announced a multi-year partnership with Nvidia aimed at building
hyperscale AI infrastructure capable of handling some of the largest workloads
in the technology sector.  This collaboration will deploy millions of GPUs and
Arm-based CPUs, expand network capacity, and integrate advanced
privacy-preserving computing techniques across the company's platforms.

The initiative seeks to combine Meta's extensive production workloads with
Nvidia's hardware and software ecosystem to optimize performance and
efficiency.

Unified architecture across data centers

The two companies are creating a unified infrastructure architecture that spans
on-premises data centers and Nvidia cloud partner deployments.  This approach
simplifies operations while providing scalable, high-performance computing
resources for AI training and inference.

"No one deploys AI at Meta's scale - integrating frontier research with
industrial-scale infrastructure to power the world's largest personalization
and recommendation systems for billions of users," said Jensen Huang, founder
and CEO of Nvidia.

"Through deep codesign across CPUs, GPUs, networking and software, we are
bringing the full Nvidia platform to Meta's researchers and engineers as they
build the foundation for the next AI frontier."

Nvidia's GB300-based systems will form the backbone of these deployments.
They will offer a platform that integrates compute, memory, and storage to meet
the demands of next-generation AI models.

Meta is also expanding Nvidia Spectrum-X Ethernet networking throughout its
footprint and aims to deliver predictable, low-latency performance while
improving operational and energy efficiency for large-scale workloads.

Meta has begun adopting Nvidia Confidential Computing to support AI-powered
capabilities within WhatsApp, allowing machine learning models to process user
data while maintaining privacy and integrity.

The collaboration plans to extend this approach to other Meta services,
integrating privacy-enhanced AI techniques into multiple applications.  Meta
and Nvidia engineering teams are working closely to codesign AI models and
optimize software across the infrastructure stack.  By aligning hardware,
software, and workloads, the companies aim to improve performance per watt and
accelerate training for state-of-the-art models.

Large-scale deployment of Nvidia Grace CPUs is a core part of this effort; the
collaboration marks the first major Grace-only deployment at this scale.

Software optimizations in CPU ecosystem libraries are also being implemented to
improve throughput and energy efficiency for successive generations of AI
workloads.

"We're excited to expand our partnership with Nvidia to build leading-edge
clusters using their Vera Rubin platform to deliver personal superintelligence
to everyone in the world," said Mark Zuckerberg, founder and CEO of Meta.


https://www.techradar.com/pro/no-one-deploys-ai-at-metas-scale-meta-signs-up-nv
idia-to-power-its-next-big-ai-projects-so-what-exactly-do-mark-zuckerberg-and-j
ensen-huang-have-planned

$$
--- SBBSecho 3.28-Linux
 * Origin: Capitol City Online (1:2320/105)
