We engineer tomorrow to build a better future.
Solutions to your liquid cooling challenges.
 
 
NVIDIA Blackwell Platform Arrives to Power a New Era of Computing
March 18, 2024


- New Blackwell GPU, NVLink and Resilience Technologies Enable Trillion-Parameter-Scale AI Models
- New Tensor Cores and TensorRT-LLM Compiler Reduce LLM Inference Operating Cost and Energy by up to 25x
- New Accelerators Enable Breakthroughs in Data Processing, Engineering Simulation, Electronic Design Automation, Computer-Aided Drug Design and Quantum Computing
- Widespread Adoption by Every Major Cloud Provider, Server Maker and Leading AI Company

GTC—Powering a new era of computing, NVIDIA today announced that the NVIDIA Blackwell platform has arrived — enabling organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor.

The Blackwell GPU architecture features six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing and generative AI — all emerging industry opportunities for NVIDIA.

“For three decades we’ve pursued accelerated computing, with the goal of enabling transformative breakthroughs like deep learning and AI,” said Jensen Huang, founder and CEO of NVIDIA. “Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry.”

Among the many organizations expected to adopt Blackwell are Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.

Sundar Pichai, CEO of Alphabet and Google: “Scaling services like Search and Gmail to billions of users has taught us a lot about managing compute infrastructure. As we enter the AI platform shift, we continue to invest deeply in infrastructure for our own products and services, and for our Cloud customers. We are fortunate to have a longstanding partnership with NVIDIA, and look forward to bringing the breakthrough capabilities of the Blackwell GPU to our Cloud customers and teams across Google, including Google DeepMind, to accelerate future discoveries.”

Andy Jassy, president and CEO of Amazon: “Our deep collaboration with NVIDIA goes back more than 13 years, when we launched the world’s first GPU cloud instance on AWS. Today we offer the widest range of GPU solutions available anywhere in the cloud, supporting the world’s most technologically advanced accelerated workloads. It's why the new NVIDIA Blackwell GPU will run so well on AWS and the reason that NVIDIA chose AWS to co-develop Project Ceiba, combining NVIDIA’s next-generation Grace Blackwell Superchips with the AWS Nitro System's advanced virtualization and ultra-fast Elastic Fabric Adapter networking, for NVIDIA's own AI research and development. Through this joint effort between AWS and NVIDIA engineers, we're continuing to innovate together to make AWS the best place for anyone to run NVIDIA GPUs in the cloud.”

Michael Dell, founder and CEO of Dell Technologies: “Generative AI is critical to creating smarter, more reliable and efficient systems. Dell Technologies and NVIDIA are working together to shape the future of technology. With the launch of Blackwell, we will continue to deliver the next-generation of accelerated products and services to our customers, providing them with the tools they need to drive innovation across industries.”

Demis Hassabis, cofounder and CEO of Google DeepMind: “The transformative potential of AI is incredible, and it will help us solve some of the world’s most important scientific problems. Blackwell’s breakthrough technological capabilities will provide the critical compute needed to help the world’s brightest minds chart new scientific discoveries.”

Mark Zuckerberg, founder and CEO of Meta: “AI already powers everything from our large language models to our content recommendations, ads, and safety systems, and it's only going to get more important in the future. We're looking forward to using NVIDIA's Blackwell to help train our open-source Llama models and build the next generation of Meta AI and consumer products.”

Satya Nadella, executive chairman and CEO of Microsoft: “We are committed to offering our customers the most advanced infrastructure to power their AI workloads. By bringing the GB200 Grace Blackwell processor to our datacenters globally, we are building on our long-standing history of optimizing NVIDIA GPUs for our cloud, as we make the promise of AI real for organizations everywhere.”

Sam Altman, CEO of OpenAI: “Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We’re excited to continue working with NVIDIA to enhance AI compute.”

Larry Ellison, chairman and CTO of Oracle: "Oracle’s close collaboration with NVIDIA will enable qualitative and quantitative breakthroughs in AI, machine learning and data analytics. In order for customers to uncover more actionable insights, an even more powerful engine like Blackwell is needed, which is purpose-built for accelerated computing and generative AI.”

Elon Musk, CEO of Tesla and xAI: “There is currently nothing better than NVIDIA hardware for AI.”

Named in honor of David Harold Blackwell — a mathematician who specialized in game theory and statistics, and the first Black scholar inducted into the National Academy of Sciences — the new architecture succeeds the NVIDIA Hopper™ architecture, launched two years ago.

Blackwell Innovations to Fuel Accelerated Computing and Generative AI
Blackwell’s six revolutionary technologies, which together enable AI training and real-time LLM inference for models scaling up to 10 trillion parameters, include:

World’s Most Powerful Chip — Packed with 208 billion transistors, Blackwell-architecture GPUs are manufactured using a custom-built 4NP TSMC process with two-reticle limit GPU dies connected by 10 TB/second chip-to-chip link into a single, unified GPU.
Second-Generation Transformer Engine — Fueled by new micro-tensor scaling support and NVIDIA’s advanced dynamic range management algorithms integrated into NVIDIA TensorRT™-LLM and NeMo Megatron frameworks, Blackwell will support double the compute and model sizes with new 4-bit floating point AI inference capabilities.
Fifth-Generation NVLink — To accelerate performance for multitrillion-parameter and mixture-of-experts AI models, the latest iteration of NVIDIA NVLink® delivers groundbreaking 1.8TB/s bidirectional throughput per GPU, ensuring seamless high-speed communication among up to 576 GPUs for the most complex LLMs.
RAS Engine — Blackwell-powered GPUs include a dedicated engine for reliability, availability and serviceability. Additionally, the Blackwell architecture adds capabilities at the chip level to utilize AI-based preventative maintenance to run diagnostics and forecast reliability issues. This maximizes system uptime and improves resiliency for massive-scale AI deployments to run uninterrupted for weeks or even months at a time and to reduce operating costs.
Secure AI — Advanced confidential computing capabilities protect AI models and customer data without compromising performance, with support for new native interface encryption protocols, which are critical for privacy-sensitive industries like healthcare and financial services.
Decompression Engine — A dedicated decompression engine supports the latest formats, accelerating database queries to deliver the highest performance in data analytics and data science. In the coming years, data processing, on which companies spend tens of billions of dollars annually, will be increasingly GPU-accelerated.
A Massive Superchip
The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.

For the highest AI performance, GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum™-X800 Ethernet platforms, also announced today, which deliver advanced networking at speeds up to 800Gb/s.

The GB200 is a key component of the NVIDIA GB200 NVL72, a multi-node, liquid-cooled, rack-scale system for the most compute-intensive workloads. It combines 36 Grace Blackwell Superchips, which include 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. Additionally, GB200 NVL72 includes NVIDIA BlueField®-3 data processing units to enable cloud network acceleration, composable storage, zero-trust security and GPU compute elasticity in hyperscale AI clouds. The GB200 NVL72 provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads, and reduces cost and energy consumption by up to 25x.
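The rack-level counts above follow directly from the per-superchip composition. A minimal sketch (hypothetical helper, not an NVIDIA tool; figures taken from this release) tallying the GB200 NVL72 configuration:

```python
# Each GB200 Grace Blackwell Superchip pairs 2 B200 Tensor Core GPUs
# with 1 Grace CPU; a GB200 NVL72 rack combines 36 such superchips.
GPUS_PER_SUPERCHIP = 2
CPUS_PER_SUPERCHIP = 1
SUPERCHIPS_PER_NVL72 = 36

def nvl72_totals(superchips: int = SUPERCHIPS_PER_NVL72) -> dict:
    """Return total GPU and CPU counts for a rack of GB200 superchips."""
    return {
        "gpus": superchips * GPUS_PER_SUPERCHIP,
        "cpus": superchips * CPUS_PER_SUPERCHIP,
    }

print(nvl72_totals())  # {'gpus': 72, 'cpus': 36}
```

The result matches the 72 Blackwell GPUs and 36 Grace CPUs quoted for the NVL72 system.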

The platform acts as a single GPU with 1.4 exaflops of AI performance and 30TB of fast memory, and is a building block for the newest DGX SuperPOD.

NVIDIA offers the HGX B200, a server board that links eight B200 GPUs through NVLink to support x86-based generative AI platforms. HGX B200 supports networking speeds up to 400Gb/s through the NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking platforms.

Global Network of Blackwell Partners
Blackwell-based products will be available from partners starting later this year.

AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to offer Blackwell-powered instances, as will NVIDIA Cloud Partner program companies Applied Digital, CoreWeave, Crusoe, IBM Cloud, Lambda and Nebius. Sovereign AI clouds will also provide Blackwell-based cloud services and infrastructure, including Indosat Ooredoo Hutchinson, Nexgen Cloud, Oracle EU Sovereign Cloud, the Oracle US, UK, and Australian Government Clouds, Scaleway, Singtel, Northern Data Group's Taiga Cloud, Yotta Data Services’ Shakti Cloud and YTL Power International.

GB200 will also be available on NVIDIA DGX™ Cloud, an AI platform co-engineered with leading cloud service providers that gives enterprise developers dedicated access to the infrastructure and software needed to build and deploy advanced generative AI models. AWS, Google Cloud and Oracle Cloud Infrastructure plan to host new NVIDIA Grace Blackwell-based instances later this year.

Cisco, Dell, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of servers based on Blackwell products, as are Aivres, ASRock Rack, ASUS, Eviden, Foxconn, GIGABYTE, Inventec, Pegatron, QCT, Wistron, Wiwynn and ZT Systems.

Additionally, a growing network of software makers, including Ansys, Cadence and Synopsys — global leaders in engineering simulation — will use Blackwell-based processors to accelerate their software for designing and simulating electrical, mechanical and manufacturing systems and parts. Their customers can use generative AI and accelerated computing to bring products to market faster, at lower cost and with higher energy efficiency.

NVIDIA Software Support
The Blackwell product portfolio is supported by NVIDIA AI Enterprise, the end-to-end operating system for production-grade AI. NVIDIA AI Enterprise includes NVIDIA NIM™ inference microservices — also announced today — as well as AI frameworks, libraries and tools that enterprises can deploy on NVIDIA-accelerated clouds, data centers and workstations.

To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at GTC, which runs through March 21.

Media Contacts
Kristin Uchiyama
Enterprise and Edge Computing
+1-408-486-2248
kuchiyama@nvidia.com
About NVIDIA
Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing infrastructure company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features, and availability of NVIDIA’s products and technologies, including NVIDIA Blackwell platform, Blackwell GPU architecture, Resilience Technologies, Custom Tensor Core technology, NVIDIA TensorRT-LLM, NeMo Megatron framework, NVLink, NVIDIA GB200 Grace Blackwell Superchip, B200 Tensor Core GPUs, NVIDIA Grace CPU, NVIDIA H100 Tensor Core GPU, NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms, NVIDIA GB200 NVL72, NVIDIA BlueField-3 data processing units, DGX SuperPOD, HGX B200, Quantum-2 InfiniBand and Spectrum-X Ethernet platforms, BlueField-3 DPUs, NVIDIA DGX Cloud, NVIDIA AI Enterprise, and NVIDIA NIM inference microservices; our goal of enabling transformative breakthroughs like deep learning and AI; Blackwell GPUs being the engine to power a new industrial revolution; our ability to realize the promise of AI for every industry as we work with the most dynamic companies in the world; our collaborations and partnerships with third parties and the benefits and impacts thereof; third parties who will offer or use our products, services and infrastructures and who will deliver servers based on our products; and the ability of the customers of global leaders in engineering simulation to use generative AI and accelerated computing to bring products to market faster, at lower cost and with higher energy efficiency are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations.
Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2024 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, DGX, NVIDIA HGX, NVIDIA Hopper, NVIDIA NeMo, NVIDIA NIM, NVIDIA Spectrum, NVLink, and TensorRT are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

 

About Us

Beijing Hansen Fluid Technology Co., Ltd. is a contracted distributor for Danfoss data center products in China. Our offerings include the FD83 full-flow self-locking ball-valve coupling, UQD-series liquid-cooling quick disconnects, EHW194 EPDM liquid-cooling hose, solenoid valves, pressure and temperature sensors, and manifold production and integration services. Positioned at the intersection of China's national strategies for the digital economy, the "East Data, West Computing" initiative, dual-carbon goals, and new infrastructure, the company focuses on building a highly qualified, experienced team of liquid-cooling engineers to deliver outstanding engineering design and strong customer service.

Product range: Danfoss liquid-cooling fluid connectors, EPDM hoses, solenoid valves, pressure and temperature sensors, and manifolds.
Future development plan: to become a data center liquid-cooling infrastructure solutions provider with in-house R&D, design, and manufacturing capabilities for coolant distribution units (CDU), secondary fluid networks (SFN), and manifolds.


- For rack-mounted server applications such as manifold/node and CDU/primary-loop connections, we offer manual and fully automatic quick disconnects in a range of bore sizes and locking mechanisms.
- For blade racks with high-availability and high-density requirements, we offer blind-mate connectors with floating, self-correcting misalignment compensation for precise mating in confined spaces.
- The new UQD/UQDB universal quick disconnects, built to the OCP standard, will also make their debut, with support for high-volume delivery worldwide.

 

 

Beijing Hansen Fluid Technology Co., Ltd. (Hansen Fluid)
Danfoss Authorized Distributor in China

Address: Room 2115, Tower 1C, Wangjing SOHO, 10 Wangjing Street, Chaoyang District, Beijing
Postal code: 100102
Tel: 010-8428 2935, 8428 3983, 13910962635
Mobile: 15801532751, 17310484595, 13910122694, 13011089770, 15313809303
Web: http://www.hansenfluid.com
E-mail: sales@cnmec.biz

Fax: 010-8428 8762

京ICP备2023024665号
京公网安备 11010502019740

Since 2007 Strong Distribution & Powerful Partnerships