AMD “Helios”: Advancing Openness in AI Infrastructure Built on Meta’s 2025 OCP Open Rack for AI Design
Oct 14, 2025

Introduction
Today at the Open Compute Project (OCP) Global Summit in San Jose, California, Meta introduced specifications for a new Open Rack for AI featuring an Open Rack Wide (ORW) form factor — marking a major leap forward in open infrastructure innovation.

Designed to meet the realities of AI-scale data centers, the ORW specification defines an open, double-wide rack optimized for the power, cooling, and serviceability demands of next generation AI systems. It represents a foundational shift toward standardized, interoperable, and scalable data center design across the industry.

AMD is proud to align with Meta and the Open Compute Project community in advancing this vision through “Helios” — the most advanced rack-scale reference system from AMD, built fully on the ORW open standards. “Helios” extends the AMD philosophy of openness from silicon to system to rack to large-scale clusters, bringing to life the open hardware principles that underpin the ORW specification.

“Helios”: Turning Open Standards into Rack-Scale Reality
The AMD “Helios” AI rack is built on the blueprint of open design submitted by Meta at OCP 2025 to enable optimized, deployable performance across AI data centers. Built around the next-generation AMD Instinct™ MI450 Series GPUs, “Helios” redefines what open, rack-scale AI infrastructure can achieve.

Powered by the AMD CDNA™ architecture, each MI450 Series GPU delivers up to 432 GB of HBM4 memory and 19.6 TB/s of memory bandwidth, providing industry-leading capacity and bandwidth for data-hungry AI models. At rack scale, a “Helios” system with 72 MI450 Series GPUs delivers up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 performance, with 31 TB of total HBM4 memory and 1.4 PB/s of aggregate bandwidth — a generational leap that enables trillion-parameter training and large-scale AI inference.
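The rack-level totals quoted above follow directly from the per-GPU figures. As a minimal sanity check of that arithmetic (the inputs are AMD's published per-GPU specs; decimal units, i.e. 1 TB = 1000 GB, are assumed):

```python
# Sanity-check the "Helios" rack-level totals from the per-GPU MI450 figures
# quoted in the text. These are published spec numbers, not measurements.
GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432      # GB of HBM4 per MI450 Series GPU
BW_PER_GPU_TBS = 19.6      # TB/s of memory bandwidth per GPU

total_hbm4_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000  # GB -> TB (decimal)
total_bw_pbs = GPUS_PER_RACK * BW_PER_GPU_TBS / 1000    # TB/s -> PB/s

print(f"Total HBM4: {total_hbm4_tb:.1f} TB")            # ~31 TB, as stated
print(f"Aggregate bandwidth: {total_bw_pbs:.2f} PB/s")  # ~1.4 PB/s, as stated
```

Both totals round to the "31 TB" and "1.4 PB/s" figures AMD cites; the exaFLOPS numbers are AMD's stated peaks and are not derived here.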

“Helios” also features up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, helping ensure seamless communication across GPUs, nodes, and racks. “Helios” delivers up to 17.9× higher performance compared to previous generations [1], while offering 50% more memory capacity and bandwidth than NVIDIA’s Vera Rubin system.

“Helios” Rack on Display at OCP 2025 Conference
This is the first rack-scale design from AMD engineered specifically for frontier AI workloads, providing hyperscalers and enterprises a future-proof, open-standards-based platform that unites power, flexibility, and interoperability.

Rack-scale systems like “Helios” are essential for the next generation of AI, where performance depends on efficient communication across thousands of accelerators. AMD leadership in open standards such as the Open Compute Project (OCP), Ultra Accelerator Link (UALink™), and Ultra Ethernet Consortium (UEC) helps ensure that this scaling happens through industry collaboration — enabling open, high-performance fabrics for both scale-up and scale-out AI clusters. Together, these efforts define the path toward interoperable, energy-efficient infrastructure built for the AI era.

Driving Open Innovation Across the Ecosystem
The “Helios” rack is more than a hardware reference — it’s a collaboration blueprint for the AI ecosystem.

Built on the ORW specification submitted by Meta to OCP, “Helios” enables OEM and ODM partners to:

Adopt and extend the “Helios” reference design, accelerating time-to-market for new AI systems.
Integrate AMD Instinct™ GPUs, EPYC™ CPUs, and Pensando™ DPUs with their own differentiated solutions.
Participate in an open, standards-based ecosystem that drives interoperability, scalability, and long-term innovation.
By aligning around the ORW specification, the industry gains a shared, open foundation for rack-scale AI deployments — reducing fragmentation and removing the inefficiencies of proprietary, one-off designs.

Purpose-Built for Modern Data Center Realities
AI data centers are evolving rapidly, demanding architectures that deliver greater performance, efficiency, and serviceability at scale. “Helios” is purpose-built to meet these needs with innovations that simplify deployment, improve manageability, and sustain performance in dense AI environments.

Higher scale-out throughput and HBM bandwidth compared to previous generations enable faster model training and inference.
Double-wide layout reduces weight density and improves serviceability.
Standards-based Ethernet scale-out ensures multipath resiliency and seamless interoperability.
Backside quick-disconnect liquid cooling provides sustained, efficient thermal performance at high density.
Together, these features make the AMD “Helios” Rack a deployable, production-ready system for customers scaling to exascale AI — delivering breakthrough performance with operational efficiency and sustainability.

Enabling Openness in the AI Infrastructure Revolution
With “Helios”, AMD extends its open hardware and software leadership to the rack level — uniting silicon innovation with open, industry-driven design principles.

For OEMs and ODMs, “Helios” provides a ready-made, OCP-aligned system to build differentiated AI infrastructure.

For customers, it means faster deployment, lower risk, and more flexibility in how they scale compute for AI, HPC, and sovereign initiatives.

On Track for 2026
“Helios” is currently being released as a reference design to OEM and ODM partners, with volume deployment expected in 2026. As an open, OCP-aligned design, “Helios” creates new opportunities for the ecosystem to collaborate on the future of AI infrastructure — one built on openness, interoperability, and shared innovation.

Built on the ORW specifications submitted by Meta to the Open Compute Project, “Helios” embodies AMD’s commitment to open, collaborative innovation — shaping the next phase of AI infrastructure and proving that when the industry builds together, everyone accelerates.

Footnotes
1. Based on engineering projections by AMD Performance Labs in September 2025, to estimate the peak theoretical precision performance of seventy-two (72) AMD Instinct™ MI450 Series GPUs (rack) using the FP4 dense matrix datatype vs. an 8x GPU AMD Instinct MI355X platform using the FP6 dense matrix datatype. Results subject to change when products are released in market. MI350-047A

 

- “Helios” is built on the new Open Rack Wide (ORW) standard contributed by Meta and represents a major leap forward for open AI infrastructure. Powered by AMD Instinct GPUs, EPYC CPUs, and Pensando networking, it is designed to meet the demands of next-generation AI workloads with strong performance and scalability. Meta introduced the double-wide Open Rack Wide rack standard, and AMD's GPUs provide its first showcase implementation.

The ORW standard comes from Meta, and AMD's Helios double-wide rack adopts it; on the AI server hardware front, Meta remains out in front.

The ORW double-wide rack still carries 18 compute trays with 4 GPUs each, for 72 GPUs in total; the switch-tray count drops to 6, unlike the NVL72's 9 switch trays. Why go double-wide while keeping 72 GPUs? Conversations with Meta engineers suggest the double-wide design trades space for serviceability: compute trays and switch trays can be swapped by simply sliding them out, and the extra volume makes hardware design easier. Power delivery and cooling layouts are simpler than in a cramped single-wide rack, so the system is easier to maintain.

Headline GPU specifications of the Helios rack:
1. 72 MI450 GPUs
2. 432 GB of HBM4 per GPU, with 19.6 TB/s of memory bandwidth
3. 1.4 EFLOPS of FP8 compute and 31 TB of HBM4 capacity across the 72 GPUs

Helios's first customer is Oracle; shipments are expected in Q3 2026, with volume production in 2027.

 

- First, we introduced the groundbreaking “Helios” rack-scale platform. This open AI reference platform, built on the new Open Rack Wide (ORW) standard that Meta contributed to the Open Compute Project Foundation, is a game changer. Powered by a combination of AMD Instinct GPUs, EPYC CPUs, and Pensando networking, Helios is designed to deliver the performance, efficiency, and scalability that next-generation AI workloads require.

Building on this, we are expanding our partnership with Oracle, which will deploy an AI supercluster on Oracle Cloud Infrastructure (OCI) built on 50,000 AMD Instinct MI450 Series GPUs and Helios.

 

 

About Us

Beijing Hansen Fluid Technology Co., Ltd. is an authorized distributor of Danfoss China, specializing in the data center industry. Our product portfolio includes Danfoss FD83 full-flow double-interlock liquid cooling quick-disconnect couplings (equipped with interlocking ball valves); universal liquid cooling quick-disconnect couplings UQD & UQDB; OCP ORV3 blind-mate quick-disconnect couplings BMQC; EHW194 EPDM liquid cooling hoses; solenoid valves; and pressure/temperature sensors. Amid the convergence of strategic trends such as artificial intelligence (AI), China’s national digital economy, the “Eastern Data and Western Computing” initiative, the “dual carbon” goals, and new infrastructure development, we are committed to building a high-caliber, experienced team of liquid cooling engineers. We deliver exceptional engineering design, robust customer service, and support global large-scale deployment.

Products: Danfoss liquid cooling fluid connectors, EPDM hoses, solenoid valves, pressure/temperature sensors, and manifolds.
Development Plan: Our goal is to become a leading provider of liquid cooling infrastructure solutions for data centers, with professional R&D, design, and manufacturing capabilities for cooling distribution units (CDUs), secondary fluid networks (SFNs), and manifolds.

- We offer manual and fully automatic quick-disconnect couplings in a range of bore sizes and locking mechanisms, suitable for applications such as manifold/node and CDU/main-circuit connections in rack-mounted servers.
- For blade racks requiring high availability and density, we provide blind-mate connectors with floating functionality and automatic misalignment correction—enabling precise docking in confined spaces.
- Our newly developed universal liquid cooling quick-disconnect couplings UQD & UQDB (based on OCP standards) and OCP ORV3 blind-mate quick-disconnect couplings BMQC support global large-scale deployment.
- The new NVQD series of liquid cooling quick-disconnect couplings features a more compact design, including models NVQD02 (H20), NVQD03 (Blackwell B300, GB300), and NVQD04.
- Our business range includes liquid-cooling servers, liquid cooling plates, CDU (Cooling Distribution Units), liquid cooling connectors, piping, manifold, liquid cooling pumps and valves, heat exchangers, cooling towers, leak detection systems, liquid cooling modules, filters, laser welding, cleanliness testing, and more.

 

 
Beijing Hansen Fluid Technology Co., Ltd. (Hansen Fluid)
Danfoss Authorized Distributor in China

Address: Room 2115, Tower 1C, Wangjing SOHO, 10 Wangjing Street, Chaoyang District, Beijing
Postal code: 100102
Tel: 010-8428 2935, 8428 3983, 13910962635
Mobile: 15801532751, 17310484595, 13910122694, 13011089770, 15313809303
Web: http://www.hansenfluid.com
E-mail: sales@cnmec.biz
Fax: 010-8428 8762

ICP filing: 京ICP备2023024665号
Beijing public security filing: 京公网安备 11010502019740

Since 2007 Strong Distribution & Powerful Partnerships