Nvidia shows off Blackwell server installations in progress — AI and data center roadmap has Blackwell Ultra coming next year with Vera CPUs and Rubin GPUs in 2026
Nvidia also emphasized that it sees Blackwell and Rubin as platforms and not just GPUs

By Jarred Walton published 2 days ago


 

 

Nvidia's product roadmap revealed: what it means for Blackwell and Rubin

Prior to the start of the Hot Chips 2024 tradeshow, Nvidia showed off more elements of its Blackwell platform, including servers being installed and configured. It's a less than subtle way of saying that Blackwell is still coming, never mind the delays. Nvidia also talked about its existing Hopper H200 solutions, showed FP4 LLM optimizations using its new Quasar Quantization System, discussed warm water liquid cooling for data centers, and talked about using AI to help build even better chips for AI. It reiterated that Blackwell is more than just a GPU; it's an entire platform and ecosystem.

Much of what Nvidia presented at Hot Chips 2024 is already known, like the data center and AI roadmap showing Blackwell Ultra coming next year, with Vera CPUs and Rubin GPUs in 2026, followed by Vera Ultra in 2027. Nvidia first confirmed those details at Computex back in June. But AI remains a big topic, and Nvidia is more than happy to keep beating the AI drum.

While Blackwell was reportedly delayed three months, Nvidia neither confirmed nor denied that information, instead opting to show images of Blackwell systems being installed, along with photos and renders showing more of the internal hardware in the Blackwell GB200 racks and NVLink switches. There's not much to say, other than that the hardware looks like it can draw a lot of power and has some pretty robust cooling. It also looks very expensive.

Nvidia also showed some performance results from its existing H200, running with and without NVSwitch. It says performance can be up to 1.5X higher on inference workloads compared to running point-to-point designs; that was using a Llama 3.1 70B parameter model. Blackwell doubles the NVLink bandwidth to offer further improvements, with an NVLink Switch Tray offering an aggregate 14.4 TB/s of total bandwidth.
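
As a rough sanity check on those figures, here is a minimal arithmetic sketch in Python. The per-GPU NVLink rates are Nvidia's published numbers for Hopper and Blackwell; the two-switch-chips-per-tray split is an assumption used only to show how the 14.4 TB/s aggregate could break down.

# Rough arithmetic around Nvidia's stated NVLink figures.
# Per-GPU rates are Nvidia's published numbers; the switch-chip count per
# tray is an assumption used only to illustrate the 14.4 TB/s aggregate.

HOPPER_NVLINK_TB_S = 0.9       # NVLink 4 on H100/H200: 900 GB/s per GPU
BLACKWELL_NVLINK_TB_S = 1.8    # NVLink 5 on Blackwell: doubled per GPU

TRAY_AGGREGATE_TB_S = 14.4     # NVLink Switch Tray aggregate quoted by Nvidia
SWITCH_CHIPS_PER_TRAY = 2      # assumption: two NVLink switch ASICs per tray

ratio = BLACKWELL_NVLINK_TB_S / HOPPER_NVLINK_TB_S
print(f"Per-GPU NVLink: {HOPPER_NVLINK_TB_S} -> {BLACKWELL_NVLINK_TB_S} TB/s ({ratio:.0f}x)")

per_chip = TRAY_AGGREGATE_TB_S / SWITCH_CHIPS_PER_TRAY
print(f"Implied per switch chip: {per_chip} TB/s")             # 7.2 TB/s

links = TRAY_AGGREGATE_TB_S / BLACKWELL_NVLINK_TB_S
print(f"Full-rate Blackwell GPU links per tray: {links:.0f}")  # 8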

Because data center power requirements keep increasing, Nvidia is also working with partners to boost performance and efficiency. One of the more promising results is warm water cooling, where the heated water can potentially be recirculated for heating to further reduce costs. Nvidia claims it has seen up to a 28% reduction in data center power use with the technique, with a large portion of that coming from the removal of below-ambient cooling hardware.
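
To see where a reduction of that size could come from, here is a minimal sketch of the facility-power arithmetic. The overhead fractions below are illustrative assumptions, not Nvidia's numbers; only the 28% headline figure comes from the article.

# Illustrative facility-power arithmetic for warm water cooling.
# The overhead fractions are assumptions chosen to show the mechanism:
# removing chillers (below-ambient cooling) cuts the non-IT overhead.

it_load_mw = 10.0            # assumed IT (server) load

chiller_overhead = 0.45      # assumed: chillers + air handlers, below-ambient
warm_water_overhead = 0.07   # assumed: pumps + dry coolers only, no chillers

facility_chilled = it_load_mw * (1 + chiller_overhead)
facility_warm = it_load_mw * (1 + warm_water_overhead)

savings = 1 - facility_warm / facility_chilled
print(f"Chilled-water facility power: {facility_chilled:.1f} MW")
print(f"Warm-water facility power:    {facility_warm:.1f} MW")
print(f"Reduction: {savings:.0%}")   # about 26% with these assumed overheads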

Above you can see the full slide deck from Nvidia's presentation. There are a few other interesting items of note.

To prepare for Blackwell, which adds native FP4 support that can further boost performance, Nvidia has worked to ensure its latest software benefits from the new hardware features without sacrificing accuracy. After using its Quasar Quantization System to tune workload results, Nvidia is able to deliver essentially the same quality as FP16 while using one quarter as much bandwidth. The two generated bunny images may vary in minor ways, but that's pretty typical of text-to-image tools like Stable Diffusion.
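
Nvidia has not published the internals of the Quasar Quantization System, but the basic mechanics of block-scaled 4-bit quantization, and why it needs roughly a quarter of the bandwidth of FP16, can be sketched in Python. This uses simple symmetric integer quantization as a stand-in for FP4; it illustrates the general technique, not Nvidia's method.

import numpy as np

# Minimal sketch of block-scaled 4-bit quantization as a stand-in for FP4.
# Not Nvidia's Quasar Quantization System (whose details are not public);
# it only shows why 4-bit values need ~1/4 the bandwidth of FP16.

def quantize_4bit(weights, block=32):
    """Quantize FP16 weights to signed 4-bit integers, one scale per block."""
    w = weights.reshape(-1, block).astype(np.float32)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # signed 4-bit: -8..7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float16)

q, s = quantize_4bit(w)
w_hat = dequantize(q, s)

fp16_bytes = w.size * 2                 # two bytes per FP16 value
fp4_bytes = w.size // 2 + s.size * 2    # two 4-bit values per byte, plus scales
print(f"FP16: {fp16_bytes} B, packed 4-bit + scales: {fp4_bytes} B "
      f"(~{fp16_bytes / fp4_bytes:.1f}x smaller)")
print(f"Max abs error: {np.abs(w.astype(np.float32) - w_hat).max():.4f}")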

Nvidia also talked about using AI tools to design better chips: AI building AI, turtles all the way down. Nvidia created an LLM for internal use that helps speed up design, debug, analysis, and optimization. It works with the Verilog language used to describe circuits, and it was a key factor in the creation of the 208-billion-transistor Blackwell B200 GPU. This will in turn be used to create even better models, enabling Nvidia to work on the next-generation Rubin GPUs and beyond. [Feel free to insert your own Skynet joke at this point.]

Wrapping things up, we have a better quality image of Nvidia's AI roadmap for the next several years, which again defines the "Rubin platform" with switches and interlinks as an entire package. Nvidia will present more details on the Blackwell architecture, using generative AI for computer-aided engineering, and liquid cooling at the Hot Chips conference next week.

 

Reference link

https://www.tomshardware.com/tech-industry/artificial-intelligence/nvidia-shows-off-blackwell-server-installations-in-progress-ai-and-data-center-roadmap-has-blackwell-ultra-coming-next-year-with-vera-cpus-and-rubin-gpus-in-2026

END

 


 

About Us

Beijing Hansen Fluid Technology Co., Ltd. is a contracted distributor for Danfoss data center products in China. Our offering includes the FD83 full-flow self-locking ball valve coupling, the UQD series of liquid cooling quick disconnects, EHW194 EPDM liquid cooling hose, solenoid valves, pressure and temperature sensors, and manifold production and integration services. At the intersection of China's digital economy, "East Data, West Computing", dual carbon, and new infrastructure strategies, the company focuses on building a highly qualified and experienced team of liquid cooling engineers to provide customers with excellent engineering design and strong customer service.

The company's product range covers Danfoss liquid cooling fluid connectors, EPDM hoses, solenoid valves, pressure and temperature sensors, and manifolds.
Future development plan: to become a manufacturer of data center liquid cooling infrastructure solutions, with professional R&D, design, and manufacturing capabilities for coolant distribution units (CDU), secondary fluid networks (SFN), and manifolds.


- For manifold/node and CDU/primary-loop applications in rack servers, we offer manual and fully automatic quick disconnects in a range of sizes and locking mechanisms.
- For blade racks with high-availability and high-density requirements, we offer blind-mate connectors with floating, self-correcting misalignment compensation for precise mating in confined spaces.
- Our new UQD/UQDB universal quick disconnects, built to the OCP standard, will also make their debut, with support for high-volume delivery worldwide.

 

 

 

Beijing Hansen Fluid Technology Co., Ltd. (Hansen Fluid)
Danfoss Authorized Distributor in China

Address: Room 2115, Tower 1C, Wangjing SOHO, 10 Wangjing Street, Chaoyang District, Beijing
Postal code: 100102
Tel: 010-8428 2935, 8428 3983, 13910962635
Mobile: 15801532751, 17310484595, 13910122694, 13011089770, 15313809303
Web: http://www.hansenfluid.com
E-mail: sales@cnmec.biz

Fax: 010-8428 8762

Beijing ICP Registration No. 2023024665
Beijing Public Network Security Registration No. 11010502019740

Since 2007 Strong Distribution & Powerful Partnerships