As AI workload demands continue to accelerate, Cloud Service Providers, System OEMs, and IP/Silicon vendors require a scalable, high-performance interconnect to support advanced workloads. The UALink 200G 1.0 Specification delivers a low-latency, high-bandwidth interconnect designed for efficient communication between accelerators and switches within AI computing pods, enhancing performance, optimizing power and cost efficiency, and promoting interoperability and supply chain diversity.
Room 201

Nafea Bshara

Amber Huffman
Amber Huffman is a Principal Engineer in Google Cloud responsible for leading industry engagement efforts in the data center ecosystem across servers, storage, networking, accelerators, power, cooling, security, and more. Before joining Google, she spent 25 years at Intel, serving as an Intel Fellow and VP. Amber is the President of NVM Express, serves on the Boards of Directors of the Open Compute Project Foundation (OCP) and the Ultra Accelerator Link (UALink) Consortium, and chairs the RISC-V Software Ecosystem (RISE) Project. She has led numerous industry standards to successful adoption, including NVM Express, Open NAND Flash Interface, and Serial ATA.
UALink Consortium
Website: https://ualinkconsortium.org/
The Ultra Accelerator Link (UALink) Consortium, incorporated in October 2024, is the open industry standard group dedicated to developing the UALink specifications, which define a high-speed, scale-up accelerator interconnect technology that advances next-generation AI and HPC cluster performance. The consortium is led by a board made up of stalwarts of the industry: Alibaba, AMD, Apple, Astera Labs, AWS, Cisco, Google, HPE, Intel, Meta, Microsoft, and Synopsys. The Consortium develops technical specifications that facilitate breakthrough performance for emerging AI usage models while supporting an open ecosystem for data center accelerators. For more information on the UALink Consortium, please visit www.UALinkConsortium.org.