Highlights:
- Google Cloud has not yet provided a comprehensive overview of Axion's architecture, such as its core count and the size of its onboard cache.
- Titanium, Google's infrastructure optimization system, offloads specific tasks from Axion, freeing up compute capacity for customer workloads. According to the search giant, Titanium comprises three internally developed chips.
Google LLC’s cloud division recently unveiled Axion, an internally developed central processing unit (CPU) built on a core design from Arm Holdings plc.
The CPU debuted at the Cloud Next conference in Las Vegas during the keynote address delivered by Google Cloud Chief Executive Thomas Kurian. At the event, the cloud unit also announced the general availability of TPU v5p, its latest artificial intelligence accelerator. The newer chip can deliver twice the performance of its predecessor when processing floating-point numbers, which AI models commonly utilize.
“Amdahl’s Law suggests that as accelerators continue to improve, general purpose compute will dominate the cost and limit the capability of our infrastructure unless we make commensurate investments to keep up,” Google Cloud machine learning, systems, and Cloud AI Vice President Amin Vahdat blogged.
Arm-based architecture
Google Cloud has not yet provided a comprehensive overview of Axion’s architecture, including details such as the number of cores it incorporates and the size of its onboard cache. The company informed a prominent media outlet that further details about the chip’s design will be disclosed later this year. However, Google Cloud has revealed that Axion is based on Arm’s Neoverse V2 CPU core design.
Introduced in 2022, Neoverse V2 is tailored for cloud data centers and other demanding high-performance computing environments. It doubles the speed of Arm’s previous-generation core design, an advance achieved through optimizations that accelerate the processing of integers, which are fundamental data units used across a wide range of computational tasks.
Processors built on the Neoverse V2 can be configured with up to 256 cores and 512 megabytes of cache. They can also use ARMv9, Arm’s latest instruction set architecture. A chip’s instruction set architecture encompasses the machine language in which computations are expressed, along with certain associated technologies.
One of the ARMv9 features supported by Neoverse V2 is a cybersecurity mechanism known as the Memory Tagging Extension. According to Arm, the Memory Tagging Extension divides the memory attached to a chip into 16-byte segments and adds four extra bits to each segment, which function as a lock. Only applications granted access to a memory segment can open the lock, diminishing the risk of hacking.
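Real MTE requires Arm hardware support, but the lock-and-key idea described above can be sketched in plain Python: a hypothetical allocator stamps each 16-byte segment with a 4-bit tag, and a pointer must carry a matching tag to touch that memory. The class and method names here are illustrative assumptions, not Arm's API.

```python
# Conceptual simulation of the Memory Tagging Extension's lock-and-key
# check. Real MTE is enforced in hardware on Armv9 chips; this sketch
# only illustrates the idea with hypothetical names.
import random

GRANULE = 16   # MTE tags memory in 16-byte segments
TAG_BITS = 4   # each segment carries a 4-bit tag (the "lock")

class TaggedMemory:
    def __init__(self, size: int):
        self.data = bytearray(size)
        self.tags = [0] * (size // GRANULE)  # one tag per segment

    def allocate(self, offset: int, length: int):
        """Stamp the segments of an allocation with a random nonzero tag
        and return a 'pointer' carrying that tag (the "key")."""
        tag = random.randrange(1, 1 << TAG_BITS)
        for g in range(offset // GRANULE, (offset + length - 1) // GRANULE + 1):
            self.tags[g] = tag
        return (offset, tag)

    def load(self, pointer):
        """The hardware check: the pointer's tag must match the segment's
        tag, or the access is refused."""
        offset, tag = pointer
        if self.tags[offset // GRANULE] != tag:
            raise MemoryError("tag mismatch: access blocked")
        return self.data[offset]

mem = TaggedMemory(256)
ptr = mem.allocate(0, 32)
assert mem.load(ptr) == 0          # matching tag: access succeeds
try:
    mem.load((0, ptr[1] ^ 1))      # mismatched tag: access is blocked
except MemoryError:
    pass
```

A buggy or malicious access that reuses a stale pointer after the memory has been re-tagged trips the same mismatch, which is how MTE helps catch use-after-free and buffer-overflow bugs.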
Neoverse V2 also supports ARMv9’s PDP, or performance-defined power, feature. This capability improves a CPU’s power efficiency by scaling down its maximum performance.
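Why trading a little peak performance buys disproportionate power savings can be shown with a back-of-the-envelope model: dynamic CPU power scales roughly with voltage squared times frequency, and voltage tends to track frequency, giving power roughly proportional to frequency cubed. This cubic rule of thumb is a common approximation, not Arm's published model of PDP.

```python
# Rough model of the performance/power trade-off that capping peak
# performance exploits. The cubic power-vs-frequency rule of thumb is an
# illustrative assumption, not Arm's actual PDP mechanism.
def relative_dynamic_power(freq_scale: float) -> float:
    """Approximate dynamic power at a capped clock, relative to running
    at full frequency (power ~ f^3 under frequency/voltage scaling)."""
    return freq_scale ** 3

# Giving up 10% of peak frequency cuts modeled dynamic power by ~27%.
savings = 1 - relative_dynamic_power(0.9)
```

Under this model, a modest performance cap yields an outsized efficiency gain, which is the intuition behind the feature.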
Custom cloud chips
In Google Cloud’s data centers, Axion processors will be deployed alongside an infrastructure optimization system known as Titanium. Titanium offloads specific tasks from Axion, freeing up more compute capacity for customer workloads. According to the search giant, Titanium comprises three internally developed chips.
The system utilizes a microcontroller called Titan as a root of trust for Axion. A root of trust is a hardware module designed to prevent hackers from injecting malicious code into a server during the boot process. According to Google Cloud, Titan also aids in securing network traffic within its data centers.
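Google has not published Titan's internals, but the general root-of-trust pattern described above can be sketched simply: refuse to run any boot stage whose cryptographic digest does not match a measurement provisioned ahead of time. The function names, the SHA-256 digest, and the measurement store here are assumptions for illustration, not Titan's actual design.

```python
# Illustrative sketch of a root-of-trust boot check. The digest
# algorithm (SHA-256) and the provisioned-measurement store are
# assumptions; Titan's real implementation is not public.
import hashlib

# Hypothetical measurements provisioned into the root of trust at
# manufacturing time, keyed by boot stage.
GOLDEN_MEASUREMENTS = {
    "bootloader": hashlib.sha256(b"trusted bootloader image").hexdigest(),
}

def verify_boot_stage(stage: str, image: bytes) -> bool:
    """Allow a boot stage to run only if its digest matches the
    provisioned measurement; otherwise the boot is halted."""
    digest = hashlib.sha256(image).hexdigest()
    return GOLDEN_MEASUREMENTS.get(stage) == digest

# A genuine image passes; a tampered one is rejected before it runs.
assert verify_boot_stage("bootloader", b"trusted bootloader image")
assert not verify_boot_stage("bootloader", b"malicious bootloader image")
```

Because the check runs before the firmware executes, injected malicious code never gets a chance to run, which is the property a hardware root of trust provides.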
Axion offloads certain computations involved in processing users’ network traffic to another custom chip, referred to as TOP, which is also integrated into the Titanium system.
Meanwhile, a third custom processor, the Titanium adaptor, assists in running the virtualization software that powers Google Cloud instances. Hyperdisk, Google Cloud’s block storage service, likewise takes on certain computing tasks that would otherwise fall to Axion, further boosting performance.
“Axion processors deliver giant leaps in performance for general-purpose workloads like web and app servers, containerized microservices, open-source databases, in-memory caches, data analytics engines, media processing, CPU-based AI training and inferencing, and more,” Vahdat elaborated.
Google Cloud claims Axion-powered instances can deliver 30% better performance than competitors’ fastest general-purpose Arm-based instances. Furthermore, the search giant pledges up to 50% higher processing speeds and 60% better power efficiency than instances based on Intel Corp. silicon. Google Cloud intends to make Axion available to customers later this year.
The search giant will also utilize the chip to power several internal workloads.
Google has begun retooling its data centers to run certain Google Cloud services, YouTube’s advertising system, and the Google Earth Engine satellite image analysis platform on Arm-based hardware. It intends to deploy some of those workloads on Axion-powered servers.