
What Is Causing the Decline in Data Center Switch Sales?

ICC News: Despite the surge in demand for AI networks, data center switch sales revenue experienced its first decline in over three years in the first quarter of 2024. According to Sameh Boujelbene, Vice President at Dell’Oro Group, the culprits are the normalization of backlogs for cloud service providers and enterprise customers, inventory digestion, and spending optimization. However, it appears that some suppliers have successfully avoided the downturn.

Boujelbene stated, “Arista managed to outperform the market because it has a diversified group of cloud service providers and an increasing penetration among major enterprise customers, including the acquisition of new customers, which began to contribute to Arista’s revenue this quarter.”
Huawei also managed to grow, and not because of its home market.
“All of their growth was driven by regions outside of China, particularly Europe, the Middle East, and Africa,” Boujelbene said about Huawei. “Most of the large projects they won were in collaboration with governments.”

Network Realities
200G, 400G, and 800G switches together accounted for approximately one-quarter of total revenue, with the remainder coming from switches with speeds of 100G and below.

While adoption of 400G and 800G switches is expected to accelerate over the next two years, Boujelbene explained that the current mix partly reflects the different switch speeds used in different parts of the data center.

For the front-end networks that connect general-purpose servers, speeds of 100G and below are the best fit. Part of the reason is that these workloads "are usually bound by compute rather than the network, which means the bottleneck is in the compute, not the network." Network utilization is typically 50% or lower, which explains the lower bandwidth demand.

In these networks, for example, 100G switches can serve core aggregation or leaf-and-spine roles, while 25G and 10G handle server access. That said, some hyperscalers are beginning to deploy 200G and 400G in their front-end networks.
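As a rough illustration of that split, the back-of-envelope sketch below (in Python, with hypothetical port counts that are not from Dell'Oro) shows how a leaf switch with 25G server-access ports and 100G uplinks holds up when servers run at roughly 50% network utilization.

```python
# Back-of-envelope check of a front-end leaf switch (illustrative numbers only).
SERVER_PORTS = 48          # assumed 25G server-access ports on the leaf
SERVER_PORT_GBPS = 25
UPLINK_PORTS = 8           # assumed 100G uplinks to the spine
UPLINK_GBPS = 100
UTILIZATION = 0.5          # typical front-end utilization cited in the article

downlink_capacity = SERVER_PORTS * SERVER_PORT_GBPS    # 1,200 Gb/s toward servers
uplink_capacity = UPLINK_PORTS * UPLINK_GBPS            # 800 Gb/s toward the spine
oversubscription = downlink_capacity / uplink_capacity  # 1.5:1

expected_uplink_load = downlink_capacity * UTILIZATION  # 600 Gb/s at 50% utilization
headroom = uplink_capacity - expected_uplink_load       # 200 Gb/s of spare uplink

print(f"Oversubscription ratio: {oversubscription:.1f}:1")
print(f"Expected uplink load: {expected_uplink_load:.0f} Gb/s "
      f"vs {uplink_capacity} Gb/s of uplink capacity ({headroom:.0f} Gb/s headroom)")
```

With these assumed numbers, even a mildly oversubscribed 100G leaf-spine fabric leaves uplink headroom, which is consistent with the low bandwidth pressure Boujelbene describes on the front end.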

Generally speaking, switch speeds above 400G are currently used mainly in the back-end networks that connect accelerated servers (specifically those used for AI). In these back-end use cases, the bottleneck is the network, not the compute, so faster speeds are required to maximize the utilization of expensive GPUs.

“Ideally, the network needs to run at 100% speed so that very expensive accelerators (like GPUs) are not idling, waiting for network response,” Boujelbene said.
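To make the cost of that idling concrete, the short sketch below uses purely hypothetical numbers (the 10 GB exchanged per training step is an assumption, not a figure from the article) to show how the network wait per step shrinks as back-end link speed rises from 100G to 800G.

```python
# Hypothetical illustration: network wait per training step at different link speeds.
DATA_PER_STEP_GB = 10        # assumed gigabytes each GPU exchanges per training step
BITS_PER_GIGABYTE = 8e9      # 8 billion bits per gigabyte

for link_gbps in (100, 400, 800):
    wait_seconds = DATA_PER_STEP_GB * BITS_PER_GIGABYTE / (link_gbps * 1e9)
    print(f"{link_gbps}G link: {wait_seconds:.2f} s per step waiting on the network")
```

Every one of those seconds is time an expensive accelerator sits idle, which is the economic argument for pushing back-end fabrics to 400G and beyond.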

Looking ahead to the rest of 2024, Boujelbene stated that Dell’Oro expects “spending on front-end networks to be weak, but spending on AI back-end networks to be substantial.”
She concluded, “We will be watching to see how much share Ethernet can capture in back-end networks.”
