May 15, 2025
Building Tomorrow’s Internet: A 2025 Update on Cable Investment
A steady stream of investment has driven tremendous growth in subsea cable infrastructure...
By Paul Brodsky
Latency is a term that’s frequently cited when discussing long-haul networks. But what is it really?
Simply put, latency is the time it takes for data to travel between locations.
Latency can be increased by a variety of factors, including network congestion, over-utilized routers, and firewalls.
For long-haul networks, the key cause of increased latency is called propagation delay. This type of latency refers to the time it takes for a signal to travel through optical fiber and is a function of distance. Light travels through fiber at roughly 203,000 kilometers per second, which is about two-thirds the speed of light through a vacuum.
When you see latency metrics cited, they're commonly shown in terms of the round trip delay (RTD) in milliseconds. RTD is the time required for a signal to travel in both directions over a link. One rule of thumb for determining the expected RTD on a segment is that 1,000 kilometers equates to roughly 10 milliseconds of latency.
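The rule of thumb follows directly from the propagation speed quoted above. A minimal sketch in Python (the 203,000 km/s figure comes from the text; the function name and route distance are illustrative):

```python
# Speed of light in optical fiber, per the figure cited above (km/s).
FIBER_SPEED_KM_S = 203_000

def round_trip_delay_ms(route_km: float) -> float:
    """Propagation-only round-trip delay (RTD) in milliseconds.

    Covers only the time light spends in the fiber, both directions;
    ignores congestion, router queuing, and other latency sources.
    """
    one_way_s = route_km / FIBER_SPEED_KM_S
    return one_way_s * 2 * 1_000  # both directions, seconds -> ms

# The rule of thumb: 1,000 km of fiber is roughly 10 ms of RTD.
print(round(round_trip_delay_ms(1_000), 1))  # -> 9.9
```

Note that this is a floor, not an estimate of observed latency: real measurements add equipment and queuing delay on top of propagation delay.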
Transport Networks Subscribers: We've assembled sample RTD latencies from network operators for a variety of long-haul terrestrial and submarine routes. Transport Networks Research Service users can access the full data set at the bottom of this page.
Financial enterprises, content providers, gaming companies, and cloud computing providers all want to lower the latency of long-haul data transmission.
In the case of financial enterprises, reducing the delay by as little as a few milliseconds can impact the profitability of trading operations. Search providers, including Google and Microsoft's Bing, have indicated that increased latency leads to decreased click-throughs and search result views. Amazon has claimed that every 100 milliseconds of latency reduces its sales by 1%.
Content and cloud computing providers optimize their services, including AI applications, for lower latency. Besides selecting low-latency paths linking data centers, providers also try to strategically place servers to limit latency for as many users as possible. For instance, the use of a Singapore data center could bring all the users in India and Southeast Asia within 70 milliseconds of a server.
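The same propagation arithmetic can be inverted to estimate how much fiber route a given latency budget buys, which is the calculation behind server-placement decisions like the Singapore example. A hedged sketch (function name is illustrative; the 203,000 km/s fiber speed is from the text, and real cable routes detour well beyond great-circle distance, so the practical radius is smaller):

```python
FIBER_SPEED_KM_S = 203_000  # speed of light in fiber (km/s), cited earlier

def max_fiber_route_km(rtd_budget_ms: float) -> float:
    """Longest fiber route (km) whose propagation-only RTD fits the budget.

    Illustrative only: ignores routing detours, congestion, and
    equipment delay, all of which shrink the usable radius.
    """
    one_way_s = (rtd_budget_ms / 1_000) / 2  # ms -> s, one direction
    return one_way_s * FIBER_SPEED_KM_S

# A 70 ms RTD budget corresponds to roughly 7,100 km of fiber route,
# consistent with a Singapore data center serving India and Southeast Asia.
print(round(max_fiber_route_km(70)))  # -> 7105
```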
When data travels between two cities, the distance it traverses varies depending on the terrestrial and submarine cables used.
Terrestrial networks normally follow established rights-of-way, such as railways, highways, or pipelines, which do not provide the shortest path between cities. Similarly, submarine cables are laid to follow precisely designed routes that avoid major fishing areas, anchoring zones, sensitive environmental areas, and earthquake-prone locations.
The latency on submarine cables varies considerably, since no two cables follow exactly the same path.
Paul Brodsky is a Senior Research Manager at TeleGeography. He is part of the network, internet, cloud, and voice research team. His regional expertise includes Europe, Africa, and the Middle East.