
Bounded Latency Networking May Be the Most Important Networking Trend You Haven’t Heard About Yet
Mar 27

There’s a trend in networking technology now underway that is an enabler for emerging enterprise AI deployments as well as other important new applications and network architectures.
That trend is bounded latency networking, which offers the ability to effectively guarantee that a data flow reaches its destination within a fixed delay. Just as IP addressing set the stage for today’s universal best-effort connectivity, over the next 20 years bounded latency connections will become just as pervasive.
So important is this technology and all that it enables that I am dedicating this blog post, and others that follow, to discussing advances, impacts and challenges in building bounded latency networks.
What Is Bounded Latency?
Let’s start with some definitions. A good working definition of a bounded latency network or system is one where the total delay experienced by data traversing the network can be guaranteed not to exceed some predetermined value.
Bounded latency networks are closely related to deterministic networking and time-sensitive networking (TSN), all of which deliver quality-of-service guarantees for data packets.
The technology used for bounded latency networks is based on standards that come from three standards bodies:
IEEE (Institute of Electrical and Electronics Engineers):
The IEEE is developing TSN standards as a part of the IEEE 802.1 working group. These standards enable deterministic, low-latency communication over Ethernet networks. TSN ensures reliable data delivery for applications in industrial automation, automotive networks, and professional audio/video systems.
3GPP (3rd Generation Partnership Project):
The 3GPP is integrating TSN capabilities into 5G networks focusing on ultra-reliable low-latency communication (URLLC). The 5G-TSN integration enables cellular networks to support industrial automation, smart grids, and autonomous systems by ensuring precise time synchronization and bounded latency over wireless networks.
IETF (Internet Engineering Task Force):
The IETF is developing Deterministic Networking (DetNet) standards to provide TSN-like capabilities over IP-based routers and wireless networks. DetNet standards define mechanisms for achieving low-latency, low-jitter, and highly reliable data delivery across wide-area networks (WANs). This enables real-time applications in industrial IoT and critical infrastructure communications.
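The guarantees these standards aim for can be reasoned about with network calculus, the mathematical framework commonly used to prove TSN and DetNet latency bounds. As a sketch (the flow and server parameters below are illustrative assumptions, not values from any standard), the classic result is that a token-bucket flow with burst b and rate r crossing a rate-latency server with rate R and latency T experiences a worst-case delay of T + b/R, provided r ≤ R:

```python
# Sketch: worst-case delay bound via network calculus, the kind of math
# that underpins TSN/DetNet latency guarantees. All numbers are illustrative.

def delay_bound(burst_bits: float, arrival_rate: float,
                service_rate: float, service_latency: float) -> float:
    """Worst-case delay for a token-bucket flow (burst b, rate r)
    through a rate-latency server (rate R, latency T): T + b/R.
    Valid only when r <= R; otherwise the backlog grows without bound."""
    if arrival_rate > service_rate:
        raise ValueError("flow rate exceeds service rate; no finite bound")
    return service_latency + burst_bits / service_rate

# Example: a 12,000-bit burst at 1 Mbit/s through a 10 Mbit/s server
# with 0.5 ms of scheduling latency.
bound_s = delay_bound(12_000, 1e6, 10e6, 0.0005)
print(f"worst-case delay: {bound_s * 1e3:.2f} ms")
```

The point of the bound is that it holds for every packet of the flow, not on average; this is the difference between a bounded latency guarantee and best-effort QoS.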
While these standards bodies are focused on wired and wireless LANs and WANs, the bounded latency concept also appears in CPUs and memory chips, which I will be writing about in a future post.
Bounded Latency in Edge Computing and AI
Edge computing as an enabling technology for Content Delivery Networks (CDNs) has been around since the 1990s. As cloud computing became more prevalent and then exploded, edge computing, although still important, took a backseat to cloud computing. Enterprise AI, and the consequent need to run inference and training for specific business functions at the edge, is bringing edge computing back to the forefront.
Moving computing back to the edge is the next evolution of cloud computing and increasingly uses private 5G networks for connectivity.
Enterprise AI, 5G TSN and other emerging low-latency applications require edge computing to augment cloud computing, with edge servers typically located on or near enterprise premises. New rules around data sovereignty and privacy are also creating the need for edge computing.
Wi-Fi has been the primary enterprise wireless networking technology because of the complexity of 5G equipment and the cost of licensing spectrum. But Wi-Fi networks are a poor fit for bounded latency applications: they are inherently best-effort and have an imprecise handoff mechanism that can add delay to a data transmission.
5G doesn’t have these latency issues. Thanks to the availability of shared 5G spectrum (called Citizens Broadband Radio Service, or CBRS, in the U.S., with similar regimes established in other countries) and the virtualization of private 5G systems onto commercial-off-the-shelf (COTS) servers, IT departments can build local 5G networks without spectrum hassles, using hardware that is common in IT environments.
The Performance Impact of Latency on Enterprise AI
Private 5G networks are ideal for enterprise AI systems, particularly in real-time applications. High latency can lead to delayed responses, reducing the effectiveness of AI-driven insights and impacting user experience.
Even milliseconds of delay can lead to significant operational risks. As enterprise AI increasingly relies on real-time data processing, minimizing latency becomes essential to maintaining efficiency and competitive advantage.
In many sensor-based AI applications, data is collected by a sensor that must send it to a remote AI model because the sensor lacks the compute and storage to train on the data or act on inferences locally. For example, most installed security video cameras don’t have the compute power or storage to process video.
IT teams then either need to spend capital budget to replace these cameras with more expensive models that have onboard compute, or create a bounded latency network to an edge server running computer-vision video analytics software.
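To make the camera-to-edge-server case concrete, a minimal sketch of an end-to-end latency budget check follows. The budget and per-component figures are hypothetical assumptions for illustration, not measurements from any product; the point is that the network leg only has a fixed entry in the budget because a bounded latency network guarantees it.

```python
# Sketch: checking an end-to-end latency budget for a camera-to-edge-server
# video analytics pipeline. All figures are illustrative assumptions.

BUDGET_MS = 50.0  # hypothetical end-to-end bound for a real-time alert

components_ms = {
    "camera capture + encode": 15.0,
    "network transit (bounded latency guarantee)": 2.0,
    "edge-server inference": 25.0,
    "alert dispatch": 3.0,
}

total_ms = sum(components_ms.values())
headroom_ms = BUDGET_MS - total_ms
status = "within" if total_ms <= BUDGET_MS else "OVER"
print(f"total {total_ms:.1f} ms, headroom {headroom_ms:.1f} ms, {status} budget")
```

On a best-effort network the transit entry would be a distribution with a long tail rather than a fixed 2 ms, and the budget could not be verified in advance.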
Other Use Cases
While edge networks and enterprise AI are among the most important drivers of bounded latency networks, other real-time applications will also benefit from the technology, including smart city and other IoT applications, autonomous vehicles, virtual/augmented reality, and online gaming.
Conclusion
Bounded latency networks are still in their early stages, and there are technical and economic challenges that will need to be addressed before these networks become widespread. All of this will be covered in future blog posts as I seek to spread the word about the significant benefits of the technology.