Much of the data people use the modern Internet to access resides in data centers, and users retrieve that data over the Hypertext Transfer Protocol (HTTP), formatted using the Hypertext Markup Language (HTML).
Note
Chapter 15, “Application Transport,” discusses HTML and the Hypertext Transfer Protocol (HTTP).
Content providers, such as search engines and social media networks, develop and manage some of the largest web applications on the global Internet. Banks, retail websites, and many other businesses also develop and manage large-scale web applications.
Figure 13-1 illustrates the relationship between users, web applications, and user-accessed data.
Figure 13-1 Web Application Architecture
Figure 13-1 shows a single host communicating with a web service. There are three kinds of servers used to build the web service:
• The web server or front-end server takes data from the back-end server and formats it for different browsers.
• The back-end server takes information from many different data sources, including other applications and databases, to build a page.
• The database server stores information about users, services, events, and so on, and feeds it to the back-end server.
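The division of labor among these three tiers can be sketched in a few lines of code. This is a minimal illustration of the roles described above; all names (`users_db`, `render_page`, and so on) are hypothetical and not part of any real web framework:

```python
# Database server: stores information about users, services, and events.
users_db = {"alice": {"recent_orders": ["order-1042"], "cart": ["widget"]}}

def query_database(user):
    """Database tier: return the stored record for a user."""
    return users_db.get(user, {})

def build_page_data(user):
    """Back-end tier: gather information from data sources to build a page."""
    record = query_database(user)
    return {
        "orders": record.get("recent_orders", []),
        "cart": record.get("cart", []),
    }

def render_page(user, browser="desktop"):
    """Front-end (web) tier: format the back-end data for a given browser."""
    data = build_page_data(user)
    layout = "two-column" if browser == "desktop" else "single-column"
    return f"{layout}: {len(data['orders'])} orders, {len(data['cart'])} cart items"

print(render_page("alice"))
print(render_page("alice", browser="mobile"))
```

Note that only the front-end tier knows anything about browsers, and only the database tier knows how the data is stored; each tier can be scaled or replaced independently.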
While the illustration shows one of each kind of server or application within the web service, there are potentially thousands of each:
• The service might need thousands of web servers to support millions of concurrent users. When users connect to the service, they must be routed to a server with low utilization and access to the correct information.
• The service might need thousands of back-end services to supply the various sections of a single page in a web service.
For instance, a retail site might have one back-end server to build a list of items on sale, a second to build a list of popular items, a third to build a list of recommended items based on previous purchases, and a fourth to show the status of current orders. Social media services often have thousands of back-end services building and supplying information to the front-end servers.
• The service will need thousands of databases to store, sort, and analyze data.
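The routing decision in the first bullet — sending each new user to a web server with low utilization — is a load-balancing problem. One common policy is least-connections selection; a minimal sketch, with hypothetical server names and connection counts:

```python
# Least-connections load balancing: route each new user to the web
# server currently handling the fewest active connections.
servers = {"web-1": 310, "web-2": 12, "web-3": 4500}  # active connections

def pick_server(servers):
    """Return the name of the least-utilized server."""
    return min(servers, key=servers.get)

chosen = pick_server(servers)
servers[chosen] += 1  # the new user's connection adds to that server's load
print(chosen)  # web-2
```

Real load balancers also account for the second constraint in the bullet — the chosen server must have access to the correct information — typically by selecting only among servers in the pool that can serve the requested content.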
All these applications must communicate within one or more data centers over a network. Web applications encounter network scale, accumulated jitter, and accumulated delay challenges.
Web-application network scale can be described with one word: huge. A single web-based application can require hundreds of thousands of servers. To support this many hosts and services, data center (DC) networks must be designed at matching scale, and DC fabrics must be connected through data center interconnect (DCI).
Figure 13-2 illustrates the accumulation of delay and jitter across a network.
Figure 13-2 Delay Accumulation Across a Network
In Figure 13-2, each user’s request for data from a web-based service causes four separate connections, each of which must travel across the same network. If each packet takes even a tenth of a second to cross the network, the user will wait nearly half a second for the information.
The example shown in Figure 13-2 is simpler than the real world. For every byte of data a typical web-based application delivers to a user, it sends about 10 bytes of traffic across the data center network.
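Both effects — accumulated delay and east-west traffic amplification — are easy to quantify. A quick sketch using the figures above (four sequential connections at a tenth of a second each, and the 10:1 ratio), with a hypothetical 2 MB page size chosen for illustration:

```python
# Delay accumulation: four connections must each cross the network
# before the user sees a response (as in Figure 13-2).
per_connection_delay = 0.1  # seconds -- "a tenth of a second"
connections = 4
total_delay = per_connection_delay * connections
print(f"User waits about {total_delay:.1f} seconds")

# Traffic amplification: every byte delivered to the user generates
# about 10 bytes of traffic inside the data center network.
page_size_bytes = 2_000_000  # hypothetical 2 MB response to the user
amplification = 10
internal_bytes = page_size_bytes * amplification
print(f"{internal_bytes / 1_000_000:.0f} MB crosses the DC fabric")
```

The arithmetic shows why per-hop delay matters so much inside a fabric: small per-connection delays multiply across sequential connections, and every user-facing byte is amplified many times over internally.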
Because user experience depends directly on the time required for the web-based service to respond to a request, engineers must design DC fabrics to reduce delay and jitter.
Internet Exchange Points
Internet exchange points (IXPs) play a critical role in the global Internet. IXPs provide
• Connectivity for regional providers
• Colocation facilities
• Regional access to users for content providers, edge security, etc.
Note
Chapter 5, “What’s in a Network?” describes the role of IXPs in the global Internet.
Figure 13-3 illustrates global connectivity from an IXP’s perspective.
Figure 13-3 IXP Connectivity
Figure 13-3 contains several points of interest:
• A regional access provider, an enterprise operator, and a transit provider connect along the left side of the IXP fabric. The enterprise operator and the regional access provider may each connect to the transit provider through the IXP fabric.
Connecting to one transit provider through the IXP fabric and to a second transit provider at some other peering point provides redundant connections and resilience.
• The route server at the top carries routing information between the routers connected to the IXP fabric. Rather than exchanging routes (peering) directly with one another, organizations connected to the fabric can peer with the route server. If every organization connected to the IXP fabric peers with the route server, each receives every reachable destination over a single connection.
However, traffic does not flow through the route server; it flows across the IXP fabric directly between the various organizations.
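The advantage of the route server can be quantified: if n organizations peer bilaterally with one another, the fabric requires a full mesh of n(n−1)/2 peering sessions, while a route server requires only one session per organization. A short sketch of that comparison:

```python
def bilateral_sessions(n):
    """Full mesh: every pair of organizations peers directly."""
    return n * (n - 1) // 2

def route_server_sessions(n):
    """Each organization maintains one session with the route server."""
    return n

# Compare the session counts as the IXP grows.
for n in (3, 50, 400):
    print(f"{n} members: {bilateral_sessions(n)} bilateral "
          f"vs {route_server_sessions(n)} via route server")
```

At 400 members — a realistic size for a large IXP — the difference is 79,800 bilateral sessions versus 400, which is why most IXPs of any size operate a route server.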
• The IXP’s customers can install servers, routers, and other devices in the colocation facility, connecting them directly to the IXP fabric. Servers in this colocation facility have much higher-speed access to destinations on the global Internet than servers located within a corporate data center.
There is no direct connection between the IXP fabric and the global Internet. IXPs do not provide access to the Internet; they provide a place where all the various kinds of providers and organizations on the global Internet can interconnect.