I recently took an amazing trip focused on launching Cisco Application Centric Infrastructure (ACI) across Africa (I work as a Technical Marketing Engineer for the Cisco BU responsible for ACI). During the trip I learned as much as I was there to share. One of the more interesting lessons was about the importance of infrastructure, and the parallels that can be drawn to networking. Lagos, Nigeria was the inspiration for this lesson. Before beginning, let me state for the record that I enjoyed my trip, the people I had the pleasure to work with, and the parts of the culture I was able to experience. This is simply an observation of the infrastructure and its parallels to data center networks.
Nigeria is known as the ‘Giant of Africa’ because of its population and economy. Explosive birth rates have quickly brought its population to 174 million, and its GDP has become the largest in Africa at $500 billion. This GDP is primarily oil based (40%) and surpasses that of South Africa, with its mining, banking, trade, and agricultural industries. Nigeria also has a large and quickly growing telecommunications sector, and a highly developed financial services sector. Even more important is that Nigeria is expected to be one of the world’s top 20 economies by 2050. (Source: https://en.wikipedia.org/wiki/Nigeria.)
With these and several other industries and natural resources, Nigeria has the potential to quickly become a dominant player on the global stage. The issue the country faces is that all of this industry is dependent on one thing: infrastructure. Government, transportation, electrical, telecommunications, water, security, and other infrastructure is required to deliver on the value of these industries. The Nigerian infrastructure is abysmal.
Corruption is rampant at all stages, instantly apparent before even passing through immigration at the airport. Once outside of the airport, if you travel the roads, especially at night, you can expect to be stopped at roadside checkpoints and asked for a ‘gift’ by armed military or police forces. This is definitely not a problem unique to Nigeria, but having travelled to many similar places, I found it to be much more in your face and ingrained in the overall system.
Those same roads that require gifts to travel are commonly hard-packed dirt or deteriorating pavement. Large potholes filled with water are scattered across the roadways, making travel difficult. Intersections are typically unmarked and free of traffic signals, stop signs, or yield signs. Traffic chokes the streets, and even short trips can take hours, with travel times that are unpredictable.
The electrical grid is fragile and unstable, with brownouts frequent throughout the day. In some areas power is on for a day or two at a time, followed by days of darkness. In the nicer complexes, generators are used to fill the gaps. The hotel we stayed at was a very nice global chain, and the power still went out several times a day for a few moments while the generator kicked in.
The overall security infrastructure of Nigeria has issues of its own. Because of the weaknesses in central security, almost any business establishment you enter will have its own security. This means you’ll go through metal detectors, x-rays, pat-downs, car searches, etc. before entering most places.
Additionally, you may be required to hire private security while in country, depending on your employer. Private security is always a catch-22: to be secure you hire security, but by having security you become a more prominent target. As a basic example, one can assume that someone who can afford private security guards must be important enough, to someone, for a ransom.
All of these aspects pose significant challenges to doing business in Nigeria. The road and security issues mean that you’ll spend far more time than necessary getting between meetings: unpredictable travel times, the added time of going through security at each end, parking challenges, and checkpoints demanding gifts along the way. The power may pose a problem depending on the generator capabilities of the locations you’re visiting.
All of these issues choke the profitability of doing business in countries like this, and make doing business there more difficult. Simple examples include companies that choose not to send staff for security reasons, or individual employees who are not comfortable travelling to these types of locations. It’s far easier to find someone who’s willing to travel the expanse of the European Union, with its solid infrastructure and relative safety, than to find people willing to travel to such locations.
All of this quickly drew a parallel in my mind to the current change going on within data center networks, specifically Software Defined Networking (SDN). SDN has the potential to drive new revenue streams in commercial business, and to accomplish the mission at hand more quickly and efficiently for non-commercial organizations. That being said, SDN will always be limited by the infrastructure that supports it.
A lot of the talk around SDN focuses on software solutions that ride on top of existing networking equipment in order to provide features x, y, and z. Very little attention is given to the networking equipment below. This will quickly become an issue for organizations looking to improve the application and service delivery of their data centers. Like Nigeria, these business services will be hindered by the infrastructure that supports them.
Looking at today’s networks, many are not far off from the roads pictured above. We have 20+ years of quick fixes, protocol band-aids, and duct tape layered on to fix point problems before moving on to the next. The physical transport of the network has become extremely complex.
Beyond these issues, there are new physical requirements for today’s data center traffic. 1 Gig server links are saturated and quickly transitioning to 10 Gig. 10 Gig adoption at the access layer is driving demand for higher speeds at the aggregation and core layers, including 40 Gig and above. Simple speeds-and-feeds increases cannot be solved by software alone. A congested network with additional overlay headers will simply become a more congested network.
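To make the overlay-header point concrete, here is a rough back-of-the-envelope sketch (not tied to any particular product) using VXLAN as the example encapsulation, which adds roughly 50 bytes of outer headers to every frame:

```python
# Rough sketch of how overlay encapsulation eats into usable bandwidth.
# Figures are illustrative: VXLAN adds roughly 50 bytes of outer headers
# (outer Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN 8).

VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # bytes of added encapsulation

def effective_payload_ratio(mtu: int, overhead: int = VXLAN_OVERHEAD) -> float:
    """Fraction of each MTU-sized frame left for the original packet."""
    return (mtu - overhead) / mtu

for mtu in (1500, 9000):
    ratio = effective_payload_ratio(mtu)
    print(f"MTU {mtu}: {ratio:.1%} of each frame carries tenant traffic")
```

The percentage lost looks small per frame, but on links that are already saturated it is pure additional load, which is exactly why an overlay alone cannot fix a congested underlay.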
A more systemic problem is the network designs themselves. The most prominent network design in the data center is the 3-tier design. This design consists of some combination of logical and physical Access, Aggregation, and Core tiers. In some cases one or more tiers are collapsed, based on size and scale, but the logical topology remains the same. These designs are based on traditional North/South traffic patterns, in which data primarily comes into the data center through the core (north) and is sent south to the servers for processing, then back out. Today, the majority of data center traffic travels East/West between servers: multi-tier applications, distributed applications, and the like. The change in traffic pattern puts constraints on the traditional designs.
The first constraint is the traffic flow itself. As shown in the diagram below, traffic is typically sent to the aggregation tier for policy enforcement (security, user experience, etc.). This pattern causes a ping-pong effect for traffic moving between server ports.
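As a toy model of that ping-pong effect (all device names here are hypothetical), East/West traffic between two racks hairpins up to the aggregation tier for policy and back down, even when the servers sit side by side:

```python
# Toy model of the East/West "ping-pong" path in a 3-tier design: traffic
# between two racks climbs to the aggregation tier (where policy such as
# security is enforced) and back down. Names are illustrative only.

def east_west_path(src_rack: str, dst_rack: str) -> list:
    """Switch hops for server-to-server traffic when policy lives at aggregation."""
    return [f"access-{src_rack}", "aggregation", f"access-{dst_rack}"]

path = east_west_path("A", "B")
print(" -> ".join(path), f"({len(path)} switch hops)")
```

Every East/West flow pays this detour, so the aggregation tier becomes a chokepoint as server-to-server traffic grows.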
Equally important is the design of the hardware in place today. Networking hardware in the data center is typically oversubscribed to reduce cost. This means that while a switch may offer 48x 10 Gig ports, its hardware design may only support a portion of that total bandwidth. This is done with two assumptions:
1) the traffic will eventually egress the data center network on slower WAN links, and
2) not all ports will be attempting to send packets at full-rate at the same time.
With the way modern applications are built and used, these assumptions no longer hold. Due to the distribution of applications, we more often have 1 Gig or 10 Gig server ports communicating with other 1 Gig or 10 Gig ports. Additionally, many applications will actually attempt to push all ports at line rate at the same time; big data applications are a common example of this.
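The oversubscription described above is easy to quantify: it is simply the ratio of server-facing bandwidth to uplink bandwidth. A minimal sketch, using a common top-of-rack port shape as the example (the specific port counts are illustrative):

```python
# Oversubscription ratio: total downlink (server-facing) bandwidth
# divided by total uplink bandwidth on a switch.

def oversubscription_ratio(down_ports, down_gbps, up_ports, up_gbps):
    downlink = down_ports * down_gbps  # total server-facing bandwidth
    uplink = up_ports * up_gbps        # total bandwidth toward the fabric
    return downlink / uplink

# Example shape: 48 x 10 Gig server ports, 4 x 40 Gig uplinks.
ratio = oversubscription_ratio(48, 10, 4, 40)
print(f"Oversubscription: {ratio:.0f}:1")  # 480G down / 160G up = 3:1
```

A 3:1 ratio is harmless when traffic mostly drains north to slower WAN links, but it means guaranteed congestion when distributed applications try to drive every server port at line rate toward other racks.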
The new traffic demands in the data center require new hardware designs and new network topologies. Most modern network hardware solutions are designed for full-rate, non-blocking traffic, or as close to it as possible. Additionally, the designs being recommended by most vendors today are flatter two-tier architectures known as Spine/Leaf or Clos architectures. These designs lend themselves well to scalability and consistent latency between servers, service appliances (virtual or physical), and WAN or data center interconnect links.
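A simple sizing sketch shows why leaf/spine designs address the oversubscription problem. Assuming each leaf dedicates one uplink per spine (a common, though not universal, wiring convention), a non-blocking fabric just needs enough uplink bandwidth to match the server-facing bandwidth; the numbers below are illustrative:

```python
import math

# Illustrative sizing for a non-blocking two-tier leaf/spine fabric.
# Assumption: each leaf runs one uplink to each spine, so the number of
# spines equals the number of uplinks per leaf.

def spines_for_nonblocking(server_ports, server_gbps, uplink_gbps):
    """Minimum uplinks (and thus spines) so uplink bw >= downlink bw."""
    downlink = server_ports * server_gbps
    return math.ceil(downlink / uplink_gbps)

# Example: 48 x 10 Gig servers per leaf with 40 Gig uplinks.
print(spines_for_nonblocking(48, 10, 40))  # 480G / 40G = 12 spines
```

The other property falls out of the topology itself: every server-to-server path is leaf, spine, leaf, so latency between any two racks is the same regardless of where workloads land.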
Like Nigeria, our business solutions will only be as effective as the infrastructure that supports them. We can, of course, move forward and grow at some rate, for some time, by layering over top of the existing infrastructure, but we’ll be limited. At some point we’ll need to overhaul the infrastructure itself to support the full potential of the services that ride on top of it.