The Lorax: A History of Silicon Valley

This is adapted from ‘The Lorax’ by the Great Dr. Seuss. If you have not read his work, please do. His stories teach beautiful lessons through the use of whimsy and wonder.

I love Dr. Seuss, so this is a thing I do. If you like it, there are links to others at the end. I make no guarantees as to the freshness of the content.

Unless someone like you cares a whole awful lot, nothing is going to get better. It’s not.

Dr. Seuss 'The Lorax'
  At the far end of tech 
 where the products are sold
 and the wind smells of sandwiches delivered half-cold,
 where no roadmap is ever delivered when told…
 is the street of the Lifted Lorax.
  
 And deep in that end, some people say, 
 if you look deep enough you can still see, today, 
 where the Lorax once stood
 just as long as it could
 before somebody lifted the Lorax away.
 What was the Lorax? 
 And why was it there? 
 And why was it lifted and taken somewhere 
 from the far end of town where the products are sold? 
 The old Once-ler still lives here.
 Ask him. He knows.
  
 You won’t see the Once-ler.
 Don’t look for his booth.
 He stays in his mansion, alone with his things,
 where he drinks cold-pressed juice
 that someone else brings.
 And on rare occasions, out of the blue,
 he tweets
 out a message
 he often repeats
 and tells how the Lorax was lifted away.
  
 He’ll tell you, perhaps…
 If you’re willing to pay.
  
 He’ll send you a link
 to an app where you lay
 one third of your equity, then sign
 an NDA
 of course, he will say
 it’s always this way.
    
 He then checks the app
 triple checks the amount
 to ensure he owns you
 that you can’t dismount.
  
 Then he adds what you paid him
 to the piles of cash
 some used for the mortgage
 the rest wipe his ass.
  
 He slacks, “I will ping you by video call,
 While out on my yacht, with crappy sig-nal.”
  
 BLURRP!
 The blurps of his call, ring loud in your ear
 and the old Once-ler’s voice is not at all clear,
 since he’s out on the water on cell-phone connection
 choppy and garbled,
 This makes him sound
 quite verbally hobbled.
  
 “Now I’ll tell you,” he says, with his ego displayed,
 “how the Lorax got lifted and taken away…
  
 It all started way back…
 such a long, long time back…
  
 Way back in the days when “The Valley” was green
 and orchards spread far
 for a beautiful scene,
 and a house could be bought by a regular Jane…
 one morning I came to this place I remain.
 And I first saw the schools!
 Stanford and Berkeley
 their talent you see!
 So much innovation, but money was lacking,
 an untapped resource, for someone like me.
  
 Between them a freeway, Junipero Serra,
 with a great halfway point up above Santa Clara
 where Sand Hill Road sat, doing just fine, in a soon to die era.
  
 From the nearby South bay
 came cool morning breezes
 which moistened the fruit
 as it hung in the treeses.
  
 But that talent! Those brains!
 Those smart engineers!
 All my life I’ve been searching
 seeking to obtain
 a resource like this
 that I could abuse.
 A resource I’d care about,
 If I’d read Dr. Seuss.
  
 My heart leapt with joy,
 I’d be an investor!
 I leased a small space
 Near an old shopping center
  
 With GREAT BRAINS AND SKILL, plus some damn lucky timing, 
 We started to watch our net-worth start climbing.
 In no time at all, I had built a small group
 so I cut down an orchard, at the end of the loop.
  
 The moment I’d finished, I heard W-T-F!
 I looked.
 Something popped out of a plum that had struck
 the ground next to where the last tree lay dead.
 His looks were as strange as the things that he said.
  
 He was small. He was old.
 Had a drawl and was bossy.
 He looked straight on over
 Like he didn’t even know me.
  
 “Douche bag!” he said, with a stern, knowing tone,
 “I am the Lorax. I speak for what’s grown.
 I speak for what’s grown and warn of what comes!
 And I demand to know what you’ve done to my plums!”
 He was winded and red; his anger was showing.
 “Why the hell would you destroy all the things that are growing!”
  
 “Look bro,” I said. “No need to get pissy.
 It’s one little orchard. No one will miss. See?
 I’m saving the world. This thing is a network.
 To connect all the people, I said as I smirked.
 It’s a book. It’s a phone. It’s music! It’s apps!
 But it has more to offer than all of that crap!
 You can use it for ads and make tons of money!
 Selling people like products while they use a freebie”
  
               The Lorax replied,
               “Dude, your ego is large, so this may just sting.
               There is no one on earth
               who would need such a thing.”
  
 Just as my mouth opened to say “go-to-hell”
 around the corner came AOL,
 they thought this web would be great for a buck.
 They hired some people and backed up a truck.
  
 I clowned the old Lorax, “You stupid old man!
 You’ll never quite get, what we just began!”
  
 “I repeat,” cried the Lorax,
 “I speak for what’s grown!”
  
 “You’re expired,” I told him.
 “Go retire in peace.”
  
 I ran for the phone, in those days they plugged in,
 I put in quick calls to nephews and cousins.
 I called all my friends, my college frat buddies
 said, here’s the scoop, let’s go make some monies!
 We’re going to make the old world move forwards!
 Get over here fast, take the road through the orchards,
 Turn left when there’s strip malls instead of more woods.
  
 And in no time at all,
 the cement was flowing,
 buildings and car lots sprung up in quick fashion,
 concrete and rebar were doing the growing.
 We ‘innovated’
 and we stayed very busy,
 with two maybe three drinks at lunches
 wining and dining,
 betting millions on hunches.
  
 Then…
 Hello, there, hello!
 How the money did flow!
 We needed more buildings
 more car lots
 more blow!
  
 So we cleared orchards with speed
 driven purely by greed.
 We were changing the world
 this was progress we said.
 And that Lorax?...
 We guessed he was dead.
  
 The very next month
 a knock at the door
 open it up, and he’s standing there.
  
 He bellowed, “I’m the Lorax, I speak for what grows,
 Which you are destroying, wherever it shows.
 But I’m also in charge of the birds and the bees
 Who live on the fruit of these orchard trees
 and gorge on the nectar and fruit as they please.”
  
 “Because of your buildings, your car lots, and malls
 there’s not enough food for the winter and falls.
 My poor birds and bees are dying in droves
 the rest are out searching for new homes and new groves.”
  
 “This was paradise to them, but now they must go.
 They require new orchards where their families can grow.
 Good luck my fine friends,” he said as he hung his head low.
  
 I, the Once-ler, felt something
 As I watched them all go.
 BUT…
 Money I worship!
 And I’ve got plenty of blow.
 Who needs birds anyway? I drive a Lambo.
  
 It wasn’t intentional. I didn’t want that.
 But bigger is better when wallets are fat.
 I biggered my bets. I biggered my tech.
 I biggered my campuses. I biggered my head.
 Our tech started shipping, all over the globe
 from Bangkok to Paris and back to Latrobe.
 So I kept on biggering… selling more tech.
 And I biggered my wealth, with each inbound check.
  
 Then there he was, the Lorax was back
 That angry old coot with more shit that was whack.
  
 “I am the Lorax,” he choked through a cough.
 Clearing his throat he readied a scoff.
 “Once-ler!” He roared, with the rasp of his age.
 “Once-ler! The air’s filled with smog. Disengage!
 My poor Lotis butterflies, well they can’t see their way.
 At this rate we’ll lose sight of the sun through the day.
  
 “And so,” said the Lorax,
 “-please pardon my tone
 They can’t survive here.
 I’ve sent them off to places unknown.”
  
 “Where will they end?...
 I don’t comprehend.”
  
 “They may have to fly for week upon week
 To get away from you, and the smog that you leak.”
  
 “But worse,” cried the Lorax, his neck hair stood up.
 “Let me say a few words about this f’ng slop.
 Your plants are dumping this shit without stop.
 They build your chips and out this stuff pops.
 And what do you do with this poo smelling goo?
 I’ll show you, you self-entitled boy-man you!”
  
 “You’re killing the lakes where the Lake Splittail fish swims!
 No more can they frolic and live out their whims.
 So I’ve ordered them off. Their future is bleak.
 They’ll wander on land, flip-flopping and weak
 searching for water without oil streaks.”
  
 And then I got angry.
 So shakingly angry.
 I yelled at the Lorax, “Now listen here, Pops!
 All you do is whine, and scream Stop! Stop! Stop!
 Well, I have my liberty, sir, and I’ll tell you
 I intend to keep doing what I want to do!
 And! For your information, you Lorax, I’m going to keep biggering
               And BIGGERING
                             And BIGGERING
                                          And BIGGERING,
 Turning orchards into lots for engineers’ cars
 to build more tech we can trade for gold bars!”
  
 And at that very moment, we heard a loud sound!
 Outside in the orchards a tree hit the ground.
 The final fruit tree did finally fall.
 The orchards were gone, once and for all.
  
 No room. No more boom. No work to be done.
 So, in no time, my friends, nephews, cousins, every one,
 Threw up two fingers as they hopped in my cars,
 Peace out, they said as the tires burned tar. 
  
 Now all that was left was a bad smelling sky
 Office buildings, parking lots…
 the Lorax…
 and I.
  
 The Lorax said nothing. Stared through my soul…
 his stare said to me, what he saw wasn’t whole…
 as he rose to get going, his mood black as coal.
 I’ll never forget that look on his face
 when he stood one last time, to take leave of this place,
 this Garden of Eden, that I had erased.
  
 And all that the Lorax left here in this mess
 Was a pile of rocks, with one word…
 “Unless.”
 Whatever that meant, well, I just couldn’t guess.
  
 It’s ancient history now.
 But I’ve thought of it lots.
 Worried, and muddled
 to untangle the plot. 
 While Silicon Valley crumbled away
 I’ve tried to make sense
 I’ve worried, I’ve wondered,
 and not just for legal defense.
  
 “But now,” says the Once-ler,
 “Now that you’re here,
 The word of the Lorax seems perfectly clear.
 UNLESS someone like you
 Cares a whole awful lot,
 Nothing is going to get better.
 It’s not.
  
 “So…
 Listen!” cries the Once-ler
 “I’ve sent you a seed
 in it you’ll find the hope that you need.
 It’s the last of its kind, so treat it as such
 there’s no other thing, the world needs this much.
 Plant it somewhere bleak and dreary
 Feed it, water it, and in theory
 The hope will grow big and strong
 and one day the Lorax will come back along.”

Driving Digital Transformation

“Digital, Digitization, Digital, Digital, Digital Transformation. There, I've hit my mandatory quota of 5 digital mentions for my presentation, now we can get to something interesting.”

That was my opening line at a large data center and cloud conference in Rome. It wasn't the one I'd planned, but I had just spent a day listening to my executive colleagues from around the industry wax philosophical about 'digital' with no mention of how, why, or what. No call to action, no roadmap, no substance. The previous presenter was sitting front row center with his jaw wide open when I finished the sentence. He'd used digital-this and digital-that as the title of every slide in his deck. Sorry, not sorry.

I haven't watched 'Game of Thrones', but I imagine 'Winter is Coming' gets thrown around much the way 'Digital Transformation' does. 'Um yeah, it's this thing, it's on its way, it's already happening in some places. Everyone knows what it is, definitely, for sure.' Let's agree that it is a thing, it is happening, and it is coming in stronger waves. From there, let's look at what it is, where it's coming from, and how it can be embraced.

Let's rewind to the beginning of widespread Information Technology adoption. We'll go back to the early days of networked computing and use the adoption of email systems as an example. When a company adopted email for the first time, it was dipping its toe into digital transformation. Paper-based systems and analog voice calls were converted to a digital medium. What that was doing under the surface was creating business value through technology adoption. That is the key to digital transformation.

Theoretically, if there were two companies in the same industry and one was first to deploy and adopt an email system, it would have a competitive advantage: the advantage of speed and agility. The hidden key word in that sentence is adopt. Deploying an email system wasn't enough. They had to drive adoption, incorporate it into their processes, and modify workflows to take advantage of it.

As technology became commonplace, a shift occurred behind the scenes. Information Technology (IT) moved from a value-creation center to a cost center. Technology purchase decisions moved from 'what can it do for the business?' to 'how much money can we save doing the same thing?' IT sales conversations shifted to circular conversations about return on investment (ROI), and sales cycles began incorporating any number of questionable ROI calculations.

Now comes Digital Transformation, with all its hype, being treated as something new. It's not. Like most everything in technology, it's cyclical. We're at a technology inflection point where IT can move back into the 'what can it do for the business' seat. Digital Transformation is simply using emerging technology and new IT operational models to drive new value streams for the business or mission. No more, no less.

Several things are coming together at once to form the catalyst of this shift: new technologies like big data and AI, new consumption models like mobile-first compute users, and new delivery models like cloud, which provide an extremely low compute entry cost and a scale-up model as a company grows. Uber is one of the most touted examples of combining these things to create market disruption, which is just Silicon Valley's term of the week for transformation.

Uber is an example I like, and not in the doom-and-gloom 'disrupt or be disrupted' way people love to use them. The question I ask my customers is different: 'If you were the taxi companies three years before Uber launched, and you had the idea for an Uber-like app, could you have executed on it? Would your IT infrastructure and organization have been able to build and adopt the new model?' Universally the answer is no.

The first stage of digital transformation is modernizing the technology delivery stack into a system that provides agility: agility to test out new ideas, agility to fail and try again, agility to deploy the bright ideas your organization comes up with. The world moves fast; the longer it takes to process an idea and get it stood up, the higher the chance of missing the market and being outmaneuvered.

The dirty secret in all of this is that the technology is easy. There are hundreds of great options to choose from when it comes to the right technology. You can cloud it, automate it, DevOps it, etc. Alone or in tandem all of these things can work perfectly from a technology perspective to achieve your goals. The tech is easy, but most still fail.

The hard part is choosing the technology stack that fits your organization, then remodeling your people and process to take full advantage of it. Nobody likes to admit that getting new technology running is the easy part. The hard part is getting it adopted to its fullest potential within your organization. Successfully launching a product or project internally is as important as picking the right tech and standing it up.

I look at this like Marine Corps boot camp. As recruits we spend all of boot camp hating it and waiting to graduate, thinking boot camp is the hard part. Our drill instructors assure us boot camp is the easiest part of being a Marine. Years later we find out they were right. Boot camp, like a technology install, is fairly paint-by-numbers: if you follow the instructions, things work as expected. Being in the fleet, post boot camp, is like technology adoption. You're up and running, but now it's your responsibility to apply the skills and capabilities the right way every day.

When looking at making technology shifts, be ready to tackle the people and process with as much energy as you do the technology. You'll need leaders, champions, and early adopters. You'll need to provide a clear sense of direction, an intended outcome, and a sense of 'why'. If your team is bought in and all moving towards the same goal, the technology stack becomes a supporting character in the transformation you'll drive.

As a parting thought on Digital Transformation, try to think big. I've been privileged to travel the world working with customers of all types in some very interesting places, and I've gotten to see firsthand the positive, transformative power technology can have: from banks in Africa using cell-phone usage statistics to assess creditworthiness and provide small-business loans to people with no credit history, to hospitals in India using tele-medicine to provide advanced patient care on-site in remote villages.

Digital transformation is as much about change and a better future as it is about profit lines. Even better, the two don't have to be separate goals. This is why I wake up every morning excited to see what I can help my customers achieve that day.

Best Practices of Women in Tech

The following is a guest post by Sara (Ms. Digital Diva)

Today’s tech industry has a new face, and that face is female. Though the field has traditionally been male dominated, more and more women are making their mark as leaders in tech. Not only are these women contributing to the continuous advancements we’re seeing in technology, they’re making a point of building up one another and the young women who look up to them. Progress has been made, but there’s still work to be done. Here are some of the ways these women are doing it.

Hit the Ground Running
Just as important as the women already working in the tech field are the young girls who aspire to be like them. Supporting these young women and girls to follow their passion, and providing them with the necessary resources to reach their goals, is key to the future of tech. An example of these efforts comes from the founder of Girls Who Code, Reshma Saujani, who aims to close the gender gap by providing an outlet for girls to explore their abilities and pursue interests in computer science. Similarly, with Women Who Code, Alaina Percival empowers women by offering services to assist in building successful careers in technology. Breaking out of the stereotypical boxes and utilizing these sorts of programs not only builds confidence, but helps those just starting out find their niche. This can have an important impact on professional development when it becomes time to specialize.

Pursue What’s Most Beneficial to You
There’s no stopping a woman with goals. Once you have that goal set, it’s up to you to do everything it takes to get it done. In this industry, technology is constantly advancing. To stay current, you must maintain a hunger for learning. Staying up to date with the trends and skills most in demand by employers will keep you ahead of the game and closer to reaching your goals. This quality fortunately seems to come naturally to women. According to HackerRank's Women in Tech Report, women are incredibly practical in this sense, and tend to pursue proficiency in whichever languages are most valued at the moment.

Succeed Together
It’s tough to admit, but getting more women in tech is still a work in progress, and in order to continue progressing we must work together. Rarely does anyone succeed in life without mentorship, guidance, or at least support from others. There’s nothing wrong with asking for help. Taking the time to network with women who have earned a position you hope to achieve someday is essential to overcoming workplace challenges and getting your questions answered. Even if you can’t get in physical contact with a role model of yours, keeping up with what they’re writing, saying, and working on can help you expand your own interests and continue learning. The process of working towards your ultimate potential is a long one, but embracing advice can help you get there efficiently.

Lessons Learned
Like anything in life, developing your professional career comes with lots of trial and error. You’ll succeed and you’ll fail; you’ll try things you like and things you hate. It’s all a part of the process. When you’re the only woman in an office full of men, it can be difficult to speak up or put yourself out there for fear of making a mistake. But if I’ve learned anything in my career, it’s that staying silent signifies acceptance, and not involving yourself in situations that can help you grow only hurts you. Getting involved in groups, committees, projects, anything that interests you, is the biggest piece of advice I can give. Not only will you expand your knowledge and experience, but it’s a great way to get to know others in the tech community. Building relationships is a key part of any profession, but especially in environments where you want to build confidence.

A final thought to take with you: always be advancing. So much of the technology industry is self-development and striving to discover the next best thing. Curiosity is what will keep you afloat. Utilizing programs and keeping up with verticals that interest you can help you develop strong points of view on emerging technologies. This is crucial as you grow in your career, as people generally listen to those who have something to say. What you don’t want to do is get swept up in the crowd and lose your voice. If tech is what you’re interested in, then it’s where you belong, whether you’ve been studying it your whole life or are just getting started. Never underestimate yourself, and don’t confuse experience with ability. There are so many incredible women doing incredible things in the tech industry. All they need to be even greater is you.

Intent-Driven Data Center: A Brief Video Overview

Here's a brief video overview of Intent-Driven data center. More blogs to come.

Intent Driven Architecture Part II: Policy Analytics

*** Disclaimer: Yes I work for a company that sells products in this category. You are welcome to assume that biases me and disregard this article completely. ***

In my first post on Intent-Driven Architectures (http://www.definethecloud.net/intent-driven-architectures-wtf-is-intent/) I attempted to explain the basics of an intent-based, or intent-driven, approach, and the use of intent-driven architecture from a network perspective. The next piece of building a fully intent-driven architecture is analytics. This post will focus there.

Let's assume you intend to deploy, or have deployed, a network, server, or storage system that can consume intent and automate provisioning based on it. How do you identify your policy, or intent, for your existing workloads? This is a tough question, and a common place for policy automation, micro-segmentation, and other projects to stall or fail. It's less challenging for that shiny new app you're about to deploy (because you're defining the requirements, i.e. the policy/intent); it's all of those existing apps that create the nightmare. How do you automate the infrastructure based on an application's intent if you don't know that intent?

This is one of the places where analytics becomes a key piece of an intent-driven architecture. You not only need a tool to discover the existing policy, but one that can keep it up to date as things change. Was policy implemented correctly on day 0? Is policy still being adhered to on day 5, 50, 500? This is where real-time, or near-real-time, analytics come into play for intent-driven architectures.

I'm going to go back to the network and security as my primary example; I'm a one-trick pony that way. These same concepts are applicable to compute, storage, and other parts of the architecture. Using the network example, the diagram below shows a very generalized version of a typical policy enforcement example in traditional architectures.

Network Policy

 

Using the example above, we see that most policy is pushed to the distribution layer of the network and enforced in the routing, firewalls, load-balancers, etc. The other thing to note is that most policy consists of very broad deny rules. This is what's known as a blacklist model: anything is allowed unless explicitly denied. This loose level of policy creates large security gaps, and is very rigid and fragile. Additionally, because the intent or policy is described so loosely, it's nearly impossible to use the existing infrastructure to discover application intent.

In order to gather intent, and automate the policy requirements based on that intent, we need to look at the actual traffic, not the existing rules. We need a granular look at how the applications communicate; this shows us what needs to be allowed, and can be used to derive what should be blocked. It can also show us policies that enforce user-experience, app-priority, traffic-load requirements, etc. Generally this information can be gathered from one of two locations, the operating system/app-stack or the network; even better is using both. With this data we can see much more detail. The figure below shows moving from a broad subnet allow rule to granular knowledge of the TCP/UDP ports that need to be open between specific points.

Old policy vs new policy
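To make the discovery step concrete, here's a rough sketch in Python of collapsing raw flow observations into candidate allow rules. The flow records, names, and field layout are invented for illustration; a real analytics engine would work from sensor telemetry, not a hard-coded list.

```python
# A minimal sketch of policy discovery: collapse observed flows into
# candidate allow rules. All names and records here are hypothetical.
from collections import defaultdict

# Each observed flow: (source endpoint, destination endpoint, protocol, destination port)
observed_flows = [
    ("web-01", "app-01", "tcp", 8443),
    ("web-02", "app-01", "tcp", 8443),
    ("app-01", "db-01", "tcp", 1521),
]

def discover_rules(flows):
    """Aggregate raw flow records into a de-duplicated candidate whitelist."""
    rules = defaultdict(set)
    for src, dst, proto, port in flows:
        rules[(dst, proto, port)].add(src)
    return rules

for (dst, proto, port), sources in discover_rules(observed_flows).items():
    print(f"allow {proto}/{port} to {dst} from {sorted(sources)}")
```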

These granular rule-sets are definitely not intent, but they are the infrastructure's implementation of that intent. This first step of analytics assists with tightening security through micro-segmentation, but also allows agility within that tightened security. For example, if you waved a magic wand and it implemented perfect micro-segmentation, that micro-segmentation would quickly start to create problems without analytics. Developers open a new port? A software patch changes the connection ports for an app? Downtime and slow remediation will be unavoidable. With real-time, or near-real-time, analytics the change can be detected immediately, and possibly remediated with a click.
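Continuing that illustration, drift detection is conceptually just a membership check: does each new flow match a discovered rule? Again, the rule set and flow below are invented for the example.

```python
# Flag live flows that fall outside the discovered rule set, so a new
# port or a patched service surfaces immediately. Hypothetical data.
discovered = {
    ("app-01", "tcp", 8443): {"web-01", "web-02"},  # (dst, proto, port) -> allowed sources
    ("db-01", "tcp", 1521): {"app-01"},
}

def is_sanctioned(rules, flow):
    """True if a live flow matches a discovered rule."""
    src, dst, proto, port = flow
    return src in rules.get((dst, proto, port), set())

live_flow = ("web-01", "app-01", "tcp", 9443)  # a developer opened a new port
if not is_sanctioned(discovered, live_flow):
    print(f"drift detected: {live_flow} matches no known policy -> alert or remediate")
```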

Analytics plays a much bigger role than just policy/intent discovery. The analytics engine of an intent-based system should also provide visibility into the policy enforcement itself.

All of this should be done by looking at the actual communication between apps or devices, not by looking at infrastructure configuration. For example, I can look at a firewall rule and determine that it is properly configured to segment traffic A from traffic B. There is nothing in the firewall config to show me that the rest of the network is properly configured to ensure all traffic passes through that firewall. If traffic is somehow bypassing the firewall, all the rules in the world make no difference.
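As a toy illustration of that last point, verification has to come from observed traffic: an inter-zone flow that never shows up at the firewall is a bypass, no matter how correct the firewall rules are. The zone names and observation format below are invented for the example.

```python
# A sketch of verifying enforcement from traffic rather than config:
# any flow crossing zones should also have been observed at the firewall.
zone_of = {"web-01": "dmz", "db-01": "inside", "db-02": "inside"}

# (src, dst, seen_at) records from network sensors
observations = [
    ("web-01", "db-01", "fw-01"),    # crossed the firewall: good
    ("web-01", "db-01", "leaf-12"),  # same flow, also seen on a leaf switch
    ("web-01", "db-02", "leaf-14"),  # inter-zone flow never seen at the firewall!
]

def bypassing_flows(obs, firewall="fw-01"):
    """Inter-zone flows that were never observed at the enforcement point."""
    crossing = {(s, d) for s, d, _ in obs if zone_of[s] != zone_of[d]}
    enforced = {(s, d) for s, d, at in obs if at == firewall}
    return crossing - enforced

print(bypassing_flows(observations))  # -> {('web-01', 'db-02')}
```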

Analytics engines designed for, or as part of, an intent-based networking system provide two critical things: policy discovery and policy verification. Even with a completely green-field environment where the policy can be designed fresh, you'll want analytics to ensure it is deployed correctly and to keep you up to date on changes.

There are three major components of an intent-driven architecture. I've discussed intent-based automation in the previous post, and analytics in this post. I'll discuss the third piece, assurance (knowing your system can consume the new intent), in the near future.

*** Disclaimer: See disclaimer above. ***

Intent Driven Architectures: WTF is Intent?

*** Disclaimer: I work for a vendor who has several offerings in the world of intent-based infrastructure. If you choose to assume that makes my opinion biased and irrelevant, that's your mistake to make, and you can save time by skipping the rest of this post. ***

*** Update at the end of the blog (10/20/2017)***

In the ever-evolving world of data center and cloud buzzwords, the word 'intent' is slowly gaining momentum: intent-based x, intent-driven y, etc. What is 'intent', and how does it apply to networks, storage, servers, or infrastructure as a whole, or better yet to automation? Let's take a look.

First, let's peek at status-quo automation. Traditional automation systems for technology infrastructure (switches, servers, storage, etc.) utilize low-level commands to configure multiple points at once. For example, the diagram below shows a network management system being used to provision VLAN 20 onto 15 switches from a single point of control.

Basic Automation

The issue here is the requirement for low-level policy rendering, meaning getting down to the VLAN, RAID pool, or firewall rule level to automate the deployment of a higher-level business policy. The higher-level business policy is the 'intent', and it can be defined in terms of security, SLA, compliance, geo-dependency, user-experience, etc. With a traditional automation method, a lot of human interaction is required to translate between an application's business requirements (the intent) and the infrastructure configuration. Worse, this communication typically occurs between groups that speak very different languages: engineers, developers, lines-of-business. The picture below depicts this.

App Deployment Chain

This 'telephone game' of passing app requirements is not only slow, it is also risk-prone, because a lot gets lost in the multiple layers of communication.

Hopefully you now have a grasp on the way traditional automation works; that's basically the overall problem statement. Now let's dive into using intent to alleviate this issue.

I'm going to use the network as my example for the remainder of this post. The same concepts are applicable to any infrastructure, or the whole infrastructure; I just want to simplify the explanation. Starting at the top, a network construct like a VLAN is a low-level representation of some type of business policy. A great example might be compliance regulations. An app processes financial data that is regulated to be segmented from all other data. A VLAN is a Layer 2 segment that, in part, helps support this. The idea of an intent-driven architecture is to automate the infrastructure based on the high-level business policy, and skip the middle layers of translation. Ideally you'd define how you implement policy/intent for something like financial data one time. From then on, simply tagging an app as financial data ensures the system provisions that policy. The diagram below shows this process.

Intent Driven Workflow
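For a feel of what 'consuming intent' might look like in code, here's a minimal sketch assuming the business has already defined its policy library. The tag names and rendered constructs are invented for illustration, not any product's actual schema.

```python
# A toy intent library: business-level tags map to pre-defined
# infrastructure policy. All tags and constructs are hypothetical.
INTENT_LIBRARY = {
    "financial-data": {
        "segment": "vlan 210",
        "contract": "deny-all; allow tcp/1521 from app-tier",
        "compliance": "pci-logging on",
    },
    "public-web": {
        "segment": "vlan 110",
        "contract": "allow tcp/443 from any",
        "compliance": "standard-logging",
    },
}

def render_policy(app_name, tags):
    """Render low-level configuration from business intent tags."""
    for tag in tags:
        for construct, value in INTENT_LIBRARY[tag].items():
            print(f"{app_name}: provision {construct} -> {value}")

# Tagging an app as financial data provisions the whole policy, with no
# layers of human translation in between.
render_policy("expense-system", ["financial-data"])
```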

One common misconception is that the network, or infrastructure, must be intelligent enough to interpret intent. This is absolutely false. The infrastructure needs to be able to consume intent, not interpret or define it. Intent is already understood in business logic. The infrastructure should be able to consume that, and automate configuration based on that business-logic intent. In the example in the diagram, business logic has already been defined for the given organization's compliance requirements. Once it has been defined, it is a reusable object, allowing automation of that policy for any app tagged as requiring it. Another note: the example references custom-built software ('dev'), but the same methodology can be used with off-the-shelf software.

There are many reasons not to try to build intent-based systems that automatically detect and interpret intent. One non-trivial reason is the cost of those systems. More important is the ability to actually execute on that vision. Using a network example, it would be fairly simple to build a network that can automatically detect an Oracle application using standard ports and connectivity. What the network alone would not be able to detect is whether that workload was a dev, test, or production environment. Each environment would require different policies, or intent. Another example would be differences in policy enforcement. One company may consider a VLAN to be adequate segmentation for different traffic types, another would require a firewall, and a third might require an 'air-gap.' These differences cannot be automatically understood by the infrastructure. Intent-based systems should instead consume the existing business logic, and automate provisioning based on that, not attempt to reinterpret that business logic themselves.

The other major misconception regarding intent-based systems is that they must be 'open' and able to incorporate any underlying hardware and software. This is definitely not a requirement of intent-based systems. There are pros and cons to open portability across hardware and software platforms, and those should always be weighed when purchasing a system, intent-based or otherwise. One pro for an open system supporting heterogeneity might be the avoidance of 'vendor lock-in.' The opposing con would be the additional engineering and QA costs, as well as fragility of the system. There are many more pros/cons to both. To see some of my old, yet still relevant, thoughts on 'lock-in' see this post: http://www.definethecloud.net/the-difference-between-foothold-and-lock-in/.

Overall, intent-based systems are emerging and creating a lot of buzz, both within the vendor space and the analyst space. There are examples of intent-based automation for networking in products like Cisco's Application Centric Infrastructure (ACI). Systems like these are one piece of a fully intent-driven architecture. I'll discuss the other two pieces, assurance and analytics, in future posts, if I'm not simply too lazy to care.

** Update: Out of ignorance I neglected to mention another Intent-Based Networking system. Doug Gourlay was kind enough to point out Apstra to me (http://www.apstra.com/). After taking a look, I wanted to mention that they offer a vendor agnostic Intent-based networking solution. The omission was unintentional and I'm happy to add other examples brought to my attention. **

*** These thoughts are mine, not sponsored, paid for, or influenced by a paycheck. Take them as you will. ***

Data Center Analytics: A Can’t Live Without Technology

** There is a disclaimer at the bottom of this blog. You may want to read that first. **

Earlier in June (2016), Chuck Robbins announced the Cisco Tetration Analytics appliance. This is a major push into a much-needed space: Data Center Analytics. The appliance itself is a Big Data appliance, purpose-built from the ground up to provide enhanced real-time visibility into the transactions occurring in a data center. Before we get into what that actually means, let me set the stage.

IT organizations have a lot on their hands with the rapid change in applications and the fast-paced new demands for IT service.

A first step in successfully executing on any of these demands is understanding the applications and their dependencies. Most IT shops don’t know exactly how many applications run in their data center, much less what all of their interdependencies are. What is required is known as an Application Dependency Mapping (ADM).

Quick aside on ‘application dependency’:

Applications are not typically a single container, VM, or server. They are complex ecosystems with outside dependencies. Let’s take a simple example of an FTP server. This may be a single VM application, yet it still has various external dependencies. Think of things like: DNS, DHCP, IP Storage, AD, etc. (if you don’t know the acronyms you should still see the point.) If you were to migrate that single VM running FTP to a DR site, cloud etc. that did not have access to those dependencies, your ‘app’ would be broken.

The reason these dependency mappings are so vital for DR/cloud migration is that you need to know what apps you have, and how they’re interdependent, before you can safely move, replicate, or change them.
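To see why, here's a toy dependency walk using the FTP example above: everything the app transitively depends on has to be reachable at the target site, or the migration breaks the app. The graph and names are illustrative only.

```python
# A hypothetical dependency graph for the FTP example above.
depends_on = {
    "ftp-vm": ["dns", "dhcp", "ad", "ip-storage"],
    "ad": ["dns"],
}

def migration_set(app, graph):
    """Return the app plus every transitive dependency it must keep reachable."""
    needed, stack = set(), [app]
    while stack:
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(graph.get(node, []))
    return needed

print(sorted(migration_set("ftp-vm", depends_on)))
# -> ['ad', 'dhcp', 'dns', 'ftp-vm', 'ip-storage']
```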

From a security perspective the value is also immense. First, with full visibility into what apps are running, you can make intelligent decisions on which apps you may be able to decommission. Removing unneeded apps reduces your overall attack surface. This may be something as simple as shutting down an FTP server someone spun up for a single file move, but never spun down (much less patched). The second security advantage is that once you have visibility into everything that is working, and needs to be, you can more easily create security rules that block the rest.

Traditionally, getting an Application Dependency Mapping is a painful, slow, and expensive process. It can be done using internal resources, documenting the applications manually. More typically it’s done by a third-party consultant using both manual and tooled processes. Aside from cost and difficulty, the real problem with these traditional services is that they’re so slow and static that the results become useless. If it takes six months to document your applications in a static form, the end result has little-to-no value, because the data center changes so rapidly.

The original Tetration project set out to solve this problem first, automatically and in real time. I’ll discuss both what this means and the enhanced use-cases it provides because of the method Tetration uses.

Data Collection: Pervasive Sensors

First, let’s discuss where we collect data to begin the process of ADM. Tetration uses several:

Network Sensors:

One place with great visibility is the network itself. Each and every device (server, VM, container), hereafter known as an ‘endpoint’, must be connected through the network. That means the network sees everything. When looking to collect application data in the past, the best tool available was Netflow. Netflow was designed as a troubleshooting tool, and provides great visibility into ‘flow header’ info from the switch. While Netflow is quite useful, it has limitations when utilized for things like security or analytics.

The first limitation is collection rates. Netflow is not a hardware (ASIC-level) implementation. That means that in order to do its job on a switch, it must push processing to the switch CPU. To quickly simplify this: Netflow requires heavy lifting, and data center switches can’t handle full-rate Netflow. Because of this, Netflow is set to sample data. For troubleshooting this is no problem, but when trying to build an ADM you’d ideally look at everything, not a sampling.

The next problem with Netflow for our purposes is what it doesn’t look at. Remember, Netflow was really designed to assist in ‘after-action’ troubleshooting of problems. Because of that intent, Netflow was never designed to collect things like time-stamps from the switch. When looking at ADM and application performance, time-stamping becomes very important, so having that and other detailed information Netflow can’t provide became very relevant.

Knowing this, the team chose not to rely on Netflow for our purposes. They needed something more robust, specifically when speaking about the data center space. Instead they designed a next generation of switching silicon that can provide what I lovingly call ‘Netflow on steroids’: ‘line-rate’ data about all of the transactions traversing a switch, along with things like time-stamping and more.

That becomes our network ‘sensor.’ Using those sensors gives us an amazing view, but it’s not everything. What those sensors are really doing is not ADM; they’re simply telling us ‘who’s talking to whom, about what.’ For network engineers this is to/from IP combined with TCP/UDP port, plus some other info. Think of this as a connectivity mapping. To make this into an application mapping, more data was needed.

As of the time of this writing, these ‘network sensors’ are built into the hardware of the Cisco Nexus 9200, and 9300 EX series switches.

Server Sensors:

To really understand an application, you want to be close to the app itself. This means sitting in the operating system. The team needed data that is simply not seen by the switch port. Therefore they built an agent that could reside within the OS and provide additional information not seen by the switch. These are things like: service name, who ran the service, what privileges the service was run with, etc. These server sensors provide an additional layer of application information. The server agents are built to be highly secure and very low-overhead.

Additional Data:

So far we’ve got a connectivity map that can be compared against service/daemon name and user/privileges. That’s still not quite enough. We don’t think of applications as the ‘service name’ in the OS. We think of applications like ‘expense system’, ‘definethecloud.net’, etc. To be able to turn the sensor data into real application mappings, the team needed to cross-reference additional information. They built integrations with systems like AD, DNS, DHCP, and existing CMDB systems to get this information. This allows the connectivity map and OS data to be cross-referenced back to business-level application descriptions.

Which sensors do I need?:

Obviously, the more pervasively deployed your sensors are, the better the data available. That being said, neither sensor type has to be everywhere. Let’s go through three scenarios:

Ideal:

Ideally you would use a combination of both network and server agents. This means you would have the OS agent in any supported operating system (VM or physical), and you would be using supported hardware for network sensors. Not every switch in the data center needs this capability to be in ‘ideal mode’; as long as every flow/transaction is seen once, you are in ideal mode. This means that you could rely solely on leaf or access switches with this capability.

Server only mode:

This mode relies solely on agents in the servers. This is the mode most of Tetration’s early field trial customers ran in. This mode can be used as you transition to the ideal mode over the course of network refreshes, or can be a permanent solution.

Network only mode:

In instances where there is no desire to run a server agent, Tetration can still be used. In this operational mode the system relies solely on data from switches with the built in Tetration capability.

Note: The less pervasive your sensor network, the more manual input or data manipulation is required. The goal should always be to move towards the ideal mode described here over time.

So that sounds like a sh** load of data:

The next step is solving the big data problem all of these sensors create. This is a lot of data, coming in very fast, and it has to be turned into something usable very, very quickly. If the data has to sit while it’s being processed, it will become stale and useless. Tetration needed to be able to ingest this data at line-rate and process it in real time.

To solve this the engineers built a very specialized big data appliance. The appliance runs on ‘bleeding edge IT buzzword soup’: Hadoop, Spark, Kafka, Zookeeper, etc. It also contains quite a bit of custom software developed internally at Cisco for this specific task. On top of this underlying analytics engine, there is an easy-to-use interface that doesn’t require a degree in data science.

The Tetration appliance isn’t intended to be a generalist big data system where you can throw any dataset at it and ask any question you want. Instead it’s fine-tuned for data center analytics. The major advantage here is that you don’t need a handful of big data experts and data scientists to use the appliance.

Now what does this number-crunching monster do?:

The appliance provides five supported use-cases: Application Dependency Mapping (ADM), automated white-list creation, auditing and compliance, policy impact simulation, and historical forensics. That’s a lot, and some of it won’t be familiar, so let’s get into each.

ADM:

First and foremost, the Tetration Appliance provides an ADM. This baseline mapping of your applications and their dependencies is core to everything else Tetration does. The ADM on its own is extremely useful, as mentioned in the opening of this blog. Once you have visibility into your apps, you can start building that DR site, migrating apps (or parts of apps) to the cloud, and assessing which apps may be prime for decommission.

Automated white-list creation:

If you’re looking to implement a ‘micro-segmentation’ strategy, there are several products, like Cisco’s Application Centric Infrastructure, that can do that for you. These are enforcement tools that can implement tighter and tighter security segmentation, down to the server NIC or vNIC. The problem is figuring out what rules to put into these micro-segmentation tools. Prior to Tetration, nobody had a good answer to this. The issue is that without a current ADM it’s hard to figure out what you can block, because you don’t know what you need open. In steps Tetration.

Once Tetration builds the initial ADM, you have the option to automatically generate a white-list. Think of the ADM as the original (what needs to talk); the white-list is the negative of this. Since Tetration knows everything that needs to be open for your production apps, it can convert this into the list of things to block, using your existing tools or new-fangled fancy micro-segmentation tools.

Auditing and Compliance:

Auditing and compliance regulations are always a time, money, and frustration challenge, but they’re necessary. There are two issues with traditional audit methodologies that Tetration helps with. Auditing is typically done by pulling configurations from multiple devices (network, security, etc.) and then verifying that the security rules in those devices meet the compliance requirements.

The two ways Tetration helps are centralization (a single source of truth) and real-time accuracy. Because Tetration is viewing all transactions on the network, it can be the tool you audit against. This alleviates the need to pull information from multiple different devices in the data center, and streamlines the audit process significantly, from both a collection and a correlation perspective.

What I find the more interesting aspect is that using Tetration as the auditing tool lets you audit reality, rather than theory. Let me explain: when you do a traditional audit, you’re looking at the configuration of rules in security devices, and making the assumption that those rules and devices are doing their job and nobody’s gotten around them. On the other hand, when you do your audit using Tetration, you’re auditing against the real-time traffic flows in your data center: ‘the reality.’

Policy impact simulation:

One of the things the Tetration appliance does as it collects data is extremely relevant to the following two use-cases. As the appliance ingests data, it may receive multiple copies of the same transaction. Think server A talking to server B across switch Z, with all three reporting that transaction. As this occurs, the appliance de-duplicates this data and stores one master copy of every transaction down to the cluster file system. This means the appliance keeps a historical record of every transaction in your data center. Don’t start worrying about space yet; remember this is all metadata (data about the data), not payload information, so it’s very lightweight.
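Conceptually, the de-duplication step might look like the sketch below: key each report on its flow metadata and keep one master copy, no matter how many sensors saw the transaction. The record fields are invented for illustration, not Tetration's actual schema.

```python
# Collapse duplicate sensor reports of one transaction to a single record.
def dedupe(records):
    master = {}
    for rec in records:
        key = (rec["src"], rec["dst"], rec["port"], rec["ts"])
        master.setdefault(key, rec)  # first report wins; later copies are dropped
    return list(master.values())

# Server A, server B, and switch Z all report the same transaction.
reports = [
    {"src": "a", "dst": "b", "port": 443, "ts": 1718000000, "seen_at": "server-a"},
    {"src": "a", "dst": "b", "port": 443, "ts": 1718000000, "seen_at": "switch-z"},
    {"src": "a", "dst": "b", "port": 443, "ts": 1718000000, "seen_at": "server-b"},
]
print(len(dedupe(reports)))  # 3 sensor reports -> 1 stored transaction
```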

The first thing the appliance can do with that historical record is policy simulation. Picture yourself as the security team wanting to implement a new security rule. You always have one challenge, the possibility of an adverse effect of a rule you implement. How do you ensure you won’t break something in production if you don’t have full visibility into real-time traffic? The answer is, you don’t.

With Tetration, you do. Tetration’s policy impact simulation allows you to model a security change (FW rule, ACL, etc.) and then have the system do an impact analysis. The system assesses your proposed change against the historical transaction records and lets you know the real-world implications of making that change. I call this a ‘parachute’ for new security policies. Rather than waiting for a change window, hoping the rule works, and rolling it back if it breaks something, you can simply test against real traffic first.
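In spirit, the impact analysis is a replay of history against the proposed change. Here's a toy version, with an invented record and rule format:

```python
# Historical transaction records (hypothetical metadata).
history = [
    {"src": "web-01", "dst": "db-01", "proto": "tcp", "port": 1521},
    {"src": "jump-01", "dst": "db-01", "proto": "tcp", "port": 22},
]

proposed_rule = {"dst": "db-01", "proto": "tcp", "port": 22}  # block SSH to the DB

def impact(rule, records):
    """Return the historical flows a proposed block rule would have broken."""
    return [r for r in records if all(r[k] == v for k, v in rule.items())]

for hit in impact(proposed_rule, history):
    print(f"would break: {hit['src']} -> {hit['dst']} {hit['proto']}/{hit['port']}")
```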

Historical Forensics:

As stated above, Tetration maintains a de-duplicated copy of every transaction that occurs in the data center. On top of that unique source of truth, they’ve built both an advanced, granular search capability and a data center ‘DVR’. What this means is that you can go back and search anything that happened in the data center post Tetration installation, or even play back all transaction records for a given period of time. This is an extremely powerful tool in the area of security forensics.

Summary:

Tetration is a unique product with a wide range of features. It’s a purpose-built data center analytics appliance providing visibility and granular control not formerly possible. If you have more time than sense, feel free to learn more by watching these whiteboard videos I’ve done on Tetration:

Overview: https://youtu.be/bw-w3T7JN-0

App Migration: https://youtu.be/KkehzpCXL70

Micro-segmentation: https://youtu.be/fIQhOFc5h2o

Real-time Analytics: https://youtu.be/iTB6CZZxyY0

** Disclaimer **

I work for Cisco, with direct ties to this product, therefore you may feel free to consider this post biased and useless.

This post is not endorsed by Cisco, nor a representation of anyone’s thoughts or views other than my own.

** Disclaimer **

The Data Center Network OS – Cisco Open NXOS

The Insieme team (INSBU) at Cisco has been working hard for three years bringing some major advances to Cisco’s Nexus portfolio. The two key platforms we’ve developed are Cisco Application Centric Infrastructure (ACI) and the Nexus 9000 data center switching platform. One of the biggest projects and innovations we’ve focused on is the operating system (OS) itself on the Nexus 9000 and 3000 platforms (more platforms to follow.) This OS is known as Cisco open NXOS. The focus here is on open programmability of network functionality on a device-by-device or system level basis.

Lots of features and supported tools have been baked into the OS to produce the industry’s leading OS for network automation and programmability. These tools drive faster time-to-market for new applications and services, better integration with automation and orchestration systems, and lower Operating Expenses (OpEx).

The overall goal is customer choice. Networking and Software Defined Networking (SDN) are not one-size-fits-all technologies. The needs of cloud providers, carrier networks, enterprises, commercial customers, financials, etc. are all different. With that in mind, we built in the most popular existing and emerging tools to provide customers the ability to use the tool(s) that best support their operational model, all while being built on the foundation of Cisco ACI, the industry’s leading SDN solution.
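As a small taste of that programmability, here's a hedged sketch of driving a Nexus switch through NX-API's JSON-RPC interface from Python. It assumes NX-API has been enabled on the switch ('feature nxapi'); the address and credentials are placeholders.

```python
import requests

url = "https://192.0.2.10/ins"  # NX-API endpoint; placeholder switch address
payload = {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show version", "version": 1},
    "id": 1,
}

resp = requests.post(
    url,
    json=payload,
    headers={"content-type": "application/json-rpc"},
    auth=("admin", "password"),   # placeholder credentials
    verify=False,                 # lab only: skips certificate validation
)
print(resp.json()["result"]["body"])  # structured data back, no screen-scraping
```

The point isn't this one call; it's that the same structured interface is available for configuration, monitoring, and integration with whatever automation tooling you already run.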


Rather than bore you with my babble, I’ll let the experts show you what we’ve developed. The video below is a Demo Friday with SDxCentral. Two of my colleagues, Technical Marketing Engineer Nicolas Delecroix and Product Manager Shane Corbin, put this content and demo together for your viewing pleasure.

https://www.sdxcentral.com/network-programmability-cisco-demofriday/

 

For another look and some more info see the video the folks at Cisco TechWise TV put together here: http://www.cisco.com/web/learning/le21/onlineevts/offers/twtv/en/twtv176/preview.html

Application Centric Infrastructure – The Platform

** I earn a paycheck as a Principal Engineer for the Cisco business unit responsible for ACI, Nexus 9000, and Nexus 3000. This means you may want to simply consider anything I have to say on the subject biased and/or useless. Your call. **

I recently listened through a 30-minute SDN analysis from Greg Ferro, who considers himself a networking industry expert. In it he goes through his opinions of several SDN approaches and provides several guesses as to where things will land. One of the things that struck me during the recording was that he describes Cisco’s Application Centric Infrastructure (ACI) as a platform for lock-in. While he’s right on the platform part, he’s missing the mark on the lock-in. This is not an attack on Greg; if someone who considers himself an in-the-know expert on the subject still believes this, maybe we haven’t gotten the message out in the right way. Let’s try to rectify that here.

What is ACI:

This is a bit of a tough question, because ACI really is an application connectivity platform that can fill several roles.

ACI – A Policy Automation Engine:

Let’s start with a quick definition of policy for these purposes: policy is the set of requirements placed on data as it traverses the network. Policy can be thought of with two mindsets. Some examples of each are below; they are in no particular order, and the two lists are not intended to match up item by item.

Business/Application Mindset: Security, Compliance, Governance, Risk, Geo-dependency, Application Tiers, SLAs, User Experience

Infrastructure/Operations Mindset: VLAN, Subnet, Firewall Rule, Load-Balancing, ACL, Port-config, IP/MACsec, Redundancy

The application of policy on the network has two major problems:

1. No direct translation between business requirements and infrastructure capabilities/configuration.
2. Today it’s manual configuration on disparate devices, typically via a CLI.

As a policy automation engine, ACI looks to alleviate that by mapping application-level policy language, like the first list, to automated infrastructure provisioning using the constructs in the second list. In this usage ACI can be deployed in a much different fashion than the network fabric it’s more traditionally thought of as.

The lack of understanding around this model for ACI revolves around three misconceptions, each addressed in the sections below.

With that in mind, let’s take a look at deploying ACI into an existing data center switching infrastructure as a policy automation appliance.

 

ACI Requires all Nexus 9000 Hardware:

ACI does not require all devices to be Nexus 9000 based. ACI is based on a combination of hardware and software, a balanced approach of what is done best where. Software control systems provide agility, while hardware can provide acceleration for that software, along with performance, scale, etc. Because of this, ACI has a small set of controller and switching components. This hardware does not need to replace any other hardware; it can simply be added with a natural data center expansion, then integrated into the existing network. In fact, this is the way the majority of ACI customers are deploying ACI today.

Here is a breakdown of the minimal ACI requirements for a production environment: a controller cluster, 2x spine switches, and 2x leaf switches.

That is the complete initial and final requirement to simply use ACI as a policy automation engine. From there, policy on any network equipment can be automated. The system can then optionally scale with more Nexus 9000 switches, or other switching solutions can be used for connectivity.

Integrating ACI with other switching products:

First, we need to understand where policy is enforced on a traditional network. For reference take a look at the graphic below.

On a traditional data center network we group connected objects by using Layer 2 VLANs, providing us approximately 4000 groups or segments depending on implementation. Within these VLAN groups most, if not all, communication is allowed by default (no policy is enforced). We then attach a single subnet, a Layer 3 object, to the VLAN. When traffic moves between VLANs it is also moving between subnets, which requires a routed point. This default gateway is where policy is enforced. Many devices can act as the default gateway, and it is the most typical policy enforcement point.

The way traditional networks handle this policy enforcement point is at the Aggregation Layer of a physical or logical 3-tier network design. The diagram below depicts this.

Depending on design, the L3 boundary (default gateway) may be a router or an L4-7 appliance such as a firewall. This is the policy enforcement point within traditional network designs. For traffic to move between groupings/segments (VLANs), it must traverse this boundary where policy is enforced. It is exactly this point where ACI is inserted as a Policy Automation Engine, integrated with the existing infrastructure. The diagram below shows one example of this.

[Diagram: minimal ACI components attached to the existing network at the aggregation layer]

In the diagram you’re attaching the minimal production ACI requirements: Controller cluster, 2x spine switches, and 2x leaf switches to the existing network at the aggregation layer (green links depict new links.) From there the only requirement to utilize ACI as the policy automation engine for the entire network is to trunk the VLANs to the ACI leaf, and move the default gateway (the policy enforcement point as shown above.) ACI can now automate policy for the network as a whole.

This can be used as a migration strategy, or a permanent solution. There is no requirement to migrate all switches to Nexus 9000 or even Cisco switches up front or over time. A customer can easily maintain a dual-vendor strategy, etc. while utilizing ACI. Many benefits can be found by implementing with Nexus 9000 as needed, but that is always a decision based on the pros and cons seen by a given organization.

The diagram above shows a logical depiction of how ACI would be added to an existing network while automating policy enforcement for the entire network. The diagram below shows the same design in a different fashion, which helps visualize ACI as a service appliance automating policy. The only additional changes are that the firewalls have been re-cabled to ACI leaf switches for traffic optimization, and a second pair of ACI leaf switches has been added for visual symmetry.

[Diagram: ACI depicted as a service appliance automating policy, with firewalls re-cabled to ACI leaf switches]

In this model ACI is not 'managing' the existing switching; it is automating policy network-wide. Policy is the most frequent change point, if not the only one, and is therefore the point that requires automation. The existing infrastructure is in place, configured, and working; there is no need to begin managing it in a different fashion, because that is not where agility comes from.

 

ACI’s Integration with L4-7 service devices requires management of those devices:

This is one of the more interesting points of the way ACI integrates with other systems. With most solutions, when you add a control/management system it takes complete control of all devices and needs them at a default or baseline configuration. ACI operates on a different control model, which allows it to integrate with and pass instructions to a device without fully managing it. This means ACI can integrate with L4-7 devices already in place, configured, and in use. ACI makes no assumptions about existing configuration; it simply passes commands to be implemented on those devices for new configurations made in ACI. Additionally, these commands are implemented natively on the device.

What this means is that there is no lock-in at all by integrating ACI with existing devices, or adding new virtual/physical appliances and integrating them with ACI. To put this more succinctly:

- ACI can automate policy on L4-7 devices that are already in place, configured, and in use.
- ACI pushes new configuration natively; it makes no assumptions about, and does not disturb, what is already there.
- Because the configuration is native to the device, removing ACI does not strand or obscure it.

Summary:

ACI is a platform for the deployment of applications on the network, not a platform for lock-in. In fact it is designed as a completely open system using standards-based protocols, open APIs northbound and southbound, and open 3rd-party device integration. It can be used as:

- A complete data center network fabric built on Nexus 9000 switches.
- A policy automation engine integrated with an existing, even multi-vendor, network.
- A migration path from the latter to the former, at whatever pace makes sense.

There is plenty of information available on ACI; take some time to get an idea of what it can do for you.

The Power of Fully Open Programmability With Cisco ACI

** Disclaimer: I work as a Principal Engineer for Cisco focused on ACI and Nexus 9000 products. Feel free to assume I have bias if you wish. **

 

One of the many things that make me passionate about Cisco Application Centric Infrastructure is the fully open, programmable nature of the product. Unlike competitive products that claim a programmable API and then hide the important stuff behind licensing, royalties, or other add-on software you have to buy, Cisco ACI fully exposes all system functionality, including the full object model and a RESTful API with XML and JSON bindings. This means that anything the system can do, you can code to. As an example, the GUI and CLI that ship with the product use the same API that is exposed to everyone, with no additional hooks or capabilities.
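As a small illustration, here is a minimal sketch of hitting that same API directly with nothing but Python and the requests library: log in via the documented aaaLogin endpoint, then run a class-level query for every tenant. The APIC address and credentials are placeholders.

```python
# Log in to the APIC and list every tenant using the same REST API the GUI uses.
import requests

APIC = 'https://apic.example.com'  # placeholder address
s = requests.Session()

# POST credentials to aaaLogin; the session cookie carries the auth token
s.post(APIC + '/api/aaaLogin.json', verify=False,
       json={'aaaUser': {'attributes': {'name': 'admin', 'pwd': 'password'}}})

# Class-level query: every fvTenant object in the system, returned as JSON
resp = s.get(APIC + '/api/class/fvTenant.json', verify=False)
for obj in resp.json()['imdata']:
    print(obj['fvTenant']['attributes']['name'])
```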

To many people this may not mean much on the surface, because they don't work with code. That said, whether you're at home on a CLI or a GUI, native programmability can help you with your day job, whether or not you want to dabble in writing code. Obviously you can home-grow anything to automate or simplify day-to-day tasks, but if you don't live in a world of code and compilers you can simply leverage the community through Cisco DevNet or GitHub. Plenty of applications and use-cases are already freely available, with contributions happening daily. This means you can download what you want, tweak it, or use it as-is.

One specific set of tools piqued my interest and prompted this blog. Michael Smith, one of Cisco's Distinguished Engineers, developed the 'ACI Tool Kit' (http://acitoolkit.readthedocs.org/en/latest/index.html) along with the help of some other engineers. ACI provides an abstracted object model of network, security, and service functionality; this is the basis for the overall programmability of the architecture. Michael's tool kit exposes a subset of the overall functionality of ACI in a way that is more consumable for day-to-day use. You can think of it as an introduction, but the functionality is far greater than that. Basically it's a fast track to getting the most common workflows rolling quickly.
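To give a feel for the toolkit, here is a minimal sketch of the same kind of tenant/application/EPG policy shown earlier, expressed through acitoolkit objects instead of raw JSON. Names and credentials are placeholders.

```python
# Build a tenant, application profile, and EPG as Python objects, then let
# the toolkit serialize the tree to JSON and POST it to the APIC.
from acitoolkit.acitoolkit import Session, Tenant, AppProfile, EPG

session = Session('https://apic.example.com', 'admin', 'password')
session.login()

tenant = Tenant('prod')
app = AppProfile('webapp', tenant)
web = EPG('web', app)  # endpoint group for the web tier

resp = session.push_to_apic(tenant.get_url(), tenant.get_json())
print(resp.ok)
```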


I won’t belabor the description of the Tool Kit because it’s well documented via the link above. Better yet, because all of the documentation is auto-generated from the code itself, it’s always up-to-date. Instead let’s take a deeper look into some examples of applications and use-cases for them already available.

ACI Endpoint Tracker (http://acitoolkit.readthedocs.org/en/latest/endpointtracker.html):

Ever wonder what's attached to your data center network as a whole? Wonder what moves where, what attaches or detaches? Within a traditional network environment that's a tough piece of information to gather, much less track and keep updated. Within ACI, that story changes completely. The brains behind an ACI fabric is the Application Policy Infrastructure Controller (APIC), which is always aware of everything attached to the fabric. This means an ACI fabric natively has all of the information described above; the trick is getting it out and working with it. In steps ACI Endpoint Tracker.

In a nutshell, ACI Endpoint Tracker subscribes to a websocket on the APIC, which pushes endpoint information to a MySQL database where it is stored and can be queried. Endpoint Tracker then provides a set of tools to look at and analyze that data. You can run direct database queries or use the GUI front-end Mike developed. An example of that front-end is pictured below.

[Screenshot: Endpoint Tracker GUI front-end]

This provides a quick, easy interface to dig out information on which endpoints are attached, when they attached, when they detached, etc. You can also search based on date/time, MAC, IP, etc. for any given tenant, app, or group. This provides some pretty powerful analytics. Better yet, you can take what's there and extend it. Have a pretty static environment and want to be alerted when new devices attach? No problem (see the sketch below). Want to see what was connected for a specific tenant at midnight on Christmas? No problem.
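As a sketch of that "alert me when something new attaches" idea, the toolkit exposes the same event subscription Endpoint Tracker itself rides on. The credentials are placeholders, and the exact attributes available on an endpoint event may vary by toolkit version.

```python
# Subscribe to endpoint events over the APIC websocket and react as they arrive.
from acitoolkit.acitoolkit import Session, Endpoint

session = Session('https://apic.example.com', 'admin', 'password')
session.login()

Endpoint.subscribe(session)
while True:
    if Endpoint.has_events(session):
        ep = Endpoint.get_event(session)
        # Swap this print for email, chat, or ticketing integration
        print('endpoint event:', ep.mac, ep.ip)
```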

The best part is your controller doesn't take any performance hit for doing this, because it's not doing it. The information is pushed to the MySQL database in real time, and all of the queries done by ACI Endpoint Tracker hit that database server. This means retention, query performance, etc. are all up to you, based on the disk and compute capacity you want to dedicate. To play with the GUI front-end using some example data just hit this link: ACI Endpoint Tracker GUI Demo.
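That midnight-on-Christmas question is then just a SQL query away. A minimal sketch is below; I'm assuming a database named 'acitoolkit' with an 'endpoints' table and these column names, so check your install's actual schema before running it.

```python
# Query the Endpoint Tracker database directly for one tenant's history.
import pymysql

db = pymysql.connect(host='localhost', user='root',
                     password='password', database='acitoolkit')
with db.cursor() as cur:
    # Everything that attached in tenant 'prod', most recent first
    cur.execute("SELECT mac, ip, epg, timestart FROM endpoints "
                "WHERE tenant = %s ORDER BY timestart DESC", ('prod',))
    for mac, ip, epg, seen in cur.fetchall():
        print(mac, ip, epg, seen)
```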

ACI Lint (http://acitoolkit.readthedocs.org/en/latest/acilint.html):

The ACI Lint tool is used for static analysis of your ACI fabric. Basically it's a configuration analysis tool that checks against several policies and reports configuration that could be problematic, much like the static analysis lint performs on C code. It provides a list of configuration warnings/errors that should be looked into: orphaned objects, stale config, and the like. The code is also designed to be extensible with custom rules. Mike uses a compliance check as an example. ACI can attach 'tags' to any object, so in an environment requiring compliance you could use 'secure' and 'nonsecure' tags. ACI Lint can then check that every object is tagged, and separately check that no secure group is configured to communicate with a nonsecure group. This is just one example; the possibilities are endless. The beauty here is that these aren't misconfigurations or system errors; Lint is checking for inconsistencies in configured objects.
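To illustrate the logic of that compliance rule (this is a standalone toy, not ACI Lint's actual rule interface), a sketch might look like this:

```python
# Toy version of the secure/nonsecure compliance check: every EPG must carry
# exactly one tag, and no contract may join 'secure' to 'nonsecure'.
TAGS = {'secure', 'nonsecure'}

def check_tags(epgs):
    """epgs: dict of EPG name -> set of tags on that EPG."""
    for name, tags in epgs.items():
        if len(tags & TAGS) != 1:
            print('warning: %s is not tagged secure or nonsecure' % name)

def check_contracts(epgs, contracts):
    """contracts: list of (epg_a, epg_b) pairs allowed to communicate."""
    for a, b in contracts:
        if epgs[a] != epgs[b]:  # tag sets differ: secure talking to nonsecure
            print('error: %s %s may talk to %s %s' % (a, epgs[a], b, epgs[b]))

epgs = {'web': {'secure'}, 'app': {'secure'}, 'guest': {'nonsecure'}}
check_tags(epgs)
check_contracts(epgs, [('web', 'app'), ('web', 'guest')])  # flags web<->guest
```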

Cableplan Application (http://acitoolkit.readthedocs.org/en/latest/cableplan.html):

Anyone who's worked in networking long enough has had some experience with cabling problems. The Cableplan app lets you take existing cable plans and match them against the running cable plan imported from the APIC. This is a quick and easy way to ensure that the intended cabling stays consistent, or to verify the Layer 1 topology before moving up the stack with troubleshooting. See if your network virtualization solution can help you troubleshoot the network stack in any similar fashion.

Snapback: Configuration Snapshot and Rollback (http://acitoolkit.readthedocs.org/en/latest/snapback.html):

Humans aren't perfect; we all make mistakes (myself excluded, of course). Those mistakes can mean downtime when we're talking network configuration. Because of that, this is my favorite of the tools Mike built. The configuration of an entire ACI physical and virtual network is basically text, formatted as XML or JSON, so it can easily be imported and exported. Using this functionality and ACI's programmability, Mike built a tool that can snapshot the entire network (not just the virtual overlay) and roll it back if and when needed.
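The snapshot half of that is simple enough to sketch with the raw API: the query below asks the APIC for the full configured (not runtime) state of the fabric as one JSON document. Snapback layers scheduling, versioning, diffing, and rollback on top of this idea. The APIC address and credentials are placeholders.

```python
# Pull the fabric's configuration as one JSON document, saved with a timestamp.
import time
import requests

APIC = 'https://apic.example.com'
s = requests.Session()
s.post(APIC + '/api/aaaLogin.json', verify=False,
       json={'aaaUser': {'attributes': {'name': 'admin', 'pwd': 'password'}}})

# rsp-prop-include=config-only strips runtime state, leaving restorable config
snap = s.get(APIC + '/api/mo/uni.json'
             '?rsp-subtree=full&rsp-prop-include=config-only', verify=False)

fname = 'snapshot-%s.json' % time.strftime('%Y%m%d-%H%M%S')
with open(fname, 'w') as f:
    f.write(snap.text)
print('wrote', fname)
```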

Mike provides several usage examples in the documentation linked above.

Summary:

These are just a few examples of what is freely available online right now. More importantly, this is just a regurgitation of the documentation at the link at the top of this post. Don't take my word for it: download the code, dig into the documentation, and play. Once you've done that, CONTRIBUTE!

ACI all the things!