Micro-segmentation: What, Why, How?

There’s a lot of buzz around the term micro-segmentation (uSeg), and I thought I’d take some time to demystify it, starting with some history. If you’re more of a visual learner, skip to the end and check out the video.

uSeg has its roots in zero-trust thinking and architectures. At the most basic level, the idea is to greatly enhance security models that rely primarily on perimeter security implementations, such as firewalls.

The reason for this is simple: if you rely solely on perimeter security, you are completely exposed when (not if) the perimeter is breached. The graphic below depicts this.

In the graphic, a single penetration of the firewall can lead to a compromised server or workload, which then becomes the attacker with no security left to stop it.

Architectures attempting to enhance perimeter security have been implemented using firewalls as a funnel for all traffic and VLAN Access Control Lists (VACLs), among other similar techniques.

The failure of these attempts comes down to four things:

  1. Visibility: limited knowledge of what traffic can/can’t be blocked.
  2. Cost: firewall hardware, etc.
  3. Manageability: there’s no good way to manage that many distributed firewall rules or ACLs.
  4. Complexity: any way you slice it, this is complex, and complexity kills agility while adding risk and cost and reducing manageability.

Micro-segmentation spins the conversation back up in a new format. The reason it has created so much buzz is that the tools have caught up to the point where we can reduce or eliminate the four problems above.

Technologies including big data, SDN, and advanced automation have matured enough to provide frameworks to accomplish granular segmentation at a micro, or even nano, level (another term some use).

The advantage of this level of segmentation is depicted below. In the graphic a penetration of the perimeter security compromises a host or workload, but malicious traffic from that host is blocked by micro-segmentation zones. This prevents the attack from propagating further.

As the graphic depicts, micro-segmentation should not be looked at as a replacement for perimeter security; instead, it is an enhancement. Micro-segmentation provides advanced security within the secure perimeter, and in some cases can simplify, not replace, the perimeter security architecture.

In many cases a third layer of security is also implemented: a layer of ‘macro-segmentation.’ Macro-segmentation can be used as a starting point for micro-segmentation, deployed in conjunction with it, or ignored if not required.

The macro-segmentation layer provides segmentation between large, static groups. Great examples are compliant vs. non-compliant, and development life-cycle stages (dev, test, prod, etc.).

Macro-segments can be deployed on a much wider variety of devices due to the reduced need for granularity and change. Typically macro-segmentation is deployed using Software Defined Networking (SDN) solutions.

The two primary requirements for macro-segments are broad scope and a limited change rate. The reason for this is the broader range of solutions they will be deployed in. In general, the more granular the scope, or the higher the change rate, the more automated the platform will need to be.

In the next graphic we see the three layers of security operating together. Each layer expands on the last, becoming more granular and enhancing protection.

Micro-segmentation is the most granular of the three layers, and there are many options for how to address these segments. Micro-segments can be built around workloads (Server, VM, Container), applications (www.onisick.com, WordPress, Oracle), or traffic flows themselves (TCP X and UDP Y to IP Z). The best workload protection tools in this space offer the ability to do all three.
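To make those three approaches concrete, below is a rough sketch of how such policies might be expressed and evaluated. The schema, field names, tags, and addresses are hypothetical, purely for illustration, and not any vendor’s actual API.

```python
# Hypothetical micro-segmentation policies -- schema and values are made up.
from dataclasses import dataclass

@dataclass
class SegmentPolicy:
    name: str
    match: dict    # attributes the segment is built around
    action: str    # "allow" or "deny"

policies = [
    # Workload-based: everything carrying a given VM/container tag
    SegmentPolicy("web-tier-vms", {"workload_tag": "web-tier"}, "allow"),
    # Application-based: traffic belonging to a named application
    SegmentPolicy("wordpress-app", {"application": "WordPress"}, "allow"),
    # Flow-based: a specific protocol and port to a specific destination
    SegmentPolicy("db-flow",
                  {"proto": "tcp", "dst_port": 1521, "dst_ip": "10.0.20.5"},
                  "allow"),
]

def is_allowed(flow: dict) -> bool:
    """Default-deny: a flow passes only if every match key of some policy agrees."""
    for p in policies:
        if all(flow.get(k) == v for k, v in p.match.items()):
            return p.action == "allow"
    return False

print(is_allowed({"proto": "tcp", "dst_port": 1521, "dst_ip": "10.0.20.5"}))  # True
print(is_allowed({"proto": "udp", "dst_port": 53, "dst_ip": "10.0.20.5"}))    # False
```

The point of the sketch is simply that the same enforcement engine can key off workload identity, application identity, or raw flow tuples, which is why tools that support all three give you the most flexibility.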

The ability to use various segmentation methods in parallel is important. Every environment will have different security needs. Moreover, within every environment, different applications, data, and workloads will have different needs. Having options allows you to fine-tune cost, time-to-deploy, and security risk accordingly.

The most critical thing to account for as you deploy granular segmentation will be change rate. Many tools can enforce micro-segments, very few can handle authorized change at a rate that doesn’t impact business agility.

Connectivity in a data center tends to change rapidly; static, non-automated micro-segmentation will quickly create outages based on authorized change. A great example of this is software patching.

Software patches often modify the TCP/UDP port(s) used by the application or operating system (OS). If this occurs in an environment where micro-segmentation is tightly deployed, that port change can cause outages.

The old port remains open while the new, required, port is blocked by now-outdated segmentation. Manual remediation processes for this type of thing take 48-72 hours. That will not be nearly fast enough in a micro-segmented world. This is shown below.
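A minimal illustration of that failure mode, with made-up port numbers, follows. The point is not the code itself but the sequence: the rule was correct when written, the patch was authorized, and the outage happens anyway because the policy is static.

```python
# Illustrative only: a static micro-segmentation rule going stale after a patch.
allowed_inbound = {("tcp", 8080)}   # rule written when the app listened on tcp/8080

def filter_flow(proto: str, port: int) -> str:
    return "allow" if (proto, port) in allowed_inbound else "deny"

print(filter_flow("tcp", 8080))   # allow -- pre-patch traffic

# An authorized software patch moves the service to tcp/8443.
# The old, now-unused port stays open while legitimate traffic on the new port
# is dropped by the outdated rule.
print(filter_flow("tcp", 8443))   # deny -- authorized change causes an outage

# An automated platform would regenerate the rule from the workload's current
# state instead of waiting days for a manual change ticket.
allowed_inbound = {("tcp", 8443)}
print(filter_flow("tcp", 8443))   # allow
```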

Micro-segmentation is a security architecture that should be explored and assessed by organizations of all sizes and types. The level of granularity required, speed-to-deploy, etc. will vary.

For another take on this topic, check out the video below that I produced.

Passwords Are Doomed: You NEED Two-Factor Authentication


How many people use passwords of eight characters or fewer, with the first letter capitalized and the last characters being numbers? People are predictable, and so are their passwords. To make things worse, people are lazy and tend to use the same passwords for just about everything that requires one. A study from the DEFCON hacker conference stated, “with $3,000 and 10 days, we can find your password. If the dollar amount is increased, the time can be reduced further.” This means that regardless of how clever you think your password is, it’s eventually going to be crackable as computers get faster, using brute-force algorithms mixed with human probability. Next year the same researchers may state, “with 30 dollars and 10 seconds, we can have your password.” Time is against you.
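Some rough back-of-envelope math shows why the predictable pattern matters. The guess rate below is an assumed figure for illustration, not a measurement from the study.

```python
# Rough keyspace comparison: full 8-character passwords vs. the predictable
# "Capital letter + lowercase + trailing digits" pattern. The guess rate is an
# assumption for illustration only.
guesses_per_second = 1e10                       # assumed offline cracking rate

full_space = 95 ** 8                            # 8 chars from 95 printable ASCII
predictable_space = 26 * (26 ** 5) * (10 ** 2)  # Capital + 5 lowercase + 2 digits

for label, space in (("full 8-char", full_space), ("predictable", predictable_space)):
    print(f"{label:>12}: {space:.2e} candidates, "
          f"~{space / guesses_per_second:,.0f} seconds worst case")
```

Under those assumptions the predictable pattern falls in seconds while the full keyspace takes days of guessing, and the gap only narrows as hardware gets faster.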

Increasing password sizes and mandating different character types help combat this threat; however, humans will naturally fall back on predictable practices as passwords become difficult to remember. It’s better to separate authentication keys into different factors so attackers must compromise multiple targets to gain access. This dramatically improves security, but it doesn’t make it bulletproof, as seen with RSA tokens being compromised by Chinese hackers. Keys can be separated into something you know, something you have, and something you are. The most common two-factor solutions combine something you have and something you know: a known password/PIN paired with a token, CAC/PIV card, or digital certificate. Biometrics is becoming more popular as the technology becomes more affordable.

There are tons of vendors in the authentication market. Axway and Active Identity focus on something you have, offering CAC/PIV card solutions. These can be integrated with door readers to provide access control to buildings along with two-factor access to data. RSA and Symantec focus on hardware or software certificate/token based solutions. These can be physical key chains or software on smartphones and laptops that generate a unique security code every 30 seconds. Symantec acquired VeriSign, the leader in the cloud space, which offers recognizable-image and challenge-and-response type solutions. Symantec took the acquisition further by changing its company logo to match the VeriSign “check,” based on its reputation for cloud security.
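For the curious, the 30-second code those tokens display is typically a time-based one-time password (TOTP, RFC 6238). A minimal sketch of the idea follows; the secret is a made-up example value, not a real provisioning key.

```python
# Minimal TOTP (RFC 6238) sketch -- the mechanism behind most 30-second codes.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # current 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server and the token share the secret; the user supplies password + code.
print(totp("JBSWY3DPEHPK3PXP"))   # example secret -- output changes every 30 seconds
```

Because the server verifies the code against the same shared secret and clock, stealing the password alone is not enough; the attacker also needs the token (or the secret behind it).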

[Images: VeriSign “check” logo, pre-acquisition and post-acquisition]

The consumer market is starting to offer two-factor options to its customers. Cloud services such as Google and Facebook contain tons of personal information and now offer optional two-factor authentication. It’s common practice for financial agencies to use combinations of challenge-and-response questions, known images, and downloadable certificates used to tie machines to accounts. The commercial trend is moving in the right direction; however, common practice for average users is still leveraging predictable passwords. As many security experts have stated, security is only as strong as its weakest link. Weak authentication will continue to be a target as hackers utilize advanced computing to overcome passwords.

More security concepts can be found at http://www.thesecurityblogger.com/

An End User’s Cloud Security Question

I recently received an email with a question about the security of cloud computing environments. The question comes from a knowledgeable user and boils down to ‘Isn’t my data safer on my systems?’ I thought this would be a great question to open up to the wider community. Does anyone have any thoughts or feedback on ‘Gramps’’ question below?

Joe, I'm not a college grad, but a 70 yr old grandfather, that began programming on a Color Computer using an audio tape recorder for storage.  I've written some corporate code for Owens Corning Fiberglas before I retired, so I've been around the keyboard for a while. <grin>  To make a point, notice how you've told me what your email address is, on your blog (see the about page.)  Hackers, and scammers are so efficient, you and I can't even put our actual email out there.  Now, You are in high gear with putting almost your heart and soul on servers that can be anywhere on the planet... even where there are little or no laws (enforced) governing data piracy.  Joe, I'm not trying to pick a fight, no need to, but look at the Wikileaks > etc.  I guess I could cope with using cloud software for doing my things... but can you tell me you are willing to even leave your emails or data files out there too? Somehow, I just feel a whole lot safer having my critical stuff on my flash drive... Talk to me buddy... 

Jim 'Gramps', Hillsboro OH