In many data centers, large and small, there is a history of making short-term decisions that affect long-term design. These may be driven by putting out immediate fires, such as rolling out a new application, expanding an old one, or replacing failed hardware. They may also stem from short-sighted policies, or more commonly from old policies that aren't questioned in the light of new technology. These types of decisions can range from costly to crippling for data center operations…
To see the full post visit NetworkComputing: http://www.networkcomputing.com/private-cloud/231700329
Most private cloud discussions revolve around the return on investment of the architecture. Many discussions begin and quickly end with ROI. The reason is that ROI is very difficult to show in real numbers for any IT investment, but more so when the majority of the costs are soft costs.
ROI is an important factor and can’t be left out of discussions, but it’s not the only factor and likely not the most important factor.
To read the rest see the blog on Network Computing (no registration required): http://www.networkcomputing.com/private-cloud/231601280
For VMworld this year I decided to pack heavy. I spent the week in suits rather than my typical IT attire of polos and jeans or slacks and shirts. There was no particular reason for the change, although the factors were along the lines of: change of pace, stepping it up, and ‘It’s Vegas, baby!’ To say the least, it was an interesting experience being at VMworld in a suit for the first time.
The IT community is typically a dressed-down society: we toil away in data centers, call centers, and cubicles and have no need to dress up. Jeans and a polo are business, and flip-flops put us in business casual. This means a suit is out of the norm. VMworld only amplifies that, as the walking, the back-to-back sessions, and being away from the home office all make the case for casual. Wearing a suit is not necessarily unique, but it is noticeable, especially on the show floor.
The Spinach:
Like spinach for Popeye, the suit had its benefits. I’m a firm believer that you can’t be overdressed (even if I don’t heed that often), and Vegas is no exception. During customer engagements, partner meetings, and vendor video shoots the suit boosted my confidence and professional appearance. It even had benefits on the gaming floor: pit bosses were much more accommodating of special requests, such as opening new tables, raising maximum bets, or lowering minimums, than I recall them being in a t-shirt and jeans. You definitely can’t overdress in Vegas.
The Kryptonite:
Like Kryptonite for Superman, while working the booth or having discussions with vendor engineers I found that the suit downgraded my status as an engineer. By that I mean I had to prove I was technical rather than sales, business development, etc. The immediate assumption of customers while I was at the WWT booth was that I was the sales guy and they needed to find the engineer. It was definitely an interesting experience. I had a lot more high-level sales pitches, marketing fluff, etc. thrown at me while walking the floor than I have in past years. Even more interesting, I did not get harassed by the ‘booth babes’ as much. That brings me to my next point.
Booth Babes:
I’ve always enjoyed the attractive models known as booth babes that many vendors hire to scan badges and attract attention at trade shows. IT is a very male-heavy industry, and I looked at it as harmless marketing. I didn’t, however, think of it from the big-picture perspective. Matt Simmons enlightened me with one of his post-show blogs: http://www.standalone-sysadmin.com/blog/2011/09/seriously-stop-with-the-booth-babes/. The booth babes themselves may be harmless, but the way in which they train us to stereotype women in a booth is not. In the same way my suit identified me as non-technical, booth babes cause us to look at women working trade show booths as non-technical eye candy, and that is a very bad thing. (Quick note: I’m not equating the suit discrimination with sexual discrimination, only drawing a parallel to the way our brains begin to stereotype.) There are some amazing women in IT, and we should be encouraging more to join the ranks, not creating an inhospitable atmosphere.
I encourage you to read Matt’s blog and take part in ‘Operation Eliminate Booth Babes.’
Clouds fail. That’s a fact. But if your company uses business apps that are tied to the availability of public cloud services, you can—and must—take steps to mitigate these failures by getting schooled on a few key factors: service-level agreements (SLAs), redundancy options, application design, and the type of service being used. We’ll outline how these factors affect the availability of your applications in the cloud…
Read my full article in the August issue of Network Computing (For IT by IT) (Requires a free registration, my apologies.)
http://www.informationweek.com/nwcdigital/nwcaug11?k=nwchp&cid=onedit_ds_nwchp
I recently had the privilege to attend and participate in a global technology leadership forum. The forum consisted of technology investors, vendors and thought leaders and was an excellent event. The tracks I focused on were VDI, Big Data, Data Center Infrastructure, Data Center Networks, Cloud and Collaboration. The following are my notes from the event:
VDI:
There was a lot of discussion around VDI and a track dedicated to it. The overall feeling was that VDI has not lived up to its hype over the last few years; while it continues to grow market share, it never reaches the predicted numbers or the boom predicted for it. For the most part the technical experts agreed on the following:
There was some disagreement on whether VDI is the right next step for the enterprise. The split I saw was nearly 50/50, with half thinking it is the way forward and will be deployed at greater and greater scale, and the other half thinking it is one of many viable current solutions and may not be the right 3-5 year goal. I’ve expressed my thoughts previously: http://www.definethecloud.net/vdi-the-next-generation-or-the-final-frontier. Lastly, we agreed that the key leaders in this space are still VMware and Citrix. While each has pros and cons, it was believed that both solutions are close enough to be viable, and that VMware’s market share and muscle make it very possible for them to pull into a dominant lead. Other players in this space were complete afterthoughts.
Big Data:
Let me start by saying I know nothing about big data. I sat in these expert sessions to understand more about it, and they were quite interesting. Big data sets are being built, stored, and analyzed. Customer data, click traffic, etc. are being housed to gather all types of information and insight. Hadoop clusters are being used for processing data, and cloud storage such as Amazon S3 is being utilized as well as on-premises solutions. The main questions were in regard to where the data should be stored and where it should be processed, as well as the compliance issues that may arise with both. Another interesting question was the ability to leave the public cloud if your startup grows big enough to beat the costs of public cloud with a private one. For example, if you have a lot of data you can mail Amazon disks to get it into S3 faster than WAN speed, but to our knowledge they can’t/won’t mail your disks back if you want to leave.
Data Center Infrastructure:
Overall there was an agreement that very few data center infrastructure (defined here as compute, network, storage) conversations occur without chat about cloud. Cloud is a consideration for IT leaders from the SMB to large global enterprise. That being said while cloud may frame the discussion the majority of current purchases are still focused on consolidation and virtualization, with some automation sprinkled in. Private-cloud stacks from the major vendors also come into play helping to accelerate the journey, but many are still not true private clouds (see: http://www.definethecloud.net/the-difference-between-private-cloud-and-converged-infrastructure.)
Data Center Networks:
I moderated a session on flattening data center networks, currently referred to as building ‘fabrics.’ The majority of the large network players have announced or are shipping ‘fabric’ solutions. These solutions build multiple active paths at Layer 2, alleviating the blocked links traditional Spanning Tree requires. This is necessary as we converge our data and ask more of our networks. The panel agreed that these tools are necessary, but that standards are required to push this forward and avoid vendor lock-in. As an industry we don’t want to downgrade our vendor independence to move to a fabric concept. That being said, most agreed that pre-standard proprietary deployments are acceptable as long as the vendor is committed to the standard and the hardware is intended to be standards compliant.
Cloud:
One of the main conversations I had was in regard to PaaS. While many agree that PaaS and SaaS are the end goals of public and private clouds, the PaaS market is not yet fully mature (see: http://www.networkcomputing.com/private-cloud/231300278.) Compatibility, interoperability, and lock-in were major concerns overall for PaaS. Additionally, while there are many PaaS leaders, the market is so immature that leadership could change at any time, making it hard to pick which horse to back.
Another big topic was open standards and open source: OpenStack, OpenFlow, and open source players like Red Hat. With Red Hat’s impressive YoY growth they are tough to ignore, and there is a lot of push for open source solutions as we move to larger and larger cloud systems. The feeling is that larger and more technically adept IT shops will be looking to these solutions first when building private clouds.
Collaboration:
Yet another subject I’m not an expert on but wanted to learn more about. The first part of the discussion entailed deciding what we were actually discussing, i.e. ‘What is collaboration?’ With the term encompassing voice, video, IM, conferencing, messaging, social media, etc., depending on who you talk to, this was needed. We settled on a focus on enterprise productivity tools, messaging, information repositories, etc. The overall feeling was that there are more questions than answers in this space. Great tools exist, but there are no clear leaders. Additionally, integration between enterprise tools and public tools was a topic, involving the idea of ensuring compliance. One of the major discussions was building internal adoption and maintaining momentum. The concern with a collaboration tool rollout is the initial boom of interest followed by a lull and the eventual death of the tool as users get bored with the novelty before finding any ‘stickiness.’

How many people use passwords of eight characters or less, with the first letter capitalized and numbers at the end? People are predictable, and so are their passwords. To make things worse, people are lazy and tend to use the same passwords for just about everything that requires one. A study from the DEFCON hacker conference stated, “with $3,000 and 10 days, we can find your password. If the dollar amount is increased, the time can be reduced further.” This means regardless of how clever you think your password is, it’s eventually going to be crackable as computers get faster, using brute-force algorithms mixed with human probability. Next year the same researchers may state, “with 30 dollars and 10 seconds, we can have your password.” Time is against you.
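The math behind that predictability is easy to sketch. Here's a rough back-of-the-envelope comparison of a truly random eight-character password against the capital-letter-plus-trailing-digits pattern above; the guess rate is an assumed, illustrative number, not a measured benchmark:

```python
# Back-of-the-envelope brute-force comparison (illustrative, not a benchmark).
GUESSES_PER_SECOND = 1e10  # assumed offline attack rate against a fast hash

def keyspace(alphabet_size: int, length: int) -> int:
    """Number of candidates for a uniformly random password."""
    return alphabet_size ** length

# Truly random 8 characters over ~94 printable ASCII symbols
random8 = keyspace(94, 8)

# The predictable pattern: one capital, five lowercase, two trailing digits
predictable8 = 26 * (26 ** 5) * (10 ** 2)

print(f"random 8-char keyspace:      {random8:.2e}")
print(f"predictable 8-char keyspace: {predictable8:.2e}")
print(f"attacker speedup from predictability: {random8 / predictable8:,.0f}x")
print(f"hours to exhaust predictable space: {predictable8 / GUESSES_PER_SECOND / 3600:.2f}")
```

The predictable pattern shrinks the search space by roughly five orders of magnitude, which is exactly why the researchers' dollars-and-days estimates keep falling.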
Increasing password sizes and requiring additional character types helps combat this threat; however, humans will naturally fall back on predictable practices as passwords become difficult to remember. It’s better to separate authentication keys into different factors so attackers must compromise multiple targets to gain access. This dramatically improves security, but it doesn’t make it bulletproof, as seen with RSA tokens being compromised by Chinese hackers. Keys can be separated by leveraging something you know, something you have, and something you are. The most common two-factor solutions combine something you know and something you have: a known password/PIN plus a token, CAC/PIV card, or digital certificate. Biometrics is becoming more popular as the cost of the technology becomes affordable.
There are tons of vendors in the authentication market. Axway and Active Identity focus on something you have, offering CAC/PIV card solutions. These can be integrated with door readers to provide access control to buildings along with two-factor access to data. RSA and Symantec focus on hardware- or software-based certificate/token solutions. These can be physical key chains or software on smartphones and laptops that generate a unique security code every 30 seconds. Symantec acquired the leader of the cloud space, VeriSign, which offers recognizable images and challenge-and-response type solutions. Symantec took the acquisition further by changing their company logo to match the VeriSign “check,” based on its reputation for cloud security.
[Images: VeriSign logo pre-acquisition and post-acquisition]

The consumer market is starting to offer two-factor options to customers. Cloud services such as Google and Facebook contain tons of personal information and now offer optional two-factor authentication. It’s common practice for financial institutions to use combinations of challenge-and-response questions, known images, and downloadable certificates used to tie machines to accounts. The commercial trend is moving in the right direction; however, common practice for average users is still predictable passwords. As many security experts have stated, security is only as strong as the weakest link. Weak authentication will continue to be a target as hackers utilize advanced computing to overcome passwords.
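The rotating 30-second codes mentioned above are typically TOTP (RFC 6238), which is built on the HOTP algorithm (RFC 4226); consumer services like Google's optional two-factor use this scheme. A minimal sketch using only the Python standard library, with a hypothetical shared secret for illustration:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: a fresh code every `period` seconds."""
    return hotp(secret, int(time.time()) // period, digits)

# Hypothetical shared secret, enrolled once between the token and the server
print(totp(b"12345678901234567890"))
```

Because both sides derive the code from a shared secret plus the current time, an attacker who phishes your password still needs the secret (the "something you have") to produce a valid code.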
More security concepts can be found at http://www.thesecurityblogger.com/
Mike Fratto at Network Computing recently wrote an article titled ‘FCoE: Standards Don’t Matter; Vendor Choice Does’ (http://www.networkcomputing.com/storage-networking-management/231002706.)
I definitely differ with Mike’s opinion on the subject. While I’m no fan of the standards-making process (it puts sausage making to shame), or of slowing progress to wait on standards, I do feel they are an absolutely necessary part of FCoE’s future. It’s all about the timing at which we expect them, the way in which they’re written, and most importantly the way in which they’re adhered to.
Mike bases his opinion on Fibre Channel history and accurately describes the stranglehold the storage vendors have had on the customer. The vendor’s Hardware Compatibility List (HCL) dictates which vendors you can connect to, and which models and firmware you can use. Slip off the list and you lose support. This means that in the FC world customers typically went with the Storage Area Network (SAN) their VAR or storage vendor recommended, and stuck with it. While not ideal, this worked fine in the small network environment of a SAN, with its specialized and dedicated purpose of delivering block data from array to server. These extreme restrictions based on storage vendors and protocol compatibility will not fly as we converge networks.
As worried as storage/SAN admins may be about moving their block data onto Ethernet networks, traditional network admins may be more worried because of interoperability. For years network admins have been able to intermix disparate vendors’ technology to build the networks they desired, best-of-breed or not. A load balancer here, a firewall there, a data center switch here, and presto, everything works. They may have had to sacrifice some features (proprietary value-add that isn’t compatible), but they could safely connect the devices. More importantly, they didn’t have to answer to an HCL dictated by some endpoint (storage disk) or another on their network.
For converged networking to work, this freedom must remain. Adding FCoE to consolidate infrastructure cannot lock network admins into storage HCLs and extreme hardware incompatibility. This means that the standards must exist, be agreed upon, be specific enough, and be adhered to. While Mike is correct that you probably won’t want to build multi-vendor networks on day one, you will want the opportunity to incorporate other services and products, migrate from one vendor to another, etc. You’ll want an interoperable standard that allows you to buy third-party FCoE appliances for things like de-duplication, compression, encryption, or whatever you may need down the road. We’re not talking about building an Ethernet network dedicated to FCoE; we’re talking about building one network to rule them all (hopefully we never have to take it to Mordor and toss it into molten lava.) To run one network we need the standards and compatibility that provide us flexibility.
There is no reason for storage vendors to hold the keys to what you can deploy any longer. Hardware is stable, and if standards are in place the network will properly transport the blocks. Customers and resellers shouldn’t accept lock-in and HCL dictation just because that has been the status quo. We’re moving the technology forward; move your thinking forward. The issue in the past has been the looseness with which FC-BB-5 has been written on some aspects since its inception. This leaves room for interpretation, which is where interoperability issues arise between vendors who are both ‘standards based.’ The onus is on us as customers, resellers, and an IT community to demand that the standards be well defined, and that vendors adhere to them in an interoperable fashion.
Do not accept incompatibility and lack of interoperability in your FCoE switching just because we made the mistake of allowing that to happen with pure FC SANs. Next time your storage vendor wants a few hundred thousand dollars for your next disk array, tell them it isn’t happening unless you can plug it into any standards-compliant network without fear of their HCL and loss of support.
I enjoyed a great conversation with Netapp's Vaughn Stewart and Cisco's Abhinav Joshi about FlexPod last week during Cisco Live 2011. Check out the video below.
After sitting through a virtualization sales pitch focused on Virtual Desktop Infrastructure (VDI) this afternoon, I had several thoughts on the topic that seemed blog-worthy.
VDI has been a constant buzzword for a few years now, riding the coattails of server virtualization. For the majority of those years you can search back and find predictions from the likes of Gartner touting ‘This is the year for VDI’ or making similar statements, typically with projected growth rates that never materialize. What you won’t see is those same analyst organizations reaching back the year after and answering for why they over-hyped it, or were blatantly incorrect. (Great idea for a yearly blog here: analyzing the previous year’s failed predictions.)
The reasons they’ve been incorrect vary over the years, starting with the technical inadequacy of the infrastructures and a lack of real understanding as an industry. When VDI first hit the forefront, many of us (myself included) made the assumption that desktops could be virtualized the same as servers (Windows is Windows, right?). What we neglected to account for is the plethora of varying user applications, the difficulty of video and voice, and other factors such as boot storms, which are unique to, or more amplified within, VDI environments than their server counterparts. From there, for a short while, VDI rollout horror stories and memories of failed proofs of concept slowed adoption and interest.
Now we’re at a point where the technology can overcome the challenges and the experts are battle hardened with knowledge of success and failures in various environments; yet still adoption is slow. Users are bringing new devices into the workplace and expecting them to interface with enterprise services; yet still adoption is slow. We supposedly have a more demanding influx of younger generation employees who demand remote access from their chosen devices; yet still adoption is slow. This doesn’t mean that VDI isn’t being adopted, nor that the market share numbers aren’t increasing across the board; it’s just slow.
The reason for this is that our thinking and capabilities for service delivery have surpassed the need for VDI in many environments. VDI wasn’t an end goal but instead an improvement over individually managed, monitored, and secured local end-user OS environments. The end goal isn’t removing the OS’s tie to the hardware on the endpoint (which is what VDI does) but instead removing the application’s tie to the OS; or more simply put: removing any local requirements for access to the services. Starting to sound like cloud?
Cloud is the reason enterprise IT hasn’t been diving into VDI head first. The movement to cloud services has shown that, for many, we may have passed the point where VDI could show a true Return on Investment (ROI) before being obsoleted. Cloud is about delivering the service to any web-connected endpoint on demand, regardless of platform (OS). If you can push the service to my iOS, Android, Windows, Linux, etc. device without the requirement for a particular OS, then what’s the need for VDI?
To use a real-world example: I am a Microsoft zealot. I use Windows 7, Bing for search, and only IE for browsing on my work and personal computers (call me retro). I also own an iPad, mainly due to the novelty and the fact that I got addicted to ‘Flight Control’ on a friend’s iPad at the release of the original. I occasionally use the iPad for what I’d call ‘productivity work’ related to my primary role or side projects. Using my iPad I do things like access corporate email for the company I work for and my own, review files, access Salesforce and Salesforce Chatter, and even perform some remote equipment demos; my files are seamlessly synched between my various other computers. I do all of this without a Windows 7 virtual desktop running on my iPad; it’s all done through apps connected to these services directly. In fact, the only reason I have VDI client applications on my iPad is to demo VDI, not to actually work.
Now, an iPad is not a perfect example. I’d never use it for developing content (slides, reports, spreadsheets, etc.), but I do use it for consuming content, email, etc. To develop, I turn to a laptop with a full keyboard, screen, and monitor outputs. This laptop may be a case for VDI, but in reality, why? If the services I use are cloud based, public or private, and the data I utilize is as well, then the OS is irrelevant again. With office applications moving to the cloud (Microsoft Office 365, Google Docs, etc.) along with many others, and many services and applications already there, what is the need for a VDI infrastructure?
VDI, like server virtualization, is really a band-aid for an outdated application deployment process that ties local applications to a local OS and hardware. Virtualizing the hardware doesn’t change that model, but it can provide benefits such as:
Once the wound of our current application deployment model has fully healed, the band-aid comes off and we have service delivery from cloud computing environments free of any OS or hardware ties.
So, friends don’t let friends virtualize desktops, right?
Not necessarily. As shown above, VDI can have significant advantages over standard desktop deployment. Those advantages can drive business flexibility and reduce costs. The difficult questions will become
Many organizations will still see benefits from deploying VDI today because the ROI of VDI will occur more quickly than the ability to deliver all business apps as a service. Additionally VDI is an excellent way to begin getting your feet wet with the concepts of supporting any device with organizational controls and delivering services remotely. Coupling VDI with things like thin apps will put you one step closer while providing additional flexibility to your IT environment.
When assessing a VDI project, you’ll want to take a close look at the time it will take your organization to hit ROI with the deployment and weigh that against the time it would take to move to a pure service delivery model (if your organization is capable of such). VDI is a fantastic tool in the data center tool bag, but like all the others it’s not the right tool for every job. VDI is definitely the Next Generation, but it is not The Final Frontier.
Additional fun:
Here are some sales statements that are commonly used when pitching VDI, all of which I consider to be total hogwash. Try out or modify a few of my one-line answers next time your vendor is there telling you about the wonderful world of VDI and why you need it now.
Vendor: ‘<Insert analyst here (Gartner, etc.)> says that 2011 is the year for VDI.’ Alternatively ‘<Insert analyst here (Gartner, etc.)> predicts VDI to grow X amount this year.’
My answer: ‘That’s quite interesting, let’s adjourn for now and reconvene when you’ve got data on <Insert analyst here (Gartner, etc.)>’s VDI predictions for the previous 5 years.’
Vendor: ‘The next generation of workers coming from college demand to use the devices and services they are used to, to do their job.’
My answer: ‘Excellent, they’ll enjoy working somewhere that allows that; we have corporate policies and rules to protect our data and network.’ This won’t work in every case: as Mike Stanley (@mikestanley) pointed out to me, universities, for example, have student IT consumers who are the paying customers, and this would be much more difficult in such cases.
Vendor: ‘People want a Bring Your Own (BYO) device model.’
My Answer: ‘If I bring my own device, and the fact that I want to matters, what makes you think I’ll want your desktop? Just give me the application or service.’