Humans, you’re so linear.

We always assume that things will continue on a linear trajectory. In statistics we can project future points on a graph based on previous points. In technology, we have always assumed that because things have always worked a certain way, they will continue to grow as they always have.

When the iPhone was released I was happy with my BlackBerry Curve 8300. I was at Comcast managing a datacenter in California, and I was just fine with what I had. The iPhone was a toy for yuppie kids with too much of daddy's money. Today I have an iPhone, an iPad, a MacBook Pro, and two Apple TVs; my wife has an iPhone and an iPad, two of my three kids have iPod Touches, and one of them has an iPhone. I do more work on my iOS devices than I do on my MacBook. When Steve Jobs and Apple created the iPhone, they didn't assume that a small screen and a QWERTY keyboard were going to be what we all wanted; they went out and built something most of us said we would never buy and changed an industry.

Seth Godin, in his book Tribes, said, "The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow." The real message here, or at least the one I take for the purposes of this post, is that we need to stop thinking that things are how they are and won't change. Things change because we make them.

In his post, Do You Drink The Kool-Aid Or Do You Make It?, Gabriel Chapman points out that we need to be deeply involved in the technology process to be effective at our jobs in technical sales. We can't just drink the Kool-Aid; we can't just take what marketing gives us and accept it as fact. Being hands-on, being able to communicate, and being able to stand up and be honest when something is not the right fit are critical.

So in closing: don't be so linear, don't just drink the Kool-Aid. Get out there, get your hands dirty, understand what you are doing, whether you are in technical sales or any other field, and do what excites you. If you're not passionate about what you're doing, it shows, and you will not be nearly as effective. Don't be afraid to step out of your comfort zone and go in the opposite direction from everyone else; you might just create something amazing.


Changing Direction…Again

The funny thing about life and careers is you never know where you will end up. Alistair Cooke wrote a fascinating article about how random his career has been: http://www.demitasse.co.nz/wordpress2/?p=1174. Recently a number of my friends from around the globe have been asking me about career moves, or for advice, assuming I actually know something or know what I am doing. I have come to realize, similarly to Alistair, that planning my career does not work. I am nowhere near where I thought I would be at any point in my career; I have surpassed my own expectations, not because I am smarter than anyone else, but because I love to learn and I have been very blessed to meet some very intelligent people who have taught me more than I ever thought possible.

Over the past 20+ years I have had the privilege of working as a soldier, in tech support, and as a systems admin, a systems engineer, and a technology architect. Recently I had a conversation with the team at VMware. I was pretty convinced that I didn't want to change jobs, but I do love VMware, and I have spent a great deal of time and effort to understand the strategy, as well as to work with VMware as a partner on many levels.

The interesting thing I learned during my interview process was that I have actually been interviewing for this job for nearly 9 months without realizing it. Interactions with various VMware employees showed them I was interested in VMware as a company, and in helping customers understand more about the solutions. Through the conversation, the team at VMware laid out a strategy and a future which is compelling. The thing that finally sold me, though, was the people. I have a number of friends at VMware, and I follow many on the Tech Marketing team, so I feel like I know what things are like there, but meeting with the local team, getting their perspective, and understanding the vision from their level is what sealed it.

I do want to say, HP is an exceptional company with some amazing products and some of the smartest people I have ever met. I am humbled to say I was a part of the team at HP, and I am equally humbled and excited to be joining the VMware team. I will continue to write my own opinions, and things that interest me. If I have anything to recommend to anyone considering how to improve their current position, or find another, it is the following:

  • Never stop learning
  • Ask questions
  • Find smart people and hang out with them
  • Learn from everyone
  • Give something back to the community
  • Thank-yous go a long way
  • Humility saves you from looking silly
  • Always be polite and helpful; you never know when someone might help you, or when you might be able to help them

So all this to say, this month I will be joining the VMware Health Care team as a Senior Systems Engineer.  I have much to learn, but I have confidence in the team, the product, and the strategy.  I look forward to continuing my journey, and to giving back to the community wherever possible.


Hyper-Convergence: a paradigm shift, or an inevitable evolution?

With the recent article on CRN about HP considering the acquisition of SimpliVity, http://www.crn.com/news/data-center/300073066/sources-hewlett-packard-in-talks-to-acquire-hyper-converged-infrastructure-startup-simplivity.htm, it seems like a good time to look at what SimpliVity does, and why they are an attractive acquisition target.

In order to answer both questions, we need to look at history. First was the mainframe. That was great, but inflexible, so we moved to distributed computing. This was substantially better, and brought us into the new way of computing, but there was a great deal of waste. Virtualization came along and enabled us to get higher utilization rates on our systems, but it required an incredible amount of design work up front, and it allowed the siloed IT department to proliferate since it did not force anyone to learn a skillset outside their own particular area of expertise. This led us to converged infrastructure, the concept that you could get everything from a single vendor, or at the very least support from a single vendor. Finally came the converged system: it provided a single vendor/support solution, packaged as one system, and we used it to grow the infrastructure based on performance or capacity. It was largely inflexible, by design, but it was simple to scale, and predictable.

To solve this inflexibility, companies started working on the concept of hyper-convergence. Basically these are smaller discrete converged systems, many of which create high availability not through redundant hardware in each node, but through clustering. The software lived on each discrete converged node, and it was good. Compute, network, and storage, all scaling out in pre-defined small discrete nodes, enabling capacity planning and allowing fewer IT administrators to manage larger environments. Truly a Software Defined Data Center, but at a scale that could start small and grow organically.

Why then is this interesting for a company like HP? As always, I am not an insider; I have no information that is not public, and I am engaging in speculation based on what I am seeing in the industry. Looking at HP's Converged Systems strategy, and looking at what the market is doing, I believe that in the near future the larger players in this space will look to converged systems as the way to sell. Hyper-convergence is a way forward to address the part of the market that is either too small for traditional converged systems, or that needs something they cannot provide. Hyper-convergence can provide a complementary product to existing converged systems, and will round out solutions in many datacenters.

Hyper-convergence is part of the inevitable evolution of technology. Whether or not HP ends up purchasing SimpliVity, these types of conversations show that such concepts are starting to pick up steam. It is time for companies to innovate or die, and this is a perfect opportunity for an already great company to keep moving forward.


Defining the cloud Part 4: Supported

As I try to bring this series to a close, I want to look at what I would consider one of the final high level requirements in evaluating a cloud solution. In the previous posts, we looked at the cloud as being application centric, self service, and open. These are critical, but one of the more important parts of any technology is support. This is something which has plagued Linux for years. For many of us, Linux and Unix are considered to be far superior to Windows for many reasons. The challenge has been the support. Certainly Red Hat has done a fairly good job of providing support around their Fedora-based Red Hat Enterprise Linux, but that is one distro. Canonical provides some support around Ubuntu, and there are others.

The major challenge with the open source community is just that: it is open. Open is good, but when we look at the broader open source community, many of the best tools are written and maintained by one person or a small group. They provide some support for their systems, but oftentimes that is done as a favor to the community, or for a very small fee; they need to keep day jobs to make that work.

One challenge which seems to be better understood with the cloud, especially around OpenStack, is the need for enterprise support. More and more companies are starting to jump on board and provide support for OpenStack, or their variant of it. This works well, so long as you only use the core modules which are common. In order to make money, all companies want you to use their add-ons. This leads to some interesting issues for customers who want to add automation on top of the cloud, or other features not in the core.

At the end of the day, a compromise must be struck. It is unlikely that most companies will use a single vendor for all their cloud software, although that could make things less challenging in some regards. It comes down to trade-offs, but it is certain that we will continue to see further definition and development around the cloud, and around enterprise support for technologies which further abstract us from the hardware and enable us to be more connected, to use the data which is already being collected, and to use the devices which are being and will be developed for this crazy new world.


Defining the cloud Part 3: Open

Open

This may seem like an odd topic for the cloud, but I think it is important.  One of the questions I have been asked many times when discussing cloud solutions with customers is around portability of virtual machines, and interoperability with other providers.  This of course raises some obvious concerns for companies who want to make money building or providing cloud services and platforms.

We live in a soundbite culture. If it can't be said in 140 characters or less, we don't really want to read it. Hopefully you are still reading at this point; this is way past a tweet. We like monthly services versus owning a datacenter; who wants to pay for the equipment when you can just rent it in the cloud? More and more services are popping up to make it simpler for us to rent houses for a day or a few days, get a taxi, rent a car by the mile, or a bike by the hour. There is nothing wrong with this, but we need to understand the impact. What if each car had different controls to steer? What if there were no standard? How could the providers then create services? It is all based on open and agreed-upon standards.

In order for the cloud to be truly useful, it must be based on standards. This is where OpenStack is the most important. Going far beyond just a set of APIs, OpenStack enables us to have a core set of features that are common to everyone. Of course, in order to make money beyond just selling support for this, many companies choose to add additional features which differentiate them. These additions are not open source, but they are still based on the open framework. For most companies, this still uses open standards such as the REST APIs and other standards-based ways of consuming the service. Even VMware, perhaps the largest cloud software provider, uses standard APIs and supports popular tools for managing their systems.
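
To make "standards-based consumption" a little more concrete, here is a minimal sketch of authenticating against an OpenStack identity (Keystone v3) endpoint with nothing more than HTTP and JSON. The URL, user, project, and password are placeholders for illustration; any real deployment will have its own values.

```python
import requests

# Hypothetical Keystone v3 endpoint and credentials -- placeholders only.
AUTH_URL = "https://cloud.example.com:5000/v3/auth/tokens"

payload = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"name": "Default"},
                    "password": "secret",
                }
            },
        },
        "scope": {"project": {"name": "demo", "domain": {"name": "Default"}}},
    }
}

# Any client that speaks HTTP and JSON can authenticate this way,
# regardless of whose OpenStack distribution is on the other end.
resp = requests.post(AUTH_URL, json=payload)
resp.raise_for_status()

# Keystone returns the token in the X-Subject-Token response header,
# along with a service catalog describing the other available APIs.
token = resp.headers["X-Subject-Token"]
print("Token issued; catalog entries:", len(resp.json()["token"].get("catalog", [])))
```

The point is not the specific call; it is that the same request works against any vendor's distribution that keeps the core APIs intact.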

Open standards, open APIs, and standards-based management features are critical for the cloud. Of course everyone wants you to choose their cloud, but to be honest, most of us consume multiple cloud services at once. I use Dropbox, Box.net, Google Drive, SkyDrive, and a few other cloud storage providers because they all have different use cases for me. I use Netflix and Hulu Plus because they give me different content. Why then should business consumers not use some AWS, some Google enterprise cloud, some HP Public Cloud, and perhaps even some of the other smaller providers? For the cloud to continue to be of value, we will have to adjust to the multi-service-provider cloud, and everyone will have to compete on the best services, the best features, and the best value.


Defining the cloud Part 2: Self Service

In Defining the cloud Part 1: Applications, we discussed how applications are the reason for the cloud, and how we move abstraction from the servers to the applications.  Moving forward we now look at how the cloud should enable users to provision their own systems within given parameters.

Self Service

In the early days of virtualization, we were very excited because we were making IT departments more efficient. I remember IT managers actually telling young server admins to stall when creating virtual servers, to prevent users from realizing how quickly it could be done. What previously took the IT department hours, weeks, or months was now done with the press of a button and a few minutes, assuming proper capacity planning.

IT is often seen as a cost center. For years now we have been preaching the gospel of IT as a Service, basically the concept that technology becomes a utility to the business. Nicholas Carr championed this concept in his book The Big Switch, popularizing the idea that, much like electricity, technology was becoming something that should just work. IT is no longer just for those of us who understand it; rather, it becomes a tool that anyone can use, just like flipping a switch to turn on a light, or turning on the TV.

So how do we make this happen? It is as simple as looking at the smartphone in front of you or in your pocket. The thing that makes your phone so great is not the brand, not the operating system, not even the interface; the most important thing is the application ecosystem. I can go on my phone and grab an app to do just about anything. I don't need to know how the system works; I just grab apps and don't really think about how they interact with the phone.

Imagine giving this to our end users: simply give them a catalog to say what they need. A user wants to build an application, so they go to the catalog, select from a pre-defined template, and the rest is handled by the system. No IT intervention, no human interaction required, just a few simple clicks, almost like grabbing an app on a phone. Their entire virtual infrastructure is built out for them, and they are notified when it is complete.
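
To put a little more flesh on that, here is a minimal sketch of what such a catalog request could look like behind the scenes. The endpoint, template name, and payload are entirely hypothetical; this is not any specific product's API, just an illustration of how little the requester has to know.

```python
import requests

# Hypothetical self-service catalog endpoint -- not a real product API.
CATALOG_URL = "https://cloud.example.com/api/catalog/requests"

# The user picks a pre-defined template and a handful of parameters;
# networks, storage, and policies are all decided by the template itself.
request_body = {
    "template": "three-tier-web-app",   # pre-defined by IT (hypothetical name)
    "environment": "development",
    "owner": "jane.doe@example.com",
    "parameters": {"app_nodes": 2, "db_size_gb": 100},
}

resp = requests.post(CATALOG_URL, json=request_body, timeout=30)
resp.raise_for_status()

# The platform builds the infrastructure asynchronously and notifies
# the user when the deployment is complete.
print("Request submitted, tracking id:", resp.json().get("request_id"))
```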

So what does this all have to do with HP? Stick with me on this: this is the future, this is HP Helion, and this is amazing.


Defining the cloud Part 1: Applications

With the recent launch of HP Helion, and with HP Discover coming in a few weeks, it is a good time to talk about the difference between private cloud and virtualization.

Interestingly enough, most companies assume that because they have virtualized 70-80% of their applications, they have deployed a cloud environment. This is the unfortunate result of marketing and sales people trying to move products or ideas without having an understanding of where we are headed and what is happening in the market. I confess I am guilty of this to some extent; I have written about private cloud being virtualization, which is correct but incomplete. So just what is the difference? Well, that largely depends on who you ask, but here is my take.

Application Centric

Virtualization came about to fill a gap in technology. We were at a point where servers were more powerful than the applications needed, and so there was a great deal of wasted capacity. When VMware started their server virtualization line, the big value was consolidation. It had little to do with applications; it was about server consolidation, datacenter efficiency, and moving the status quo to the next level. The applications were only impacted in that they had to be supported in a virtual environment; high availability, performance, everything else was managed at the virtual server level, similar to how it was managed at the physical server level previously.

In the cloud, abstraction is done at the application level rather than the server level. The cloud rides on server virtualization, but ideally applications should scale out, using multiple servers in a cluster, each doing the same function, with manager nodes to direct traffic. While this may seem less efficient, since there need to be multiple nodes to operate a single application, it actually frees the application from needing to reside in a specific datacenter or on a specific host, and indeed it should span multiple of each. It also enables rapid scaling of applications: rather than adding additional physical CPU or memory to the virtualized systems, you simply spin up additional resource nodes, and then when the peak demand is gone, you can tear them down.
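
As a rough illustration of that elasticity, here is a minimal sketch of the scale-out decision in Python. The thresholds are arbitrary, and provision_node()/destroy_node() are placeholders for whatever API your cloud platform actually exposes; the point is that capacity is added and removed as identical nodes rather than by resizing one big server.

```python
# Minimal sketch of scale-out elasticity: add identical app nodes when
# demand is high, tear them down again when the peak passes.

MIN_NODES, MAX_NODES = 2, 10
SCALE_UP_AT, SCALE_DOWN_AT = 0.75, 0.25   # average utilization thresholds (arbitrary)


def provision_node() -> str:
    # Placeholder: a real implementation would call the cloud platform's API
    # to clone another instance of the same application node.
    return "app-node-new"


def destroy_node(name: str) -> None:
    # Placeholder for the corresponding teardown call.
    print(f"tearing down {name}")


def reconcile(nodes: list, avg_utilization: float) -> list:
    """Return the adjusted node list for the current load."""
    if avg_utilization > SCALE_UP_AT and len(nodes) < MAX_NODES:
        nodes = nodes + [provision_node()]     # scale out, not up
    elif avg_utilization < SCALE_DOWN_AT and len(nodes) > MIN_NODES:
        destroy_node(nodes[-1])                # peak demand has passed
        nodes = nodes[:-1]
    return nodes


# Example: a small cluster under heavy load picks up one more node.
print(reconcile(["app-node-1", "app-node-2"], avg_utilization=0.9))
```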

So the first difference between virtualization and private cloud is the focus on applications rather than infrastructure. As we continue to explore this, I hope to tie this together and help to explain some of these differences.


Lead, follow, or stop trying to sell technology

“Leadership is a choice. It is not a rank.” — http://www.ted.com/talks/simon_sinek_why_good_leaders_make_you_feel_safe/transcript

If you haven't seen the TED Talk on leadership by Simon Sinek, it is a must-see. I do enjoy watching these, but usually they are not this impactful. "Leadership is a choice." So why would anyone choose to lead when it is so much easier to just follow the rules, get promoted, and become a manager? And what does all this have to do with technology and the things that interest me, and hopefully you?

Sitting at breakfast with two friends, one in sales and one in pre-sales, we were talking about what it would take to be really great at our jobs. We came to the conclusion that the great tech sales companies generate excitement. If you remember the '80s, Pontiac had a successful ad campaign around "We Build Excitement." (They did not build excitement, by the way.)

What we need in the IT industry are motivational speakers. No one likes it when someone comes in and does a PowerPoint product pitch. I have actually seen people fall asleep in such sessions. Not ideal. In order to lead, we have to get in and generate some excitement, and show people why the technology is exciting for them. Instead of trying to make products that fit our customers' environments, we need to build the products that our customers don't know they need and get them excited about following us on the journey. We are changing an industry here. Imagine if Steve Jobs had made the phones most people wanted. Instead he came out with a product no one wanted, created a culture around it, and an entire industry was born.

So my challenge to you: look at your vendors if you are a customer, look at your products if you are a vendor. Are you excited about what you are selling or what you are buying? If not, maybe you are not talking to the right people. Don't buy a product because it is safe, don't buy based on what you have always done; take some interest, get excited, and buy into a strategy.


Software Defined Storage Replication – vSphere Replication

When VMware introduced some of the vSphere APIs, some of the more important ones were around storage. I don't say this because I am a storage guy, but more because this is an area that frequently causes problems for virtualization. When VMware released their backup API, features such as Changed Block Tracking (CBT) became particularly compelling. Now vSphere could tell the backup software what had changed since the last backup, rather than relying on the backup catalog to look at everything. This meant fewer reads on the storage, and a more efficient backup.
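
As a rough sketch of how a backup tool consumes CBT, the snippet below uses pyVmomi to ask vSphere which areas of a disk it needs to read. Treat the vCenter address, credentials, VM name, and disk key as placeholders; a real backup product wraps this in much more logic (snapshot management, saving the change ID from the previous run, and so on).

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical lab vCenter and credentials -- placeholders only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM we want to back up (hypothetical name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "demo-vm")

# With CBT enabled, ask which areas of the first virtual disk are in use.
# A backup tool would pass the change ID it saved at the last backup instead
# of "*", and get back only the areas changed since then.
snapshot = vm.snapshot.currentSnapshot    # assumes a snapshot already exists
changes = vm.QueryChangedDiskAreas(snapshot=snapshot,
                                   deviceKey=2000,   # typically the first disk
                                   startOffset=0,
                                   changeId="*")

for area in changes.changedArea:
    print(f"read {area.length} bytes at offset {area.start}")

Disconnect(si)
```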

It was not a huge leap, then, when vSphere Replication was released as a standalone product separate from Site Recovery Manager (SRM), as well as a function of SRM. Prior to this, vSphere would rely on the underlying storage to do the replication, but now replication could be handled by vSphere itself.

One of the challenges with replication has traditionally been bandwidth. To handle this we have used compression, caching, deduplication, and sending only the changed data to the remote site. When VMware introduced CBT for backups, this enabled them to later release replication using the same technology, since they were already tracking changes and could use those changes for replication as well as backups. This looks like the diagram below.

[Diagram: vSphere Replication]

In the previous post, Software Defined Storage Replication, the discussion was around abstraction through a third-party product, in our case the HP StoreVirtual VSA. In this case, we are looking at a built-in product. Much like VSAN, this is a solid way of doing things, but the challenge is that it only replicates vSphere workloads, unlike third-party products.

The other consideration here is the efficiency of doing things in software versus hardware. Of course the abstraction does have efficiencies in an operational sense: management is built in to the vSphere console, and you are not tied to a specific storage vendor. On the other side, we do need to consider the inherent performance benefits of replicating between two block storage arrays. Anything we offload to dedicated hardware is naturally going to be faster.

One thing VMware has done very well is provide solutions based in software to compete with hardware vendors. This does not mean that VMware does not partner with hardware vendors, but for some customers the price is right for this. VMware almost always tells us to use array-based replication for maximum performance, but this is a good solution for abstraction, or for a smaller environment where cost is a larger factor than performance.


Software Defined Storage Replication

In a conversation recently with a colleague, we were discussing storage replication in a VMware environment. Basically the customer in question had bought a competitor's array, brand X, and wanted to see if there was a way to replicate to one of our arrays at a lower price point.

This is a fairly common question coming from customers, more so in the SMB space, but with the increasing popularity of Software Defined Storage, customers want options; they don't want to be locked into a single-vendor solution. In an OpenStack environment, high availability is handled at the application level, and I strongly recommend this as a policy for all new applications. But how do we handle legacy apps in the interim?

In a traditional storage array, we typically do replication at the storage level. VMware Site Recovery Manager allows us to automate the replication and recovery process by integrating with the storage replication, and in smaller environments it can even handle replication at the vSphere host level. Array-based replication is generally considered the most efficient and the most recoverable. It does require similar arrays from the same vendor, with replication licensing. In a virtual environment this looks something like the picture below.

[Diagram: Array-based Storage Replication]

This works well, but it can be costly and leads to storage vendor lock-in, which is not a bad thing if you are a storage vendor, but not always the best solution from a consumer perspective. So how do we abstract the replication from the storage? Remember, one of the purposes of virtualization and OpenStack is to abstract as much as possible from the hardware layer. That is not to say hardware is not important, quite the contrary, but abstraction does enable us to become more flexible.

So, to provide this abstraction there are a couple of options. We can always rewrite the application, but that takes time. We can do replication at the file system level, or similarly use third-party software to move data. But in order to really abstract the replication from the hardware and software, we need to insert something in the middle.

In the conversation I described at the beginning, the goal was to replicate from the production datacenter running brand X storage to a remote location using an HP storage product. To accomplish this, we discussed using vSphere Replication, something I will cover in a future post, and we discussed host-based replication, but that is not as seamless. What we settled on is below. Not the most elegant solution, but something that helps us abstract the replication layer. Essentially, since the HP StoreVirtual VSA has replication built in, we put it in front of the brand X storage, and then on the other side we can put another VSA on a server with some large hard drives, and voila, replication and DR storage handled.

[Diagram: Storage Replication with StoreVirtual VSA]

Not the most elegant solution, but it is a way to abstract the replication from the storage, and to do so at a reasonable cost. The advantage of this solution is that we have also given ourselves DR storage. Next I will explore vSphere Replication, but as always I want to point out that this solution minimizes vendor lock-in at the hypervisor and storage levels.
