Some people live more in 20 years than others do in 80.

Last month marked 20 years since I first enlisted in the military.  While I was only in for just short of 9 years, being a soldier changed my life and taught me so much about relationships, leadership, and life in general.  To that end, I wanted to share my thoughts in the context of what I do now, and what we do as a community.

Things I kept:

Integrity

One of the most important lessons I recall came as a young soldier in basic training.  A couple of recruits had gotten into a fist fight in the barracks.  Of course, being the great leaders we were, none of us broke it up or tried to stop them.  Naturally the Drill Sergeants found out and spent hours giving us some extra PT to remind us that fighting would not be tolerated.  One of our Drill Sergeants, who tried to pretend he couldn’t stand us, lectured us for 30 minutes or more about the importance of integrity.  I will never forget that moment; it stuck with me through my military career and into civilian life.  You can win or lose a thousand times, but at the end of the day all you have is your integrity.

Sacrifice

An early lesson as a young soldier was that leaders never eat until their troops have eaten.  We always lined up in order of rank at meal times, and as I moved up the ranks, I always put my soldiers’ needs first.  This wasn’t limited to eating, though; it applied to everything.  A good leader is one who is willing to sacrifice for the team.

Communication

I will never forget having to memorize mission briefings.  Everyone, down to the lowest ranking private, knew the mission and was prepared to complete it even if they were the last one left.  Many times a leader would be taken out, and it was up to the next one to step up.  A few times that was me.  I remember how it felt to realize that the weight of the entire mission was on my shoulders, and that several soldiers were looking to me for guidance.  A company, much like a military unit, is only successful when everyone is part of the team, knows the mission, and is executing on it.

Things I left behind:

Everyone is a leader (Manager)

One of the worst ideas ever was that everyone needed to progress into a leadership position.  I do think we all need to be leaders in the sense of community, but many of us are individual contributors, and that is great.  Not all of us are cut out to be in charge of our peers; it is just not our strong suit.  As a soldier I watched many exceptional individuals get forced into leadership and end up failing.  It is OK to be amazing as an individual contributor; do what you love and be amazing at it.

Mandatory Diversity

I know this one probably won’t make me very popular, but I am not a big fan of forcing together people from different backgrounds.  I respect pretty much everyone, but I don’t think that forcing people to work together or to get along is productive.  Put people together with similar interests and backgrounds.  Offer them the opportunity to work with diverse groups, but don’t force them; it is their loss if they choose not to, but given a choice, most people will make the right one.

Needs of the Army

This is a phrase familiar to most of us who served, and it basically means that you do what is best for the government.  One of the best lessons I have learned since joining VMware is that if you do your day job well and work on side projects, you are going to get moved into what you love.  Doing what you love will make you awesome; doing what you’re told will make you average.

I am thankful to have served; it was an honor.  I have learned so much, but most of all that questioning why we do things and adjusting as we go is the only way to be successful.


I am definitely a mad man with a box!

So in keeping with the theme of Doctor Who quotes for the titles of my posts, I think this is a good topic to discuss.

When I started virtualizing servers, vMotion was still a roadmap item, HA was still a frightening concept that we were only comfortable using in test environments, and iSCSI was not quite ready for the enterprise. At that time, we were convinced there would be no more datacenters built, everything would be virtualized, and we would just buy cheap commodity servers and storage.

In this utopia, hardware companies would become software companies, our applications would become more and more advanced, and hardware would be built from parts or purchased from the lowest bidder. Interestingly, over the past few weeks I have had the privilege to meet with a number of my customers in the Enterprise Health Care space. Most of the conversations have been around the Software Defined Data Center and the concept of driving more value to the business. Many of these customers want to get out of the datacenter business; they don’t want to manage boxes. The challenge for many of them, though, is getting their applications to come along.

To solve this, many of these customers are looking at implementing SDDC solutions and treating their hardware as a commodity, purchasing servers from the most cost-effective source, while storage and networking still seem to come down to the team’s preference. As we observe this shift in strategy, it is time for the applications to catch up. Hardware is a dying business; customers need to get out of the business of managing boxes and realize what matters. Drive value to the business, or become obsolete. We are in a software world, the age of hyper-convergence through software.

This is an exciting time, but we have to move with it. We can no longer be tied to legacy hardware architectures; we must move forward, faster, better, stronger, and stop worrying about the box.


Defining the cloud Part 1: Applications

With the recent launch of HP Helion, and with HP Discover coming in a few weeks, it is a good time to talk about the difference between private cloud and virtualization.

Interestingly enough, most companies assume that because they have virtualized 70-80% of their applications, they have deployed a cloud environment.  This is the unfortunate result of marketing and sales people trying to move products or ideas without understanding where we are headed and what is happening in the market.  I confess I am guilty of this to some extent; I have written about private cloud being virtualization, which is correct but incomplete.  So just what is the difference?  Well, that largely depends on who you ask, but here is my take.

Application Centric

Virtualization came about to fill a gap in technology.  We were at a point where servers were more powerful than the applications they ran, and so there was a great deal of wasted capacity.  When VMware started their server virtualization line, the big value was consolidation.  It had little to do with applications; it was about server consolidation, datacenter efficiency, and moving the status quo to the next level.  The applications were only impacted in that they had to be supported in a virtual environment; high availability, performance, and everything else were managed at the virtual server level, much as they had previously been managed at the physical server level.

In the cloud, abstraction is done at the application level rather than the server level.  The cloud rides on server virtualization, but ideally applications should scale out, using multiple servers in a cluster, each performing the same function, with manager nodes to direct traffic.  While this may seem less efficient, since multiple nodes are needed to operate a single application, it actually frees the application from needing to reside in a specific datacenter or on a specific host, and indeed it should span multiple of each.  It also enables rapid scaling: rather than adding physical CPU or memory to the virtualized systems, you simply spin up additional resource nodes, and when the peak demand is gone, you tear them down.
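To make the scale-out idea concrete, here is a rough Python sketch of the reconcile loop an application tier might run. The node names, capacity numbers, and the list append/pop calls are all stand-ins; in a real environment those calls would be whatever provisioning API your platform exposes.

```python
import math

# Hypothetical sketch: appending/popping list entries stands in for whatever
# provisioning API actually spins worker nodes up or tears them down.
def desired_node_count(current_load: float, capacity_per_node: float, min_nodes: int = 2) -> int:
    """How many identical worker nodes the current load calls for."""
    needed = math.ceil(current_load / capacity_per_node)
    return max(needed, min_nodes)  # keep a floor for availability

def reconcile(nodes: list, current_load: float, capacity_per_node: float) -> list:
    """Grow or shrink the pool of worker nodes to match demand."""
    target = desired_node_count(current_load, capacity_per_node)
    while len(nodes) < target:                    # peak demand: spin up more nodes
        nodes.append(f"worker-{len(nodes) + 1}")
    while len(nodes) > target:                    # demand has passed: tear nodes down
        nodes.pop()
    return nodes

pool = reconcile(["worker-1", "worker-2"], current_load=900, capacity_per_node=200)
print(pool)   # scaled out to 5 nodes
pool = reconcile(pool, current_load=150, capacity_per_node=200)
print(pool)   # scaled back in to the 2-node floor
```

The point is that capacity follows demand in units of whole nodes, which is exactly what lets the application stop caring which host or datacenter those nodes live on.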

So the first difference between virtualization and private cloud is the focus on applications rather than infrastructure. As we continue to explore this, I hope to tie this together and help to explain some of these differences.


Lead, follow, or stop trying to sell technology

“Leadership is a choice. It is not a rank.” — http://www.ted.com/talks/simon_sinek_why_good_leaders_make_you_feel_safe/transcript

If you haven’t seen the TED Talk on leadership by Simon Sinek, it is a must see. I do enjoy watching these, but usually they are not this impactful. “Leadership is a choice.” So why would anyone choose to lead when it is so much easier to just follow the rules, get promoted, and become a manager? And what does all this have to do with technology and the things that interest me, and hopefully you?

Sitting at breakfast with two friends, one in sales and one in pre-sales, we were talking about what it would take to be really great at our jobs. We came to the conclusion that the great tech sales companies generate excitement. If you remember the 80s, Pontiac had a successful ad campaign around “We Build Excitement.” They did not build excitement, by the way.

What we need in the IT industry are motivational speakers. No one likes when someone comes in and does a PowerPoint product pitch. I have actually seen people fall asleep in such sessions. Not ideal. In order to lead, we have to get in and generate some excitement, show people why the technology is exciting for them. Instead of trying to make products that fit our customers’ environments, we need to build the products our customers don’t know they need and get them excited about following us on the journey. We are changing an industry here; imagine if Steve Jobs had made the phones most people wanted. Instead he came out with a product no one knew they wanted, created a culture around it, and an entire industry was born.

So my challenge to you: look at your vendors if you are a customer, look at your products if you are a vendor. Are you excited about what you are selling or what you are buying? If not, maybe you are not talking to the right people. Don’t buy a product because it is safe, don’t buy based on what you have always done; take some interest, get excited, and buy into a strategy.


Software Defined Storage Replication – vSphere Replication

When VMware introduced the vSphere APIs, some of the most important were the storage APIs. I don’t say this because I am a storage guy, but because this is an area that frequently causes problems for virtualization. When VMware released their backup API, features such as Changed Block Tracking (CBT) became particularly compelling. Now vSphere could tell the backup software what had changed since the last backup, rather than relying on the backup catalog to examine everything. This meant fewer reads on the storage and a more efficient backup.
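As a rough illustration of the concept (not the actual vSphere API), here is a small Python sketch: blocks that change get recorded in a set, and the backup only reads those blocks instead of scanning the entire disk. The class, block size, and helper names are all hypothetical.

```python
# Toy illustration of changed-block tracking: only blocks written since the
# last backup are copied. A conceptual sketch, not the vSphere CBT API.

class TrackedDisk:
    def __init__(self, num_blocks: int, block_size: int = 4096):
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]
        self.changed = set()                 # block numbers written since the last backup

    def write_block(self, block_no: int, data: bytes):
        self.blocks[block_no] = data
        self.changed.add(block_no)           # remember the change for the next backup

    def incremental_backup(self) -> dict:
        """Copy only the changed blocks, then reset the tracking map."""
        snapshot = {n: self.blocks[n] for n in sorted(self.changed)}
        self.changed.clear()
        return snapshot

disk = TrackedDisk(num_blocks=1000)
disk.write_block(7, b"new data".ljust(4096, b"\0"))
disk.write_block(42, b"more data".ljust(4096, b"\0"))
backup = disk.incremental_backup()
print(sorted(backup))   # [7, 42] -- only two blocks read, not all 1000
```

That same changed-block map is also what makes replication practical over limited bandwidth, since only the blocks in the set need to be shipped to the remote site, which is where the rest of this post picks up.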

It was not a huge leap, then, when vSphere Replication was released as a standalone product separate from Site Recovery Manager (SRM), as well as a function of SRM. Prior to this, vSphere would rely on the underlying storage to do the replication, but now replication could be handled by vSphere itself.

One of the challenges with replication has traditionally been bandwidth. To handle this we have used compression, caching, deduplication, and sending only the changed data to the remote site. Because CBT was already tracking changes for backups, VMware could later use the same technology to drive replication as well. This would look like the diagram below.

[Diagram: vSphere Replication using Changed Block Tracking]

In the previous post, Software Defined Storage Replication, the discussion was around abstraction through a third-party product, in our case the HP StoreVirtual VSA. In this case, we are looking at a built-in product. Much like VSAN, this is a solid way of doing things, but the challenge is that it only replicates vSphere, unlike third-party products.

The other consideration here is the efficiency of doing things in software versus hardware. Of course the abstraction does have operational efficiencies: management is built in to the vSphere console, and you are not tied to a specific storage vendor. On the other side, we do need to weigh the inherent performance benefits of replicating between two block storage arrays. Anything we do in hardware is naturally going to be faster.

One thing VMware has done very well is providing software-based solutions that compete with hardware vendors. This does not mean that VMware does not partner with hardware vendors, but for some customers the price is right. VMware almost always tells us to use array-based replication for maximum performance, but vSphere Replication is a good solution for abstraction, or for a smaller environment where cost is a bigger factor than performance.


Software Defined Storage Replication

In a recent conversation with a colleague, we were discussing storage replication in a VMware environment.  Basically, the customer in question had bought a competitor’s array, brand X, and wanted to see if there was a way to replicate to one of our arrays at a lower price point.

This is a fairly common question from customers, more so in the SMB space, but with the increasing popularity of Software Defined Storage, customers want options; they don’t want to be locked into a single-vendor solution.  In an OpenStack environment, high availability is handled at the application level, and I strongly recommend this as a policy for all new applications.  But how do we handle legacy apps in the interim?

In a traditional storage array, we typically do replication at the storage level.  VMware Site Recovery Manager allows us to automate the replication and recovery process by integrating with the storage replication, and in smaller environments it can even handle replication at the vSphere host level. Array-based replication is generally considered the most efficient and the most recoverable. It does require similar arrays from the same vendor, with replication licensing. In a virtual environment this looks something like the picture below.

[Diagram: Array-based storage replication in a virtual environment]

This works well, but it can be costly and leads to storage vendor lock-in; not a bad thing if you are a storage vendor, but not always the best solution from a consumer perspective. So how do we abstract the replication from the storage? Remember, one of the purposes of virtualization and OpenStack is to abstract as much as possible from the hardware layer. That is not to say hardware is not important, quite the contrary, but abstraction does enable us to become more flexible.

To provide this abstraction there are a couple of options. We can always rewrite the application, but that takes time. We can do replication at the file system level, or similarly use third-party software to move the data. But in order to really abstract the replication from the hardware and software, we need to insert something in the middle.

In the conversation I mentioned at the beginning, the goal was to replicate from the production datacenter running brand X storage to a remote location using an HP storage product. To accomplish this, we discussed vSphere Replication, something I will cover in a future post, and we discussed host-based replication, but that is not as seamless. What we settled on is below. Essentially, since the HP StoreVirtual VSA has replication built in, we put it in front of the brand X storage, and on the other side we put another VSA on a server with some large hard drives, and voila: replication and DR storage handled.

[Diagram: Replication abstracted through the HP StoreVirtual VSA]

Not the most elegant solution, but it is a way to abstract the replication from the storage, and to do so at a reasonable cost. The advantage of this approach is that we have also given ourselves DR storage. Next I will explore vSphere Replication, but as always I want to point out that this solution minimizes vendor lock-in at the hypervisor and storage levels.


How relevant is VMware in an OpenStack world?

Last August, during VMworld, I had the honor of sitting in a briefing with VMware’s Paul Strong, VP, Office of the CTO. As always he was engaging and incredibly knowledgeable. When he asked if we had any questions, I casually glanced around, and when none of my peers had anything to ask, I figured why not.

“What is the impact of OpenStack on VMware? Do you see it as a threat?”

The answer was not what I was looking for. He talked about the marketplace, how they compete on some fronts but complement each other on others. Not quite what I wanted, but it was the right answer; I just didn’t realize it yet.

I have been doing more work recently with customers who want to build an internal private cloud, which of course means something different to each of them, but from my view the critical piece is a self-service portal: enabling users to make requests through a website and removing the requirement for IT intervention, or at least minimizing it.

What I thought about OpenStack, when I asked the question, was that it would invalidate VMware and all the things we have been talking about for years. As I have worked with it more, I think VMware becomes more relevant in these environments. While it is fun to put KVM in a lab and run OpenStack on top of it, it is not at the same level. OpenStack itself struggles with commercial support, with HP offering one of the more viable enterprise solutions.

In the end it all comes back to the GNU Manifesto: give the software away and charge for the support. Those who want it for free can have it, but for most companies it makes more sense to get something with enterprise support.

So to answer the question, I would say that VMware makes sense on many levels, and adding OpenStack on top of VMware simply opens more doors to a well-supported private or hybrid cloud environment.


To the cloud

The other day I was having lunch with a partner who had recently changed companies and was interested in working more closely with HP. He posed a simple question that really made me think: “What has you excited, what are you interested in right now?” Not a typical question I get, especially from sales people, but the answer was simple. The cloud, of course.

Cloud is a big buzzword, and we love to make jokes about it, and about the fact that no one can really define it since we all have different definitions. I have posted on this before, but I think this is a topic worth developing.

One of my customers is very concerned with efficiency. I know, dumb statement, everyone is concerned with efficiency, unless you work for the government… but seriously. This customer is working on a hybrid cloud model for their internal customers, but the key to the whole thing working is not the cloud, it is automation.

When I got started in IT, many years ago, we used to do everything manually. When someone needed a server, we would go through our procurement process to get the hardware. When the server arrived, we would rack it, install the OS, provision the networking, add storage, and then turn it over to the application owner to install their application.

When VMware went primetime a few years back, we were thrilled. The whole process was cut down since it could be templatized, but it still required IT intervention; the process was quicker, but not much different.

So to the point: what is automation, and what has me excited in the IT world? What if we removed the requirement for IT intervention? What if application owners could provision their own servers via a simple interface or programmatically, as sketched below? Imagine what the IT staff could do if they were freed up for more productive tasks.
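Just to illustrate what “programmatically” could look like, here is a small Python sketch that posts a server request to a hypothetical self-service portal. The URL, fields, and endpoint are invented; the real interface depends entirely on the platform behind the portal.

```python
# Hypothetical example only: the portal URL and request fields are made up to
# illustrate self-service provisioning, not a real HP/VMware/OpenStack API.
import json
import urllib.request

def request_server(portal_url: str, name: str, cpus: int, memory_gb: int, disk_gb: int) -> dict:
    """Submit a server request to a self-service portal and return its response."""
    payload = json.dumps({
        "name": name,
        "cpus": cpus,
        "memory_gb": memory_gb,
        "disk_gb": disk_gb,
    }).encode("utf-8")
    req = urllib.request.Request(
        portal_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # the portal's automation provisions the VM
        return json.loads(resp.read().decode("utf-8"))

# An application owner could provision capacity without opening a ticket:
# request_server("https://portal.example.com/api/servers", "app-db-01", cpus=4, memory_gb=16, disk_gb=200)
```

No ticket, no waiting on procurement; the portal’s automation does the racking, installing, and provisioning we used to do by hand.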

Now this is a high and mighty goal, and does require more work, but this is the utopia of the cloud we are all shooting for.


What does Google Fiber mean for Portland?

In case you are living under a rock, or still using dial-up, Google Fiber recently announced they are considering deploying their gigabit fiber offering to a number of new cities, including our very own beloved Portland, OR: https://fiber.google.com/newcities/.  I thought I would give my perspective to the screaming voices on both sides, and try to put this into context.

So first of all, to be clear, this is my personal opinion, not related to HP or to any other vendor I may mention as part of my day job.  Portland, from a technology vendor’s perspective, is a poorly served market.  With many of the technology vendors being in Silicon Valley, and many of the start-ups existing there or in Seattle, WA, Portland is seen as too small to matter.  This is a bit ironic when we consider the growing tech community in the Portland area, certainly small but not insignificant, the sought-after lifestyle, and the inexpensive land and power here.  I will admit the taxes and business climate can be challenging, but those can be negotiated.

So what does this have to do with Google Fiber?  Currently I pay around $70 per month for Comcast.  I get around 50 Mbps for internet speed, and around 12 basic cable channels.  I look at upgrading from time to time, but since I use Netflix, Hulu Plus, and a few other channels available on my Apple TVs, there is not much reason.  If I could get better speeds and cut the cable out I would, but thus far Comcast has refused.  As a former Comcast employee, I understand how the whole thing works, but I am not happy with the restrictions and the poor service.  I currently stream content on 3 TVs as well as a half dozen miscellaneous devices.  I telecommute for work, so my internet service is critical to my family and me.  Google Fiber would be around 20 times faster than what I have now.  Plus I could pay a little more to get a better DVR than Comcast offers, and not worry about rate hikes.

So what does this mean for Portland?  As much as I may not love some things about Portland, we are nothing if not an innovative city.  We have a growing food industry, breweries and distilleries, hundreds of small businesses, and many well-established ones such as Intel and Nike, to name a few.  Give Portland more speed, and watch what we do with it.  Despite being treated by many tech vendors as a second-class citizen compared to larger markets, Portland continues to innovate and make a name for itself.

I know there are many options for Google, but I for one would be thrilled to have internet so fast that I will actually have to upgrade all my wireless routers.


SAN v. NAS

In my job at HP I come into contact with many people with many questions.  Some are quick answers, but many are an opportunity for me to spend a few minutes educating them on storage or virtualization.  Anyone who knows me knows I am passionate about storage and virtualization, and I love being able to help explain what I have learned.  I have a similar passion for learning from others and asking questions, but that is a topic for another post.  Going forward, rather than responding to questions via e-mail, I am going to try to use this as a forum to reach a broader audience.

This question came to me from a friend I work with from time to time, who is in a similar line of work and loves to learn and ask questions much as I do.  He writes, “Why would anyone use a (Insert Unified Storage Here) as a ‘NAS’?”

Unified Storage essentially refers to any array that can present both file and block storage.  Unfortunately this is a bit misleading.  The reality is that file and block are two different things.  Block storage is the underlying logical storage, typically presented as a raw device to the operating system, whereas file storage requires a file system and is presented to clients over a path using SMB (CIFS) or NFS.  So the misleading part is that you can’t really have both in a single controller, or controller pair.
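One way to see the difference is how a host actually consumes each. The sketch below is illustrative only: the device path, mount point, and file name are hypothetical, and reading a raw device requires root privileges.

```python
# Illustration of the block vs. file access models described above. The paths
# are hypothetical; this is a sketch of how a host consumes each type of
# storage, not a storage-array API.
import os

# Block storage: the OS sees a raw device and addresses it by byte offset;
# any file system on top of it belongs to the host, not the array.
def read_block(device: str = "/dev/sdb", offset: int = 0, length: int = 4096) -> bytes:
    fd = os.open(device, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, length)
    finally:
        os.close(fd)

# File storage: the array (or a gateway in front of it) owns the file system,
# and clients just see paths on a share mounted over NFS or SMB.
def read_file(path: str = "/mnt/nfs_share/report.txt") -> str:
    with open(path, "r") as f:
        return f.read()
```

With block, the array hands over raw capacity and the host owns the file system; with file, the array owns the file system and clients simply see paths.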

Most companies start with one, file for example, and then present the block storage out through the file controllers.  This creates some significant performance issues, because it requires a multi-layer system that is inherently inefficient.  On the other side, a company starting with a block system and adding file ends up with a block array plus a Fibre Channel-attached “gateway,” which is essentially a dedicated server.  As you can see, neither approach does both well, but it works well enough.  Some systems are better at masking this than others, but at the end of the day they only do one thing well.

So back to the question: why would anyone use a unified system as a NAS?  Honestly, it usually comes down to simplicity and licensing.  Coming from a mixed Windows and *nix background, I am torn here.  In a Windows environment, SMB v3 on Windows Server 2012 will provide a much higher level of performance, but there is also a cost for the Windows licensing.  For many smaller shops, either a Linux server running Samba or a small unified appliance will provide a simulated Windows file server.  While it might lack some of the advanced features of the newer versions of Windows, the costs are usually lower, and there are sometimes management or other features which are attractive.

The short answer is that many small shops like a hybrid appliance and find that the cost makes it very attractive.  For larger companies this approach does not make sense, but when you are running a one-man IT shop, it is a potentially compelling fit.
