Bow Ties are cool!

Now that I am past my first 90 days here at VMware, I consider myself something of an authority on absolutely nothing. Thus I feel it incumbent on me to write a semi-serious post about life, liberty, and the pursuit of virtualization.

Since coming here I get asked at least once a month how to get hired at VMware. The truth is there is no secret formula, no one trick that will get you an interview or get you past it. You just have to stand out and bring something unique to our growing team. I have seen many of my friends go through interviews; some get hired, and some don't make it. It is not that they aren't good enough, but something has to set those who make it apart from those who don't. What follows may or may not make sense, be true, or be helpful, but it is my attempt to shed some light on what it takes to be a part of our team and a part of changing the technology world.

Flexible

Working at a company growing as quickly, and disrupting the technology world as thoroughly, as VMware has requires flexibility. Being amenable to change on a moment's notice is a requirement here. Every day we wake up to a new requirement, a new idea, a new challenge. No day is ever dull or the same as the last, and just when you think you have it figured out, there is a new strategy or a new solution for our customers.

Humble

This one caught me by surprise too. The best people at VMware are the most humble. They are the types who are willing to sweep the floors, talk to the new hire class about how great VMware is, or talk to our largest customers about how we are taking responsibility for something that may not have gone as well as we thought it would. Being here means remembering that it has nothing to do with me; it is all about the cool technology and the team. Imagine walking around the Palo Alto campus, bumping into the person who literally wrote the book on VMware storage or networking, and talking to them as a teammate.

Curious

Everyone I meet here, well almost everyone, has a love for learning. Since joining the team, I have spent most of my time asking questions, reading, studying our roadmaps, and debating strategy, technology, and ideas with some incredibly smart people. Most of the people here want to know what others think; they are well read and generally trying to absorb as much information as they can. It is hard to be around that and not get motivated to read the latest white papers, learn a new programming language, or grab someone who has been here a while and ask them questions.

Being Awesome

We are a team of winners. That isn't me being prideful or putting anyone down; we just love to win. We love bringing amazing ideas to life, and everyone on the team, at least everyone I have met so far, is all about teamwork. That being said, we all work for a greater good: we are executing on a vision, not for ourselves, but to make our little section of the world a better place. Nowhere is this more evident than in the commitment to giving back. We are encouraged to volunteer, not because it makes the company look good, but because it is part of the culture. We are encouraged to be involved in things we believe in and to make a positive difference wherever we are.

Where do I sign up?

So really the best way to join us is to be involved in the community. I didn't realize it at the time, but my current manager began to screen me at the VMUG User Conference. Since moving to Portland, I have done my best to get involved in the local tech community and help out where I can. I have volunteered to speak at conferences, worked the booth for my employer, evangelized the various user groups, and just shown up to support friends. Get your name out there as someone who is willing to do whatever is needed. Be active, be sincere, and be present. VMware is a great place to work, but only because we have an awesome community, awesome partners, and awesome users. Whether you want to work at VMware or just be involved, this is an amazing place. Get out there and be involved in the community. The Portland VMUG Conference is November 4th this year at the Oregon Convention Center. Come by and check it out, learn more, and find a way to pitch in. We are all about community, and that is the best way to find out about working here.


EVO Rail, is technology really making things easier for us?

This week at VMworld, the announcement of what had been Project Marvin became official. I wanted to add my voice to the debate on the use case for this product and on where I believe the industry goes with products like it. To answer the title question: EVO is a step in the right direction, but it is not the end of the evolution. As always, I have no inside information and I am not speaking on behalf of VMware; this is my opinion on where the industry goes and what I think is cool and fun.

To understand this, we need to consider something my wife said recently. As a teacher, she was a bit frustrated this week to return to school and find her laptop re-imaged and her printer not configured. I tried to help her remotely, but it is something I will need to work on when I get back. Her comment was, "Technology is supposed to make things easier." This stung for a moment, after all technology is my life, but when I thought about her perspective, it struck me just how right she is. Why, after all, shouldn't the laptop have reached out, discovered a printer nearby, and been prepared to print to it? After all, my iPhone/iPad can do that with no configuration on the device itself.

So what does this have to do with EVO? If we look at EVO as a standalone product, it doesn't quite add up. It is essentially a faster way of implementing a product which is not too complicated to install. I have personally installed thousands of nodes of vSphere and hundreds of vCenters; it is pretty simple with a proper design. The real value here, the trend, is simplification. Just because I know how to build a computer doesn't mean I want to. Just because I can easily implement a massive vSphere environment doesn't mean I want to go through the steps. That is why scripting is so popular: it enables us to do repetitive tasks more efficiently.
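
To make that concrete, here is a minimal Python sketch of what codifying a repetitive rollout can look like. The host names, settings, and the apply_standard_config() helper are hypothetical placeholders for whatever tooling you actually use; the point is simply that the steps are written down once and repeated identically.

# The host list, settings, and apply_standard_config() below are made up for
# illustration; swap in your real tooling (PowerCLI, pyVmomi, Ansible, etc.).
HOSTS = ["esxi-01.lab.local", "esxi-02.lab.local", "esxi-03.lab.local"]

STANDARD_SETTINGS = {
    "ntp_servers": ["0.pool.ntp.org", "1.pool.ntp.org"],
    "syslog_target": "udp://syslog.lab.local:514",
    "vswitch_mtu": 9000,
}

def apply_standard_config(host: str, settings: dict) -> None:
    """Hypothetical helper standing in for the real configuration tooling."""
    print(f"Configuring {host} with {settings}")

def main() -> None:
    # Every host gets exactly the same reviewed configuration; the value is
    # repeatability, not cleverness.
    for host in HOSTS:
        apply_standard_config(host, STANDARD_SETTINGS)

if __name__ == "__main__":
    main()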

The second part of this really comes down to a vision: where are we going? If you look at where we are going as an industry, we are moving to do more at the application layer in terms of high availability, disaster recovery, and performance. We see this with the OpenStack movement, the cloud movement, Docker, and so many others. At some point, we are going to stop worrying about highly available infrastructure. At some point our applications will all work everywhere, and if the infrastructure fails, we will be redirected to infrastructure in another location without realizing it.

That is the future, but for now we have to find a way to hide the complexity from our users and still provide the infrastructure. We need to scale faster, better, stronger, and more resiliently, without impacting legacy applications. Someday we will all be free from our devices and use whatever is in our hand, or in front of us, or just get a chip in our brains; someday HA won't be an infrastructure issue. Until then, projects like EVO will help us bridge that gap. Arguably not perfect, but this is a bridge to get us a step closer to a better world. At the end of the day, the more complexity we hide with software, the better off we are, provided that software is solid and we can continuously improve it.


VMware, come for the people, stay for the vision

As I approach the end of my second month here at VMware, and having had this conversation with some friends, I thought it would be valuable to talk about the reasons why I chose to join and what it looks like from this side of the fence. As a disclaimer: what I am disclosing is all public knowledge, nothing untoward here, I am speaking for myself and not VMware, and this is intended as a larger statement on careers and where we choose to go.

When I made the choice to join VMware I was very happy at HP. I was looking for a new position within the company to align with my career goals, but I was very happy. HP is a great place to work; my peers, management, and teams were amazing. When VMware approached me I was very adamant that I was not interested, even though I was helping to lead the VMware Champions team within HP and have a great love for all things virtual and cloud related. What finally convinced me was the people. Everyone I talked to, both during interviews and among friends who worked there, was excited. Everyone had the vision and was on fire to change our industry for the better. There were many conversations around products, around culture, and around where the company was headed.

When I got here, I couldn't help but be sucked into that culture. I was excited, and every day I get a little more excited. We are doing amazing things here. When I go talk to customers I am sharing the vision with them: where we see the industry going, how we are improving businesses and simplifying them. This is my dream job…well, for now. Next week VMworld 2014 kicks off, and we share a little more of the dream with over 20,000 of our friends, customers, and partners. We have a vision, and it is so awesome.

So what is the point of all of this? Choosing a career is tough, but finding the right company is tougher. One of the things I have learned the hard way is the importance of finding a company with a vision that I can really believe in. I don't just work for VMware; I believe we are changing the industry. I came here because there are some really amazing people and products, but I am here now because I believe in our vision. It doesn't matter if you are an entrepreneur or a plumber: if you don't believe in what you are doing, it shows. Life is short, and if you are not sold out for your job, you need to ask yourself why. I am not saying go out and quit your job because you don't like it, but take a long hard look at yourself and ask why you aren't passionate about it. No matter what you do, it is up to you to make it awesome. VMware is awesome because I believe in what we are doing. Do you believe in what you are doing?


Changing Direction…Again

The funny thing about life and careers is you never know where you will end up. Alistair Cooke wrote a fascinating article about how random his career has been: http://www.demitasse.co.nz/wordpress2/?p=1174. Recently a number of my friends from around the globe have been asking me about career moves, or for advice, assuming I actually know something or know what I am doing. I have come to realize, similarly to Alistair, that planning my career does not work. I am nowhere near where I thought I would be at any point in my career. I have surpassed my own expectations, not because I am smarter than anyone else, but because I love to learn and I have been very blessed to meet some very intelligent people who have taught me more than I ever thought possible.

Over the past 20+ years I have had the privilege of working as a soldier, in tech support, as a systems admin, a systems engineer, and a technology architect. Recently I had a conversation with the team at VMware. I was pretty convinced that I didn't want to change jobs, but I do love VMware, and I have spent a great deal of time and effort understanding the strategy, as well as working with VMware as a partner on many levels.

The interesting thing I learned during my interview process was that I have actually been interviewing for this job for nearly 9 months without realizing it. Interactions with various VMware employees showed them I was interested in VMware as a company and in helping customers understand more about the solutions. Through the conversation, the team at VMware laid out a strategy and a future which is compelling. The thing that finally sold me, though, was the people. I have a number of friends at VMware, and I follow many on the Tech Marketing team, so I feel like I know what things are like there, but meeting with the local team, getting their perspective, and understanding the vision from their level is what sealed it.

I do want to say, HP is an exceptional company with some amazing products and some of the smartest people I have ever met. I am humbled to say I was a part of the team at HP, and I am equally humbled and excited to be joining the VMware team. I will continue to write my own opinions and things that interest me. If I have anything to recommend to anyone considering how to improve their current position, or find another, it is the following:

  • Never stop learning
  • Ask questions
  • Find smart people and hang out with them
  • Learn from everyone
  • Give something back to the community
  • Thank-yous go a long way
  • Humility saves you from looking silly
  • Always be polite and helpful; you never know when someone might help you, or when you might be able to help them

So all this to say, this month I will be joining the VMware Health Care team as a Senior Systems Engineer.  I have much to learn, but I have confidence in the team, the product, and the strategy.  I look forward to continuing my journey, and to giving back to the community wherever possible.


Hyper-Convergence: a paradigm shift, or an inevitable evolution?

With the recent article on CRN about HP considering the acquisition of SimpliVity, http://www.crn.com/news/data-center/300073066/sources-hewlett-packard-in-talks-to-acquire-hyper-converged-infrastructure-startup-simplivity.htm, it seems a good time to look at what SimpliVity does and why they are an attractive acquisition target.

In order to answer both questions, we need to look at history. First was the mainframe. That was great, but inflexible, so we moved to distributed computing. This was substantially better and brought us into the new way of computing, but there was a great deal of waste. Virtualization came along and enabled us to get higher utilization rates on our systems, but it required an incredible amount of design work up front, and it allowed the siloed IT department to proliferate since it did not force anyone to learn a skillset outside their own particular area of expertise. This led us to converged infrastructure, the idea that you could get everything from a single vendor, or at the very least support from a single vendor. Finally came the converged system: it provided a single vendor/support solution, packaged as one system, and we used it to grow the infrastructure based on performance or capacity. It was largely inflexible, by design, but it was simple to scale and predictable.

To solve this problem, companies started working on the concept of hyper-convergence. Basically these were smaller discrete converged systems, many of which created high availability zones not through redundant hardware in each node, but through clustering. The software lived on each discrete converged node, and it was good. Compute, network, and storage, all scaling out in pre-defined small discrete nodes, enabling capacity planning and allowing fewer IT administrators to manage larger environments. Truly a Software Defined Data Center, but at a scale that could start small and grow organically.

Why then is this interesting for a company like HP? As always, I am not an insider and I have no information that is not public; I am engaging in speculation based on what I am seeing in the industry. Looking at HP's Converged Systems strategy and at what the market is doing, I believe that in the near future the larger players in this space will look to converged systems as the way to sell. Hyper-convergence is a way forward to address the part of the market that is either too small for traditional converged systems or needs something they cannot provide. Hyper-convergence can provide a complementary product to existing converged systems and will round out solutions in many datacenters.

Hyper-convergence is a part of the inevitable evolution of technology. Whether or not HP ends up purchasing SimpliVity, these types of conversations show that such concepts are starting to pick up steam. It is time for companies to innovate or die, and this is a perfect opportunity for an already great company to keep moving forward.


Software Defined Storage Replication

In a conversation recently with a colleague, we were discussing storage replication in a VMware environment. Basically the customer in question had bought a competitor's array, brand X, and wanted to see if there was a way to replicate to one of our arrays at a lower price point.

This is a fairly common question from customers, more so in the SMB space, but with the increasing popularity of Software Defined Storage, customers want options; they don't want to be locked into a single-vendor solution. In an OpenStack environment, high availability is handled at the application level, and I strongly recommend this as a policy for all new applications. But how do we handle legacy apps in the interim?

In a traditional storage array, we typically do replication at the storage level. VMware Site Recovery Manager allows us to automate the replication and recovery process by integrating with the storage replication, and in smaller environments it can even handle replication at the vSphere host level. Array-based replication is generally considered the most efficient and the most recoverable. It does require similar arrays from the same vendor, with replication licensing. In a virtual environment this looks something like the picture below.

[Figure: Array-based storage replication]

This works well, but it can be costly and leads to storage vendor lock-in, not a bad thing if you are a storage vendor, but not always the best solution from a consumer perspective. So how do we abstract the replication from the storage? Remember, one of the purposes of virtualization and OpenStack is to abstract as much as possible from the hardware layer. That is not to say hardware is not important, quite the contrary, but abstraction does enable us to become more flexible.

To provide this abstraction there are a couple of options. We can always rewrite the application, but that takes time. We can do replication at the file system level, or similarly use third-party software to move data. But in order to really abstract the replication from the hardware and software, we need to insert something in the middle.
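
As a small illustration of the file-system-level option just mentioned, here is a rough Python sketch that wraps rsync over SSH to mirror a directory to a DR site. The hostnames and paths are made up, and a real design would also need consistency handling (snapshots or quiescing) before each copy; this is only to show how little is required to move data outside the array.

# Hostnames and paths below are hypothetical; rsync must be installed and
# SSH access to the DR host already set up.
import subprocess

SOURCE_PATH = "/exports/vm-backups/"                      # hypothetical source
TARGET = "dr-host.example.com:/replica/vm-backups/"       # hypothetical DR target

def replicate() -> None:
    cmd = [
        "rsync",
        "-az",        # archive mode, compress in transit
        "--delete",   # keep the target an exact mirror of the source
        SOURCE_PATH,
        TARGET,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    replicate()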

In the conversation I was having at the beginning, the goal was to replicate from the production datacenter running brand X storage to a remote location using an HP storage product. To accomplish this, we discussed using vSphere Replication, something I will cover in a future post, and we discussed host-based replication, but that is not as seamless. What we settled on is below. Not the most elegant solution, but something that helps us abstract the replication layer. Essentially we used the HP StoreVirtual VSA, since it has replication built in, and put that in front of the brand X storage; on the other side we can put another VSA on a server with some large hard drives, and voila, replication and DR storage handled.

[Figure: Replication abstracted through the StoreVirtual VSA]

Again, not the most elegant solution, but it is a way to abstract the replication from the storage, and to do so at a reasonable cost. The advantage of this solution is that we have also given ourselves DR storage. Next I will explore vSphere Replication, but as always I want to point out that this solution minimizes vendor lock-in at both the hypervisor and storage levels.


Reference Architectures, Hardware Compatibility Lists, and you.

Recently I was giving a presentation on designing storage for VMware Horizon. I was referencing the HP Client Virtualization SMB Reference Architecture for VMware View, based on an earlier version but still valid. The conversation kept coming back to questions like "can't I do more than that?" or "why wouldn't I just do it this way?"

One of the better hardware compatibility lists is actually the VMware Compatibility Guide. Its best feature is that it is simple to understand, searchable, and matrixed. It is a critical tool because it lets us know what has been tested and what works, but more importantly what can be supported. Of course it is often more expensive to go with supported configurations, but if cost were the primary criterion, it would make more sense to use open source technologies. While I am a big fan of open source for labs and various projects, the cost of supporting it in a production environment is often far more than simply using a supported configuration and paying for support. The same is true for using commodity hardware which is not supported.

The same can be said of reference architectures. HP does an excellent job of creating these, especially because they have hardware in all major categories. In the example I started with, the major issue was that the questions were around cost. The person creating the design wanted to know why they couldn't remove parts or replace them with cheaper ones. The short answer is simply that the reference architecture is tested with all the components it contains. It is a known quantity, so it will work, and if it doesn't, the support teams can fix it since they know all the pieces.

So to sum up, doing things the way the manufacturer recommends will save a great deal of heartache.  To answer the question, you can do things your own way, but you may find that it is more trouble to support than it is worth.


The changing role of shared storage in the Software Defined Datacenter: Part 3

As we have discussed, the role of shared storage is changing. VMware has supported vMotion without shared storage for a while now, software defined storage is enabling shared compute and storage virtualization, and for the past year or so we have been hearing more about the concept of vVols. I am certainly not the first to talk about this; there are a number of blog posts on the subject, my personal favorite being The future of VMware storage – vVol demo by @hpstorageguy.

As always, in the interests of full disclosure, I do work for HP, but this is my personal blog and I write about things I think are interesting. I am not going into great detail on how vVols work, but I do want to show a few diagrams to differentiate the current architecture from what we MAY see in the future.

Looking at the current and legacy architecture of VMware storage, we typically present storage to all hosts in the cluster in the form of a shared LUN or volume. This is very simple: the VMware admin asks the storage admin for a number of volumes of a specific size; in our example below, let's say they are 2TB volumes and two of them are requested. The VMware administrator then creates datastores, which formats the volumes with the VMFS file system and allows virtual machines to be created within them. Of course this whole process can be done through the VMware GUI using the vSphere storage APIs, but the net effect is the same. We still create another layer in the storage stack, which is not the most efficient way of handling this.

[Figure: Traditional VMware storage - shared VMFS datastores on top of LUNs]
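
For anyone who wants to poke at that datastore layer programmatically, here is a small sketch using the open-source pyVmomi SDK to list the datastores a vCenter presents, which is exactly the shared-VMFS layer in the diagram above. The vCenter address and credentials are placeholders, and it assumes pyVmomi is installed.

# Placeholders: vcenter.example.com and the credentials below are not real.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_datastores(host: str, user: str, pwd: str) -> None:
    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # Walk the inventory and collect every Datastore object.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        gib = 1024 ** 3
        for ds in view.view:
            s = ds.summary
            print(f"{s.name}: {s.capacity / gib:.0f} GiB total, "
                  f"{s.freeSpace / gib:.0f} GiB free ({s.type})")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_datastores("vcenter.example.com", "administrator@vsphere.local", "password")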


vVols are VMware's new way of handling storage, and they resolve this problem in a rather unique way. Currently we can bypass the datastore concept and do a raw device mapping, or RDM, which allows us to present a raw disk device to the virtual machine itself. Unfortunately this does not give us a measurable difference in performance, and it can become tedious to manage. vVols, on the other hand, appear to be datastores but really pass the individual volumes through to the individual VMs. In the drawing below, the individual volumes appear to the VM administrator as datastores, but they are broken out on the storage array. This removes a layer from the I/O path and enables a more policy-based storage interface for the VMware administrator. This is critical to note: policy-based storage at the VMware level. It brings us closer to self-service in a virtualized environment. I don't yet have a full handle on how this will be managed, but I think it is safe to say the storage administrator will create a container giving the VMware admin a specific amount of storage with specific characteristics. In the case of our example, 2TB containers.


[Figure: vVols - individual volumes presented through a storage container]


Note that the volumes above are of varying sizes, but what is not shown is that the volumes, or LUNs, are individual disks presented directly to the virtual machine itself. This is important to remember: we are offloading the performance handling of each individual disk presented to the virtual machine to the storage array, while still managing it as a datastore, or container, on the VMware side.

Coming back to the policy-based storage thought, this is not dissimilar to how HP 3PAR storage operates: volumes within common provisioning groups, which are containers. The policy is set on the container in both cases, so it isn't a stretch to see how these will work well together. Again, I don't have any inside information, but if you look at the post referenced above, Calvin does an excellent job of showing us what is coming. This, combined with VMware's recent VSAN announcements, seems to show that there is going to be a role for the traditional storage array alongside software defined storage in the software defined datacenter, at least for now.


The changing role of shared storage in the Software Defined Datacenter: Part 2

Previously we discussed the shift from traditional array-based storage to a more software defined model. Of course this is not a huge shift per se, but rather a change of marketing terms and perceptions. The Software Defined Datacenter is nothing more than virtualization at every level, simply a further step in our distributed computing trend. It is also important to remember that traditional array-based shared storage is not dead; despite the numerous storage vendors competing in the software defined storage space, there is still a great deal of development around the traditional storage array, but that is a future topic.

When looking at traditional storage, it is necessary to understand the essence of the storage array. You have a processor, memory, networking, and hard drives, all tied together by an operating system. In essence you have a very specialized, or in some cases a commodity, server. So what differentiates all the vendors? Generally speaking, the software and support. Most arrays provide similar functionality to one degree or another. Certainly one manufacturer may do something somewhat better, and another company may have some specialized hardware, but from a very high-level business perspective they essentially perform similar functions, and it is left to those of us who are fascinated by the details to sort out which array is best in a specific environment, and to determine who can best support it for the long term.

As we begin to shift storage onto servers, relying on industry-standard processors, memory, and network components rather than specific and dedicated parts, there is a trade-off, much like the one we saw when we began to virtualize the compute layer. No longer does the storage vendor have control of the underlying hardware. No longer is purpose-built hardware available to absorb some of the load. This presents an interesting challenge for storage and server vendors alike.

Unfortunately, while manufacturers will set standards, write reference architectures, and create support matrices, many users will bend or even simply ignore them. When the storage vendor cannot control the hardware, it becomes much more difficult to provide performance guarantees or support. There will always be a certain need for traditional array-based storage, for performance guarantees and for workloads that software defined storage just cannot support. As users demand more, faster, and cheaper storage for their apps, we are going to have to find a way to strike a balance between traditional arrays, software defined storage, and the new technologies being created in labs around the world.


The changing role of shared storage in the Software Defined Datacenter: Part 1

I was having a conversation the other day with some colleagues about the future of our profession. As you probably know by now, I have spent the better part of the past decade working specifically on storage and virtualization. More and more, I find myself discussing the erosion of traditional storage in the market. Certainly there will always be storage arrays; they have their efficiencies, enabling storage to be shared between servers and provisioned just in time, and preventing the stranded capacity that was a challenge for many of us in the not-so-distant past.

To demonstrate this, we should look at the traditional virtualization model.

[Figure: Servers attached to a shared storage array]

We have traditionally used the shared storage array for redundancy, clustering, and minimizing storage waste. When I was a storage administrator, I was very good at high-performance databases. We would use spindle count and RAID type to make our databases keep up with the applications. When I moved on to being a consultant, I found ways to not only deliver the performance needed, but also to minimize wasted space by using faster drives, tiering software, and virtualization to cram more data onto my storage arrays. As above, in the traditional model, deduplication, thin technologies, and similar solutions were of huge benefit to us. It became all about efficiency and speed. With virtualization this was also a way to enable high availability and even distribution of resources.

What we have seen over the past several years is a change in architecture known as software defined storage.

[Figure: Software Defined Storage (StoreVirtual VSA)]

With SSD drives in excess of 900GB, and that size expected to keep increasing, and with small form factor SATA drives at 4TB and even larger drives coming, the way we think about storage is changing. We can now use software to keep multiple copies of the data, which allows us to simulate a large traditional storage array, and newer features such as tiering in the software bring us one step closer.
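
To illustrate the core idea, keeping every write on more than one node so the data survives a node failure, here is a toy Python sketch. Local directories stand in for nodes, and the placement logic is deliberately simplistic; this is not how any particular VSA or VSAN product implements replication.

# Toy example only: directories stand in for storage nodes, and placement is
# a simple round-robin by block id. Real products handle failure detection,
# rebuilds, and consistency, none of which is shown here.
import pathlib

REPLICA_DIRS = [pathlib.Path(f"./node{i}") for i in range(3)]  # stand-ins for 3 nodes
REPLICATION_FACTOR = 2  # each block is written to 2 of the 3 nodes

def write_block(block_id: int, data: bytes) -> None:
    targets = [REPLICA_DIRS[(block_id + k) % len(REPLICA_DIRS)]
               for k in range(REPLICATION_FACTOR)]
    for node in targets:
        node.mkdir(exist_ok=True)
        (node / f"block_{block_id}.bin").write_bytes(data)

if __name__ == "__main__":
    write_block(0, b"hello")
    write_block(1, b"world")
    for node in REPLICA_DIRS:
        print(node, sorted(p.name for p in node.glob("*.bin")))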

Ironically, as I was writing this, @DuncanYB re-posted on Twitter an article he wrote a year ago, RE: Is VSA the future of Software Defined Storage? (OpenIO). I follow Duncan, among several others, quite closely, and I think what he is saying makes sense. Interestingly, some of what he is talking about is being handled by OpenStack, but that introduces some other questions. Among them: is this something OpenStack should be solving, or does this need to be larger than OpenStack in order to gain wide adoption? And what is the role of traditional arrays in the Software Defined Datacenter?

Needless to say, this is a larger conversation than any of us, and it is highly subjective. I hope that the next few posts become part of that larger conversation, and that they cause others to think, debate, and bring their ideas to the table. As always, I have no inside information; these are my personal thoughts, not those of my employer or any other company.
