Reference Architectures, Hardware Compatibility Lists, and you.

Recently I was giving a presentation on designing storage for VMware Horizon.  I was referencing the HP Client Virtualization SMB Reference Architecture for VMware View, which is based on an earlier version but still valid.  The conversation kept coming back to “can’t I do more than that?” or “why wouldn’t I just do it this way?”

One of the better hardware compatibility lists is actually the VMware Compatibility Guide.  Its best feature is that it is simple to understand, searchable, and matrixed.  It is a critical tool because it tells us what has been tested and what works, but more importantly what can be supported.  Of course it is often more expensive to go with supported configurations, but if cost were the primary criterion, it would make more sense to use open source technologies.  While I am a big fan of open source for labs and various projects, the cost of supporting it in a production environment is often far higher than simply using a supported configuration and paying for support.  The same is true of unsupported commodity hardware.

The same can be said of reference architectures.  HP does an excellent job of creating these, especially because it has hardware in all the major categories.  In the example I started with, the major issue was cost.  The person creating the design wanted to know why they couldn’t remove parts or swap them for cheaper ones.  The short answer is that the reference architecture is tested with all the components it contains.  It is a known quantity, so it will work, and if it doesn’t, the support teams can fix it because they know all the pieces.

So to sum up, doing things the way the manufacturer recommends will save a great deal of heartache.  To answer the question, you can do things your own way, but you may find that it is more trouble to support than it is worth.


The changing role of shared storage in the Software Defined Datacenter: Part 3

As we have discussed, the role of shared storage is changing.  VMware has supported vMotion without shared storage for a while now, software defined storage is enabling compute and storage to be virtualized on the same hosts, and for the past year or so we have been hearing more about the concept of vVols.  I am certainly not the first to talk about this; there are a number of blogs on the subject, my personal favorite being The future of VMware storage – vVol demo by @hpstorageguy.

As always, in the interests of full disclosure, I do work for HP, but this is my personal blog, and I write about things I think are interesting.  I am not going into great detail on how vVols work, but I do want to show a few diagrams to differentiate the current architecture from what we MAY see in the future.

So looking at the current and legacy architecture of VMware storage, we typically present storage to all hosts in the cluster in the form of a shared LUN or volume.  This is very simple: the VMware admin asks the storage admin for a number of volumes of a specific size; in our example below, let’s say they request two 2TB volumes.  The VMware administrator then creates datastores, formatting the volumes with the VMFS file system and allowing virtual machines to be created within them.  Of course this whole process can be done through the VMware GUI using the vSphere storage APIs, but the net effect is the same: we still create another layer in the storage stack, which is not the most efficient way of handling this.
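To make that flow concrete, here is a deliberately simplified Python sketch of the traditional model described above. This is a toy illustration, not the vSphere API, and the names and sizes are only examples.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualMachine:
    name: str
    size_gb: int

@dataclass
class VmfsDatastore:
    """A shared LUN, formatted with VMFS and presented to every host in the cluster."""
    name: str
    capacity_gb: int
    vms: List[VirtualMachine] = field(default_factory=list)

    def create_vm(self, name: str, size_gb: int) -> VirtualMachine:
        used = sum(vm.size_gb for vm in self.vms)
        if used + size_gb > self.capacity_gb:
            raise ValueError(f"{self.name} is full ({used}/{self.capacity_gb} GB used)")
        vm = VirtualMachine(name, size_gb)
        self.vms.append(vm)  # the VM lives as a set of files inside the datastore's file system
        return vm

# The storage admin carves out two 2TB volumes; the VMware admin formats them as datastores.
datastores = [VmfsDatastore("datastore01", 2048), VmfsDatastore("datastore02", 2048)]
datastores[0].create_vm("web01", 100)
```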

[Figure: Traditional VMware storage – shared LUNs formatted as VMFS datastores]


vVols are VMware’s new way of handling storage, and they resolve this problem in a rather unique way.  Currently we can bypass the datastore concept and do a raw device mapping, or RDM, which presents a raw disk device to the virtual machine itself.  Unfortunately this does not give us a measurable difference in performance, and it can become tedious to manage.  vVols, on the other hand, appear to be datastores but actually pass the individual volumes through to the individual VMs.  In the drawing below, the individual volumes appear to the VMware administrator as datastores, but they are broken out on the storage array.  This removes that extra layer and enables a more policy-based storage interface for the VMware administrator.  This is critical to note: policy-based storage at the VMware level.  It brings us closer to self-service in a virtualized environment.  I don’t yet have a full handle on how this will be managed, but I think it is safe to say the storage administrator will create a container giving the VMware admin a specific amount of storage with specific characteristics.  In the case of our example, 2TB containers.


[Figure: vVol-based storage – individual VM volumes broken out on the array]


Note that the volumes above are of varying sizes.  What is not shown is that each volume, or LUN, is an individual disk presented directly to the virtual machine itself.  This is important to remember, since we are offloading the performance of each individual disk presented to the virtual machine to the storage array, yet we are still able to manage it as a datastore, or container, on the VMware side.
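Continuing the toy model from above, a vVol-style container might look something like the following sketch. Again, this is purely illustrative, based on my reading of what has been shown publicly rather than on any real vVol API, and the policy names are made up.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class StoragePolicy:
    """Characteristics the storage admin attaches to the container, e.g. tier and protection."""
    name: str
    tier: str        # "ssd", "fc", "nearline", ...
    protection: str  # e.g. "raid1", "raid5"

@dataclass
class StorageContainer:
    """A 2TB container handed to the VMware admin; it looks like a datastore,
    but every VM gets its own volume on the array instead of files on shared VMFS."""
    name: str
    capacity_gb: int
    policy: StoragePolicy
    vm_volumes: Dict[str, int] = field(default_factory=dict)  # VM name -> volume size in GB

    def create_vm_volume(self, vm_name: str, size_gb: int) -> None:
        if sum(self.vm_volumes.values()) + size_gb > self.capacity_gb:
            raise ValueError(f"{self.name} has no room for {vm_name}")
        # The array creates and manages the volume; vSphere just sees it inside the container.
        self.vm_volumes[vm_name] = size_gb

gold = StoragePolicy("gold", tier="ssd", protection="raid1")
container = StorageContainer("container01", 2048, gold)
container.create_vm_volume("db01", 500)
container.create_vm_volume("web01", 100)
```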

Coming back to the policy-based storage thought, this is not dissimilar to how HP 3PAR storage operates: volumes live within common provisioning groups, which are containers.  The policy is set on the container in both cases, so it isn’t a stretch to see how the two will work well together.  Again, I don’t have any inside information, but if you look at the post referenced above, Calvin does an excellent job of showing us what is coming.  This, combined with VMware’s recent VSAN announcements, seems to show that there is going to be a role for the traditional storage array alongside software defined storage in the software defined datacenter, at least for now.


It takes a community to create change

Taking a brief break from the software defined datacenter, I thought it might be good to talk about community groups, and a little about giving back.

This morning in church, the message was about being generous: how we are much happier when we give others what we have, not so much money or possessions, but something of ourselves, something that is important. I thought that was an interesting parallel to what our technical communities do.

I was recently recognized for my work in the VMware community with the title of vExpert. The reason I applied to the program is similar to my reason for joining HP. I have worked on virtualization and storage platforms for about a decade. When the opportunity to join HP came up, it was not HP specifically I was excited about. Don’t get me wrong, it is a great company and an exciting place to be, but it wasn’t on my career plan. What intrigued me was the opportunity to join a company where I could teach others about the technology I am so passionate about and give back to the community, but what sealed the deal was the company philosophy. At the heart of the HP Way are the rules of the garage.

  • Believe you can change the world.
  • Work quickly, keep the tools unlocked, work whenever.
  • Know when to work alone and when to work together.
  • Share – tools, ideas. Trust your colleagues.
  • No politics. No bureaucracy. (These are ridiculous in a garage.)
  • The customer defines a job well done.
  • Radical ideas are not bad ideas.
  • Invent different ways of working.
  • Make a contribution every day. If it doesn’t contribute, it doesn’t leave the garage.
  • Believe that together we can do anything.
  • Invent.

When I look at these, I see them on display in the tech communities: in the user groups, in the books, in the blog posts, in the Twitter debates, in the sessions at trade shows, and in the conversations we have with customers, competitors, peers, and friends.  It is amazing that we can all work with different products, for different companies, and debate who has the best product, yet we seem to share a common goal of educating those around us and making the world better through technology.

As you go through your week, I would encourage you to look around and see what you can do to be generous. Teach someone something, help a junior team member’s career, get involved in an online community, or better yet, join a user group. We all have something to contribute; just remember, it’s not about me, and it’s not about you. It is time we started living the rules of the garage, and it is time we all made a difference.


The changing role of shared storage in the Software Defined Datacenter: Part 2

Previously we discussed the shift from traditional array-based storage to a more software defined model.  Of course this is not a huge shift per se, but rather a change in marketing terms and perceptions.  The Software Defined Datacenter is nothing more than virtualization at every level, simply a further step in our distributed compute trend.  It is also important to remember that traditional array-based shared storage is not dead; despite the numerous storage vendors competing in the software defined storage space, there is still a great deal of development around the traditional storage array, but that is a future topic.

When looking at traditional storage, it is necessary to understand the essence of the storage array.  You have a processor, memory, networking, and hard drives, all tied together by an operating system.  In essence you have a very specialized, or in some cases a commodity, server.  So what differentiates all the vendors?  Generally speaking, the software and the support.  Most arrays provide similar functionality to one degree or another.  Certainly one manufacturer may do something somewhat better, and another may have some specialized hardware, but from a high-level business perspective they perform essentially similar functions, and it is left to those of us who are fascinated by the details to sort out which array is best in a specific environment and to determine who can best support it for the long term.

As we begin to shift storage onto servers, relying on industry-standard processors, memory, and network components rather than specific, dedicated parts, there is a trade-off, much like the one we saw when we began to virtualize the compute layer.  No longer does the storage vendor control the underlying hardware.  No longer is purpose-built hardware available to absorb some of the load.  This presents an interesting challenge for storage and server vendors alike.

Unfortunately, while manufacturers will set standards, write reference architectures, and create support matrices, many users will bend or simply ignore them.  When the storage vendor cannot control the hardware, it becomes much more difficult to provide performance guarantees or support.  There will always be a need for traditional array-based storage, for performance guarantees, and for workloads that software defined storage just cannot support.  As users demand more, faster, and cheaper storage for their apps, we are going to have to strike a balance between traditional arrays, software defined storage, and the new technologies being created in labs around the world.


The changing role of shared storage in the Software Defined Datacenter: Part 1

I was having a conversation the other day with some colleagues about the future of our profession.  As you probably know by now, I have spent the better part of the past decade working specifically on storage and virtualization.  More and more, I find myself discussing the erosion of traditional storage in the market.  Certainly there will always be storage arrays; they have their efficiencies, enabling storage to be shared between servers and provisioned just in time, and preventing the stranded capacity that was a challenge for many of us in the not so distant past.

To demonstrate this, we should look at the traditional virtualization model.

[Figure: Servers connected to a shared storage array]

We have traditionally used the shared storage array for redundancy, clustering, and minimizing storage waste.  When I was a storage administrator, I was very good at high-performance databases.  We would use spindle count and RAID type to make our databases keep up with the applications.  When I moved on to being a consultant, I found ways not only to deliver the performance needed, but also to minimize wasted space, using faster drives, tiering software, and virtualization to cram more data onto my storage arrays.  As above, in the traditional model, deduplication, thin technologies, and similar solutions were of huge benefit to us.  It became all about efficiency and speed.  With virtualization this was also a way to enable high availability and even distribution of resources.
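As a rough illustration of that spindle-count-and-RAID-type arithmetic, here is a back-of-the-envelope calculation using the commonly quoted rules of thumb (roughly 180 IOPS for a 15K drive, and write penalties of 2, 4, and 6 for RAID 10, RAID 5, and RAID 6). The disk count and workload mix are hypothetical, not sizing guidance.

```python
# Rough front-end IOPS a disk group can sustain, given a read/write mix and a RAID write penalty.
def frontend_iops(disks: int, iops_per_disk: int, read_pct: float, write_penalty: int) -> float:
    raw = disks * iops_per_disk
    write_pct = 1.0 - read_pct
    # Each host read costs one back-end I/O; each host write costs `write_penalty` back-end I/Os.
    return raw / (read_pct + write_pct * write_penalty)

# Example: 24 x 15K spindles (~180 IOPS each) driving a 70/30 read/write database workload.
for raid, penalty in (("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)):
    print(f"{raid}: ~{frontend_iops(24, 180, 0.70, penalty):.0f} IOPS")
```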

What we have seen over the past several years is a change in architecture known as software defined storage.

[Figure: Software Defined Storage (StoreVirtual VSA)]

With SSDs in excess of 900GB, and that size expected to keep increasing, and with small form factor SATA drives at 4TB and even larger drives coming, the way we think about storage is changing.  We can now use software to keep multiple copies of the data, which allows us to simulate a large traditional storage array, and newer features such as in-software tiering bring us one step closer.
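To make the "multiple copies in software" idea concrete, here is a simple capacity calculation for a VSA-style cluster that mirrors every block across nodes. The node count, drive sizes, and two-copy protection level (Network RAID 10 in StoreVirtual terms) are illustrative assumptions, not a sizing recommendation.

```python
# Usable capacity of a scale-out, replica-based storage cluster (toy numbers).
def usable_tb(nodes: int, drives_per_node: int, drive_tb: float, copies: int) -> float:
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb / copies  # every block is stored `copies` times, on different nodes

# Example: 4 server nodes, each with 8 x 4TB SATA drives, keeping 2 copies of all data.
print(usable_tb(nodes=4, drives_per_node=8, drive_tb=4.0, copies=2))  # -> 64.0 TB usable of 128 TB raw
```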

Ironically, as I was writing this, @DuncanYB re-posted on Twitter an article he wrote a year ago, RE: Is VSA the future of Software Defined Storage? (OpenIO).  I follow Duncan, among several others, quite closely, and I think what he is saying makes sense.  Interestingly, some of what he is talking about is being handled by OpenStack, but that introduces other questions.  Among them: is this something OpenStack should be solving, or does this need to be larger than OpenStack in order to gain wide adoption?  And what is the role of traditional arrays in the Software Defined Datacenter?

Needless to say, this is a larger conversation than any of us, and it is highly subjective.  I hope the next few posts become part of that larger conversation, and that they cause others to think, debate, and bring their ideas to the table.  As always, I have no inside information; these are my personal thoughts, not those of my employer or any other company.


Forget converged systems and the cloud, what’s really important are the apps.

My wife is a special education teacher.  She uses technology at school and at home, but she just wants to get the job done and is not interested in all the flashy features a product may offer.  She does try to love technology for my sake, but I can tell that when I get some new idea or piece of technology to test, she is just not excited about it.  She is, however, long-suffering and patient with my crazy ideas, which makes her an excellent sounding board for many of my theories and concepts.  One thing I have learned from our conversations is that she doesn’t care what the technology is; she is far more concerned about the apps.

I remember when I first convinced her to take my old iPhone.  She had previously used an LG VUE, a basic phone with a slide-out QWERTY keyboard.  My company had given me a new iPhone 4S, so I dropped my personal line and was left with an iPhone 4 no longer in use.  She agreed to try it, so I set her account up and started downloading some of the apps she typically used on the PC.  This was also the summer she decided to get serious about running, so she loved the Nike+ app, and the Chase Mobile Banking app became very convenient.  Ironically, a few months after I gave her the phone, I was in the middle of a job transition and without a work phone for a few weeks.  I immediately started going through smartphone withdrawal, so I asked her if we could switch back for a few weeks until I got a new work phone.  Needless to say, that was a mistake; as soon as she was back on the basic phone, she missed all her apps.

The following summer, against her will, I convinced her to upgrade to the iPhone 5.  Her only concerns were: what will change?  Will my apps be there?  There was no thought about the faster technology, no wow factor; she just wanted to know that her apps would all be there.  We had a similar issue when she finally upgraded from iOS 6 to iOS 7: what will become of my apps?

This is a long-winded story, but I think it makes a good point.  At the end of the day, what I care about is shiny new technology.  Is it faster, is it cooler, what can I make it do?  For the consumers of technology, though, it isn’t about the cloud or converged systems; those are simply words to them.  It is about: can I have my apps, can I have them everywhere, on every device, and can I have them now?

I will leave you with this thought: as the providers of the applications, or of the infrastructure on which the applications run, it is long past time for us to stop thinking about how cool our technology is and who makes the better system or product, and to start thinking about what the user expects and how we can provide the experience they need, regardless of what device they are on or what platform or technology they prefer.  That is the real reason for the cloud, for converged systems, and for the time we all put into making this work.


Software Defined Storage – Hype or the Future?

If you have a Twitter account, or read any of VMware’s press releases or any of the technical blogs, you have to know by now that VMware is back in the storage business with a vengeance.  As most of us will recall, the first rollout, their Virtual Storage Appliance (VSA), was less than exciting, so I have to admit that when I first heard about VSAN I was a little skeptical.  Of course, over the past several months we have watched things play out on social media with the competitors, with arguments over software defined storage versus traditional hardware arrays, which raises the question: is software defined storage all hype, or is this the future of storage?

As always, in the interests of full disclosure, I work for HP, which clearly has a dog in this fight.  I have worked with VMware products for nearly ten years now as a customer, a consultant, and in my current role, speaking about VMware to customers and in various public presentation forums as often as possible.  While I attempt to be unbiased, I do have some strong opinions on this.  That being said…

When I first touched VMware, I was a DBA/systems engineer at a casino in Northern California.  We badly needed a lab environment to run test updates in, and despite some begging and pleading, I was denied the opportunity to build out the entire production environment in my test lab.  We debated going with workstations and building it that way, but one of our managers had read about VMware and wanted us to determine whether we could use it for a lab, with the thought that we could later virtualize some of our production servers.  Keep in mind this was in the early ESX 2 days, so things were pretty bare at that point; documentation was spotty, and management was nothing like we have today.  By the time we completed our analysis and were ready to go to production, ESX 3 was released and we were sold.  We were convinced that we would cut our physical infrastructure substantially, and we thought that servers would become a commodity.  While compute virtualization does reduce physical footprint, it introduces additional challenges, and in most cases it simply changes the growth pattern: as infrastructure becomes easier to deploy, we experience virtual sprawl instead of physical sprawl, which in turn leads to growth of the physical infrastructure.  Servers are far from a commodity today; server vendors are pushing harder to distinguish themselves, to go further, achieve higher density, and deliver just a little more performance or value.  In the end, VMware’s compute virtualization just forced server vendors to take it to another level.

When VMware started talking about their idea of VSAN, I immediately started trying to find a way to get in on the early beta testing.  It was a compelling story, and I was eager to prove that VMware was going to fall short of my expectations again.  There was no way the leader in compute virtualization could compete with storage manufacturers.  Besides, software defined storage was becoming fairly common in many environments and was moving from test/dev into production, so the market was already pretty saturated.  As I started to research and test VSAN for myself, and to read up on what the experts were saying about it, I was quite surprised.  This is a much different way of looking at software defined storage, especially where VMware is concerned.

At the end of the day, there are a lot of software defined storage choices out there.  The biggest difference is who is backing them.  When I was building my first home lab, and software defined storage was not really ready for prime time, we used to play around with Openfiler and FreeNAS, which were great for home labs at the time.  They gave us iSCSI storage so we could test and demo, but I have only met a few customers using them in production, and they were usually asking me to help them replace it with something supported.  The main difference with VSAN and the other commercially supported software defined storage implementations is the features.  The reality is that no matter what you choose, having enterprise-level support matters far more than which solution is technically best.  The important thing is to look at the features, put aside all the hype, and decide what makes sense for your environment.

I don’t think we will see the end of traditional storage anytime soon, if ever, although in many environments we will continue to see high availability move into the application layer, making shared storage less of an issue; think OpenStack.  I do think, though, that most of us will agree that software defined storage is the future, for now, so it is up to you, the consumer, to decide what features make sense and which vendor can support your environment for the term of the contract.


Converged Systems – More than the sum of its parts

In the interests of full disclosure, I work for Hewlett-Packard, so this is my semi-unbiased opinion.  I spend my days talking about HP products, so I am not completely independent, but this post has less to do with product and more to do with concepts and standards.

Over the past few years, we have seen a number of vendors releasing converged systems, pods, blocks, and other types of systems that are essentially designed to simplify the ordering, provisioning, and support processes.  In my job I am fortunate enough to speak with many smart people, and as I was discussing this trend with a technical salesperson, they asked me why anyone would buy a system this way when it would be cheaper to purchase the components and build it like we always have.  Why would anyone want to pay more for one of these systems?

To get the answer, we really need to determine what the motives are.  I have posted previously about converged infrastructure, and I do tend to talk about the cloud, automation, and the need for a new, more efficient way of deploying infrastructure.  The best part about statistics is that they can always make your point for you, but having worked in IT for over 20 years in many roles, I believe it is safe to say IT typically spends anywhere from 70-80% of its time on operations.  That is just keeping the lights on.  To put that in dollar terms: if my IT budget, excluding salary, is $1M, I am spending $700K-800K on keeping the lights on.  It also means that out of a 40-hour work week (yeah, right), the team is spending between 28 and 32 hours on support and basic operational tasks, not on making the business productive, implementing new projects, or planning for the future.  This lack of time for innovation creates delays when new projects need to be done and is counterproductive.  To solve this, you either wait, hire more people, or bring in temporary help in the form of vendors or consultants to do some of the work for you.  If you do bring in vendors, though, or even a consultant, they will often stand up their own components, but it is rare to find one who builds out a complete solution and hands it to you ready for your applications.
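Just to spell out that back-of-the-envelope math (the 70-80% figure is my own rough estimate from the paragraph above, not a formal statistic, and the budget is hypothetical):

```python
# Rough cost of "keeping the lights on" for a hypothetical IT budget and work week.
budget = 1_000_000      # annual IT budget in dollars, excluding salaries
hours_per_week = 40
for ops_share in (0.70, 0.80):
    print(f"{ops_share:.0%} on operations -> ${budget * ops_share:,.0f} of budget, "
          f"{hours_per_week * ops_share:.0f} hours of a {hours_per_week}-hour week")
```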

One of the values of a converged system, no matter who it comes from, is having a complete system.  I like the analogy of my computer.  I am pretty technical, and I love tinkering.  I used to love building gaming PCs.  I would spend months planning and designing, order the parts, and then spend hours painstakingly assembling the system.  Now I purchase the computer I want, and when it is too slow I purchase something faster.  I know that when it arrives I will install apps, and I might even put on a new operating system, but other than memory or hard drive upgrades, I typically don’t mess with the hardware.  It is just not worth the time; besides, with some of the newer systems I can do much more using VMware Workstation, or Crouton on my Chromebook, so I can run virtual systems or different environments on a single machine.  The concept behind the converged system is that, unlike a reference architecture, you aren’t building from parts but rather purchasing the whole system along with the services to stand it up.  Then, much like a shiny new computer, it is handed to you ready to go; just add your applications.  For the majority of these systems, the hypervisor is already in place, with VMware still preferred by many.

There are many other benefits to explore, but the key is to remember that there are considerations beyond the cost of hardware; sometimes you have to consider the opportunity cost of building your own systems.


Living with a Chromebook

A few months ago I wrote about my HP Chromebook and some of its advantages and limitations.  It has been a while, and I have been doing more with it, so I thought it would be interesting to share my experiences.

Previously I was using Crouton to enable a full Linux desktop, with LXDE as my preferred environment.  I have since changed to GNOME; I really like the minimalist interface it provides.  For the most part, though, I was using the Chromebook for web browsing, quick lookups, watching Netflix or Hulu while I was working, and other basic things.  I would occasionally use the Linux desktop if I wanted to set up an Eclipse environment to test something, but for the most part it was disposable.

A couple of weeks ago, my work laptop bit the dust.  In its defense, I am not very easy on my equipment; I drive it to its max and expect it to perform, so it may have been me pushing it a little too hard, but come on, this is what I do.  I was pretty ticked, but luckily it was a few days before I went on PTO, and I still had my iPad and Chromebook.  That got me thinking, though, mostly because it took corporate IT 1-2 business days to get me a replacement.  Now, I know that is pretty good since I am a remote employee and it had to come across the country, but in my world that is an eternity.  I mean, overnight shipping is too slow; I want a stinkin’ replicator so I don’t have to wait.

I did a little digging, trying to find a way to make my life a little less dependent on Windows and to avoid buying yet another personal computer, which would not have gone over well with my wife.  I found this article, http://www.techrepublic.com/blog/smb-technologist/connect-the-thunderbird-email-client-to-your-exchange-server/, which led me to install Thunderbird and run my e-mail through it for about an hour, which was not a positive experience.  I just didn’t like the interface, and the lack of proper calendaring support was not good.  Then I realized that the same concept would hold true for Evolution.  I did have to dig through some Ubuntu posts to find out that EWS support (the evolution-ews package) has to be installed separately, but it worked like a charm.  Much better.

I have since removed xterm in favor of GNOME Terminal, added the Chrome web browser, and made a couple of other tweaks, but for the most part it is great.  I still switch back to the Chrome OS desktop for basic browsing, since it is just faster and easier, but that is a simple keystroke, no reboot.

All in all, I am pretty happy with this setup.  I love the new GNOME interface and how clean it is.  I love that the system is faster than my Windows PC, and I am very happy with the apps I have so far.  If I need Windows apps, I just remote into my issued Windows 8 laptop, but I am doing more and more work from my Chromebook.  Kudos to the HP and Google teams for the effort they put into this; I am becoming more and more enamored with the simplicity of this setup and with the ability to change it as needed.  Let’s see where I am in a few more months.



How relevant is VMware in an Openstack world?

Last August, during VMworld, I had the honor of sitting in on a briefing with VMware’s Paul Strong, VP, Office of the CTO. As always, he was engaging and incredibly knowledgeable. When he asked if we had any questions, I casually glanced around, and when none of my peers had anything to ask, I figured, why not?

“What is the impact of Openstack on VMware, do you see it as a threat?”

The answer was not what I was looking for. He talked about the marketplace, and how the two compete on some fronts but complement each other on others. It was not quite what I wanted, but it was the right answer; I just didn’t realize it yet.

I have been doing more work recently with customers wanting to build internal private clouds, which of course means something different to each of them, but in my view the critical piece is a self-service portal: enabling users to make requests through a website and removing the requirement for IT intervention, or at least minimizing it.

When I asked the question, I thought OpenStack would invalidate VMware and all the things we have been talking about for years. As I have worked with it more, I think VMware actually becomes more relevant in these environments. While it is fun to put KVM in a lab and run OpenStack on top of it, it is not at the same level. OpenStack itself struggles with commercial support, with HP offering one of the more viable enterprise solutions.

In the end it all comes back to the GNU Manifesto: give the software away and charge for the support. Those who want it for free can have it, but for most companies it makes more sense to get something with enterprise support.

So to answer the question, I would say that VMware makes sense on many levels, and adding OpenStack on top of VMware simply opens more doors to a well-supported private or hybrid cloud environment.
