The changing role of shared storage in the Software Defined Datacenter: Part 1

I was having a conversation the other day with some colleagues about the future of our profession.  As you probably know by now, I have spent the better part of the past decade working specifically on storage and virtualization.  More and more, I find myself discussing the erosion of traditional storage in the market.  Certainly there will always be storage arrays; they have their efficiencies, enabling storage to be shared between servers, just-in-time provisioning, and preventing the stranded capacity that was a challenge for many of us in the not so distant past.

To demonstrate this, we should look at the traditional virtualization model.

[Image: servers connected to shared storage]

We have traditionally used the shared storage array for redundancy, clustering, and minimizing storage waste.  When I was a storage administrator, I was very good at high-performance databases: we would use spindle count and RAID type to make our databases keep up with the applications.  When I moved on to being a consultant, I found ways not only to deliver the performance needed, but also to minimize wasted space by using faster drives, tiering software, and virtualization to cram more data onto my storage arrays.  As above, in the traditional model, deduplication, thin technologies, and similar solutions were of huge benefit to us.  It became all about efficiency and speed.  With virtualization, this was also a way to enable high availability and even distribution of resources.

What we have seen over the past several years is a change in architecture known as software defined storage.

[Image: Software Defined Storage (StoreVirtual VSA)]

With SSD drives in excess of 900GB, and that size expected to keep increasing, and with small-form-factor SATA drives at 4TB and even larger drives coming, the way we think about storage is changing.  We can now use software to keep multiple copies of the data across servers, which allows us to simulate a large traditional storage array, and newer features such as tiering in the software bring us one step closer to parity with those arrays.
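To make that idea a little more concrete, here is a minimal sketch in Python of the concept: software, rather than a dedicated array controller, decides where the copies of each block live and when a block moves between tiers.  The node names, replica count, and promotion threshold are all hypothetical assumptions for illustration, not any vendor's actual implementation.

```python
# Illustrative sketch of software-defined storage placement and tiering.
# Assumptions (not from any real product): four hypothetical nodes, two
# replicas per block, and an arbitrary read count that promotes a block
# from the HDD capacity tier to the SSD performance tier.

import random

NODES = ["server-a", "server-b", "server-c", "server-d"]  # commodity hosts with local disks
REPLICAS = 2          # keep two copies of every block for redundancy
HOT_THRESHOLD = 100   # reads before a block is promoted to flash (arbitrary)


class Block:
    def __init__(self, block_id: str):
        self.block_id = block_id
        self.reads = 0
        self.tier = "hdd"  # new data lands on the capacity tier
        # place each copy on a different node, so losing one server loses no data
        self.placement = random.sample(NODES, REPLICAS)

    def read(self) -> None:
        self.reads += 1
        # software tiering: promote frequently read blocks to the SSD tier
        if self.tier == "hdd" and self.reads >= HOT_THRESHOLD:
            self.tier = "ssd"


if __name__ == "__main__":
    block = Block("vm01-disk0-0001")
    print(f"{block.block_id} stored on {block.placement} ({block.tier} tier)")
    for _ in range(HOT_THRESHOLD):
        block.read()
    print(f"after {block.reads} reads -> {block.tier} tier")
```

Obviously a real solution worries about consistency, rebuilds, and network paths, but the point is that all of those decisions now live in software running on ordinary servers.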

Ironically, as I was writing this, @DuncanYB re-posted on Twitter an article he wrote a year ago: Is VSA the future of Software Defined Storage? (OpenIO).  I follow Duncan, among several others, quite closely, and I think what he is saying makes sense.  Interestingly, some of what he is talking about is being handled by OpenStack, but that introduces some other questions.  Among these: is this something OpenStack should be solving, or does this need to be larger than OpenStack in order to gain wide adoption?  And what is the role of traditional arrays in the Software Defined Datacenter?

Needless to say, this is a larger conversation than any of us, and it is highly subjective.  I hope the next few posts become part of that larger conversation, and that they cause others to think, debate, and bring their ideas to the table.  As always, I have no inside information; these are my personal thoughts, not those of my employer or any other company.
