It takes a community to create change

Taking a brief break from the software-defined datacenter, I thought it might be good to talk about community groups, and a little about giving back.

This morning in church, the message was about being generous: how we are much happier when we give others what we have, not so much money or possessions, but rather something of ourselves, something that is important. I thought that was an interesting parallel to what our technical communities do.

I was recently recognized for my work in the VMware community with the title of vExpert. My reason for applying to the program is similar to my reason for joining HP. For about a decade I have worked on virtualization and storage platforms. When the opportunity to join HP came up, it was not HP specifically I was excited about. Don't get me wrong, it is a great company and an exciting place to be, but it wasn't on my career plan. What intrigued me was the opportunity to join a company where I could teach others about the technology I am so passionate about and give back to the community. What sealed the deal was the company philosophy: at the heart of the HP Way are the Rules of the Garage.

  • Believe you can change the world.
  • Work quickly, keep the tools unlocked, work whenever.
  • Know when to work alone and when to work together.
  • Share – tools, ideas. Trust your colleagues.
  • No politics. No bureaucracy. (These are ridiculous in a garage.)
  • The customer defines a job well done.
  • Radical ideas are not bad ideas.
  • Invent different ways of working.
  • Make a contribution every day. If it doesn’t contribute, it doesn’t leave the garage.
  • Believe that together we can do anything.
  • Invent.

When I look at these, I see them on display in the tech communities: in the user groups, in the books, in the blog posts, in the Twitter debates, in the sessions at trade shows, and in the conversations we have with customers, competitors, peers, and friends.  It is amazing how we can all work with different products, for different companies, and debate who has the best product, yet still share a common goal of educating those around us and making the world better through technology.

As you go through your week, I would encourage you to look around and see what you can do to be generous. Teach someone something, help a junior team member's career, get involved in an online community, or better yet, join a user group. We all have something to contribute. Just remember, it's not about me, and it's not about you; it is time we started living the Rules of the Garage, and it is time we all made a difference.


The changing role of shared storage in the Software Defined Datacenter: Part 2

Previously we discussed the shift from traditional array-based storage to a more software-defined model.  Of course this is not a huge shift per se, but rather a change in marketing terms and perceptions.  The Software Defined Datacenter is nothing more than virtualization at all levels, simply a further step in our distributed compute trend.  It is also important to remember that traditional array-based shared storage is not dead; despite the numerous storage vendors competing in the software-defined storage space, there is still a great deal of development around the traditional storage array, but that is a future topic.

When looking at traditional storage, it is necessary to understand the essence of the storage array.  You have processors, memory, networking, and hard drives, all tied together by an operating system.  In essence you have a very specialized, or in some cases a commodity, server.  So what differentiates all the vendors?  Generally speaking, the software and the support.  Most arrays provide similar functionality to one degree or another.  Certainly one manufacturer may do something somewhat better, and another company may have some specialized hardware, but from a high-level business perspective they perform essentially similar functions, and it is left to those of us who are fascinated by the details to sort out which array is best in a specific environment, and to determine who can best support it for the long term.

As we begin to shift storage onto servers, relying on industry-standard processors, memory, and network components rather than specific and dedicated parts, there is a trade-off, much like the one we saw when we began to virtualize the compute layer.  No longer does the storage vendor control the underlying hardware.  No longer is purpose-built hardware available to absorb some of the load.  This presents an interesting challenge for storage and server vendors alike.

Unfortunately, while manufacturers will set standards, write reference architectures, and create support matrices, many users will bend or simply ignore them.  When the storage vendor cannot control the hardware, it becomes much more difficult to provide performance guarantees or support.  There will always be a certain need for traditional array-based storage, for performance guarantees, and for workloads that software-defined storage just cannot support.  As users demand more, faster, and cheaper storage for their apps, we are going to have to find a way to strike a balance between traditional arrays, software-defined storage, and the new technologies being created in labs around the world.


The changing role of shared storage in the Software Defined Datacenter: Part 1

I was having a conversation the other day with some colleagues about the future of our profession.  As you probably know by now, I have spent the better part of the past decade working specifically on storage and virtualization.  More and more, I find myself discussing the erosion of traditional storage in the market.  Certainly there will always be storage arrays; they have their efficiencies, enabling storage to be shared between servers and provisioned just in time, and preventing the stranded capacity that was a challenge for many of us in the not so distant past.

To demonstrate this, we should look at the traditional virtualization model.  [Diagram: servers attached to a shared storage array]

We have traditionally used the shared storage array for redundancy, clustering, and minimizing storage waste.  When I was a storage administrator, I was very good at high-performance databases.  We would use spindle count and RAID type to make our databases keep up with the applications.  When I moved on to being a consultant, I found ways not only to deliver the performance needed, but also to minimize wasted space by using faster drives, tiering software, and virtualization to cram more data onto my storage arrays.  In the traditional model, deduplication, thin technologies, and similar solutions were of huge benefit to us.  It became all about efficiency and speed.  With virtualization this was also a way to enable high availability and even distribution of resources.
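
For those who never had to live it, the spindle math went roughly like this: every write gets amplified by the RAID level, so you size the disk count from back-end IOPS, not front-end IOPS.  Here is a minimal sketch in Python; the per-disk IOPS figures and write penalties are the common rules of thumb from that era, not exact values for any particular drive or array.

    import math

    # Rule-of-thumb numbers; real values varied by drive model and vendor.
    DISK_IOPS = {"15k": 180, "10k": 130, "7.2k": 80}
    WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

    def spindles_needed(front_end_iops, write_fraction, drive, raid):
        # Back-end IOPS = reads + writes * RAID write penalty.
        writes = front_end_iops * write_fraction
        reads = front_end_iops - writes
        back_end = reads + writes * WRITE_PENALTY[raid]
        return math.ceil(back_end / DISK_IOPS[drive])

    # A 5,000 IOPS database at 30% writes on 15k drives:
    print(spindles_needed(5000, 0.30, "15k", "raid10"))  # 37 spindles
    print(spindles_needed(5000, 0.30, "15k", "raid5"))   # 53 spindles

That RAID 5 versus RAID 10 gap is exactly why we obsessed over spindle count and RAID type for write-heavy databases.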

What we have seen over the past several years is a change in architecture known as software-defined storage.

[Diagram: Software Defined Storage (StoreVirtual VSA)]

With SSDs in excess of 900 GB, and that size expected to keep increasing, and with small form factor SATA drives at 4 TB and even larger drives coming, the way we think about storage is changing.  We can now use software to keep multiple copies of the data, which allows us to simulate a large traditional storage array, and newer features such as software-based tiering bring us one step closer.
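
To make that concrete, here is a minimal sketch of that kind of software mirroring, sometimes called network RAID.  It is my own illustration of the concept, not any vendor's implementation: each block is written to more than one server node, so losing a node no more loses the data than losing a single drive in a mirrored array does.

    # Illustration only: mirroring blocks across commodity server nodes.
    class Cluster:
        def __init__(self, node_names, replicas=2):
            self.nodes = {name: {} for name in node_names}
            self.replicas = replicas

        def write(self, block_id, data):
            # Spread the copies across nodes with simple modular placement.
            names = sorted(self.nodes)
            for i in range(self.replicas):
                target = names[(hash(block_id) + i) % len(names)]
                self.nodes[target][block_id] = data

        def read(self, block_id):
            # Any surviving copy can serve the read.
            for store in self.nodes.values():
                if block_id in store:
                    return store[block_id]
            raise KeyError(block_id)

    cluster = Cluster(["node1", "node2", "node3"])
    cluster.write("blk-42", b"customer data")
    del cluster.nodes["node2"]     # simulate losing an entire server
    print(cluster.read("blk-42"))  # a replica on another node still answers

Real implementations add quorum, rebalancing, and consistency guarantees on top of this, but the core trade is the same: capacity spent on extra copies buys the resilience that dedicated array hardware used to provide.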

Ironically, as I was writing this, @DuncanYB re-posted on Twitter an article he wrote a year ago: "Is VSA the future of Software Defined Storage? (OpenIO)".  I follow Duncan, among several others, quite closely, and I think what he is saying makes sense.  Interestingly, some of what he is talking about is being handled by OpenStack, but that introduces other questions.  Among these: is this something OpenStack should be solving, or does this need to be larger than OpenStack in order to gain wide adoption?  And what is the role of traditional arrays in the Software Defined Datacenter?

Needless to say, this is a larger conversation than any of us, and it is highly subjective.  I hope the next few posts become part of that larger conversation, and that they cause others to think, debate, and bring their ideas to the table.  As always, I have no inside information; these are my personal thoughts, not those of my employer or any other company.


Forget converged systems and the cloud, what’s really important are the apps.

My wife is a special education teacher who uses technology at school and at home, but she just wants to get the job done and is not interested in all the flashy features a product may offer.  She does try to love technology for my sake, but I can tell that when I get some new idea or piece of technology to test, she is just not excited about it.  She is, however, long-suffering and patient with my crazy ideas, which makes her an excellent sounding board for many of my theories and concepts.  One thing I have learned from our conversations is that she doesn't care what the technology is; she is much more concerned about the apps.

I remember when I first convinced her to take my old iPhone.  She had previously used an LG VUE, a basic phone with a slide-out QWERTY keyboard.  My company had given me a new iPhone 4S, so I dropped my personal line and was left with my iPhone 4 no longer in use.  She agreed to try it, so I set her account up and started downloading some apps she typically used on the PC.  This was the summer she also decided to get serious about running, so she loved the Nike+ app, and the Chase Mobile Banking app became very convenient.  Ironically, a few months after I gave her the phone I was in the middle of a job transition and without a work phone for a few weeks.  I immediately started going through smartphone withdrawal, so I asked her if we could switch back for a few weeks until I got a new work phone.  Needless to say, that was a mistake; as soon as she was back on the basic phone, she missed all her apps.

The following summer, against her will, I convinced her to upgrade to the iPhone 5.  Her only concerns were: what will change?  Will my apps be there?  There was no thought of the faster technology, no wow factor; she just wanted to know that her apps would all be there.  We had a similar issue when she finally upgraded from iOS 6 to iOS 7: what will become of my apps?

This is a long-winded story, but I think it makes a good point.  At the end of the day, what I care about is shiny new technology.  Is it faster, is it cooler, what can I make it do?  For the consumers of technology, though, it isn't about the cloud or converged systems; those are simply words to them.  It is about: can I have my apps, can I have them everywhere, on every device, and can I have them now?

I would leave you with this thought: as the providers of the applications, or of the infrastructure on which the applications run, it is long past time for us to stop thinking about how cool our technology is and who makes a better system or product, and to start thinking about what the user's expectations are and how we can provide the experience they need, without regard for what device they are on or what platform or technology they prefer.  This is the real reason for the cloud, for converged systems, and for the time we all put into making this work.


Software Defined Storage – Hype or the Future?

If you have a Twitter account, or read any of VMware's press releases or any of the technical blogs, you have to know by now that VMware is back in the storage business with a vengeance.  As most of us will recall, the first rollout, their Virtual Storage Appliance (VSA), was less than exciting, so I have to admit that when I first heard about vSAN I was a little skeptical.  Over the past several months we have watched things play out on social media with the competitors, arguments over software-defined storage versus traditional hardware arrays, which raises the question: is software-defined storage all hype, or is this the future of storage?

As always, in the interests of full disclosure: I work for HP, which clearly has a dog in this fight.  I have worked with VMware products for nearly ten years now, as a customer, as a consultant, and in my current role speaking about VMware to customers and in various public presentation forums as often as possible.  While I attempt to be unbiased, I do have some strong opinions on this.  That being said…

When I first touched VMware, I was a DBA/systems engineer at a casino in Northern California.  We badly needed a lab environment to test updates in, and despite some begging and pleading, I was denied the opportunity to duplicate the entire production environment in my test lab.  We debated going with workstations and building it that way, but one of our managers had read about VMware and wanted us to determine if we could use it for a lab, with the thought that we could later virtualize some of our production servers.  Keep in mind this was in the early ESX 2 days, so things were pretty bare at that point: documentation was spotty, and management tooling was nothing like we have today.  By the time we completed our analysis and were ready to go to production, ESX 3 had been released and we were sold.  We were convinced that we would cut our physical infrastructure substantially, and we thought that servers would become a commodity.  While compute virtualization does reduce the physical footprint, it introduces additional challenges, and in most cases it simply changes the growth pattern: as infrastructure becomes easier to deploy, we experience virtual sprawl instead of physical sprawl, which in turn drives growth of the physical infrastructure.  Servers are far from a commodity today; server vendors are pushing harder than ever to distinguish themselves, to achieve higher density, and to deliver just a little more performance or value.  In the end, VMware's compute virtualization just forced server vendors to take it to another level.

When VMware started talking about their idea of a vSAN, I immediately started trying to find a way to get in on the early beta testing.  It was a compelling story, and I was eager to prove that VMware was going to fall short of my expectations again; surely the leader in compute virtualization could not compete with storage manufacturers.  Besides, software-defined storage was becoming fairly common in many environments and was moving from test/dev into production, so the market was already pretty saturated.  As I started to research and test vSAN for myself, as well as reading up on what the experts were saying about it, I was quite surprised.  This is a much different way of looking at software-defined storage, especially where VMware is concerned.

At the end of the day there are a lot of choices out there from a software-defined storage perspective.  The biggest difference is who is backing them.  When I was building my first home lab, and software-defined storage was not really ready for prime time, we used to play around with Openfiler and FreeNAS, which were great for home labs at the time.  They gave us iSCSI storage so we could test and demo, but I have met only a few customers using them in production, and they were usually asking me to help them replace it with something supported.  The main difference with vSAN and the other commercially supported software-defined storage implementations is the features.  The reality is that no matter what you choose, far more important than picking the best solution is having enterprise-level support.  The important thing is to look at the features, put aside all the hype, and decide what makes sense for your environment.

I don't think we will see the end of traditional storage anytime soon, if ever, although in many environments we will continue to see high availability move into the application layer, making shared storage less of an issue; think OpenStack.  I do think most of us will agree that software-defined storage is the future, for now, so it is up to you, the consumer, to decide what features make sense and which vendor can support your environment for the term of the contract.


Converged Systems – More than the sum of its parts

In the interests of full disclosure, I work for Hewlett-Packard, so this is my semi-unbiased opinion.  I spend my days talking about HP products, so I am not completely independent, but this post has less to do with product and more to do with concepts and standards.

Over the past few years, we have seen a number of vendors releasing converged systems, pods, blocks, and other types of systems which are essentially designed to simplify the ordering, provisioning, and support processes.  In my job I am fortunate enough to speak with many smart people, and as I was discussing this trend with a technical salesperson, they asked me why anyone would buy a system this way when it would be cheaper to purchase the components and build it like we always have.  Why would anyone want to pay more to get one of these systems?

To get the answer we really need to determine what the motives are.  I posted previously about converged infrastructure, and I do tend to talk about the cloud, automation, and the need for a new, more efficient way of deploying infrastructure.  The best part about statistics is that they can always make your point for you, but having worked in IT for over 20 years in many roles, I believe it is safe to say IT typically spends anywhere from 70-80% of its time on operations.  That is just keeping the lights on.  To put that into dollars, if my IT budget, excluding salary, is $1M, I am spending $700k-$800k on keeping the lights on.  It also means that out of a 40-hour work week (yeah, right), staff are spending between 28 and 32 hours on support and basic operational tasks, not on making the business productive, implementing new projects, or planning for the future.  This lack of time for innovation creates delays when new projects need to be done, and is counterproductive.  To solve this, you either wait, hire more people, or bring in temporary help in the form of vendors or consultants to do some of the work for you.  If you do bring in a vendor, or even a consultant, they will often stand up their components, but it is rare to find one who builds out a solution and hands it to you ready to install your applications.

One of the values of a converged system, no matter who it comes from, is having a complete system.  I like the analogy of my computer.  I am pretty technical, and I love tinkering.  I used to love building gaming PCs.  I would spend months planning and designing, order the parts, and then spend hours painstakingly assembling the system.  Now I purchase the computer I want, and when it is too slow I purchase something faster.  I know when it arrives I will install apps, and I might even put on a new operating system, but other than memory or hard drive upgrades, I typically don't mess with the hardware.  It is just not worth the time; besides, with some of the newer systems I can do much more using VMware Workstation, or Crouton on my Chromebook, running virtual systems or different environments on a single machine.  The concept behind the converged system is that unlike a reference architecture, you aren't building from parts, but rather purchasing the whole system along with the services to stand it up.  Then, much like a shiny new computer, it is handed to you ready to go; just add your applications.  For a majority of systems, the hypervisor is already in place, with VMware still preferred by many.

There are many other benefits to explore, but the key is to remember that sometimes there are considerations beyond the cost of hardware; sometimes you have to weigh the opportunity cost of building your own systems.


Living with a Chromebook

A few months ago I wrote about my HP Chromebook and some of its advantages and limitations.  It has been a while, and I have been doing more with it, so I thought it would be interesting to share my experiences.

Previously I was using Crouton to enable a full Linux desktop, with LXDE as my preferred environment.  I have since changed to GNOME; I really like the minimalist interface it provides.  For the most part, though, I was using the Chromebook for web browsing, quick lookups, watching Netflix or Hulu while I was working, and other basic things.  I would occasionally use the Linux desktop if I wanted to set up an Eclipse environment to test something, but for the most part it was disposable.

A couple of weeks ago, my work laptop bit the dust.  In its defense, I am not very easy on my equipment; I drive it to its max and expect it to perform, so it may have been me pushing it a little too hard, but come on, this is what I do.  I was pretty ticked, but luckily it was a few days before I went on PTO, and I still had my iPad and Chromebook.  That got me thinking, though, mostly because it took corporate IT 1-2 business days to get me a replacement.  Now, I know that is pretty good since I am a remote employee and it had to come across the country, but in my world that is an eternity.  I mean, overnight shipping is too slow; I want a stinkin' replicator so I don't have to wait.

I did a little digging, trying to find a way to make my life a little less dependent on Windows, and to avoid buying yet another personal computer, which would not have gone over well with my wife.  I found this article, http://www.techrepublic.com/blog/smb-technologist/connect-the-thunderbird-email-client-to-your-exchange-server/, which led me to install Thunderbird and run my e-mail through it for about an hour, which was not a positive experience.  I just didn't like the interface, and the lack of proper calendaring support was not good.  Then I realized that the same concept would hold true for Evolution.  I did have to dig through some Ubuntu posts to find out that the Exchange Web Services plugin (the evolution-ews package) has to be installed separately, but it worked like a charm.  Much better.

I have since dropped xterm in favor of GNOME Terminal, added the Chrome web browser, and made a couple of other tweaks, but for the most part it is great.  I still switch back to the Chrome desktop for basic browsing, as it is just faster and easier, but that is a simple keystroke, no reboot.

All in all, I am pretty happy with this setup.  I love the new GNOME interface and how clean it is.  I love that the system is faster than my Windows PC, and I am very happy with the apps I have so far.  If I need Windows apps I just remote into my issued Windows 8 laptop, but I am doing more and more work from my Chromebook.  Kudos to the HP and Google teams for the effort they put into this; I am becoming more and more enamored with the simplicity of this setup and with the ability to change it as needed.  Let's see where I am on this in a few more months.


How relevant is VMware in an OpenStack world?

Last August, during VMworld, I had the honor of sitting in a briefing with VMware's Paul Strong, VP, Office of the CTO. As always he was engaging and incredibly knowledgeable. When he asked if we had any questions, I casually glanced around, and when none of my peers had anything to ask, I figured, why not.

“What is the impact of OpenStack on VMware? Do you see it as a threat?”

The answer was not what I was looking for. He talked about the marketplace, and how they compete on some fronts but complement each other on others. It was not quite what I wanted, but it was the right answer; I just didn't realize it yet.

I have been doing more work recently with customers wanting to build internal private clouds, which of course means something different to each of them, but from my view the critical piece is a self-service portal: enabling users to make requests through a website and removing, or at least minimizing, the requirement for IT intervention.

When I asked the question, I thought OpenStack would invalidate VMware and all the things we have been talking about for years. As I have worked with it more, I think VMware becomes more relevant in these environments. While it is fun to put KVM in a lab and run OpenStack on top of it, that is not at the same level. OpenStack itself struggles with commercial support, with HP offering one of the more viable enterprise solutions.

In the end it all comes back to the GNU Manifesto: give the software away and charge for the support. Those who want it for free can have it, but for most companies it makes more sense to get something with enterprise support.

So to answer the question, I would say that VMware makes sense on many levels, and adding OpenStack on top of VMware simply opens more doors to a well-supported private or hybrid cloud environment.


To the cloud

The other day I was having lunch with a partner who had recently changed companies and was interested in working more closely with HP. He posed a simple question that really made me think: “What has you excited? What are you interested in right now?” Not a typical question I get, especially from salespeople, but the answer was simple. The cloud, of course.

Cloud is a big buzzword, and we love to make jokes about it and about the fact that no one can really define it, since we all have different definitions. I have posted on this before, but I think it is a topic worth developing.

One of my customers is very concerned with efficiency. I know, dumb statement; everyone is concerned with efficiency, unless you work for the government… but seriously. This customer is working on a hybrid cloud model for their internal customers, but the key to the whole thing working is not the cloud, it is automation.

When I got started in IT, many years ago, we used to do everything manually. When someone needed a server, we would go through our procurement process to get the hardware. When the server arrived, we would rack it, install the OS, provision the networking, add storage, and then turn it over to the application owner to install their application.

When VMware went prime time a few years back, we were thrilled. The whole process was cut down since it could be templatized, but it still required IT intervention; the process was quicker, but not much different.

So to the point: what is automation, and what has me excited in the IT world? What if we remove the requirement for IT intervention? What if application owners could provision their own servers via a simple interface or programmatically? Imagine what the IT staff could do if they were freed up for more productive tasks.
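
As a sketch of what that could look like, assume a hypothetical provisioning API; the URL, fields, and token below are made up for illustration and do not belong to any specific product.

    import requests

    # Hypothetical self-service provisioning call; illustrative only.
    def provision_server(name, cpus, memory_gb, template):
        resp = requests.post(
            "https://cloud.example.com/api/v1/servers",
            headers={"Authorization": "Bearer <token>"},
            json={"name": name, "cpus": cpus,
                  "memory_gb": memory_gb, "template": template},
        )
        resp.raise_for_status()
        return resp.json()["server_id"]

    # An application owner requests a server without opening a ticket:
    server_id = provision_server("app-db-01", cpus=4, memory_gb=16,
                                 template="linux-base")

No procurement cycle, no racking, no hand-off; the request either draws from a pool of ready capacity or kicks off an automated workflow behind the scenes.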

Now, this is a lofty goal, and it does require more work up front, but this is the cloud utopia we are all shooting for.


What does Google Fiber mean for Portland?

In case you are living under a rock, or still using dial-up: Google has recently announced that it is considering deploying its gigabit Google Fiber offering to a number of new cities, including our very own beloved Portland, OR (https://fiber.google.com/newcities/).  I thought I would give my perspective to the screaming voices on both sides, and try to put this into context.

First of all, to be clear, this is my personal opinion, not related to HP or to any other vendor I may talk about during my day job.  From a technology vendor's perspective, Portland is a poorly served market.  With many of the technology vendors being in Silicon Valley, and many of the start-ups existing there or in Seattle, WA, Portland is seen as too small to matter.  This is a bit ironic when we consider the growing tech community in the Portland area, certainly small but not insignificant, the sought-after lifestyle, and the inexpensive land and power here.  I will admit the taxes and business climate can be challenging, but those can be negotiated.

So what does this have to do with Google Fiber?  Currently I pay around $70 per month for Comcast.  I get around 50 Mbps of internet speed and around 12 basic cable channels.  I look at upgrading from time to time, but since I use Netflix, Hulu Plus, and a few other channels available on my Apple TVs, there is not much reason.  If I could get better speeds and cut the cable out I would, but thus far Comcast has refused.  As a former Comcast employee, I understand how the whole thing works, but I am not happy with the restrictiveness and the poor service.  I currently stream content on 3 TVs as well as a half dozen miscellaneous devices.  I telecommute for work, so my internet service is critical to my family and me.  Google Fiber would be around 20 times faster than what I currently have.  Plus, I could pay a little more to get a better DVR than Comcast offers, and not worry about rate hikes.

So what does this mean for Portland?  As much as I may not love some things about Portland, we are nothing if not an innovative city.  We have a growing food industry, breweries and distilleries, hundreds of small businesses, and many well-established companies such as Intel and Nike, to name a few.  Give Portland more speed, and watch what we do with it.  Despite being treated by many tech vendors as a second-class citizen compared to larger markets, Portland continues to innovate and make a name for itself.

I know there are many options for Google, but I for one would be thrilled to have internet so fast that I will actually have to upgrade all my wireless routers.
