VMware Cloud Disaster Recovery – Advanced Solutions Design

Previously I talked about VCDR (VMware Cloud Disaster Recovery) Solutions Validation and how to properly run a proof of concept. The reality is that while it would be nice if every application were as straightforward as our WordPress example, there are often far more complex applications requiring far more advanced designs for a disaster recovery plan to be successful.

External Network Dependencies

For many applications, even in modern datacenters, external network dependencies, or virtual machines that are too large for traditional replication solutions, can create challenges when building disaster recovery plans.

To solve for this, third-party partners may be used to provide array-based or other host-based replication services. This solution requires far more effort on the part of the managed services partner or the DR admin. Since physical workloads, or workloads too large for VCDR, cannot be orchestrated through the traditional SaaS orchestrator, there is an additional requirement to both test and fail over manually. A Layer 2 VPN between the VMware Cloud environment and the partner location provides connectivity for the applications running in both environments.

Script VM

For more complex VM-only environments, some scripts may need to be run during the test and recovery phases. Similar to bootstrap scripts for operating system provisioning, these scripts may be used for basic or even more advanced configuration changes.

The script VM should be an administrative or management server which is the first VM started in the test recovery or production recovery plan. Separate VMs may be designated for testing versus production recovery as well, enabling further isolated testing, one of the biggest value propositions of the solution. Further specific documentation is located here, Configure Script VM.
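To make this concrete, here is a minimal sketch of the kind of helper a script VM might run during a test or production recovery. Everything in it is hypothetical and of my own choosing, not from the Configure Script VM documentation: the record name, the addresses, and the idea of repointing a dependency through /etc/hosts are illustrative assumptions only.

```python
#!/usr/bin/env python3
"""Hypothetical script-VM helper: repoint an application dependency when a
test or production recovery plan runs. Assumes (for illustration only) that
the recovered app resolves its database through /etc/hosts and that this
script runs with permission to edit that file."""
import argparse
import pathlib

HOSTS_FILE = pathlib.Path("/etc/hosts")

# Illustrative addresses only: the production database versus the replica
# reachable inside the isolated test network.
TARGETS = {
    "test": "10.200.1.50    db.internal.example.com",
    "production": "10.100.1.50    db.internal.example.com",
}

def repoint(mode: str) -> None:
    # Drop any existing entry for the dependency, then add the right one.
    lines = [line for line in HOSTS_FILE.read_text().splitlines()
             if "db.internal.example.com" not in line]
    lines.append(TARGETS[mode])
    HOSTS_FILE.write_text("\n".join(lines) + "\n")
    print(f"db.internal.example.com now points at the {mode} replica")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("mode", choices=("test", "production"))
    repoint(parser.parse_args().mode)
```

The same pattern works for anything else the recovery plan needs changed, such as re-registering services or flipping a load balancer pool, which is why keeping separate script VMs for test and production plans is so useful.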

Extending into VMC on AWS

With more complex architectures, it often makes sense to consider a mixed DR scenario. In these cases, moving some of the applications to the VMC on AWS environment to run permanently, or leveraging another method for replication outside the traditional VCDR SaaS orchestrator, may be warranted. While this does present some risk, since those workloads are not tested with the rest of the VCDR environment, it does provide additional options.

With the recent addition of Cloud to Cloud DR, more options were made available for complex disaster recovery solutions. Once an environment has been migrated full time to VMC on AWS, VCDR can be leveraged as a cost-effective backup solution between regions without a need to refactor applications.

Even in advanced DR scenarios, the VCDR solution is one of the more cost-effective and user-friendly options available. With the simplicity of the VMC on AWS cloud-based interface, and policy-based protection and recovery plans, even more complex environments can take advantage of the automated testing and low management overhead. The best and most impactful DR solution is the one which is tested and which will successfully recover in the event it is needed.


The strange and exciting world of the Internet of Things

For some time I have been passionate about connected devices, home automation, and digging deeper into emerging technologies. If you have read my previous posts, I tend to share what I am passionate about. As I return to blogging I will be writing about the Internet of Things, connected devices, and more on home automation. I plan to write simultaneously about Enterprise IoT and Home Automation.

The focus of the first Enterprise IoT series will be on VMware’s Pulse IoT Center 2.0. Much of this builds on the exceptional work of a friend and colleague, Ken Osborne. Please take a look at his work here, http://iotken.com/.

On the home automation front, I have a number of updates, and will demonstrate building out a better home automation experience and further document my family’s experience interacting with connected devices.

Join me as I explore the strange and exciting world of connected devices and learn with me about where the future of technology seems to be leading us.


Who moved my VMware C# Client?

Years ago I was handed a rack of HP servers, a small EMC storage array, and a few CDs with something called ESX 2 on them. I was told I could use this software to put several virtual servers on the handful of physical servers I had available to me. There was a limited web client available, but most of my time was spent on the command line over SSH. The documentation was limited, so I spent much of my time writing procedures for the company I was at, quickly earning myself a promotion and a new role as a storage engineer.

Today VMware is announcing that the next release of the vSphere product line will deprecate the C# client in favor of the web client. As I have gone through this process, both as a vExpert and a VMware employee, there have been many questions. During our pre-announcement call with the product team at VMware, a number of concerns were voiced about what will work on day one and what this means for customers who have come to rely on the existing client's performance. Rather than focus on the actual changes, most of which are still to be determined, it seemed more helpful to talk about the future of managing systems and the future of operations.


When I started working in server administration, the number of systems one admin might manage was pretty low, maybe less than a dozen. With the advent of virtualization, cloud native applications, DevOps, and no-ops, administrators are managing farms of servers, most of them virtual. We often hear about pets vs. cattle: the concept that most of our servers are moving from being pets, something we care for as part of the family, to cattle, something we use to make money. If one of the cattle has a problem, we don't spend too much time on it; we have many others, and we can just make more.

Whether it is a VMware product, OpenStack, or another management tool, abstracting the deployment and management of systems is becoming more mainstream and more cost effective. In this model, a management client is far less important than APIs and the full-stack management they can enable. For the few use cases where a client is needed, the web client will continue to improve, but the true value is that these improvements will drive new APIs and new tools for managing systems. While change is never easy, a longer term view of both where we came from and where we are going with these interfaces reminds us this is a necessary change, and less impactful than it may seem at first glance.
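To make the API-first point less abstract, here is a rough sketch using the open-source pyVmomi SDK. The vCenter address, credentials, and the task itself are placeholders of my own choosing, not anything VMware is announcing; it simply shows the sort of routine check that no longer needs a thick client.

```python
# Sketch: the kind of inventory check you might once have clicked through in
# the C# client, done against the vSphere API with pyVmomi (placeholder values).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use valid certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # Report power state; the same loop could snapshot, tag, or power on VMs.
        print(f"{vm.name}: {vm.runtime.powerState}")
    view.DestroyView()
finally:
    Disconnect(si)
```

A few lines like this, dropped into a scheduled job or a larger automation tool, is exactly the full-stack management the new APIs are meant to enable.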


What is Dell really buying?

Standard disclaimer: these are my personal opinions and do not reflect those of my employer or any insider knowledge. Take it for what it is worth.

When I heard rumors of the Dell EMC deal, I was pretty skeptical. I am a numbers guy, and the amount of debt that would be required is a bit staggering. Why would a company like Dell even want to acquire a company like EMC, especially after we all watched the pain Dell went through to go private? Why would EMC want to go through the pain of being taken private, by a former competitor no less? With the HP breakup, and IBM selling off a number of product lines over the past decade or so, this almost seems counterintuitive, an attempt to recreate the big tech companies of the ’90s and 2000s which are all but gone.

Sales and Engineering Talent

I have many friends at Dell, and I was even a customer when I worked for some small startups many years ago. In my experience, Dell is really good at putting together commodity products and pricing them to move. Their sales teams are good, but the compensation model makes them tough to partner with.

EMC has a world class sales and marketing organization.  EMC enterprise sales reps are all about the customer experience.  They are machines with amazing relationship skills, and they are well taken care of.  Engineering at EMC is a huge priority as well.  EMC’s higher end support offerings, while costly, are worth every penny.  I have seen them fly in engineers for some larger customers to fix problems.  EMC products are all about the customer experience.  Even though I have not been a fan of their hardware lately, they have done some amazing things around making the experience second to none.

An Enterprise Storage & Software Product

Let’s be honest, Dell has not been a true enterprise player in the storage and software arena. If we look at the products they have acquired, a majority of them are mid-market plays. Compellent was supposed to be their big enterprise storage play, but that is mid-market at best. From a software perspective, most of the products are low end, and Dell doesn’t tend to develop them further.

EMC on the other hand has enterprise-class storage. Say what you want about the complexity of the VMAX line, it is pretty solid. It may be a pain to manage sometimes, but it does set the standard in enterprise storage. EMC has also done amazing things with software. ViPR Controller and ViPR SRM are impressive technologies when implemented appropriately. EMC has done quite well with some of their other software products too, but more importantly, they treat software as a critical part of the stack.

VMware

Enough said; the real value for Dell is getting a good stake in VMware. Like it or not, VMware is the market leader in hypervisors, cloud management, and software defined networking, and it is making incredible strides in automation and software defined storage. The best thing that EMC has done is allow VMware to continue to be independent. If Dell can stick to that plan, the rewards can be incredible.

The reality is this deal won’t change much in the short term from an IT industry perspective. Large storage companies such as EMC and HP Storage are getting their lunch eaten by smaller, more agile storage startups. Servers are becoming more of a commodity, and software continues to be the path forward for many enterprises. This is a good deal for both Dell and EMC; the challenge will be not to go the way of HP. If I could give Michael Dell one piece of advice, it would be to hire smart people and listen to them. Culture matters, and culture is what makes EMC and VMware what they are, so don’t try to change it. Culture is the true value of this acquisition.


VMware User Groups: Not just for VMware employees and Vendors

A couple of weeks ago we had our annual Portland VMUG User Conference. First of all, big kudos to the local VMUG leaders, and a big thank you to the VMUG national headquarters and the vendors who sponsored us. A recurring theme at all of these events I participate in is the number of vendors and VMware employees presenting. I say this not to be critical, but to encourage a different mindset. I am hesitant to say it, because I love getting up in front of VMware users, talking about what we are doing, and getting their feedback and questions. Talking to our customers is one of my favorite parts of being here.

Something which has made the rounds with the usual suspects is the concept of mentoring customers to speak at VMUGs. Mike Laverick wrote this article last year, and I think we need to keep pushing the concept forward. VMUG has a program called Feed Forward to make this a reality. Now, I am not the foremost expert on presenting, but the VMUG is something I consider personally important, especially in Portland. I have been a member for four years now, and I have been presenting for two to three of those years as a partner and a VMware employee. I have met so many cool people and had so many amazing conversations through the process.

The VMUG is not about me, and it is not about vendors; it is absolutely all about the customer. It does very little good to have our partners and employees present every session. Of course there are some customers who do present, but as a VMUG member, and someone who cares deeply about what we do, I would encourage you to get out there, speak up, and get involved. There are literally hundreds of us who are willing to help and encourage you. Most of us are not perfect presenters, but we just want you to be successful. I encourage you to start small, but let us help you become more involved and grow your personal brand at your local VMUG.


Software Defined Storage – Hype or the Future?

If you have a Twitter account, read any of VMware’s press releases, or follow any of the technical blogs, you have to know by now that VMware is back in the storage business with a vengeance. As most of us will recall, the first rollout, their Virtual Storage Appliance (VSA), was less than exciting, so I have to admit that when I first heard about VSAN I was a little skeptical. Of course, over the past several months we have watched things play out on social media with the competitors, arguments over software defined storage versus traditional hardware arrays, which raises the question: is software defined storage all hype, or is this the future of storage?

So, as always, in the interest of full disclosure: I work for HP, which clearly has a dog in this fight. I have worked with VMware products for nearly ten years now, as a customer, as a consultant, and in my current role speaking about VMware to customers and in various public presentation forums as often as possible. While I attempt to be unbiased, I do have some strong opinions on this. That being said…

When I first touched VMware, I was a DBA/Systems Engineer at a casino in Northern California. We badly needed a lab environment to run test updates in, and despite some begging and pleading, I was denied the opportunity to build a copy of the entire production environment in my test lab. We debated going with workstations and building that way, but one of our managers had read about VMware and wanted us to determine whether we could use it for a lab, with the thought that we could later virtualize some of our production servers. Keep in mind this was in the early ESX 2 days, so things were pretty bare at that point: documentation was spotty, and management was nothing like we have today. By the time we completed our analysis and were ready to go to production, ESX 3 was released and we were sold. We were convinced that we would cut our physical infrastructure substantially, and we thought that servers would become a commodity. While compute virtualization does reduce the physical footprint, it introduces additional challenges, and in most cases it simply changes the growth pattern: as infrastructure becomes easier to deploy, we experience virtual sprawl instead of physical sprawl, which in turn drives growth of the physical infrastructure. Servers are far from a commodity today; server vendors are pushing harder to distinguish themselves, to achieve higher density, and to give just a little bit more performance or value. In the end, VMware’s compute virtualization just forced server vendors to take it to another level.

When VMware started talking about their idea of a VSAN, I immediately started trying to find a way to get in on the early beta testing. It was a compelling story, and I was eager to prove that VMware was going to fall short of my expectations again; surely there was no way the leader in compute virtualization could compete with storage manufacturers. Besides, software defined storage was becoming fairly common in many environments and was moving from test/dev into production, so the market was already pretty saturated. As I started to research and test VSAN for myself, as well as reading up on what the experts were saying about it, I was quite surprised. This is a much different way of looking at software defined storage, especially where VMware is concerned.

At the end of the day there are a lot of choices out there from a software defined storage perspective. The biggest difference is who is backing them. When I was building my first home lab, and software defined storage was not really ready for prime time, we used to play around with Openfiler and FreeNAS, which were great for home labs at the time. They gave us iSCSI storage so we could test and demo, but I have only met a few customers using them in production, and they were usually asking me to help them find something with support to replace them. The main difference with VSAN and the other commercially supported software defined storage implementations is the features. The reality is that no matter what you choose, having enterprise-level support matters far more than picking the single best solution. The important thing is to look at the features, put aside all the hype, and decide what makes sense for your environment.

I don’t think we will see the end of traditional storage anytime soon, if ever, although in many environments we will continue to see high availability move into the application layer, and shared storage will become less of an issue (think OpenStack). I do think, though, that most of us will agree that software defined storage is the future, for now, so it is up to you, the consumer, to decide which features make sense and which vendor can support your environment for the term of the contract.


VMWARE STORAGE PART 4: VSAN

Going through the VMware storage options, I would be remiss if I did not talk about VSAN. VSAN is simply VMware’s next step toward further improving the software defined datacenter. To begin with, though, it is important to understand a little about what VMware is trying to accomplish. Two years ago on GigaOM, an interesting article came through on VMware’s slow and steady attack on storage, following the release of their less than stellar Virtual Storage Appliance (VSA). At the time I took exception to this; of course VMware would never want to get into the storage business, they are a software company. Then came VSAN.

I still do not believe VMware intends to completely own the storage market, but they are certainly changing the game. Now, I work for HP, and I remember when server virtualization started to take off: we thought the server was going to become irrelevant and we would just use cheap whitebox servers. Fortunately, we at HP realized that we had to step up our game. As usual, the server engineering team worked with our alliance partners and built even better products designed around virtualization to deliver higher virtualization density and higher performance. I equate this latest storage product to the same thing. It will certainly capture certain market segments, but it is not a threat to the core storage business of the larger storage vendors.

With that said, just what is VSAN? The concept behind VSAN is actually an old one come again; we have been doing scale-out object storage for some time in this industry. VSAN simply moves this into the hypervisor stack. The requirements are pretty simple: you need a minimum of three host servers running vSphere 5.5, each with an SSD and at least one SAS or SATA drive (HDD). The requirements are well documented, so I don’t want to get into those details, but this is enough to get started.

Conceptually, the SSD becomes a cache to accelerate reads and writes to the drives. The HDDs are used for capacity, and replicas are kept based on the policy rules, generally at least two copies of the data on separate hosts.
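As a rough illustration of what those replica rules mean for sizing, here is some back-of-the-napkin math. The numbers and the slack-space figure are assumptions of my own, not output from any official VSAN sizing tool, but they show why usable capacity lands well below raw capacity when every object is kept in two copies.

```python
# Back-of-the-napkin VSAN sizing sketch (illustrative numbers, not a sizer).
hosts = 3                 # minimum supported cluster size
hdds_per_host = 4
hdd_size_gb = 900         # per-disk capacity
copies = 2                # "at least two copies of the data on separate hosts"
slack = 0.30              # free space assumed reserved for rebuilds/rebalancing

raw_gb = hosts * hdds_per_host * hdd_size_gb
usable_gb = raw_gb / copies * (1 - slack)
print(f"raw: {raw_gb} GB, roughly usable: {usable_gb:.0f} GB")
# raw: 10800 GB, roughly usable: 3780 GB
```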

The setup is pretty simple, and there are hands-on labs available online from VMware. It is also quite simple to set up a lab by running vSphere 5.5 nested inside VMware Workstation.

Currently, in the v1 beta, this scales to 8 hosts, so this is not going to be a massive system; it is more of an SMB environment or a lab system. It also introduces some interesting challenges on the server and network side. On the server side, official support is pretty limited, since the RAID controller has to support pass-through. There is no RAID, since this is an object store; data protection is accomplished through multiple copies of the data. On the network side, this is challenging because we are copying data between hosts to maintain consistency. This generally, in my mind, means that the era of running VMware on 1GbE networks is probably nearing an end.

At the end of the day, VMware has outlined a number of use cases. This is a v1 beta product, so I am nervous about running it in production just yet. Many of the use cases are around high performance workloads, VDI for example, where user experience can make or break a project. I do think this is an exceptional way of creating shared storage in a lab, and it gives us many new ways to work in a lab environment.

As to the future of VMware storage, traditional storage, and our industry, I think this is the beginning of the next big thing. I will be discussing HP’s own software defined storage soon, as well as our traditional storage platforms in a VMware context. I don’t see VSAN as a threat, but rather as a call to action on our part to make our products better and continue to innovate. I will personally use VSAN in my lab alongside HP StoreVirtual; they serve different use cases, and both are fun to test.


The future of storage

So in a previous post, VMware Fundamentals Part 3, I talked about VMware and storage and made a case for a mixed protocol storage environment. While I stand behind what I said, I always think it is interesting to take a deeper look at the industry. Calvin Zito, the HP storage guy, made some very good points in his post, Block or File Based Storage for VMware, which got me thinking a bit more. That, coupled with the recent product releases from HP, has inspired me to talk a little more about this topic.

Calvin makes some compelling points about how block gets the most attention in VMware’s development cycle, and at this point, with software initiators and their ease of use, it often makes more sense to simply use a block based protocol.

That being said, in many virtual environments, we often find that the traditional storage array doesn’t fit the bill. We are running a number of host servers, with internal storage that is going to waste. So how do we take advantage of this lost capacity, and how can we lower our costs while adding additional value?

A tough concept to swallow for many of us who came up through the ranks of storage administration is that storage is not king any more. It is certainly important, but gone are the days when I could walk into a company as a storage admin and name my price. Certainly, as a storage architect I can demand more, but even so, I am required to know more than just storage. The really tough part, though, is that storage is no longer defined as a monolithic array. We have to start embracing the concept that storage, like everything else, must be defined as software. This becomes more and more important as we look at the move toward ITaaS. Nicholas Carr drives this point home in his book, The Big Switch.

So, the short answer to the question of what the future of storage holds: much like compute, networking, and the rest of what we do, the future of storage is software. Whether it is open source or supported by a company such as HP, this is where we are headed.


VMware Fundamentals Part 2

A little behind, but better late than never. In my last post I covered the basic layout of VMware in modern terms.

Clustering in a VMware environment provides us with a number of benefits. The first, of course, is high availability. High availability, simply put, means that if one host server dies, the virtual machines on that host are restarted on another host. This is excellent, as it allows us to minimize downtime. Of course, this feature, along with a majority of the other excellent features in VMware, requires shared storage, but that is a post for another day.

If we cannot tolerate any downtime, we can use an additional feature, Fault Tolerance. This enables us to run two virtual machines in a master/slave configuration, so if one dies the other picks up where it left off with no downtime. This is nice, but it comes with a number of limitations and a significant cost, since it doubles the number of virtual machines in use and thus the amount of resources required.

One of the coolest tricks in VMware, in my opinion, is vMotion. We can take a live virtual machine and move it between physical hosts without interruption, provided we have properly configured our network. This is excellent because the process can be automated to keep the physical hosts balanced, moving live virtual machines around to ensure that no one host is overloaded.

Taking it a step further, we often have multiple datastores in VMware. This is done to reduce contention on the file systems, since we are seeing larger and larger deployments with more and more virtual machines. Storage vMotion allows us to move the disks of a virtual machine between datastores. With the release of vSphere 5 we can even automate this through datastore clusters, but that is again a topic for another day.

Migrating a virtual machine, whether the live state or the virtual disks, is as simple as either dragging the virtual machine to a new host or right-clicking the virtual machine and selecting Migrate.
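For those who prefer scripting to drag and drop, the same operations are exposed through the vSphere API. The sketch below uses the open-source pyVmomi SDK; the helper function and its arguments are my own illustration, and it assumes you have already looked up the VM, target host, and target datastore objects from an authenticated connection.

```python
# Sketch: vMotion and Storage vMotion through the vSphere API with pyVmomi.
from pyVmomi import vim

def migrate_vm(vm, target_host=None, target_datastore=None):
    """Move a running VM to another host (vMotion) and/or move its disks to
    another datastore (Storage vMotion). The arguments are pyVmomi managed
    objects already retrieved from an authenticated ServiceInstance."""
    tasks = []
    if target_host is not None:
        # Host migration: the live state moves, the VM keeps running.
        tasks.append(vm.MigrateVM_Task(
            host=target_host,
            priority=vim.VirtualMachine.MovePriority.defaultPriority))
    if target_datastore is not None:
        # Storage migration: the VM's files move to the new datastore.
        tasks.append(vm.RelocateVM_Task(
            spec=vim.vm.RelocateSpec(datastore=target_datastore)))
    return tasks
```

Either way, client or API, the underlying operation is the same, which is exactly why the feature is so easy to automate.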

In the next post I plan to cover a little of the storage side, how we configure datastores, and hopefully demystify why we choose different protocols in a VMware environment.


VMware Fundamentals Part 1

While this blog is designed to be specific to EMC VNX and VMware vSphere, I have decided to dig a little bit into the basics of VMware, how we manage it, how we work on it, and generally just how your daily life works in a VMware environment. Of course as always your mileage may vary, but this is to give those new to the product a little bit of confidence in what they just paid for, and hopefully to take some of the fear and mystery out of VMware. This originally started on my other blog, but I am going to be using this one more often, so part 1 is just a re-post.

Having worked with VMware since ESX version 2, I sometimes forget that not everyone speaks the lingo or is obsessed with every feature of vSphere. This realization came when a client, whom I consider a good friend, asked me for some training on how high availability works and how to vMotion a virtual server. In light of this, I realized the time has come to write a series on the fundamentals of VMware. This will be part 1 of many; I plan to continue writing until I run out of topics.

So, to start with, we need to look at the layout of vSphere. vSphere is organized in a hierarchy similar to the following.

Virtual Center, or vCenter, controls the cluster. This may be a physical host, but more often than not it will be a virtual machine riding on top of one of the hosts it manages, which allows it to take advantage of the high availability features in VMware. In our logical layout, this is the top-level object in the virtualization hierarchy.
Next down is the virtual datacenter. This is simply a logical representation of the datacenter. Think of it the way you might have one or more datacenters within a corporation.
Next is the cluster. This is a bit of a new concept in the IT world; we are used to clusters which might have a couple of servers in them, but those are typically application specific. In the case of vSphere, we are grouping a number of hosts together to create a pool of shared resources.
Under this we have the hosts and virtual machines. At a logical level these appear at the same level, since they are both members of the cluster, but of course physically the virtual machine lives on one physical host or another. The virtual machine is nothing more than a series of files: a configuration file, a disk file, and a few others. This enables us to do some cool things, like move the virtual server between hosts without interrupting normal operations. For those who prefer code, the sketch after this list walks the same hierarchy programmatically.
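This is a rough example using the open-source pyVmomi SDK, not anything tied to the ESX 2-era tooling described above; the vCenter address and credentials are placeholders, and it assumes a reasonably current vCenter. It simply prints datacenters, clusters, hosts, and VMs in the same order as the layout just described.

```python
# Sketch: walk the vCenter inventory (datacenter -> cluster -> host -> VM)
# with pyVmomi. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use valid certs normally
si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    for dc in si.RetrieveContent().rootFolder.childEntity:
        if not isinstance(dc, vim.Datacenter):
            continue
        print(f"Datacenter: {dc.name}")
        for cluster in dc.hostFolder.childEntity:
            if not isinstance(cluster, vim.ClusterComputeResource):
                continue
            print(f"  Cluster: {cluster.name}")
            for host in cluster.host:
                print(f"    Host: {host.name}")
                for vm in host.vm:
                    print(f"      VM: {vm.name}")
finally:
    Disconnect(si)
```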
So that about covers the basics. Next I plan to cover vMotion, and then possibly get deeper into the networking and storage layouts. If you have specific questions, please feel free to reply to this thread.