Enterprise Architecture – When is good enough, good enough?

In a conversation with a large customer recently, we were discussing their enterprise architecture. A new CIO had come in and wanted to move them to a converged infrastructure. I was digging into what their environment was going to look like as they migrated, and why they wanted to make that move. It came down to a good-enough design versus maximizing hardware efficiency. Rather than trying to squeeze every bit of efficiency out of the systems, they were looking at how they could deploy a standard and still get a high degree of efficiency, but the focus was more on time to market with new features.

My first foray into enterprise architecture was early in my career at a casino. I moved from a DBA role to a storage engineer position vacated by my new manager. I spent most of my time designing for performance to compensate for poorly coded applications. As applications improved, I started to push application teams and vendors to fix the code on their side. As I started to work on the virtualization infrastructure design for this and other companies, I took pride in driving CPU and memory as hard as I could, getting as close to maxing out the systems as possible while still providing enough overhead for failover. We kept putting more and more virtual systems into fewer and fewer servers. In hindsight, we spent far more time designing, deploying, and managing our individual snowflake hosts and guests than what we were saving in capital costs. We were masters of “straining at a gnat to swallow a camel”.

Good enterprise design should always take advantage of new technologies. Enterprise architects must be looking at roadmaps to prevent obsolescence. With the increased rate of change (just think about unikernels vs. containers vs. virtual machines), we are moving faster than the hardware refresh cycles of all of our infrastructure.

This doesn't mean that converged or hyper-converged infrastructure is better or worse. It is an option, but a restrictive one, since the vendor must certify your chosen hypervisor, management software, automation software, etc. with each part of the system they put together. On the other hand, building your own requires you to do all of that certification work yourself.

The best solution is going to come with compromises. We cannot continue to look at virtual machines or services per physical host. Time to market for new or updated features is the new infrastructure metric. The application teams' ability to deploy packaged or developed software is what matters. For those of us who grew up as infrastructure engineers and architects, we need to change our thinking, change our focus, and continue to add value by being partners to our development and application admin brethren. That is how we truly add business value.

VMware Storage Part 1: Introduction & Local Storage

One of my favorite radio talk show hosts talks about the importance of having the “heart of a teacher”. When I started out in IT, I thought I wanted to be in tech support, teaching others how to use their computers and how to make the technology work for them. I have come to realize that while I enjoy helping others, I prefer to talk about concepts and help people understand storage and virtualization. I am going to spend the next several posts going through some VMware storage concepts in what may seem to many like simple terms, but many of the people I talk to do not have a solid understanding, so I think it is always wise to level set, to start from a common point as it were. While there are many blogs out there with incredibly technical content on this, many well written and helpful, I thought I would give this my own slant in an attempt to help some of the people I interact with and to meet new ones. Feedback is appreciated, and I am always open to suggestions for new topics.

VMware in general is all about abstraction. With compute, we put a software layer between the physical hardware and the operating system. This enables us to have portable servers and to consolidate many workloads onto a smaller physical footprint. When we think about this from the storage side of things, it is not so different. If we think about VMware creating a container to hold many servers, then a datastore, storage presented to VMware to hold Virtual Machines, can be considered a container to store the hard drives and configuration files that make up a Virtual Machine. This storage is presented as one or more logical drives, datastores in VMware terms, each up to 64TB in size. The reasoning behind sizing a datastore will be covered later, and is certainly open for discussion, but it is enough to know for now that we create a datastore from logical disk space.
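
To make that a bit more concrete, here is a minimal sketch using pyVmomi, the open-source Python SDK for the vSphere API, that simply lists the datastores an environment presents along with their size. The vCenter hostname and credentials are placeholders for your own environment, and skipping certificate validation is a lab-only shortcut.

```python
# Minimal sketch (pyVmomi): list datastores and their capacity.
# vcenter.example.com and the credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate validation
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
content = si.RetrieveContent()

# Gather every Datastore object in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

GB = 1024 ** 3
for ds in view.view:
    s = ds.summary
    print(f"{s.name}: {s.capacity / GB:.0f} GB total, "
          f"{s.freeSpace / GB:.0f} GB free, type {s.type}")

view.Destroy()
Disconnect(si)
```

Each entry it prints is one of the containers described above, a pool of logical disk space ready to hold Virtual Machine files.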

When creating a Virtual Machine, VMware will ask you how much space you want and which datastore you want to place it on. This will again be covered in a future post about design, but it is important to note that a datastore can contain multiple Virtual Machines, much like a VMware host (a physical machine running VMware) can contain multiple Virtual Machines.
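
For those who prefer to see that same decision through the API, here is a hedged sketch of the "where does it live" choice expressed as a pyVmomi ConfigSpec. It reuses the content object from the sketch above; the datacenter, folder, and resource pool lookups simply grab the first ones found, and all names are placeholders rather than anything prescriptive.

```python
# Hedged sketch (pyVmomi): the "which datastore" choice expressed in a ConfigSpec.
# Reuses the `content` object from the previous sketch; names are placeholders.
from pyVmomi import vim

datacenter = content.rootFolder.childEntity[0]     # first datacenter found
vm_folder = datacenter.vmFolder                    # default VM folder
compute = datacenter.hostFolder.childEntity[0]     # first cluster / compute resource
resource_pool = compute.resourcePool

spec = vim.vm.ConfigSpec(
    name="demo-vm",
    numCPUs=2,
    memoryMB=2048,
    guestId="otherGuest64",
    # vmPathName decides which datastore holds the VM's configuration files.
    files=vim.vm.FileInfo(vmPathName="[datastore1]"),
)

task = vm_folder.CreateVM_Task(config=spec, pool=resource_pool)
```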

Each VMware host, provided it contains local hard drives, will have a datastore called “Local Datastore” or something similar. This is not a bad thing; it can be useful for Virtual Machines that you do not want to be able to move, but it is limited in that shared storage is required for high availability and resource distribution. With the release of VSAN in vSphere 5.5, as well as the many Virtual Storage Appliance (VSA) products, local disk can be used as shared storage as well, but more on that later.
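
If you are curious which of your datastores are local and which are shared, the datastore summary exposes a multipleHostAccess flag, true when more than one host has the datastore mounted, which is exactly the property features like HA and vMotion care about. A small sketch, again assuming the pyVmomi connection from the earlier examples:

```python
# Hedged sketch (pyVmomi): separate local-only datastores from shared ones.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    shared = ds.summary.multipleHostAccess   # True when mounted by more than one host
    print(f"{ds.summary.name}: {'shared' if shared else 'local / single host'}")

view.Destroy()
```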

To wrap up, storage is one of the more critical aspects of virtualization success. There are many more features to cover; I will be explaining many of the VMware features, as well as where each different HP storage product may make sense and some reasons why I personally would choose HP over competitors' products. Stay tuned, and please let me know if there are other topics I should be covering, or if there is more detail needed.

VMware Fundamentals Part 1

While this blog is designed to be specific to EMC VNX and VMware vSphere, I have decided to dig a little bit into the basics of VMware: how we manage it, how we work on it, and generally just how your daily life works in a VMware environment. Of course, as always, your mileage may vary, but this is meant to give those new to the product a little bit of confidence in what they just paid for, and hopefully to take some of the fear and mystery out of VMware. This originally started on my other blog, but I am going to be using this one more often, so part 1 is just a re-post.

Having worked with VMware since ESX version 2, I sometimes forget that not everyone speaks the lingo or is obsessed with every feature of vSphere. This realization came when a client, whom I consider a good friend, asked me for some training on how high availability works and how to vMotion a virtual server. In light of this, I have realized the time has come to write a series on the fundamentals of VMware. This will be part 1 of many; I plan to continue to write until I run out of topics.

So to start with we need to look at the layout of vSphere. vSphere is configured in a hierarchical layout similar to the following.

Virtual Center, or vCenter, controls the cluster. This may be a physical host, but more often than not it will be a virtual machine riding on top of one of the hosts it manages. This allows it to take advantage of the high availability features in VMware. In our logical layout, this is the top-level object in the virtualization hierarchy.
Next down is the virtual datacenter. This is simply a logical representation of the datacenter. Think of it as similar to how you might have one or more datacenters within a corporation.
Next is the cluster. This is a bit of a new concept in the IT world; we are used to clusters that might have a couple of servers in them, but they are typically application specific. In the case of vSphere, we are actually grouping a number of hosts together to create a pool of shared resources.
Under this we have the hosts and virtual machines. At a logical level these appear at the same level, since they are both members of the cluster, but of course physically the virtual machine lives on one physical host or another. The virtual machine is nothing more than a series of files: a configuration file, a disk file, and a few others. This enables us to do some cool things, like move the virtual server between hosts without interrupting normal operations.
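
For the more hands-on reader, the same hierarchy can be walked programmatically. Below is a hedged sketch using pyVmomi, the Python SDK for the vSphere API; the content variable is what you get back from a SmartConnect/RetrieveContent session against vCenter, and everything it prints maps directly to the layers described above.

```python
# Hedged sketch (pyVmomi): walk vCenter -> datacenter -> cluster -> host -> VM.
# `content` is the RetrieveContent() result of a SmartConnect session to vCenter.
from pyVmomi import vim

for dc in content.rootFolder.childEntity:              # virtual datacenters
    if not isinstance(dc, vim.Datacenter):
        continue
    print(f"Datacenter: {dc.name}")
    for cluster in dc.hostFolder.childEntity:          # clusters of hosts
        if not isinstance(cluster, vim.ClusterComputeResource):
            continue
        print(f"  Cluster: {cluster.name}")
        for host in cluster.host:                      # physical hosts in the cluster
            print(f"    Host: {host.name}")
            for vm in host.vm:                         # VMs currently on that host
                print(f"      VM: {vm.name}")
```
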
So that about covers the basics. Next I plan to cover vMotion, and then possibly get deeper into the networking and storage layouts. If you have specific questions, please feel free to reply to this thread.

Micro Niche

Hello World! For anyone who has written code, that is something which will be quite familiar, something which should bring a smile, or a grimace, depending on your experience. This is the beginning of a new foray into blogging for me, hopefully a more focused one this time. A brief note about the name and title of this blog: Carpe Diem is, of course, Latin for Seize the Day. CarpeTech is a shameless ripoff, meant to grab your attention and point you to the title. I truly want technology to enable business and be a positive thing for businesses, thus Seize the Technology: make it work for the business.

A bit about me: I am a Senior Datacenter Consultant for an EMC and VMware partner in the Pacific Northwest. Basically, that means I love to talk about technology, generally speaking about EMC and VMware products. In honesty, though, I am interested in evangelizing technology on a much broader level; products are commodities, some better than others, though the purpose of this blog is to focus on those two specifically. As a standard disclaimer, my opinions are my own and do not necessarily represent the positions of my employer, EMC, or VMware. I welcome feedback and, if necessary, correction. I strive for honesty and integrity in everything, so please feel free to take me to task if you feel I am in error.

So why Micro Niche? A friend and colleague recently told me that it is not enough to find your niche; we need to find a micro niche. There are more than enough virtualization blogs and storage blogs; it is time for consultants to start differentiating themselves. While we are always a jack of all trades to a certain extent, mostly due to our love of technology, we all have something that we are passionate about.

All this being said, I plan to focus on VMware vSphere best practices, specifically on the EMC VNX product line. Now, I know that Chad at Virtual Geek covers this space quite effectively, but he also covers many other topics. I hope to bring an outsider's perspective, with many links to his and others' postings and a bit of my own perspective on things. So thanks for reading, and hopefully this provides some value to the community in general and specifically to the customers I work with.
