Monday, September 27, 2010

Welcoming the Virtual Storage Platform

Hitachi Data Systems (HDS) has just announced their latest flagship storage offering, the Virtual Storage Platform (VSP). As we have all come to expect, speeds and feeds have been improved, and the performance of the storage subsystem overall has increased. While I've seen these numbers, until I get a feel for it myself they are just numbers. What makes this platform different from the previous-generation subsystems is something that I am pretty excited about: honest-to-goodness flexibility! Bear with me, this is going to be long, and still not all-inclusive.

Previous-generation storage systems have generally shared the same issue: when you needed more capacity, you needed to add more infrastructure (cache boards, back-end directors, and so on) to support it, regardless of the performance requirements you were actually experiencing. Likewise, if you needed more performance for your current workload, you typically had to purchase additional spindles, which in turn meant additional infrastructure.

Overview of the VSP

Hitachi's new VSP changes things, a lot. First off, let's talk about the hardware. Gone are the non-row-conforming frames! The VSP uses from one to six standard 19-inch racks and conforms to hot/cold aisle layouts. Cables can be fed in from the top or the bottom. Hitachi provides the standard rack pre-configured, although I'll speculate that they will offer an option to allow non-HDS racks for colo use. Hopefully you won't miss the expensive three-phase power whips, because the VSP uses standard L6-30 whips now. A single VSP can consist of two modules. Each module can drive up to 1024 HDDs, for a total of 2048 HDDs in a single subsystem (excluding externally attached storage). The new architecture uses 2.5-inch SAS HDDs and 3.5-inch SATA HDDs. You can mix and match these: there are two different DKUs (disk frames, if you will), and up to three DKUs fit in a 19-inch rack.

Back-end architecture

It is no secret that Hitachi is leading the industry in moving its product line to SAS. This has been very successful with the AMS2000 line. The VSP follows suit with a switched 6Gb SAS back-end, leaving behind the long-in-the-tooth FC-AL technology.

The next significant change is moving away from dedicated microprocessors on each Front/Back End Director (FED/BED) to centralized processor nodes. There are still specialized chips on the FEDs/BEDs, but they serve a different role than before. The benefits of this are tremendous. For the first time that I can think of in the enterprise space, you can start off with a "base" configuration, meaning one BED feature, and drive 1024 2.5-inch SAS drives in a single-module VSP. If you need more drives, you can add a second VSP module and BED pair for up to an additional 1024 2.5-inch drives, giving you 2048 (internal) drives in a single VSP. There are other components in the mix here, but what I'm trying to point out is that in the old days I added BEDs when I added drives and cabinets in order to provide connectivity to those drives, and that goes away with the new architecture.

The other cool thing is that if I have a number of HDDs today and need more bandwidth to those existing drives, I can add a BED feature and immediately double the back-end bandwidth. With the previous FC-AL technology, loops could only service their own drives. With the new switched SAS architecture, any BED link can talk to any drive within the module. This becomes important if I have a workload that requires very high IOPS on SSDs, but I don't need or want lots of storage capacity.
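
To make the "add a BED feature, double the bandwidth" point concrete, here is a minimal back-of-the-envelope sketch in Python. The links-per-feature count is purely my assumption for illustration, not an HDS spec; only the 6Gb SAS figure comes from the post.

```python
# Back-of-the-envelope only: illustrates why adding a BED feature to a
# switched SAS back-end raises bandwidth to the drives you already own.
# LINKS_PER_BED_FEATURE is a made-up assumption, not an HDS spec.

SAS_LINK_GBPS = 6                  # 6Gb SAS back-end, per the post
LINKS_PER_BED_FEATURE = 8          # hypothetical link count per feature

def backend_bandwidth_gbps(bed_features):
    """Aggregate raw back-end bandwidth; with a switched back-end,
    every link can reach every drive in the module."""
    return bed_features * LINKS_PER_BED_FEATURE * SAS_LINK_GBPS

print(backend_bandwidth_gbps(1))   # 48  -> base configuration
print(backend_bandwidth_gbps(2))   # 96  -> add a BED feature, bandwidth doubles
```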

I am excited about this because across my wide customer base, I can count several instances where there is plenty of back-end processing power for IOPS and bandwidth, but more and more capacity is needed. In many of these cases virtualization became the de facto standard because the added expense of BEDs drove the cost of internal storage too high. Virtualization is a great solution, one we've had great success with in driving customer adoption and bringing storage costs down, but it's not for every environment. Fortunately the new VSP architecture gives us more options.

More new ways to carve

In a previous post I discussed different ways to carve up and manage storage. The Hitachi VSP redefines this again with Hitachi Dynamic Tiering (HDT).

HDT is HDP (Hitachi Dynamic Provisioning), but on a more granular scale. HDP is great: it made storage management easier and faster, and it made performance issues more or less a thing of the past by practically eliminating hot spots with wide striping. HDT takes it to a new level by letting you tier your HDP pools with up to three mini-pools of different performance characteristics.

Let's say we start off with an HDP pool of 600GB 10K SAS drives. We have 10 RAID groups of 7D+1P, or roughly 38.5TB of storage and about 11,200 raw IOPS. I start shoving data and hosts onto the pool. Keep in mind that I told my boss(es) I wanted 300GB drives because I was concerned about the IOPS density of these "massive" drives, but when he saw the price tag he articulated the benefits of 600GB drives to me. Well, that new web 2.0 product we deployed is having great success, and we're taking on more customers than we expected; hence the IOPS requirement for that service is quite a bit higher than I planned for. Note my use of the word service here: I have a database, middleware, and web servers all in the mix. So the answer is to buy SSDs. That's great, I get lots of IOPS out of them, but which LUNs do I put on them? What if I need a lift for the database and the middleware servers? What if it is a subset of data on the LUN that would benefit, while the rest of the data is relatively untouched? Can I afford to waste my expensive SSDs?
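
For the curious, here is where those two numbers come from, sketched in Python. The ~140 IOPS-per-drive figure is my assumption (a common rule of thumb for 10K SAS), chosen to show how the quoted ~11,200 falls out; it is not from the post.

```python
# Reproducing the pool math from the paragraph above.

RAID_GROUPS = 10
DATA_DRIVES_PER_GROUP = 7          # the "7D" in 7D+1P
DRIVES_PER_GROUP = 8               # 7 data + 1 parity
DRIVE_SIZE_BYTES = 600e9           # 600GB drives, decimal as marketed
IOPS_PER_10K_DRIVE = 140           # rule-of-thumb assumption, not an HDS figure

usable_tib = RAID_GROUPS * DATA_DRIVES_PER_GROUP * DRIVE_SIZE_BYTES / 2**40
raw_iops = RAID_GROUPS * DRIVES_PER_GROUP * IOPS_PER_10K_DRIVE

print(f"usable capacity ~ {usable_tib:.1f} TB")   # ~38.2, i.e. the "roughly 38.5TB"
print(f"raw IOPS        ~ {raw_iops}")            # 11200
```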

HDT makes it easy. I can add a tier of SSD into my pool, which automatically becomes the fast tier, pushing the SAS drives to the lower tier. HDT monitors pages of data (at the 42MB level) and will reallocate the pages to the proper tier based on access patterns. I can add 1TB of SSD, and even though I'm using 500GB LUNs, only the hot data will get moved into the SSD tier.
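
To illustrate the idea only (HDS has not published the HDT algorithm, so this is a hypothetical sketch, not their implementation), page-level tiering boils down to ranking 42MB pages by recent access and filling the fastest tier first:

```python
# Conceptual sketch only -- NOT Hitachi's actual HDT algorithm.
# Rank 42MB pages by access count and fill the fastest tier first,
# so only hot pages land on SSD.

PAGE_MB = 42

def assign_tiers(page_access_counts, tier_capacities):
    """page_access_counts: {page_id: accesses in the monitoring window}
    tier_capacities: pages each tier can hold, ordered fastest to slowest.
    Returns {page_id: tier_index} with 0 as the fastest tier."""
    hottest_first = sorted(page_access_counts,
                           key=page_access_counts.get, reverse=True)
    placement, cursor = {}, 0
    for tier, capacity in enumerate(tier_capacities):
        for page in hottest_first[cursor:cursor + capacity]:
            placement[page] = tier
        cursor += capacity
    return placement

# 1TB of SSD holds roughly 1e12 / 42e6 pages of 42MB each, so even with
# 500GB LUNs only the hottest pages get promoted.
ssd_pages = int(1e12 / (PAGE_MB * 1e6))
print(ssd_pages)   # 23809
```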

Fast forward a year, and my web 2.0 product has generated more data than I anticipated. I have to keep that data around in case users access it, but for the most part it is stale. With HDT, I can add some high-density SATA storage to my pool, which automatically becomes the lower tier, leaving the SAS drives as the middle tier and SSD as the fast tier; HDT will then start moving pages to SATA, freeing up my SAS spindles for more active data.

If we consider that 80% of our IOPS come from 20% of our data, and data growth is out of control, sizing our pools and tiers can be problematic. Fortunately HDT takes the hard work out of our hands and places it into the VSP. We can add, resize, or remove tiers from our pools as our workload and storage demands require, all seamlessly and non-disruptively to the data, applications, and end users.
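
As a quick thought experiment, here is what that 80/20 skew implies for a first cut at an SSD tier, reusing the pool from the earlier example. The numbers are illustrative only; real sizing depends on the actual page heat distribution.

```python
# Back-of-the-envelope sizing under an assumed 80/20 IOPS-to-data skew.

pool_tb = 38.5           # the SAS pool from the earlier example
hot_fraction = 0.20      # 20% of the data serving ~80% of the IOPS

ssd_tier_tb = pool_tb * hot_fraction
print(f"SSD tier to hold the hot 20%: ~{ssd_tier_tb:.1f} TB")   # ~7.7 TB
# In practice you would start smaller, let HDT promote only the pages that
# actually prove to be hot, and resize the tier later non-disruptively.
```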

Additional tidbits

The centralized microprocessor nodes give advantages to your FEDs as well as the BEDs I've already discussed. In the past, when I flipped a port for use in replication or for virtualizing storage, I flipped two ports at the same time, as they were serviced by the same MP. With the new architecture, the MPs run all of the microcode, meaning any one of the centralized MPs can service any IO need, whether FED, BED, replication, or UVM (Universal Volume Manager). I can actually flip one port and use just that one for external storage, although at a minimum we should do two: one per cluster.

Another improvement with the centralized MP architecture: when an IO comes in from the host, it is accepted by the MP that is currently servicing that LDEV, so the IO stays with the same MP from front to back, reducing latency. Previous generations handed the IO from the FED MP to the BED MP and back. That was done very efficiently for many generations of the product; however, Hitachi found ways to improve on it, and did.

As with the flexibility in how or why you expand your BED capacity, you can do the same with the microprocessor nodes. If I can service all 1024 HDDs effectively with the processing power I have, I'm good. If I need more processing power, I can add it. The key is that we're no longer bound to design requirements dictated by the hardware architecture; we have choices based on performance needs.



If data-at-rest encryption is important to you, it is just a license away. The BEDs already have the encryption hardware embedded, so you don't have to plan maintenance windows for the painful task of swapping out BED pairs. You can manage encryption at the parity group level, but since it runs at line speed with no performance impact, I don't know if I would bother.

Final thoughts, for now...

There are few products that excite me. I've been optimistic about some innovative products in the past, only to watch them fail to execute and stall out. The VSP as a product excites me: it's new, fresh, and takes the hardware to a new level. Everyone does this, and someday (insert your favorite manufacturer here) might produce a product that takes the lead. What excites me more is what I can see myself doing with the VSP, Hitachi's vision for the VSP wrapped into a holistic solution, and what it will mean for the front-line data center guy who holds the keys to the entire business he supports each and every day. You will do more with less, and your life will get better for it. Of course, if your boss hasn't read this blog, you can take credit for all of the hard work and push for that promotion...


1 comment:

  1. You nailed it!! I can't wait to get a couple. The whole new vision at HDS of simplifying management, driving up efficiency, and driving down costs aligns directly with my storage strategy. You'd think they were sitting next to me during my strategy creation meetings over 12 months ago.
