Tuesday, September 4, 2012

Array sizing in practice

As I mentioned in the previous post, array sizing has become more complex with the advent of tiering and deduplication. It's not just a function of new technology; it's also that storage is handled differently from an architectural standpoint, so even with good performance data there are still inferences that have to be made.

Wednesday, August 29, 2012

Reflections on VMworld 2012

“The Clone Wars” or “The Battle of Five [Platform] Armies”

Do you remember the Mac vs PC platform wars? Novell vs Microsoft? Token Ring vs TCP/IP? iPhone vs Android? Java vs .NET (vs C#)? Betamax vs VHS?

Learning to love uncertainty

Historically, storage administration was a relatively straightforward endeavor. You could empirically measure how much capacity you needed, project anticipated growth, and plan accordingly. Likewise, you could measure an application's existing IO requirements, extrapolate based on planned growth, and feel comfortable that you probably got the sizing right. Sure, sometimes it felt more like art than science, but at the end of the day you could point to an Excel spreadsheet and the fudge factor you used to do your math.
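That "old school" sizing math can be sketched in a few lines. All of the figures below are hypothetical examples for illustration, not numbers from the post:

```python
# A minimal sketch of traditional capacity planning: compound today's
# footprint forward by an assumed growth rate, then pad with a fudge factor.
# Every input here is an illustrative assumption.

def projected_capacity_gb(current_gb, annual_growth_rate, years, fudge_factor=1.2):
    """Compound current usage over the planning horizon, then add a safety margin."""
    return current_gb * (1 + annual_growth_rate) ** years * fudge_factor

# e.g. 500 GB today, 30% annual growth, 3-year horizon, 20% safety margin
print(round(projected_capacity_gb(500, 0.30, 3)))  # -> 1318
```

The same compounding approach works for IOPS; the point is that every input was something you could measure or defend, plus one honest fudge factor.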

Saturday, August 18, 2012

Decoding WWIDs (or how to tell what's what)

The difficulty of determining exactly which storage is presented is a common refrain in organizations where the storage and server teams are separate. At best this is an inconvenience (which LUN are you talking about?); at worst it can be catastrophic (you put a file system on what!?).
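One quick sanity check when the two teams are comparing notes: an NAA type-6 WWID embeds the array vendor's IEEE OUI right after the leading "6", so you can at least confirm whose array a LUN came from. The WWID below is a made-up example, and the OUI-to-vendor mapping shown is an assumption (00:60:E8 is registered to Hitachi):

```python
# Hedged sketch: split an NAA Registered Extended (type 6) identifier into
# its NAA nibble, 24-bit IEEE OUI, and vendor-specific remainder.
# The sample WWID is illustrative, not a real device.

def decode_naa6(wwid):
    """Return (naa_type, oui, vendor_specific) for an naa.6... identifier."""
    hexid = wwid.removeprefix("naa.")
    return hexid[0], hexid[1:7], hexid[7:]

naa_type, oui, rest = decode_naa6("naa.60060e8005123400000012340000001a")
print(naa_type, oui)  # '6' and '0060e8' -- an OUI you can look up in the IEEE registry
```

Matching the OUI against the IEEE registry tells you the array vendor; the vendor-specific portion is where the serial number and LUN identity live, and that layout varies by vendor.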

Saturday, August 11, 2012

Replicating a multipathed Linux boot device

One of our customers currently replicates an Oracle database running on Red Hat Enterprise Linux 5.6 using HDS' TrueCopy product. While the system has been in place for a while, the replication is relatively new.


Tuesday, June 12, 2012

Marcel the Constitutional Sales Engineer

Recently a storage vendor boasted that it could provide "1.08PB of raw capacity." Is it really 1PB?! I know most (if not all) of us understand what's going on; this post is mainly to reinforce the "reality delta" between what a disk vendor sells and what the computer actually uses. I wrote it mainly because this vendor's claim struck me as so brazen.
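The arithmetic behind that delta is simple: vendors quote decimal units (1 PB = 10^15 bytes) while the operating system reports binary units (1 PiB = 2^50 bytes). A quick sketch of what "1.08PB raw" looks like once the computer counts it:

```python
# The vendor's decimal petabytes vs the OS's binary pebibytes.

raw_bytes = 1.08 * 10**15      # the claimed "1.08PB of raw capacity"
pib = raw_bytes / 2**50        # what the computer actually reports

print(f"{pib:.3f} PiB")        # prints "0.959 PiB"
```

So the "1.08PB" claim is already under 1 PiB before a single byte is given up to RAID, spares, or file-system overhead.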

Thursday, January 5, 2012

HDS Hitachi Content Platform HCP

Back in 2007, HDS purchased a small software company named Archivas. Formed by some ex-EMC folks, Archivas set out to create a product that was better than EMC's Centera archive platform. After the acquisition the product became known as HDS HCAP. It was designed specifically as an archive platform for fixed content, answering the need to comply with recent regulations requiring certain industries to retain immutable copies of their data. The HCAP platform provided client access via standard network file-sharing protocols such as CIFS and NFS.