Which led us to check /etc/nsswitch.conf, where we saw that the aliases entry was referencing the nisplus database. But this host isn't using nisplus.
Changing the aliases entry to just "files" fixed the problem.
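For reference, the fix is a one-line edit to /etc/nsswitch.conf. The "before" entry shown here is typical of a nisplus-style configuration; yours may read slightly differently:

    # before: sendmail alias lookups try nisplus first
    aliases:    nisplus [NOTFOUND=return] files

    # after: alias lookups come straight from /etc/mail/aliases
    aliases:    files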
If the answer's recompiling sendmail, then you're asking the wrong question
Problems encapsulating root on Solaris with Volume Manager 5.0 MP3
This story does not begin, though, with the ill-advised stuff. Instead it begins with fully supported stuff. As we bring new folks into the fold, the lab is a key resource for teaching them - before they go to class they can kick the tires on zoning, mapping storage, virtualizing storage, etc.
And so the plan was to show one of the new folks how to virtualize an AMS2100 behind a USP-VM. Easy as pie, it should work like a hose, let me show you this quick virtualization demo. Except it didn't work. Instead of popping up with "HITACHI" for the Vendor, and "AMS" for the Product Name we got the dreaded "OTHER" "OTHER." That's not exactly the demo we were hoping for.
We double-checked everything and still no luck. Well, maybe it's the release of code on the AMS, let's upgrade that. Nope. Hmm, how about the code on the USP-VM? Nope. Eventually we went back and noticed that someone (who for the purposes of this blog shall remain nameless) had changed the Product Id on the AMS from the default, which is "DF600F", to "AMS2100". Changing it back fixed the problem.
After basking in the success of accomplishing something that would normally take just a few minutes, we thought to ourselves, "Hmm, what else have we got that we could virtualize?" And because it's a lab, and because there are no repercussions (and because Justin loves him some ZFS) we decided to virtualize a Sun X4500, or "Thumper".
I won't cover the steps for setting up an OpenSolaris box as a storage server, since it's well documented under COMSTAR Administration on the Sun wiki. But basically we followed the documented steps and presented the LUNs up to the USP-VM. And, as you'd expect, got "OTHER" "OTHER" for our trouble.
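For the curious, the COMSTAR side boils down to something like the following; the pool and volume names here are just examples, and the Sun wiki remains the authoritative reference:

    # create a ZFS volume to back the LUN
    zfs create -V 100G tank/usp_lun0

    # make sure the COMSTAR framework is running
    svcadm enable stmf

    # register the zvol as a SCSI logical unit
    sbdadm create-lu /dev/zvol/rdsk/tank/usp_lun0

    # expose the new LU to initiators (use the GUID reported by sbdadm)
    stmfadm add-view <GUID-from-sbdadm-output>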
And that's where the "Bad Idea" tag comes in. You see, it's possible to load profiles for new arrays into Universal Volume Manager. We took the profile for a Sun 6780, modified it to match the COMSTAR information, and loaded it up, to get this:
After that, it virtualized without issue and we presented it up to a Solaris server to do some basic dd's against it. As far as the host knew, it was just another LUN from the USP-VM.
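If you want to repeat that sanity check, the "basic dd's" amount to something like this; the device name is a placeholder, so run format on the host to find the real one:

    # sequential read of the first gigabyte of the virtualized LUN
    # (c3t0d0 is a made-up device name)
    dd if=/dev/rdsk/c3t0d0s2 of=/dev/null bs=1024k count=1024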
Of course this is just one way to do it. After a little more thought maybe you could recompile parts of OpenSolaris (like this one, for instance) and have the OpenSolaris server tell the USP-V that it's something else entirely. We'll leave that as an exercise for the reader, though.
Let me reiterate: This is a Really Bad Idea for a production array because it's not supported (or supportable).
Because It's There
Backing up that large Oracle database with Netbackup’s Snapshot client
Hitachi High Availability Manager – USP V Clustering
If Banking Were Like Storage
Real world goodness with ZFS
While Hitachi Dynamic Provisioning (HDP) is new to HDS storage, it isn't a new concept by any stretch of the imagination. That said, you can rest assured that when the engineering gurus at Hitachi released the technology for consumer use, your data can be put there safely and with confidence. There are many benefits to HDP; you can Google for a couple of hours and read about them, but the few that catch my eye are:
1) Simplify storage provisioning by creating volumes that meet the user's requirements without the usual backend planning for how you'll meet the I/O requirements of the volume – HDP will let you stripe that data across as many parity groups as you add to the pool.
2) Investment savings. Your DBA or application admin wants 4TB of storage presented to his new server because he 'wants to build it right the first time', but that 4TB is a 3-year projection of storage consumption and he'll only use about 800GB this year. Great, provision that 4TB and keep an eye on your pool usage. Buy the storage you need today and grow it as needed tomorrow, delaying the acquisition of storage as long as possible. This affords you the benefit of time, as storage typically gets a little cheaper every few months. Don't be fooled, though: as a storage admin, you need to watch your pool consumption and ensure you add storage to the pool before you hit 100%. (I'd personally want to have budget set aside for storage purchases so that I don't have to fight the purchasing or finance guys when my pool hits the 75-80% mark.)
3) Maximize storage utilization by virtualizing storage into pools vs. manually manipulating parity group utilization and hot spots. This has been a pain point since the beginning of SANs, and as drives get bigger hot spots become more prevalent. I have seen cases where customers have purchased 4x the amount of storage required to meet the I/O requirements of specific applications just to minimize or eliminate hot spots. Spindle count will still play a role in HDP, but the method used to lay the data down to disk should provide a noticeable reduction in the number of spindles required to meet those same I/O requirements.
4) If you are replicating data, the use of HDP will reduce the amount of data and time required for the initial sync. If your HDP volumes are 50GB but only 10% utilized, replication will only move the used blocks, so you're not pushing empty space across your network. This can be a significant savings in time when performing initial syncs or full re-syncs.
The next issue for HDP is reclaiming space from the DP-VOLs when you have an application that writes and deletes large amounts of data. This could be a file server, a log server, or whatever. Today, there is no 'official' way to recover space from your DP-VOLs once that space becomes allocated from the pool. I have tested a process that will allow you to recover space from your DP-VOLs after deleting files from the volume. I came up with this concept by looking at how you shrink virtual disks in VMware. Before VMware Tools was around, you could run a couple of commands to manually shrink a virtual disk. This is roughly what I used for my Solaris and Linux guests (the zero-file path and .vmdk name below are just placeholders):
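    # inside the guest: fill free space with zeros on the filesystem you
    # want to reclaim, then delete the zero file
    dd if=/dev/zero of=/zerofile bs=1024k
    rm -f /zerofile

    # from the VMware host, with the guest powered off: shrink the disk
    vmware-vdiskmanager -k guest-disk.vmdk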
Basically, what happens in most file systems is that when a file is deleted only the pointer to the file is removed; the blocks of data are not overwritten until the space is needed by new files. By creating a large file full of zeros, we overwrite all of the empty blocks with zeros, allowing the vmware-vdiskmanager program to find and remove the zeroed data from the virtual disk.
You can accomplish the same thing in Windows using SDELETE -c x: (where x: is the drive you want to shrink). SDELETE, along with usage instructions, can be found on Microsoft's website (http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx).
Now that we know how this works in VMware, how does this help us with HDP? As of firmware 60-04 for the USP-V / VM, there is an option called "discard zero data" which tells the array to scan DP-VOL(s) for unused space (42MB chunks of storage that contain nothing but zeros). These chunks are then de-allocated from the DP-VOL and returned to the pool as free space. This was implemented to enable users to migrate from THICK provisioned LUNs to THIN provisioned LUNs. We can take things a step further by writing zeros to our DP-VOLs to clean out that deleted data, then running the "discard zero data" command from Storage Navigator, effectively accomplishing the same thing we did above in VMware.

In my case, HDP has allocated 91% of my 50GB LUN, but I deleted 15GB of data, so actual usage should be far less than 91%. Running "discard zero data" at this point accomplishes nothing, as the deleted data has not been cleaned yet. Now I run the sdelete command (as this was a Windows 2003 host), in this case SDELETE -c e:, which wrote a big file of zeros and then deleted it. Back in Storage Navigator I check my volume, and I'm at 100% allocated! What happened? Simple: I wrote zeros to my disk, zeros are data, and HDP has to allocate the storage to the volume. Now we perform the "discard zero data" again and my DP-VOL is at 68% allocated, which is what we were shooting for.
The most important thing to keep in mind if you decide to do this is that writing zeros to a DP-VOL will use pool space!! I wouldn't suggest doing this to every LUN in your pool at the same time, unless you have enough physical storage in the back end to support all of the DP-VOLs you have. While the process will help you reclaim space after deleting files, it is manual in nature. Perhaps we'll see Hitachi provide an automated mechanism for this in the future, but it's nice to know we can do it today with a little bit of keyboard action.
Hitachi Dynamic Provisioning - reclaim free/deleted file space in your pool today.
Data Domain - Appliance or Gateway?
Lumenate holds first Customer Storage Forum
The Lumenate Storage Practice Blog – off we go!