Here are some of the highlights:
Multi Tier File System (MTFS)
This feature allows you to separate out the metadata and place it on high-performance SAS or SSD devices. Locating the metadata on high-performance disk can significantly improve performance across a number of operations and workloads.
• Bluearc solutions tend to be used in environments with very high file counts; in some cases millions of files exist within a single directory, and a file system may hold billions of files. Operations such as replication, backups and migrations all rely heavily on metadata, so the Multi Tier File System greatly improves the performance of these types of operations: directory listings are improved by as much as 500%, and replication operations have shown improvements as high as 87%.
• Traditional workloads also benefit from MTFS. Heavy write workloads, as well as processes that scan large parts of a file system or require concurrent access to large files, have shown performance improvements between 30% and 50%.
MTFS will improve performance in most environments, but just how much benefit you see will depend on your data and the types of operations you perform. The nice thing is that this approach is built directly into the solution: you simply add the tier of storage from the existing arrays you have rather than adding a completely new device.
Something to consider is that the amount of file system space consumed by metadata depends on the average file size: the smaller the files, the greater the percentage of space consumed by metadata. Bluearc provides some guidelines that can help with sizing the space required for the metadata.
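To get a feel for why small files push the metadata percentage up, here is a minimal sizing sketch in Python. The 4 KB of metadata per file is an assumption made up for illustration, not a Bluearc figure; use their sizing guidelines for real planning.

```python
# Rough metadata sizing sketch. The 4 KB-per-file metadata footprint below is an
# assumption for illustration only, not a Bluearc guideline.

def metadata_fraction(avg_file_size_bytes, metadata_per_file_bytes=4096):
    """Fraction of consumed space taken up by metadata, assuming a fixed
    metadata footprint per file."""
    return metadata_per_file_bytes / (avg_file_size_bytes + metadata_per_file_bytes)

# The smaller the average file, the bigger the metadata percentage.
for avg_kb in (8, 64, 1024):
    pct = metadata_fraction(avg_kb * 1024) * 100
    print(f"average file size {avg_kb} KB -> metadata ~{pct:.1f}% of consumed space")
```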
Increased Scalability
Capacity has been doubled across the entire product line for all of you that were struggling with the 8PB limit.
Mercury 50 (HDS 3080): 4 PB
Mercury 100 (HDS 3090): 8 PB
Titan 3100 (HNAS 3100): 8 PB
Titan 3200 (HNAS 3200): 16 PB
Data Migration Improvements
One of the nicest Bluearc features is Data Migrator. Data Migrator provides policy-based data migration between tiers of storage. When combined with External Volume Links (XVL) it can also migrate to any NFS device, whether that is another NAS solution, a deduplication appliance or a file server. In previous versions of Data Migrator, files were not automatically migrated back to their original location when recalled; you could move them back, but it required a separate operation through the CLI. This reverse migration is now built in, making the overall process much simpler.
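For readers new to policy-based migration, the sketch below shows the general idea of an age-based selection pass. It is purely illustrative: the mount point, the 90-day threshold and the print-only "action" are assumptions of mine, and Data Migrator itself is configured through Bluearc's management interface, not through a script like this.

```python
# Illustrative only: a simple age-based selection pass of the kind a
# policy-based migration engine performs. The path and threshold are hypothetical.

import os
import time

AGE_THRESHOLD_DAYS = 90      # hypothetical policy: candidates are files idle > 90 days
SOURCE_TIER = "/mnt/tier1"   # hypothetical primary (fast) tier mount point

def migration_candidates(root, age_days):
    """Yield files whose last access time is older than age_days."""
    cutoff = time.time() - age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file disappeared or is inaccessible; skip it

for candidate in migration_candidates(SOURCE_TIER, AGE_THRESHOLD_DAYS):
    print("would migrate:", candidate)
```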
Improved Reporting
The reporting features have been enhanced to provide more of a dashboard-type view for a cluster node. This view is helpful for getting a quick look at how a node is performing. File system capacity trending information is now included. File system reporting is particularly useful in environments with large file systems and millions or billions of files. The performance monitoring statistics are now stored in a database, allowing longer retention periods. They've also added links that let you quickly switch between various sample sets such as the last hour, 1 day, 1 week, 3 months, even 1 year.
We were told that 7.0 is GA but not yet shipping on the units. Version 7.1 should be out within the next few weeks and will be included on new units shortly thereafter. When we get the 7.0 code in our lab we will be sure to post our findings as well as what the upgrade process looks like.
Gotta love this! I need a demo unit to test XVL to NetApp, have you done that yet?
We haven't tried it with a Netapp yet but we have tried it with Isilon and SAM-FS. We are in the process of switching out our backend AMS arrays but once that is done we can certainly try it out. Give me a call and we can work out the demo unit.