HP product teaches dumb storage new tricks

Since Hewlett-Packard acquired LeftHand Networks in 2008, it has continued to develop LeftHand’s complete line of software-based iSCSI storage under the HP LeftHand P4000-series moniker. Based on the feature-rich SAN/iQ 9.5 software platform, the currently available P4000 G2 series includes a range of different physical form factors based on HP’s ProLiant server line, as well as the virtualized P4000 VSA (Virtual SAN Appliance), which runs on VMware vSphere or Microsoft Hyper-V.

At its heart, the P4000 VSA is simply a virtualized (and thus hardware agnostic) version of the same SAN/iQ software that powers its physical brethren. Though the virtualized version comes with notable scalability limitations, it offers a great deal of flexibility in configuring storage either in concert with physical P4000-series SANs or on its own as a purely virtual SAN.

Use cases for the P4000 VSA are wide and varied, including everything from utilizing direct-attached storage to implement redundant shared storage in small-business environments to allowing single-host remote offices to asynchronously replicate back to a headquarters site for disaster recovery purposes. Plus, given that the P4000 VSA can utilize any storage hardware supported by its host hypervisor, the VSA can be used to breathe new life into outdated or retired storage platforms, be they DAS-, NAS-, or SAN-based.

P4000 VSA feeds and speeds
As noted, the HP P4000 VSA has nearly all of the same features as its physical counterparts, including local synchronous (clustered) replication, multisite synchronous replication (stretched clustering), remote asynchronous replication, demand-allocated snapshots with application-awareness, and thin provisioning. All storage access is enabled through the use of standards-based iSCSI, and a wide range of operating systems is supported.

This list includes virtualization platforms such as VMware vSphere and Microsoft Hyper-V. The P4000 VSA can both run on and support connections from these hypervisors at the same time, allowing the VSA to abstract direct or SAN-attached storage and offer it back to the hosts as feature-rich shared storage.

The P4000 VSA does have some limitations not found in the rest of the P4000 series. Those include a maximum storage assignment of 10TB per VSA (attached in up to five 2TB virtual disks) and no support for vSMP (multiple virtual CPUs). Considering the increasing use of flash-based acceleration and SAN/iQ’s nearly complete support for VMware’s VAAI extensions (which allow virtual machine cloning and other VM-related storage operations to be offloaded onto the storage array), the absence of vSMP support is a substantial limitation.

Given that each VSA draws on only one virtual CPU core, it is not very difficult to drive the VSA to high CPU utilization under extremely high-bandwidth storage workloads. Thus, the VSA may not be a great choice for infrastructures that regularly experience those kinds of loads. HP has indicated this restriction is likely to be lifted in future releases.

Otherwise, overall performance of the VSA depends entirely upon the server, network, and storage hardware used in concert with the hypervisor on which it runs, and it can be made to scale to almost any heights that the underlying hardware can soar. That said, when designing clustered storage systems that will take advantage of SAN/iQ’s Network RAID for redundancy, keep in mind the aggregate performance available to iSCSI clients will be slightly less than half what the underlying hardware is capable of (due to the overhead of mirroring writes across the network).
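
To put rough numbers to that overhead, here is a quick back-of-the-envelope sketch in Python. The 10 percent mirroring overhead and the 600MB/s per-node figure are illustrative assumptions, not HP-published numbers:

```python
# Rough estimate of aggregate write throughput available to iSCSI clients
# from a two-node Network RAID10 (mirrored) cluster. Every client write must
# be committed to both nodes over the storage network, so usable write
# bandwidth ends up slightly below half of the raw hardware total.

def estimated_client_write_mbps(per_node_write_mbps, nodes=2,
                                mirror_overhead=0.10):
    """Illustrative only: assumes two-way mirroring plus ~10% network and
    protocol overhead; real results depend on hardware and workload."""
    aggregate_raw = per_node_write_mbps * nodes
    mirrored = aggregate_raw / 2              # each write lands on two nodes
    return mirrored * (1 - mirror_overhead)   # assumed replication overhead

# If each host's local RAID set can sustain roughly 600MB/s of writes:
print(f"~{estimated_client_write_mbps(600):.0f} MB/s usable")  # ~540 of 1,200 MB/s raw
```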

From a capacity perspective, the P4000 VSA is more limited than its physical counterparts due to the license-based 10TB-per-VSA cap. And the P4000-series overhead is already notable: Because best practice generally dictates using both RAID10 on the underlying storage hardware (DAS or SAN) and Network RAID10 across nodes in the storage cluster, the ratio of accessible storage to raw storage is about one to four — extremely low by any measure.
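
The arithmetic behind that one-to-four figure is simple enough to sketch in a few lines of Python:

```python
# Usable capacity after the two layers of mirroring described above:
# RAID10 on each node's local disks (50%) plus Network RAID10 across the
# nodes in the cluster (another 50%), leaving roughly a quarter of raw disk.

def usable_tb(raw_tb, local_raid10=True, network_raid10=True):
    usable = raw_tb
    if local_raid10:
        usable /= 2   # local RAID10 mirrors within each node
    if network_raid10:
        usable /= 2   # Network RAID10 mirrors across nodes
    return usable

print(usable_tb(8.0))   # 8TB of raw disk across the cluster -> 2.0TB usable
```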

Arguably, the performance and capacity hits that result from network-based mirroring should be expected in any synchronously replicated storage platform. While I think that’s largely true, the P4000’s approach to synchronous mirroring has one major drawback you don’t find in other solutions. Typically, storage vendors offer synchronous mirroring as a means to provide an extremely low RPO (recovery point objective) when protecting a storage infrastructure that’s already shielded by multiple layers of redundancy: local RAID, multiple controllers with cache mirroring, diverse storage networks, and so on. In other words, synchronous mirroring is typically reserved for extremely mission-critical systems that can benefit from it and for which the added capacity and performance overhead is worthwhile.

But in the P4000 series, synchronous mirroring is almost always used because each storage node represents a nonredundant controller and storage combination. To protect against the relatively common eventuality of a catastrophic “controller” (server) failure, both the controller and the storage attached to it must be duplicated. It’s a trade-off born from the assumption that using redundant industry-standard server hardware and DAS is ultimately less expensive and more flexible than a purpose-built SAN that includes fully redundant controller resources. While this may be the case in a large number of instances, it’s important that potential customers account for the resulting overhead in their planning.

The P4000 VSA in the lab
In testing the P4000 VSA, my goal was to replicate the process of implementing shared storage in a preexisting virtualization environment. To that end, I installed VMware vSphere 5 on two HP ProLiant DL385 G7 servers, each equipped with dual AMD “Interlagos” 6220 processors and 32GB of RAM. Each server also included a brick of 15K SAS disks attached to the onboard P410i RAID controller and accelerated by a flash-backed write cache.

Once the initial tasks of configuring a virtual machine for VMware’s vCenter management console and a few Windows Server test VMs (to emulate existing workloads) were complete, it was time to get the VSA running. There are two ways to do this: You can manually import and configure the OVF (Open Virtualization Format) virtual appliances onto the virtualization hosts and install the Centralized Management Console yourself, or you can use HP’s automated, wizard-driven installation tools that do all of that for you.


Getting started
I opted to take the road less traveled and go about the task manually, which gave me a clearer idea of what’s actually happening under the hood. Note that much of the following can be accomplished in far less time using the wizards.

The first thing to do was prepare each of the hosts to connect to a SAN via iSCSI (most stand-alone hosts would not already be configured for this). In my case, that meant attaching a pair of unused gigabit NICs (the DL385 G7 ships with four) to a new VMware vSwitch, configuring a pair of VMkernel interfaces for the host to use to connect to the SAN, and configuring a VM port group to allow the VSAs to coexist on the same network. I then connected those NICs to a pair of redundant switches and configured the switch ports for a new VLAN that would be dedicated to storage traffic.

The next thing to do was import a copy of the VSA virtual machine onto each host’s local disk through the vSphere Client — a relatively painless process that required only a couple of minutes each. After both VSAs had finished importing, I attached their NICs to the new iSCSI VM port group and attached a 200GB VMDK-based disk from each of the hosts’ local storage. (If you do this, note that you must use SCSI ID 1:0 through 1:4 for the system to recognize the disks you add and you must use them in order.) From there, I powered up the VSA VMs and used the vSphere Client to access the console and configure basic IP address info. Once I was able to reach the VSAs’ IP addresses over the network, it was time to install the management console.

All P4000-series SANs — virtual or physical — are managed through the same common client: the Centralized Management Console. This Windows-based client can be installed and run anywhere so long as it has access to the VSAs, though it’s generally best not to run it on a virtual machine that will be dependent on the VSAs themselves.

The CMC prompted me to discover existing VSA systems, a task that can be completed by manually entering the VSAs’ IP addresses or by scanning a range of IP addresses (an ability that makes adding a large number of P4000 appliances easy). Once the CMC had discovered the VSAs, it prompted me to create a Management Group to contain the new appliances. The Management Group is a collection of P4000 appliances that will be managed within the same administrative domain. Each Management Group has its own administrators, iSCSI server definitions, and alerting properties. Most organizations will need only a single group.

Creating a storage cluster

My next task was to create a storage cluster with my VSAs. It should be noted that this isn’t strictly necessary: You can allow each VSA to offer up its own storage without being in a cluster, but I wanted to take advantage of the redundancy benefits that come from mirroring storage across multiple appliances. Note that you can also create a single-VSA cluster on backup hardware to act as a nonredundant remote replication target if you wish.

Creating a cluster is typically as simple as picking the VSAs you want to participate, specifying a virtual IP address for the cluster, and hitting go. However, one wrinkle was introduced by the fact that my test configuration involved only two vSphere hosts, each with its own VSA. One of the challenges that the VSA must deal with as it effectively implements RAID over the network is a storage isolation scenario where one or both of the VSAs or hosts becomes disconnected from the network.

In these cases, it’s critical that the VSAs not both assume the other has failed and continue to operate. That situation leads to the dreaded “split brain” scenario, wherein the two mirrored copies diverge as the active virtual machines on each host continue to make changes to their volumes independently of each other.

To avoid this, P4000 clusters must always maintain a quorum of more than half of the member nodes. If that quorum isn’t achieved, the appliances take their volumes offline. In a two-node cluster, however, a quorum cannot be maintained once either node drops out. For this situation, a third, storageless VSA called a Fail-Over Manager (or FOM) is introduced to the Management Group.
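
The quorum rule itself is simple enough to express in a short Python sketch; the member names and vote counts below are purely illustrative:

```python
# The quorum rule as described above: a partition of the Management Group
# keeps its volumes online only if it can reach more than half of the
# members. With two storage VSAs plus a FOM (three voters), whichever host
# can still reach the FOM holds two of three votes and stays online.

def has_quorum(reachable_members, total_members):
    return reachable_members > total_members / 2

members = ["vsa-1", "vsa-2", "fom"]     # hypothetical member names

# Host 1 becomes isolated and can reach only its own VSA:
print(has_quorum(1, len(members)))      # False -> its volumes go offline

# Host 2 can still reach its own VSA and the FOM:
print(has_quorum(2, len(members)))      # True  -> its volumes stay online
```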

The FOM’s job is to ensure that a quorum can always be achieved by at least one of the cluster nodes should an isolation scenario occur. If I had three hosts to work with, I wouldn’t have needed to introduce the FOM to the mix. Fortunately, the FOM is extremely easy to install — very similar to the process used to install the VSAs except that no local storage is added to the appliance. Although I installed the FOM on one of my two VSA hosts, in practice it should be located on a third, completely separate box.


After I directed the CMC to discover the FOM appliance and add it to the Management Group, I could create my storage cluster and start to allocate storage. When creating a volume, I was able to choose between two different types of volume redundancy: Network RAID0 and Network RAID10.

As the names imply, Network RAID0 will stripe the stored data across each of the VSAs without any redundancy beyond that offered by the RAID controller on each host, while Network RAID10 will synchronously mirror data across the nodes. If I had a larger cluster, my choices would have expanded to include Network RAID5 (minimum of four nodes) and Network RAID6 (minimum of eight nodes) as well. These RAID levels utilize background snapshots to effectively reduce data redundancy and increase capacity efficiency. It’s worth noting that all of the RAID levels utilize RAID10 for writes; the transition to RAID5 or RAID6 only happens after the system takes a snapshot.
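
Those minimum node counts boil down to a simple lookup; the sketch below merely encodes the rules described above (the helper name is my own, and the single-node minimum for Network RAID0 reflects the stand-alone case mentioned earlier):

```python
# Minimum cluster sizes for each Network RAID level, per the description above.
MIN_NODES = {
    "Network RAID0": 1,    # striping only; no cross-node redundancy
    "Network RAID10": 2,   # synchronous mirroring across nodes
    "Network RAID5": 4,    # parity-style, snapshot-assisted
    "Network RAID6": 8,
}

def available_levels(cluster_size):
    """Return the Network RAID levels a cluster of this size can offer."""
    return [level for level, minimum in MIN_NODES.items()
            if cluster_size >= minimum]

print(available_levels(2))   # ['Network RAID0', 'Network RAID10']
print(available_levels(4))   # adds 'Network RAID5'
```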

Since I was looking for redundancy, I chose to use Network RAID10. After specifying the size of the volume and the hosts I wanted to access it, the CMC commanded the VSAs to create the volume. Moments later, I was ready to attach to the volume from the vSphere hosts, format a VMFS file system, and start using the new storage.

Now that the new VSA-based iSCSI volume was accessible by both hosts, I could start moving VMs onto the storage, effectively moving the VMs off one host’s local storage and into a mirrored storage container that crossed both hosts. Because I was running vSphere Enterprise Plus on my test servers, I could accomplish this with no downtime by using VMware’s Storage vMotion. Shops without licensing for that feature will need to power off their VMs prior to moving them.

Expanding storage


With my VMs running on the VSA, I was ready to make some changes to the environment that would commonly be undertaken by real-life users. In an era where storage needs are growing in leaps and bounds, one of the most common storage management tasks involves adding more storage, either to individual presented volumes or to the storage cluster as a whole.

Growing an individual volume is extremely easy: Simply edit the volume in the CMC interface and punch in a larger number. Growing my initial test volume from 200GB to 250GB took only a few seconds. Afterward, all that remained was to expand the VMFS volume from within the vSphere Client — again, a matter of only a few seconds.

Adding storage to the entire VSA cluster is slightly more complex; an equal amount of storage must be added to each VSA (the usable space is limited to that of the smallest cluster member), and each VSA must be shut down in order to add the disks. These two factors combine to make the process fairly time consuming.

Each VSA shutdown and restart cycle — while not disruptive if volumes are configured using Network RAID10 (mirroring) — requires a storage resync before the next VSA can be taken down for maintenance. This resync is generally fairly quick, with only the changes made since the VSA was taken down copied over, but it is by no means instant and can vary heavily depending upon how much write activity is taking place on the volumes that the cluster serves.
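
To get a feel for how long that resync window might be, here is a rough, illustrative estimate; every input below is an assumption you would replace with your own measurements:

```python
# Rough resync-time estimate for a VSA that was shut down for maintenance:
# only the data written while it was offline needs to be copied back, so the
# resync window scales with the write rate and the length of the outage.

def resync_minutes(write_mb_per_s, downtime_minutes, resync_mb_per_s):
    """All inputs are illustrative assumptions, not HP-published figures."""
    changed_mb = write_mb_per_s * downtime_minutes * 60
    return changed_mb / resync_mb_per_s / 60

# 20MB/s of sustained writes, a 30-minute maintenance window, and a gigabit
# storage network resyncing at roughly 100MB/s:
print(f"~{resync_minutes(20, 30, 100):.0f} minutes of resync")   # ~6 minutes
```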

As previously mentioned, each VSA can support no more than five 2TB disks. If the initial disks added to the appliance are less than 2TB in size (in my case, the first disk I created was 500GB), there is no way to increase the size of those disks later. To get the full five-by-2TB capacity, you’ll need to remove the VSA from the cluster, delete any sub-2TB disks from it, and install the larger disks before adding the VSA back into the cluster. In these cases, the restripe time is substantially longer than the resync time involved in adding a new disk — generally measured in hours or days depending upon the amount of data involved. In short, you’ll typically want to add storage in increments of 2TB if it is likely that the full capacity of the VSA will be used in the long run.
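
A quick sanity check of a planned virtual disk layout against those limits can save a painful restripe later. This is a minimal sketch based only on the limits noted above; the helper and its messages are my own:

```python
# Sanity-check a proposed per-VSA virtual disk layout against the license
# limits noted above: at most five virtual disks, none larger than 2TB, and
# any disk created below 2TB can never be grown later.

MAX_DISKS = 5
MAX_DISK_TB = 2.0

def check_layout(disk_sizes_tb):
    if len(disk_sizes_tb) > MAX_DISKS:
        return "too many virtual disks for one VSA"
    if any(size > MAX_DISK_TB for size in disk_sizes_tb):
        return "individual virtual disks cannot exceed 2TB"
    slots_left = MAX_DISKS - len(disk_sizes_tb)
    undersized = [size for size in disk_sizes_tb if size < MAX_DISK_TB]
    if undersized and slots_left == 0:
        return "all slots used, but undersized disks can never be grown"
    return f"OK: {slots_left} slot(s) free; add 2TB disks to reach 10TB"

print(check_layout([0.5]))        # my initial 500GB disk
print(check_layout([2.0] * 5))    # the full five-by-2TB, 10TB configuration
```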

Monitoring and managing


In addition to letting you view current and historical alerts, the CMC allows you to configure email and SNMP-based alerting to make sure you’re aware of any trouble brewing within the environment. The CMC also contains a fairly detailed performance graphing utility, but it is substantially limited by the fact that the CMC must be explicitly commanded to start and stop recording performance statistics.

Those interested in retaining these stats (highly recommended) will undoubtedly want to implement a third-party trending/graphing platform so that performance can be monitored over long periods of time and trends can be identified. Fortunately, this is easily done. Most of the platform’s performance stats are exposed via SNMP, and a large number of third-party monitoring platforms have prebuilt templates made for the P4000 series.
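
As a starting point for that kind of polling, here is a minimal sketch using the pysnmp library's synchronous high-level API (assuming pysnmp 4.x). The address and community string are placeholders, and it queries only the standard sysDescr OID; real P4000 trending would walk the OIDs from HP's published MIBs, which aren't reproduced here:

```python
# Minimal SNMP poll of a VSA using pysnmp's synchronous high-level API
# (pysnmp 4.x). The address and community string are placeholders; real
# trending would walk the P4000-specific OIDs from HP's published MIBs.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),        # placeholder community
           UdpTransportTarget(('192.0.2.10', 161)),   # placeholder VSA address
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
)

if error_indication or error_status:
    print(f"SNMP query failed: {error_indication or error_status}")
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```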

Putting it all together


Overall, I found the HP P4000 VSA to be an extremely flexible storage system that’s also easy to work with. End to end, it took me only an hour or so to get it set up and running — and I didn’t use the Zero-to-SAN wizard included with the installation package. Simply put, for existing P4000-series users who already leverage virtualization technology, the VSA is a no-brainer for solving a variety of problems. In fact, a number of P4000 system bundles include VSA licensing, so many customers may already own VSAs without being aware of it. For prospective P4000-series users, the cross-compatibility between the VSA and the rest of the P4000 series will undoubtedly be a huge selling point.

For small-business customers seeking to implement shared storage to support features such as live virtual machine migration and automated host failover, the HP P4000 VSA is definitely worth a look. The only strikes against it are the relatively high cost of the stand-alone VSA licenses (about $3,700 each on the street) and the cost of equipping host servers with enough raw storage to support Network RAID10. In many cases you’ll pay no more for an internally redundant physical SAN. But if you already own the storage and you simply want to take advantage of the clustering, snapshotting, or remote replication functionality, the VSA licenses may be well worth the expense.
