I’m excited to present the first episode of the Tech ONTAP Podcast, hosted by Glenn Sizemore, Andrew Sullivan, and myself, Pete Flecha. If those names sound familiar, you may know us from our former podcast, the NetApp Communities Podcast. Although the name has changed, our mission is still the same: each week we will discuss industry news, all things NetApp, and have a lot of fun. We also have frequent interviews with subject-matter experts from across the industry and all the NetApp business units, who share storage insights, deep dives, and maybe a great story or two from fighting on the front lines of IT.

Subscribe on iTunes

For our inaugural episode, we are joined by Jay Goldfinch, Technical Marketing Engineer for Data ONTAP. Jay will be walking us through the payload of clustered Data ONTAP 8.3.1. For more details on clustered Data ONTAP 8.3.1, visit the NetApp Support site to check out the release notes and documentation!

Jay Goldfinch

Looking to meet the Tech ONTAP Podcast team in person? We will be at Insight US, Insight EMEA, and VMworld US.

Have You Registered for VMworld?

Register for VMworld today and visit the team on the show floor at booth number 1005. While at VMworld, don’t forget to attend these great NetApp sessions:

    • Mission Possible: Delivering IT continuity in the hybrid cloud era (STO6466-SPO)
    • NetApp Integrated VMware EVO:RAIL Solution Deep Dive (SDDC6595-SPO)
    • The Value of Speed: NetApp All Flash FAS (STO6567-SPO)
    • Virtual Volumes on NetApp – Ready For Prime Time! (STO5721)
    • Virtual Volumes Technical Panel (STO5522)
    • How the Denver Broncos Win With Advanced Technology (OPT6693-SPO)

NetApp Insight is Less Than 90 Days Away!

Have you registered for NetApp Insight yet? With over 300 technical breakout sessions to choose from, as well as Hands-On labs and onsite NetApp Certification Exams, Insight is one event you won’t want to miss. And if that isn’t enough, you can come and hang out with Glenn, Sully and me.

Each week, the Tech ONTAP Podcast discusses all things NetApp, interviews subject-matter experts, and provides insights into the storage industry. Follow the hosts on Twitter: Pete Flecha (@vPedroArrow), Glenn Sizemore (@glnsize), and Andrew Sullivan (@andrew_NTAP). Subscribe to the podcast on SoundCloud or iTunes, or sign up to receive the Tech ONTAP newsletter.

Mark Kulacz @markkulacz brilliantly dissects the Gartner 2015 Magic Quadrant for Solid State Storage Arrays.



NetApp was not placed high in Gartner’s 2015 Magic Quadrant for Solid-state Storage Arrays. Yours truly did some research to see whether this was warranted. The result is this article.

This article reviews the 2015 Gartner Magic Quadrant for SSAs (Solid-state Storage Arrays) and notable changes to the ratings of key vendors in this market. It takes a critical look at whether NetApp experienced improved results since 2014, and why (or why not). It concludes with a description of additional concerns that 46and2bits.com has with the Magic Quadrant for SSAs (concerns that are not specific to NetApp).

  • Note to the reader: The author of this article is a Competitive Analyst at NetApp. Statements and opinions made are the author’s own and do not reflect those of NetApp, Inc.


What is the Gartner Magic Quadrant for Solid-state Storage Arrays?

According to Wikipedia.org, Gartner (founded in 1979) “is an American…


Most of the questions I receive regarding the NetApp SRA are in some way related to the discovery of devices.  In SRA 2.1 for clustered Data ONTAP, there are many different configurations a user could have for their protected/recovery site export policies. This post aims to explain expected behavior for some of the more advanced configurations.

Alternative Export Policy Names
By default, clustered Data ONTAP includes an export policy named ‘default’. In some cases, users may want to use an alternative policy or a policy with a different name. While this is supported in SRA for the purposes of detecting exports on the production side, the user might encounter behavior different from what they expect with regard to the usage of recovery site export policies and rules.

Here is an example of a perfectly acceptable alternative export policy name on your protected site, which SRA will detect and use for the purposes of discovery:

On your recovery site, you can choose to use only a single export policy called Conforming_Single_IP_Policy, which is set on both your root namespace and another datastore volume used for production VMs.

Here is an example of what the Conforming_Single_IP_Policy policy looks like on the recovery site.
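Since the original screenshots do not carry over here, a sketch of what creating such a policy could look like from the clustered Data ONTAP command line (the SVM, volume names, and client IP below are illustrative placeholders, not the values from the original screenshots):

```
::> vserver export-policy create -vserver svm_dr -policyname Conforming_Single_IP_Policy
::> vserver export-policy rule create -vserver svm_dr -policyname Conforming_Single_IP_Policy -clientmatch 192.168.0.61 -rorule sys -rwrule sys -protocol nfs -superuser sys
::> volume modify -vserver svm_dr -volume svm_root -policy Conforming_Single_IP_Policy
::> volume modify -vserver svm_dr -volume nfs_datastore_1 -policy Conforming_Single_IP_Policy
```

The single `-clientmatch` rule is what makes this a "single IP" policy; SRA will detect the policy by name during discovery regardless of what it is called.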

After a recovery operation or test failover has completed, a few modifications should be noted on your recovery site. Firstly, the volumes that come online will ONLY be exported with the default policy (there is no way to select an alternative policy name in SRA):

Next, it can be seen that the default policy has been modified to include the VMkernel IP address of the recovery ESX host:

This is an important consideration: if the existing ‘default’ export policy had existing rules, a new rule with this IP would have been ADDED to the end of the list (as we see in the Export Policies with Subnets section). This could leave other volumes using this export policy inappropriately configured by granting unintended access to them. As a rule, you should NOT use the default policy for anything other than access to the root namespace when SRA is managing the recovery operations.
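One way to sanity-check this before handing recovery over to SRA is to confirm which volumes reference the default policy and what rules it currently carries. From the clustershell that might look like this (the SVM name is illustrative):

```
::> volume show -vserver svm_dr -fields policy
::> vserver export-policy rule show -vserver svm_dr -policyname default
```

If anything other than the root volume shows `default` in the policy column, consider moving it to a dedicated policy before configuring SRA.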

Export Policies with Subnets
SRA for Clustered Data ONTAP supports the detection of export policies on the protected site, which involves the use of subnets instead of individual IP addresses. The configuration on the protected site may look like this:

On the recovery site, however, SRA will ignore existing rules involving subnets. Instead, SRA will modify the export to include each individual IP address that SRM classifies as an ‘access group’. Below is an expected export policy modification from the perspective of the recovery site, where a single ESX recovery host exists and its VMkernel IP is added individually. It is important to note that during a recovery operation to your DR site and then another recovery back to your production site, you should expect to see the production site export policies modified to reflect each individual IP, regardless of whether a subnet rule exists:

Root Export Policy Modification on the Recovery Site
SRA for clustered Data ONTAP supports the use of different export policies at both the root namespace and sub-namespace level on both protected and recovery sites. For recovery site purposes, it is important to recognize that BOTH the root namespace policy and the policy assigned to the volume to be mounted will be modified to include the IP addresses of each individual VMkernel port on the ESX hosts that are part of the SRM Access List.

Here, we can see a configuration where a single export policy named ‘Conforming_Subnet_Policy’ for both the root namespace and the sub-namespace is used on the recovery site.

The recovery site’s export policy ‘Conforming_Subnet_Policy’ shows the rule that includes the entire subnet with full access, and the default export policy is empty:

Once a test failover or recovery is triggered, the volume will come online at the recovery site with the default export policy:

Take special note of what happened to the export policies on the recovery site; notice that the default policy has been modified to include the VMkernel IP address of our recovery ESX host, but that the export policy assigned to the root namespace was also modified to include the same. This is because SRA does not honor subnet exports on recovery; however, it will use them for the purpose of recovery on the protected site. It is therefore recommended that the default policy be used for the root namespace, to avoid the unintentional modification of an export policy used by other sub-namespaces:

Again, these are advanced export policy configurations. For most environments the default export policy should work fine. Many thanks to Donald Patterson for putting together this collection of alternate configurations.

For those of you who attended my Insight sessions, as promised, the 5.0P1 patch releases for VSC and VASA Provider for clustered Data ONTAP are now available.  This patch fixes a number of issues found in VSC 5.0. (release notes)

To download, go to the NetApp Support Site Software Download page:

  1. From the Software Download page, scroll to the last row in the Product list: “To access a specific… .”
  2. From the <Select Software> drop-down list, choose Virtual Storage Console (VMware vSphere).
  3. Enter 5.0P1 in the version box and click “Go.”

Virtual Storage Console for VMware vSphere is a single vCenter Server plug-in that enables you to manage the complete end-to-end lifecycle of virtual machines in VMware environments using NetApp storage systems.
VSC integrates smoothly with the VMware vSphere Web Client and enables you to use single sign-on (SSO) services. In addition, the VSC Summary page enables you to quickly check the overall status of your vSphere environment.
By running VSC, you can perform tasks such as the following:

  • Manage storage and configure the ESX host
  • Create storage capability profiles and set alarms
  • Provision datastores and clone virtual machines
  • Perform online alignments and migrate virtual machines individually and in groups into new or existing datastores
  • Back up and restore virtual machines and datastores

Coming off my 3rd consecutive NetApp Insight, I have to say NetApp Insight 2014 set the bar extremely high. Maybe it’s because I was newer to NetApp, but previous years paled in comparison. I’m not sure if it was the timely announcements of NetApp Cloud ONTAP, clustered Data ONTAP 8.3, the acquisition of Riverbed’s SteelStore product, or the inclusion of our customers for the first time, but this year was very different. Never before have I felt so excited to be part of a winning team like NetApp.

NetApp Cloud ONTAP was announced last week at Insight US as a simple solution to control public cloud storage resources with NetApp Data ONTAP. This is a software-only storage appliance that allows you to provision and manage storage on Amazon Web Services. Having the same storage operating system in the cloud as you do on-premises brings you the true value of a hybrid cloud environment without having to train your IT staff on new methods to manage your storage.

The first version of Cloud ONTAP is deployed and managed from OnCommand Cloud Manager as a virtual machine on Amazon EC2 compute instances managing Amazon EBS storage, allowing customers to build a virtual storage solution directly on Amazon resources. It allows you to provision both NAS and SAN storage for your application environment with CIFS, NFS, and iSCSI support. Using Cloud ONTAP, you get the same performance and storage efficiency you know and love with Data ONTAP (e.g., zero-impact Snapshot copies, deduplication, and data compression), as well as SnapMirror® replication technology, which brings your hybrid cloud together by tying your on-premises FAS storage to your Cloud ONTAP environment.

There are also multiple consumption models, ranging from the smaller pay-as-you-go option at 2TB to the larger subscription model of up to 50TB.

Clustered Data ONTAP 8.3 was also announced last week with several new features.

  • MetroCluster: Transparent fail-over protection and zero data loss. Synchronous replication and local HA.
  • SnapMirror and SnapVault enhancements: Shorter backup windows and higher speed granular restores.
  • Automated Nondisruptive Upgrade (NDU): Cuts the manual steps for upgrading clustered Data ONTAP from 35 to 3.
  • SMTape: Simplifies and speeds backups to tape.

There were tons of sessions covering the various features of 8.3 at Insight. If you are coming to Insight Berlin in two weeks, check the bottom of this post for session recommendations. If you are unable to attend and are interested in clustered Data ONTAP 8.3, check out TR-4053.

The NetApp Private Storage solutions team launched its latest solution at Insight: NetApp Private Storage for SoftLayer. This is a hybrid cloud architecture that allows enterprises to build an agile cloud infrastructure that combines the scalability and flexibility of the SoftLayer Direct Link service with the control and performance of NetApp enterprise storage. NetApp storage is deployed at a colocation facility where the SoftLayer Direct Link service is available, and the NetApp storage is connected to SoftLayer computing resources through the SoftLayer Direct Link service.

Having customers attend Insight was awesome. It hit me in the very first session I presented. Just before the session started, I looked up and noticed a sea of light blue badges (indicating customers). At that moment it got real for me. We finally get to host our customers at our show. For years it bothered me to have to attend partner events (i.e. VMworld, TechEd, Cisco Live, Oracle OpenWorld, etc.) to see our customers. Of course I love attending those events, but having customers in NetApp-specific sessions and offering our labs to customers was huge. I had some really great conversations with several customers, and I even teamed up with one in foosball and ran the table at the NetApp party. I can’t wait to do the same at Insight EMEA in two weeks.

Together We Can was the theme for the General Sessions. On day two of Insight I was in the Speaker Resource Center preparing for my session while they were live streaming the General Session. I was trying to focus on the task at hand, but NetApp CEO Tom Georgens was delivering the most compelling discussion on how together we can embrace hybrid cloud and together we can create competitive advantage for our customers. WOW. If you are a NetApp employee, partner, or customer and missed this keynote, stop what you are doing and make a point to watch it. As the day went on I noticed several of my session attendees had these Lego blocks. Later, when entering the expo, I saw an enormous NetApp “N” built from the individual Lego blocks belonging to employees, partners, and customers. A brilliant visual to support the theme of Insight this week. Together We Can!

The NetApp Communities Podcast

Pete, Nick and Glenn

It was great to bring the NetApp Communities Podcast back on the road. Nick, Glenn and I recorded daily wrap-ups while at Insight US and will be doing the same in Berlin.  The boys and I were really stoked to meet so many listeners of the podcast.  Thanks for the positive feedback!  Be sure to listen to the next episode of the NetApp Communities Podcast as we bring in Kevin Hill to learn more about NetApp Cloud ONTAP.

I also got to catch up with old friends and meet some new ones. I finally got to chase down the NetApp A-Team. These guys and gals were on fire this week with Twitter chats and interviews with the Forbes NetApp Voice crew. It was great chatting with Adam Bergh, Jesse Anderson, vMiss, Michael Cade, and crew. You guys rock!


The A-Team

Insight EMEA in Berlin is just around the corner. Our technical team is locked and loaded and ready to deliver an even better presentation of NetApp content to our attendees. If you are attending, be sure to take a look at the Session Catalog. Below you will find a few of my recommendations.

I will be presenting the following sessions:

  • VI-2-2201 NetApp Best Practices for vSphere Part 4: Integrations—Leveraging Plug-Ins, VAAI, and VASA
  • VI-3-2263 Business Continuity and Disaster Recovery—Choosing the Best Option to Protect Your Business
    • This will be co-presented with VMware Sr Technical Marketing Engineer Ken Werneburg @vmken 
  • VI-2-1892-TT – Site Recovery Manager on Clustered Data ONTAP
    • This will be co-presented with VMware Sr Technical Marketing Engineer Ken Werneburg @vmken 

Some other really great sessions to catch are:

  • VI-2-2024 VDI Design, Architecture, and Best Practices for Citrix and VMware
  • VI-2-2109 VMware Storage Management Tools from NetApp: What They Are and Why You Need Them
  • VI-2-2076 Test/Dev Automation Using the NetApp Powershell Toolkit and VMware PowerCLI
  • VI-2-2243 NetApp Best Practices for vSphere Part 1: VMware on Clustered Data ONTAP
  • VI-2-2202 NetApp Best Practices for vSphere Part 2: Sizing and Performance
  • VI-3-2236 NetApp Best Practices for vSphere Part 3b: Avoiding the Seven Deadly Sins of VMware Storage Networking, Part 2, Advanced
  • PL-3-2188 FlexArray Virtualization Best Practices for E-Series
  • PL-1-1923 Introduction to NetApp StorageGRID
  • VI-2-2178 Understanding and Designing the Software-Defined Data Center with NetApp and VMware
  • DP-2-1780 Introduction to MetroCluster

And lastly, I may or may not have hugged one of the founders of NetApp and took a selfie, so scratch that off the bucket list. 🙂

Hope to see you in Berlin,

Bye for now… or Auf Wiedersehen!


I don’t always take selfies, but when I do it’s with NetApp founder Dave Hitz

This year at VMworld, Glenn Sizemore and I will be presenting VMworld session BCO3107-SPO – Business Continuity and Disaster Recovery: Choosing the best option for your business. In light of the fact that NetApp will be demonstrating Storage Capability Profiles (SCPs), the NetApp VASA Provider (VP), and VVOLs in the Hands-on Labs, I thought it would be fun to tie in profile-driven storage with managing BC/DR requirements during the VM provisioning process.

What is the VASA Provider?

The basic premise of vSphere APIs for Storage Awareness (VASA) is that the VASA Provider (VP), which is created by the storage vendor, surfaces capability information about the storage array. The VP passes up capabilities like deduplication, replication, snapshot status, RAID level, drive type, and performance (IOPS/MBps) capacities. vSphere administrators can then use these capabilities to create a VM Storage Profile. VM Storage Profiles are then used to determine which datastores are compatible (support the necessary capabilities) or incompatible (don’t support the required capabilities) when provisioning new VMDKs, performing Storage vMotion operations, cloning a VM, or deploying a VM from a template.
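For PowerCLI users, the same compatibility information can also be queried from the SPBM cmdlets introduced around PowerCLI 5.5 R2. The sketch below assumes an existing Connect-VIServer session; the policy and VM names are hypothetical, and cmdlet availability should be verified against your PowerCLI version:

```powershell
# list the VM Storage Policies defined in vCenter
Get-SpbmStoragePolicy | Select-Object Name, Description

# find the datastores compatible with a given policy (policy name is illustrative)
$policy = Get-SpbmStoragePolicy -Name "Gold-Replicated"
Get-SpbmCompatibleStorage -StoragePolicy $policy

# associate the policy with an existing VM's hard disks (VM name is illustrative)
Get-VM "web01" | Get-HardDisk | Set-SpbmEntityConfiguration -StoragePolicy $policy
```

This is the scripted equivalent of the compatible/incompatible datastore lists the Web Client shows during provisioning.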

A Use Case Scenario:

The storage administrator wants to make sure the VI admin is provisioning VMDKs on datastores with the appropriate capabilities. The VI admin doesn’t want to have to guess the features of every single datastore in the environment. If the VMDK belongs on the stretched cluster, he wants to ensure the DRS affinity groups are configured properly to prevent latency from traffic unnecessarily traversing the inter-cluster links. In the case of DR, he wants to make sure the VM is protected and available for recovery in the event of a disaster.

The Solution:

Using the NetApp VASA Provider, vSphere Tags, and a little PowerCLI, we can achieve the following:

  1. Use NetApp Storage Capability Profiles to map storage features (e.g. deduplication, replication, snapshot status, RAID level, drive type, and performance) to datastores.
  2. Leverage VM Storage Policies configured to combine SCPs and vCenter Tags in order to align the storage service tier with BC/DR requirements.
  3. Finally, use PowerCLI 5.5 R2 to automate the best practices and other manual configuration:
    • DRS Affinity Group configuration
    • Add newly created VM(s) to the existing SRM Protection Group
    • Initiate an SRM test failover, including clean-up and sending a status notification.


Sample Scripts:

There are three scripts used in the above demo to automate different tasks. The first one adds newly created virtual machines to the DRS affinity group according to the defined site layout, the second adds virtual machines to the SRM recovery plan protecting a datastore, and the third tests SRM failover with the newly added VMs. In the demo we executed these consecutively; however, they need not be run that way.

DRS Affinity Group Auto Config

# connection variables
$vcenterIp =
$vcenterUser = "administrator@vsphere.local"
$vcenterPassword = "Password!"

# Define the site resource associations
$infrastructure = @{
    # the vCenter cluster to operate on
    'cluster1' = @{
        # the site name
        'site A' = @{
            # hosts belonging to this site
            'hosts' = @('esx1.demo.netapp.com');
            # datastores belonging to this site
            'datastores' = @('metrocluster_siteA_1');
        };
        # repeat as above
        'site B' = @{
            'hosts' = @('esx2.demo.netapp.com');
            'datastores' = @('metrocluster_siteB_1');
        };
    };
    # add additional clusters as needed
}

# import the VMware PowerCLI snap-in
if (!(Get-PSSnapin vmware.vimautomation.core -ErrorAction SilentlyContinue)) { 
    Add-PSSnapin vmware.vimautomation.core
}

# eliminate invalid certificate errors, only connect to one vCenter at a time
Set-PowerCLIConfiguration -DefaultVIServerMode Single -InvalidCertificateAction Ignore  -Scope session -Confirm:$false | Out-Null

# connect to vCenter
Connect-VIServer -Server $vcenterIp -User $vcenterUser -Password $vcenterPassword | Out-Null
Write-Host "Connected to vCenter server $($vcenterIp)"

while ($true) {
    # loop through our clusters
    $infrastructure.GetEnumerator() | %{ 
        # get the cluster object
        $clusterName = $_.Key
        Write-Host "Getting cluster $($clusterName) data..."
        $cluster = Get-Cluster $clusterName
        # create the specification objects for making the modification
        $clusterSpec = New-Object VMware.Vim.ClusterConfigSpecEx
        # for each of the sites, add the hosts and VMs to the spec
        # value is the hash with keys equal to each site name
        $_.Value.GetEnumerator() | %{
            $siteName = $_.Key
            # value is the hash with keys of "hosts" and "datastores"
            $hostGroupSpec = New-Object VMware.Vim.ClusterGroupSpec
            $hostGroupSpec.operation = "edit"
            $hostGroupSpec.Info = New-Object VMware.Vim.ClusterHostGroup
            $hostGroupSpec.Info.Name = "$($siteName) Hosts"
            Write-Host "  Editing host group $($hostGroupSpec.Info.Name)"
            # make sure all of the hosts have been added
            $_.Value.hosts | %{
                # add each host MoRef to the list
                $hostGroupSpec.Info.Host += (Get-VMHost -Name $_).ExtensionData.MoRef
                Write-Host "    Adding host $($_) to DRS group"
            }
            $clusterSpec.GroupSpec += $hostGroupSpec

            $vmGroupSpec = New-Object VMware.Vim.ClusterGroupSpec
            $vmGroupSpec.operation = "edit"
            $vmGroupSpec.Info = New-Object VMware.Vim.ClusterVmGroup
            $vmGroupSpec.Info.Name = "$($siteName) VMs"
            Write-Host "  Editing VM group $($vmGroupSpec.Info.Name)"
            # make sure all the VMs in the datastores are added
            $_.Value.datastores | %{
                Write-Host "    Adding VMs from datastore $($_)"
                # get the VMs in the datastore
                Get-Datastore -Name $_ | Get-VM | %{ 
                    # add them to the list
                    $vmGroupSpec.Info.VM += $_.Extensiondata.MoRef
                    Write-Host "      Adding VM $($_.Name)"
                }
            }
            $clusterSpec.GroupSpec += $vmGroupSpec
            # now we want to associate the site VMs with the site hosts
            $ruleSpec = New-Object VMware.Vim.ClusterRuleSpec
            $ruleSpec.operation = "edit"
            $ruleSpec.Info = New-Object VMware.Vim.ClusterVmHostRuleInfo
            $ruleSpec.Info.enabled = $true
            $ruleSpec.Info.name = "$($siteName) Affinity Rules"
            $ruleSpec.Info.mandatory = $false
            $ruleSpec.Info.vmGroupName = "$($siteName) VMs"
            $ruleSpec.Info.affineHostGroupName = "$($siteName) Hosts"
            Write-Host "  Associating host and VM groups..."
            $clusterSpec.RulesSpec += $ruleSpec
        }
        Write-Host "  Implementing new configuration..."
        # implement the rules
        $cluster.ExtensionData.ReconfigureComputeResource( $clusterSpec, $true )
    }

    Write-Host "Iteration complete!"
    Write-Host "Sleeping..."
    Start-Sleep -Seconds 60
}

Add Virtual Machines to SRM Failover Group

# connection variables
$vcenter = "192.168.0."
$user = "administrator@vsphere.local"
$pass = "Password!"

# we can only update local plans...not remote
$plan = "NYC"


$credential = New-Object System.Management.Automation.PSCredential ($user, (ConvertTo-SecureString $pass -AsPlainText -Force))

if (! (Get-PSSnapin vmware.vimautomation.core -ErrorAction SilentlyContinue)) {
    Add-PSSnapin vmware.vimautomation.core
}

# eliminate invalid certificate errors, only connect to one vCenter at a time
Set-PowerCLIConfiguration -DefaultVIServerMode Single -InvalidCertificateAction Ignore  -Scope session -Confirm:$false | Out-Null

# Connect to vCenter
Connect-VIServer -Server $vcenter -Credential $credential | Out-Null

# Connect to SRM
$srm = Connect-SrmServer -RemoteCredential $credential -Credential $credential
$api = $srm.ExtensionData

# Get the desired datastores...this is to compare later
$datastores = @{}

# just SRM
Get-Datastore -Tag "SRM" | %{ 
    $datastores.Add($_.Name, @{})
}

# for each protection plan, get the datastores and VMs
$protectionGroups = $api.Protection.ListProtectionGroups();

$protectionGroups | %{
    $plan = $_
    Write-Host "Checking protected entities for plan $($plan.GetInfo().Name)"
    Write-Host "  Getting protected datastores"
    $planDatastores = $plan.ListProtectedDatastores()
    Write-Host "    Found $($planDatastores.count) datastores"
    $planDatastores | %{
        $ds = $_
        try {
            $dsName = $ds.Name
        } catch {
            # a remote datastore will fail, we're just going to skip the failure
            return
        }
        Write-Host "    Found protected datastore $($dsName)"
        # skip un-tagged datastores, even if they're protected
        if (!($datastores.keys -contains $dsName)) {
            Write-Host -ForegroundColor red "      Skipping datastore - it's not tagged!"
            return
        }
        $datastores.$dsName.Add("protectionGroup", $plan)
        $datastores.$dsName.Add("object", (Get-VIObjectByVIView $ds))
        # the datastore view has references to the VMs, let's collect all
        # vms in the datastore now
        $dsVms = @()
        $ds.Vm | %{
            $dsVms += Get-VIObjectByVIView $_
        }
        # add the array of VMs to the hash
        $datastores.$dsName.Add("currentVms", $dsVms)
        # a placeholder for later
        $datastores.$dsName.Add("protectedVms", @())
    }
    # get the protected VMs and associate them with the datastore
    Write-Host "  Getting protected Virtual Machines"
    $plan.ListProtectedVms() | %{
        try {
            $vm = Get-VIObjectByVIView $_.Vm
        } catch {
            # skip remote VMs, which will fail
            return
        }
        # ignore templates
        if ($vm.GetType().name -eq "TemplateImpl") {
            return
        }
        Write-Host "    Found protected VM $($vm.Name)"
        # get the datastore
        $vmDatastore = $vm | Get-Datastore
        $vmDatastoreName = $vmDatastore.Name
        Write-Host "      VM uses datastore $($vmDatastoreName)"
        # skip un-tagged datastores, even if they're protected
        if (!($datastores.keys -contains $vmDatastoreName)) {
            Write-Host -ForegroundColor red "      Skipping VM - datastore not tagged!"
            return
        }
        $datastores.$vmDatastoreName.protectedVms += $vm
    }
}

Write-Host "Data collection complete..."

# at this point our hash named "datastores" contains keys equal to the names
# of each of the datastores.  Each of those keys has a value that is another
# hash, which has 4 keys: object (the datastore object itself), protectionGroup
# (the protection group which is protecting the VMs in the datastore), 
# protectedVms (an array of all the protected VMs), and currentVms (an array of 
# all the VMs in the datastore currently)

# Compare the VMs in the datastore with the VMs in the SRM plan
$datastores.GetEnumerator() | %{ 
    $datastoreName = $_.Key
    $datastoreData = $_.Value
    Write-Host "Checking datastore $($datastoreName) for unprotected VMs"
    # an empty array to hold all the VMs to be protected
    $vmsToProtect = @()
    # loop through each of the VMs in the datastore, check to see if it's being
    # protected
    $datastoreData.currentVms | %{
        if (!($datastoreData.protectedVms -contains $_)) {
            Write-Host -ForegroundColor yellow "  VM $($_.Name) is not protected"
            $spec = New-Object VMware.VimAutomation.Srm.Views.SrmProtectionGroupVmProtectionSpec
            $spec.Vm = $_.ExtensionData.MoRef
            $vmsToProtect += $spec
        }
    }
    if ($vmsToProtect.count -gt 0) {
        # add any missing VMs to the SRM plan
        Write-Host -ForegroundColor green "  Protecting VMs in datastore $($datastoreName) using plan $($datastoreData.protectionGroup.GetInfo().Name)"
        $task = $datastoreData.protectionGroup.ProtectVms( $vmsToProtect )
        while (-not $task.IsComplete()) { 
            Write-Host "  Waiting for task to complete..."
            sleep -Seconds 5
        }
    } else {
        Write-Host "  No unprotected VMs found"
    }
}
# disconnect
Disconnect-SrmServer -Confirm:$false
Disconnect-VIServer -Confirm:$false

Start an SRM Test and Cleanup

# connection variables
$vcenter = ""
$user = "administrator@vsphere.local"
$pass = "Password!"

# the name of the recovery plan to test
$RecoveryPlanName = "NYC"

# how many minutes to wait for the test and cleanup to execute
$waitTime = 10

# email settings
$sendEmail = $true
$emailFrom = "srm_test@netapp.com"
$emailTo = @("you@yourcompany.com")
$emailCc = @("minion1@yourcompany.com","hmfic@yourcompany.com")
$emailSubject = "SRM Test Report $(Get-Date -Format yyyy-MM-dd)"
$emailSmtpServer = "smtp.yourcompany.com"
$emailSmtpServerPort = 25
$emailSmtpUser = 'service_account'
$emailSmtpPassword = 'Password!'
$emailSmtpSsl = $true

# ######### End of Editable Section ######### #

function Send-Report( $message ) {
    # create the email; join multiple recipients into a comma-separated list
    $email = New-Object System.Net.Mail.MailMessage($emailFrom, ($emailTo -join ","))
    $email.IsBodyHtml = $True
    $email.body = @"
SRM Test Report

$($message)
"@

    # add any CC recipients
    if ($emailCc.count -gt 0) {
        $emailCc | %{ $email.CC.Add($_) }
    }

    # craft the rest of the message
    $email.subject = $EmailSubject
    # send the message
    $smtp = New-Object System.Net.Mail.SmtpClient($emailSmtpServer,$emailSmtpServerPort)
    $smtp.EnableSsl = $emailSmtpSsl
    $smtp.credentials = New-Object System.Net.NetworkCredential( $emailSmtpUser,$emailSmtpPassword)
    $smtp.Send($email)
}

$credential = New-Object System.Management.Automation.PSCredential ($user, (ConvertTo-SecureString $pass -AsPlainText -Force))

if (! (Get-PSSnapin vmware.vimautomation.core -ErrorAction SilentlyContinue)) {
    Add-PSSnapin vmware.vimautomation.core
}

# eliminate invalid certificate errors, only connect to one vCenter at a time
Set-PowerCLIConfiguration -DefaultVIServerMode Single -InvalidCertificateAction Ignore  -Scope session -Confirm:$false | Out-Null

# Connect to vCenter
Connect-VIServer -Server $vcenter -Credential $credential | Out-Null

# connect to srm and get the api moref
$srm = Connect-SrmServer -Credential $credential -RemoteCredential $credential -SrmServerAddress
$srmapi = $srm.ExtensionData

# find the recovery plan we want
$recoveryplan = $srmapi.Recovery.ListPlans() | ?{ $_.GetInfo().Name -eq $RecoveryPlanName }
Write-Host "Found plan: $($recoveryplan.GetInfo().Name)"

# if it's in the ready state, do a test
if ($recoveryplan.getInfo().State -eq "Ready") {
    Write-Host "Plan is in the ready state, continuing"
    # set the mode to test (1 = test)
    $rpmode = New-Object VMware.VimAutomation.Srm.Views.SrmRecoveryPlanRecoveryMode
    $rpmode.value__ = 1
    # trigger the test
    Write-Host "Starting test..." -NoNewline
    $recoveryplan.Start( $rpmode )
    # wait until it's done or $waittime minutes have passed
    $start = get-date
    while ((New-TimeSpan $start).TotalMinutes -le $waittime) {
        if ($recoveryplan.GetInfo().State -ne "Running") { break }
        Write-Host "." -NoNewline
        Start-Sleep -Seconds 5
    }
    Write-Host "DONE!"

    # when the test is done the state *should* be "NeedsCleanup", if not, an error happened
    $history = $srmapi.Recovery.GetHistory( $recoveryplan.MoRef )
    $last = $history.GetRecoveryResult(1)[0]

    $testresult = $last.ResultState
    Write-Host "Last result was: $($testresult)"
    if ($last.ResultState -ne "Success") {
        # do something to figure out what went wrong
        $mailmessage = "something went wrong during the test!"
        Write-Host $mailmessage
        if ($sendEmail) {
            Send-Report $mailmessage
        }
    } else {
        # get the report, if desired
        #[XML]$resultXML = $history.RetrieveStatus( $last.Key, 0, $history.GetResultLength( $last.Key ) )
        # trigger cleanup
        # set the mode to cleanup (2 = cleanup)
        $rpmode.value__ = 2
        # trigger the cleanup
        Write-Host "Starting cleanup..." -NoNewline
        $recoveryplan.Start( $rpmode )
        # wait for cleanup to finish/fail
        $start = Get-Date
        while ((New-TimeSpan $start).TotalMinutes -le $waittime) {
            if ($recoveryplan.GetInfo().State -ne "Running") { break }
            Write-Host "." -NoNewline
            Start-Sleep -Seconds 5
        }
        Write-Host "DONE!"
        # get cleanup result
        $cleanupresult = $history.GetRecoveryResult(1)[0].ResultState
        Write-Host "Cleanup result: $($cleanupresult)"
        # get the report, if desired
        #[XML]$cleanupResultXML = $history.RetrieveStatus( $history.GetRecoveryResult(1)[0].Key, 0, $history.GetResultLength( $history.GetRecoveryResult(1)[0].Key ) )
        # craft the message
        $mailmessage = "The result of the test was $($testresult), the result of cleanup was $($cleanupresult)."
        # send the message
        if ($sendEmail) {
            Send-Report $mailmessage
        } else {
            Write-Host $mailmessage
        }
    }
} else {
    # the recovery plan is not ready; report the error
    $mailmessage = "Unable to proceed, plan in state " + $recoveryplan.GetInfo().State
    Write-Host $mailmessage
    if ($sendEmail) {
        Send-Report $mailmessage
    }
}

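Once the script is working, the natural next step is to run it unattended on a regular cadence. Here is a minimal sketch using the Windows ScheduledTasks cmdlets; the script path and task name are hypothetical placeholders, not part of the script above:

```powershell
# Hypothetical example: register a weekly scheduled task to run the SRM test script.
# Run from an elevated PowerShell prompt on the host where PowerCLI is installed.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -File C:\Scripts\Test-SRMPlan.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 2am
Register-ScheduledTask -TaskName "Weekly SRM Test" -Action $action -Trigger $trigger
```

Adjust the trigger to match whatever test cadence your BC/DR SLAs call for.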

PowerShell is a powerful automation tool and can eliminate many of the tedious burdens placed on VI admins who are trying to ensure that business applications meet their BC/DR SLAs. Automation ensures that these rules are adhered to consistently, regardless of who (or what) is deploying virtual machines in your environment.

If you are going to VMworld 2014 be sure to swing by NetApp Booth 1205 and check out the NetApp VASA Provider in the Hands-On Labs. We have experts on automation, disaster recovery, business continuity, and any other subject you want to talk about. Be sure to stop by to schedule your time with any of the Technical Marketing team from NetApp supporting VMware integration!

NetApp just released a new VMware plug-in for disaster recovery for its MetroCluster customers.

The MetroCluster Plug-in 1.0 for vSphere evacuates all virtual machines to another site and facilitates failover/giveback operations of the storage systems. It appears as the “MetroCluster” tab under the “Hosts and Clusters” view within vCenter. Additionally, it allows administrators to view 7-Mode storage systems, vFiler units, ESXi hosts, datastores, and virtual machines.

The architecture of this plug-in leverages the vSphere API to “talk” between the plug-in and the vSphere Server. Communication between the plug-in and the MetroCluster storage system is facilitated via the NetApp Manageability SDK. Lastly, for the plug-in to “talk” to the vSphere client, Spring Remoting calls are made via HTTPS.
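Under the covers, the Manageability SDK calls come down to ONTAPI (ZAPI) requests, which are XML documents POSTed over HTTPS. As a rough sketch of what such a call looks like without the SDK (the hostname is a placeholder, and the servlet path shown is the conventional 7-Mode ONTAPI endpoint; verify both against your environment):

```powershell
# Sketch only: issue a bare ONTAPI (ZAPI) request to a 7-Mode controller.
# "filer.example.com" is a placeholder hostname.
$body = '<netapp version="1.15" xmlns="http://www.netapp.com/filer/admin"><system-get-version/></netapp>'
$cred = Get-Credential
Invoke-RestMethod -Uri "https://filer.example.com/servlet/netapp.servlets.admin.XMLrequest_filer" `
    -Method Post -Body $body -ContentType "text/xml" -Credential $cred
```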

MetroCluster Plug-in 1.0 requires VMware vCenter Server 5.5 running on Microsoft Windows Server 2012 R2 (6.3), 2012 (6.2), 2008 R2 (6.1), or 2008 (6.0), with VMware ESXi 5.5, 5.1, or 5.0. The plug-in is also qualified to support the vCenter Server Appliance.

On the storage side, the plug-in is officially supported with 7-Mode versions of Data ONTAP 8.1 – 8.1.4 and 8.2 – 8.2.2.

It should also be noted that there are vSphere Distributed Resource Scheduler (DRS) requirements: DRS should be turned on and the automation level set to Fully Automated with the DRS migration threshold set to apply at least priority 1 and priority 2 recommendations for the cluster.
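Those DRS settings can be applied with PowerCLI. Set-Cluster covers enablement and automation level, but the migration threshold is only reachable through the vSphere API. In this sketch, the cluster name is a placeholder, and the vmotionRate mapping (commonly reported as 5 = most conservative, priority 1 only, down to 1 = most aggressive, so 4 covers priority 1 and 2 recommendations) is an assumption worth verifying in your own environment:

```powershell
# "Cluster01" is a hypothetical cluster name; assumes PowerCLI is loaded
# and a vCenter session already exists.
$cluster = Get-Cluster -Name "Cluster01"

# enable DRS and set the automation level to Fully Automated
Set-Cluster -Cluster $cluster -DrsEnabled $true -DrsAutomationLevel FullyAutomated -Confirm:$false

# the migration threshold is not exposed by Set-Cluster, so drop to the vSphere API;
# vmotionRate = 4 is assumed to apply priority 1 and 2 recommendations
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DrsConfig = New-Object VMware.Vim.ClusterDrsConfigInfo
$spec.DrsConfig.VmotionRate = 4
$cluster.ExtensionData.ReconfigureComputeResource_Task($spec, $true) | Out-Null
```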

The 148-megabyte installation package is now available to download from the NetApp Support Site.