Sunday, November 9, 2014

vSphere Claim Rules with PowerCLI

We’ve been implementing an HP 3PAR 7000 series array in our environment. During this task we’ve tried to be good admins by following the HP 3PAR StoreServ Storage and VMware vSphere 5 best practices.

Within this excellent resource there is a section for automating round robin policy for all 3PAR LUNs with a custom SATP rule.  That’s great, but it involves an esxcli command that must be run on each host:
# esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom Rule"

For a small number of hosts that’s not too terrible, until you realize that you must then reboot each host or modify the policy for each LUN currently presented on each host.  That’s not something that I’m likely to do by hand.
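For what it’s worth, that per-LUN change is itself scriptable. Here’s a rough PowerCLI sketch against a single host (the host name is hypothetical):

# Flip every presented 3PARdata LUN on one host to round robin
Get-VMHost esx01.example.com | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq "3PARdata" -and $_.MultipathPolicy -ne "RoundRobin" } |
    Set-ScsiLun -MultipathPolicy RoundRobin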

With my technology buddy Google, I discovered several tidbits online:
  • Philip Sellers has a blog that works through much of this in PowerCLI.
  • Cormac Hogan has a blog that re-iterates the above, along with the commands required to unload each already claimed device (LUN), reload the claim rules, and rescan the HBAs.
Most of what I needed was out there already, of course. But I wanted to reload the claim rules so I could test the custom rule without setting the policy individually on each LUN. I also didn’t want to impact every iSCSI LUN, just the 3PAR ones, and there’s no need to reclaim a LUN that is already configured for round robin.

Here’s how I did it for each host:
$esxCred = Get-Credential
Connect-VIServer $esx -Credential $esxCred
 
$esxcli = Get-ESXcli 
 
# Add the claim rule and test for its existence.  This also fails if the claim rule
# is already present.
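# (The positional arguments to rule.add() below map to the esxcli options in
# alphabetical order: boot, claim-option, description, device, driver, force,
# model, option, psp, psp-option, satp, transport, type, vendor. Typing the
# method name without parentheses displays the exact signature for your build.)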
try {
    $esxcli.storage.nmp.satp.rule.add($null,"tpgs_on", 
      "HP 3PAR Custom ALUA Rule", $null, $null, $null, "VV", 
      $null, "VMW_PSP_RR","iops=100", "VMW_SATP_ALUA", $null,
      $null, "3PARdata")
    } 
catch {} 
 
if (@($esxcli.storage.nmp.satp.rule.list() | 
  where { $_.description -like "*3PAR*" }).Count -ne 1) {
    write-output "Custom claim rule addition failed!"
    exit
    }
 
# Get a list of 3PAR devices that aren't configured for RR 
# and unclaim them
$3parDevices = $esxcli.storage.nmp.device.list() | 
   Where { $_.DeviceDisplayName -like "3PARdata*" -and 
   $_.PathSelectionPolicy -notlike "*_RR"}
$3parDevices | % {
    $esxcli.storage.core.claiming.unclaim($null, $null, $null, 
    $_.Device, $null, $null, $null, $null, $null, $null, 
    "device", "3PARdata")
}
 
# now reload and run all claim rules (claimrule lives under the storage.core namespace)
$esxcli.storage.core.claimrule.load()
$esxcli.storage.core.claimrule.run()
 
# rescan all HBAs
Get-VMHost | Get-VMHostStorage -RescanAllHba

Of course, I wrapped all of this in more PowerCLI that obtained the list of hosts in each cluster and managed the connections for esxcli.
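Roughly, the wrapper looked something like this (the vCenter name is a placeholder, and the comment stands in for the per-host block above):

# Rough outline: pull the host list from vCenter, then run the per-host
# steps above against each host directly.
Connect-VIServer vcenter.example.com
$hostNames = Get-Cluster | Get-VMHost | Select-Object -ExpandProperty Name
Disconnect-VIServer -Confirm:$false

$esxCred = Get-Credential
foreach ($esx in $hostNames) {
    Connect-VIServer $esx -Credential $esxCred
    # ... claim rule add, unclaim, reload, and rescan steps from above ...
    Disconnect-VIServer -Confirm:$false
}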

If you receive any errors during the unclaim, make sure that your HA configuration is not using those LUNs as heartbeat datastores.  I ran into that problem, and it was easy to remedy in our environment.
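If you want to check up front, the cluster’s HA runtime info will list the heartbeat datastores currently in use. A minimal sketch, assuming vSphere 5.x HA and the RetrieveDasAdvancedRuntimeInfo API call:

# List the datastores each cluster is currently using for HA heartbeating
foreach ($cluster in Get-Cluster) {
    $dasInfo = ($cluster | Get-View).RetrieveDasAdvancedRuntimeInfo()
    foreach ($hb in $dasInfo.HeartbeatDatastoreInfo) {
        New-Object PSObject -Property @{
            Cluster   = $cluster.Name
            Datastore = (Get-View $hb.Datastore).Name
        }
    }
}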

Thursday, June 12, 2014

Office 365 Staged Migrations with PowerShell

I have a love-hate relationship with PowerShell.  I want to love it and often do, except when my knowledge is lacking or it acts mysteriously (to me).

To aid my memory and perhaps help others, I've decided to post snippets as I continue my journey toward improving my PowerShell skills (I hear that shaking Luc Dekens' and Alan Renouf's hand at the same time increases one's PS Mojo).

With that out of the way, onward and upward.


We are in the midst of migrating from Exchange Server 2003 to Office 365 as a Staged Migration.  While not a particularly arduous task, it does involve pulling mailbox statistics and email addresses and using that information for the planning and creation of migration batches. It's tedious and error-prone.

Given a set of Exchange Servers, Domain Controllers, and a Universal Security Group, Get-MailboxSizes.ps1 generates a CSV containing mailbox names, server, size, number of items, office location, department, active mailbox flag, and string suitable for directly pasting into a Staged Migration CSV file.

[Screenshot: sample output from Get-MailboxSizes.ps1]

The ActiveMbx column is TRUE when the user account is enabled and is not yet a member of the Universal Security Group used for tracking progress and automating other things.

The oddly named header on the last column should be pasted directly into your Staged Migration CSV file as row 1.  For the remaining rows, copy that column’s values from the filtered rows as you plan each migration batch.   Please note that this column assumes Active Directory synchronization with Office 365.
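For reference, the staged migration file you end up building is nothing more than that header plus one line per mailbox (the addresses here are made up):

EmailAddress,Password,ForceChangePassword
jsmith@yourdomain.com,,FALSE
mjones@yourdomain.com,,FALSE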

We've been known to paste entries from the last column into Notepad for a quick Search-Replace of ",,FALSE" with ";".  That yields a list that you can paste into the To field of a notification email or the Add Members box of the Universal Security Group.
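If you'd rather stay in PowerShell than bounce out to Notepad, the same Search-Replace is a one-liner (batch.txt being whatever file you pasted the column values into):

# Same ",,FALSE" -> ";" swap as the Notepad trick
(Get-Content batch.txt) -replace ",,FALSE", ";"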

I’d like to thank Iain Brighton who lent a hand when I was stuck with Custom Objects. Yea Twitter!

# Get-MailboxSizes.ps1
#
# Requires Microsoft Active Directory module
 
If ((Get-Module | Where { $_.Name -eq "ActiveDirectory"}) -eq $null) { 
   Import-Module ActiveDirectory; 
   If ((Get-Module | Where { $_.Name -eq "ActiveDirectory"}) -eq $null) { throw "ActiveDirectory Module is required." }
   }
 
$colServers = @("Server1", "Server2")                      # array of Exchange Servers to poll
$strGroup = "CN=O365-Migrated,CN=YourOU,DC=domain,DC=com"  # dn of security group used to track those already migrated
$colDCs = @("domain1-dc")                                  # one or more domain controllers -- in case spanning domains
$strCsvFile = "mailbox-sizes.csv"
 
# get all mailboxes, their size, their number of items
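# (root\MicrosoftExchangeV2 and the Exchange_Mailbox class are specific to
# Exchange 2003; later Exchange versions dropped this WMI provider)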
$colBoxes = @()
ForEach ($server in $colServers) {
   Write-Host "Getting mailboxes from $($server)..."
   $colBoxes += Get-WMIObject -namespace "root\MicrosoftExchangeV2" -class "Exchange_Mailbox" -Computer $server `
      -Filter "NOT MailboxDisplayName like 'System%' and NOT MailboxDisplayName like 'SMTP%'" `
      | Select-Object ServerName, MailboxDisplayName, @{N="Size";E={"{0:N0}" -f [int]($_.Size/1024)}}, @{N="Items";E={"{0:N0}" -f $_.TotalItems}} 
}
 
# turn all of that into custom objects
$colDetails = @();
$colBoxes | % { $colDetails += New-Object PSObject -Property @{ Server = $_.ServerName; Mailbox = $_.MailboxDisplayName; Size = $_.Size;  `
    Items = $_.Items; Department = ""; Office = ""; ForCSV = ""; ActiveMbx = "" } }
 
 
Write-Host "Getting accounts. This can take a while."
$colDetails | ForEach-Object { 
   $mbx = $_
   $name = $mbx.Mailbox
   ForEach ($strDC in $colDCs) {
      $user = Get-ADUser -Server $strDC -Filter { DisplayName -like $name } -Properties DisplayName, MemberOf, Enabled, Department, Office, mail
      if ($user) { 
         $mbx.Department = $user.Department
         $mbx.Office = $user.Office
         $mbx.ForCSV = "$($user.mail),,FALSE" 
         $mbx.ActiveMbx = ($user.Enabled -eq $true) -and ($user.Memberof -notcontains $strGroup)
         break
         }
      }
}
 
$colDetails | Select-Object Mailbox, Server, @{N="Size (MB)";E={$_.Size}}, Items, Office, Department, ActiveMbx, `
      @{N="EmailAddress,Password,ForceChangePassword";E={$_.ForCSV}} | Sort-Object Mailbox | Export-Csv $strCsvFile -NoTypeInformation

Write-Output "Wrote $strCsvFile"
 

Tuesday, September 4, 2012

Traditional DR…and its Imminent Demise?

 

Backhoe Damaging Underground Lines

My primary focus at VMworld 2012 was Disaster Recovery, which caused me to think a fair amount about the future of DR in general: its necessity, utility, and longevity.  Have we really escaped “traditional” DR?  Will the methods employed today exist as we know them in 10 years, or will they just be another integral part of the infrastructure?

Each session invariably started off comparing the “traditional” disaster recovery of yesterday against the virtualization-enabled DR of today, where the old machinations are replaced with flipping a software switch.

With the exception of American National Bank’s and Varrow’s Active/Active datacenter (INF-BCO1883), I can’t help but see this as still being traditional DR—only with today’s tools.

Let’s take a look at some of the main points in “traditional” DR versus today’s:

  • Before virtualization, restoring to the same hardware used in production was a challenge. If it could not be met, time was wasted.  Virtualization gives us a common hardware set, eliminating those hardware compatibility woes. 

    While a valid point, what if you could always purchase bland, generic x86 servers at Wal-Mart as a commodity, much like the uniform hardware set that virtualization presents to the OS?
  • Tapes could not always be restored and took too much time. Virtualization gets us to replication technologies that avoid tape.

    Excluding array-based replication, disk-to-disk-to-tape solutions with replication were pitched as disaster recovery aids 10 years ago, specifically to get around the problems of tape.
  • New systems/applications required new servers.

    You got me there. Virtualization wins, and I’m happy with that.

Same process, new tools—albeit faster, better, stronger tools. It’s still traditional DR to me.

Now enter Cloud-based DR.  With DR in the cloud, there is no need to lease or buy equipment that sits idle. In essence, keep an off-site copy of your data (a Good Thing anyway) and pay for what you need, when you need it.  Disaster Recovery has now moved fully from capex to opex.  It’s cloud being used for what cloud is intended.

But not at the application level.

In an application-centric world, everything behaves like the modern applications to which we’ve become accustomed.  We are blissfully unaware, yet fully appreciative of Facebook, Google, Twitter, and the like spanning multiple datacenters.  We aren’t exposed to datacenter failures that they may encounter and we shouldn’t be. Nor should your customers. 

Your line-of-business systems need to be heading this way, today, for it is the key to availability across datacenters and devices (EUC was a big push at VMworld this year).  They shouldn’t care any more about what datacenter they occupy than how many instances are deployed.

The pieces are there. We’re seeing the increased popularity of orchestration with the likes of Chef and Puppet (in no particular order), infrastructure manipulation via APIs such as those Amazon provides for its Elastic Load Balancer, and data replication and sharding (big data or otherwise) becoming commonplace.

The hold-outs are back office systems that won’t get where we need them to be soon enough, yet demonstrate significant movement in this direction when you consider Office 365 and the like.

Once achieved, is expanding from private to public cloud based on increased load any different than contracting from one to the other based on availability?

Private, public, or hybrid, the cloud is an extension of your datacenter. It’s the elasticity of your workloads at web scale, which need not stay within one datacenter.  If well orchestrated, you have “simple” contractions of your cloud based not only on load, but on availability.

I see the agile system encompassing multiple datacenters at any point in time, expanding and contracting as load and availability change. This will be the new DR: no DR at all, just a well-designed modern system.

What are your thoughts?

Takeaway: Developers need to be aware of infrastructure; this could be interesting.

Sunday, February 26, 2012

Tech Field Day Recap – Virtualization Field Day 2

I’ll have more on an individual session or two in the coming days, but I wanted to take the time to provide a brief recap of Virtualization Field Day 2.

It was great to see the familiar faces of Edward Haletky, Mike Laverick, Roger Lund, David Owen, and Rick Schlander while meeting Rodney Haywood, Bill Hill, Dwayne Lessner, Scott Lowe, Robert Novak, Brandon Riley, and Chris Wahl for the first time. It’s a solid group of people on both personal and professional levels.

Day zero allowed time for all of the delegates to arrive and closed with a welcome dinner at Zeytoun. We had great food, our hometown gift exchange, and a shrine and webcam chat with Stephen, who was not able to make it to TFD for the first time, but for the right reason.

For sponsors on day one we had Symantec, whose competing strategy appeared to be PowerPoint overload; Zerto, with their solid continuous data protection DNA; and Xangati, with their monitoring solutions and perfect bacon execution.

Day one ended with a private tour and mystery theatre at Winchester Mystery House.

On tap for day two were flashy Pure Storage, Pivot3 with their integrated storage and compute solution, and W. Curtis Preston with TruthinIT for a “battle of the bloggers.”  To be honest, I surprised myself by how much I enjoyed that last session.

There was a lot of good content, but the sessions that really appealed to me were Pure Storage, Zerto, Pivot3, and Xangati, so I’ll summarize each of those briefly for now.

Pure Storage

Fast, cool, exciting technology in an environment that matches. We were wowed by their space from the moment we walked in the door, to the tech in their presentation, and on to the lab. The Psycho Donuts seemed to go over well for many, and RBaaS (Red Bull as a Service) saw a trial run as well.

Zerto

Gil was back for another Tech Field Day, this time with CTO and co-founder Oded Kedem, to talk about the upcoming 2.0 release of their flagship BC/DR product. Zerto visited us at Tech Field Day 8 in Boston and won this year’s VMworld Best of Show.  Each time I’ve seen their tech I have wished that I worked in the segment to which it is marketed.

Pivot3

They came with an appliance, out of the box, configured it during the presentation in 40 minutes, and then used it in a demo. How great is that?  Pivot3’s tech DNA comes from the likes of Adaptec and Compaq, and they are branching out from the surveillance industry. Having founder and CTO Bill Galloway present showed their commitment to Tech Field Day.

Xangati

Bacon, ice cream, and abundant Star Wars references are not a bad way to start with this crowd, and Xangati appears to have one of the best monitoring products for your virtual infrastructure. They’ve been concentrating on VDI lately, and their focus was on the VDI Dashboard.

 

We brought it all to a close at the end of day two with great food and conversation at Antonella’s.  Unfortunately Rodney Haywood could not join us; he needed to start his long journey home.

 

Disclaimer
My travel, accommodations, and meals were paid for by the Tech Field Day sponsors. Often there is swag involved as well. Like me, all TFD delegates are independents. We tweet, write, and say whatever we wish at all times, including during the sessions.

Follow the other delegates!

Sunday, February 19, 2012

Virtualization Field Day Two

I am honored to have been invited back for another virtualization-themed Tech Field Day. For those unfamiliar, Tech Field Day is a great opportunity for vendors to get together on a technical level with independents that represent their target market.  It's a great short feedback loop that generates much excitement on both sides.


This second virtualization-themed event marks a big step for GestaltIT and Tech Field Day as Matt Simmons will be the primary "show runner", paving the way for expanded Tech Field Day events.  As a side note, this frees Stephen Foskett to be with his son as he is presented the award for winning the Ohio Civil Rights MLK Essay Contest. Go Grant!


This time around Symantec and Zerto return and are joined by Xangati, PureStorage, and Pivot3, along with a blogger event with truthinIT.


Join us February 23rd and 24th, and be sure to follow along on Twitter with the hashtag #VFD2.


I look forward to seeing many familiar faces among the delegates and the opportunity to meet some new ones.  



Thursday, September 15, 2011

Capturing CPU Trends with PowerCLI

Inspired by the creeping CPU usage that we see in Linux guests, and helped greatly by @BoerLowie at his blog, I’ve come up with a little PowerCLI script to capture CPU trends for the top consumers in each cluster.
This is my first cut and will likely see changes over time, like any script should. HTML output and emailed results are the most likely candidates.
The script should be fairly self-explanatory. For each cluster, it traverses all VMs and gets their OverallCpuUsage (the number that you see in the vSphere Client when selecting a cluster and then the Virtual Machines tab).  It then takes the top X consumers based on that number, gets their average CPU usage performance statistic from N days back in time, and compares it to today’s.
The output looks something like this:
[Screenshot: sample CPU trend output]


So here you go:
#
#  Produce guest CPU trending from a time period back versus a shorter 
#  more immediate time frame.  e.g. 30 days ago versus past 2 days.
#
param(
    [string] $vCenter
)
 
$DaysOld = -30        # compare to full day stats this many days back
$DaysRecent = -1    # get stats for this many recent days.
$GetTop = 10        # look at top x CPU consumers
 
Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
 
#if ($vCenter -eq "") {
#    $vCenter = Read-Host "VI Server: "
#}
 
#if ($DefaultVIServers.Count) {
#    Disconnect-VIServer -Server * -Force -Confirm:$false
#}
#Connect-VIServer $vCenter
 
$AllClusters = Get-Cluster
 
Foreach ($Cluster in $AllClusters) {
    Write-Host "`n$($Cluster.Name)"
    
    $VMs = Get-Cluster $Cluster | Get-VM | `
        Where-Object { $_.PowerState -eq "PoweredOn" }
    $NumVMs = $VMs.Count
    
    # Get the Overall CPU Usage for each VM in the cluster.  Then cap that 
    # list at the top $GetTop highest for Overall CPU Usage
    $vm_list = @()
    $Count = 0
    Foreach ($vm in $VMs)
    {
        $Count += 1
        Write-Progress -Activity "Getting VM views" -Status "Progress:" `
            -PercentComplete ($Count / $NumVMs * 100)
            
        # the vSphere .Net view object has the OverallCpuUsage 
        # (VirtualMachineQuickStats)
        # http://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/vim.vm.Summary.QuickStats.html
        $view = Get-View $vm
        
        $objOutput = "" | Select-Object VMName, CpuMhz
        $objOutput.VMName = $view.Name
        $objOutput.CpuMhz = $view.Summary.QuickStats.OverallCpuUsage
        $vm_list += $objOutput
    }
    # Reduce to our Top X
    $vm_list = $vm_list | sort-object CpuMhz -Descending | select -First $GetTop 
        
    #
    # For each of those VMs, get the statistics for past and current CPU usage
    $NumVMs = $vm_list.Count
    $Out_List = @()
    $Count = 0
    Foreach ($vm in $vm_list)
    {
        $Count += 1
        Write-Progress -Activity "Compiling CPU stats" -Status "Progress:" `
            -PercentComplete ($Count / $NumVMs * 100)
            
        [Double] $ldblPerfAged = (Get-Stat -Entity $vm.VMName -Stat cpu.usage.average `
            -Start $((Get-Date).AddDays($DaysOld)) `
            -Finish $((Get-Date).AddDays($DaysOld + 1)) -ErrorAction Continue | `
            Measure-Object -Average Value).Average
        
        If ($ldblPerfAged -gt 0) {
            [Double] $lblPerfNow = (Get-Stat -Entity $vm.VMName -Stat cpu.usage.average `
                -Start $((Get-Date).AddDays($DaysRecent)) `
                -ErrorAction Continue | Measure-Object -Average Value).Average
            [Int] $lintTrend = (($lblPerfNow - $ldblPerfAged) / $ldblPerfAged) * 100
        
            $objOutput = "" | Select-Object VMName, CpuMhz, PerfAged, PerfNow, Trend
            $objOutput.VMName = $vm.VMName
            $objOutput.CpuMhz = $vm.CpuMhz
            $objOutput.PerfAged = "{0:f2}%" -f $ldblPerfAged
            $objOutput.PerfNow = "{0:f2}%" -f $lblPerfNow
            $objOutput.Trend = "{0}%" -f $lintTrend
        
            $out_list += $objOutput
        }
    }
 
    # Spit 'er out
    Write-Host "Top CPU Consumers Trending, $($DaysOld) days vs today`n"
    $out_list | Format-Table -Property VMName, `
        @{Expression={$_.CpuMhz};Name='CPU Mhz';align='right'}, `
        @{Expression={$_.PerfAged};Name='CPU Aged';align='right'}, `
        @{Expression={$_.PerfNow};Name='CPU Now';align='right'}, `
        @{Expression={$_.Trend};Name='Trend';align='right'}
}
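For the HTML output and emailed results mentioned at the top, something along these lines, dropped into the per-cluster loop in place of the Format-Table, would be a starting point (the SMTP server and addresses are placeholders):

# Possible future direction: convert the per-cluster results to HTML and mail them
$html = $out_list |
    ConvertTo-Html -Property VMName, CpuMhz, PerfAged, PerfNow, Trend `
        -Title "Top CPU Consumers Trending" | Out-String
Send-MailMessage -SmtpServer smtp.example.com -From cputrend@example.com `
    -To vmware-admins@example.com -Subject "$($Cluster.Name) CPU trend" `
    -Body $html -BodyAsHtml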