Jun 12, 2014

Office 365 Staged Migrations with PowerShell

I have a love-hate relationship with PowerShell.  I want to love it and often do--except when my knowledge is lacking or it acts mysteriously (to me).

To aid my memory and perhaps help others, I've decided to post snippets as I continue my journey toward improving my PowerShell skills (I hear that shaking Luc Dekens' and Alan Renouf's hand at the same time increases one's PS Mojo).

With that out of the way, onward and upward.


We are in the midst of migrating from Exchange Server 2003 to Office 365 as a Staged Migration.  While not a particularly arduous task, it does involve pulling mailbox statistics and email addresses and using that information for the planning and creation of migration batches. It's tedious and error-prone.

Given a set of Exchange Servers, Domain Controllers, and a Universal Security Group, Get-MailboxSizes.ps1 generates a CSV containing mailbox names, server, size, number of items, office location, department, an active mailbox flag, and a string suitable for pasting directly into a Staged Migration CSV file.

[Screenshot: sample mailbox-sizes.csv output]

The ActiveMbx column is TRUE if the user is enabled and not yet a member of the Universal Security Group used for tracking progress and automating other things.

The odd header of the last column should be pasted directly into your Staged Migration CSV file as row 1.  For successive entries, copy that column's value from filtered rows as you plan the migration.  Please note that this column assumes Active Directory synchronization with Office 365.

We've been known to paste entries from the last column into Notepad for a quick Search-Replace of ",,FALSE" with ";".  That yields a list you can paste into the To field of a notification email or the add-members box of the Universal Security Group.

I’d like to thank Iain Brighton who lent a hand when I was stuck with Custom Objects. Yea Twitter!

# Get-MailboxSizes.ps1
#
# Requires Microsoft Active Directory module
 
If (-not (Get-Module -Name ActiveDirectory)) {
   Import-Module ActiveDirectory
   If (-not (Get-Module -Name ActiveDirectory)) { throw "ActiveDirectory module is required." }
}
 
$colServers = @("Server1", "Server2")                      # array of Exchange Servers to poll
$strGroup = "CN=O365-Migrated,CN=YourOU,DC=domain,DC=com"  # dn of security group used to track those already migrated
$colDCs = @("domain1-dc")                                  # one more more domain controllers -- in case spanning domains
$strCsvFile = "mailbox-sizes.csv"
 
# get all mailboxes, their size, their number of items
$colBoxes = @()
ForEach ($server in $colServers) {
   Write-Host "Getting mailboxes from $($server)..."
   $colBoxes += Get-WMIObject -namespace "root\MicrosoftExchangeV2" -class "Exchange_Mailbox" -Computer $server `
      -Filter "NOT MailboxDisplayName like 'System%' and NOT MailboxDisplayName like 'SMTP%'" `
      | Select-Object ServerName, MailboxDisplayName, @{N="Size";E={"{0:N0}" -f [int]($_.Size/1024)}}, @{N="Items";E={"{0:N0}" -f $_.TotalItems}} 
}
 
# turn all of that into custom objects
$colDetails = @()
$colBoxes | ForEach-Object { $colDetails += New-Object PSObject -Property @{ Server = $_.ServerName; Mailbox = $_.MailboxDisplayName; Size = $_.Size; `
    Items = $_.Items; Department = ""; Office = ""; ForCSV = ""; ActiveMbx = "" } }
 
 
Write-Host "Getting accounts. This can take a while."
$colDetails | ForEach-Object { 
   $mbx = $_
   $name = $mbx.Mailbox
   ForEach ($strDC in $colDCs) {
      $user = Get-ADUser -Server $strDC -Filter { DisplayName -like $name } -Properties DisplayName, MemberOf, Enabled, Department, Office, mail
      if ($user) { 
         $mbx.Department = $user.Department
         $mbx.Office = $user.Office
         $mbx.ForCSV = "$($user.mail),,FALSE" 
         $mbx.ActiveMbx = ($user.Enabled -eq $true) -and ($user.Memberof -notcontains $strGroup)
         break
         }
      }
}
 
$colDetails | Select-Object Mailbox, Server, @{N="Size (MB)";E={$_.Size}}, Items, Office, Department, ActiveMbx, `
      @{N="EmailAddress,Password,ForceChangePassword";E={$_.ForCSV}} | Sort-Object Mailbox | Export-Csv $strCsvFile -NoTypeInformation
      
Write-Output "Wrote $strCsvFile"
 

Sep 4, 2012

Traditional DR…and its Imminent Demise?

 

[Image: backhoe damaging underground lines]

My primary focus at VMworld 2012 was Disaster Recovery, which caused me to think a fair amount about the future of DR in general—its necessity, utility, and longevity.  Have we really escaped “traditional” DR?  Will the methods employed today exist as we know them in 10 years, or just be another integral part of the infrastructure?

Each session invariably started off comparing the “traditional” disaster recovery of yesterday against the virtualization-enabled DR of today, where the old machinations are replaced with flipping a software switch.

With the exception of American National Bank’s and Varrow’s Active/Active datacenter (INF-BCO1883), I can’t help but see this as still being traditional DR—only with today’s tools.

Let’s take a look at some of the main points in “traditional” DR versus today’s:

  • Before virtualization, restoring to the same hardware used in production was a challenge. If it could not be met, time was wasted.  Virtualization gives us a common hardware set, eliminating those hardware compatibility woes. 

    While a valid point, what if you could always purchase bland hardware as a commodity, generic x86 servers from Wal-Mart, much like the common hardware set that virtualization presents to the OS?
  • Tapes could not always be restored and took too much time. Virtualization gets us to replication technologies that avoid tape.

    Excluding array-based replication, disk-to-disk-to-tape solutions with replication were pitched as disaster recovery aids 10 years ago, specifically to get around the problems of tape.
  • New systems/applications required new servers.

    You got me there. Virtualization wins, and I’m happy with that.

Same process, new tools—albeit faster, better, stronger tools. It’s still traditional DR to me.

Now enter Cloud-based DR.  With DR in the cloud, there is no need to lease or maintain capacity that sits idle. In essence, keep an off-site copy of your data—a Good Thing anyway—and pay for what you need when you need it.  Disaster Recovery has now moved fully from capex to opex.  It’s the cloud being used for what the cloud is intended.

But not at the application level.

In an application-centric world, everything behaves like the modern applications to which we’ve become accustomed.  We are blissfully unaware, yet fully appreciative of Facebook, Google, Twitter, and the like spanning multiple datacenters.  We aren’t exposed to datacenter failures that they may encounter and we shouldn’t be. Nor should your customers. 

Your line-of-business systems need to be heading this way, today, for it is the key to availability across datacenters and devices (EUC was a big push at VMworld this year).  They shouldn’t care any more about what datacenter they occupy than how many instances are deployed.

The pieces are there. We’re seeing the increased popularity of orchestration with the likes of Chef and Puppet (in no particular order), infrastructure manipulation via APIs such as those Amazon provides for its Elastic Load Balancer, and data—big or otherwise—replication and sharding becoming commonplace.

The hold-outs are back office systems that won’t get where we need them to be soon enough, yet demonstrate significant movement in this direction when you consider Office 365 and the like.

Once achieved, is expanding from private to public cloud based on increased load any different than contracting from one to the other based on availability?

Private, public, or hybrid, the cloud is an extension of your datacenter. It’s the elasticity of your workloads at web-scale, which need not be within one datacenter.  If well orchestrated, you have “simple” contractions of your cloud based not only on load, but on availability.

I see the agile system encompassing multiple datacenters at any point in time, expanding and contracting as load and availability change. This will be the new DR—no DR.  Just a well-designed modern system.

What are your thoughts?

Takeaway: Developers need to be aware of infrastructure; this could be interesting.

Feb 26, 2012

Tech Field Day Recap – Virtualization Field Day 2

I’ll have more on an individual session or two in the coming days, but I wanted to take the time to provide a brief recap of Virtualization Field Day 2.

It was great to see the familiar faces of Edward Haletky, Mike Laverick, Roger Lund, David Owen, and Rick Schlander while meeting Rodney Haywood, Bill Hill, Dwayne Lessner, Scott Lowe, Robert Novak, Brandon Riley, and Chris Wahl for the first time. It’s a solid group of people on both personal and professional levels.

Day zero allowed time for all of the delegates to arrive and closed with a welcome dinner at Zeytoun. We had great food, our hometown gift exchange, and a shrine and webcam chat with Stephen, who was not able to make it to a TFD for the first time, but for the right reason.

For sponsors on day one we had Symantec, whose competing strategy appeared to be PowerPoint overload; Zerto, with their solid continuous data protection DNA; and Xangati, with their monitoring solutions and perfect bacon execution.

Day one ended with a private tour and mystery theatre at Winchester Mystery House.

On tap for day two were flashy Pure Storage, Pivot3 with their integrated storage and compute solution, and W. Curtis Preston with TruthinIT for a “battle of the bloggers.”  To be honest, I surprised myself by how much I enjoyed that last session.

There was a lot of good content, but the sessions that really appealed to me were Pure Storage, Zerto, Pivot3, and Xangati, so I’ll summarize each of those briefly for now.

Pure Storage

Fast, cool, exciting technology in an environment that matches. We were wowed by their space from the moment we walked in the door, through the tech in their presentation, and on to the lab. The Psycho Donuts seemed to go over well for many, and RBaaS (Red Bull as a Service) saw a trial run as well.

Zerto

Gil was back for another Tech Field Day, this time with CTO and co-founder Oded Kedem, to talk about the upcoming 2.0 release of their flagship BC/DR product. Zerto visited us at Tech Field Day 8 in Boston and won this year’s VMworld Best of Show.  Each time I’ve seen their tech, I have wished that I worked in the segment to which it is marketed.

Pivot3

They brought an appliance out of the box, configured it during the presentation in 40 minutes, and then used it in a demo. How great is that?  Pivot3’s tech DNA comes from the likes of Adaptec and Compaq, and they are branching out from the surveillance industry. Having founder and CTO Bill Galloway present showed their commitment to Tech Field Day.

Xangati

Bacon, ice cream, and abundant Star Wars references are not a bad way to start with this crowd, and Xangati appears to have one of the best monitoring products for your virtual infrastructure. They’ve been concentrating on VDI lately, and their focus was on the VDI Dashboard.

 

We brought it all to a close at the end of day two with great food and conversation at Antonella’s.  Unfortunately Rodney Haywood could not join us; he needed to start his long journey home.

 

Disclaimer
My travel, accommodations, and meals were paid for by the Tech Field Day sponsors. Often there is swag involved as well. Like me, all TFD delegates are independents. We tweet, write, and say whatever we wish at all times, including during the sessions.

Follow the other delegates!

Feb 19, 2012

Virtualization Field Day Two

I am honored to have been invited back for another virtualization-themed Tech Field Day. For those unfamiliar, Tech Field Day is a great opportunity for vendors to get together on a technical level with independents that represent their target market.  It's a great short feedback loop that generates much excitement on both sides.


This second virtualization-themed event marks a big step for GestaltIT and Tech Field Day as Matt Simmons will be the primary "show runner", paving the way for expanded Tech Field Day events.  As a side note, this frees Stephen Foskett to be with his son as he is presented the award for winning the Ohio Civil Rights MLK Essay Contest. Go Grant!


This time around, Symantec and Zerto return and are joined by Xangati, PureStorage, and Pivot3, plus a blogger event with truthinIT.


Join us February 23rd and 24th, and be sure to follow along on Twitter with the hashtag #VFD2.


I look forward to seeing many familiar faces among the delegates and the opportunity to meet some new ones.  



Sep 15, 2011

Capturing CPU Trends with PowerCLI

Inspired by the creeping CPU that we see in Linux guests and helped greatly by @BoerLowie at his blog, I’ve come up with a little PowerCLI to capture CPU trends of the top consumers per cluster.

This is my first cut and will likely see changes over time, like any script should. HTML output and emailed results are the most likely candidates.

The script should be fairly self-explanatory. For each cluster, traverse all VMs and get their OverallCpuUsage (the number that you see in the vSphere Client when selecting a cluster and then the Virtual Machines tab).  Take the top X consumers based on that number, get their average CPU usage performance statistic for N days back in time, and compare it to today’s.

The output looks something like this:
[Screenshot: sample CPU trend output]


So here you go:
#
#  Produce guest CPU trending from a time period back versus a shorter 
#  more immediate time frame.  e.g. 30 days ago versus past 2 days.
#
param(
    [string] $vCenter
)
 
$DaysOld = -30        # compare to full day stats this many days back
$DaysRecent = -1    # get stats for this many recent days.
$GetTop = 10        # look at top x CPU consumers
 
Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
 
#if ($vCenter -eq "") {
#    $vCenter = Read-Host "VI Server: "
#}
 
#if ($DefaultVIServers.Count) {
#    Disconnect-VIServer -Server * -Force -Confirm:$false
#}
#Connect-VIServer $vCenter
 
$AllClusters = Get-Cluster
 
Foreach ($Cluster in $AllClusters) {
    Write-Host "`n$($Cluster.Name)"
    
    $VMs = @($Cluster | Get-VM | `
        Where-Object { $_.PowerState -eq "PoweredOn" })
    $NumVMs = $VMs.Count
    
    # Get the Overall CPU Usage for each VM in the cluster.  Then cap that 
    # list at the top $GetTop highest for Overall CPU Usage
    $vm_list = @()
    $Count = 0
    Foreach ($vm in $VMs)
    {
        $Count += 1
        Write-Progress -Activity "Getting VM views" -Status "Progress:" `
            -PercentComplete ($Count / $NumVMs * 100)
            
        # the vSphere .Net view object has the OverallCpuUsage 
        # (VirtualMachineQuickStats)
        # http://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/vim.vm.Summary.QuickStats.html
        $view = Get-View $vm
        
        $objOutput = "" | Select-Object VMName, CpuMhz
        $objOutput.VMName = $view.Name
        $objOutput.CpuMhz = $view.Summary.QuickStats.OverallCpuUsage
        $vm_list += $objOutput
    }
    # Reduce to our Top X
    $vm_list = @($vm_list | Sort-Object CpuMhz -Descending | Select-Object -First $GetTop)
        
    #
    # For each of those VMs, get the statistics for past and current CPU usage
    $NumVMs = $vm_list.Count
    $Out_List = @()
    $Count = 0
    Foreach ($vm in $vm_list)
    {
        $Count += 1
        Write-Progress -Activity "Compiling CPU stats" -Status "Progress:" `
            -PercentComplete ($Count / $NumVMs * 100)
            
        # average CPU usage for one full day, $DaysOld days back
        [Double] $ldblPerfAged = (Get-Stat -Entity $vm.VMName -Stat cpu.usage.average `
            -Start $((Get-Date).AddDays($DaysOld)) `
            -Finish $((Get-Date).AddDays($DaysOld + 1)) -ErrorAction Continue | `
            Measure-Object -Average Value).Average
        
        If ($ldblPerfAged -gt 0) {
            # average CPU usage over the recent window
            [Double] $ldblPerfNow = (Get-Stat -Entity $vm.VMName -Stat cpu.usage.average `
                -Start $((Get-Date).AddDays($DaysRecent)) `
                -ErrorAction Continue | Measure-Object -Average Value).Average
            # percent change of the recent average versus the aged baseline
            [Int] $lintTrend = (($ldblPerfNow - $ldblPerfAged) / $ldblPerfAged) * 100
        
            $objOutput = "" | Select-Object VMName, CpuMhz, PerfAged, PerfNow, Trend
            $objOutput.VMName = $vm.VMName
            $objOutput.CpuMhz = $vm.CpuMhz
            $objOutput.PerfAged = "{0:f2}%" -f $ldblPerfAged
            $objOutput.PerfNow = "{0:f2}%" -f $ldblPerfNow
            $objOutput.Trend = "{0}%" -f $lintTrend
        
            $out_list += $objOutput
        }
    }
 
    # Spit 'er out
    Write-Host "Top CPU Consumers Trending, $($DaysOld) days vs today`n"
    $out_list | Format-Table -Property VMName, `
        @{Expression={$_.CpuMhz};Name='CPU Mhz';align='right'}, `
        @{Expression={$_.PerfAged};Name='CPU Aged';align='right'}, `
        @{Expression={$_.PerfNow};Name='CPU Now';align='right'}, `
        @{Expression={$_.Trend};Name='Trend';align='right'}
}

Sep 7, 2011

Linux Guest CPU Creep

We run a lot of tiny VMs on vSphere 4 in a rather unique environment.  The densities are high, and the guest OS is the officially unsupported Fedora Core 8 (2.6.26 kernel).  This causes us to be more tolerant of aberrations.

The biggest aberration of note has been CPU creep.  The tiny guests will run along just fine using 30 - 40 MHz of CPU and then start a slow upward trend that creeps over the course of a week.  No useful perspective can be gained from within the guest using traditional means.  More interestingly, a guest-initiated reboot reveals a slow crawl all the way through the BIOS at boot and no CPU dip beyond the new baseline.  The guests are stuck, and a reset from the vSphere Client resolves the issue.

This has been acceptable so far.  The guests are stateless, only a few are impacted at any one time, and no one guest is critical by itself.  We automated the remediation, became accustomed, and moved on.  The issue has stuck to one functional cluster and persisted across minor vSphere 4 upgrades.

Becoming accustomed caused us to miss another occurrence.

The software architects have been busy troubleshooting the core application, which runs in a separate vSphere cluster on Ubuntu Server 8.04 LTS (2.6.24 kernel).  CPU has been creeping slowly upward for the past couple of months with a marked recent acceleration, which we had been attributing to increased load as we grow. The software was optimized, yet the CPU remained on its steady upward path.

[Chart: CPU creeping upward over time]

The solution:

Stop all running processes, verify a higher than expected CPU load, and reset the VM.  We’re down substantially.

[Chart: CPU usage after the reset]

In a small shop with few resources and too many projects, it’s time to implement trending alerts.

Have you experienced this behavior before?

Aug 26, 2011

What a Rush

I’m not even there yet and I’m excited.  VMworld 2011 is upon us. 

I’ve had the pleasure of attending VMworld since 2007, when I was just getting into virtualization and wanted to learn more.  VMware Workstation was it for me, but I wanted to learn and prepare for server virtualization.  We had no suitable servers and no shared storage.

Fast forward to this year, when my team manages a heavily virtualized environment with very high densities, including a unique business case. VMworld was a major destination on the path.

Sure, it was all achievable without it.  But I’d venture to say that a proper course includes VMworld.

The ability to learn at VMworld hit new highs last year with course content, Solutions Center, and the excellent Hands on Labs.  This year can only be better.

That said, one of the biggest opportunities is to network with your peers.  I missed out on much of that the first two VMworlds that I attended, and that was a mistake.  Get out there and meet your peers.  You’ll find a vast pool of great folks that can relate to your virtualization and cloud efforts and others that will blow you away with endless possibilities in real world scenarios.

Will you be there?  Please come up, say hello, and forgive me if I cannot quickly connect your name to your Twitter handle or the discussion that we had a couple of years back.

I look forward to seeing you there!