Presentation and code for Experts Live 2018

Today I had the honour to speak at Experts Live NL about deploying Microsoft’s cloud PBX. Experts Live is always a fun gig, and the interaction between all the nerds in the venue (during and after the sessions) is always something that really gives energy 🙂

I promised I would upload my slides and PowerShell code for this session, so here is a quick blogpost to let you know you can find them on GitHub.

For those of you that were at the venue, thanks for coming. For those of you that weren’t: see you next year! 😉

Analyzing hybrid migration logs using PowerShell

While working on migrating a customer from an on-prem Exchange environment to Exchange Online, I ran into some problems with a few mailboxes.

In this case, there were three mailboxes that would fail the first (staging) sync from on-prem to ExO, due to the infamous ‘bad item limit reached’ error. So, I increased the bad item limit for these mailboxes and resubmitted the move request. After some time, the migration failed again with the same error: the number of bad items had increased to above the limit I had set before. So, time to do some further digging. First, I’ll select the move requests to see which requests actually failed.

I get the move requests that have a status of ‘failed’, get the statistics for those requests and load them to the variable $statistics.
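The original code block didn’t survive in this copy of the post; a sketch of what that command likely looks like:

```powershell
# Get all failed move requests and load their statistics into a variable
$statistics = Get-MoveRequest |
    Where-Object { $_.Status -eq 'Failed' } |
    Get-MoveRequestStatistics
```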

Let’s see what the current number of ‘bad items’ is for these mailboxes:
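A sketch of that check, assuming the `$statistics` variable from the previous step:

```powershell
# Compare the configured limit with the number of bad items actually hit
$statistics | Select-Object DisplayName, BadItemLimit, BadItemsEncountered
```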

An example from the output for one of the three mailboxes (please note that part of the displayname is hidden in this picture):

As you can see, I previously set the bad item limit to 700, but the migration has now encountered 788 bad items and therefore failed. I always expect some bad items to occur during these migrations, but this sure is a lot. Where do all these errors come from? To find out, we have to take a look at the actual migration report.

Because I was looking at the third failed mailbox in my list of failed mailboxes, I’ll request the statistics for this mailbox, including the migration report.
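The command itself was in a screenshot that didn’t survive; something along these lines:

```powershell
# Third mailbox in the list of failed requests (arrays are zero-based),
# including the full migration report
$stats = Get-MoveRequestStatistics -Identity $statistics[2].MailboxIdentity -IncludeReport
$stats | Format-List
```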

This returns a huge wall of text, including all the errors that were encountered moving the messages. One of the last lines is the last failure recorded in the move request.

Of course, you can export this report to a text file to go through the items and find the root cause. Personally, I find it easier to export the report to an XML file, so I can use PowerShell to do some further digging.

With this cmdlet, I take the statistics for the given user, including the report, and export it to the given file. Next, I can import this XML-file to an object in PowerShell.
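A sketch of the export and import, with an illustrative file path:

```powershell
# Export the statistics, including the report, to an XML file...
Get-MoveRequestStatistics -Identity $statistics[2].MailboxIdentity -IncludeReport |
    Export-Clixml -Path 'C:\Temp\MigrationReport.xml'

# ...and import that file back into a PowerShell object
$report = Import-Clixml -Path 'C:\Temp\MigrationReport.xml'
```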

I now have the $report variable, which holds the contents of the XML file with the migration report. I can now navigate through this report just as I would any other XML object within PowerShell. The ‘LastFailure’ entry I mentioned earlier, for example, is in fact an entry in the XML.

So, can we extract some actual info from these bad items from the report? We can. The encountered failures are located in the actual report, in the failures section.
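Navigating to those entries looks roughly like this (exact property names may differ slightly per Exchange version):

```powershell
# All failures encountered during the move live in the failures section
$report.Report.Failures

# The 'LastFailure' mentioned earlier is simply the last entry in that collection
$report.Report.Failures[-1]
```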

Again, I obfuscated the folder name in this screenshot. This is just a part of the output from the above command, all encountered errors will be listed in the output.

So, let’s see if we can find some common denominator in these errors. I’d like to see all errors, but just a few properties for each error.

Because there is no index number for the entries, I add one manually. That way, I can always look up a specific error by referencing its number. As arrays start counting at zero, I do the same for my index number. For each error in the file, I select the given index number, the timestamp, the failure type and the error message. At the end, I increase the index number by one, so the next error gets the correct index.
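The loop described above would look something like this:

```powershell
# Add a manual, zero-based index to each failure entry
$index = 0
foreach ($failure in $report.Report.Failures) {
    $failure | Select-Object @{ Name = 'Index'; Expression = { $index } },
        Timestamp, FailureType, Message
    $index++
}
```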

For the mailbox in our example, this gives the following output:

So there you have it: it seems the mailbox has some items that probably have access rights mapped to a non-existing user. Of course, we can check this from the Exchange Management Shell. In this case, some of the errors referenced items in a subfolder of the ‘verwijderde items’ folder, which is Dutch for ‘Deleted Items’. So, I’ll get the folder permissions for this folder.
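A sketch of that check; the mailbox and folder names are placeholders (the real folder name was obfuscated in the original screenshot):

```powershell
# Mailbox and subfolder are illustrative placeholders
Get-MailboxFolderPermission -Identity 'jdoe@contoso.com:\Verwijderde items\Subfolder'
```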

And indeed, it shows a lot of non-existing, previously deleted users.

So in this case, I can resolve the issue by removing the legacy permissions and restarting the job. After reviewing the report, you could also decide to restart the job with the ‘BadItemLimit’ parameter increased to a number high enough not to cause the move request to fail: these errors indicate that although the permissions won’t be migrated, the items themselves will still be copied to Exchange Online, so no data will be lost.
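Both options sketched below; identities are placeholders, and note that Remove-MailboxFolderPermission is only available in Exchange 2013 and later / Exchange Online:

```powershell
# Option 1: remove a stale permission entry (identities are placeholders)
Remove-MailboxFolderPermission -Identity 'jdoe@contoso.com:\Verwijderde items\Subfolder' -User 'OldUser'

# Option 2: raise the limit and restart the move; a BadItemLimit above 50
# also requires -AcceptLargeDataLoss
Set-MoveRequest -Identity 'jdoe@contoso.com' -BadItemLimit 1000 -AcceptLargeDataLoss
Resume-MoveRequest -Identity 'jdoe@contoso.com'
```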

In conclusion, you can see why I prefer to review the errors in an Exchange hybrid migration using the Export-Clixml cmdlet. It is a much more convenient way to navigate through all the errors and get a complete view of the issues.

Azure AD Group Based Licensing

When working with larger Office 365 and / or EM+S deployments, one of the pains for me has always been the automation of license assignment. You can provision users in Azure AD (and thus in Office 365) automatically using Azure AD Connect, and you could even add some magic with PowerShell scripts to assign licenses to these users based on OU or group membership in your on-prem AD. The removal of these licenses takes even more scripting, where you would need to compare your on-prem group membership with the active licenses and remove licenses when needed.

About two weeks ago, Microsoft announced something to fill this gap: Azure AD Group based licensing. Currently, this feature is available in public preview. While in preview, you’ll need a paid Azure AD subscription to use the feature, like the one included in EM+S. Just a plain Office 365 tenant won’t be sufficient. When the feature hits GA, this prerequisite will be lifted.

Using the feature is as simple as it sounds. From the (new) Azure portal, you navigate to the Azure AD resource and click the ‘licensing’ tab. From there, you can navigate to all the licenses that are available in your environment.

By clicking through to one of the available licenses, you get the option to assign this license to a group.

All Microsoft Online services that require user-based licensing are supported and are displayed in this pane.

When selecting the ‘groups’ option, you can search for and select the group to assign this license to. This group can live solely in Azure AD, or can be a security group synced from your on-prem AD.


Because you can specify which parts of the license you would like to assign, you can create a pretty fine-grained solution using various groups to enable or disable specific features.

All in all, this is a pretty neat solution to provide some ease in managing user licenses for Microsoft Online services.

There is some good documentation on this feature on the Microsoft website. Of course, as this is a feature that’s currently in public preview, there are some known issues and limitations, which are also pretty well documented.

For me, I can’t wait to get this feature implemented at some of the larger tenants I manage, to ease the administrative tasks of on- and offboarding users.

Completing individual moverequests from a migrationbatch

While working on a hybrid migration from Exchange 2010 to Office 365, I found myself in a place where I had a migration batch of approximately 350 mailboxes that was ready to complete, but I only wanted to complete around five of them so my customer could do some further testing. This was something that was decided later on in the migration project, so the migration batch was already set up and had synced.

Now, when you use PowerShell, you can complete an entire migration batch with the Complete-MigrationBatch cmdlet, but there’s no such cmdlet for completing individual mailboxes. With some workarounds, though, you can complete individual move requests from within a migration batch.

First, get the moverequest you’re completing:
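A sketch, with an illustrative identity:

```powershell
Get-MoveRequest -Identity 'jdoe@contoso.com' | Format-List DisplayName, Status, BatchName
```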

You’ll see the status is ‘Synced’, so the mailbox is ready to complete. To do this, we change some properties on the individual move request:

In this example, I set the ‘CompleteAfter’ property to five minutes from now. You can also provide a date/time value to schedule the completion a few days ahead.
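A sketch of those property changes (identity is illustrative):

```powershell
# Clear the flags that keep the request suspended and schedule the completion
Set-MoveRequest -Identity 'jdoe@contoso.com' `
    -SuspendWhenReadyToComplete:$false `
    -PreventCompletion:$false `
    -CompleteAfter (Get-Date).AddMinutes(5)
```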

After this is set, we need to resume the request so it will complete:
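Resuming the individual move request kicks off the completion; a sketch:

```powershell
Resume-MoveRequest -Identity 'jdoe@contoso.com'
```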

After a while, you can do another Get-MoveRequest and see that the status is ‘Completed’.

Pass-through authentication and SSO

In an earlier blogpost I wrote about the new ‘pass-through authentication’ feature that is in public preview in the new Azure AD Sync client.

One of the most common reasons to use ADFS in an Office 365 setup, is that it allows you to do Single Sign-On. You only have to authenticate once, when you log on to your domain joined device, and your Kerberos ticket is used to authenticate you against Azure AD and therefore Office 365.

With the pass-through authentication feature, you get the same benefits. Because the PTA agent authenticates you against your on-premises Active Directory, you can use PTA to do single sign-on as well. So let’s take this setup for a test drive!

Basis of this test is the setup from my previous blog: an on-premises AD with PTA enabled in Azure AD Connect combined with the 365Dude Office 365-tenant. I added a Windows 10 machine to the mix, and domain-joined it to the on-premises AD. I use the Mr. Dude account to log on to this machine.

There are a few prerequisites to take into account when doing SSO. Mainly, they concern the OS and browser used. A Windows machine is required, as Kerberos is used for the underlying authentication. On Windows 10, 8.1 and 8 clients, Internet Explorer, Chrome and Firefox are supported, while Edge isn’t. Older Windows versions aren’t supported. When installing Azure AD Connect with the PTA / SSO option, a computer account is created in AD to handle your authentication requests.

First, we need to make sure that the Office 365 tenant is configured to allow OAuth. We do this with one single line of PowerShell:
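The actual line wasn’t preserved in this copy of the post. Enabling modern authentication (OAuth) in Exchange Online is typically done as follows — treat this as an assumption about the line the author used:

```powershell
# Enable modern authentication (OAuth) on the tenant, from an Exchange Online session
Set-OrganizationConfig -OAuth2ClientProfileEnabled $true
```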

Furthermore, we need to add the Microsoft authentication servers to the Intranet zone of the browser’s security settings. They need to be explicitly added to the machine’s Intranet zone, so that the browser will automatically send the currently logged in user’s credentials in the form of a Kerberos ticket to Azure AD. The easiest way to add the required URLs to the Intranet zone is to simply create a group policy in Active Directory.

There are two URLs that need to be configured here:
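The list itself was shown in a screenshot that didn’t survive; per the Seamless SSO preview documentation of the time, the two URLs to add to the Intranet zone were:

```
https://autologon.microsoftazuread-sso.com
https://aadg.windows.net.nsatc.net
```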

When done, the GPO should look as follows:

That’s it! After configuring our tenant and adding the URLs to the Intranet zone, you can single sign-on to your Office 365 services.

So, what’s the user experience here? Let’s check it out.

After logging on the machine, we open IE and navigate to the URL.

We are prompted to enter a username and password. After we enter the username and navigate to the password field, the computer checks for a Kerberos ticket for this username. When one exists, the client is automatically logged on without needing to enter the password.

Well, it’s a bit hard to capture that on a screenshot. But you catch my drift, right? 😉 When the login completes, we just find our Outlook for the Web logged on and ready to use.

So that’s the webmail part. Most of my clients however, prefer to use Outlook as part of the Office suite for their e-mail work. How does SSO work there? Well, almost in the same way.

Prerequisite for SSO to work with client apps, is that the apps support modern authentication. So for our Windows-based clients, that will be Office 2013 and 2016. For this test, I used Office 2016.

When first starting Outlook, we are prompted to connect to Office 365.

So let’s do just that. When we click the ‘connect’ button, we get a login screen equal to the one used in the Outlook for the Web interface.

And there we are. After checking the username and navigating to the ‘password’ field, the dialog box checks whether a Kerberos ticket exists for this account, and when it does, it uses that to log on to the Office 365 service. No further user interaction is required: It Just Works.

So, it’s a wrap. This single sign-on functionality provides you with all the ease of use you have with ADFS, without the need for a complete ADFS infrastructure. Of course, when you plan on (or need to be) running ADFS in your environment for single sign-on with other applications, such as CRM or ERP software, there is no use in combining PTA with SSO for Office 365: you just take the ADFS route. If, however, you want to provide your users with the same experience without the need for that infrastructure, PTA with SSO can be a great alternative!


AAD Pass-through authentication checked out

Exactly one week ago, Alex Simons announced what he called ‘the biggest news of the year’: the public preview of Azure AD Pass-through authentication and seamless single sign-on.

I tweeted something about how this would be the start of the end of ADFS for Office 365, but is it? Time to take these new options for a test drive!

First, some side notes. The new functions are in public preview, which means they’re ‘beta’ and the SLA is non-existent. You should therefore not use them in production environments. But I guess you’re smart enough to know that.

So, for the test I created a simple AD domain, 365dude.local, with one user: Mr. Dude. This will be the user I’ll be syncing to my existing Office 365 tenant. To make synchronisation easier, I added the tenant’s domain to the list of UPN suffixes in my AD domain and changed the UPN of this user accordingly.

I enabled the dirsync functionality on my Office 365 tenant using PowerShell:
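A sketch of that one-liner:

```powershell
# Requires the MSOnline module; connect first with Connect-MsolService
Set-MsolDirSyncEnabled -EnableDirSync $true
```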

After that, I started installing the Azure AD Connect tool.

I guess you’re smart enough to know how to install AADConnect, so I’ll just focus on the new and shiny stuff here.

In the ‘user sign-in’ screen, we can select how we want to handle sign-in requests for our synced users. Of course we can use password sync or ADFS, but we’ll choose the new ‘pass-through authentication’ option.

So, what does this new pass-through authentication mechanism do? It relies on a connector which runs in your local network. When you authenticate through the Office 365 portal (or a supported client app), your authentication request is sent to the connector over a secure channel. The connector then checks your credentials against your on-prem AD. So, no more syncing of password hashes to the cloud and using AAD as your authentication source: you authenticate directly against your on-prem AD. Just like you would with ADFS, but without the need for the complete ADFS infrastructure.

On the ‘optional features’ tab, you can still choose to do password sync. This way the on-prem passwords are still synced to AAD, so you can easily fall back to authenticating to AAD if you want to, because the passwords are already in place.


In this screenshot, I also selected the ‘password write-back’ option. To use this, you need an Azure AD Premium license.

You get some information on what the software will do, and the installation will continue.

Once completed, I can find the user from my on-prem AD in the active users overview of Office 365 and assign a license.

Remember: with pass-through authentication, when you sign in to Office 365 your authentication is routed through the connector in your local network and your credentials are verified against your on-prem AD. Thus, you can find event log messages on-premises for your Office 365 sign-in requests.

So, will this be the end of ADFS? In some ways it will, I guess. You get the same functionality, without the need for the complete ADFS infra. And of course, with the pass-through authentication mechanism in place, you don’t need to sync your (hashed) passwords to the cloud to get the same sign-in experience, which could please your security officers. On the other hand, many (larger) organizations already have the ADFS infrastructure in place for single sign-on with other LOB applications. If you already went down that road, you can simply use that to connect Office 365, too. Another disadvantage is that when you use directory synchronisation to connect your on-prem AD to AAD, you’ll need an on-prem Exchange server to perform certain management tasks, such as adding aliases to a mail-enabled user. For smaller organizations, this can be a reason not to choose the directory sync option at all, be it with or without pass-through authentication. I expect this is something Microsoft will fix somewhere in the next year, but that’s just a guess.

Want more information on this new (preview) functionality in AAD Connect and Office 365? Be sure to check out the official documentation.

Further developing the homelab script

My previous post was about the script I used for my presentation at Experts Live: (re)building your homelab using PowerShell. As it turns out, someone got inspired 😉 Sven already updated the script by creating a variable for specifying the gateway address for your new VM.

As we both have plans to further develop this script into a nice module with some extra functionality, I’ve decided to move the code to a separate repository on GitHub. This way, the demo code from Experts Live stays the same for future reference, and the project we’re working on truly becomes a separate project.

If you’d like to join in on further expanding and developing this module, please feel free to contribute through GitHub!

Slides and code for Experts Live 2016

Last week I had the privilege to speak at Experts Live 2016. This has always been a great event to brush up on skills, gain some new knowledge and catch up with old and new friends from ‘the industry’.

I did a talk about automating (re)building your home lab environment, to make sure you can keep up with all new developments in the Microsoft world. I promised to put my slide deck and code online, so I made them available on GitHub. I did a blog post on (sort of) the same subject a while ago; you can check that out here.

Just so you know: I know I’ve been slacking off a bit on writing new blog posts the last couple of months, but there are some new posts on their way. So keep coming back 😉

Building a home lab, the PowerShell way

I haven’t posted here in a while, but for a good reason: I started a new job! As of May 1st, I started working as a Senior IT Consultant at Hands On.
Partly because of the job change, I wanted to build a home lab for some testing and practicing for certification exams.
My lab runs on an Intel NUC, the sixth generation, with 32GB of RAM and a 500GB M.2 SSD drive. This box currently runs Windows 10 Pro, mainly because that means I can also use it as a workstation when working from home. Perhaps in the future I’ll switch to running a server OS, with a workstation as a VM… still thinking about what the best option is.
Anyway, the box runs Hyper-V, and because using it as a lab means I’ll frequently be installing new servers or reinstalling existing ones, I wanted a workflow to speed up this process. And of course I will use PowerShell to do so 🙂 I’ll share my setup and scripts in this post, but please note that this is (as always) a work in progress and by no means a perfect solution. However, because I received some requests to share my work, I decided to put it online already 🙂
First, a small look at the networking setup for Hyper-V. I wanted to isolate my guests from my main network. And because the NUC is such a nice, portable little box, I want the ability to take it with me and use it somewhere else, without needing to change my entire network configuration. To realize this, I decided to go with a NAT switch in Hyper-V. This way, the guests can use their own subnet and still have internet access through the host machine. For more information on this, check this out.
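Creating such a NAT switch looks roughly like this; the switch name ‘CertLab’ matches the one used later in the script, and the subnet is illustrative:

```powershell
# Internal switch, host IP in the lab subnet, and a NAT network on top
New-VMSwitch -SwitchName 'CertLab' -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 -InterfaceAlias 'vEthernet (CertLab)'
New-NetNat -Name 'CertLabNAT' -InternalIPInterfaceAddressPrefix '192.168.100.0/24'
```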
To save on the disk space needed for the servers, I wanted to use differencing disks. This way, there is one ‘main’ disk, with each VM having its own differencing disk containing just the changes / deltas relative to the main disk. So, I installed a plain Windows Server 2012 R2, installed all available updates and ran sysprep, so I can use this as a base image.
After this, I copied the VHDX for this machine to a separate folder and marked it as read-only. Once it is used as the base disk for all other servers, changing this disk directly would break all underlying VMs, so the read-only attribute is just a safeguard.
After this, the groundwork for the lab is done. Now I’ll need to build a PowerShell script to do the actual deployment.
I’ll break down the script piece by piece, to show what work is being done by the script.

This first part is where I define the variables that change whenever I run the script: the desired name for the VM, the IP address I want it to use and the IP address of the DNS server. In the future, I’ll most likely turn this script into a function so I can just supply them as parameters. For now, I’ll manually change the script each time I run it. Like I said: work in progress 😉
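The script itself was shown in screenshots that didn’t survive; a sketch of this first part, with example values:

```powershell
# Per-deployment variables, changed on each run (values are examples)
$VMName    = 'LABSRV01'
$IPAddress = '192.168.100.10'
$DNSServer = '192.168.100.2'
```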

These are static variables that contain things such as file paths. These won’t change each time the script is run, but I think setting them here is nicer than just hardcoding them further down in the script. The ‘CertLab’ VM switch is the NAT switch I created earlier.
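A sketch of this part; the paths are illustrative, and $VMName comes from the variables set in the first part:

```powershell
# Static variables: file paths and the switch name (paths are examples)
$BaseVHDPath = 'D:\HyperV\Base\2012R2-Base.vhdx'
$VHDPath     = "D:\HyperV\Disks\$VMName.vhdx"
$SwitchName  = 'CertLab'
$UnattendXml = 'D:\HyperV\unattend.xml'
```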

The real work starts with creating the VHD file for the new server. The disk is created as a differencing disk, with the base image I created earlier as the parent disk.
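Using the variables from the previous sections, that step is a single cmdlet:

```powershell
# Create a differencing disk with the sysprepped base image as parent
New-VHD -Path $VHDPath -ParentPath $BaseVHDPath -Differencing
```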

This part of the script mounts the newly created VHD file for the new VM and copies an unattend.xml to this disk. I use some scripting to modify the content of the XML file, changing the hostname, IP address and DNS server address for the new server based on the variables defined at the beginning of the script.
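A sketch of that mount-and-inject step, using the variables from above. The %TOKENS% in the template file are my assumption about how the placeholders were marked, not the author’s actual scheme:

```powershell
# Mount the new disk and find its drive letter
$disk  = Mount-VHD -Path $VHDPath -Passthru | Get-Disk
$drive = ($disk | Get-Partition | Where-Object DriveLetter).DriveLetter

# Fill in the per-VM values and write the result to the mounted disk
(Get-Content $UnattendXml -Raw) `
    -replace '%VMNAME%', $VMName `
    -replace '%IPADDRESS%', $IPAddress `
    -replace '%DNSSERVER%', $DNSServer |
    Set-Content -Path "$($drive):\unattend.xml"

Dismount-VHD -Path $VHDPath
```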

Here, I just create the new VM. I hardcoded the MemoryStartupBytes parameter in the script, but of course you can always choose to specify this as a variable (or a parameter, if you create a function) to be able to change it easily.
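A sketch, with 2GB as an example value for the hardcoded memory setting:

```powershell
New-VM -Name $VMName -MemoryStartupBytes 2GB -VHDPath $VHDPath -SwitchName $SwitchName
```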

Finally, I simply start the new VM.
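Which is just:

```powershell
Start-VM -Name $VMName
```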

That’s it! By using this script, I’m able to set up a new VM in my lab with just a few clicks. Like I said, it’s work in progress. I’d like to implement PowerShell Direct to do some final configuration on the guest machine, such as adding certain roles. At this time, however, PowerShell direct requires the guest to run a Server 2016 TP OS, and as I’m also using this lab for certification and training purposes, I’m sticking with Server 2012 R2 for now. I could set up PowerShell Remoting (Robert has a great blogpost on this), but that’s something for the future 😉
If you have any comments on my code, or if you would like to add something to the script, please feel free to do so!

Exchange 2010 EMC issue resolved – so it seems

I blogged earlier about an issue when setting up a connection to Office 365 in the Exchange Management Console in Exchange 2010 (in this case, Service Pack 3, Rollup Update 12).

It seems that this issue has been resolved. Despite the fact that I didn’t get any further feedback on the ticket I raised with MS Support or in the support forum thread mentioned in my earlier post, I decided to simply give it another try today… and It Just Worked™.

I didn’t change any thing on the existing configuration, just simply tried adding the connection to EMC again. At another client that encountered the same issue (but for which we didn’t raise a ticket yet), the issue seems to be resolved too, which leads to think there was a configuration change on the Office 365 side. However, we will need to wait for official feedback from Microsoft about this issue to have this confirmed.