Encrypting disks on an Azure VM

Due to all the new privacy and data-protection rules like GDPR, I see a lot of companies looking at using disk encryption on their servers. In this case, a customer asked me to enable Azure Disk Encryption on their Azure VMs. Azure Disk Encryption encrypts the disks of your Azure VM at rest, using the well-known BitLocker technology. This is free, as in free beer: there is no extra charge for encrypting your disks in Azure. The cryptographic keys are stored in Azure Key Vault. These cryptographic keys are used to encrypt and decrypt the virtual disks attached to your VM. You retain control of these cryptographic keys and can audit their use through the Key Vault. An Azure Active Directory service principal is used for issuing these cryptographic keys as VMs are powered on and off.

Process for encrypting a VM

The process for encrypting a VM is pretty straightforward:

  1. Create a cryptographic key in Azure Key Vault
  2. Configure the key to be usable for Azure Disk Encryption
  3. Create an Azure Active Directory service principal to be able to read the key
  4. Run the PowerShell command to encrypt your virtual disks, specifying the Azure Active Directory service principal and appropriate cryptographic key to be used
  5. The Azure AD service principal will request the required cryptographic key from Azure Key Vault
  6. The virtual disks are encrypted using the provided key

There are some requirements and limitations to keep in mind. Encryption can only be applied to VMs in the standard tier, and all resources (Key Vault, Storage Account and VM) should be in the same region. You can’t use Azure Disk Encryption for VMs created in the classic deployment model, and you can’t update the key on already encrypted VMs.

 

So, let’s code!

In the following examples, we’ll use PowerShell to enable Azure Disk Encryption on the ‘myVm’ Virtual Machine in the ‘myResourceGroup’ resource group. All resources are based in the West Europe region. Of course, before you start, make sure that you are running the latest AzureRM module in PowerShell.

First of all, we’ll set some variables and register the Azure Key Vault provider.

Next, we’ll create the key vault. I choose to use a specific key vault for storing my key, but you can use an existing key vault. Of course you can freely choose the name of your vault ๐Ÿ™‚ Make sure to enable the key vault for disk encryption.

Next, we’ll create a software-protected key in the vault to use with Azure Disk Encryption. This is the key that will actually be used for encrypting and decrypting your disks. Again, pick your own name for the key.

When virtual disks are encrypted or decrypted, you specify an account to handle the authentication and exchanging of cryptographic keys from the key vault. This account, an Azure Active Directory service principal, allows the Azure platform to request the appropriate keys on behalf of the VM. We’ll create the service principal using the following code. When choosing your password, keep the Azure AD password policies and restrictions in mind.
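A sketch of how that could be done. The application name, URIs and password below are placeholders, and depending on your AzureRM version the -Password parameter may expect a SecureString instead of a plain string:

```powershell
$aadAppName  = 'myDiskEncryptionApp'
$aadPassword = 'MyL0ngAndC0mplexP@ssphrase!'   # mind the Azure AD password policies

$aadApp = New-AzureRmADApplication -DisplayName $aadAppName `
                                   -HomePage 'https://mydiskencryptionapp.local' `
                                   -IdentifierUris 'https://mydiskencryptionapp.local' `
                                   -Password $aadPassword   # newer AzureRM versions expect a SecureString here

New-AzureRmADServicePrincipal -ApplicationId $aadApp.ApplicationId
```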

After creating the service principal, we have to set permissions so the key stored in the key vault can be read by the service principal.
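For example, granting the service principal the wrapKey and set permissions, which should be enough for disk encryption:

```powershell
Set-AzureRmKeyVaultAccessPolicy -VaultName $kvName `
                                -ServicePrincipalName $aadApp.ApplicationId `
                                -PermissionsToKeys 'WrapKey' `
                                -PermissionsToSecrets 'Set'
```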

So, now all prerequisites are met to start encrypting disks. We kick off the encryption process on our VM.
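Putting the pieces above together, a sketch of the call could look like this, wrapping the encryption secrets with the Key Vault key we created:

```powershell
$keyVault          = Get-AzureRmKeyVault -VaultName $kvName -ResourceGroupName $rgName
$diskEncryptionKey = Get-AzureKeyVaultKey -VaultName $kvName -Name $keyName

Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $rgName `
                                     -VMName $vmName `
                                     -AadClientID $aadApp.ApplicationId `
                                     -AadClientSecret $aadPassword `
                                     -DiskEncryptionKeyVaultUrl $keyVault.VaultUri `
                                     -DiskEncryptionKeyVaultId $keyVault.ResourceId `
                                     -KeyEncryptionKeyUrl $diskEncryptionKey.Key.Kid `
                                     -KeyEncryptionKeyVaultId $keyVault.ResourceId
```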

You will be prompted to continue with the encryption. Once you accept, the encryption process starts and the VM will be restarted in the process.

Once the process has completed, you can check the status with the following command:
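For example:

```powershell
Get-AzureRmVMDiskEncryptionStatus -ResourceGroupName $rgName -VMName $vmName
```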

The output will look something like this:

That’s it! You just encrypted your disks using Azure Disk Encryption. Want to know more about this feature? Check the documentation.

 

Getting Intune device config PowerShell scripts via the Graph API

For a few months now, it’s been possible to upload PowerShell scripts to Intune as part of your device configuration policies. These scripts will then be pushed to the linked Windows devices and run either under the SYSTEM account, or as the logged-on user.

While working on an Intune deployment, I wanted to check the PowerShell scripts that are currently in use, and found out you can’t do that through the portal. You can change the properties of the script and upload a new file, but can’t view the current script.

Looking for a way to make the script visible, I started playing around with the Graph API, to see if we can do it via this route. Spoiler: we can! 🙂

First of all, we need to authenticate to the Graph API. There is some great example code on the Microsoft Graph GitHub pages that explains how to do this, so I won’t go into any detail here. The scriptblock I use to authenticate results in a $authHeader hashtable that we can include in our REST calls to the Graph.

First, I set a few variables in my script that I can re-use in my calls:
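Something like this (a sketch; $authHeader comes from the authentication scriptblock mentioned above):

```powershell
$graphApiVersion = 'beta'
$resource        = 'deviceManagement/deviceManagementScripts'
```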

We need to use the beta version of the API, because the resources we need (deviceManagement/deviceManagementScripts) are not exposed in the current stable version.

So, let’s make our first call to the API to see what results we get back.
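Roughly like this, reusing the variables from above:

```powershell
$uri = "https://graph.microsoft.com/$graphApiVersion/$resource"
Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Get
```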

We set the URI we want to call, including the API version and resources specified earlier. Next, we call this URI using the invoke-restmethod cmdlet, including our authentication header we retrieved in the beginning. We use the ‘GET’ method, because we want to retrieve data. Because we set the resource to be deviceManagementScripts, the response will include the deviceManagementScripts currently in use.

The response is a PSObject with several properties. Of course, we are most interested in the ‘value’ property, as this has the actual data we are looking for. So, we rewrite our line of code to get just the ‘value’.
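For example:

```powershell
(Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Get).value
```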

This returns the actual deviceManagementScripts that are currently in use.

In my case, this is only one script that apparently is used to redirect certain folders to OneDrive.

By referencing the id for this script in our API call, we can get more information on this particular object.
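A sketch of that call, storing the result so we can reuse it in a minute:

```powershell
$scripts  = (Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Get).value
$scriptId = $scripts[0].id

$singleScriptUri = "https://graph.microsoft.com/$graphApiVersion/$resource/$scriptId"
$intuneScript    = Invoke-RestMethod -Uri $singleScriptUri -Headers $authHeader -Method Get
$intuneScript
```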

Again, we use the ‘GET’ method to retrieve the information from the Graph API. Because we are referencing a single object, we now don’t need to explicitly retrieve the ‘value’ property to get the actual data.

For ease of reading, I truncated the ‘scriptContent’ property in this output a bit, but as you can see we can retrieve all the information we have in the portal: the description, the runAsAccount (which can be either ‘user’ or ‘system’), whether a signature check is enforced, and of course the filename of the script and the actual content.

The content of the script is stored (and displayed) in a Base64-encoded string. To make this human readable, we need to decode it.
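A sketch, using the $intuneScript object retrieved above:

```powershell
$script64      = $intuneScript.scriptContent
$decodedscript = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($script64))
```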

First, we specify the $script64 variable to store the Base64-encoded string. Next, we decode the Base64 string to UTF8 and store it in the $decodedscript variable.

When we now display this $decodedscript variable, we see the contents of the script!

Again, I truncated the output for readability here. Since we have this variable, we can store the contents in a file and thus save the script to our local hard drive.
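For example (the output path and file name are just examples):

```powershell
$decodedscript | Out-File -FilePath 'C:\temp\DecodedIntuneScript.ps1'
```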

Pretty neat, right? But if we can use this to download a script, why shouldn’t we be able to upload a script this way? Let’s check. The documentation by Microsoft is pretty clear: we call the same deviceManagementScripts resource, but with a POST method instead of a GET. In this POST we need to include JSON in the body with the details of the script we would like to set, including the actual content of the script in the same Base64 encoding we saw earlier.

So, let’s put the building blocks together. I’ve created a mind blowing Powershell-script that I stored as c:\temp\testscript.ps1

We get the content of this file and then encode it using Base64.
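For example:

```powershell
$UploadScriptContent = Get-Content -Path 'C:\temp\testscript.ps1' -Raw
$UploadScriptEncoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($UploadScriptContent))
```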

So now we have the $UploadScriptEncoded variable, containing the Base64-encoded script. Next, we need to build the JSON body to include in the POST to the REST API. I do this by creating a hashtable with all the needed information and piping it to the ConvertTo-Json cmdlet.
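A sketch of that hashtable (the display name, description and file name are example values):

```powershell
$postbody = @{
    displayName           = 'My test script'           # name shown in the Intune portal
    description           = 'Uploaded via the Graph API'
    scriptContent         = $UploadScriptEncoded       # the Base64-encoded script
    runAsAccount          = 'user'                     # 'user' or 'system'
    enforceSignatureCheck = $false
    fileName              = 'testscript.ps1'
} | ConvertTo-Json
```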

In the hashtable, I specify the display name for the script and a short description, include the scriptContent we just encoded, specify that it should run in the user context and that we don’t want to check for a valid signature. Finally, we give the filename for the script that will be displayed in the Intune portal where the script is referenced.

To finish up, we call the API with the given parameters to do the actual uploading.
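Something along these lines:

```powershell
$uri = "https://graph.microsoft.com/$graphApiVersion/$resource"
Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Post -Body $postbody -ContentType 'application/json'
```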

We call the URI specified earlier, including our authentication header, with the POST method. We include the JSON we stored in $postbody as the body of the request, specifying that this is indeed JSON.

The response indicates that the file was uploaded and configured.

We can now check the Intune portal to double check if the script is there.

There you have it: using the Graph API you can do stuff you can’t do in the portal and automate many of the things you do in Intune. For example, you can place the PowerShell scripts you deploy using Intune in a repository in (for example) VSTS and create a build sequence that uses the Graph API to update the script in Intune every time you push to master. If you haven’t played around with the Graph API, now is the time to do so. The possibilities are endless.

Happy scripting!

 

Deleting rogue mailbox folder permissions using PowerShell

Yesterday, I wrote a little post about analyzing your hybrid migration logs using PowerShell. In the case I showed in that post, the large number of BadItems that was causing my move to fail turned out to be caused by rogue permissions on mailbox folders in the user’s mailbox. These permissions were given to users that no longer exist in the directory, so they cannot be moved to Exchange Online, causing the move to fail.

So… How do we remove these permissions? Well, with PowerShell of course 😉 I wrote up a quick script that checks for rogue permissions on a given mailbox and then removes them. The script is tested only in my environment, so if you want to use or adapt it, please be careful.

First, we need to get a list of all folders in a mailbox, so we can check the permissions for those folders. Unfortunately, get-mailboxfolder only works if you’re querying a mailbox that you’re the owner of. You can’t use this cmdlet as an administrator to check other people’s mailboxes. But we can use get-mailboxfolderstatistics as a workaround. We just need to make sure we only select the output we need.
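For example (‘jdoe’ is an example alias):

```powershell
$mailbox = 'jdoe'
$folders = Get-MailboxFolderStatistics -Identity $mailbox |
    Select-Object -ExpandProperty FolderPath
```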

This gives us a list of folders that are in the given mailbox. We can then use this list to check all those folders for any rogue permissions. If you investigate the permissions on a mailbox folder, you’ll see that the ‘User’ attribute for these rogue permissions is the user SID, instead of the username. As all SIDs start with ‘NT:’, we can use this to filter out the rogue permissions.
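For a single folder, that check could look like this (using the Inbox purely as an example):

```powershell
Get-MailboxFolderPermission -Identity "$($mailbox):\Inbox" |
    Where-Object { $_.User -like 'NT:*' }
```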

We now have a list of folders and the corresponding invalid permissions. It’s fairly easy to delete those with the remove-mailboxfolderpermission cmdlet.
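A sketch for one folder, again using the Inbox as an example:

```powershell
$roguePerms = Get-MailboxFolderPermission -Identity "$($mailbox):\Inbox" |
    Where-Object { $_.User -like 'NT:*' }

foreach ($permission in $roguePerms) {
    Remove-MailboxFolderPermission -Identity "$($mailbox):\Inbox" -User $permission.User -Confirm:$false
}
```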

So now for the cool part: putting all those puzzle pieces together to create one script. It’s fairly simple, using two foreach-loops: one to loop through all the folders for a mailbox to get the incorrect permissions, and another one to loop through all the rogue permissions to actually remove them.

The nasty part is in creating a correct list of folders to query the permissions. The list of folders from the get-mailboxfolderstatistics cmdlet contains only folder names, using a forward slash (/) to separate the folders, while the get-mailboxfolderpermission cmdlet expects the folder path to use backslashes (\) and to include the name of the mailbox followed by a colon (:). To work around this, I build a $folderpath variable combining the alias, a colon and the folder path from get-mailboxfolderstatistics, using the -replace operator to replace all forward slashes with backslashes.

To top it all off, I do some filtering in the get-mailboxfolderstatistics cmdlet to exclude some folders. These are folders (like ‘Top of Information Store’) that will generate an error if you try to query their permissions.

The entire script then ends up looking like this:
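A sketch of how those pieces could fit together (‘jdoe’ is an example alias, and the excluded folders may differ in your environment):

```powershell
$mailbox = 'jdoe'

# Get all folders, excluding folders that error out when you query their permissions
$folders = Get-MailboxFolderStatistics -Identity $mailbox |
    Where-Object { $_.FolderPath -notlike '/Top of Information Store*' -and $_.FolderPath -notlike '/Recoverable Items*' } |
    Select-Object -ExpandProperty FolderPath

foreach ($folder in $folders) {
    # Turn '/Inbox/Subfolder' into 'jdoe:\Inbox\Subfolder'
    $folderpath = $mailbox + ':' + ($folder -replace '/', '\')

    $rogueperms = Get-MailboxFolderPermission -Identity $folderpath |
        Where-Object { $_.User -like 'NT:*' }

    foreach ($permission in $rogueperms) {
        Remove-MailboxFolderPermission -Identity $folderpath -User $permission.User -WhatIf
    }
}
```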

Of course, if you’d like to run this in your own environment, be careful and make sure you know what you’re doing. If you are really, really sure it will be okay, remove the -whatif parameter from the last line and have fun.

Happy scripting!

Analyzing hybrid migration logs using PowerShell

While I’m currently working on migrating a customer from an on-prem Exchange environment to Exchange Online, in ran in to some problems with a few mailboxes.

In this case, there were three mailboxes that would fail the first (staging) sync from on-prem to ExO, due to the infamous ‘bad item limit reached’ error. So, I increased the bad item limit for these mailboxes and resubmitted the move request. After some time, the migration failed again, with the same error. The number of bad items had increased to above the limit I had set before. So, time to do some further digging. First, I’ll do a selection on the move requests to see which requests actually did fail.
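Something like:

```powershell
$statistics = Get-MoveRequest -MoveStatus Failed | Get-MoveRequestStatistics
```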

I get the move requests that have a status of ‘failed’, get the statistics for those requests and load them to the variable $statistics.

Let’s see what the current amount of ‘bad items’ is for these mailboxes

An example from the output for one of the three mailboxes (please note that part of the displayname is hidden in this picture):

As you can see, I previously set the bad item limit to 700, but the migration currently encountered 788 bad items and therefore failed. I always do expect some bad items to occur during these migrations, but this sure is a lot. Where do all these errors come from? To find out, we have to take a look at the actual migration report.

Because I was looking at the third failed mailbox in my list of failed mailboxes, I’ll request the statistics for this mailbox, including the migration report.
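Roughly like this (with ‘jdoe’ as a placeholder for that third mailbox):

```powershell
Get-MoveRequestStatistics -Identity 'jdoe' -IncludeReport
```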

This returns a huge wall of text, including all the errors that were encountered moving the messages. One of the last lines is the last failure recorded in the move request.

Of course, you can export this report to a text file to go through the items to find the root cause. Personally, I find it easier to export the report to an XML file, so I can use PowerShell to do some further digging.
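For example (the path and mailbox name are placeholders):

```powershell
Get-MoveRequestStatistics -Identity 'jdoe' -IncludeReport |
    Export-Clixml -Path 'C:\temp\moverequest-jdoe.xml'
```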

With this cmdlet, I take the statistics for the given user, including the report, and export it to the given file. Next, I can import this XML-file to an object in PowerShell.
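Like so:

```powershell
$report = Import-Clixml -Path 'C:\temp\moverequest-jdoe.xml'
```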

I now have the $report variable, which holds the XML-file with the migration report. I can now navigate through this report as I could with any other XML object within PowerShell. The ‘LastFailure’ entry I mentioned earlier, for example, is in fact an entry in the XML.
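Assuming the statistics object exposes that entry under the same name, you could read it directly:

```powershell
$report.LastFailure
```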

So, can we extract some actual info from these bad items from the report? We can. The encountered failures are located in the actual report, in the failures section.
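For example:

```powershell
$report.Report.Failures
```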

Again, I obfuscated the folder name in this screenshot. This is just a part of the output from the above command, all encountered errors will be listed in the output.

So, let’s see if we can find some common denominator in these errors. I’d like to see all errors, but just a few properties for each error.

Because there is no index number for the entries, I add one manually. That way, I can always look up a specific error by referencing the number. As arrays start counting at zero, I do the same for my index number. For each error in the file, I then select the given index number, the timestamp, failuretype and the error message. At the end, I increase the index number by one, so the next error will have a correct index.

For the mailbox in our example, this gives the following output:

So there you have it: it seems the mailbox has some items that probably have access rights mapped to a non-existing user. Of course, we can check this from the Exchange Management Shell. In this case, some of the errors referenced items in a subfolder of the ‘Verwijderde items’ folder, which is Dutch for ‘Deleted Items’. So, I’ll get the folder permissions for this folder.
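For example (the mailbox alias and subfolder name are placeholders):

```powershell
Get-MailboxFolderPermission -Identity 'jdoe:\Verwijderde items\<subfolder>'
```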

And indeed it does show a lot of non-existing, previously deleted, users.

So in this case, I can resolve the issue by removing the legacy permissions and restarting the job. After reviewing the report, you can also decide to restart the job with the ‘BadItemLimit’ parameter increased to a number high enough to not cause the move request to fail; these errors indicate that although the permissions will not be migrated, the items themselves will be copied to Exchange Online, so no data will be lost.

In conclusion, you can see why I prefer to review the errors in an Exchange hybrid migration using the export-clixml cmdlet. It is a much more convenient way to navigate around all errors and get a complete view of the issues.

Update to the 365Tools PowerShell Module

Earlier this week, I decided to add a new function to the 365Tools PowerShell module.

This Get-MSOLIPRanges function prompts you to select one or more Office 365 products, and then provides you with the IP ranges used by those products, so you can whitelist these addresses in your firewall if you need to do so.

It started off as a quick write-up, but thanks to the help of Robert (Twitter) the code was cleaned up and is ready for you to use.

You can find the 365Tools module on the PowerShell gallery, so you can simply install it by running Install-Module 365Tools. The entire code for the module can be found on GitHub.

 

Dupsug Basics – Part Deux

On 19 September 2017, the Dutch PowerShell User Group is organizing another ‘DuPSUG Basics’ event. The first of these days was held on 22 March last year. That very well-attended edition apparently left people wanting more, because we still regularly get asked when the second edition will take place. On Prinsjesdag, as it turns out!

In total, there will be 7 sessions on this day by seven different speakers (including two MVPs) on a wide range of topics, such as SQL and Office 365. I will be presenting the session ‘PowerShell for Office 365 Administrators’. The full schedule is as follows:

Time Speaker Topic
9:00 Welcome
9:15 – 10:30 Mark van de Waarsenburg PowerShell basics
10:30 – 10:40 Coffee
10:40 – 11:25 Erik Heeres PowerShell Remoting
11:30 – 12:15 Jaap Brasser [MVP] Manage your infrastructure with PowerShell
12:15 – 13:15 Lunch
13:15 – 14:00 Robert Prust Improving your scripts
14:00 – 14:45 Sander Stad DBAtools – PowerShell and SQL Server Working Together
14:45 – 15:00 Coffee
15:20 – 16:05 Ralph Eckhard PowerShell for Office 365 Administrators
16:10 – 16:45 Jeff Wouters [MVP] Tips and tricks

Want more info, or to order (free!) tickets? Go to http://dupsug.com/2017/07/14/dupsug-presents-dupsug-basics-part-deux/. Be quick, because there aren’t many tickets left!

Getting VPN logging from Azure

Today, I was working with a customer who had some issues with a VPN connection from Azure to his on-premises environment. After checking the configuration, everything seemed to be okay. I decided to run some logging on the VPN connection from the Azure side, to see if I could pinpoint the issue. For that, I usually use the following script, of course after logging in to Azure in PowerShell using add-azureaccount:
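A sketch of that script, using the classic (ASM) Azure module; the selections are made interactively via Out-GridView:

```powershell
# Select the subscription to work in
$subscription = Get-AzureSubscription | Out-GridView -Title 'Select subscription' -PassThru
Select-AzureSubscription -SubscriptionName $subscription.SubscriptionName

# Select the storage account that will receive the diagnostics output
$storageAccount = Get-AzureStorageAccount | Out-GridView -Title 'Select storage account' -PassThru
$storageKey     = (Get-AzureStorageKey -StorageAccountName $storageAccount.StorageAccountName).Primary
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccount.StorageAccountName -StorageAccountKey $storageKey

# Select the vNet and start 300 seconds of gateway logging
$vnet = Get-AzureVNetSite | Out-GridView -Title 'Select vNet' -PassThru
Start-AzureVNetGatewayDiagnostics -VNetName $vnet.Name `
                                  -StorageContext $storageContext `
                                  -CaptureDurationInSeconds 300
```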

This script prompts to select your Azure Subscription, Storage Account and vNet, and then starts 300 seconds of logging on the vNetGateway in that vNet, writing the output of these logs to the Storage Account you specified.

However, when the start-azurevnetgatewaydiagnostics cmdlet was run, I received an error that the StorageContext could not be read. After some Google searches, I found out I was running into this issue. As discussed there, uninstalling the Azure module and reinstalling it using the 3.8.0 version solved my issue and I could start the logging.

Once done, you can retrieve the logged data using the following code:
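Something along these lines (a sketch; the output path is an example):

```powershell
$logUrl     = (Get-AzureVNetGatewayDiagnostics -VNetName $vnet.Name).DiagnosticsUrl
$logContent = (Invoke-WebRequest -Uri $logUrl).RawContent
$logContent | Out-File -FilePath 'C:\temp\vpnlog.txt'
```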

In my case, it turned out there was a mismatch in the preshared key between the Azure side of the VPN connection and the on-premises firewall.

If you want to retrieve the PSKs of all VPN connections in a specific vNet, you can do that using PowerShell.
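A sketch of how that could be done with the classic module:

```powershell
$subscription = Get-AzureSubscription | Out-GridView -Title 'Select subscription' -PassThru
Select-AzureSubscription -SubscriptionName $subscription.SubscriptionName

$vnetName = Read-Host -Prompt 'vNet name'

# Get the PSK for every local network site connected to this vNet
Get-AzureVNetConnection -VNetName $vnetName | ForEach-Object {
    Get-AzureVNetGatewayKey -VNetName $vnetName -LocalNetworkSiteName $_.LocalNetworkSiteName
}
```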

This prompts for your subscription and vNet name, and outputs the PSKs of all VPN connections in that vNet.

365Tools is in the PowerShell Gallery

After my previous post about my ‘open-msolconnection’ function, I decided that it would be nice if I grouped all my commonly used PowerShell scripts for managing Office 365 in a single module, so I could publish it to the PowerShell gallery… So here it is!

As of today, the 365Tools module is available from the PowerShell Gallery. Of course, I’ve also created a GitHub repo for maintaining the whole thing.

Currently, the module just includes the open-msolconnection function and a function I use for reporting on mailbox sizes, licensing status, etc.

You can install the module directly from the gallery by using one simple line of code:
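That one line of code:

```powershell
Install-Module -Name 365Tools
```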

After that, you can see the commands that are made available through the module:
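For example, the standard way to list a module’s exported commands:

```powershell
Get-Command -Module 365Tools
```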

Enjoy 🙂 If you have any issues or questions regarding the module, please leave a reply to this blog post.

Connect to O365 PowerShell with one command

For a few years, I’ve been using a simple script to connect to Office 365 using PowerShell. Instead of typing all the commands needed to connect to both Azure AD and Exchange Online, I use a small piece of code placed in my PowerShell profile to do all this for me. I simply type open-msolconnection from my PowerShell prompt, specify my username and password, and that’s it.
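For reference, a minimal sketch of what such a function could look like (this is not the actual code of the function; that lives in the GitHub repo linked below):

```powershell
function Open-MsolConnection {
    $credential = Get-Credential

    # Azure AD (MSOnline)
    Connect-MsolService -Credential $credential

    # Exchange Online remote PowerShell
    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
                             -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
                             -Credential $credential `
                             -Authentication Basic `
                             -AllowRedirection
    Import-PSSession $session
}
```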

I’ve received some questions about this little function, so I decided to share it on GitHub. You can find it here. Enjoy!