Getting Intune device config PowerShell scripts via the Graph API

For a few months now, it has been possible to upload PowerShell scripts to Intune as part of your device configuration policies. These scripts are then pushed to the linked Windows devices and run either under the SYSTEM account or as the logged-on user.

While working on an Intune deployment, I wanted to check the PowerShell scripts that are currently in use, and found out you can’t do that through the portal. You can change the properties of the script and upload a new file, but can’t view the current script.

Looking for a way to make the script visible, I started playing around with the Graph API, to see if we can do it via this route. Spoiler: we can! 🙂

First of all, we need to authenticate to the Graph API. There is some great example code on the Microsoft Graph GitHub pages that explains how to do this, so I won’t go into any detail here. The scriptblock I use to authenticate results in an $authHeader hashtable that we can include in our REST calls to the Graph.
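The end result is just a hashtable with the bearer token; roughly something like this ($accessToken comes out of that authentication scriptblock):

```powershell
# Result of the authentication scriptblock: a header hashtable for the Graph calls
# ($accessToken is obtained through the sample code mentioned above)
$authHeader = @{
    'Content-Type'  = 'application/json'
    'Authorization' = "Bearer $accessToken"
}
```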

First, I set a few variables in my script that I can re-use in my calls:
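Something along these lines (the variable names are just my own):

```powershell
# Re-usable pieces for the Graph calls
$graphApiVersion = "beta"
$resource        = "deviceManagement/deviceManagementScripts"
```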

We need to use the beta version of the API, because the resources we need (deviceManagement/deviceManagementScripts) are not exposed in the current stable version.

So, let’s make our first call to the API to see what results we get back.

We set the URI we want to call, using the API version and resource specified earlier. Next, we call this URI with the Invoke-RestMethod cmdlet, including the authentication header we retrieved at the beginning. We use the ‘GET’ method because we want to retrieve data. Because we set the resource to deviceManagementScripts, the response will include the deviceManagementScripts currently in use.
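A minimal version of that call looks something like this:

```powershell
$uri = "https://graph.microsoft.com/$graphApiVersion/$resource"

# GET the deviceManagementScripts resource, passing the authentication header
$response = Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Get
$response
```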

The response is a PSObject with several properties. Of course, we are most interested in the ‘value’ property, as it holds the actual data we are looking for. So, let’s rewrite our line of code to get just the ‘value’.
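For example:

```powershell
# Only return the 'value' property, which holds the actual scripts
(Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Get).value
```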

This returns the actual deviceManagementScripts that are currently in use.

In my case, this is only one script that apparently is used to redirect certain folders to OneDrive.

By referencing the id for this script in our API call, we can get more information on this particular object.

Again, we use the ‘GET’ method to retrieve the information from the Graph API. Because we are referencing this single object, we no longer need to explicitly select the ‘value’ property to get the actual data.
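A sketch of that call (the id placeholder is illustrative; use the id from the previous output):

```powershell
$scriptId = "<id-from-previous-output>"
$uri      = "https://graph.microsoft.com/$graphApiVersion/$resource/$scriptId"

# GET a single deviceManagementScript by its id
$script = Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Get
$script
```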

For ease of reading, I truncated the ‘scriptContent’ property in this output a bit, but as you can see we can retrieve all the information we have in the portal: the description, the runAsAccount (which can be either ‘user’ or ‘system’), whether a signature check is enforced, and of course the filename of the script and its actual content.

The content of the script is stored (and displayed) as a Base64-encoded string. To make it human-readable, we need to decode it.

First, we put the Base64-encoded string in the $script64 variable. Next, we decode the Base64 string to UTF-8 and store it in the $decodedscript variable.
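In code, that looks something like this:

```powershell
# Grab the Base64-encoded script content from the object we just retrieved
$script64 = $script.scriptContent

# Decode the Base64 string to readable UTF-8 text
$decodedscript = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($script64))
```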

When we now display this $decodedscript variable, we see the contents of the script!

Again, I truncated the output for readability here. Since we have this variable, we can write its contents to a file and save the script to our local hard drive.
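For example (the target path is just an example):

```powershell
# Save the decoded script to disk
$decodedscript | Out-File -FilePath "C:\Temp\IntuneScript.ps1"
```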

Pretty neat, right? But if we can use this to download a script, why wouldn’t we be able to upload a script this way as well? Let’s check. Microsoft’s documentation is pretty clear: we call the same deviceManagementScripts resource, but with a POST method instead of a GET. In this POST we need to include a JSON body with the details of the script we would like to create, including the actual content of the script in the same Base64 encoding we saw earlier.

So, let’s put the building blocks together. I’ve created a mind-blowing PowerShell script that I stored as c:\temp\testscript.ps1.

We get the content of this file and then encode it using Base64.
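Something like this (the -Raw switch reads the file as a single string):

```powershell
# Read the script and convert it to a Base64-encoded string
$uploadScript        = Get-Content -Path "C:\temp\testscript.ps1" -Raw
$UploadScriptEncoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($uploadScript))
```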

So now we have the $UploadScriptEncoded variable, containing the Base64-encoded script. Next, we need to build the JSON body to include in the POST to the REST API. I do this by creating a hashtable with all the needed information and piping it to the ConvertTo-Json cmdlet.

In the hashtable I specify the display name for the script and a short description, include the scriptContent we just encoded, specify that it should run in the user context, and indicate that we don’t want to check for a valid signature. Finally, we give the filename for the script, which will be displayed in the Intune portal where the script is referenced.
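The hashtable could look roughly like this; the display name, description and file name are just examples, and the property names match what we saw in the GET response earlier:

```powershell
$postbody = @{
    displayName           = "My test script"              # name shown in the Intune portal
    description           = "Uploaded via the Graph API"  # short description
    scriptContent         = $UploadScriptEncoded          # the Base64-encoded script
    runAsAccount          = "user"                        # run in the user context
    enforceSignatureCheck = $false                        # no signature check
    fileName              = "testscript.ps1"              # file name shown in the portal
} | ConvertTo-Json
```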

To finish up, we call the API with the given parameters to do the actual uploading.

We call the URI specified earlier with the POST method, including our authentication header. We pass the JSON we stored in $postbody as the body of the request, specifying that the content type is indeed JSON.
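Putting that together in one call (a sketch):

```powershell
$uri = "https://graph.microsoft.com/$graphApiVersion/$resource"

# POST the JSON body to create the new deviceManagementScript
Invoke-RestMethod -Uri $uri -Headers $authHeader -Method Post -Body $postbody -ContentType "application/json"
```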

The response indicates that the file was uploaded and configured.

We can now check the Intune portal to double check if the script is there.

There you have it: using the Graph API you can do things you can’t do in the portal and automate many things you do in Intune. For example, you can place the PowerShell scripts you deploy through Intune in a repository in (for example) VSTS and create a build sequence that uses the Graph API to update the script in Intune every time you push to master. If you haven’t played around with the Graph API, now is the time to do so. The possibilities are endless.

Happy scripting!

 

Microsoft Authenticator to support account backup & recovery

Good news for everyone who, like me, uses the Microsoft Authenticator app for all their multi-factor authentication needs: a much-requested feature will soon be available!

Microsoft announced that they will soon start rolling out account backup and recovery functionality for their authenticator app. This way, when you switch devices, you won’t need to reconfigure all your account credentials on the new device.

The Microsoft Authenticator app beta for iOS already supports this feature, so I went ahead and configured the backup functionality.

 

The backup is encrypted using your personal Microsoft account and then stored in iCloud. Because building the foundation on iCloud storage simplified the development process, Microsoft is starting the roll-out on iOS devices in the next few weeks. After that, the feature will become available in the Android app too.

More information, and a form to sign up for the beta-release of the Authenticator app for iOS, can be found here.

Saving MS Forms responses to SharePoint

Last Friday I had the privilege of speaking at a Dutch meetup on Office 365 adoption. It really was a nice experience: a small group of people sharing the love for Office 365 and talking about what makes the platform so much fun to work with, but also what the pain points are.

My talk was about the way we used Microsoft Forms and PowerBI to organise our company skiing trip. Instead of sending out calendar items in Outlook and afterwards asking everyone to email details like contact information and dietary requirements, we decided to use Microsoft Forms for this inventory. Forms is one of the lesser-known components in Office 365, but you can do some real magic with it. We created a form to collect responses from colleagues about whether they would be joining us on the trip. Using the branching feature, you can ‘guide’ people through the form. When people select the option that they will be joining, they are asked what they would like to eat. If they select the option that they won’t be joining, they are asked about their reasons, so we can see if we need to adjust something to get more people joining us on the next trip.

 

The forms render great on both regular and mobile devices, so you can just send out the link to your colleagues, or even have a QR code generated that you can display around your office so people can access the form from there.

As the creator of the form, you can view the results from the dashboard or download them as an Excel file. If you want more insight, however, you might want to add some extra functionality. For example, I like to have insight into why people won’t be joining, mapped against their job role. Do people who mostly work remotely tend to join less often? Of course, PowerBI is the right tool for the job. But because the results can only be downloaded as an Excel file, setting this up can be cumbersome: after each response, you would have to re-download the file to import the new results into PowerBI.

After some testing, we decided to go with a more robust solution: storing the results in a SharePoint list, so we can dynamically get the data from that list. After creating both the form and the list, we set up a task in Microsoft Flow to add new responses to this list.

Flow is one of my favourite tools in Office 365, because it gives you the ability to interconnect almost everything with easy ‘what you see is what you get’ logic. A flow consists of a trigger (something that starts the flow) and one or more actions. These actions can contain ‘dynamic content’: content that is determined by the earlier trigger or actions. For example, when creating a flow from a Microsoft Form, you can use the content supplied in the form in the following actions. In plain terms, our flow contains a trigger (a new response was submitted to our form), a first action (get the details of this response) and a second action (insert these details into a SharePoint list).

So, what does this look like inside Flow? When creating a flow, we first have to define the trigger. In this case, we use the Microsoft Forms connector and define the trigger as a new response being submitted to our form.

Here, the Form ID is the name we gave our form when creating it.

After the trigger, we need to define an action. We want to use the dynamic content with the responses from our form to insert into the SharePoint list, but the dynamic content from this trigger only includes the response id, the unique id of this entry. Therefore, we can’t insert this directly into the list, but we can use the response id to fetch the additional details of the response.

Using this action, we get the response details for each of the submitted responses. The next step is to import the responses into our SharePoint list.

When setting the action to ‘Create item’ in the SharePoint connector, we enter our site address and select the list we would like to create the item in. Flow then reads the list and populates the action with all the columns in the list. We can then place the dynamic content from the form to fill out the columns.

The result is a SharePoint list that will dynamically update when responses are submitted. In the flow interface, we can watch the results come in!

So, from here on you can do almost anything you like with the data. We used PowerBI to aggregate the data from the list to create a visual dashboard. Of course, you can choose to display this dashboard in the Microsoft Teams you use within your organisation, for example.

There you have it: all the data you need, in a nice format, automatically updating dashboards, and you didn’t even need to hire a developer to get them!

New in OneDrive: File Restore

A few days ago, Microsoft announced a new feature for the Office 365 Suite, specifically within OneDrive: the ability to restore files as a user.

When you navigate to your OneDrive page and click the settings-icon, you can select ‘Restore OneDrive’.

After that, it’s pretty straightforward. There is a great instruction on the OneDrive blog, so I won’t be going into detail here 🙂

The feature is currently rolling out across all tenants and should be globally available by mid-February.

Deleting rogue mailbox folder permissions using PowerShell

Yesterday, I wrote a little post about analyzing your hybrid migration logs using PowerShell. In the case I showed in that post, the large number of BadItems that caused my move to fail turned out to be rogue permissions on mailbox folders in the user’s mailbox. These permissions were granted to users that no longer exist in the directory, so they cannot be moved to Exchange Online, causing the move to fail.

So… How do we remove these permissions? Well, with PowerShell of course 😉 I wrote up a quick script that checks for rogue permissions on a given mailbox and then removes them. The script is tested only in my environment, so if you want to use or adapt it, please be careful.

First, we need to get a list of all folders in a mailbox, so we can check the permissions on those folders. Unfortunately, Get-MailboxFolder only works if you’re querying a mailbox you’re the owner of. You can’t use this cmdlet as an administrator to check other people’s mailboxes. But we can use Get-MailboxFolderStatistics as a workaround; we just need to make sure we only select the output we need.
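A minimal example (the mailbox alias is illustrative):

```powershell
$mailbox = "jdoe"   # alias of the mailbox to check

# Get all folders in the mailbox; we only need the folder path
$folders = Get-MailboxFolderStatistics -Identity $mailbox | Select-Object -ExpandProperty FolderPath
```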

This gives us a list of the folders in the given mailbox. We can then use this list to check all those folders for any rogue permissions. If you investigate the permissions on a mailbox folder, you’ll see that the ‘User’ attribute for these rogue permissions contains the user’s SID instead of the username. As all of these entries start with ‘NT:’, we can use this to filter out the rogue permissions.
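For a single folder, that filter could look like this (the Inbox is just an example):

```powershell
# Permissions for orphaned users show up with a 'User' value starting with 'NT:'
$roguePermissions = Get-MailboxFolderPermission -Identity "$($mailbox):\Inbox" |
    Where-Object { $_.User -like "NT:*" }
```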

We now have a list of folders and the corresponding invalid permissions. It’s fairly easy to delete those with the Remove-MailboxFolderPermission cmdlet.
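Roughly like this:

```powershell
# Remove each rogue permission entry from the folder
$roguePermissions | ForEach-Object {
    Remove-MailboxFolderPermission -Identity "$($mailbox):\Inbox" -User $_.User -Confirm:$false
}
```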

So now for the cool part: putting all those puzzle pieces together to create one script. It’s fairly simple, using two foreach-loops: one to loop through all the folders for a mailbox to get the incorrect permissions, and another one to loop through all the rogue permissions to actually remove them.

The nasty part is creating a correct list of folders to query the permissions on. The list of folders from the Get-MailboxFolderStatistics cmdlet contains only folder names, using a forward slash (/) to separate the folders, while the Get-MailboxFolderPermission cmdlet expects the folder path to use backslashes (\) and to include the name of the mailbox followed by a colon (:). To work around this, I build a $folderpath variable combining the alias, a colon and the folder path from Get-MailboxFolderStatistics, using the -replace operator to replace all forward slashes with backslashes.

To top it all off, I do some filtering on the Get-MailboxFolderStatistics output to exclude some folders. These are folders (like ‘Top of Information Store’) that will generate an error if you try to query their permissions.

The entire script then ends up looking like this:
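A sketch of how that could look; the mailbox alias and the list of excluded folder types are illustrative, so adjust them to your own environment:

```powershell
$mailbox = "jdoe"   # alias of the mailbox to clean up

# Folder types (like the 'Top of Information Store' root) that error out when you query their permissions
$excludedFolderTypes = @("Root", "RecoverableItemsRoot", "RecoverableItemsDeletions",
                         "RecoverableItemsPurges", "RecoverableItemsVersions")

# Get the folder paths, skipping the excluded folder types
$folders = Get-MailboxFolderStatistics -Identity $mailbox |
    Where-Object { $excludedFolderTypes -notcontains $_.FolderType } |
    Select-Object -ExpandProperty FolderPath

foreach ($folder in $folders) {
    # Build a path the permission cmdlets understand: alias, colon, and backslashes instead of forward slashes
    $folderpath = $mailbox + ":" + ($folder -replace "/", "\")

    # Find permissions that reference an orphaned SID
    $roguePermissions = Get-MailboxFolderPermission -Identity $folderpath |
        Where-Object { $_.User -like "NT:*" }

    foreach ($permission in $roguePermissions) {
        # Remove -WhatIf once you are sure this does what you want
        Remove-MailboxFolderPermission -Identity $folderpath -User $permission.User -WhatIf
    }
}
```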

Of course, if you’d like to run this in your own environment, be careful and make sure you know what you’re doing. If you are really, really sure it will be okay, remove the -WhatIf parameter from the Remove-MailboxFolderPermission line and have fun.

Happy scripting!

Analyzing hybrid migration logs using PowerShell

While working on migrating a customer from an on-premises Exchange environment to Exchange Online, I ran into some problems with a few mailboxes.

In this case, there were three mailboxes that would fail the first (staging) sync from on-prem to ExO, due to the infamous ‘bad item limit reached’ error. So I increased the bad item limit for these mailboxes and resubmitted the move requests. After some time, the migration failed again with the same error: the number of bad items had increased to above the limit I had set before. Time to do some further digging. First, I’ll filter the move requests to see which ones actually failed.

I get the move requests that have a status of ‘failed’, get the statistics for those requests and load them into the variable $statistics.
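In code, roughly:

```powershell
# Get the statistics for all failed move requests
$statistics = Get-MoveRequest |
    Where-Object { $_.Status -eq "Failed" } |
    Get-MoveRequestStatistics
```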

Let’s see what the current number of ‘bad items’ is for these mailboxes.
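For example:

```powershell
# Compare the configured limit with the number of bad items encountered
$statistics | Select-Object DisplayName, BadItemLimit, BadItemsEncountered
```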

An example from the output for one of the three mailboxes (please note that part of the displayname is hidden in this picture):

As you can see, I had previously set the bad item limit to 700, but the migration encountered 788 bad items and therefore failed. I always expect some bad items during these migrations, but this sure is a lot. Where do all these errors come from? To find out, we have to take a look at the actual migration report.

Because I was looking at the third mailbox in my list of failed mailboxes, I’ll request the statistics for this mailbox, including the migration report.
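A sketch of that call (it assumes the display name is unique enough to use as the identity):

```powershell
# Third mailbox in the list, so index 2; -IncludeReport adds the full migration report
Get-MoveRequestStatistics -Identity $statistics[2].DisplayName -IncludeReport
```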

This returns a huge wall of text, including all the errors that were encountered moving the messages. One of the last lines is the last failure recorded in the move request.

Of course, you can export this report to a text file and go through the items to find the root cause. Personally, I find it easier to export the report to an XML file, so I can use PowerShell to do some further digging.

With this cmdlet, I take the statistics for the given user, including the report, and export them to the given file. Next, I can import this XML file into an object in PowerShell.
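Roughly like this (the file path is just an example):

```powershell
# Export the statistics, including the report, to an XML file...
Get-MoveRequestStatistics -Identity $statistics[2].DisplayName -IncludeReport |
    Export-Clixml -Path "C:\Temp\MoveReport.xml"

# ...and read it back in as an object
$report = Import-Clixml -Path "C:\Temp\MoveReport.xml"
```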

I now have the $report variable, which holds the contents of the XML file with the migration report. I can now navigate through this report just like any other object in PowerShell. The ‘LastFailure’ entry I mentioned earlier, for example, is in fact an entry in the report.
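For example:

```powershell
# The last failure recorded for the move request
$report.LastFailure
```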

So, can we extract some actual info on these bad items from the report? We can. The encountered failures are located in the actual report, in the Failures section.
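So we can simply ask for them:

```powershell
# All failures encountered during the move
$report.Report.Failures
```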

Again, I obfuscated the folder name in this screenshot. This is just part of the output from the above command; all encountered errors will be listed in the output.

So, let’s see if we can find some common denominator in these errors. I’d like to see all errors, but just a few properties for each error.

Because there is no index number for the entries, I add one manually. That way, I can always look up a specific error by referencing its number. As arrays start counting at zero, I do the same for my index number. For each error in the file, I then select the index number, the timestamp, the failure type and the error message. At the end of each iteration, I increase the index number by one, so the next error will have the correct index.
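A sketch of that loop (the property names follow the failure entries as they appear in the report):

```powershell
$i = 0
foreach ($failure in $report.Report.Failures) {
    # Output a small summary object for every failure
    [PSCustomObject]@{
        Index       = $i
        Timestamp   = $failure.Timestamp
        FailureType = $failure.FailureType
        Message     = $failure.Message
    }
    $i++
}
```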

For the mailbox in our example, this gives the following output:

So there you have it: it seems the mailbox has some items with access rights mapped to a non-existing user. Of course, we can check this from the Exchange Management Shell. In this case, some of the errors referenced items in a subfolder of the ‘Verwijderde items’ folder, which is Dutch for ‘Deleted Items’. So, I’ll get the folder permissions for this folder.
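Something like this; the mailbox and subfolder name are illustrative, since the real ones are obfuscated:

```powershell
Get-MailboxFolderPermission -Identity "jdoe:\Verwijderde items\Subfolder"
```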

And indeed, it shows a lot of non-existing, previously deleted users.

So in this case, I can resolve the issue by removing the legacy permissions and restarting the job. You can also decide, after reviewing the report, to restart the job with the ‘BadItemLimit’ parameter increased to a number high enough not to cause the move request to fail: these errors indicate that although the permissions will not be migrated, the items themselves will be copied to Exchange Online, so no data will be lost.

In conclusion, you can see why I prefer to review the errors in an Exchange hybrid migration using the Export-Clixml cmdlet. It is a much more convenient way to navigate through all the errors and get a complete view of the issues.


Teams guest access: user experience

Recently, a long-awaited feature in MS Teams was released: access for guests from outside your tenant. But how does this work? I took it for a test drive 🙂

I started off by logging in to MS Teams on my 365dude.nl tenant.

From here, I tried adding my business account as a guest to the team. Unfortunately, that account was not recognized, so I couldn’t add it to the team.

I decided to go to the Azure AD control panel, to add the account from there.

 

 

After doing so, I receive an email on my business account to welcome me as a guest to the tenant.

After completing the invite, I am able to add my business account to the team as a guest from within the Teams application. For example, I can @-mention the account just like I would with internal users.

When I start the Teams app and log in using my business account, I see both tenants and can switch between the two.

After switching, I can use the tenant just like I would as a normal user, for example by viewing contact information or replying to messages.

As a final test, a few days later I decided to add someone who was not previously known as a guest in my tenant. This time, probably due to an update of the Teams application, I could just type the e-mail address and add the guest that way. No need to revert back to the Azure AD portal!

So there you go. Adding guests to your MS Teams team to improve collaboration is as easy as that! If you feel the need to make the display names of your guests a little more appealing, you can do so by simply editing the guest user object in the Azure AD Portal.

Update to the 365Tools PowerShell Module

Earlier this week, I decided to add a new function to the 365Tools PowerShell module.

This Get-MSOLIPRanges function prompts you to select one or more Office 365 products and then provides you with the IP ranges used by those products, so you can whitelist these addresses in your firewall if you need to.

It started off as a quick write-up, but thanks to the help of Robert (Twitter) the code was cleaned up and is ready for you to use.

You can find the 365Tools module on the PowerShell gallery, so you can simply install it by running Install-Module 365Tools. The entire code for the module can be found on GitHub.
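A quick usage example (assumed; the function itself will prompt you for the products):

```powershell
# Install the module from the PowerShell Gallery and run the new function
Install-Module -Name 365Tools
Get-MSOLIPRanges
```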

 

Whoomp, there it is: guest access for MS Teams!

It’s been a long-awaited feature for the app that should be Microsoft’s answer to Slack: collaborating with users from outside your organization in Teams. While the feature was announced earlier, it was postponed moments before its initial launch date. But it’s finally here: in this blog post the general manager for Teams announces guest access for Teams.

Of course, for an app that aims to facilitate collaboration and fight shadow IT, security should be a big priority when rolling out external access. Microsoft accomplished this by leveraging Azure AD B2B Collaboration. This enables, for example, conditional access policies to be applied to guest accounts.

Along with this major feature, new developer tools for MS Teams have been announced on the MS Dev Blog.

DuPSUG Basics – Part Deux

On 19 September 2017, the Dutch PowerShell User Group is organising another ‘DuPSUG Basics’ event. The first time such a day was organised was on 22 March last year. That very well-attended edition apparently left people wanting more, because we still regularly get asked when the second edition will be held. On Prinsjesdag (the Dutch Budget Day), as it turns out!

In total, there are seven sessions on this day by seven different speakers (including two MVPs), covering a wide range of topics such as SQL and Office 365. I will be presenting the session ‘PowerShell for Office 365 Administrators’. The full schedule is as follows:

Time Speaker Topic
9:00 Welcome.
9:15 – 10:30 Mark van de Waarsenburg PowerShell basics.
10:30 – 10:40 Coffee break
10:40 – 11:25 Erik Heeres PowerShell Remoting.
11:30 – 12:15 Jaap Brasser [MVP] Manage your infrastructure with PowerShell.
12:15 – 13:15 Lunch
13:15 – 14:00 Robert Prust Improving your scripts.
14:00 – 14:45 Sander Stad DBAtools – PowerShell and SQL Server Working Together.
14:45 – 15:00 Coffee break
15:20 – 16:05 Ralph Eckhard PowerShell for Office 365 Administrators.
16:10 – 16:45 Jeff Wouters [MVP] Tips and tricks.

Want more info, or want to order (free!) tickets? Go to http://dupsug.com/2017/07/14/dupsug-presents-dupsug-basics-part-deux/. Be quick, because there aren’t many tickets left!