The OneDrive admin portal is in preview

Microsoft recently announced that the OneDrive admin portal is available in preview.

Through this portal, you can control how your users share files, sync files and more.

A quick run-through:

On the ‘sharing’ menu, you can control whether users can share SharePoint and OneDrive content with external users. You can also set the default type for sharing links.

For anonymous access links, you can set if and when these links expire. Furthermore, you can block or allow external sharing with certain domains and set whether external users can share your content with others.

The next tab is the sync tab, where you can allow or disallow the synchronization of certain files and set whether users can download the sync client.

If you want to limit the amount of storage available for users, you can use the settings on the storage tab.

As you can see, you can also control how long files are retained in OneDrive after a user account is deleted.

On the devices tab, you can set device access rules. I expect some integration with Intune to appear here in the future, so you can manage these rules from one simple interface.

The final tab is the compliance tab, which simply provides links to the respective settings in the Office 365 admin portal.

Want to try out the OneDrive admin portal yourself? It’s currently being rolled out in preview. Visit https://admin.onedrive.com and log on with your tenant administrator’s credentials to check it out!

Pass-through authentication and SSO

In an earlier blog post I wrote about the new ‘pass-through authentication’ feature that is in public preview in the new Azure AD Connect client.

One of the most common reasons to use ADFS in an Office 365 setup is that it allows you to do single sign-on. You only have to authenticate once, when you log on to your domain-joined device, and your Kerberos ticket is used to authenticate you against Azure AD and therefore Office 365.

With the pass-through authentication feature, you get the same benefits. Because the PTA agent authenticates you against your on-premises Active Directory, you can use PTA for single sign-on as well. So let's take this setup for a test drive!

The basis for this test is the setup from my previous blog: an on-premises AD with PTA enabled in Azure AD Connect, combined with the 365Dude Office 365 tenant. I added a Windows 10 machine to the mix and joined it to the on-premises AD domain. I use the Mr. Dude account to log on to this machine.

There are a few prerequisites to take into account when doing SSO, mainly regarding the OS and browser used. A Windows machine is required, as Kerberos is used for the underlying authentication. On Windows 10, 8.1 and 8 clients, Internet Explorer, Chrome and Firefox are supported, while Edge isn't. Older Windows versions aren't supported. When installing Azure AD Connect with the PTA / SSO option, a computer account is created in AD to handle your authentication requests.

First, we need to make sure that the Office 365 tenant is configured to allow OAuth. We do this with a single line of PowerShell:
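Assuming that single line enables modern authentication (OAuth2) for Exchange Online, it would look something like this; the session setup is only there to make the sketch self-contained:

```powershell
# Connect to Exchange Online via remote PowerShell (2016-era syntax).
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential (Get-Credential) -Authentication Basic -AllowRedirection
Import-PSSession $Session

# The actual one-liner: allow OAuth2 (modern authentication) on the tenant.
Set-OrganizationConfig -OAuth2ClientProfileEnabled $true
```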

Furthermore, we need to add the Microsoft authentication servers to the Intranet zone in the browser's security settings. They need to be explicitly added to the machine's Intranet zone, so that the browser will automatically send the logged-in user's credentials, in the form of a Kerberos ticket, to Azure AD. The easiest way to add the required URLs to the Intranet zone is to create a group policy in Active Directory.

There are two URLs that need to be configured here:

https://autologon.microsoftazuread-sso.com

https://aadg.windows.net.nsatc.net

When done, the GPO should contain both URLs, assigned to zone 1 (Intranet).
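If you'd rather script this for a single test machine, you can also write the same zone assignments directly to the registry location that the ‘Site to Zone Assignment List’ policy uses. A quick sketch; use the real GPO in any domain:

```powershell
# Assign both SSO endpoints to the Local Intranet zone (zone 1) by writing
# to the policy key behind the 'Site to Zone Assignment List' GPO setting.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey'
New-Item -Path $key -Force | Out-Null

# Value name = URL, value data = zone number (1 = Local Intranet).
New-ItemProperty -Path $key -Name 'https://autologon.microsoftazuread-sso.com' `
    -Value '1' -PropertyType String -Force | Out-Null
New-ItemProperty -Path $key -Name 'https://aadg.windows.net.nsatc.net' `
    -Value '1' -PropertyType String -Force | Out-Null
```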

That's it! After configuring our tenant and adding the URLs to the Intranet zone, you can ‘single sign-on’ to your Office 365 services.

So, what’s the user experience here? Let’s check it out.

After logging on to the machine, we open IE and navigate to the https://outlook.office.com URL.

We are prompted to enter a username and password. After we enter the username and move to the password field, the computer checks for a Kerberos ticket for this username. If one exists, the client is automatically logged on without having to enter the password.

Well, it’s a bit hard to capture that on a screenshot. But you catch my drift, right? 😉 When the login completes, we just find our Outlook for the Web logged on and ready to use.

So that's the webmail part. Most of my clients, however, prefer to use Outlook as part of the Office suite for their e-mail work. How does SSO work there? Well, almost the same way.

A prerequisite for SSO to work with client apps is that the apps support modern authentication. For our Windows-based clients, that means Office 2013 and 2016. For this test, I used Office 2016.

When first starting Outlook, we are prompted to connect to Office 365.

So let's do just that. When we click the ‘connect’ button, we get a login screen equal to the one being used in the Outlook for the Web interface.

And there we are. After checking the username and moving to the ‘password’ field, the dialog box checks whether a Kerberos ticket exists for this account and, when it does, uses it to log on to the Office 365 service. No further user interaction is required; It Just Works.

So, it's a wrap. This single sign-on functionality provides you with all the ease of use you get with ADFS, without the need for a complete ADFS infrastructure. Of course, when you plan on (or need to be) running ADFS in your environment for single sign-on with other applications, such as CRM or ERP software, there is no point in using PTA with SSO for Office 365; you just take the ADFS route. If, however, you want to provide your users with the same experience without the infrastructure, PTA with SSO can be a great alternative!


AAD Pass-through authentication checked out

Exactly one week ago, Alex Simons announced what he called ‘the biggest news of the year’: the public preview of Azure AD Pass-through authentication and seamless single sign-on.

I tweeted something about how this could be the beginning of the end of ADFS for Office 365, but is it? Time to take these new options for a test drive!

First, some side notes. The new functions are in public preview, which means they're ‘beta’ and there is no SLA. You should therefore not use them in production environments. But I guess you're smart enough to know that.

So, for the test I created a simple AD domain, 365dude.local, with one user: Mr. Dude. This is the user I will be syncing to my existing Office 365 tenant that houses 365dude.nl. To make synchronization easier, I added 365dude.nl to the list of UPN suffixes in my AD domain and changed the UPN of this user to mrdude@365dude.nl.

I enabled the dirsync functionality on my Office 365 tenant using PowerShell:
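With the MSOnline module, that comes down to something like:

```powershell
# Enable directory synchronization on the tenant (MSOnline / Azure AD v1 module).
Connect-MsolService
Set-MsolDirSyncEnabled -EnableDirSync $true
```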

After that, I started installing the Azure AD Connect tool.

I guess you’re smart enough to know how to install AADConnect, so I’ll just focus on the new and shiny stuff here.

In the ‘user sign-in’ screen, we can select how we want to handle sign-in requests for our synced users. Of course we can use password sync or ADFS, but we'll choose the new ‘pass-through authentication’ option.

So, what does this new pass-through authentication mechanism do? It relies on a connector that runs in your local network. When you authenticate through the Office 365 portal (or a supported client app), your authentication request is sent to the connector over a secure channel. The connector then checks your credentials against your on-prem AD. So, no more syncing of password hashes to the cloud and using AAD as your authentication source: you authenticate directly against your on-prem AD. Just like you would with ADFS, but without the need for the complete ADFS infrastructure.

On the ‘optional features’ tab, you can still choose to do password sync. This way the on-prem passwords are still synced to AAD, so you can easily fall back to authenticating against AAD if you want to, because the passwords are already in place.


In this screenshot, I also selected the ‘password write-back’ option. To use this, you need an Azure AD Premium license.

You get some information on what the software will do, and the installation will continue.

Once completed, I can find the user from my on-prem AD in the active users overview of Office 365 and assign a license.

Remember: with pass-through authentication, when you sign in to Office 365 your authentication is routed through the connector in your local network and your credentials are verified against your on-prem AD. You can thus find event log messages for your Office 365 sign-in requests.

So, will this be the end of ADFS? In some ways it will, I guess. You can get the same functionality without the need for the complete ADFS infra. And of course, with the pass-through authentication mechanism in place, you don't need to sync your (hashed) passwords to the cloud to get the same sign-in experience, which could please your security officers.

On the other hand, many (larger) organizations already have an ADFS infrastructure in place for single sign-on with other LOB applications. If you already went down that road, you can simply use it to connect Office 365, too. Another disadvantage is that when you use directory synchronization to connect your on-prem AD to AAD, you'll need an on-prem Exchange Server to perform certain management tasks, such as adding aliases to a mail-enabled user. For smaller organizations, this can be a reason not to choose the directory sync option at all, be it with or without pass-through authentication. I expect this is something Microsoft will fix somewhere in the next year, but that's just a guess.

Want more information on this new (preview) functionality in AAD Connect and Office 365? Be sure to check out the official documentation.

Further developing the homelab script

My previous post was about the script I used for my presentation at Experts Live: (re)building your home lab using PowerShell. As it turns out, someone got inspired 😉 Sven already updated the script by adding a variable for specifying the gateway address for your new VM.

As we both have plans to develop this script further into a nice module with some extra functionality, I've decided to move the code to a separate repository on GitHub. This way, the demo code from Experts Live stays the same for future reference, and the project we're working on truly becomes a separate project.

If you'd like to join in on further expanding and developing this module, please feel free to contribute through GitHub!

Slides and code for Experts Live 2016

Last week I had the privilege to speak at Experts Live 2016. This has always been a great event to brush up on skills, gain some new knowledge and catch up with old and new friends from ‘the industry’.

I did a talk about automating the (re)building of your home lab environment, to make sure you can keep up with all the new developments in the Microsoft world. I promised to put my slide deck and code online, so I made them available on GitHub. I did a blog post on (sort of) the same subject a while ago; you can check that out here.

Just so you know: I know I’ve been slacking off a bit on writing new blog posts the last couple of months, but there are some new posts on their way. So keep coming back 😉

Building a home lab, the PowerShell way

I haven't posted here in a while, but for a good reason: I started a new job! As of May 1st, I started working as a Senior IT Consultant at Hands On.
Partly because of the job change, I wanted to build a home lab for testing and practicing for certification exams.
My lab runs on a sixth-generation Intel NUC with 32GB of RAM and a 500GB M.2 SSD. This box currently runs Windows 10 Pro, mainly because that means I can also use it as a workstation while working from home. Perhaps in the future I'll switch to running a server OS, with a workstation as a VM… Still thinking about what the best option is.
Anyway, the box runs Hyper-V, and because using it as a lab means I'll frequently be installing new servers or reinstalling existing ones, I wanted a workflow to speed up this process. And of course I will use PowerShell to do so 🙂 I'll share my setup and scripts in this post, but please note that this is (as always) a work in progress and by no means a perfect solution. However, because I received some requests to share my work, I decided to put it online already 🙂
First, a small look at the networking setup for Hyper-V. I wanted to isolate my guests from my main network. And because the NUC is such a nice, portable little box, I want the ability to take it with me and use it somewhere else without having to change my entire network configuration. To realize this, I decided to go with a NAT switch in Hyper-V. This way, the guests can use their own subnet and still have internet access through the host machine. For more information on this, check this out.
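For reference, setting up such a NAT switch comes down to something like this; the subnet is just an example, and the switch name matches the ‘CertLab’ switch referenced later in the script:

```powershell
# Create an internal switch, give the host an IP in the lab subnet,
# and NAT that subnet out through the host's network connection.
New-VMSwitch -Name 'CertLab' -SwitchType Internal
New-NetIPAddress -IPAddress 172.16.0.1 -PrefixLength 24 `
    -InterfaceIndex (Get-NetAdapter -Name 'vEthernet (CertLab)').ifIndex
New-NetNat -Name 'CertLabNAT' -InternalIPInterfaceAddressPrefix '172.16.0.0/24'
```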
To save on disk space for the servers, I wanted to use differencing disks. This way, there is one ‘main’ disk, with each VM having its own differencing disk containing just the changes / deltas relative to the main disk. So, I installed a plain Windows Server 2012 R2, installed all available updates and ran sysprep so I can use this as a base image.
After this, I copied the VHDX for this machine to a separate folder and marked it as read-only. Once it is used as a base disk for all other servers, changing this disk directly would break all dependent VMs, so the read-only flag is just a safety measure.
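In PowerShell terms (the paths are examples), that's simply:

```powershell
# Copy the sysprepped disk to a 'base images' folder and protect it
# against accidental writes; all differencing disks will point to it.
Copy-Item -Path 'D:\VMs\Template\2012R2.vhdx' -Destination 'D:\VMs\Base\2012R2-Base.vhdx'
Set-ItemProperty -Path 'D:\VMs\Base\2012R2-Base.vhdx' -Name IsReadOnly -Value $true
```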
After this, the groundwork for the lab is done. Now I'll need to build a PowerShell script to do the actual deployment.
I’ll break down the script piece by piece, to show what work is being done by the script.

This first part is where I define the variables that change whenever I run the script: the desired name for the VM, the IP address I want it to use and the IP address of the DNS server. In the future, I'll most likely turn this script into a function so I can supply them as parameters. For now, I'll just manually change the script each time I run it. Like I said: work in progress 😉
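Something along these lines (names and values are examples):

```powershell
# Per-deployment variables: change these for every new VM.
$VMName    = 'SRV01'
$IPAddress = '172.16.0.10'
$DNSServer = '172.16.0.1'
```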

These are static variables that contain things like file paths. These won't change each time the script is run, but I think setting them here is nicer than hardcoding them throughout the script. The ‘CertLab’ VM switch is the NAT switch I created earlier.
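For example (the paths are stand-ins for the ones in my script):

```powershell
# Static variables: file locations and the switch to connect to.
$ParentDisk = 'D:\VMs\Base\2012R2-Base.vhdx'
$VHDPath    = "D:\VMs\$VMName\$VMName.vhdx"
$Unattend   = 'D:\VMs\Templates\unattend.xml'
$SwitchName = 'CertLab'
```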

The real work starts with creating the VHD file for the new server. The disk is created as a differencing disk, with the base image I created earlier as the parent disk.
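That's a single cmdlet:

```powershell
# Create a differencing disk on top of the read-only base image.
New-VHD -Path $VHDPath -ParentPath $ParentDisk -Differencing
```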

This part of the script mounts the newly created VHD file for the new VM, and then copies an unattend.xml to this disk. I use some scripting to adjust the content of the XML file, changing the hostname, IP address and DNS server address for the new server based on the variables defined at the beginning of the script.
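A sketch of that part; the %PLACEHOLDER% tokens in the template are my own convention, so your unattend.xml may differ:

```powershell
# Mount the new disk and find its drive letter.
$disk  = Mount-VHD -Path $VHDPath -Passthru | Get-Disk
$drive = ($disk | Get-Partition | Get-Volume |
    Where-Object DriveLetter | Select-Object -First 1).DriveLetter

# Fill in hostname, IP and DNS in the unattend template and
# drop it where Windows setup will pick it up.
(Get-Content -Path $Unattend) `
    -replace '%HOSTNAME%',  $VMName `
    -replace '%IPADDRESS%', $IPAddress `
    -replace '%DNSSERVER%', $DNSServer |
    Set-Content -Path "$($drive):\Windows\Panther\unattend.xml"

Dismount-VHD -Path $VHDPath
```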


Here, I just create the new VM. I hardcoded the MemoryStartupBytes parameter in the script, but of course you can choose to specify it as a variable (or a parameter, if you create a function) to be able to change it easily.
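For example, with 2GB of startup memory (the actual value in my script may differ):

```powershell
# Create the VM, attach the prepared disk and connect it to the NAT switch.
New-VM -Name $VMName -MemoryStartupBytes 2GB `
    -VHDPath $VHDPath -SwitchName $SwitchName
```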

Finally, I simply start the new VM.
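Which is just:

```powershell
Start-VM -Name $VMName
```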

That's it! Using this script, I'm able to set up a new VM in my lab with just a few clicks. Like I said, it's a work in progress. I'd like to implement PowerShell Direct to do some final configuration on the guest machine, such as adding certain roles. At this time, however, PowerShell Direct requires the guest to run a Server 2016 TP OS, and as I'm also using this lab for certification and training purposes, I'm sticking with Server 2012 R2 for now. I could set up PowerShell Remoting (Robert has a great blog post on this), but that's something for the future 😉
If you have any comments on my code, or if you would like to add something to the script, please feel free to do so!

Exchange 2010 EMC issue resolved – so it seems

I blogged earlier about an issue when setting up a connection to Office 365 in the Exchange Management Console in Exchange 2010 (in this case, Service Pack 3 with Update Rollup 12).

It seems that this issue has been resolved. Despite the fact that I didn't get any further feedback on the ticket I raised with MS Support or within the support forum thread mentioned in my earlier post, I decided to simply give it another try today… And It Just Worked™.

I didn't change anything in the existing configuration; I just tried adding the connection to the EMC again. At another client that encountered the same issue (but for which we hadn't raised a ticket yet), the issue seems to be resolved too, which leads me to think there was a configuration change on the Office 365 side. However, we will need to wait for official feedback from Microsoft to have this confirmed.

Exchange 2010 hybrid connection in EMC fails with ‘ExchangeBuild’ error

I ran into an issue using Exchange 2010 in a hybrid setup.

In this case, I am running a fully patched Exchange 2010 SP3 UR12 machine, and I ran the new Hybrid Configuration Wizard. This completed successfully, so I have a working hybrid setup and should be able to move mailboxes cross-premises and send and receive mail on both sides.

The next step is to add the Office 365 tenant to my Exchange Management Console. To do this, I right-click the ‘Microsoft Exchange’ tree and click the ‘add new forest’ link. After naming the forest and selecting the ‘Exchange Online’ option for the remote PowerShell instance, I need to enter the credentials for my tenant. When validating, the wizard throws an error.

Exchange Error

‘The format of the Exchange object version is wrong. Parameter name: ExchangeBuild’.

After some searching, I found other people running into the same problem. Some get the error when trying to open an existing instance in the EMC; some get it when trying to create a new one, like me. For those who get the error on an existing instance, some users report that deleting and re-adding the instance solves the problem. Others can't re-add the instance after deleting it, facing the same error as I do. A Microsoft employee responded in the thread, stating that the problem has been found and the Exchange team is working on resolving it. For now, there isn't a resolution.

The issue is also mentioned on support.microsoft.com, but aside from two obvious workarounds, there isn’t a solution yet.

For creating the hybrid setup, you can use the new HCW without running into the issue. For viewing properties of users that are homed in Exchange Online in your hybrid scenario, you can use the Exchange Online admin console. However, for the time being, there isn't a supported way to modify these properties in this scenario.

Edit 3-9-2016: The issue seems to be resolved.


Extra storage for SharePoint Online

Finally: SharePoint Online storage increase

The future is here! It had been announced several times, but Microsoft has finally upgraded the default SharePoint Online storage for an Office 365 tenant to 1TB.

This means that every tenant gets 1TB of pooled storage for SharePoint Online, plus 500MB for each licensed user; a tenant with 100 licensed users, for example, gets 1TB + 50GB. That's a big upgrade from the previous 10GB + 500MB.

And the best part is that the increased storage is available immediately!


I think this is a huge improvement: it makes the team sites in SharePoint Online a better candidate for storing your shared documents, instead of placing them on OneDrive for Business. And a team site is where those files belong, as OneDrive for Business is designed for personal files. The increase to 1TB means that most businesses can start using SharePoint Online without purchasing extra storage.

Talking about OneDrive for Business: there is an update there, too. Where the current storage limit for OneDrive for Business is 1TB per user, Microsoft is offering unlimited OneDrive for Business storage for premium plans. Other Office 365 plans will get 5TB for each user. The first phase of rolling out these upgrades finished recently.

More information on these changes can be found in this announcement by Microsoft.