Query Windows Hello for Business registrations and usage

So recently I was planning to require authentication strengths in a Conditional Access policy – more precisely, to require Windows Hello for Business – when I realized that I’m not 100% sure that every user will meet this requirement. I wanted to make sure everybody has a WHfB enrollment and that it is actively in use – so let’s see the process.

Note: I will use ‘Hello’ for simplicity, but don’t confuse Windows Hello with Windows Hello for Business – two totally different things.

TL;DR

  • Having a Hello for Business enrollment does not necessarily mean that it is actively used or that it is even a “valid” enrollment
  • Entra portal – Protection – Authentication methods – User registration details can be used to filter for those who have Hello
  • For a particular user, the Authentication methods blade can give information about Hello device registrations
  • Filtering the Sign-in logs to the “Windows Sign In” application can give some overview of Hello usage
  • I wrote a script to have all this info in one ugly PowerShell object

First of all, I want to highlight this section from MS documentation:

Windows 10 or newer maintain a partitioned list of PRTs for each credential. So, there’s a PRT for each of Windows Hello for Business, password, or smartcard. This partitioning ensures that MFA claims are isolated based on the credential used, and not mixed up during token requests.

It means that when you log in to Windows using your password, the PRT used will not get the MFA claim even if the user has a Hello registration on the device. And it can happen that the user reverts to password usage [eg. forgot the PIN code, the fingerprint reader didn’t recognize him/her, etc.] – and Windows tends to ask for the last credential used* – so Bye-bye Hello and hello again Password (sorry for this terrible joke).

*Update: This behaviour is controlled by the NgcFirst registry key, in the following hive: HKLM\Software\Microsoft\Windows\CurrentVersion\Authentication\CredentialProviders\{D6886603-9D2F-4EB2-B667-1971041FA96B}\<usersid>\NgcFirst
There is a ConsecutiveSwitchCount counter, which increases by 1 each time the user logs in using a password. Here you can also find the MaxSwitchCount DWORD, which is set to 3 by default. When the user uses password login 3 times in a row, it is considered an opt-out, which is visible in the OptOut entry (set to 1)
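The opt-out state can be derived from these two values; below is a minimal sketch with a hypothetical helper (Get-HelloOptOutState is my own name, not a built-in cmdlet), with the registry read for a live machine shown in comments:

```powershell
# Sketch (hypothetical helper): classify a user's opt-out state from the
# NgcFirst counters described above. MaxSwitchCount defaults to 3.
function Get-HelloOptOutState {
    param(
        [int]$ConsecutiveSwitchCount,
        [int]$MaxSwitchCount = 3
    )
    if ($ConsecutiveSwitchCount -ge $MaxSwitchCount) { 'OptedOut' }
    else { "PasswordLoginsUntilOptOut=$($MaxSwitchCount - $ConsecutiveSwitchCount)" }
}

# On a live machine the counters can be read like this (the key only
# exists once Hello was provisioned for that user on the device):
# $key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\' +
#        'CredentialProviders\{D6886603-9D2F-4EB2-B667-1971041FA96B}\<usersid>\NgcFirst'
# $v = Get-ItemProperty $key
# Get-HelloOptOutState -ConsecutiveSwitchCount $v.ConsecutiveSwitchCount -MaxSwitchCount $v.MaxSwitchCount
```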

User opted out from Windows Hello for Business authentication

But let’s get back to square one: when you open the Authentication methods blade on Entra, you have the User registration details which can be used to list users with Hello:

User registration details filtered to Hello registrations

Let’s open one user to see the devices registered:

Authentication methods for one user

Yes, sometimes the Detail column does not show the computer name – however, if you click on the three-dots menu and select View details, you can see the device ID and object ID – very user-friendly, isn’t it?

Hello registration details

Note: when a device is deleted, the registration will remain but it will not be tied to any device

And the last piece is the sign-in log: if you filter the sign-ins to application “Windows Sign In” and open the entries, the Authentication Details will reveal the method used:

Windows Sign in event using Hello

My requirement was to have a table with each Hello registration for every user and a timestamp of the last Hello sign-in event. This is why I wrote the following script (it authenticates with the well-known Microsoft Graph PowerShell app ID, so it assumes Graph PowerShell is consented in your environment):

#MSAL.PS module required

$tenantID = '<tenantID>'
$graphPowerShellAppId = '14d82eec-204b-4c2f-b7e8-296a70dab67e'

$token = Get-MsalToken -TenantId $tenantID -Interactive -ClientId $graphPowerShellAppId -Scopes "AuditLog.Read.All","Directory.Read.All","UserAuthenticationMethod.Read.All"
$accessToken = $token.AccessToken

$headers = @{
    'Content-Type'  = 'application/json'
    'Accept'        = 'application/json'
    'Authorization' = "Bearer $accessToken"
}

#WHfB enrolled users
Write-Host -ForegroundColor Green "Fetching information from User registration details"
$url = 'https://graph.microsoft.com/beta/reports/authenticationMethods/userRegistrationDetails?$filter=methodsRegistered/any(t:%20t%20eq%20%27windowsHelloForBusiness%27)&$orderby=userPrincipalName%20asc'
$response = Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json | % {$_.value} #no paging handled here, fine for smaller result sets

Write-Host -ForegroundColor Green "Querying authentication methods"
$whfb_authInfo = $response.userPrincipalName | % {
    Write-Host -ForegroundColor Yellow "Querying $_"
    $url = "https://graph.microsoft.com/v1.0/users/$($_)/authentication/methods"
    [pscustomobject]@{
        UPN = $_
        WHfBInfo = Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json | % {$_.value} | ? {$_.'@odata.type' -eq '#microsoft.graph.windowsHelloForBusinessAuthenticationMethod'}
    }
}

function Expand-WHfBMethod ($UPN,$id){
    Write-Host -ForegroundColor Yellow "Expanding WHfB authentication method for $UPN"
    $url = "https://graph.microsoft.com/beta/users/$($UPN)/authentication/windowsHelloForBusinessMethods/$($id)?" + '$expand=device'
    Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json
}

function Search-WHfBWindowsSignIn ($UPN,$deviceid){
    Write-Host -ForegroundColor Yellow "Searching WHfB Windows Sign-in event for $UPN on device $deviceid"
    #appId 38aa3b87-a06d-4817-b275-7a316988d93b = Windows Sign In
    $url = "https://graph.microsoft.com/beta/auditLogs/signIns?" + '$filter=(userPrincipalName eq' + " '" + $UPN + "') and (appId eq '38aa3b87-a06d-4817-b275-7a316988d93b')" + " and (deviceDetail/deviceId eq '" + $deviceid + "')"
    $response = Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json
    $response.value | ? {$_.authenticationDetails.authenticationMethod -eq "Windows Hello for Business"} | sort createdDateTime | select -Last 1 | % {$_.createdDateTime}
}

$report_WHfb = foreach ($item in $whfb_authInfo){
    $item.WHfBInfo | % {
        $whfbmethod = $null
        $whfbmethod = Expand-WHfBMethod -UPN $item.UPN -id $_.id
        [pscustomobject]@{
            UPN = $item.UPN
            DeviceDisplayName = $whfbmethod.displayName
            DeviceID = $whfbmethod.device.deviceId
            HelloForBusinessMethodLastUsed = Search-WHfBWindowsSignIn -UPN $item.UPN -deviceid $whfbmethod.device.deviceId
            EnrollmentDate = $_.createdDateTime
            KeyStrength = $_.keyStrength
        }
    }
}

$report_WHfb | ft

Example output:

Note: to find the HelloForBusinessMethodLastUsed value, the script is querying the sign-in logs which will take some time in a larger environment.

Note2: if the DeviceID field equals 00000000-0000-0000-0000-000000000000, then this is a Hello registration that does not correspond to any Entra joined device – probably the device was deleted. You may want to review these entries and delete them.
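If you want to collect these orphaned entries for review, the report built above can be filtered with a one-liner; below is a small sketch with a hypothetical helper (assuming the report objects expose the DeviceID field discussed in the note):

```powershell
# Sketch: pick out Hello registrations that no longer map to an Entra device
# (all-zero GUID, or no device returned by the expand at all)
function Get-OrphanedWHfBRegistration {
    param([object[]]$Report)
    $Report | Where-Object {
        -not $_.DeviceID -or $_.DeviceID -eq '00000000-0000-0000-0000-000000000000'
    }
}

# Example with the report produced by the script above:
# Get-OrphanedWHfBRegistration -Report $report_WHfb | ft
```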

Conditional Access Gap Analyzer – without Log Analytics Integration

Recently, John Savill* uploaded a video on this very cool feature and I thought I’d give it a try – only to realize I have no Log Analytics integration enabled, so no Workbooks for me 🙁
[*big fan of John’s videos, pure gold]

This is not fair to those who only use Microsoft 365 products or who are prevented from enabling the integration due to some circumstances – that’s why I decided to look for a workaround.

TL;DR

  • When you have the correct licences (AAD P2 and any Defender licence I guess) there is a table in Advanced Hunting on the Microsoft 365 Defender portal, called “AADSignInEventsBeta” (MS doc)
  • This table is intended to be a temporary offering, but it has almost the same information as the SigninLogs table that you get with Log Analytics integration
  • The queries used by the Gap Analyzer workbook are available on github, here
  • Below you can find the queries aligned to the AADSignInEventsBeta table’s schema

Disclaimer: I tried to validate every query in my demo environment, but some may require fine-tuning (especially because there are some columns that are not well documented, so values are set as per my testing – see notes under each query). Also, don’t forget to set the query range according to your needs:

Users Signing-In Using Legacy vs. Modern Authentication

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project ClientAppUsed, ErrorCode, ConditionalAccessPolicies, AccountDisplayName
| where ConditionalAccessPolicies != "[]"
| where ErrorCode == 0
| extend filterClientApp = case(ClientAppUsed != "Browser" and ClientAppUsed != "Mobile Apps and Desktop clients", "Legacy Authentication", "Modern Authentication")
| summarize count() by AccountDisplayName, filterClientApp
| summarize Count = count() by filterClientApp

Note: The Gap Analyzer workbook uses the SignInLogs table, which contains only interactive sign-in logs, while AADSignInEventsBeta has both interactive and non-interactive logs – that’s why every query starts with a LogonType filter for InteractiveUser.

Users Using Legacy Authentication by Application

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project ClientAppUsed, Application, ConditionalAccessPolicies, ErrorCode, AccountDisplayName
| where ClientAppUsed != "Browser" and ClientAppUsed != "Mobile Apps and Desktop clients"
| where ConditionalAccessPolicies != "[]"
| where ErrorCode == 0
| summarize count() by AccountDisplayName, Application
| summarize Count = count() by Application 

Users using Legacy Authentication

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project ClientAppUsed, Application, ConditionalAccessPolicies, ErrorCode, AccountDisplayName
| where ClientAppUsed != "Browser" and ClientAppUsed != "Mobile Apps and Desktop clients"
| where ConditionalAccessPolicies != "[]"
| where ErrorCode == 0
| summarize count() by AccountDisplayName, Application, ClientAppUsed
| project-away count_

{user} legacy authentication sign-ins to {app} – this is an additional query to the previous one. Make sure you insert the Application and the AccountDisplayName values in the app and user variables

let app = "<Application>";
let user = "<AccountDisplayName>";
AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project AccountDisplayName, ClientAppUsed, Application, ConditionalAccessPolicies, ErrorCode, Timestamp, CorrelationId, NetworkLocationDetails, DeviceName, AadDeviceId, OSPlatform, Browser, UserAgent
| where ClientAppUsed != "Browser" and ClientAppUsed != "Mobile Apps and Desktop clients"
| where ConditionalAccessPolicies != "[]"
| where ErrorCode == 0
| where Application == app
| where AccountDisplayName == user
| project Timestamp, CorrelationId

Number of Users Signing In to Applications with Conditional Access Policies Not Applied

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project Application, ConditionalAccessStatus, ErrorCode, AccountDisplayName, AuthenticationRequirement
| where ErrorCode == 0 // sign-in was successful
| where ConditionalAccessStatus == "2"
| where AuthenticationRequirement == "singleFactorAuthentication" // Sign-in was not MFA
| summarize Count = count() by Application, AccountDisplayName
| summarize count() by Application

Note: as per the documentation for the table, ConditionalAccessStatus == “2” means policies are not applied.
Note2: the original query has the following clause:
| where Status.additionalDetails != "MFA requirement satisfied by claim in the token" and Status.additionalDetails != "MFA requirement skipped due to remembered device" // Sign-in was not strong auth
This detail is not available in the AADSignInEventsBeta table, but I guess it means that MFA was not required – which can be filtered using | where AuthenticationRequirement == "singleFactorAuthentication". To be precise: if the sign-in was successful, I guess it is more relevant whether single factor was required than the actual authentication strength

{User} sign-ins to {App} without CA coverage – this is an additional query to the previous one. Make sure you insert the Application and the AccountDisplayName values in the App and User variables

let App = "<Application>";
let User = "<AccountDisplayName>";
AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project Application, AccountDisplayName, ConditionalAccessStatus, ErrorCode, Timestamp, CorrelationId, DeviceName, AadDeviceId, OSPlatform, Browser, UserAgent, AuthenticationRequirement
| where ErrorCode == 0 // sign-in was successful
| where AuthenticationRequirement == "singleFactorAuthentication" // Sign-in was not MFA
| where ConditionalAccessStatus == "2"
| where Application == App
| where AccountDisplayName == User
| project Timestamp, CorrelationId, OSPlatform

High Risk Sign-In Events Bypassing Conditional Access Policies

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project AccountDisplayName, ConditionalAccessStatus, RiskLevelDuringSignIn, ErrorCode, AuthenticationRequirement
| where AuthenticationRequirement == "singleFactorAuthentication" // Sign-in was not MFA
| where ErrorCode == 0 // sign-in was successful
| where RiskLevelDuringSignIn > 50
| summarize Count = count() by AccountDisplayName, RiskLevelDuringSignIn
| order by Count desc

Note: the RiskLevelDuringSignIn column is not even documented, but based on my testing: 10 = low, 50 = medium, 100 = high

Risky sign-Ins from {user} with no CA policies

let user = "<accountDisplayname>";
AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project Timestamp, ErrorCode, RiskLevelDuringSignIn, AccountDisplayName, Country, AuthenticationRequirement, DeviceName, AadDeviceId, OSPlatform, Browser, UserAgent, NetworkLocationDetails, Application, CorrelationId
| where ErrorCode == 0 // sign-in was successful
| where RiskLevelDuringSignIn > 0
| where AccountDisplayName == user
| where AuthenticationRequirement == "singleFactorAuthentication" // Sign-in was not MFA
| project Timestamp, Application, CorrelationId, Country, OSPlatform

Users With No Conditional Access Coverage by Location – summary

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project AccountDisplayName, ConditionalAccessStatus, AuthenticationRequirement, ErrorCode, Country
| where ConditionalAccessStatus == "2"
| where AuthenticationRequirement == "singleFactorAuthentication" // Sign-in was not MFA
| where ErrorCode == 0
| summarize count() by Country, AccountDisplayName
| summarize Count = count() by Country
| order by Count desc

Users With No Conditional Access Coverage by Location

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project AccountDisplayName, ConditionalAccessStatus, AuthenticationRequirement, ErrorCode, NetworkLocationDetails, Country
| where ErrorCode == 0 // sign-in was successful
| where ConditionalAccessStatus == "2"
| where AuthenticationRequirement == "singleFactorAuthentication" // Sign-in was not MFA
| extend Location = case(Country == "", "Unknown", Country)
| summarize Count = count() by AccountDisplayName, Location
| project-away Count

Named locations without Conditional Access Coverage

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project AccountDisplayName, ConditionalAccessStatus, AuthenticationRequirement, ErrorCode, NetworkLocationDetails
| where ConditionalAccessStatus == "2"
| where AuthenticationRequirement == "singleFactorAuthentication" // Sign-in was not strong auth
| where ErrorCode == 0
| extend test = parse_json(NetworkLocationDetails)
| mv-expand test
| project test
| extend ["Named Location"] = tostring(test["networkNames"])
| summarize ["Sign-in Count"]=count() by ["Named Location"]

Users sign-ins from IPv6 addresses not assigned to a Named Location

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project ConditionalAccessStatus, AccountUpn, Application, IPAddress, NetworkLocationDetails
| where IPAddress has ":"
| where NetworkLocationDetails has '[]'
| summarize ["Sign-in Count"] = count() by  IPAddress, NetworkLocationDetails
//| summarize Count = count() by IPAddress
| sort by ["Sign-in Count"] desc
| project-away NetworkLocationDetails

Users sign-ins from IPv6 addresses not assigned to a Named Location (Separated by application)

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| project ConditionalAccessStatus, AccountUpn, Application, IPAddress, NetworkLocationDetails
| where IPAddress has ":"
| where NetworkLocationDetails has '[]'
| summarize ["Sign-in Count"] = count() by  Application, IPAddress
//| summarize Count = count() by IPAddress
| sort by ["Sign-in Count"] desc
//| project-away NetworkLocationDetails

Happy hunting 🙂

Fighting AzureAD App registration client secrets – step3: using Conditional Access for Workload Identities (+custom security attributes)

Disclaimer: the following configurations require Microsoft Entra Workload Identities Premium licence (link)

Note: This post is not strictly related to fighting client secret usage for apps. However, it may provide a basis for considering the purchase of Microsoft Entra Workload Identities Premium licence for at least those apps that use client secret.

In my previous posts I wrote about reviewing client secret usage (part1) and limiting app password lifetime (part2). This time I will protect one of my applications with Conditional Access policy using a location condition.

The process is very straightforward: create a CA policy, choose the service principal(s) to protect, select All cloud apps as Target resources, then set up the location condition and the access controls. In my case it was simply blocking access from anywhere but the ‘Office’.

The result when trying to get a token outside the office:

Get-MsalToken : AADSTS53003: Access has been blocked by Conditional Access policies. The access policy does not allow token issuance.

Looking at the Sign-in logs:

Okay, this is cool, but it is nothing more than implementing what is already documented by Microsoft (here). What I thought might be useful to share is to combine Workload identities conditional access with custom security attributes (preview feature at the time of writing).

Workload identity Conditional Access with Custom security attributes

Application custom security attributes (link) is an awesome feature and is a great way to “group” applications/service principals. In this demo, I will mark some service principals that are supposed to be used only in the office and nowhere else – and use this property as a filter for the CA policy.

To add an attribute set you need to be assigned the Attribute Definition Administrator role (Global Admins do not have this permission by default, but they can assign it to themselves). Unfortunately Integer or Boolean attributes can’t be used for filtering*:

Using custom security attributes you can use the rule builder or rule syntax text box to create or edit the filter rules. In the preview, only attributes of type String are supported. Attributes of type Integer or Boolean will not be shown.

*My first idea was to mark applications with something like ‘CanOnlySignInFromOffice’ boolean attribute, but this restriction pushed me to a more sophisticated approach.

To overcome this restriction, I will create an attribute which will have a predefined set of sign-in-permitted locations (Office1, Office2… sorry for not being too creative here 🙂 ):

First, I created the ‘ProtectedWorkloadIdentites’ attribute set:

Next, click on Add attribute:

Then create the attribute (SignInRestrictedToNamedLocation) with String type, allowing only predefined values (hint: if value names are the same as the Named Locations it will be easier to administer):

The next step is to assign the attribute to the service principal that should be restricted: Enterprise applications -> [app to be restricted] -> Custom security attributes

The final step is the Conditional Access policy – this time using the filter instead of directly choosing the identities:

Target resources remains “All cloud apps”:

Location condition is set to Any location with Office1 excluded:

The action is Block access of course.

Let’s look at the results:

Sign-in blocked from outside
Sign-in granted from Office1

I’m sure there are more complex situations where this approach does not fit well, but I hope the basic idea is helpful.

Fighting AzureAD App registration client secrets – step2: limiting app password lifetime

Disclaimer: the following configurations require Microsoft Entra Workload Identities Premium licence (link)

In my previous post, I highlighted the risks of using password credentials for apps and how to spot client secret usage for service principals. This post will focus on limiting password lifetime for apps (scoped to tenant or specific application level) which can be configured if your tenant has Workload Identities Premium licence – otherwise you will receive the following error:

To add and configure organizational settings, you'll need to link a subscription with Azure AD Workload identity license to your tenant.

Error message when no Workload Identities Premium is linked to the tenant

As per the documentation, apps and service principals can have restrictions at object level and tenant level. Scoped restrictions take precedence over tenant level settings and only one policy object can be assigned to an application or service principal (link).

Create a tenant level restriction

For demo purposes, I will create a simple setting which restricts password lifetime to 1 year for applications. I’m using Graph Explorer for simplicity. This action requires the Policy.ReadWrite.ApplicationConfiguration permission – make sure you are using an account with this privilege and that consent has been granted.

The endpoint is https://graph.microsoft.com/v1.0/policies/defaultAppManagementPolicy and the PATCH method is needed. The request body is as follows:

{
    "isEnabled": true,
    "applicationRestrictions": {
        "passwordCredentials": [
            {
                "restrictionType": "passwordLifetime",
                "maxLifetime": "P12M"
            }
        ]
    }
}
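If you prefer plain PowerShell over Graph Explorer, the same request can be sent with Invoke-RestMethod; a sketch, assuming a $headers hashtable carrying a bearer token with Policy.ReadWrite.ApplicationConfiguration (building the body from a hashtable makes the lifetime easy to tweak):

```powershell
# Sketch: tenant-wide app password lifetime restriction via Microsoft Graph
$body = @{
    isEnabled = $true
    applicationRestrictions = @{
        passwordCredentials = @(
            @{
                restrictionType = 'passwordLifetime'
                maxLifetime     = 'P12M'   # ISO 8601 duration: 12 months
            }
        )
    }
} | ConvertTo-Json -Depth 5

# Assumes $headers contains an Authorization bearer token with
# Policy.ReadWrite.ApplicationConfiguration:
# Invoke-RestMethod -Method Patch `
#     -Uri 'https://graph.microsoft.com/v1.0/policies/defaultAppManagementPolicy' `
#     -Headers $headers -Body $body -ContentType 'application/json'
```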

Creating a sample defaultAppManagementPolicy

The result is almost instant:

Password lifetime longer than 1 year is greyed out when adding a new client secret to an app

Create an appManagementConfiguration and assign to an app

We may want to further restrict some apps to a shorter password lifetime, so we create a separate policy and assign it to the application. As per the documentation, assigning requires Application.Read.All and Policy.ReadWrite.ApplicationConfiguration – for me that wasn’t enough; I received the following error:

Insufficient privileges to complete the operation.

I added Application.ReadWrite.All to my permission set and the error disappeared.

So, first, we will create the configuration object (documentation), which will restrict password lifetime to 6 months. The payload is the following:

{
    "displayName": "F12 - App password max lifetime 6 months",
    "description": "App password max lifetime 6 months",
    "isEnabled": true,
    "restrictions": {
        "passwordCredentials": [
            {
                "restrictionType": "passwordLifetime",
                "maxLifetime": "P6M"
            }
        ]
    }
}

It needs to be POST-ed to https://graph.microsoft.com/v1.0/policies/appManagementPolicies:

Creating an appManagementPolicy in Graph Explorer

Take note of the result; the policy ID will be used in the following step.

Next is to assign this policy to the application object (documentation). We will POST the following payload to https://graph.microsoft.com/v1.0/applications/{id}/appManagementPolicies/$ref. Hint: {id} is the application’s object ID, not the client ID.

{
    "@odata.id": "https://graph.microsoft.com/v1.0/policies/appManagementPolicies/{id}"
}

Assigning appManagementPolicies to an app

Let’s verify the result:

App password lifetime limited by appManagementPolicy

Disable password creation for apps

The most restrictive policy is to prohibit password creation. This can be achieved using the same method described above, with this example payload:

{
    "displayName": "F12 - APPS - No password allowed",
    "description": "No password allowed for apps",
    "isEnabled": true,
    "restrictions": {
        "passwordCredentials": [
            {
                "restrictionType": "passwordAddition",
                "maxLifetime": null
            }
        ]
    }
}

The result is a warning message and the "New client secret" option greyed out:
Password addition disabled for this app

There are many other aspects of a service principal/app credential that can be managed this way, e.g. symmetricKeyAddition, customPasswordAddition, asymmetricKeyLifetime, which may be worth considering (and I hope to have an occasion to try them and share my experiences).

To be continued 🙂

Fighting AzureAD App registration client secrets – step1: reviewing client secret usage

Workload identity (including service principals) security keeps bugging me, especially the password credentials (aka client secret). I’m sure there are scenarios where this is the only option, but I see environments where these are used just because it is easier to implement. And one day I woke up and realized how dangerous it can be – so now I’m fighting client secrets as hard as I can.

TL;DR
– Why: a leaked client secret can be easily used without being noticed (or hardly noticed… you may keep an eye on risky workload identities or have other solutions in place)
– How:
– review client secret usage and try to migrate to certificate based auth,
– at least don’t store these secrets hard coded in scripts or programs,
– use conditional access for workload identities (Microsoft Entra Workload Identities licence is required),
– limit password lifetime (Microsoft Entra Workload Identities licence is required)

This is a huge topic, so I will split it into some form of “series”.

So let’s start with the Why?
As I mentioned, a leaked credential can be hard to notice (if there is no monitoring in place, or IT is not aware of the available option to review risky workload identities). In AzureAD – Security – Identity Protection (Entra: Protect & secure – Identity Protection) you can find “Risky workload identities” (to be discussed in another post).

Let’s imagine a targeted brute force scenario: to access resources using a service principal, you need 3 things: the tenant ID, the application ID and the password. The tenant ID for a domain can be easily acquired – the easiest way is to navigate to AzureAD – External Identities – Cross-tenant access settings – Add organization, then enter the domain name:

Gathering tenant ID for a domain

Guessing an application ID is nearly impossible by hand – however, with enough compute power it is only a matter of time: when someone tries to access your tenant using a non-existent app ID, the response will be an HTTP 400 (Bad request) error with the following message:

“error”:”unauthorized_client”,”error_description”:”AADSTS700016: Application with identifier ‘<appID>’ was not found in the directory ‘<tenant>’

On the other hand, when using an existing app ID with a wrong password, the response will be an HTTP 401 (Unauthorized) error with the following message:

“error”:”invalid_client”,”error_description”:”AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app ‘<appID>’

The last step is to brute force every password combination 😁 Okay, okay, it is already hard to get the app ID, and it is even more difficult to pick an app that has password credentials and then guess those credentials – but not impossible. And I’m sure there are more sophisticated ways to skip to the last step (eg. by default a user can read app registrations in the AzureAD portal and even read the hint for the Client secret value).
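To illustrate why the difference between the two error messages above matters, here is a small sketch (Get-AppProbeResult is a hypothetical helper, not tooling from this post) that classifies a token-endpoint error body by its AADSTS code – exactly the distinction an enumeration script would rely on:

```powershell
# Sketch: tell 'app does not exist' apart from 'app exists, wrong secret'
# based on the AADSTS error code in the token endpoint's JSON error body
function Get-AppProbeResult {
    param([string]$ErrorBody)
    $err = $ErrorBody | ConvertFrom-Json
    switch -Regex ($err.error_description) {
        '^AADSTS700016'  { 'AppNotFound' }            # HTTP 400, bad app ID
        '^AADSTS7000215' { 'AppExistsWrongSecret' }   # HTTP 401, bad secret
        default          { 'Other' }
    }
}
```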

A non-privileged user has access to the client secret hint by default

Sure, you can review Service principal sign-ins from the portal to detect anomalous activities, but this sounds like a very tedious task – unless you have some monitoring solution in place.

How to spot client credentials usage?

My first step towards achieving a password-free environment is to make an inventory of apps with client secrets. In my previous post, I wrote some words about AzureADToolkit and shared a custom script to get a report on these apps with the API permissions assigned.

This time, I’m focusing on sign-in activity to find active password credential usage. When we switch to “Service principal sign-ins” in the Sign-in logs menu on the AzureAD portal, we can filter the results by the client credential type used:

Filtering logs by client credential type

While this may be enough for a one-time review, you may want to monitor password usage later. I prefer to have a PowerShell script for this purpose, but there are certainly other solutions available. I didn’t find ready-to-use cmdlets to query service principal sign-ins, so I chose the hard way and wrote my own. Using developer tools in the browser (by hitting F12 😉) we can analyze the Network traffic when opening a service principal sign-in event:

Request URL for a service principal sign-in event

What we need to see here is that the portal is using the Graph API beta endpoint and that at the end of the request the source is specified as source=sp, where sp probably stands for “service principal”. To filter by client secret usage, we will use the ‘clientCredentialType eq clientSecret’ clause. To access sign-in information, the identity used requires the ‘AuditLog.Read.All’ permission on Microsoft Graph. If you want to access this information in an unattended manner (eg. a scheduled task), you need to create a new app registration, grant the permission (Application type) with admin consent and provide a credential (hopefully a certificate 😅)

Quick guide:

1. Create an app registration
2. Remove default permission, add AuditLog.Read.All Application permission and grant admin consent

3. Create a self-signed certificate (use admin PS if you want it to be created in Local Machine container; in a highly secure scenario, you can disable private key export by appending ‘-KeyExportPolicy NonExportable’):
Update 2025.02.19: the command was incorrectly using the -Container parameter, which has been corrected to -CertStoreLocation (+added -TextExtension to restrict the Intended Purposes to Client Authentication only)

New-SelfSignedCertificate -FriendlyName "F12 - SP client secret usage monitor" -NotAfter (Get-Date).AddYears(2) -Subject "F12 - SP client secret usage monitor" -CertStoreLocation Cert:\LocalMachine\My\ -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -KeyExportPolicy NonExportable -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")

4. Export the certificate in cer format (only the cert, not the private key)

5. Upload the certificate on the app registration page

Now we have the right app for our needs, let’s query the information needed. To use certificate authentication, we will install MSAL.PS module for simplicity:

Install-Module MSAL.PS

The following script will write out client secret usage for the last 24 hours:

Import-Module msal.ps
$tenantID = '<tenantID>'
$appID = '<app ID>'
$certThumbprint = '<certificate thumbprint created for the app>'
$token = Get-MsalToken -TenantId $tenantID -ClientId $appID -ClientCertificate (get-item Cert:\LocalMachine\my\$certThumbprint) -AzureCloudInstance 1

#query sign-ins for the last 24 hours
$dateStart = (([datetime]::UtcNow).AddHours(-24)).ToString("yyyy-MM-ddTHH:mm:ss.fffZ")
$url = "https://graph.microsoft.com/beta/auditLogs/signIns?api-version=beta" + '&$filter=createdDateTime%20ge%20' + $($datestart) + "%20and%20createdDateTime%20lt%20" + (([datetime]::UtcNow).ToString("yyyy-MM-ddTHH:mm:ss.fffZ")) + ' and clientCredentialType%20eq%20' + "'clientSecret'&source=sp"

$servicePrincipalSignIns = $null
while ($null -ne $url){
    #Write-Host "Fetching $url" -ForegroundColor Yellow
    $response = Invoke-RestMethod -Method Get -Uri $url -Headers @{ Authorization = $Token.CreateAuthorizationHeader() }
    $servicePrincipalSignIns += $response.value
    $url = $response.'@odata.nextLink'
}

$servicePrincipalSignIns | select createdDateTime,ipAddress,serviceprincipalname,clientCredentialType
Sample output

To be continued…

AzureAD App registrations – the “application” permission + credentials combination security nightmare

When talking about Azure AD security, we tend to put less focus on service principals/app registrations*. But when we take into consideration that these principals can have assigned API permissions and “static” credentials (certificate or password) and that these credentials in the wrong hands can cause serious damage, we may change our attitude.
* While “App registrations” and “service principals” are different entities (link), they can be used interchangeably (link)

TL;DR
– Follow best practices for securing service principals: Conditional Access for workload identities, review AAD roles and API permissions of SPs, review SP sign-in logs, prioritize key credential usage over password credentials
– Explore the AzureADToolkit to gain insights on application credentials and API permissions
– Try out my script to start reviewing apps with Application type API permissions

Imaginary example: an IT admin created an app registration which is used in a PowerShell script for some repetitive tasks. The app was granted Directory.ReadWrite.All API permission (Application type, admin consent granted) on Microsoft Graph and a client secret was generated for the app – and this secret is saved as plain text in a script, along with the tenant id and app id. Something like this:
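A sketch of what such a script header might look like (all values below are made-up placeholders, not real identifiers):

```powershell
# DON'T do this: tenant ID, app ID and client secret hardcoded in plain text
$tenantID  = 'e9b2c3d4-1111-2222-3333-444455556666'   # hypothetical tenant ID
$appID     = 'a1b2c3d4-aaaa-bbbb-cccc-ddddeeeeffff'   # hypothetical app (client) ID
$appSecret = 'v3ry~s3cr3t~value'                      # client secret in plain text!

# acquire an app-only token with the client_credentials grant...
$body = @{
    scope         = 'https://graph.microsoft.com/.default'
    client_id     = $appID
    client_secret = $appSecret
    grant_type    = 'client_credentials'
}
$token = (Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantID/oauth2/v2.0/token" -Body $body).access_token
# ...then call Graph with the app's Directory.ReadWrite.All application permission
```

Anyone who obtains this file can authenticate as the app from anywhere – no MFA, no user context.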

If this script gets into the wrong hands… what a nightmare! 😱

What to do with these app registrations?
Follow security best practices:
– Apply Conditional Access to workload identities (link)
– Review sign-in logs (service principal sign-ins)
– Implement a credential rotation process (especially when key/password credentials are/were accessible for a leaver)
– Review service principals with AzureAD role granted (in preview)

– Prefer key credentials over password credentials (link), don’t store password credentials hardcoded if possible
– Review API permissions for App Registrations
– Identify, investigate and remediate risky workload identities (link)

AzureADToolkit
The last page linked references a very cool toolkit, the AzureADToolkit, which can easily identify service principals that have credentials.

Install-Module AzureADToolkit
Connect-AADToolkit
Get-AADToolkitApplicationCredentials | Out-GridView
Sample result of Get-AADToolkitApplicationCredentials

The other useful cmdlet in the toolkit (Build-AzureADAppConsentGrantReport) returns all service principals that have admin consented permissions (each entry contains the resource display name and the permission*, e.g.: Microsoft Graph, User.Read)

*sometimes it’s unable to return all the info; in my case, the following application has the Exchange.ManageAsApp permission, but this property is empty

The two commands combined are probably able to display information for app registrations with admin consented API permissions that have credentials… but, to be honest, I had already prepared a script to gather this info when I found that toolkit 🙃

Report script

The following script returns those app registrations that have active (non-expired) credentials and admin consent on application type API permissions (delegated permissions are intentionally filtered out, because those are tied to the authenticated users’ delegated permissions).

#Application.Read.All is sufficient to read app registrations and service principals
Connect-MgGraph -Scopes 'Application.Read.All'

#List apps with cert or password credentials
$apps = Get-MgApplication -all | ? {($_.KeyCredentials -ne $null) -or ($_.PasswordCredentials -ne $null)}
# filter apps with expired credentials
$apps_activeCred = foreach ($app in $apps){ if ((($app.KeyCredentials.EndDateTime | sort -Descending | select -First 1) -gt (get-date)) -or (($app.PasswordCredentials.EndDateTime | sort -Descending | select -First 1) -gt (get-date))){$app}}

function Get-ServicePrincipalRoleAssignmentReadable ($appId){
#query apps that have application permissions with admin consent
$roleAssignments = Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId (Get-MgServicePrincipal -Filter "appId eq '$($appid)'").id
#match permission entries with resource name and permission name
foreach ($roleAssignment in $roleAssignments){(Get-MgServicePrincipal -ServicePrincipalId $roleAssignment.ResourceId) | select @{'L'="Value";'E'={"$($_.DisplayName)/"+($_.AppRoles | ? {$_.id -eq $roleAssignment.approleid}).value}} }
}

$report = foreach ($app in $apps_activeCred){
[pscustomobject]@{ 
    Name = $app.DisplayName
    AppId = $app.AppId
    LatestKeyExpiration = $app.KeyCredentials.enddatetime | sort -Descending | select -First 1
    LatestPasswordExpiration = $app.PasswordCredentials.enddatetime | sort -Descending | select -First 1
    APIPermissions = (Get-ServicePrincipalRoleAssignmentReadable -appId $app.AppId).value

    }
} 
#filter out apps with no application type permissions
$report | ? {$_.apipermissions} | Out-GridView
Sample result of the script

Edge Drop vs. SharePoint’s access control policy for unmanaged devices

Edge Drop is a really wonderful feature, but my inner data protection officer was bugging me to investigate whether it is safe in an enterprise (or SMB) environment. There are several options to protect corporate data (labels, App Protection Policy, DLP policies, etc.) but not every business is lucky enough to afford the required licenses or to implement all these functionalities. Anyway, the goal of this post is to raise awareness and to call for action: evaluate your current policies with Edge’s Drop feature in focus.

TL;DR
– Edge’s Drop feature’s impact depends on the current policies and needs
– SharePoint access control policies don’t protect against Drop
– You may want to disable Drop, even if it is currently limited to Windows/macOS

What is Edge Drop?
It’s actually a chat with yourself, with the option to share files (MS doc). Files are stored in the user’s OneDrive for Business “Microsoft Edge Drop Files” folder. All you need to do is log in to the Edge browser and it is ready to use (first-time use will need 1-2 minutes to set up).

Problem statement
From the AzureAD perspective, this action is actually a sign-in to the “Microsoft Edge” application. If you have policies targeting the “Office 365 SharePoint Online” application (eg. policies created by the SharePoint admin center’s Unmanaged devices access control setting) or the “Office 365” application, these policies may not apply. This can lead to accidental data loss.

In the following scenarios, the primary principle is that cloud content should be accessible only on managed devices (Hybrid AzureAD joined or compliant) – other devices are blocked (or restricted to view-only web access).

Scenario1
SharePoint admin center – Policies – Access control – Unmanaged devices = Allow limited, web-only access

This setting creates two Conditional Access policies:
1. [SharePoint admin center]Block access from apps on unmanaged devices
Office 365 SharePoint Online app, Client app = Mobile apps and desktop client as condition, Require device to be marked as compliant or Require Hybrid Azure AD joined device as Grant control
2. [SharePoint admin center]Use app-enforced Restrictions for browser access
Office 365 SharePoint Online app, Client app = Browser as condition, Use app enforced restrictions as session control

Experience on a non corporate device, user is logged in to Edge:

As you can see, there is a warning message that files can’t be downloaded and there is no download option in OneDrive. Let’s see if Drop allows downloading:

Yes, it does… too bad.

Scenario2
SharePoint admin center – Policies – Access control – Unmanaged devices = Block access

This option generates one Conditional Access policy:
[SharePoint admin center]Use app-enforced Restrictions for browser access
Office 365 SharePoint Online app, Client app = Browser as condition, Use app enforced restrictions as session control

Experience on a non corporate device, user is logged in to Edge:

As you can see, OneDrive can’t be opened from the browser. Let’s see if Drop is dropped 🙂

Not really…

Scenario3
Conditional Access Policy – Office365 – Require device to be marked as compliant or Require Hybrid Azure AD joined device as Grant control

The purpose of this demo policy is to restrict every Office 365 resource to managed devices (including OneDrive). This is a more restrictive policy and needs a lot of planning and testing before implementation – but I hope it will prevent Drop from downloading files on unmanaged devices.

Experience: if I do not register the device, it keeps asking me to log in:

When I register the device, access is blocked:

How about Drop? This time it is stuck in Initializing, files can’t be downloaded (same experience when device is not registered):

Okay, this policy does the trick, but it has a large impact on users.

A less painful solution is to disable Drop on managed devices (link), so users won’t be able to upload files from corporate devices. This can be done via Group Policy or Intune configuration profile. However, the setting is only supported on Windows and macOS devices, so users will not be prevented from uploading via Drop on iOS/Android devices.
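For reference, a minimal sketch of applying this setting directly via the registry – note that the policy name (EdgeEDropEnabled) is an assumption based on the Edge policy documentation linked above, so verify it before deploying, and prefer GPO/Intune as the delivery method:

```powershell
# Disable Edge Drop by policy (run elevated); 0 = disabled
# Policy name 'EdgeEDropEnabled' assumed from the Edge policy docs - verify!
$edgePolicyKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Edge'
if (-not (Test-Path $edgePolicyKey)) { New-Item -Path $edgePolicyKey -Force | Out-Null }
New-ItemProperty -Path $edgePolicyKey -Name 'EdgeEDropEnabled' -Value 0 -PropertyType DWord -Force
```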

Conclusion: several other scenarios are possible and several tools are available to prevent accidental data leaks; these will not be covered in this post, because it aims only to raise awareness and make you review your security settings when new features become available in your tenant.

“Don’t do that” series – migrate personal user profile to (Azure)AD user profile with Win32_UserProfile.ChangeOwner method

Scenario: the business is now convinced that computers should be managed centrally (either with Active Directory or Azure Active Directory) instead of having WORKGROUP computers.
Problem: after joining to (Azure)AD, users will have a new profile created. Gone are their settings, wallpaper, pinned icons, etc. You need to note these settings, copy the files to the new profile and so on.
After searching the net you may come across 3rd party solutions to address this headache – or decide to find some Microsoft way to do this.
The “don’t do that” solution: use the Win32_UserProfile class’ ChangeOwner method (link).
Why: several settings are tied to cached credentials (these have to be entered again), some icons pinned to the taskbar will be lost, but the worst thing is that some settings may be tied to a personal account and would be carried over to the work account (user is logged in to a personal Microsoft account, OneDrive is syncing business data to a personal OneDrive, etc.) – with a lift-and-shift approach, these settings remain in place, and this should be avoided.

DISCLAIMER: I’m sharing this “don’t do that” tutorial just in case someone has the same idea that Win32_UserProfile.ChangeOwner is a good solution. If you have considered the above and still want to give it a try, do it at your own risk. There are tools developed or recommended by Microsoft to accelerate the process (USMT or PCmover Express [link])

So, after AzureAD joining the computer, I logged in with the AzureAD account of the user and noted the profile path for both the personal account and the corporate account. The login created the local profile, but since the Win32_UserProfile.ChangeOwner method fails if the source or the target profile is loaded, another admin is required to perform the changes. So I logged in with a Global Administrator (with the two user accounts logged off), then launched PowerShell (as admin) – or you can configure additional administrators [link].

gwmi -query "select * from Win32_UserProfile" | ft localpath,sid
User profiles created on the computer

Next was to store the personal profile in a variable:

$profileToReplace = gwmi -query "select * from Win32_UserProfile" | ? {$_.localpath -eq 'C:\Users\kovac'}

Then call the ChangeOwner method:

$profileToReplace.ChangeOwner('<sid of AzureAD profile>', 1)
Using the ChangeOwner method
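If you need to look up the SID for the target account, one option (a sketch – the account name here is hypothetical, and the ‘AzureAD\’ prefix applies to AzureAD-joined machines) is to translate the account name via .NET, or simply read it from the Win32_UserProfile listing above:

```powershell
# Translate an account name to its SID using .NET
$account = New-Object System.Security.Principal.NTAccount('AzureAD\daniel@contoso.com')  # hypothetical account
$sid = $account.Translate([System.Security.Principal.SecurityIdentifier]).Value
$sid
```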

Now when the user logs on using AzureAD credentials, the old profile is loaded with almost all settings… Almost. First welcome message:

Cached credentials missing

The same applies to Edge/Chrome profile. Another inconvenience I noticed is that pages pinned by Edge to taskbar are missing their icons:

Icons before
Icons after

The login screen still shows the personal profile (if the user logs in with it, a new local profile will be created). This entry can be removed with netplwiz:

Remove personal account using netplwiz

And at this point I realized that lot of settings can be tied to a personal cloud account which you may not want to migrate to the business profile – so I didn’t go further, but wanted to share my experiences so that you can learn from my mistake 🙂

SharePoint Online external file sharing report using Graph API and PowerShell

The story in short: one of my customers asked me if it is possible to generate a report on all content in Office365 shared externally. Doing some searches I found the following solutions:
– Run the sharing reports on each site and each OneDrive (link, link)
– Run reports based on audit logs (link)

While these reports may seem adequate for most of the time, I have some issues with them:
– Native reporting capabilities require opening each site manually, and these reports contain internal sharings too
– Some info may be missing from these native reports (for example: expiration date for date limited sharing links, password protection property, email address of tenant guests who haven’t opened the link yet)
– Audit log based reports’ capabilities are limited by audit log retention

So these issues prompted me to write a script that fits my needs, and I hope others will benefit from it too.

Important note: SharePoint sharing will soon default to the Azure B2B Invitation Manager for external sharing (link). You may want to review your affected settings (link).

TL;DR

  • create an app registration in your tenant with the following application permissions for Graph API:
    • Sites.Read.All (will be needed for accessing every SPO site)
  • give the app a client secret which will be used to authenticate
    • it is more secure to opt for certificate-based auth (great article on this here), but I did not have the occasion to test it, so I’ll stay with a client secret for now
  • Copy the following script below
  • $tenantID, $appID, $appSecret variables need to be declared
  • The script has one required parameter (-ReportFile) which should be the path of the HTML report (parent directory must exist) and two mutually exclusive parameters: -All if you want a report on all document libraries in your tenant (SharePoint sites and OneDrive for Business too) OR -SiteUrl <string[]> which will report only on the site specified. Example:
    • PS C:\temp> .\Get-SPExternalSharingReport.ps1 -ReportFile C:\temp\reportDaniel1.html -SiteUrl "https://ftwelvehu-my.sharepoint.com/personal/daniel_f12_hu"
    • PS C:\temp> .\Get-SPExternalSharingReport.ps1 -ReportFile C:\temp\reportAll.html -All
[CmdletBinding(DefaultParametersetName="default")]
Param(
    [Parameter(Mandatory=$true)][string]$ReportFile,
    [parameter(ParameterSetName="seta")][string]$SiteUrl,
    [parameter(ParameterSetName="setb")][switch]$All
)

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$tenantID = '<tenantID>'
$appID = '<application (client) ID>'
$appSecret = '<client secret>'
$scope = 'https://graph.microsoft.com/.default'
$oAuthUri = "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token"
$body = [Ordered] @{
    scope = "$scope"
    client_id = "$appId"
    client_secret = "$appSecret"
    grant_type = 'client_credentials'
}
$headers = $null
$tokenexpiration = $null
function Check-AccessToken {
    if (((Get-Date).AddMinutes(5)) -gt $tokenExpiration){
        # Write-Host "Token expires in 5 minutes, refreshing" -ForegroundColor Yellow
        $response = Invoke-RestMethod -Method Post -Uri $oAuthUri -Body $body -ErrorAction Stop
        $AccessToken = $response.access_token
        $script:tokenExpiration = (Get-Date).AddSeconds($response.expires_in)

        # Define headers
        $script:headers = @{
            'Content-Type' = 'application/json'
            'Accept' = 'application/json'
            'Authorization' = "Bearer $AccessToken"
        }
    }
}
Check-AccessToken
#Get all SP sites
$url = 'https://graph.microsoft.com/v1.0/sites/'
$spSites = $null
while ($url -ne $null){
        Check-AccessToken
        $json_response =  (Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json) 
        $spSites += $json_response.value
        $url = $json_response.'@odata.nextLink'
    }


function Get-SPSiteSharedDocuments ($siteID){
    $url = "https://graph.microsoft.com/v1.0/sites/$($siteid)/lists"
    $obj_DocumentsList = $null
   # Write-host -ForegroundColor Yellow "Querying site lists $url"
    while ($url -ne $null){
        Check-AccessToken
        $json_response =  (Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json) 
        $obj_DocumentsList += $json_response.value
        $url = $json_response.'@odata.nextLink'
    }
    $obj_DocumentsList = $obj_DocumentsList | ? {$_.list.template -match "DocumentLibrary"} #mySiteDocumentLibrary for OneDrive, documentLibrary for SP
    foreach ($doclib in $obj_DocumentsList){
        $url = "https://graph.microsoft.com/v1.0/sites/$($siteid)/lists/$($doclib.id)/items?expand=driveitem"
   #     Write-host -ForegroundColor Yellow "Querying documents $url"
        $ListItems = $null
        while ($url -ne $null){
            Check-AccessToken
            Write-host -ForegroundColor Yellow "Querying documents $url"
            $json_response =  (Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json)
            $ListItems += $json_response.value
            $url = $json_response.'@odata.nextLink'
    }   
    $ListItems | % {$_.driveitem} | ? {$_.shared}
    }
}

function Get-SPSharedDocumentPermission ($driveID,$docID){
    $url = "https://graph.microsoft.com/v1.0/drives/$($driveID)/items/$($docID)/permissions"
   # write-host $url -ForegroundColor Yellow
    $obj_Permissions = $null
        while ($url -ne $null){
        Check-AccessToken
        $json_response =  (Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json)
        $obj_Permissions += $json_response.value
        $url = $json_response.'@odata.nextLink'
    }   

    $obj_Permissions | ? {$_.link}
    }

function Get-SPSiteDocSharingReport ($siteid){
 foreach ($item in (Get-SPSiteSharedDocuments $siteid)){
    Get-SPSharedDocumentPermission $item.parentreference.driveid $item.id | % {
    [pscustomobject]@{
        WebURL = ($spSites.Where({$_.id -eq $siteid}) ).weburl
        Path = $item.parentReference.path.substring($item.parentReference.path.indexof("root:"))
        ItemName = $item.name
        Role = $_.roles -join ","
        HasPassword = $_.haspassword
        ExpirationDate = $_.expirationDateTime
        Scope = $_.link.scope
        GuestUserMail = ($_.grantedtoidentitiesv2.siteuser.loginname | ? {$_ -match 'guest#'} | % {$_.split('#') | select -Last 1} | select -Unique ) -join ", "
        AADExternaluserUPN = ($_.grantedtoidentitiesv2.siteuser.loginname | ? {$_ -match '#ext#'} | % {$_.split('|') | select -Last 1} | select -Unique ) -join ", "
        } | % {if(($_.scope -eq "users") -and ($_.guestusermail -eq "") -and ($_.AADExternaluserUPN -eq "")){}else{$_}} # filter out entries shared only with org users
    }
   } 
}

$HTMLHeader = @"
<style>
TABLE {border-width: 1px; border-style: solid; border-color: black; border-collapse: collapse;}
TH {border-width: 1px; padding: 3px; border-style: solid; border-color: black;}
TD {border-width: 1px; padding: 3px; border-style: solid; border-color: black;}
</style>
"@
 
 if ($All){
$obj_report = foreach ($site in $spSites){Write-host "querying site $($site.weburl)" -ForegroundColor Yellow ;Get-SPSiteDocSharingReport $site.id}
$obj_report | ConvertTo-Html -Head $HTMLHeader | Out-File $ReportFile
}

 if ($SiteUrl){
   $siteToQuery = $spSites.Where({$_.webUrl -eq $SiteUrl})
   if ($siteToQuery){Get-SPSiteDocSharingReport $siteToQuery.id | ConvertTo-Html -Head $HTMLHeader | Out-File $ReportFile }else{Write-Host -ForegroundColor Red "Site not found"} 
 }

Example result:

Example output for Get-SPExternalSharingReport

Explained

The script starts with the authentication process. Nothing new here, except for the Check-AccessToken function which will be used before each webrequest:

Check-AccessToken

Access tokens typically expire in 1 hour, so the token needs to be refreshed during script execution (if it runs for more than 1 hour). This little function renews the access token 5 minutes before expiration.

First step is to query all sites and store it in a variable (@odata.nextlink logic explained below):

Then we declare the Get-SPSiteSharedDocuments function, which does the following:

  • queries the lists for the site specified (document libraries are list items too)
  • selects those lists that are based on a document library template (based on my research the template is mySiteDocumentLibrary for OneDrive and documentLibrary for SharePoint sites)
  • because it is possible to have multiple document libraries in a site, we loop through each library and query the items in the list (MS doc here)
  • if there are more than 200 items in a list, the results are paged, which is reflected in the response – it contains the URL of the next page in @odata.nextLink, so we use a while loop to go through each page until the response no longer contains the @odata.nextLink member
  • at the end of the function only those driveitems are selected which have a member named “shared”
Get-SPSiteSharedDocuments

Next function is the Get-SPSharedDocumentPermission. This function needs the driveID and the documentID as parameter and returns only those items that have a member named “link”. Some things to note:
– MS doc on the permissions API request (here) states the following:
The permissions relationship of DriveItem cannot be expanded as part of a call to get DriveItem or a collection of DriveItems. You must access the permissions property directly.
This is why it is called separately.
– SharePoint content shared externally is always link based (as far as I know), which is why only those items are selected

Get-SPSharedDocumentPermissions

The last function creates the report itself. Get-SPSiteDocSharingReport creates a pscustomobject with the information displayed in the HTML. There are some parts that are not the most beautiful (this may be due to my lack of hardcore scripting skills), but let me try to explain 🙂
Path: the original answer didn’t seem to contain a relative path, so this one is derived from the parentReference of the driveItem, example:
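For illustration, with a hypothetical parentReference.path value the derivation looks like this:

```powershell
# Hypothetical driveItem parentReference.path value
$path = '/drives/b!aBcDeFg123/root:/Documents/Projects'
# Keep everything from 'root:' onwards as the relative path
$path.Substring($path.IndexOf('root:'))   # root:/Documents/Projects
```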

HasPassword: if the document sharing is protected with a password, then this is reflected here (not visible in the native report)
ExpirationDate: if the sharing link is valid for a limited time, then the expiration is shown here (not visible in the native report)
Scope: anonymous (anyone with the link can access), user (only the user specified can open), organisation (shared with everyone in the org – in an external sharing report this can be relevant if you have guest users in the tenant)
GuestUserMail and AADExternalUserUPN: this took me some time to figure out. Link based sharings have the grantedtoidentitiesv2 property (link). This property may contain user and siteuser objects; user is a microsoft.graph identity, while siteuser is a sharePointIdentity (link). It means that (I guess) every invitee gets a siteuser identity, but those that can be mapped to an AzureAD identity are represented as a user object too. When the B2B Invitation Manager is enabled, these two will contain the same entries. My experience is that the mapped user object doesn’t get its email attribute populated until the invitee opens the link (the native report only shows the displayname for these entries). So to include all invitees, I decided to rely on siteuser.loginname, which is not too human readable but can be parsed. If it contains ‘guest#’, then the email address is extracted; ‘#ext#’ refers to an AzureAD guest user, and its UPN is returned.
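The loginname parsing can be illustrated with two hypothetical values (the exact loginname format may vary in your tenant – these are only shaped like the two cases described above):

```powershell
# Hypothetical siteuser.loginname values
$loginNames = @(
    'i:0#.f|membership|urn%3aspo%3aguest#partner@example.com',            # tenant guest invited by mail
    'i:0#.f|membership|partner_example.com#ext#@contoso.onmicrosoft.com'  # AzureAD guest user
)
# 'guest#' entries: the mail address is after the last '#'
$loginNames | ? {$_ -match 'guest#'} | % {$_.Split('#') | select -Last 1}  # partner@example.com
# '#ext#' entries: the UPN is after the last '|'
$loginNames | ? {$_ -match '#ext#'} | % {$_.Split('|') | select -Last 1}   # partner_example.com#ext#@contoso.onmicrosoft.com
```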

Because internal sharing can be link based too, entries where the scope is user but neither GuestUserMail nor AADExternalUserUPN is populated (= shared only with org users) are filtered out.

Get-SPSiteDocSharingReport

The rest of the script is just the header for the HTML report and the execution of these functions. When using the -All parameter, the script displays the URL of the site being queried. When an invalid URL is passed to the -SiteUrl parameter, the script displays a “Site not found” message – invalid here means a URL that is not in the $spSites variable.

The rest of the script

I really want to emphasise that many findings here are based on experience and testing – not on documentation (except where I refer to MS docs). You may eventually want to countercheck the results against the native reports.

Conditional Access policies – do you backup them ALL?

This will be a short post about a recent finding: AzureAD Conditional Access policies created from a template may be missing from your backups if you are not using the Graph API beta endpoint.

TL;DR
– When you create a Conditional Access policy using the “New policy from template (Preview)” button, the policy will not show up when querying policies with the “traditional tools”
– This may also apply anytime you have preview features set in your policy
– You may want to check if all your policies are backed up
– To switch to the beta endpoint in Microsoft Graph PowerShell, use the Select-MgProfile -Name “beta” command

Explained

I was creating a new Conditional Access policy for a customer where I have my own script running as a scheduled task to back up these settings. When there is a change, I get notified – and it was strange that this time I did not receive such an alert. I started to investigate the issue: no errors, but even the “full” backup did not notice the new policy.

Time to reproduce it in a sandbox environment. Here are 4 policies, one of which was created from a template:

Policy created from template

When querying the policies this one is missing:

Results using Get-AzureADMSConditionalAccessPolicy

Same results with Graph PowerShell:

Results using Get-MgIdentityConditionalAccessPolicy

Now it is time to open the F12 developer tools to see what the trick is. Opening the policy let me find the policy ID:

Finding the policy ID using F12 Dev Tools

Now if I try to query the policy by ID, I get the following error message:

Get-AzureADMSConditionalAccessPolicy -PolicyId '09981539-1959-4b4a-8543-1f71bc34217d'
Get-AzureADMSConditionalAccessPolicy : Error occurred while executing GetAzureADMSConditionalAccessPolicy
Code: BadRequest
Message: 1037: The policy you requested contains preview features. Use the Beta endpoint to retrieve this policy.
InnerError:
  RequestId: 1016ffe3-6338-4eaf-b334-4595a6023a6c
  DateTimeStamp: Tue, 28 Feb 2023 16:47:18 GMT
HttpStatusCode: BadRequest
HttpStatusDescription: Bad Request
HttpResponseStatus: Completed
At line:1 char:1
+ Get-AzureADMSConditionalAccessPolicy -PolicyId '09981539-1959-4b4a-85 ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-AzureADMSConditionalAccessPolicy], ApiException
    + FullyQualifiedErrorId : Microsoft.Open.MSGraphV10.Client.ApiException,Microsoft.Open.MSGraphV10.PowerShell.GetAzureADMSConditionalAccessPolicy
Error message

Same error with Graph PowerShell:

Get-MgIdentityConditionalAccessPolicy -ConditionalAccessPolicyId '09981539-1959-4b4a-8543-1f71bc34217d'
Get-MgIdentityConditionalAccessPolicy : 1037: The policy you requested contains preview features. Use the Beta
endpoint to retrieve this policy.
At line:1 char:1
+ Get-MgIdentityConditionalAccessPolicy -ConditionalAccessPolicyId '099 ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: ({ ConditionalAc...ndProperty =  }:<>f__AnonymousType35`3) [Get-MgIden
   tityC...cessPolicy_Get1], RestException`1
    + FullyQualifiedErrorId : BadRequest,Microsoft.Graph.PowerShell.Cmdlets.GetMgIdentityConditionalAccessPolicy_Get1

I don’t know how to use the beta endpoint with the AzureAD PowerShell module, but since it is being deprecated, I will rely on the Graph PowerShell cmdlet, which has the Select-MgProfile option to switch to the beta endpoint (link):

Select-MgProfile -Name "beta"

And here we go, now these policies are retrieved too:

Policies before and after switching to beta endpoint
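If you only need a one-off query, another option is to call the beta endpoint directly with Invoke-MgGraphRequest – a sketch, assuming the Policy.Read.All delegated scope is sufficient for your policies:

```powershell
Connect-MgGraph -Scopes 'Policy.Read.All'
# Hit the beta endpoint directly; template-created policies should be returned here too
$policies = Invoke-MgGraphRequest -Method GET -Uri 'https://graph.microsoft.com/beta/identity/conditionalAccess/policies'
$policies.value | ForEach-Object { $_.displayName }
```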

So if you have a policy backup solution in place, take a look at those backups if you are using templates or other preview features.