Disabling Entra Seamless SSO – some extra notes

There are some great resources on this topic (link, link2) and it is supposed to be an easy task – but why not overcomplicate it? So I thought I'd share my experiences.

TL;DR
– Seamless SSO is a nice convenience feature with some drawbacks. Disabling it is easy and recommended, even if it causes some minimal inconvenience
– The documentation was not clear enough for me, so I tried to track the SSO flow from the beginning
– I tried to cover some scenarios (browsers) to prepare for the transition to PRT-based SSO – but keep in mind that domain-only clients will not benefit from PRT-based SSO (which doesn't necessarily mean that users will be overwhelmed by authentication prompts)
– After disabling the feature, some extra steps may be needed for a complete removal (GPO, monitoring, documentation)

Seamless SSO on a web browser

The documentation (link) provides a detailed explanation of the behaviour, but it does not make it easy to connect the dots – so here is my interpretation:

User tries to access a web application, gets redirected to the Entra sign-in page

After the username is entered, the GetCredentialType endpoint (https://login.microsoftonline.com/common/GetCredentialType?mkt=en-US) returns the EstsProperties node, which contains a "DesktopSsoEnabled" property with the value "true":

This triggers a connection to the tenant specific autologon URL (SSOprobe): https://autologon.microsoftazuread-sso.com/<upnsuffix>/winauth/ssoprobe?<parameters>

The endpoint returns a 401 Unauthorized response, challenging the browser to provide a Kerberos ticket. The browser needs to be configured to trust this endpoint enough to provide one (e.g. by adding the site to the intranet zone)

The client will attempt to find the SPN for HTTP/autologon.microsoftazuread-sso.com in AD. This SPN is registered to the AZUREADSSOACC computer account:
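
To verify this in your own environment, you can query the account directly (a quick sketch, assuming the Active Directory PowerShell module is installed):

#List the SPNs registered on the AZUREADSSOACC computer account
Get-ADComputer AZUREADSSOACC -Properties servicePrincipalName | Select-Object -ExpandProperty servicePrincipalName

#Or search the forest for the SPN itself
setspn -Q HTTP/autologon.microsoftazuread-sso.com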

The client requests a ticket for this service and sends it to the ssoprobe endpoint. Entra verifies the ticket, resulting in an HTTP 200 OK response from the ssoprobe endpoint:

The next step is a request to the sso endpoint which returns the dssoToken: https://autologon.microsoftazuread-sso.com/<upnsuffix>/winauth/sso?<parameters>

This is followed by a POST to the https://login.microsoftonline.com/common/instrumentation/dssostatus endpoint, which returns some additional cookies (fpc) used during authentication:

Next the “actual login” is performed on the https://login.microsoftonline.com/common/login endpoint, using the dssoToken and the cookies received:

From an admin standpoint, at this stage it’s like the user entered the password. If additional controls are in place (like MFA requirement or other Conditional Access policies), these are evaluated and access is granted (or not).

The flow described above assumes that the user is not signed into Edge and that no other settings are in place that automate the sign-in process.

Seamless SSO on a client application

The same logic applies in this scenario: the docs are a bit vague, so I tried to understand what is happening in the background. Fiddler + OneDrive login is used here:

The user is not signed in, and "the native application retrieves the username of the user from the device's Windows session" – this one is very interesting. At first I thought that it uses the UserPrincipalName and sends the UPN suffix (see next step), but this may not be true. As stated in another doc:

Sign-in username can be either the on-premises default username (userPrincipalName) or another attribute configured in Microsoft Entra Connect (Alternate ID). Both use cases work because Seamless SSO uses the securityIdentifier claim in the Kerberos ticket to look up the corresponding user object in Microsoft Entra ID.

My guess is that if the UPN suffix corresponds to a verified custom domain, then the common endpoint is queried (see below) – if Alternate_id is used, then the client is configured with a DomainHint which determines the tenant to be used.

The client issues an HTTP GET to the following endpoint (no authentication):

https://login.microsoftonline.com/common/UserRealm/?user=f12.hu&api-version=1.0&checkForMicrosoftAccount=false&fallback_domain=ftwelvehu.onmicrosoft.com

This request in itself does not return the MEX endpoint, but inserting the following header into the request does the job:

"tb-aad-env-id" = "10.0.26100.5074"

The result:

It even works with the on-premises domain name – just replace the URL's ?user=<domain> with ?user=<on-premises domain> – the only difference is that the domain will be replaced by the fallback domain*, e.g.:

{"ver":"1.0","account_type":"Federated","domain_name":"ftwelvehu.onmicrosoft.com","federation_protocol":"WSTrust","federation_metadata_url":"https://autologon.microsoftazuread-sso.com/ftwelvehu.onmicrosoft.com/winauth/trust/mex?client-request-id=9c122fb5-263b-46af-a2fc-508ffab7bf3c","cloud_instance_name
":"microsoftonline.com","cloud_audience_urn":"urn:federation:MicrosoftOnline"}

*in case of the on-premises domain, the fallback_domain needs to be specified in the URL. I didn’t find any trace of how this information is fetched, but it is an easy task so I didn’t investigate that part (any verified domain’s fallback domain can be queried).
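
For reference, the request above can be reproduced in PowerShell like this (a minimal sketch reusing the example URL and the header value from my trace):

#Query the UserRealm endpoint with the extra header that makes it return the MEX endpoint
$headers = @{ "tb-aad-env-id" = "10.0.26100.5074" }
Invoke-RestMethod -Uri "https://login.microsoftonline.com/common/UserRealm/?user=f12.hu&api-version=1.0&checkForMicrosoftAccount=false&fallback_domain=ftwelvehu.onmicrosoft.com" -Headers $headers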

So now we have the MEX endpoint. The next step is a request to it, which returns a lot of information – I guess this is the step described in the documentation as follows:

The app then queries the WS-Trust MEX endpoint to see if integrated authentication endpoint is available. 

Integrated authentication endpoint is available, so a Kerberos challenge is issued (HTTP 401 at first):

Since the autologon URL is added to the intranet zone list, the client requests a ticket for HTTP/autologon.microsoftazuread-sso.com, then this ticket is passed to Entra and a SAML token (including the DesktopSsoToken) is returned (POST to https://autologon.microsoftazuread-sso.com/f12.hu/winauth/trust/2005/windowstransport?client-request-id=<id>) :

The SAML token is then presented to the OAuth2 endpoint and we have a refresh token, access token and id token.

Now that I’ve come to understand how the feature works, let’s get rid of it 🙃

Preparing for the non-DSSO world

On domain joined devices you don't have a PRT (Primary Refresh Token), so you have to either Hybrid Join the affected computers or accept that SSO will not work for them (users will be prompted for authentication when accessing cloud resources). Microsoft phrases this a bit more professionally:

Seamless SSO is an opportunistic feature. If it fails for any reason, the user sign-in experience goes back to its regular behavior – that is, the user needs to enter their password on the sign-in page.

DesktopSSO login does not provide information on the device identity to Entra. This makes sense on a domain-only device, but may cause some headaches on Hybrid Joined devices.

Example: if you require a compliant device to access a resource but DSSO is used (hence no device identity is provided), the user will get blocked.

To overcome this issue, the client browsers need to be instructed to use the PRT (Primary Refresh Token) for the authentication process. Microsoft provided some guidance here.

Let’s start with 3rd party browsers:
– Chrome: enable CloudAPAuth as described in the previous link (same logic applies to other Chromium based browsers)
– Firefox: the Microsoft guidance says "Allow Windows single sign-on for Microsoft, work, and school accounts". If you use group policy to configure the setting, import the Firefox ADMX and look for the Windows SSO setting under Mozilla/Firefox (a registry sketch for both browsers follows below)
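
For reference, here is what these two settings boil down to in the registry (a sketch for testing on a single machine – in production, configure them via GPO/Intune; the policy names CloudAPAuthEnabled and WindowsSSO are taken from the respective vendor documentation):

#Chrome (and other Chromium-based browsers honoring the same policy): allow SSO via the PRT
New-Item -Path 'HKLM:\SOFTWARE\Policies\Google\Chrome' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Google\Chrome' -Name 'CloudAPAuthEnabled' -Value 1 -Type DWord

#Firefox: Allow Windows single sign-on for Microsoft, work, and school accounts
New-Item -Path 'HKLM:\SOFTWARE\Policies\Mozilla\Firefox' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Mozilla\Firefox' -Name 'WindowsSSO' -Value 1 -Type DWord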

Edge: the docs state that Edge 85+ requires the user to be signed in to the browser to properly pass device identity:

This is the official and supported scenario – however, since the new Edge is Chromium based, you can also enable CloudAPAuth. Enforcing sign-in to Edge combined with implicit sign-in will probably fit standard scenarios, but in my case it just resulted in more questions than answers (overcomplicating, as usual 🙃).

So as an alternative, I used the following setting:

Either setting will make the browser skip the DSSO flow and use the signed-in user’s credentials.

Unfortunately, I don’t have experience with native clients. Microsoft apps automatically use the user’s cloud identity on a hybrid joined device (or at least they transition when disabling DSSO)

Detecting DSSO usage

This topic was greatly covered by Nathan McNulty and Daniel Bradley. One way to overcomplicate it: if you happen to have Defender for Endpoint on your devices, you can use the DeviceNetworkEvents table to find which process is connecting to autologon.microsoftazuread-sso.com:

DeviceNetworkEvents
| where RemoteUrl == @"autologon.microsoftazuread-sso.com"
| project TimeGenerated,DeviceName, InitiatingProcessFileName, InitiatingProcessAccountName, InitiatingProcessParentFileName, DeviceId, InitiatingProcessCommandLine
| sort by TimeGenerated desc 

This will return an enormous amount of events that I wasn't able to fully process – but it revealed some funny circumstances (like computer accounts that didn't successfully finish the hybrid join process, or Edge attempting to reach this URL when running as a network service [probably some update mechanism])

Most of these events will disappear when DSSO is “deactivated”.

Disabling DSSO

Microsoft provides detailed guidance on disabling the feature. I opted for the PowerShell approach:
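
The sequence per Microsoft's guidance looks roughly like this (run on the server where Microsoft Entra Connect is installed; the module path may differ in your setup):

#Import the Seamless SSO module shipped with Microsoft Entra Connect
Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AzureADSSO.psd1'
#Authenticate with a sufficiently privileged Entra account (a credential prompt appears)
New-AzureADSSOAuthenticationContext
#Check the current state of the feature, then disable it
Get-AzureADSSOStatus | ConvertFrom-Json
Enable-AzureADSSO -Enable $false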

The state can be verified under the Connect Sync menu

At this point, the configuration is not removed, so it can be re-enabled if needed. On the client side, DSSO will still be attempted, but the SSO endpoint will not return any data (no dssoToken is received):

This means that users/applications will transition to other modern authentication flows (which also means that DSSO will be phased out and not attempted again). I think 1-2 weeks is enough to wait for user feedback. If everything is okay, the feature can be completely removed:

Disable-AzureADSSOForest -DomainFqdn <on-premises domain>

Then finally, the AZUREADSSOACC computer account can be deleted from AD.

Some additional cleanup

Entra Seamless SSO does not work out of the box – the autologon URLs need to be added to the intranet zone settings (docs). This may be implemented via GPO or Intune, but my point is that for a complete removal, you may want to remove these settings as well.
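
For example, if the zone assignment was pushed as a plain registry value (instead of the Site to Zone Assignment List policy), the leftover entry could be removed like this (a sketch – adapt it to however the setting was deployed in your environment):

#Remove the autologon host from the per-user Intranet zone mapping
Remove-Item -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\microsoftazuread-sso.com' -Recurse -ErrorAction SilentlyContinue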

If you set up some automation to remind you about the Kerberos decryption key rotation (e.g. by monitoring the AZUREADSSOACC computer account's password age), don't forget to remove it.
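
Such a reminder is typically just a scheduled check like the following (a sketch, assuming the Active Directory PowerShell module) – this is the kind of automation to retire:

#Warn when the Kerberos decryption key (the AZUREADSSOACC password) is older than 30 days
$ssoAcc = Get-ADComputer AZUREADSSOACC -Properties PasswordLastSet
if ($ssoAcc.PasswordLastSet -lt (Get-Date).AddDays(-30)) {
    Write-Warning "AZUREADSSOACC password is older than 30 days - rotate the Kerberos decryption key"
}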

And update your documentation 😉

Quicknote: Hybrid Exchange mailbox migration account vs. modern authentication policy

Recently, I came across an uncommon issue while disabling legacy authentication in a hybrid Exchange environment. Since I did not find any exact solutions, I thought I'd share my story about modern authentication in on-premises Exchange server and how it affects the mailbox migration account. Spoiler: it breaks mailbox migration

TL;DR
– Exchange Online uses NTLM authentication for MRSProxy
– if you set the Exchange organization config's default authentication policy to one that disables legacy authentication, the mailbox migration account will not be able to authenticate (except when this account has a dedicated "allow legacy auth" policy assigned)

The story

So the Exchange Team blogged about disabling legacy authentication in Exchange (link) and I thought this was an easy win: we have HMA enabled, we notified the users about the upcoming change, so all we have to do is create the "Block Legacy Auth" policy, gradually roll it out to users, then set it as default (Set-OrganizationConfig -DefaultAuthenticationPolicy "Block Legacy Auth"). Everything went well, but some weeks later a mailbox migration batch to Exchange Online failed with the following error:

Error: CommunicationErrorTransientException: The call to 'https://<exchangeserver>/EWS/mrsproxy.svc' failed. Error details: The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate, NTLM'.. --> The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate, NTLM'.

We figured out that it had something to do with the new authentication policy – but every other aspect of the hybrid config was working fine. So as a workaround we created an "Allow legacy authentication" policy (to be honest, it's more like: "don't disable any authentication method") and assigned it to the mailbox migration account:

Get-User _srv_exombxmigr | Set-User -AuthenticationPolicy "Allow legacy authentication"

Authentication policies take up to 30 minutes to apply; an iisreset will reload the config immediately.

To make it more secure, it is sufficient to enable legacy auth for EWS only (omit -BlockLegacyAuthWebServices):

New-AuthenticationPolicy "Enable EWS legacy auth" -BlockLegacyAuthImap -BlockLegacyAuthActiveSync -BlockLegacyAuthOfflineAddressBook -BlockLegacyAuthRpc -BlockLegacyAuthAutodiscover -BlockLegacyAuthMapi -BlockLegacyAuthPop

Get-User _srv_exombxmigr | Set-User -AuthenticationPolicy "Enable EWS legacy auth"
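
To double-check which policy is effective on the account (the AuthenticationPolicy property should reflect the assignment):

Get-User _srv_exombxmigr | Format-List Name, AuthenticationPolicy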

Also worth checking: troubleshooting guide for hybrid migration

PowerShell with Entra CBA – unattended access to the Defender portal when Graph API or application permissions do not fit

One of my previous posts covered a "basic" way to track Secure Score changes using Graph API with application permissions. While I still prefer application permissions (over service accounts) for unattended access to certain resources, sometimes it is not possible – for example, when you want to access resources behind the Defender portal's apiproxy (like the scoreImpactChangeLogs node in the secureScore report). To overcome this, I decided to use Entra Certificate-based Authentication, as this method provides a "scriptable" (and "MFA capable") way to access these resources.

A lot of credit goes to the legendary Dr. Nestori Syynimaa (aka DrAzureAD) and the AADInternals toolkit (referring to the CBA module, as this provided me the fundamentals to understand the authentication flow). My script is mostly a stripped-down version of his work, but it targets the security.microsoft.com portal. Credit goes to Marius Solbakken as well for his great blogpost on Azure AD CBA, which gave me the hint to fix an error during the authentication flow (details below).

TL;DR

  • the script uses certificate-based auth (not to be confused with app auth with a certificate) to access https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2 which is used to display Secure Score information on the Defender portal
  • prerequisites: Entra CBA configured for the "service account", appropriate permissions granted for the account to access Secure Score information, a certificate to be used for auth
  • the script provided is only for research/entertainment purposes, this post is more about the journey and the caveats than the result
  • tested on Windows PowerShell (v5.1), encountered issues with PowerShell (v7.5)

The script

$tenantID = "<your tenant id>"
$userUPN = "<CBA user UPN>"
$thumbprint = "<thumbprint of certificate installed in Cert:\CurrentUser\My\ >"

function Extract-Config ($inputstring){
    $regex_pattern = '\$Config=.*'
    $matches = [regex]::Match($inputstring, $regex_pattern)
    $config= $matches.Value.replace("`$Config=","") #remove $Config=
    $config = $config.substring(0, $config.length-1) #remove last semicolon
    $config | ConvertFrom-Json
}

#https://learn.microsoft.com/en-us/entra/identity/authentication/concept-authentication-web-browser-cookies
##Cert auth to security.microsoft.com 
# Credit: https://github.com/Gerenios/AADInternals/blob/master/CBA.ps1
# STEP1 - Invoke the first request to get redirect url
$webSession = New-Object Microsoft.PowerShell.Commands.WebRequestSession
$response = Invoke-WebRequest -Uri "https://security.microsoft.com/" -Method Get -WebSession $webSession  -ErrorAction SilentlyContinue -MaximumRedirection 0 -UseBasicParsing
$url = $response.Headers.'Location'

# STEP2 - Send HTTP GET to RedirectUrl
$login_get = Invoke-WebRequest  -Uri $Url -Method Get -WebSession $WebSession -ErrorAction SilentlyContinue -UseBasicParsing -MaximumRedirection 0

# STEP3 - Send POST to GetCredentialType endpoint
#Credit: https://goodworkaround.com/2022/02/15/digging-into-azure-ad-certificate-based-authentication/
$GetCredentialType_Body = @{
    username = $userUPN
    flowtoken = (Extract-Config -inputstring $login_get.Content).sFT
    }

$getCredentialType_response = Invoke-RestMethod -method Post -uri "https://login.microsoftonline.com/common/GetCredentialType?mkt=en-US" -ContentType "application/json" -WebSession $webSession -Headers @{"Referer"= $url; "Origin" = "https://login.microsoftonline.com"} -Body ($GetCredentialType_Body | convertto-json -Compress) -UseBasicParsing

#STEP 4 - Invoke REST POST to certauth endpoint with ctx and flowtoken using certificate
$CBA_Body = @{
    ctx = (Extract-Config -inputstring $login_get.Content).sctx
    flowtoken = $getCredentialType_response.FlowToken
    }
$CBA_Response = Invoke-RestMethod -UseBasicParsing -Uri "https://certauth.login.microsoftonline.com/$TenantId/certauth" -Method Post -Body $CBA_Body -Certificate (get-item Cert:\CurrentUser\My\$thumbprint)

#STEP 5 - Send authentication information to the login endpoint
$login_msolbody = $null
$login_msolbody = @{
        login = $userUPN
        ctx = ($CBA_Response.html.body.form.input.Where({$_.name -eq "ctx"})).value
        flowtoken = ($CBA_Response.html.body.form.input.Where({$_.name -eq "flowtoken"})).value
        canary = ($CBA_Response.html.body.form.input.Where({$_.name -eq "canary"})).value
        certificatetoken = ($CBA_Response.html.body.form.input.Where({$_.name -eq "certificatetoken"})).value
        }

$headersToUse = @{
        'Referer'="https://certauth.login.microsoftonline.com/" 
        'Origin'= "https://certauth.login.microsoftonline.com"                
        }

$login_postCBA = Invoke-WebRequest -UseBasicParsing -Uri "https://login.microsoftonline.com/common/login" -Method Post -Body $login_msolbody -Headers $headersToUse -WebSession $webSession 

#STEP 6 - Make a request to login.microsoftonline.com/kmsi to get code and id_token
$login_postCBA_config = (Extract-Config -inputstring $login_postCBA.Content)
        $KMSI_body = @{
            "LoginOptions" = "3"
            "type" = "28"
            "ctx" = $login_postCBA_config.sCtx
            "hpgrequestid" = $login_postCBA_config.sessionId
            "flowToken"	= $login_postCBA_config.sFT
            "canary" = $login_postCBA_config.canary
            "i19" = "2326"
        }
        
        
$KMSI_response = Invoke-WebRequest -UseBasicParsing -Uri "https://login.microsoftonline.com/kmsi" -Method Post -WebSession $WebSession -Body $KMSI_body

#STEP 7 - add sessionID cookie to the websession as this will be required to access security.microsoft.com (probably unnecessary)
#$websession.Cookies.Add((New-Object System.Net.Cookie("s.SessID", ($response.BaseResponse.Cookies | ? {$_.name -eq "s.SessID"}).value, "/", "security.microsoft.com"))) #s.SessID cookie is retrived during first GET to defender portal

#STEP 8 - POST the id_token and session information to security.microsoft.com to get sccauth and XSRF-TOKEN cookies
$securityPortal_POST_body = @{
    code = ($KMSI_response.InputFields.Where({$_.name -eq "code"})).value
    id_token = ($KMSI_response.InputFields.Where({$_.name -eq "id_token"})).value
    state = ($KMSI_response.InputFields.Where({$_.name -eq "state"})).value
    session_state = ($KMSI_response.InputFields.Where({$_.name -eq "session_state"})).value
    correlation_id = ($KMSI_response.InputFields.Where({$_.name -eq "correlation_id"})).value
    }
$securityPortal_POST_response = Invoke-WebRequest -UseBasicParsing -Uri "https://security.microsoft.com/" -Method Post -WebSession $webSession -Body $securityPortal_POST_body -MaximumRedirection 1
##END of Cert auth to security.microsoft.com 

## Query the secureScoresV2
#Decode the XSRF-TOKEN
Add-Type -AssemblyName System.Web #System.Web is not loaded by default in Windows PowerShell 5.1
$xsrfToken = $webSession.Cookies.GetCookies("https://security.microsoft.com") | ? {$_.name -eq "XSRF-TOKEN"} | % {$_.value}
$xsrfToken_decoded = [System.Web.HttpUtility]::UrlDecode($xsrfToken)

#Send GET to secureScoresV2 with the decoded XSRF-TOKEN added to the headers
$SecureScoresV2_headers = @{
    "x-xsrf-token" = $xsrfToken_decoded
    }
$secureScoresV2_response = Invoke-WebRequest -UseBasicParsing -Uri "https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2?`$top=400" -WebSession $webSession -Headers $SecureScoresV2_headers 

#RESULT
$secureScoreInfo = $secureScoresV2_response.Content | ConvertFrom-Json
$secureScoreInfo.value

Explained

Since I'm not a developer, I will explain all the steps (the result of research and a lot of guesswork) as I experienced them (let's call it the sysadmin aspect). So essentially, this script "mimics" the user opening the Defender portal, authenticating with CBA, clicking on Secure Score – and it returns the raw information which the browser would transform into something user-friendly. As a prerequisite, the certificate (with the private key) needs to be installed in the Current User personal certificate store of the user running the script.

Step 0 is to populate the $tenantID, $userUPN and $thumbprint variables accordingly

Step 1 is creating a WebRequestSession object (like a browser session, from my perspective the $websession variable is just a cookie store) and navigating to https://security.microsoft.com. When performed in a browser, we get redirected to the login portal – if we open the browser developer tools, we can see in the network trace that this means a 302 HTTP code (redirect) with a Location header in the response. This is where we get redirected:

From the script aspect, we will store this Location header in the $url variable:

Notice that every Invoke-WebRequest/Invoke-RestMethod command uses the -UseBasicParsing parameter. According to the documentation, this parameter is deprecated in newer PowerShell versions, and from v6.0.0 all requests use basic parsing only. However, I'm using v5.1, which relies on Internet Explorer to parse the content – so if IE is not configured, is disabled or anything else, the command could fail.

At this point the $webSession variable contains the following cookies for security.microsoft.com: s.SessID, X-PortalEndpoint-RouteKey and an OpenIdConnect.nonce:

Step 2 is to open the redirectUrl:

When opened, we receive some cookies for login.microsoftonline.com, including buid, fpc and esctx (documentation for the cookies here):

But the most important information is the flowtoken (sFT) which can be found in the response content. In the browser trace it looks like this:

In PowerShell, the response content is the $login_get variable's Content member, returned as a string. This needs to be parsed, because it is embedded in a script HTML node, beginning with $Config:

I’m using the Extract-Config function to get this configuration data (later I found that AADInternals is using the Get-Substring function defined in CommonUtils.ps1 which is more sophisticated 🙃):

Step 3 took some time to figure out. When I tried to use AADInternals’ Get-AADIntadminPortalAccessTokenUsingCBA command I got an error message:

AADSTS90036: An unexpected, non-retryable error stemming from the directory service has occurred.

Luckily I found this blogpost which led me to think that this GetCredentialType call is missing in AADInternals (probably something is misconfigured on my side and this can be skipped). This call – from my standpoint – is returning a new flowtoken and this new one needs to be sent to the certauth endpoint. (Until I figured it out, every other attempt to authenticate on the certauth endpoint resulted in AADSTS90036).

Step 4 is basically the same as in AADInternals' module: the flowtoken and ctx are posted to the certauth.login.microsoftonline.com endpoint.

Notice here that the ContentType parameter is set to "application/json" – where it is not specified, it defaults to "application/x-www-form-urlencoded" for a POST call. In the browser trace, this is defined in the Content-Type header:

Step 5 is slightly different from AADInternals' CBA module, but follows the same logic: send the login (userprincipalname), ctx, flowtoken, canary and certificatetoken content to the https://login.microsoftonline.com/common/login endpoint, and in turn we receive the updated flowtoken, ctx, sessionid and canary information, which is posted to the https://login.microsoftonline.com/kmsi endpoint in Step 6

The KMSI_response contains the id_token, code, state, session_state and correlation_id. When we look back at the browser trace, we can see that these parameters are passed to the security.microsoft.com portal to authenticate the user.

Step 7 is probably totally unnecessary (commented out) and can be the result of too much desperate testing. It just adds the s.SessID cookie to our websession, which is also needed during authentication (without this cookie, you will immediately receive some timeout errors). This cookie is received upon the first request (I guess my testing involved clearing some variables… anyway, it won't hurt)

Step 8 is the final step in this authentication journey: we post the content we received in the $KMSI_response variable. In the browser trace we can see that this request returns an HTTP 302 status code, followed by a new request to the same endpoint.

This is why the -MaximumRedirection parameter is set to 1 in this step. (Some of my tests failed with 1 redirection allowed, so if it fails, it can be increased – to 5, for example.)

Finally we have the sccauth and XSRF-TOKEN cookies which are required to access resources.

I thought this was the green light and all I needed was to use the websession to access the secureScoresV2 endpoint – but some tweaking was required, because Invoke-WebRequest failed with the following error message:

Invoke-WebRequest : {"Message":"The web page isn\u0027t loading correctly. Please reload the page by refreshing your browser, or try deleting the cookies from your browser and then sign in again. If the problem persists, contact 
Microsoft support."

Taking a look at the request, I noticed that the XSRF-TOKEN is sent as the X-Xsrf-Token header (even though the cookie is present in the $websession)

XSRF-TOKEN sent as X-Xsrf-Token header

It took some (~a lot of) time to figure out that this token is URL-encoded, so it needs to be decoded before using it as a header:

Slight but crucial difference between the encoded and the decoded XSRF-TOKEN

So once we have the decoded token, it can be used as the x-xsrf-token header:

The response content is in JSON format, the ConvertFrom-Json cmdlet will do the parsing.

Compared to secureScore exposed by Graph API, here we have the ScoreImpactChangeLogs property which is missing in Graph.

Example of the ScoreImpactChangeLogs property

This is just one example (of endless possibilities) of using Entra CBA to access the Defender portal, but my main goal was to share my findings and give a hint on reaching other useful stuff on security.microsoft.com.

How much time are your users wasting on "traditional" MFA?

Recently, I came across a post on LinkedIn which demonstrated that passkey authentication is way faster than the traditional password + MFA notification login. It made me curious: exactly how much time does MFA take?

TL;DR
– This report uses the SignInLogs table which needs to be configured in Diagnostic settings
– Unfortunately, I did not manage to gather the same info from the AADSignInEventsBeta table in Defender or from the sign-in logs in Microsoft Graph
– Everything written here is based on my tests and measurements, so it may contain inaccurate conclusions

The following query displays the authentication method, plus the average and overall time spent completing the MFA prompt:

let StrongAuthRequiredSignInAttempts = SigninLogs
	| where ResultType == "50074"
	| distinct ResultType,UniqueTokenIdentifier,CorrelationId;
let MFA1 =SigninLogs
	| join kind=inner StrongAuthRequiredSignInAttempts on UniqueTokenIdentifier
	| mv-expand todynamic(AuthenticationDetails)
	| project stepdate=todatetime(AuthenticationDetails.authenticationStepDateTime), authMethod = tostring(AuthenticationDetails.authenticationMethod), stepResult = tostring(AuthenticationDetails.authenticationStepResultDetail), RequestSequence = todouble(AuthenticationDetails.RequestSequence), StatusSequence = todouble(AuthenticationDetails.StatusSequence), CorrelationId,RequestSeq_UnixTime = unixtime_milliseconds_todatetime(todouble(AuthenticationDetails.RequestSequence)), UniqueTokenIdentifier, StatusSeq_UnixTime = unixtime_milliseconds_todatetime(todouble(AuthenticationDetails.StatusSequence)), MFAMethod =tostring(MfaDetail.authMethod)
    | summarize make_set(stepResult), MFAStart=min(stepdate), MFAEnd=max(stepdate), TimeSpent=totimespan(max(stepdate)-min(stepdate)),TimeSpentv2=totimespan(maxif(StatusSeq_UnixTime, StatusSequence > 1)-minif(RequestSeq_UnixTime, RequestSequence > 1)) by UniqueTokenIdentifier,MFAMethod
    | where set_stepResult has "MFA successfully completed"
    ;
MFA1
| where isnotempty(MFAMethod)
| project MFAMethod,TimeSpent = coalesce(TimeSpentv2,TimeSpent)
| summarize AverageMFATime=avg(TimeSpent),SumMFATime=sum(TimeSpent) by MFAMethod

Example result:

Explanation

The first step was to find those sign-in attempts that are interrupted because MFA is needed. These can be easily found, as there is a ResultDescription column where we can filter for "Strong Authentication is required." entries:

SigninLogs
| where ResultDescription == "Strong Authentication is required."

Or use the ResultType column, where the 50074 state code indicates the same (reference: https://login.microsoftonline.com/error?code=50074).

The first catch is that not every event of the sign-in session has this field populated with the same value (for logical reasons). Let's take a simple login to the Azure portal with an Authenticator code as MFA:

In this example, I intentionally waited 30 seconds to provide the code (after successful password entry) [the code prompt started at 2024.12.09 9:41:15, the code was sent at 9:41:45]. The TimeGenerated field is a bit misleading, because it is the creation timestamp of the event entry, not of the authentication event (that part is stored in the AuthenticationDetails column).
It is also worth mentioning that the CorrelationId remains the same within a browser session (even if session policies require re-authentication) – so if, for example, the Azure portal is kept open in the browser but re-authentication happens, the CorrelationId stays the same, while the authentication steps (re-entering the password, new MFA prompt) need to be handled separately. This is why I'm using the UniqueTokenIdentifier.

But let’s get back to the example and extend the AuthenticationDetails column:

Some fields are not totally clear to me, but according to my measurements the most accurate timespan of "doing MFA" is the time between the "MFA required in Azure AD" and the "MFA completed in Azure AD" events (it's not totally accurate here, because I spent some time changing the MFA method).

However, this approach (time between “MFA required” and “MFA completed”) will not cover all other MFA methods, because “MFA required” is not always present in the logs. For example, the next sign-in example was using Mobile app notification as MFA:

At this point the possible solution is to either write a query for each authentication method or try to find a unified approach. I opted for the unified option: assume that the “MFA start time” is the first logged AuthenticationStepDate and the “MFA end time” is the last logged AuthenticationStepDate where we have “MFA successfully completed” entry (this one seems to be present in every MFA type).

This looks almost appropriate, but in the case of “Mobile app notification” I found the RequestSequence and StatusSequence fields which are Unix timestamps and look more precise:

But since these fields are not always present, I chose the KQL coalesce() function to return the TimeSpentv2 value when present – otherwise return the TimeSpent value.

Note1: the summarize operator needs to group by UniqueTokenIdentifier and MFAMethod, because without the MFAMethod, "Password" would also be returned as an authentication factor.

Note2: when calculating TimeSpentv2, there were authentication steps where the StatusSequence field was empty, 0 or 1. These are clearly not Unix timestamps, so only values greater than 1 are considered here

+1 point for passkey authentication 🙃

Find clients authenticating from unassigned AD subnets – using Defender for Identity

A well-maintained AD topology is very important, because domain joined clients use this information to locate the optimal domain controller (DCLocator documentation here) – failing to find the most suitable domain controller will have a performance impact on the client side (slow logon, group policy processing, etc.). In an ideal world, when a new subnet is created and AD joined computers are placed there, AD admins are notified and they assign the subnet to the appropriate site – but sometimes this is not the case.

There are several methods to detect IP addresses coming from unassigned subnets:
– By analyzing the \\<dc>\admin$\debug\netlogon.log logfiles (example here)
– Looking for 5778 EventID in System log (idea from here)
– Using PowerShell, get all client-registered DNS entries and look them up against the replication subnets (some IP subnet calculator will be needed)

My idea was to use Defender for Identity logs (mainly because I recently (re)discovered the ipv4_lookup plugin in Kusto 🙃).

TL;DR
– by defining the ADReplicationSubnets as a datatable, we can find logon events from the IdentityLogonEvents table where clients use an IP address that is not in any replication subnet
– we can use a “static” datatable, or schedule a PowerShell script which will dynamically populate the items in this table

The query:

let IP_Data = datatable(network:string)
[
    "10.0.1.0/24", //example subnet1
    "10.0.2.0/24", //example subnet2
    "192.168.0.0/16", //example subnet3
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty( network)

Quite simple, isn't it? So we filter for successful Kerberos logon events (without the Protocol filter, other logon events could generate noise) and use the ipv4_lookup function to look up the IP address in the "IP_Data" variable's "network" column, including those entries that cannot be matched with any subnet – then we filter for the unmatched entries.

Example result

Scheduling the query as a PowerShell script

So far, so good. But over time the list of subnets may change, grow, etc. – how can this subnet list be populated dynamically? Using the Get-ADReplicationSubnet command, for example. As a prerequisite, I created an app registration with the ThreatHunting.Read.All application permission (with a certificate as credential):

App registration for script scheduling

The following script is used:

#required scope: ThreatHunting.Read.All

##Connect Microsoft Graph using Certauth
$tenantID = '<tenantID>'
$clientID = '<clientID>'
$certThumbprint = "<certThumbprint>"

Connect-MgGraph -TenantId $tenantID -ClientId $clientID -CertificateThumbprint $certThumbprint

##Define hunting query
$huntingQuery = '
let IP_Data = datatable(network:string)
['+( (Get-ADReplicationSubnet -filter *).Name | % {'"' + $_ + '",'}) +'
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty( network)
'

#construct payload with 7 days timespan
$body = @{Query = $huntingQuery
    Timespan = "P7D"
} | ConvertTo-Json

$url = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"
#Run hunting query
$response = Invoke-MgGraphRequest -Method Post -Uri $url -Body $body

$results = foreach ($result in $response.results){
    [pscustomobject]@{
        IPAddress = $result.IpAddress
        DeviceName = $result.DeviceName
        LogonCount = $result.LogonCount
        }
}

$results

The hunting query is the same as above, but the datatable entries are populated from the results of the Get-ADReplicationSubnet command (with some dirty string formatting, like adding quotation marks and commas). In the $body variable, the Timespan is set to seven days (ISO 8601 format) – when Timespan is not set, it defaults to 30 days (reference)
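
If you prefer something less dirty, the subnet list can also be built like this (a sketch producing the same rows):

#Build the datatable rows from the AD replication subnets
$subnetRows = ((Get-ADReplicationSubnet -Filter *).Name | ForEach-Object { '"{0}",' -f $_ }) -join "`n"
#Then splice $subnetRows into $huntingQuery between the [ and ] lines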

Running the script

From this point, it is up to you to schedule the script (or fine tune the output) and email the results. 😊

Extra hint: if you have a multi-domain environment, the hunting query may need to be “domain specific” – for this purpose I would insert the following filter: | where AdditionalFields.Spns == “krbtgt/<domainDNSName>”, for example:

IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| where AdditionalFields.Spns == "krbtgt/F12.HU"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty( network)

Tracking Microsoft Secure Score changes

Microsoft Secure Score can be a good starting point in assessing organizational security posture. Improvement actions are added to the score regularly (link) and points achieved are updated dynamically.

For me, Secure Score is a measurement of hard work represented in little percentage points. Every little point is a reward, which can be taken back by Microsoft when changes happen in the current security state (be it the result of an action [e.g. someone enabled the print spooler on a domain controller] – or of inactivity [e.g. a domain admin account became "dormant"]). Whatever the reason for the score degradation, I want to be alerted, because I don't want to check this chart on a daily basis. Unfortunately, I didn't find any ready-to-use solution, so I'm sharing my findings.

TL;DR
– The Get-MgSecuritySecureScore Graph PowerShell cmdlet can be used to fetch 90 days of score data
– The basic idea is to compare the current scores with yesterday's scores and report on the differences
– When new controlScores (~recommendations) arrive, send a separate alert
– The script I share is a PowerShell script with certificate auth, but no Graph PowerShell cmdlets are used, just native REST API calls (sorry, I still have issues with Graph PS, while the native approach is consistent). Using app auth with a certificate, the script can be scheduled to run on a daily basis (I don't recommend a more frequent schedule, as there are temporary score changes which are mostly self-remediating)

Prerequisites
We will need an app registration with the Microsoft Graph/SecurityEvents.Read.All application permission (don't forget the admin consent):

App registration with SecurityEvents.Read.All permission

On the server on which you are planning to schedule the script, create a new certificate. Example PowerShell command*:

New-SelfSignedCertificate -FriendlyName "F12 - Secure score monitor" -NotAfter (Get-Date).AddYears(2) -Subject "F12 - Secure score monitor" -CertStoreLocation Cert:\LocalMachine\My -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -KeyExportPolicy NonExportable

Don't forget to grant read access to the private key for the account which will run the scheduled task. Right click the certificate – All Tasks – Manage Private Keys…

I prefer to use "Network Service" for these tasks, because only limited permissions are needed

Export the certificate’s public key and upload it to the app registration’s certificates:
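
This can be done from the certificates MMC, or with PowerShell (a sketch – replace the placeholder with the thumbprint returned by New-SelfSignedCertificate):

#Export the public key (no private key) for upload to the app registration
Export-Certificate -Cert Cert:\LocalMachine\My\<certThumbprint> -FilePath .\SecureScoreMonitor.cer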

Let’s move on to the script.

The script

Some variables and actions need to be modified, like $tenantID, $appID and $certThumbprint in the first lines. Also, the notification part (Send-MailMessage lines) needs to be customized to your needs.
The script itself can be broken down as follows:
– authenticate to Graph using a certificate (the auth function is from MSEndpointMgr.com)
– the following two lines query the Secure Score data for today and yesterday:
$url = 'https://graph.microsoft.com/beta/security/securescores?$top=2'
$webResponse = Invoke-RestMethod -Method Get -Uri $url -Headers $headers -ErrorAction Stop

– some HTML style for readable emails
– compare today’s and yesterday’s controlscores – alert when there are new / deprecated recommendations
– compare today’s scores with yesterday’s scores – alert when changes are detected

Here it is:

$tenantId = '<your tenant ID>'
$appID = '<application ID with SecurityEvents.Read.All admin consented permission>'
$certThumbprint = '<thumbprint of certificate used to connect>'
$resourceAppIdUri = 'https://graph.microsoft.com'

#region Auth
$cert = gci Cert:\LocalMachine\my\$certThumbprint
$cert64Hash = [System.Convert]::ToBase64String($cert.GetCertHash())
function Get-Token {
    #https://msendpointmgr.com/2023/03/11/certificate-based-authentication-aad/
    #create JWT timestamp for expiration 
    $startDate = (Get-Date "1970-01-01T00:00:00Z" ).ToUniversalTime()  
    $jwtExpireTimeSpan = (New-TimeSpan -Start $startDate -End (Get-Date).ToUniversalTime().AddMinutes(2)).TotalSeconds  
    $jwtExpiration = [math]::Round($jwtExpireTimeSpan, 0)  
  
    #create JWT validity start timestamp  
    $notBeforeExpireTimeSpan = (New-TimeSpan -Start $StartDate -End ((Get-Date).ToUniversalTime())).TotalSeconds  
    $notBefore = [math]::Round($notBeforeExpireTimeSpan, 0)  
  
    #create JWT header  
    $jwtHeader = @{  
        alg = "RS256"  
        typ = "JWT"  
        x5t = $cert64Hash -replace '\+', '-' -replace '/', '_' -replace '='  
    }
    #create JWT payload  
    $jwtPayLoad = @{  
        aud = "https://login.microsoftonline.com/$TenantId/oauth2/token"  
        exp = $jwtExpiration   
        iss = $appID  
        jti = [guid]::NewGuid()   
        nbf = $notBefore  
        sub = $appID  
    }  
  
    #convert header and payload to base64  
    $jwtHeaderToByte = [System.Text.Encoding]::UTF8.GetBytes(($jwtHeader | ConvertTo-Json))  
    $encodedHeader = [System.Convert]::ToBase64String($jwtHeaderToByte)  
  
    $jwtPayLoadToByte = [System.Text.Encoding]::UTF8.GetBytes(($jwtPayLoad | ConvertTo-Json))  
    $encodedPayload = [System.Convert]::ToBase64String($jwtPayLoadToByte)  
  
    #join header and Payload with "." to create a valid (unsigned) JWT  
    $jwt = $encodedHeader + "." + $encodedPayload  
  
    #get the private key object of your certificate  
    $privateKey = ([System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAprivateKey($cert))  
  
    #define RSA signature and hashing algorithm  
    $rsaPadding = [Security.Cryptography.RSASignaturePadding]::Pkcs1  
    $hashAlgorithm = [Security.Cryptography.HashAlgorithmName]::SHA256  
  
    #create a signature of the JWT  
    $signature = [Convert]::ToBase64String(  
        $privateKey.SignData([System.Text.Encoding]::UTF8.GetBytes($jwt), $hashAlgorithm, $rsaPadding)  
    ) -replace '\+', '-' -replace '/', '_' -replace '='  
  
    #join the signature to the JWT with "."  
    $jwt = $jwt + "." + $signature  
  
    #create a hash with body parameters  
    $body = @{  
        client_id             = $appID
        resource              = $resourceAppIdUri
        client_assertion      = $jwt  
        client_assertion_type = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"  
        scope                 = $scope  
        grant_type            = "client_credentials"  
  
    } 
    $url = "https://login.microsoft.com/$TenantId/oauth2/token"  
  
    #use the self-generated JWT as Authorization  
    $header = @{  
        Authorization = "Bearer $jwt"  
    }  
  
    #splat the parameters for Invoke-Restmethod for cleaner code  
    $postSplat = @{  
        ContentType = 'application/x-www-form-urlencoded'  
        Method      = 'POST'  
        Body        = $body  
        Uri         = $url  
        Headers     = $header  
    }  
  
    $request = Invoke-RestMethod @postSplat  

    #view access_token  
    $request
}
$accessToken = (Get-Token).access_token

 $headers = @{ 
    'Content-Type' = 'application/json'
    'Accept' = 'application/json'
    'Authorization' = "Bearer $accessToken" 
    }
#endregion Auth

$url = 'https://graph.microsoft.com/beta/security/securescores?$top=2'
$webResponse = Invoke-RestMethod -Method Get -Uri $url -Headers $headers -ErrorAction Stop

#HTML Style for table reports
$Style = @'
<style>
table{
border-collapse: collapse;
border-width: 2px;
border-style: solid;
border-color: grey;
color: black;
margin-bottom: 10px;
text-align: left;
}
th {
    background-color: #0000ff;
    color: white;
    border: 1px solid black;
    margin: 10px;
}
td {
    border: 1px solid black;
    margin: 10px;
}
</style>
'@


$controlScoreChanges = Compare-Object ($webResponse.value[0].controlScores.controlname) -DifferenceObject ($webResponse.value[1].controlScores.controlname) 
$report_controlScoreChanges = if ($controlScoreChanges){
    foreach ($control in $controlScoreChanges){
        [pscustomobject]@{
        State = switch ($control.sideindicator){"<=" {"New"} "=>" {"Removed"}}
        Category = $webresponse.value[0].controlScores.where({$_.controlname -eq ($control.inputobject)}).controlCategory
        Name = $control.inputobject
        Description = $webresponse.value[0].controlScores.where({$_.controlname -eq ($control.inputobject)}).description
        }
    }
    
}

if ($report_controlScoreChanges){
    [string]$body = $report_controlScoreChanges | ConvertTo-Html -Head $Style
    Send-MailMessage -To "<address>" -From "<address>" -Subject "Secure Score control changes detected" -Body $body -SmtpServer "<SMTP server address>" -Port 25 -BodyAsHtml

}

$ErrorActionPreference = 'SilentlyContinue'
$report_scoreChanges = foreach ($controlscore in $webResponse.value[0].controlscores){
  if ( Compare-Object $controlscore.score -DifferenceObject ($webResponse.value[1].controlScores.where({$_.controlname -eq ($controlscore.controlname)}).score)){
        [pscustomobject]@{
            date = $controlscore.lastSynced
            controlCategory = $controlscore.controlCategory
            controlName = $controlscore.controlName
            scoreChange = ($controlscore.score) - (($webResponse.value[1].controlScores.where({$_.controlname -eq ($controlscore.controlname)})).score)
            description = $controlscore.description
            }
        }
    }

if ($report_ScoreChanges){
    [string]$body = $report_ScoreChanges | ConvertTo-Html -Head $Style
    Send-MailMessage -To "<address>" -From "<address>" -Subject "Secure Score changes detected" -Body $body -SmtpServer "<SMTP server address>" -Port 25 -BodyAsHtml

}

Some example results:

New recommendations (Defender for Identity fresh install -> new MDI recommendations)
Score changes by recommendation

Fun fact:
The Defender portal section where these score changes are displayed actually uses a “scoreImpactChangeLogs” node for these changes, but unfortunately I didn’t find a way to query this secureScoresV2 endpoint:

https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2?$top=400

I hope this means that this information will be available via Graph, so that no calculations will be needed to detect score changes.

Reporting on Entra Application Proxy published applications – Graph PowerShell

I thought it would be a quick Google search to find a PowerShell script that reports on applications published via Entra Application Proxy, but I found only scripts (link1, link2, link3) using the AzureAD PowerShell module – so I decided to write a new version using Graph PowerShell.

The script:

#Requires -Modules Microsoft.Graph.Beta.Applications
Connect-MgGraph

$AppProxyConnectorGroups = Get-MgBetaOnPremisePublishingProfileConnectorGroup -OnPremisesPublishingProfileId applicationproxy

$AppProxyPublishedApps = foreach ($connector in $AppProxyConnectorGroups){
    Get-MgBetaOnPremisePublishingProfileConnectorGroupApplication -ConnectorGroupId $connector.id -OnPremisesPublishingProfileId applicationproxy | % {
        $onpremisesPublishingInfo = (Get-MgBetaApplication -ApplicationId $_.id -Property onpremisespublishing).onpremisespublishing
        [pscustomobject]@{
            DisplayName = $_.DisplayName
            Id = $_.id
            AppId = $_.appid
            ExternalURL = $onpremisesPublishingInfo.ExternalURL
            InternalURL = $onpremisesPublishingInfo.InternalURL
            ConnectorGroupName = $connector.name
            ConnectorGroupId = $connector.id
        }
    }
}

$AppProxyPublishedApps

Some story

The Entra portal is still using the https://main.iam.ad.ext.azure.com/api/ApplicationProxy/ConnectorGroups endpoint to display the connector groups:

So the next step was to figure out if there are some Graph API equivalents. A Google search for graph connectorgroups site:microsoft.com led me to this page: https://learn.microsoft.com/en-us/graph/api/connectorgroup-list?view=graph-rest-beta&preserve-view=true&tabs=http
From this point it was "easy" to follow the logic of the previously linked scripts and "translate" the AzureAD PowerShell commands to Graph PS.

Note: as per the documentation, Directory.ReadWrite.All permission is required and only delegated permissions work.

As an alternative, I'm sharing the original script, which does not use these commands from Microsoft.Graph.Beta.Applications:

Connect-MgGraph

$AppProxyConnectorGroups = Invoke-MgGraphRequest -Uri 'https://graph.microsoft.com/beta/onPremisesPublishingProfiles/applicationproxy/connectorgroups' -Method GET

$AppProxyPublishedApps = foreach ($connector in $AppProxyConnectorGroups.value){
  $publishedApps =  Invoke-MgGraphRequest -Uri "https://graph.microsoft.com/beta/onPremisesPublishingProfiles/applicationproxy/connectorgroups/$($connector.id)/applications" -Method GET
  foreach ($app in $publishedApps.value){
  [PSCustomObject]@{
    DisplayName = $app.DisplayName
    id = $app.id
    appId = $app.appId
    ConnectorGroupName = $connector.name
    ConnectorGroupID = $connector.id
  }
 }
}

$AppProxyReport = foreach ($publishedApp in $AppProxyPublishedApps){
    $onpremisesPublishingInfo = Invoke-MgGraphRequest -Uri "https://graph.microsoft.com/beta/applications/$($publishedApp.id)?`$select=onpremisespublishing" -Method GET
    [PSCustomObject]@{
        DisplayName = $publishedApp.DisplayName
        id = $publishedApp.id
        appid = $publishedApp.appId
        ConnectorGroupName = $publishedApp.ConnectorGroupName
        ConnectorGroupID = $publishedApp.ConnectorGroupID
        ExternalURL = $onpremisesPublishingInfo.onPremisesPublishing.externalUrl
        InternalURL = $onpremisesPublishingInfo.onPremisesPublishing.internalUrl
        externalAuthenticationType = $onpremisesPublishingInfo.onPremisesPublishing.externalAuthenticationType
    }
}

Playing with Microsoft Passport Key Storage Provider – protect user VPN certificates with Windows Hello for Business?

I'm really into this Windows Hello for Business topic… Recently, I was going through the "RDP with WHfB" guide on MS Learn (link), which gave me an idea: can this method be used to protect user VPN certificates? The short answer is: yes, but no 🙂

TL;DR
– Depending on your current infrastructure, several options are available to protect VPN with MFA: Azure MFA NPS extension, SAML-auth VPN with Conditional Access, Entra ‘mini-CA’ Conditional Access
– Hello for Business can be used to protect access to certificates – so why not use it to protect VPN certs?

Protecting VPN with MFA with Microsoft tools

NPS Extension
The most popular option I know to protect VPN with MFA is the Azure MFA NPS extension (link). The logic is very simple: the RADIUS request coming to the NPS server is authenticated against Active Directory, then the NPS extension is doing a secondary authentication (Azure MFA).

SAML-based authentication with Conditional Access
This depends on the vendor of the VPN appliance, but the mechanism is that an Enterprise application is created in Entra and Conditional Access policy can be applied to it.

Conditional Access VPN
There is another option, called "Conditional Access VPN connectivity" in Entra – and by the way, it seems to me that Microsoft is hiding this option (I guess because it uses Azure Active Directory Graph, which is deprecated). I found a photo of how it looked in the old days (picture taken from here):

In the Entra portal this option is not visible (at least for me):

But when using the search bar, the menu can be found:

Some documentation links about this feature:

  • Conditional Access Framework and Device Compliance for VPN (link)
  • Conditional access for VPN connectivity using Microsoft Entra ID (link)
  • VPN and conditional access (link)

The mechanism in short: Entra creates a 'mini-CA' which issues short-lived certificates to clients. When a Windows VPN client is configured to use the DeviceCompliance flow, the client attempts to get a certificate from Entra before connecting to the VPN endpoint (from an admin standpoint, a 'VPN Server' application is created in Entra and Conditional Access policies can be applied to this application – I'm not going into details about this one, mainly because I encountered a lot of inconsistencies in the user experience when testing this solution 🙃) – and when everything is OK, the user gets a short-lived certificate which can be used for authentication (e.g. EAP-TLS)
Some screenshots about this:

Conditional Access policy evaluation result

Certificate valid for ~1 hour

VPN Certificate created with Microsoft Passport KSP
Disclaimer: using VPN certificates created this way for authentication is not an official/Microsoft-supported method; I tested it only for entertainment purposes.

This was the initial trigger of this post – based on the "Remote Desktop sign-in with Windows Hello for Business" tutorial, create VPN certificates using the Microsoft Passport KSP (link). The process is straightforward:
– create the VPN certificate template (or duplicate the one you already have)
– export the template to a txt file
– modify the pKIDefaultCSPs setting to Microsoft Passport Key Storage Provider
– update the template with the new setting

User experience: well, if the user is WHfB enrolled and logs in with WHfB, then nothing changes (the certificate is used "silently" upon connecting) – but when using a password to log in to Windows, the VPN connection prompts for Hello credentials:

So if Hello for Business can be considered a multi-factor authentication method, then this solution fits as well 🙂

Convenience PIN policy enables Windows Hello for Business enrollment in Windows Security

Windows Hello for Business and Windows Hello may sound like siblings, but they are actually two different families in the authentication world (link)*. Hello is basically using password caching, while Hello for Business uses asymmetric authentication (key or certificate based) – that's why Windows Hello for Business (WHfB) has some infrastructure prerequisites in an on-premises or hybrid environment. Not every environment is prepared for WHfB, hence some organizations may have opted to enable convenience PIN for their users to make sign-in… well… more convenient.
Why does it matter?
Because users may encounter errors during WHfB enrollment, WHfB has an impact on the Active Directory infrastructure, WHfB is a strong authentication method (~considered MFA in Conditional Access policy evaluation) and so on.

*the common thing about Hello and WHfB is the Credential Provider: users see the PIN/biometric authentication option on their logon screen

TL;DR
– The "Turn on convenience PIN sign-in" policy enables Hello PIN in Account settings, but invokes Hello for Business enrollment when setting it up in the Windows Security app
– Hello for Business implementation is very simple (and preferred over Hello) with Cloud Kerberos Trust, but migrating users from Hello has some pitfalls
– Hello usage can be detected in the following registry hive:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\<userSID>

Behavior
Let's assume that WHfB is not configured in your environment, and even the Intune default policy for WHfB is set to "Not configured", like this:

On a client device, the eligibility for WHfB can be checked using dsregcmd /status under “Ngc Prerequisite Check” (link). On a domain joined/hybrid joined device, the PreReqResult will have the WillNotProvision value until WHfB is explicitly enabled.
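
A quick way to check this from a script (dsregcmd is built in; the line of interest is under the Ngc Prerequisite Check section):

dsregcmd /status | Select-String 'PreReqResult'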

When you open Settings – Accounts – Sign-in options, you will see that PIN (Windows Hello) is greyed out, and the Windows Security app does not display options to set up Hello either:

Now let’s enable convenience PIN sign-in group policy: Computer Configuration – Administrative Templates – System – Logon – Turn on convenience PIN sign-in

The Windows Security traybar icon almost immediately shows a warning status:

The Hello enrollment is now active in the Settings – Accounts – Sign-in options menu and we also have the option to set up Hello in Windows Security:

And here lies the discrepancy in the enrollment behavior: the Settings menu (left) sets up Hello, while Windows Security app (right) will invoke the WHfB enrollment process

Windows Hello setup using Settings menu
Windows Security invoking Hello for Business enrollment

Migrating from Hello to Hello for Business
At this point, we may decide to prevent Hello for Business – but I suggest following the other direction and migrating Hello users to Hello for Business. Since we have Cloud Kerberos Trust, we don't even need a PKI, only (at least one) Windows Server 2016 or newer domain controller (and hybrid joined devices with hybrid identities with MFA registration, of course) [link]… so the deployment is very easy… but the migration can be a bit tricky.

First, when a Hello for Business policy is applied on a computer, the credential provider (~the login screen asking for the PIN) is disabled for the user until WHfB enrollment. This means that the user will be asked for a password instead of a PIN – this may result in failed logon attempts, because users will probably enter their PIN "as usual".
Another issue you may encounter is related to the previous PIN and the newly applied PIN policy. Based on my experience, the WHfB enrollment process prompts for the current PIN and tries to set it as the new PIN (from a user experience standpoint, this was a clever decision from Microsoft), but if the new policy requires a more complex PIN, the process may encounter an error (0x801c0026, not documented here)

Convenience PIN migration to Hello for Business PIN error

This error is handled by the logon screen:

Detecting Hello usage
As problems may occur with the Hello to WHfB migration, it's a good idea to have an inventory of Hello users. On every device, each Hello registration is stored under the following registry hive: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\<userSID>

It’s up to your creativity how you collect this information and translate the SIDs to some human readable format 🙂
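
A minimal sketch for both steps (run elevated; the SID translation assumes the account still resolves):

#Enumerate Hello (convenience PIN) registrations and translate the SIDs
$ngcPath = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials'
Get-ChildItem $ngcPath | ForEach-Object {
    $sid = $_.PSChildName
    try { $user = ([System.Security.Principal.SecurityIdentifier]$sid).Translate([System.Security.Principal.NTAccount]).Value }
    catch { $user = '<unresolved>' }
    [pscustomobject]@{ SID = $sid; User = $user }
}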


Hunting for report-only (Microsoft-managed) Conditional Access impacts

Microsoft is rolling out the managed Conditional Access policies (link) gradually, and I wanted to know how they are going to impact the users (which users, to be exact). Apparently, if the sign-in logs are not streamed to a Log Analytics workspace, the options are limited – but if you have the AADSignInEventsBeta table under Advanced hunting on the Microsoft Defender portal, some extra info can be gathered.

Streaming Entra logs to Log Analytics gives wonderful insights (not only for Conditional Access), so it is recommended to set up the diagnostic settings. If that is not an option but the AADSignInEventsBeta table is available (typically in organizations with E5 licences), then the following query will show those sign-ins that would have been impacted by a report-only Conditional Access policy:

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| mv-apply todynamic(ConditionalAccessPolicies) on (
where ConditionalAccessPolicies.result == "reportOnlyInterrupted" or ConditionalAccessPolicies.result == "reportOnlyFailure"
| where ConditionalAccessPolicies.displayName has "Microsoft-managed:" //filter for managed Conditional Access policies
| extend CADisplayName = tostring(ConditionalAccessPolicies.displayName)
| extend CAResult = tostring(ConditionalAccessPolicies.result))
| distinct Timestamp,RequestId,Application,ResourceDisplayName, AccountUpn, CADisplayName, CAResult

Note1: in the AADSignInEventsBeta table, ConditionalAccessPolicies is a JSON value stored as a string, so the todynamic function is needed.

Note2: Since every Conditional Access policy is evaluated against each logon, the query first filters for those sign-ins where the report-only result is ‘Interrupted’ or ‘Failure’, then the policy displayname is used to narrow down the results. Starting the filter with displayName would be pointless.

Some example summarizations if you need to see the big picture (same query as above, but the last line can be replaced with these):

View impacted user count by application:
| summarize AffectedUsersCount=dcount(AccountUpn) by Application, CADisplayName, CAResult

Same summarization in one-day buckets:
| summarize AffectedUsers = dcount(AccountUpn) by bin(Timestamp,1d), CADisplayName, CAResult

List countries by result:
| summarize make_set(Country) by CADisplayName, CAResult

Another useful feature is the Monitoring (Preview) menu under Conditional Access – Overview:

Here we have a filter option called 'Policy evaluated', where report-only policies are grouped under the 'Select individual report-only policies' section. This gives an overview, but unfortunately does not list the affected users.

When a Microsoft-managed policy is opened, this chart is presented under the policy info as well.