Recently, I came across an uncommon issue while disabling legacy authentication in a hybrid Exchange environment. Since I did not find any exact solutions, I thought I'd share my story about modern authentication in on-premises Exchange Server and how it affects the mailbox migration account. Spoiler: it breaks the mailbox migration.
TL;DR – Exchange Online uses NTLM authentication for MRSProxy – if you set a "block legacy authentication" policy as the default authentication policy in the Exchange organization config, the mailbox migration account will no longer be able to authenticate (unless this account has a dedicated "allow legacy auth" policy assigned)
The story
So the Exchange Team blogged about disabling legacy authentication in Exchange (link) and I thought this would be an easy win: we have HMA enabled, we notified the users about the upcoming change, so all we had to do was create the "Block Legacy Auth" policy, gradually roll it out to users, then set it as the default (Set-OrganizationConfig -DefaultAuthenticationPolicy "Block Legacy Auth"). Everything went well, but some weeks later a mailbox migration batch to Exchange Online failed with the following error:
Error: CommunicationErrorTransientException: The call to 'https://<exchangeserver>/EWS/mrsproxy.svc' failed. Error details: The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate, NTLM'.. --> The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate, NTLM'.
We figured out that it had something to do with the new authentication policy – but every other aspect of the hybrid config was working fine. So as a workaround we created an "Allow legacy authentication" policy (to be honest, it's more like: "don't disable any authentication method") and assigned it to the mailbox migration account:
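Something like this should do it in the Exchange Management Shell (a minimal sketch; the account name is a placeholder, and it assumes the on-premises behavior where a newly created policy has all BlockLegacyAuth* switches set to $false):

New-AuthenticationPolicy -Name "Allow Legacy Auth" # no BlockLegacyAuth* switches set, so no authentication method is disabled
Set-User -Identity "svc-mrsmigration@contoso.com" -AuthenticationPolicy "Allow Legacy Auth" # placeholder migration account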
One of my previous posts covered a “basic” way to track secure score changes using Graph API with application permissions. While I still prefer application permissions (over service accounts) for unattended access to certain resources, sometimes it is not possible – for example when you want to access resources which are behind the Defender portal’s apiproxy (like the scoreImpactChangeLogs node in the secureScore report). To overcome this issue, I decided to use Entra Certificate-based Authentication as this method provides a “scriptable” (and “MFA capable”) way to access these resources.
A lot of credit goes to the legendary Dr. Nestori Syynimaa (aka DrAzureAD) and the AADInternals toolkit (referring to the CBA module, as this provided me the fundamentals to understand the authentication flow). My script is mostly a stripped-down version of his work, but it targets the security.microsoft.com portal. Credit goes to Marius Solbakken as well for his great blogpost on Azure AD CBA, which gave me the hint to fix an error during the authentication flow (details below).
Prerequisites: Entra CBA configured for the "service account", appropriate permissions granted for the account to access secure score information, and a certificate to be used for auth
The script provided is for research/entertainment purposes only; this post is more about the journey and the caveats than the result
Tested on Windows PowerShell (v5.1); I encountered issues with PowerShell (v7.5)
The script
$tenantID = "<your tenant id>"
$userUPN = "<CBA user UPN>"
$thumbprint = "<thumbprint of certificate installed in Cert:\CurrentUser\My\ >"
function Extract-Config ($inputstring){
    # Extract the "$Config={...};" JavaScript assignment embedded in the login page HTML
    $regex_pattern = '\$Config=.*'
    $configMatch = [regex]::Match($inputstring, $regex_pattern) # avoid clobbering the automatic $Matches variable
    $config = $configMatch.Value.Replace("`$Config=","") # remove the $Config= prefix
    $config = $config.Substring(0, $config.Length - 1) # remove the trailing semicolon
    $config | ConvertFrom-Json
}
#https://learn.microsoft.com/en-us/entra/identity/authentication/concept-authentication-web-browser-cookies
##Cert auth to security.microsoft.com
# Credit: https://github.com/Gerenios/AADInternals/blob/master/CBA.ps1
# STEP1 - Invoke the first request to get redirect url
$webSession = New-Object Microsoft.PowerShell.Commands.WebRequestSession
$response = Invoke-WebRequest -Uri "https://security.microsoft.com/" -Method Get -WebSession $webSession -ErrorAction SilentlyContinue -MaximumRedirection 0 -UseBasicParsing
$url = $response.Headers.'Location'
# STEP2 - Send HTTP GET to RedirectUrl
$login_get = Invoke-WebRequest -Uri $Url -Method Get -WebSession $WebSession -ErrorAction SilentlyContinue -UseBasicParsing -MaximumRedirection 0
# STEP3 - Send POST to GetCredentialType endpoint
#Credit: https://goodworkaround.com/2022/02/15/digging-into-azure-ad-certificate-based-authentication/
$GetCredentialType_Body = @{
    username  = $userUPN
    flowtoken = (Extract-Config -inputstring $login_get.Content).sFT
}
$getCredentialType_response = Invoke-RestMethod -method Post -uri "https://login.microsoftonline.com/common/GetCredentialType?mkt=en-US" -ContentType "application/json" -WebSession $webSession -Headers @{"Referer"= $url; "Origin" = "https://login.microsoftonline.com"} -Body ($GetCredentialType_Body | convertto-json -Compress) -UseBasicParsing
#STEP 4 - Invoke REST POST to certauth endpoint with ctx and flowtoken using certificate
$CBA_Body = @{
    ctx       = (Extract-Config -inputstring $login_get.Content).sctx
    flowtoken = $getCredentialType_response.FlowToken
}
$CBA_Response = Invoke-RestMethod -UseBasicParsing -Uri "https://certauth.login.microsoftonline.com/$TenantId/certauth" -Method Post -Body $CBA_Body -Certificate (get-item Cert:\CurrentUser\My\$thumbprint)
#STEP 5 - Send authentication information to the login endpoint
$login_msolbody = $null
$login_msolbody = @{
    login            = $userUPN
    ctx              = ($CBA_Response.html.body.form.input.Where({$_.name -eq "ctx"})).value
    flowtoken        = ($CBA_Response.html.body.form.input.Where({$_.name -eq "flowtoken"})).value
    canary           = ($CBA_Response.html.body.form.input.Where({$_.name -eq "canary"})).value
    certificatetoken = ($CBA_Response.html.body.form.input.Where({$_.name -eq "certificatetoken"})).value
}
$headersToUse = @{
    'Referer' = "https://certauth.login.microsoftonline.com/"
    'Origin'  = "https://certauth.login.microsoftonline.com"
}
$login_postCBA = Invoke-WebRequest -UseBasicParsing -Uri "https://login.microsoftonline.com/common/login" -Method Post -Body $login_msolbody -Headers $headersToUse -WebSession $webSession
#STEP 6 - Make a request to login.microsoftonline.com/kmsi to get code and id_token
$login_postCBA_config = (Extract-Config -inputstring $login_postCBA.Content)
$KMSI_body = @{
    "LoginOptions" = "3"
    "type"         = "28"
    "ctx"          = $login_postCBA_config.sCtx
    "hpgrequestid" = $login_postCBA_config.sessionId
    "flowToken"    = $login_postCBA_config.sFT
    "canary"       = $login_postCBA_config.canary
    "i19"          = "2326"
}
$KMSI_response = Invoke-WebRequest -UseBasicParsing -Uri "https://login.microsoftonline.com/kmsi" -Method Post -WebSession $WebSession -Body $KMSI_body
#STEP 7 - add sessionID cookie to the websession as this will be required to access security.microsoft.com (probably unnecessary)
#$websession.Cookies.Add((New-Object System.Net.Cookie("s.SessID", ($response.BaseResponse.Cookies | ? {$_.name -eq "s.SessID"}).value, "/", "security.microsoft.com"))) #s.SessID cookie is retrived during first GET to defender portal
#STEP 8 - POST the id_token and session information to security.microsoft.com to get sccauth and XSRF-TOKEN cookies
$securityPortal_POST_body = @{
    code           = ($KMSI_response.InputFields.Where({$_.name -eq "code"})).value
    id_token       = ($KMSI_response.InputFields.Where({$_.name -eq "id_token"})).value
    state          = ($KMSI_response.InputFields.Where({$_.name -eq "state"})).value
    session_state  = ($KMSI_response.InputFields.Where({$_.name -eq "session_state"})).value
    correlation_id = ($KMSI_response.InputFields.Where({$_.name -eq "correlation_id"})).value
}
$securityPortal_POST_response = Invoke-WebRequest -UseBasicParsing -Uri "https://security.microsoft.com/" -Method Post -WebSession $webSession -Body $securityPortal_POST_body -MaximumRedirection 1
##END of Cert auth to security.microsoft.com
## Query the secureScoresV2
#Decode the XSRF-TOKEN (it is URL-encoded in the cookie)
Add-Type -AssemblyName System.Web # System.Web is not always loaded by default in Windows PowerShell
$xsrfToken = $webSession.Cookies.GetCookies("https://security.microsoft.com") | ? {$_.name -eq "XSRF-TOKEN"} | % {$_.value}
$xsrfToken_decoded = [System.Web.HttpUtility]::UrlDecode($xsrfToken)
#Send GET to secureScoresV2 with the decoded XSRF-TOKEN added to the headers
$SecureScoresV2_headers = @{
    "x-xsrf-token" = $xsrfToken_decoded
}
$secureScoresV2_response = Invoke-WebRequest -UseBasicParsing -Uri "https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2?`$top=400" -WebSession $webSession -Headers $SecureScoresV2_headers
#RESULT
$secureScoreInfo = $secureScoresV2_response.Content | ConvertFrom-Json
$secureScoreInfo.value
Explained
Since I'm not a developer, I will explain all the steps (the result of research and a lot of guesswork) as I experienced them (let's call it the sysadmin aspect). So essentially, this script "mimics" a user opening the Defender portal, authenticating with CBA and clicking on Secure Score, and it returns the raw information which the browser would transform into something user-friendly. As a prerequisite, the certificate (with the private key) needs to be installed in the Current User personal certificate store of the user running the script.
Step 0 is to populate the $tenantID, $userUPN and $thumbprint variables accordingly.
Step 1 is creating a WebRequestSession object (like a browser session; from my perspective, the $websession variable is just a cookie store) and navigating to https://security.microsoft.com. When performed in a browser, we get redirected to the login portal – if we open the browser developer tools, we can see in the network trace that this means an HTTP 302 code (redirect) with a Location header in the response. This is where we get redirected:
From the script's perspective, we store this Location header in the $url variable:
Notice that every Invoke-WebRequest/Invoke-RestMethod command uses the -UseBasicParsing parameter. According to the documentation, this parameter is deprecated in newer PowerShell versions, and from v6.0.0 all requests use basic parsing only. However, I'm using v5.1, which uses Internet Explorer to parse the content – so if Internet Explorer is not configured, is disabled, or anything else, the command could fail without this parameter.
At this point the $webSession variable contains the following cookies for security.microsoft.com: s.SessID, X-PortalEndpoint-RouteKey and an OpenIdConnect.nonce:
Step 2 is to open the redirectUrl:
When opened, we receive some cookies for login.microsoftonline.com, including buid, fpc and esctx (documentation for the cookies here):
But the most important information is the flowtoken (sFT) which can be found in the response content. In the browser trace it looks like this:
In PowerShell, the response content is the $login_get variable's Content member, returned as a string. This needs to be parsed, because it is embedded in a script HTML node, beginning with $Config:
I'm using the Extract-Config function to get this configuration data (later I found that AADInternals uses the Get-Substring function defined in CommonUtils.ps1, which is more sophisticated 🙃):
Step 3 took some time to figure out. When I tried to use AADInternals’ Get-AADIntadminPortalAccessTokenUsingCBA command I got an error message:
AADSTS90036: An unexpected, non-retryable error stemming from the directory service has occurred.
Luckily I found this blogpost, which led me to think that this GetCredentialType call is missing from AADInternals (probably something is misconfigured on my side and this call can normally be skipped). This call – from my standpoint – returns a new flowtoken, and this new one needs to be sent to the certauth endpoint. (Until I figured this out, every attempt to authenticate on the certauth endpoint resulted in AADSTS90036.)
Step 4 is basically the same as in AADInternals' module: the flowtoken and ctx are posted to the certauth.login.microsoftonline.com endpoint.
Notice that the ContentType parameter is set to "application/json" here – when it is not specified, it defaults to "application/x-www-form-urlencoded" for POST calls. In the browser trace, this is defined in the Content-Type header:
Step 5 is slightly different from AADInternals' CBA module, but follows the same logic: send the login (userprincipalname), ctx, flowtoken, canary and certificatetoken content to the https://login.microsoftonline.com/common/login endpoint; in turn, we receive the updated flowtoken, ctx, sessionid and canary information, which is posted to the https://login.microsoftonline.com/kmsi endpoint in Step 6.
The KMSI_response contains the id_token, code, state, session_state and correlation_id. Looking back at the browser trace, we can see that these parameters are passed to the security.microsoft.com portal to authenticate the user.
Step 7 is probably totally unnecessary (commented out) and is likely the result of too much desperate testing. It just adds the s.SessID cookie to our websession, which is also needed during authentication (without this cookie, you will immediately receive some timeout errors). This cookie is received upon the first request (I guess my testing involved clearing some variables… anyway, it won't hurt).
Step 8 is the final step in this authentication journey: we post the content we received in the $KMSI_response variable. In the browser trace we can see that HTTP 302 is the status code for this request, followed by a new request to the same endpoint.
This is why the -MaximumRedirection parameter is set to 1 in this step. (Some of my tests failed with 1 redirection allowed, so if it fails, it can be increased – to 5, for example.)
Finally we have the sccauth and XSRF-TOKEN cookies which are required to access resources.
I thought this was the green light and all I needed was to use the websession to access the secureScoresV2 endpoint – but some tweaking was required, because Invoke-WebRequest failed with the following error message:
Invoke-WebRequest : {"Message":"The web page isn\u0027t loading correctly. Please reload the page by refreshing your browser, or try deleting the cookies from your browser and then sign in again. If the problem persists, contact
Microsoft support."
Taking a look at the request, I noticed that the XSRF-TOKEN is used as the X-Xsrf-Token header (even though the cookie is present in the $websession):
XSRF-TOKEN sent as X-Xsrf-Token header
It took some (~a lot of) time to figure out that this token is encoded, so it needs to be decoded before using it as a header:
Slight but crucial difference between the encoded and the decoded XSRF-TOKEN
So once we have the decoded token, it can be used as x-xsrf-token:
The response content is in JSON format; the ConvertFrom-Json cmdlet will do the parsing.
Compared to secureScore exposed by Graph API, here we have the ScoreImpactChangeLogs property which is missing in Graph.
Example of the ScoreImpactChangeLogs property
This is just one example (of endless possibilities) of using Entra CBA to access the Defender portal, but my main goal was to share my findings and give a hint on reaching other useful stuff on security.microsoft.com.
Recently, I came across a post on LinkedIn which demonstrated that Passkey authentication is way faster than the traditional password + MFA notification login. It made me curious: exactly how much time does it take to do MFA?
TL;DR – This report uses the SignInLogs table, which needs to be configured in Diagnostic settings – Unfortunately, I did not manage to gather the same info from the AADSignInEventsBeta table in Defender or from the sign-in logs in Microsoft Graph – Everything written here is based on my tests and measurements, so it may contain inaccurate conclusions
The query below displays the authentication method, plus the average and overall time spent completing the MFA prompt:
let StrongAuthRequiredSignInAttempts = SigninLogs
| where ResultType == "50074"
| distinct ResultType,UniqueTokenIdentifier,CorrelationId;
let MFA1 =SigninLogs
| join kind=inner StrongAuthRequiredSignInAttempts on UniqueTokenIdentifier
| mv-expand todynamic(AuthenticationDetails)
| project stepdate = todatetime(AuthenticationDetails.authenticationStepDateTime),
    authMethod = tostring(AuthenticationDetails.authenticationMethod),
    stepResult = tostring(AuthenticationDetails.authenticationStepResultDetail),
    RequestSequence = todouble(AuthenticationDetails.RequestSequence),
    StatusSequence = todouble(AuthenticationDetails.StatusSequence),
    CorrelationId,
    RequestSeq_UnixTime = unixtime_milliseconds_todatetime(todouble(AuthenticationDetails.RequestSequence)),
    UniqueTokenIdentifier,
    StatusSeq_UnixTime = unixtime_milliseconds_todatetime(todouble(AuthenticationDetails.StatusSequence)),
    MFAMethod = tostring(MfaDetail.authMethod)
| summarize make_set(stepResult), MFAStart = min(stepdate), MFAEnd = max(stepdate),
    TimeSpent = totimespan(max(stepdate) - min(stepdate)),
    TimeSpentv2 = totimespan(maxif(StatusSeq_UnixTime, StatusSequence > 1) - minif(RequestSeq_UnixTime, RequestSequence > 1))
    by UniqueTokenIdentifier, MFAMethod
| where set_stepResult has "MFA successfully completed"
;
MFA1
| where isnotempty(MFAMethod)
| project MFAMethod,TimeSpent = coalesce(TimeSpentv2,TimeSpent)
| summarize AverageMFATime=avg(TimeSpent),SumMFATime=sum(TimeSpent) by MFAMethod
Example result:
Explanation
The first step was to find those sign-in attempts that were interrupted because MFA was needed. These can easily be found, as there is a ResultDescription column where we can filter for "Strong Authentication is required." entries:
SigninLogs
| where ResultDescription == "Strong Authentication is required."
The first catch is that not every event in a sign-in session has this field populated with the same value (for logical reasons). Let's take a simple login to the Azure portal with an Authenticator code as MFA:
In this example, I intentionally waited 30 seconds to provide the code (after successful password entry) [the code prompt started at 2024.12.09 9:41:15, the code was sent at 9:41:45]. The TimeGenerated field is a bit misleading, because it is the creation timestamp of the event entry, not the authentication event (that part is stored in the AuthenticationDetails column). It is also worth mentioning that the CorrelationId remains the same within a browser session (even if session policies require re-authentication) – so if, for example, the Azure portal is kept open in the browser but re-authentication happens, the CorrelationId stays the same, but the authentication steps (re-entering the password, a new MFA prompt) need to be handled separately. This is why I'm using the UniqueTokenIdentifier.
But let’s get back to the example and extend the AuthenticationDetails column:
Some fields are not totally clear to me, but according to my measurements, the most accurate timespan for "doing MFA" is the time between the "MFA required in Azure AD" and the "MFA completed in Azure AD" events (it's not totally accurate here, because I spent some time changing the MFA method).
However, this approach (the time between "MFA required" and "MFA completed") will not cover all other MFA methods, because "MFA required" is not always present in the logs. For example, the next sign-in used Mobile app notification as MFA:
At this point, the possible solution is to either write a query for each authentication method or try to find a unified approach. I opted for the unified option: assume that the "MFA start time" is the first logged AuthenticationStepDate and the "MFA end time" is the last logged AuthenticationStepDate where we have an "MFA successfully completed" entry (this one seems to be present for every MFA type).
This looks almost right, but in the case of "Mobile app notification" I found the RequestSequence and StatusSequence fields, which are Unix timestamps and look more precise:
But since these fields are not always present, I chose the KQL coalesce() function to return the TimeSpentv2 value when present – otherwise return the TimeSpent value.
Note1: the summarize operator needs to group by UniqueTokenIdentifier and MFAMethod, because without the MFAMethod, "Password" would also be returned as an authentication factor.
Note2: when calculating TimeSpentv2, there were other authentication steps where the StatusSequence fields were empty, 0 or 1. These are clearly not Unix timestamps, so only values greater than 1 are considered here.
A well maintained AD topology is very important because domain joined clients use this information to locate the optimal Domain Controller (DCLocator documentation here) – failing to find the most suitable domain controller will have performance impact on client side (slow logon, group policy processing, etc.). In an ideal world, when a new subnet is created and AD joined computers are placed here, AD admins are notified and they assign the subnet to the appropriate site – but sometimes this is not the case.
There are several methods to detect IP addresses coming from unassigned subnets:
– By analyzing the \\<dc>\admin$\debug\netlogon.log logfiles (example here)
– Looking for EventID 5778 in the System log (idea from here)
– Using PowerShell to get all client-registered DNS entries and look them up against the replication subnets (some IP subnet calculator will be needed)
My idea was to use Defender for Identity logs (mainly because I recently (re)discovered the ipv4_lookup plugin in Kusto 🙃).
TL;DR – by defining the ADReplicationSubnets as a datatable, we can find logon events from the IdentityLogonEvents table where clients use an IP address that is not in any replication subnet – we can use a “static” datatable, or schedule a PowerShell script which will dynamically populate the items in this table
The query:
let IP_Data = datatable(network:string)
[
"10.0.1.0/24", //example subnet1
"10.0.2.0/24", //example subnet2
"192.168.0.0/16", //example subnet3
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty(network)
Quite simple, isn't it? So we filter for successful Kerberos logon events (without the Protocol filter, other logon events could generate noise) and use the ipv4_lookup function to look up the IP address in the "IP_Data" variable's "network" column, including those entries that cannot be matched to any subnet – then we filter for the unmatched entries.
Example result
Scheduling the query as a PowerShell script
So far, so good. But over time, the list of subnets may change, grow, etc. – so how can this subnet list be populated dynamically? Using the Get-ADReplicationSubnet command, for example. As a prerequisite, I created an app registration with the ThreatHunting.Read.All application permission (with a certificate as credential):
The hunting query is the same as above, but the datatable entries are populated from the results of the Get-ADReplicationSubnet command (and some dirty string formatting, like adding quotation marks and a comma). In the $body variable the Timespan is set to seven days (ISO 8601 format) – when Timespan is not set, it defaults to 30 days (reference).
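The script itself is not embedded here, but a rough sketch could look like this (the app registration details are placeholders, Graph PowerShell is used for the connection, and result paging is ignored for brevity):

# Placeholder app registration details - replace with your own
$tenantId = "<tenant id>"
$appId = "<app id>"
$thumbprint = "<certificate thumbprint>"
Connect-MgGraph -TenantId $tenantId -ClientId $appId -CertificateThumbprint $thumbprint

# Build the datatable rows from the AD replication subnets ("dirty string formatting")
$subnetRows = (Get-ADReplicationSubnet -Filter * | ForEach-Object { '"{0}",' -f $_.Name }) -join "`n"

$query = @"
let IP_Data = datatable(network:string)
[
$subnetRows
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty(network)
"@

# Timespan limits the hunt to the last 7 days (ISO 8601 duration); the default is 30 days
$body = @{ Query = $query; Timespan = "P7D" } | ConvertTo-Json
$response = Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/security/runHuntingQuery" -Body $body -ContentType "application/json"
$response.results | ForEach-Object { [pscustomobject]$_ } | Format-Table IPAddress, DeviceName, LogonCount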
Running the script
From this point, it is up to you to schedule the script (or fine tune the output) and email the results. 😊
Extra hint: if you have a multi-domain environment, the hunting query may need to be "domain specific" – for this purpose, I would insert the following filter: | where AdditionalFields.Spns == "krbtgt/<domainDNSName>", for example:
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| where AdditionalFields.Spns == "krbtgt/F12.HU"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty(network)
Microsoft Secure Score can be a good starting point in assessing organizational security posture. Improvement actions are added to the score regularly (link) and points achieved are updated dynamically.
For me, Secure Score is a measurement of hard work represented in little percentage points. Every little point is a reward, which can be taken back by Microsoft when changes happen in the current security state (be it the result of an action [i.e. someone enabled the print spooler on a domain controller] – or inactivity [i.e. a domain admin account became "dormant"]). Whatever the reason for the score degradation, I want to be alerted, because I don't want to check this chart on a daily basis. Unfortunately, I didn't find any ready-to-use solution, so I'm sharing my findings.
TL;DR – The Get-MgSecuritySecureScore Graph PowerShell cmdlet can be used to fetch 90 days of score data – The basic idea is to compare the current scores with yesterday's scores and report on the differences – When new controlScores (~recommendations) arrive, send a separate alert – The script I share is a PowerShell script with certificate auth, but no Graph PowerShell cmdlets are used, just native REST API calls (sorry, I still have issues with Graph PS, while the native approach is consistent). Using app auth with a certificate, the script can be scheduled to run on a daily basis (I don't recommend a more frequent schedule, as there are temporary score changes which are mostly self-remediating)
Prerequisites
We will need an app registration with the Microsoft Graph / SecurityEvents.Read.All application permission (don't forget the admin consent):
App registration with SecurityEvents.Read.All permission
On the server on which you are planning to schedule the script, create a new certificate. Example PowerShell command*:
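The command itself is not shown above; a minimal sketch using New-SelfSignedCertificate (the subject name and validity period are assumptions, adjust to your naming and policy):

$cert = New-SelfSignedCertificate -Subject "CN=SecureScoreReport" -CertStoreLocation "Cert:\LocalMachine\My" -KeyExportPolicy NonExportable -KeySpec Signature -KeyLength 2048 -NotAfter (Get-Date).AddYears(2) # hypothetical subject, 2-year validity
$cert.Thumbprint # use this for the $certThumbprint variable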
Don’t forget to grant read access to the private key for the account which will run the schedule. Right click on the certificate – All Tasks – Manage Private Keys…
I prefer to use "Network Service" for these tasks, because only limited permissions are needed.
Export the certificate’s public key and upload it to the app registration’s certificates:
Let’s move on to the script.
The script
Some variables and actions need to be modified, like $tenantID, $appID and $certThumbprint in the first lines. Also, the notification part (the Send-MailMessage lines) needs to be customized to your needs. The script itself can be broken down as follows (a sketch of the comparison step follows below):
– authenticate to Graph using a certificate (the auth function is from MSEndpointMgr.com)
– the following two lines query the Secure Score data for today and yesterday:
$url = 'https://graph.microsoft.com/beta/security/securescores?$top=2'
$webResponse = Invoke-RestMethod -Method Get -Uri $url -Headers $headers -ErrorAction Stop
– some HTML style for readable emails
– compare today's and yesterday's controlScores – alert when there are new / deprecated recommendations
– compare today's scores with yesterday's scores – alert when changes are detected
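The full script is not reproduced here, but a minimal sketch of the comparison logic could look like this (it assumes $webResponse already holds the two most recent secureScores entries, newest first, and that only controlName/score are compared):

# $webResponse.value[0] is today's snapshot, value[1] is yesterday's
$today = $webResponse.value[0]
$yesterday = $webResponse.value[1]
foreach ($control in $today.controlScores) {
    $previous = $yesterday.controlScores | Where-Object { $_.controlName -eq $control.controlName }
    if (-not $previous) {
        Write-Output ("New recommendation: {0}" -f $control.controlName)
    }
    elseif ($previous.score -ne $control.score) {
        Write-Output ("Score change for {0}: {1} -> {2}" -f $control.controlName, $previous.score, $control.score)
    }
}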
New recommendations (Defender for Identity fresh install -> new MDI recommendations)
Score changes by recommendation
Fun fact: The Defender portal section where these score changes are displayed actually uses a “scoreImpactChangeLogs” node for these changes, but unfortunately I didn’t find a way to query this secureScoresV2 endpoint:
I thought it would be a quick Google search to find a PowerShell script that gives a report on applications published via Entra application proxy, but I found only scripts (link1, link2, link3) using the AzureAD PowerShell module – so I decided to write a new version using Graph PowerShell.
The Entra portal is still using the https://main.iam.ad.ext.azure.com/api/ApplicationProxy/ConnectorGroups endpoint to display the connector groups:
So the next step was to figure out whether there are Graph API equivalents. The Google search graph connectorgroups site:microsoft.com led me to this page: https://learn.microsoft.com/en-us/graph/api/connectorgroup-list?view=graph-rest-beta&preserve-view=true&tabs=http From this point it was "easy" to follow the logic of the previously linked scripts and "translate" the AzureAD PowerShell commands to Graph PS.
Note: as per the documentation, Directory.ReadWrite.All permission is required and only delegated permissions work.
As an alternative, I share the original script, which did not use these commands from Microsoft.Graph.Beta.Applications (a sketch follows below).
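That original script is not embedded here; a rough sketch of the raw-call approach (the two URIs are the documented beta endpoints, while the output shaping is my assumption):

Connect-MgGraph -Scopes "Directory.ReadWrite.All" # per the docs, this delegated permission is required
$baseUri = "https://graph.microsoft.com/beta/onPremisesPublishingProfiles/applicationProxy/connectorGroups"
$connectorGroups = (Invoke-MgGraphRequest -Method GET -Uri $baseUri).value
foreach ($cg in $connectorGroups) {
    # list the App Proxy applications assigned to this connector group
    $apps = (Invoke-MgGraphRequest -Method GET -Uri "$baseUri/$($cg.id)/applications").value
    foreach ($app in $apps) {
        [pscustomobject]@{
            ConnectorGroup = $cg.name
            Application    = $app.displayName
            AppId          = $app.appId
        }
    }
}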
I'm really into this Windows Hello for Business topic… Recently, I was going through the "RDP with WHfB" guide on MS Learn (link), which gave me an idea: can this method be used to protect user VPN certificates? The short answer is: yes, but no 🙂
TL;DR – Depending on your current infrastructure, several options are available to protect VPN with MFA: Azure MFA NPS extension, SAML-auth VPN with Conditional Access, Entra ‘mini-CA’ Conditional Access – Hello for Business can be used to protect access to certificates, why not use it to protect VPN certs?
Protecting VPN with MFA with Microsoft tools
NPS Extension
The most popular option I know of to protect VPN with MFA is the Azure MFA NPS extension (link). The logic is very simple: the RADIUS request coming to the NPS server is authenticated against Active Directory, then the NPS extension performs a secondary authentication (Azure MFA).
SAML-based authentication with Conditional Access
This depends on the vendor of the VPN appliance, but the mechanism is that an Enterprise application is created in Entra and a Conditional Access policy can be applied to it.
Conditional Access VPN
There is another option, called "Conditional Access VPN connectivity" in Entra – and by the way, it seems to me that Microsoft is hiding this option (I guess because it uses Azure Active Directory Graph, which is deprecated). I found a photo of how it looked in the old days (picture taken from here):
In the Entra portal this option is not visible (at least for me):
But when using the search bar, the menu can be found:
Some documentation links about this feature:
Conditional Access Framework and Device Compliance for VPN (link)
Conditional access for VPN connectivity using Microsoft Entra ID (link)
The mechanism in short: Entra creates a 'mini-CA' which issues short-lived certificates to clients; when a Windows VPN client is configured to use the DeviceCompliance flow, the client attempts to get a certificate from Entra before connecting to the VPN endpoint (from an admin standpoint, a 'VPN Server' application is created in Entra and Conditional Access policies can be applied to this application – I'm not going into details about this one, mainly because I encountered a lot of inconsistencies in the user experience when testing this solution 🙃) – and when everything is OK, the user gets a short-lived certificate which can be used for authentication (eg. EAP-TLS). Some screenshots about this:
Conditional Access policy evaluation result
Certificate valid for ~1 hour
VPN Certificate created with Microsoft Passport KSP
Disclaimer: using VPN certificates this way is not an official, Microsoft-supported authentication method; I tested it only for entertainment purposes.
This was the initial trigger of this post – based on the "Remote Desktop sign-in with Windows Hello for Business" tutorial, create VPN certificates using the Microsoft Passport KSP (link). The process is straightforward (see the sketch after this list):
– create the VPN certificate template (or duplicate the one you already have)
– export the template to a txt file
– modify the pKIDefaultCSPs setting to Microsoft Passport Key Storage Provider
– update the template with the new setting
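The tutorial does this via template export/import; as an alternative sketch (untested beyond my lab, and the template name below is hypothetical), the same attribute can be flipped directly on the template's AD object with the ActiveDirectory module:

# The certificate template objects live in the Configuration partition
$configNC = (Get-ADRootDSE).configurationNamingContext
$templateDN = "CN=VPNUserWHfB,CN=Certificate Templates,CN=Public Key Services,CN=Services,$configNC" # hypothetical template name
# Point the template at the Microsoft Passport Key Storage Provider
Set-ADObject -Identity $templateDN -Replace @{pKIDefaultCSPs = "1,Microsoft Passport Key Storage Provider"}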
User experience: well, if the user is WHfB-enrolled and logs in with WHfB, then nothing changes (the certificate is used "silently" upon connecting) – but when using a password to log in to Windows, the VPN connection prompts for Hello credentials:
So if Hello for Business can be considered a multi-factor authentication method, then this solution fits as well 🙂
Windows Hello for Business and Windows Hello may sound like siblings, but they are actually two different families in the authentication world (link)*. Hello basically uses password caching, while Hello for Business uses asymmetric authentication (key- or certificate-based) – that's why Windows Hello for Business (WHfB) has some infrastructure prerequisites in an on-premises or hybrid environment. Not every environment is prepared for WHfB, hence some organizations may have opted to enable convenience PIN for their users to make sign-in… well… more convenient. Why does it matter? Because users may encounter errors during WHfB enrollment, WHfB has an impact on Active Directory infrastructure, WHfB is a strong authentication method (~considered MFA in Conditional Access policy evaluation) and so on.
*the common thing about Hello and WHfB is the Credential Provider: users see the PIN/biometric authentication option on their logon screen
TL;DR – The Turn on convenience PIN sign-in policy enables Hello PIN in Account settings, but invokes Hello for Business enrollment when setting it up in the Windows Security app – Hello for Business implementation is very simple (and preferred over Hello) with Cloud Kerberos Trust, but migrating users from Hello has some pitfalls – Hello usage can be detected in the following registry hive: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\<userSID>
Behavior
Let's assume that WHfB is not configured in your environment, and even the Intune default policy for WHfB is set to "Not configured", like this:
On a client device, the eligibility for WHfB can be checked using dsregcmd /status under “Ngc Prerequisite Check” (link). On a domain joined/hybrid joined device, the PreReqResult will have the WillNotProvision value until WHfB is explicitly enabled.
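A quick way to pull just that information out of the output (a rough sketch, simple string matching):

# dump only the Ngc prerequisite check result lines from dsregcmd output
dsregcmd /status | Select-String -Pattern "PreReqResult|WillNotProvision"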
When you open Settings – Accounts – Sign-in options, you will see that PIN (Windows Hello) is greyed out, and the Windows Security app will not display options to set up Hello either:
Now let’s enable convenience PIN sign-in group policy: Computer Configuration – Administrative Templates – System – Logon – Turn on convenience PIN sign-in
The Windows Security tray icon almost immediately shows a warning status:
Hello enrollment is now active in the Settings – Accounts – Sign-in options menu, and we also have the option to set up Hello in Windows Security:
And here lies the discrepancy in the enrollment behavior: the Settings menu (left) sets up Hello, while the Windows Security app (right) invokes the WHfB enrollment process.
Windows Hello setup using Settings menu
Windows Security invoking Hello for Business enrollment
Migrating from Hello to Hello for Business
At this point, we may decide to prevent Hello for Business – but I suggest following the other direction and migrating Hello users to Hello for Business. Since we have Cloud Kerberos Trust, we don't need a PKI, only (at least one) Windows Server 2016 or newer domain controller (and hybrid-joined devices with hybrid identities with MFA registration, of course) [link]… so the deployment is very easy… but migration can be a bit tricky.
First, when a Hello for Business policy is applied on a computer, the credential provider (~the login screen asking for a PIN) is disabled for the user until WHfB enrollment. This means that the user will be asked for a password instead of a PIN – this may result in failed logon attempts, because users will probably enter their PIN "as usual". Another issue that you may encounter is related to the previous and the newly applied PIN policy. Based on my experience, the WHfB enrollment process prompts for the current PIN and tries to set it as the new PIN (from a user experience standpoint, this was a clever decision from Microsoft), but if the new policy requires a more complex PIN, the process may encounter an error (0x801c0026, not documented here).
Convenience PIN migration to Hello for Business PIN error
This error is handled by the logon screen:
Detecting Hello usage
As problems may occur with the Hello to WHfB migration, it's a good idea to have an inventory of Hello users. On every device, each Hello registration is stored under the following registry hive: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\<userSID>
It’s up to your creativity how you collect this information and translate the SIDs to some human readable format 🙂
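For example, a minimal sketch run locally on a device (the output shape is just an idea; wrap it in your inventory tooling of choice):

# Enumerate convenience PIN (Hello) registrations and resolve the SIDs to account names
$base = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials"
if (Test-Path $base) {
    Get-ChildItem -Path $base | ForEach-Object {
        $sid = $_.PSChildName
        try {
            $account = ([System.Security.Principal.SecurityIdentifier]$sid).Translate([System.Security.Principal.NTAccount]).Value
        }
        catch {
            $account = "<unresolved SID>"
        }
        [pscustomobject]@{
            ComputerName = $env:COMPUTERNAME
            SID          = $sid
            Account      = $account
        }
    }
}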
Microsoft is rolling out the Microsoft-managed Conditional Access policies (link) gradually, and I wanted to know how this is going to impact the users (which users, to be exact). Apparently, if the sign-in logs are not streamed to a Log Analytics workspace, the options are limited – but if you have the AADSignInEventsBeta table under Advanced hunting in the Microsoft Defender portal, some extra info can be gathered.
Streaming Entra logs to Log Analytics gives wonderful insights (not only for Conditional Access), so it is recommended to set up the diagnostic settings. If that is not an option, but AADSignInEventsBeta is available (typically in organizations with E5 licences), then the following query will show those sign-ins that would have been impacted by a report-only Conditional Access policy:
AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| mv-apply todynamic(ConditionalAccessPolicies) on (
    where ConditionalAccessPolicies.result == "reportOnlyInterrupted" or ConditionalAccessPolicies.result == "reportOnlyFailure"
    | where ConditionalAccessPolicies.displayName has "Microsoft-managed:" //filter for managed Conditional Access policies
    | extend CADisplayName = tostring(ConditionalAccessPolicies.displayName)
    | extend CAResult = tostring(ConditionalAccessPolicies.result))
| distinct Timestamp, RequestId, Application, ResourceDisplayName, AccountUpn, CADisplayName, CAResult
Note: in the AADSignInEventsBeta table, the ConditionalAccessPolicies is a JSON value stored as a string so the todynamic function is needed.
Note2: Since every Conditional Access policy is evaluated against each logon, the query first filters for those sign-ins where the report-only result is 'Interrupted' or 'Failure', then the policy displayName is used to narrow down the results. Starting the filter with the displayName would be pointless.
Some example summarizations if you need to see the big picture (same query as above, but the last line can be replaced with these):
View impacted user count by application:
| summarize AffectedUsersCount=dcount(AccountUpn) by Application, CADisplayName, CAResult
The same summarization in one-day buckets:
| summarize AffectedUsers = dcount(AccountUpn) by bin(Timestamp,1d), CADisplayName, CAResult
List countries by result:
| summarize make_set(Country) by CADisplayName, CAResult
Another useful feature is the Monitoring (Preview) menu in Conditional Access – Overview:
Here we have a filter option called ‘Policy evaluated’ where report-only policies are grouped under the ‘Select individual report-only policies’ section. This gives an overview but unfortunately does not list the affected users.
When a Microsoft-managed policy is opened, this chart is presented under the policy info as well.
In the November 2023 – What's New in Microsoft Entra Identity & Security w/ Microsoft Security CxE identity episode, a public preview feature of the Entra Workload ID premium license was presented (link), which was actually announced on November 9th (link). I really love the idea of restricting application key credentials to a predefined list of Certificate Authorities, which is why I thought I'd write some words about it.
TL;DR – You can generate a report on current keyCredentials usage (with certificate issuer data) using the PowerShell script below (Graph PowerShell is used here) [no extra license needed] – First, you create a certificateBasedApplicationConfigurations object – Then you can modify the defaultAppManagementPolicy, or create an appManagementPolicy and apply it directly to one or more application objects (for the latter, tutorial below) – These configurations require an Entra Workload ID premium license
Reporting on application key credentials
The linked announcements highlight how to set the defaultAppManagementPolicy, but before setting this, you may want to know which applications are using certificates to authenticate and which CA issued these certs. This way, you can first change the certificates to ones you trust, then set up the restriction(s). The following script lists these applications and the issuer of each certificate (for the sake of simplicity, I use the Invoke-MgGraphRequest command):
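The script itself is not embedded here; a minimal sketch of the idea (paging is ignored for brevity, and it assumes the key property comes back as a Base64 string – worth validating in your tenant):

Connect-MgGraph -Scopes "Application.Read.All" # delegated; should be sufficient for reading keyCredentials
$uri = "https://graph.microsoft.com/v1.0/applications?`$select=displayName,appId,keyCredentials&`$top=999"
$apps = (Invoke-MgGraphRequest -Method GET -Uri $uri).value
foreach ($app in $apps) {
    foreach ($cred in $app.keyCredentials) {
        if ($cred.key) {
            # the key property holds the Base64-encoded public certificate
            $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new([Convert]::FromBase64String($cred.key))
            [pscustomobject]@{
                Application = $app.displayName
                AppId       = $app.appId
                Subject     = $cert.Subject
                Issuer      = $cert.Issuer
                NotAfter    = $cert.NotAfter
            }
        }
    }
}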
The result will look like this (yes, I use self-signed certificates in my demo environment 🙈):
Example result from reporting script
Note: the Issuer field may not be 100% reliable, as it can be set manually when creating a self-signed certificate. The following method will show each certificate in the trust chain (the $cred variable comes from the foreach loop above):
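A sketch of that chain check (revocation checking is skipped on purpose, as we only care about the chain members):

# Build the chain for one keyCredential's certificate ($cred comes from the loop above)
$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new([Convert]::FromBase64String($cred.key))
$chain = [System.Security.Cryptography.X509Certificates.X509Chain]::new()
$chain.ChainPolicy.RevocationMode = [System.Security.Cryptography.X509Certificates.X509RevocationMode]::NoCheck
$null = $chain.Build($cert) # returns $false for untrusted/self-signed chains, but the elements are still populated
$chain.ChainElements | ForEach-Object { $_.Certificate } | Select-Object Subject, Issuer, Thumbprint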
To restrict application keyCredentials, the following should be kept in mind (announcement link again):
– The policy applies only to new credentials; it won't disable current keys
– At least one root CA needs to be declared, and a chain can consist of a maximum of 10 objects
– First, you create a certificateBasedApplicationConfigurations object (~the trusted cert chain)
– Next, you can modify the defaultAppManagementPolicy to restrict all keyCredentials to this/these trusted CA(s) (as demonstrated on the linked page)
– OR you can create a separate appManagementPolicy with the trusted CA restriction, THEN apply this policy directly to one or more applications (steps below)
Creating the certificateBasedApplicationConfigurations object
In this example, I'm going to use Graph Explorer to create the object. As a Windows user, I will simply export my issuing CA's (F12SUBCA01) certificate and its root CA's (ROOTCA01) certificate to Base-64 encoded CER files using the certlm.msc MMC snap-in, open them in Notepad and copy the contents into the Graph Explorer call's request body. Find the issuing CA's cert, then right-click – All Tasks – Export:
Select “Base-64 encoded X.509 (.CER)” as export file format.
Repeat the same steps for each certificate in the chain. Now, open the cer files with Notepad, remove the '-----BEGIN CERTIFICATE-----' and '-----END CERTIFICATE-----' lines and all line breaks.
These values will be used in the payload sent to Microsoft Graph. CAUTION! Use the beta endpoint for now, as this is a preview feature. If you accidentally use the v1.0 endpoint, you will encounter issues (example below).
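The call is not shown above; following the announcement's pattern, it should look something like this (the display name, description and the truncated certificate values are placeholders):

METHOD: POST
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations
REQUEST BODY:
{
  "displayName": "F12 trusted CA chain",
  "description": "Root and issuing CA trusted for application key credentials",
  "trustedCertificateAuthorities": [
    {
      "isRootAuthority": true,
      "certificate": "<Base64 content of the ROOTCA01 cer file>"
    },
    {
      "isRootAuthority": false,
      "certificate": "<Base64 content of the F12SUBCA01 cer file>"
    }
  ]
}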
If everything was inserted correctly, the response includes an id; take a note of it. If you did not manage to catch it, no problem, you can query these configurations as follows:
METHOD: GET
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations
List configuration objects
Note: when you dig further, the CA information can be queried for each configuration id, for example (you can omit the '?$select=isRootAuthority,issuer' part if you want to check the certificate data too):
METHOD: GET
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations/<configurationID>/trustedCertificateAuthorities?$select=isRootAuthority,issuer
Creating the appManagementPolicy object
Now that we have the CA configuration, the next step is to create the appManagementPolicy object (if you are not going to apply it in the defaultAppManagementPolicy). The appManagementPolicy can contain restrictions for passwordCredentials and KeyCredentials. In this example, I’m going to create a policy that prohibits passwordCredentials and restricts key credentials to the trusted CA configuration defined above.
METHOD: POST
ENDPOINT: BETA
URI: https://graph.microsoft.com/beta/policies/appManagementPolicies
REQUEST BODY:
{
  "description": "This policy restricts application credentials to certificates issued by F12SUBCA and disables password addition",
  "isEnabled": true,
  "restrictions": {
    "passwordCredentials": [
      {
        "restrictionType": "passwordAddition",
        "maxLifetime": null
      }
    ],
    "keyCredentials": [
      {
        "restrictionType": "trustedCertificateAuthority",
        "certificateBasedApplicationConfigurationIds": [
          "0d60f78e-9916-4db2-9cee-5c8e470a19e9"
        ]
      }
    ]
  }
}
Creating the appManagementPolicy object
Take a note of the id given in the response as it will be used in the final step.
NOTE: if you accidentally use the v1.0 endpoint, you will encounter issues like this:
"Expected property 'certificateBasedApplicationConfigurationIds' is not present on resource of type 'KeyCredentialConfiguration'"
Applying the policy to an application
Finally, the policy needs to be applied to an application, as follows:
METHOD: POST
ENDPOINT: BETA
URI: https://graph.microsoft.com/beta/applications/<objectID of application>/appManagementPolicies/$ref
REQUEST BODY:
{
"@odata.id": "https://graph.microsoft.com/beta/policies/appmanagementpolicies/<appManagementPolicyID>"
}
Applying the policy to an application
The result for this application:
Uploading a certificate not issued by the trusted CA fails
Adding new client secret option is greyed out
Closing words: it is a bit cumbersome to configure these settings, but the result is truly satisfying 😊 I hope that once this goes GA, it will get some graphical interface to ease the process.