PowerShell with Entra CBA – unattended access to the Defender portal when Graph API or application permissions do not fit

One of my previous posts covered a “basic” way to track secure score changes using Graph API with application permissions. While I still prefer application permissions (over service accounts) for unattended access to certain resources, sometimes it is not possible – for example when you want to access resources which are behind the Defender portal’s apiproxy (like the scoreImpactChangeLogs node in the secureScore report). To overcome this issue, I decided to use Entra Certificate-based Authentication as this method provides a “scriptable” (and “MFA capable”) way to access these resources.

A lot of credit goes to the legendary Dr. Nestori Syynimaa (aka DrAzureAD) and the AADInternals toolkit (specifically the CBA module, as this provided me the fundamentals to understand the authentication flow). My script is mostly a stripped version of his work, but it targets the security.microsoft.com portal. Credit goes to Marius Solbakken as well for his great blog post on Azure AD CBA which gave me the hint to fix an error during the authentication flow (details below).

TL;DR

  • the script uses certificate-based auth (not to be confused with app auth with a certificate) to access https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2 which is used to display secure score information on the Defender portal
  • prerequisites: Entra CBA configured for the “service account”, appropriate permissions granted for the account to access secure score information, and a certificate to be used for auth
  • the script provided is only for research/entertainment purposes, this post is more about the journey and the caveats than the result
  • tested on Windows PowerShell (v5.1); encountered issues with PowerShell (v7.5)

The script

$tenantID = "<your tenant id>"
$userUPN = "<CBA user UPN>"
$thumbprint = "<thumbprint of certificate installed in Cert:\CurrentUser\My\ >"

function Extract-Config ($inputstring){
    $regex_pattern = '\$Config=.*'
    $configMatch = [regex]::Match($inputstring, $regex_pattern) #avoid assigning to the automatic $Matches variable
    $config = $configMatch.Value.Replace("`$Config=","") #remove $Config=
    $config = $config.Substring(0, $config.Length-1) #remove last semicolon
    $config | ConvertFrom-Json
}

#https://learn.microsoft.com/en-us/entra/identity/authentication/concept-authentication-web-browser-cookies
##Cert auth to security.microsoft.com 
# Credit: https://github.com/Gerenios/AADInternals/blob/master/CBA.ps1
# STEP1 - Invoke the first request to get redirect url
$webSession = New-Object Microsoft.PowerShell.Commands.WebRequestSession
$response = Invoke-WebRequest -Uri "https://security.microsoft.com/" -Method Get -WebSession $webSession  -ErrorAction SilentlyContinue -MaximumRedirection 0 -UseBasicParsing
$url = $response.Headers.'Location'

# STEP2 - Send HTTP GET to RedirectUrl
$login_get = Invoke-WebRequest  -Uri $Url -Method Get -WebSession $WebSession -ErrorAction SilentlyContinue -UseBasicParsing -MaximumRedirection 0

# STEP3 - Send POST to GetCredentialType endpoint
#Credit: https://goodworkaround.com/2022/02/15/digging-into-azure-ad-certificate-based-authentication/
$GetCredentialType_Body = @{
    username = $userUPN
    flowtoken = (Extract-Config -inputstring $login_get.Content).sFT
    }

$getCredentialType_response = Invoke-RestMethod -method Post -uri "https://login.microsoftonline.com/common/GetCredentialType?mkt=en-US" -ContentType "application/json" -WebSession $webSession -Headers @{"Referer"= $url; "Origin" = "https://login.microsoftonline.com"} -Body ($GetCredentialType_Body | convertto-json -Compress) -UseBasicParsing

#STEP 4 - Invoke REST POST to certauth endpoint with ctx and flowtoken using certificate
$CBA_Body = @{
    ctx = (Extract-Config -inputstring $login_get.Content).sctx
    flowtoken = $getCredentialType_response.FlowToken
    }
$CBA_Response = Invoke-RestMethod -UseBasicParsing -Uri "https://certauth.login.microsoftonline.com/$TenantId/certauth" -Method Post -Body $CBA_Body -Certificate (get-item Cert:\CurrentUser\My\$thumbprint)

#STEP 5 - Send authentication information to the login endpoint
$login_msolbody = $null
$login_msolbody = @{
        login = $userUPN
        ctx = ($CBA_Response.html.body.form.input.Where({$_.name -eq "ctx"})).value
        flowtoken = ($CBA_Response.html.body.form.input.Where({$_.name -eq "flowtoken"})).value
        canary = ($CBA_Response.html.body.form.input.Where({$_.name -eq "canary"})).value
        certificatetoken = ($CBA_Response.html.body.form.input.Where({$_.name -eq "certificatetoken"})).value
        }

$headersToUse = @{
        'Referer'="https://certauth.login.microsoftonline.com/" 
        'Origin'= "https://certauth.login.microsoftonline.com"                
        }

$login_postCBA = Invoke-WebRequest -UseBasicParsing -Uri "https://login.microsoftonline.com/common/login" -Method Post -Body $login_msolbody -Headers $headersToUse -WebSession $webSession 

#STEP 6 - Make a request to login.microsoftonline.com/kmsi to get code and id_token
$login_postCBA_config = (Extract-Config -inputstring $login_postCBA.Content)
$KMSI_body = @{
    "LoginOptions" = "3"
    "type" = "28"
    "ctx" = $login_postCBA_config.sCtx
    "hpgrequestid" = $login_postCBA_config.sessionId
    "flowToken" = $login_postCBA_config.sFT
    "canary" = $login_postCBA_config.canary
    "i19" = "2326"
}

$KMSI_response = Invoke-WebRequest -UseBasicParsing -Uri "https://login.microsoftonline.com/kmsi" -Method Post -WebSession $WebSession -Body $KMSI_body

#STEP 7 - add sessionID cookie to the websession as this will be required to access security.microsoft.com (probably unnecessary)
#$websession.Cookies.Add((New-Object System.Net.Cookie("s.SessID", ($response.BaseResponse.Cookies | ? {$_.name -eq "s.SessID"}).value, "/", "security.microsoft.com"))) #s.SessID cookie is retrieved during first GET to defender portal

#STEP 8 - POST the id_token and session information to security.microsoft.com to get sccauth and XSRF-TOKEN cookies
$securityPortal_POST_body = @{
    code = ($KMSI_response.InputFields.Where({$_.name -eq "code"})).value
    id_token = ($KMSI_response.InputFields.Where({$_.name -eq "id_token"})).value
    state = ($KMSI_response.InputFields.Where({$_.name -eq "state"})).value
    session_state = ($KMSI_response.InputFields.Where({$_.name -eq "session_state"})).value
    correlation_id = ($KMSI_response.InputFields.Where({$_.name -eq "correlation_id"})).value
    }
$securityPortal_POST_response = Invoke-WebRequest -UseBasicParsing -Uri "https://security.microsoft.com/" -Method Post -WebSession $webSession -Body $securityPortal_POST_body -MaximumRedirection 1
##END of Cert auth to security.microsoft.com 

## Query the secureScoresV2
#Decode the XSRF-TOKEN
Add-Type -AssemblyName System.Web #System.Web is not loaded by default in Windows PowerShell 5.1
$xsrfToken = $webSession.Cookies.GetCookies("https://security.microsoft.com") | ? {$_.name -eq "XSRF-TOKEN"} | % {$_.value}
$xsrfToken_decoded = [System.Web.HttpUtility]::UrlDecode($xsrfToken)

#Send GET to secureScoresV2 with the decoded XSRF-TOKEN added to the headers
$SecureScoresV2_headers = @{
    "x-xsrf-token" = $xsrfToken_decoded
    }
$secureScoresV2_response = Invoke-WebRequest -UseBasicParsing -Uri "https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2?`$top=400" -WebSession $webSession -Headers $SecureScoresV2_headers 

#RESULT
$secureScoreInfo = $secureScoresV2_response.Content | ConvertFrom-Json
$secureScoreInfo.value

Explained

Since I’m not a developer, I will explain all the steps (the result of research and a lot of guesswork) as I experienced them (let’s call it the sysadmin aspect). So essentially, this script “mimics” the user opening the Defender portal, authenticates with CBA, clicks on Secure Score and returns the raw information which is transformed in the browser into something user-friendly. As a prerequisite, the certificate (with the private key) needs to be installed in the Current User personal certificate store of the user running the script.

Step 0 is to populate the $tenantID, $userUPN and $thumbprint variables accordingly.
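
A quick sanity check before running the script (the thumbprint must point to a certificate with a private key):

#optional: verify that the certificate is present and has a private key
Get-Item Cert:\CurrentUser\My\$thumbprint | Select-Object Subject, NotAfter, HasPrivateKey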

Step 1 is creating a WebRequestSession object (like a browser session, from my perspective the $websession variable is just a cookie store) and navigating to https://security.microsoft.com. When performed in a browser, we get redirected to the login portal – if we open the browser developer tools, we can see in the network trace that this means a 302 HTTP code (redirect) with a Location header in the response. This is where we get redirected:

From the script aspect, we will store this Location header in the $url variable:

Notice that every Invoke-WebRequest/Invoke-RestMethod command uses the -UseBasicParsing parameter. According to the documentation, this parameter is deprecated in newer PowerShell versions, and from v6.0.0 all requests use basic parsing only. However, I’m using v5.1, which relies on Internet Explorer to parse the content – so if IE is not configured or is disabled, the command could fail without this parameter.
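
If you want to guard against accidentally running the script under a newer PowerShell version (where I encountered issues), a minimal version check could look like this:

#warn when not running under Windows PowerShell 5.x - the only version this was tested on
if ($PSVersionTable.PSVersion.Major -ne 5) {
    Write-Warning "Tested on Windows PowerShell 5.1 only - results may vary."
}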

At this point the $webSession variable contains the following cookies for security.microsoft.com: s.SessID, X-PortalEndpoint-RouteKey and an OpenIdConnect.nonce:

Step 2 is to open the redirectUrl:

When opened, we receive some cookies for login.microsoftonline.com, including buid, fpc and esctx (documentation for the cookies here):

But the most important information is the flowtoken (sFT) which can be found in the response content. In the browser trace it looks like this:

In PowerShell, the response content is the $login_get variable’s Content member, returned as a string. This needs to be parsed, because it is embedded in a script HTML node, beginning with $Config:

I’m using the Extract-Config function to get this configuration data (later I found that AADInternals uses the Get-Substring function defined in CommonUtils.ps1, which is more sophisticated 🙃):

Step 3 took some time to figure out. When I tried to use AADInternals’ Get-AADIntadminPortalAccessTokenUsingCBA command, I got an error message:

AADSTS90036: An unexpected, non-retryable error stemming from the directory service has occurred.

Luckily, I found this blog post, which led me to think that this GetCredentialType call is missing from AADInternals (probably something is misconfigured on my side and this can be skipped). This call – from my standpoint – returns a new flowtoken, and this new one needs to be sent to the certauth endpoint. (Until I figured this out, every attempt to authenticate on the certauth endpoint resulted in AADSTS90036.)

Step 4 is basically the same as in AADInternals’ module: the flowtoken and ctx are posted to the certauth.login.microsoftonline.com endpoint.

Notice here that the ContentType parameter is set to “application/json” – where it is not specified, it defaults to “application/x-www-form-urlencoded” for a POST call. In the browser trace, this is defined in the Content-Type header:

Step 5 is slightly different from AADInternals’ CBA module, but follows the same logic: send the login (userPrincipalName), ctx, flowtoken, canary and certificatetoken content to the https://login.microsoftonline.com/common/login endpoint, and in turn we receive the updated flowtoken, ctx, sessionid and canary information, which is posted to the https://login.microsoftonline.com/kmsi endpoint in Step 6.

The KMSI_response contains the id_token, code, state, session_state and correlation_id. When we look back at the browser trace, we will see that these parameters are passed to the security.microsoft.com portal to authenticate the user.

Step 7 is probably totally unnecessary (commented out) and can be considered the result of too much desperate testing. It just adds the s.SessID cookie to our websession, which is also needed during authentication (without this cookie, you will immediately receive some timeout errors). This cookie is received upon the first request (I guess my testing involved clearing some variables… anyway, it won’t hurt).

Step 8 is the final step in this authentication journey: we post the content we received in the $KMSI_response variable. In the browser trace we can see that HTTP 302 is the status code for this request, followed by a new request to the same endpoint.

This is why the -MaximumRedirection parameter is set to 1 in this step. (Some of my tests failed with 1 redirection allowed, so if it fails, it can be increased to 5 for example).

Finally we have the sccauth and XSRF-TOKEN cookies which are required to access resources.

I thought this was the green light – all I needed was to use the websession to access the secureScoresV2 – but some tweaking was required, because Invoke-WebRequest failed with the following error message:

Invoke-WebRequest : {"Message":"The web page isn\u0027t loading correctly. Please reload the page by refreshing your browser, or try deleting the cookies from your browser and then sign in again. If the problem persists, contact 
Microsoft support."

Taking a look at the request, I noticed that the XSRF-TOKEN is sent as the X-Xsrf-Token header (even though the cookie is present in the $websession)

XSRF-TOKEN sent as X-Xsrf-Token header

It took some (~a lot of) time to figure out that this token is URL-encoded, so it needs to be decoded as well before using it as a header:

Slight but crucial difference between the encoded and the decoded XSRF-TOKEN

So once we have the decoded token, it can be used as x-xsrf-token:

The response content is in JSON format, the ConvertFrom-Json cmdlet will do the parsing.

Compared to the secureScore exposed by the Graph API, here we have the scoreImpactChangeLogs property, which is missing in Graph.

Example of the ScoreImpactChangeLogs property
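
For example, the change log entries could be pulled out of the parsed response like this (just a sketch – the property path under each control score entry is assumed from the portal response):

#list controls that have change log entries (property path assumed from the portal response)
$secureScoreInfo.value[0].controlScores |
    Where-Object { $_.scoreImpactChangeLogs } |
    Select-Object controlName, scoreImpactChangeLogs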

This is just one example (of endless possibilities) of using Entra CBA to access the Defender portal, but my main goal was to share my findings and give a hint on reaching other useful stuff on security.microsoft.com.

How much time are your users wasting on “traditional” MFA?

Recently, I came across a post on LinkedIn which demonstrated that passkey authentication is way faster than the traditional password + MFA notification login. It made me curious: how much time exactly does it take to do MFA?

TL;DR
– This report uses the SignInLogs table which needs to be configured in Diagnostic settings
– Unfortunately I did not manage to gather the same info from AADSignInEventsBeta table in Defender or sign-in logs from Microsoft Graph
– Everything written here is based on my tests and measurements, so it may contain inaccurate conclusions

The query below will display the authentication method, and the average and overall time spent completing the MFA prompt:

let StrongAuthRequiredSignInAttempts = SigninLogs
	| where ResultType == "50074"
	| distinct ResultType,UniqueTokenIdentifier,CorrelationId;
let MFA1 = SigninLogs
	| join kind=inner StrongAuthRequiredSignInAttempts on UniqueTokenIdentifier
	| mv-expand todynamic(AuthenticationDetails)
	| project stepdate=todatetime(AuthenticationDetails.authenticationStepDateTime), authMethod = tostring(AuthenticationDetails.authenticationMethod), stepResult = tostring(AuthenticationDetails.authenticationStepResultDetail), RequestSequence = todouble(AuthenticationDetails.RequestSequence), StatusSequence = todouble(AuthenticationDetails.StatusSequence), CorrelationId,RequestSeq_UnixTime = unixtime_milliseconds_todatetime(todouble(AuthenticationDetails.RequestSequence)), UniqueTokenIdentifier, StatusSeq_UnixTime = unixtime_milliseconds_todatetime(todouble(AuthenticationDetails.StatusSequence)), MFAMethod =tostring(MfaDetail.authMethod)
    | summarize make_set(stepResult), MFAStart=min(stepdate), MFAEnd=max(stepdate), TimeSpent=totimespan(max(stepdate)-min(stepdate)),TimeSpentv2=totimespan(maxif(StatusSeq_UnixTime, StatusSequence > 1)-minif(RequestSeq_UnixTime, RequestSequence > 1)) by UniqueTokenIdentifier,MFAMethod
    | where set_stepResult has "MFA successfully completed"
    ;
MFA1
| where isnotempty(MFAMethod)
| project MFAMethod,TimeSpent = coalesce(TimeSpentv2,TimeSpent)
| summarize AverageMFATime=avg(TimeSpent),SumMFATime=sum(TimeSpent) by MFAMethod

Example result:

Explanation

The first step was to find those sign-in attempts that are interrupted because MFA is needed. This can be easily found, as there is a ResultDescription column where we can filter for “Strong Authentication is required.” entries:

SigninLogs
| where ResultDescription == "Strong Authentication is required."

Or use the ResultType column, where 50074 state code indicates the same (reference: https://login.microsoftonline.com/error?code=50074).

The first catch is that not every entry of a sign-in session has this field populated with the same value (for logical reasons). Let’s take a simple login to the Azure portal with Authenticator Code as MFA:

In this example, I intentionally waited 30 seconds to provide the code (after successful password entry) [code prompt started on 2024.12.09 9:41:15, code sent on 9:41:45]. The TimeGenerated field is a bit misleading, because it is the creation timestamp of the event entry, not of the authentication event (that part is stored in the AuthenticationDetails column).
It is also worth mentioning that the CorrelationId remains the same in a browser session (even if session policies require re-authentication) – so if, for example, the Azure portal is kept open in the browser but re-authentication happens, the CorrelationId stays the same, while the authentication steps (re-entering the password, a new MFA prompt) need to be handled separately. This is why I’m using the UniqueTokenIdentifier.

But let’s get back to the example and extend the AuthenticationDetails column:

Some fields are not totally clear to me, but according to my measurements, the most accurate timespan of “doing MFA” is the time between the “MFA required in Azure AD” and the “MFA completed in Azure AD” events (it’s not totally accurate here, because I spent some time changing the MFA method).

However, this approach (time between “MFA required” and “MFA completed”) will not cover all the other MFA methods, because “MFA required” is not always present in the logs. For example, the next sign-in used Mobile app notification as MFA:

At this point the possible solution is to either write a query for each authentication method or try to find a unified approach. I opted for the unified option: assume that the “MFA start time” is the first logged AuthenticationStepDate and the “MFA end time” is the last logged AuthenticationStepDate where we have “MFA successfully completed” entry (this one seems to be present in every MFA type).

This looks almost appropriate, but in the case of “Mobile app notification” I found the RequestSequence and StatusSequence fields, which are Unix timestamps and look more precise:

But since these fields are not always present, I chose the KQL coalesce() function to return the TimeSpentv2 value when present – otherwise return the TimeSpent value.

Note1: the summarize operator needs to group by UniqueTokenIdentifier and MFAMethod, because without the MFAMethod, “Password” would also be returned as an authentication factor.

Note2: when calculating TimeSpentv2, there were other authentication steps where the StatusSequence fields were empty, 0 or 1. These are clearly not Unix timestamps, so only values greater than 1 are considered here.

+1 point for passkey authentication 🙃

Find clients authenticating from unassigned AD subnets – using Defender for Identity

A well-maintained AD topology is very important because domain-joined clients use this information to locate the optimal Domain Controller (DCLocator documentation here) – failing to find the most suitable domain controller will have a performance impact on the client side (slow logon, group policy processing, etc.). In an ideal world, when a new subnet is created and AD-joined computers are placed there, AD admins are notified and they assign the subnet to the appropriate site – but sometimes this is not the case.

There are several methods to detect IP addresses coming from unassigned subnets:
– By analyzing the \\<dc>\admin$\debug\netlogon.log logfiles (example here)
– Looking for 5778 EventID in System log (idea from here)
– Using PowerShell, get all client-registered DNS entries and look them up against the replication subnets (some IP subnet calculator will be needed – see the sketch after this list)
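
For the third method, the “IP subnet calculator” part could be a small helper function like this (a rough sketch, IPv4 only):

function Test-IPInSubnet ($IPAddress, $Cidr) {
    $network, $maskLength = $Cidr -split '/'
    $ipBytes  = [System.Net.IPAddress]::Parse($IPAddress).GetAddressBytes()
    $netBytes = [System.Net.IPAddress]::Parse($network).GetAddressBytes()
    [array]::Reverse($ipBytes); [array]::Reverse($netBytes) #convert to big-endian order
    #build the subnet mask from the prefix length, then compare the network parts
    $mask = [uint32]([math]::Pow(2,32) - [math]::Pow(2,(32 - [int]$maskLength)))
    ([BitConverter]::ToUInt32($ipBytes,0) -band $mask) -eq ([BitConverter]::ToUInt32($netBytes,0) -band $mask)
}
Test-IPInSubnet -IPAddress "10.0.1.50" -Cidr "10.0.1.0/24" #True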

My idea was to use Defender for Identity logs (mainly because I recently (re)discovered the ipv4_lookup plugin in Kusto 🙃).

TL;DR
– by defining the ADReplicationSubnets as a datatable, we can find logon events from the IdentityLogonEvents table where clients use an IP address that is not in any replication subnet
– we can use a “static” datatable, or schedule a PowerShell script which will dynamically populate the items in this table

The query:

let IP_Data = datatable(network:string)
[
 "10.0.1.0/24", //example subnet1
"10.0.2.0/24", //example subnet2
"192.168.0.0/16", //example subnet3
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty( network)

Quite simple, isn’t it? We filter for successful Kerberos logon events (without the Protocol filter, other logon events could generate noise) and use the ipv4_lookup function to look up the IP address in the “IP_Data” table’s “network” column, including those entries that cannot be matched with any subnet – then we filter for the unmatched entries.

Example result

Scheduling the query as a PowerShell script

So far, so good. But over time, the list of subnets may change, grow, etc. – how can this subnet list be dynamically populated? Using the Get-ADReplicationSubnet command for example. As a prerequisite I created an app registration with ThreatHunting.Read.All application permission (with a certificate as credential):

App registration for script scheduling

The following script is used:

#required scope: ThreatHunting.Read.All

##Connect Microsoft Graph using Certauth
$tenantID = '<tenantID>'
$clientID = '<clientID>'
$certThumbprint = "<certThumbprint>"

Connect-MgGraph -TenantId $tenantID -ClientId $clientID -CertificateThumbprint $certThumbprint

##Define hunting query
$huntingQuery = '
let IP_Data = datatable(network:string)
['+( (Get-ADReplicationSubnet -filter *).Name | % {'"' + $_ + '",'}) +'
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty( network)
'

#construct payload with 7 days timespan
$body = @{Query = $huntingQuery
    Timespan = "P7D"
} | ConvertTo-Json

$url = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"
#Run hunting query
$response = Invoke-MgGraphRequest -Method Post -Uri $url -Body $body

$results = foreach ($result in $response.results){
    [pscustomobject]@{
        IPAddress = $result.IpAddress
        DeviceName = $result.DeviceName
        LogonCount = $result.LogonCount
        }
}

$results

The hunting query is the same as above, but the datatable entries are populated from the results of the Get-ADReplicationSubnet command (with some dirty string formatting, like adding quotation marks and a comma). In the $body variable the Timespan is set to seven days (ISO 8601 format) – when Timespan is not set, it defaults to 30 days (reference).

Running the script

From this point, it is up to you to schedule the script (or fine tune the output) and email the results. 😊
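
A hypothetical scheduling example (the script path, task name and schedule are made up – adjust them to your environment):

#register a weekly scheduled task that runs the script
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-NoProfile -File C:\Scripts\Get-UnassignedSubnetLogons.ps1'
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 6am
Register-ScheduledTask -TaskName "AD - logons from unassigned subnets" -Action $action -Trigger $trigger -User "NT AUTHORITY\NETWORK SERVICE"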

Extra hint: if you have a multi-domain environment, the hunting query may need to be “domain specific” – for this purpose I would insert the following filter: | where AdditionalFields.Spns == "krbtgt/<domainDNSName>", for example:

IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| where AdditionalFields.Spns == "krbtgt/F12.HU"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty( network)

Tracking Microsoft Secure Score changes

Microsoft Secure Score can be a good starting point in assessing organizational security posture. Improvement actions are added to the score regularly (link) and points achieved are updated dynamically.

For me, Secure Score is a measurement of hard work represented in little percentage points. Every little point is a reward, which can be taken back by Microsoft when changes happen in the current security state (be it the result of an action [e.g. someone enabled the print spooler on a domain controller] – or of inactivity [e.g. a domain admin account became “dormant”]). Whatever the reason for the score degradation, I want to be alerted, because I don’t want to check this chart on a daily basis. Unfortunately, I didn’t find any ready-to-use solution, so I’m sharing my findings.

TL;DR
– The Get-MgSecuritySecureScore Graph PowerShell cmdlet can be used to fetch 90 days of score data
– The basic idea is to compare the actual scores with yesterday’s scores and report on differences
– When new controlScores (~recommendations) arrive, send a separate alert
– The script I share is a PowerShell script with certificate auth, but no Graph PowerShell cmdlets are used, just native REST API calls (sorry, I still have issues with Graph PS, while the native approach is consistent). Using app auth with a certificate, the script can be scheduled to run on a daily basis (I don’t recommend a more frequent schedule as there are temporary score changes which are mostly self-remediating)

Prerequisites
We will need an app registration with Microsoft Graph/SecurityEvents.Read.All Application permission (don’t forget the admin consent):

App registration with SecurityEvents.Read.All permission

On the server on which you are planning to schedule the script, create a new certificate. Example PowerShell command*:

New-SelfSignedCertificate -FriendlyName "F12 - Secure score monitor" -NotAfter (Get-Date).AddYears(2) -Subject "F12 - Secure score monitor" -CertStoreLocation Cert:\LocalMachine\My -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -KeyExportPolicy NonExportable

Don’t forget to grant read access to the private key for the account which will run the scheduled task. Right-click the certificate – All Tasks – Manage Private Keys…

I prefer to use “Network Service” for these tasks because only limited permissions are needed.

Export the certificate’s public key and upload it to the app registration’s certificates:
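
The export can also be done from PowerShell (a sketch, assuming the friendly name used above):

#export the public key (no private key) to a CER file for upload
$cert = Get-ChildItem Cert:\LocalMachine\My | ? {$_.FriendlyName -eq "F12 - Secure score monitor"}
Export-Certificate -Cert $cert -FilePath C:\Temp\SecureScoreMonitor.cer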

Let’s move on to the script.

The script

Some variables and actions need to be modified, like $tenantID, $appID and $certThumbprint in the first lines. Also, the notification part (Send-MailMessage lines) needs to be customized to your needs.
The script itself can be broken down as follows:
– authenticate to Graph using certificate (the auth function is from MSEndpointMgr.com)
– the following two lines query the Secure Score data for today and yesterday:
$url = 'https://graph.microsoft.com/beta/security/securescores?$top=2'
$webResponse = Invoke-RestMethod -Method Get -Uri $url -Headers $headers -ErrorAction Stop

– some HTML style for readable emails
– compare today’s and yesterday’s controlscores – alert when there are new / deprecated recommendations
– compare today’s scores with yesterday’s scores – alert when changes are detected

Here it is:

$tenantId = '<your tenant ID>'
$appID = '<application ID with SecurityEvents.Read.All admin consented permission>'
$certThumbprint = '<thumbprint of certificate used to connect>'
$resourceAppIdUri = 'https://graph.microsoft.com'

#region Auth
$cert = gci Cert:\LocalMachine\my\$certThumbprint
$cert64Hash = [System.Convert]::ToBase64String($cert.GetCertHash())
function Get-Token {
    #https://msendpointmgr.com/2023/03/11/certificate-based-authentication-aad/
    #create JWT timestamp for expiration 
    $startDate = (Get-Date "1970-01-01T00:00:00Z" ).ToUniversalTime()  
    $jwtExpireTimeSpan = (New-TimeSpan -Start $startDate -End (Get-Date).ToUniversalTime().AddMinutes(2)).TotalSeconds  
    $jwtExpiration = [math]::Round($jwtExpireTimeSpan, 0)  
  
    #create JWT validity start timestamp  
    $notBeforeExpireTimeSpan = (New-TimeSpan -Start $StartDate -End ((Get-Date).ToUniversalTime())).TotalSeconds  
    $notBefore = [math]::Round($notBeforeExpireTimeSpan, 0)  
  
    #create JWT header  
    $jwtHeader = @{  
        alg = "RS256"  
        typ = "JWT"  
        x5t = $cert64Hash -replace '\+', '-' -replace '/', '_' -replace '='  
    }
    #create JWT payload  
    $jwtPayLoad = @{  
        aud = "https://login.microsoftonline.com/$TenantId/oauth2/token"  
        exp = $jwtExpiration   
        iss = $appID  
        jti = [guid]::NewGuid()   
        nbf = $notBefore  
        sub = $appID  
    }  
  
    #convert header and payload to base64  
    $jwtHeaderToByte = [System.Text.Encoding]::UTF8.GetBytes(($jwtHeader | ConvertTo-Json))  
    $encodedHeader = [System.Convert]::ToBase64String($jwtHeaderToByte)  
  
    $jwtPayLoadToByte = [System.Text.Encoding]::UTF8.GetBytes(($jwtPayLoad | ConvertTo-Json))  
    $encodedPayload = [System.Convert]::ToBase64String($jwtPayLoadToByte)  
  
    #join header and Payload with "." to create a valid (unsigned) JWT  
    $jwt = $encodedHeader + "." + $encodedPayload  
  
    #get the private key object of your certificate  
    $privateKey = ([System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAprivateKey($cert))  
  
    #define RSA signature and hashing algorithm  
    $rsaPadding = [Security.Cryptography.RSASignaturePadding]::Pkcs1  
    $hashAlgorithm = [Security.Cryptography.HashAlgorithmName]::SHA256  
  
    #create a signature of the JWT  
    $signature = [Convert]::ToBase64String(  
        $privateKey.SignData([System.Text.Encoding]::UTF8.GetBytes($jwt), $hashAlgorithm, $rsaPadding)  
    ) -replace '\+', '-' -replace '/', '_' -replace '='  
  
    #join the signature to the JWT with "."  
    $jwt = $jwt + "." + $signature  
  
    #create a hash with body parameters  
    $body = @{  
        client_id             = $appID
        resource              = $resourceAppIdUri
        client_assertion      = $jwt  
        client_assertion_type = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"  
        scope                 = $scope  
        grant_type            = "client_credentials"  
  
    } 
    $url = "https://login.microsoft.com/$TenantId/oauth2/token"  
  
    #use the self-generated JWT as Authorization  
    $header = @{  
        Authorization = "Bearer $jwt"  
    }  
  
    #splat the parameters for Invoke-Restmethod for cleaner code  
    $postSplat = @{  
        ContentType = 'application/x-www-form-urlencoded'  
        Method      = 'POST'  
        Body        = $body  
        Uri         = $url  
        Headers     = $header  
    }  
  
    $request = Invoke-RestMethod @postSplat  

    #view access_token  
    $request
}
$accessToken = (Get-Token).access_token

 $headers = @{ 
    'Content-Type' = 'application/json'
    'Accept' = 'application/json'
    'Authorization' = "Bearer $accessToken" 
    }
#region end

$url = 'https://graph.microsoft.com/beta/security/securescores?$top=2'
$webResponse = Invoke-RestMethod -Method Get -Uri $url -Headers $headers -ErrorAction Stop

#HTML Style for table reports
$Style = @'
<style>
table{
border-collapse: collapse;
border-width: 2px;
border-style: solid;
border-color: grey;
color: black;
margin-bottom: 10px;
text-align: left;
}
th {
    background-color: #0000ff;
    color: white;
    border: 1px solid black;
    margin: 10px;
}
td {
    border: 1px solid black;
    margin: 10px;
}
</style>
'@


$controlScoreChanges = Compare-Object ($webResponse.value[0].controlScores.controlname) -DifferenceObject ($webResponse.value[1].controlScores.controlname) 
$report_controlScoreChanges = if ($controlScoreChanges){
    foreach ($control in $controlScoreChanges){
        [pscustomobject]@{
        State = switch ($control.sideindicator){"<=" {"New"} "=>" {"Removed"}}
        Category = $webresponse.value[0].controlScores.where({$_.controlname -eq ($control.inputobject)}).controlCategory
        Name = $control.inputobject
        Description = $webresponse.value[0].controlScores.where({$_.controlname -eq ($control.inputobject)}).description
        }
    }
    
}

if ($report_controlScoreChanges){
    [string]$body = $report_controlScoreChanges | ConvertTo-Html -Head $Style
    Send-MailMessage -To "<address>" -From "<address>" -Subject "Secure Score control changes detected" -Body $body -SmtpServer "<SMTP server address>" -Port 25 -BodyAsHtml

}

$ErrorActionPreference= 'silentlycontinue'
$report_scoreChanges = foreach ($controlscore in $webResponse.value[0].controlscores){
  if ( Compare-Object $controlscore.score -DifferenceObject ($webResponse.value[1].controlScores.where({$_.controlname -eq ($controlscore.controlname)}).score)){
        [pscustomobject]@{
            date = $controlscore.lastSynced
            controlCategory = $controlscore.controlCategory
            controlName = $controlscore.controlName
            scoreChange = ($controlscore.score) - (($webResponse.value[1].controlScores.where({$_.controlname -eq ($controlscore.controlname)})).score)
            description = $controlscore.description
            }
        }
    }

if ($report_ScoreChanges){
    [string]$body = $report_ScoreChanges | ConvertTo-Html -Head $Style
    Send-MailMessage -To "<address>" -From "<address>" -Subject "Secure Score changes detected" -Body $body -SmtpServer "<SMTP server address>" -Port 25 -BodyAsHtml

}

Some example results:

New recommendations (Defender for Identity fresh install -> new MDI recommendations)
Score changes by recommendation

Fun fact:
The Defender portal section where these score changes are displayed actually uses a “scoreImpactChangeLogs” node for these changes, but unfortunately I didn’t find a way to query this secureScoresV2 endpoint:

https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2?$top=400

I hope this means that this information will be available via Graph, so that no calculations will be needed to detect score changes.

Reporting on Entra Application Proxy published applications – Graph PowerShell

I thought it would be a quick Google search to find a PowerShell script that gives a report on applications published via Entra Application Proxy, but I only found scripts (link1, link2, link3) using the AzureAD PowerShell module – so I decided to write a new version using Graph PowerShell.

The script:

#Requires Microsoft.Graph.Beta.Applications
Connect-MgGraph

$AppProxyConnectorGroups = Get-MgBetaOnPremisePublishingProfileConnectorGroup -OnPremisesPublishingProfileId applicationproxy

$AppProxyPublishedApps = foreach ($connector in $AppProxyConnectorGroups){
    Get-MgBetaOnPremisePublishingProfileConnectorGroupApplication -ConnectorGroupId $connector.id -OnPremisesPublishingProfileId applicationproxy | % {
        $onpremisesPublishingInfo = (Get-MgBetaApplication -ApplicationId $_.id -Property onpremisespublishing).onpremisespublishing
        [pscustomobject]@{
            DisplayName = $_.DisplayName
            Id = $_.id
            AppId = $_.appid
            ExternalURL = $onpremisesPublishingInfo.ExternalURL
            InternalURL = $onpremisesPublishingInfo.InternalURL
            ConnectorGroupName = $connector.name
            ConnectorGroupId = $connector.id
        }
    }
}

$AppProxyPublishedApps

Some story

The Entra portal is still using the https://main.iam.ad.ext.azure.com/api/ApplicationProxy/ConnectorGroups endpoint to display the connector groups:

So the next step was to figure out if there are Graph API equivalents. A Google search for graph connectorgroups site:microsoft.com led me to this page: https://learn.microsoft.com/en-us/graph/api/connectorgroup-list?view=graph-rest-beta&preserve-view=true&tabs=http
From this point it was “easy” to follow the logic of the previously linked scripts and “translate” the AzureAD PowerShell commands to Graph PS.

Note: as per the documentation, the Directory.ReadWrite.All permission is required, and only delegated permissions work.
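
So the connection needs to run in a delegated context, requesting the scope explicitly, for example:

#delegated sign-in requesting the required scope
Connect-MgGraph -Scopes "Directory.ReadWrite.All"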

As an alternative, here is the original script that did not use these commands from Microsoft.Graph.Beta.Applications:

Connect-MgGraph

$AppProxyConnectorGroups = Invoke-MgGraphRequest -Uri 'https://graph.microsoft.com/beta/onPremisesPublishingProfiles/applicationproxy/connectorgroups' -Method GET

$AppProxyPublishedApps = foreach ($connector in $AppProxyConnectorGroups.value){
  $publishedApps =  Invoke-MgGraphRequest -Uri "https://graph.microsoft.com/beta/onPremisesPublishingProfiles/applicationproxy/connectorgroups/$($connector.id)/applications" -Method GET
  foreach ($app in $publishedApps.value){
  [PSCustomObject]@{
    DisplayName = $app.DisplayName
    id = $app.id
    appId = $app.appId
    ConnectorGroupName = $connector.name
    ConnectorGroupID = $connector.id
  }
 }
}

$AppProxyReport = foreach ($publishedApp in $AppProxyPublishedApps){
    $onpremisesPublishingInfo = Invoke-MgGraphRequest -Uri "https://graph.microsoft.com/beta/applications/$($publishedApp.id)?`$select=onpremisespublishing" -Method GET
    [PSCustomObject]@{
        DisplayName = $publishedApp.DisplayName
        id = $publishedApp.id
        appid = $publishedApp.appId
        ConnectorGroupName = $publishedApp.ConnectorGroupName
        ConnectorGroupID = $publishedApp.ConnectorGroupID
        ExternalURL = $onpremisesPublishingInfo.onPremisesPublishing.externalUrl
        InternalURL = $onpremisesPublishingInfo.onPremisesPublishing.internalUrl
        externalAuthenticationType = $onpremisesPublishingInfo.onPremisesPublishing.externalAuthenticationType
    }
}

Playing with Microsoft Passport Key Storage Provider – protect user VPN certificates with Windows Hello for Business?

I’m really into this Windows Hello for Business topic… Recently, I was going through the “RDP with WHfB” guide on MS Learn (link), which gave me an idea: can this method be used to protect user VPN certificates? The short answer is: yes, but no 🙂

TL;DR
– Depending on your current infrastructure, several options are available to protect VPN with MFA: Azure MFA NPS extension, SAML-auth VPN with Conditional Access, Entra ‘mini-CA’ Conditional Access
– Hello for Business can be used to protect access to certificates, so why not use it to protect VPN certs?

Protecting VPN with MFA using Microsoft tools

NPS Extension
The most popular option I know to protect VPN with MFA is the Azure MFA NPS extension (link). The logic is very simple: the RADIUS request coming to the NPS server is authenticated against Active Directory, then the NPS extension performs a secondary authentication (Azure MFA).

SAML-based authentication with Conditional Access
This depends on the vendor of the VPN appliance, but the mechanism is that an Enterprise application is created in Entra, and a Conditional Access policy can be applied to it.

Conditional Access VPN
There is another option, called “Conditional Access VPN connectivity” in Entra – and by the way, it seems to me that Microsoft is hiding this option (I guess because it uses Azure Active Directory Graph, which is deprecated). I found a photo of how it looked in the old days (picture taken from here):

In the Entra portal this option is not visible (at least for me):

But when using the search bar, the menu can be found:

Some documentation links about this feature:

  • Conditional Access Framework and Device Compliance for VPN (link)
  • Conditional access for VPN connectivity using Microsoft Entra ID (link)
  • VPN and conditional access (link)

The mechanism in short: Entra creates a ‘mini-CA’ which issues short-lived certificates to clients; when a Windows VPN client is configured to use the DeviceCompliance flow, the client attempts to get a certificate from Entra before connecting to the VPN endpoint (from an admin standpoint, a ‘VPN Server’ application is created in Entra and Conditional Access policies can be applied to this application – I’m not going into details about this one, mainly because I encountered a lot of inconsistencies in the user experience when testing this solution 🙃) – and when everything is OK, the user gets a short-lived certificate which can be used for authentication (eg. EAP-TLS)
Some screenshots about this:

Conditional Access policy evaluation result

Certificate valid for ~1 hour

VPN Certificate created with Microsoft Passport KSP
Disclaimer: this is not an official, Microsoft-supported way to use VPN certificates for authentication; I tested it only for entertainment purposes.

This was the initial trigger of this post – based on the “Remote Desktop sign-in with Windows Hello for Business” tutorial, create VPN certificates using the Microsoft Passport KSP (link). The process is straightforward:
– create the VPN certificate template (or duplicate the one you already have)
– export the template to a txt file
– modify the pKIDefaultCSPs setting to Microsoft Passport Key Storage Provider
– update the template with the new setting (a scripted sketch of the CSP change follows below)
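
The pKIDefaultCSPs change can also be scripted (a sketch – the template DN is hypothetical, and the attribute value format is “<index>,<provider name>”):

#requires the ActiveDirectory module
#point the template's CSP list to the Passport KSP (template DN is an example - adjust to your forest)
$templateDN = "CN=F12VPNUser,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=f12,DC=hu"
Set-ADObject -Identity $templateDN -Replace @{pKIDefaultCSPs = "1,Microsoft Passport Key Storage Provider"}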

User experience: well, if the user is WHfB-enrolled and logs in with WHfB, then nothing changes (the certificate is used “silently” upon connecting) – but when using a password to log in to Windows, the VPN connection prompts for Hello credentials:

So if Hello for Business can be considered a multi-factor authentication method, then this solution fits as well 🙂

Convenience PIN policy enables Windows Hello for Business enrollment in Windows Security

Windows Hello for Business and Windows Hello may sound like siblings, but they are actually two different families in the authentication world (link)*. Hello basically uses password caching, while Hello for Business uses asymmetric authentication (key- or certificate-based) – that’s why Windows Hello for Business (WHfB) has some infrastructure prerequisites in an on-premises or hybrid environment. Not every environment is prepared for WHfB, hence some organizations may have opted to enable convenience PIN for their users to make sign-in… well… more convenient.
Why does it matter?
Because users may encounter errors during WHfB enrollment, WHfB has an impact on Active Directory infrastructure, WHfB is a strong authentication method (~considered as MFA in Conditional Access policy evaluation), and so on.

*the common thing between Hello and WHfB is the Credential Provider: users see the PIN/biometric authentication option on their logon screen

TL;DR
– The ‘Turn on convenience PIN sign-in’ policy enables Hello PIN in Account settings, but invokes Hello for Business enrollment when it is set up in the Windows Security app
– Hello for Business implementation is very simple (and preferred over Hello) with Cloud Kerberos Trust, but migrating users from Hello has some pitfalls
– Hello usage can be detected in the following registry hive:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\
Authentication\LogonUI\NgcPin\Credentials\<userSID>

Behavior
Let’s assume that WHfB is not configured in your environment, even the Intune default policy for WHfB is set to “Not configured” like this:

On a client device, the eligibility for WHfB can be checked using dsregcmd /status under “Ngc Prerequisite Check” (link). On a domain joined/hybrid joined device, the PreReqResult will have the WillNotProvision value until WHfB is explicitly enabled.
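
From PowerShell, the relevant line can be grabbed quickly (dsregcmd has no object output, so this is just text parsing):

#grab the Ngc prerequisite check result from the dsregcmd output
dsregcmd /status | Select-String "PreReqResult"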

When you open Settings – Accounts – Sign-in options, you will see that PIN (Windows Hello) is greyed out, and the Windows Security app does not display options to set up Hello either:

Now let’s enable convenience PIN sign-in group policy: Computer Configuration – Administrative Templates – System – Logon – Turn on convenience PIN sign-in

The Windows Security tray icon almost immediately shows a warning status:

The Hello enrollment is now active in the Settings – Accounts – Sign-in options menu, and we also have the option to set up Hello in Windows Security:

And here lies the discrepancy in the enrollment behavior: the Settings menu (left) sets up Hello, while the Windows Security app (right) invokes the WHfB enrollment process:

Windows Hello setup using Settings menu
Windows Security invoking Hello for Business enrollment

Migrating from Hello to Hello for Business
At this point, we may decide to prevent Hello for Business – but I suggest going the other direction and migrating Hello users to Hello for Business. Since we have Cloud Kerberos Trust, we don’t need a PKI either, only (at least one) Windows Server 2016 or newer Domain Controller (and hybrid-joined devices with hybrid identities and MFA registration, of course) [link]… so the deployment is very easy… but migration can be a bit tricky.

First, when a Hello for Business policy is applied to a computer, the credential provider (~the login screen asking for a PIN) is disabled for the user until WHfB enrollment. This means that the user will be asked for a password instead of a PIN – which may result in failed logon attempts, because users will probably enter their PIN “as usual”.
Another issue you may encounter is related to the previous PIN and the applied PIN policy. Based on my experience, the WHfB enrollment process prompts for the current PIN and tries to set it as the new PIN (from a user experience standpoint, this was a clever decision from Microsoft), but if the new policy requires a more complex PIN, the process may encounter an error (0x801c0026, not documented here)

Convenience PIN migration to Hello for Business PIN error

This error is handled by the logon screen:

Detecting Hello usage
As problems may occur with the Hello to WHfB migration, it’s a good idea to have an inventory of Hello users. On every device, each Hello registration is stored under the following registry hive: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\<userSID>

It’s up to your creativity how you collect this information and translate the SIDs into some human-readable format 🙂
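
As a starting point, something like this could do the collection on a single device (a sketch – the SID translation will fail for deleted accounts):

#list Hello PIN registrations on this device and translate the SIDs to account names
$ngcPath = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials'
Get-ChildItem $ngcPath -ErrorAction SilentlyContinue | ForEach-Object {
    $sid = $_.PSChildName
    try { $account = ([System.Security.Principal.SecurityIdentifier]$sid).Translate([System.Security.Principal.NTAccount]).Value }
    catch { $account = "<unresolved>" }
    [pscustomobject]@{ ComputerName = $env:COMPUTERNAME; SID = $sid; Account = $account }
}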

[Suggested article: Query Windows Hello for Business registrations and usage]

Hunting for report-only (Microsoft-managed) Conditional Access impacts

Microsoft is rolling out the managed Conditional Access policies (link) gradually, and I wanted to know how they are going to impact the users (which users, to be exact). Apparently, if the sign-in logs are not streamed to a Log Analytics workspace, the options are limited – but if you have the AADSignInEventsBeta table under Advanced hunting on the Microsoft Defender portal, some extra info can be gathered.

Streaming Entra logs to Log Analytics gives wonderful insights (not only for Conditional Access), so it is recommended to set up the diagnostic settings. If that is not an option, but the AADSignInEventsBeta table is available (typically in organizations with E5 licences), then the following query will show those sign-ins that would have been impacted by a report-only Conditional Access policy:

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| mv-apply todynamic(ConditionalAccessPolicies) on (
where ConditionalAccessPolicies.result == "reportOnlyInterrupted" or ConditionalAccessPolicies.result == "reportOnlyFailure"
| where ConditionalAccessPolicies.displayName has "Microsoft-managed:" //filter for managed Conditional Access policies
| extend CADisplayName = tostring(ConditionalAccessPolicies.displayName)
| extend CAResult = tostring(ConditionalAccessPolicies.result))
| distinct Timestamp,RequestId,Application,ResourceDisplayName, AccountUpn, CADisplayName, CAResult

Note: in the AADSignInEventsBeta table, the ConditionalAccessPolicies column is a JSON value stored as a string, so the todynamic function is needed.

Note2: since every Conditional Access policy is evaluated for each logon, the query first filters for those sign-ins where the report-only result is ‘Interrupted’ or ‘Failure’, then the policy displayName is used to narrow down the results. Starting the filter with the displayName would be pointless.

Some example summarizations if you need to see the big picture (same query as above but the last line can be replaced with these ones):
View impacted users count by application:
| summarize AffectedUsersCount=dcount(AccountUpn) by Application, CADisplayName, CAResult
Same summarization in one day buckets:
| summarize AffectedUsers = dcount(AccountUpn) by bin(Timestamp,1d), CADisplayName, CAResult
List countries by result:
| summarize make_set(Country) by  CADisplayName, CAResult

Another useful feature is the Monitoring (Preview) menu in Conditional Access – Overview:

Here we have a filter option called ‘Policy evaluated’ where report-only policies are grouped under the ‘Select individual report-only policies’ section. This gives an overview but unfortunately does not list the affected users.

When a Microsoft-managed policy is opened, this chart is presented under the policy info as well.

Entra Workload Identities – Trusted Certificate Authorities (public preview)

In the November 2023 – What’s New in Microsoft Entra Identity & Security w/ Microsoft Security CxE identity episode, a public preview feature of the Entra Workload ID Premium license was presented (link), which was actually announced on November 9th (link). I really love the idea of restricting application key credentials to a predefined list of Certificate Authorities, which is why I thought to write some words about it.

TL;DR
– You can generate a report on current keyCredentials usage (with certificate Issuer data) using the PowerShell script below (Graph PowerShell used here) [no extra license needed]
– First, you create a certificateBasedApplicationConfigurations object
– Then you can modify the defaultAppManagementPolicy or create an appManagementPolicy and apply it directly to one or more application objects (for the latter, tutorial below)
– These configurations require Entra Workload ID premium license

Reporting on application key credentials

The linked announcements highlight how to set the defaultAppManagementPolicy, but before setting this, you may want to know which applications use certificates to authenticate and which CA issued those certs. This way, you can first change the certificates to ones you trust, then set up the restriction(s). The following script lists these applications and the Issuer of each certificate (for the sake of simplicity, I use the Invoke-MgGraphRequest command).

#https://learn.microsoft.com/en-us/graph/api/resources/keycredential?view=graph-rest-1.0
Connect-MgGraph

##region keyauthapps
$applications_url= 'https://graph.microsoft.com/beta/applications?$top=100'
$obj_applications = $null
while ($applications_url -ne $null){
    $response = (Invoke-MgGraphRequest -Method GET -Uri $applications_url)
    $obj_applications += $response.value
    $applications_url = $response.'@odata.nextLink'
    }

#filter apps using keycredentials
$keyauthApps = $obj_applications | ? {$_.keycredentials -ne $null}
#read keycredentialsinfo
$KeyAuthApps_creds =foreach ($app in $keyauthApps){
    Invoke-MgGraphRequest -Method GET -Uri https://graph.microsoft.com/beta/applications/$($app.id)?select=keycredentials
}
##region end

##region build report - apps
$report_Apps = foreach ($cred in $KeyAuthApps_creds.keycredentials){
    $tmp_appReference = $null
    $tmp_appReference = $keyauthApps.Where({$_.keycredentials.keyId -eq $cred.keyId})
    [pscustomobject]@{
    KeyIdentifier = $cred.customKeyIdentifier
    KeyDisplayName = $cred.displayname
    KeyStartDateTime = $cred.startDateTime
    KeyEndDateTime = $cred.endDateTime
    KeyUsage = $cred.usage
    KeyType = $cred.type
    Issuer = ([system.security.cryptography.x509certificates.x509certificate2]([convert]::FromBase64String($cred.key))).Issuer
    EntityID = $tmp_appReference.id
    EntityAppId = $tmp_appReference.appid
    EntityType = "application"
    EntityDisplayName = $tmp_appReference.displayname
    }
    }
##region end

$report_Apps | Out-GridView

The result will look like this (yes, I use self-signed certificates in my demo environment 🙈):

Example result from reporting script

Note: the Issuer field may not be 100% reliable, as it can be set manually when creating a self-signed certificate. The following method will show each certificate in the trust chain (the $cred variable comes from the foreach loop above):

$tmp_cert = ([system.security.cryptography.x509certificates.x509certificate2]([convert]::FromBase64String($cred.key)))
$certChain = [System.Security.Cryptography.X509Certificates.X509Chain]::new()
$certChain.Build($tmp_cert)
$certChain.ChainElements.certificate
Example chain of a free Let’s Encrypt certificate

Building the Trusted Certificate Authority policy

To restrict application keyCredentials, the following should be kept in mind (announcement link again):
– The policy applies only to new credentials, it won’t disable current keys
– At least one root CA needs to be declared and a chain can consist of a max of 10 objects
First, you create a certificateBasedApplicationConfigurations object (~the trusted cert chain)
Next, you can modify the defaultAppManagementPolicy to restrict all keyCredentials to this/these trusted CAs (as demonstrated on the linked page)
OR you can create a separate appManagementPolicy to restrict the trusted CA THEN this policy can be applied directly to one or more applications (steps below)

Creating the certificateBasedApplicationConfigurations object

In this example, I’m going to use Graph Explorer to create the object. As a Windows user, I will simply export my issuing CA’s (F12SUBCA01) certificate and its root CA’s (ROOTCA01) certificate to Base-64 encoded CER files using the certlm.msc MMC snap-in, open them in Notepad and copy the contents into the Graph Explorer call’s request body.
Find the issuing CA’s cert, then right-click – All Tasks – Export:

Select “Base-64 encoded X.509 (.CER)” as export file format.

Repeat the same steps for each certificate in the chain.
Now, open the CER files with Notepad, remove the '-----BEGIN CERTIFICATE-----' and '-----END CERTIFICATE-----' lines and all line breaks.

Or you can use PowerShell:

$cert = get-childitem Cert:\LocalMachine\ca\ | ? {$_.Subject -match "F12SUBCA01"}
[convert]::ToBase64String($cert.RawData)

These values will be used in the payload sent to Microsoft Graph.
CAUTION! Use the beta endpoint for now, as this is a preview feature. If you accidentally use the v1.0 endpoint, you will encounter issues (example below).

METHOD: POST
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/certificateAuthorities/certificateBasedApplicationConfigurations
REQUEST BODY:
{
  "displayName": "F12 Cert Chain",
  "description": "Allowed App certificates issued by F12SUBCA ",
  "trustedCertificateAuthorities": [{
    "isRootAuthority": true,
    "certificate": "<rootCA base64 certificate data>"
  },
  {
    "isRootAuthority": false,
    "certificate": "<subCA base64 certificate data>"
  }]
}
Creating the trustedCA configuration object

If everything was inserted correctly, the response includes an id – take a note of it. If you did not manage to, no problem, you can query these configurations as follows:

METHOD: GET
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations
List configuration objects

Note: when you dig further, the CA information can be queried for each configuration id, for example (you can omit the ‘?$select=isRootAuthority,issuer’ part if you want to check the certificate data too):

METHOD: GET
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations/<configurationID>/trustedCertificateAuthorities?$select=isRootAuthority,issuer

Creating the appManagementPolicy object

Now that we have the CA configuration, the next step is to create the appManagementPolicy object (if you are not going to apply it in the defaultAppManagementPolicy). The appManagementPolicy can contain restrictions for passwordCredentials and KeyCredentials. In this example, I’m going to create a policy that prohibits passwordCredentials and restricts key credentials to the trusted CA configuration defined above.

METHOD: POST
ENDPOINT: BETA
URI: https://graph.microsoft.com/beta/policies/appManagementPolicies
REQUEST BODY:
{
  "displayName": "F12 AppManagementPolicy – F12SUBCA allowed only",
  "description": "This policy restricts application credentials to certificates issued by F12SUBCA and disables password addition",
  "isEnabled": true,
  "restrictions": {
    "passwordCredentials": [
      {
        "restrictionType": "passwordAddition",
        "maxLifetime": null
      }
    ],
    "keyCredentials": [
      {
        "restrictionType": "trustedCertificateAuthority",
        "certificateBasedApplicationConfigurationIds": [
          "0d60f78e-9916-4db2-9cee-5c8e470a19e9"
        ]
      }
    ]
  }
}

Creating the appManagementPolicy object

Take a note of the id given in the response as it will be used in the final step.

NOTE: if you accidentally use the v1.0 endpoint, you will encounter issues like this:

"Expected property 'certificateBasedApplicationConfigurationIds' is not present on resource of type 'KeyCredentialConfiguration'"

Applying the policy to an application

Finally, the policy needs to be applied to an application, as follows:

METHOD: POST
ENDPOINT: BETA
URI: https://graph.microsoft.com/beta/applications/<objectID of application>/appManagementPolicies/$ref
REQUEST BODY:
{
    "@odata.id": "https://graph.microsoft.com/beta/policies/appmanagementpolicies/<appManagementPolicyID>"
}
Applying the policy to an application

The result for this application:

Uploading a certificate not issued by the trusted CA fails
Adding new client secret option is greyed out

Closing words: it is a bit cumbersome to configure these settings, but the result is purely satisfying 😊 I hope that once it goes GA, it will get some graphical interface to ease the process.

Entra Workload Identities passwordLifetime policy vs. Entra ID Application Proxy – Application operation failed

Back in the day, I wrote about the Entra Workload Identities Premium licence and its very appealing capabilities (link). One of my favorites was the defaultAppManagementPolicy, which can (also) restrict the lifetime of (new) password credentials created for an application. Well, it looks like I was too restrictive, which led to the error message in the title.

TL;DR

  • When you publish an application via Entra ID Application Proxy, the application is generated with a password credential valid for 1 year (actually 365 days + 4 minutes)
  • if you have a Workload Identities Premium licence* and have set the default password credential lifetime to 12 months or less, Entra ID will not be able to create the Application Proxy application, resulting in this very informative error message upon application creation: ‘Application operation failed’
  • Conclusion: when using the passwordLifetime restriction in the defaultAppManagementPolicy and you intend to use App Proxy, make sure to set this lifetime to at least 366 days

Explained

When publishing a new application via Entra ID Application Proxy, I encountered this very detailed error message: ‘Application operation failed’

Error message during AppProxy application creation

I went through some previously published applications to get an idea of what might be wrong… And on the ‘Certificates & secrets’ page I had a flashback about configuring the password credentials policy – from then on, I was on the right track, with a small surprise.

When an application is published with Application Proxy, an app registration is created with a password credential.

There is nothing you can do about it (as far as I know), you just live with it – it is handled automatically by Microsoft, I guess.

When you create a passwordLifetime policy specifying 12 months of lifetime, it is automatically translated to 365 days in the policy. In the next screenshot you can see my previous PATCH payload for the defaultAppManagementPolicy, followed by a GET to countercheck the settings:

passwordLifetime set to P12M which is translated to P365D

Remark: 12 months is not necessarily 365 days (leap years!). This may cause issues in automations too, when attempting to create a password valid for 1 year/12 months, which is 366 days in this case.

The point is that even if you set this lifetime to P12M (12 months) or P365D (365 days), it will prevent Application Proxy from adding the password credential, because the expiration for this password is set to T+365 days+4 minutes:

PasswordCredential endDateTime and startDateTime for an app published with Application Proxy

To get over this issue, modify the defaultAppManagementPolicy to allow 366 days of lifetime for a password credential:

Modifying the maxLifetime to 366 days
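
For reference, the PATCH call could look something like this (a sketch of my payload – the shape follows the appManagementPolicy documentation, double-check it against the current beta schema):

#allow 366 days for new password credentials in the default policy
$body = @{
    applicationRestrictions = @{
        passwordCredentials = @(
            @{ restrictionType = "passwordLifetime"; maxLifetime = "P366D" }
        )
    }
} | ConvertTo-Json -Depth 5
Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/beta/policies/defaultAppManagementPolicy" -Body $body -ContentType "application/json"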

Now the application is successfully published:

*I used a trial licence back in those days to set up the policies… when the trial licence expires, these policies remain effective, but you will not be able to modify these settings – so you have to buy a licence to roll back the changes. Be cautious when playing with settings tied to a trial licence 🙃