Exchange 2019 failing to install CU14 with installed certificate

Yesterday I wanted to update Exchange 2019 to the current CU14.
The update was almost finished when this error popped up, halted the update, and left me with a broken Exchange installation:

The following error was generated when "$error.Clear(); 
          Install-ExchangeCertificate -services "IIS, POP, IMAP" -DomainController $RoleDomainController
          if ($RoleIsDatacenter -ne $true -And $RoleIsPartnerHosted -ne $true)
          Install-AuthCertificate -DomainController $RoleDomainController
        " was run: "Microsoft.Exchange.Management.SystemConfigurationTasks.AddAccessRuleUnauthorizedAccessException: Insufficient rights to grant Network Service access to the certificate with thumbprint E3F4913A68949761277C6D20B95A35C6791F2964. ---> System.UnauthorizedAccessException: Certificate ---> System.Security.Cryptography.CryptographicException: Keyset does not exist

It is a wildcard cert requested from Let's Encrypt via CertifyTW and installed automatically.

I then inspected the certificate in certlm.msc, and when clicking on “Manage private keys” I was presented with “No private keys found”. That simply could not be true, since the certificate had been working fine on that installation. First I got the Exchange setup routine past the error by switching to a self-signed cert in IIS; the update then succeeded.
Reviewing the last four CertifyTW certificates I still had in the store, all of them showed “No private keys”. So this must be some sort of NTFS/ACL-related thing, because services running as “Local System” could use the keys for the webserver and Exchange. Only my domain admin could not read the private keys…

For CertifyTW to work correctly with the private keys and the domain admin, all that had to be done was changing the service user of CertifyTW to the domain admin I was working with. After requesting a new certificate, Exchange was using the new cert and I could still view the private keys. (I am pretty sure CU14 would not even have failed in that state.)

This is a Server 2022 Standard with no changes to the NTFS-structure whatsoever.
The problem is gone; I just wanted to share my experience from last evening.

We strongly suggest that you do not change the service account for CTW. It’s Local System by default and we do not test with other account types. DPAPI is in use for some things, and changing the account will cause decryption to fail.

When the service stores a certificate normally, it does so as Local System, not as administrator. This usually works OK with Exchange, so it could be something specific to the cumulative update or to the upgrade process. If you do need to grant a specific user permission on the (RSA) private key, we have a deployment task to achieve that, or you can use a PowerShell script to update the ACL on the private key.

I’d suggest making sure you are using an RSA key (Settings > Default key type), as I don’t know whether ECDSA keys are compatible with Exchange.

RSA keys stored by Local System are kept under %ProgramData%\Microsoft\Crypto\RSA\MachineKeys, as per Key Storage and Retrieval - Win32 apps | Microsoft Learn.
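As an illustrative sketch (not our built-in deployment task; the thumbprint and account name are placeholders, and it assumes a CSP-stored RSA key rather than a CNG key), updating the ACL on a machine certificate's private key file could look something like this:

```powershell
# Sketch: grant read access on an RSA private key file under MachineKeys.
# $thumb and $account are placeholder values for illustration.
$thumb   = 'E3F4913A68949761277C6D20B95A35C6791F2964'
$account = 'NT AUTHORITY\NETWORK SERVICE'

$cert = Get-Item "Cert:\LocalMachine\My\$thumb"

# For CSP-stored RSA keys the key file name equals the unique container name.
# (CNG keys live elsewhere, and $cert.PrivateKey may be $null for them.)
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyFile = Join-Path "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys" $keyName

$acl  = Get-Acl $keyFile
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule($account, 'Read', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $keyFile -AclObject $acl
```

Run it from an elevated prompt; the file under MachineKeys is only readable with sufficient rights in the first place.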

Hm, it seems as though the update procedure of Exchange relies on having full admin rights to the private key, so that its script can add another ACL entry.
I need to investigate how to achieve that with a PS script.
Thank you!

It may be that this is specific to the “Extended Protection” feature for CU14, but we don’t have any in-house Exchange administrators with expertise in that area:

Where our built-in deployment tasks don’t quite match your requirements there’s always the option of scripting renewal deployment yourself: Scripting | Certify The Web Docs

I think the changes to key permissions are probably something you’d need to do before applying an update, so I’d advise testing before applying cumulative updates in production. I know that’s easier said than done!

Thanks for replying.

Before I began using CTW, I had a PS script that pulled an LE cert from a Linux machine:

Start-Transcript -Path "C:\batch\ExchangeLetsEncrypt.log"

Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn

# Load the certificate so we can read its thumbprint
$certPrint = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$certPrint.Import("\\unc-path\cert.p12", 'password', [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::DefaultKeySet)

Import-ExchangeCertificate -Server "FQDN_server" -FileData ([Byte[]]$(Get-Content -Path "\\unc-path\cert.p12" -Encoding Byte)) -Password (ConvertTo-SecureString -String 'password' -AsPlainText -Force)
Enable-ExchangeCertificate -Thumbprint $certPrint.Thumbprint -Services IIS,SMTP -Force

# Bind the cert to the default site in IIS
Import-Module WebAdministration
$binding = Get-WebBinding -Name "Default Web Site" -Protocol "https"
$binding.AddSslCertificate($certPrint.GetCertHashString(), "my")

# Bind the cert to the Exchange Back End site in IIS
$binding = Get-WebBinding -Name "Exchange Back End" -Protocol "https"
$binding.AddSslCertificate($certPrint.GetCertHashString(), "my")


That was also running as “Local System”, as a scheduled task with highest privileges, and that never had missing ACLs. Do you have an idea what it does differently?

I think it has something to do with a case discussed here:

Thanks, the storage flags may make a difference here. We use X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable, so without MachineKeySet the above script may store keys under Local System’s user-specific RSA key location; I’m not sure how Windows would set permissions on that.
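For comparison, a minimal sketch of an import using those flags (file path and password are placeholders); with MachineKeySet the key is persisted in the machine-wide key store rather than the importing user's profile:

```powershell
# Sketch: import a PFX with the storage flags CTW uses (placeholder path/password).
$flagsType = [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]
$flags = $flagsType::MachineKeySet -bor $flagsType::PersistKeySet -bor $flagsType::Exportable

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(
    'C:\certs\cert.p12', 'password', $flags)

$store = New-Object System.Security.Cryptography.X509Certificates.X509Store('My', 'LocalMachine')
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$store.Add($cert)
$store.Close()
```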

I should add that whatever the resolution is, we don’t consider this a bug in Certify Certificate Manager, as it is Exchange’s update script making arbitrary assumptions about private key permissions. We’re happy to discuss remedies, but we wouldn’t be planning to change how we currently add certs to the store.

Hi Donald!
The exact same thing just happened to me, and I was stupid enough not to create a snapshot of the Exchange VM before starting the update. Now the server is broken. How did you manage to restore it to working condition?

Just remove the certificate in the Exchange Management Shell; maybe changing it in IIS is even enough. Then the update will succeed.
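Assuming EMS still opens, the removal step could be sketched like this (the thumbprint is a placeholder; check the output of Get-ExchangeCertificate for the one actually bound to IIS):

```powershell
# List certificates to find the offending thumbprint, then remove it.
Get-ExchangeCertificate | Format-Table Thumbprint, Services, Subject
Remove-ExchangeCertificate -Thumbprint 'E3F4913A68949761277C6D20B95A35C6791F2964' -Confirm:$false
```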

Yeah, well, for me the EMS is broken too…

I’ve successfully fixed and recovered the server! Just a quick note for those who may encounter a similar situation:
In my case, I believe the problem was not specific to CU14 of Exchange 2019, but was related to the cert I’m using, which is from Let’s Encrypt. Apparently, it’s not a new issue with Let’s Encrypt certs, as per this blog post.

What I did to fix the server was change all the bindings in IIS Manager to the self-signed default certificate (Microsoft Exchange), reboot the server, and run the CU again.

I’d be approximately 100% confident that it has nothing to do with Let’s Encrypt; that’s just correlation. The issue is read permission on the private key.

You’re right, it’s probably just the way ACME scripts work and set the ACL permissions, I guess.