Hi,
When I request a new certificate, I have a PowerShell script that will upload the certificate to a service. I then need to iterate through systems in that service to determine if their certificates need to be updated.
Is there a way in PowerShell to retrieve the old certificate serial number so that I can only replace the certificates on the devices whose serial matches the old cert?
Cheers
Hi,
Yes, there is a property on the ManagedItem result object called CertificatePreviousThumbprintHash, which is probably exactly what you need (CertificateThumbprintHash is the new, current one).
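For example, something along these lines in your deployment script (a minimal sketch):

# Certify passes the result object as the first parameter to the script
param($result)

$newThumb = $result.ManagedItem.CertificateThumbprintHash
$oldThumb = $result.ManagedItem.CertificatePreviousThumbprintHash

if ($oldThumb -and ($oldThumb -ne $newThumb)) {
    # only touch the devices whose current cert matches $oldThumb
    Write-Output "Certificate changed: $oldThumb -> $newThumb"
}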
Incidentally, the next version has some fancy Deployment Task features for remote deployment (via SSH etc.). Can you share what other kinds of services you are updating?
I work for a company called Pexip that produces an enterprise distributed videoconferencing system. Whilst the OS is Linux-based, it is highly customised and only uses libraries that are relevant to the core application. The OS development team have been unwilling to support an ACME client directly within the OS, as this would require additional dependencies that could increase the potential attack surface. However, the RESTful APIs are very complete and it is possible to manage the system certificates entirely through them.
Any given system could contain hundreds of nodes, which may share or use specific certificates. DNS FQDN entries in the SAN are important here, as RFC 5922 (Domain Certificates in the Session Initiation Protocol (SIP), section 7.2) states that wildcard certificates cannot be used. The intention of the script is to upload and replace certificates in order to fully automate the management of these tasks, which is something administrators seemingly always trip up on when attempting it by hand.
I have run Certify the Web for a number of years (registered) for various internal tasks, and I currently upload new certificates to some of our demo systems, but I stopped short of fully automating the task because I needed to iterate through the currently applied certificates to determine what could be updated and what shouldn't (which was more work, and I am lazy). But now seems like the right time.
Something else I would like to be able to do is dry-run the script with dummy data in $result, which might make building and testing the scripts a bit simpler (rather than having to request a new cert). I currently use VS Code as my IDE (although I am no developer).
I should also say that the serial number property of the certificate retrieved via this API is in the format:
serial_no : xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
Hmm, there is also a key ID that is slightly longer:
"key_id": "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
Hmm, I have a sneaking suspicion that the ThumbprintHash is neither the serial number nor the key ID. I don't actually see anything to do with a thumbprint in the data returned by the API, which is essentially output from OpenSSL.
OK, so to calculate the thumbprint, I can use OpenSSL:
openssl x509 -noout -fingerprint -sha1 -inform pem -in .\temp.crt
The Base64-encoded certificate is returned as part of the API request; it then needs to be saved as an ASCII file and read back in using the above OpenSSL command. Of course, this also assumes that OpenSSL is installed on the Windows machine, which is not always the case.
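In PowerShell that looks roughly like this (a sketch; $apiCert is whatever variable holds the Base64/PEM certificate text from the API response, and openssl.exe is assumed to be on the PATH):

# save the PEM certificate returned by the API to a temp file
Set-Content -Path .\temp.crt -Value $apiCert -Encoding Ascii

# calculate the SHA-1 fingerprint with OpenSSL
$fingerprint = & openssl x509 -noout -fingerprint -sha1 -inform pem -in .\temp.crt
# $fingerprint now looks like: SHA1 Fingerprint=XX:XX:...:XX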
I have just asked our devs if adding this value to the API result could be done, but it won't happen overnight.
I am guessing that it is this value that you are using in CertificatePreviousThumbprintHash and CertificateThumbprintHash.
Hi, yes I can investigate a bit more, but when deploying your certificate are you converting it into the component certificates/key using OpenSSL already, or do you use the PFX directly? It's possible the key ID being returned by your API is a different hash function over the same data; the difference in formatting of the hexadecimal pairs is just a convention. Our thumbprints are returned by the native .NET/Windows representation of the X509Certificate and we don't calculate them ourselves.
Regarding feeding the test script a dummy object, we do currently feed in the latest version of the ManagedItem object when you hit the Test button next to the script path. You can check it out with this script, which outputs the object to a text file. The managed certificate will need to have been saved at least once already:
# Logs results to the given path (modify as required)
param($result)
$logpath = "c:\temp\ps-test.txt"
$date = Get-Date
Add-Content $logpath ("-------------------------------------------------");
Add-Content $logpath ("Script Run Date: " + $date)
Add-Content $logpath ($result | ConvertTo-Json)
After a bit of playing I realised what the thumbprint hash returned in $result.ManagedItem.CertificatePreviousThumbprintHash actually is, and the format it is in. The OpenSSL command above is correct, so I simply needed to split the output string, remove the colons, and then run an equality comparison.
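In case it helps anyone else, the comparison ends up as something like this (variable names are mine):

# OpenSSL prints e.g. "SHA1 Fingerprint=AA:BB:CC:..."
$fingerprint = & openssl x509 -noout -fingerprint -sha1 -inform pem -in .\temp.crt

# take the part after '=' and strip the colons to match Certify's format
$deviceThumb = ($fingerprint -split '=')[1] -replace ':', ''

# string -eq is case-insensitive in PowerShell, so casing doesn't matter
if ($deviceThumb -eq $result.ManagedItem.CertificatePreviousThumbprintHash) {
    # this device holds the old certificate and needs the new one
}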
FWIW, the certificate for us needs to be in PEM format as the underlying web server is Apache, although I already had this bit working nicely (again, I use OpenSSL to convert the $result.ManagedItem.CertificatePath to a PEM). In fact, I wrote everything else yesterday and it all seems to be working well.
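The PEM conversion itself is just this (a sketch, assuming the PFX has an empty password, which you would adjust to suit):

# convert the PFX produced by Certify into a PEM (cert + key) for the upload
$pfx = $result.ManagedItem.CertificatePath
& openssl pkcs12 -in $pfx -out .\temp.pem -nodes -password pass: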
I have been meaning to create some kind of module that contains all the different functions used to integrate with our API, but this is on the to-do list (all of this is just a side project).
Thanks for the test script to write out the JSON object. This has helped, although I have hit another stumbling block (I think caused by a context issue).
I have been playing with the Alpha release of the Secrets Management Development Release (https://devblogs.microsoft.com/powershell/secrets-management-development-release/). One of the issues we have when writing scripts for others to use is trying not to encourage them to save credentials in clear text in the scripts. I do this myself (as I'm sure must many people), but I would like to get away from it if at all possible.
So I started playing with this module, which allows you to interact with a credential vault. Unfortunately, when I Add credentials to the vault manually and then try to Get them within the script, I get an exception saying the credential could not be found. Of course, running the test directly in VS Code, all is OK.
I am guessing that when the script is run by the Certify the Web process it runs in a different context, and so does not have access to my user-stored secrets. Yes, this is still only an alpha, but perhaps there are other ways…
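For reference, what I'm attempting looks roughly like this (a sketch; cmdlet names are from the development release and have changed between previews, e.g. Add-Secret later became Set-Secret):

Import-Module Microsoft.PowerShell.SecretsManagement

# stored once, interactively, as my own user account:
# Add-Secret -Name "PexipApiCreds" -Secret (Get-Credential)

# then in the renewal script:
$cred = Get-Secret -Name "PexipApiCreds"
# this throws "credential not found" when run by the Certify service,
# presumably because the default vault is per-user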
To clarify what I'm doing: I have a generic script that runs for each certificate renewal, which creates a PEM for each deployment. This is all fine.
I have now decided to read in a JSON file that outlines the different deployments (matched against the certificate name), provides the FQDNs of the main nodes where the cert should be uploaded (and all this is OK as well), and lastly holds a reference to the Credential Vault ID. This is where I fail.
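The manifest looks something like this (my own convention, all names illustrative, and assuming the managed certificate's name is exposed as $result.ManagedItem.Name):

# deployments.json:
# [
#   {
#     "CertificateName": "conf.example.com",
#     "Nodes": [ "mgr1.example.com", "mgr2.example.com" ],
#     "CredentialId": "PexipApiCreds"
#   }
# ]

$deployments = Get-Content .\deployments.json -Raw | ConvertFrom-Json

# pick the deployments matching the renewed certificate's name
$matching = $deployments | Where-Object { $_.CertificateName -eq $result.ManagedItem.Name }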
I could read in plain text creds to be used by the REST API for cert upload, but this is what I’m trying to avoid.
You're correct that when running as a service, Certify uses the Local System account, which doesn't have the same access to encrypted data. Generally, if this is using the Windows Data Protection APIs, you get the choice of user-level or machine-level encryption. For the Certify credentials manager (under Settings) we use machine-level, so that decryption is tied to the machine, not the user.
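For illustration, the scope choice with the .NET DPAPI wrapper looks like this (a sketch, not our actual implementation):

Add-Type -AssemblyName System.Security

$data = [System.Text.Encoding]::UTF8.GetBytes("my secret")

# user scope: only the same Windows user account can decrypt
$userScoped = [System.Security.Cryptography.ProtectedData]::Protect(
    $data, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)

# machine scope: any process on the same machine can decrypt
$machineScoped = [System.Security.Cryptography.ProtectedData]::Protect(
    $data, $null, [System.Security.Cryptography.DataProtectionScope]::LocalMachine)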
It would be technically possible to pass selected credentials stored in Certify into PowerShell in decrypted form as part of $result. Obviously, once they're decrypted, anything can happen to them depending on what your script does.
I've also briefly looked into offering Azure Vault and Hashicorp Vault as potential credential storage providers (so credentials aren't kept on the machine), but haven't really worked on that yet.
Looks like we could also provide a vault provider for the PowerShell Secrets option. Could be a thing.
Thanks, yes the security concern is valid (anyone who can elevate to Local System has access to Certify credentials) - this is one of the reasons why it would be good to keep credentials off the machine. This is also in contrast to some other tools which require the credentials in plain text, so it's a trade-off.
So an alternative approach is to Run As a different user in your script, or to write out the data you need (such as the JSON for the ManagedItem, or just the thumbprints) to a file, then have another script (running as the user you need, either scheduled or triggered by a file watcher) pick up the info and use it to complete the deployment tasks.
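A rough sketch of the watcher side (paths and names are placeholders):

# watcher script, running as a user who can access the secrets
# (the session must stay running, e.g. launched as a scheduled logon task)
$watcher = New-Object System.IO.FileSystemWatcher "C:\CertifyHandoff", "*.json"
$watcher.EnableRaisingEvents = $true

Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
    # read the ManagedItem JSON written by the Certify deployment script
    $item = Get-Content $Event.SourceEventArgs.FullPath -Raw | ConvertFrom-Json
    # ...use $item.CertificatePreviousThumbprintHash etc. to run the deployment
}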
Future versions of Certify will allow you to run deployment tasks (scripts included) as other users, but of course you have to keep the password somewhere before you can log on as that user.
I have been scratching my head over this one. The credentials stored in this case are the creds used to access the API for our service, so that the script can upload the certs generated through Certify. If I "run as" a different user, I need to store the creds for that user so that Certify can elevate.
Currently, I read the API creds from a JSON file, but this is in clear text. It irks me that I am doing this, and I really dislike pointing other customers to do the same. I am not a strong enough scripter to come up with something better at this time.
I don't see it as a technical problem, more chicken-and-egg. If you're going to log in as someone, you need to know their password (or a temporary authentication token generated using their password), which in turn means you store it somewhere; you can optionally encrypt that, but then your encryption key needs to be stored somewhere as well.
Eventually you arrive at a point where you’re just trusting a particular mechanism as being the least-bad.
For best-practice ways of handling secrets in the open, look at https://www.vaultproject.io/ and https://azure.microsoft.com/en-au/services/key-vault/, but at the end of the day you are unlocking a credential, and once it's unlocked you're trusting the user/process not to leak it. If you need individual customers to post to your API, you should issue each of them a distinct and potentially short-lived token (like a JWT) that you can later revoke or expire; that way no two customers share the same credentials.