Azure Key Vault Error: The Specified PEM X.509 Certificate Content Is In An Unexpected Format

Microsoft’s Azure Key Vault supports uploading certificates in the PEM format. However, it is a bit picky, and the format must be exact. The documentation contains all the information, but the PEM format has a few nuances that the documentation does not address.

The following is a valid certificate as generated by a PKI.

Subject: CN=The Codeslinger,O=The Codeslinger,C=DE
Issuer: CN=The Codeslinger Intermediate,O=The Codeslinger,C=DE
-----BEGIN CERTIFICATE-----
MIIC...Ivw=
-----END CERTIFICATE-----
Subject: CN=The Codeslinger Intermediate,O=The Codeslinger,C=DE
Issuer: CN=The Codeslinger Root,O=The Codeslinger,C=DE
-----BEGIN CERTIFICATE-----
MIIB...Rps=
-----END CERTIFICATE-----
Subject: CN=The Codeslinger Root,O=The Codeslinger,C=DE
Issuer: CN=The Codeslinger Root,O=The Codeslinger,C=DE
-----BEGIN CERTIFICATE-----
MIIB...aA==
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIE...12Us
-----END RSA PRIVATE KEY-----

However, Key Vault will not accept it. Instead, it throws the dreaded error: “The specified PEM X.509 certificate content is in an unexpected format. Please check if certificate is in valid PEM format.”

As you can see in the documentation, the PEM file must not contain metadata about the certificate and the issuing authorities. You can remove this information, and the PEM will look like the following (a one-liner that strips those lines follows below).

-----BEGIN CERTIFICATE-----
MIIC...Ivw=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB...Rps=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB...aA==
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIE...12Us
-----END RSA PRIVATE KEY-----
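
If you do not want to delete the metadata lines by hand, a simple filter does the job. This is just a sketch: it assumes the metadata lines all start with Subject: or Issuer:, as in the example above, and the file names are made up.

grep -vE '^(Subject|Issuer):' original.pem > stripped.pem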

You are not done yet, though, as the key must be in PKCS#8 format. If you store your key in a file, the following OpenSSL command will do the trick.

openssl pkcs8 -topk8 -nocrypt -in private-key.pem
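
By default, the converted key is printed to standard output. If you would rather write it straight to a file, add the -out option (the file name is again just an example).

openssl pkcs8 -topk8 -nocrypt -in private-key.pem -out private-key-pkcs8.pem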

This works for RSA keys, as shown above, and Elliptic Curve keys.

-----BEGIN EC PRIVATE KEY-----
MHcC...8g==
-----END EC PRIVATE KEY-----

The output will be the following.

-----BEGIN PRIVATE KEY-----
MIGH...vnry
-----END PRIVATE KEY-----

Putting it all together, Key Vault will now accept the certificate.

-----BEGIN CERTIFICATE-----
MIIC...Ivw=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB...Rps=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB...aA==
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
MIIE...8kjt
-----END PRIVATE KEY-----
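
If you want to do the upload from the command line, the following sketch works, assuming the three certificate blocks are stored in cert-chain.pem, the converted key in private-key-pkcs8.pem, and with placeholders for the vault and certificate names.

# Assemble the upload file: cleaned certificate chain followed by the PKCS#8 key.
cat cert-chain.pem private-key-pkcs8.pem > keyvault-upload.pem

# Import the combined PEM into Key Vault with the Azure CLI.
az keyvault certificate import --vault-name <vault-name> --name <certificate-name> --file keyvault-upload.pem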

I hope this helps.

Thank you for reading.

Connect Spring Cloud Stream With Kafka Binder to Azure Event Hub

In two previous blog posts, I explained how to create a Kafka consumer and producer with the Spring Cloud Stream framework. In the Famous Last Words section of the producer post, I already hinted at using this technology to connect to Azure Event Hub. While doing so, I discovered an error in one of Microsoft’s examples that cost me about two days of work. I show you how to avoid the dreaded “Node -1 disconnected” error.

In this tutorial, I explain how to use the exact same code to connect to Azure Event Hub using a Shared Access Signature Token (connection string) and a Service Principal.

I have good news and bad news. Which one first? The bad? Okay, here we go:

There will not be any code in this tutorial, only YAML configuration.

Now to the good part:

There will not be any code in this tutorial, only YAML configuration.

This is the beauty of Spring Cloud Stream. Granted, I am not even swapping the binder for an Azure-native variant, so why would there be any code changes? But let me say this: while researching how to get this to work, I briefly plugged in the Event Hub Binder without changing the code. Even the updates to the config were minimal. A few Event Hub-specific settings, especially the Storage Account for checkpoints, and that was it.

Enough foreplay; the full post explains what you likely came here for.

AD Workload Identity for AKS Pod-Assigned Managed Identity (Cross-Post)

Managing credentials and other types of access tokens is a hassle. In Microsoft’s Azure Cloud, you can take advantage of Service Principals and RBAC. But even then, a Service Principal requires a password. There is a better solution in Azure called Managed Identity. But how can you employ this feature when your workload runs in AKS? There is a solution, and I’ve explained all you need to know in an article on my employer’s developer blog.

There was this thing called Pod-Managed Identities, but that was pretty elaborate in its setup. Azure Workload Identity is much leaner, making the configuration and usage more straightforward. Managing credentials and connection strings in Kubernetes microservices is a hassle I have disliked from the start. Assigning a Managed Identity to an AKS pod or even a Service Principal and then relying on Azure RBAC can make your life as a developer or IT ops engineer so much more enjoyable.

Visit the blog linked earlier to read the full version. It’ll contain my usual bad jokes and is not censored in any way. I’d post the same article 1:1 on this blog if I had not researched the topic on company time.

I hope it can help you, and thank you for reading.

Terraform Azure Error SoftDeletedVaultDoesNotExist

I just ran into a frustrating error that seemed inexplicable to me. My goal was to replace an existing Azure Resource Group with a new one managed entirely with Terraform. Besides a few other errors, this SoftDeletedVaultDoesNotExist was incredibly confusing because there were no Key Vaults left in the Resource Group’s list of resources.

Error: creating Vault: (Name "my-fancy-key-vault" / Resource Group "The-Codeslinger"): 
keyvault.VaultsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- 
Original Error: Code="SoftDeletedVaultDoesNotExist" 
Message="A soft deleted vault with the given name does not exist. 
Ensure that the name for the vault that is being attempted to recover is in a recoverable state. 
For more information on soft delete please follow this link https://go.microsoft.com/fwlink/?linkid=2149745"

with module.base.azurerm_key_vault.keyvault,
on terraform\key_vault.tf line 9, in resource "azurerm_key_vault" "keyvault":
    9: resource "azurerm_key_vault" "keyvault" {

That is because the old Key Vault had soft delete enabled. And it was the Key Vault from the other Resource Group, the one I had previously cleared of all resources, not from the new Resource Group.

Using the az CLI, you can display it, though.

$ az keyvault list-deleted
[
    {
        "id": "/subscriptions/<subscription-id>/providers/Microsoft.KeyVault/locations/westeurope/deletedVaults/my-fancy-key-vault",
        "name": "my-fancy-key-vault",
        "properties": {
            "deletionDate": "2021-08-02T09:39:29+00:00",
            "location": "westeurope",
            "purgeProtectionEnabled": null,
            "scheduledPurgeDate": "2021-10-31T09:39:29+00:00",
            "tags": {
                "customer": "The-Codeslinger",
                "source": "Terraform"
            },
            "vaultId": "/subscriptions/<subscription-id>/resourceGroups/My-Other-ResourceGroup/providers/Microsoft.KeyVault/vaults/my-fancy-key-vault"
        },
        "type": "Microsoft.KeyVault/deletedVaults"
    }
]

And finally, purge it.

$ az keyvault purge --name my-fancy-key-vault

After that, it is gone.

$ az keyvault list-deleted
[]

Another option seems to be the Azure Portal, but I discovered this only after removing it on the command line.

Terraform Azure Error: parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1; Failed to load token files

There are some instances where I have managed to screw up my Azure CLI configuration file with Terraform. It must have something to do with running Terraform in parallel, or running Terraform and the az tool at the same time. Either way, I ran into the following error.

$ terraform refresh
Acquiring state lock. This may take a few moments...

Error: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1

  on main.tf line 16, in provider "azurerm":
  16: provider "azurerm" {

I wondered: "What might block the Azure access? Am I maybe not logged in?" So, I went ahead and tried to log in.

$ az login
Failed to load token files. If you have a repro, please log an issue
at https://github.com/Azure/azure-cli/issues. At the same time, you 
can clean up by running 'az account clear' and then 'az login'. 

(Inner Error: Failed to parse /home/rlo/.azure/accessTokens.json with exception: Extra data: line 1 column 18614 (char 18613))

The error probably comes from parallel access to my Azure CLI configuration file. When I opened /home/rlo/.azure/accessTokens.json, I found some dangling garbage at the end that broke the JSON format.

Here’s a snippet of the last few lines.

        "refreshToken": "0.A...",
        "oid": "<oid>",
        "userId": "<userId>",
        "isMRRT": true,
        "_clientId": "<clientId>",
        "_authority": "https://login.microsoftonline.com/<uid>"
    }
]bc1"}]

I took out the trash bc1"}], saved the file, and it worked again. Many access to resources. Such joy 😉
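
By the way, if you want to verify that the repaired file parses as valid JSON before the next Terraform run, a quick check does the trick, assuming jq is installed.

$ jq . ~/.azure/accessTokens.json > /dev/null && echo "JSON is valid"

And if manual surgery does not help, the error message above already suggests the last resort: az account clear followed by az login to regenerate the token cache.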

Azure PostgreSQL Error: PSQLException The connection attempt failed

A few days ago at work, I was investigating a strange issue where one of our services could not connect to the Azure Managed PostgreSQL Database from the Kubernetes cluster. Oddly enough, other services of that cluster did not exhibit this behavior.

org.postgresql.util.PSQLException: The connection attempt failed.
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:315) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:225) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.Driver.makeConnection(Driver.java:465) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.Driver.connect(Driver.java:264) ~[postgresql-42.2.16.jar!/:42.2.16]
        ...
        at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:107) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
Caused by: java.io.EOFException: null
        at org.postgresql.core.PGStream.receiveChar(PGStream.java:443) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.enableGSSEncrypted(ConnectionFactoryImpl.java:436) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:144) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213) ~[postgresql-42.2.16.jar!/:42.2.16]
        ... 46 common frames omitted

As it turns out, it was an issue with the PSQL JDBC driver version that comes bundled with Spring Boot 2.3.4.RELEASE. All the other services were still built with a slightly older release and therefore used an older PSQL JDBC driver.

The key indicator of what is going on is this method call.

org.postgresql.core.v3.ConnectionFactoryImpl.enableGSSEncrypted

A bit of research led me to a question on StackOverflow that pointed me in the right direction, and ultimately I ended up on Microsoft’s Azure documentation. If you scroll down, you will find a section named "GSS error".

The solution to this problem is simple. If you do not want to or cannot change the Spring Boot or PSQL JDBC driver version, e.g., because automated CVE scans break builds (the reason we upgraded this one service in the first place), then you can solve it with a configuration change: append gssEncMode=disable to the JDBC connection string.

Example: jdbc:postgresql://svc-pdb-name.postgres.database.azure.com:5432/databasename?gssEncMode=disable
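
If changing the application configuration is inconvenient, the parameter can also be passed as a Spring Boot command-line override when starting the service. This is only a sketch: the jar name is a placeholder, and it assumes the service reads the standard spring.datasource.url property.

java -jar my-service.jar --spring.datasource.url="jdbc:postgresql://svc-pdb-name.postgres.database.azure.com:5432/databasename?gssEncMode=disable"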