Connect Spring Cloud Stream With Kafka Binder to Azure Event Hub

In two previous blog posts, I explained how to create a Kafka consumer and producer with the Spring Cloud Stream framework. In the Famous Last Words section of the producer post, I already hinted at using this technology to connect to Azure Event Hub. While getting that to work, I discovered an error in one of Microsoft’s examples that cost me about two days of work. I will show you how to avoid the dreaded “Node -1 disconnected” error.

In this tutorial, I explain how to use the exact same code to connect to Azure Event Hub using a Shared Access Signature Token (connection string) and a Service Principal.

I have good news and bad news. Which one first? The bad? Okay, here we go:

There will not be any code in this tutorial, only YAML configuration.

Now to the good part:

There will not be any code in this tutorial, only YAML configuration.

This is the beauty of Spring Cloud Stream. Granted, I am not even swapping the binder for an Azure-native variant, so why would there be any code changes? But let me say this: during my research on getting this to work, I briefly plugged in the Event Hub Binder without changing the code. Even the updates to the config were minimal: a few Event Hub-specific settings, most notably the Storage Account for checkpoints, and that was it.

Enough foreplay; let me explain what you likely came here for.
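
To give you an idea before you click through: the connection-string variant boils down to pointing the Kafka binder at the Event Hubs namespace and passing the connection string via SASL/PLAIN. The following is only a minimal sketch with placeholder values, not the exact configuration from the post:

spring:
  cloud:
    stream:
      kafka:
        binder:
          # Event Hubs exposes a Kafka-compatible endpoint on port 9093 of the namespace
          brokers: <your-namespace>.servicebus.windows.net:9093
          configuration:
            security.protocol: SASL_SSL
            sasl.mechanism: PLAIN
            # the username is literally "$ConnectionString"; the password is the namespace connection string
            sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<your-connection-string>";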

Read More »

Produce Messages With Spring Cloud Stream Kafka

In a recent post, I explained how to create a Kafka Consumer application with Spring Boot using Spring Cloud Stream with the Kafka Binder. In this installment, I explain how to build the other side of the Kafka connection: the producer.

The main concepts are the same. The most significant change is that instead of a Consumer<T>, you implement a Supplier<T>.
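
As a rough sketch (the bean and binding names here are made up, not taken from the post), a producer can be as small as this:

import java.util.function.Supplier;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProducerConfiguration {

    // Spring Cloud Stream polls this Supplier and publishes every returned value
    // to the output binding "produceGreeting-out-0", which maps to a Kafka topic.
    @Bean
    public Supplier<String> produceGreeting() {
        return () -> "Hello at " + java.time.Instant.now();
    }
}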

Read More »

Consume Messages With Spring Cloud Stream Kafka

Spring Cloud Stream is a very complex topic and a remarkable piece of technology. It builds on other intricate Spring technologies like Spring Integration and Spring Cloud Function, and when you add Apache Kafka to the mix, you have a steep learning curve on your hands.

There is a lot of documentation to read and comprehend, and I do not think it helps that your first interaction with the technology is a show-off example. Here is the sample from the “Introducing Spring Cloud Stream” section.

import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class SampleApplication {
    public static void main(String[] args) {
        SpringApplication.run(SampleApplication.class, args);
    }

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}

That is supposedly a fully functioning application. The uppercase() method consumes and produces simultaneously, essentially turning it into a way software can pleasure itself. To understand this example, you must know about all the Spring Boot auto-configuration magic happening in the background. Otherwise, it is an opaque, magical, and indecipherable showpiece.

This post will show a practical example of a simple consumer application receiving messages from a Kafka cluster. This was my use case, and coming to the technology with fresh eyes, I found that the documentation, while full of helpful information, only confused me at first. As a result, it took me a long time to put all the pieces together and understand what was going on.
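
To contrast the documentation sample above with my use case, here is a rough sketch of a dedicated consumer bean (the names are mine); the binding consumeMessage-in-0 is then pointed at a topic via spring.cloud.stream.bindings.consumeMessage-in-0.destination:

import java.util.function.Consumer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConsumerConfiguration {

    private static final Logger log = LoggerFactory.getLogger(ConsumerConfiguration.class);

    // Spring Cloud Stream wires this Consumer to the input binding "consumeMessage-in-0".
    @Bean
    public Consumer<String> consumeMessage() {
        return message -> log.info("Received: {}", message);
    }
}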

Read More »

AD Workload Identity for AKS Pod-Assigned Managed Identity (Cross-Post)

Managing credentials and other types of access tokens is a hassle. In Microsoft’s Azure Cloud, you can take advantage of Service Principals and RBAC. But even then, a Service Principal requires a password. There is a better solution in Azure called Managed Identity. But how can you employ this feature when your workload runs in AKS? There is a solution, and I’ve explained all you need to know in an article on my employer’s developer blog.

There was this thing called Pod-Managed Identities, but that was pretty elaborate in its setup. Azure Workload Identity is much leaner, making the configuration and usage more straightforward. Managing credentials and connection strings in Kubernetes microservices is a hassle I have disliked from the start. Assigning a Managed Identity (or even a Service Principal) to an AKS pod and then relying on Azure RBAC can make your life as a developer or IT ops engineer so much more enjoyable.

Visit the blog linked earlier to read the full version. It contains my usual bad jokes and is not censored in any way. I would have posted the same article 1:1 on this blog if I had not researched the topic on company time.

I hope it can help you, and thank you for reading.

Spring Boot Push Micrometer Metrics to Prometheus Pushgateway

Prometheus, as a metrics solution, gets its data by actively reading it from designated services – a process known as scraping. This approach might not work if your workload contains short-lived tasks, as a task may finish before the next scrape happens.

Luckily, Prometheus has a solution for this: the Pushgateway.

It presents a push-based target for your metrics that itself is scraped by Prometheus. But how do you configure this in a Spring Boot application? Let me show you.
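
If you just want a quick idea: with Spring Boot 2.x, Micrometer’s Prometheus registry, and the io.prometheus:simpleclient_pushgateway dependency on the classpath, the push can be enabled with a handful of properties. A minimal sketch with made-up values (the full post has the details and caveats):

management:
  metrics:
    export:
      prometheus:
        pushgateway:
          enabled: true
          base-url: http://localhost:9091  # address of your Pushgateway
          job: my-short-lived-task         # job label the metrics are pushed under
          push-rate: 10s                   # how often Spring Boot pushes
          shutdown-operation: push         # push one last time when the task shuts down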

Read More »

Spring Boot Custom Field Error Messages in Class-Based Custom Bean ConstraintValidator

This short guide focuses on a single specific aspect of custom bean validation. If you need to catch up on how to write a custom bean validator, check out the tutorial on reflectoring.io. What is usually missing from these how-tos is how to write a validator for an entire class instead of a single field, and how to report custom error messages for specific fields of that class.

Why would you want to write a validator for an entire class?

You may run into a situation where the valid values of one field of a class depend on the value of another field. For example, the value of the field “type” determines which values are valid for the field “content”.

But when you define a custom validator, the validation annotation (the @interface) only carries a single error message. The result is that every field error produces the same message. In a web service, that is not very helpful for the users of your API.
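
To make the problem and the fix concrete, here is a rough sketch of a class-level validator (the annotation ContentMatchesType and the DTO are hypothetical stand-ins; creating the annotation itself is covered by the reflectoring.io tutorial). The key is disabling the default class-level violation and attaching a custom message to a specific property node. This uses the javax.validation API of Spring Boot 2.x; with Spring Boot 3, the package is jakarta.validation.

import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class ContentMatchesTypeValidator
        implements ConstraintValidator<ContentMatchesType, MessageDto> {

    @Override
    public boolean isValid(MessageDto dto, ConstraintValidatorContext context) {
        if (dto == null || isContentValidForType(dto)) {
            return true;
        }
        // Drop the single message defined on the annotation...
        context.disableDefaultConstraintViolation();
        // ...and report a field-specific message on "content" instead.
        context.buildConstraintViolationWithTemplate(
                        "value is not allowed for type " + dto.getType())
                .addPropertyNode("content")
                .addConstraintViolation();
        return false;
    }

    private boolean isContentValidForType(MessageDto dto) {
        // purely illustrative rule: type NUMBER requires numeric content
        if ("NUMBER".equals(dto.getType())) {
            return dto.getContent() != null && dto.getContent().matches("\\d+");
        }
        return dto.getContent() != null;
    }
}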

Read More »

Spring Boot @RestController Action Returning java.util.Optional

Recently, I wondered what would happen if a Spring Boot RestController returned a java.util.Optional instead of a regular POJO.

I invested 30 minutes of my life to find out, and created an over-engineered example on GitHub.

Here is the controller.

@Slf4j
@RestController
@RequiredArgsConstructor
public class MusicController {

    private final MusicService musicService;

    @GetMapping(path = "/value")
    Album getValue(@RequestParam("isNull") boolean isNull) {
        log.info("Request album value (is null: {})", isNull);
        return musicService.getAlbumAsValue(isNull);
    }

    @GetMapping(path = "/optional")
    Optional<Album> getOptional(@RequestParam("isNull") boolean isNull) {
        log.info("Request album optional (is null: {})", isNull);
        return musicService.getAlbumAsOptional(isNull);
    }
}

The first question I had was whether Spring Boot would even start up. You never know. It does, and with this hurdle out of the way, here’s the output of a couple of curl commands.

~ % curl -i 'http://localhost:8080/value?isNull=false'   
HTTP/1.1 200 
Content-Type: application/json
Transfer-Encoding: chunked
Date: Sun, 24 Jul 2022 08:24:50 GMT

{"artist":"Insomnium","title":"Winter's Gate","genre":"Melodic Death Metal","year":2016} 

~ % curl -i 'http://localhost:8080/value?isNull=true'    
HTTP/1.1 200 
Content-Length: 0
Date: Sun, 24 Jul 2022 08:24:55 GMT
 
~ % curl -i 'http://localhost:8080/optional?isNull=false'
HTTP/1.1 200 
Content-Type: application/json
Transfer-Encoding: chunked
Date: Sun, 24 Jul 2022 08:25:00 GMT

{"artist":"Insomnium","title":"Winter's Gate","genre":"Melodic Death Metal","year":2016} 

~ % curl -i 'http://localhost:8080/optional?isNull=true' 
HTTP/1.1 200 
Content-Type: application/json
Transfer-Encoding: chunked
Date: Sun, 24 Jul 2022 08:25:06 GMT

null

Everything works as expected except for the null case with the Optional: instead of an empty body, the response contains the literal string “null” with Content-Type application/json.

The moral of the story: Do not return java.util.Optional from a @RestController directly, or be prepared to do some extra work to unwrap it.
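
One straightforward way to do that unwrapping, sketched as an extra endpoint for the controller above (the path is made up): ResponseEntity.of() returns 200 with the body when the Optional has a value and 404 Not Found when it is empty.

    // requires: import org.springframework.http.ResponseEntity;
    @GetMapping(path = "/optional-unwrapped")
    ResponseEntity<Album> getOptionalUnwrapped(@RequestParam("isNull") boolean isNull) {
        log.info("Request album optional, unwrapped (is null: {})", isNull);
        // 200 with the album as JSON if present, 404 with an empty body otherwise
        return ResponseEntity.of(musicService.getAlbumAsOptional(isNull));
    }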

Thank you for reading.

Simplify Spring Boot Access to Kubernetes Secrets Using Environment Variables

This blog post is a follow-up to a previous post titled “Simplify Spring Boot Access to Secrets Using Spring Cloud Kubernetes”. In that post, I mentioned some downsides and already hinted at a more straightforward solution that utilizes environment variables. The plan is to get everything into the Pod with as little configuration effort as possible.

So, I promised a twist, and here it is, thanks to one of my colleagues who pushed me in this direction. Kubernetes gives you yet another tool to expose Secrets as environment variables. This time, it is more convenient because you point it at the complete Secret instead of a single value. Kubernetes will then make all of its key-value pairs available as individual environment variables.
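
The Kubernetes feature that does this is envFrom with a secretRef on the container. A minimal sketch with made-up names:

apiVersion: apps/v1
kind: Deployment
# (metadata and the rest of the Deployment omitted)
spec:
  template:
    spec:
      containers:
        - name: my-spring-boot-app            # hypothetical container
          image: registry.example.com/app:1.0 # hypothetical image
          envFrom:
            - secretRef:
                name: my-app-secrets          # every key in this Secret becomes an environment variable

Spring Boot’s relaxed binding then lets an environment variable like SPRING_DATASOURCE_PASSWORD feed the property spring.datasource.password.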

Read More »

Simplify Spring Boot Access to Secrets Using Spring Cloud Kubernetes

This topic has its origin in how we manage Kubernetes Secrets at my workplace. We use Helm for deployments, and we must support several environments with their connection strings, passwords, and other settings. As a result, some things are a bit more complicated, and one of them is access to Kubernetes Secrets from a Spring Boot application running in a Pod.

This blog post covers the following:

  1. How do you generally get Secrets into a Pod?
  2. How do we currently do it using Helm?
  3. How can it be improved with less configuration?
  4. Any gotchas? Of course, it is software.

I will explain a lot of the rationale, so expect a substantial amount of prose between the (code) snippets.

Read More »

Spring Boot Externalized Config on Command Line With Apache Commons CLI – Missing Required Option

I know this title is a bit of a mouthful, but you need to get all the keywords in for Google to do its magic 😉. In the previous blog post, I mentioned that I would take another look at this topic through the lens of a programmer who uses Apache Commons CLI for command-line argument handling. In a work project, I noticed odd error messages claiming that a command-line option did not have a value assigned to it, although it obviously did.

A more extensive set of examples can be found in the README file on GitHub, together with the code.

The sole reason for this blog post is how parameters that are unknown to Commons CLI can mess up its parsing. The demo application defines two required Options – one for input (“-i” or “--input”) and one for output (“-o” or “--output”). Consider this command, where I also pass a Spring configuration setting.

% java -jar target/external-config-commons-cli-1.0.0.jar --spring.config.additional-location=src/config/application-mac.yml -i in -o out
-> AppRunner.run() Command Line Arguments
Argument: --spring.config.additional-location=src/config/application-mac.yml
Argument: -i
Argument: in
Argument: -o
Argument: out
-> ExternalConfigProperties
Input path: /Users/mac/thecode
Output path: /Users/mac/slinger
-> Parsing Help With Apache Commons CLI
-> Parsing Arguments With Apache Commons CLI
Missing required options: i, o

Both options are clearly there; the raw output of the String... args array shows that. By default, Commons CLI complains about unknown options. I disabled that behavior by setting stopAtNonOption to true. The parameter’s name makes no sense to me because parsing does not stop, but I might be misinterpreting something.

Either way, I assume that Commons CLI expects an option followed by a value by default. --spring.config.additional-location=src/config/application-mac.yml is one continuous string, an option without a value – at least to Commons CLI. It then reads -i as the value of that option, and from there, the parsing goes south: the actual options are now interpreted as values.
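
For context, here is roughly how such a setup looks in code (a condensed sketch, not the demo application itself):

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class CliParsingSketch {

    public static void main(String[] args) throws ParseException {
        Options options = new Options();
        options.addOption(Option.builder("i").longOpt("input").hasArg().required().desc("input path").build());
        options.addOption(Option.builder("o").longOpt("output").hasArg().required().desc("output path").build());

        DefaultParser parser = new DefaultParser();
        // The third parameter is stopAtNonOption: unknown tokens such as
        // --spring.config.additional-location=... no longer abort the parsing,
        // but they can still shift how the following tokens are interpreted.
        CommandLine cmd = parser.parse(options, args, true);

        System.out.println("Found option i with value " + cmd.getOptionValue("i"));
        System.out.println("Found option o with value " + cmd.getOptionValue("o"));
    }
}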

Note, though, that Spring still accepts the configuration setting.

How can we fix that? There are two ways:

  1. Add the Spring arguments at the end of the command line.
  2. Use the JVM-style Spring arguments with “-D”, as alluded to in the other blog post.

Putting the argument at the end:

% java -jar target/external-config-commons-cli-1.0.0.jar -i in -o out --spring.config.additional-location=src/config/application-mac.yml
-> AppRunner.run() Command Line Arguments
Argument: -i
Argument: in
Argument: -o
Argument: out
Argument: --spring.config.additional-location=src/config/application-mac.yml
-> ExternalConfigProperties
Input path: /Users/mac/thecode
Output path: /Users/mac/slinger
-> Parsing Help With Apache Commons CLI
-> Parsing Arguments With Apache Commons CLI
Found option i with value in
Found option o with value out

Using the JVM-style:

% java -Dspring.config.additional-location=src/config/application-mac.yml -jar target/external-config-commons-cli-1.0.0.jar -i in -o out
-> AppRunner.run() Command Line Arguments
Argument: -i
Argument: in
Argument: -o
Argument: out
-> ExternalConfigProperties
Input path: /Users/mac/thecode
Output path: /Users/mac/slinger
-> Parsing Help With Apache Commons CLI
-> Parsing Arguments With Apache Commons CLI
Found option i with value in
Found option o with value out

Thank you very much for reading. I hope this was helpful.

Spring Boot Externalized Config on Command Line

Spring Boot applications do not always have to be web services on the Internet. You can also use Spring Boot (or Spring without the Boot) for a command-line utility. I was recently faced with this task, and one requirement for the tool was to support setting a profile-specific configuration on the command line. This is not earth-shattering per se, since that is a regular Spring feature. The goal was to provide a profile-specific configuration file on the command line that is not bundled with the application.

Imagine developing a Cloud service and running different environments for the different phases of your project – one for development tests, a staging environment, and, finally, the production environment. Connecting to the different environments may require secrets you do not want to be bundled in the application – and, thus, the source tree.

Now, you could roll your own configuration file reader. But wouldn’t it be nice to have full support for Spring’s @Value annotation or @ConfigurationProperties classes?
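
For a taste of what that looks like, here is a rough sketch of a @ConfigurationProperties class along the lines of the ExternalConfigProperties shown in the follow-up post’s output above (the prefix and field names are my assumptions, not the actual code):

import org.springframework.boot.context.properties.ConfigurationProperties;

// Registered via @EnableConfigurationProperties(ExternalConfigProperties.class)
// or @ConfigurationPropertiesScan on the application class.
@ConfigurationProperties(prefix = "external-config")  // prefix is made up
public class ExternalConfigProperties {

    // bound from external-config.input-path in the YAML file passed on the command line
    private String inputPath;
    // bound from external-config.output-path
    private String outputPath;

    public String getInputPath() { return inputPath; }
    public void setInputPath(String inputPath) { this.inputPath = inputPath; }

    public String getOutputPath() { return outputPath; }
    public void setOutputPath(String outputPath) { this.outputPath = outputPath; }
}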

Read More »

Azure PostgreSQL Error: PSQLException The connection attempt failed

A few days ago at work, I was investigating a strange issue where one of our services could not connect to the managed Azure PostgreSQL database from the Kubernetes cluster. Oddly enough, other services in that cluster did not exhibit this behavior.

org.postgresql.util.PSQLException: The connection attempt failed.
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:315) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:225) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.Driver.makeConnection(Driver.java:465) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.Driver.connect(Driver.java:264) ~[postgresql-42.2.16.jar!/:42.2.16]
        ...
        at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:107) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
Caused by: java.io.EOFException: null
        at org.postgresql.core.PGStream.receiveChar(PGStream.java:443) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.enableGSSEncrypted(ConnectionFactoryImpl.java:436) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:144) ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213) ~[postgresql-42.2.16.jar!/:42.2.16]
        ... 46 common frames omitted

As it turns out, it was an issue with the PostgreSQL JDBC driver version that comes bundled with Spring Boot 2.3.4.RELEASE. All the other services were still built with a slightly older Spring Boot release and therefore used an older PostgreSQL JDBC driver.

The key indicator of what is going on is this method call.

org.postgresql.core.v3.ConnectionFactoryImpl.enableGSSEncrypted

A bit of research led me to a question on StackOverflow that pointed me in the right direction, and ultimately I ended up on Microsoft’s Azure documentation. If you scroll down, you will find a section named "GSS error".

The solution to this problem is simple. If you do not want to or cannot change the Spring Boot or PostgreSQL JDBC driver version, e.g., because of automated CVE scans that break builds (the reason we upgraded this one service), you can solve it with a configuration change: append gssEncMode=disable to the JDBC connection string.

Example: jdbc:postgresql://svc-pdb-name.postgres.database.azure.com:5432/databasename?gssEncMode=disable
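
In a Spring Boot application, that translates to nothing more than the data source URL. A minimal sketch reusing the placeholders from the example above (the credential placeholders are mine):

spring:
  datasource:
    url: jdbc:postgresql://svc-pdb-name.postgres.database.azure.com:5432/databasename?gssEncMode=disable
    username: ${DB_USERNAME}  # hypothetical environment placeholders
    password: ${DB_PASSWORD}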