Convert Java POJO With Protobuf field to JSON Using Jackson

In this blog post, I will explain how to convert a regular Java class that contains a Protobuf message field to JSON using the Jackson library, for example, as the return value of a @Controller method in a Spring Boot application.

You might wonder: why is this such a big deal? After all, you can create complex POJO hierarchies, and Jackson will pick them up just fine. Well, maybe this error message will convince you.

o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [
    Request processing failed; nested exception is org.springframework.http.converter.HttpMessageConversionException: 
    Type definition error: [simple type, class]; 
    nested exception is com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Direct self-reference leading to cycle 
    (through reference chain: com.thecodeslinger.ppjs.web.dto.AwesomeDto["awesomePowerUp"]
        ->["defaultInstanceForType"])] with root cause

Java classes generated by the Protobuf compiler require their own JSON converter, JsonFormat.Printer. So, how can we get Jackson and JsonFormat.Printer to love each other and have a wedding together?

Simple: we create a custom JsonSerializer.

public class ProtobufSerializer extends JsonSerializer<Message> {

    private final JsonFormat.Printer protobufJsonPrinter = JsonFormat.printer();

    @Override
    public void serialize(Message anyProtobufMessage, JsonGenerator jsonGenerator,
            SerializerProvider serializerProvider) throws IOException {
        // The magic sauce: use the Protobuf JSON converter to write a raw JSON
        // string of the Protobuf message instance.
        jsonGenerator.writeRawValue(protobufJsonPrinter.print(anyProtobufMessage));
    }
}

This class is very simple and can work with any Protobuf Message class instance. This way, it is universal and only needs to be written once. The main ingredient is the jsonGenerator.writeRawValue method that takes the input without modification. Since we already ensure a proper JSON format using Protobuf’s converter, this is no problem in this case. Otherwise, be careful with this method.

The last step is to annotate the Message field in the POJO, so Jackson knows what to do.

@JsonSerialize(using = ProtobufSerializer.class)
private final AwesomePowerUpOuterClass.AwesomePowerUp awesomePowerUp;
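To see the pattern end to end, here is a minimal, self-contained sketch. It is not the Protobuf setup from this post: FakeMessage and its toJson method are hypothetical stand-ins for a generated Message and JsonFormat.Printer.print, so the example compiles without the Protobuf dependency, but the writeRawValue mechanism is exactly the same.

```java
import java.io.IOException;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;

// Hypothetical stand-in for a Protobuf message: it already knows how to
// render itself as a raw JSON string, like JsonFormat.Printer does.
class FakeMessage {
    String toJson() {
        return "{\"power\":\"laser\",\"level\":9}";
    }
}

// Same pattern as ProtobufSerializer: hand the pre-rendered JSON string
// to writeRawValue so Jackson embeds it without re-escaping it.
class FakeMessageSerializer extends JsonSerializer<FakeMessage> {
    @Override
    public void serialize(FakeMessage msg, JsonGenerator gen, SerializerProvider provider)
            throws IOException {
        gen.writeRawValue(msg.toJson());
    }
}

class AwesomeDto {
    @JsonSerialize(using = FakeMessageSerializer.class)
    public final FakeMessage awesomePowerUp = new FakeMessage();
}

public class Demo {
    public static void main(String[] args) throws Exception {
        String json = new ObjectMapper().writeValueAsString(new AwesomeDto());
        System.out.println(json);
        // {"awesomePowerUp":{"power":"laser","level":9}}
    }
}
```

Without the @JsonSerialize annotation, Jackson would try to introspect the message object's getters itself and run into the self-reference cycle shown in the error above.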

You can find a complete working example that uses Spring Boot and a REST endpoint on my GitHub account.

Install Minikube in VirtualBox on Remote Machine for Kubectl

At work, we are using Kubernetes as a way to run our application services. To test and debug deployments before they go into code review and to the development environment, a local Kubernetes is beneficial. That is where Minikube comes into play. Unfortunately for me, our application services require more resources than my work laptop can provide, especially RAM. Either I close all applications and run Minikube, or I have a helpful browser and IDE window open 😉.

Since I need the local K8s cluster from time to time, I wondered if I could run it on my personal computer and access it from my laptop. This way, I can dedicate at least six physical cores and 24 GB of RAM to the VM (even more, but that was a nice number and more than enough).


OpenRGB – An RGB Software I Want to Use (It Runs on Linux!)

If you are in the market for anything gaming PC or gaming laptop related, chances are, you have come across the industry-wide trend of RGB illuminated hardware and peripherals. Everything is RGB, from the graphics card to the RAM, to your headset (because you can see the lights when you wear it 🙄), and many, many more. I am not against RGB lighting per se, but if you follow the industry as a PC hardware enthusiast, it is evident that in some aspects, this has gone too far.

Quick side note: after a rant about RGB software, I will show examples of using OpenRGB on Windows and Linux. If you are interested in only that, skip the rant and scroll to the bottom.


Apple, Stop Parenting Me – Rant About iOS 14 Auto-Volume Reduction

Apple is a company that tends to believe it knows best what its customers want. Sometimes a company – not specific to Apple – does actually know better than the customer. Apple has been very active in the past years to push customer health and provide hardware, the Apple Watch, and software, the Health app, to facilitate this push in the form of products they can sell. I do not own an Apple Watch, but I genuinely view it as a good thing.

Now, with iOS 14, Apple has gone a bit too far with regards to health monitoring. It now enforces rules I, the customer and user of a device, cannot override. I am talking about the automatic volume reduction when iOS thinks I have been listening to loud audio for too long.

This is not okay!
This is not a situation where a company knows better.

It is actively limiting its product’s usefulness to me, the customer who paid a lot of money for it. I understand the motivation, but I cannot condone the action taken. Apple cannot even know why I turn up the volume to levels it deems inappropriate for a more extended period.

Here are a few examples, some of which already happened to me.

  1. Bluetooth-pairing the phone with my car’s audio system.

    I usually crank the phone’s volume to max to roughly match the other audio sources, like music on a USB stick (yes, I am a cave-man that has music on a stick).

  2. Listening to podcasts while going for a walk or run next to a busy road.

    Imagine my surprise when the voices speaking to me seemed to have disappeared because iOS lowered the volume to a point where the audio was drowned by traffic noise. I thought my phone had died – which has happened often enough thanks to an iOS bug that incorrectly reported battery percentage and dropped from 30% to turning off within 15-20 minutes.

  3. Listening with studio headphones that have a high input resistance (in ohm).

    I recently bought a new pair of headphones, and the quickest way to compare them with my old ones was to plug them into my phone. 80 Ω is not a lot, but enough to have to crank up the volume a bit higher to get a decent fun level. In the end, it is still much quieter compared to my PC soundcard that supports up to 600 Ω headphones.

No. 1 has not yet happened, but I assume it might once the world is rid of the COVID-19 pandemic, and I can/must travel to work a couple of times per month. On longer car rides, I usually listen to podcasts, and as mentioned, I turn up the volume on my phone in those cases. The other two issues have already managed to annoy me, and No. 3 prompted me to write this little rant – although that is the least likely of the three examples to occur regularly. Most of the time, it will be No. 2 when I am out walking or going for a run. The traffic noise is much worse than people talking to me. I am not even listening to music, which is also worse than people talking to me. I prefer Apple to turn down the car noise on the roads instead of my headphones. Until they can do that, stop messing with my volume, please.

(Is this a ploy to get me to buy horribly expensive AirPods Pro with
noise cancellation?)

I can agree that a notification is a good start to educate users. But please do not take any automatic action. At least make it configurable. I am an adult, and I should be able to decide for myself. On top of that, there are legitimate use-cases where a higher "theoretical" volume is required.

CMake C++ Custom Library on Windows “Undefined Reference” – No Error on Linux

Here is the short version with a quick setup of the situation and the fix. After that, I’ll elaborate a bit.



I have a custom C++ library and a separate project for tests (all based on Qt 6). The test project requires the library for execution.

Here is a short excerpt of the CMake scripts, first the library, then the tests.

project(wt2-shared VERSION 2.0.0 DESCRIPTION "WorkTracker2 Shared Library")

# To export symbols.

# Snip header + source definitions

add_library(${PROJECT_NAME} ${SOURCES} ${HEADERS})

target_include_directories(${PROJECT_NAME} PUBLIC include/)
target_link_libraries(wt2-shared Qt6::Core Qt6::Sql)

project(wt2-shared-test VERSION 2.0.0 DESCRIPTION "WorkTracker2 Shared Library Tests")

# Snip header + source definitions

add_executable(${PROJECT_NAME} ${SOURCES} ${HEADERS})

target_include_directories(${PROJECT_NAME} PRIVATE ${INCLUDES})
target_link_libraries(${PROJECT_NAME} Qt6::Core Qt6::Test wt2-shared)


This error only occurred on Windows, and it did not matter which toolchain I used, be it MinGW or MSVC. The result was always the same.

The following shows the MinGW error.

[100%] Linking CXX executable wt2-shared-test.exe    
    undefined reference to `__imp__ZN4Data3Sql13SqlDataSourceC1E7QString'
    undefined reference to `__imp__ZN4Data3Sql13SqlDataSource4loadEv'
    undefined reference to `__imp__ZN4Data3Sql13SqlDataSourceC1E7QString'	
    undefined reference to `__imp__ZN4Data3Sql13SqlDataSource4loadEv'
collect2.exe: error: ld returned 1 exit status
mingw32-make[2]: *** [wt2-shared-test/wt2-shared-test.exe] Error 1


The add_library definition in the CMakeLists.txt was incomplete. To make it work, I added SHARED because I want a shared library:

add_library(${PROJECT_NAME} SHARED ${SOURCES} ${HEADERS})


Continue reading, though, to get the full picture. There is more to it than just making the library a shared one.


CMake CLI Parameter “generator-name” Usage

This topic shouldn’t even require a blog post, but I find the CMake CLI usage rather odd when it comes to specifying a generator. Here’s a shortened "-h" output.

> cmake -h

cmake [options] <path-to-source>
cmake [options] <path-to-existing-build>
cmake [options] -S <path-to-source> -B <path-to-build>

Specify a source directory to (re-)generate a build system for it in the
current working directory.  Specify an existing build directory to
re-generate its build system.

-S <path-to-source>          = Explicitly specify a source directory.
-B <path-to-build>           = Explicitly specify a build directory.
-C <initial-cache>           = Pre-load a script to populate the cache.
-D <var>[:<type>]=<value>    = Create or update a cmake cache entry.
-U <globbing_expr>           = Remove matching entries from CMake cache.
-G <generator-name>          = Specify a build system generator.
-T <toolset-name>            = Specify toolset name if supported by
                               generator.
-A <platform-name>           = Specify platform name if supported by
                               generator.



The following generators are available on this platform (* marks default):
  Visual Studio 16 2019        = Generates Visual Studio 2019 project files.
                                 Use -A option to specify architecture.
  Visual Studio 15 2017 [arch] = Generates Visual Studio 2017 project files.
                                 Optional [arch] can be "Win64" or "ARM".

  Borland Makefiles            = Generates Borland makefiles.
* NMake Makefiles              = Generates NMake makefiles.
  NMake Makefiles JOM          = Generates JOM makefiles.
  MSYS Makefiles               = Generates MSYS makefiles.
  MinGW Makefiles              = Generates a make file for use with
                                 mingw32-make.
  Unix Makefiles               = Generates standard UNIX makefiles.

  Kate - Ninja                 = Generates Kate project files.
  Kate - Unix Makefiles        = Generates Kate project files.
  Eclipse CDT4 - NMake Makefiles
                               = Generates Eclipse CDT 4.0 project files.
  Eclipse CDT4 - MinGW Makefiles
                               = Generates Eclipse CDT 4.0 project files.
  Eclipse CDT4 - Ninja         = Generates Eclipse CDT 4.0 project files.
  Eclipse CDT4 - Unix Makefiles
                               = Generates Eclipse CDT 4.0 project files.

These generator options do not look like valid parameter values to the "-G" option. But they are. So, if you want to compile on Windows using MinGW, you have to use this.

> cmake -S ../src -B ./ -G "MinGW Makefiles"

Or, if you prefer Visual Studio project files:

> cmake -S ../ -B ./ -G "Visual Studio 15 2017 Win64"

This "syntax" looks weird, and it tripped me up for about 10 minutes until I found a sample and understood how it works.

Road to the Perfect Mini ITX PC (Part 5): Other Small Form-Factor ITX Cases

Previously on “Road to the Perfect Mini ITX PC”:

  1. Fractal Design Core 500
  2. NZXT H200
  3. Fractal Design Meshify C
  4. Lian Li TU150

The first two cases I had bought were before I became aware of all the mass-market and niche options that existed at that time. Not only have I learned about the DAN Case, NCase M1, and Streacom DA2, but companies have released more cases during the past year. I’m talking about the NZXT H1, the Cooler Master NR200, or, recently, Phanteks’ second attempt at the Evolv Shift. Even Lian Li’s TU150 landed during that time, my current case. There are even so many more cases, like the Louqe Ghost S1, the FormD T1, Sliger SM560, and many more.


Road to the Perfect Mini ITX PC (Part 4): Lian Li TU150

In a few aspects, the Lian Li TU150 is comparable to the NZXT H200. One: for an ITX enclosure, it is on the bigger side. And two: it has a similarly closed-off front. Other than that, they are pretty different, though. In some areas, that is a good thing, and it is a bad thing in others.

In the timeframe of just over a year, this is the fourth (!) computer case that I have tried. Usually, it is the CPU or GPU that gets replaced more often 😅 It is also my current case, which means I can provide good pictures to visualize my thought process better.


Road to the Perfect Mini ITX PC (Part 3): Fractal Design Meshify C White

In the third part of my road to the perfect mini ITX computer case, things will get a bit weird. As you may have gathered from the title, I will not talk about a mini ITX enclosure in this blog post. Quite the opposite, in fact: the Fractal Design Meshify C is a full-sized mid-tower ATX case.

You may now wonder why I suddenly had a change of heart and ditched a big.SMALL™ case for a not-so-small big computer tower. Well, I was surrendering to big graphics cards. Or, put the other way around, I was annoyed that I had to search endlessly to find a fast and quiet, and affordable two-slot graphics card model, only to fail ultimately. But, let me not get ahead of myself and start from the beginning, the same way I did for the previous two blog posts.


Road to the Perfect Mini ITX PC (Part 2): NZXT H200

The second of the bunch is one of the stylish cases from NZXT, the H200. While it is technically a mini ITX chassis, it is a large case for that market segment. Just like the Fractal Design Core 500, it is compatible with a wide range of hardware, making it the perfect enclosure for price-conscious buyers. On top of that, it also is beautiful.

Unfortunately, I do not have an image of a complete desk setup with this case. Here is one with a good look at the internal layout and installed hardware.


Road to the Perfect Mini ITX PC (Part 1): Fractal Design Core 500

The computer that I bought roughly a year ago has seen quite a few revisions already. But I am not talking about the core hardware – although I switched the GPU at one point. I mean the case. I wanted to go with something small from the start, so the basis is a mini ITX mainboard. However, I have not been incredibly happy with any of the cases so far. In this first installment in a series of several blog posts, one for each computer case, I will share my experiences in building a small, attractive, performant, and yet price-efficient computer. I will cover design, hardware compatibility, pricing, and availability. Unlike the YouTube tech creators, not everybody has a seemingly unlimited budget or receives hardware from the manufacturers for reviews or showcases. It may look easy in all those YouTube videos, but it might not be for everyone.

Although I am mainly talking about gaming hardware, the same thoughts also apply to compact office PCs or workstations. Depending on the use case, i.e., which PC component requires the most focus, one or the other might become less or more relevant. So, first off is the Fractal Design Core 500.


Terraform Azure Error: parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1; Failed to load token files

There are some instances where I have managed to screw up my Azure CLI configuration file with Terraform. It must have something to do with parallel usage of Terraform or Terraform simultaneously with the az tool. Either way, I ran into the following error.

$ terraform refresh
Acquiring state lock. This may take a few moments...

Error: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1

  on line 16, in provider "azurerm":
  16: provider "azurerm" {

I wondered: "What might block the Azure access? Am I maybe not logged in?" So, I went ahead and tried to log in.

$ az login
Failed to load token files. If you have a repro, please log an issue
at At the same time, you 
can clean up by running 'az account clear' and then 'az login'. 

(Inner Error: Failed to parse /home/rlo/.azure/accessTokens.json with exception: Extra data: line 1 column 18614 (char 18613))

The error probably comes from parallel access to my Azure CLI configuration file. When I opened the /home/rlo/.azure/accessTokens.json, I found some dangling garbage at the end of it that broke the JSON format.

Here’s a snippet of the last few lines.

        "refreshToken": "0.A...",
        "oid": "<oid>",
        "userId": "<userId>",
        "isMRRT": true,
        "_clientId": "<clientId>",
        "_authority": "<uid>"

I took out the trash bc1"}], saved the file, and it worked again. Many access to resources. Such joy 😉

Qt5 QtCreator Error on Linux: stddef.h: No such file or directory – Code model could not parse an included file

The following is an error that has shown itself every time I have installed the Qt5 framework and the QtCreator development environment on a Linux based machine. It never mattered which flavor of Linux; QtCreator always showed this error.

Warning: The code model could not parse an included file, which might lead to incorrect code completion and highlighting, for example. 

fatal error: 'stddef.h' file not found 
note: in file included from /home/rlo/Code/C++/WorkTracker2/WorkTracker2Shared/src/data/taskrepository.h:1: 
note: in file included from /home/rlo/Code/C++/WorkTracker2/WorkTracker2Shared/src/data/taskrepository.h:3: 
note: in file included from /usr/include/c++/9/optional:38: 
note: in file included from /usr/include/c++/9/stdexcept:38: 
note: in file included from /usr/include/c++/9/exception:143: 
note: in file included from /usr/include/c++/9/bits/exception_ptr.h:38: 

Although that message never caused any issues compiling the code, I found it rather annoying, and at some point, annoying enough to search for a solution.

As it turns out, this message appears when you have Clang libraries installed. QtCreator detects that and automatically uses Clang to parse the source code and provide inline error messages and code completion.

You can get rid of this error when you explicitly add the STL header files’ include-path to your project. In my case, I have added the following to my *.pro file.

unix {
    INCLUDEPATH += /usr/lib/gcc/x86_64-linux-gnu/9/include
}

Azure PostgreSQL Error: PSQLException The connection attempt failed

A few days ago at work, I was investigating a strange issue where one of our services could not connect to the Azure Managed PostgreSQL Database from the Kubernetes cluster. Oddly enough, other services of that cluster did not exhibit this behavior.

org.postgresql.util.PSQLException: The connection attempt failed.
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl( ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.ConnectionFactory.openConnection( ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.jdbc.PgConnection.<init>( ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.Driver.makeConnection( ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.Driver.connect( ~[postgresql-42.2.16.jar!/:42.2.16]
        at ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.Launcher.launch( ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.Launcher.launch( ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
        at org.springframework.boot.loader.JarLauncher.main( ~[ehg-hermes.jar:0.13.0-SNAPSHOT]
Caused by: null
        at org.postgresql.core.PGStream.receiveChar( ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.enableGSSEncrypted( ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect( ~[postgresql-42.2.16.jar!/:42.2.16]
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl( ~[postgresql-42.2.16.jar!/:42.2.16]
        ... 46 common frames omitted

As it turns out, it was an issue with the PSQL JDBC driver version that comes bundled with Spring Boot version 2.3.4.RELEASE. All the other services were still built with a slightly older release and therefore used an older PSQL JDBC driver.

The key indicator of what is going on is this method call from the stack trace.

org.postgresql.core.v3.ConnectionFactoryImpl.enableGSSEncrypted
A bit of research led me to a question on StackOverflow that pointed me in the right direction, and ultimately I ended up on Microsoft’s Azure documentation. If you scroll down, you will find a section named "GSS error".

The solution to this problem is simple. If you do not want to or cannot change the Spring Boot or PSQL JDBC driver version, e.g., because of automated CVE scans that break builds (the reason we upgraded this one service), you can solve it with a configuration change: append gssEncMode=disable to the JDBC connection string.

Example: jdbc:postgresql://
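Since the example URL above is cut off, here is a hedged sketch of what a full connection string could look like. The server and database names are hypothetical; only the gssEncMode=disable parameter is the actual fix discussed in this post, and sslmode=require is an assumption based on Azure managed PostgreSQL typically enforcing TLS.

```java
public class JdbcUrl {
    public static void main(String[] args) {
        // Hypothetical Azure PostgreSQL host and database name.
        String base = "jdbc:postgresql://myserver.postgres.database.azure.com:5432/mydb";
        // gssEncMode=disable skips the GSS encryption negotiation that newer
        // PSQL JDBC drivers attempt; sslmode=require is assumed for Azure.
        String url = base + "?gssEncMode=disable&sslmode=require";
        System.out.println(url);
    }
}
```

In a Spring Boot service, the same string would typically go into the spring.datasource.url property instead of being built in code.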