Last week, I demonstrated that OpenJDK is faster than GraalVM Java, at least for obtaining the Java version. This even prompted the mighty Thomas Wuerthinger (creator of GraalVM) to react. But the measured ~20ms for the OpenJDK is still too slow for applications like execjar, where it could significantly increase the runtime of short-running CLI tools. In this week’s brief blog post, I’ll show you the fastest way to access the Java version.
The main performance issue is that calling java -version creates a process with a fairly large (around 38MB) maximum resident set size and runs a full command-line parser. But do we actually need to call the java binary at all to get the version?
TL;DR: I created the java-version tool, which can obtain the Java version in under a millisecond.
Basic Idea
No, we don't: most Java installations ship a release file that contains the relevant information in a machine-readable format. You can find this file in the main folder of the installation (./release when java is in ./bin).
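To give you an idea of how simple this is, here is a minimal sketch that reads the version out of the release file, which consists of KEY="value" lines (the path is just an example, and the actual java-version tool doesn't start a JVM, as that would defeat the purpose; this Java sketch only illustrates the parsing idea):

import java.nio.file.Files;
import java.nio.file.Path;

public class ReleaseFileVersion {
    public static void main(String[] args) throws Exception {
        // e.g. /usr/lib/jvm/sapmachine-25 (example path)
        Path release = Path.of(args[0]).resolve("release");
        String version = Files.readAllLines(release).stream()
                .filter(line -> line.startsWith("JAVA_VERSION="))
                // strip the key and the surrounding quotes
                .map(line -> line.substring("JAVA_VERSION=".length())
                                 .replace("\"", ""))
                .findFirst()
                .orElse("unknown");
        System.out.println(version);
    }
}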
Well, we all know that the most crucial feature of the JVM runtime is its -version output. So how does the OpenJDK (in the form of SapMachine) compare with GraalVM? OpenJDK is significantly faster: using hyperfine, we can see that GraalVM 25 CE takes almost twice as long to emit the version number as a regular SapMachine 25 on my MacBook Pro M5:
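The benchmark invocation looked roughly like this (the installation paths are placeholders; the measured timings are omitted here):

> hyperfine 'sapmachine-25/bin/java -version' 'graalvm-ce-25/bin/java -version'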
The slowness of java -version was actually one of the performance issues of the tool I showcased in How to Build an Executable from a JAR using ExecJAR, as it originally used java -version a lot to check the Java version constraint.
Is this relevant? Not really. But neither are most microbenchmarks, or benchmarks in general, when taken out of context. You should not generalize from small benchmarks; modern systems are complex.
Join me next week to learn how to check the version of a Java installation even faster, in under one millisecond:
P.S.: I just ran some more benchmarks: OpenJDK 25 is 18% faster than OpenJDK 17 and 21 and a whopping 84% faster than OpenJDK 11. Upgrade now!
P.P.S.: As many people (Thomas Wuerthinger, Fabio Niephaus, Volker Simonis, and multiple of my SapMachine colleagues) pointed out, the differences between OpenJDK and GraalVM are due to GraalVM initializing the JVM Compiler Interface (JVMCI). The difference between the two becomes negligible when running OpenJDK with JVMCI enabled (which initializes this JIT interface at startup):
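For reference, enabling JVMCI on a stock OpenJDK looks something like this (my assumption of the flags involved; the exact set may vary by JDK version):

> java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -version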
In my last blog post, I covered a new tool called jstall, which enables you to quickly check on a Java application. Because it was tiresome to always call the tool via java -jar jstall, I looked for a way to create executables directly from JARs, inspired by async-profiler’s build system. And I, of course, went down a rabbit hole. In this blog post, I’ll show you how to use execjar to easily create your own executable JARs that you can execute directly on the command line while still being valid JARs.
TL;DR: execjar is a CLI and Maven plugin that enables you to create executables from JARs by just adding a few lines to your Maven file:
Welcome to the last blog post of the year. Last week, I discussed the limitations of custom JFR events. This week, I’ll also be covering a profiling-related topic and showcasing a tiny tool called JStall.
I hope I’m not the only one who sometimes wonders, “What is my Java application doing right now?”, when it doesn’t produce any output. Yes, you could perform a simple thread dump via jstack, but it is hard to tell which threads are actually consuming CPU and making any sort of progress. This is where my tiny tool called JStall comes in:
JStall is a small command-line tool for one-shot inspection of running JVMs using thread dumps and short, on-demand profiling. The tool essentially takes multiple thread dumps of your application and uses the per-thread cpu-time information to find the most CPU-time-consuming Java threads.
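To illustrate the ranking idea, here is a self-contained sketch (not JStall’s actual code; JStall attaches to other JVMs, while this sketch only inspects its own JVM): sample per-thread CPU time twice and rank threads by the difference.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class MostActiveThreads {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.getAllThreadIds();
        long[] before = new long[ids.length];
        for (int i = 0; i < ids.length; i++) {
            before[i] = bean.getThreadCpuTime(ids[i]); // -1 if unavailable
        }
        Thread.sleep(1000); // the sampling interval
        for (int i = 0; i < ids.length; i++) {
            long after = bean.getThreadCpuTime(ids[i]);
            if (before[i] < 0 || after < 0) continue; // thread gone
            long deltaNanos = after - before[i];
            ThreadInfo info = bean.getThreadInfo(ids[i]);
            if (info != null && deltaNanos > 0) {
                System.out.printf("%s: %.1f ms CPU in the last second%n",
                        info.getThreadName(), deltaNanos / 1e6);
            }
        }
    }
}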
First, download the JStall executable from the GitHub releases page. Let us then start by finding the currently running JVMs:
> ./jstall
Usage: jstall <command> <pid|files> [options]
Available commands:
status - Show overall status (deadlocks + most active threads)
deadlock - Check for deadlocks
most-work - Show threads doing the most work
flame - Generate flame graph
threads - List all threads
Available JVMs:
7153 ./jstall
1223 <unknown>
8136 ./renaissance-gpl-0.16.0.jar
6138 org.jetbrains.idea.maven.server.RemoteMavenServer36
5597 DeadlockDemo
49294 com.intellij.idea.Main
This provides us with a list of the available commands, as well as a list of JVM processes and their corresponding main classes or JARs. Let’s start by checking for deadlocks:
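Based on the usage output above, checking the DeadlockDemo process could look like this (a hypothetical invocation; the output is omitted here):

> ./jstall deadlock 5597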
As a profiler, JDK Flight Recorder (JFR) provides support for custom events. Around two years ago, I wrote a blog post on this very topic: Custom JFR Events: A Short Introduction. These custom events are beneficial because they enable us to record additional project-specific information alongside the standard JFR events, all in the same file. We can then view and process this information with the JFR tools. You can freely specify these events in Java.
There is only one tiny problem nobody talks about: array support (and, more generally, support for complex types).
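To make the problem concrete, here is a hypothetical custom event with an array field; custom JFR events only support primitive-like field types and String, so the array field below won’t make it into the recording:

import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("example.BatchProcessed")
@Label("Batch Processed")
class BatchProcessedEvent extends Event {
    @Label("Batch Name")
    String batchName;   // supported: primitives and String

    @Label("Item IDs")
    long[] itemIds;     // arrays (and other complex types) are not supported
}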
Over a year ago, I wrote a blog post called Who instruments the instrumenters? together with Mikaël Francoeur on how we debugged the Java instrumentation code. In the meantime, I gave a more detailed talk on this topic at VoxxedDays Amsterdam. The meta-agent that I developed for this worked well for Java agents/instrumenters, but what about native agents? Marco Sussitz found my agent and asked exactly this question. Native agents are agents that utilize the JVMTI API to, for example, modify class bytecode; however, they are not written in Java. With this blog post, I’m proud to announce that the meta-agent now supports instrumenting native agents.
TL;DR: Meta-agent allows you to see how an agent, native or Java, transforms bytecode.
There are many examples of native agents, like Dynatrace’s monitoring agent or async-profiler’s method tracer. I’m using the latter in my example here, as it’s open-source and readily available. The method tracer instruments the Java bytecode to trace the execution time of specific methods. You can find more about it in the async-profiler forum.
As a sample program, we use Loop.java:
public class Loop {
    public static void main(String[] args)
            throws InterruptedException {
        while (true) Thread.sleep(1000);
    }
}
Let’s trace the Thread.sleep method and use the meta-agent to see what async-profiler does with the bytecode:
This opens a server at localhost:7071 and we check how async-profiler modified the Thread class:
So we can now instrument native agents like any other Java agent. And the best part: as all Java agents are built on top of the libinstrument native agent, we can also see what any Java agent is doing. For example, we can see that the Java instrumentation agent instruments itself:
So I finally built an instrumenter that can essentially instrument my instrumentation agent, which in turn instruments other instrumentation agents. Another benefit is that the instrumenter can find every modification of any Java agent.
In my previous post, I showed you how tricky it is to compare objects from the JFR Java API. You probably wondered why I wrote about this topic. Here is the reason: In this blog post, I’ll cover how to load JFR files into a DuckDB database to allow querying profiling data with simple SQL queries, all JFR views included.
This blog post will start a small series on making JFR quack.
TL;DR
You can now use a query tool (via GitHub) to transform JFR files into similarly sized DuckDB files:
CREATE VIEW "hot-methods" AS
SELECT
(c.javaName || '.' || m.name || m.descriptor) AS "Method",
COUNT(*) AS "Samples",
format_percentage(COUNT(*) / (SELECT COUNT(*) FROM ExecutionSample)) AS "Percent"
FROM ExecutionSample es
JOIN Method m ON es.stackTrace$topMethod = m._id
JOIN Class c ON m.type = c._id
GROUP BY es.stackTrace$topApplicationMethod, c.javaName, m.name, m.descriptor
ORDER BY COUNT(*) DESC
LIMIT 25
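Assuming the tool wrote its result to a file called recording.db, you can then query the view with the DuckDB CLI like any other table:

> duckdb recording.db 'SELECT * FROM "hot-methods" LIMIT 5'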
In the last blog post, I showed you how to silence JFR’s startup messages. This week’s blog post is also related to JFR, and no, it’s not about the JFR Events website, which got a simple search bar. It’s a short blog post on comparing objects from JFR recordings in Java and why this is slightly trickier than you might have expected.
Example
Getting a JFR recording is simple; just use the RecordingStream API. We do this in the following to record an execution trace of a tight loop using JFR and store it in a list:
List<RecordedEvent> events = new ArrayList<>();
// Know when to stop the loop
AtomicBoolean running = new AtomicBoolean(true);
// We obtain one hundred execution samples
// that all have the same stack trace
final long currentThreadId = Thread.currentThread().threadId();
try (RecordingStream rs = new RecordingStream()) {
    rs.enable("jdk.ExecutionSample").with("period", "1ms");
    rs.onEvent("jdk.ExecutionSample", event -> {
        if (event.getThread("sampledThread")
                .getJavaThreadId() != currentThreadId) {
            return; // don't record other threads
        }
        events.add(event);
        if (events.size() >= 100) {
            // we can signal to stop
            running.set(false);
        }
    });
    rs.startAsync();
    int i = 0;
    while (running.get()) { // some busy loop to produce samples
        for (int j = 0; j < 100000; j++) {
            i += j;
        }
    }
    rs.stop();
}
[0.172s][info][jfr,startup] Started recording 1. No limit specified, using maxsize=250MB as default.
[0.172s][info][jfr,startup]
[0.172s][info][jfr,startup] Use jcmd 29448 JFR.dump name=1 to copy recording data to file.
Why do these messages appear when starting the Flight Recorder with -XX:StartFlightRecorder, even though the default logging level is warning, not info?
This is what this week’s blog post is all about. After I showed you last week how to waste CPU like a Professional, this week I’ll show you how to silence JFR. Back to the problem:
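Spoiler for the impatient: explicitly configuring the log level for the jfr+startup tags silences the messages. My assumption of what that looks like:

> java -XX:StartFlightRecorder -Xlog:jfr+startup=warning -version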
As a short backstory, my profiler needed a test to check that the queue size of the sampler really increased dynamically (see Java 25’s new CPU-Time Profiler: Queue Sizing (3)), so I needed a way to let a thread spend a pre-defined number of seconds running natively on the CPU. You can find the test case in its hopefully final form here, but be aware that writing such cases is more complicated than it looks.
So here we are: in need of a way to properly waste CPU time, preferably in user-land, for a fixed amount of time. The problem: there are only a few scant resources online, so I decided to create my own. I’ll show you seven different ways to implement a simple
void my_wait(int seconds);
method that works on both macOS and Linux, and you’ll learn far more about this topic than you ever wanted to. All the code is MIT-licensed; you can find it on GitHub in my waste-cpu-experiments repository, alongside some profiling results.
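To make the goal concrete, here is the most naive variant, sketched in Java for consistency with the rest of this blog (the seven actual implementations are in C):

public class Waster {
    // Naive sketch: spin in user-land until `seconds` of wall-clock
    // time have passed; returning the counter keeps the JIT from
    // removing the loop entirely.
    static long myWait(int seconds) {
        long deadline = System.nanoTime() + seconds * 1_000_000_000L;
        long counter = 0;
        while (System.nanoTime() < deadline) {
            counter++;
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(myWait(2));
    }
}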
Welcome back to my blog, this time for a blog post on profiling your Java applications in Cloud Foundry and the tool I helped to develop to make it easier.
Cloud Foundry “is an open source, multi-cloud application platform as a service (PaaS) governed by the Cloud Foundry Foundation, a 501(c)(6) organization” (Wikipedia). It allows you to easily run your workloads in the cloud, including your Java applications. You just need to define a manifest.yml, for example:
But how would you profile this application? This and more is the topic of this blog post.
I will not discuss why you might want to use Cloud Foundry or how you can deploy your own applications. I assume you came this far in the blog post because you already have basic Cloud Foundry knowledge and want to learn how to profile your applications easily.
The Java Plugin
Cloud Foundry has a cf CLI with a proper plugin system and lots of plugins. A team at SAP, which included Tim Gerrlach, started developing the Java plugin many years ago. It’s a plugin offering utilities to gain insights into JVMs running in your Cloud Foundry app.
The changes I described in this blog post led to segfaults in tests, so I backtracked on them for now. Maybe I made a mistake implementing the changes, or my reasoning in the blog post is incorrect. I don’t know yet.
Should the queue implementation use Atomics and acquire-release semantics?
This is what we’ll cover in this short blog post. But first, to the rather fun topic:
Is it a Queue?
I always called the primary data structure a queue, but recently, I wondered whether this term is correct. But what is a queue?
Definition: A collection of items in which only the earliest added item may be accessed. Basic operations are add (to the tail) or enqueue and delete (from the head) or dequeue. Delete returns the item removed. Also known as “first-in, first-out” or FIFO.
Welcome back to my series on the new CPU-time profiler in Java 25. In the previous blog post, I covered the implementation of the new profiler. In this week’s blog post, I’ll dive deep into the central request queue, focusing on deciding its proper size.
The JfrCPUTimeTraceQueue allows the signal handler to record sample requests that the out-of-thread sampler and the safepoint handler process. So it’s the central data structure of the profiler:
This queue is thread-local and pre-allocated, as it’s used in the signal handler, so the correct sizing is critical:
If the size is too small, you’ll lose many samples because the signal handler can’t record sample requests.
If you size it too large, you waste lots of memory. A sampling request is 48 bytes, so a queue with 500 elements (currently the default) requires 24kB. This adds up fast if you have more than a few threads.
So, in this blog post, we’re mainly concerned about setting the correct default size and discussing a potential solution to the whole problem.
I developed, together with others, the new CPU-time profiler for Java, which is now included in JDK 25. A few weeks ago, I covered the profiler’s user-facing aspects, including the event types, configuration, and rationale, alongside the foundations of safepoint-based stack walking in JFR (see Taming the Bias: Unbiased Safepoint-Based Stack Walking). If you haven’t read those yet, I recommend starting there. In this week’s blog post, I’ll dive into the implementation of the new CPU-time profiler.
It was a remarkable coincidence that safepoint-based stack walking made it into JDK 25. Thanks to that, I could build on top of it without needing to re-implement:
The actual stack walking given a sampling request
Integration with the safepoint handler
Of course, I worked on this before, as described in Taming the Bias: Unbiased Safepoint-Based Stack Walking. But Erik’s solution for JDK 25 was much more complete and profited from his decades of experience with JFR. In March 2025, whether the new stack walker would get into JDK 25 was still unclear. So I came up with other ideas (which I’m glad I didn’t need). You can find that early brain-dump in Profiling idea (unsorted from March 2025).
In this post, I’ll focus on the core components of the new profiler, excluding the stack walking and safepoint handler. Hopefully, this won’t be the last article in the series; I’m already researching the next one.
Main Components
There are a few main components of the implementation that come together to form the profiler:
Ever wondered how the views of the jfr tool are implemented? There are views like hot-methods, which gives you the most-sampled methods, or cpu-load-samples, which gives you the system load over time, and you can use them directly on the command line:
> jfr view cpu-load-samples recording.jfr
CPU Load
Time JVM User JVM System Machine Total
------------------ ------------------ -------------------- -----------------------
14:33:29 8,25% 0,08% 29,65%
14:33:30 8,25% 0,00% 29,69%
14:33:31 8,33% 0,08% 25,42%
14:33:32 8,25% 0,08% 27,71%
14:33:33 8,25% 0,08% 24,64%
14:33:34 8,33% 0,00% 30,67%
...
This is helpful when glancing at JFR files and trying to roughly understand their contents, without loading the files directly into more powerful, but also more resource-hungry, JFR viewers.
In this short blog post, I’ll show you how the views work under the hood using JFR queries and how to use the queries with my new experimental JFR query tool.
I didn’t forget the promised blog post on implementing the new CPU-time profiler in JDK 25; it’ll come soon.
Under the hood, JFR views use a built-in query language to define all views in the view.ini file. The above is, for example, defined as:
More than three years in the making, with a concerted effort starting last year, my CPU-time profiler landed in Java with OpenJDK 25. It’s an experimental new profiler/method sampler that helps you find performance issues in your code, having distinct advantages over the current sampler. This is what this week’s and next week’s blog posts are all about. This week, I will cover why we need a new profiler and what information it provides; next week, I’ll cover the technical internals that go beyond what’s written in the JEP. I will quote the JEP 509 quite a lot, thanks to Ron Pressler; it reads like a well-written blog post in and of itself.
Before I show you its details, I want to focus on what the current default method profiler in JFR does:
Smartphones are more powerful than ever, with processors rivaling old laptops. So let’s try to use them like a laptop to develop web applications on the go. In this week’s blog post, I’ll show you how to run and develop a CAP Java Spring Boot application on your smartphone, and how to run VSCode locally to develop and modify it. This, of course, works only on Android phones, as they are Linux at their core.
If you’re here for eBPF content, this blog post is not for you. I recommend reading an article on a concurrency fuzzing scheduler at LWN.
Ever wonder how the JDK Flight Recorder (JFR) keeps track of the classes and methods it has collected for stack traces and more? In this short blog post, I’ll explore JFR tagging and how it works in the OpenJDK.
Tags
JFR files consist of self-contained chunks. Every chunk contains:
The maximum chunk size is usually 12MB, but you can configure it:
java -XX:FlightRecorderOptions:maxchunksize=1M
Whenever JFR collects methods or classes, it has to somehow tell the JFR writer which entities have been used so that their mapping can be written out. Each entity also has to have a tracing ID that can be used in the events that reference it.
This is where JFR tags come in. Every class, module, and package entity has a 64-bit value called _trace_id (e.g., classes), which consists of both the ID and the tag. Every method has an _orig_method_idnum, which is essentially its ID, and a trace flag, which is essentially the tag.
In a world without any concurrency, the tag could just be a single bit telling us whether an entity is used. But in reality, an entity can be used in the new chunk while we’re writing out the old chunk. So we need two distinct periods (0 and 1) and have to toggle between them whenever we write a chunk.
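A minimal sketch of this two-period scheme (my simplification, not the actual OpenJDK layout) could reserve two bits of the trace ID as per-period “used” flags:

final class TraceTag {
    static final long PERIOD_0_BIT = 0b01;
    static final long PERIOD_1_BIT = 0b10;

    static long bit(int period) {
        return period == 0 ? PERIOD_0_BIT : PERIOD_1_BIT;
    }

    // Has the entity already been used in the given period?
    static boolean isTagged(long traceId, int period) {
        return (traceId & bit(period)) != 0;
    }

    // Mark the entity as used in the given period.
    static long tag(long traceId, int period) {
        return traceId | bit(period);
    }

    // Reset the tag after the chunk for this period has been written.
    static long clear(long traceId, int period) {
        return traceId & ~bit(period);
    }
}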
Tagging
We can visualize the whole life cycle of a tag for a given entity:
In this example, the entity, a class, is brought into JFR by the method sampler (link) while walking another thread’s stack. This causes the class to be tagged and enqueued in the internal entity queue (and is therefore known to the JFR writer) if it hasn’t been tagged before (source):
This shows that tagging also prevents entities from being duplicated in a chunk.
When a chunk is written out, a safepoint is first requested to initialize the next period (the next chunk) and toggle the current period, so that subsequent uses of an entity belong to the new period and chunk. Then, the entity is written out, and its tag for the previous period is reset (code). This enables the aforementioned concurrency.
But how does it ensure that the tagged classes aren’t unloaded before they are emitted? By writing out the classes when any class is unloaded. This is simple yet effective and doesn’t need any change in the GC.
Conclusion
Tagging is used in JFR to properly record classes, methods, and other entities while also preventing them from accidentally being garbage-collected before they are written out. It is a simple but memory-efficient solution. It works well in the context of concurrency, but assumes entities are used in event creation directly when tagging them. Tagging entities and then pushing them into a queue to create events asynchronously later is not supported; this would probably require something akin to reference counting.
Thanks for coming this far in a blog post on a profiling-related topic. I chose this topic because I wanted to know more about tagging and plan to do more of these short OpenJDK-specific posts.
This week, a short blog post on a question that bothered me recently: how can I get the operating system’s thread ID for a given Java thread? This is useful when you want to deal with Java threads using native code (foreshadowing another blog post). The question has been asked countless times on the internet, but I couldn’t find a comprehensive collection of answers, so here’s my take. But first, some background:
Background
In Java, normal threads are mapped 1:1 to operating system threads. This is not the case for virtual threads, which are multiplexed onto a smaller number of carrier threads, but we ignore those here for simplicity.
But what is an operating system thread? An operating system thread is an operating system task that shares the address space (and more) with other thread tasks of the same process/thread group. The main thread is the thread group leader; its operating system ID is the same as the process ID.
Be aware that the Java thread ID is not related to the operating system ID but rather to the Java thread creation order. Now, what different options do we have to translate between the two?
Different Options
During my research, I found three different mechanisms:
Using the gettid() method
Using JFR
Parsing thread dumps
In the end, I found that option 3 is best; you’ll see why in the following.
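Here is a minimal sketch of option 3, parsing the nid=0x... entries from a jstack thread dump (assumptions: a JDK with jstack on the PATH and HotSpot’s dump format; for simplicity, it dumps its own process):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OsThreadIds {
    public static void main(String[] args) throws Exception {
        long pid = ProcessHandle.current().pid();
        Process jstack = new ProcessBuilder("jstack", Long.toString(pid)).start();
        // Thread dump lines look like: "main" #1 ... nid=0x1603 ...
        Pattern pattern = Pattern.compile("^\"([^\"]+)\".*nid=(0x[0-9a-f]+)");
        Map<String, Long> nids = new LinkedHashMap<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(jstack.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                Matcher m = pattern.matcher(line);
                if (m.find()) {
                    nids.put(m.group(1), Long.decode(m.group(2)));
                }
            }
        }
        nids.forEach((name, nid) ->
                System.out.println(name + " -> OS thread id " + nid));
    }
}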
I’ve been working too much on other stuff and didn’t have time to blog properly, so here is a tiny post.
Java annotations are pretty nice: you can annotate many things to add more information. For example, you can add @Nullable to a type to tell static analyzers or IDEs that the value of this type might actually be null:
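For example (a hypothetical snippet, assuming a TYPE_USE-capable annotation like JSpecify’s @Nullable):

import org.jspecify.annotations.Nullable;

class User {
    @Nullable String nickname; // tools now know this may be null

    int nicknameLength() {
        // static analyzers will flag a missing null check here
        return nickname == null ? 0 : nickname.length();
    }
}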
There are many other uses, especially in adding information needed for code generation. While working on hello-ebpf, I used annotations and generated code containing annotations with JavaPoet. When we generate the code from above with JavaPoet, it produces:
This denotes a two-dimensional array of strings that might itself be null, whose sub-arrays might be null, and whose elements might be null. This matches the language specification:
There is even an example in the specification that is similar to our example:
For example, given the field declaration:
@Foo int f;
@Foo is a declaration annotation on f if Foo is meta-annotated by @Target(ElementType.FIELD), and a type annotation on int if Foo is meta-annotated by @Target(ElementType.TYPE_USE). It is possible for @Foo to be both a declaration annotation and a type annotation simultaneously.
Type annotations can apply to an array type or any component type thereof (§10.1). For example, assuming that A, B, and C are annotation interfaces meta-annotated with @Target(ElementType.TYPE_USE), then given the field declaration:
@C int @A [] @B [] f;
@A applies to the array type int[][], @B applies to its component type int[], and @C applies to the element type int. For more examples, see §10.2.
An important property of this syntax is that, in two declarations that differ only in the number of array levels, the annotations to the left of the type refer to the same type. For example, @C applies to the type int in all of the following declarations:
Java never stops surprising me. This syntax looked weird when I first stumbled upon it, but after looking through the language specification, I see how useful and justified this placement of annotations is.
I hope you enjoyed this tiny blog post on annotations; see you in my next one.