[dev nexus 2024] a glance at the java performance toolbox

Speaker: Ana Maria Mihalceanu

@ammbra1508

For more, see the 2024 DevNexus Blog Table of Contents


What is performance

  • From the user's POV: how much work the system can do in a reasonable amount of time
  • From the business's POV: the cost in computational resources needed to provide that user experience

Cloud

  • Practically unlimited resources
  • Reasonable cost

Container images

  • Tools to build container images – docker, jib, kaniko, buildah, etc.
  • It all started with a Dockerfile
  • Other tools arrived later to make this easier (see the build commands after this list)
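
For reference, this is roughly what two of those options look like from the command line; the image name and project layout are placeholders, and the Jib command assumes the jib-maven-plugin is configured in the pom:

    # classic route: a Dockerfile in the project root, built with the Docker CLI
    docker build -t performance-glance:latest .

    # Jib builds the image straight from the Maven build (jib:dockerBuild targets the
    # local Docker daemon; jib:build pushes to a registry without needing a daemon)
    mvn compile jib:dockerBuild -Dimage=performance-glance:latest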

JLink

  • A standalone JRE is no longer shipped as of Java 11
  • Can use jlink to build a custom runtime image with just the modules you need (example after this list)
  • Can also omit man pages and header files
  • The zip-9 level of --compress offers the best compression
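
A minimal jlink invocation along these lines; the module list and output directory are assumptions, and the zip-N levels need a recent JDK (older releases use numeric levels such as --compress=2):

    # build a trimmed runtime image containing only the modules the app needs
    jlink --add-modules java.base,java.logging,jdk.jcmd \
          --no-man-pages --no-header-files \
          --compress=zip-9 \
          --output custom-runtime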

Fine Tuning JVM Flags

  • Ergonomics docs describe the process by which the JVM/GC tunes performance measures
  • Tune min/max heap size with -Xms and -Xmx
  • Consider the Java heap ratio (example flags after this list)
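
A rough sketch of these flags, assuming the "heap ratio" point maps to ratio-style sizing flags such as -XX:MaxRAMPercentage; the sizes and jar name are made up:

    # pin the initial and maximum heap to the same size to avoid resize churn
    java -Xms512m -Xmx512m -jar app.jar

    # in a container it is often easier to size the heap as a ratio of available memory
    java -XX:MaxRAMPercentage=75.0 -jar app.jar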

jcmd

  • Tracks native memory (tracking must be enabled with -XX:NativeMemoryTracking)
  • You want it available inside the container
  • Add the jdk.jcmd module to the application's runtime image (workflow sketch after this list)
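
A sketch of the native memory workflow; the PID 1234 and jar name are placeholders:

    # native memory tracking must be enabled when the JVM starts
    java -XX:NativeMemoryTracking=summary -jar app.jar

    # jcmd with no arguments lists the Java processes it can see
    jcmd

    # ask the running process for a native memory summary
    jcmd 1234 VM.native_memory summary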

Other commands/tools

  • Use jinfo to see what flags are used in the app (example commands after this list)
  • Helps when you don't know all the flag names
  • Look at the amount of memory reserved versus the amount used
  • Look for big values
  • JConsole – can see graph of memory use
  • jstat – garbage collection statistics
  • jmap – histogram of heap summary
  • Profiling with Java Flight Recorder – use it when looking for something specific, not all the time. Needs the jdk.jfr module. Can specify how long to record.
  • Prometheus server – monitors/alerts on events
  • JFR Streaming – sends metrics to a monitoring service
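
Example invocations for a few of these; the PID 1234 and the recording file name are placeholders:

    # flags in effect for a running JVM
    jinfo -flags 1234

    # GC utilization statistics, sampled every second
    jstat -gcutil 1234 1000

    # class histogram of the live heap
    jmap -histo:live 1234

    # start a 60-second flight recording and dump it to a file
    jcmd 1234 JFR.start duration=60s filename=recording.jfr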

Sample app for testing at https://github.com/ammbra/performance-glance

My take

Good information and a good demo. It was nice seeing the commands actually get used; it was clear how to apply them.

[dev nexus 2024] Table of Contents

This is DevNexus’ 20th anniversary!

Wednesday

Thursday

mutation testing from a five year old (according to chatgpt)

I asked ChatGPT to explain mutation testing to a five year old to see what it would say. I got this back:

Alright, imagine you have a big puzzle made of Lego blocks. Each block represents a piece of code that makes up a computer program. Now, let’s say we want to make sure that our puzzle is put together correctly so that our program works properly.

Mutation testing is like having a mischievous Lego wizard who comes and changes some of the pieces in our puzzle to see if we notice. He might swap a blue block for a red one, or turn a block upside down. After he’s done, we carefully check our puzzle again to see if it still looks right and if our program still works.

If we can still solve the puzzle and our program still works even with the changes the wizard made, then we know our puzzle – or our program – is really strong. But if we can’t solve the puzzle or our program stops working, then we know there’s a problem and we need to fix it. So, mutation testing helps us make sure our programs are really robust and can handle unexpected changes.

It's a good example of how ChatGPT is an aide and not a solver. The first paragraph is good. In the second, I love the mischievous Lego wizard who comes and changes some of the pieces to see if we notice. It's a good premise: we see if our program still works with the changes.

The third paragraph leads us astray. If we notice the change, that is good. If the program stops working, that is also good; it means we noticed the change. Mutation testing is about having good tests that can detect change, not about having a robust program with unnecessary code that behaves the same way when changed.
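
In the Java world, that mischievous wizard is usually automated with a mutation testing tool such as PIT (not something the ChatGPT answer mentions). On a Maven project, a run looks roughly like this, assuming the pitest-maven plugin is configured for the build:

    # mutate the code, rerun the tests, and report which mutants survived
    # (a surviving mutant means the tests did not notice the change)
    mvn test org.pitest:pitest-maven:mutationCoverage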