creating a new junit 5 project in gradle in intellij

I haven’t created a new JUnit 5/Gradle project in IntelliJ in a long time. When doing so today, I ran into a number of problems. I wound up just starting over, but I’m writing up what I encountered/learned.

Java 16 vs 17

IntelliJ supports Java 17. When I set up Gradle, it told me that Gradle doesn’t yet support Java 17. I confirmed on the Gradle website that this is true. Ok, that’s fair. Java 17 isn’t fully released for another month. While there might be an early adopter package that does work with Java 17, my focus here is JUnit 5, not Java 17. So I went with Java 16.

String not found – “Package java.lang is declared in module java.base, which is not in the module graph”

This surprised me. I’m not using modules in this project. Or at least I don’t want to be, nor do I have a module-info.java file. I tried invalidating the IntelliJ cache and restarting the IDE. That didn’t work. I then tried deleting the .gradle and .idea directories and reopening IntelliJ. That worked. IntelliJ even asked if I wanted to trust the Gradle project as it recreated one.

This approach lost all my IntelliJ settings, including the Java version, Gradle version, and source/test directories.

Recreating the project

At this point, I decided to delete my build.gradle and settings.gradle files in addition to the .idea and .gradle directories. My plan was to start over. I then created a new Gradle project in the same directory. I created a dummy file and had IntelliJ create a JUnit 5 test for me. That updated the Gradle file in a way that the IDE was happy with. Now to commit to git lest anything else happen.
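
For reference, the generated test was nothing fancy. A minimal JUnit 5 test along these lines (class and method names are placeholders, not the exact code IntelliJ generated) is enough to get the Jupiter dependency wired into the Gradle build:

package com.example.project;

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// note the org.junit.jupiter imports; JUnit 4 tests import from org.junit instead
public class DummyTests {

	@Test
	void dummy() {
		assertEquals(2, 1 + 1);
	}
}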

ant and junit 5 – outputting test duration and failure to the log

In JUnit 5, you use the junitlauncher Ant task rather than the junit Ant task. Unfortunately, this isn’t a drop-in replacement. In addition to needing a workaround to set system properties, you also need a workaround to write the test results to the console log.

JUnit 4

With JUnit 4 and Ant, you got output that looked like this for each test run. In the console. Real time while the build was running. This was really useful.

[junit] Running com.javaranch.jforum.MissingHeadTagTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.685 sec

The problem

JUnit 5 itself provides a number of built-in options for this. However, the Ant integration renders them “not very useful” for a few reasons.

    1. I want to use the legacy listeners. For example:
      <listener type="legacy-xml" sendSysOut="true" sendSysErr="true" />
      <listener type="legacy-plain"sendSysOut="true"sendSysErr="true"/>
      
      

      However, if I enable sendSysOut and sendSysErr on them, all the listeners present redirect away from System.out. This means if I try to use the built-in LoggingListener, it also redirects to the file and I don’t see it in the console. So I have to choose between the legacy listeners and seeing real-time data.

    2. Ant seems to pass the fileset to the launcher as individual test classes. This means that running this code prints a summary for each test class rather than one summary for the whole run at the end.
      <junitlauncher haltOnFailure="true" printsummary="true">
           <classpath refid="test.classpath" />
           <testclasses outputdir="build/test-report">
      	<fileset dir="build/test">
      	  <include name="**/*Tests.class" />
              </fileset>
      </junitlauncher>
      

      The output looks like this. The summary tells me about each run, but without the test names, so it’s not that useful anyway.

      [junitlauncher] Test run finished after 117 ms
      [junitlauncher] [         3 containers found      ]
      [junitlauncher] [         0 containers skipped    ]
      [junitlauncher] [         3 containers started    ]
      [junitlauncher] [         0 containers aborted    ]
      [junitlauncher] [         3 containers successful ]
      [junitlauncher] [         0 containers failed     ]
      [junitlauncher] [        15 tests found           ]
      [junitlauncher] [         0 tests skipped         ]
      [junitlauncher] [        15 tests started         ]
      [junitlauncher] [         0 tests aborted         ]
      [junitlauncher] [        15 tests successful      ]
      [junitlauncher] [         0 tests failed          ]
      [junitlauncher] Test run finished after 14 ms
      [junitlauncher] [         2 containers found      ]
      [junitlauncher] [         0 containers skipped    ]
      [junitlauncher] [         2 containers started    ]
      [junitlauncher] [         0 containers aborted    ]
      [junitlauncher] [         2 containers successful ]
      [junitlauncher] [         0 containers failed     ]
      [junitlauncher] [        10 tests found           ]
      [junitlauncher] [         0 tests skipped         ]
      [junitlauncher] [        10 tests started         ]
      [junitlauncher] [         0 tests aborted         ]
      [junitlauncher] [        10 tests successful      ]
      [junitlauncher] [         0 tests failed          ]
    3. If I use junitlauncher’s haltOnFailure property, the build fails on the first failing test class rather than telling me about all of them. (I don’t use this feature anyway.)
    4. Custom extensions also have their System.out/System.err output redirected to the legacy listeners.

The solution

This feels like a hacky workaround. But I only have one project that uses Ant, so I don’t have to worry about duplicate code. The legacy listeners are useful, so I don’t want to get rid of them.

I wrote a custom listener that stores the results for each test class in memory and appends them to a file on disk as each test run finishes. Since Ant launches each test class separately, appending means the file ends up with all the data, not just the last class. Then Ant echoes the file at the end of the target.

<target name="test.junit.launcher" depends="compile">
		<junitlauncher haltOnFailure="false" printsummary="false">
			<classpath refid="test.classpath" />
			<testclasses outputdir="build/test-report">
				<fileset dir="build/test">
					<include name="**/*Tests.class" />
				</fileset>
				<listener type="legacy-xml" sendSysOut="true" sendSysErr="true" />
				<listener type="legacy-plain" sendSysOut="true" sendSysErr="true" />
				<listener classname="com.example.project.CodeRanchListener" />
			</testclasses>
		</junitlauncher>
		<loadfile property="summary" srcFile="build/status-as-tests-run.txt" />
		        <echo>${summary}</echo>
	</target>

And the listener:

package com.example.project;

import java.io.*;
import java.time.*;

import org.junit.platform.engine.*;
import org.junit.platform.engine.TestDescriptor.*;
import org.junit.platform.engine.TestExecutionResult.*;
import org.junit.platform.launcher.*;

public class CodeRanchListener implements TestExecutionListener {
	
	// buffer output in memory because System.out is redirected while the legacy listeners are enabled
	private StringWriter inMemoryWriter = new StringWriter();

	private int numSkippedInCurrentClass;
	private int numAbortedInCurrentClass;
	private int numSucceededInCurrentClass;
	private int numFailedInCurrentClass;
	private Instant startCurrentClass;

	private void resetCountsForNewClass() {
		numSkippedInCurrentClass = 0;
		numAbortedInCurrentClass = 0;
		numSucceededInCurrentClass = 0;
		numFailedInCurrentClass = 0;
		startCurrentClass = Instant.now();
	}

	@Override
	public void executionStarted(TestIdentifier testIdentifier) {
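		// a node whose parent is the Jupiter engine itself is a test class (top-level container)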
		if ("[engine:junit-jupiter]".equals(testIdentifier.getParentId().orElse(""))) {
			println("Ran " + testIdentifier.getLegacyReportingName());
			resetCountsForNewClass();
		}
	}

	@Override
	public void executionSkipped(TestIdentifier testIdentifier, String reason) {
		numSkippedInCurrentClass++;
	}

	@Override
	public void executionFinished(TestIdentifier testIdentifier, TestExecutionResult testExecutionResult) {
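		// when a test class (direct child of the engine) finishes, print its JUnit 4 style summary line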
		if ("[engine:junit-jupiter]".equals(testIdentifier.getParentId().orElse(""))) {
			int totalTestsInClass = numSucceededInCurrentClass + numAbortedInCurrentClass
					+ numFailedInCurrentClass + numSkippedInCurrentClass;
			Duration duration = Duration.between(startCurrentClass, Instant.now());
			double numSeconds = duration.toMillis() / 1000.0; // use the whole duration, not just the nanosecond fraction
			String output = String.format("Tests run: %d, Failures: %d, Aborted: %d, Skipped: %d, Time elapsed: %f sec",
					totalTestsInClass, numFailedInCurrentClass, numAbortedInCurrentClass,
					numSkippedInCurrentClass, numSeconds);
			println(output);

		}
		// don't count containers since looking for legacy JUnit 4 counting style
		if (testIdentifier.getType() == Type.TEST) {
			if (testExecutionResult.getStatus() == Status.SUCCESSFUL) {
				numSucceededInCurrentClass++;
			} else if (testExecutionResult.getStatus() == Status.ABORTED) {
				println("  ABORTED: " + testIdentifier.getDisplayName());
				numAbortedInCurrentClass++;
			} else if (testExecutionResult.getStatus() == Status.FAILED) {
				println("  FAILED: " + testIdentifier.getDisplayName());
				numFailedInCurrentClass++;
			}
		}
	}
	
	private void println(String str) {
		inMemoryWriter.write(str + "\n");
	}
	
	/*
	 * Append to the file on disk since the listener can't write to System.out (because the legacy listeners are enabled)
	 */
	private void flushToDisk() {
		try (FileWriter writer = new FileWriter("build/status-as-tests-run.txt", true)) {
			writer.write(inMemoryWriter.toString());
		} catch (IOException e) {
			throw new UncheckedIOException(e);
		}
	}

	@Override
	public void testPlanExecutionFinished(TestPlan testPlan) {
		flushToDisk();
	}
}


JUnit 5 – assertAll

I gave a half-day JUnit 5 workshop at DevNexus last week. When I covered assertAll(), an interesting piece of code came up. In JUnit 5, assertAll() runs all of the assertions it contains so you can see every failure at once, rather than just the first. This replaces the need for AssertJ’s SoftAssertions.

So how many assertions do you think this will fail with?

@Test
public void all() {
  assertAll("message", () -> {
    assertEquals(1, 2);
    assertEquals(3, 4);
  });
}

The fact that this is a blog post should get you nice and suspicious. The answer is one:

org.opentest4j.MultipleFailuresError: message (1 failure)
expected: <1> but was: <2>

Why is this? Well, remember that assertAll() takes a vararg of Executables (lambdas in this case). However, the above code only has one lambda. So JUnit runs the two asserts in the same block sequentially: the first assertEquals throws, so the second never runs and only one failure gets reported.

To fix this, you need to pass a separate lambda for each assertion.

@Test
public void all() {
  assertAll("message",
    () -> assertEquals(1, 2),
    () -> assertEquals(3, 4));
}

Ah, this one tells you about both failures:

org.opentest4j.MultipleFailuresError: message (2 failures)
expected: <1> but was: <2>
expected: <3> but was: <4>
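
For comparison, here’s a rough sketch of the AssertJ SoftAssertions approach mentioned earlier (assuming AssertJ is on the classpath; the class name is a placeholder). Failures are collected as you go and reported together when you call assertAll():

import org.assertj.core.api.SoftAssertions;

import org.junit.jupiter.api.Test;

public class SoftAssertionsTest {

  @Test
  void all() {
    SoftAssertions softly = new SoftAssertions();
    softly.assertThat(2).isEqualTo(1); // recorded, not thrown yet
    softly.assertThat(4).isEqualTo(3); // also recorded
    softly.assertAll(); // throws with both failures reported
  }
}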

Thanks to Steve Kiley for asking the question that led to this blog post!