[2019 oracle code one] java concurrency

Java Concurrency, A(nother) Peek Under the Hood

Speakers: David Buck (Oracle) – @DavidBuckJP

For more blog posts, see The Oracle Code One table of contents


Multi-threaded code use cases

  • GUIs
  • Libraries
  • Batch Processing
  • Worst case: writing multi-threaded code but not knowing you are

Problems

  • Race conditions
  • Heisenbugs/observer problem – occur in Prod (or locally), but disappear in debugger

Tools

  • Java memory model
  • synchronized keyword
  • java.util.concurrent

Memory model

  • What guarantees do you have that writes in one thread are available to another thread?
    • Local variables safe
    • Static variables/instance variables can introduce concurrency problems (see the sketch after this list)
    • JIT compiler can change order things are done (vs the order you code them in)
    • If optimizing compiler sees a = 1 and a=a+1, compiler will just initialize a to 2.
    • Out of order execution – hardware can execute out of order to improve performance
  • Clarifies what multithreaded behavior developers may depend on; limits what types of optimizations the runtime can do
  • Spec: https://docs.oracle.com/javase/specs/jls/se13/html/jls-17.html
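
To make the visibility problem concrete, here is a minimal sketch (my own example, not from the talk). Without synchronized or volatile, the write to the static field is not guaranteed to ever become visible to the reader thread:

public class VisibilityDemo {
	private static boolean done = false; // plain field: no visibility guarantee between threads

	public static void main(String[] args) throws InterruptedException {
		Thread reader = new Thread(() -> {
			while (!done) {
				// busy-wait; the JIT and hardware may keep 'done' cached, so this can spin forever
			}
			System.out.println("Reader finally saw done == true");
		});
		reader.start();

		Thread.sleep(100);
		done = true; // write from the main thread
		System.out.println("Main thread set done = true");
	}
}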

Things Java Protects us from

  • Testing on a Raspberry Pi (ARM) makes it possible to see problems that don’t occur on Intel chips
  • memory barrier/fence
  • prevents reordering across the fence. How to do this varies by processor/tool chain. Need to set two – one on the read and one on the write – to guarantee behavior for testing.
  • Types: store-store, store-load, load-store, load-load

Relativity

  • As in physics, there is no single “correct” timeline
  • Observed order of events depends on frame of reference
  • Each core can have slightly different view of memory
  • This means core files aren’t 100% accurate

Changes to memory model

  • Originally only dealt with synchronized – mutual exclusion and happens-before. Volatile was defined, but only in terms of visibility
  • This was a problem because volatile didn’t enforce happens-before and final values could change.
  • Java 5 adopted Doug Lea’s open source APIs (JSR-166)
  • JSR-133 – ensured volatile does establish happens-before and that final values will never change
  • Volatile != atomic
  • id++ still not thread safe on a volatile field. Can use synchronized to fix (see the sketch after this list)
  • In original memory model, 64-bit numbers (longs and doubles) not updated atomically
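
To illustrate (my example, not from the session): even on a volatile field, id++ is a separate read, add, and write, so two threads can lose an update. synchronized or java.util.concurrent’s AtomicInteger fixes it:

import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
	private volatile int id = 0;                         // writes are visible, but increments can still be lost
	private final AtomicInteger atomicId = new AtomicInteger();

	public void unsafeIncrement() {
		id++;                                            // NOT thread-safe, even though id is volatile
	}

	public synchronized void safeIncrement() {
		id++;                                            // synchronized adds mutual exclusion and happens-before
	}

	public void alsoSafeIncrement() {
		atomicId.incrementAndGet();                      // higher-level java.util.concurrent alternative
	}
}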

Practical advice

  • Use the highest-level construct available: java.util.concurrent > synchronized/wait/notify. Last choice is volatile/final
  • Don’t roll your own. Even Doug Lea didn’t get it perfect
  • Java 8 – Lambdas/streams – high level
  • Java 9 – Var handles – lower level than volatile/final. Allows a value to be treated as volatile only when we want it to be. Explicit fence/acquire/release. This was introduced for the use cases sun.misc.Unsafe used to cover (see the sketch below).
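
A rough Java 9+ sketch of what that looks like (the class and field names are mine): a VarHandle gives acquire/release access to a plain field, plus explicit fences, without declaring the field volatile:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class VarHandleDemo {
	private int value; // plain field: volatile-like semantics are applied per access, not per declaration

	private static final VarHandle VALUE;
	static {
		try {
			VALUE = MethodHandles.lookup().findVarHandle(VarHandleDemo.class, "value", int.class);
		} catch (ReflectiveOperationException e) {
			throw new ExceptionInInitializerError(e);
		}
	}

	public void writeRelease(int v) {
		VALUE.setRelease(this, v);            // release store, pairs with an acquire load in another thread
	}

	public int readAcquire() {
		return (int) VALUE.getAcquire(this);  // acquire load
	}

	public static void fence() {
		VarHandle.fullFence();                // explicit fence, one of the things sun.misc.Unsafe used to be used for
	}
}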

My take

This was really interesting. It was like a peek behind the curtain! My only complaint is that dark red text on a black background is hard to read. Luckily there wasn’t much of it, and it was obvious what was being “emphasized” in red. Also, I liked the demos!

The Amazon AWS Java SQS Client is *Not* Thread-safe

I recently added long polling to an Amazon SQS project that processes thousands of messages a minute. The idea was simple:

  • Spawn N number of threads (let’s say 60) that repeatedly check an SQS queue using long polling
  • Each thread waits for at most one message for maximum concurrency, restarting if no message is found
  • Each time a message is found, the thread processes it and ACKs it via deleteMessage() (failure to do so causes the message to go back on the queue after the visibility timeout expires)

For convenience, I used the Java Concurrency API ScheduledExecutorService.scheduleWithFixedDelay() method, setting each thread up with a 1-millisecond delay, although I could have accomplished the same thing using the Thread class and an infinite while() loop. With short polling, this kind of structure would tend to thrash, but with long polling, each thread just waits when there are no messages available. Note: For whatever reason, Java does not allow a 0-millisecond delay for this method, so 1 millisecond it is!

Noticing the Problem
When I started testing my new version based on long polling, I noticed something quite odd. While all the messages seemed to be processed quickly (1-10 milliseconds) and there were no errors in the logs, the AWS Console showed 50+ messages in-flight. Based on the number of messages being processed per second and the time it was taking to process them, the in-flight counter should have been only 3-4 messages at any given time, but it consistently stayed high.

Isolating the Issue
I knew it had something to do with long polling, since previously with short polling I never saw that many messages consistently in flight, but it took a long time to isolate the bug. I discovered that in certain circumstances the Amazon AWS Java SQS Client is not thread-safe. Apparently, the deleteMessage() call can block if too many other threads are performing long polling. For example, if you set the long polling to 10 seconds, the deleteMessage() can block for 10 seconds. If you set long polling to 20 seconds, the deleteMessage() can block for 20 seconds, and so on. Below is a sample class that reproduces the issue. You may have to run it multiple times and/or increase the number of polling threads, but you should see intermittent delays between the “1: Message read from queue” and “2: Message deleted from queue” log statements in readAndProcessMessages().

package net.selikoff.aws;

import java.util.concurrent.*;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.*;
import com.amazonaws.services.sqs.model.*;

public class SQSThreadSafeIssue {
	private final String queueName;
	private final AmazonSQS sqsClient;
	private final int numberOfThreads;
	
	public SQSThreadSafeIssue(Regions region, String queueName, int numberOfThreads) {
		super();
		this.queueName = queueName;
		this.sqsClient = AmazonSQSClientBuilder.standard().withRegion(region).build(); // Relies on locally available AWS creds
		this.numberOfThreads = numberOfThreads;
	}
	
	private void readAndProcessMessages(ReceiveMessageRequest receiveMessageRequest) {
		final ReceiveMessageResult result = sqsClient.receiveMessage(receiveMessageRequest);
		if(result!=null && result.getMessages()!=null && result.getMessages().size()>0) {
			result.getMessages().forEach(m -> {
				final long start = System.currentTimeMillis();
				System.out.println("1: Message read from queue");
				sqsClient.deleteMessage(new DeleteMessageRequest(queueName, m.getReceiptHandle()));
				System.out.println("2: Message deleted from queue in "+(System.currentTimeMillis()-start)+" milliseconds");
			});
		}
	}
	
	private void createMessages(int count) {
		for(int i=0; i<count; i++) {
			sqsClient.sendMessage(queueName, "test "+System.currentTimeMillis());
		}
	}
	
	public void produceThreadSafeProblem(int numberOfMessagesToAdd) {
		// Start up and add some messages to the queue
		createMessages(numberOfMessagesToAdd);
		
		// Create thread executor service
		final ScheduledExecutorService queueManagerService = Executors.newScheduledThreadPool(numberOfThreads);
		
		// Create reusable request object with 20 second long polling
		final ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest();
		receiveMessageRequest.setQueueUrl(queueName);
		receiveMessageRequest.setMaxNumberOfMessages(1);
		receiveMessageRequest.setWaitTimeSeconds(20);
		
		// Schedule some thread processors
		for(int i=0; i<numberOfThreads; i++) {
			queueManagerService.scheduleWithFixedDelay(() -> readAndProcessMessages(receiveMessageRequest),0,1,TimeUnit.MILLISECONDS);
		}
	}
	
	public static void main(String[] args) {
		final SQSThreadSafeIssue issue = new SQSThreadSafeIssue(Regions.YOUR_REGION_HERE,"YOUR_QUEUE_NAME_HERE",60);
		issue.produceThreadSafeProblem(5);
	}
}

And below is a sample output of this, showing that each message took 20 seconds (the long polling time) to be deleted.

1: Message read from queue
1: Message read from queue
1: Message read from queue
1: Message read from queue
1: Message read from queue
2: Message deleted from queue in 20059 milliseconds
2: Message deleted from queue in 20098 milliseconds
2: Message deleted from queue in 20024 milliseconds
2: Message deleted from queue in 20035 milliseconds
2: Message deleted from queue in 20038 milliseconds

Note: The SQSThreadSafeIssue class requires Java 8 or higher, along with the AWS SDK for Java (including its SQS module), to compile and run. It uses version 1.11.278 of the SDK, the latest available from AWS (although not in mvnrepository.com yet).

Understanding the Problem
Now that we see messages are taking 20 seconds (the long polling time) to be deleted, the large number of in-flight messages makes total sense. If the messages are taking 20 seconds to be deleted, what we are seeing is the total number of in-flight messages over the last 20-second window waiting to be deleted, which is not a ‘true measure’ of in-flight messages actually being processed. The more threads you add, say 100-200, the easier the issue becomes to reproduce. What’s especially interesting is that the polling threads don’t seem to be blocking each other. For example, if 50 messages come in at once and there are 100 threads available, then all 50 messages get read immediately, while not a single deleteMessage() is allowed through.

So where does the problem lie? That’s easy. Despite being advertised as @ThreadSafe in the API documentation, the AmazonSQS client is certainly not thread-safe and appears to have a maximum number of connections available. While I imagine this doesn’t come up often when using the default short polling, it is not difficult to reproduce the problem when long polling is enabled in a multi-threaded environment.

Finding a Solution
The solution? Oh, that’s trivial. So trivial, I was tempted to leave it as an exercise for the reader! But since I’m hoping AWS developers will read this article and fully understand the bug, so they can apply a patch, here goes….

You just need to create two AmazonSQS instances in the constructor of SQSThreadSafeIssue: one used for the receiveMessage() call and one used for the deleteMessage() call. Once you have two distinct clients, the deletes all happen within a few milliseconds. Once applied to the original project I was working on, the number of in-flight messages dropped significantly, to a number far closer to what I expected.
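
In code, the workaround looks roughly like this (the sqsReceiveClient/sqsDeleteClient field names are my own; everything else mirrors the class above):

	// Two separate clients, so long-polling receiveMessage() calls can't starve deleteMessage() calls
	private final AmazonSQS sqsReceiveClient;
	private final AmazonSQS sqsDeleteClient;

	public SQSThreadSafeIssue(Regions region, String queueName, int numberOfThreads) {
		super();
		this.queueName = queueName;
		this.sqsReceiveClient = AmazonSQSClientBuilder.standard().withRegion(region).build();
		this.sqsDeleteClient = AmazonSQSClientBuilder.standard().withRegion(region).build();
		this.numberOfThreads = numberOfThreads;
	}

	private void readAndProcessMessages(ReceiveMessageRequest receiveMessageRequest) {
		final ReceiveMessageResult result = sqsReceiveClient.receiveMessage(receiveMessageRequest); // read with one client
		if(result!=null && result.getMessages()!=null && result.getMessages().size()>0) {
			result.getMessages().forEach(m -> {
				final long start = System.currentTimeMillis();
				System.out.println("1: Message read from queue");
				sqsDeleteClient.deleteMessage(new DeleteMessageRequest(queueName, m.getReceiptHandle())); // delete with the other
				System.out.println("2: Message deleted from queue in "+(System.currentTimeMillis()-start)+" milliseconds");
			});
		}
	}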

Although this workaround is easy to apply, it should not be necessary; you should be able to reuse the same client. In fact, AWS documentation often encourages you to do so. The fact that the Amazon SQS client is not thread-safe when long polling is enabled is a very serious issue, one I’m hoping AWS will resolve in a timely manner!

The 8 Nights of Java – Night 7

Our 8 Nights of Java series is nearly at an end! Tonight we look at Java 7, the first version of Java released entirely by Oracle. Java 7 included many goodies for developers, in the form of Project Coin. In a lot of ways, Project Coin was straight off most developers’ Top 10 Wish Lists. Like Java 5, many of the features were compile-time enhancements that helped simplify numerous lines of code down to just one, allowing developers to focus on the important logic in their classes rather than syntax.

Jump to: [Night 1 | Night 2 | Night 3 | Night 4 | Night 5 | Night 6 | Night 7 | Night 8]

Java 7 Notable Features
Oracle released Java 7 (codename Dolphin) on July 28, 2011. After 5 years of Java 6 (including 131 updates) and the acquisition of Sun by Oracle, it was finally time to add some new features to Java. Key new functionality included:

  • Project Coin:
    • Strings in switch statements
    • Underscores (_) in numbers
    • Diamond (<>) operator
    • try-with-resources statement
    • Multi-catch blocks
    • Binary literals
  • NIO.2
  • Expanded Concurrency API

From Scott:

I love Java 7. While Java 5 “fixed” Collections by adding generics/autoboxing/for-each loops, Java 7 greatly improved the language by adding things that I didn’t even realize I was missing. The greatest of those (for a database-driven developer such as myself) was the try-with-resources statement. We’ve spoken in the past about finally closing JDBC resources, and to be honest, even we forget to do it sometimes. Could you blame us? The syntax was horrible, especially because close() throws a checked exception which often must be caught inside another catch block! The try-with-resources statement fixed this not only by providing a syntax that was simple and easy to use, but by finally providing a “standard” for all developers to work from. After Java 7 was released, if I came across another developer opening/closing a connection without a try-with-resources statement, I could instruct them to replace it with one, thereby ensuring the resources are always closed and any possible leak is plugged.
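
For anyone who hasn’t made the switch yet, here’s roughly what that “standard” looks like (the query and class names are just illustrative):

import java.sql.*;
import javax.sql.DataSource;

public class UserDao {
	private final DataSource dataSource;

	public UserDao(DataSource dataSource) {
		this.dataSource = dataSource;
	}

	public void printName(long userId) throws SQLException {
		// Each resource is closed automatically, in reverse order, even if the query throws
		try (Connection conn = dataSource.getConnection();
				PreparedStatement stmt = conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
			stmt.setLong(1, userId);
			try (ResultSet rs = stmt.executeQuery()) {
				while (rs.next()) {
					System.out.println(rs.getString("name"));
				}
			}
		} // no finally block needed just to call close()
	}
}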

Project Coin had many other very useful features. Strings in switch statements had been requested by numerous developers, especially beginners learning Java, for the better part of a decade. The diamond operator, while seemingly simple, greatly reduces assignments involving nested Collections… such as “new HashMap<String,List<Integer>>” to just “new HashMap<>”. It was a pain to write out the full generic expression on both sides of an assignment! I don’t tend to use binary literals or underscores in numbers, but I’m thrilled they were added to Java 7 since they make reading other developers’ code a lot easier.
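
A quick sketch pulling those Project Coin features together (the values are made up):

import java.util.*;

public class ProjectCoinDemo {
	public static void main(String[] args) {
		// Diamond operator: no need to repeat the generic types on the right-hand side
		Map<String, List<Integer>> scores = new HashMap<>();

		// Underscores and binary literals make numeric constants easier to read
		int billion = 1_000_000_000;
		int flags = 0b1010_0001;

		// Strings in switch statements
		String day = "SATURDAY";
		switch (day) {
			case "SATURDAY":
			case "SUNDAY":
				System.out.println("Weekend");
				break;
			default:
				System.out.println("Weekday");
		}

		System.out.println(scores.size() + " " + billion + " " + flags);
	}
}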

There are so many awesome features for developers in Java 7 that I haven’t even gotten to the improved Concurrency API and the new NIO.2. Both are extremely powerful, fully developed libraries that you can use to develop complex applications in minutes. And both got even better in Java 8 with the inclusion of… oops! Guess we’ll have to wait till tomorrow night to finish that sentence!

From Jeanne:

When I was young, the local college built a science building and named it the “New Science Building.” It’s still called that. The problem is what you call the one after that. NIO.2 had the same problem: “New I/O” didn’t take off so well, and now the name is taken. Luckily, NIO.2 is nice! I use it a lot in my coding because I do a lot of file system work. I find myself using Apache Commons for I/O operations very rarely now.
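
A rough sketch of the sort of file-system work I mean (the paths are placeholders):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class Nio2Demo {
	public static void main(String[] args) throws IOException {
		Path source = Paths.get("input.txt");
		Path backup = Paths.get("backup", "input.txt");

		Files.createDirectories(backup.getParent());                      // make sure the target directory exists
		Files.copy(source, backup, StandardCopyOption.REPLACE_EXISTING);  // copy without any manual stream handling

		for (String line : Files.readAllLines(source, StandardCharsets.UTF_8)) {
			System.out.println(line);
		}
	}
}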

Then there is Project Coin. While some of it is just syntactic sugar, like the diamond operator, it still makes the code cleaner and less redundant. The try-with-resources statement is great for not having to deal with boilerplate code! I also appreciate multi-catch. Before it existed, I’d often revert to “throws Exception” rather than independently catch two or three exceptions and handle them the same way. This happened a lot with a framework we used that declared two checked exceptions. It also happened a lot with XML parsing and reflection. With multi-catch, I’m more inclined to do the right thing and handle them all in one line.
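
Here’s a rough sketch of what I mean with XML parsing (the helper method is just illustrative): three unrelated checked exceptions handled in one place.

import java.io.File;
import java.io.IOException;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.w3c.dom.Document;
import org.xml.sax.SAXException;

public class MultiCatchDemo {
	public static Document parse(File file) {
		try {
			return DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(file);
		} catch (ParserConfigurationException | SAXException | IOException e) {
			// before Java 7 this meant three identical catch blocks or simply "throws Exception"
			throw new IllegalStateException("Could not parse " + file, e);
		}
	}
}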