Jeanne’s experiences taking the AWS Certified Cloud Practitioner Exam

Yesterday, I took and passed the AWS Certified Cloud Practitioner Exam.

Registering for the exam

Registering was pretty easy. You enter your zip code and it tells you the next available exam date at each nearby center. You can click on a center to see actual dates and times. There were both weekday and weekend choices, which was nice.

The exam center

I took the exam at “Forest Hills Brainseed.” Being able to walk to the venue was nice because I could leave “unnecessary objects” at home!

The center had a locker for your stuff. You hold the key during the exam. They weren’t strict about what you put in the locker. I kept my credit card and tissues in my pockets. (Some centers have made me empty my pockets.) The center keeps your driver’s license while you are in the exam room.

Like most exams, you are entitled to a writing utensil and something to write on. This center uses paper and pencil. I haven’t gotten physical paper at an exam in ages. It was *so* nice.

This center also provides ear plugs which I didn’t need.

The center had two rooms of 10 computers each. My room was less than half filled, but it was still hot. Luckily, I wore a short-sleeve t-shirt under a sweatshirt, so I could just remove my sweatshirt.

The actual exam

The software didn’t log me in at first. The center had me change computers and then it worked.

After 22 minutes, I had completed my first pass of the exam and was 100% confident on 44 of the answers. (The passing score is a little higher than that.) Luckily, I was at least 50% confident on the others. I did some more review but turned it in with about 50 minutes left. (I always finish cert exams quickly.)

After you click “End test”, you get 6-9 survey questions. Then you get your pass/fail result. One to five days later, you’ll get an email with your actual score. (Janeice and I both got the score one day later.) Given that this is a pass/fail exam, the score isn’t important to me. That said, I did get a good score: 895 (out of 1000).

How to view your score

  • Sign in to the AWS Training and Certification Portal.
  • Click the “Certification” tab.
  • Click the “AWS Certification Account” button.
  • Click “Previous Exams”.
  • Click “Download” on the right-hand side.

The Amazon AWS Java SQS Client is *Not* Thread-safe

I recently added long polling to an Amazon SQS project that processes thousands of messages a minute. The idea was simple:

  • Spawn N threads (let’s say 60) that repeatedly check an SQS queue using long polling
  • Each thread waits for at most one message for maximum concurrency, restarting if no message is found
  • Each time a message is found, the thread processes it and acknowledges it via deleteMessage() (failure to do so causes the message to go back on the queue once the visibility timeout expires)

For convenience, I used the Java Concurrency API ScheduledExecutorService.scheduleWithFixedDelay() method, setting each thread with a 1-millisecond delay, although I could have accomplished the same thing using the Thread class and an infinite while() loop. With short polling, this kind of structure would tend to thrash, but with long polling, each thread just waits when there are no messages available. Note: For whatever reason, Java does not allow a 0-millisecond delay for this method, so 1 millisecond it is!
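For illustration, here is roughly what that plain Thread-and-while-loop alternative could look like. This is just a sketch, not the code from my project: the queue URL is a placeholder, and the println stands in for real message processing.

import com.amazonaws.services.sqs.*;
import com.amazonaws.services.sqs.model.*;

// Sketch only: a plain-Thread long-polling worker. The queue URL is a placeholder
// and the println stands in for real message processing.
public class PlainThreadPoller {
	public static void main(String[] args) {
		final String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder
		final AmazonSQS sqsClient = AmazonSQSClientBuilder.defaultClient(); // relies on locally available AWS creds

		final Runnable poller = () -> {
			final ReceiveMessageRequest request = new ReceiveMessageRequest()
				.withQueueUrl(queueUrl)
				.withMaxNumberOfMessages(1)
				.withWaitTimeSeconds(20); // long polling
			while (!Thread.currentThread().isInterrupted()) {
				for (Message m : sqsClient.receiveMessage(request).getMessages()) {
					System.out.println("Processing " + m.getMessageId()); // stand-in for real work
					sqsClient.deleteMessage(new DeleteMessageRequest(queueUrl, m.getReceiptHandle()));
				}
			}
		};

		// Spawn the long-polling worker threads
		for (int i = 0; i < 60; i++) {
			new Thread(poller).start();
		}
	}
}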

Noticing the Problem
When I started testing my new version based on long polling, I noticed something quite odd. While the messages all seemed to be processed quickly (1-10 milliseconds) and there were no errors in the logs, the AWS Console showed 50+ messages in flight. Based on the number of messages being processed per second and the time it was taking to process them, the in-flight counter should have been only 3-4 messages at any given time, but it consistently stayed high.

Isolating the Issue
I knew it had something to do with long polling, since previously with short polling I never saw that many messages consistently in flight, but it took a long time to isolate the bug. I discovered that in certain circumstances the Amazon AWS Java SQS Client is not thread-safe. Apparently, the deleteMessage() call can block if too many other threads are performing long polling. For example, if you set the long polling to 10 seconds, the deleteMessage() can block for 10 seconds. If you set long polling to 20 seconds, the deleteMessage() can block for 20 seconds, and so on. Below is a sample class that reproduces the issue. You may have to run it multiple times and/or increase the number of polling threads, but you should see intermittent delays in deleting messages, visible as a long gap between the “1: Message read from queue” and “2: Message deleted from queue” log lines.

package net.selikoff.aws;

import java.util.concurrent.*;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.*;
import com.amazonaws.services.sqs.model.*;

public class SQSThreadSafeIssue {
	private final String queueName; // holds the full SQS queue URL, since it is passed as the queue URL in the calls below
	private final AmazonSQS sqsClient;
	private final int numberOfThreads;
	
	public SQSThreadSafeIssue(Regions region, String queueName, int numberOfThreads) {
		super();
		this.queueName = queueName;
		this.sqsClient = AmazonSQSClientBuilder.standard().withRegion(region).build(); // Relies on locally available AWS creds
		this.numberOfThreads = numberOfThreads;
	}
	
	private void readAndProcessMessages(ReceiveMessageRequest receiveMessageRequest) {
		final ReceiveMessageResult result = sqsClient.receiveMessage(receiveMessageRequest);
		if(result!=null && result.getMessages()!=null && result.getMessages().size()>0) {
			result.getMessages().forEach(m -> {
				final long start = System.currentTimeMillis();
				System.out.println("1: Message read from queue");
				sqsClient.deleteMessage(new DeleteMessageRequest(queueName, m.getReceiptHandle()));
				System.out.println("2: Message deleted from queue in "+(System.currentTimeMillis()-start)+" milliseconds");
			});
		}
	}
	
	private void createMessages(int count) {
		for(int i=0; i<count; i++) {
			sqsClient.sendMessage(queueName, "test "+System.currentTimeMillis());
		}
	}
	
	public void produceThreadSafeProblem(int numberOfMessagesToAdd) {
		// Start up and add some messages to the queue
		createMessages(numberOfMessagesToAdd);
		
		// Create thread executor service
		final ScheduledExecutorService queueManagerService = Executors.newScheduledThreadPool(numberOfThreads);
		
		// Create reusable request object with 20 second long polling
		final ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest();
		receiveMessageRequest.setQueueUrl(queueName);
		receiveMessageRequest.setMaxNumberOfMessages(1);
		receiveMessageRequest.setWaitTimeSeconds(20);
		
		// Schedule some thread processors
		for(int i=0; i<numberOfThreads; i++) {
			queueManagerService.scheduleWithFixedDelay(() -> readAndProcessMessages(receiveMessageRequest),0,1,TimeUnit.MILLISECONDS);
		}
	}
	
	public static void main(String[] args) {
		final SQSThreadSafeIssue issue = new SQSThreadSafeIssue(Regions.YOUR_REGION_HERE,"YOUR_QUEUE_NAME_HERE",60);
		issue.produceThreadSafeProblem(5);
	}
}

And below is sample output from this class, showing that each message took roughly 20 seconds (the long polling time) to be deleted.

1: Message read from queue
1: Message read from queue
1: Message read from queue
1: Message read from queue
1: Message read from queue
2: Message deleted from queue in 20059 milliseconds
2: Message deleted from queue in 20098 milliseconds
2: Message deleted from queue in 20024 milliseconds
2: Message deleted from queue in 20035 milliseconds
2: Message deleted from queue in 20038 milliseconds

Note: The SQSThreadSafeIssue class requires Java 8 or higher along with the Amazon AWS Java SDK to compile and run. It uses what was, at the time of writing, the latest version of the SDK, 1.11.278, available directly from AWS (although not yet on mvnrepository.com).

Understanding the Problem
Now that we can see messages are taking 20 seconds (the long polling time) to be deleted, the large number of in-flight messages makes total sense. If the messages are taking 20 seconds to be deleted, what we are seeing is the total number of in-flight messages over the last 20-second window waiting to be deleted, which is not a true measure of in-flight messages actually being processed. The more threads you add, say 100-200, the easier the issue becomes to reproduce. What’s especially interesting is that the polling threads don’t seem to block each other. For example, if 50 messages come in at once and there are 100 threads available, then all 50 messages get read immediately, while not a single deleteMessage() is allowed through.

So where does the problem lie? That’s easy. Despite being advertised as @ThreadSafe in the API documentation, the AmazonSQS client is certainly not thread-safe and appears to have a maximum number of connections available. While I imagine this doesn’t come up often when using the default short polling, it is not difficult to reproduce the problem when long polling is enabled in a multi-threaded environment.
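As an aside, if a connection ceiling really is the culprit, one way to probe that hypothesis is to build the client with a larger HTTP connection pool via ClientConfiguration. To be clear, this is my own speculation rather than a confirmed explanation or a recommended fix, and the pool size of 200 below is an arbitrary number picked for illustration:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.*;

// Speculative sketch: build an SQS client with a larger connection pool, so that
// long-polling receiveMessage() calls are less likely to starve deleteMessage()
// calls of connections. The value 200 is arbitrary.
public class LargerPoolSqsClient {
	public static AmazonSQS build(Regions region) {
		final ClientConfiguration config = new ClientConfiguration()
			.withMaxConnections(200);
		return AmazonSQSClientBuilder.standard()
			.withRegion(region)
			.withClientConfiguration(config)
			.build();
	}
}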

Finding a Solution
The solution? Oh, that’s trivial. So trivial, I was tempted to leave it as an exercise for the reader! But since I’m hoping AWS developers will read this article and fully understand the bug so they can apply a patch, here goes…

You just need to create two AmazonSQS instances in the constructor of SQSThreadSafeIssue: one used for the receiveMessage() call and one used for the deleteMessage() call. Once you have two distinct clients, the deletes all happen within a few milliseconds. Once applied to the original project I was working on, the number of in-flight messages dropped significantly, to a figure far closer to what I expected.
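Here is a rough sketch of that change, showing only the affected pieces of SQSThreadSafeIssue from the listing above (the field name deleteSqsClient is my own):

// Sketch of the work-around applied to SQSThreadSafeIssue: keep one client for
// receiving/sending and add a second, separate client used only for deleting.
private final String queueName;
private final AmazonSQS sqsClient;       // used for receiveMessage() and sendMessage()
private final AmazonSQS deleteSqsClient; // used only for deleteMessage()
private final int numberOfThreads;

public SQSThreadSafeIssue(Regions region, String queueName, int numberOfThreads) {
	super();
	this.queueName = queueName;
	this.sqsClient = AmazonSQSClientBuilder.standard().withRegion(region).build();
	this.deleteSqsClient = AmazonSQSClientBuilder.standard().withRegion(region).build();
	this.numberOfThreads = numberOfThreads;
}

// ... and inside readAndProcessMessages(), the delete call becomes:
deleteSqsClient.deleteMessage(new DeleteMessageRequest(queueName, m.getReceiptHandle()));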

Although this work-around is easy to apply, it should not be necessary; you should be able to reuse the same client. In fact, AWS documentation often encourages you to do so. The fact that the Amazon SQS client is not thread-safe when long polling is enabled is a serious issue, one I’m hoping AWS will resolve in a timely manner!

AWS CodeBuild + Bitbucket – Teams = Epic Fail


Updated 8/19/2017: Amazon has now updated the AWS CodeBuild service to support Teams! In other words, in the two days since I posted this issue, it has been fixed. Hooray! I now see my team projects in the list of repositories after linking my account. One minor nitpick, though: they sort the list of repositories in the drop-down chronologically, not alphabetically. Since I have hundreds of repositories, that means that in order to find a particular one, I have to remember the order in which it was created. I hope they fix this (minor) issue too!


As a user of both Bitbucket and AWS, I was recently excited to hear Amazon had announced integration between AWS CodeBuild and Atlassian Bitbucket. For those unfamiliar with these two products, AWS CodeBuild is part of Amazon’s suite of CI/CD code automation tools. This service, along with the full suite, provides the ability to automate software build creation, testing, and deployment. Atlassian Bitbucket, on the other hand, is a large source code repository provider. The AWS announcement means that you can now build projects in AWS using Bitbucket repositories as the source.


Or that’s what it was supposed to mean… Apparently, no one told AWS that most professional software development companies use Bitbucket Teams to manage projects. The new AWS integration is accomplished using an OAuth-authenticated sign-in from within the AWS CodeBuild project creation wizard. Unfortunately, after logging in, it only allows two types of repositories to be selected: public repositories and those in your *personal* account. Most people using Bitbucket professionally use Teams and do not store repositories in their personal accounts. The result is that no repositories are available for integration.

In other words… it’s broken. One solution would be to authenticate with the team login, but Atlassian disabled the ability to log in with a team account years ago. Now, Amazon only announced this feature recently, so it is possible they will get around to fixing it, but in the short term it is quite disappointing. While there are other ways to integrate AWS CodeDeploy and Bitbucket, I was looking for an all-in-one solution. In fact, when I recently tried Atlassian’s plugin to integrate one of my repositories into AWS CodeBuild, the web page just froze. Oh well, hopefully Amazon will fix this oversight soon!

By the way, you might ask, “Why don’t I just move my source code repositories into AWS CodeCommit?” The answer is simple logistics. If I have hundreds of projects used by hundreds of developers, migrating them to a new repository host is not easy or fun. The advantage of having this integration working is that it provides a nice, fluid transition toward migrating to AWS builds, without the commitment of actually transferring any repositories.