Paying for the Java Certification Online – 2024 Edition (part 1)

Since the last time I took an exam, Oracle switched from in-person testing at Pearson VUE to online testing from your home/office. This series of three blog posts explains the process of buying a voucher and scheduling an exam. Some of the steps aren’t obvious, so I comment on those in more detail.

Remember to study before you buy the exam, as it must be taken within 6 months of paying. See our Java 21 OCP Certified Professional Study Guide page for details!

Step 1: Find the page for your exam

To start out, find the page for your exam; for example, the Java SE 21 Developer Professional exam (exam number 1Z0-830). Alternatively, you can start from the page of all the exams.

Once on the page for your exam, note that there are three “steps”.

  1. “Recommended Training” – taking Oracle’s training is optional. It is also expensive so I recommend opting not to take it. Our Java OCP 21 Certified Professional Study Guide is two orders of magnitude less expensive.
  2. “Review Exam Topics” – this lists the exam objectives
  3. “Register and take the exam” – this section has a link to “Buy Exam”. There is also a link to “Buy Exam” at the top of the page before you get to these three steps. Both buttons take you to the same place so click either of them.

Step 2: Choose the Exam Type

The previous step takes you to a generic page for buying an exam with no memory of which exam it is. Choose the first option “Oracle Cloud Infrastructure and Technology Exams”. The Java Professional Cert is considered a “Technology Exam”. If you are not in the United States, you’ll see different prices. Click “Purchase” under that first column.

Note: If you are taking the Foundations exam click the fourth option. Unless you were specifically asked to, it is unlikely you are taking the Foundations exam. I wrote a blog post about why to choose the OCP instead back in Java 11 and the reasons still apply.

Step 3: Add to cart

You then get taken to a page inviting you to add a “subscription” to your cart. This page confirms Java is a choice for this type of certification.

Tip: the word “subscription” is misleading. It’s not a subscription. It’s a single attempt at taking one exam – really an exam voucher, which is not the same thing as a subscription. I guess they are sharing the processing engine with the Oracle University subscription option.

Choose “Add to Cart.”

Step 4: Go to cart

After clicking “Add to Cart”, the page doesn’t take you to your cart. You need to click on the shopping cart in the top right corner manually, which gives you a preview of your cart. Then click “View Cart” to actually go to your cart.

Step 5: Sign in (if you haven’t already)

You can sign in at any point in the process. Viewing the cart is the last opportunity, though.

Step 6: Checkout

Now you can click the “Checkout” button.

I’m not showing the checkout screen as it has my personal info on it, but they want to know:

  • Customer type – I chose Individual (there’s also an option for corporate). For individual, they want a contact phone number
  • “Service” address (aka mailing address) – they wanted both a county and a town, which is odd; the post office doesn’t ask for both in the United States
  • Name/Email
  • Phone number (yes, again)
  • I clicked “preferred” for this address, which defaulted the billing address to the same one
  • Payment info – added my credit card. Depending on where you live there may be tax. For Jeanne it was about $22 in tax

Then I clicked the “Place Order” button. I got taken to a screen with an order number and customer support id. It also says “Within 2 business days, you will receive an email from Oracle University containing a link and instructions to activate your subscription. If you have questions or need assistance, please Contact Us by filling out a support form under ‘Order Processing’.”

Spoiler: it does not take two days. See Assigning the exam attempt for what to do when you get the exam.

Jeanne’s Experience Taking the Java 21 Certification Exam 1Z0-830

As we mentioned last week, Oracle announced their new Java 21 Certification Exam 1Z0-830. Scott passed on Friday and wrote about his experience. Today was my turn.

Before the exam

I registered on Friday night to take the exam on Monday morning. As Scott noted, the registration process has changed a lot, and we will be posting blogs about that. I chose the 7am slot so I wouldn’t have to miss much work time. Starting Sunday at 8am, I got a reminder every 3 hours. Yes, that’s 8 reminder emails. Seems excessive, but I was definitely reminded.

I cleaned up to unplug my second monitor and have my workspace free of books and papers (which, oddly, the proctors didn’t check). I took out my two forms of id. I even left myself a post-it reminding me to use Chrome and not Safari. And I made sure to get up with enough time to have breakfast and get dressed by 6:30am. Check-in starts half an hour before the exam slot.

Checking in

I did the Lockdown Browser test the day before. I had to do it again right before the exam (just the test, not the practice with the UI). Signing in was quick. There’s an embedded Zoom meeting with everyone in your slot. You raise your hand to be taken to a breakout where the proctor checks your id. I didn’t hold mine close enough to the camera, so I got a chat message to do so. Then I got the code for the exam, which started the two-hour timer right away.

I asked my proctor about showing that my scratch paper was blank, and the proctor didn’t want to see it (Scott’s proctor did). Also odd was that you could see the names and video of everyone in your group while in the main room. Once you moved to your exam tab, they were offscreen. It still seems like an odd security choice to show people on video, without virtual backgrounds, to strangers.

The exam

Like Scott, I had some very long questions. A number had 15+ lines of code, with 4-6 of those lines as the answer options. And many of the options were very similar, which meant I had to waste time spotting the differences to figure out what I was being asked. That’s hard when they are next to each other and even harder when scrolling. And then I had to actually answer the questions. There were still some multiple choice questions with 4-6 answers and some word problems with multiple choice answers.

Like Scott, I saw a lot of topics mixed together in a single question. I liked the questions as they got you to think about code. I didn’t like how much I had to scroll/remember at one time from the question. And that was with scrap paper. I used it for process of elimination and to keep track of variable state.

As for content:

  • Pattern matching switch was definitely on the exam, with and without records (see the sketch after this list)
  • Scott didn’t get a question about Virtual Threads or Sequenced Collections and noted it was likely just bad luck of the draw. I got questions about both, and they tested the core concepts of each topic, not just their use in passing. So they are definitely covered.
  • The topics removed from the objectives were not on the exam.
  • The questions I got covered the objectives well.
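
Since pattern matching in switch was the topic that came up the most, here is a minimal Java 21 sketch of the kind of construct the objectives cover. It’s my own illustrative example (the Shape/Circle/Square types are made up), not an actual exam question.

```java
// Record patterns in a switch over a sealed hierarchy (Java 21).
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

public class PatternMatchingDemo {
    static double area(Shape shape) {
        // The switch is exhaustive because Shape is sealed, so no default branch is needed.
        return switch (shape) {
            case Circle(double radius) -> Math.PI * radius * radius;
            case Square(double side) -> side * side;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(2)));  // 12.566370614359172
        System.out.println(area(new Square(3)));  // 9.0
    }
}
```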

How I did

Normally, when I take an exam, I finish answering all the questions between the halfway and two-thirds point. At that point, I go through all the questions again with an emphasis on what I was unsure of and typically fix 3-7 stupid mistakes from misreading or things I didn’t notice. And then I wind up with a good amount of time left on the timer and end early.

That was not my experience this time. With 18 minutes left, I had six questions I hadn’t even looked at. I flagged anything long at that point without reading it and got to the end with 10 minutes left. I then used the next six minutes to answer the three long ones I had flagged from those six. That left me with 4 minutes. I was afraid that the exam attempt wouldn’t go through if the timer hit zero, so I ended it there. That means I didn’t have time to review my answers and fix the stupid mistakes I always make. That was reflected in my score – 62%, which is 3 questions below passing. See my blog post about when I passed a week later.

I’m currently worried about the length of the exam. I’ve never come close to running out of time on an exam before. I probably could have gone a tiny bit faster if I wasn’t thinking about how the exam relates to our book. But I did that for all the other exams and it didn’t affect my time in any noticeable way. I’m hoping Oracle sees this and rebalances the exam to have fewer long questions.

AWS Summit 2024

I went to AWS Summit New York today, a free one-day conference. It’s the first time I’ve gone. I didn’t live blog, but am writing a summary post of my day after the fact.

Overview

AWS spent a ton of money on this event. They rented out all or most of the Javits Center in NYC (this is where NYC Comic Con is held). They gave out coffee/soft drinks and even a free lunch. They also spent a lot of money on security, with cause: there were protesters right outside the front door.

I tried to experience the major parts of the event.

Expo

The exhibit hall, on the third floor, was large, with lots of cloud-related vendors. There were also some fun activities like drone and toy car racing. Lots of space for sitting/networking.

There were also some stages in the expo for shorter (15-30 minute) talks. They had headphones for people who couldn’t filter out the background noise of the expo. It was nice because you could flit by and see if you were interested. I listened to some pieces of cert/education talks and a full one from Elastic on LLMs and summarizing security incidents.

Breakouts

There were lots of one-hour breakout sessions on the first floor. I went to two customer success stories (Venmo and Fannie Mae). It was dark in the breakout rooms, like most places have for keynotes.

Learning highlights for Venmo

Key strategies

  • Distribute load to maximize processing throughput
  • Use event-based systems for anything not in the critical path

Other notes

  • Django app. Used Celery for async work and reader DB instances for queries that can use them
  • Then added DynamoDB, MongoDB, OpenSearch Service, a data lake, microservices, Cassandra (for the microservices), and Kafka
  • Split MySQL into Aurora MySQL-compatible, secondary MySQL, and analytics MySQL databases

Social feed data migration

  • Transactions are visible; high traffic because it’s the home screen
  • Every transaction generates a feed story, along with certain profile operations
  • 3.6TB of data, 5.6 billion entries
  • Single-digit latency on data retrieval
  • 90% memory usage
  • Switched to DynamoDB due to cost (90% less), performance (equivalent), managed service, data encryption at rest, and integration with other AWS offerings
  • Migrated via backfill followed by dual writes. This let them verify performance under production load and confirm the data was consistent. Then they started ramping reads on the new database, starting with 1% reading from the new DynamoDB. Finally, they cut off writes to the old MongoDB (see the sketch after this list)
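
The ramp-up described above can be sketched as a small repository wrapper. This is a hedged illustration of the dual-write/percentage-read pattern, not Venmo’s actual code; the FeedStore interface and class names are hypothetical.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical interface standing in for both the old MongoDB store and the new DynamoDB store.
interface FeedStore {
    void writeStory(String userId, String story);
    List<String> readFeed(String userId);
}

/** Dual-writes every story and ramps reads onto the new store by percentage. */
class MigratingFeedStore implements FeedStore {
    private final FeedStore legacyMongo;
    private final FeedStore newDynamo;
    private final int percentReadFromNew;   // e.g. start at 1, ramp to 100

    MigratingFeedStore(FeedStore legacyMongo, FeedStore newDynamo, int percentReadFromNew) {
        this.legacyMongo = legacyMongo;
        this.newDynamo = newDynamo;
        this.percentReadFromNew = percentReadFromNew;
    }

    @Override
    public void writeStory(String userId, String story) {
        legacyMongo.writeStory(userId, story);   // old store stays authoritative during the ramp
        newDynamo.writeStory(userId, story);     // dual write keeps the new store consistent
    }

    @Override
    public List<String> readFeed(String userId) {
        boolean useNew = ThreadLocalRandom.current().nextInt(100) < percentReadFromNew;
        return useNew ? newDynamo.readFeed(userId) : legacyMongo.readFeed(userId);
    }
}
```

Ramping from 1% to 100% of reads is then just a change to the percentReadFromNew setting, and cutting off writes to the old store is removing the first line of writeStory.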

Offloading transaction history

  • For each payment, put a message on a Kafka queue and write to Cassandra via a microservice. Implemented as a best-effort write. Needed to guarantee 100% of the data so they could move over use cases that required full-fidelity data
  • Switched to a write-ahead log – write a log message saying they intend to perform the action and store it in DynamoDB. Then process the transaction/publish the message. Finally, delete the intended-action message now that it has completed. A background process looks for pending messages (see the sketch after this list)
  • Async payment processing using Kinesis
  • Problems: batches were huge and inconsistent for credit card usage, delays, outages were costly, can’t send a 500 error/need to reconcile, and no way to replay transactions internally
  • Added a Kinesis Data Stream via a thin wrapper to put the message on the stream and acknowledge success to the upstream. From Kinesis, consumers/Lambdas process the messages. Also using Aurora, DocumentDB, ElastiCache, DynamoDB and SQS
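
A minimal sketch of the write-ahead (intent) log step using the AWS SDK for Java v2 is below. The table name, key names, and processPayment helper are my own placeholders for illustration, not Venmo’s actual schema or code.

```java
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DeleteItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class WriteAheadLogSketch {
    private static final String TABLE = "pending-payment-intents";  // placeholder table name
    private final DynamoDbClient dynamo = DynamoDbClient.create();

    public void handlePayment(String paymentId, String payload) {
        // 1. Record the intent before doing anything else.
        dynamo.putItem(PutItemRequest.builder()
                .tableName(TABLE)
                .item(Map.of(
                        "paymentId", AttributeValue.builder().s(paymentId).build(),
                        "payload", AttributeValue.builder().s(payload).build()))
                .build());

        // 2. Do the actual work (process the transaction, publish to Kafka/Kinesis, etc.).
        processPayment(paymentId, payload);

        // 3. Delete the intent now that the work completed. A separate background job
        //    scans for intents that never got deleted and retries them.
        dynamo.deleteItem(DeleteItemRequest.builder()
                .tableName(TABLE)
                .key(Map.of("paymentId", AttributeValue.builder().s(paymentId).build()))
                .build());
    }

    private void processPayment(String paymentId, String payload) {
        // placeholder for the real processing/publishing step
    }
}
```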

Key learnings for Fannie Mae

Data science research

  • Compared research vs development – ex: research has POCs, live prod data, and the latest tools/patterns
  • Pillars of the platform:
    • Data access – prod data, data usage contracts
    • Governance – controlled by the business, not tech; automated integration with governance
    • Operationalization – testing, validation, CI/CD
  • Data science controls:
    • Register research activities in the CMDB so resources can be provisioned/tagged. Automated provisioning, streamlined architecture review process
    • Data access/sharing – contracts, permissions, ingress/egress rules, sensitive data protection rules
    • Code deployment and change management – CI/CD, scanning
  • Data science platform architecture:
    • Code/image repo
    • Public data endpoints
    • Code/package library
    • Read-only access to the enterprise data lake
  • Research environments:
    • Collaboration – just-in-time access – read-only access to the prod enterprise data lake. Results can’t be shared; considered dev
    • Validation – testing/shakeout – still read-only
    • Operation – headless execution – now can write to prod, create reports, and share externally
  • Data access is JIT (just in time). Fannie Mae has a patent on this
    • Request access to data. Could be from many data sources
    • The JIT access engine checks against coarse-grained contracts
    • Then it goes to the policy manager to check fine-grained access controls. A UI is used to create rules. A new role is created dynamically so a token can be used to access the data

Building a generative AI use case

  • Used Anthropic’s Claude 3 Sonnet via Amazon Bedrock and Amazon Neptune (graph db)
  • A lot of analysis of unstructured documents – an average of 5 hours per doc and 8K docs per year
  • Deep Insight for LLM-driven knowledge extraction. Uses an ontology (schema) and an LLM to generate knowledge graphs. A human in the loop validates the results. Then a knowledge utilization step uses natural language via a chatbot
  • Taxonomy – a linear top-down hierarchy. Ontology – an interconnected network representation
  • Disambiguation is important to avoid duplication
  • Graph database:
    • Reduces the risk of hallucinations because of more context
    • Two types: Property Graph (Apache TinkerPop), queried with Gremlin or Cypher, and RDF Graph (from the W3C), queried with SPARQL
  • Extraction uses Bedrock, Fargate, Lambda, Neptune, and S3 (see the sketch after this list)
  • Utilization uses Bedrock, Fargate, Neptune, and a chatbot
  • Also uses LangChain – the Neptune Open Cypher QA chain (converts natural language queries into Cypher so it can run the query) – and Amazon OpenSearch
  • Challenges:
    • Picking an ontology framework. Chose Turtle (Terse RDF Triple Language) for reusability/ease of reading
    • Finding the best way to chunk. Chose to chunk at sections to handle complex tables better
    • Picking the graph type. Chose a property graph due to better OSS framework support
    • Amazon Kendra (enterprise search) did not integrate with Amazon Neptune. Used LangChain’s NeptuneOpenCypher QA Chain instead
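
For the extraction step, a call to Claude 3 Sonnet through Bedrock looks roughly like the sketch below (AWS SDK for Java v2). The prompt, document chunk, and output handling are assumptions I made to illustrate the flow; the talk did not show code, and this is not Fannie Mae’s implementation.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelRequest;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelResponse;

public class KnowledgeExtractionSketch {
    public static void main(String[] args) {
        BedrockRuntimeClient bedrock = BedrockRuntimeClient.create();

        // Ask the model to pull subject/predicate/object triples out of a document chunk.
        String documentChunk = "Acme Corp acquired Example LLC in 2021.";
        String body = """
                {
                  "anthropic_version": "bedrock-2023-05-31",
                  "max_tokens": 512,
                  "messages": [{"role": "user",
                    "content": "Extract knowledge-graph triples (subject, predicate, object) from: %s"}]
                }""".formatted(documentChunk);

        InvokeModelResponse response = bedrock.invokeModel(InvokeModelRequest.builder()
                .modelId("anthropic.claude-3-sonnet-20240229-v1:0")  // Claude 3 Sonnet on Bedrock
                .contentType("application/json")
                .body(SdkBytes.fromUtf8String(body))
                .build());

        // The JSON response would then be parsed and the triples loaded into Neptune.
        System.out.println(response.body().asUtf8String());
    }
}
```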

Chalk Talks

Chalk talks were also on the first floor. They were also an hour long but had less prepared content. The one I went to had 20 minutes of talking/demos. Most of the time was Q&A or discussion. They had a whiteboard with a camera to show what was on it so the speakers could write/draw in real time. This meant one projected screen was the computer and one was the physical whiteboard.

Learning highlights

  • Gen AI customers want to know what model to use, how to move quickly, and how to keep data secure/private
  • Bedrock provides foundation models via a single API, model customization, RAG (Retrieval Augmented Generation), agents for multi-step tasks, and security/privacy/safety
  • Models include Amazon’s models, Anthropic, Cohere, Meta, etc. And lots of variants/versions of each.
  • Two use cases: observability of generative AI itself, and using gen AI to help with observability
  • Gather metrics – ex: number of tokens used for input/output
  • Collect metadata/requests/responses to understand how customers use the models
  • Governance/controls/guardrails
  • CloudWatch – analyze invocation logs, protect sensitive data, real-time metrics and alarms (ex: more latency on a different version of Claude), single pane of glass/dashboard (see the sketch after this list)
  • Recorded demo #1 (while the video was recorded, he narrated live and paused periodically to say more)
  • Can send model invocation logs to either S3 (if using another logging system) or CloudWatch
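
As a concrete example of the metrics point, publishing a token-count metric to CloudWatch with the AWS SDK for Java v2 might look like the sketch below. The namespace, metric name, and dimension are hypothetical; the demo in the talk relied on Bedrock’s built-in invocation logging and CloudWatch dashboards rather than hand-rolled code like this.

```java
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricDataRequest;
import software.amazon.awssdk.services.cloudwatch.model.StandardUnit;

public class TokenMetricsSketch {
    public static void main(String[] args) {
        CloudWatchClient cloudWatch = CloudWatchClient.create();

        // Hypothetical custom metric: how many output tokens a given model version produced.
        MetricDatum datum = MetricDatum.builder()
                .metricName("OutputTokens")
                .unit(StandardUnit.COUNT)
                .value(482.0)
                .dimensions(Dimension.builder()
                        .name("ModelId")
                        .value("anthropic.claude-3-sonnet-20240229-v1:0")
                        .build())
                .build();

        cloudWatch.putMetricData(PutMetricDataRequest.builder()
                .namespace("GenAI/Observability")   // placeholder namespace
                .metricData(datum)
                .build());
    }
}
```

An alarm on that metric (or on latency per model version, as mentioned in the talk) is then a normal CloudWatch alarm.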

Builder Sessions

Also on the first floor, these were small group labs. I went to one on Amazon Q. They had 4 areas in the room with 10 chairs each. An instructor from AWS was allocated to each group. After a short intro, the instructor helped anyone stuck and answered questions. This was great.

The lab had an access code good for three hours, so you could continue a little longer if you wanted. In theory, there was separate wifi for the lab, but it didn’t work. The main conference wifi was fine though.

Learning highlights

  • Amazon Q Developer has a free and paid version.
  • The paid version promises not to learn from your data. It’s licensed per person but only billed if the developer uses it in a given month.
  • IDE integration for VS Code and IntelliJ.
  • Chat bar. Often gives sources/links. Data is from 2023 for the public internet; RAG is used for Amazon content, so that’s more recent.
  • Can explain code, refactor code, fix code, and migrate to a later version of Java. Can also write a plan for writing code and write code (with some errors).
  • CodeWhisperer was folded into Q.
  • It was slow, but I was on a conference network.

Main dev activities

  • planning – docs, examples, design
  • creating – generate code, manage infra
  • test and secure – test cases, scan for security vulnerabilities
  • operate – identify and mitigate code issues, monitor performance and efficiency
  • maintenance and modernization – modernize and update old code languages and dependencies

Amazon Q Developer tries to help with all phases

  • plan – explain code with conversational coding (chatbot)
  • create – inline code completion, conversational coding
  • test/secure – unit test generation, OWASP top 10 security scanning
  • operate – debug/optimize code with conversational coding
  • maintenance and modernization – update legacy code with an agent

Keynote

The keynote was in a big room that wouldn’t fit everyone. They also used all the breakout rooms as overflow and streamed to the stages in the expo. I liked that, as it was easy to eat and listen. Or talk to the vendors and listen to parts. Or not.