java evolution of eclipse collections – live blogging from qcon

The Java Evolution of Eclipse Collections
Speaker: Kristen O’Leary
See the list of all blog posts from the conference

Eclipse Collections

  • was once GS (Goldman Sachs) Collections
  • Memory efficient collections framework
  • Open sourced in 2012

Java 8

  • 8.0 compatible with Java 8+
  • Extend Java 8 Functional Interfaces
  • New APIs – ex: reduceInPlace

Optional

  • RichIterable.detectWith() used to return null if there was no match
  • Pre-Java 8, you could use detectIfNone() to supply a default if no match exists
  • New method detectWithOptional() returns an Optional wrapper instead (see the sketch after this list)
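
To make the progression concrete, here's a rough sketch (my example, not from the talk; it assumes Eclipse Collections 8.x and a made-up names list):

    import java.util.Optional;
    import org.eclipse.collections.api.list.MutableList;
    import org.eclipse.collections.impl.factory.Lists;

    public class DetectExample {
        public static void main(String[] args) {
            MutableList<String> names = Lists.mutable.with("Alice", "Bob");

            // Old behavior: null when nothing matches
            String match = names.detectWith(String::startsWith, "C");

            // Pre-Java 8 workaround: supply a default when nothing matches
            String orDefault = names.detectWithIfNone(String::startsWith, "C", () -> "none");

            // New in 8.0: the result arrives wrapped in an Optional
            Optional<String> maybe = names.detectWithOptional(String::startsWith, "C");
            System.out.println(maybe.orElse("none")); // prints "none"
        }
    }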

Collectors

  • Collectors2 has collectors that target Eclipse Collections types
  • Can collect into Bag, ImmutableSet, BiMap, Stack, etc
  • Have a full primitive collections library
  • Have a full set of multimaps (a map with multiple values for the same key, ex: key to list/set)
  • Have extra APIs like chunk() and zip() (see the sketch after this list)
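
A quick sketch of what that looks like with a Java 8 Stream (my example; assumes Eclipse Collections 8.x on the classpath):

    import java.util.stream.Stream;
    import org.eclipse.collections.api.bag.MutableBag;
    import org.eclipse.collections.api.set.ImmutableSet;
    import org.eclipse.collections.impl.collector.Collectors2;

    public class Collectors2Example {
        public static void main(String[] args) {
            // Collect a Stream straight into Eclipse Collections types
            MutableBag<String> bag =
                Stream.of("a", "b", "a").collect(Collectors2.toBag());
            ImmutableSet<String> set =
                Stream.of("a", "b", "a").collect(Collectors2.toImmutableSet());

            System.out.println(bag.occurrencesOf("a")); // 2 - a Bag keeps counts
            System.out.println(set.size());             // 2 - a Set drops duplicates
        }
    }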

Default methods

  • RichIterable is the common interface, so default methods are helpful.
  • Used them to add reduceInPlace(), so you don’t need to stream and create a new collection
  • Also useful for asLazy() or toImmutable(), since Eclipse Collections doesn’t provide stream() there. (See the sketch after this list.)
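
Here's a sketch of the difference (my example; the list contents are made up):

    import java.util.stream.Collectors;
    import org.eclipse.collections.api.list.MutableList;
    import org.eclipse.collections.impl.factory.Lists;

    public class ReduceInPlaceExample {
        public static void main(String[] args) {
            MutableList<String> list = Lists.mutable.with("a", "b", "c");

            // Via Java 8 streams: creates an intermediate Stream first
            String viaStream = list.stream().collect(Collectors.joining(","));

            // reduceInPlace() applies the same Collector without the Stream
            String inPlace = list.reduceInPlace(Collectors.joining(","));

            System.out.println(viaStream.equals(inPlace)); // true
        }
    }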

Primitive Collections

  • The primitive classes are code-generated, so the APIs stay symmetric
  • Showed impressive memory savings – Eclipse Collections and Trove are equivalent, and both are much smaller than autoboxed collections, and faster too
  • LazyIterable is available for all 8 primitive types. Just call asLazy(). A LazyIterable can be reused, unlike a stream. (See the sketch after this list.)
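
A primitive-collections sketch (my example; the numbers are made up):

    import org.eclipse.collections.api.LazyIntIterable;
    import org.eclipse.collections.api.list.primitive.MutableIntList;
    import org.eclipse.collections.impl.factory.primitive.IntLists;

    public class PrimitiveExample {
        public static void main(String[] args) {
            // Backed by an int array - no boxing to Integer
            MutableIntList ints = IntLists.mutable.with(1, 2, 3, 4, 5);

            // Lazy view; unlike a Stream, it can be evaluated more than once
            LazyIntIterable evens = ints.asLazy().select(i -> i % 2 == 0);
            System.out.println(evens.sum());             // 6
            System.out.println(evens.count(i -> i > 2)); // 1 - reused, no error
        }
    }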

Java 9

  • Module system
  • Internal API Encapsulation
  • Need to change APIs that use reflection in order to build. Can’t call setAccessible(true) any more.
  • There is a command line argument to ignore reflection errors (see below), but they don’t want to impose that on callers
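
For reference, this is the kind of flag being discussed (the module/package here is just an illustration, not from the talk):

    # Open one package up for deep reflection from the command line
    java --add-opens java.base/java.lang=ALL-UNNAMED -jar app.jar

    # Or set the global enforcement mode (permit/warn/debug/deny)
    java --illegal-access=permit -jar app.jar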

I like that Kristen uses kata examples to show how the APIs work.

development metrics you should use (but don’t) – live blogging from qcon

Development Metrics you should use (but don’t)
Speaker: Cat Swetel @CatSwetel
See the list of all blog posts from the conference

Breakfast is good. You should eat breakfast. So Fruit Loops?
Metrics are good. You should have metrics. So bad metrics?

Metrics should fall into four baskets: quality, responsiveness, productivity and predictability. Value isn’t called out because this is about development metrics.

“The RIGHTER we do the WRONG thing, the WRONGER we become”

Definitions: (in this presentation)

  • start – when work is pulled into the team (not when it is requested)
  • finished – when customers can use it

Metric: Time in process

  • Units of time for one unit of work
  • Display as a scatter plot to see trends over time
  • Can look at the average and the 90% line (the likely worst case). Would you rather hear “90% of the time it will take just under two months” or “on average, 20 days”? Either way, the 90% line is the same 53 days. (See the sketch after this list.)
  • Display as a bar chart frequency distribution. See the mode (the entry with the most items). Also see whether there is a long tail pattern. Tells a story about predictability
  • Weibull distribution – fat toward zero, but the tail trickles out toward infinity. This is like your commute: it usually takes X minutes, but sometimes something happens. (Remember via the phrase: Weibulls wiggle and they wobble but they don’t fall down)
  • Figure out the story from the data. In this case, they thought all the work was the same, but there were really two distinct types. They were able to detect that high-priority items were rushed while everything else waited.
  • Learned they really had a multi-modal curve, like two separate bell curves
  • This covers responsiveness and predictability
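
A small sketch of computing those two numbers (my example; the cycle times are invented to echo the talk's 20/53 split):

    import java.util.Arrays;

    public class CycleTimeStats {
        public static void main(String[] args) {
            // Days from "pulled into the team" until "customers can use it"
            double[] days = {3, 5, 6, 8, 10, 12, 15, 25, 53, 63};

            double average = Arrays.stream(days).average().orElse(0);

            double[] sorted = days.clone();
            Arrays.sort(sorted);
            // 90th percentile: 90% of items finish at or under this value
            double p90 = sorted[(int) Math.ceil(0.9 * sorted.length) - 1];

            System.out.printf("average: %.1f days, 90%%: %.1f days%n", average, p90);
            // prints: average: 20.0 days, 90%: 53.0 days
        }
    }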

Metric: Throughput

  • Units of work per unit of time
  • Team cares about total capacity
  • Customer cares about how many new features
  • Cover the range so you can see the high and the low
  • OK to see a dip while making improvements, with a rise afterwards. Expect productivity to drop before it normalizes around the change
  • Can display as a range (hard to read)
  • Can display as a bar chart showing the probability for each number of requests (see the sketch after this list)
  • This covers productivity and predictability
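
A sketch of turning raw weekly counts into that probability chart (my example; the sample data is made up):

    import java.util.Map;
    import java.util.TreeMap;

    public class ThroughputHistogram {
        public static void main(String[] args) {
            // Work items finished per week
            int[] perWeek = {4, 6, 5, 4, 7, 4, 6, 5, 5, 4};

            Map<Integer, Long> frequency = new TreeMap<>();
            for (int n : perWeek) {
                frequency.merge(n, 1L, Long::sum);
            }

            // Probability of finishing exactly n items in a week
            frequency.forEach((n, count) -> System.out.printf(
                    "%d items: %.0f%%%n", n, 100.0 * count / perWeek.length));
        }
    }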

Metric: Time in state

  • Needing to collaborate across teams means time is wasted waiting.
  • “Touch time” is a very small percentage of the total time
  • Helps determine where work is stuck
  • Good if you can see a trend of getting better/worse. Look at the more recent data. Are queues growing or shrinking?
  • Do not make the bars red and green. A “green” reading discourages investing in improvement. Also, red/green colorblindness
  • Can stack within bar to display work in different states
  • Can display a cumulative flow diagram as a line graph
  • Little’s Law – average time in the system equals work in progress divided by throughput (see the sketch after this list)
  • If arrival and departure rates don’t match, that should affect expectations
  • This covers predictability
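
A worked example of Little's Law (my numbers, not from the talk):

    public class LittlesLaw {
        public static void main(String[] args) {
            // Little's Law: avg items in system = arrival rate x avg time in system
            double arrivalsPerDay = 2.0;   // items pulled into the team per day
            double avgDaysInSystem = 10.0; // average time in process

            double avgWorkInProgress = arrivalsPerDay * avgDaysInSystem;
            System.out.println("Average WIP: " + avgWorkInProgress); // 20.0

            // Rearranged: if WIP grows but the departure rate doesn't,
            // average time in the system must be growing too
        }
    }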

Provided a warning that these are numbers in context, not an estimate. Statistics are just answers/numbers. A person needs to provide the story/context around them.

I really liked this talk. Going deep on why a few metrics are useful is great!

The Paved PaaS to Microservices – live blogging at qcon

The Paved PaaS to Microservices
Speaker: Yunong Xiao @yunongx
See the list of all blog posts from the conference

Yunong is from Netflix. He talked about serving multiple types of devices. Teams can innovate faster if they aren’t worrying about infrastructure.

Scale: 1K+ microservices. JavaScript front end; Java back end.

Client teams own the microservices for the front end/client devices. The Edge API is the server-based/back-end API.

Standardized Components

  • Common set of components – ex: metrics, logging, alerts, dashboard, configuration
  • Don’t want everyone picking their own RPC mechanism. Invest in one set of tooling
  • Benefits of standardizing: consistency (ease of troubleshooting/support), leverage (more time for business problems with platform team focused on components), interoperability (easier integration so faster velocity and less cognitive overload), quality (more investment so higher chance of quality), support (greater knowledge base)
  • “But I’m a Snowflake” – Netflix has a culture of freedom and responsibility. Helps with innovation. If it works, re-integrate it into the ecosystem. Be conscious of talking to the others and the cost your choice imposes on other teams.

Pre-assembled platform

  • When starting a new project, there isn’t velocity yet, nor stats on reliability.
  • Putting components into a pre-assembled platform means teams can just add business logic. Less likely to be missing things like metrics, because they come with the pre-assembled platform.
  • Guarantees standard and consistent metrics. Reduces MTTD (mean time to detect) and MTTR (mean time to recover)
  • Maintenance vs convenience – it’s easier for the platform team to include just the basics, and easier for app teams to have more included. The solution is layers and flavors: a base platform plus add-ons.
  • Testing – running other teams’ microservice tests when upgrading the platform tests the platform upgrade
  • Design for configuration and hooks
  • API semantic versioning (like Maven versions)
  • Use conventional changelog tooling to automatically create the changelog (see the example after this list)
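
For context, conventional changelog tooling parses commit messages written in a fixed format; roughly (hypothetical messages, not from the talk):

    feat(metrics): add per-endpoint latency dashboard   -> minor version bump
    fix(rpc): close idle connections on shutdown        -> patch version bump
    feat(api)!: remove deprecated v1 endpoints          -> major (breaking) bump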

Automation and Tooling

  • Provide a CLI for a common dev experience – allows scripting and a consistent environment each time. Ensures it can run locally and in the cloud
  • Local development is fast – use a local debugger and curl. Can still test with a container so it mirrors the prod config
  • Provide first-class mocks for components in the pre-assembled platform. That facilitates determining where the problem lives, and makes it easier to write reliable automated tests.
  • Provide facilities to generate mock data
  • Need a testing API so different groups can work on mocks. Component teams create mocks against standard APIs
  • “Production is war” – there are different levels of ops experience. Use tools to avoid shooting yourself in the foot. Ex: pipelines for deployment/rollback, automated canary analysis
  • Dashboards provide a consolidated view for ops. Having the platform generate dashboards and alerts standardizes them. It also allows for automated analytics and tooling.

Provided warning about not just copying Netflix. PaaS is for certain use cases. Components help regardless.