[uberconf 2023] Jenkins vs GitHub Actions

Speaker: Brent Laster

@BrentCLaster

For more, see the table of contents


GitHub Actions

  • Several years old
  • Actions = the framework; an action = a building block (ex: checking out code)
  • Automated workflows, call actions
  • Based on repository operations – ex: push, pull request, issue comment
  • Can combine/share
  • Migration tool from other CI providers
  • Repository dispatch events – for triggering workflows from things outside GitHub. Good while migrating (see the trigger sketch after this list).
  • Workflows contain jobs, jobs contain steps
  • All public actions: https://github.com/marketplace?type=actions. Anyone can submit, so check who created an action (ex: verified creator). Can see the source code of any action, ex: last updated date, how many contributors
  • Custom actions can be Docker, JavaScript, or composite (multiple workflow steps)
  • workflow_dispatch – can start interactively
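
A rough sketch of how those two triggers look in a workflow file (the custom event type name is made up):

on:
  workflow_dispatch:           # start manually from the Actions tab
  repository_dispatch:
    types: [jenkins-build]     # hypothetical event sent from outside GitHub via the REST API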

Cost

  • Free for public repos or self-hosted runners (aka running on your own servers)
  • For private repos, 2K free minutes per month and 500MB of storage. Minutes reset each month. Storage includes GitHub Packages
  • Minutes multiplier when using GitHub-hosted runners – Linux x1, Windows x2, Mac x10
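
As a quick worked example of the multiplier (my arithmetic, not the speaker’s): a job that takes 10 minutes on a GitHub-hosted macOS runner bills as 10 x 10 = 100 minutes against the private-repo quota.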

Directory

  • .github directory
  • .github/workflows/*.yml

Key differences from Jenkins

  • GitHub Actions run in parallel by default; Jenkins runs serially by default.
  • GitHub Actions jobs are like Jenkins stages
  • GitHub Actions actions are like Jenkins plugins
  • Less config for GitHub Actions
  • A GitHub Actions workflow file can have any name; only the yaml extension matters. (An action needs an action.yaml for metadata to be reusable, though)
  • GitHub Actions workflows always live in GitHub
  • GitHub Actions can have different workflows for different events
  • Jenkins supports other reports
  • Jenkins pipeline stages can run on different nodes
  • GitHub uses reusable workflows where Jenkins uses pipeline libraries. Use the workflow_call trigger to make a workflow callable (sketch below).
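
A minimal sketch of a reusable workflow and a caller, assuming hypothetical repo and file names:

# .github/workflows/reusable-deploy.yml in a shared repo (hypothetical)
on:
  workflow_call:
    inputs:
      environment:
        type: string
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploying to ${{ inputs.environment }}"

# caller workflow in a client repo
on: [push]
jobs:
  call-deploy:
    uses: my-org/shared-workflows/.github/workflows/reusable-deploy.yml@v1
    with:
      environment: staging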

Structure

  • on
  • jobs
    • job
      • runner
      • steps
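
In a workflow yaml file, that skeleton looks roughly like this (names are placeholders):

on: [push]                   # trigger
jobs:
  build:                     # a job
    runs-on: ubuntu-latest   # runner
    steps:                   # steps run in order on the same runner
      - run: echo "hello"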

Code

  • runs-on – which runner to use. Can be a custom runner or a GitHub provided/hosted runner. Steps in a job run on the same runner. Fresh VM per job. Docker runners are self-hosted.
  • uses – references an action by path relative to its repo. After the path to the action, can have @label for a tag/version number/etc
  • on.schedule to run on a schedule – can use cron
  • needs – sets a dependency so jobs run sequentially
  • if: success(), always(), cancelled(), failure()
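
A sketch pulling these keywords together (job names and the build command are made up):

on:
  push:
  schedule:
    - cron: '0 6 * * 1'             # also run every Monday at 06:00 UTC
jobs:
  build:
    runs-on: ubuntu-latest          # GitHub-hosted runner; fresh VM for this job
    steps:
      - uses: actions/checkout@v3   # @v3 pins a version of the action
      - run: ./gradlew build        # hypothetical build command
  report:
    needs: build                    # runs after build instead of in parallel
    if: always()                    # runs even if build failed
    runs-on: ubuntu-latest
    steps:
      - run: echo "build finished"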

UI

  • Actions tab lists workflows. Can see runs over time.
  • Like the stage view of Jenkins
  • Don’t know of a way to aggregate reporting on an org level [me neither; but worth asking]

Bonus: Online IDE

  • Going to your repo and pressing the period key changes your URL from github.com to github.dev. This shows your repo in VS Code.

Migration

  • Code – move to GitHub if not there already – all code/projects? History? Branches? Easy to move from one git host to another
  • Automation – all projects? Do people know what the Jenkinsfiles do? Custom scripting/kludges? Old versions?
  • Infrastructure – custom setup/config/os versions? Can you switch from Mac/Windows to Linux?
  • Users – what are the appropriate permissions? Are users informed? Trained?
  • Tips – delete outdated/unneeded projects, standardize where you can and make reusable workflows, allow enough time to migrate, require training, do a test conversion
  • Don’t want to migrate unicorns
  • GitHub Actions Importer – a tool for bootstrapping migrations, not a complete solution. Attempts to read AzDO/Bamboo/CircleCI/GitLab/Jenkins/Travis. Migrate what you can and assess what you can’t. Runs in a Docker container as an extension to the GitHub CLI.
  • Importer commands: update (to latest version), version, configure (interactive prompt to configure credentials), audit (looks at current footprint), forecast (predicts Actions usage), dry-run, migrate (creates initial files)
  • Good for insights. Can be more trouble than it’s worth to use in full. Can write a custom transformer in Ruby if you need something not built in

My take

I’ve used GitHub Actions only a tiny bit, but lots of Jenkins. The phrase “like in Jenkins” came up a lot, which was helpful in comparing them and learning faster. As were the tables and the code comparisons. The “.” shortcut is cool (not about Actions, but still useful).

[2023 kcdc] busy developer’s guide to next generation languages

Speaker: Ted Neward

Twitter: @tedneward

For more, see the table of contents.


General

  • Covering 10 next gen languages
  • The languages we use here are old enough to drink
  • However, the world has changed. Problems have changed
  • We got lazy and added features to general purpose programming languages
  • “How many of you like abstractFactory.impl.impl”
  • If don’t know all features of your chosen language, maybe it is too complicated
  • Excel is world’s most popular functional programming language. If you change a cell, everything in the dependency tree changes.

Crystal

  • crystal-lang.org
  • Online playground – https://play.crystal-lang.org/#/cr
  • native compilation via the LLVM (low level virtual machine) toolchain
  • interoperates with other LLVM-based platforms (ex: GraalVM)
  • heavily inspired by Ruby but has performance of native
  • created specifically to tap into AI
  • statically type checked/type inferenced
  • non-nillable types (compile time nil checks)
  • macro metaprogramming system
  • creates an executable

Julia

  • julialang.org
  • interactive shell: https://julialang.org/learning/tryjulia/
  • decently known in R/math/science community
  • compiled (via LLVM)
  • direct support for complex and rational numbers
  • OO and functional via multiple dispatch
  • dynamically typed
  • parallel/async/multithreaded
  • metaprogramming (code is data; data is code) – ex: Meta.parse(“1 + 1”)
  • good candidate for parallelizable math
  • can call from C

IO

  • iolanguage.org
  • IO for Graal
  • Development has ceased. Original creator proved his point. Others set it up on top of other languages which are active. Ex: IO for Java
  • homoiconic language – all values are objects; everything is a message
  • no keywords
  • will hurt brain until it clicks

Flix

  • flix.dev
  • functional first imperative logic language
  • runs on JVM
  • algebraic data types and pattern matching
  • Java took these features
  • easy to mix pure and impure code (re side effects)
  • First class Datalog constraints (based on Prolog) – rules and rule chaining

Pony

  • ponylang.io
  • statically typed, OO
  • uses actor model
  • capabilities-secure: type safe, memory safe, exception safe, no deadlocks, no data races
  • high performance
  • philosophy: get stuff done
  • guarantees if compiles, won’t crash, etc

My take

Good high level overview of many things. Good to see code examples for each as well. Also interesting that he presented out of HTML and Dropbox. It worked well. I left when there were 10 minutes left (and 5 languages left) because my session was right after this one. It was hard to leave; the session was excellent.

[2023 kcdc] DRYing out your GitLab Pipeline

Speaker: Lynn Owens

For more, see the table of contents.


Intro/Problem

  • Every GitLab project has its own .gitlab-ci.yml file. Great for getting started
  • Quickly have hundreds of projects
  • Goal is to eliminate copy/paste by centralizing in a few projects

What NAIC has

  • 200+ projects maintained by 11 teams in 2 dev orgs
  • Pipeline is inner source
  • Version 6 of pipeline; working on version 7
  • Reduced maintenance burden by making change once and not in each project
  • Hosted directly on gitlab.com

Milestone 1 – Hidden jobs for pipeline project

  • GitLab has “hidden” jobs
  • Start with a period
  • Don’t appear in any pipeline; just for the common code
  • The “pipeline” project has a .gitlab-ci-base.yml which contains common code
  • Common code makes no assumptions about teams and is configurable for all known use cases
  • v1 was about two dozen lines of common code
  • The client projects include the pipeline code (the include can point to any project in GitLab, so it doesn’t need to be your own)
include:
  - project: 'NAIC/pipeline'
    file: '.gitlab-ci-base.yml'
  • Then added jobs that extend the hidden jobs to call the functionality in the base code, where .deploy_s3 is defined in the base code
deploy_foo:
  stage: deploy
  extends: .deploy_s3
  variables:
    ...

Suggested practices

  • Advises against pinning the pipeline to a tag because you don’t get bug fixes and everyone has to upgrade manually
  • Don’t include stages in the pipeline as it forces one opinion on everyone. Many groups had written a pipeline for their own use case, and they aren’t all the same.

Milestone 2 – Profiles

  • Found a half dozen use cases. Ex: Maven for Java, NPM building Angular, etc.
  • The .gitlab-ci.yml was a copy/paste of the others for the same use case.
  • Made profiles/maven-java.yml and the like in the common pipeline project (include sketch below)
  • Profiles are not one-size-fits-all; there are a bunch of different ones, and you can still use the milestone 1 approach.
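
Presumably a client project pulls in a profile the same way it pulls in the base file; a sketch, assuming the profile lives in the same NAIC/pipeline project:

include:
  - project: 'NAIC/pipeline'
    file: 'profiles/maven-java.yml'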

Milestone 3 – Pipeline scripts

  • Common code like logging, calling rest apis, etc
  • Switched from bash scripting to Python so the common code could live in modules and the modules could be unit tested

Options to get scripts

  • Could have the pipeline create a tar/zip and upload it to a repo. This is a little slow
  • Could have a global before_script that does a git clone of pipeline-scripts (sketch below). Uses a network connection
  • Could bake the scripts into an image. Requires a pipeline to build the image
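
A minimal sketch of the before_script option, assuming a hypothetical NAIC/pipeline-scripts project on gitlab.com:

default:
  before_script:
    - git clone https://gitlab.com/NAIC/pipeline-scripts.git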

If he were doing it again, he wouldn’t create a separate pipeline-scripts project because it is tightly coupled to the pipeline. That doesn’t change the problem of using the scripts, though.

Testing

  • If client projects are all using the default branch, small changes will affect them all.
  • Use a testing framework for script code (ex: python/go)
  • Follow development practices
  • Write a sample app for each profile. Have the common pipeline trigger a downstream pipeline on this project (sketch after this list). For any merge to master, the downstream jobs must pass.
  • Before major refactors, inventory the profile jobs and audit afterwards.
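
A sketch of such a downstream trigger job, with a hypothetical sample-app project name:

test-maven-java-profile:
  stage: test
  trigger:
    project: 'NAIC/sample-maven-java'   # hypothetical sample app for the maven-java profile
    strategy: depend                    # this job fails if the downstream pipeline fails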

Milestone 4 – Profile Fragments

  • Had about 24 profiles (ex: maven-java-jar, maven-java-pom, maven-java-k8s, etc)
  • Typically three components – build tool, language, deployment method
  • These profiles had a lot of copy/paste
  • Decomposed the profiles into fragments – ex: maven, npm, java, angular, k8s, s3 (sketch below)
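
Presumably a profile then becomes little more than a composition of fragments; a sketch with hypothetical file names inside the pipeline project:

# profiles/maven-java-k8s.yml
include:
  - local: 'fragments/maven.yml'
  - local: 'fragments/java.yml'
  - local: 'fragments/k8s.yml'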

Selling the idea

  • Needed to convince people to use this pipeline instead of writing their own or using another team’s
  • Offer flexibility
  • Show value
  • Follow semantic versioning to a T (he tags every merge to master of the pipeline even though he encourages use of the default branch; the tags are good rollback points or useful if a project needs something older)
  • Changelog everything
  • Document well
  • Train and evangelize
  • Record training so have library

My take

This was a good case study, and it was useful to see concrete examples and techniques. I wish we could see the code, but I understand that it belongs to their org.