QCon 2018 – Behavioral Economics and Chatbots

Title: Behavioral Economics and Chatbots
Speaker: Jim Clark

See the table of contents for more blog posts from the conference.


A urinal with a fly painted in it is a classic example of behavioral economics. The idea is to give users something to aim at.

Psychology

  • Nudge theory – subtly alter decision making
  • Value-action gaps – the difference between how we want to work and how we actually work
  • Information deficits – getting the right information when you need it
  • Diffusion of innovation – why some teams succeed while others get stuck

OODA loop

  • Observe – notice an unmet goal
  • Orient – provide context
  • Decide – determine whether action should be taken, and choose the appropriate action
  • Act – take the appropriate action (a minimal loop is sketched below)
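To make the loop concrete, here is a minimal sketch of OODA as one pass of a chatbot event handler. This is my own illustration, not code from the talk; the event shape and every name in it are invented.

```python
# Hypothetical sketch: OODA as one pass of a chatbot event handler.
# The event shape and helper logic are invented stand-ins for whatever
# a real bot would inspect.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Nudge:
    message: str         # Orient: context shown to the user
    actions: list[str]   # Decide: the choices offered


def ooda_step(event: dict) -> Optional[Nudge]:
    # Observe: is there an unmet goal in this event?
    goal = event.get("unmet_goal")
    if goal is None:
        return None  # nothing to nudge about
    # Orient: provide context for why this matters now.
    message = f"Noticed an unmet goal: {goal}. Want to fix it?"
    # Decide: offer appropriate actions; Act is left to the user.
    return Nudge(message, ["accept", "ignore"])


print(ooda_step({"unmet_goal": "library out of date"}))
```

The design point is that Act stays with the human; the bot only lowers the cost of acting.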

Demo – follow the leader

  • find out what other teams are doing and whether they have hit issues
  • the chatbot notices when a library version changes and whether it differs from what other projects or the standard use
  • the chatbot offers to set the new version as the target for others that should use the later version, attesting that it is safe
  • the chatbot can then nudge users on the older version to upgrade
  • you can tell the chatbot to upgrade it for you, and it creates the pull request for the update. Example actions: accept/set target/ignore (see the sketch after this list)
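A minimal sketch of the drift check behind that nudge, assuming the "standard" version is simply whatever most projects already use. The project names, versions, and data shapes are all invented; the real chatbot presumably watches commits rather than a static dict.

```python
# Hypothetical sketch: flag projects whose library version differs from
# the most common ("standard") one. All data here is invented.

from collections import Counter


def version_nudges(projects: dict[str, str], library: str) -> list[str]:
    """projects maps project name -> version of `library` it uses."""
    standard, _ = Counter(projects.values()).most_common(1)[0]
    return [
        f"{name} uses {library} {version}, but most projects use {standard}. "
        "Actions: accept / set target / ignore"
        for name, version in projects.items()
        if version != standard
    ]


print(version_nudges(
    {"orders": "2.3.1", "billing": "2.3.1", "search": "2.1.0"},
    "commons-lib"))
```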

Demo – Libbis? (he said only two people call it that)

  • When you change code in one project, the projects that copied it get a pull request with the same change. This is for reuse at a granularity smaller than libraries. [seems like a hack to enable copy/paste reuse]
  • Recognizes matching code fingerprints (a toy fingerprint is sketched after this list)
  • Action: accept/reject pull request
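For intuition, here is a toy fingerprint: normalize whitespace and hash the result, so a copied snippet still matches after reformatting. This is my own sketch, not the talk's algorithm; real fingerprinting is typically token- or AST-based and far more robust.

```python
# Toy code fingerprint: collapse whitespace, then hash. A copied snippet
# matches even if one copy was reformatted. Purely illustrative.

import hashlib


def fingerprint(source: str) -> str:
    normalized = " ".join(source.split())  # ignore whitespace differences
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]


original = "def add(x, y):\n    return x + y\n"
copied = "def add(x, y):  return x + y"  # same code, different layout

# Matching fingerprints would be the trigger to offer a pull request.
assert fingerprint(original) == fingerprint(copied)
print(fingerprint(original))
```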

Demo – Value Action Gaps

  • Reports vulnerable libraries and whether a fixed version is available
  • Artifactory can block further downloads of, or builds with, the vulnerable artifacts
  • Action: deal with violation/block download/upgrade
  • Making it a command is important: the nudge lowers the barrier to actually acting, and the suggestions arrive when they are timely. (A minimal report is sketched below.)
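A minimal sketch of that report, assuming a simple advisory table of vulnerable versions and their fixes. The library names and advisory data are invented; real tooling would query a vulnerability database.

```python
# Hypothetical vulnerability nudge: flag vulnerable dependencies and say
# whether a fixed version exists. The advisory data is invented.

# library -> (vulnerable version, fixed version or None)
ADVISORIES = {"log-lib": ("1.2.0", "1.2.1"), "xml-lib": ("3.0.0", None)}


def report(dependencies: dict[str, str]) -> list[str]:
    lines = []
    for lib, version in dependencies.items():
        advisory = ADVISORIES.get(lib)
        if advisory and advisory[0] == version:
            fixed = advisory[1]
            note = f"fix available: {fixed}" if fixed else "no fix yet"
            lines.append(
                f"{lib} {version} is vulnerable ({note}). "
                "Actions: deal with violation / block download / upgrade")
    return lines


print(report({"log-lib": "1.2.0", "json-lib": "2.0.0"}))
```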

Demo – Innovation diffusion

  • Shared goals
  • When one project gets a new feature or microservice, encourage others to adopt it as well.

General

  • Always be innovating – you need to be able to try things without getting permission from a committee.
  • The chatbot committed so much to GitHub that GitHub recommended it as a code reviewer.
  • Make it easy to create new projects; people won’t do it if it is hard. If the ceremony isn’t something you want, take it away.
  • Lower the barrier to good ideas spreading.
  • Bots are not mobile CLIs; they are agents for collaboration that automate in a social context.

My take

This talk used live demos in Slack. Very cool! The range of benefits to developers is really useful. Seeing real Slack examples and real code was great. I missed a little bit because I had to make a change to my deck for tomorrow’s presentation. (Java 11 goes feature complete tomorrow and a new feature was added.) I wish I could rate this session higher than “green”.

QCon 2018 – Smart Speakers: Designing for the Human

Title: Designing for the Human
Speaker: Charles Berg (from Google Home)

See the table of contents for more blog posts from the conference.


Over half the audience has smart speakers at home (Echo, Alexa, etc.).

Most common uses of smart speakers

  • Communication (calls, texts)
  • IOT (turn on lights, fans)
  • Clock (alarms/timers)
  • Music

History

  • Phones encourage checking out of the environment. Even a notification gets you back into your phone.
  • We focus on features and apps, but not on why the user is doing something. An app-first, human-second mindset is a problem.

Smart speakers

  • Unlike smartphones, they are fixed in space; you direct your voice to the device.
  • You place it near where you plan to use it.
  • That placement and usage provide context.

Agile

  • Ask questions
  • Do research before you design
  • Storyboards

Calling use case

  • Call the dentist – should be seamless
  • Call Walgreens – which one?
  • Hands-free calling of friends is frequent

Process

  • Understand context.
  • In a medium to large company, there is a lot of research already going on.
  • Find archived research; you don’t need to start from scratch.
  • Interview – start internal to the team, then friends/family
  • Quickly expand the interviews to a broader audience.
  • Pitch teammates on ideas based on the research
  • Identify a lead designer. Then identify themes (a summary of the research), brainstorm, and create a user journey map.
  • Physical user testing – they built two rooms with a mattress rather than just talking about the scenario.

Smart speakers and dialog

  • There is a smart speaker style guide.
  • Develop variations of each response (one approach is sketched below)
  • Test with actual people and observe the variation
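One common way to develop variations is to keep several phrasings per prompt and pick one at random so the device sounds less robotic. A minimal sketch with invented phrasings (this is not from Google’s style guide):

```python
# Hypothetical sketch: random response variants for one intent so the
# speaker doesn't repeat itself verbatim. Phrasings are invented.

import random

VARIANTS = {
    "confirm_call": [
        "Sure, calling {name} now.",
        "OK, I'll call {name}.",
        "Calling {name}.",
    ],
}


def speak(intent: str, **slots: str) -> str:
    return random.choice(VARIANTS[intent]).format(**slots)


print(speak("confirm_call", name="the dentist"))
```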

My take

This was fun. It was a good mix of smart speakers and user-focused design. I would have liked more of the examples to be about speakers (vs an example about pretend stock quotes). Reading the abstract, I wasn’t sure how much to expect of each type of information. I don’t know if it was reasonable to expect, but I was expecting more on the smart speakers. And then right after I typed this, there was more on the speakers. Good. So maybe it wasn’t the amount of information, but the distribution of it. And the Q&A definitely went back to speakers.

 

QCon 2018 – Rethinking HCI with Neural Interfaces

Title: Rethinking HCI with Neural Interfaces
Speaker: Adam Berenzweig @madadam

See the table of contents for more blog posts from the conference.


Minority Report analysis

  • why do you need gloves to interface?
  • ergonomics – it is tiring to hold your arm up

History of UI Paradigm Shifts

  • Command line – we still use the command line; just not exclusively
  • Mouse, graphics – the original Apple. A design innovation, not just tech
  • Minesweeper and Solitaire were built in so users could learn how to use the mouse – right click for Minesweeper and click/drag for Solitaire
  • MIT wearable computing in 1993 paved the way for Google Glass. [but successful]
  • Joysticks, gloves, body (Kinect), eye tracking, VR/AR headsets
  • He had the audience raise a hand if wearing a computer. Not many Apple Watch people in the room
  • Future: tech is always there. It knows about the world around you and is always ready

Book recommendation: Rainbows End by Vernor Vinge – an old man gets rejuvenated (or something) and comes back younger, needing to learn new tech

Intro to Neural Interfaces

  • Interface devices translate muscle movement into actions (a toy example is sketched after this list)
  • Human I/O has high bandwidth compared to typing or the like; we think faster than we can relay information, so output is the constraint.
  • Myo – for an amputee, electrodes on the arm can control a prosthetic arm.
  • Neural interfaces capture the information you would have sent to a muscle or a physical controller.
  • Lots of stuff happens in the brain, but you don’t want all of it. You want the intentional part without having to filter out everything else. The motor cortex controls muscles, so it represents voluntary control. Reading at the muscle also means you don’t have to place electrodes on the brain.
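As a toy illustration of turning muscle movement into actions: rectify a raw EMG-like signal, smooth it, and fire an event when the envelope crosses a threshold. Real neural interfaces use trained decoders; the signal, window, and threshold here are invented.

```python
# Toy EMG-style activation detector: rectify, moving-average, threshold.
# All numbers are invented for illustration.

def detect_activations(samples: list[float],
                       window: int = 5,
                       threshold: float = 0.6) -> list[int]:
    """Return sample indices where smoothed muscle activity crosses up."""
    envelope = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        chunk = [abs(s) for s in samples[start:i + 1]]  # rectify
        envelope.append(sum(chunk) / len(chunk))        # smooth
    return [i for i in range(1, len(envelope))
            if envelope[i - 1] < threshold <= envelope[i]]  # rising edges


signal = [0.1, -0.1, 0.2, 0.9, -1.0, 1.1, 0.2, 0.1, -0.1, 0.0]
print(detect_activations(signal))  # one activation during the burst: [5]
```

Each detected activation could then drive a click, a keypress, or a gesture.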

Examples

  • Touch typing without a keyboard present [not very practical, as it is hard to touch type without seeing the keys]
  • Mirrors the intention of moving muscles even if the physical attempt is blocked
  • VR/AR – more immersive experience

Designing for Neural Interfaces

  • Want to maximize control/minimize effort
  • Cognitive limits – what can people learn/retain
  • A mouse has two degrees of freedom; a laser pointer has three. There are also six degrees of freedom for control in space. The human body has more than six degrees of freedom. Are humans capable of controlling an octopus?
  • How efficient is the input compared to existing control devices?
  • It is possible to control three cursors at once, but it is exhausting – not a good design
  • Different people find different things intuitive. Which way is up?
  • Don’t simply translate existing UIs; they can evolve over time.

My take

Fun! A great mix of pictures, videos, and concepts. I learned a lot. It would be interesting to see this talk alongside the privacy/ethics track – imagine what data it could gather reading your mind/muscles.