[QCon 2019] Learning From Machines

Speaker: Ashi Krishnan @rakshesha

For other QCon blog posts, see QCon live blog table of contents

http://lfm.ashi.io

Note: this post doesn't capture information as well as my usual ones. See the impressions section at the bottom for why.

General

  • Neural networks are gradient systems – they learn by following gradients
  • Eyes see using layers – the visual system processes input in successive stages
  • Each neuron has a receptive field – the patch of input it responds to
  • Inception (an image-recognition network) classifies in one feedforward pass from beginning to end (see the sketch after this list)
  • Neural networks learn by reinforcing connections
  • Movement affects the brain
  • Neural networks can recognize when something is off about the input
  • Recognition can have the same problems as optical illusions
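A minimal sketch (mine, not from the talk) of two of these ideas: each "neuron" responds only to a small receptive field of the input, and recognition happens in one feedforward pass through stacked layers. All names and values here are illustrative.

```python
import numpy as np

def conv1d(signal, kernel):
    """Each output neuron responds only to len(kernel) adjacent
    inputs: its receptive field."""
    k = len(kernel)
    return np.array([
        np.dot(signal[i:i + k], kernel)
        for i in range(len(signal) - k + 1)
    ])

relu = lambda x: np.maximum(x, 0.0)

# An edge-like stimulus: dark on the left, bright on the right.
signal = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

# Layer 1: an edge detector with a 2-sample receptive field.
edges = relu(conv1d(signal, np.array([-1.0, 1.0])))

# Layer 2: pools layer-1 activity, so each of its neurons has a
# wider effective receptive field over the original input.
pooled = relu(conv1d(edges, np.array([1.0, 1.0])))

print(edges)   # fires only where the input jumps
print(pooled)  # deeper layer, larger effective receptive field
```

Stacking the two layers is the same one-pass, beginning-to-end flow the Inception note describes: input goes in once, activations flow forward, and the answer comes out the far end.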

Dreams

  • A GAN (generative adversarial network) can generate fake human photos
  • The discriminator is trained with two classes – real and fake
  • The generator is fed noise and learns to produce output that passes as real (see the sketch after this list)
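A toy sketch (my own, not from the talk) of that setup: a discriminator trained on two classes, real vs. fake, and a generator fed noise. Everything is 1-D with hand-derived gradients so the whole adversarial loop fits in a few lines; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator should learn to mimic it.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0    # generator G(z) = a*z + b, fed noise z ~ N(0, 1)
w, c = 0.1, 0.0    # discriminator D(x) = sigmoid(w*x + c) = P(x is real)

lr, n = 0.02, 64
for step in range(3000):
    # --- Discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    x_r = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_f = a * z + b
    d_r, d_f = sigmoid(w * x_r + c), sigmoid(w * x_f + c)
    grad_w = np.mean(-(1 - d_r) * x_r) + np.mean(d_f * x_f)
    grad_c = np.mean(-(1 - d_r)) + np.mean(d_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, n)
    x_f = a * z + b
    d_f = sigmoid(w * x_f + c)
    dx = -(1 - d_f) * w          # dL/dx_f for L = -log D(x_f)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# In this toy setup b typically drifts toward the real mean (4),
# i.e. the fake samples come to resemble the real ones.
print(f"generator: x = {a:.2f}*z + {b:.2f}")
```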

Latency

  • Efference copy – the brain keeps a copy of the motor signals it sends out to the body
  • It uses that copy to predict how the body's state will change (see the sketch after this list)
  • Sensory signals have latency (~10ms)
  • Prediction enables smooth movements and lets us anticipate touch
  • This is why cerebellum damage causes jerky movements
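A toy sketch (mine, not from the talk) of forward-model prediction: sensory feedback arrives ~10 ms late, so a controller that predicts the body's state from its own motor commands (the efference copy) tracks the true state far better than one that trusts raw feedback. All numbers are illustrative.

```python
# 1 ms simulation steps; sensory latency of 10 steps = 10 ms.
dt = 0.001
delay = 10
velocity = 2.0            # constant commanded velocity (m/s)

true_pos = 0.0
history = [0.0] * delay   # queue modeling delayed sensor readings

for t in range(100):
    # The body moves according to the motor command.
    true_pos += velocity * dt
    history.append(true_pos)

    # What the senses report right now is 10 ms old.
    sensed_pos = history.pop(0)

    # Forward model: start from the stale reading and integrate the
    # efference copy of the command over the known latency.
    predicted_pos = sensed_pos + velocity * (delay * dt)

print(f"true:      {true_pos:.4f}")
print(f"sensed:    {sensed_pos:.4f}   (lags by 10 ms of motion)")
print(f"predicted: {predicted_pos:.4f}   (latency compensated)")
```

With the prediction removed, the controller would always be acting on a position 10 ms stale, which is one way to picture the jerky movements that follow cerebellar damage.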

My impressions

I’m really sensitive to visual distraction, which meant I had trouble paying attention to the opening concepts with all the flashing and zooming images. I also felt a little dizzy during the sustained zooming and had to close my eyes a few times. (It’s not easy to trigger even mild motion sickness on a screen that size.) There was a disclaimer up front, but the effects definitely impacted my learning. Even during the calm parts, the lower right of the slide periodically alternated between her Twitter handle and her website, so I was continually drawn to that instead of the material. This became a negative feedback loop: because I didn’t hear or retain the key definitions up front, I had to google them, which meant I missed more. And it wasn’t just the beginning; the movement throughout made me struggle. I know the movement was meant to reinforce key points, but it didn’t for me. The parts I was able to follow were good, so I feel like I missed out.
