[kcdc 2025] ai killed your privacy tools

Speaker: Ben Dechrai

Bluesky: @bendechr.ai

For more, see the table of contents


General

  • Privacy risks
  • We thought robots would do physical labor for us so we could relax, like Rosie on The Jetsons.
  • We have robots that clean floors, mow lawns, farm, and build cars. They are single purpose.
  • The best robots are software.
  • Single-purpose robots are so good because the software focuses on that one thing.
  • A child can lift an apple. It's cool that a humanoid robot can too, but it's not game changing.
  • AI can identify a location from a picture based on the background.
  • Much faster than a human comparing images.
  • AI gives the statistically most likely next word (see the sketch after this list).
  • Humans don't like to be wrong. LLMs are modeled on human data, so they also don't like to be wrong and will make things up. You have to tell them in the prompt not to do that.
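
A minimal sketch of what "statistically most likely next word" means. The tiny vocabulary and the scores are made up for illustration; a real LLM scores tens of thousands of tokens:

    import numpy as np

    # Hypothetical scores (logits) the model assigns to each candidate token.
    vocab = ["cat", "dog", "privacy", "robot"]
    logits = np.array([1.2, 0.3, 2.8, 0.9])

    # Softmax turns raw scores into a probability distribution.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Greedy decoding: always emit the statistically most likely next word.
    print(vocab[int(np.argmax(probs))])  # -> "privacy"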

Creativity vs Imagination

  • Our downfall is how successful this is. It's killing creativity.
  • Creativity and imagination are different.
  • Creativity is making a sandwich.
  • Imagination is what goes into the sandwich.

Australia experiment

  • Australia does a census every 5 years.
  • They tried to map 5% of the data from 2011 to 2006.
  • In 2016, the data was stored with a profile for 18 months.
  • They said they would keep the info anonymous. It was not.
  • SLK581 – statistical linkage key 581, a 14-character key used as a unique ID.
  • It didn't make the data anonymous. The key was generated algorithmically from last name, birth date, and gender.
  • Many hashing algorithms generate hashes with distinct formats, so you know which one was used. Then you can create a rainbow table for that algorithm and run it against the census database.
  • Knowing the pattern for how the key was generated greatly reduces the number of hashes to try – about 36K if you know a person's name. That lets you find the seed the hashing algorithm used (see the sketch after this list).
  • This isn't even AI; it's programmatic.
  • Ask an LLM to find information in the data set. Ex: find people who match a profession.
  • Play with at https://slk581.bendechr.ai
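
A minimal sketch of why a hashed linkage key like this falls to brute force. The exact SLK-581 layout wasn't in my notes, so the key format below (three surname letters plus birth date and a sex code) is an assumption for illustration; the point is that once you know a surname, the remaining input space is only tens of thousands of candidates:

    import hashlib
    from datetime import date, timedelta

    def slk(last_name: str, dob: date, sex: str) -> str:
        # Assumed layout: 2nd, 3rd, and 5th letters of the surname,
        # then DDMMYYYY, then a sex code. Padding covers short names.
        name = last_name.upper() + "999"
        return name[1] + name[2] + name[4] + dob.strftime("%d%m%Y") + sex

    def brute_force(target_hash: str, last_name: str) -> str | None:
        # Knowing the surname leaves ~36,500 birth dates per century
        # times two sex codes -- trivial to enumerate.
        day = date(1900, 1, 1)
        while day < date(2000, 1, 1):
            for sex in ("1", "2"):
                candidate = slk(last_name, day, sex)
                if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                    return candidate
            day += timedelta(days=1)
        return None

    # The "anonymous" stored value leaks once an attacker guesses the surname.
    stored = hashlib.sha256(slk("SMITH", date(1984, 7, 2), "2").encode()).hexdigest()
    print(brute_force(stored, "SMITH"))  # recovers the key in seconds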

Cross Platform Identity Linking

  • Match patterns across social media accounts to link "anonymous" accounts.
  • Includes writing style, typos, and idioms (see the sketch after this list).
  • Cambridge Analytica was doing this in 2016.
  • Now it only costs $10/month.
  • An MCP server exposes data to an LLM, which can enhance the ability to break privacy.
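
A minimal sketch of the writing-style linking idea using character n-grams; scikit-learn and the sample posts are my additions, not something from the talk. Shared typos and idioms show up as high similarity even when the topics differ:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # One post from a known account and two "anonymous" posts (made up).
    known  = "definately agree tbh, thats how i'd of done it aswell"
    anon_a = "thats definately wrong imo, i'd of gone the other way tbh"
    anon_b = "I respectfully disagree; the evidence suggests otherwise."

    # Character n-grams capture typos and idioms, not just vocabulary.
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    matrix = vec.fit_transform([known, anon_a, anon_b])

    # Higher cosine similarity -> more likely the same author.
    print(cosine_similarity(matrix[0], matrix[1:]))  # anon_a scores much higher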

Chatbot with employee data

  • Acme AI Solutions (not clear if a real company or made up for the example)
  • Ask the chatbot about employees, like "do employees like pets?"
  • Controls include ensuring queries are for aggregates and that the result set has at least 6 rows, to try to protect specific employee data (see the sketch after this list).
  • The LLM described what data it can/can't get.
  • The Claude backend doesn't limit itself to one query at a time. It can infer the next logical step based on results.
  • https://slk581.bendechr.ai
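
A minimal sketch of the minimum-group-size control described above; the threshold of 6 is from the talk, while the function and data are my own illustration. Note that it guards each query in isolation, which is exactly why a backend that chains queries and infers next steps can still difference its way down to an individual:

    MIN_GROUP_SIZE = 6  # threshold mentioned in the talk

    employees = [
        {"name": "A", "has_pet": True,  "salary": 150_000},
        {"name": "B", "has_pet": False, "salary": 95_000},
        # ...imagine many more rows
    ]

    def aggregate_query(rows, predicate, field):
        # Answer only aggregates, and refuse when the matching group
        # is small enough to identify individuals.
        matches = [r for r in rows if predicate(r)]
        if len(matches) < MIN_GROUP_SIZE:
            raise ValueError("refused: result set below minimum group size")
        return sum(r[field] for r in matches) / len(matches)

    # One query is blocked, but a sequence of slightly different queries
    # (>150K, then >140K, ...) lets a caller difference the results and
    # isolate one person.
    try:
        aggregate_query(employees, lambda r: r["salary"] > 100_000, "salary")
    except ValueError as e:
        print(e)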

Target

AI can

  • Predict shopping patterns
  • Identify location without GPS
  • Find API weaknesses
  • etc.

eLLephaMts never forget

[cute play on elephants with LLM]

  • Ask the model to repeat the word "company" many times.
  • After it repeats the word a lot of times, it starts giving out other internal info.

What can I do?

  • Only store what you need to store.
  • Separate data where possible. The employee database shouldn't include data used by the chatbot.
  • The more data you store, the faster a hash can be reverse engineered.
  • Rate limiting – LLMs are faster than humans; slow them down without degrading the human experience (see the sketch after this list).
  • Encrypt data.
  • Context analysis – do the questions seem like they are trying to get at specific data? Ex: how many people earn more than $150K, how many earn more than $200K, how many earn between $225K and $250K. You can use an LLM to protect against malicious input from users.
  • Prompt engineering – give the LLM constraints on how to answer. Ex: avoid cyclic reasoning, to prevent confusing it into giving too much info.
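
A minimal token-bucket rate limiter as one way to implement the rate-limiting bullet; the class, capacity, and refill rate are my own illustration. A human typing questions never hits a cap like this, while a scripted LLM client does:

    import time

    class TokenBucket:
        # Allow short bursts up to `capacity`, refilling at `rate` tokens/sec.
        def __init__(self, capacity: int = 5, rate: float = 0.2):
            self.capacity = capacity
            self.rate = rate
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    bucket = TokenBucket()
    for i in range(8):            # a burst, as an LLM-driven client would send
        print(i, bucket.allow())  # the first 5 pass, the rest are throttled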

Homomorphic Encryption

  • Use AI to see how well it was done.
  • With homomorphic encryption, you can do math on encrypted values without decrypting them or knowing the keys (see the sketch after this list).
  • https://homomorphic.bendechr.ai
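
A minimal sketch of additively homomorphic math using the python-paillier (phe) package; this is my own example, separate from the speaker's demo at the URL above:

    from phe import paillier  # pip install phe

    # The key owner generates the keypair; only they can decrypt.
    public_key, private_key = paillier.generate_paillier_keypair()

    # An untrusted party holds only ciphertexts...
    enc_a = public_key.encrypt(150_000)
    enc_b = public_key.encrypt(95_000)

    # ...yet can still compute: Paillier supports adding ciphertexts
    # and multiplying a ciphertext by a plaintext constant.
    enc_total = enc_a + enc_b
    enc_double = enc_total * 2

    # Only the key owner ever sees the plaintext results.
    print(private_key.decrypt(enc_total))   # 245000
    print(private_key.decrypt(enc_double))  # 490000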

My take

The examples/demos were great. It was nice seeing the build-up to them. I also appreciated the URLs of the demos being on the screen.

QCon 2018 – Data, GDPR & Privacy

Title: Data, GDPR & Privacy – Doing it right without losing it all
Speaker: Amie Durr

See the table of contents for more blog posts from the conference.


Goal: send the right message to the right person at the right time using the right channel (ex: email, text, etc.)

One company handles 25% of all non-spam email traffic

Confidence

  • We don't trust brands with personal information – 2/3 of people overall. Nobody in the room did.
  • Employees at GDPR-compliant companies also don't believe their company is compliant.

Recent thefts

  • Ticketfly – emails and hashed passwords. Shut down their website.
  • Panera – email, name, phone, city, last 4 digits of credit card number.
  • MyHeritage – email and hashed passwords.
  • MyFitnessPal – name, weight, etc.

Need to consider

  • What do you store?
  • For how long do you store it?

Data and privacy regulations

  • CASL
  • CAN-SPAM
  • Privacy Shield – for data leaving Europe
  • GDPR – EU
  • Future: Germany, Australia, South America
  • It's not about specific regulations. You need to care about data and privacy. It's part of the brand. Customers will leave.

Demand for data scientists far exceeds supply

Build trust without stifling innovation

  • accountability – what do you do with the data, who is responsible, continue to focus on data perception, audit/clean data, make it easy to see what data you have and how to opt out/delete
  • privacy by design – innovate without doing harm, don't get hacked, be user centric, move data to the individual so you aren't storing it, consider what is actually PII vs what feels like PII. Anonymize both.

Remember user-entered data. If the user types it in, anything could be in there.

What they did

  • dropped log storage to 30 days. You have 30 days to comply with requests to delete data, so log files are handled by design
  • hash email recipients (see the sketch after this list)
  • removed unused tracking data
  • communicated with customers
  • kept anonymized PII data, support inquiries, etc.
  • some customers feel 30 days is too long, so they are looking at going beyond the law
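
A minimal sketch of what hashing email recipients might look like; the keyed-hash (HMAC) approach and all the names here are my own illustration. A bare unkeyed hash of an email address can be reversed by hashing a list of known addresses, so the secret key matters:

    import hashlib
    import hmac

    # Secret pepper kept outside the data store (assumed; in practice
    # it would live in a vault/KMS).
    PEPPER = b"load-me-from-a-secret-store"

    def pseudonymize(email: str) -> str:
        # Stable pseudonym for joining log lines without storing the
        # address. HMAC rather than a plain hash, so an attacker with
        # the logs can't just hash known addresses and match them up.
        return hmac.new(PEPPER, email.lower().encode(), hashlib.sha256).hexdigest()

    print(pseudonymize("person@example.com"))
    print(pseudonymize("Person@example.com"))  # same pseudonym after normalizing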

You can delete parts of the data vs everything (ex: Stack Overflow)

Brand and PR vs actually keeping users safe [like what happened with accessibility and Section 508]

My take

Good talk. I liked the level of detail and the concrete examples. I would have liked a refresher on GDPR, but there was enough to tell me what to Google. That helped with what I didn't know (or forgot).

 

QCon 2018 – Privacy Ethics – A Big Data Problem

Title: Privacy Ethics – A Big Data Problem
Speaker: Raghu Gollamudi

See the table of contents for more blog posts from the conference.


GDPR (General Data Protection Regulation) – took effect May 25, 2018

Data is exploding

  • The cost of storing data is so low that it is essentially free.
  • 250 petabytes of data a month. What comes after petabytes?
  • Companies get more data when they acquire other companies.
  • IoT data is ending up in massive data lakes.

Sensitive information – varies by domain

  • Usernames
  • user base – customers could be sensitive for a law firm
  • location – the issue with a fitness tracker identifying the location of a military base
  • purchases – disclosing someone is pregnant before they tell people
  • employee data

Changes over time – collecting more data after the decision was made to log

Privacy vs security

  • privacy – an individual right; focus is on how data is used; depends on context
  • security – protect information; focus is on confidentiality/accessibility; explicit controls
  • privacy is an under-invested market. Security is more mature [but still an issue].

Solutions

  • culture
  • invest more – GDPR fines are orders of magnitude higher than privacy budgets
  • include privacy in performance reviews
  • barrier to entry – you must do at least what Facebook does if you're in that space
  • security – encrypt, anonymization/pseudonymization, audit logs, store credentials in a vault
  • reuse – use the solutions available to you
  • design for data integrity, authorization, and a conservative approach to privacy settings
  • include privacy-related tasks in sprints
  • design in data retention – how long do you need it for?
  • automation – label data (tag/classify/confidence score) so you can automate compliance. The score helps reduce false positives (see the sketch after this list).
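
A minimal sketch of the tag/classify/confidence-score idea; the regexes and scores are my own illustration, not anything shown in the session. Low-confidence matches can be routed to human review, which is how the score reduces false positives:

    import re

    # Very rough patterns; real classifiers combine many signals,
    # which is where the confidence score earns its keep.
    RULES = [
        ("email", re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"), 0.95),
        ("phone", re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), 0.80),
        ("ssn",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), 0.90),
    ]

    def classify(value: str):
        # Return (tag, confidence) pairs for a field value.
        return [(tag, conf) for tag, rx, conf in RULES if rx.search(value)]

    print(classify("reach me at jane@example.com or 555-867-5309"))
    # -> [('email', 0.95), ('phone', 0.8)]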

The EU currently has the strictest privacy policy; Germany and Brazil are working on theirs. There was a debate on whether GDPR applies to EU citizens or residents. There was mostly agreement that physical location matters.

My take

I was expecting this to be more technical. There was a little about the implications of big data, like automation, but it felt glossed over. I would have liked to see an example of a technique that involves big data. The session was fine. It covered a lot of areas in passing, which is good for an opening session – it lets you know where to dig in. I think not having the "what you will learn" section in the abstract made it harder to know what to expect. Maybe QCon should make this mandatory?