
As I wrote earlier, it’s practically impossible to keep up with all developments in AI. Microsoft, Google, and others are caught in a race to deploy as quickly as possible. Consequently, adequate safety measures are lacking. The Center for Humane Technology published a presentation on the exponential growth of AI, some of the dangers it entails, and how to cope with them (via Leo Simons).

My highlights and takeaways:

  1. “What nukes are to the physical world, AI is to the virtual world,” because
  2. AI is the total decoding and synthesizing of our reality through the foundation of our communication: symbols and language.
  3. This will heavily impact our daily lives: news items, customer-service messages, commercials, ads, education, and religious expressions will become extremely tailored, seductive, addictive, and influential (fully A/Z tested, i.e. optimized across an entire population). Think: Facebook–Cambridge Analytica on steroids. So 2024 may be the last human (US) election: future elections will be won through (AI) compute power. Also: loneliness will become a significant security threat.
  4. AI will unveil new and unforeseen patterns, relations, and correlations. Think: detecting (the poses of) humans in a room through its Wi-Fi antennas, or reconstructing the ideas and meaning behind your words from MRI brain scans.
  5. Nukes don’t make better nukes, but AI does make better AI—or, more specifically, LLMs are capable of improving their performance by training on their own generated labels alone (a ‘double exponential’ curve).
  6. Also: this is not just an arms race; it’s an arms race that strengthens every other arms race, because whatever other arms race exists between people, AI improves the capabilities deployed in all of them (again: a ‘double exponential’ curve).
  7. We currently lack the technology to know, verify, or prove what knowledge is in an LLM (i.e., what answers a tool like ChatGPT could return given any question). Still, these models are deployed to the general public.
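The self-improvement loop in point 5 is, at its core, self-training (also called pseudo-labeling): a model labels unlabeled data itself, then retrains on those self-generated labels. A minimal toy sketch of that mechanism — a one-dimensional threshold classifier, with all names and data hypothetical and nothing like the scale of an actual LLM:

```python
# Toy self-training ("pseudo-labeling") loop: the model generates its own
# labels for unlabeled points it is confident about, then refits on them.

def fit_threshold(points, labels):
    """Fit a 1-D classifier: threshold = midpoint between class means."""
    lows = [x for x, y in zip(points, labels) if y == 0]
    highs = [x for x, y in zip(points, labels) if y == 1]
    return (sum(lows) / len(lows) + sum(highs) / len(highs)) / 2

def self_train(labeled, unlabeled, rounds=3, margin=1.0):
    points = [x for x, _ in labeled]
    labels = [y for _, y in labeled]
    t = fit_threshold(points, labels)
    pool = list(unlabeled)
    for _ in range(rounds):
        # Pseudo-label only the points the current model is "confident"
        # about (far from the threshold), then refit on the enlarged set.
        confident = [(x, int(x > t)) for x in pool if abs(x - t) > margin]
        if not confident:
            break
        pool = [x for x in pool if abs(x - t) <= margin]
        points += [x for x, _ in confident]
        labels += [y for _, y in confident]
        t = fit_threshold(points, labels)
    return t

# Tiny labeled seed set, larger unlabeled pool (made-up numbers):
labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 1.5, 2.0, 8.0, 8.5, 9.5]
threshold = self_train(labeled, unlabeled)
```

The key (and risky) design choice is the confidence filter: without the `margin` cutoff, the model would happily retrain on its own mistakes — which is also why self-improvement loops like this are hard to keep safe at scale.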

Exciting times! We must now prioritize and catch up on the ‘non-functionals’ of AI (e.g. explainability, auditability, security, morality) and work towards a ‘Non-Proliferation Treaty’ for AI. Good thing the EU AI Act is underway.

Give a man a fish and you feed him for a day,

teach a man to fish and you feed him for a lifetime,

teach an AI to fish and it’ll teach itself biology, chemistry, oceanography, evolutionary theory and then fish all the fish to extinction.