
Ravishankar Iyer

A cautionary view on AI (3-2-1 by Story Rules #14)


Welcome to the fourteenth edition of '3-2-1 by Story Rules'.

A newsletter recommending good examples of storytelling across:

  • 3 tweets
  • 2 articles, and
  • 1 long-form content piece

Each comes with my short summary/take, and sometimes an insightful extract.

Let's dive in.


🐦 3 Tweets of the week

Some welcome good news on inflation in the West.


Good use of a Stacked Column chart to make a striking point.


A hilarious set of outdoor ads by an oat-milk company - check out the entire set.


📄 2 Articles of the week

a. Yuval Noah Harari argues that AI has hacked the operating system of human civilisation (The Economist)

Historian and peerless storyteller Yuval Noah Harari argues for curbs on AI.

Some extracts (emphasis mine):

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
---
We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

b. Geoffrey Hinton tells us why he’s now scared of the tech he helped build (MIT Technology Review)

Ex-Google AI scientist Geoffrey Hinton has been in the news recently after he quit the company and gave a bunch of interviews sharing his concerns about AI.

“Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
---
(On AI coming up with bullshit answers):
Hinton has an answer for that too: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”

The difference is that humans usually confabulate more or less correctly, says Hinton. To Hinton, making stuff up isn’t the problem. Computers just need a bit more practice.

Hinton is not too optimistic about lawmakers taking any substantive action:

Bengio agrees with Hinton that these issues need to be addressed at a societal level as soon as possible. But he says the development of AI is accelerating faster than societies can keep up. The capabilities of this tech leap forward every few months; legislation, regulation, and international treaties take years.

This makes Bengio wonder whether the way our societies are currently organized—at both national and global levels—is up to the challenge. “I believe that we should be open to the possibility of fairly different models for the social organization of our planet,” he says.

Does Hinton really think he can get enough people in power to share his concerns? He doesn’t know. A few weeks ago, he watched the movie Don’t Look Up, in which an asteroid zips toward Earth, nobody can agree what to do about it, and everyone dies—an allegory for how the world is failing to address climate change.

“I think it’s like that with AI,” he says, and with other big intractable problems as well. “The US can’t even agree to keep assault rifles out of the hands of teenage boys,” he says.

Hat-tip: Prahlad Viswanathan


📖 1 long-form read of the week

a. 'Quantum computing could break the internet. This is how' by the Financial Times

This is a fabulous visual explainer of the next big thing in technology - quantum computing. I mean, I still don't get a lot of it, but this is the most accessible and engaging content I've encountered on this topic.

They call it Q-day. That is the day when a robust quantum computer, like this one, will be able to crack the most common encryption method used to secure our digital data.
---

As well as being enticed by the economic possibilities, governments are concerned about the security implications of developing quantum computers. At present, the most common method used to secure all our digital data relies on the RSA algorithm, which is vulnerable to being cracked by a quantum machine.
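As a small aside from me (not from the FT piece): the reason RSA is exposed is that its security rests entirely on how hard it is to factor the public modulus, and Shor's algorithm on a sufficiently large quantum computer would remove exactly that difficulty. Here's a minimal, hypothetical Python sketch using tiny textbook numbers (my own illustration, not anything from the article) to show why recovering the factors is all an attacker needs:

```python
# Toy RSA with textbook-sized primes - for illustration only.
p, q = 61, 53            # tiny primes; real RSA uses primes hundreds of digits long
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # secret once p and q are discarded
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message

# An attacker who can factor n back into p and q can recompute phi and d
# exactly as above - and that factoring shortcut is what Shor's algorithm
# on a robust quantum computer would provide.
```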


That's all from this week's edition.

Announcement: 3-2-1 by Story Rules will be on a summer break over the next 3 weeks and will resume on the 2nd of June!

​Ravi

PS: Got this email as a forward? Get your own copy here.

Access this email on a browser or share this email on WhatsApp, LinkedIn or Twitter. You can access the archive of previous newsletter posts here.

You are getting this email as a part of the Story Rules Newsletter. To get your own copy, sign up here.

Ravishankar Iyer

A Storytelling Coach. More details here: https://www.linkedin.com/in/ravishankar-iyer/
