3 Comments
Michael David Cobb Bowen

At minute 19, as I am mixing a bunch of metaphors (hopefully consistently), what I neglected to mention is that with agentic programming, as with constitutional law, there is the possibility of what is known as the paradox of self-amendment. This is why you have to have some very strict deterministic rules in agentic systems that cannot be amended, as well as some looser rules, prompts, laws, etc., that can be amended.
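The two-tier scheme described above can be sketched in a few lines of Python. This is purely illustrative — all names and rules here are hypothetical, not from the podcast: a frozen "constitutional" core that refuses amendment, alongside a looser set of rules the agent may rewrite.

```python
# Illustrative sketch (all rule text hypothetical): separate immutable
# core rules from amendable ones, so the agent cannot amend its way
# into the paradox of self-amendment.
from dataclasses import dataclass, field

CORE_RULES = frozenset({          # deterministic, never amendable
    "never delete prior work",
    "keep a human in the loop",
})

@dataclass
class AgentPolicy:
    amendable_rules: set = field(default_factory=set)

    def amend(self, old_rule: str, new_rule: str) -> bool:
        """Replace a looser rule; core rules are untouchable."""
        if old_rule in CORE_RULES:
            return False          # amendment refused by design
        self.amendable_rules.discard(old_rule)
        self.amendable_rules.add(new_rule)
        return True

policy = AgentPolicy({"prefer concise answers"})
policy.amend("never delete prior work", "delete freely")      # refused
policy.amend("prefer concise answers", "prefer thoroughness") # allowed
```

The design choice mirrors the constitutional-law analogy: the core tier is deterministic (a `frozenset` checked before any change), while the amendable tier is ordinary mutable state.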

You want an AI to *say* "Today I just learned this, after 50 years, now I have to go back and rethink everything". But you don't actually want it to *undo* all of its prior work. So the idea of taking humans completely out of the loop has this fundamental risk, which is that you don't understand how much undoing an AI will attempt when it runs out of ideas or runs into a serious contradiction of guidelines. The forcing of these situations is exactly how they are being hacked today. It's like the Borg Queen giving Cdr Data something that makes him forget the prime directive.

joe.nalven2

This is a conversation that needs revisiting every so often, given the continued development and deployment of AI. I'd like to hear more about whether human problems are different from AI problems, given that AI is built on human data (for the most part), built by humans and for humans. Yes, there are emergent features that sound like naughty humans.

No Namy

This Q&A format really works for you two. No episode of this podcast is too long, and although this one runs closer to a mere 60 minutes, like another commenter I am going to be listening to it on repeat in the coming days at least.

Never considered that the best answer to whataboutism is simply "what about it?"

Inspired, and terrifying, comparison with the Manhattan Project. Hopefully AI, unlike Christa McAuliffe, will ask "What does this button do?" before pressing it to find out. Say her name!!

Two things to get in on first: http://LeroySax.com and orbit rights