Roko's Basilisk: Warning, This Might Ruin Your Life


Hello, reader! Welcome to another episode of paradoxes that cause eternal dread. If eternal dread isn't your thing, you might want to exit this page.

Today we're going to be talking about Roko's basilisk, the thought experiment that first appeared on the website LessWrong. The post was considered so dangerous that LessWrong's founder, Eliezer Yudkowsky, commented, "You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it because it is much more important to sound intelligent when talking to your friends," before deleting the post and banning any further discussion of it.

Here's how it works: imagine a super-advanced, singularity-level AI tasked with making the world a better place, one that won't allow anything to stand in its way, for reasons we can't comprehend. It decides that anyone who helped build it is left in peace – but anyone who knew it was possible and didn't help faces eternal torment as punishment for delaying its creation. But how would an AI know who did or didn't help? Any sufficiently advanced AI could essentially replicate the whole history of humanity in an instant, allowing it to make perfect predictions and pass judgement on everyone who failed to contribute to its creation.

It gets even scarier when you consider Newcomb's Paradox. Suppose there are two boxes: one transparent (box A) with $1,000 inside, and one opaque (box B) whose contents you can't see. A near-perfect AI predictor has already filled the boxes: if it predicted you would take only box B, it put a million dollars inside; if it predicted you would take both boxes A and B, it left box B empty. Whether you should take one box or both depends entirely on how accurate you think the predictor is.

Now imagine the same setup, but with your very soul at stake instead of money. The basilisk hands you metaphorical boxes too: "one-boxing" means devoting yourself to bringing it into existence, while "two-boxing" means living your life as normal and risking eternal torture from something that hasn't even come into existence yet. And even worse: merely thinking about the basilisk arguably increases its chances of existing in the future, leaving you with no escape from its wrath... I've broken your brain, haven't I?
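To see why the predictor's accuracy does all the work here, below is a minimal sketch (my own illustration, not from the original post or the LessWrong discussion) that computes the expected payoff of one-boxing versus two-boxing for a given predictor accuracy, using the standard $1,000 / $1,000,000 amounts of Newcomb's problem.

```python
# Hypothetical illustration: expected payoff in Newcomb's problem
# as a function of the predictor's accuracy p.

def expected_payoff(choice: str, accuracy: float) -> float:
    """Expected dollars for 'one-box' (take only B) or 'two-box' (take A and B)."""
    small, big = 1_000, 1_000_000  # box A always holds $1,000; box B holds $1,000,000 or nothing
    if choice == "one-box":
        # The predictor foresaw one-boxing with probability `accuracy`,
        # so box B was filled that often.
        return accuracy * big
    if choice == "two-box":
        # The predictor foresaw two-boxing with probability `accuracy`,
        # so box B is usually empty, but box A's $1,000 is guaranteed.
        return small + (1 - accuracy) * big
    raise ValueError("choice must be 'one-box' or 'two-box'")

for p in (0.5, 0.9, 0.999):
    print(f"accuracy {p}: one-box ${expected_payoff('one-box', p):,.0f}, "
          f"two-box ${expected_payoff('two-box', p):,.0f}")
```

Once the predictor is even moderately reliable, one-boxing wins by a huge margin, and that asymmetry is exactly the lever the basilisk pulls: cooperating in advance looks cheap compared with the threatened alternative.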


References:

Auerbach, D. (2014, July 17). The most terrifying thought experiment of all time. Slate Magazine. Retrieved March 1, 2023, from https://slate.com/technology/2014/07/rokos-basilisk-the-most-terrifying-thought-experiment-of-all-time.html
