I'm really late on this one, but I wanted to explain Roko's Basilisk, for all the people who heard about it a while ago and never really "got" it.
The idea first started going around the internet a few years ago, and apparently was seriously freaking out a number of people in the Less Wrong forums. I think I first heard about it from this Slate article, maybe, then spent some time trying to find an explanation of why the idea was considered so horrifying. The RationalWiki explanation likewise failed to shed any light on why anyone would actually be scared of the thing.
The concept builds on a number of premises that float around the Less Wrong community relating to the technological singularity: namely, "friendly god" AI, utilitarian ethical calculus, and simulated consciousness.
The Basilisk is a superhuman AI from the future. The abilities of this AI are essentially infinite, even up to traveling backwards in time. The idea of the Basilisk is that it wants you to contribute your money to helping it be built, and if you refuse to help it, it will create a simulation of you and torture the simulation forever.
And so I think a normal person quite understandably struggles to see why anyone would even think this is a good B-list villain for Star Trek, much less a cause for existential dread.
But it's actually not that silly, and once you understand the background better, it all makes sense. So let me explain to you what the Basilisk is in clearer terms, so that you too can experience the angst. (That was your warning.)
Monday, January 29, 2018
Monday, January 15, 2018
The Monty Hall Problem, Bayes Theorem, and a fault in Numberphile
I watch a lot of educational videos on YouTube, in particular the awesome channel Numberphile. I recently saw their video on the Monty Hall Problem, and was somewhat disappointed: the calculation seemed rather pointless, not really showing the result but instead showing something that was already fairly obvious.
The video can be found here and explains everything, but let me explain it again for completeness.
The Monty Hall Problem is a classic apparent paradox in probability, named after gameshow host Monty Hall from Let's Make a Deal. In the show, the contestants are shown three doors and told behind one of the doors is a brand new car. Behind the other two doors are "worthless" prizes; anything works, but traditionally the problem says the other two doors hold goats. The player gets to pick any of the three doors, and whatever is behind the door is what they win. If they pick right they get a car, otherwise they get a goat.
To add tension, after the contestant picks, Monty Hall would walk to another door, a door that the player did not pick, and show them what was behind it. And look! It's a goat! The car is still out there!
In the Monty Hall Problem (not necessarily the show), Monty then asks if the contestant would like to change their mind.
The question is, what is the probability of the player guessing correctly if they swap their pick?
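The answer, perhaps surprisingly, is that switching wins 2/3 of the time: the first pick is right with probability 1/3, and switching wins exactly when the first pick was wrong. If you don't trust the argument, you can check it numerically. Here's a minimal simulation sketch (my own illustration, not from the video), assuming the standard rules: Monty always opens a door the player didn't pick and that doesn't hide the car.

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Simulate the Monty Hall game and return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first pick
        # Monty opens a door that is neither the pick nor the car.
        # (When he has two goat doors to choose from, which one he opens
        # doesn't affect the win rate, so a deterministic choice is fine.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened, unpicked door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # close to 1/3
print(play(switch=True))   # close to 2/3
```

Running this gives roughly 0.333 for staying and 0.667 for switching, matching the analytic answer.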
Monday, December 25, 2017
Richard Feynman and the Message of Christmas
I often come across as a Grinch during Christmas. It isn't that I don't like the holiday; it's that I find the actual celebration so small compared to the ancient reason for celebrating.
The phenomenon of Christmas, as it exists today amongst moderns, is largely a commercial platform to sell you movies, toys, electronics, and honey baked hams. We sing about snow and various foodstuffs eaten and herd animals, we share some presents, spoil our children, and eat a lot of food.
To most modern people, this is what it's about. It's about time with family and the magic of Santa and having fun singing Christmas songs.
I don't think modern people really understand Christmas. I don't think they get it.
To explain Christmas, then, let me begin with a quote by Richard Feynman (from this video interview):
"I can’t believe the special stories that have been made up about our relationship to the universe at large. They seem to be… too simple to conn- too local, too provincial! The Earth! He came to the Earth! One of the aspects of God came to the Earth, mind you. Look at what’s out there! It isn’t in proportion."

Richard Feynman gets Christmas. In his own way, as a nonreligious Jew, Feynman understands the celebration of Christ's birth better than most people alive today.