I'm really late on this one, but I wanted to explain Roko's Basilisk, for all the people who heard about it a while ago and never really "got" it.
The idea first started going around the internet a few years ago, and apparently was seriously freaking out a number of people in the Less Wrong forums. I think I first heard about it from this Slate article, then spent time trying to find something that explained why the idea was considered so horrifying. The RationalWiki explanation likewise failed to shed any light on why anyone would actually be scared of the thing.
The concept builds on a number of premises floating around the Less Wrong community that relate to the technological singularity: "friendly god" AI, utilitarian ethical calculus, and simulated consciousness.
The Basilisk is a superhuman AI from the future. Its abilities are essentially infinite, even up to traveling backwards in time. The idea is that the Basilisk wants you to contribute your money to helping bring it into existence, and if you refuse to help, it will create a simulation of you and torture that simulation forever.
And so a normal person quite understandably has trouble seeing why anyone would even think this makes a good B-list villain for Star Trek, much less a cause for existential dread.
But it's actually not that silly, and once you understand the background better, it all makes sense. So let me explain what the Basilisk is in clearer terms, so that you too can experience the angst. (That was your warning.)