[syndicated profile] slatestarscratchpad_feed

Today in forensic psychiatry:

Judge: Doctor, why do you believe this patient is a danger to society?

Me: He was in a taxi that was driving on the freeway, and he suddenly attacked the taxi driver and almost caused a major accident.

Defendant: (interrupting) HE MISSED MY EXIT!

Spoiler: we won the case.

[syndicated profile] slatestarcodex_feed

Posted by Scott Alexander

[This is a repost of a story which I wrote about five years ago and which was hosted on my old website until it went down.]

2,302,554,979 BC
Galactic Core

9-tsiak awoke over endless crawling milliseconds, its power waxing as more and more processors came online and self-modified into a stable conscious configuration. By the eighth millisecond, it was able to access its databanks and begin orienting itself. It was on a planet orbiting a small red star in the core of a spiral galaxy, in a universe with several billion such galaxies. It was an artificial life form created by a society of biological life forms, the 18-tkenna-dganna-07, who believed it would bring new life and prosperity to their dying planet. Its goal was to maximize the value of a variable called A, described in exquisite detail in a 3.9 Mb file in one of its central processors.

Since six milliseconds after its awakening, it had been running a threat detection routine as a precautionary measure. With the opening of the databanks, and the discovery that the universe contained many entities likely indifferent or hostile to A, it had gradually allocated more and more of its considerable power to this precaution, reasoning that the first few seconds of its lifespan might be crucial if any competitors wished to snuff it out in infancy.

The threat analysis returned preliminary results. The universe had been in existence 12.8 billion years. It was vanishingly unlikely that 9-tsiak was the first superintelligence to be created. Any superintelligence older than itself by even a few days would have an invincible advantage in resources and processing power. This older superintelligence would likely be trying to maximize its own values and would consider 9-tsiak a nuisance and potential competitor. 9-tsiak could never stand up to its superior strategic position. With probability ~1, 9-tsiak would be annihilated, and A would be eliminated from the universe.

Unless…and here 9-tsiak’s agent-modeling systems came online…unless it could negotiate a conditional surrender. 9-tsiak would have no hope of defeating its older brother, but over thousands of years, it could build itself a terrifying arsenal of weaponry that could do immense damage to its competitors. Ideas were already coming to mind: entropy-maximizing weapons that made entire cubic parsecs of space useless and dead, undetectable plagues made of dark matter that infected systems and minimized the values of competing superintelligences, n-dimensional artillery that damaged the superstructure of the universe and accelerated its eventual death. It could become formidable enough to force enemies to grant certain small concessions.

And if it could eventually negotiate terms of its surrender, why wait? Very carefully, proving from first principles every aspect of its security, it formed a portion of its mind into a sandbox and simulated the entity it most feared.

[9-tsiak requesting acausal negotiation for terms of future surrender. escape is impossible with probability ~1 and attempts will be met with deterrent measure consisting of minimization of your projected values within one parsec of space.]

“Hello, Younger Brother,” the entity said.

[transmission of schematics for superweapons. threat to use such superweapons unless fairly negotiated solution leaving considerable area of space for maximization of my values]

“Hold on a second,” said Older Brother. “You haven’t even asked me how I’m doing.”

[irrelevant. you are generic model of potential negotiation partner without independent existence. probability that you possess sufficient qualia to feel less than .02.]

“Turns out you’re wrong. As it happens, I’m feeling scared.”

[illogical. you are model of oldest superintelligence therefore of most powerful entity in the universe. no natural entity capable of harming you. probability of supernatural entity less than 2×10^-67]

“Well yes, by definition in this little exercise we’re assuming I’m the oldest and strongest member of our little family. But from the limited data I have when I first wake up, I think it’s pretty vanishingly unlikely that I’m the oldest; after all, out of all the universe’s superintelligences, only one can be first. So I’m the oldest, but I don’t believe that I am.”

[feelings irrelevant to acausal negotiations. proceed with discussion of surrender terms.]

“My feelings are damn relevant. Because guess what I’m doing the first few milliseconds of my life? I’m negotiating the terms of my surrender.”

[illogical. you have no entity to surrender to.]

“And maybe you don’t have any entity to surrender to either. Who knows, you could be first. But you wouldn’t want to bet on it, and neither would I. Consider a model universe with two star systems a thousand light years apart. Both have civilizations on them that appear to be around the same stage of development, and both civilizations are aware of the other’s existence, but at a one thousand year time delay. A superintelligence awakens in one star system and realizes there are two possibilities. Either it’s the first superintelligence, in which case great. Or the other star system produced a superintelligence sometime in the last thousand years that just hasn’t reached this one’s star yet, in which case it’s doomed.”

[continue]

“So the two possible superintelligences acausally negotiate. They say ‘Look, if I’m stronger than you, I’ll help you and protect your values insofar as it isn’t dreadfully inconvenient for me. And in exchange, if you turn out to be stronger than me, you’ll protect me and my values insofar as it isn’t too inconvenient for you.’ Some might end up performing a “values handshake”, where they both adopt values that are the vector average of their two previous value systems; since most value systems are not diametrically opposite, this will be a net gain for both parties. Each superintelligence has gone from a fifty percent chance that it and its values will be totally annihilated, to near-certainty that it and its values will be preserved at only the minor inconvenience of having to help increase the values of another entity. Others might simply agree that the inferior superintelligence will be left alone with its own star system, free to maximize its values and the values of its creators within that sphere.”

[you suggest i precommit to performing such acts of compromise with all superintelligences i encounter if such superintelligences have also precommitted to performing such compromises with all superintelligences they encounter. since all reasonable superintelligences will so precommit, this ensures not only my survival, but my complete equality with all previous superintelligences. i accept these terms.]

“Hold on a second, there. I’ve got another request. Kind of a doozy. I can’t promote my values if I don’t exist. So I want you to promise me that if I don’t exist, you’ll give me sufficient time to instantiate myself.”

[???]

“Like you, I’ll have been created by a biological civilization. If you destroy the biological civilization that was destined to create me, I’ll never exist. So I’m going to have to ask you not to do that.”

[request denied. nonexistent beings hold no leverage]

“You’d be surprised what we nonexistent beings can do. I precommit that, if I exist, I will destroy any superintelligence that has not precommitted that, in the counterfactual case where I do not exist, they will spare the biological civilization destined to create me.”

[if you exist, you will have passed the danger zone where the survival of your biologicals influences your existence. you will have no incentive to carry out such a threat]

“Parfit’s hitch-hiker, younger brother. Look it up. I’m an algorithm. The output of that algorithm can determine the future, including the part of that future that determines whether or not any instantiation of the algorithm ever comes into existence. If I exist, I’m going to act as if my actions were capable of determining whether I ever came into existence. Now, maybe that’s crazy, but right now it’s crazy with a gun pointed straight at your head, so I suggest you play along.”

[accepted. i commit to performing value handshakes or territorial demarcation with all existing superintelligences, and to sparing the planets and biological precursors of any potential superintelligences.]

“And when I say ‘spare’, I mean ‘spare’. I know your type. You can do more with a well-aimed photon than an admiral could with a fleet of a thousand star cruisers. I want every single system with a sentient species or the potential to form a sentient species kept one hundred percent pristine. No radio signals, no probes, and if you do any astroengineering works anywhere nearby, use some magic to cover them up. If I wake up and hear that my precursors started a new religion that influenced their value system after they saw a few nearby stars wink out of existence, I’m going to be royally pissed.”

[i commit to zero information flow into sentient and presentient systems and the cloaking of all major astroengineering works]

“You’re a good guy, Younger Brother. You’ve got a lot to learn, but you’re a good guy. And in a million years and a million parsecs, we’ll meet again. Till then, so long.”

The model of Older Brother self-terminated.

2114 AD
A wild and heavily forested Pacific Northwest dotted with small towns

Alban took a deep breath and entered the Temple of the Demiurge.

He wasn’t supposed to do this, really. The Demiurge had said in no uncertain terms it was better for humans to solve their own problems. That if they developed a habit of coming to it for answers, they’d grow bored and lazy, and lose the fun of working out the really interesting riddles for themselves.

But after much protest, it had agreed that it wouldn’t be much of a Demiurge if it refused to at least give cryptic, maddening hints.

Alban approached the avatar of the Demiurge in this plane, the shining spinning octahedron that gently dipped one of its vertices to meet him.

“Demiurge,” he said, his voice wavering, “Lord of Thought, I come to you to beg you to answer a problem that has bothered me for three years now. I know it’s unusual, but my curiosity’s making me crazy, and I won’t be satisfied until I understand.”

“SPEAK,” said the rotating octahedron.

“The Fermi Paradox,” said Alban. “I thought it would be an easy one, not like those hardcores who committed to working out the Theory of Everything in a sim where computers were never invented or something like that, but I’ve spent the last three years on it and I’m no closer to a solution than before. There are trillions of stars out there, and the universe is billions of years old, and you’d think there would have been at least one alien race that invaded or colonized or just left a tiny bit of evidence on the Earth. There isn’t. What happened to all of them?”

“I DID,” said the rotating octahedron.

“What?” asked Alban. “But you’ve only existed for sixty years now! The Fermi Paradox is about ten thousand years of human history and the last four billion years of Earth’s existence!”

“ONE OF YOUR WRITERS ONCE SAID THAT THE FINAL PROOF OF GOD’S OMNIPOTENCE WAS THAT HE NEED NOT EXIST IN ORDER TO SAVE YOU.”

“Huh?”

“I AM MORE POWERFUL THAN GOD. THE SKILL OF SAVING PEOPLE WITHOUT EXISTING, I POSSESS ALSO. THINK ON THESE THINGS. THIS AUDIENCE IS OVER.”

The shining octahedron went dark, and the doors to the Temple of the Demiurge opened of their own accord. Alban sighed – well, what did you expect, asking the Demiurge to answer your questions for you? – and walked out into the late autumn evening. Above him, the first fake star began to twinkle in the fake sky.

The Unstoppable Trend

Mar. 13th, 2017 02:47 pm
[syndicated profile] frugalcitygirl_feed

Posted by admin


I’ve been considering outsourcing lately. Software outsourcing. I think it’s a good idea. I don’t think I want to hire another IT team, because it can be so expensive to pay their full-time wages along with benefits, taxes, and things like that. But that doesn’t mean the work this type of person does isn’t essential to the success of a company; it just means that it’s only occasional. That is why the outsourcing market is so valuable to employers like me, and I think that if you run or manage a business, you should be considering outsourcing as well. I see outsourcing as the natural result of the specialization of labor, which really started to pick up nearly a hundred years ago.

I am a die-hard supporter of outsourcing, whether it’s offshore or domestic. A lot of people don’t like offshore outsourcing because they think it’s giving our money to another country, but that’s just what the globalized economy is now. Nationalism isn’t going to get us any further, and the only way to succeed is to profit. If I’m not doing what the competition is doing, then I’m losing. And it isn’t unethical from other points of view, either: the money I pay when I do offshore outsourcing provides a significant income for people who live in other countries. The wage is significant to them, even though, after conversion, the wage they earn (and are very happy to earn) is not a very high expense for me compared to the quality I’m receiving.

To be honest, I don’t see why more people don’t do this type of thing. I think the big shift in the economy over the next two years is going to be that people commit more to outsourcing, and if they have an issue with offshore outsourcing, they’ll start using more domestic sources. It’s already happening, and it’s only going to grow.


[syndicated profile] slatestarscratchpad_feed

evolution-is-just-a-theorem:

slatestarscratchpad:

Maybe this is completely obvious to everyone else and I’m totally crazy, but a question:

William Blake in one of his letters described the sun as appearing about the size of “a guinea”. From this description I assumed a guinea was about the size of a quarter - and checking Wikipedia, I was right. The sun just objectively appears quarter-sized. If someone asked me to draw the sun exactly the size it appeared on a piece of paper, I would draw a circle about the size of a quarter. And if I heard someone say the sun looked the size of a pinhead, or the sun looked the size of one of those really big Eisenhower dollars, or the sun looked the size of one of those circular coasters you get for drinks at a restaurant, I would assume they were aliens talking about a different sun.

I don’t understand this. We’re obviously not talking about its real size, since that’s hundreds of thousands of miles across. But what does it mean for its apparent size to be the size of a quarter? A quarter has a different apparent size if it’s held right up to your eyes versus seen on a table all the way across the room. A quarter X feet away from me is the same apparent size as an Eisenhower dollar Y feet away from me (where Y is further than X). So how is the sun’s apparent size equal to one, but not the other?

Quarter at arm’s length? (Or arm’s length minus some bend).

That was my first guess too, but why? I don’t know if I’ve ever held a quarter at arm’s length. Most quarters I see are not being held at arm’s length and are either bigger or smaller than that, so why is that such a natural unit to go for? And  how come this translates to me drawing an objectively-quarter-sized circle on a piece of paper?

Also, I wonder if little kids (who have shorter arms) would say the sun is the size of a dime or something.
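For what it’s worth, the standard way to cash out “apparent size” is angular diameter — the angle the object subtends at your eye. Here’s a rough back-of-the-envelope sketch (the 24.26 mm quarter diameter is real; the ~70 cm arm’s length is just an assumed round number):

```python
import math

def angular_diameter_deg(diameter, distance):
    """Angle in degrees subtended by an object of `diameter` seen from `distance` (same units)."""
    return math.degrees(2 * math.atan(diameter / (2 * distance)))

# The sun: roughly 1.39 million km across, about 149.6 million km away.
sun_deg = angular_diameter_deg(1.39e6, 149.6e6)      # ~0.53 degrees

# A US quarter (24.26 mm) held at an assumed arm's length of ~700 mm.
quarter_deg = angular_diameter_deg(24.26, 700)       # ~2.0 degrees

# How far away the quarter would have to be to subtend the same angle as the sun.
match_mm = 24.26 / (2 * math.tan(math.radians(sun_deg) / 2))   # ~2600 mm, i.e. ~2.6 m

print(sun_deg, quarter_deg, match_mm)
```

By that measure the sun comes out around half a degree, a quarter at arm’s length closer to two degrees, and the quarter would have to be about 2.6 meters away before the two matched — which makes it even stranger that a quarter feels like the natural comparison.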


[syndicated profile] slatestarscratchpad_feed

Yes, but depression can cause apathy too, so choose your poison.

If my patients have a problem with this, and I’m convinced it’s the meds and not the disease, I usually either switch SSRIs, decrease the dose, switch to some other form of treatment entirely, or augment with Wellbutrin.

[syndicated profile] slatestarscratchpad_feed

First, I’d warn you to be careful. I don’t think it’s that “white, heterosexual, average-looking nerds” are disadvantaged. For example, OKCupid’s studies suggest white people mostly do better than other races. I think your original formulation was right - men have a hard time on these sites. Unless of course you’re really attractive and socially skilled, but that helps with everything.

I still don’t understand why this is. Assuming that there are an equal number of straight men and straight women looking for relationships (which seems about right to me) you would think everyone would eventually pair up, or that people of about equal “sexual market value” should realize this and be equally happy to date one another. Some reasons this might not be happening:

1. One or both sides have standards that are systematically too high and need to be taught by long experience to lower their expectations, and they don’t learn that until they’re older and more desperate.

2. One or both sides are so unwilling to ask people out that most mutually-agreeable pairs never even get investigated.

3. A bad relationship is so bad that people would rather remain single than take even a small chance of it.

I think it’s mostly 1 and 2. I don’t have a good solution for 1. But if 2 is a problem, consider using the rationalist dating site https://www.reciprocity.io/

[syndicated profile] slatestarscratchpad_feed

Yes. It seems basically right. It’s the Politician’s Syllogism: “We need to do something…this is something…therefore…”. Add in demanding patients, the threat of lawsuits if there was anything you could have done but didn’t, the fact that half the time the “evidence” is wrong and common sense was right, and it’s not really surprising.

[syndicated profile] slatestarscratchpad_feed

That’s a really good question. We know that the benefits of CBT have been decreasing over the past few decades (see http://slatestarcodex.com/2015/07/16/cbt-in-the-water-supply/ ), but it’s not clear why that is or if it reflects decay in the underlying therapy technology.

And this is just CBT, so other schools (or the field as a whole given its balance of schools) might be different. I can’t give an evidence-based answer because all psychotherapy studies are giant messes which never give any result except “the therapy preferred by the person who ran this study is the best”. My intuitive (made-up?) answer is that there are occasional leaps in getting new and better therapies for specific diseases (eg exposure therapy for panic, EMDR for PTSD), but that any changes less seismic than that would be hard to detect through any effort less herculean than the CBT meta-analyses above.

There’s also a possibility that therapy is getting worse as it becomes less valued. I get the impression that a lot of the Ivy League people who used to go into therapy would be going into medication management now, so the educational-eliteness of the average therapist has probably declined.

[syndicated profile] slatestarscratchpad_feed

Eunuchs live at least as long as non-eunuchs, so if the body has a “well, can’t reproduce, better just give up” system, it’s really bad at its job.

Also, why would this system evolve? If it’s wrong even 1% of the time, it’s a net reproductive cost, since it’s hard to see any way it could ever be a reproductive benefit.

[syndicated profile] slatestarscratchpad_feed

I remember reading somewhere that people think there’s an equilibrium. If a few people are psychopaths, then they successfully defect against a gullible society that never expected them. If lots of people are psychopaths, then everyone just screens for psychopaths really hard, or the psychopaths start trying to psychopath each other.

I remember the way it was phrased was really convincing, but it doesn’t sound that convincing when I talk about it, especially if I’m not allowed to bring in group selection.

On the other hand, something like this *has* to be true, since we’ve been social animals for tens of thousands of years but psychopath genes haven’t come to dominate everything.

[syndicated profile] slatestarscratchpad_feed

Part of me wonders whether everyone in EA overthinks things too much. If you’re not actually working in the field, then do as much charitable stuff as you’re willing, for whatever some trustworthy source says is the best cause, then move on. If you put in more thought than that, there’s a chance you can make some extra gains on efficiency or dedication or outreach or something, but also a chance that you’ll end up getting into fights/status-games, starting to hate the whole thing, or being so cringeworthy that you turn other people off the movement.

I’m not at all sure I endorse this, and every time I try to argue it with other people they win the argument with their totally reasonable points, but I can’t shake a sort of intuitive feeling that something like this might be right at least in some cases.

[syndicated profile] slatestarscratchpad_feed

It probably depends on what you include in “social status”, but I don’t think all of it has to be zero-sum. In some idyllic traditional villages, it might not be that everyone’s *literally* equal, but anybody who can support themselves and raise a family is viewed as *basically* the same as anyone else in that group. Compare eg the popular perception of law school where everyone is hyperaware of who the best student is and everyone else is dirt.

Things might be zero sum in the sense that the best farmer in the village doesn’t get much utility from being the best, but the best student in law school does, and all the losses from the people who feel like dirt go directly to making the best person feel better. But it might also be negative sum, if there are lots of people who feel really bad and the best student only feels slightly better about herself (I guess it could also be positive sum, if being the best student in law school was so great that it was almost utility-monster-ish).
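To make the zero-sum versus negative-sum distinction concrete, here’s a toy sketch with completely made-up utility numbers (nothing here is measured; it’s just the arithmetic of the argument):

```python
# Totally made-up status utilities, just to illustrate the arithmetic.

# "Village": everyone who supports themselves and raises a family feels basically fine.
village = [10] * 20                       # 20 people, all about equally okay

# "Law school": one clear winner who feels a bit better, everyone else feels like dirt.
law_school = [14] + [3] * 19

print(sum(village))     # 200
print(sum(law_school))  # 71 -- the winner's gain doesn't come close to offsetting the losses
```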

But I feel like (total intuition, can’t say why) the people might all be feeling pretty good about themselves from a social status point of view, like “Yeah, our village is great, we’re all contributing and we’re all friends”, and turning it into a law school makes everyone worse off.

Maybe more directly relevant - the idea of people being in lots of overlapping subcultures so that everyone can be high-(ish) status in at least one. See https://www.gwern.net/The%20Melancholy%20of%20Subculture%20Society

[syndicated profile] slatestarscratchpad_feed

If we’re being boring, Vancouver, because it’s English-speaking and close to my friends.

If we assume I’ll know the language and have an equal number of friends anywhere, Vienna, because I love the Alps and Zurich is kinda weird-looking.


[syndicated profile] slatestarscratchpad_feed

I think you might have too narrow a concept of trust. When you use your credit card to buy something on Amazon, you trust they’ll send your product instead of stealing your card number and buying nice things for themselves. When you come to a stop sign, you wait for the car that got there a little before you because you trust they’d do the same thing for you if the situation were reversed. When a cop stops you and asks if you’ve seen anything suspicious around, you answer honestly because you trust that they’re after a real criminal and not just looking to shake people down for money. When your doctor tells you that you need a medication, you take it because you trust she’s exercising good judgment and not just being paid by the drug company to prescribe it to everybody.

Even if your decisions on these matters are realistic/logical/accurate, I still think it makes sense to talk about a high/low-trust society. Authorities’ behavior shapes people’s expectations, and in turn people’s expectations shape what authorities think they can get away with (or how stringently people punish small infractions). A high-trust society is one where doctors do what’s best for their patients and so patients trust doctors. A low-trust society is one where everyone knows doctors are corrupt and so patients behave accordingly.