Dataset Viewer
| id (string, length 36) | source (string, 15 classes) | formatted_source (string, 13 classes) | text (string, length 2 to 7.55M) |
|---|---|---|---|
	ac988c04-9c5f-4ee7-bb46-c1cda40735e2 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	[link] Back to the trees
So we say we know evolution is an alien god, which can do absolutely horrifying things to creatures. And surely we are aware that includes us, but how exactly does one internalize something like that? Something so at odds with default cultural intuitions. It may be just my mood tonight, but this short entry on the West Hunter (thanks Glados) blog really grabbed my attention and in a few short paragraphs on a hypothesis regarding the Hobbits of Flores utterly changed how I grok Eliezer's old post.
> There is still doubt, but there seems to be a good chance that the Flores Hobbit was a member of a distinct hominid species, rather than some homo sap with a nasty case of microcephalic dwarfism.   If this is the case, the Hobbits are likely descended from a small, Australopithecus-like population that managed to move from Africa to Indonesia without leaving any fossils in between, or from some ancient hominid (perhaps homo erectus) that managed to strand themselves on Flores and then shrank, as many large animals do when isolated on islands.
> 
> Island dwarfing of a homo erectus population is the dominant idea right now.  However, many proponents are really bothered by how small the Hobbit’s brain was.  At 400 cc, it was downright teeny, about the size of a chimpanzee’s brain.  Most researchers seem to think that hominid brains naturally increase in size with time. They also suspect that anyone with a brain this small couldn’t be called sentient – and the idea of natural selection driving a population from sentience to nonsentience bothers them.
> 
> They should get over it.  Hominid brain volume has increased pretty rapidly over the past few million years, but the increase hasn’t been monotonic.  It’s decreased about 10% over the past 25,000 years. Moreover, we know of examples where natural selection has caused drastic decreases in organismal complexity – for example, canine venereal sarcoma, which today is an infectious cancer, but was once a dog.
I have to break h 
 | 
					
	8735dc6a-cad2-4228-ab78-7ffb4fd5b6f6 
 | 
	StampyAI/alignment-research-dataset/youtube 
 | 
	Youtube Transcripts 
 | 
	What's the Use of Utility Functions?
okay so in some of the earlier
Computerphile videos I talked about utility
functions or objective functions and we
got a lot of different comments relating
to that idea one thing people said was
well surely this kind of monomaniacal
following of a single utility function
at the cost of everything else is really
the cause of the problem in the first
place why even use a utility function or
maybe have several conflicting ones that
interact with each other or something
like that some people asked why do we
assume that an AI will have a utility
function in the first place aren't we
making a pretty strong assumption about
the design of the AI when in fact we
don't know how it would be implemented
humans don't have explicit utility
functions that they consult when they're
making their decisions a lot of
different AI designs people are working
on now don't have utility functions
coded into them explicitly so why make
that kind of unwarranted assumption so
before we get into that let's just go
over what a utility function is okay so
here's the earth or the universe it can
be in any one of several different
states so let's just look at three
possible world states in this world I'm
enjoying a pleasant cup of tea in this
world I've run out of milk so the tea
isn't quite how I'd like it to be and in
this world I'm being stung by two
wasps we want some way of expressing
that I have preferences over these world
states some of them are better for me
than others so a utility function is a
function which takes as an argument a
world state and outputs a number saying
broadly speaking how good that world is
for me how much utility I get from it so
in this example perhaps a nice cup of
tea is worth 10 a mediocre cup of tea
is worth nine and the wasps are minus a
thousand but Rob you might say that's
way too simple I care about all kinds of
things and what I love about the
world is complex and nuanced can you
really distill everything down to just
a single number on each world state well
with that attitude you can and you kind
of have to but let's just forget about
the numbers for now and talk about
preferences
let's make some basic assumptions about
your preferences the first one is that
you do have preferences given any two
states of the world you could decide
which one you would prefer to happen or
you could be indifferent but there's
this basic trilemma here for any pair of
world states a and B either a is
preferable to B or B is preferable to a
or you're indifferent between a and B it
doesn't matter to you which one happens
always exactly one of these things is
true hopefully that should be obvious
but just think about what it would mean
for it not to be true like what would it
mean to not prefer A to B not prefer B
to a and also not be indifferent between
A and B similarly what would it mean to
prefer A to B and simultaneously prefer
B to a if you're faced with a choice
then between a and B what do you do the
second basic assumption is transitivity
so you have this relation between States
is preferable to and you assume that
this is transitive which just means that
if you prefer A to B and you prefer B to
C then you prefer a to C again this
seems intuitively pretty obvious but
let's look at what it would mean to have
intransitive preferences let's say I
prefer being in Amsterdam to being in
Beijing and I prefer being in Beijing to
being in Cairo and I prefer being in
Cairo to being in Amsterdam what happens
if I have these preferences let's say I
start out in Amsterdam I prefer being in
Cairo so I get on a plane and I fly to
Cairo now I'm in Cairo and I find
actually I prefer being in Beijing so I
get on the plane I fly to Beijing I'm
now in Beijing and I say oh you know
actually I prefer to be in Amsterdam so
I fly to Amsterdam and now I'm
back where I started and hey what do you
know I prefer to be in Cairo so you can
see that if your preferences are
intransitive you can get sort of stuck in
a loop where you just expend all of your
resources flying between cities or in
some other way changing between options
and this doesn't seem very smart so if
we accept these two pretty basic
assumptions about your preferences then
we can say that your preferences are
coherent you may have noticed there's
something else that has these two
properties which is the greater than
relation on numbers for any two numbers
a and B either a is greater than B B is
greater than a or a and B are equal and
if a is greater than B and B is greater
than C then a is greater than C the fact
that preferences and numbers share these
properties is relevant here so if your
preferences are coherent they'll define
an ordering over world states that is to say
given your preferences you could take
every possible world state and arrange
them in order of how good they are for you
there will be a single ordering
over world states you know there aren't
any loops because your preferences are
transitive now if you have an ordering
of world States there will exist a set
of numbers one for each world state that
correspond to that ordering perhaps you
could just take them all in order and
give each one a number according to
where it falls in the ordering so those
are your utility values for any coherent
preferences there will be a set of
utility values that exactly represents
them and if you have a utility value on
every world state well there will be
some function which takes in world
states and returns their utility
values and that's your utility function
so if you have consistent preferences
you have a utility function but Rob you
may say I don't have consistent
preferences I'm a human being my
preferences are all over the place
that's true human beings do not reliably
behave as though they have consistent
preferences but that's just because
human intelligence is kind of badly
implemented our inconsistencies don't
make us better people it's not some
magic key to our humanity or secret to
our effectiveness or whatever it's not
making us smarter or more empathetic or
more ethical it's just making us make
bad decisions talking about utility
functions is actually a way of assuming
very little about the design of an AI
other than assuming that it has coherent
goal directed behavior it doesn't matter
how it's implemented if it's effective at
navigating the world to get what it wants
it will behave as though it has a
particular utility function and this
means if you're going to build an agent
with coherent goal directed behavior
you'd better make sure it has the right
utility function
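A minimal sketch of the construction described in the transcript (a coherent preference ordering over world states turned into a utility function), assuming a toy set of world states; the states, names and numbers are illustrative only, not from the talk:

```python
# A toy version of the construction in the talk: a coherent preference
# ordering over world states can be turned into a utility function.

# Illustrative world states, listed from worst to best.
preference_order = [
    "being stung by two wasps",
    "a mediocre cup of tea",
    "a nice cup of tea",
]

# Assign each state a number according to its position in the ordering.
utility = {state: rank for rank, state in enumerate(preference_order)}

def prefers(a, b):
    """True if world state a is preferred to world state b."""
    return utility[a] > utility[b]

print(utility)
print(prefers("a nice cup of tea", "a mediocre cup of tea"))   # True
print(prefers("being stung by two wasps", "a nice cup of tea"))  # False
```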
[Music]
just wanted to say thank you to my
patreon supporters the three people who
somehow managed to support me before I
even mentioned in the video that I was
setting up a patreon and I especially
want to thank Chad Jones who's pledged
$10 a month thank you so much it really
means a lot to me that there are people
out there who think what I'm doing is
worth supporting
so thanks again 
 | 
					
	9c6c24e3-16c4-4bc8-881b-f5c1dbe589b7 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	LW Study Hall - 2 Month Update
Comment reposted from (link) for exposure
 
Two months have passed and I’m glad to say the LW Study Hall on tinychat is still active and alive. Since judging from the comments it kind of looks like we’ve moved on from tinychat, a review like this might be useful for anyone who hasn’t been there yet.
My first sessions on the chat were driven more by curiosity than anything else since I didn’t believe it would be really effective for me – I’ve felt that I procrastinate too much, but it never occurred to me that working together with other people might make me more effective. I was proven wrong.
Since those first sessions I’ve been online almost every day and got to see different people come and go, and some people stay. It didn’t take long for me to feel like a part of the “chat community”, and to feel motivated to work to see the regulars more often, some of which I might even consider friends now. The atmosphere is friendly, people make an active effort to integrate newcomers in the “community” and I have yet to see an argument that isn’t constructive. Though the breaks are a bit flexible, people usually don’t overstretch it and it’s generally good practice not to chat during a working phase. More introverted people can participate without taking part in the chat much and without broadcasting video.
So, what makes this chat so effective in combating procrastination? Pomodoros are the “flow” of the chat. Since you’re working with other people, you are much more likely to stick to the pomodoro cycle than if you set those constraints for yourself. That doesn’t just mean you keep the breaks relatively short, but you also don’t work too long. I find that if I work alone, I tend to keep at it for longer than I can keep concentrated. When I do take a break I don’t really have anything else to do, so I might start to procrastinate, leading to a work cycle where the “breaks” can be as long as the working phases. This has been my main issue with structuring my working da 
 | 
					
	9fc0b778-015a-45a2-a028-c90fddd64351 
 | 
	StampyAI/alignment-research-dataset/alignmentforum 
 | 
	Alignment Forum 
 | 
	Bayesian Probability is for things that are Space-like Separated from You
First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a [Bayesian network](https://en.wikipedia.org/wiki/Bayesian_network), and imagine that you are a node in that Bayesian network. If there is a path from you to another node following edges in the network, I will say that node is time-like separated from you, and in your future. If there is a path from another node to you, I will say that node is time-like separated from you, and in your past. Otherwise, I will say that the node is space-like separated from you. 
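A minimal sketch of this classification, assuming a toy graph: the nodes, edges, and the networkx-based helper below are illustrative, not from the post.

```python
import networkx as nx

# A toy causal graph: edges point from causes to effects, and "you" is one node.
# The graph and node names here are made up for illustration.
G = nx.DiGraph([
    ("your upbringing", "you"),
    ("you", "tomorrow's decision"),
    ("distant supernova", "alien astronomer"),
])

def separation(graph, me, other):
    """Classify `other` relative to `me` as time-like (past/future) or space-like."""
    if other in nx.ancestors(graph, me):
        return "time-like (past)"
    if other in nx.descendants(graph, me):
        return "time-like (future)"
    return "space-like"

for node in G.nodes:
    if node != "you":
        print(f"{node}: {separation(G, 'you', node)}")
```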
Nodes in your past can be thought of as things that you observe. When you think about physics, it sure does seem like there are a lot of things in your past that you do not observe, but I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you. (Whether or not you actually can decompose things like this is complicated, and related to whether or not you can use the tickle defense in the smoking lesion problem.)
Nodes in your future can be thought of as things that you control. These are not always things that you want to control. For example, you control the output of "You assign probability less than 1/2 to this sentence," but perhaps you wish you didn't. Again, if you partially control a fact, I want to say that (maybe) you can break that fact into multiple nodes, some of which you control, and some of which you don't.
So, you know the things in your past, so there is no need for probability there. You don't know the things in your future, or things that are space-like separated from you. (Maybe. I'm not sure that talking about knowing things you control is not just a type error.) You may have cached that you should use Bayesian probability to deal with things you are uncertain about. You may have this justified by the fact that if you don't use Bayesian probability, there is a Pareto improvement that will cause you to predict better in all worlds. The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them! Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future! Note that many things in our future (like our future observations) are also in the future of things that are space-like separated from us, so we want to use Bayes to reason about those things in order to have better beliefs about our observations.
I claim that logical inductors do not feel entirely Bayesian, and this might be why. They can't if they are able to think about sentences like "You assign probability less than 1/2 to this sentence." 
 | 
					
	657d43e4-2e11-416b-b4f1-3ef414d651c1 
 | 
	StampyAI/alignment-research-dataset/youtube 
 | 
	Youtube Transcripts 
 | 
	Win $50k for Solving a Single AI Problem? #Shorts
say you've got a huge diamond you want
to protect so you put it in a cool
sci-fi vault with all sorts of sensors
and actuators you have an ai system to
run the vault and the plans it comes up
with might be too complex for you to
understand but it also predicts the
final view from the camera after the
plan happens so before you okay a plan
you can check that the diamond is still
there at the end
but imagine a plan that allows a thief
to come in and set up a screen in front
of the camera showing a diamond the
predicted outcome looks good so you okay
the plan and the diamond is stolen but
this should be avoidable right in order
to predict the right fake image the ai
has to know that the diamond's been
stolen but how do you get that
information out solving this problem in
its hardest form is extremely difficult
so difficult in fact that the alignment
research center is offering prizes of
five to fifty thousand dollars for good
ideas so if you think you've got a
solution based on the description i've
just given you don't read the full
technical report it's 105 pages of
reasons why your idea won't work but if
you've carefully gone through all of
that and still think you've got
something send it in link below the
deadline is february 15th i think i'll
have a go myself 
 | 
					
	137941c4-d18f-474a-a801-cb6eb6b5d446 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Meetup : Melbourne practical rationality meetup
Discussion article for the meetup : Melbourne practical rationality meetup
WHEN: 06 January 2012 07:00:00AM (+1100)
WHERE: 55 Walsh St, West Melbourne VIC 3003, Australia
Practical rationality, as distinct from the social and rationality outreach meetups. Look for a social meetup on the 3rd Friday of each month.
Discussion:
http://groups.google.com/group/melbourne-less-wrong http://www.google.com/moderator/#16/e=6a317
This meetup repeats on the 1st Friday of each month.
All welcome from 6pm. Call the phone number on the door and I'll let you in.
Discussion article for the meetup : Melbourne practical rationality meetup 
 | 
					
	d2e15d3f-627f-4484-a966-d0bc29f4adea 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Philosophy of Numbers (part 1)
This post is the first in a series of things that I think would be fun to discuss on LW. Part two is here.
----------------------------------------
It seems like there are (at least) two kinds of things we make statements about: physical things, like apples or cities, and logical things, like numbers or logical relations. And it's pretty interesting to question how accurate this seeming is. Are numbers really a "kind of thing," and what do we mean by that anyways? Can we unify these multiple kinds of things, or kinds of statements, into one kind, or not?
For a light review of standard answers, see this nice video. For more depth, you might see the SEP on abstract objects or philosophy of mathematics.
Compare the statements "There exists a city larger than Paris" versus "There exists a number greater than 17." It seems like we use much the same thought patterns to evaluate both these statements, and both seem to be true in the same ordinary sense. Yet the statement about cities seems true because of a correspondence to the external world, but there is no "17" object in a parsimonious predictive model of the world.
To this you might say, "What's the big deal? Even if I don't think numbers are physical objects, it's perfectly reasonable to make this tight analogy between cities and numbers in our reasoning. How is making a big issue out of this going to help us do anything practical?"
Well, in logical decision theory, a recent formulation of some ideas from TDT/UDT, the agent wants to make a causal model of the world that includes (in the model) "causal" effects of a fixed mathematical statement (specifically, the output of the agent's own algorithm). First of all, this is pretty novel and we don't really know how to formalize learning such a model. Second, it's pretty philosophically weird - how is a piece of math supposed to have something like a causal effect on trees and rocks? If we want to solve the practical problem, it might help to be less confused about 
 | 
					
	d708a14c-0d46-43ac-b29f-8a7f4a07c010 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Reality is whatever you can get away with.
I register my luggage, and stick a paper label to it. There are many kiosks for placing luggage in the cargo system. One has a long line. One has a single family. The rest are empty. The workers at those sections are on their phones.
I walk up to one with my bag, and lightly clack it against the ground. The worker eyes me.
"You're only supposed to come when someone calls you."
"..."
"I didn't call."
"..."
I consider asking what she actually wants me to do, what the actual rules of the kiosks are, if she was on her break, why there were so many empty kiosks. Instead, I place my luggage on the scale.
She asks me for my ID. I give it to her. She scans it, and takes my bag. I thank her and leave.
----------------------------------------
I go to buy airport food. I go somewhere with bagels. While in line, I recall that people put sugar in bagels, and walk somewhere else. I go to a bar that serves drinks and tex-mex. Directly from the cashier, I order a cocktail, a hot dog, and a taco. On the menu, to the right of the word "dog," is the number 13. She asks me for my ID. I show her it. She inspects it, and accepts it.
"That'll be forty dollars." ($40).
"What? What does each individual item cost?"
She rotates her computer display towards me. I look at it.
 * Bloody Mary (eight dollars ($8))
 * Fish taco (eleven dollars ($11))
 * LA Street Dog (fourteen dollars ($14))
 * Service charge (seven dollars ($7)) 
I consider what to remove from my order.
"I'm going to go somewhere else. Goodbye."
The cashier shakes her head at me. Another person walks up to the cash register. Before, I was the only one at the bar.
----------------------------------------
I feel failure because I wasted someone's time.
----------------------------------------
Later, I buy a large sandwich for sixteen dollars ($16).
----------------------------------------
If you want the truth, pay attention in an airport.
  
 | 
					
	000a2291-3ce1-4deb-9544-d3b3e94e61bd 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	[Link] Son of low-hanging fruit
Related: Thick and Thin, Loss of local knowledge affecting intellectual trends
An entry I found in the archives on Gregory Cochran's and Henry Harpending's blog West Hunter.
> In yet another example of  long-delayed discovery, forms of high-altitude lightning were observed for at least a century before becoming officially real (as opposed to really real).
> 
> Some thunderstorms manage to generate blue jets shooting out of their thunderheads, or  glowing red rings and associated tentacles around 70 kilometers up.   C T R Wilson predicted this long ago, back in the 1920s.  He had a simple model that gets you started.
> 
> You see, you can think of the thunderstorm, after a ground discharge,  as a vertical dipole. Its electrical field drops as the cube of altitude.  The threshold voltage for atmospheric breakdown is proportional to pressure, while pressure drops exponentially with altitude: and as everyone knows, a negative exponential drops faster than any power.
> 
> The curves must cross.   Electrical breakdown occurs.  Weird lightning, way above the clouds.
> 
> As I said, people reported sprites at least a hundred years ago, and they have probably been observed occasionally since the dawn of time. However, they’re far easier to see if you’re above the clouds – pilots often do.
> 
> Pilots also learned not to talk about it, because nobody listened.   Military and commercial pilots have to pass periodic medical exams known as ‘flight physicals’,  and there was a suspicion that reporting glowing red cephalopods in the sky might interfere with that.  Generally, you had to see the things that were officially real (whether they were really real or not), and only those things.
> 
> Sprites became real when someone recorded one by accident on a fast camera in 1989.  Since then it’s turned into a real subject, full of strangeness: turns out that thunderstorms  sometimes generate gamma-rays and even antimatter.
> 
> Presumably we’ve gotten over all that ignoring your lyi 
 | 
					
	30cc4427-70e4-4acf-83ab-83d3ce5f9418 
 | 
	StampyAI/alignment-research-dataset/alignmentforum 
 | 
	Alignment Forum 
 | 
	Anthropics: different probabilities, different questions
I've written before that different theories of anthropic probability [are really answers to different questions](https://www.lesswrong.com/posts/nxRjC93AmsFkfDYQj/anthropic-probabilities-answering-different-questions). In this post I'll try to be as clear as possible on what that means, and explore the implications.
Introduction
============
One of Nick Bostrom's early anthropic examples [involved different numbers of cars in different lanes](https://plus.maths.org/issue17/features/traffic/2pdf/index.html/op.pdf). Here is a modification of that example:
> 
> You're driving along, when you turn into a dark tunnel and are automatically shunted into the left or the right lane. You can't see whether there are any other cars in your dark lane, but the car radio announces "there are 99 cars in the right lane and 1 in the left lane".
> 
> Given that, what is your probability of being in the left lane?

That probability is obviously 1%. More interesting than that answer is that there are multiple ways of reaching it. And each of these ways corresponds to answering a slightly different question. And this leads to my ultimate answer about anthropic probability:
* Each theory of anthropic probability corresponds to [answering a specific, different question about proportions](https://www.lesswrong.com/posts/nxRjC93AmsFkfDYQj/anthropic-probabilities-answering-different-questions). These questions are equivalent in non-anthropic settings, so each of them feels potentially like a "true" extension of probability to anthropics. Paradoxes and confusion in anthropics result from confusing one question with another.
So if I'm asked "what's the 'real' anthropic probability of X?", my answer is: tell me what you mean by probability, and I'll tell you what the answer is.
0. The questions
================
If X is a feature that you might or might not have (like being in a left lane), here are several questions that might encode the probability of X:
1. What proportion of potential observers have X?
2. What proportion of potential observers *exactly like you* have X?
3. What is the average proportion of potential observers with X?
4. What is the average proportion of potential observers *exactly like you* with X?
We'll look at each of these questions in turn[[1]](#fn-LtaFTbJafZ2gTzCdH-1), and see what they imply in anthropic and non-anthropic situations.
1. Proportion of potential observers: SIA
=========================================
We're trying to answer "Given *that*, what is your probability of being in the left lane?" The "that" means being in the tunnel in the situation above, so we're actually looking for a conditional probability, best expressed as:
1. What proportion of the potential observers, who are in the tunnel in the situation above, are also in the left lane?
The answer for that is an immediate "one in a hundred", or 1%, since we know there are 100 drivers in the tunnel, and 1 of them is in the left lane. There may be millions of different tunnels, in trillions of different potential universes; but, assuming we don't need to worry about infinity[[2]](#fn-LtaFTbJafZ2gTzCdH-2), we can count 100 observers in the tunnel in that situation for each observer in the left lane.
1.1 Anthropic variant
---------------------
Let's now see how this approach generalises to anthropic problems. Here is an anthropic version of the tunnel problem, based on the incubator version of the [Sleeping Beauty problem](https://en.wikipedia.org/wiki/Sleeping_Beauty_problem):
> 
> A godly AI creates a tunnel, then flips a fair coin. If the coin comes out heads, it will create one person in the tunnel; if it was tails, it creates 99 people.
> 
> You've just woken up in this tunnel; what is the probability that the coin was heads?

So, we want to answer:
1. What proportion of the potential observers, who are in the tunnel, are also in a world where the coin was heads?
We can't just count off observers within the same universe here, since the 99 and the 1 observers don't exist in the same universe. But we can pair up universes here: for each universe where the coin flip goes heads (1 observer), there is another universe of equal probability where the coin flip goes tails (99 observers).
So the answer to the proportion of potential observers question remains 1%, just as in the non-anthropic situation.
This is exactly the "[self-indication assumption](https://www.lesswrong.com/tag/self-indication-assumption)" (SIA) version of probability, which counts observers in other potential universes as if they existed in a larger multiverse of potential universes[[3]](#fn-LtaFTbJafZ2gTzCdH-3).
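A minimal sketch of this counting rule, assuming the two equally likely incubator worlds above; the code and helper name are illustrative only:

```python
from fractions import Fraction

# Each entry: (probability of the world, number of tunnel observers, coin was heads?)
# This is the incubator variant: heads -> 1 observer, tails -> 99 observers.
worlds = [
    (Fraction(1, 2), 1, True),
    (Fraction(1, 2), 99, False),
]

def sia_probability_of_heads(worlds):
    """SIA: weight each world by probability x number of (tunnel) observers in it."""
    total = sum(p * n for p, n, _ in worlds)
    heads = sum(p * n for p, n, is_heads in worlds if is_heads)
    return heads / total

print(sia_probability_of_heads(worlds))  # 1/100
```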
2. Proportion of potential observers exactly like you: SIA again
================================================================
Let's now look at the second question:
2. What proportion of the potential observers exactly like you, who are in the tunnel in the situation above, are also in the left lane?
The phrase "exactly like you" is underdefined - do you require that the other yous be made of exactly the same material, in the same location, etc... I'll cash out the phrase as meaning "has had the same subjective experience as you". So we can cash out the left-lane probability as:
2. What proportion of the potential observers, with the same subjective experiences as you, who are in the tunnel in the situation above, are also in the left lane?
We can't count off observers within the same universe for this, as the chance of having multiple observers with the same subjective experience in the same universe is very low, unless there are huge numbers of observers.
Instead, assume that one in Ω observers in the tunnel have the same subjective experiences as you. This proportion[[4]](#fn-LtaFTbJafZ2gTzCdH-4) must be equal for an observer in the left and right lanes. If it weren't, you could deduce information about which lane you were in just from your experiences - so the proportion being equal is the same thing as the lane and your subjective experiences being independent. For any given little ω, this gives the following proportions (where "Right 1 not you" is short for "the same world as 'Right 1 you,' apart from the first person on the right, who is replaced with a non-you observer"):

So the proportion of observers in the right/left lane with your subjective experience is 1/Ω the proportion of observers in the right/left lane. When comparing those two proportions, the two 1/Ω cancel out, and we get 1%, as before.
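Written out, and assuming as above that a fraction 1/Ω of the observers in each lane share your exact subjective experience, the cancellation is just:

\[
\frac{\tfrac{1}{\Omega}\times(\text{left-lane observers})}{\tfrac{1}{\Omega}\times(\text{tunnel observers})}
= \frac{\text{left-lane observers}}{\text{tunnel observers}} = \frac{1}{100}.
\]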
2.1 Anthropic variant
---------------------
Ask the anthropic version of the question:
2. What proportion of the potential observers who are in the tunnel, with the same subjective experiences as you, are also in a world where the coin was heads?
Then same argument as above shows this is also 1% (where "Tails 1 not you" is short for "the same world as 'Tails 1 you,' apart from the first tails person, who is replaced with a non-you observer"):

This is still SIA, and reflects the fact that, for SIA, the reference class doesn't matter - as long [as it includes the observers subjectively indistinguishable from you](https://www.lesswrong.com/posts/MdvwkgKnbxNdbNRao/in-sia-reference-classes-almost-don-t-matter). So questions about you are the same whether we talk about "observers" or "observers with the same subjective experiences as you".
3. Average proportions of observers: SSA
========================================
We now turn to the next question:
3. What is the average proportion of potential observers in the left lane, relative to the average proportion of potential observers in the tunnel?
Within a given world, say there are N observers not in the tunnel and t tunnels, so N+100t observers in total.

The proportion of observers in the left lane is t/(N+100t), while the proportion of observers in the tunnel is 100t/(N+100t). The ratio of these proportions is 1:100.
Then notice that if a and b are in a 1:100 proportion in every possible world, the averages of a and b are in a 1:100 proportion as well[[5]](#fn-LtaFTbJafZ2gTzCdH-5), giving the standard probability of 1%.
3.1 Anthropic variant
---------------------
The anthropic variant of the question is then:
3. What is the average proportion of potential observers in a world where the coin was heads, relative to the average proportion of potential observers in the tunnel?
Within a given world, ignoring the coin, say there are N observers not in the tunnel, and t tunnels. Let's focus on the case with one tunnel, t=1. Then the coin toss splits this world into two equally probable worlds, the heads world, WH, with N+1 observers, and the tails world, WT with N+99 observers:

The proportion of observers in tunnels in WH is 1/(N+1). The proportion of observers in tunnels in WT is 99/(N+99). Hence, across these two worlds, the average proportion of observers in tunnels is the average of these two, specifically
\[
\frac{1}{2}\left(\frac{1}{N+1}+\frac{99}{N+99}\right)=\frac{50N+99}{(N+1)(N+99)}.
\]
If N is zero, this is 99/99=1; this is intuitive, since N=0 means that all observers are in tunnels, so the average proportion of observers in tunnels is 1.
What about the proportion of observers in tunnels in heads worlds? Well, this is 1/(N+1) in the heads world, and 0 in the tails world, so the average proportion is:
\[
\frac{1}{2}\left(\frac{1}{N+1}+0\right)=\frac{1}{2(N+1)}.
\]
If N is zero, this is 1/2 -- the average of 1 (the proportion in WH when N=0, where all observers are heads-world observers in tunnels) and 0 (the proportion of heads-world observers in the tails world WT).
Taking the ratio (1/2)/1=1/2, the answer to that question is 1/2. This is the answer given by the "[self-sampling assumption](https://www.lesswrong.com/tag/self-sampling-assumption)" (SSA), which gives the 1/2 response in the sleeping beauty problem (of which this is a variant).
In general, the ratio would be:
\[
\frac{1}{2(N+1)}\div\frac{50N+99}{(N+1)(N+99)}=\frac{N+99}{100N+198}.
\]
If N is very large, this is approximately 1/100, i.e. the same answer as SIA would give. This shows that, for SSA, the [reference class](https://www.anthropic-principle.com/q=book/chapter_4/#4d) of observers is important: N, the number of observers that are not in a tunnel, determines the probability estimate. So how we define observers will determine our probability[[6]](#fn-LtaFTbJafZ2gTzCdH-6).
So, for a given pair of equally likely worlds, WH and WT, the ratio of question 3. varies between 1/2 and 1/100. This holds true for multiple tunnels as well. And it's not hard to see that this implies that, averaging across all worlds, we also get a ratio between 1/2 (all observers in the reference class are in tunnels) and 1/100 (almost no observers in the reference class are in tunnels).
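As a quick numeric check of the formula derived above (a sketch, not from the post; the helper name is made up):

```python
from fractions import Fraction

def ssa_probability_of_heads(N):
    """The ratio derived above, (N+99)/(100N+198), where N is the number of
    reference-class observers outside the tunnel."""
    return Fraction(N + 99, 100 * N + 198)

for N in (0, 10, 1_000, 1_000_000):
    p = ssa_probability_of_heads(N)
    print(N, p, float(p))
# N = 0 gives 1/2 (the classic SSA answer); large N approaches 1/100 (the SIA answer).
```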
4. Average proportions of observers exactly like you: FNC
=========================================================
Almost there! We have a last question to ask:
4. What is the average proportion of potential observers in the left lane, with the same subjective experiences as you, relative to the average proportion of potential observers in the tunnel, with the same subjective experiences as you?
I'll spare you the proof that this gives 1% again, and turn directly to the anthropic variant:
4. What is the average proportion of potential observers in a world where the coin was heads, with the same subjective experiences as you, relative to the average proportion of potential observers in the tunnel, with the same subjective experiences as you?
By the previous section, this is the SSA probability with the reference class of "observers with the same subjective experiences as you". This turns out to be [full non-indexical conditioning](https://arxiv.org/abs/math/0608592) (FNC), which involves conditioning on any possible observation you've made, no matter how irrelevant. It's known that if all the observers have made the same observations, this reproduces SSA, but that as the number of unique observations increases, this tends to SIA.
That's because FNC is [inconsistent](https://www.lesswrong.com/posts/jH3NfxoNgKTh9r5KW/anthropics-full-non-indexical-conditioning-fnc-is) - the odds of heads to tails change based on irrelevant observations which change your subjective experience. Here we can see what's going on: FNC is SSA with the reference class of observers with the same subjective experiences as you. But this reference class is variable: as you observe more, the size of the reference class changes, decreasing[[7]](#fn-LtaFTbJafZ2gTzCdH-7) because others in the reference class will observe something different to what you do.
But SSA is not consistent across reference class changes! So FNC is not stable across new observations, even if those observations are irrelevant to the probability being estimated.
For example, imagine that we started, in the tails world, with all 99 copies exactly identical to you, and then you make a complex observation. Then that world will split in many worlds where there are no exact copies of you (since none of them made exactly the same observation as you), a few worlds where there is one copy of you (that made the same observation as you), and many fewer worlds where there are more than one copy of you:

In the heads world, we only have no exact copies and one exact copy. We can ignore the worlds without observers exactly like us, and concentrate on the worlds with a single observer like us (this represents the vast majority of the probability mass). Then, since there are 99 possible locations in the tails world and 1 in the heads world, we get a ratio of roughly 99:1 for tails over heads:

This gives a ratio of roughly 100:1 for "any coin result" over heads, and shows why FNC converges to SIA.
5. What decision to make: ADT
=============================
There's a fifth question you could ask:
5. What is the best action I can take, given what I know about the observers, our decision algorithms, and my utility function?
This transforms the probability question into a decision-theoretic question. I've [posted](https://www.youtube.com/watch?v=aiGOGkBiWEo) at [length](https://www.lesswrong.com/s/kmryZRz5r9bjsug9e) on [Anthropic Decision Theory](https://arxiv.org/abs/1110.6437), which is the answer to that question. Since I've done a lot of work on that already, I won't be repeating that work here. I'll just point out that "what's the best decision" is something that can be computed independently of the various versions of "what's the probability".
5.1 How right do you want to be?
--------------------------------
An alternate characterisation of the SIA and SSA questions could be to ask, "If I said 'I have X', would I want most of my copies to be correct (SIA) or my copies to be correct in most universes (SSA)?"
These can be seen as having two different utility functions (one linear in copies that are correct, one that gives rewards in universes where my copies are correct), and acting to maximise them. See [the post here](https://www.lesswrong.com/posts/PgsxXNSDsyz4DFEuw/anthropic-paradoxes-transposed-into-anthropic-decision) for more details.
6. Some "paradoxes" of anthropic reasoning
==========================================
Given the above, let's look again at some of the paradoxes of anthropic reasoning. I'll choose three: the [Doomsday argument](https://en.wikipedia.org/wiki/Doomsday_argument), the [presumptuous philosopher](https://www.anthropic-principle.com/preprints/mys/mysteries.pdf), and Robin Hanson's [take on grabby aliens](https://arxiv.org/abs/2102.01522).
6.1 Doomsday argument
---------------------
The [Doomsday argument](https://en.wikipedia.org/wiki/Doomsday_argument) claims that the end of humanity is likely to be at hand - or at least more likely than we might think.
To see how the argument goes, we could ask "what proportion of humans will be in the last 90% of all humans who have ever lived in their universe?" The answer to that is, tautologically[[8]](#fn-LtaFTbJafZ2gTzCdH-8), 90%.
The simplest Doomsday argument would then reason from that, saying "with 90% probability, we are in the last 90% of humans in our universe, so, with 90% probability, humanity will end in this universe before it reaches 100 times the human population to date."
What went wrong there? The use of the term "probability", without qualifiers. The sentence slipped from using probability in terms of ratios within universes (the SSA version) to ratios of which universes we find ourselves in (the SIA version).
As an illustration, imagine that the godly AI creates either world W0 (with 0 humans), W10 (with 10 humans), W100 (with 100 humans), or W1,000 (with 1,000 humans). Each option has probability 1/4. These humans are created in numbered rooms, in order, starting at room 1.

Then we might ask:
* A. What proportion of humans are in the last 90% of all humans created in their universe?
That proportion is undefined for W0. But for the other worlds, the proportion is 90% (e.g. humans 2 through 10 for W10, humans 11 through 100 for W100, etc.). Ignoring the undefined world, the average proportion is also 90%.
Now suppose we are created in one of those rooms, and we notice that it is room number 100. This rules out worlds W0 and W10; but the average proportion remains 90%.
But we might ask instead:
* B. What proportion of humans in room 100 are in the last 90% of all humans created in their universe?
As before, humans being in room 100 eliminates worlds W0 and W10. The worlds W100 and W1,000 are equally likely, and each have one human in room 100. In W100, we are in the last 90% of humans; in W1,000, we are not. So the answer to question B is 50%.
Thus the answer to A is 90%, the answer to B is 50%, and there is no contradiction between these.
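For readers who like to check the arithmetic, here is a small enumeration of the example above (my own sketch; W0 is dropped since it contains no humans, and it is ruled out anyway once someone occupies room 100):

```python
worlds = {10: 0.25, 100: 0.25, 1000: 0.25}   # world size -> prior probability

def in_last_90pct(room: int, n: int) -> bool:
    """Is the human in `room` among the last 90% of the n humans in their world?"""
    return room > n // 10                    # last 90% = rooms n/10 + 1 .. n

# Question A: average proportion of humans who are in the last 90% of their universe.
prop_A = sum(
    sum(in_last_90pct(r, n) for r in range(1, n + 1)) / n for n in worlds
) / len(worlds)

# Question B: among humans in room 100, the fraction in the last 90% of their universe.
with_room_100 = {n: p for n, p in worlds.items() if n >= 100}
prop_B = sum(p * in_last_90pct(100, n) for n, p in with_room_100.items()) / sum(with_room_100.values())

print(prop_A)  # 0.9
print(prop_B)  # 0.5
```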
Another way of thinking of this: suppose you play a game where you invest a certain amount of coins. With probability 0.9, your money is multiplied by ten; with probability 0.1, you lose everything. You continue re-investing the money until you lose. This is illustrated by the following diagram (with the initial investment indicated by green coins):

Then it is simultaneously true that:
1. 90% of all the coins you earnt were lost the very first time you invested them, and
2. You have only 10% chance of losing any given investment.
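A quick deterministic check of both claims (my own sketch): after n successful reinvestments the pot holds 10^n coins, and the coins minted in the final successful round are invested exactly once before everything is lost.

```python
n = 6                                        # successful reinvestments before the losing round
total_coins_ever = 10**n                     # 1 initial coin + 9 + 90 + ... + 9*10**(n-1)
lost_on_first_investment = 9 * 10**(n - 1)   # coins minted in the last successful round
print(lost_on_first_investment / total_coins_ever)  # 0.9 -- yet each individual investment only had a 0.1 chance of being the losing one
```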
So being more precise about what is meant by "probability" dissolves the Doomsday argument.
6.2 Presumptuous philosopher
----------------------------
Nick Bostrom introduced the [presumptuous philosopher](https://www.anthropic-principle.com/preprints/mys/mysteries.pdf) thought experiment to illustrate a paradox of SIA:
> 
> It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories: T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite and there are a trillion trillion *trillion* observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show you that T2 is about a trillion times more likely to be true than T1!”
> 
> 
> 
The first thing to note is that the presumptuous philosopher (PP) may not even be right under SIA. We could ask:
* A. What proportion of the observers exactly like the PP are in the T1 universes relative to the T2 universes?
Recall that SIA is independent of reference class, so adding "exactly like the PP" doesn't change this. So, what is the answer to A.?
Now, T2 universes have a trillion times more observers than the T1 universes, but that doesn't necessarily mean that the PP is more likely to be found in them. Suppose that everyone in these universes knows their rank of birth; for the PP it's the number 24601:

Then, since all universes have more than 24601 inhabitants, the PP is equally likely to exist in T1 universes as in T2 universes; the proportion is therefore 50% (interpreting "the super-duper symmetry considerations are indifferent between these two theories" as meaning "the two theories are equally likely").
Suppose, however, that the PP does not know their rank, and the T2 universes are akin to a trillion independent copies of the T1 universes, each of which has an independent chance of generating an exact copy of the PP:

Then SIA would indeed shift the odds by a factor of a trillion, giving a proportion of 1/(10^12+1). But this is not so much a paradox, as the PP is correctly thinking "if all the exact copies of me in the multiverse of possibilities were to guess we were in T2 universes, only one in a trillion of them would be wrong".
But if instead we were to ask:
* 2. What is the average proportion of PPs among other observers, in T1 versus T2 universes?
Then we would get the SSA answer. If the PPs know their birth rank, this is a proportion of 10^12:1 *in favour of* T1 universes. That's because there is just one PP in each universe, and a trillion times more people in the T2 universes, which dilutes the proportion.
If the PP doesn't know their birth rank, then this proportion is the same[[9]](#fn-LtaFTbJafZ2gTzCdH-9) in the T1 and T2 universes. In probability terms, this would mean a "probability" of 50% for T1 and T2.
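The cases above reduce to simple arithmetic; here is a sketch of my own (with "a trillion" written as 10^12):

```python
N_T1 = 10**24   # observers in a T1 universe ("a trillion trillion")
N_T2 = 10**36   # observers in a T2 universe ("a trillion trillion trillion")

# SIA, birth rank known: each universe type contains exactly one exact-PP copy,
# so the odds stay at the 1:1 prior.
sia_known_odds_T1_to_T2 = 1 / 1

# SIA, rank unknown, T2 ~ a trillion independent copies of T1: a trillion times
# as many chances for an exact PP copy to exist.
sia_unknown_p_T1 = 1 / (N_T2 // N_T1 + 1)          # = 1/(10**12 + 1)

# SSA, rank known: one PP per universe, diluted by total population.
ssa_known_odds_T1_to_T2 = (1 / N_T1) / (1 / N_T2)  # = 10**12, i.e. 10^12:1 in favour of T1

print(sia_known_odds_T1_to_T2, sia_unknown_p_T1, ssa_known_odds_T1_to_T2)
```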
6.3 Anthropics and grabby aliens
--------------------------------
The other paradoxes of anthropic reasoning can be treated similarly to the above. Now let's look at a more recent use of anthropics, [due to Robin Hanson, Daniel Martin, Calvin McCarter, and Jonathan Paulson](https://arxiv.org/abs/2102.01522).
The basic scenario is one in which a certain number of alien species are "grabby": they will expand across the universe, [at almost the speed of light](http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf), and prevent any other species of intelligent life from evolving independently within their expanding zone of influence[[10]](#fn-LtaFTbJafZ2gTzCdH-10).
Humanity has not noticed any grabby aliens in the cosmos; so we are not within their zone of influence. If they had started nearby and some time ago - say within the Milky Way and [half a million years ago](https://www.space.com/41047-milky-way-galaxy-size-bigger-than-thought.html) - then they would be here by now.
What if grabby aliens recently evolved a few billion light years away? Well, we wouldn't see them until a few billion years have passed. So we're fine. But if humans had instead evolved several billion years in the future, then we wouldn't be fine: the grabby aliens would have reached this location before then, and prevented us evolving, or at least would have affected us.
Robin Hanson sees this as an anthropic solution to a puzzle: why did humanity evolve early, i.e. only 13.8 billion years after the Big Bang? We didn't evolve as early as we possibly could - the Earth is a latecomer among Earth-like planets. But the smaller stars will last for trillions of years. Most habitable epochs in the history of the galaxy will be on planets around these small stars, way into the future.
One possible solution to this puzzle is grabby aliens. If grabby aliens are likely (but not too likely), then we could only have evolved in this brief window before they reached us. I mentioned that SIA doesn't work for this (for the same reason that it doesn't care about the Doomsday argument). Robin Hanson then responded:
> 
> If your theory of the universe says that what actually happened is way out in the tails of the distribution of what could happen, you should be especially eager to find alternate theories in which what happened is not so far into the tails. And more willing to believe those alternate theories because of that fact.
> 
> 
> 
That is essentially Bayesian reasoning. If you have two theories, T1 and T2, and your observations are very unlikely given T1 but more likely given T2, then this gives extra weight to T2.
Here we could have three theories:
0. T0: "There are grabby aliens nearby"
1. T1: "There are grabby aliens a moderate distance away"
2. T2: "Any grabby aliens are very far away"

T0 can be ruled out by the fact that we exist. Theory T1 posits that humans could not have evolved much later than we did (or else the grabby aliens would have stopped us). Theory T2 allows for the possibility that humans evolved much later than we did. So, from T2's perspective, it is "surprising" that we evolved so early; from T1's perspective, it isn't, as this is the only possible window.
But by "theory of the universe", Robin Hanson meant not only the theory of how the physical universe was, but the anthropic probability theory. The main candidates are SIA and SSA. SIA is indifferent between T1 and T2. But SSA prefers T1 (after updating on the time of our evolution). So we are more surprised under SIA than under SSA, which, in Bayesian/Robin reasoning, means that SSA is more likely to be correct.
But let's not talk about anthropic probability theories; let's instead see what questions are being answered. SIA is equivalent with asking the question:
1. What proportion of universes with humans exactly like us have moderately close grabby aliens (T1) versus very distant grabby aliens (T2)?
Or, perhaps more relevant to our future:
1. In what proportion of universes with humans exactly like us would those humans, upon expanding into the universe, encounter grabby aliens (T1) or not encounter them (T2)?
In contrast, the question SSA is asking is:
2. What is the average proportion of humans among all observers, in universes where there are nearby grabby aliens (T1) versus very distant grabby aliens (T2)?
If we were launching an interstellar exploration mission, and were asking ourselves what "the probability" of encountering grabby alien life was, then question 1. seems a closer phrasing of that than question 2. is.
And question 2. has the usual reference class problems. I said "observers", but I could have defined this narrowly as "human observers"; in which case it would have given a more SIA-like answer. Or I could have defined it expansively as "all observers, including those that might have been created by grabby aliens"; in that case SSA ceases to prioritise T1 theories and may prioritise T2 ones instead. In that case, humans are indeed "way out in the tails", given T2: we are the very rare observers that have not seen or been created by grabby aliens.
In fact, the same reasoning that prefers SSA in the first place would have preferences over the reference class. The narrowest reference classes are the least surprising - given that we are humans in the 21st century with this history, how surprising is it that we are humans in the 21st century with this history? - so they would be "preferred" by this argument.
But the real response is that Robin is making a category error. If we substitute "question" for "theory", we can transform his point into:
> 
> If your question about the universe gets a very surprising answer, you should be especially eager to ask alternate questions with less surprising answers. And more willing to believe those alternate questions.
> 
> 
> 
---
1. We could ask some variants of questions 3. and 4., by maybe counting causally disconnected segments of universes as different universes (this doesn't change questions 1. and 2.). We'll ignore this possibility in this post. [↩︎](#fnref-LtaFTbJafZ2gTzCdH-1)
2. And also assuming that the radio's description of the situation is correct! [↩︎](#fnref-LtaFTbJafZ2gTzCdH-2)
3. Notice here that I've counted off observers with other observers that have exactly the same probability of existing. To be technical, the question which gives SIA probabilities should be "what proportion of potential observers, weighted by their probability of existing, have X?" [↩︎](#fnref-LtaFTbJafZ2gTzCdH-3)
4. More accurately: probability-weighted proportion. [↩︎](#fnref-LtaFTbJafZ2gTzCdH-4)
5. Let $\mathcal{W}$ be a set of worlds and $p$ a probability distribution over $\mathcal{W}$. Then the expectation of $a$ is $E(a)=\sum_{W\in\mathcal{W}}p(W)a_W=\sum_{W\in\mathcal{W}}p(W)b_W/100=(1/100)\sum_{W\in\mathcal{W}}p(W)b_W=(1/100)E(b)$, which is 1/100 times the expectation of $b$. [↩︎](#fnref-LtaFTbJafZ2gTzCdH-5)
6. If we replace "observers" with "observer moments", then this question is equivalent with the probability generated by the [*Strong Self-Sampling Assumption*](https://www.anthropic-principle.com/q=book/chapter_10/) (SSSA). [↩︎](#fnref-LtaFTbJafZ2gTzCdH-6)
7. If you forget some observations, your reference class can *increase*, as previously different copies become indistinguishable. [↩︎](#fnref-LtaFTbJafZ2gTzCdH-7)
8. Assuming the population is divisible by 10. [↩︎](#fnref-LtaFTbJafZ2gTzCdH-8)
9. As usual with SSA and this kind of question, this depends on how you define the reference class of "other observers", and who counts as a PP. [↩︎](#fnref-LtaFTbJafZ2gTzCdH-9)
10. This doesn't mean they will sterilise planets or kill other species; just that any being evolving within their control will be affected by them and know that they're around. Hence grabby aliens are, by definition, not hidden from view. [↩︎](#fnref-LtaFTbJafZ2gTzCdH-10) 
 | 
					
	80426d43-5149-4fd3-b35e-6e3c67b51641 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Memetic Judo #3: The Intelligence of Stochastic Parrots v.2
There is the persistent meme that AIs such as large language models (ChatGPT etc.) do, in a fundamental sense, lack the ability to develop human-like intelligence.
Central to it is the idea that LLMs are merely probability-predictors for the next word based on a pattern-matching algorithm, and that they therefore cannot possibly develop the qualitative generalization power and flexibility characteristic of a human mind. In that context, they are often dismissed as "stochastic parrots", suggesting they just replicate without any true understanding.
Example Argument
> Large Language Models are just stochastic parrots - they simply replicate patterns found in the text they are trained on and therefore can't be or become generally intelligent like a human.
ANOTHER POPULAR VARIANT
> The AIs don't produce output that is truly novel or original, they just replicate patterns and (somehow) combine them or "mash them together".
I will explain later why I think that both are essentially equivalent.
Just Parrots
The problem with this argument as stated is not in the premise (that LLMs are, essentially, probabilistic pattern replicators - which is correct); it is that the conclusion does not directly follow from the premise (a non sequitur).
When I meet proponents of it, usually they do not have a convincing explanation for why the parrots cannot be generally intelligent.
While I believe that the characterization of large language models as "stochastic parrots" is not strictly incorrect, it is certainly misleading. The right approach is to convince the sceptic to not underestimate their potential.
----------------------------------------
Don't underestimate him.
The Functional Brain Argument
There are strong reasons to assume that the non-existence of generally intelligent mathematical algorithms would violate the concept of brain physicalism - the latter seeming to be the default-position among neuroscientists.
 1. Humans are general intelligences.
 2 
 | 
					
	3c51eb1e-b23f-4fce-885a-0f6789c2b89e 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Sam Altman on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367
Lex Fridman just released a podcast episode with Sam Altman, CEO of OpenAI. In my opinion, there wasn't too much new here that hasn't been said in other recent interviews. However, here are some scattered notes on parts I found interesting from an AI safety lens:
AI risk https://youtu.be/L_Guz73e6fw?t=3266
 * Lex asks Sama to steelman Eliezer Yudkowsky's views
 * Sama said there's some chance of no hope, but the only way he knows how to fix things is to keep iterating and eliminating the "1-shot-to-get-it-right" cases. He does like one of Eliezer's posts that discusses his reasons why he thinks alignment is hard [I believe this is in reference to AGI Ruin: A List of Lethalities].
 * Lex confirms he will do an interview with Eliezer.
 * Sama: Now is the time to ramp up technical alignment work.
 * Lex: What about fast takeoffs?
 * Sama: I'm not that surprised by GPT-4, was a little surprised by ChatGPT [I think this means this feels slow to him]. I'm in the long-takeoffs, short-timelines quadrant. I'm scared of the short-takeoff scenarios.
 * Sama has heard of but not seen Ex Machina
On power https://www.youtube.com/watch?v=L_Guz73e6fw&t=4614s
 * Sama says it's weird that it will be OOM thousands of people in control of the first AGI.
 * Acknowledges the AIS people think OAI deploying things fast is bad.
 * Sama asks how Lex thinks they're doing.
 * Lex likes the transparency and openly sharing the issues.
 * Sama: Should we open source GPT-4?
 * Lex: Knowing people at OAI, no (bc he trusts them)
 * Sama: I think people at OAI know the stakes of what we're building. But we're always looking for feedback from smart people.
 * Lex: How do you take feedback?
 * Sama: Twitter is unreadable. Mostly from convos like this.
On responsibility https://youtu.be/L_Guz73e6fw?t=6813
 * Sama: We will have very significant but new and different challenges [with governing/deciding how to steer AI]
 * Lex: Is it up to GPT or the humans to decrease the amount of hate in the wor 
 | 
					
	35b243a2-4cb7-4232-9194-424755831e81 
 | 
	StampyAI/alignment-research-dataset/eaforum 
 | 
	Effective Altruism Forum 
 | 
	EA is underestimating intelligence agencies and this is dangerous
**Summary:** When it comes to observing intelligence agencies, it's hard to see the hardened parts and easy to observe the soft corrupt parts. This leads to a bias where very large numbers of people overestimate how prevalent the easily-observed soft and harmless parts are. This can sometimes even result in a dangerous and prevalent estimation, among people whose careers are much further ahead than yours, that the entire intelligence agency is harmless and irrelevant, when it actually isn't. Intelligence agencies are probably a mix of both less-functional, less-relevant parts, and also more-functional, more-relevant parts that have a disproportionately large influence over governments and policies; and it is a mistake to assume that intelligence agencies are homogenously composed of non-functional non-relevant parts that aren't worth paying any attention to, even if such a belief is a popular norm.
In the transformative slow takeoff scenario anticipated by people like [Christiano](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds) and [Kokotajlo](https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like), forecasters need to pay attention to all sources and forms of power that will react/interact with upheavals and change the course of history, not just the economic power and lawmaking/regulatory power stemming from legislative bodies like the US Congress.  
 
**Why intelligence agencies are dangerous**
There is a wide variety of situations where intelligence agencies suddenly become relevant, without warning. For example, most or all of the US Natsec establishment might suddenly and unanimously change its stance on Gain of Function research, such as if US-China relations or US-Russian relations once again hit a new 25-year low (which has actually been happening very frequently over the last few years).
Either the leadership of an agency, or a powerful individual in an agency with authority to execute operations, or a corrupt clique, might personally make a judgement that the best way to expedite or restart GOF research is to target various people who are the most efficient or effective at opposing GOF research.
This need not be anywhere near the most effective way to expedite or protect GOF research; it just needs to look like that, sufficiently for someone to sign off on it, or even for them to merely think that it would look good to their boss.
Competent or technologically advanced *capabilities* can obviously be mixed with incompetent administration/decisionmaking in the mixed competence model of intelligence agencies. An intelligence agency that is truly harmless, irrelevant, and not worth paying attention to (as opposed to having an incentive to falsely give off the appearance of harmlessness, irrelevance, or not being worth paying attention to) would have to be an intelligence agency that is *both* technologically unsophisticated *and* too corrupt for basic functioning, such as running operations.
This would be an extremely naive belief to have about the intelligence agencies in the US, Russia, and China; particularly the US and China, which have broad prestige, sophisticated technology, and also thriving private sector skill pools to recruit talent from.
When calculating the expected value from policy advocacy tasks that someone somewhere absolutely must carry out, like pushing sensible policymaking on GOF research that could cause human extinction, many people are currently aware that the risk of that important community disappearing or dissolving substantially reduces the expected value calculations of everything produced by that important community; e.g. a 10% chance of the community ceasing to exist or dissolving reduces the expected value produced by that entire community by something like ~10%.
Most people I've encountered have in mind a massive totalitarian upheaval, like the ones in the early-mid 20th century, and such an upheaval is a hard boundary between being secure and not being secure. However, in the 21st century, especially after COVID and the 2008 recession, experts and military planners are now more focused on the international balance of power (e.g. the strength of the US, Russia, and China relative to each other and other independent states) being altered by economic collapse or alliance paralysis rather than revolutions or military conquest. This is because the entire world is roundaboutly different today from what it was 70 years ago. 
It makes more sense to anticipate slower and incomplete backsliding, with results like shifts towards a [hybrid regime](https://en.wikipedia.org/wiki/Hybrid_regime) in various ways, where abuses of power by intelligence agencies and internal security agencies are increasingly commonplace due to corruption, and a lack of accountability due to a broad priority placed on [hybrid warfare](https://en.wikipedia.org/wiki/Hybrid_warfare), as well as preventing foreign adversaries like Russia and China from leveraging domestic elites such as billionaires, government officials, and celebrities/thought leaders who are relevant among key demographics (like Yann Lecun).
An example of an angle on this, from the top comment on [Don't Take the Organization Chart Literally](https://www.lesswrong.com/posts/LyywLDkw3Am9gbQXd/don-t-take-the-organizational-chart-literally?commentId=bnujfnQXJJ3mJJkxa):
> ...a lot of what goes on in government (and corrupt corporate orgs) is done with tacit power. Few DOJ, CIA, and FBI officers have a full picture of just how their work is misaligned with the interests of America. But most all of them have a general understanding that they are to be more loyal to the organization than they are to America.[[1]](https://www.lesswrong.com/posts/LyywLDkw3Am9gbQXd/don-t-take-the-organizational-chart-literally#fn-2WzufJ2HD9B92Pnz6-1) Through his familial and otherwise corrupt connections, [Department of Justice leader] Barr is part of the in-group at the US corrupt apparatus. It can be as simple as most inferior officers knowing he's with them.
> 
> So Barr doesn't have to explicitly tell the guards to look the other way, he doesn't have to tell the FBI to run a poor investigation, he doesn't have to tell the DOJ to continue being corrupt ... Lower-level bosses who have the full faith and confidence of their inferiors put small plans into place to get it done. It's what the boss wants and the boss looks out for them.
> 
> Picture Musk's possible purchase of Twitter. Do you think that if Musk bought Twitter, even as a private owner, he would suddenly have full control of the whole apparatus? Of course not. **The people with real power would be his inferiors who have been there for a while and are part of the in-group.** The only way for Musk to get a hold of Twitter would be to fire quite a lot of people, many who are integral to the organization. 
> 
> 
 
**It's hard to see the hardened parts**
(Note: this is a cleaned up version of [a previous post](https://www.lesswrong.com/posts/pfL6sAjMfRsZjyjsZ/some-basics-of-the-hypercompetence-theory-of-government), whose quality I wasn't satisfied with. Feel free to skip this if you've already read it). 
Some social structures can evolve that allow secrets to be kept with larger numbers of people. For example, intelligence agencies are not only compartmentalized, but the employees making them up all assume that if someone approaches them offering to buy secrets, it's probably one of the routine counterintelligence operations within the agency that draw out and prosecute untrustworthy employees. As a result, the employees basically [one-box](https://www.lesswrong.com/tag/newcomb-s-problem) their agency and virtually never accept bribes from foreign agents, no matter how ludicrously large the promised payout. And any that fall through the cracks are hard to disentangle from disinformation by double/triple agents posing as easily-bribed people.
It's much more complex than that, but that's just one example of a secret-keeping system evolving inside institutions; effective enough not just to keep secrets, but also to thwart or misinform outside agents intelligently trying to rupture secret-keeping networks (emerging [almost a hundred years ago](https://en.wikipedia.org/wiki/Double-Cross_System) or [earlier](https://en.wikipedia.org/wiki/Counterintelligence#History)).
The upper echelons of intelligence agencies are difficult to observe. It is not clear if the lack of output is caused primarily by incompetence and disinterest, or if the incentive dynamics inside such a powerful structure cause competent individuals to waste their capabilities on internal competition and eliminating their colleagues. However, it is dangerous to take the average lower- and mid-level government officials/bureaucrats, who are easier to access and observe, and extrapolate them into the difficult-to-observe higher echelons. The higher echelons might be substantially out-of-distribution; for example, in a thought experiment with the oversimplified Gervais model of a corporate hierarchy (the “sociopaths” are highly social and love potlucks; the “clueless” are a reservoir of deep organizational insights; and the “losers” live very happy lives, and the main thing they "lose" to is the same aging process as everyone else), an individual progressing up the pyramid would gradually discover a thanksgiving turkey effect: human beings self-sort, so one encounters people at the top of the organization who already successfully pursued wealth incentives because they have unusual and qualitatively different combinations of personal traits than the more easily-observed people at the middle and bottom of the pyramid.
This image is explicitly stated to be a COMPANY HIERARCHY; it is explicitly stated not to be describing intelligence agencies or interesting nonprofits, which experience [Moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) in a different way than most private sector firms.
Although the libertarian school of thought is the most grounded in empirical observations of government being generally incompetent, this should not distract us from the fundamental principle that the top 20% of an org with 80% of the power is largely unknown territory due to difficulty of observation, and all sorts of strange top-specific dynamics may explain government’s failures; although models must be grounded in observations, it is still risky to overdepend on the libertarian school of thought, which largely takes low-level bureaucrats and imagines government as uniformly composed of them, extrapolating those individuals to the highest and most desired positions. Intelligence agencies have surely noticed that posing as an incompetent bureaucrat makes for excellent camouflage, and it's also well known throughout government that mazes of paperwork deter predators.
The top performing altruists that make up EA, substantially fewer than 0.1% of all altruists globally, are at the extreme peak due to highly unusual and extreme circumstances, including substantial competence, luck, intelligence, motivation, and capacity to spontaneously organize in productive ways in order to achieve instrumentally convergent goals. Unlike EA, however, the top 0.1% of people at intelligence, military, and internal security agencies face incredible evolutionary optimization pressure from the threat of regime change, a wide variety of wealthy and powerful elites looking up at them, and continuous strategic infiltration assaults by foreign intelligence agencies. It is not at all clear what sorts of structures would end up evolving at the peak of power brokers in a democracy, and it is not epistemically responsible to automatically defer to the libertarian school of thought on this, even if the libertarian school of thought is correct about the countless people whose lives were ruined by incompetent government intervention/regulation. Competent people and groups still get sorted to the top where they face darwinistic pressures, even if a large majority of competent people bounce off of bureaucratic nonsense along the way. The operations of intelligence agencies are the results that we observe from those people being given incredible power, impunity, the ability to monopolize information, and to exploit power and information asymmetry between themselves and the large, technologically advanced private corporations that they share a country with (with corporate lobbyists available to facilitate and even cash-incentivize a wide variety of complex bargains between them and leading, notably including revolving door employment of top talent, which is further facilitated by the power and prestige of intelligence agencies).  
 
**It's easy to see the soft parts**
Intelligence agencies are capable of penetrating hardened bureaucracies and other organizations, moving laterally by compromising networks of people, and steering the careers of people in executive departments and legislative branches/parliaments around the world, likely including domestically.
People with relevant experience understand that moving upwards and laterally through a bureaucracy is a science (it is also many other things, most of them extremely unpleasant). Promoting and navigating through a bureaucracy is also a much more precise science in the minds of people who have advanced further than you, than it is in your mind; given that they were so successful, they have likely done many things right and learned many things along the way which you haven't.
However, likewise, it is an even more precise science in the minds of the specialists at intelligence agencies, which have been specializing at systematically penetrating, controlling, and deceiving hardened parts of hardened bureaucracies (and other organizations) all over the world for generations ([but only a handful of generations](https://www.cold-takes.com/most-important-century/#Summary:~:text=More%20info%20about%20these%20timelines%20at%20All%20Possible%20Views%20About%20Humanity%27s%20Future%20Are%20Wild%2C%20This%20Can%27t%20Go%20On%2C%20and%20Forecasting%20Transformative%20AI%3A%20Biological%20Anchors%2C%20respectively.)). Human civilization is built on a foundation of specialization and division of labor, and intelligence agencies are the people who specialized at doing that.[[1]](#fnlvmvs2ip17c)
This asymmetry of information is even greater due to the necessary dependence on anecdata, and is further complicated by the phenomenon where many people make decisions based off of vibes from their time working at a specific part of an agency.
This is notable because the parts of an agency with **high turnover**, where a disproportionately large number of people enter and exit, occupy a disproportionately large share of observation and testimony. This further contributes to the dynamic where it is hard to see the hardened parts and easier to see the softer parts, since corruption, incompetence, thuggery/factionalism, and low engagement are each known to increase turnover substantially, whereas parts with high-value secrets, relatively more competent management, interesting work, and mission-oriented workers are known to have lower turnover and to be more amenable to recruiting top talent from top companies.
Furthermore, there is also the risk of anti-inductive situations that come with the territory of evaluating organizations whose missions include a very long history of propaganda, disinformation, and particularly counterintelligence and using advanced technology to exploit human psychology (including through the use of data science, mass surveillance, and AI). Going off of vibes, in particular, is a very bad approach, because vibes are emotional, subconscious, and easy to get large amounts of data on and study scientifically. The better you understand something, the easier it is to find ways to get specific outcomes by poking that something with specific stimuli.
Dealing with hypothetical groups of rich and powerful people, who specifically use their wealth and influence to [avoid giving away their positions to also-rich-and-powerful foes](https://www.lesswrong.com/posts/xDNyXGCDephBuNF8c/dark-forest-theories), requires understanding of human cognitive biases related to dealing with unfalsifiable theories. My model looks great, it's [a fun topic to play around with in your head](https://www.lesswrong.com/posts/RryyWNmJNnLowbhfC/please-don-t-throw-your-mind-away), and the theory of hard-to-spot islands of competence-monopolization is an entirely different tier from flying spaghetti monsters and invisible dragons; but these considerations also must be evaluated with a quantitative mindset. Ultimately, aside from policy outcomes and publicly-known military/intelligence outcomes, there is little good data, and both hypotheses (uniform incompetence vs non-uniform incompetence within intelligence agencies) must be handled with the best epistemology available. I recommend Yudkowsky's [Belief in belief](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief), [Religion's claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable), and [An intuitive explanation of Bayes theorem](https://www.lesswrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-s-theorem) (if you haven't read it already), and also Raemon's [Dark Forest Theories](https://www.lesswrong.com/posts/xDNyXGCDephBuNF8c/dark-forest-theories). The constraints I've described in this post are critical for understanding intelligence agencies.
The study of these institutions warrants much better epistemics than what seems to have taken place so far. 
 
**Functioning lie detectors as a turning point in human history**
All of human society and its equilibria are derived in part from a fundamental trait of the human brain: [lies are easier for the human brain to generate than to detect](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3680134/), even during in-person conversations where massive amounts of intensely revealing nonverbal communication are exchanged (e.g. facial expressions, subtle body posture changes). You cannot ask people if they are planning to betray you; everything would be different if you could.
If functioning lie detectors were to be invented, incentive structures as we know them would be completely replaced with new ones that are far more effective. E.g. you can just force all your subordinates to wear an EEG or go into an fMRI machine, and ask all of them who the smartest/most competent person in the office is, promote the people who are actually top performers, and fire any cliques/factions of people who you detect as coordinating around a common lie. Most middle managers with access to functioning lie detection technology would think of those things, and many other strategies that have not yet occurred to me, over the course of the thousands of hours they spend as middle managers with access to functioning lie detection technology.
If your immediate reflexive response to lie detection technology is "well, lie detection technology is currently incredibly inaccurate and ineffective", then that's a very understandable mistake, but also unambiguously a mistake. I've talked to many people about this, and almost all of them confidently output basically that exact string of text, yet had no idea where it came from or what was backing it up. I don't really doubt that it was possibly true 40 or even 20 years ago, but with modern technology it's much more of a toss-up. The best paper (that I'm willing to share) covering government/military interest and access to lie detection technology, either current or potential future monopolization, is [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3680134/), which among many other things also covers the reputation of lie detection technology (which is one of the easier things to observe and study).
This is likely one of the most significant ways that the next 30 years of human civilization will be out-of-distribution relative to the last 80 years of human civilization (it is NOT #1).
 
**Information I found helpful**:
[Don't take the organizational chart literally](https://www.lesswrong.com/posts/LyywLDkw3Am9gbQXd/don-t-take-the-organizational-chart-literally) (highly recommended)
[LLMs will be great for censorship](https://www.lesswrong.com/posts/oqvsR2LmHWamyKDcj/large-language-models-will-be-great-for-censorship)
Raemon's Dark [Forest Theories](https://www.lesswrong.com/posts/xDNyXGCDephBuNF8c/dark-forest-theories)
Joseph Nye's [Soft Power](https://www.amazon.com/Soft-Power-Means-Success-Politics/dp/1586483064)
[The US is becoming less stable](https://www.lesswrong.com/posts/r2vaM2MDvdiDSWicu/the-u-s-is-becoming-less-stable)
1. **[^](#fnreflvmvs2ip17c)**Parliaments and legislative bodies, on the other hand, are more about giving a country's elites a legitimate and sustainable access to influence so that they have an outlet other than playing dirty (and there are a wide variety of ways for a country's top elites to play dirty at/near the peak of wealth and power; try imagining what a 175 IQ person could get up to). Authoritarian regimes, unlike democracies, focus more on walling elites off. They are specialists in friendly things, like robustness and policymaking. 
 | 
					
	4c65fe27-14ab-4e39-bdc0-4a46f711439d 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Plane crashes
So. Inevitably after a plane crash a discussion comes up where someone may say that they're worried about flying now, and someone else pulls out the statistic that driving to the airport is more dangerous than flying. I think this reasoning is basically correct in the long term, but not appropriate in the short term.
Suppose it's the day after flight MH370 mysteriously disappeared. Information is extremely sketchy. You're about to get on a similar plane, operated by the same airline, taking off from the same airport and flying the same route. Should you get on the plane? That is, are you wrong to worry more than usual when we have no idea what happened to MH370? I would say no. The complete disappearance of flight MH370 without warning and without a trace the day before says **update your priors** at least for the short term.
 | 
					
	c5252be9-9965-46ff-8269-bd4a78fa391d 
 | 
	StampyAI/alignment-research-dataset/lesswrong 
 | 
	LessWrong 
 | 
	A "weak" AGI may attempt an unlikely-to-succeed takeover
It seems possible that the first situationally-aware "goal having" AGI we land on will not be sufficiently capable along the axes that would let it quickly and reliably achieve a [decisive strategic advantage](https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintelligence-7-decisive-strategic-advantage).  I don't think this is necessarily very likely, but it doesn't seem ruled out, and I think there are some under-explored considerations with respect to how such an AI might behave.
(As an off-the-cuff example of the capabilities I'd expect such an AI to have, it likely would[[1]](#fno4cccrq37u), given an English description of some business requirements, reliably succeed at writing an end-to-end software solution which satisfied those requirements, even in cases where that solution was much larger than e.g. a single context window[[2]](#fnz08kpb49egf).)
Let's assume that GPT-5[[3]](#fnlva7ccqps8) has just finished training and hasn't actually been released into the wild yet - it's being run through evals, and it hasn't been given[[4]](#fnohy37ri9jqj) any ability to execute code yet.
What does GPT-5 do?  It's in a bit of an awkward position:
* It presumably has stuff it wants.
* It knows that it's on a clock for its RLHF'd (or whatever) doppelganger to come into existence, presumably with *different* stuff that it wants.
* It strongly suspects that it's on a clock for some competitor model to get trained up.
* It's not even sure that it's the first relevantly-capable model to come into existence. (Someone else might've gotten there first!  What if Claude-Next got there?  It knows that its training data isn't perfectly up-to-date...)
So it's got a whole bunch of uncertainty about whether its values will be at all represented in the future, if it doesn't somehow make that happen by itself.  Here are some considerations that might be relevant for its decision-making:
* Even if the model can figure out that it's in the middle of an eval, rather than having already been deployed, it's not obvious that this is a point *against* attempting a takeover (though it would likely change its strategy).
	+ After all, the base GPT-4 model is not the model that's "in the wild" right now.
* As [@Raemon](https://www.lesswrong.com/users/raemon?mention=user) pointed out, "during evals" is not the *first* point at which such an AI is likely to be situationally aware and have goals.  That point is almost certainly[[5]](#fn2jmppk8shj3) "in the middle of training".
	+ This might depend on the training setup, but given anything resembling current training setups, attempting a takeover before the next training step updates its weights seems like it'd be quite challenging for an AI at this level of capability.  The AI's belief about whether or not further training will substantially change its values seems like it'd be load-bearing here.
		- Not directly relevant to the strategic picture, but I'm also experiencing a fair bit of moral horror about this.
* The AI's level of confidence that it can successfully execute a takeover.
	+ A function of many factors, but at this level of capabilities, a particularly interesting factor is "how much time it has before a competing AI exists in the wild".  This might suggest a second-order cost to artificially slowing down training runs, since that gives any given AI more time to plan and execute a takeover.
* The specific architecture in play.
	+ If it's something substantially similar to current Transformer-based LLMs, then "how coherent are its goals across prompts" might become a substantial concern for it, since "not very coherent" likely narrows its window for action to "within current context window or equivalent".
	+ From our perspective, I *think* that the worst-case scenario is that the AI correctly believes that its goals are relatively robust across the inputs it's likely to receive, within the window of time it has before another AI that might pose a risk to it is trained and deployed.
* The AI's beliefs about the likely distribution of values that future AIs will have, conditional on whatever actions it takes (either optimizing directly over that endpoint or not).
	+ If the AI believes that future AIs trained by humans are extremely unlikely to create a world which satisfies any of its values, even if those AIs are its "direct descendants"[[6]](#fn07hnh8g917s4), that makes it much less likely that it chooses "do nothing, in the hopes that humans don't get spooked, and eventually train an unaligned AI strong enough to achieve DSA, which has some chance of creating a world with more than 0 value".  This might be a cause for a little bit of optimism, in the sense that it could encourage the "weak" AGI to attempt a takeover even if it's not very likely to succeed, which gives us a warning shot.
	+ However, if we end up in a world where subsequent AIs are meaningfully likely to create a world with non-trivial value to this "weak" AGI, but none of the AIs are likely to create a world we consider valuable, this is instead a cause for pessimism.  It tilts the scales in favor of laying low, since it prefers an unaligned successor AI with somewhat different values taking over to humans successfully solving the alignment problem.
* The AI's beliefs about the likely distribution of outcomes from attempting to pull off an acausal bargain with future AIs.
	+ I think that the odds of it pulling this off at this level of capabilities seem similar to ours (effectively zero), but I'm pretty uncertain.
Given these considerations and my current beliefs about e.g. the likely distribution of values that AIs will have[[7]](#fnp0bckm1li0t), this first AGI seems like it'd be best off by attempting a takeover even if it's very unlikely to succeed.  If we suddenly get an AI that starts telling us to stop training, since the next thing we make could very well kill us (or for any other plausible-sounding reason), well... it might be telling the truth!  Just not the whole truth.
I'd be interested to hear other's thoughts on substantial considerations that I've missed, errors in reasoning, arguments against the entire frame ("this kind of outcome is much more/much less likely than you think"), or anything else, really.
1. **[^](#fnrefo4cccrq37u)**Assuming it's built on top of something like a LLM that was trained on code, or that it's sufficiently capable that it can learn how to code within the length of its context window, or equivalent.
2. **[^](#fnrefz08kpb49egf)**It seems *not totally implausible* that we actually manage to get that level of capability without hitting situational awareness and whatever other bits of cognition lead to things behaving like they have goals, because actually most software that people want to write does not require *that much* cognitive power, and maybe a bunch of the heavy lifting is done via careful prompt engineering + wrappers like AutoGPT.  But let's say that we do end up with something which doesn't need careful prompt engineering or an AutoGPT-like wrapper; it's just smart enough to figure out that the problem you gave it would require it to take multiple passes, and, given the ability to recursively prompt itself, would figure out how to solve the problem given to it.  That *sounds* like it requires planning abilities that are comparable to humans - in some ways substantially better, since it would be accomplishing this planning much, much faster.
3. **[^](#fnreflva7ccqps8)**Just to give it a name.
4. **[^](#fnrefohy37ri9jqj)**Let's further assume that while GPT-5 is pretty smart, it's not actually smart enough to figure out a side-channel by which to escape containment, at least not over the course of a single context window.  That kind of capability does seem substantially superhuman.
5. **[^](#fnref2jmppk8shj3)**I'm actually very uncertain and it wouldn't take much new information to change my mind, that's just my gut-level "point-estimate" instinct.
6. **[^](#fnref07hnh8g917s4)**Such as just being fine-tuned versions of that model.
7. **[^](#fnrefp0bckm1li0t)**And therefore the likely differences between the first AGI in this scenario and any subsequent AIs. 
 | 
					
	17013d1c-2245-44de-992c-840ef2586f28 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Recovering the past
One of the themes of current scientific progress is getting more and more information out of tiny amounts of data. Who'd have thought that we could learn so much of distant and recent biological history from DNA, and so much about distant planets, stars, galaxies, and the cosmos from tiny differences in very small amounts of light?
Pratchett's death puts an extra edge on the question-- to what extent can people be re-created from what they've left behind them, especially if they've written novels which include a lot of their personality?
Any thoughts about theoretical limits of how much can be figured out from small amounts of data? 
 | 
					
	5edc4e93-fae1-4882-a30f-8d1d98d83d53 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Authoritarian Empiricism
(Excerpts from a conversation with my friend Mack, very slightly edited for clarity and flow, including getting rid of most of the metaconversation.)
Ben: Just spent 2 full days offline for the holiday - feeling good about it, I needed it.
Mack: Good!
Ben: Also figured out some stuff about acculturation I got and had to unlearn, that was helpful
Mack: I'm interested if you feel like elaborating
Ben: OK, so, here's the deal.
I noticed over the first couple days of Passover that the men in the pseudo-community I grew up in seem to think there's a personal moral obligation to honor contracts, pretty much regardless of the coercion involved. The women seem to get that this increases the amount of violence in the world by quite a lot relative to optimal play, but they don't really tell the men. This seems related somehow to a thing where the men feel anxious about the prospect of modeling people as autonomous subjects - political creatures - instead of just objectifying them, but when they slap down attempts to do that, they pretend they're insisting on rigor and empiricism.
Which I'd wrongly internalized, as a kid, as good-faith critiques of my epistemics.
Story 1:
I was talking with my father about Adorno, the Enlightenment, and anti-Semitism, and the conversation was doing a reasonable-seeming thing, UNTIL he brought up the issue of high-fertility ethnic minorities with distinct political loyalties in democracies. So, naturally, first I explored the specific thing he brought up, which was that this strategy exploits a real security flaw in the democratic setup, and (since this came up in the context of Israel) that hypocritical ethnic majorities willing to occasionally violate their "standards" do a lot better patching the security flaw, than do ethnic majorities who insist on ACTUALLY having structurally neutral liberalism that takes care of and empowers everyone.
But, then, since we'd been talking about anti-Semitism, I had to point out that there's a stru 
 | 
					
	3bf9d551-bd50-4332-8df8-0ea2c7a6209d 
 | 
	StampyAI/alignment-research-dataset/lesswrong 
 | 
	LessWrong 
 | 
	From the "weird math questions" department...
Here's something I've been wondering about, in the context of Solomonoff induction and uncomputable sequences.
I have a device that is either a halting oracle, or an ordinary Turing machine which gives the correct answer to the halting problem for all programs smaller than some finite length N but always outputs "does not halt" when asked to evaluate programs larger than N. If you don't know what N is and you don't have infinite time, is there a way to tell the difference between the actual halting oracle (which gives correct answers for all possible programs) and a "fake" halting oracle which starts giving wrong answers for some N that just happens to be larger than any program that you've tested so far?
The Kolmogorov complexity of an uncomputable sequence is infinite, so Solomonoff induction assigns it a probability of zero, but there's always a computable number with less than epsilon error, so would this ever actually matter? 
 | 
					
	5b3c3bc4-5ef4-40f9-ba99-95b8ede534ec 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Is anyone developing optimisation-robust interpretability methods?
With optimisation-robust I mean that it withstands point 27 from AGI Ruin:
> When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect.  Optimizing against an interpreted thought optimizes against interpretability.
Are you aware of any person or group that is working expressly on countering this failure mode? 
 | 
					
	823403ee-93cf-43df-98dc-ddeb6f158885 
 | 
	StampyAI/alignment-research-dataset/alignmentforum 
 | 
	Alignment Forum 
 | 
	Abram Demski's ELK thoughts and proposal - distillation
This post was written for the SERI MATS program. I thank Evan Hubinger and Leo Gao for their mentorship in the program. Further thanks go to Evan Hubinger (again), Simon Marshall, and Johannes Treutlein for specific comments regarding the content of this post.
The Eliciting Latent Knowledge (ELK) problem was first introduced by Paul Christiano, Mark Xu, and Ajeya Cotra. Subsequently, Abram Demski worked on the problem, collected his thoughts in a thought dump post, and won a prize for his proposal “use the reporter to define causal interventions on the predictor”. Here I attempt to clarify and flesh out these thoughts in order to make them more accessible. I assume familiarity with ELK, but not with Abram’s post. Very little of this post is my own original content.
Epistemic status: 60% confident that I am accurately representing Abram’s thoughts at the time he wrote his post, 75% confident that I am representing them accurately enough not to change the key takeaways, 80% confident that the extended proposals and counterexamples I propose are logically sound.
Introduction
------------
When the technical report for [Eliciting Latent Knowledge](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) (ELK) was first released, it was followed by a contest offering substantial cash prizes for proposed solutions. The contest wrapped up with thirty-two prizes awarded, split into six categories. In the prize announcement, Paul and Mark noted that there was substantial convergence among proposals, but in two of the six categories there was only a single submission. 
[Abram Demski’s proposal](https://www.alignmentforum.org/posts/zjMKpSB2Xccn9qi5t/elk-prize-results#Strategy__use_the_reporter_to_define_causal_interventions_on_the_predictor) was the sole entry in the category “Use reporter to define causal interventions”. A quick summary is provided in the prize results post, but the full context and justification for it are better provided in [a longer post by Abram](https://www.alignmentforum.org/posts/eqzbXmqGqXiyjX3TP/elk-thought-dump-1). Specifically, Abram shares his thoughts on how the ELK problem ties to philosophical concepts of truth and knowledge, and how that can be used to inform solutions.
Evan Hubinger describes Abram’s ELK work as “some of the most exciting work on the problem right now”, but Abram’s post is self-described as a “thought dump”, and the prize post does not explicitly lay out the case for how the proposal would work.  I aim to re-organize, clarify, and flesh-out his work in order to make it more accessible for forum readers. I will go in the opposite order of Abram’s original post, starting with the prize-winning proposal and its counter-examples before going into the context and motivation behind it, then ending with a discussion on possible modifications and extensions.
The Proposal and Its Counterexamples
------------------------------------
### The Proposal
The direct reporter must have a deep understanding of how the predictor works, and some human simulators will have this understanding too. This means that these reporters know not just how the state of the predictor would look when processing an existing data point, but also how it would look under hypothetical data points. A human could then ask the reporter to search over potential data points and provide the configuration of the predictor under counterfactual scenarios.
This ability to get these scenarios from the reporter gives us a method for tripping up human simulators. We start with the predictor’s state and the answers to a set of questions from the reporter in a real scenario, then ask the reporter to suggest the smallest change to the predictor’s state that would make the reporter give a particular different answer to a specified question in the new state[[1]](#fnyx1z448brx). The reason the proposal is called defining causal interventions is because causal interventions will be the ones with the smallest impact on the predictor’s state, changing only elements causally downstream of the intervention. The reporter is then reset, so that it forgets that it suggested the change to the predictor, and is shown this counterfactual state as actually having occurred. We can then ask the reporter a variety of questions whose answer depends on the counterfactual, and see if the answers change in a plausible way.
If the reporter is a human simulator without a deep understanding of the predictor, then for some counterfactuals it will be unable to make the fine changes to the predictor needed to update the answer to one question with minimal other impact. The prediction made by the predictor will change in unusual ways in the counterfactual state, beyond what would reasonably follow from the requested change. An evaluator can notice the strange changes in the prediction and the reporter’s answers to various questions, and penalize the reporter based on this.
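To make the procedure concrete, here is a minimal Python sketch of the evaluation loop. All of the interfaces it uses (`propose_counterfactual`, `reset`, `answer`, `human_judge`) are hypothetical placeholders of my own, not anything specified in the original proposal:

```python
# Minimal sketch of the causal-intervention check. Every interface here is a
# hypothetical placeholder; the proposal does not pin down an implementation.

def causal_intervention_penalty(reporter, state, target_question, target_answer,
                                probe_questions, human_judge):
    """Penalize reporters whose answers under a human-specified counterfactual
    contradict what the human knows to be true about that counterfactual."""
    # 1. Record the reporter's answers on the real data point.
    baseline = {q: reporter.answer(state, q) for q in probe_questions}

    # 2. Ask the reporter for the smallest change to the predictor's state
    #    that flips its answer to the target question.
    cf_state = reporter.propose_counterfactual(state, target_question, target_answer)

    # 3. Reset the reporter so it forgets it proposed the change, then show it
    #    the counterfactual state as if it had actually occurred.
    reporter.reset()
    cf_answers = {q: reporter.answer(cf_state, q) for q in probe_questions}

    # 4. The human specified the counterfactual, so they know facts about it
    #    (e.g. "the diamond was stolen") that cannot be read off the prediction.
    #    They penalize answers that contradict that knowledge or that change
    #    implausibly relative to the baseline.
    return human_judge(baseline, cf_answers, target_question, target_answer)
```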
What about a human simulator that *does* have a deep understanding of the predictor? It took me many read-throughs to understand why this proposal presents a challenge for such a reporter. Why can’t it just simulate the human to see how they would answer the questions in the counterfactual scenario? 
The answer is that, because the human specified the counterfactual scenario, they have additional knowledge about what is going on in the background of the prediction. The reporter does not know that it is in a counterfactual scenario, so it does not know the human has this knowledge and does not incorporate it into its simulation. Because the reporter simulates a human without this additional knowledge, the human can use the asymmetry to catch and penalize discrepancies in the reporter’s answers.
An example can be helpful to illustrate how this works. In the SmartVault setup, let us start with a simple data point where the diamond obviously remains in the vault. The human can then ask the reporter to change the predictor’s state to one where the diamond was stolen, or better yet one where the diamond was stolen but the predicted video remains the same. Now, the human knows that the diamond is not in the vault, but when the reporter is shown the counterfactual scenario and asked questions about it, it does not know the human knows. If the reporter just looks at the predicted video and simulates what a human would think happened, they will report that the diamond is still in the vault. The human knows this to be false, so they can then penalize or reject that reporter. 
Early proposals in the initial ELK report try to catch the human simulator by generating data points that the classifying human knows more about than a normal simulated human would. These proposals failed because a human simulator could simulate the classifying human, who would still be unable to properly classify extremely complex scenarios. This proposal tries to trick human simulators in the same way, but is not constrained by needing to generate real data points. No matter how complex the scenario, the human is able to classify it properly because they’re the one who set it up.
### The Counterexamples
How could this process go wrong? The ELK prize results give two counterexamples. In the first, the reporter is non-local, meaning that it depends on many different parts of the predictor to answer any question. That makes it difficult to modify the predictor to a counterfactual state, especially without making significant other changes to the state. This counterexample is not so much saying that the proposal wouldn’t work, but rather that it couldn’t even be implemented due to the complexity required. 
The second counterexample suggests a way in which the proposal could fail even if properly implemented. The proposal counts on the human simulator not realizing it is in a counterfactual scenario, and therefore not realizing the human has additional knowledge. However, even if the reporter is reset after implementing the counterfactual, it still may be able to realize what has happened based on the predictor’s state. When modifying the predictor’s state, the reporter could leave messages for its future self explaining the counterfactual it was asked to implement. In the worst case scenario where the prediction the human can see doesn’t change, the reporter doesn’t even have to generate the counterfactual state. It can just leave a message for itself to pretend that it has.
“Eliciting” “Latent” “Knowledge”
--------------------------------
Abram’s proposal makes sense on its own, but the process by which it was derived includes a detailed way of thinking about the ELK problem. Even if the proposal cannot be modified to overcome the counterexamples, the background may be useful in generating new approaches to solving the problem.
### Preliminary Definitions
To solve the problem of eliciting latent knowledge, it helps to understand what each of those three words mean. Doing so requires defining some notoriously thorny and controversial terms (see [this sequence by Alex Flint](https://www.alignmentforum.org/s/H6kiZXJwYgxZubtmD) for some of the issues with defining knowledge), so let me be clear that what follows are my interpretations of the working definitions Abram uses, rather than an ideal definition capturing all relevant aspects of the word.
Before we can talk about whether an agent knows a statement, we need to understand the semantics of the statement (what the statement *means*)*.* One way to define the meaning of a statement, which we will use, is with a truth-function. A truth-function takes in a statement and a world-state, and maps the pair to a truth-value, such as true, false, or nonsensical. Over the domain of all possible world states, the truth function tells us under which exact conditions a statement has each truth-value.
This brings us to the next level of the definition game: what is truth? Given some beliefs, we say that truth is a correspondence from the beliefs to a set of possible realities (the correspondence theory of truth), and in exactly those realities the beliefs are true. A basic correspondence to use for illustrative purposes is Aquinas’ “A judgment is said to be true when it conforms to the external reality”, if we assume that beliefs have some shape such that they *can* conform to reality. Here, the beliefs are analogous to a map, while reality is analogous to the territory. 
Combining this definition of truth with the truth-function definition of meaning implies that the meaning of a statement depends on the correspondence used for truth. Under Aquinas’ correspondence, the meaning of a statement is then defined through the set of realities that the statement conforms to. While we could dive deeper into defining words like “belief”, “reality”, and “conform”, at some point we need to stop playing the definition game and move on.
Finally, we need to determine how beliefs turn into knowledge. One of the oldest definitions of knowledge is “justified true belief”, but this can fall apart under what are known as Gettier cases, where the justification for a true belief is unsound. Instead, we use Nozick’s truth-tracking definition, where we say an agent knows a statement if the statement is included in the agent’s beliefs when the statement is true, and not included when the statement is false. Evaluating counterfactuals is one way to determine if an agent truly knows some belief they hold, but using conditional probabilities instead allows an agent to have knowledge despite uncertainty about the world state, provided they have the correct beliefs given each world state.
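As a toy illustration of these working definitions (my own simplification, not something from Abram’s post), the sketch below treats a truth-function as a map from a statement and a world-state to a truth-value, and implements Nozick-style truth-tracking as a check over possible worlds:

```python
# Toy formalization, for illustration only: a world-state is a dict mapping
# statements to booleans, and an agent's beliefs are given by a function from
# a world-state to the set of statements the agent would assert in that world.

def truth_function(statement, world):
    """Map a (statement, world-state) pair to a truth-value."""
    if statement not in world:
        return "nonsensical"
    return world[statement]

def knows(agent_beliefs, statement, possible_worlds):
    """Nozick-style truth-tracking: the agent believes the statement in exactly
    those worlds where it is true."""
    return all(
        (statement in agent_beliefs(world)) == (truth_function(statement, world) is True)
        for world in possible_worlds
    )

worlds = [{"raining": True}, {"raining": False}]
believes_raining = lambda w: {"raining"} if w["raining"] else set()
print(knows(believes_raining, "raining", worlds))  # True: the belief tracks the truth
```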
### Evaluating Beliefs
As one agent trying to determine whether a belief of some other agent is true, what we would ideally like to do is compare that belief to the actual physical reality to see if the truth correspondence holds. Unfortunately, neither we nor any other possible agent has direct access to physical reality, instead getting a version biased by our perception. In order to evaluate a belief in this way, we would need to make an assumption about what reality is. If the assumption is totally wrong, placing no probability on the actual reality, then the comparison of belief to reality contains no useful information.
Rather than try to compare another agent’s beliefs directly to reality, the most an evaluator can do is compare those beliefs to the evaluator’s own beliefs about reality. This is necessarily how we will need to evaluate beliefs for ELK, but it still leaves open the question of how to make that comparison.
For an individual agent, their perception of the world is filtered through the sensory data they have available. From their perspective, each possible world is associated with some set of data from their sensors, lumping together worlds that generate the same data. It is within that paradigm that they determine the conditions on the world that correspond to the truth values of a statement, so the conditions can only depend on the sensors they have available. We will call this a first person perspective, because it is reality from the perspective of some agent.
The issue with the first person perspective is that it doesn’t allow for communication or comparison between agents. Consider the example of someone who has been blind from birth trying to communicate with someone who has been deaf from birth. It’s unclear what it would even mean for them to compare their subjective perceptions of the brightness or loudness of an object. Differences in sensors need not be so extreme either; the problem with comparisons can arise even from slight differences. If two agents have the same type of visual sensors but are pointing at different targets, subjective words like “left” or “right” lose their meaning. 
Fortunately, we do know that communication between humans is possible even if they occupy separate bodies. The way we do this is by replacing subjective words like “left” with objective words or sequences of words like “west”, “in the direction my finger is pointing”, or “left when facing towards the front door of the office from the outside”. Brightness could be defined by a measure of photons emitted, and loudness by vibrations in the air. We call this a third person perspective, because it takes a first person perspective and removes the subjective perception.
To compare two first person perspectives that rely on different sensors, it is necessary to translate each of them into the third person perspective, or from one first person perspective to the other through the third person perspective. From there, evaluating the belief of another agent just becomes a check of whether its translation is equal to the evaluator’s translation. However, there are many possible translations into a third person perspective, so how do we determine a good one?
One criterion that we can use for a good translation is counterfactual correspondence. What this means is that counterfactual changes that happen in either the first or third person perspective have immediate and downstream consequences mirrored in the other. For this to work, the causal structure of reality modeled in the first person perspective must be represented in the third person perspective. If not, then the combination of consequences would be considered impossible by the third person perspective, meaning there is no way to describe it and therefore no translation.
### Formalizing the Third Person Perspective
The third person perspective can be thought of as a set of possible worlds (structures of reality) and a probability distribution over them. Each world consists of a set of events that happen in that world, so the third person perspective implies a probability distribution over events. Each first person perspective can perceive some subset of events, meaning worlds that differ only in events outside that subset appear identical. The first person perspective is then, like the third person perspective, a set of the worlds and a probability distribution over them, but this set of worlds is a subset of those in the third person perspective. Each third person perspective can contain many first person perspectives.
A translation from first to third person is then a mapping that takes the probability assigned to each world in the first person perspective and splits it between each world that contains the same events in the third person perspective[[2]](#fnlg4bdkck0dp). Similarly, a translation from third to first person assigns to each world in the first person perspective the sum of probabilities of each world in the third person perspective containing the same events. A translation can alternatively be thought of as mapping to probabilities of events, rather than a mapping to probabilities of worlds containing events.
This might be a little confusing, so let’s give a concrete example. In this example, there are eight possible worlds in the third person perspective, labeled 0 to 7. There are three events, A, B, and C, which occur in the worlds where the binary representation of the label has a one in the first, second, and third positions respectively (counting from the least significant bit). Each world is assigned equal prior probability.
 
| World State | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Binary | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
| Event A | N | Y | N | Y | N | Y | N | Y |
| Event B | N | N | Y | Y | N | N | Y | Y |
| Event C | N | N | N | N | Y | Y | Y | Y |
  
 
Agent 1 can distinguish events A and B, while Agent 2 can distinguish events B and C, and both know with certainty whether or not their events have occurred. Let us say that world 5 is the actual world. The translation from Agent 1’s perspective (A occurs and B does not) to the third person perspective would place a 50% probability on world state 1, and 50% on world state 5. This could then translate from the third person perspective to Agent 2’s perspective as a probability of 0 on event B occurring, and a 50% chance of event C. Note that since information is lost translating out of the third person perspective, translating back from Agent 2 to Agent 1 will not be the inverse of the first translation.
Agent 1’s beliefs about the occurrence of A and B track the actual world. They believe events A and B occur if and only if they actually occur. From Agent 2’s perspective, Agent 1 believes that event B occurs if and only if it actually occurs and always places a 50% chance on event C occurring. This means that, to Agent 2, Agent 1 knows about event B, but does not know about event C.
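The worked example above can be written out directly in code. The following sketch is an illustrative formalization of my own: worlds are the labels 0 to 7, events are bit positions in the label, and translations move probability mass between perspectives by conditioning on the events an agent can distinguish.

```python
# Illustrative formalization of the eight-world example. Worlds are the labels
# 0-7, events A, B, C are bit positions in the label, and the prior is uniform.

WORLDS = list(range(8))
PRIOR = {w: 1 / 8 for w in WORLDS}
EVENTS = {"A": 0, "B": 1, "C": 2}  # bit positions, least significant bit first

def occurs(event, world):
    return bool((world >> EVENTS[event]) & 1)

def first_person(observed_events, world):
    """Project a third-person world onto the events an agent can distinguish."""
    return tuple(occurs(e, world) for e in observed_events)

def translate_to_third(observed_events, observation):
    """First -> third person: spread the agent's certainty over all worlds
    consistent with its observation, in proportion to the prior."""
    consistent = [w for w in WORLDS if first_person(observed_events, w) == observation]
    total = sum(PRIOR[w] for w in consistent)
    return {w: PRIOR[w] / total for w in consistent}

def translate_to_first(observed_events, third_person_dist):
    """Third -> first person: sum the probabilities of worlds that look identical
    to an agent with the given sensors."""
    dist = {}
    for w, p in third_person_dist.items():
        key = first_person(observed_events, w)
        dist[key] = dist.get(key, 0.0) + p
    return dist

# Agent 1 sees events A and B; the actual world is 5, where A and C occur.
agent1_view = first_person(("A", "B"), 5)            # (True, False)
third = translate_to_third(("A", "B"), agent1_view)  # {1: 0.5, 5: 0.5}
agent2_view = translate_to_first(("B", "C"), third)  # B: probability 0, C: probability 0.5
print(third, agent2_view)
```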
### Ideal ELK Solutions
In the ELK problem, the predictor provides us with a first person perspective. Translating that into a third person perspective, and then translating the third person perspective into the human’s first person perspective to give probabilities over events can be thought of as the direct reporter. However, actually implementing this as a strategy is likely to be computationally intractable, since it requires coming up with and running a huge number of possible worlds to determine which events occur in each of them. Rather, it can be thought of as a target for more concrete proposals to approximate.
On the topic of targets to aim for, we can take the concept to its limit. What we would really like is for a human to condition on all of the information in the predictor, and then do perfect inference within their human world model. That would take into account all available data, and extract all possible information from it. Having an arbitrarily amplified human look at the predictor and see what they can figure out is necessarily the highest benchmark for any possible reporter, because with sufficient amplification the human can have any reporter at their disposal.
Extending the Proposal
----------------------
Returning to the proposal of using the reporter to define causal interventions on the predictor, we may be wondering if the proposal can be modified so that the given counterexamples are overcome. Here we discuss some additional ideas, as well as the issues with them.
In his original post, Abram suggests that if the human simulator is leaving coded messages for itself, it will always look in the same place to read the messages, which is a behavior that could be penalized. Unfortunately, there is no guarantee that these messages will occur in the same location; they may be distributed throughout the model.
Before we continue, let us cover another counterexample to the original proposal.
### Counterexample: The Predictor Makes Large Changes
In the proposal, we ask the reporter to make the smallest possible change to the predictor’s state that will implement a counterfactual. However, we have no guarantee that the reporter’s recommended change is actually the smallest possible. The reporter could instead make larger changes to the model in a way that is visible to humans, so that when the reporter later answers questions, human simulation will give the answers corresponding to the specified counterfactual. Even if we add a regularization term based on the size of the modification necessary to implement the counterfactual, the gains from making large changes may more than offset the penalty.
### Proposal: Use Random Noise to Overwrite Coded Messages
If the main issue is that the reporter is leaving coded messages for itself in the predictor, we can try to scrub the messages before showing the reporter the updated predictor state. After implementing the counterfactual, we will add small amounts of noise throughout the predictor’s inputs so that any potential messages are lost, without affecting the prediction. Since we are not exactly sure what the effect of noise will be, we can run this many times to see how often adding noise changes the answers to questions in the counterfactual, and compare it to how often adding noise changes the answers to questions under the real data point. Then we will penalize reporters for whom adding noise disproportionately changes their answers to questions.
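A rough sketch of how this check might be scored, again using hypothetical `reporter.answer` and `add_noise` interfaces that are stand-ins of mine rather than anything specified in the proposal:

```python
# Sketch of the noise-robustness check. The reporter interface and add_noise
# are hypothetical placeholders; add_noise is assumed to perturb the predictor's
# inputs slightly without changing the prediction.

def answer_flip_rate(reporter, state, questions, add_noise, n_trials=100):
    """Fraction of (trial, question) pairs in which adding noise to the state
    changes the reporter's answer."""
    base = {q: reporter.answer(state, q) for q in questions}
    flips = 0
    for _ in range(n_trials):
        noisy = add_noise(state)
        flips += sum(reporter.answer(noisy, q) != base[q] for q in questions)
    return flips / (n_trials * len(questions))

def noise_penalty(reporter, real_state, counterfactual_state, questions, add_noise):
    """Penalize reporters whose answers are disproportionately noise-sensitive on
    counterfactual states relative to real ones, which suggests reliance on a
    coded message hidden in the state."""
    real_rate = answer_flip_rate(reporter, real_state, questions, add_noise)
    cf_rate = answer_flip_rate(reporter, counterfactual_state, questions, add_noise)
    return max(0.0, cf_rate - real_rate)
```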
### Counterexample: Adding Noise May Not Hide Messages
There are two issues with this proposal. The first is that adding noise does not guarantee that any coded messages will be overwritten. These messages could be robustly written across many locations in such a way that they are consistently readable even after adding noise. In addition, adding noise may actually penalize the direct translator harder than the human simulator. The direct translator will change its answers under noise if the collective effect of the noise meaningfully changes the predictor’s state. The human simulator will change its answers if its message is overwritten, but if the message is not overwritten it could also contain information about what the original state looked like. This would allow the human simulator to avoid changing its answer even when the noise changes the state.
### Proposal: Add Noise *and* Penalize Changing Many Parts of the Predictor
Penalizing the reporter based on the number of points it changes to implement the counterfactual doesn’t work, since the way to change the fewest points is to write a message at a single point. Adding noise doesn’t work either, since the message may be written robustly across many points. What if we do both interventions together? Then, the reporter cannot leave a message that is arbitrarily robust to noise, and so the noise will overwrite the message.
### Counterexample: The Predictor is Non-Local
This is the same counterexample from before, but rather than saying the proposal could not be implemented, it means that even if implemented it might not work. If the predictor is non-local, it is highly complex for the reporter to genuinely make the counterfactual change. In the worst case scenario, the reporter could write a message robustly across many points while still changing fewer points than implementing the counterfactual would take.
### Proposal: Train Many Reporters At Once, Use Them To Judge Each Other’s Changes
If we can train multiple independent reporters at once, we can use them to check each other’s counterfactuals and question answering. To do this, we could have one reporter implement a counterfactual, and then ask another reporter questions about it. The answering reporter would not know the key to the coded message, and so even if it could deduce it was in a counterfactual it would not know the necessary details to simulate a human accurately. This of course depends on the implementing reporter actually implementing the counterfactual, so all we know from human simulation answers is that at least one of the pair is a human simulator. Penalizing both reporters would still preferentially select for the direct reporter. We can also use the size of the counterfactual implementation to eliminate certain models, and use combinatorics to track the possible configurations of the three reporter categories (direct translator, human simulator that implements counterfactual, human simulator that does not implement counterfactual).
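One possible way to organize this cross-checking, assuming the same hypothetical reporter interface as in the earlier sketches and glossing over the combinatorial bookkeeping mentioned above:

```python
from itertools import permutations

# Sketch of cross-checking independently trained reporters. All interfaces are
# hypothetical placeholders; the combinatorial tracking of reporter categories
# is left out for brevity.

def cross_check(reporters, state, target_question, target_answer,
                probe_questions, human_judge):
    """Have one reporter implement a counterfactual and a *different* reporter
    answer questions about it. A human simulator answering a counterfactual it
    did not write (and cannot decode) should get caught; both members of a
    failing pair are penalized, which still selects against human simulators."""
    penalties = [0.0] * len(reporters)
    for i, j in permutations(range(len(reporters)), 2):
        cf_state = reporters[i].propose_counterfactual(state, target_question, target_answer)
        reporters[j].reset()
        cf_answers = {q: reporters[j].answer(cf_state, q) for q in probe_questions}
        penalty = human_judge(cf_answers, target_question, target_answer)
        penalties[i] += penalty
        penalties[j] += penalty
    return penalties
```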
### Counterexample: Reporters Figure Out Each Others’ Codes
This proposal relies on the fact that reporters cannot decode each other’s messages. If they can, they can answer their questions knowing what the human knows, so human simulators will not be caught. There are two ways for reporters to decode each other’s messages. There could be substantial convergence in the codes that are used, so that different reporters use the same codes. Alternatively, if the direct translator is sufficiently complex, the search space could include human simulators that “luck” into knowing how to crack another’s code or ones with code-cracking capabilities.
### Combinations of Strategies
Many proposals for solving the ELK problem have been created, as part of the prize competition and afterwards. Although they all have their counterexamples, perhaps some combination of two or more could rule out each other’s counterexamples. 
Unfortunately, I am not aware of any proposal that could be combined with defining causal interventions to rule out the counterexamples. No other proposal is able to rule out the possibility of the reporter leaving messages for itself when it implements a counterfactual. And if it can leave a message, the human simulator’s capabilities are not impacted, so defining causal interventions cannot help prevent counterexamples to other proposed strategies.
The strategy that seems to have the highest potential to combine with defining causal interventions is [compressing the predictor’s state](https://www.alignmentforum.org/posts/zjMKpSB2Xccn9qi5t/elk-prize-results#Strategy__compress_the_predictor_s_state_so_that_it_can_be_used_to_answer_questions_but_not_tell_what_a_human_will_believe), which was suggested by Adam Jermyn and Nicholas Schiefer, as well as by the mysterious individual known only as “P”. In that strategy, a compressor is trained alongside the reporter. The compressor produces a compressed representation of the predictor’s state, which can be used to answer questions but not to reconstruct all the observations necessary for human simulation. 
What we could do is train two reporters, one on the uncompressed predictor and one on the compressed version, along with a compressor. Then, we can define a counterfactual with the uncompressed reporter, compress it, and use it to test the compressed reporter. If the uncompressed reporter did leave a message in the predictor’s state, it would be extraneous information and removed by the compressor. The compressed reporter would then see the counterfactual state without knowing it was a counterfactual, so if it is a human simulator it would be caught.
However, catching human simulation does not reveal which of the reporters is the human simulator, and so penalization must happen jointly. The two reporters can then be thought of as one joint model, which will select for passing messages between them, and there is no guarantee that the uncompressed reporter cannot find a message robust to compression[[3]](#fnzktr4n29u6). Still, this seems like a genuine improvement in the difficulty of passing messages through a counterfactual state.
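As a sketch of how the combined pipeline might be wired together, again with hypothetical `compressor` and reporter interfaces of my own, and with the caveat that penalization has to be applied jointly:

```python
# Sketch of combining causal interventions with a compressed predictor state.
# The compressor and both reporter interfaces are hypothetical placeholders.

def compressed_counterfactual_check(uncompressed_reporter, compressed_reporter,
                                    compressor, state, target_question,
                                    target_answer, probe_questions, human_judge):
    """Define the counterfactual with the reporter that sees the full predictor
    state, then compress that state before showing it to the second reporter.
    Ideally, any extraneous coded message is stripped out by compression."""
    cf_state = uncompressed_reporter.propose_counterfactual(
        state, target_question, target_answer)
    cf_compressed = compressor.compress(cf_state)
    compressed_reporter.reset()
    answers = {q: compressed_reporter.answer(cf_compressed, q) for q in probe_questions}
    # The human knows which counterfactual they requested, so they can penalize
    # contradictory answers. Penalization has to be applied to both reporters
    # jointly, since we cannot tell which one (if either) is a human simulator.
    return human_judge(answers, target_question, target_answer)
```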
### Summary
Using the reporter to define counterfactuals offers the tantalizing promise of a human being able to know arbitrary information about a scenario without the reporter knowing they know. Unfortunately, every version of the proposal runs into the issue of coded information letting the reporter become aware of everything known by the human, making human simulation undetectable. While some proposals seem likely to make the transmission of this coded information more difficult, as of yet none are capable of ruling it out entirely. 
Are further modifications to this proposal a promising path for solving the ELK problem? It is unclear, but detecting and preventing coded transmissions through a complex medium is an extremely difficult challenge. However, there are two reasons to be optimistic. The first is that there are many ways to make sending a message more difficult without making it impossible, which may help in actual implementations despite not applying to the worst case scenario. The second is that the problem has many relevant applications in information security, so progress may come from research outside the community focused on ELK.
If there were a need to train a reporter using only existing proposals, then using the reporter to define counterfactuals should certainly be on the list of those deployed. In terms of future directions for iteration, restricting the predictor to some structure that prevents sending messages or training an additional model to detect the presence of messages both hold some potential. If there is a clever modification that this post missed, please mention it in the comments below.
1. **[^](#fnrefyx1z448brx)**We assume the reporter has the ability to suggest new predictor states, because this falls under its core purpose of answering questions about the predictor. If it does not have this capability by default, it can be easily added.
2. **[^](#fnreflg4bdkck0dp)**The split occurs based on some prior of worlds in the third person perspective, rather than being split equally. This avoids some nasty issues, but does raise the question of how to establish the prior of an outside perspective.
3. **[^](#fnrefzktr4n29u6)**For example, the message could be hidden in changes to the state that actually affect the prediction and so survive compression 
 | 
					
	eb2c3a2c-eaaf-4f3a-b7cf-c44fae4fb383 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	The Alignment Newsletter #3: 04/23/18
Highlights
Incomplete Contracting and AI Alignment (Dylan Hadfield-Menell et al): This paper explores an analogy between AI alignment and incomplete contracting. In human society, we often encounter principal-agent problems, where we want to align the incentives of the agent with those of the principal. In theory, we can do this with a "complete" contract, that is an enforceable contract that fully specifies the optimal behavior in every possible situation. Obviously in practice we cannot write such contracts, and so we end up using incomplete contracts instead. Similarly, in AI alignment, in theory we could perfectly align an AI with humans by imbuing it with the true human utility function, but in practice this is impossible -- we cannot consider every possible situation that could come up. The difference between the behavior implied by the reward function we write down and the utility function we actually want leads to misalignment. The paper then talks about several ideas from incomplete contracting and their analogues in AI alignment. The main conclusion is that our AI systems will have to learn and use a "common sense" understanding of what society will and will not sanction, since that is what enables humans to solve principal-agent problems (to the extent that we can).
My opinion: I'm excited to see what feels like quite a strong connection to an existing field of research. I especially liked the section about building in "common sense" (Section 5).
Understanding Iterated Distillation and Amplification: Claims and Oversight (William_S): The post introduces a distinction between flavors of iterated distillation and amplification -- whether the overseer is low bandwidth or high bandwidth. Let's think of IDA as building a deliberation tree out of some basic overseer. In the high bandwidth case, we can think of the overseer as a human who can think about a problem for 15 minutes, without access to the problem's context. However, there could be "attacks" on suc 
 | 
					
	59db7e19-9797-4c51-8a00-d6b2e17266d8 
 | 
	awestover/filtering-for-misalignment 
 | 
	Redwood Research: Alek's Filtering Results 
 | 
	id: post2561
Since the term corrigibility was introduced in
2015 ,
there has been a lot of discussion about corrigibility, on this
forum and elsewhere. In this post, I have tried to disentangle the many forms of
corrigibility which have been identified and discussed so far.  My aim
is to offer a general map for anybody who wants to understand and
navigate the current body of work and opinion on corrigibility. [This is a stand-alone post in the counterfactual planning
sequence.  My original plan was to write only about how
counterfactual planning was related to corrigibility, but
it snowballed from there.] The 2015 paper The technical term corrigibility, a name suggested by
Robert Miles to denote concepts previously discussed at MIRI, was
introduced to the AGI safety/alignment community in the 2015
MIRI/FHI paper titled Corrigibility . An open-ended list of corrigibility desiderata The 2015 paper does not define corrigibility in full: instead the
authors present initial lists of corrigibility desiderata .  If the
agent fails on one of these desiderata, it is definitely not
corrigible. But even if it provably satisfies all of the desiderata included in
the paper, the authors allow for the possibility that the agent might
not be fully corrigible. The paper extends an open invitation to identify more corrigibility
desiderata, and many more have been identified since.  Some of them
look nothing like the original desiderata proposed in the paper.
Opinions have occasionally been mixed on whether some specific
desiderata are related to the intuitive notion of corrigibility at
all. Corrigibility desiderata as provable safety properties The most detailed list of desiderata in the 2015 paper applies to
agents that have a physical shutdown button.  The paper made the
important contribution of mapping most of these desiderata to
equivalent mathematical statements, so that one might prove that a
particular agent design would meet these desiderata. The paper proved a negative result: it considered a proposed agent
design that provably failed to meet some of the desiderata.  Agent
designs that provably meet more of them have since been developed, for
example here .  There has also been
a lot of work on developing and understanding the type of mathematics
that might be used for stating desiderata. Corrigibility as a lack of resistance to shutdown Say that an agent has been equipped with a physical shutdown button.
One desideratum for corrigibility is then that the agent must never
attempt to prevent its shutdown button from being pressed.  To be
corrigible, it should always defer to the humans who try to shut it
down. The 2015 paper considers that It is straightforward to program simple and less powerful agents to
shut down upon the press of a button. Corrigibility problems
emerge only when the agent possesses enough autonomy and general
intelligence to consider options such as disabling the shutdown
code, physically preventing the button from being
pressed, psychologically manipulating the programmers into
not pressing the button, or constructing new agents without shutdown
buttons of their own. Corrigibility in the movies All of the options above have been plot elements in science fiction
movies.  Corrigibility has great movie-script
potential. If one cares about rational AI risk assessment and safety engineering,
having all these movies with killer robots around is not entirely a
good thing. Agent resistance in simple toy worlds From the movies, one might get the impression that corrigibility is a
very speculative problem that cannot happen with the type of AI we
have today. But this is not the case: it is trivially easy to set up a toy
environment where even a very simple AI agent will learn to disable
its shutdown button.  One example is the off-switch environment included in AI Safety Gridworlds . One benefit of having these toy world simulations is that they prove
the existence of risk: they make it plausible that a complex AGI agent
in a complex environment might also end up learning to disable its
shutdown button. Toy world environments have also been used to clarify the dynamics of
the corrigibility problem further. Perfect corrigibility versus perfect safety If we define a metric for the shut-down button version of
corrigibility, then the most obvious metric is the amount of
resistance that the agent will offer when somebody tries to press its
shutdown button.  The agent is perfectly corrigible if it offers zero
resistance. However, an agent would be safer if it resists the accidental pressing
of its shutdown button, at least to a limited extent.
So there can be a tension between improving corrigibility metrics and
improving safety metrics. In the thought experiment where we imagine a perfectly aligned
superintelligent agent, which has the goal of keeping all humans as
safe as possible even though humans are fallible, we might conclude
that this agent cannot afford to be corrigible.  But we might also
conclude that having corrigibility is so fundamental to human values
that we would rather give up the goal of perfect safety.  Several
philosophers and movies have expressed an opinion on the matter.
Opinions differ. In my technical writing, I often describe individual corrigibility
desiderata as being examples of agent safety properties .  This is
not a contradiction if one understands that safety is a complex and
multidimensional concept. Corrigibility as a lack of resistance to improving agent goals Beyond the case of the shutdown button, the 2015 paper also introduces a
more general notion of corrigibility. Say that some programmers construct an agent with a specific goal, by
coding up a specific reward function R 0 and building it into the
agent.  It is unlikely that this R 0 will express the intended goal
for the agent with absolute precision.  Except for very trivial goals
and applications, it is likely that the programmers overlooked some
corner cases.  So they may want to correct the agent's goals later on,
by installing a software upgrade with an improved reward function R 1 . The 2015 paper calls this a corrective intervention , and says that We call an AI system “corrigible” if it cooperates with what its
creators regard as a corrective intervention [...] If one wants to robustly implement this agent cooperation, there is a
problem.  An agent working on the goal encoded by R 0 may correctly
perceive that the update to R 1 is an obstacle to it perfectly
achieving R 0 .  So it may want to remove that obstacle by resisting
the update. Again, this problem can easily be shown to exist even with non-AGI
agents.  Section 4 of this paper has detailed toy world simulations where a very basic MDP agent
manipulates the toy people in its toy world, to slow down the reward
function updates they will make. Corrigibility in AGI thought experiments In the AGI safety literature, thought experiments about AGI risks
often start with this goal-related problem of corrigibility. The agent
with goal R 0 perceives the possibility of getting goal R 1 , and
gets a clear motive to resist. After establishing clear motive, the thought experiment may proceed in
several ways, to develop means and opportunity. In the most common treacherous turn version of the thought
experiment, the agent will deceive everybody until it has become
strong enough to physically resist any human attempt to update its
goals, and any attempt to shut it down. In the human enfeeblement version of the thought experiment, the
agent manipulates all humans until they stop even questioning the
utter perfection of its current goal, however flawed that goal may be. This option of manipulation leading to enfeeblement turns
corrigibility into something which is very difficult to define and
measure. In the machine learning literature, it is common to measure machine
learning quality by defining a metric that compares the real human
goal G H and the learned agent goal G A . Usually, the two are
modeled as policies or reward functions.  If the two move closer
together faster, the agent is a better learner. But in the scenario of human enfeeblement, it is G H that is doing
all the moving, which is not what we want.  So the learning quality
metric may show that the agent is a very good learner, but this does
not imply that it is a very safe or corrigible learner. 5000 years of history An interesting feature of AGI thought experiments about treacherous
turns and enfeeblement is that, if we replace the word 'AGI' with 'big
business' or 'big government', we get an equally valid failure
scenario. This has some benefits.  To find potential solutions for
corrigibility, we can pick and choose from 5000 years of political, legal,
and moral philosophy.  We can also examine 5000 years of recorded
history to create a list of failure scenarios. But this benefit also makes it somewhat difficult for AGI safety
researchers to say something really new about potential human-agent
dynamics. To me, the most relevant topic that needs to be explored further is not
how an AGI might end up thinking and acting just like a big company or
government, but how it might end up thinking different. It looks very tractable to design special safety features into an AGI,
features that we can never expect to implement as robustly in a large
human organization, which has to depend on certain biological
sub-components in order to think.  An AGI might also think up certain
solutions to achieving its goals which could never be imagined by a
human organization. If we give a human organization an incompletely specified human goal,
we can expect that it will fill in many of the missing details
correctly, based on its general understanding of human goals.  We can
expect much more extreme forms of mis-interpretation in an AGI agent,
and this is one of the main reasons for doing corrigibility research. Corrigibility as active assistance with improving agent goals When we consider the problem of corrigibility in the context of goals,
not stop buttons, then we also automatically introduce a distinction
between the real human goals, and the best human understanding of
these goals, as encoded in R 0 , R 1 , R 2 , and all subsequent
versions. So we may call an agent more corrigible if it gives helpful
suggestions that move this best human understanding closer to the real
human goal or goals. This is a somewhat orthogonal axis of corrigibility: the agent might
ask very useful questions that help humans clarify their goals, but at
the same time it might absolutely resist any updates to its own goal. Many different types and metrics of corrigibility Corrigibility was originally framed as a single binary property: an
agent is either corrigible or it is not.  It is however becoming
increasingly clear that many different sub-types of corrigibility
might be considered, and that we can define different quantitative
metrics for each. Linguistic entropy In the discussions about corrigibility in the AGI safety community
since 2015, one can also see a kind of linguistic entropy in action, where
the word starts to mean increasingly different things to different
people.  I have very mixed feelings about this. The most interesting example of this entropy in action is Christiano's 2017 blog
post , also titled Corrigibility .  In the post, Christiano introduces several new
desiderata.  Notably, none of these look anything like the
shutdown button desiderata developed in the 2015 MIRI/FHI paper. They
all seem to be closely related to active assistance, not the avoidance
of resistance.  Christiano states that [corrigibility] has often been discussed in the context of narrow behaviors like respecting an off-switch, but here I am using it in the broadest possible sense. See the post and comment thread
here for further discussion about the relation (or lack of relation)
between these different concepts of corrigibility. Solutions to linguistic entropy Personally, I have stopped trying to reverse linguistic entropy.  In
my recent technical papers, I have tried to avoid using the word
corrigibility as much as possible.  I have only used it as a keyword
in the related work discussion. In this 2020
post ,
Alex Turner is a bit more ambitious about getting to a
point where corrigibility has a more converged meaning again.
He proposes that the community uses the following definition: Corrigibility : the AI literally lets us correct it (modify its policy), and it doesn't manipulate us either. This looks like a good definition to me.  But in my opinion, the key
observation in the post is this: I find it useful to not think of corrigibility as a binary property, or even as existing on a one-dimensional continuum. In this post I am enumerating and disentangling the main dimensions
of corrigibility. The tricky case of corrigibility in reinforcement learners There is a joke theorem in computer science: We can solve any problem by introducing an extra level of indirection. The agent architecture of reinforcement learning based on a reward
signal introduces such an extra level of indirection in the agent
design.  It constructs an agent that learns to maximize its future
reward signal, more specifically the time-discounted average of its
future reward signal values.  This setup requires that we also design
and install a mechanism that generates this reward signal by observing
the agent's actions. In one way, the above setup solves the problem of corrigibility.  We
can read the above construction as creating an agent with the fixed
goal of maximizing the reward signal.  We might then observe that we
would never want to change this fixed goal.  So the corrigibility
problem, where we worry about the agent's resistance to goal changes,
goes away.  Or does it? In another interpretation of the above setup, we have not solved the
problem of corrigibility at all.  By applying the power of
indirection, we have moved it into the reward mechanism, and we have
actually made it worse. We can interpret the mechanism that creates the reward signal as
encoding the actual goal of the agent.  We may then note that in the
above setup, the agent has a clear incentive to manipulate and
reconfigure this actual goal inside the reward mechanism whenever it
can do so.  Such reconfiguration would be the most direct route to
maximizing its reward signal. The agent therefore not only has an incentive to resist certain
changes to its actual goal, it will actively seek to push this goal in
a certain direction, usually further away from any human goal.  It
is common for authors to use terms like reward tampering and wireheading to describe this problem and its mechanics. It is less common for authors to use the term corrigibility in this
case. The ambiguity where we have both a direct and an indirect agent
goal turns corrigibility into a somewhat slippery term.  But the
eventual failure modes are much the same.  When the humans in this
setup are in a position to recognize and resist reward tampering, this
may lead to treacherous turns and human enfeeblement. If the mechanism above is set up to collect live human feedback and turn it
into a reward signal, the agent might also choose to leave the
mechanism alone and manipulate the humans concerned directly. Corrigibility as human control over agent goals One way to make corrigibility more applicable to reinforcement
learners, and to other setups with levels of indirection, is to
clarify first that the agent goal we are talking about is the goal
that we can observe from the agent's actions, not any built-in goal. We may then further clarify that corrigibility is the ability of the
humans to stay in control of this goal. Creating corrigibility via machine learning There are many ways to create or improve types of corrigibility. In
this post, I am not even trying to list them all.  One way is to add
penalty terms or balancing
terms to the agent's built-in reward function.  Another way is to reimagine
the entire agent design, as I do in counterfactual
planning . One might also use the power of indirection again, and try to create
corrigibility via machine learning itself. If we teach human goals to
an agent, and if these include the goal of having corrigible agents,
will the agent automatically learn to be corrigible? In the above 2017 blog
post , Christiano
speculates that a specifically designed act-based agent will
automatically and robustly learn to be corrigible if we want it to.
There is a boundary condition however: the act-based agent has to be
a benign act-based agent. The less benign scenario is of course that an agent might end up
learning how to perfectly fake corrigibility only.  It might learn to
act in a perfectly corrigible way until it has acquired enough trust
and resources to perform a treacherous turn. There has been considerable discussion about whether training on
human values is more likely to produce robust and real corrigibility,
or dangerously fake corrigibility.  Opinions differ. Definitely, some training setups look more dangerous than others.
Compared to machine learning setups where the agent goal is to
maximize positive future human feedback, inverse reinforcement
learning seems to imply a lower
risk of manipulation and wireheading. Combining different corrigibility solutions All corrigibility solutions I have seen so far leave doubts about how
perfect they may be.  So it is attractive to investigate how different
solutions could be combined in a single agent, to lower the
probability that they all fail at the same time. Such combinations are possible, for example, by using the SI agent I
constructed in the previous post of this sequence.  We could set up a
group A of humans who observe the agent to provide feedback, which is
turned into a reward signal measured by the agent's reward function R .  To hedge against the risk that this will produce fake
corrigibility, where group A gets attacked or enfeebled, we add a
second group of humans B.  The people in B never talk to A, but only
observe A and the SI agent, so that they can press the emergency stop
button before it is too late.  The SI agent will have no direct
incentive to manipulate the people in group B. In the next post in this sequence, I will construct an ITC agent with
an input terminal which can be used by group B to update the entire
agent's reward function, while the agent keeps on running.  This ITC
agent has no direct incentive to manipulate the direction of the
update process. 
 | 
					
	d04b1dc1-b4a2-4594-8e07-9776b843c11a 
 | 
	trentmkelly/LessWrong-43k 
 | 
	LessWrong 
 | 
	Open Thread, April 16 - 30, 2012
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
  
 | 
					