Evidence, Induction and Drinking Games

r (CHAOS GHOST!!!) -- 23 Feb 06

I'd like to restart a discussion which began on Forum Wars nearly a year ago, shortly before some reprobate took FW offline.

Informally, I'd like to propose a rational method for deciding whether to accept some proposition as being true on the basis of evidence.

When we reason inductively, we begin with evidence and produce some hypothesis -- a statement which is likely to be true conditional on the evidence. As philosopher Robert Pirsig observed, and as any natural scientist I've asked has confirmed, it is generally very easy to think of hypotheses which fit given evidence to a 'meaningful' degree. In fact, informally, the number of hypotheses which could conceivably be true tends to grow as the amount of evidence increases. However, this does not necessarily spell doom for inductive reasoning, because while the number of potential hypotheses may grow, one may emerge as the most likely (and in practical situations, this is often easy to identify). The idea of this discussion is to formulate a method for assessing this likelihood, in such a way that the assessment is updated as evidence is added.

In situations where generating hypotheses is not hard, this sort of inductive reasoning reduces to testing hypotheses against evidence. Furthermore, since in actual fact a hypothesis is either true or false, any measure of degree of belief in a hypothesis should depend only on the evidence, and not necessarily on the other hypotheses under consideration. That is, we want our method, when faced with the competing hypotheses "It was done in the study by Miss Scarlett with the revolver" and "It was Professor Plum in the kitchen with the knife", to be reducible to two independent tests, one of the first hypothesis against its negation and one of the second against its negation. The presence of the second hypothesis may admit itself as evidence against the first in some way, but as we shall see this is different from requiring them to be direct competitors.

The word 'likelihood' has been thrown around in a non-technical sense thus far (which I intend -- in a technical context, it means something rather different), but it suggests that probability theory is probably the framework within which we should develop such a method. Fundamentally, probabilities are defined by a mapping from some set of subsets (the 'event space') of a given set (the 'sample space') to the real numbers, subject to three conditions: every probability lies between 0 and 1 inclusive, the whole sample space has probability 1, and the probability of a union of disjoint sets is the sum of their probabilities. That's it; from those three conditions, and a few additional definitions, almost all of probability theory follows. I don't want to get into a discussion of the basics of probability theory; I assume anyone reading this will have some grasp of it. I've only included the axioms so that we can interpret them in order to model the situation at hand.

These axioms are commonly interpreted in several ways, but we will use something very similar to the so-called 'Bayesian interpretation'. Therefore, instead of speaking of probabilities having 'events' as their arguments (a term which is derived from the 'frequentist' interpretation, somewhat arbitrarily, since the axioms only mention sets), we speak of probabilities operating on statements. Instead of viewing a probability as some limiting frequency of an outcome in a repeated experiment, we view a probability as the extent to which a rational being would believe a statement, with 1 indicating absolute confidence in the statement's truth and 0 indicating absolute confidence in the truth of the statement's negation. Since this is merely an interpretation of probabilities, which has not altered the axioms, we can still do any probabilistic calculation we are familiar with. In fact, we can also speak of the probability of events, by considering the extent to which our rational being would believe the statement 'event X will happen'.

(TBC...)

r (CHAOS GHOST!!!) -- 23 Feb 06

We can now pose our problem slightly more formally, because hypotheses and evidence can all be expressed as statements. Therefore, let our evidence be a set E = {E1,...,En} of n statements. Let H be a hypothesis about E. We want a way to determine whether we should believe H or ¬H in view of E, and how confident we can be in our decision. Hopefully, our method will have the following properties (at least):

First, since we are trying to abstract parts of how we really reason about things, we would hope that our method is consistent with our intuition. In particular, if a piece of evidence strengthens our belief in H, it should weaken our belief in ¬H by the same amount.

Second, our method should involve some meaningful scale by which the total strength of evidence can be measured.

At this point, I'm just going to give a rough-and-ready description of the method and introduce a little thought-experiment to put it in context. Finally, there are some questions which I think are in need of discussion. I will ask these, and hope everyone else contributes more. This has not, thus far, been very clearly written and I think as individual points are discussed, the idea will evolve a great deal.

(TBC...)

r (CHAOS GHOST!!!) -- 23 Feb 06

So:

Suppose, in the absence of evidence, that we have some prior probability P(H) for H (in practice, choosing this is the main difficulty of the method, and I think the discussion might end up focusing on this point, which would be worthwhile).

We thus define the odds of H against ¬H in the usual way:

O(H,¬H)=P(H)/(1-P(H))

These are the 'prior odds' and we call their logarithm (natural, for now, but this may change later and matters only up to rescaling):

W(H,¬H) = log O(H,¬H)

the 'prior evidence' in favour of H. Note that if O < 1, then the evidence is negative, and the situation, at the moment, favours ¬H. We could equally well choose ¬H as our hypothesis, and the magnitude of the prior evidence would not change, ie we have not violated our intuition thus far. Suppose we introduce a single piece of evidence E1. Then the obvious thing to do is to consider how likely E1 is in the situation that H is true and in the situation that ¬H is true.
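
To make the signs and the symmetry concrete, here is a minimal Python sketch of the prior odds and the prior weight of evidence; the prior probabilities in the example are made up purely for illustration.

import math

def prior_odds(p_h):
    # O(H,¬H) = P(H) / (1 - P(H))
    return p_h / (1.0 - p_h)

def prior_weight(p_h):
    # W(H,¬H) = log O(H,¬H), natural log for now
    return math.log(prior_odds(p_h))

print(prior_weight(0.5))  # 0.0: no prior leaning either way
print(prior_weight(0.8))  # about +1.39: favours H
print(prior_weight(0.2))  # about -1.39: same magnitude, favours ¬H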

Thus we consider the odds of E1 being true conditional on H and ¬H, ie the ratio P(E1|H)/P(E1|¬H). By Bayes' theorem, we have:

P(H|E1)/P(¬H|E1)=[P(E1|H)/P(E1|¬H)]*O(H,¬H)

In other words, the odds on H given E1 are given by the above expression. Now we take logarithms to define two new functions. The first is an extension of our prior evidence:

W(H,¬H|E1) = log[P(E1|H)/P(E1|¬H)]

and the second is the logarithm of the left-hand side, which represents the total strength of evidence for H:

S(H) = W(H,¬H|E1) + W(H,¬H).

Suppose we add another piece of evidence, E2. To see what happens, it is best to start at the beginning, and treat P(H|E1)/P(¬H|E1) exactly as we did the prior odds before. This is admissible since conditional probabilities behave exactly like prior ones (this follows from the axioms, and can be visualised by picturing the sample space as a plane region etc.). It is important, however, that we assume this new piece of evidence to be independent of E1 (conditionally on each of H and ¬H). This is not a particularly limiting assumption in practice (another point of discussion, but I have some rationale for this claim). By the same argument as before, we obtain (GIVEN THAT OUR EVIDENCE STATEMENTS ARE MUTUALLY INDEPENDENT) that:

S(H) = W(H,¬H) + W(H,¬H|E1) + W(H,¬H|E2)

Thus to incorporate new evidence, we need only add another term depending on the conditional probability of the single new piece of evidence given each hypothesis -- by induction, we can do this for as many pieces of independent evidence as we like. This concept is powerful, because it strongly agrees with our intuition. Furthermore, the use of Bayes' theorem eliminates any worry about the prior probability of our single pieces of evidence, since they always cancel (in particular, their prior probabilities are never zero since we've observed them happening).
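
If it helps to see the bookkeeping spelled out, here is a short Python sketch of this additive update; the conditional probabilities in the example are invented, and the last line simply inverts the log-odds definition to recover a posterior probability from S(H).

import math

def weight(p_e_given_h, p_e_given_not_h):
    # W(H,¬H|E) = log of the Bayes factor P(E|H)/P(E|¬H)
    return math.log(p_e_given_h / p_e_given_not_h)

def strength(p_h, evidence):
    # S(H) = prior weight plus the weight of each observed piece of evidence.
    # `evidence` is a list of (P(Ei|H), P(Ei|¬H)) pairs, assumed conditionally
    # independent given each hypothesis.
    s = math.log(p_h / (1.0 - p_h))
    for p_given_h, p_given_not_h in evidence:
        s += weight(p_given_h, p_given_not_h)
    return s

# Even prior, one piece of evidence mildly favouring H, one strongly favouring it.
s = strength(0.5, [(0.6, 0.4), (0.9, 0.1)])
posterior = 1.0 / (1.0 + math.exp(-s))
print(s, posterior)  # about 2.60 and 0.93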

I encourage the reader to have a little ponder about the central notion, which is the function W (ie, the weight of evidence -- the S is for 'strength' of the hypothesis H). Finally, note that with the appropriate change in the original definition of odds, ¬H can be replaced by any hypothesis I without changing anything else, but I believe that evaluating S(H,I) can be reduced recursively to considering multiple binary cases as described (consider Miss Scarlett and Professor Plum again).

(TBC...)

r (CHAOS GHOST!!!) -- 23 Feb 06

The last thing I want to talk about tonight is the subject of units. It's all very well to have defined S(H), but where do we draw the line in making a decision? It would be naive to simply accept H every time S(H) > 0, because the method certainly does not guarantee we have accumulated all the relevant evidence; some massive piece of evidence might come along with a massively negative W that totally undermines our faith in H (I include in this the case where a piece of evidence shows up in the form of the presence of some other hypothesis; if you don't like that, then simply include it as a distinct reason why the above strategy would be naive). Similarly, we might then try to test a new hypothesis H': "We have collected sufficient evidence to test H". I haven't considered the ramifications of this approach, but it does not look promising when we begin to think about testing H''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''.

One approach would be to treat evidence as one would some real physical quantity, and give real meaning to S. The following thought-experiment (which became an actual experiment) motivates this.

I first started thinking about this sort of thing about a year ago, and the previous weekend I had been playing the drinking game 'I have never'. Consider the following thought experiment; it is a game played by an Experimenter and a Subject. Here are the rules:

1. The Subject decides beforehand, uniformly randomly (by the flip of a fair coin, say), whether, throughout the experiment, he will always tell the truth (or act in a manner equivalent to telling the truth) or always lie (ditto). He does not inform the Experimenter of his choice.

2. The Experimenter asks the Subject if he has done some specific action in his life; these actions must be chosen in a colloquially 'random' way, so that the answers are essentially independent. If his answer would be 'yes', the Subject does a shot. If his answer would be 'no', the Subject abstains from drinking.

3. Step 2 is repeated as many times as the Experimenter sees fit; the questions are entirely at the Experimenter's discretion.

4. The Experimenter guesses whether or not the Subject had chosen to be truthful.

Obviously, the Experimenter could win by deduction (for example, by asking the Subject whether he has ever flipped a coin, or by waiting until the Subject drinks and then asking him about that drink), but for the sake of the experiment, the Experimenter does not ask questions from which the answer could be deduced with certainty.

The Experimenter tests the hypothesis H = "the subject is truthful" against its negation, and keeps a score of evidence as the game progresses; observing whether the Subject drinks on each question gives a piece of evidence. Clearly, since H and ¬H are equiprobable, the prior weight of evidence is zero.
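
For what it's worth, here is a toy Python simulation of the scoring; the distribution of questions, and the assumption that the Experimenter's estimate of each P(E|H) is exactly right, are simplifications of mine rather than part of the rules.

import math
import random

def play(n_questions=10):
    truthful = random.random() < 0.5  # rule 1: fair coin flip, unknown to the Experimenter
    s = 0.0                           # prior weight of evidence is zero (even odds)
    for _ in range(n_questions):
        p = random.uniform(0.05, 0.5)         # chance a truthful Subject would drink
        did_it = random.random() < p
        drinks = did_it if truthful else not did_it
        # P(drinks|H) = p and P(drinks|¬H) = 1 - p, so the weight added is:
        s += math.log(p / (1 - p)) if drinks else math.log((1 - p) / p)
    guess_truthful = s > 0            # rule 4: guess according to the sign of S(H)
    return truthful, guess_truthful, s

print(play())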

(TBC...)

r (CHAOS GHOST!!!) -- 23 Feb 06

The point of the experiment is to determine, subjectively, how big S(H) should be to convince us to accept or reject H. We could obviously work out what conditional probability corresponds to a given S(H), but this doesn't give us much of an intuitive feel for whether to accept H. Do we accept H if the evidence points to H being 80% likely? 99%?

The experiment works because the Experimenter has a large amount of control over P(E|H) for each piece of evidence: he can make the question arbitrarily, but roughly measurably, outrageous. Much as Gabriel Fahrenheit calibrated his temperature scale to the temperature of human blood, the Experimenter knows how much evidence against the Subject's claim of truthfulness is provided by the fact that he does a shot when asked if he's ever slept with the Queen. Therefore, in other contexts, the total weight of evidence in favour of a claim can have some intuitive meaning.

We can thus define units in a practical way (this amounts to choosing a base for the logarithm). I defined the basic unit of evidence as follows:

Suppose the Experimenter asks the Subject a question for which he knows with certainty the odds P(E|H)/P(E|¬H). The basic unit of evidence is the logarithm of the value closest to 1 that these odds can take while still producing a noticeable bias in a second Experimenter, one who knows nothing about probability theory. (Most people, even if ignorant of probability theory, would have a strong opinion about the truth of H if the Subject claimed to have slept with the Queen. The basic unit of evidence is that yielded by observing the answer to the most innocuous question we can think of which still produces trust in, or skepticism of, H.)

I would put these odds at somewhere near 60:40 = 1.5; at one point, I had a question in mind that corresponded to this, but I've forgotten. I call the basic unit of evidence a 'shot' and define it to be the base-1.5 logarithm of the relevant odds ratio.
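
As a rough sanity check on this scale (and on the earlier question about 80% and 99%), here is a small Python conversion between shots and the posterior probability, starting from an even prior; it uses the approximate 60:40 figure above.

import math

def shots_to_probability(n_shots):
    # n shots on an even prior correspond to odds of 1.5**n in favour of H
    odds = 1.5 ** n_shots
    return odds / (1 + odds)

def probability_to_shots(p):
    # how many shots move an even prior to posterior probability p
    return math.log(p / (1 - p)) / math.log(1.5)

print(shots_to_probability(1))     # 0.6
print(probability_to_shots(0.8))   # about 3.4 shots
print(probability_to_shots(0.99))  # about 11.3 shots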

Talk amongst yourselves.

(TBC...)

r (CHAOS GHOST!!!) -- 23 Feb 06

(Note: I'm not sure how much of this is non-standard. When I was using some of this stuff in a paper I wrote last June, my advisor told me that the ratio P(E|H)/P(E|¬H) is a special case of what's understandably called a 'Bayes factor', but I have not run across using its logarithm before. The idea is based on that of Shannon entropy, which takes logarithms of probabilities, rather than odds, to measure the information communicated by a random variable, but it addresses an essentially different problem. The additive nature of logarithms is what makes this conform to our intuition about adding up evidence. I'm particularly interested to see what RHP can do with the Experiment, and in discussing dependent evidence.)

bbarr (Chief Justice) -- 23 Feb 06

Originally posted by royalchicken
I'd like to restart a discussion which began on Forum Wars nearly a year ago, shortly before some reprobate took FW offline.

Informally, I'd like to propose a rational method for deciding whether to accept some proposition as being true on the basis of evidence.

When we reason inductively, we begin with evidence and produce some hypothesis -- a st ...[text shortened]... being would believe the statement 'event X will happen'.

(TBC...)
How should one go about determining the probability that there exists no rational being? Since this is surely possible, the probability ought to be greater than zero. But your analysis of evidential probability entails that this must be zero, since a rational being would give no credence to the proposition that there are no rational beings. So, you need some other (presumably non-credal) analysis of evidential probability. The general problem with credal analyses of evidential probability is that they overlook the entailments of their antecedents (i.e., if X is a rational being....) on that which these conditional claims are supposed to analyze.

DoctorScribbles (BWA Soldier) -- 23 Feb 06

Originally posted by bbarr
a rational being would give no credence to the proposition that there are no rational beings.
I don't believe that this is the case.

A being need not be aware of his own rationality in order to act rationally or to apply a rational method of analysis.

After all, you presumably acted rationally before you even understood what rationality was, and tons of people act irrationally without admitting that they are doing so.

Thus, a being can carry out a rational analysis of the proposition at hand without rejecting it out of hand as a direct consequence of performing that rational analysis.

Further, even an irrational being can mechanically apply a rational method of analysis. Even complete crazies can add. If only irrational beings apply royalchicken's rational method, your paradox never arises, which demonstrates that the flaw, if it exists at all, is something external to the method itself.

r (CHAOS GHOST!!!) -- 23 Feb 06

Actually, we don't need to argue this point; we can take a purely Bayesian view and accept that everyone's assessment of the evidence (ie of O(H,¬H|E)) may differ. Pretend I modified my first post to reflect this -- probabilities are now interpreted as degrees of confidence in a statement according to any observer. For a given observer, things will still be consistent, since this is merely an interpretation of something which must obey axioms.

bbarr (Chief Justice) -- 23 Feb 06

Originally posted by DoctorScribbles
I don't believe that this is the case.

A being need not be aware of his own rationality in order to act rationally.

After all, you presumably acted rationally before you even knew what rationality was, and tons of people act irrationally without admitting that they are doing so.

Thus, a being can carry out a rational analysis of the questi ...[text shortened]... at hand without rejecting it out of hand as a consequence of performing that rational analysis.
We're not talking about practical rationality, we're talking about theoretical rationality, so acting rationally is irrelevant. What's relevant is the credence a rational agent would have in the hypothesis given our available evidence.

DoctorScribbles (BWA Soldier) -- 23 Feb 06

Originally posted by bbarr
We're not talking about practical rationality, we're talking about theoretical rationality, so acting rationally is irrelevant. What's relevant is the credence a rational agent would have in the hypothesis given our available evidence.
In order for your paradox to arise, the theoretically rational agent must be aware of his own theoretical rationality.

Why would a theoretically rational agent be aware of his own rationality? What is the analytical connection between being theoretically rational and being aware of your theoretical rationality?

bbarr (Chief Justice) -- 23 Feb 06

Originally posted by royalchicken
Actually, we don't need to argue this point; we can take a purely Bayesian view and accept that everyone's assessment of the evidence (ie of O(H,¬H|E)) may differ. Pretend I modified my first post to reflect this -- probabilities are now interpreted as degrees of confidence in a statement according to any observer. For a given observer, things will still be consistent, since this is merely an interpretation of something which must obey axioms.
If evidential probabilities are probabilities of hypotheses conditional upon their evidence, then, trivially, any piece of evidence must be assigned probability 1 (its probability conditional upon itself). Does this require us to be absolutely certain about our evidence, where such certainty is construed as the highest possible degree of belief? If so, then how would we ever revise our belief that some proposition qualifies as evidence? It seems we can't, on your view. But that is absurd, given that we revise what we take to be our evidence all the time.

bbarr (Chief Justice) -- 23 Feb 06

Originally posted by DoctorScribbles
In order for your paradox to arise, the theoretically rational agent must be aware of his own theoretical rationality.

Why would a theoretically rational agent be aware of his own rationality? What is the analytical connection between being theoretically rational and being aware of your theoretical rationality?
The point generalizes. Do you want a proof?

DoctorScribbles (BWA Soldier) -- 23 Feb 06

Originally posted by bbarr
The point generalizes. Do you want a proof?
Only if it's quick and easy. Otherwise I'll take your word - I just don't see it.

My point is, can your objection be addressed by something analogous to the fix applied to set theory to eliminate its classic paradox? Can the rational agent be defined as being restricted to analyzing things in a universe in which he does not exist, thereby eliminating your objection?

bbarr (Chief Justice) -- 23 Feb 06

Let P be some logical truth such that, in this world, it is very probable on our evidence that nobody has great credence in P.

Let H be the hypothesis that nobody has great credence in P.

By assumption, H is very probable upon our evidence.

So, a rational being with our evidence would have great credence in H.

Since P is a logical truth, H is logically equivalent to the conjunction
P & H.

Since rational beings would have the same credence in logically equivalent hypotheses, he would have great credence in P & H.

But this entails that he has great credence in the proposition "P, and nobody has great credence that P", which is absurd.
