26 Nov 12
Originally posted by bbarr
I think you may be using a poor example. You claim to know your address and announce your address. Now, suppose I have been to your address and believe you have the house number wrong. I don't see why it would be unfair for me to point out that the number should be 4579 and not 4597. In this case, it is easy for you to resolve if you have a postal letter or driver's license with your address on it as proof that you are correct.
You're right, there are billions of people who claim to have personal experiences they explain by reference to this god or that god, or to angels, demons, ancestors or ghosts. There are billions of people who believe in Vishnu and have not had experience of your God. And millions of Buddhists who have not had experiences of any personal God. And millions of ...[text shortened]... as unlikely as there being a poltergeist in my freezer...
Originally posted by Soothfast
Yes, S can be justified in believing P despite P being false. In fact, S can be justified in believing P, P can be true, and S can still not know P (because S's justification is not connected in the right way to the truth of P; this is what Gettier showed).

S may be justified in believing P despite S having reasons insufficient to guarantee the truth of P. But S can't know P under those circumstances.
Are you saying, then, that (3) can be true and yet "S knows P" false? But then, if P happens to be true, it should be possible for (1)&(2)&(3)&(4) to be true while "S knows P" is false. Thus ...[text shortened]... be false.
EDIT: I'm going to go study the "Gettier counterfactuals" more closely.
No, it doesn't follow that if all the conditions are met, 'S knows P' could still be false. Here is a quick Gettier example:
Suppose Smith believes his friend Jones has a brown car. He has seen this brown car, been driven around in it, etc. Just yesterday, Jones was talking at length about his brown car to Smith. In short, Smith is justified in believing Jones has a brown car. Further, suppose it is actually true that Jones has a brown car. Thus, Smith has a justified, true belief that Jones has a brown car. But does Smith know that Jones has a brown car? It seems like it. Suppose, however, that today Jones sold his brown car. Jones then took the money and bought another, completely different brown car. Smith still has a justified, true belief that Jones has a brown car. But now it seems that Smith is only accidentally justified in his belief. The evidence Smith has isn't connected, in the right sort of way, with the facts that make it true that 'Jones has a brown car'. Smith is just sort of lucky in still having a true belief.
What condition (4) aims to do is spell out the connection that one's reasons, evidence or justification for a belief must have to the facts that make that belief true in order for one to have knowledge. There have been a number of attempts (e.g., Armstrong's causal analysis, Nozick's truth-tracking analysis, Clark's no-false-lemmas analysis, Harman's no-essential-false-lemmas analysis, etc., ad nauseam); some are better than others.
You know that a proposed condition (4) is bad if you can provide a counterexample showing that each of the conditions is met and yet, intuitively, S doesn't know P. But it is not inevitable that there will always be a counterexample to any proposed condition (4). This is why it doesn't simply follow that the conditions as a whole don't provide a good analysis. Work needs to be done, though, that's for sure. And then, of course, it could be the case that the whole project is doomed because conceptual analysis never works for natural language concepts; that Wittgenstein is right about the different instances of 'know' bearing family-resemblances to each other, not sharing definitions; that Quine is right about analyticity; that Kornblith is right about knowledge being a natural kind; that Goldman is right about knowledge being the result of reliable true belief formation, etc.
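Schematically, the analysis under discussion can be put as follows. This is just my shorthand rendering of the four conditions as numbered in this thread, with B, J and C standing for the belief, justification and "connection" conditions (the symbols are mine, not the thread's):

```latex
S \text{ knows that } P \iff
  \underbrace{P \text{ is true}}_{(1)} \;\wedge\;
  \underbrace{B(S,P)}_{(2)} \;\wedge\;
  \underbrace{J(S,P)}_{(3)} \;\wedge\;
  \underbrace{C(S,P)}_{(4)}
```

A Gettier case is then a situation in which (1), (2) and (3) all hold and yet S intuitively lacks knowledge; the open question is how to state C so that no such counterexample survives.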
Originally posted by bbarr
Thanks. That makes sense.
One of the examples I found yesterday, which you doubtless are familiar with, was attributed to Bertrand Russell: The case of someone looking at a clock, seeing it reads 12:00, and concluding -- correctly -- that it is noon. Only, the clock actually stopped working exactly 12 hours earlier, so the conclusion is only accidentally right.
Originally posted by Soothfast
Maybe things like this could happen with evolutionists thinking the earth and universe are very old and that man came from monkeys or some other ape-like creature. Maybe the evidence looks good to them, but they are actually wrong while believing they are right. Who knows?
Originally posted by Soothfast
Right, that's from Human Knowledge: Its Scope and Limits. If I remember correctly, Russell actually presents that case as a counterexample to the True Belief theory of knowledge, arguing for a justification condition. But it could apply just as well as a Gettier-type case.
Originally posted by Soothfast
The problem is that it doesn't matter how "strict" S intends to be: if you tie the justification condition (3) to the truth condition (1) in the analysis of knowledge, then in the same sense that (1) is "external" and not something that S meets, now (3) also becomes external and not something S meets. If you stipulate that satisfaction of (3) entails satisfaction of (1), then (3) is for all intents and purposes an external condition, because now it is simply a demand of the world: a demand that S's evidential basis, objectively, be such that it guarantees the truth of P. And this doesn't track our intuitions regarding justification, as the lottery example shows.

At the end of the day, you cannot really claim that there are some inquiries for which you "allow" for the possibility that (2)&(3)&(4) are true and yet (1) is false, as if you can make it so that there is no such possibility for the inquiries you actually care about just by being strict and careful about your approach. The reality is that the possibility will always exist, at least for the inquiries you could really care about. So, we can only do our best to ferret out the facts; try to be objective and apportion belief as the evidence dictates; and just live with whatever epistemic possibility is left over that what we think we know we don't actually know. S can be as strict as he wants about inquiries related to God or the Higgs boson, and he can succeed in shrinking the possibility down, but S won't succeed in causing this possibility to evaporate completely (even if, in fact, S succeeded in making himself maximally sure of his conclusion).
Yes, I understand your point. As I said, my viewpoint evolved as the discussion progressed, and my final word on the matter ultimately came to resemble my first words (which brought up the probability issue). My final assessment was this:
[quote]If you want to interpret "warranted" as meaning "Ho hum, seems good enough for gubbamint work," well, that' ...[text shortened]... e instead "no god exists" then I would still be skeptical that (3) is satisfied.[/quote]
Regarding your claim that warrant varies with the seriousness of the consequences of being wrong, I think I understand your point, and it is a good point. I'm not saying it should not factor in, but it's unclear to me how exactly this should factor into the analysis of knowledge. The analysis of knowledge purports to outline the necessary and sufficient conditions for an instance to count as knowledge. It does not purport to account for some of the things you imply. For example, we can distinguish between the epistemic probability (the ideal rational confidence level indicated by the evidence) on one hand and whatever confidence level S actually forms about the proposition on the other. These need not be the same, just as we can distinguish epistemic certainty (when the evidence actually guarantees that P) from psychological certainty (when S is just maximally convinced that P). (This distinction is why, even if S can succeed in achieving psychological certainty by being very strict with his approach, it won't mean he is thereby successful in actually eliminating all epistemic possibility that he is wrong.)

My point, I guess, is that a lot of what you talk about here goes to achieving a certain level of psychological confidence, but it's not clear to what extent this is necessary or sufficient for justification. For example, let's say P is the proposition that a narrow wooden bridge over a 1000-foot rocky gorge is going to hold your weight. This represents a proposition that, as you say, could carry a lot of seriousness of consequence if you're mistaken about it, especially if you're mistaken about it and yet it informs your next action. Here, prudence dictates that you will try to verify the proposition every which way you can before you act on it, but is this actually required for S to be epistemically justified in believing that P? Probably not.
This would be a case where prudence and the things we value dictate that we achieve some level of psychological certainty that outpaces what would be required for it to count as an instance of knowledge. It could of course work the other way for things we don't care about. My point is that I would be wary of saying that something like epistemological justification needs to account for our passions regarding what we take to be the seriousness of the consequences of being wrong about P. Passions are exactly what often muddle our ability to be objective. So, while I agree that prudence dictates that "we be very strict before concluding that (3) is true" when the consequence of being wrong in our conclusion stands a good chance of costing lives, I don't agree that this necessarily means that our satisfying (3) requires more in this case than in others we don't care about. On the contrary, I would think it just means that prudence, and the things we value, dictate that we ought to, if anything, overdetermine our satisfaction of (3). We ought to make sure that we have not just satisfied (3) but over-satisfied it, if anything. So I'm not all that convinced that our notions of epistemic justification, or warrant, need to stretch to account for such things, especially since our notions of prudence and value already accommodate the idea that we will feel the need to overdetermine or underdetermine what is required of us in such cases. At any rate, we have already seen how what one views as the seriousness of the consequences can lead one to the most bizarre ideas regarding justification: just look at the ardent theist who might claim that bbarr must have absolutely conclusive proof to conclude that God doesn't actually exist, which is just patently absurd. On the other hand, maybe we'll believe stuff for no good reason if it doesn't matter to us.
No one would claim that these attitudes actually have anything to do with delineating what's really required of, or sufficient for, one to be justified in those cases, so I don't really see why something else should hold just because one thinks it is of grave consequence to be wrong about the Higgs boson, or something.
At any rate, I wasn't quite sure if you are claiming that our accounts of justification or warrant need to stretch to accommodate our attitudes about the consequences of our being wrong; or if you're claiming that prudence and values, etc., can reasonably dictate that we set some level of psychological confidence before we endorse and act on things, or something like that. I agree with the latter, but I am not so sure about the former.