Questions for the moral atheist



L

Joined 24 Apr 05 · Moves 3061 · Posted 27 Aug 11

Originally posted by Palynka
I'll reply properly tomorrow. 🙂

Just a word, when I argue for morality being preference based, I'm arguing against moral realism not for non-cognitivism per se. There are cognitivist anti-realist views (cognitive subjectivism, for example). But I think that there's no point in arguing for non-cognitivism if we haven't even agreed on anti-realism.
This is already a very complicated discussion. On my understanding, there are several problems with the thesis of moral non-cognitivism. I would like to ease into the discussion by outlining just one, since I would like to know more about how you deal with it on your view.

With respect to, say, moral utterances, the question of (non)cognitivism deals in part with what types of mental states such utterances serve to express (and, I should point out, this is a different question from the question of what types of mental states may cause such utterances). Under your non-cognitivism, moral utterances do not serve to express beliefs, since beliefs take as their content propositions, which can be true or false.

But, now, here's one potential problem that I see. If it is the case that some utterance U in circumstances C does in fact serve to express some mental state M, then we would expect that some subject S who is in C and utters U would bring about confusion if she went on to say that, however, she does not have M. So consider if someone were to say "Hitler was an evil person! But I do not believe that Hitler was an evil person." Or, to borrow an example from vistesd in this thread, consider if someone were to say "It is wrong to commit child rape. But I do not believe it is wrong to commit child rape." These seem rather odd and in some sense would "fail to get by". How do you deal with this kind of seeming awkwardness?

black beetle
Black Beastie

Scheveningen

Joined 12 Jun 08 · Moves 14606 · Posted 29 Aug 11

Originally posted by bbarr
O.K., back to the action:

According to the view you’ve articulated here, sincere moral judgments are identical to expressions of constellations of preferences (i.e., “X is good” expresses your preferences that X & that others prefer X). Of course, there is room for refinement here. You could construe moral judgments as expressions of personal preferences ...[text shortened]... r I'll go over the Smith example, and some other worries. Now I'm off with the girlfriend...
Excellent!

As regards your Chess example, methinks we have three rational ways at hand to establish the means of Knowledge of the Royal Game (any moral norm will follow just according to our own –how banal, how banal!– evaluation of the mind).

First: since we want to win according to specific rules, we may suppose that our variations and our knowledge are established by mutual coherence. However, in this case we just end up with coherent fairy-tales, because the validity of our variations is not something “objective out there” but a specific subjective requirement of a specific mind-only position. Objectivity is anyway non-existent, but my point is that Knowledge cannot be established by the mutual coherence of our epistemic instruments and the epistemic objects.

Second: are our variations self-established? I argue they are not, for if our chess perception were self-established it would exist independently of the existence of the position; furthermore, if we assume that our chess perception were self-established, our variations would have to function as their own object, because otherwise there would be no other object to be pondered on and we would "only calculate". However, the position, our perception and our variations do not exist simultaneously, since variations evolve in time; and the variations cannot be the product of calculations that exist simultaneously with the position and our perception, because both our perception and our calculations evolve in time too.
On the other hand, if our epistemic instruments were self-established, and therefore established independently of the position, our variations would not be chosen from among all the different means of our cognitive access (all the variations that we qualify as winning) that deliver accurate knowledge which in turn helps us to win. It follows that we should select solely the variations that have this "winning" property, because this specific internal quality would guarantee that we deliver an accurate response as regards the nature of the objects cognized (the position). But this simply does not hold. Therefore, our (empty, lacking inherent existence) variations are not self-established either.

So we have to see, as you said, how the connection between the “winning quality” and the implementation of the “correct” variation is justified.
In other words, since the position envelops all the hidden properties that, after our evaluation, can be set into motion, how do we know that a specific variation is the accurate representation of our cognitive access that leads to a win? Or, how do we know that a particular set of moral propositions is intrinsically moral?
Well, methinks we choose the variations we identified as winning solely when we assign their "winning quality" once we have accessed them in relation to the position cognized; then we can conclude that these variations really lead us to the knowledge of the win rather than of a draw or a loss. (We do the same as regards our sets of moral principles.) And, once more, this means that our variations are not self-established, for their establishment incorporates reference to the specific position. (By analogy, morality is not self-established.)

Finally, do our variations and the position establish each other? I argue that they do not. Since we apprehend the position, our apprehension and our perception are nothing but epistemic instruments. But we need another epistemic instrument in order to establish the success of our variations (and thus the success of our own cognitive actions); otherwise we cannot tell whether our variations are winning or not! The successful cognitive apprehension is not an act of cognition that leads to a winning variation, so we are forced to include coherence with other cognitions, calculations and beliefs as a criterion of our (desire and will to) win. At this point we are forced to select a certain set of cognitions and beliefs that are fixed for us, so that we can evaluate the status of other counter-variations (refutations) relative to ours (and here we are, deep in the realm of the moral philosophers!). Therefore, if our variations are not accurate (winning), the coherence of each move we make in this context has no winning (argumentative) weight.
Then, if we suppose that knowledge of morality (or knowledge that leads to a win during a game of chess) is acquired by using a set of procedures (perception, inference etc.) whose nature is to produce further knowledge by conveying specific pieces of information about an objective, mind-independent set of individuals that are the bearers of specific qualities (as our Freaky claims when he talks again and again about a morality that is absolute, objective and independent of the human mind, according to his dogma), we would be stranded: you see, I have the feeling we cannot establish that something can be regarded as intrinsically an epistemic instrument or an epistemic object; the two are mutually established ad infinitum, because the instrument establishes the object by giving us cognitive access to it, whilst our successful interaction with the object establishes the instrument as a trustworthy means to perceive it.

So we are doomed to classify something as an epistemic instrument or as an object not because its intrinsic nature per se is that of an epistemic instrument or an object, but because we simply regard it as such at a given reflective equilibrium! It's –still– only Us, we still remain the products of our products: methinks in the Royal Game we are using empty beliefs about the nature of the position, and thus we bring up variations in order to test our hypotheses about the instruments by which we acquire our belief that we will finally win, and then we use the new position in order to assess our view about the nature of the position we just entered. Well, I argue we are doing the same thing as regards morality. And what exactly do we want to achieve by inventing and coding morality, other than living well in a given social/causal nexus, according to what we subjectively take "living well" to mean?
😵

P
Upward Spiral

Halfway

Joined 02 Aug 04 · Moves 8702 · Posted 29 Aug 11

Originally posted by bbarr
Later I'll go over the Smith example, and some other worries.
To avoid risking dispersion (and discouragement!) I think for now it's best we stay with the objections you raise in this example, and discuss them a bit more before we go into that one. Although I have a general idea about my answer to your last post, I need some time to go organize my response to the main objections you presented...

JS357

Joined 29 Dec 08 · Moves 6788 · Posted 29 Aug 11 (2 edits)

Originally posted by Palynka
To avoid risking dispersion (and discouragement!) I think for now it's best we stay with the objections you raise in this example, and discuss them a bit more before we go into that one. Although I have a general idea about my answer to your last post, I need some time to go organize my response to the main objections you presented...
I second this suggestion, because the foray into chess (and the side forays into parenting and gardening) is productive in terms of relevant thought.

This is a kibitz needing no reply.

Recognizing that there is a moral side to chess, which concerns cheating on the large scale and the courtesy of "gardez la dame" on the small; recognizing that parenting and gardening are similar to one another in interesting ways, but that parenting has a stronger moral element; and wondering what it is that the psychopath/sociopath lacks with respect to these endeavors, I want to suggest that an endeavor carries moral weight to the extent that (1) its norms concern the care given by one human for the well-being of another, or more generally for society as a whole (in chess, for "the game" or the chess community), and (2) success or failure against its norms triggers certain learned emotional responses we call shame, guilt, moral outrage, etc.

Does a society induce and draw on these emotional responses when the norm is important enough to the well-being of the society itself? Is it too important to leave to cognitive faculties alone?

Maybe we don't need to know the answer. It is in the book "Gödel, Escher, Bach" that an entity called Aunt Hillary is first depicted: an entity composed of an ant hill that thinks and "talks" with an anteater who visits it, though none of the ants know anything about that. I suggest that the ants making it up might wonder why it is they do certain things. So might it be with the individual human and society, and this is a reason that morality is somewhat mysterious. It is the society acting as a whole, for its own well-being, and we don't need to know everything about that.

http://www.veiled-chameleon.com/weblog/archives/000282.html

bbarr
Chief Justice

Center of Contention

Joined 14 Jun 02 · Moves 17381 · Posted 30 Aug 11 (2 edits)

Originally posted by Palynka
To avoid risking dispersion (and discouragement!) I think for now it's best we stay with the objections you raise in this example, and discuss them a bit more before we go into that one. Although I have a general idea about my answer to your last post, I need some time to go organize my response to the main objections you presented...
I hope you enjoyed the weekend! Please continue with the earlier objections, we're not in any hurry. But I'm going to try to get all my worries out somewhat early, both because I don't want to get derailed, and because they all dovetail in interesting ways.

The Smith Example

Your response to this example didn’t address my intended point, so my point must have been unclear. I’ll try to get at it another way. Presumably you think that there are conditions under which certain utterances make sense. If the cat is on the mat, and you ask me whether the cat is on the mat, and I am not out to deceive you, etc., then the conditions suffice for it to be appropriate for me to say “Indeed, the cat is on the mat”. But are there conditions that suffice to make it appropriate to utter a moral claim? I am not asking whether there are conditions that would make a moral utterance true, but, rather, whether there are conditions, perhaps internal to a person, that would suffice to broadly license S in making some moral utterance. For instance, if I sincerely prefer both X & that you prefer X (the two preferences, (1) & (2), from my previous post), then on your analysis of moral judgments it seems I would be licensed to utter “X is good”. At least, in such a case, the conditions would be met for my utterance to be sincere. On the non-cognitivist view, what makes a moral claim the type of thing it is, is that it is a manifestation or expression of these twin preferences, or whatever preferences, pro-attitudes, imperatives, etc. show up in the final non-cognitivist analysis. On the non-cognitivist analysis, moral claims are simply identical with manifestations or expressions of some set of pro-attitudes. Let’s just call that set SA (the Set of Attitudes), and leave open the type and number of attitudes that will be members of SA. So, moral claims are identical with expressions of SA.

In the Smith example, what I was trying to present was a case where there was a sincere expression of SA that had a surface structure identical to a moral claim and would, in fact, clearly be a moral claim in a different context, but that would not count as a moral claim at all in the context as described, as judged by either common usage or by Smith himself. This example would be troublesome if (1) you had some criterion that distinguished moral from non-moral normative/evaluative claims, (2) Smith met your criterion and his utterance thereby counted as a moral claim, but (3) Smith's utterance was clearly not a moral claim judged either by our common usage of moral claims or, explicitly, by Smith himself. In short, the non-cognitivist claims that moral claims are identical with expressions of SA. But identity goes both ways. So expressions of SA are moral claims. Smith expressed SA (by hypothesis), so Smith expressed a moral claim. But Smith did not, in fact, express a moral claim. So, you need to either (a) refine SA to rule out such examples, or (b) present some criterion for distinguishing moral from non-moral claims, or give up on such a distinction and provide a non-cognitivist analysis that (c) covers all normative/evaluative claims, or (d) argue that we're just profoundly mistaken about which claims are moral claims and which aren't. In the Chess Example post I gave some reasons why (b) and (c) likely won't work.

Now, it is entirely possible that once we get clear on just which pro-attitudes constitute SA, the Smith example would be explained away. It is also possible that common usage and Smith himself could just be wrong about which claims count as moral claims, but that is less likely. Analyses of moral claims have to start somewhere, and that will typically be with claims that are paradigmatic of moral claims. Our basic intuitions about the domain of the moral (e.g., it involves the welfare of persons, harms, benefits, constraints on treatment, etc.) and our first-order moral judgments about the moral status of particular acts, character traits, and forms of living function as data-points for moral theory construction; they're the points that constrain the curve-fitting exercise that is normative ethics. If it turns out, on some analysis of moral claims, that 'S shouldn't needlessly harm others' doesn't count as a moral claim at all, then that analysis is thereby shown defective. We simply can't be that wrong about morality.

Incidentally, this methodological point is one reason I find theistic ethics so wrong-headed. It’s crazy to think that we could end up discovering that morality isn’t really about our treatment of others or their welfare, but actually about the content of the psychology of some divine agent. This is part of what I mean when I claim that morality is for and about us (by ‘us’ I mean sentient agents, not just humans). This also applies to reductive accounts of evolutionary ethics. We couldn’t have been so wrong about the essential concerns of morality that it is possible to discover that morality actually concerns the replication of certain genotypes. Of course, the theist and the evolutionary ethicist may claim that God or natural selection causes us to have moral sentiments, or causes us to see the world as shot through with value, or causes us to form normative beliefs on the basis of how our shared nature and moral education inclines us to see the world. But these causal stories are orthogonal to accounts of the referents of our moral sentiments and beliefs, and what morality, as a system, is fundamentally about.

Next up, my biggest problem with non-cognitivism…

Soothfast
0,1,1,2,3,5,8,13,21,

☯️

Joined 04 Mar 04 · Moves 2709 · Posted 30 Aug 11

Originally posted by JS357
It is in the book "Gödel, Escher, Bach" that an entity called Aunt Hillary is first depicted.
I found this book in a used bookstore almost two weeks ago. I'm on page 75 right now...

black beetle
Black Beastie

Scheveningen

Joined 12 Jun 08 · Moves 14606 · Posted 30 Aug 11

Originally posted by bbarr
I hope you enjoyed the weekend! Please continue with the earlier objections, we're not in any hurry. But I'm going to try to get all my worries out somewhat early, both because I don't want to get derailed, and because they all dovetail in interesting ways.

The Smith Example


Your response to this example didn’t address my intended point, so my p ...[text shortened]... as a system, is fundamentally about.

Next up, my biggest problem with non-cognitivism…
Edit: “But these causal stories are orthogonal to accounts of the referents of our moral sentiments and beliefs, and what morality, as a system, is fundamentally about.”

The causation is anyway possible; however, our understanding of the causal relation is based on faulty premises because, for one, we perceive cause and effect as qualitatively distinct and independent objects and, for two, we consider them independent of the cognizing mind. As a result, theistic ethics is not irrational merely because it perceives morality as a story about the content of the psychology of some divine agent (you just debunked this theological approach), but mainly because it considers the causal field and conditions to be something that exists "objectively", "out there" in the world, independent of the human mind and its concerns. (And, as regards the latter faulty premise, the moral philosophers who overcame the former obstacle by considering morality a story about our treatment of others or their welfare and so on are stranded too, and deeply mistaken, each time they attempt to prove that their claims are "objective". I have the feeling that this kind of "objectivity" is attacked by our Palynka, who chooses to identify his line of thought as "non-cognitivist" and now has to find a way to answer the questions posed by LemonJello and you.)

But: cause and effect can be related solely either from themselves, or from other things, or from both themselves and other things, or from neither. Since none of these four theses holds, the objects we interact with are causally produced; therefore, whenever an object involves a conceptually constructed property, the object is conceptually constructed too, as I explained earlier in the Chess Example. So, since the causal relation does not exist independently from its own side, it too is conceptually constructed and thus empty, and this is the case with morality anyway, regardless of our criteria and our final thesis. On both the cognitivist and the non-cognitivist view, each causally related moral set, each set of attitudes and each epistemic object appear to be all construed and thus empty (and this is the case with all of our products, I reckon).
😵

P
Upward Spiral

Halfway

Joined 02 Aug 04 · Moves 8702 · Posted 02 Sep 11

I haven't forgotten this, I just haven't had the right combination of time and dedication to answer it properly yet...

bbarr
Chief Justice

Center of Contention

Joined 14 Jun 02 · Moves 17381 · Posted 02 Sep 11 (1 edit)

Originally posted by Palynka
I haven't forgotten this, I just haven't had the right combination of time and dedication to answer it properly yet...
Here's a bit more...


The Guise of the Good; a deeper worry…

In our earlier exchange, I mentioned that, as agents, we must think and act for reasons. Or, at least, we must think and act on the basis of considerations we take to be reasons. Of course, we often form beliefs and act pretty much automatically, and nothing like explicit deliberation occurs. But, still, in such cases there are tacit considerations we take to be our reasons for believing and acting. After all, a large part of the inculcation of character traits is the formation of a constellation of dispositions to take certain considerations as normative; as reasons to believe, feel, be motivated and act. That we operate on the basis of reasons can be made clear by asking an agent why he came to believe this or that, or why he acted in this or that way. In the absence of accessible reasons, the formation of our beliefs and our actions will seem unintelligible from our own point of view. They will not be agential, nor seem to be ours in any real sense, but rather more like foreign psychological intrusions or physical tics.

Ethicists commonly distinguish between motivating reasons and justificatory reasons. As the terminology suggests, the motivating reasons of an action are those considerations that motivate an agent to act; the considerations for which an agent acts, and that indicate to an agent that the action was warranted, appropriate in the circumstances, etc. Justificatory reasons are those considerations that justify an action, count in favor of an action, or determine that an action was good, excellent, right or whatever. It is part of moral theorizing to provide an account of justificatory reasons in the moral domain. When an ethicist gives an account of what he takes to be of fundamental moral importance, he (typically) thereby gives an account of the sort of reasons that justify actions in the moral domain. Similarly, if the chess expert gives an account of success in the game and what constitutes excellent play, he (typically) thereby gives an account of the sort of reasons that justify particular strategies, tactics and individual moves. If people were perfectly practically rational, the set of motivating and justificatory reasons would perfectly overlap. People would be motivated to act for the very same reasons that justify their actions. Indeed, this is very close to a definition of wisdom, or expertise in a practical domain; there being harmony between the reasons for which one acts and the reasons for which one’s actions count as good, right or excellent. Alas, often people are motivated by reasons that do not justify.

But, and here’s the thing, people consistently construe or interpret their motivating reasons as justificatory reasons. This is not just some weird fact about human beings; it’s something like a law of agency. Philosophers call this seeing things under the “guise of the good”. Now, this is not to say that people always believe that they act morally. Nor even that people always believe their actions are, all things considered, justified. The point is simply that people consistently take their actions to be motivated by reasons that in some sense support, or cast in a favorable light, or bear positively on their actions. So the thesis here is not very stringent, but it does have some bite, I think, with regard to non-cognitivism. Here’s an example I hope will bring out my worry clearly:

Jones has a strange preference. Whenever he passes a radio that is turned off, he very strongly prefers that it be turned on. This preference is strong enough that when he passes an off radio he is sufficiently motivated to turn it on, or at least to make a real effort. Now, Jones does not believe that there is anything good in radios being turned on, or that it is more valuable that they be turned on. In fact, Jones has no normative, cognitive or propositional attitudes concerning the turning on of radios. When asked what reasons motivate him, Jones sincerely replies that he has no reasons per se, but just this extremely strong preference.

What must Jones’ instances of radio-turning-on seem like, from his point of view? Would it even be correct to construe his behavior as action? From Jones’ own point of view, he must just find himself compelled to turn on radios. It is likely Jones would wonder why he was turning on radios, and his behavior must seem the result of something like a foreign psychological intrusion into the set of his motivations. In short, Jones’ behavior must seem to him unintelligible.

But what could we add to this story that would render Jones’ behavior (though still, admittedly, bizarre) intelligible from his point of view? Suppose Jones sincerely believed that a tyrannical government is unable to track him when he’s near turned-on radios. Or that every time he turns a radio on, he saves a baby’s life. Or simply that he has a moral obligation to turn on radios. Whatever the content, it seems what is needed for Jones (or us, for that matter) to make sense of his behavior is some story about his reasons. We need to know the reasons that motivate him. We need to know why he takes those reasons to justify, support, or show as somehow valuable or good the turning on of radios. To put the point somewhat differently, the mere preference for radios being turned on does not, on its own, suffice to make it seem reasonable to turn on radios, as judged from either our perspective or Jones’.

And this point generalizes to all purely conative, non-cognitive motivational states, if we construe these states as something like desires; as inner pushes and pulls towards action. Acting on our conative states is not practically rational in the absence of some belief or judgment regarding the normative/evaluative credentials of that state. The thing is, though, that when we act on the basis of what we take to be a moral judgment, or employ a moral claim in our deliberations, it does make sense both from the first-person and third-person point of view. When I judge that I should try to comfort my grieving friend, and act on the basis of that judgment, it is nothing at all like simply acting on a desire unhinged from reasons. I can tell you why I believe the desire to comfort my friend is a good one, and why I believe that comforting one's grieving friends is something one should do. It is because I have these cognitivist states that my desire to comfort seems reasonable from my point of view, and that it is motivationally efficacious. But why, then should my desire to comfort be taken as the content of the moral claim "I should comfort", as the non-cognitivist would have it? Why not analyze that moral claim as, for instance, short-hand for cognitivist beliefs I have regarding the goodness of comforting? After all, it is those beliefs that allow me to see the act of comforting under the guise of the good and, hence, render intelligible my providing comfort.

JS357

Joined 29 Dec 08 · Moves 6788 · Posted 02 Sep 11

Originally posted by bbarr
Here's a bit more...


The Guise of the Good; a deeper worry…


In our earlier exchange, I mentioned that, as agents, we must think and act for reasons. Or, at least, we must think and act on the basis of considerations we take to be reasons. Of course, we often form beliefs and act pretty much automatically, and nothing like explicit deliberation ...[text shortened]... comforting under the guise of the good and, hence, render intelligible my providing comfort.[/b]
But why, then should my desire to comfort be taken as the content of the moral claim "I should comfort", as the non-cognitivist would have it? Why not analyze that moral claim as, for instance, short-hand for cognitivist beliefs I have regarding the goodness of comforting?


Thank you for this writing. I suppose a reason for not going down that cognitive analytical road would be that doing so fails in some way. The obvious thing to do is go down that road and see what we find. Has it been done somewhere that you can recommend?

Regarding the example, it seems that you are saying one of the relations between "I should comfort" and your cognitivist beliefs regarding the goodness of comforting is that "I should comfort" can be analyzed, for instance, as shorthand for the relevant cognitivist beliefs. In some way, they express the same thoughts.

I am wondering though if the relation between "I should comfort" and the beliefs can alternatively be analyzed as the logical implication of your cognitivist beliefs regarding the goodness of comforting. "I should comfort" would also be one of your cognitivist beliefs, but would be so because it is implied by the other relevant cognitivist beliefs.

In either case, it seems like the next step would be to write those beliefs out in 'longhand.'

bbarr
Chief Justice

Center of Contention

Joined 14 Jun 02 · Moves 17381 · Posted 02 Sep 11

Originally posted by JS357
But why, then should my desire to comfort be taken as the content of the moral claim "I should comfort", as the non-cognitivist would have it? Why not analyze that moral claim as, for instance, short-hand for cognitivist beliefs I have regarding the goodness of comforting?


Thank you for this writing. I suppose a reason for not going down that ...[text shortened]... case, it seems like the next step would be to write those beliefs out in 'longhand.'
Right, the claim "I should comfort" (or whatever normative/evaluative claim is at issue) will sometimes appear as the outcome of deliberation that takes as premises other beliefs regarding the goodness of comforting, the importance of persons, the bonds of love and friendship, etc. Sometimes, though, "thin" normative predicates like 'good', 'right', 'should', etc. (they are called 'thin' because they have only normative content, as opposed to 'thick' predicates like 'compassionate' or 'cruel', which have both normative and descriptive content) function as placeholders for something more robust.

But, look, when an ethicist provides a moral theory or ethical framework, part of what's involved is making clear what is being taken as normative bedrock, laying bare the inferential relations that are supposed to obtain between the bedrock claims and our typical moral judgments about actions, traits, lives or whatever. My own view is that calling something 'good' or 'valuable' or 'right' or 'obligatory' doesn't do very much. I unpack those terms, depending on the claim at issue, by using thick ethical notions, or talking about the reasons persons have, or talking about the flourishing of sentient creatures. And, since I don't think that there is anything like an algorithm for determining appropriate action, and since I think that the ethical life is uncodifiable, and that what counts as appropriate will often be really dependent on context, I don't think anybody can "write out those beliefs in 'longhand'" in the abstract. I can try, though, given some pretty well described ethical scenario.

JS357

Joined 29 Dec 08 · Moves 6788 · Posted 02 Sep 11

Originally posted by bbarr
Right, the claim "I should comfort" (or whatever normative/evaluative claim is at issue), will sometimes appear as the outcome of deliberation that takes as premises other beliefs regarding the goodness of comforting, the importance of persons, the bonds of love and friendship, etc. Sometimes, though, "thin" normative predicates like 'good', 'right', 'should' ...[text shortened]... ry, though, given some pretty well described ethical scenario.
And, since I don't think that there is anything like an algorithm for determining appropriate action, and since I think that the ethical life is uncodifiable, and that what counts as appropriate will often be really dependent on context, I don't think anybody can "write out those beliefs in 'longhand'" in the abstract. I can try, though, given some pretty well described ethical scenario.


This makes me think that a pretty well described ethical scenario would include a description of all circumstances that decisively bear on whether "I should comfort" is the appropriate action. To me, this implies that there IS an algorithm waiting to be fed information to be processed in the context of a code. You have ruled that out, and I accept that. I do wonder what rational cognitive processes we are to use. It may be a more gestaltic bit of brain work, often called "sleeping on it."

bbarr
Chief Justice

Center of Contention

Joined 14 Jun 02 · Moves 17381 · Posted 03 Sep 11

Originally posted by JS357
And, since I don't think that there is anything like an algorithm for determining appropriate action, and since I think that the ethical life is uncodifiable, and that what counts as appropriate will often be really dependent on context, I don't think anybody can "write out those beliefs in 'longhand'" in the abstract. I can try, though, given some pret ...[text shortened]... to use. It may be a more gestaltic bit of brain work, often called "sleeping on it."
I am using 'algorithm' as synonymous with 'application of a finite set of rules guaranteed to deliver a correct answer', or something along those lines. I don't think moral scenarios admit of that type of resolution. Any proposed set of rules I've come across that was putatively sufficient to resolve moral quandaries has generated counter-examples and certainly run contrary to our considered moral judgments. The moral domain, or the broader ethical domain, or the broadest domain of practical reason (if you want to keep all these things separate), is just too complex and context-dependent to be codified by a set of rules. There are, of course, generally salient considerations we should bring to bear in our ethical deliberations, and there are typically better or worse conclusions one can draw in any given scenario. But there may also be tragic dilemmas, or cases where one simply can't do the right thing (because, for instance, one has a vicious character, or is already in a morally compromised situation, or...).

shavixmir
Lord

Sewers of Holland

Joined 31 Jan 04 · Moves 89784 · Posted 05 Sep 11

Originally posted by FreakyKBH
Read these recently on another website, thought they were pretty interesting food for thought... for those inclined.

• If everything ultimately must be explained by the laws of physics and chemistry, what is a moral value (does it have mass, occupy space, hold a charge, have wavelength)?

• How did matter, energy, time and chance result in a set of ob ...[text shortened]... ry? Why can't it simply be ignored? Won’t our end be the same (death and the grave) either way?
Objective morals?
Surely all morals are subjective.

For example:
A Christian thinks it's alright to hang a criminal, but it's not alright to have an abortion.
I think the complete opposite.

Where's the objectivity in that?
