
Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Thu Oct 08, 2015 5:39 pm
by Lightningman_42
http://veganstrategist.org/tag/abolitionism/

I thought that this blog post was a great criticism of Francione and his "Abolitionist Approach." I remember feeling much the same way the author did when I first discovered Francione and his style of advocacy, which I liked at first but gradually noticed flaws in: absolutist statements about morality, basing morality upon "rights" (which is irrational) rather than deriving rights from morality, unsupported claims that anything "welfarist" is counterproductive, etc.

What irritates me most about Francione (downright pisses me off really) is his blatant arrogance and perceived immunity to criticism. He believes he knows exactly what an "abolitionist vegan" should or shouldn't do. The term "abolitionist" (in the context of veganism) really only refers to someone who would like the use of animals as resources/property by humans to end, but does not necessarily agree with Francione's approach towards reaching that goal. He ostracizes anyone from his FB page discussions who presents ideas that are not consistent with his own approach, and very frequently harshly condemns individuals and organizations (who care about animals and may do a lot of good for animals) because their advocacy is "not vegan enough" or "too welfarist".

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Thu Oct 08, 2015 10:28 pm
by brimstoneSalad
ArmouredAbolitionist wrote: What irritates me most about Francione (downright pisses me off really) is his blatant arrogance and perceived immunity to criticism. He believes he knows exactly what an "abolitionist vegan" should or shouldn't do. The term "abolitionist" (in the context of veganism) really only refers to someone who would like the use of animals as resources/property by humans to end, but does not necessarily agree with Francione's approach towards reaching that goal.
If I'm understanding what you're saying, in terms of deontology for abolitionism, Francione is kind of right (which is a rare occurrence, like fundamentalist religious being right sometimes vs. progressives). Logically, you can't really substantiate true abolitionism without deontology.
That said, it's GOOD to not advocate dogmatic abolitionism. It doesn't necessarily hurt animals to "own" them or "use" them as long as we don't abuse them; there's a fairly long discussion of dogs here, for example.
There are practical concerns to use or ownership without abuse, but it's not impossible; reason only supports serious regulation, not so much complete abolition.

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Thu Oct 08, 2015 10:46 pm
by Lightningman_42
Brimstone, if I'm understanding you correctly, you're saying that abolition of all animal use requires deontology. A utilitarian-consequentialist can only advocate abolishing use of animals as resources in specific cases (like most forms of farming animals for food) where that use is prone to result in harm or violation of the animals' will. You're saying that without deontology I cannot advocate abolishing certain types of animal use/ownership that are not inherently prone to abuse (like ownership of dogs as pets).

I completely agree with that, so no, I would not advocate that all forms of animal ownership are necessarily immoral and ought to end (unlike Francione). Do you think, then, that as a consequentialist I should not consider myself an abolitionist at all? I only consider myself an abolitionist in the sense that I want those forms of animal-use that are inherently abusive to end.

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Fri Oct 09, 2015 6:35 pm
by brimstoneSalad
ArmouredAbolitionist wrote: Brimstone, if I'm understanding you correctly, you're saying that abolition of all animal use requires deontology. A utilitarian-consequentialist can only advocate abolishing use of animals as resources in specific cases (like most forms of farming animals for food) where that use is prone to result in harm or violation of the animals' will. You're saying that without deontology I cannot advocate abolishing certain types of animal use/ownership that are not inherently prone to abuse (like ownership of dogs as pets).
Correct.
ArmouredAbolitionist wrote:Do you think, then, that as a consequentialist I should not consider myself an abolitionist at all? I only consider myself an abolitionist in the sense that I want those forms of animal-use that are inherently abusive to end.
Yes, I believe it would be more appropriate not to call ourselves abolitionists, because it gives people the wrong idea and implies total abolition. Reform is more what a consequentialist would be after: reforms so extensive they would make any kind of animal agriculture at least impractically expensive (like having to provide veterinary care AND wait for a natural death once nothing more can be done to save them).

We'd be talking something like thousand-dollar steaks, cows treated as well as beloved pets and living beyond their natural lifespans in comfort, peace, and an engaging and happy environment. And oversight so extensive (like a 24-hour live video feed on the internet of every cow) that abuses would become virtually impossible, including, for example, requirements that the administrators of the farms have no profit or other motive to cut corners.

These are implausible reforms, but the important point is that they're technically possible, and it makes us seem unreasonable if we demand "abolition or bust".

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Tue Dec 01, 2015 12:19 am
by garrethdsouza

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Sun Dec 06, 2015 9:26 pm
by inator
I'm undecided on the consequentialism/deontology thing. I don't have any issues with consequentialism itself, just with its apparent incompatibility with fairness/justice. As brimstone put it in another topic, "fair doesn't mean good" in a consequentialist's eyes; that's just deontological nonsense. And what counts are only the ultimate global consequences. Supposedly. Can you help me understand this position?
I'd say that the global results are not all that matter; the standard deviation in those results also counts. Otherwise you can easily end up with a utility monster, a big no-no. It's true that the utility monster only becomes a problem for consequentialism if the distribution of well-being is regulated in a utilitarian way, i.e. by maximizing the sum total of the good. But I'm not convinced that this isn't just an arbitrary choice anyway, based on some deontological principle that requires the amount of good to be maximized. Let me explain:

As I see it, consequentialism defines 1) a set of basic, ultimate, agent-neutral goods; 2) an operation for combining these separate goods into a single global value; and 3) a function which specifies how this value is to be promoted.
Assuming that 1 and 2 are possible, number 3 is problematic if consequentialism is supposed to be a purely teleological theory of the good which does not make any deontological claims about what is right.
For example:
Outcome utilitarians hold that nothing but satisfaction has intrinsic value and then claim that the larger the sum of such value, the better.
Outcome welfare egalitarians hold that nothing but satisfaction has intrinsic value and then claim the more equal the distribution of such value, the better.
There's no difference between a fair distribution and the maximization of the sum total in this respect: they're both just requirements that tell us what we ought to do with well-being.
Maximizing the global good is a way of behaving with regard to the good, much like distributing the good fairly, or simply maximizing the good of the worst off person. These are all operations on the good and not themselves goods - they're questions about the right thing to do. Depending on the operation chosen, we must assume at least one deontological principle of what is right.
The problem that I see is that all these operations belong to a different component than the theory of the good to which consequentialism belongs. Either they are all compatible with consequentialism or none of them is. This includes the fair distribution principle.
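To make that decomposition concrete, here is a minimal sketch (the function names and the numbers are purely illustrative, my own choices, not anything from the literature): the theory of the good supplies per-individual values, while the "operation on the good" is a separate, swappable component.

from statistics import pstdev

def total_utility(satisfactions):
    # Outcome utilitarianism: the larger the sum, the better.
    return sum(satisfactions)

def egalitarian_value(satisfactions):
    # Outcome welfare egalitarianism: the more equal the
    # distribution, the better (here: penalize the spread).
    return -pstdev(satisfactions)

def maximin_value(satisfactions):
    # A third option: maximize the good of the worst-off person.
    return min(satisfactions)

world_a = [10, 10, 10]  # equal, smaller total
world_b = [1, 1, 40]    # unequal, larger total

for rank in (total_utility, egalitarian_value, maximin_value):
    better = "A" if rank(world_a) > rank(world_b) else "B"
    print(rank.__name__, "prefers world", better)

Each ranking function is a different rule for what to do with the good; none of them is itself one of the goods, which is exactly the problem.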

This is less important, but I also have a slight problem with no. 1, the assumption that ultimate agent-neutral goods exist, and with no. 2, the assumption that you can combine all the individual separate goods into a single global value.
"A is a good x" doesn't entail "A is good", so the term good is an attributive adjective and cannot legitimately be used without qualification. On this view, it is senseless to call something good unless this means that it is good for someone or in some respect or for some use or as an instance of some kind. I violate this if I say that the total or average consequences are good without any such qualification. If I claim they are good for the well-being of....mankind (?), then I suppose that mankind is some sort of a super-person, whose greatest satisfaction is the objective of moral action. But individuals have wants and needs (sometimes different ones), not mankind. Individuals seek satisfaction, not mankind. A person's satisfaction is not part of any greater satisfaction. (I'm quoting David Gauthier on that last argument; I'm not completely convinced it works myself.)

Sorry for the long post!

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Sun Dec 06, 2015 11:41 pm
by brimstoneSalad
inator wrote: I don't have any issues with consequentialism itself, just with its apparent incompatibility with fairness/justice. As brimstone put it in another topic, "fair doesn't mean good" in a consequentialist's eyes; that's just deontological nonsense. And what counts are only the ultimate global consequences. Supposedly. Can you help me understand this position?
Certainly.

Take the case of two people who only have enough food for one person to survive. The fair thing would be for them to split the food... which would result in them both dying of malnutrition. The unfair thing is for one person to get all of the food, wherein he or she lives and the other dies of starvation.

The consequences of fairness in this case are, ultimately, worse than the consequences of unfairness.
Sometimes the perfect is the enemy of the good.

When nobody has the food and you're the gatekeeper, sure, you could say "flip a coin", and that might be seen as "fair". But the trouble is that's not how things are in reality, and redistribution of resources is never that simple. The coin has already been flipped -- sometimes thousands of years ago -- and we're still dealing with the results of that.
Sometimes you cause more trouble trying to fix something that's sub-optimal than the sub-optimal system was causing in the first place.
inator wrote: I'd say that the global results are not all that matter; the standard deviation in those results also counts.
Pleasure has quickly diminishing returns beyond basic satisfaction. Millionaires aren't necessarily happier than people who just have living wages. Those who are objectively lacking, living in a state of abject poverty and oppression, see the greatest returns on goods in terms of well being or happiness.
This doesn't require any special premises or formulations, it's just how things work.
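A toy illustration of those diminishing returns (the logarithmic utility curve here is my own assumed stand-in, not a claim about the true shape of the curve):

import math

def well_being(income):
    # Assumed log-shaped utility: each extra dollar matters less
    # the more you already have.
    return math.log(income)

for income in (5_000, 50_000, 1_000_000):
    gain = well_being(income + 1_000) - well_being(income)
    print("income", income, "-> gain from +$1,000:", round(gain, 4))

The marginal gain shrinks as income grows, which is why directing goods to the worst-off tends to raise even a simple sum of well-being.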

The utility monster is what it is not because it is well off, but because it is highly sentient; relative to us, it is as a god to ants.

Anyway, I'm not a utilitarian. I use a more altruistic framework that takes into account not personal benefit, but benefit and harm to others.

I see the utility monster for what it is: a monster that is taking from others, harming them, for its own pleasure (even if that pleasure is ostensibly greater than the harm others experience, that doesn't matter). When the utility monster does this without need, it is committing an evil; the degree of awareness and sentience of the utility monster only makes that evil action all the more morally relevant (whereas a mindless force, like a hurricane, is amoral).

A contest of needs is much more sympathetic. If the utility monster needs to consume us to survive, it may yet be a monster, but an understandable one; like a lion hunting for survival. When choice is removed from the equation (partially to totally) the action loses moral relevance and becomes amoral again (or moves in that direction).

In a bubble:
The altruistic castaway may give the food to his fellow man, dying so his comrade may live.
The altruistic gazelle may lay down her life willingly for the lion to survive.
The altruistic lion may spare the gazelle, at the cost of his own life.
The altruistic human may throw himself into the maw of the utility monster.
And the altruistic utility monster may choose death, sparing the lowly humans.

In a bubble, there are no repercussions beyond those actions. So which actions have the most moral value?
How do they compare with selfish actions?
inator wrote: As I see it, consequentialism defines 1) a set of basic, ultimate, agent-neutral goods;
In order to speak of morality, we must speak of the actions of a moral agent or agents.
For my moral actions, the good I do is the good I do for others, not the selfish benefits I accrue.

This is where utilitarianism has problems. Given two people, according to utilitarianism I am doing good in harming you to benefit myself if I perceive the benefit to myself to be greater than the harm I have done to you.
In an altruistic context, this is evil. Or at best, amoral if there was diminished choice involved.

Goods are not agent-neutral, because they are not good when I accrue them for myself, or for you when you accrue them for yourself, or if we were enabled to do it (somebody accruing them on our behalf).
The same applies for any selfish actions of the utility monster itself.
inator wrote: 2) an operation for combining these separate goods into a single global value;
Utilitarianism imagines a very straightforward actor-less system of evaluating these things. That doesn't necessarily work, at least directly.

On a personal level, we can look at the harm our proposed actions do to one will, and the help they yield to another. To see things more globally, we also have to look at the good and evil actions of those we are enabling with our actions.

In order to perform an actor-less evaluation, we'd have to look at the sum of all good and evils done by all actors, rather than the distribution of anything like "utility" or "goods" in any material sense. This is the only practical way to discount self interest and express altruism globally.

To give an example with the utility monster:

The utility monster's actions:
The utility monster chooses to eat you for enjoyment, not out of need. You receive 100 units of harm from this (the worst thing for a being of your sentience level); the utility monster receives 1000 units of enjoyment from this (the best thing for a being of its sentience level, ten times higher than a human's). The thousand units are irrelevant in altruism; net 100 units of evil.
My actions:
Because you have not chosen to sacrifice yourself, I choose to sacrifice you to the utility monster. You receive 100 units of harm from this, and the utility monster receives 1000 units of enjoyment. I net 900 units of good.
Normalization & Total:
Because the utility monster is ten times more sentient and intelligent than I, its actions have ten times the moral weight of mine. Total actions all around: -100 * 10 + 900 * 1 = -100.
Conclusion:
We should not sacrifice each other to the utility monster for its enjoyment.
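Spelled out as a sketch (the function name is mine; the numbers and the normalization rule are from the example above):

def altruistic_value(effects_on_others):
    # Altruism counts only benefit and harm to *others*; whatever
    # the actor accrues for itself is discarded.
    return sum(effects_on_others)

# The monster's act: 100 harm to you; its own 1000 units of
# enjoyment are self-interested, so they drop out of its score.
monster_score = altruistic_value([-100])    # -100
# My act: 100 harm to you, 1000 enjoyment delivered to the monster.
my_score = altruistic_value([-100, 1000])   # +900
# Normalize by sentience: the monster's actions carry ten times
# the moral weight of mine.
total = monster_score * 10 + my_score * 1   # -100
print(total)  # negative: feeding each other to the monster is evil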

This is, of course, in a bubble; which no actions really are.
We need to consider the probable chain of events that follow our actions whenever we make choices in cases like these when interests conflict (and those are much more meaningful, usually, than the immediate interests being realized or violated).
inator wrote: and 3) a function which specifies how this value is to be promoted.
Assuming that 1 and 2 are possible, number 3 is problematic if consequentialism is supposed to be a purely teleological theory of the good which does not make any deontological claims about what is right.
This is problematic only for utilitarianism specifically, not consequentialism as a whole. Altruism has a non-arbitrary logical claim to morality.

As before, you're conflating consequentialism in general with utilitarianism, which is just one specific formulation of consequentialism (albeit a popular one).
inator wrote: For example:
Outcome utilitarians hold that nothing but satisfaction has intrinsic value and then claim that the larger the sum of such value, the better.
Outcome welfare egalitarians hold that nothing but satisfaction has intrinsic value and then claim the more equal the distribution of such value, the better.
There's no difference between a fair distribution and the maximization of the sum total in this respect: they're both just requirements that tell us what we ought to do with well-being.
Because they're both wrong; utilitarianism is close, but clumsily formulated.
inator wrote: Maximizing the global good is a way of behaving with regard to the good, much like distributing the good fairly, or simply maximizing the good of the worst off person. These are all operations on the good and not themselves goods - they're questions about the right thing to do.
Which is just one of many things wrong with utilitarianism. This is not inherent to consequentialism, though.
inator wrote: Depending on the operation chosen, we must assume at least one deontological principle of what is right.
I think you're mixing up deontology with premises or axioms.
Deontology is also practice; it's found in the mathematics of things (or rather, in the lack of them).
It is a declaration of rules, but also a rejection of any consequentialism that would follow from those rules.
inator wrote: Either they are all compatible with consequentialism or none of them is. This includes the fair distribution principle.
Anything can be consequential, as long as you look at consequences. You can use consequentialism to maximize paperclips. Only one axiom has a valid claim to morality, though. Maximizing paperclips in the universe would be consequential paperclipism, not consequential morality.
inator wrote: On this view, it is senseless to call something good unless this means that it is good for someone or in some respect or for some use or as an instance of some kind.
Which is why good can only relate to the realization or violation of sentient wills.
inator wrote: If I claim they are good for the well-being of....mankind (?), then I suppose that mankind is some sort of a super-person, whose greatest satisfaction is the objective of moral action. But individuals have wants and needs (sometimes different ones), not mankind. Individuals seek satisfaction, not mankind. A person's satisfaction is not part of any greater satisfaction. (I'm quoting David Gauthier on that last argument; I'm not completely convinced it works myself.)
Mankind is made up of individuals. Its moral value as a collective is based upon that, and upon the potential of that collective to do morally positive things in the future.

inator wrote: Sorry for the long post!
No problem! Let me know if you have any questions.

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Mon Dec 07, 2015 1:54 pm
by inator
brimstoneSalad wrote: Take the case of two people who only have enough food for one person to survive. The fair thing would be for them to split the food... which would result in them both dying of malnutrition. The unfair thing is for one person to get all of the food, wherein he or she lives and the other dies of starvation.
The consequences of fairness in this case are, ultimately, worse than the consequences of unfairness.
Sometimes the perfect is the enemy of the good.
Good point.
A decision in a survival situation might not be exactly the same as day-to-day decisions, but even here you can vary the details of the problem to create different results.
Something trivial: how long will the food last them? Two years? Most people would prefer to spend one year in good company and then die, rather than spend two years alone and then also die. We all die in the end, and prolonging someone's life doesn't mean improving its quality.

Or if they only have enough food for one to survive until rescue, then indeed, fairness would be wrong in this situation.
But the act of making the decision is also interesting: will they flip the coin themselves? Or will the strongest win the food? Or will the less "moral" of the two just steal it in the middle of the night and run? The point is, there is no invisible hand to decide on the moral distribution for all. The decision-makers are also agents in the game.
Extrapolate that to bigger groups and everyone mistrusts everyone, because anyone might steal the resources for herself, which would be the rational thing to do as an individual. Classic prisoner's dilemma. Then you need a social contract to end this, and so on.

It would be moral to end the perpetual fight for resources because, of course 1) it diminishes overall benefits, but also because 2) a constant state of mistrust diminishes the quality of life in other ways too.
The bottom line is that fairness feels good. For one, it increases the chance of cooperation and therefore increases total returns. But apart from that indirect effect, it also... well... feels good; it creates psychological satisfaction in itself, because most people are able to use selfishness as well as empathy.
The problem is that it gets difficult to use empathy on more than a one-to-one basis, since we can only imagine ourselves in the mind of one other at a time. From there on you have to start using reason and decide how much and to whom you can/should extend your empathy (family members, tribe members, same nationality, same religion, same skin colour, same species...). It gets more difficult with the increasing size of the group.
brimstoneSalad wrote: Sometimes you cause more trouble trying to fix something that's sub-optimal than the sub-optimal system was causing in the first place.
Depends. You can cause way more trouble, yes, but that might be temporary trouble. Then in time you build a new and better system. It depends on what time frame you choose to take into account when measuring cost-effectiveness and overall well-being.
brimstoneSalad wrote:Pleasure has quickly diminishing returns beyond basic satisfaction. Millionaires aren't necessarily happier than people who just have living wages. Those who are objectively lacking, living in a state of abject poverty and oppression, see the greatest returns on goods in terms of well being or happiness.
This doesn't require any special premises or formulations, it's just how things work.
True, I didn't think to apply that fact to utilitarianism.
brimstoneSalad wrote: A contest of needs is much more sympathetic. If the utility monster needs to consume us to survive, it may yet be a monster, but an understandable one; like a lion hunting for survival. When choice is removed from the equation (partially to totally) the action loses moral relevance and becomes amoral again (or moves in that direction).

In a bubble:
The altruistic castaway may give the food to his fellow man, dying so his comrade may live.
The altruistic gazelle may lay down her life willingly for the lion to survive.
The altruistic lion may spare the gazelle, at the cost of his own life.
The altruistic human may throw himself into the maw of the utility monster.
And the altruistic utility monster may choose death, sparing the lowly humans.

In a bubble, there are no repercussions beyond those actions. So which actions have the most moral value?
None, from the consequentialist's point of view. If you're the invisible hand, you could have simply flipped a coin. But if you're an agent, not only does your action have more moral value (deontology), but you yourself become a more moral person as a result of it (virtue ethics).
It can be completely amoral only if the decision-maker is not implicated in the situation.

That isn't to say that moral actions should ever take precedence over moral ends. But in a situation where the possible ends are equal (like your example), then the moral content of the action starts to matter. Only slightly.
brimstoneSalad wrote: Utilitarianism imagines a very straightforward actor-less system of evaluating these things. That doesn't necessarily work, at least directly.
Consequentialism does that, not just utilitarianism. What I said above.
brimstoneSalad wrote:This is problematic only for utilitarianism specifically, not consequentialism as a whole. Altruism has a non-arbitrary logical claim to morality.
Altruism excludes the actor making the decision from the calculations... An individual should take actions that have the best consequences for everyone except herself.
Utilitarianism includes everyone, so it's "more non-arbitrary", I think. The most arbitrary would be egoism. But they're all about maximizing well-being, so the rule about what to do with the good is the same: maximize!
brimstoneSalad wrote:I think you're mixing up deontology with premises or axioms.
Deontology is also practice; it's found in the mathematics of things (or rather, in the lack of them).
It is a declaration of rules, but also a rejection of any consequentialism that would follow from those rules.
Ah, I see.
brimstoneSalad wrote:Anything can be consequential, as long as you look at consequences. You can use consequentialism to maximize paperclips. Only one axiom has a valid claim to morality, though. Maximizing paperclips in the universe would be consequential paperclipism, not consequential morality.
Hahah
brimstoneSalad wrote:Mankind is made up of individuals. Its moral value as a collective is based upon that, and upon the potential of that collective to do morally positive things in the future.
What I meant is that human nature is dynamic, so the concept of a single utility for all humans is one-dimensional. The question is: Consequences for whom?
It may be possible to uphold the distinction between persons whilst still aggregating utility, if it's accepted that people can be influenced by empathy. That's where fairness comes in.

The whole concept of rights is a social construct to protect the individual and minority groups against the "tyranny of the majority". Does this sort of individualistic society have different outcomes than a Greek city-state type of society (or a Rousseauian one, or whatever you prefer), where individual needs are conflated with the needs of the state? Probably.
A climate of fairness and inclusion reduces everyone's anxiety that they could be the next one to be sacrificed for the greater good, so they are more likely to stay in line and happy. But it also increases the satisfaction of the empathetic ones who would have been well-off anyway, because no one has to be sacrificed for their well-being. This may outweigh the negative effects of not actually being able to occasionally sacrifice the needs of some individuals for the greater good.

So ok, there might be marginal cases where fairness is incompatible with consequentialism. Usually though I think it's exactly what indirectly creates better outcomes. Since it's impossible to consider all consequences of all decisions all the time and we don't live in a bubble, it's not irrational to use fairness as a rule of thumb. I guess that would be some sort of rule consequentialism.

At the same time, no fixed rules OR ends/consequences could be adequate in a world of constant change and plural and conflicting values. This is a formula for perpetual immaturity, because it cuts off all possibility of learning better ways to live by experimenting with them. I think value judgments should be subject to empirical testing and verification.
brimstoneSalad wrote:No problem! Let me know if you have any questions.
Thanks, I don't know how you find the time for all this...

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Mon Dec 07, 2015 10:39 pm
by brimstoneSalad
In the island scenario, I was imagining a limited sustainable source of food (as in reality). Like the plants on the island can only support one person, but do so indefinitely. If two people eat their fill, they will both die because the plants will be wiped out.
inator wrote: Extrapolate that to bigger groups and everyone mistrusts everyone, because anyone might steal the resources for herself, which would be the rational thing to do as an individual. Classic prisoner's dilemma. Then you need a social contract to end this, and so on.
Exactly, which is why fairness under the LAW is important, because of the consequences. But fairness itself isn't always inherently good. And practical unfairness that is fair in the context of the social contract (like us being able to hire who we want, and basically make the choice to be assholes in the greater "fair" context of free speech and freedom of choice) may be worth it in terms of the other freedoms that it affords. It's always a trade-off, and we have to look at what we gain or lose. Fairness, whatever that arbitrarily means, does not define the best-case outcome.
inator wrote: The bottom line is that fairness feels good. For one, it increases the chance of cooperation and therefore increases total returns.
Sometimes it does, sometimes it doesn't. But if you try to force it at the cost of other freedoms, it's usually worse. Remember that fairness is not a single goal, and can be subjective. To a millionaire, it does not seem fair to take his money, which belongs to him, when he didn't do anything 'wrong' in terms of violating the social contract.
inator wrote: Depends. You can cause way more trouble, yes, but that might be temporary trouble. Then in time you build a new and better system. It depends on what time frame you choose to take into account when measuring cost-effectiveness and overall well-being.
Redistribution of wealth in the early days of communism provides a good example. Sometimes the new system you're trying to build is just inherently unstable, and in trying to fix things based on some ideological notion of improvement you just make them worse until, over time, they return to the original status quo.

Does that mean it's impossible to do right, just because nobody has done it? No. But it means we should be cautious in trying to fix something when there's no evidence of the new system being able to work in practice.
inator wrote:
brimstoneSalad wrote: In a bubble, there are no repercussions beyond those actions. So which actions have the most moral value?
None, from the consequentialist's point of view.
You misunderstand again. Consequentialism does not equal utilitarianism. A single event can easily be moral or immoral in consequentialism. Consider altruistic consequentialism, not utilitarianism. Which examples demonstrate more altruistic value?
inator wrote:But if you're an agent, not only does your action have more moral value (deontology)
I think you're mistakenly considering events and consequences to be fundamentally different. Consequentialism is a sum of events (such as an instance of well being, or a moment of happiness, or an action of altruism). If there is only one event (one action, one moment, etc.), there is still a sum to be evaluated. The action itself is still part of the sum of the consequences.

Deontology fails to consider the sum of the following events caused by the original, and only considers the first event itself.
inator wrote:, but you yourself become a more moral person as a result of it (virtue ethics).
Virtue ethics are typically a form of consequentialism.
inator wrote:That isn't to say that moral actions should ever take precedence over moral ends. But in a situation where the possible ends are equal (like your example), then the moral content of the action starts to matter. Only slightly.
It matters more than slightly, and consequentialism has no problem with this. It's utilitarianism that has a problem here, not consequentialism.
inator wrote:Consequentialism does that, not just utilitarianism. What I said above.
You're greatly misunderstanding consequentialism here, and its breadth. Maybe failing to see the forest through the very large and prominent utilitarian tree.

Virtue ethics is usually consequential. You do something because the consequence is that it increases your virtue. And virtues are virtuous because the consequences of having those virtues are on the whole good. It's sort of a second order formulation -- a simplification for practice -- but quite valid.

Virtue ethics is a good study of the extent of consequentialism.
inator wrote: Altruism excludes the actor making the decision from the calculations... An individual should take actions that have the best consequences for everyone except herself.
Utilitarianism includes everyone, so it's "more non-arbitrary", I think. The most arbitrary would be egoism. But they're all about maximizing well-being, so the rule about what to do with the good is the same: maximize!
I see why you think that, but it's not a good fit for the definition of morality. This is a semantic issue, and one that concerns the innate philosophical principle of valuing the interests of others. Egoism is amorality (interest in the self is the default), altruism is morality, and utilitarianism is an odd mix of the two. Sadism is immorality, as the opposite of altruism (should utilitarianism include negative interests for others too?). When you look at the full spectrum, it becomes clearer why altruism is morality, and how it stands in relation to other primitive or elemental considerations of interest.
inator wrote:It may be possible to uphold the distinction between persons whilst still aggregating utility, if it's accepted that people can be influenced by empathy. That's where fairness comes in.
Aggregating is more of a practical matter. Ideally you consider individuals, but we just can't do that. We need rules of thumb to put things into practice. This is similar to the logic behind virtue ethics. People are bad at calculating moral outcome for each situation on the fly: too many variables, and it can end in analysis paralysis. Sometimes a less precise method actually works better, and turns out more accurate on average because it's better put into practice.
inator wrote:A climate of fairness and inclusion reduces everyone's anxiety that they could be the next one to be sacrificed for the greater good, so they are more likely to stay in line and happy. But it also increases the satisfaction of the empathetic ones who would have been well-off anyway, because no one has to be sacrificed for their well-being. This may outweigh the negative effects of not actually being able to occasionally sacrifice the needs of some individuals for the greater good.
Right. This is the case where law and rights are useful in a social context.
This isn't always the case, however. There is a balancing act, and one right always conflicts with another.
inator wrote:So ok, there might be marginal cases where fairness is incompatible with consequentialism. Usually though I think it's exactly what indirectly creates better outcomes. Since it's impossible to consider all consequences of all decisions all the time and we don't live in a bubble, it's not irrational to use fairness as a rule of thumb. I guess that would be some sort of rule consequentialism.
Yes, but fairness is a poor rule in many cases, because it's so subjective. Fair to whom?
We need to abide by the social contract -- or at least abide by its punishments -- but that in itself can be exploited to create differences in well being. The rich get richer, and the poor get poorer, because the rich are protected from the overwhelming poor too; they got their money "fair" and square, playing the game.
inator wrote:I think value judgments should be subject to empirical testing and verification.
I agree. If I see evidence of something working, I'm for it. Socialism is working pretty well in Europe; it's probably worth moving in that direction more in North America. But that's all about the social contract -- the rule of law -- within which fairness is of paramount importance, but also in part defined.

Once we are legally equal, is it really more fair, or does it promote good consequences, to force more "equality" upon people in their personal lives to compensate for personal choices and systemic social prejudices?
I am not convinced it does promote good consequences -- if there are ten poor people jobs and ten rich people jobs, and they all need to be done, I don't think it matters who is in which jobs, or if the genders or skin colors are disproportionately represented in the two, as long as this isn't a product of the LAW forcing this, since they're all humans.
And I'm definitely not convinced that the effort in changing social attitudes on these topics pays good dividends, or is an important part in vegan outreach (i.e. intersectionality).

Re: The Issue with Gary Francione and Deontological Veganism?

Posted: Tue Dec 08, 2015 12:29 pm
by inator
brimstoneSalad wrote:In the island scenario, I was imagining a limited sustainable source of food (as in reality). Like the plants on the island can only support one person, but do so indefinitely. If two people eat their fill, they will both die because the plants will be wiped out.
Ah, got it. I guess one would have to be vegan and the other a breatharian. :)
brimstoneSalad wrote: Exactly, which is why fairness under the LAW is important, because of the consequences. But fairness itself isn't always inherently good. And practical unfairness that is fair in the context of the social contract (like us being able to hire who we want, and basically make the choice to be assholes in the greater "fair" context of free speech and freedom of choice) may be worth it in terms of the other freedoms that it affords. It's always a trade-off, and we have to look at what we gain or lose. Fairness, whatever that arbitrarily means, does not define the best-case outcome.
True.
Usually you define fairness as getting merit-based returns. Not equality, but equal opportunity and all that stuff.
brimstoneSalad wrote:Sometimes it does, sometimes it doesn't. But if you try to force it at the cost of other freedoms, it's usually worse. Remember that fairness is not a single goal, and can be subjective. To a millionaire, it does not seem fair to take his money, which belongs to him, when he didn't do anything 'wrong' in terms of violating the social contract.
He didn't, but it's fair to try to correct the effects of the coin flip that happened a long time ago. Millionaires most often have not gotten there just by virtue of their own merit; they've usually been rich all along (there are exceptions here, of course). A more substantial inheritance tax should take care of that.
I think that, statistically, the rule is generally reversed for the super rich, like multi-billionaires: they're more likely to have worked their way to the top. The abnormally large returns on their work are presumably due to some flaw in the contract, or to a lacking ability to prevent deviations from the contract, and that can be adjusted for if you can find the common cause. I guess it's pretty easy to find it; it's just more difficult to gather enough support to fix it.
I'm referring strictly to what's fair here, not to how that affects the outcome. Based on the contract, it's not fair to take away the millionaire's property. But it is fair to adjust for the unfairness within the contract if the practical results clearly indicate that it exists.
brimstoneSalad wrote:Yes, but fairness is a poor rule in many cases, because it's so subjective. Fair to whom? We need to abide by the social contract -- or at least abide by its punishments -- but that in itself can be exploited to create differences in well being. The rich get richer, and the poor get poorer, because the rich are protected from the overwhelming poor too; they got their money "fair" and square, playing the game.
Then the game is rigged. Fix the contract. Or don’t, if the current contract results in better overall outcomes. But usually it won’t. It’s a matter of testing if that’s the case.
brimstoneSalad wrote:Redistribution of wealth in the early days of communism provides a good example. Sometimes the new system you're trying to build is just inherently unstable, and in trying to fix things based on some ideological notion of improvement you just make them worse until, over time, they return to the original status quo.
Does that mean it's impossible to do right, just because nobody has done it? No. But it means we should be cautious in trying to fix something when there's no evidence of the new system being able to work in practice.
I agree 100%.
That's why I support incremental change based on needs brought to light by empirical analysis, rather than bulldozing the current system to make way for a shiny new one. The new type of system can't be stable, because it's top-down. You can't force people to internalize its ideas, especially when they're logically inconsistent due to ideology.
Communism is a good example here, but then there are many different versions of communism.
brimstoneSalad wrote:I think you're mistakenly considering events and consequences to be fundamentally different. Consequentialism is a sum of events (such as an instance of well being, or a moment of happiness, or an action of altruism). If there is only one event (one action, one moment, etc.), there is still a sum to be evaluated. The action itself is still part of the sum of the consequences.
Deontology fails to consider the sum of the following events caused by the original, and only considers the first event itself.

Virtue ethics are typically a form of consequentialism.
Hmm... It might actually be true that I'm not getting something; I still have difficulty making a clear distinction between utilitarianism and consequentialism as a whole (as they're portrayed in the sources I've come in contact with so far). I now understand your definition a bit better and it sounds alright to me, but I'll have to do some more reading on that, and I'll get back to you if anything's still unclear.

brimstoneSalad wrote: You misunderstand again. Consequentialism does not equal utilitarianism. A single event can easily be moral or immoral in consequentialism. Consider altruistic consequentialism, not utilitarianism. Which examples demonstrate more altruistic value?
My initial reaction was: who cares, when you only look at the consequences in that event? But as I said, I might have an off picture of consequentialism and just confuse it with utilitarianism.
It's clear which examples demonstrate more altruistic value. So in your view, the right kind of consequence is the one maximizing well-being achieved through altruistic means?

About altruism and egoism: in a society where each person values himself only as a means for each other person, who in turn value themselves as means for each other person, everyone receives the benefits of an exclusively altruistic society, so everyone is well-off. On the other hand, if each person lives exclusively for their own interests, everyone receives only the benefit that they are able to create for themselves. Sure, there are conflicts of interest, but those affect interactions between egoists just as much as they affect the decisions altruists make when considering whom to care about more.
At first I thought that the results of the two systems should be the same, but people have different levels of ability, so I guess the former should generate more egalitarian results than the latter. But then the sum total of the benefits should be the same.
brimstoneSalad wrote: Virtue ethics is usually consequential. You do something because the consequence is that it increases your virtue. And virtues are virtuous because the consequences of having those virtues are on the whole good. It's sort of a second order formulation -- a simplification for practice -- but quite valid.

Virtue ethics is a good study of the extent of consequentialism.
Traditional normative moral theories fall into three types - Teleological theories seek to identify some supreme end or best way of life, and reduce the right and the virtuous to the promotion of this good. Deontological theories seek to identify a supreme principle or laws of morality independent of the good, and subordinate the pursuit of the good to conformity with the moral law, independent of consequences. Virtue theories take phenomena of approval and disapproval to be fundamental, and derive the right and the good from them. They are based on the spontaneous tendencies of observers to approve and disapprove of people's conduct, not just on taking actions that lead to the most moral ends.

For example (virtue): You can still be a pretty shitty person even if the totality of the consequences of your actions isn't less moral than a nice person's.
I'll donate $1 million to a charity for animals and continue being a carnist. I'm doing way more good for the animals than my vegan neighbour who struggles to put food on the table for their family. But that's only when you look at things in absolute terms. Relative to my means, I'm doing less, so I'm the less virtuous person.
or
A person is a carnist because they've never come in contact with veganism and they don't possess the mental faculties to come up with the logic of veganism on their own. There are immoral consequences of their actions, but they’re not really an immoral person because of that.
Another person is a carnist although they understand and even agree with the vegan logic but "bacon tho". Same consequences, very different levels of virtue.

What you do with what you have says a lot about how moral you are. And the intention behind what you do also says something about your virtue, even though the result might go to shit due to miscalculation. That's more information than just considering the morality of the ends of one's actions. If you take actions specifically with the intention of creating moral consequences, then I guess you'd be virtuous within the framework of consequentialism too. But what if your actions only accidentally result in moral consequences?
brimstoneSalad wrote: I see why you think that, but it's not a good fit for the definition of morality. This is a semantic issue, and one that concerns the innate philosophical principle of valuing the interests of others. Egoism is amorality (interest in the self is the default), altruism is morality, and utilitarianism is an odd mix of the two. Sadism is immorality, as the opposite of altruism (should utilitarianism include negative interests for others too?). When you look at the full spectrum, it becomes clearer why altruism is morality, and how it stands in relation to other primitive or elemental considerations of interest.
Yeah, I understand.
brimstoneSalad wrote:Aggregating is more of a practical matter. Ideally you consider individuals, but we just can't do that. We need rules of thumb to put things into practice. This is similar to the logic behind virtue ethics. People are bad at calculating moral outcome for each situation on the fly: too many variables, and it can end in analysis paralysis. Sometimes a less precise method actually works better, and turns out more accurate on average because it's better put into practice.
Right.
brimstoneSalad wrote:I agree. If I see evidence of something working, I'm for it. Socialism is working pretty well in Europe; it's probably worth moving in that direction more in North America. But that's all about the social contract -- the rule of law -- within which fairness is of paramount importance, but also in part defined.

Once we are legally equal, is it really more fair, or does it promote good consequences, to force more "equality" upon people in their personal lives to compensate for personal choices and systemic social prejudices?
I am not convinced it does promote good consequences -- if there are ten poor people jobs and ten rich people jobs, and they all need to be done, I don't think it matters who is in which jobs, or if the genders or skin colors are disproportionately represented in the two, as long as this isn't a product of the LAW forcing this, since they're all humans.
You're not wrong, BUT I think there's more to it:
Any clear disproportion in sex, race or any other arbitrary criteria like that within those jobs means that there is something causing it, since differences between those groups of people are not that large. It might not be a systemic legal cause, but it is something. If there wasn't any discrimination, be it conscious or involuntary*, we would see a different distribution.
This sort of discrimination might not matter to the overall economy (I don't know; that should be verified empirically), but it does matter to people. Do we want the world to only be efficient, or do we want it to also be nice and warm and fuzzy? By that I mean:

If a group of individuals feels underrepresented in a "good" (desirable) category of jobs or whatever, they'll be less satisfied. Individuals do identify with certain groups, even ones based on arbitrary criteria like gender, race, or sexual orientation; it's human nature. And they also more readily extend their empathy to the members of the group which they themselves feel they belong to (maybe you don't - kudos to you and education and reason). That might be irrational, but we shouldn't be concerned with how people should think, but with how they do think. It makes sense to talk about different categories of people, because that's how emotions work in most humans.
When one category is randomly disadvantaged, all individuals within that category are hurt to some extent, simply because they identify with the category. Until people's mentalities are evolved enough to stop differentiating between humans like that, this sort of fairness will matter to them and will affect their well-being. So maybe it's pragmatic to take fairness based on random criteria seriously for now, since it clearly does create consequences for people.

It's easier to get people to understand that various categories might be different but should not be treated differently, rather than make them understand that those differences are irrelevant in the first place. Similar result, more cost-effective.
That's how you get the 'let's encourage diversity' policies. I agree, diversity shouldn't mean much in itself, because it would only be a side-effect if we stopped caring about those differences in the first place.
It's like what you said about genders in another thread: until we evolve to stop seeing gender and sexuality as relevant criteria, it does make sense to promote the LGBT cause.

*For example, it can be involuntary just through the effects of socially constructed gender roles, where females simply show a preference for certain areas because they've been conditioned that way. Does this actually hurt them that much, since they're the ones choosing their job anyway?
They do have some freedom to make their choice, but not full freedom. Social conditioning does slightly restrict their freedom to make a choice out of the full range of options by highlighting some options and making others less visible. It's like what Google does when you search for anything. :) You still get to decide which links to click, but some links get shoved in your face more than others. Is that a fair comparison?
(While you can argue that Google does this using a pattern that's more likely to help you make a good choice, since the full range of options would overwhelm you. I don't think you can say the same about gender roles.)

Does that ultimately hurt women? Yes, if they end up getting paid less.
Then again, should we care that females, and not other human beings, are getting paid less? Yes, if they identify as female and they feel disadvantaged economically as a result of their identity. If you feel like you're getting disadvantaged economically as a result of your own (lack of) merit or ability rather than your identity, then you might feel better about it. Fairness works both ways. Attitudes are important.

brimstoneSalad wrote:And I'm definitely not convinced that the effort in changing social attitudes on these topics pays good dividends, or is an important part in vegan outreach (i.e. intersectionality).
Yeah I've also been following the intersectionality discussion (before it got sidetracked). Social attitudes are important, though I'm not sure if we should mix up veganism with other stuff. I'm still forming my opinion on this issue.
For now, I don't really see how it could hurt to reach out to potential vegans and acknowledge their different needs and difficulties though. It might just not be very cost-effective.