Re: Cat-loving consequentialist
Posted: Tue Nov 03, 2020 1:03 pm
by Lay Vegan
brimstoneSalad wrote: ↑Sun Nov 01, 2020 3:03 pm
I don't think it has anything to do with the utility monster argument.
It's just consideration only for experienced pleasure and pain, with no consideration for interests themselves. Abducting people off the street and plugging the whole of humanity into pleasure machines would be good even if people didn't want it, because it would maximize pleasure.
I find Singer's transition from negative to positive utilitarianism strange since it usually happens in reverse. What do you think is the source of Singer's error here? I think it might have something to do with the confusion between happiness (satisfaction of interests) and pleasure. Most people don't want to be turned into mindless pleasure zombies stuck in some chemical state of euphoria, and a consistent moral philosophy should respect that.
Re: Cat-loving consequentialist
Posted: Wed Nov 04, 2020 12:24 am
by brimstoneSalad
Lay Vegan wrote: ↑Tue Nov 03, 2020 1:03 pm
I find Singer's transition from negative to positive utilitarianism strange since it usually happens in reverse.
You mean preference based to hedonic?
Lay Vegan wrote: ↑Tue Nov 03, 2020 1:03 pmWhat do you think is the source of Singer's error here?
It's hard to imagine. Maybe he thought interests were too abstract and wanted something more physical you can point to in an MRI.
Re: Cat-loving consequentialist
Posted: Wed Nov 04, 2020 12:38 am
by thebestofenergy
brimstoneSalad wrote: ↑Wed Nov 04, 2020 12:24 am
Lay Vegan wrote: ↑Tue Nov 03, 2020 1:03 pmWhat do you think is the source of Singer's error here?
It's hard to imagine. Maybe he thought interests were too abstract and wanted something more physical you can point to in an MRI.
Which would be a silly thing to do, and a lack of imagination on his part.
Even if a person doesn't know what's good for them, their interests can be inferred by correlating certain events with increases in suffering.
E.g. the more heroin you inject into someone to make them feel good (and even if they want it), the more distress they experience later as a consequence. You could see that it's in their best interest not to have the drug injected, even though it brings immediate pleasure, simply by observing the domino effect the injection sets off in the brain once its other consequences manifest: brain damage, health problems, anxiety over possible legal trouble, cravings, low self-esteem over being addicted, and so on.
That's observable and measurable.
Re: Cat-loving consequentialist
Posted: Wed Nov 04, 2020 12:46 am
by brimstoneSalad
Well, the pleasure pill machine hypothetical imagines no such health consequences; or if there are any, the net pleasure in the shorter life that follows still exceeds the net pleasure of a longer, more natural life.
Re: Cat-loving consequentialist
Posted: Wed Nov 04, 2020 8:18 am
by thebestofenergy
brimstoneSalad wrote: ↑Wed Nov 04, 2020 12:46 am
Well, the pleasure pill machine hypothetical imagines no such health consequences; or if there are any, the net pleasure in the shorter life that follows still exceeds the net pleasure of a longer, more natural life.
But then, even in some Matrix-like situation, people still want not to be in that position. Those wants can be quantified if people are made aware of the situation.
It's weird that he would arbitrarily give value to wants that are expressed, but not to wants that are inherently there yet unmanifested (for lack of knowledge).
I wonder if he believes it would be OK to destroy someone's life's work while they were dying in a hospital and would never know it was happening.
Re: Cat-loving consequentialist
Posted: Thu Nov 05, 2020 12:12 am
by brimstoneSalad
thebestofenergy wrote: ↑Wed Nov 04, 2020 8:18 am
I wonder if he believes it would be OK to destroy someone's life's work while they were dying in a hospital and would never know it was happening.
Yes, he would be OK with that, assuming there were no further consequences that would harm others (destroying a cure for a disease, for example, would not be OK).