brimstoneSalad wrote: I think I'm going to read this, because it looks like a good summary: http://cognitivephilosophy.net/ethics/s ... sumptions/
If I have a chance, I'll do a point-by-point discussion.
From the article, and assuming the article is accurate:
Greg Nirshberg wrote: Harris's basic premise is this: If ethics is about anything, it is about the conscious states of organisms able to experience consciousness. Any other definition is meaningless. Any action that has no actual or potential effect on the conscious state of an organism is by definition valueless.
I disagree with Harris (and Singer) on that point.
http://theveganatheist.com/forum/viewto ... t=10#p5081
I consider helpfulness or harmfulness to be based on the realization of interests, whether or not the beings involved are aware of that realization.
This is a subtle but profound difference, and it has significant consequences at the margins.
I agree with this in the sense that interests are empirically verifiable (with the right technology) as they arise from consciousness at their inception. After that, however, they are conceptual (though no less matters of fact), and they may remain relevant after the cessation of consciousness, or after death.
Greg Nirshberg wrote: Harris's next point is a simple small jump. If ethics is about the conscious states of organisms, then this must by definition translate into facts about brains and their interaction with the world. This also seems uncontroversial. Assuming conscious states have a neurophysiological correlate (an extremely grounded assumption), then it's obvious that science can give us a complete account of the ever evolving dynamic states of consciousness, the very thing that ethics is about.
I think I agree with the spirit of Harris' claims here, subject to the differences in the initial assumptions noted above.
Greg Nirshberg wrote: Sam Harris's next point is that ethics must specifically be about maximizing the well being of conscious organisms.
Here's another point where we diverge.
While I don't agree that the term "well being" is useful unless it represents a realization of those beings' interests, that's a simpler matter.
The broader issue is that this mirrors utilitarianism, and it suffers from the same pitfalls: namely, the utility monster (a hypothetical being that gains so much well being from others' sacrifices that maximizing total well being obliges everyone to serve it) and the problem of selfishness.
On a personal level, utilitarianism counts it as good to harm another, as long as you benefit ever so slightly more than the other is harmed.
Although other factors may be relevant in considering justification, and may nudge an action made without real choice toward amorality, none of that matters to the fundamental nature of morality, which is concern for the other.
On a universal scale, where there isn't really any "self", I think it's more important to think in terms of maximizing morality itself, which the actions of the evil utility monster are unlikely to do.