carnap wrote: ↑Wed May 02, 2018 2:04 am
It doesn't "qualify" in your opinion but the issue here is that its not clear how you interpret classical or operant condition in the case of plants.
If you're confused about that, then bring it up as a point of discussion. DO NOT LIE and claim something is operant conditioning when there's no indication of that just because you don't agree that there's a difference between the two.
I think you owe Lay Vegan an apology on that point.
In terms of what it is and what differentiates it:
Whether in plants or animals, you're typically looking for some response that provides a benefit in the controlled environment but would not do so in nature; this tells us there's no reason for it to be "hardwired" by evolution as a mere reflex.
For example, if the plant grew *away* from the light source, the light could become stronger rather than weaker. This wouldn't make sense in nature, so it might be operant conditioning for the plant to actually learn to control the light by growing away from it.
Or the light could get stronger when the plant grew in a clockwise spiral or a counter-clockwise spiral, regardless of the direction the light came from (that might be a stronger case).
This is how operant conditioning has been demonstrated in some insects: with specific antenna movements that wouldn't make any sense as a source of reward in nature.
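Just to make the contingency concrete, here's a toy simulation (entirely my own illustration; the behavior names and numbers are made up, not from any study): the reward is tied to an arbitrary target behavior that nature never rewards, and only something that genuinely learns drifts toward it, while a fixed reflex stays at chance.
[code]
import random

TARGET = "counter_clockwise"           # arbitrary behavior the experimenter picked
OTHER = "clockwise"

def run_trials(n_trials=200, learner=False, seed=0):
    """Count how often the target behavior is emitted when ONLY it is rewarded."""
    random.seed(seed)
    bias = 0.5                          # no initial preference either way
    hits = 0
    for _ in range(n_trials):
        behavior = TARGET if random.random() < bias else OTHER
        if behavior == TARGET:          # reward is contingent on the arbitrary target
            hits += 1
            if learner:                 # only a genuine learner shifts toward what gets rewarded
                bias = min(1.0, bias + 0.02)
    return hits / n_trials

print("hardwired reflex:", run_trials(learner=False))   # hovers around 0.5
print("operant learner: ", run_trials(learner=True))    # climbs well above 0.5
[/code]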
carnap wrote: ↑Wed May 02, 2018 2:04 am
What the research showed is that the plant was able to learn based on its experience; whether the learning properly represents classical or operant conditioning is pretty immaterial.
NO, it is not. If you thought so, then you should have said that instead of lying. Lay Vegan (and I) have been specifically talking about operant conditioning, he explicitly asked you about it (not classical Pavlovian conditioning), and you said yes and then linked to something that DID NOT show that.
Again, I think you owe him an apology. If you thought it was irrelevant then you should have said that instead of lying about it.
If you did not understand the difference and mistakenly believed the terms were perfectly interchangeable, that would exonerate you, but I don't think you're *that* confused on the issue. I think you know people consider them different things, and you knew what he was asking for.
But either way, it isn't an excuse anymore now that you know it IS material to us.
If you want to argue that the distinction isn't important then that's a different topic.
As to what the research showed: nothing. It had the worst granularity possible (only two states, left and right) and a very small sample. Even without outright fraud, all they would have had to do is run a few variations on the same experiment (excusing each rerun to themselves as controlling for another variable, since the last attempt "didn't work because X") to get the results by chance. Maybe they didn't get the results the first time, and decided the breeze wasn't strong enough, or the light wasn't bright enough, or it was too bright? OK, rinse and repeat with small tweaks until you get the P value you want.
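To put rough numbers on that kind of rinse-and-repeat (my own back-of-the-envelope, assuming independent reruns and the usual 0.05 threshold; nothing here is from the paper):
[code]
# Chance of getting at least one "significant" result purely by luck
# when the same underpowered experiment is rerun k times.
alpha = 0.05
for k in (1, 3, 5, 10, 20):
    p_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:2d} reruns -> {p_false_positive:.0%} chance of a fluke 'hit'")
[/code]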
Absolute garbage.
carnap wrote: ↑Wed May 02, 2018 2:04 am
After all, why would operant conditioning be important evidence of sentience (or other higher order cognitive function) but not classical conditioning?
Because Pavlovian conditioning can be entirely unconscious; it can be a variable programmed by evolution that just needs to be flipped by some kind of hormone to change gene expression. As I said, like a computer program remembering your settings.
That's distinct from a conscious interest where the organism learns something for a reason.
carnap wrote: ↑Wed May 02, 2018 2:04 am
brimstoneSalad wrote: ↑Tue May 01, 2018 4:33 pm
Simple associative learning can be hardwired, like a computer program that remembers your settings.
Learned behavior is by definition not "hardwired".
Should I put "learning" in quotes for you? That's fine if we're talking about true learning, but that's a more complex discussion.
Here we're talking about sensitizing a variable of gene expression.
X happens; by reflex, a chemical Y is produced in response to X; when that chemical reaches a certain level Z, gene expression changes, which changes behavior from A to B.
It's not conscious. There's no neural network controlling this.
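A minimal sketch of what I mean, using the X/Y/Z/A/B placeholders from above (my own toy illustration, not a model of any real plant):
[code]
class HardwiredPlant:
    """Learning-like behavior from a fixed rule: no neural network, no consciousness."""
    Z = 3                     # threshold level of chemical Y

    def __init__(self):
        self.y = 0            # accumulated chemical Y
        self.behavior = "A"   # default behavior

    def stimulus_x(self):
        self.y += 1           # reflex: X produces Y
        if self.y >= self.Z:  # once Y reaches level Z, "gene expression" changes
            self.behavior = "B"

plant = HardwiredPlant()
for _ in range(4):
    plant.stimulus_x()
print(plant.behavior)         # "B" -- looks like it learned, but it's a fixed rule
[/code]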
carnap wrote: ↑Wed May 02, 2018 2:04 am
Your example is an example of memory, not learning. When you change a setting on your computer your computer didn't learn anything.
That depends on your definition of learning. I assumed you were using it in the most absurdly broad sense, because you seem to want to claim that plants exhibit it. In the same way, computer programs "learn" your preferences.
If we're talking about "true learning", no, plants do not exhibit it in any way, and neither does Microsoft Word. Even IF that "experiment" wasn't outright fraud or dumb luck (and it was promoted as a credible result by the people behind it because they're insane), it still doesn't demonstrate true learning.
Both plants and simple computer programs can have variables changed in their settings, which creates learning-like behavior when it isn't tested rigorously enough. We know plants do this from higher quality data on damage causing higher production of toxins.
The way we tell the difference is by testing operant conditioning; anything else may be a hardwired reflex of some kind. Neither plants nor Microsoft Word can respond to operant conditioning.
Insects can, though (although only barely, and probably not all of them).
carnap wrote: ↑Wed May 02, 2018 2:04 am
But then do you also think that it is morally problematic to destroy such systems? Let's suppose I installed an AI with ample learning abilities on your computer that didn't save its state when your system was powered off, would it then become morally problematic to turn off your computer? Is then research in AI where the culling of such systems is the norm just as problematic as the culling of animals?
It might be. Of course animals tend to have an interest in self-preservation, which the AI might not, but even without that we are costing the AI potential experiences. Margaret and I discussed this in another thread with respect to the harms of death.
That said, most AI are probably around insect level, so if it benefits humanity it's probably worth it (as with animal experimentation for medical purposes). I wouldn't promote their use as entertainment, though.
If video games start having more advanced AI that are spawned, harmed, and killed for mere entertainment that may become a substantial moral issue in the next decade.
carnap wrote: ↑Wed May 02, 2018 2:04 am
Right....and that is what is meant by "indirect rights".
OK. There's probably a better way to phrase that; I didn't see "indirect rights" anywhere in the article.
carnap wrote: ↑Wed May 02, 2018 2:04 am
deontologists are by no means obligated to accept "anything goes" treatment of non-moral agents (or non-rights holders).
Yet they are, after you address a couple of caveats (which were explained already): the non-moral agent must not be owned by another moral agent (if it is, it has indirect protection as property, IF the owner wills it), and if you consider offending people's aesthetics by doing visually objectionable things in front of them arbitrarily wrong, then you need to do whatever terrible things you want to do to the non-moral agent in private.
Caveats:
1. Doesn't belong to anybody else.
2. Done in private.
Then anything goes.
carnap wrote: ↑Wed May 02, 2018 2:04 am
It's not; making implications about the result of some moral rule doesn't make it a consequentialist argument.
I think you're confused here. An act being wrong because of its consequences makes it a consequentialist argument.
Some people like to mix and match.
Kant made it pretty clear he didn't care about consequences at all; only that it wasn't a contradiction of will or however he put it.
That doesn't mean deontology makes any sense or that it doesn't decay into consequentialism when you try to correct its problems.
Deontology is not consistent, so we shouldn't expect it to really mean anything at all when examined rigorously.
carnap wrote: ↑Wed May 02, 2018 2:04 am
Deontological ethics is a broad class of ethical theories; in fact, the term was first used well after most "deontological" systems were developed, so it's more fruitful to refer to the specific theories rather than trying to reason about a broad and loosely defined group of ethical theories.
Maybe, but that broad group is loosely defined in opposition to consequentialism. So unless you want to start breaking it down, that's all we have to work with here.
I don't make recommendations based on deontology because it doesn't make any sense. It's like advising people how to best draw a square circle.
carnap wrote: ↑Wed May 02, 2018 2:04 am
I would agree, there is some consequentialism as well, but I think this hints at something that you don't seem to want to acknowledge. Both deontological and consequentialist systems of ethics have problems, that is, they both have cases where their application seems to counter our moral intuitions. So any practical application of ethics, like a legal system, tends to be a mix of various moral thinking.
What do you think I don't want to "acknowledge"?
Being against intuition isn't a "problem"; it usually means intuition is wrong.
Or do you deny the validity of all counter-intuitive statistics, quantum mechanics, etc.? Probably not.
You're just begging the question here with regard to your subjectivism again. If you categorically reject moral realism, and even logic (as seen in the other thread), as reflecting reality, then this isn't a discussion that can be had.
Obviously I know WHY there are deontological aspects to law. People are stupid, and the law didn't originate from the most rational place (a lot of it even came out of religion).