Linguistics, as I explained, is a soft science.
Computer science is more of an engineering field; it doesn't typically apply the scientific method.
I don't know about that. All I see is that studying linguistics and studying computer science are far more similar to each other than either is to studying Flat-Earthism.
What's distinct about something like "we need a higher-energy particle accelerator" is that it's something we can build.
How do you know there isn't some unknown law of physics preventing higher-energy particle accelerators from existing?
That's cool, but it's meaningless without a p-value, and a p-value is meaningless post hoc.
That's right, nothing can be learned from history... other than that anarchy is more to be feared than the dangers of government, which are fully explicable by reason.
You don't get to assume you're right until somebody else shows another theory is just as likely to be right.
But that's exactly what "falsifiable" means, right? That's what makes the scientific method better than philosophically flawed inductive reasoning.
Linguistics is a soft science. And it's among the softest of the soft.
How did you figure that out? Linguistics, unlike, for example, macroeconomics or political science, isn't connected to politics in a way that would bias it. As the IMF paper I linked to says, the predictions economists make (those of them who claim macroeconomic prediction is possible at all) are not only incorrect, they are demonstrably biased toward the idea that current economic policies are good.
Besides, which statement describes human behavior more accurately: "People generally behave rationally, and biases are never systematic" (the core premise of economics), or "When people speak, they obey the laws of grammar, and if they don't, other people quickly notice" (the core premise of linguistics)?
Honest linguists will admit linguistics is a soft science.
So too will honest economists admit they can't predict economic recessions, explain what caused them, or propose what the government should do about them.
What material value do these languages hold?
Possible evidence to falsify current linguistic theories, among other things. If the Pirahã language truly lacks linguistic recursion (and it hasn't been documented to a degree where we can actually claim that), that would prove generative grammar wrong.
Besides, "preserving the languages" usually means inviting bureaucracy into matters of free speech, with its authoritarian notion that some languages are better than others.
You basically have to appeal to a very exaggerated notion of how language affects human cognition.
As far as I know, no serious linguist has held that notion since the advent of comparative linguistics in the Renaissance (up until then, grammar was considered a part of logic). And don't bring up that "Hopi time" story; that's just the media vastly misrepresenting linguistics.
Sure, there are linguists who claim to have done studies showing that, for example, people who speak languages with grammatical gender or noun classes have different associations than English speakers, and there are also studies that fail to replicate that effect. Very few linguists take such notions seriously.
You just can't. Thus soft science.
OK, let's move that into some other context.
A: How far will a ball with mass m and radius r go before it touches the ground if you throw it horizontally at speed v?
B: OK, here is a calculation assuming the ball is in air at atmospheric pressure. It will go s meters.
A: With what probability?
B: How would I know? That's simply what the laws of physics tell us. Obviously, the calculation doesn't account for a strong wind: one could theoretically be blowing, but we have no reason to believe it actually is.
A: So, you can't tell me the probability? Well, then that's not a real science.
Does that make sense? Of course not.
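To make the thought experiment concrete, here is a minimal sketch of the calculation B has in mind, under stated assumptions: the ball is launched horizontally from a height h (a parameter I'm introducing for illustration; it isn't in the original exchange), air drag and wind are neglected, and g is standard gravity.

```python
import math

def horizontal_range(v, h, g=9.81):
    """Horizontal distance a projectile launched horizontally at
    speed v from height h travels before hitting the ground,
    neglecting air drag. Fall time t = sqrt(2h/g); range s = v*t."""
    t = math.sqrt(2 * h / g)
    return v * t

# Example: thrown at 10 m/s from 1.5 m up.
s = horizontal_range(v=10.0, h=1.5)
print(s)  # ~5.53 meters
```

The point of the dialogue survives: the formula yields a single deterministic number, not a probability distribution, and demanding "with what probability?" of it is a category error.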
How certain? What probability?
Why would the numbers be important here? I just don't see it. You mean like trying to apply Bayes' formula to determine how certain the sound changes are? Sorry, languages just don't work that way.

A language's vocabulary is divided into layers. The bottom-most layer is called the basic vocabulary; it's approximately the words on the Swadesh list. That's where sound laws are derived from, because those words are very rarely replaced by loan-words or neologisms; in other words, they are the oldest words in a language. If different sound laws appear to operate in the basic vocabulary and in some other part of the vocabulary, the right inference is that a layer of the vocabulary has been borrowed from a distantly related language.
Consider Armenian, for example. From the basic vocabulary, we see that Proto-Indo-European *d turns into 't' in Armenian. If we applied Bayesian inference to the entire vocabulary, we would conclude, with a low probability of error, that Proto-Indo-European *d remains 'd' in Armenian, since all but around 400 words in Armenian are loan-words, the vast majority of them from Iranian languages, which didn't undergo the d-to-t sound change.
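The Armenian point can be sketched numerically. This is a toy illustration with made-up word counts, not real lexical data: a majority vote over the whole lexicon is dominated by loans, while restricting to the basic-vocabulary layer recovers the genuine sound law.

```python
from collections import Counter

# Toy data (invented counts): each entry is the Armenian reflex of
# Proto-Indo-European *d in one word, plus that word's vocabulary layer.
lexicon = (
    [("t", "basic")] * 40     # inherited basic vocabulary: *d -> t
    + [("d", "loan")] * 960   # Iranian loan-words, which kept d
)

# Naive inference over the entire lexicon: majority reflex wins.
naive = Counter(r for r, layer in lexicon).most_common(1)[0][0]

# Comparative-method practice: look only at the basic vocabulary.
basic = Counter(r for r, layer in lexicon
                if layer == "basic").most_common(1)[0][0]

print(naive)  # 'd' -- the statistically "confident" but wrong answer
print(basic)  # 't' -- the actual sound law
```

The statistics aren't wrong as statistics; they're answering the wrong question, because the lexicon is not a uniform sample of inherited material.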
Do you think that dumping a dictionary into a statistical program would be more scientific than what I am doing now?
Do you also think those phonosemantic studies with low p-values should be taken seriously, even though their conclusions, if true, would refute the core premise of etymology?
Why would you assume some scroll is a trustworthy example of a language?
Because I have no good reason to think it isn't. People generally write down grammatical and somewhat sensible (though often theology-related) samples of a language. If some alien claims to be a human being who lived 2000 years ago, that's a good reason to assume he is not telling the truth.
So we need to TEST the idea of changing the minimum wage.
Yeah, everything in science needs to be empirical. Except that doesn't appear to be the way science works. Should, for example, the magnetic permeability of vacuum be tested empirically, even though the theory clearly says it's a constant equal to mu_0 = 1/(epsilon_0 * c^2), where c is the speed of light? What would be the point? If the theory didn't make the right predictions assuming that value of the magnetic permeability of vacuum, there is no reason to assume it would suddenly start making the right predictions with an empirically determined value of mu_0.
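For what it's worth, the relation is easy to check numerically. Plugging the CODATA values of the vacuum permittivity and the speed of light into mu_0 = 1/(epsilon_0 * c^2) recovers the familiar 4*pi*1e-7 H/m:

```python
# Checking mu_0 = 1 / (epsilon_0 * c^2) with CODATA SI values.
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 299_792_458                # speed of light, m/s (exact by definition)

mu_0 = 1 / (epsilon_0 * c**2)
print(mu_0)  # ~1.2566e-6 H/m, i.e. about 4*pi*1e-7
```

Which is the point: the value follows from the theory's other quantities, so a separate measurement of it adds nothing unless the theory's predictions were already failing.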
And that's kind of the point of the legitimate part of the climate change controversy. The theory predicts a negative feedback loop: water vapor in the atmosphere decreases the warming effect of CO2 by a factor of around 0.5. Some empirical data suggests there is actually a positive feedback loop, with water vapor increasing the warming effect of CO2 by a factor of around 3, assuming all the recent warming was due to the increase of CO2 in the atmosphere. Mainstream climate science therefore rejects the negative-feedback prediction the theory makes and replaces it with a positive feedback factor that's been "determined empirically". Is that the right thing to do? Time will tell.
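To see how much hangs on that one multiplier, here is a toy calculation. The baseline figure of roughly 1.1 degrees C of direct, no-feedback warming per CO2 doubling is my own illustrative assumption, not a number from the discussion above; only the 0.5 and 3 factors come from it.

```python
# Toy arithmetic for the feedback-factor dispute.
direct_warming = 1.1  # deg C per CO2 doubling, no feedbacks (assumed)

with_negative_feedback = direct_warming * 0.5  # the theory's prediction
with_positive_feedback = direct_warming * 3.0  # the empirical estimate

print(with_negative_feedback)  # ~0.55 deg C per doubling
print(with_positive_feedback)  # ~3.3 deg C per doubling
```

A sixfold gap between the two outcomes from the same baseline is what makes the choice of feedback factor the crux of the disagreement.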
I hurt your feelings by denying that you're qualified to evaluate the hard sciences just because you've published a few peer-reviewed papers in linguistics.
And by insisting that people from the "hard sciences" are somehow more competent at evaluating politics (and philosophy) than I am.