teo123 wrote: ↑Mon Jan 21, 2019 4:28 am
You really need to study more astronomy.
Ah, dude! I've put a lot of my time studying linguistics, I've put a lot of my time studying computer science, and I've put an insane amount of time studying various pseudosciences (Flat-Earth...). If that's not enough to have an educated discussion about philosophy of science, then nothing would be.
Linguistics, as I explained, is a soft science.
Computer science is more of a field of engineering; it doesn't typically apply scientific methodology. The "science" in the name is misleading.
Studying pseudoscience is a good start, but you need some background in a real hard science.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 am
So, if I understand you correctly, that guy who is credited for the Big Bang theory proposed a way to test it, that couldn't be properly done using the equipment available back in the day.
At the time, then, there was no reason to believe it over anything else, since it couldn't yet be tested. There are a few things like that in physics.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amSo, how is that different from what I was doing? You never know what kind of equipment will be available to archaeologists and people from other historical sciences ten years from now. The number of ancient manuscripts and inscriptions that can be read is increasing every day and will increase even faster in the future.
What's distinct about something like "we need a higher energy particle accelerator" is that it's something we can build. Linguistics has to rely on luck, and it may very well be that the information you need doesn't exist and never will. You might just need a time machine.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amYes, but they don't move as randomly as you seem to assume. Have you heard of, for example, Skok's Second Law of Toponymy? It states that modern names of the islands derive from the ancient names of the largest places on the islands.
That's cool, but it's meaningless without a P value, and a P value is meaningless post hoc.
Read this:
https://xkcd.com/882/
It quite cleverly explains why a P value is only meaningful in a context where you don't get to select from a pool. What you're doing in linguistics is selecting a correlation, and it doesn't matter if your after-the-fact calculation of the probability of it being a coincidence is 10% or 0.1%.
You have a large pool to choose from, and THAT fact means you're not practicing good science.
You have to develop a theory, delineate what would falsify it in a statistically meaningful way, and *then* collect the data to do so.
Now, to be fair, a LOT of scientists in the public view are also not practicing good science. Look at Brian Wansink. P hacking is how you get all the research dollars, because it gets you results fast and most people don't understand it well enough to see how those results are worthless.
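To make that concrete, here's a minimal sketch of the jellybean problem (my own toy simulation, not anything from the comic): every hypothesis tested is null, yet testing twenty of them almost always hands you at least one "significant" result.

```python
# Under a true null hypothesis, a P value is uniformly distributed on (0, 1).
# So if you test 20 independent null hypotheses (20 jellybean colors with no
# real effect), the chance at least one comes out "significant" at p < 0.05
# is 1 - 0.95**20, about 64%.
import random

random.seed(0)
TRIALS = 100_000
HYPOTHESES = 20  # jellybean colors, none of which actually does anything

false_alarms = 0
for _ in range(TRIALS):
    p_values = [random.random() for _ in range(HYPOTHESES)]  # null P values
    if min(p_values) < 0.05:
        false_alarms += 1

print(f"At least one 'significant' result: {false_alarms / TRIALS:.0%} of trials")
print(f"Analytic value: {1 - 0.95**HYPOTHESES:.0%}")
```

Run that and roughly two thirds of the trials produce a publishable-looking "green jellybean" by pure chance.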
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amI think I understand what you are saying, but I think you are committing a form of the ludic fallacy here. If you knew that a die had been thrown 100 times and every time six was at the top, would you still assume it's a fair die? You don't need to make a prediction and throw it again to be reasonably certain it isn't.
If the hundred sixes were hand picked from a big enough pool of fair dice each thrown a hundred times, yes. (For a full hundred sixes the pool would have to be absurdly large, on the order of 6^100 dice, but the principle is the same at any scale.)
It's inevitable that SOME of the dice in such a pool will roll six every time without being "unfair". It would be unusual if they did not.
Again, see that XKCD comic for an illustration.
If you test enough things, some of them will come out appearing to have good P values, but that doesn't make them true.
What you're doing is applying a P value ad hoc to something that already appeared to have a good P value, but there's no good reason to believe that wasn't by chance (that it wasn't the green jellybean).
Again, this is basically textbook P hacking. Some researchers do this, look in a bunch of data for correlations and then ad hoc apply P values. It's a scandal. Bad science.
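Here's a scaled-down sketch of that selection effect (my own toy numbers: 5 rolls per die instead of 100, so it actually runs): every die in the pool is fair, but searching the pool still turns up dice that look loaded after the fact.

```python
# Every die here is fair. But search a large enough pool and you will
# reliably find dice whose post hoc "P value" looks damning.
import random

random.seed(0)
POOL = 100_000   # fair dice in the pool
ROLLS = 5        # rolls per die (scaled down from 100 for speed)

all_sixes = sum(
    1 for _ in range(POOL)
    if all(random.randint(1, 6) == 6 for _ in range(ROLLS))
)

p_single = (1 / 6) ** ROLLS  # naive post hoc P value for one hand-picked die
print(f"P(one fair die rolls {ROLLS} sixes) = {p_single:.2e}")
print(f"Fair dice in a pool of {POOL:,} that rolled all sixes: {all_sixes}")
# Expected count is POOL * p_single, about 13, so finding one proves nothing.
```

The hand-picked die's "one in 7,776" P value tells you nothing, because about a dozen perfectly fair dice in that pool were expected to hit it anyway.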
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amIt's not plausible if you don't have any data that confirms that.
No, that's not how science works. You don't get to assume you're right until somebody else shows another theory is just as likely to be right.
The alternatives are plausible, period.
If you want to show it's less plausible than YOUR theory, then YOU need to do the leg work and disprove them. And you need to do so with every plausible alternative (and there could be dozens, hundreds, even thousands).
The vast array of variables and plausible alternatives is one of the things that keeps soft sciences soft; it's very very hard to boil it down (as done with physics) to a small handful of options.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amSee how many words from a single language are expected to repeat themselves in toponyms: water, flow, island, mountain, hill, waterfall, spring, abyss, coniferous forest... Let's be generous and say it's 20. So, how many possible toponymic elements are there? Let's be very generous and say all toponymic elements are in the form of consonant+vowel+consonant, and that the vowel is always ignored (which it isn't). Since there are about 20 consonants in a language, the probability of each element having each particular form is 1/(20*20)=1/400. So, the probability that some single element of the 20 elements that repeat themselves will be homonymous with some other element is 1-((400-1)/400)^20=4.8%. The chance that you've wrongly postulated a meaning to those elements is a lot higher than that.
Of course, the probability that at least one of those 20 elements will be homonymous with another, under our simplified model here, is 1-(400 P 20)/(400^20)=38%, but for each of those 20 elements, by far the simplest explanation is that it's not homonymous with any other. And that's even more true in a more realistic model.
Reading all of that, I just see that the flat-Earther teo is back.
You don't want to accept that you've invested a lot of time in a soft science. That doesn't mean it's not useful or a fulfilling profession, and that doesn't mean you can't be more rigorous and more scientific about it than others, but it's very hard to turn something like this into a hard science. Any of those numbers, and more, are likely very easy to manipulate to P hack.
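To illustrate how malleable that kind of post hoc number is, here's a quick sketch (my own sweep, reusing the birthday-problem setup from the quote above) showing how much the "probability of coincidence" swings with the modeling assumptions:

```python
# Same collision formula teo used, swept over equally defensible choices
# for the number of distinct forms and the number of repeating elements.
def p_some_collision(forms: int, elements: int) -> float:
    """P(at least two of `elements` uniform draws from `forms` forms collide)."""
    p_all_distinct = 1.0
    for i in range(elements):
        p_all_distinct *= (forms - i) / forms
    return 1.0 - p_all_distinct

# teo's choices: 20 consonants -> 400 forms, 20 elements -> about 38%.
for forms in (400, 900, 2000):      # e.g. count the vowels, or don't
    for elements in (10, 20, 40):   # how many elements "repeat themselves"
        print(f"{forms:5} forms, {elements:2} elements: "
              f"{p_some_collision(forms, elements):.0%}")
```

Depending on which assumptions you pick, the same setup gives you anywhere from a couple of percent to nearly ninety; that freedom is exactly what makes the number easy to P hack.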
I'm sure @Jebus knows psychology is a softer science, but that doesn't mean it's not useful. It's also a harder science than linguistics because there's something concrete and testable that actually comes out of clinical psychology: either you help people get better, or you don't. There's something concrete there, and you can use good methodology to evaluate interventions. It's also applicable in testable ways to advertising, propaganda, etc.
What is the parallel there in linguistics? So what if you get the root of a word wrong? Is there a clinical application? Can we do some actual test to confirm it's right or wrong on the basis of people understanding or misunderstanding something? No.
The best it can do is help us understand writings in dead languages, but usually all we can do is say whether the results are intelligible or not. That doesn't mean it's the correct translation. Very, VERY rarely, a piece of writing might lead us to a prediction about some archaeological discovery. You translate something and it tells you there's a pyramid filled with cat mummies at the mouth of some river, and then you find exactly that... well, good job. That's pretty convincing. NOW you can argue that you translated it right, because it made a concrete prediction. But you're relying on an astronomically rare occurrence (which in a lifetime working in linguistics you'll probably never see), and it's accomplished what exactly? Stocked a museum with cat mummies or something.
Meanwhile a working psychologist can make and confirm concrete predictions very reliably with a few years of study.
Linguistics is a soft science. And it's among the softest of the soft. Again, doesn't mean it's useless, doesn't mean you can't do better. But raging against the fact that it's a soft science isn't going to help your case. It's making you look delusional and unhinged.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amThat's right, go insult people who have spent years studying that... because they don't do P-hacking to make their conclusions look more credible to people like you!
Honest linguists will admit linguistics is a soft science. If they're really honest they'll also admit that it doesn't have that much material value to offer humanity outside of contributing to our understanding of ancient human history... which, unfortunately, is a mostly academic interest... at least outside of biblical/quranic stuff (which has real implications, because people believe in that stuff).
There's also, at best, some effort at documenting and preserving contemporary human languages which are going extinct. But why? What material value do these languages hold? You basically have to appeal to a very exaggerated notion of how language affects human cognition, and claim that with these languages whole ways of thinking will go extinct, and that we may lose the ability to solve some real human problems in other sciences because we couldn't think in a certain unique way. That's not very plausible.
People do what makes them happy, and sometimes that's exploring history and culture rather than working in a hard science and discovering cures for diseases. Maybe it's writing fiction, making movies, games, whatever.
I'm not here to hate on people who study soft sciences, or who do other work in the humanities and entertainment. Doing something other than curing cancer doesn't make you a bad person.
I'm just telling you it's not a hard science, and your experience in it doesn't really qualify you to understand what hard science is about.
You need to study a hard science to understand that well.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 am That means nothing at all if you have not done the hard work of empirically establishing that probability.
Probability of what? Of a particular term coming from an unattested language family? That's using the unknown to explain the unknown, and the probability that should be assigned to that is zero, right?
You just can't establish it. Thus, soft science.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amSafely? Again, with what probability? There are exceptions to that kind of thing everywhere.
Well, that question is very hard to answer.
Exactly, soft science.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amSound laws are considered way more certain than any particular etymology. If some etymology doesn't agree with the sound laws, it's more likely a wrong etymology than an exception to a sound law. The exceptions are numerals and some very basic vocabulary, certainly not the toponyms.
How certain? What probability?
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amYou are probably using the regularity of the English spelling as a proxy for how regular sound changes are.
I'm not using anything for anything. I'm saying if you don't know the actual probability of these things then you're practicing a soft science.
If that's what you want to study that's cool, just stop pretending it's a hard science.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 am"Oh hai guyz, I'm just a hooman, I was abducted by aliens. Sorry about all the confusion."
And why would it be reasonable to assume that alien is trustworthy?
Why would you assume some scroll is a trustworthy example of a language? What if it was an ancient linguist playing a joke on the future?
My point exactly. You're asking for outside and (once) conscious forces to sort it out for you.
teo123 wrote: ↑Mon Jan 21, 2019 4:28 amReally? I've always assumed it's some hypothesis about the composition of the sub-atomic particles that sounds smart to laymen, but sounds like a bunch of baseless and unfalsifiable assertions to the physicists.
Hypotheses are testable. Models are different: any model that is truly descriptive is basically true as far as most people are concerned (and may in fact be, metaphysically).
Say we have a black box, and we know that when you put a 1 into it, it gives you a 2. That's all it does, because all we can put into it are ones; the opening is too small for anything else, and anything bigger would break the box.
So one model might be "it has an adding machine in it, and it adds 1 to whatever you feed into it."
Another model might be "it's a multiplier, and it doubles whatever number it gets."
Since we can only ever give it a 1, those models are equivalent and unfalsifiable ways of understanding the mechanism. They're both essentially true, for all we know, and the difference between them is a very annoying philosophical issue.
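A toy version, just to pin the idea down (my own sketch, not anyone's physics): two different internal models that are observationally identical on the only input the box ever accepts.

```python
# Two candidate models of the black box. Internally they disagree about
# the mechanism; observationally they are indistinguishable.
def adder(x: int) -> int:
    return x + 1   # model 1: an adding machine that adds 1

def doubler(x: int) -> int:
    return x * 2   # model 2: a multiplier that doubles

# The opening only admits a 1, so this is the *entire* observable dataset:
assert adder(1) == doubler(1) == 2
# Both models fit every observation we can ever make, so no experiment can
# tell them apart; preferring one over the other isn't an empirical claim.
```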
String theory is basically that, at least right now. Wait until it makes a testable prediction, and maybe it can become something more. But that's why people are annoyed about the time being put into it.