|
AbbadonOfHell posted:You'd think, but the number of Trump supporters seems to indicate otherwise.

Among Republicans, whose only other choices right now are Son of Ron Paul, Ted Cruz, Rick Santorum, Chris Christie, and Other Son of George Bush. If I were a Republican, I'd be going for Trump, too.
|
# ? Aug 28, 2015 09:34 |
|
d3c0y2 posted:Maybe America will stop calling them loving social sciences and start calling them humanities like the rest of the world and finally catch up with Non-Positivism. Please take Durkheim's dick out your mouth American social "scientists".
|
# ? Aug 28, 2015 11:46 |
|
communications major here can anyone loan me some money?
|
# ? Aug 28, 2015 11:50 |
|
Bit ironic to use a study conducted by academics in the social sciences as your main evidence that social sciences are bullshit.
|
# ? Aug 28, 2015 12:58 |
|
Yes science reporters are lovely and lazy but that's kind of beside the point when huge numbers of studies cannot be replicated.
|
# ? Aug 28, 2015 13:04 |
|
They should make a game like Portal, only focused around social science
|
# ? Aug 28, 2015 13:08 |
|
evilpicard posted:Yes science reporters are lovely and lazy but that's kind of beside the point when huge numbers of studies cannot be replicated.

The real problem is that nobody's even trying to replicate them. This is true in many fields, not just the social sciences. There are far more studies being published than you could ever hope to responsibly replicate, and the peer review system is similarly broken - there just aren't enough people with the right kinds of knowledge and the time and lack of conflicts of interest to meaningfully assess all of the studies.

The knowledge problem is especially bad in fields like physics and chemistry, where knowledge gets extremely specific extremely fast. Either you draw from a small pool of scientists who understand the study but all know each other and have personal relationships, or you bring in people from outside the field who are just sorta guessing about whether things look mostly ok.
|
# ? Aug 28, 2015 13:43 |
|
True enough. On the other hand, any field that uses the term "decline effect" is pretty lol.
|
# ? Aug 28, 2015 14:11 |
|
i am shocked op, just shocked
|
# ? Aug 28, 2015 14:16 |
|
Here's the article the OP's story is referencing. My hunch is that the reporter read only the abstract to write the piece, because the full paper is behind a paywall. http://www.sciencemag.org/content/349/6251/910.summary?sid=259d4c7d-caf8-4f9d-8467-bb1666a0b95b
|
# ? Aug 28, 2015 14:33 |
|
News Article Fails To Explain Content Of Study Claiming To Confirm Contents Of Other Studies Requiring Expensive Funding For Replication Because The Article Is Too Expensive And Requires An Account
|
# ? Aug 28, 2015 14:34 |
|
Engineering, gently caress you.
|
# ? Aug 28, 2015 14:36 |
|
evilpicard posted:Yes science reporters are lovely and lazy but that's kind of beside the point when huge numbers of studies cannot be replicated.

Attempting to replicate them and either failing or succeeding is a part of the scientific process. A published study isn't considered fact by anyone working in the sciences, not even the people who conducted it. If studies weren't published, there would be far fewer channels through which people could learn about them and consequently try to replicate them without being connected to the original researchers to a degree that could negatively impact the credibility of their findings. Only after a pile of explicit replications and complementary studies - using different methodology, or testing related theories and alternative explanations - has congealed into a reasonable mound of evidence does anyone (except the lovely media) start to have any confidence in it. This is how science is meant to work, including the part where people within the field do reviews to expose the flaws in the system and pretty much constantly focus on and emphasize the weaknesses and uncertainty.

Social sciences are inherently more likely to fail to be replicated than other sciences, since the number of unknown and difficult-to-control-for confounding factors is huge when considering something like human behaviour and sample selection. This is one of the reasons definitive progress in social sciences is slower than in other sciences (also ethics and time/cost of human research), on top of the several centuries' worth of head start physics and chem have over psychology.

dogcrash truther posted:The real problem is that nobody's even trying to replicate them. This is true in many fields, not just the social sciences. There are far more studies being published than you could ever hope to responsibly replicate, and the peer review system is similarly broken - there just aren't enough people with the right kinds of knowledge and the time and lack of conflicts of interest to meaningfully assess all of the studies. The knowledge problem is especially bad in fields like physics and chemistry where knowledge gets extremely specific extremely fast. Either you draw from a small pool of scientists who understand the study but all know each other and have personal relationships, or you bring in people from outside the field who are just sorta guessing about whether things look mostly ok.

This is also very true. Most studies are read by virtually no one and cared about only by the people who contributed (often not even that). So a lot of crap will sit there unrefuted, but also unused and unabsorbed, so it's not even really worth refuting. The same is true of a lot of good studies that could eventually have contributed to or developed into tangible progress had there been anyone to read them and work on them, or even an easy way to find them if you were looking. Also, science is boring and unrewarding.
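A toy way to see why failed replications are the default when studies are underpowered, rather than proof of fraud: if each study only has some fixed chance of detecting a real effect, the odds that an original plus its replications all come up positive shrink fast. The power values and replication counts below are illustrative, not numbers from the paper.

```python
# Probability that an original study AND k replications all detect
# a real effect, given each study's statistical power.
def all_succeed(power: float, k: int) -> float:
    return power ** (k + 1)  # original study + k replications

for power in (0.8, 0.5):  # 0.8 is the textbook target; 0.5 is common in practice
    for k in (1, 2):
        print(f"power={power}, replications={k}: {all_succeed(power, k):.2f}")
```

Even at the textbook 80% power, an original plus two replications only all succeed about half the time, and real effects are involved throughout.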
|
# ? Aug 28, 2015 16:28 |
|
dogcrash truther posted:The real problem is that nobody's even trying to replicate them. This is true in many fields, not just the social sciences. There are far more studies being published than you could ever hope to responsibly replicate, and the peer review system is similarly broken - there just aren't enough people with the right kinds of knowledge and the time and lack of conflicts of interest to meaningfully assess all of the studies. The knowledge problem is especially bad in fields like physics and chemistry where knowledge gets extremely specific extremely fast. Either you draw from a small pool of scientists who understand the study but all know each other and have personal relationships, or you bring in people from outside the field who are just sorta guessing about whether things look mostly ok.

isnt this what they made watson for sortof

like no joke i remember some interview where one of the creators was suggesting it could someday be thrown at the vast number of, in his example, medical papers. i don't know how pie in the sky this really is, but i'm guessing looking for interaction correlations across a huge number of studies is probably a lot more feasible than whatever the op i didn't read is about

The Protagonist fucked around with this message at 16:43 on Aug 28, 2015 |
# ? Aug 28, 2015 16:41 |
|
The Protagonist posted:isnt this what they made watson for sortof

its what they made deez nuts for
|
# ? Aug 28, 2015 16:42 |
|
it's all a lie all the empirical evidence is just stoners coming up with bullshit jus like I fudged my 10th grade lab reports jesus is real (if you falsify cancer research you're a giant piece of poo poo imo)
|
# ? Aug 28, 2015 16:43 |
|
What's with all the posters who are like "it happens in STEM too!"? Yeah, I guess, but not nearly as often. For physics and chemistry stuff, if you set the experiment up right it should be pretty easy to replicate. I just put out a paper on making one chemical from another chemical, and aside from lying through my teeth or my equipment being completely uncalibrated, there's really no room for fudge factor in stuff like "under these conditions this happened, here are the tiny error bars on us doing it several times".

I guess some fields in STEM are going to run into statistical difficulties - I had some bioengineering friends and their experiments sounded like nightmares - but a lot of our stuff is directly measurable values with very little fudge factor, and you just don't run into reproducibility issues with experiments like those. At worst the absolute values from experimental setup to experimental setup might be a bit different, but the trends are always going to match.
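For what it's worth, the "tiny error bars" summary is just the mean and spread of repeated runs; a minimal sketch with invented percent-yield numbers (not from any real paper):

```python
import statistics

# Five invented repeat runs of the same reaction (percent yield).
runs = [71.2, 70.8, 71.5, 71.1, 70.9]
mean = statistics.mean(runs)
sd = statistics.stdev(runs)  # sample standard deviation across the repeats
print(f"{mean:.1f} ± {sd:.1f}")  # the spread is tiny relative to the mean
```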
|
# ? Aug 28, 2015 17:00 |
|
ArbitraryC posted:What's with all the posters who are like "it happens in stem too!". Yeah I guess but not nearly as often. For like physics and chemistry stuff if you set the experiment up right it should be pretty easy to replicate. I just put out a paper on making one chemical from another chemical and aside from lying through my teeth or my equipment being completely uncalibrated there's really no room for fudge factor in stuff like "under these conditions this happened, here are the tiny error bars on us doing it several times".

Idk did you read the article or whatever about how Bayer was trying to do some cancer med research and found that like 47 of the 52 studies they were relying on for data points were unable to be reproduced so they just canned the whole project? I feel like you're right that yeah non STEM probably has more fuckery going on but it seems like there's still fuckery.
|
# ? Aug 28, 2015 17:13 |
|
EugeneJ posted:So does this mean the 150 flavors of sexuality are all bullshit?

most sexuality research IS bullshit, but don't confuse the stuff made up on tumblr with stuff that has bullshit papers backing it up at least. there are different levels of bullshit (ie: being trans is totally a thing, being 'demigender otherkin' is not)
|
# ? Aug 28, 2015 17:22 |
|
The remaining 41 (87%) were eligible but not claimed. These often required specialized samples (such as macaques or people with autism)
|
# ? Aug 28, 2015 17:26 |
|
sugar free jazz posted:The remaining 41 (87%) were eligible but not claimed. These often required specialized samples (such as macaques or people with autism)

so out of 100, 41 they couldn't do and of the remaining 59 half worked.
|
# ? Aug 28, 2015 17:31 |
|
Moridin920 posted:Idk did you read the article or whatever about how Bayer was trying to do some cancer med research and found that like 47 of the 52 studies they were relying on for data points were unable to be reproduced so they just canned the whole project?

I think the soft sciences are really important, but the way their experiments are set up as a necessity almost always makes them more qualitative and prone to error. It is completely fair to draw a distinction between the "sciences" because of this.
|
# ? Aug 28, 2015 17:38 |
|
Moridin920 posted:so out of 100, 41 they couldn't do and of the remaining 59 half worked.

lol no that's just a funny quote from their methods section. They made a larger group of articles for teams to select from and are just discussing the ones that weren't selected. Some weren't tested because resources, knowledge, or autistic people were lacking.

"In total, there were 488 articles in the 2008 issues of the three journals. One hundred fifty-eight of these (32%) became eligible for selection for replication during the project period, between November 2011 and December 2014. From those, 111 articles (70%) were selected by a replication team, producing 113 replications"
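The percentages in that methods quote are internally consistent, for what it's worth:

```python
total = 488     # articles in the 2008 issues of the three journals
eligible = 158  # became eligible for replication during the project period
selected = 111  # picked up by a replication team

print(round(100 * eligible / total))     # matches the quoted 32%
print(round(100 * selected / eligible))  # matches the quoted 70%
```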
|
# ? Aug 28, 2015 17:40 |
|
ArbitraryC posted:That's biology tho which is like the softest of the hard sciences. Like outside of being grossly negligent or falsifying my results its just not going to be hard to replicate the kind of experiments i do. We don't need to do much if any statistical wizardry to massage our data, most papers wouldn't even use poo poo like p values (which have all sorts of issues regarding experimental designs that generate false positives easily).

yeah fair enough
|
# ? Aug 28, 2015 17:41 |
|
|
# ? Aug 28, 2015 17:42 |
|
lol I skimmed the article, it's pretty good, I like it. ummm people who only read the abstract or a description of the abstract rly misunderstand what the article is about and what its results say. That's ok tho, no one actually reads journal articles anyways
|
# ? Aug 28, 2015 17:50 |
|
When my data is inconsistent and all over the place, it means there's something wrong with the experimental setup, and I work on that until I'm getting clean results. When soft-science data is all over the place, they just collect a bigger sample size to boost their confidence and consider a 5% chance of being completely wrong acceptable. It's just a different world.
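That 5% convention also compounds across a literature: even if every effect being tested were null, a fixed alpha guarantees a steady trickle of "significant" results. A back-of-envelope sketch (the study count is made up for illustration):

```python
alpha = 0.05  # conventional false-positive rate per test
n = 100       # hypothetical independent studies of true-null effects

expected_false_positives = n * alpha
p_at_least_one = 1 - (1 - alpha) ** n  # chance of >= 1 false positive

print(round(expected_false_positives, 1))  # about 5 "findings" on average
print(round(p_at_least_one, 3))            # near-certainty of at least one
```

None of those "significant" results would survive replication, which is one mundane mechanism behind drawers full of unreproducible studies.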
|
# ? Aug 28, 2015 17:52 |