My Q-Face
Jul 8, 2002

A dumb racist who needs to kill themselves

AbbadonOfHell posted:

You'd think, but the number of Trump supporters seems to indicate otherwise.

Among Republicans, whose only other choices right now are Son of Ron Paul, Ted Cruz, Rick Santorum, Chris Christie, and Other Son of George Bush. If I were a Republican, I'd be going for Trump, too.

Mercrom
Jul 17, 2009

d3c0y2 posted:

Maybe America will stop calling them loving social sciences and start calling them humanities like the rest of the world and finally catch up with Non-Positivism. Please take Durkheim's dick out your mouth American social "scientists".

Basically Germany and parts of Europe have been aware social "science" is bullshit for loving ages
so people prefer to listen to people who are absolutely sure of their unfalsifiable claims over people who prove themselves wrong. wow. i bet that never happened in the history of the STEM fields.

a misanthrope
Jun 21, 2010

:burgerpug::burgerpug::burgerpug::burgerpug::burgerpug:
communications major here

can anyone loan me some money?

Moon Atari
Dec 26, 2010

Bit ironic to use a study conducted by academics in the social sciences as your main evidence that social sciences are bullshit.

COPE 27
Sep 11, 2006

Yes science reporters are lovely and lazy but that's kind of beside the point when huge numbers of studies cannot be replicated.

Roy
Sep 24, 2007
They should make a game like Portal, only focused around social science

dogcrash truther
Nov 2, 2013

evilpicard posted:

Yes science reporters are lovely and lazy but that's kind of beside the point when huge numbers of studies cannot be replicated.

The real problem is that nobody's even trying to replicate them. This is true in many fields, not just the social sciences. There are far more studies being published than you could ever hope to responsibly replicate, and the peer review system is similarly broken - there just aren't enough people with the right kinds of knowledge and the time and lack of conflicts of interest to meaningfully assess all of the studies. The knowledge problem is especially bad in fields like physics and chemistry where knowledge gets extremely specific extremely fast. Either you draw from a small pool of scientists who understand the study but all know each other and have personal relationships, or you bring in people from outside the field who are just sorta guessing about whether things look mostly ok.

COPE 27
Sep 11, 2006

True enough. On the other hand, any field that uses the term "decline effect" is pretty lol.

Ocean Book
Sep 27, 2010

:yum: - hi
i am shocked op, just shocked

dendy crew
Jun 1, 2011

Dendy!, Dendy!, We love Dendy! Dendy - Everyone plays it!
Here's the article the OP's story is referencing. My hunch is that they read only the abstract to write the piece, because the article itself is behind a paywall.

http://www.sciencemag.org/content/349/6251/910.summary?sid=259d4c7d-caf8-4f9d-8467-bb1666a0b95b

dendy crew
Jun 1, 2011

Dendy!, Dendy!, We love Dendy! Dendy - Everyone plays it!
News Article Fails To Explain Content Of Study Claiming To Confirm Contents Of Other Studies Requiring Expensive Funding For Replication Because The Article Is Too Expensive And Requires An Account

spud
Aug 27, 2003

by LITERALLY AN ADMIN
Engineering, gently caress you.

Moon Atari
Dec 26, 2010

evilpicard posted:

Yes science reporters are lovely and lazy but that's kind of beside the point when huge numbers of studies cannot be replicated.

Attempting to replicate them and either failing or succeeding is a part of the scientific process. A published study isn't considered fact by anyone working in the sciences, not even the people who conducted it. If they weren't published there would be far fewer channels through which people could learn about and consequently try to replicate them without being connected to the original researchers to a degree that could negatively impact the credibility of their findings. Only after a pile of explicit replications and complementary studies using different methodology or testing related theories and alternative explanations has congealed into a reasonable mound of evidence does anyone (except the lovely media) start to have any confidence in it. This is how science is meant to work, including the part where people within the field do reviews to expose the flaws in the system and pretty much constantly focus on and emphasize the weaknesses and uncertainty.

Social sciences are inherently more likely to fail to replicate than other sciences, as the number of unknown and difficult-to-control-for confounding factors is huge when you're dealing with something like human behaviour and sample selection. This is one of the reasons definitive progress in the social sciences is slower than in other sciences (also ethics and the time/cost of human research), on top of the several centuries' worth of head start physics and chem have over psychology.
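
You can see this in a toy simulation. A minimal sketch (Python; every number here is invented): the true effect is identical in both studies, but an unmeasured confounder that differs between the two samples pushes the original toward "significant" and pushes the replication away from it.

```python
# Toy sketch: a real but small effect "replicates" or not depending on which
# way an unmeasured confounder happens to lean in each sample. All numbers
# are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2  # the same small standardized effect in both studies

def run_study(confounder_shift, n=50):
    # the treated group picks up the true effect plus whatever the
    # unmeasured confounder contributes in this particular sample
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect + confounder_shift, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

print("original p    =", run_study(confounder_shift=+0.3))  # confounder helps
print("replication p =", run_study(confounder_shift=-0.2))  # confounder hurts
```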

dogcrash truther posted:

The real problem is that nobody's even trying to replicate them. This is true in many fields, not just the social sciences. There are far more studies being published than you could ever hope to responsibly replicate, and the peer review system is similarly broken - there just aren't enough people with the right kinds of knowledge and the time and lack of conflicts of interest to meaningfully assess all of the studies. The knowledge problem is especially bad in fields like physics and chemistry where knowledge gets extremely specific extremely fast. Either you draw from a small pool of scientists who understand the study but all know each other and have personal relationships, or you bring in people from outside the field who are just sorta guessing about whether things look mostly ok.

This is also very true. Most studies are read by virtually no one and cared about only by the people who contributed (often not even them). So a lot of crap will sit there unrefuted, but also unused and unabsorbed, so it's not really worth refuting. The same is true of a lot of good studies that could eventually have contributed to or developed into tangible progress had there been anyone to read them and work on them, or even an easy way to find them if you were looking.

Also, science is boring and unrewarding.

The Protagonist
Jun 29, 2009

The average is 5.5? I thought it was 4. This is very unsettling.

dogcrash truther posted:

The real problem is that nobody's even trying to replicate them. This is true in many fields, not just the social sciences. There are far more studies being published than you could ever hope to responsibly replicate, and the peer review system is similarly broken - there just aren't enough people with the right kinds of knowledge and the time and lack of conflicts of interest to meaningfully assess all of the studies. The knowledge problem is especially bad in fields like physics and chemistry where knowledge gets extremely specific extremely fast. Either you draw from a small pool of scientists who understand the study but all know each other and have personal relationships, or you bring in people from outside the field who are just sorta guessing about whether things look mostly ok.

isnt this what they made watson for sortof

like no joke i remember some interview where one of the creators was suggesting it could someday be thrown at the vast number of, in his example, medical papers. i don't know how pie in the sky this really is, but i'm guessing looking for interaction correlations across a huge number of studies is probably a lot more feasible than whatever the op i didn't read is about
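
the aggregation half of that isn't even that pie in the sky. here's a bare-bones sketch (python, numbers made up by me) of the boring statistical core, inverse-variance pooling of effect sizes across a pile of studies. the actually watson-shaped part would be extracting those numbers from paper pdfs in the first place.

```python
# Minimal fixed-effect meta-analysis: pool effect sizes from several studies,
# weighting each by the inverse of its variance. All values are invented.
import math

# (effect_size, standard_error) as they might be scraped from each study
studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10), (-0.05, 0.25)]

weights = [1.0 / se**2 for _, se in studies]  # precision weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```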

fuck the ROW
Aug 29, 2008

by zen death robot

The Protagonist posted:

isnt this what they made watson for sortof

its what they made deez nuts for :D

Moridin920
Nov 15, 2007

by FactsAreUseless
it's all a lie all the empirical evidence is just stoners coming up with bullshit just like I fudged my 10th grade lab reports


jesus is real


(if you falsify cancer research you're a giant piece of poo poo imo)

ArbitraryC
Jan 28, 2009
Pick a number, any number
Pillbug
What's with all the posters who are like "it happens in stem too!". Yeah I guess but not nearly as often. For like physics and chemistry stuff if you set the experiment up right it should be pretty easy to replicate. I just put out a paper on making one chemical from another chemical and aside from lying through my teeth or my equipment being completely uncalibrated there's really no room for fudge factor in stuff like "under these conditions this happened, here are the tiny error bars on us doing it several times".

I guess some fields in stem are going to run into statistical difficulties, I had some bioengineering friends and their experiments sounded like nightmares, but a lot of our stuff is directly measurable values with very few fudge factors and you just don't run into reproducibility issues with experiments like those. At worst the absolute values from experimental setup to experimental setup might be a bit different, but the trends are always going to match.
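
To put a toy example on that last point (Python, all values invented): two setups with different calibration offsets disagree on absolute values but recover the same trend.

```python
# Two "labs" measure yield vs temperature with different systematic offsets
# (miscalibrated equipment) plus a little noise. Absolute values disagree,
# but the fitted slopes, i.e. the trend, come out the same. Numbers invented.
import numpy as np

temps = np.array([20.0, 40.0, 60.0, 80.0, 100.0])  # reaction temperature
true_slope = 0.5                                    # yield gain per degree

lab_a = true_slope * temps + 3.0 + np.random.default_rng(3).normal(0, 0.5, 5)
lab_b = true_slope * temps - 1.0 + np.random.default_rng(4).normal(0, 0.5, 5)

print("lab A yields:", np.round(lab_a, 1))  # offset high
print("lab B yields:", np.round(lab_b, 1))  # offset low
print("slope A:", round(np.polyfit(temps, lab_a, 1)[0], 2))
print("slope B:", round(np.polyfit(temps, lab_b, 1)[0], 2))
```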

Moridin920
Nov 15, 2007

by FactsAreUseless

ArbitraryC posted:

What's with all the posters who are like "it happens in stem too!". Yeah I guess but not nearly as often. For like physics and chemistry stuff if you set the experiment up right it should be pretty easy to replicate. I just put out a paper on making one chemical from another chemical and aside from lying through my teeth or my equipment being completely uncalibrated there's really no room for fudge factor in stuff like "under these conditions this happened, here are the tiny error bars on us doing it several times".

I guess some fields in stem are going to run into statistical difficulties, I had some bioengineering friends and their experiments sounded like nightmares, but a lot of our stuff is directly measurable values with very few fudge factors and you just don't run into reproducibility issues with experiments like those. At worst the absolute values from experimental setup to experimental setup might be a bit different, but the trends are always going to match.

Idk did you read the article or whatever about how Bayer was trying to do some cancer med research and found that like 47 of the 52 studies they were relying on for data points couldn't be reproduced, so they just canned the whole project?

I feel like you're right that yeah non STEM probably has more fuckery going on but it seems like there's still fuckery.

The Whole Internet
May 26, 2010

by FactsAreUseless

EugeneJ posted:

So does this mean the 150 flavors of sexuality are all bullshit?

Am I not a genderqueer lesbian with CIS tendencies?

most sexuality research IS bullshit, but at least don't confuse the stuff made up on tumblr with stuff that has bullshit papers backing it up. there are different levels of bullshit

(ie: being trans is totally a thing, being 'demigender otherkin' is not)

sugar free jazz
Mar 5, 2008

The remaining 41 (87%) were eligible but not claimed. These often required specialized samples (such as macaques or people with autism)

Moridin920
Nov 15, 2007

by FactsAreUseless

sugar free jazz posted:

The remaining 41 (87%) were eligible but not claimed. These often required specialized samples (such as macaques or people with autism)

so out of 100, 41 they couldn't do and of the remaining 59 half worked.

:shrug:

ArbitraryC
Jan 28, 2009
Pick a number, any number
Pillbug

Moridin920 posted:

Idk did you read the article or whatever about how Bayer was trying to do some cancer med research and found that like 47 of the 52 studies they were relying on for data points couldn't be reproduced, so they just canned the whole project?

I feel like you're right that yeah non STEM probably has more fuckery going on but it seems like there's still fuckery.
That's biology tho which is like the softest of the hard sciences. Like outside of being grossly negligent or falsifying my results it's just not going to be hard to replicate the kind of experiments I do. We don't need to do much if any statistical wizardry to massage our data; most papers wouldn't even use poo poo like p values (which have all sorts of issues regarding experimental designs that generate false positives easily).
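
To show the kind of design issue I mean, here's a quick toy simulation (Python, assumptions all mine): "optional stopping", i.e. peeking at the p value as the data comes in and stopping the moment it dips under .05, inflates the false positive rate well past the nominal 5% even when there is no effect at all.

```python
# Optional stopping: both groups are drawn from the SAME null distribution,
# but we test after every batch and stop as soon as p < .05. The false
# positive rate ends up far above the nominal 5%. Numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeking_experiment(max_n=100, batch=10):
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(0.0, 1.0, batch))  # no true difference
        b.extend(rng.normal(0.0, 1.0, batch))  # between the groups
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True  # "significant"! stop and write it up
    return False

runs = 2000
false_pos = sum(peeking_experiment() for _ in range(runs)) / runs
print(f"false positive rate with peeking: {false_pos:.1%} (nominal: 5.0%)")
```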

I think the soft sciences are really important, but the way their experiments have to be set up almost always makes them more qualitative and prone to error. It is completely fair to draw a distinction between the "sciences" because of this.

sugar free jazz
Mar 5, 2008

Moridin920 posted:

so out of 100, 41 they couldn't do and of the remaining 59 half worked.

:shrug:

lol no that's just a funny quote from their methods section. They made a larger pool of articles for teams to select from and are just discussing the ones that weren't selected (158 became eligible, 111 got picked by teams, and 41 of the leftover 47 is where the 87% comes from). Some weren't tested because resources, knowledge, or autistic people were lacking


"In total, there were 488 articles in the 2008 issues of the three journals. One hundred fifty-eight of these (32%) became eligible for selection for replication during the project period, between November 2011 and December 2014. From those, 111 articles (70%) were selected by a replication team, producing 113 replications"

Moridin920
Nov 15, 2007

by FactsAreUseless

ArbitraryC posted:

That's biology tho which is like the softest of the hard sciences. Like outside of being grossly negligent or falsifying my results it's just not going to be hard to replicate the kind of experiments I do. We don't need to do much if any statistical wizardry to massage our data; most papers wouldn't even use poo poo like p values (which have all sorts of issues regarding experimental designs that generate false positives easily).

I think the soft sciences are really important, but the way their experiments have to be set up almost always makes them more qualitative and prone to error. It is completely fair to draw a distinction between the "sciences" because of this.

yeah fair enough

FlimFlam Imam
Mar 1, 2007

Standing on a hill in my mountain of dreams

sugar free jazz
Mar 5, 2008

lol I skimmed the article it's pretty good I like it ummm people who only read the abstract or a description of the abstract rly misunderstand what the article is about and what its results say. That's ok tho, no one actually reads journal articles anyways

ArbitraryC
Jan 28, 2009
Pick a number, any number
Pillbug
When my data is inconsistent and all over the place it means there's something wrong with the experimental setup and I work on that until I'm getting clean results.

When soft-science data is all over the place they just collect a bigger sample size to boost their confidence and consider a 5% chance of being completely wrong acceptable. It's just a different world.
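
A quick toy simulation of that (Python, numbers invented): under the null, the false positive rate at alpha = .05 sits at about 5% no matter how big the sample is. A bigger n tightens the error bars and buys power against real effects, but it doesn't buy you out of the 1-in-20 fluke rate.

```python
# Both groups come from the same distribution (no real effect), so every
# "significant" result is a false positive. The rate stays ~5% at any n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
runs = 1000

for n in (20, 200, 2000):
    hits = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < 0.05
        for _ in range(runs)
    )
    print(f"n = {n:4d}: false positive rate = {hits / runs:.1%}")
```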
