The cult of science - Page 6
By Godstud
#14966069
@One Degree :roll:
I use it as it's meant to be used: per the DEFINITION, which is not an incorrect way to use a word.

Fact:
a piece of information used as evidence

Stop trying to misrepresent what I say, as it's annoying, and dishonest.
By One Degree
#14966073
Godstud wrote:@One Degree :roll:
I use it as it's meant to be used: per the DEFINITION, which is not an incorrect way to use a word.

Fact:
a piece of information used as evidence

Stop trying to misrepresent what I say, as it's annoying, and dishonest.

I did not misrepresent what you said. I replied to what you posted, as I cannot know what you are thinking. The definition you give here seems to say a fact can be anything. My information is that I saw my cat abducted by aliens; I present this information (fact) as evidence of alien existence.
By Godstud
#14966075
Do I have to teach you the English language just to make a fucking point?

Evidence
the available body of facts or information indicating whether a belief or proposition is true or valid.

Stop being asinine. You misrepresent it because you assume it's something it's not.
By One Degree
#14966080
Godstud wrote:Do I have to teach you the English language just to make a fucking point?

Evidence
the available body of facts or information indicating whether a belief or proposition is true or valid.

Stop being asinine. You misrepresent it because you assume it's something it's not.


That is the definition for one way the word is used. A very poor definition, as it is not actually defining evidence but simply saying the word is sometimes used to refer to a ‘body of evidence’.
By Godstud
#14966081
Well, I'm not going to discuss this further if you aren't even going to use the English language, as per the rules of this forum. I used the right definition, in the right context.

Have a nice day.
By Sivad
#14966347


There are a lot more ways to be wrong than there are ways to be right. Yet somehow, many of us think that we are probably right most of the time. Prior generations were wrong about almost everything they believed, but this does not stop our unfailing confidence that we, being so much more enlightened, have things for the most part figured out. In this talk I give a short tour of the myriad surprising ways in which (and degrees to which) we can be wrong about even the most seemingly obvious things. This pervasive fallibility will cast doubt not only on our beliefs about matters of objective fact, but also subjective and personal matters such as our predictions about what will make us happy, what we actually believe, and what emotions we are feeling. Drawing on insights from history, psychology and philosophy, I attempt to pin down some of the reasons why we are so often and so profoundly wrong, and why our being wrong (to say nothing of our recalcitrant confidence that we are nonetheless right) is unlikely to change any time soon.


The Pessimistic Induction

Worries about underdetermination and inference to the best explanation are generally conceptual in nature, but the so-called pessimistic induction (also called the “pessimistic meta-induction”, because it concerns the “ground level” inductive inferences that generate scientific theories and law statements) is intended as an argument from empirical premises. If one considers the history of scientific theories in any given discipline, what one typically finds is a regular turnover of older theories in favor of newer ones, as scientific knowledge develops. From the point of view of the present, most past theories must be considered false; indeed, this will be true from the point of view of most times. Therefore, by enumerative induction (that is, generalizing from these cases), surely theories at any given time will ultimately be replaced and regarded as false from some future perspective. Thus, current theories are also false. The general idea of the pessimistic induction has a rich pedigree. Though neither endorses the argument, Poincaré ([1905] 1952: 160), for instance, describes the seeming “bankruptcy of science” given the apparently “ephemeral nature” of scientific theories, which one finds “abandoned one after another”, and Putnam (1978: 22–25) describes the challenge in terms of the failure of reference of terms for unobservables, with the consequence that theories incorporating them cannot be said to be true. (For a summary of different formulations, see Wray 2015.)
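
Put schematically (an editorial formalization for clarity, not drawn from the SEP entry itself), the argument runs from a historical premise to a probabilistic conclusion:

```latex
% Schematic form of the pessimistic (meta-)induction.
% Let A(t) be the set of theories accepted at time t, and F(t) the subset
% of A(t) judged false from later vantage points. The historical record
% supplies the premise; enumerative induction over times t gives the rest.
\[
\underbrace{\text{for most past } t:\ \frac{|F(t)|}{|A(t)|} \approx 1}_{\text{historical premise}}
\;\Longrightarrow\;
\underbrace{\Pr\bigl(T \text{ is false} \,\bigm|\, T \in A(t_{\mathrm{now}})\bigr) \text{ is high}}_{\text{inductive conclusion}}
\]
```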

Contemporary discussion commonly focuses on Laudan’s (1981) argument to the effect that the history of science furnishes vast evidence of empirically successful theories that were later rejected; from subsequent perspectives, their unobservable terms were judged not to refer and thus, they cannot be regarded as true or even approximately true. (If one prefers to define realism in terms of scientific ontology rather than reference and truth, one may rephrase the worry in terms of the mistaken ontologies of past theories from later perspectives.)
https://plato.stanford.edu/entries/scie ... /#PessIndu
By Godstud
#14966353
You should read and apply those same quotes to your thinking about science, since most of science is not about knowing things, but about learning and discovering things. You are making a ridiculous assertion that scientists think they know everything, and that's complete nonsense.
By XogGyux
#14966355
Godstud wrote:You should read and apply those same quotes to your thinking about science, since most of science is not about knowing things, but about learning and discovering things. You are making a ridiculous assertion that scientists think they know everything, and that's complete nonsense.

I would argue it is quite the opposite. Science is the process by which we study and learn. The mere practice of science requires (1) accepting that there are many things you don't know, (2) being willing to pursue evidence wherever it leads, and (3) being ready to accept at any time that you could be wrong and, if so, working to address it.
By Sivad
#14966373
Awesome TED talk by establishment shill Naomi Oreskes (popularizer of the bogus “97% consensus on AGW” claim)


She makes all the same points I've made throughout this thread and concludes that science is ultimately an appeal to authority, but then she ignores all the sociological issues with the institution of science and says the institution is trustworthy. :lol:
By Sivad
#14966383
Rancid wrote:You're saying you're the fanatic right?


Aren't you the guy that said we should lock people up for disobeying Science?
By Rancid
#14966388
Sivad wrote:Aren't you the guy that said we should lock people up for disobeying Science?


I said that? :lol:

If I did, I was joking and/or trolling.
By Sivad
#14967021
Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition

Marc A. Edwards and Siddhartha Roy

Environ Eng Sci. 2017 Jan 1; 34(1): 51–61.
Published online 2017 Jan 1



Abstract

Over the last 50 years, we argue that incentives for academic scientists have become increasingly perverse in terms of competition for research funding, development of quantitative metrics to measure performance, and a changing business model for higher education itself. Furthermore, decreased discretionary funding at the federal and state level is creating a hypercompetitive environment between government agencies (e.g., EPA, NIH, CDC), for scientists in these agencies, and for academics seeking funding from all sources—the combination of perverse incentives and decreased funding increases pressures that can lead to unethical behavior. If a critical mass of scientists become untrustworthy, a tipping point is possible in which the scientific enterprise itself becomes inherently corrupt and public trust is lost, risking a new dark age with devastating consequences to humanity. Academia and federal agencies should better support science as a public good, and incentivize altruistic and ethical outcomes, while de-emphasizing output.


Perverse Incentives in Research Academia: The New Normal?

When you rely on incentives, you undermine virtues. Then when you discover that you actually need people who want to do the right thing, those people don't exist…—Barry Schwartz, Swarthmore College (Zetter, 2009)

Academics are human and readily respond to incentives. The need to achieve tenure has influenced faculty decisions, priorities, and activities since the concept first became popular (Wolverton, 1998). Recently, however, an emphasis on quantitative performance metrics (Van Noorden, 2010), increased competition for static or reduced federal research funding (e.g., NIH, NSF, and EPA), and a steady shift toward operating public universities on a private business model (Plerou, et al., 1999; Brownlee, 2014; Kasperkevic, 2014) are creating an increasingly perverse academic culture. These changes may be creating problems in academia at both individual and institutional levels (Table 1).


Quantitative performance metrics: effect on individual researchers and productivity

The goal of measuring scientific productivity has given rise to quantitative performance metrics, including publication count, citations, combined citation-publication counts (e.g., h-index), journal impact factors (JIF), total research dollars, and total patents. These quantitative metrics now dominate decision-making in faculty hiring, promotion and tenure, awards, and funding (Abbott et al., 2010; Carpenter et al., 2014). Because these measures are subject to manipulation, they are doomed to become misleading and even counterproductive, according to Goodhart's Law, which states that “when a measure becomes a target, it ceases to be a good measure” (Elton, 2004; Fischer et al., 2012; Werner, 2015).
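
To make one of those metrics concrete: the h-index cited above is just the largest h such that a researcher has at least h papers with at least h citations each, which is why it is so easy to state and so easy to game. A minimal sketch in Python (the function name and sample data are illustrative, not from the article):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Its very simplicity is what makes it a Goodhart target: padding the citation list, for instance through reciprocal self-citation, raises h without any change in underlying research quality.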

Ultimately, the well-intentioned use of quantitative metrics may create inequities and outcomes worse than the systems they replaced. Specifically, if rewards are disproportionally given to individuals manipulating their metrics, problems of the old subjective paradigms (e.g., old-boys' networks) may be tame by comparison. In a 2010 survey, 71% of respondents stated that they feared colleagues can “game” or “cheat” their way into better evaluations at their institutions (Abbott, 2010), demonstrating that scientists are acutely attuned to the possibility of abuses in the current system.


Thus, another danger of overemphasizing output versus outcomes and quantity versus quality is creating a system that is a “perversion of natural selection,” which selectively weeds out ethical and altruistic actors, while selecting for academics who are more comfortable and responsive to perverse incentives from the point of entry. Likewise, if normally ethical actors feel a need to engage in unethical behavior to maintain academic careers (Edwards, 2014), they may become complicit as per Granovetter's well-established Threshold Model of Collective Behavior (1978). At that point, unethical actions have become “embedded in the structures and processes” of a professional culture, and nearly everyone has been “induced to view corruption as permissible” (Ashforth and Anand, 2003).
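
Granovetter's threshold model, invoked above, is simple enough to simulate: each actor defects once the observed fraction of defectors reaches that actor's personal threshold, so it is the distribution of thresholds, not average virtue, that decides whether a cascade occurs. A toy sketch in Python (the populations are invented for illustration):

```python
def run_cascade(thresholds):
    """Granovetter-style cascade: an actor defects once the current
    fraction of defectors meets or exceeds their personal threshold.
    Returns the final fraction of defectors."""
    n = len(thresholds)
    defected = [t <= 0.0 for t in thresholds]  # unconditional defectors start it
    changed = True
    while changed:
        changed = False
        frac = sum(defected) / n
        for i, t in enumerate(thresholds):
            if not defected[i] and frac >= t:
                defected[i] = True
                changed = True
    return sum(defected) / n

# Evenly spread thresholds tip the whole population, one actor at a time:
print(run_cascade([i / 100 for i in range(100)]))         # 1.0
# Remove the unconditional defectors and the cascade never starts:
print(run_cascade([0.01 + i / 100 for i in range(100)]))  # 0.0
```

Granovetter's original point, which the article leans on, is that two populations with almost identical threshold distributions can end up at opposite extremes; complicity is a structural outcome, not merely an individual failing.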


Systemic Risks to Scientific Integrity

Science is a human endeavor, and despite its obvious historical contributions to advancement of civilization, there is growing evidence that today's research publications too frequently suffer from lack of replicability, rely on biased data-sets, apply low or substandard statistical methods, fail to guard against researcher biases, and overhype their findings (Fanelli, 2009; Aschwanden, 2015; Belluz and Hoffman, 2015; Nuzzo, 2015; Gobry, 2016; Wilson, 2016). A troubling level of unethical activity, including outright faking of peer review and retractions, has been revealed, which likely represents just a small portion of the total, given the high cost of exposing, disclosing, or acknowledging scientific misconduct (Marcus and Oransky, 2015; Retraction Watch, 2015a; BBC, 2016; Borman, 2016). Warnings of systemic problems go back to at least 1991, when NSF Director Walter E. Massey noted that the size, complexity, and increased interdisciplinary nature of research in the face of growing competition was making science and engineering “more vulnerable to falsehoods” (The New York Times, 1991).

Misconduct is not limited to academic researchers. Federal agencies are also subject to perverse incentives and hypercompetition, giving rise to a new phenomenon of institutional scientific research misconduct (Lewis, 2014; Edwards, 2016). Recent exemplars uncovered by the first author in the Flint and Washington D.C. drinking water crises include “scientifically indefensible” reports by the U.S. Centers for Disease Control and Prevention (U.S. Centers for Disease Control and Prevention, 2004; U.S. House Committee on Science and Technology, 2010), reports based on nonexistent data published by the U.S. EPA and their consultants in industry journals (Reiber and Dufresne, 2006; Boyd et al., 2012; Edwards, 2012; Retraction Watch, 2015b; U.S. Congress House Committee on Oversight and Government Reform, 2016), and silencing of whistleblowers in EPA (Coleman-Adebayo, 2011; Lewis, 2014; U.S. Congress House Committee on Oversight and Government Reform, 2015). This problem is likely to increase as agencies increasingly compete with each other for reduced discretionary funding. It also raises legitimate and disturbing questions as to whether accepting research funding from federal agencies is inherently ethical or not—modern agencies clearly have conflicts similar to those that are accepted and well understood for industry research sponsors. Given the mistaken presumption of research neutrality by federal funding agencies (Oreskes and Conway, 2010), the dangers of institutional research misconduct to society may outweigh those of industry-sponsored research (Edwards, 2014).

A “trampling of the scientific ethos” witnessed in areas as diverse as climate science and galvanic corrosion undermines the “credibility of everyone in science” (Bedeian et al., 2010; Oreskes and Conway, 2010; Edwards, 2012; Leiserowitz et al., 2012; The Economist, 2013; BBC, 2016). The Economist recently highlighted the prevalence of shoddy and nonreproducible modern scientific research and its high financial cost to society—posing an open question as to whether modern science was trustworthy, while calling upon science to reform itself (The Economist, 2013). And, while there are hopes that some problems could be reduced by practices that include open data, open access, postpublication peer review, metastudies, and efforts to reproduce landmark studies, these can only partly compensate for the high error rates in modern science arising from individual and institutional perverse incentives (Fig. 1).


The high costs of research misconduct

The National Science Foundation defines research misconduct as intentional “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results” (Steneck, 2007; Fischer, 2011). Nationally, the percentage of guilty respondents in research misconduct cases investigated by the Department of Health and Human Services (includes NIH) and NSF ranges from 20% to 33% (U.S. Department of Health and Human Services, 2013; Kroll, 2015, pers. comm.). Direct costs of handling each research misconduct case are $525,000, and over $110 million are incurred annually for all such cases at the institutional level in the U.S (Michalek, et al., 2010). A total of 291 articles retracted due to misconduct during 1992–2012 accounted for $58 M in direct funding from the NIH, which is less than 1% of the agency's budget during this period, but each retracted article accounted for about $400,000 in direct costs (Stern et al., 2014).

Obviously, incidence of undetected misconduct is some multiple of the cases judged as such each year, and the true incidence is difficult to predict. A comprehensive meta-analysis of research misconduct surveys between 1987 and 2008 indicated that 1 in 50 scientists admitted to committing misconduct (fabrication, falsification, and/or modifying data) at least once and 14% knew of colleagues who had done so (Fanelli, 2009). These numbers are likely an underestimate considering the sensitivity of the questions asked, low response rates, and the Muhammad Ali effect (a self-serving bias where people perceive themselves as more honest than their peers) (Allison et al., 1989). Indeed, delving deeper, 34% of researchers self-reported that they have engaged in “questionable research practices,” including “dropping data points on a gut feeling” and “changing the design, methodology, and results of a study in response to pressures from a funding source,” whereas 72% of those surveyed knew of colleagues who had done so (Fanelli, 2009). One study included in Fanelli's meta-analysis looked at rates of exposure to misconduct for 2,000 doctoral students and 2,000 faculty from the 99 largest graduate departments of chemistry, civil engineering, microbiology, and sociology, and found between 6 and 8% of both students and faculty had direct knowledge of faculty falsifying data (Swazey et al., 1993).

Academia and science are expected to be self-policing and self-correcting. However, based on our experiences, we believe there are incentives throughout the system that induce all stakeholders to “pretend misconduct does not happen.” Science has never developed a clear system for reporting, investigating, or dealing with allegations of research misconduct, and those individuals who do attempt to police behavior are likely to be frustrated and suffer severe negative professional repercussions (Macilwain, 1997; Kevles, 2000; Denworth, 2008). Academics largely operate on an unenforceable and unwritten honor system, in relation to what is considered fair in reporting research, grant writing practices, and “selling” research ideas, and there is serious doubt as to whether science as a whole can actually be considered self-correcting (Stroebe et al., 2012). While there are exceptional cases where individuals have provided a reality check on overhyped research press releases in areas deemed potentially transformative (e.g., Eisen, 2010–2015; New Scientist, 2016), limitations of hot research sectors are more often downplayed or ignored. Because every modern scientific mania also creates a quantitative metric windfall for participants and there are few consequences for those responsible after a science bubble finally pops, the only true check on pathological science and a misallocation of resources is the unwritten honor system (Langmuir et al., 1953).


If nothing is done, we will create a corrupt academic culture

The modern academic research enterprise, dubbed a “Ponzi Scheme” by The Economist, created the existing perverse incentive system, which would have been almost inconceivable to academics of 30–50 years ago (The Economist, 2010). We believe that this creation is a threat to the future of science, and unless immediate action is taken, we run the risk of “normalization of corruption” (Ashforth and Anand, 2003), creating a corrupt professional culture akin to that recently revealed in professional cycling or in the Atlanta school cheating scandal.

An uncontrolled perverse incentive system can create a climate in which participants feel they must cheat to compete, whether it is academia (individual or institutional level) or professional sports. While procycling was ultimately discredited and its rewards were not properly distributed to ethical participants, in science, the loss of altruistic actors and trust, and risk of direct harm to the public and the planet raise the dangers immeasurably.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5206685/
By Sivad
#14967318
David Berlinski, a mathematician, philosopher, and biologist, discusses the current state of the scientific community, the theories of Darwinism, and the science behind global warming.

Berlinski was a research assistant in molecular biology at Columbia University, and was a research fellow at the International Institute for Applied Systems Analysis (IIASA) in Austria and the Institut des Hautes Études Scientifiques (IHES) in France.

By Sivad
#14968049
Saving Science
Science isn’t self-correcting, it’s self-destructing. To save the enterprise, scientists must come out of the lab and into the real world.


Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to growth economics to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion. Along the way it is also undermining the four-hundred-year-old idea that wise human action can be built on a foundation of independently verifiable truths. Science is trapped in a self-destructive vortex; to escape, it will have to abdicate its protected political status and embrace both its limits and its accountability to the rest of society.

The story of how things got to this state is difficult to unravel, in no small part because the scientific enterprise is so well-defended by walls of hype, myth, and denial. But much of the problem can be traced back to a bald-faced but beautiful lie upon which rests the political and cultural power of science. This lie received its most compelling articulation just as America was about to embark on an extended period of extraordinary scientific, technological, and economic growth. It goes like this:

Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.


“The free play of free intellects...dictated by their curiosity”

So deeply embedded in our cultural psyche that it seems like an echo of common sense, this powerful vision of science comes from Vannevar Bush, the M.I.T. engineer who had been the architect of the nation’s World War II research enterprise, which delivered the atomic bomb and helped to advance microwave radar, mass production of antibiotics, and other technologies crucial to the Allied victory. He became justly famous in the process. Featured on the cover of Time magazine, he was dubbed the “General of Physics.” As the war drew to a close, Bush envisioned transitioning American science to a new era of peace, where top academic scientists would continue to receive the robust government funding they had grown accustomed to since Pearl Harbor but would no longer be shackled to the narrow dictates of military need and application, not to mention discipline and secrecy. Instead, as he put it in his July 1945 report Science, The Endless Frontier, by pursuing “research in the purest realms of science” scientists would build the foundation for “new products and new processes” to deliver health, full employment, and military security to the nation.

From this perspective, the lie as Bush told it was perhaps less a conscious effort to deceive than a seductive manipulation, for political aims, of widely held beliefs about the purity of science. Indeed, Bush’s efforts to establish the conditions for generous and long-term investments in science were extraordinarily successful, with U.S. federal funding for “basic research” rising from $265 million in 1953 to $38 billion in 2012, a twentyfold increase when adjusted for inflation. More impressive still was the increase for basic research at universities and colleges, which rose from $82 million to $24 billion, a more than fortyfold increase when adjusted for inflation. By contrast, government spending on more “applied research” at universities was much less generous, rising to just under $10 billion. The power of the lie was palpable: “the free play of free intellects” would provide the knowledge that the nation needed to confront the challenges of the future.
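
The inflation-adjusted multiples in this passage reduce to one line of arithmetic: divide the nominal growth factor by a 1953-to-2012 price deflator. A quick sketch in Python (the deflator value is my assumption, chosen to be roughly consistent with the article's stated multiples; the article does not say which index it used):

```python
# Nominal figures quoted above, in millions of dollars (1953 -> 2012).
funding = {
    "federal basic research": (265, 38_000),
    "university basic research": (82, 24_000),
}

# Assumed 1953 -> 2012 price deflator (illustrative only; roughly
# consistent with the "twentyfold" and "fortyfold" real-growth claims).
DEFLATOR = 7.0

for label, (start, end) in funding.items():
    nominal = end / start
    real = nominal / DEFLATOR
    print(f"{label}: {nominal:.0f}x nominal, ~{real:.0f}x in real terms")
```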

To go along with all that money, the beautiful lie provided a politically brilliant rationale for public spending with little public accountability. Politicians delivered taxpayer funding to scientists, but only scientists could evaluate the research they were doing. Outside efforts to guide the course of science would only interfere with its free and unpredictable advance.

[...]

Is science today just the latest candidate for inclusion in the growing list of failing institutions that seems to characterize our society? As with democratic politics, criminal justice, health care, and public education, science’s organization and culture are captured by a daunting, self-interested inertia, and a set of values reflecting a world that no longer exists.

Daniel Sarewitz is a professor of science and society at Arizona State University’s School for the Future of Innovation and Society, and the co-director of the university’s Consortium for Science, Policy, and Outcomes. He is also the co-editor of Issues in Science and Technology and a regular columnist for the journal Nature.

https://www.thenewatlantis.com/publicat ... ng-science


By Sivad
#14972472

An interview with the controversial philosopher of science Paul Feyerabend, an Austrian-born thinker who was an influential figure in the philosophy and sociology of science. He was well known for being something of a provocateur or gadfly. In his most famous work, "Against Method", he developed an anarchist view of knowledge (epistemological anarchism): the view that "anything goes" in science, that there are no fixed, universal methods or rules within science, and that there shouldn't be, because fixed rules are detrimental to science itself insofar as they restrict scientific progress and lead to dogmatism.
By Sivad
#14972483
TV programmes on science pursue a line that's often cringe-makingly reverential. Switch on any episode of Horizon, and the mood lighting, doom-laden music and Shakespearean voiceover convince you that you are entering the Houses of the Holy – somewhere where debate and dissent are not so much not permitted as inconceivable. If there are dissenting views, they aren't voiced by an interviewer, but by other scientists, and "we" (the great unwashed) can only sit back and watch uncomprehending as if the contenders are gods throwing thunderbolts at one another. If the presenters are scientists themselves, or have some scientific knowledge, be they Bill Oddie or David Attenborough, their discourse is one of monologue rather than argument, received wisdom rather than doubt.

By Rancid
#14981070
Sivad wrote:https://www.youtube.com/watch?v=WpNzwzwm-xU


I think the better question is:
Is there any truth-finding alternative to science that is more accurate than science?
...Gaza could become a tourist attraction if the […]