Science in Captivity, Knowledge for Sale
How Impact Metrics and Market Forces Are Shaping—and Distorting—Scientific Research
The fate of a scientist today is measured in numbers—not in bold questions or discoveries that transform lives. Audacity in thinking the unthinkable is not valued; what matters is the number of citations, the number of publications, how often their name appears in prestigious journals, and increasingly, how many dollars they have available to ensure their work gets read.
Research has ceased to be a field of intellectual exploration. It has turned into a speculative market, where the value of a study is not defined by its contribution to knowledge but by its ranking in high-impact journals.
The almost invisible tyrant that governs this game is the Impact Factor, a metric first proposed in 1955 by Eugene Eli Garfield (1925–2017), a linguist by training who went on to found The Scientist. Its original purpose was innocuous: to help libraries decide which journals to subscribe to, based on citation analysis of published articles.
Garfield’s system was never designed to measure a researcher’s value or determine the course of science. However, what began as a library management tool became the primary criterion for academic legitimacy. And like any metric turned into dogma, it has sown distortion, inequity, and subjugation in its wake.
The Impact Factor does not measure the quality of a discovery but rather the reputation of the journal that publishes it. If an article appears in Nature or The Lancet, it is presumed relevant. If it is published in a less influential journal, it is almost as if it does not exist—even if it contains findings capable of changing the world and improving the lives of millions. This is how the hierarchy of knowledge operates in the academic ecosystem: the visibility of an idea depends less on its depth than on its placement in the prestige index.
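The arithmetic behind the metric makes the problem visible. In its standard two-year form, a journal's Impact Factor for year Y is a journal-level average:

\[
\mathrm{IF}_Y = \frac{C_{Y-1} + C_{Y-2}}{P_{Y-1} + P_{Y-2}}
\]

where C_{Y-1} and C_{Y-2} are the citations received in year Y by items the journal published in the two preceding years, and P_{Y-1} and P_{Y-2} are the counts of citable items it published in those years. Nothing in the formula refers to any individual paper: because citation distributions are heavily skewed, a few blockbuster articles can carry a journal's score while most of its papers are cited far less.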
Major publishing houses have exploited this logic well. Elsevier, Springer, and Wiley have turned scientific publishing into a multi-billion-dollar business where knowledge production is measured in market shares.
According to a report by El Diario, MDPI, Elsevier, and Springer Nature collected more than 1.8 billion euros in revenue from Article Processing Charges (APCs) in 2023 alone. Between 2019 and 2023, their profits increased 2.5-fold, surpassing 2.5 billion dollars. In this context, the Impact Factor functions more as a commercial control tool than as a genuine mechanism of scientific evaluation.
Silenced Science: Between Domestication and Invisibility
The imperative to "publish or perish" has turned research into a race—not toward knowledge but survival. Those who fail to accumulate enough citations are doomed to irrelevance, regardless of the originality of their thinking, the strength of their hypotheses, or the significance of their findings.
As in any market, players learn to adapt. Scientists do more than research; they also strategize. They spend years reformulating manuscripts to meet the requirements of the most sought-after journals. They fragment studies into multiple minimal publications to inflate their productivity (salami slicing). And they align their research questions with dominant trends: publishing a predictable study on the future of automation or the harmful effects of tobacco—topics already exhaustively documented—is far easier than conducting an inconvenient investigation into how lower-risk nicotine products could save lives, reduce deaths, and accelerate the obsolescence of combustible cigarettes.
What is rewarded is not intellectual boldness but the ability to optimize performance within a rigid system of rules. Academia ceases to be a space for critical thinking and becomes a factory of papers designed to maximize their profitability within the publishing circuit.
Science almost always advances in the margins, in discomfort, in the unexpected. But when publication becomes the sole criterion for success, disruption is punished, and conformity is rewarded.
Peter Higgs, the physicist who predicted the boson that bears his name, admitted that he would never have been able to develop his theory in today's system. "I wouldn’t have met the productivity standards," he confessed. If Einstein had been born in this century, special relativity would likely have been rejected for being too speculative and lacking citations in its early years. In a world where short-term metrics measure impact, the ideas that could transform the future run the risk of dying in the present.
The result is an ecosystem of tame, domesticated, predictable science. Incremental studies multiply, while radical bets shrink to near extinction. Thought is encapsulated within the boundaries of what is permitted, restricted by the demands of the publishing market and the imperative of citation.
In that process, the most valuable aspect of research—its ability to reimagine the world—erodes, leaving behind an increasingly sterile and less transformative body of knowledge.
The Cost of Existing in Science: Stress, Anxiety, and Burnout
If the Impact Factor shapes knowledge production, it also shapes those who produce it. Early-career scientists are the most vulnerable: without an established career, they rely on scholarships and precarious contracts that demand immediate results.
But genuine science does not follow market timelines. It requires time for mistakes, uncertainty, and dead ends that often precede truly groundbreaking discoveries.
The system allows no such margin. The relentless pressure to publish at any cost generates alarming levels of anxiety, stress, and depression. Precarious conditions become normalized, exhaustion becomes a prerequisite, and the temptation to engage in questionable practices increases: exaggerating results, omitting inconvenient data, and formulating research questions that align with editorial market expectations. At its extreme, what deteriorates is not just creativity but the very ethics of science.
This ideological bias—dictated by academic trends or what is deemed acceptable within the system—also affects research fields that challenge dominant narratives. A clear example is Tobacco Harm Reduction, where scientific evidence often clashes with political and economic interests. Studies on e-cigarettes or lower-risk nicotine products are rejected or ignored by prestigious journals—not due to a lack of scientific rigor, but because they challenge established dogmas and vested interests.
Meanwhile, thousands of articles continue to be published on the harmful effects of combustible tobacco even though its risks are widely documented and ingrained in public knowledge. Inconvenient science is left without a place in a system that prioritizes profitability over controversy, suffocated by a model that rewards the predictable and penalizes the disruptive.
Excluded Minds: Science Enclosed by the Knowledge Monopoly
If the Impact Factor metric is unfair within a single institution, it becomes even more insidious globally. Science is a landscape of inequality, where universities in wealthier countries have the resources, networks, and privileged access to prestigious journals.
The invisible barriers are many. High-impact journals impose exorbitant APCs: publishing a single article in some of them can cost more than $5,000. For a researcher at Harvard or Cambridge, that is little more than an administrative formality. For a scientist in Latin America or Africa, it is the difference between being read and remaining invisible. In a system where visibility equates to legitimacy, knowledge becomes a privilege of those who can afford it.
Studies have also shown that articles from developing countries are more frequently rejected by high-impact journals, even when they maintain the same scientific quality.
Garfield himself admitted that the calculation of the Impact Factor excludes hundreds of journals—mostly from resource-limited regions—simply because they do not meet the system’s economic and distribution criteria.
In a world where paywalls guard knowledge, many nations cannot even access the literature they are expected to cite in order to be considered relevant. The result is a vicious cycle: global scientific production remains concentrated in a handful of countries, while the rest of the world struggles to gain a voice in the debate and to break the structural barriers that condemn it to invisibility.
Escaping the Labyrinth: Reinventing Science from Its Margins
But the system is not immutable. Recently, initiatives have emerged to challenge the Impact Factor's monopoly and propose fairer evaluation methods.
Alternative metrics, such as the h-index (which scores individual researchers), Altmetrics (which track the online attention an article receives), and the Eigenfactor Score (which weighs journals by the influence of the journals citing them), aim to capture impact, relevance, and reach beyond mere presence in elite journals. The San Francisco Declaration on Research Assessment (DORA) recommends evaluating scientific quality by a work's real contribution rather than the Impact Factor of the journal where it is published.
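To make one of these alternatives concrete: a researcher's h-index is the largest number h such that h of their papers have each been cited at least h times. The sketch below is a minimal illustration in Python; the function name and the list-of-counts input format are my own choices, not any established library's API.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```

Note that even this number rewards a sustained volume of well-cited papers rather than a single transformative idea, so it inherits some of the biases discussed above.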
Some institutions have begun to adopt this approach. Universities such as the Universitat Oberta de Catalunya and funders such as the Wellcome Trust have signed on to DORA's principles, prioritizing social impact and genuine innovation in their evaluation criteria. Platforms such as the DOAJ (Directory of Open Access Journals), Qeios, Redalyc, SciELO, and Latindex have shown that high-quality research can be published and discovered without the traditional pay-to-publish model.
These changes have started to create cracks in the system. The question is whether they will be enough to dismantle it or if, as has often happened in academia, power will find new ways to shield itself.
Science cannot be reduced to a number. The Impact Factor was born as a practical tool, but it has become the symbol of everything wrong with academia: rigid, exclusionary, and easily manipulated. It has distorted scientific production, rewarding quantity over quality, conformity over innovation, and strategy over truth.
If science aims to expand the boundaries of human knowledge, we must reclaim its essence: curiosity, questioning, and exploration. A discovery should not be measured by how many times it is cited but by how much it transforms our understanding of the world.
The real question is not whether the Impact Factor is useful. The question is whether we will let a single number continue deciding what matters in science. As Max Planck warned: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Perhaps it is time to ask whether that new generation still has space to exist.



