Researchers all over the world are obsessed with one action: publishing. Neither the experiments nor the scientific problems they work on are what worry the scientific community most. For anyone who wants to succeed in this world, the main requirement is clear: a varied track record of publications in high-impact journals. The system may currently be the most objective way to judge a scientist's CV, but it does not judge his or her skills. This controversial statement is discussed in this essay.

One potential problem in the current publication scheme is the balance between quality and quantity. Universities generally evaluate candidates' CVs with a system that includes both factors. As a consequence, a candidate with a long record of medium-quality studies can overtake another who has made major contributions. Moreover, industrial experience, where advances are not usually published, is not valued. Although the system is unfair, improving it seems hard but not impossible.

The quantity of publications affects their quality. Each year more and more journals are created, and it is a common question whether the referees for all this material are doing their job properly. Since there is a lot of pressure, those who write articles are eager to see them published; thus, it is possible that reviewing is falling short. On the other hand, many articles describe wonderful scientific advances that are extremely far removed from applied science. These factors are delaying the progress of science. Not only is there a problem with reviewing but also with allocating resources.

A major problem occurs when researchers try to replicate an experiment described in a paper. Sometimes they find that the experimental procedures simply do not work. The causes of these failures are multiple:

  • The experiment may be poorly detailed (a fault of the reviewers who accepted the work).
  • External conditions were not considered (such as the bulk effect or the environment).
  • Those who try to replicate the work may lack the necessary skills or experience.

In this case, the current system punishes the authors of the non-working article by denying them citations. However, what happens if the article is simply a fake?

In order to solve this situation, my proposal is to carry out reproducibility studies. This is a matter that is currently undervalued. Innovation is essential, but so is obtaining reliable knowledge.

Additionally, it is widely accepted by experts that the number of scientists is continuously increasing. Some pessimistic predictions foresee high rates of unemployment among these professionals. The strategy I propose could be a solution to both problems at once.

Another point is the lack of applied science. While basic research is essential, the ratio between applied and fundamental studies may be wrong. And not simply the ratio: the problem also concerns the scope of some studies, which only consider laboratory conditions. The link between science and the practical applications that benefit our society is too weak. Companies dislike working with universities because the latter are too focused on publications. Universities, by contrast, argue that companies are only focused on profits and disregard knowledge as well as long-term effects. Having worked on both sides, I can assert that this is approximately the reality.

In conclusion, the political and organisational challenges for research management in our world can be summarised in a few points:

  • Improve the system to evaluate candidates. Experience in the private sector should be valued.
  • Improve reviewing; it would be valuable to demonstrate that results are reproducible by others. Why not create careers for people who verify the reproducibility of experiments? Another possibility is to reward those who certify that papers written by others are reproducible.

  • Create bridges between applied science and fundamental academic research.