Abstract
SIGNIFICANCE
Valuing research and selecting studies by importance are crucial in medical innovation. Practical applications include choosing personal study topics, publication review, selection of research grants, and decisions about spending, or misspending, billions in public health. Multiple studies have raised alarm that current methods perform poorly in reproducibility, in predicting the best research, and in objectivity. I propose using the reduction in disease burden as a metric and calculating an objective, numerical research value. The concept is that the worth of medical research is not subjective but can be quantified reproducibly and numerically. The method increases transparency by giving decision makers externally accountable evidence, and it frees peer reviewers to check scientific integrity. Its numerical form can capture the small differences that matter in competition between studies.

ABSTRACT
Finding the value of knowledge and selecting it by importance are crucial in medical innovation. Applications include individuals designing research, funding organizations selecting grants, journals selecting publications, institutions setting priorities in public health and health policy, and decision makers spending, or misspending, billions in research funds. Currently, the value of knowledge is assessed by peer review, together with checks of scientific integrity. Multiple studies have raised alarm that peer review performs poorly in predicting the most highly cited work and suffers from bias, poor transparency, and inconsistent quality. The resulting problems include the perception of slow medical progress and wasted funds and time. I introduce a standard, objective, and numerical method for finding the value of medical research. It measures the disease burden prevented by the new knowledge contained in a study or publication. In its simple form, it is calculated by multiplying disease prevalence, disease burden, and the efficacy of the therapy. It can be modified for risk of failure, multi-disease effects, and ethical considerations. The process is described step by step in terms common in medical practice, and a quick estimate is often sufficient. The first advantage is objectivity, since the value is calculated from real-world data; this gives decision making transparency and external accountability. The second advantage is the numerical form, which can measure the small differences in research value that, under sharp competition, determine which studies are selected. A researcher can calculate the value of their own future effort, and institutions might ask for it at submission. The method is also applicable to broad policy analysis, objective evaluation of scientific achievement, and bibliometric studies.
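To make the simple form of the calculation concrete, a minimal sketch is given below. It assumes disease burden is expressed in DALYs and multiplies prevalence, burden per case, and efficacy, with an optional discount for risk of failure; the function name, parameter names, and the illustrative numbers are hypothetical and are not taken from the paper.

```python
# Minimal sketch of the simple form described above:
# research value = disease prevalence x disease burden x efficacy of the therapy,
# optionally discounted by a probability of success to reflect risk of failure.
# All names and example numbers are hypothetical, chosen only for illustration.

def research_value(prevalence_cases, burden_per_case_dalys, efficacy,
                   probability_of_success=1.0):
    """Estimate the disease burden prevented (in DALYs) by a proposed study.

    prevalence_cases       -- number of people affected by the disease
    burden_per_case_dalys  -- disease burden per case, e.g. DALYs per person
    efficacy               -- fraction of that burden the therapy removes (0-1)
    probability_of_success -- optional discount for risk of failure (0-1)
    """
    return (prevalence_cases * burden_per_case_dalys
            * efficacy * probability_of_success)


# Hypothetical example: 2 million cases, 0.3 DALYs lost per case,
# a therapy removing 40% of that burden, and a 50% chance the study succeeds.
value = research_value(2_000_000, 0.3, 0.4, probability_of_success=0.5)
print(f"Estimated burden prevented: {value:,.0f} DALYs")  # -> 120,000 DALYs
```

The modifiers mentioned in the abstract for multi-disease effects and ethical considerations would enter as additional terms; they are not shown in this sketch.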