This is a prototype that summarizes journal information from numerous distinct databases. The data have not been thoroughly validated and may contain missing or incorrect values. Please verify critical information directly with the journals.
Open Journal Metrics
Researchers often choose academic journals based on metrics. This editorial dashboard facilitates the comparison of traditional impact metrics alongside open science practices, empowering informed and responsible publishing decisions.
Columns: Title · ISSN · Access · Nordic List · TOP Factor · SJR · H-Index · Docs · Retractions
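One possible shape for a record behind these columns is sketched below. This is an illustrative assumption, not the dashboard's actual schema: the field names, types, and value conventions (e.g. the access categories) are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class JournalRecord:
    """One row of the dashboard table (field names and types are illustrative)."""
    title: str
    issn: str
    access: str                          # e.g. "gold", "hybrid", "closed" (assumed labels)
    nordic_list: Optional[int] = None    # level in the Norwegian Register, if listed
    top_factor: Optional[int] = None     # COS TOP Factor score, if available
    sjr: Optional[float] = None          # SCImago Journal Rank
    h_index: Optional[int] = None
    docs: Optional[int] = None           # number of documents published
    retractions: Optional[int] = None    # count from Retraction Watch data

# A record with unknown metrics simply leaves those fields as None:
record = JournalRecord(title="Example Journal", issn="1234-5678", access="gold", sjr=1.2)
```

Optional fields default to None rather than zero, so "no data in any source" stays distinguishable from a genuine zero (e.g. zero retractions).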
Included Databases & Sources
This dashboard is an aggregation tool built directly upon the extensive work of the academic community. If you use this tool in your research, please direct all credit and citations to the creators of the original databases below.
@misc{doaj_data,
author = {{Directory of Open Access Journals}},
title = {DOAJ Public Data Dump},
year = {2024},
url = {https://doaj.org/docs/public-data-dump/}
}
Wallrich, L.*, & Röseler, L.*, Hartmann, H., Ashcroft-Jones, S., Doetsch, C., Kaiser, L., Schüller, S. M., Aldoh, A., Behbood, H., Elsherif, M. M., Klett, N., Krapp, J., Liu, M., Pavlović, Z., Pennington, C. R., Schütz, A., Seida, C., Siziva, K., Skvortsova, A., Aczel, B., Adelina, N., Agostini, V., Al-Hoorie, A. H., Alarie, S., Albayrak-Aydemir, N., Alzahawi, S., Anvari, F., Arriaga, P., Baker, B. J., Barth, C. L., Bauer, D. J., Becker, R., Beitner, J., Belaus, A., Bhatt, H., Bhogal, J., Boyce, V., Breemer, L., Brick, C., Brohmer, H., Brummernhenrich, B., Budd, E., Butler, A., Casula, A., Chandrashekar, S. P., Chen, S., Chung, K. L., Cockcroft, J. P., Crowe, P., Cummins, J., Daniel, A., Deane, O., Deressa, T. K., Dienlin, T., Diveica, V., Draguns, A., Dumbalska, T., Efendic, E., El Halabi, M., Enright, S., Evans, T. R., Exner, A., Farrar, B. G., Feldman, G., Fillon, A., Floyd, J., Fontana Vieira, F., Frese, J., Förster, N., Gattie, M. C., Gemmecke, C., Genschow, O., Giannouli, V., Gjoneska, B., Gnambs, T., Gourdon-Kanhukamwe, A., Graham, C. J., Greshake Tzovaras, B., Guay, S., Hausenloy, J., Haviva, C., Henderson, E. L., Herderich, A., Hilbert, L., Holgado, D., Hussey, I., Höfer, L., Ilchovska, Z. G., Imada, H., Imwene, P., Izydorczak, K., Jaubert, S., Jeftić, A., Kalandadze, T., Kamermans, K., Karhulahti, V., Kasseckert, L., Kastrinogiannis, A., Klingelhöfer-Jens, M., Kocalar, H. E., Koppel, L., Koppold, A., Korbmacher, M., Kujawa, Z., Kulke, L., Kumar, P., Kuper, N., LaPlume, A. A., Lach, R., Lecuona, O., Lee, J., Leech, G., Leksina, E., Lin, C., Liu, Y., Lohkamp, F., Lou, N. M., Lynott, D., Mackinnon, S., Maier, M., Maiya, S., Makel, M. C., Manrique-Castano, D., Manríquez-Robles, D., Mathes, L., McSharry, D., Meidenbauer, K. L., Meier, M., Micheli, L., Miller, T., Montefinese, M., Moreau, D., Moser, N., Mrkva, K., Murphy, J., Muthu, J., Narkar, N., Nemcova, M., Nádvorník, J., O'Mahoney, R., O'Mahony, A., Oberholzer, Y., Oomen, D., Osano, M., Otstavnov, N., Packheiser, J., Pandey, S., Panton, H., Papenmeier, F., Parsons, S., Paruzel-Czachura, M., Pavlov, Y. G., Pittelkow, M., Plomp, W., Plonski, P. E., Pravednikov, A., Pronizius, E., Pua, A., Pypno-Blajda, K., Rausch, M., Raza, H., Reason, R., Rebholz, T. R., Resulbegoviq, H., Richert, E., Ross, R. M., Russo, S., Röer, J. P., Sandkühler, J. F., Schmidt, K., Sempere, N., Sobolak, R., Sperl, M. F., Stevens, J. R., Stogianni, M., Szekely, R., Tan, A. W., Thürmer, J. L., Tiulpakova, M., Tomczak, J., Tołopiło, A., Tunca, B., Vanpaemel, W., Vaughn, L. A., Verheyen, S., Vineyard, G. H., Weber, L., Weinberg, A., Wingen, S., Wolska, J., Yeung, S. K., Younssi, M., Zaneva, M., Zimmermann, D., Azevedo, F. (2026). FORRT Library of Replication Attempts (FLoRA) [Data set]. OSF. https://doi.org/10.17605/OSF.IO/9R62X
* These authors contributed equally to this work.
@misc{hijacked_checker,
author = {{Retraction Watch}},
title = {The Retraction Watch Hijacked Journal Checker},
year = {2024},
url = {https://retractionwatch.com/the-retraction-watch-hijacked-journal-checker/}
}
COS TOP Factors
Transparency and Openness Promotion (TOP) guidelines. Data: Download Data. License: CC0 1.0 Universal
Note on TOP Factor: As of February 2025, TOP Factor scores will no longer be added or updated by COS. The information on this page is for general reference only. Please check journal websites for current policies.
@misc{top_factors,
author = {Mellor, D. T. and Esposito, J. and DeHaven, A. C. and Stodden, V. and Lowrey, O. and Boycan, E.},
title = {TOP Resources - Evidence and Practices},
year = {2026},
month = {Mar},
doi = {10.17605/OSF.IO/KGNVA},
url = {https://doi.org/10.17605/OSF.IO/KGNVA}
}
@misc{nordic_list,
author = {{Norwegian Directorate for Higher Education and Skills}},
title = {Norwegian Register for Scientific Journals, Series and Publishers},
year = {2024},
url = {https://kanalregister.hkdir.no/}
}
Responsible Journal Selection
Traditionally, academic evaluation has heavily relied on where a researcher publishes rather than what they publish. Today, there is a growing global movement to reform research assessment.
Modern evaluation frameworks prioritize the intrinsic quality, methodological rigor, and societal impact of research over the prestige of the publication venue. Open science practices—such as data sharing, preregistration, and open access—are increasingly recognized as vital components of high-quality research.
The Limits of Journal-Level Metrics
Metrics like the Impact Factor and SCImago Journal Rank (SJR) are computed from the average citation counts of a journal's articles. It is crucial to remember that these are journal-level metrics, not article-level metrics. Because citation distributions are highly skewed (a few highly cited papers usually drive the metric), a journal's average score says very little about the quality or impact of any individual paper within it.
Relying on these metrics to evaluate individual researchers can lead to biased and inaccurate assessments.
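The skew described above is easy to demonstrate with a toy example. The citation counts below are invented purely for illustration: most articles are cited a handful of times, while two outliers dominate the journal's average.

```python
from statistics import mean, median

# Hypothetical citation counts for ten articles in one journal (invented data):
citations = [0, 1, 1, 2, 2, 3, 4, 5, 80, 150]

avg = mean(citations)    # what journal-level average metrics roughly track
mid = median(citations)  # what the typical article actually looks like

print(f"mean = {avg:.1f}, median = {mid}")  # mean = 24.8, median = 2.5
```

The mean (24.8) is nearly ten times the median (2.5): the journal-level average reflects the two outliers, not the typical paper, which is exactly why it is a poor proxy for any individual article.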
DORA's core recommendation is simple but profound: do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions.