The integrity of the scholarly record is an essential aspect of research integrity. Every initiative and service we have launched since our founding has focused on documenting and clarifying the scholarly record in an open, machine-actionable, and scalable form, all to make it easier for the community to assess the trustworthiness of scholarly outputs. Now that the scholarly record has evolved beyond the published outputs at the end of the research process – to include both the elements of that process and its aftermath – preserving its integrity poses new challenges that we strive to meet. We are reaching out to the community to help inform these efforts.
I’m pleased to share the 2022 board election slate. Crossref’s Nominating Committee received 40 submissions from members worldwide to fill five open board seats.
We maintain a balance of eight large member seats and eight small member seats. A member’s size is determined by the membership fee tier it pays. We look at how our total revenue is generated across the membership tiers and split it down the middle. Like last year, about half of our revenue came from members in the $0 - $1,650 tiers, and the other half came from members in the $3,900 - $50,000 tiers.
Our entire community – members, metadata users, service providers, community organizations, and researchers – creates and/or uses DOIs in some way, so making them more accessible is a worthy and overdue effort.
For the first time in five years and only the second time ever, we are recommending some changes to our DOI display guidelines (the changes aren’t really about display, but more on that below). We don’t take such changes lightly, because we know it means updating established workflows.
I’m delighted to say that Martin Paul Eve will be joining Crossref as a Principal R&D Developer starting in January 2023.
As a Professor of Literature, Technology, and Publishing at Birkbeck, University of London, Martin has always worked on issues relating to metadata and scholarly infrastructure. By joining the Crossref R&D group, Martin can focus full-time on helping us design and build a new generation of services and tools to help the research community navigate and make sense of the scholarly record.
Back in 2014, Geoffrey Bilder blogged about the kick-off of an initiative between Crossref and Wikimedia to better integrate scholarly literature into the world’s largest knowledge space, Wikipedia. Since then, Crossref has been working to coordinate activities with Wikimedia: Joe Wass has worked with them to create a live stream of content being cited in Wikipedia, and we’re including Wikipedia in Event Data, a new service to launch later this year. In that time, we’ve also seen Wikipedia’s importance grow in terms of the volume of DOI referrals.
How can we keep this momentum going and continue to improve the way we link Wikipedia articles with the formal literature? We invited Alex Stinson, a project manager at The Wikipedia Library (and one of our first guest bloggers) to explain more:
Wikipedia provides the most public gateway to academic and scholarly research. With millions of citations to academic sources as well as non-academic but reliable ones, like those produced by newspapers, its ecosystem of 5 million English Wikipedia articles and 35 million articles in hundreds of languages provides the first stop for researchers in both scholarly and informal research situations. The practice of “checking Wikipedia” has become ubiquitous in a number of fields; for example, Wikipedia is the most visited source of medical information online, even serving as the first stop for many medical students and practitioners looking for medical literature.
The Wikipedia Library program helps Wikipedia’s volunteer editors access and use the best sources in their research and citations. Through partnerships with over fifty leading publishers and aggregators, like JSTOR, Project Muse, Elsevier, Newspapers.com, Highbeam, Oxford University Press and others, we have been able to give over 3000 of our most prolific volunteers access to over 5500 accounts. These are clear, win-win relationships where Wikipedia editors get to use these databases to improve Wikipedia, while in turn linking to authoritative resources and enhancing their discoverability.
JSTOR has been working with us since 2012, providing over 500 accounts to our editors. Kristen Garlock at JSTOR writes:
“We’re very happy to collaborate with the Wikipedia Library to provide JSTOR access to Wikipedia editors. Supporting the initiative to increase editor access to scholarly resources and improve the quality of information and sources on Wikipedia has the potential to help all Wikipedia readers. In addition to providing more discoverability for our institutional subscribers, introducing new audiences to the scholarship on JSTOR helps them discover access opportunities like our Register & Read program.”
There are strong signals that Wikipedia’s role in the citation ecosystem helps ensure the best materials reach the public through its more than 400 million monthly readers:
Two of our access partners have found that around half of the referrals arriving from Wikipedia were able to authenticate into their subscription resources, suggesting that a large portion of our readers can take advantage of subscriptions provided by scholarly institutions.
Altmetrics tools (such as Altmetric.com, ImpactStory or Plum Analytics) are recognizing Wikipedia’s importance by including Wikipedia citations in their impact metrics.
Despite these advances, we think this is only the beginning of Wikipedia’s impact on the landscape of scholarly research and discovery. Wikipedia can become a highly integrated research platform within the broader research ecosystem, where the best scholarship is summarized and discoverable, where Wikipedia effectively becomes the front matter to all research.
However, there are some clear barriers to fulfilling this vision. Currently, most citations on Wikipedia are stored as free text and are not readily available in machine-readable formats; our community is working to fix this. Wikipedia also has major systematic gaps in topics where either we lack volunteer interest or Wikipedia reflects larger systemic biases within society or scholarship. We need the help of volunteers, experts, industry partners, and information technologists to grow Wikipedia’s collection of citations, especially around key missing areas, and to transform existing citations into structured formats.
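To give a rough sense of what transforming a free-text citation into a structured record involves, here is a minimal Python sketch. It parses a deliberately simplified `{{cite journal|...}}` wikitext template into key-value metadata; real Wikipedia citation templates have many more parameters, nesting, and edge cases, and the function name and sample DOI below are illustrative only, not part of any Wikimedia or Crossref tooling.

```python
import re

def parse_cite_template(wikitext):
    """Parse a simplified {{cite ...}} template into a metadata dict.

    Assumes flat, well-formed key=value pairs with no nested templates,
    which is a strong simplification of real Wikipedia citation markup.
    """
    match = re.search(r"\{\{cite \w+\|(.+?)\}\}", wikitext)
    if not match:
        return {}
    fields = {}
    for part in match.group(1).split("|"):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip()
    return fields

# Illustrative citation using the 10.5555 test DOI prefix
citation = parse_cite_template(
    "{{cite journal|title=Example Article|journal=Example Journal"
    "|doi=10.5555/12345678|year=2015}}"
)
print(citation["doi"])  # → 10.5555/12345678
```

Once citations are in this kind of structured form, each field (DOI, title, year) can be matched against registries and loaded into systems like Wikidata, rather than sitting as opaque text in an article.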
Wikidata, Wikipedia’s sister project which crowdsources structured metadata, offers an excellent opportunity for improving the impact of Wikipedia in research. Storing Wikipedia citations in this structured ecosystem, connecting metadata with semantic meaning, would allow the citations in Wikipedia to become the backbone of discovery tools that emphasize the hand-curated interrelationships between authoritative sources and the knowledge collected by Wikipedia and Wikidata editors.
We need more collaborators to realize the full vision of Wikipedia supporting research in the most effective ways:
We need help from publishers with subscription databases to give our editors access through The Wikipedia Library’s access partnership program. These high-quality source materials allow our editors to expose that research in a number of languages and to millions of readers.
We need the larger research community to promote Wikipedia as a scholarly communications tool and make contributing to Wikipedia an important part of the social responsibility of experts. Wider citation of sources in Wikipedia ensures widespread discovery and dissemination of that research.