Assessing Discussions of Related Work through Citation-based Recommendations and Network Visualization
|Publisher Information:||Bamberg : Otto-Friedrich-Universität|
|Year of publication:||2022|
|Source/Other editions:||3rd Workshop on Open Citations and Open Scholarly Metadata, Oct 05, 2022, Online. Zenodo, 2022. - DOI: 10.5281/ZENODO.7123500|
|is version of:||10.5281/ZENODO.7123500|
|Licence:||Creative Commons - CC BY - Attribution 4.0 International|
A discussion of related work is part of every research paper and grant proposal submission, and as such is evaluated in the reviewing process. If a reviewer is an expert in all covered areas, a qualified judgement of the section can be expected. In many cases, however, especially when the discussion touches on different research areas and application domains, most reviewers will only have a partial understanding of the related research. Hence, they face a difficult challenge in assessing the quality and completeness of the discussion of related work. Conducting their own full-fledged literature search with traditional academic search tools would require an unrealistically high effort from the reviewers.
Facilitated by the increased availability of open citation data, a new group of tools has recently been published that supports users in tracking the citation links of multiple research papers. For instance, “Citation Gecko” and “PURE suggest” visualize the surrounding citation network and suggest related papers that are linked through citations to the selected ones. Other tools, like “ResearchRabbit” and “Inciteful”, embed similar features within a broader range of analysis and recommendation options. When loaded with the list of references of a paper under review, these tools allow inspecting the citation network and suggest related, unlisted publications. This can be the starting point for an efficient check of the character and completeness of a related work discussion.
In this talk, I specifically explore the capabilities of our tool “PURE suggest” (https://fabian-beck.github.io/pure-suggest/) for assessing a list of references. The top-ranked suggestions can be quickly checked for relevance. If the scores of these suggestions are comparable to the scores of already included papers, this may already hint at a gap in the discussion. Boost keywords help the reviewer investigate specific directions more closely. Filtering by year allows checking whether the newest publications have already been considered. Similarly, filtering for underrepresented works might point out relevant but lesser-known works that have been overlooked. The cluster-oriented visualization of the citation network reveals whether expected clusters of works appear in the network, are missing, or are disconnected.
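The basic principle behind such citation-based suggestions can be sketched as follows. Note that this is a minimal illustration, not PURE suggest's actual scoring algorithm: the scoring rule (counting citation links to the selected set in either direction) and all paper identifiers are assumptions made for the example.

```python
# Illustrative sketch: rank candidate papers by how many citation
# links connect them to a selected set of seed papers. The paper IDs
# and citation pairs below are made up for demonstration.

from collections import Counter

def suggest(seed_ids, citations, top_n=3):
    """Rank papers outside the seed set by citation links to it.

    citations: iterable of (citing_id, cited_id) pairs.
    Returns a list of (paper_id, score) tuples, highest score first.
    """
    seeds = set(seed_ids)
    scores = Counter()
    for citing, cited in citations:
        if citing in seeds and cited not in seeds:
            scores[cited] += 1   # a seed paper cites the candidate
        elif cited in seeds and citing not in seeds:
            scores[citing] += 1  # the candidate cites a seed paper
    return scores.most_common(top_n)

# Toy example: "p1".."p3" stand for the references under review,
# "x1".."x3" for papers not in the reference list.
citations = [
    ("p1", "x1"), ("p2", "x1"), ("p3", "x1"),  # x1 is cited by all seeds
    ("x2", "p1"), ("x2", "p2"),                # x2 cites two seeds
    ("p1", "p2"), ("x3", "p3"),
]
print(suggest(["p1", "p2", "p3"], citations))
# -> [('x1', 3), ('x2', 2), ('x3', 1)]
```

In this sketch, a non-listed paper that is strongly interlinked with the references under review receives a high score; if such a paper scores as high as the listed ones, it is a candidate for an overlooked work.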
This demonstrates that these tools, although targeted primarily at literature search, can also be leveraged for assessing lists of related work in the sense of a sanity check. The approach does not replace the expertise of the reviewers but complements it with a data-driven perspective. The reviewers work with the tool and use it as an aid to reflect on various aspects of the related work. Of course, a similar approach can already be used by the authors of the submission to prevent gaps in their discussion of related work. Although this might seem to undermine the suggested assessment, it would, on the contrary, be a desirable effect, as it would likely improve the quality of the submitted work in the first place.
|GND Keywords:||Zitatenanalyse; Bewertung; Literaturrecherche|
|Keywords:||open citations, citation network, research assessment, literature search|
|DDC Classification:||004 Computer science|
|RVK Classification:||ST 274|
|Open Access Journal:||Yes|
|Release Date:||9. November 2022|
originated at the
University of Bamberg