ATG Original: Scopus at 15: Part 3 – Elsevier Gets Some Advice From Users

May 19, 2020


by Nancy K. Herther (writer, consultant and Sociology/Anthropology Librarian, University of Minnesota Libraries)


As information professionals know, the use of citation data and author metrics has become standard across the research landscape. Against the Grain contacted a number of citation research experts to get their perspectives on Scopus today, as well as suggestions for Elsevier in the future.

RESEARCHERS SPEAK ABOUT THEIR NEEDS

Quan-Hoang Vuong

Quan-Hoang Vuong, Director of the Centre for Interdisciplinary Social Research at Vietnam’s Phenikaa University, has studied Scopus and its competitors, with a focus on how citation indexes cover Asian (and particularly Southeast Asian) research. He sees key advantages as well as issues that Elsevier still needs to address. “Scopus has a number of advantages that other systems cannot offer researchers,” he explains.

“First, there is ScopusID; I believe their databases use AI to determine whether a new author has appeared in the system. Of course, it is not perfect, given its probabilistic nature, but the function is useful. Second, journal metrics are equipped with the powerful CiteScoreTracker, whose data are updated monthly at the beginning of each new month (around the 4th to the 10th day, according to my observation). Metrics are useful not just to academic evaluators/institutions and authors, but also to the public, who are becoming increasingly aware of the problems of research quality and sensitive to the costs of doing science (taxpayers fund the large portion of research in the modern day; please see my paper).

Third,” Vuong continues, “the Scopus database systems are still evolving, with SciVal helping much by providing useful and important statistics. Scopus works well with SCImago, although we don’t really know whether SCImago is a Scopus subsidiary or just a technical partner powered by Scopus databases. But their collaboration delivers real value: a range of indicators from CiteScore to SNIP (Source Normalized Impact per Paper) and the main SCImago ranking workhorse, SJR (SCImago Journal Rank). Fourth, CiteScore itself. This is an equivalent of the Journal Impact Factor (JIF), but it counts papers over a longer period (three years, versus two for JIF). Journal CiteScores are provided free, so anyone can follow the trend of a journal’s performance during a year (let’s say a citation year, since May and June are now considered the ‘impact factor season’ by me and my students, who continue to watch the trends of this bibliographic-database world).”
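The windowing difference Vuong describes is easy to make concrete. The Python sketch below applies the simple citations-per-document definitions given above to a hypothetical journal; all counts are invented, and the real JIF additionally restricts the denominator to “citable items”:

    # Hypothetical journal: items published and citations received,
    # keyed by publication year. All numbers are invented.
    docs = {2016: 100, 2017: 120, 2018: 110}          # items published per year
    cites_2019 = {2016: 150, 2017: 200, 2018: 180}    # 2019 citations to each year

    def windowed_ratio(docs, cites, years):
        """Citations received to items from `years`, per item."""
        return sum(cites[y] for y in years) / sum(docs[y] for y in years)

    citescore_2019 = windowed_ratio(docs, cites_2019, [2016, 2017, 2018])  # 3-year window
    jif_2019 = windowed_ratio(docs, cites_2019, [2017, 2018])              # 2-year window

    print(round(citescore_2019, 2))  # 1.61
    print(round(jif_2019, 2))        # 1.65

The longer window smooths year-to-year swings, which tends to favor fields with slower citation cycles.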

When it comes to issues like duplication, data errors, and missing or incomplete information, Vuong agrees that there is certainly room for improvement. “Yes, Scopus has shown all these problems, and I have documented that in my paper in Nature’s Scientific Data. But my point is [that] although these errors and data issues remain in Scopus, they are no worse than in other systems. So, as economists say, ceteris paribus, the functioning of Scopus is still quite a good experience.”

Vuong sees key issues for all citation researchers and information professionals in using these databases. “People should pay attention – especially funders and governments who use Scopus databases for evaluation and for research on R&D and scientific/innovation productivity – to issues including:

1. How Elsevier and Scopus maintain their slogan of a ‘free source of information’. My observation is that Scopus keeps shutting the window. An author’s personal page (tied to a unique ScopusID) used to present 20 entries per author, together with the ability to sort that author’s publications from oldest to newest (and newest to oldest) and from highest to lowest citation counts (and lowest to highest). But now these functions can be used only if you log in to the system, and the question is whether Scopus will continue to limit the scope of data presented to the public.

2. The power of being included in Scopus. For quite some time, Clarivate Analytics – the major Scopus competitor and the owner of the Impact Factor concept – has advertised that it is the most trusted source of information, implying that the firm itself, unlike Scopus, is not associated with a specific publisher. Of course, this has so far been only an exaggeration of the risks of being associated with a publisher. But Clarivate has its logic. If the power and popularity of Scopus keep expanding over time, and Elsevier keeps making huge profits and buying more of the data-science products available in the marketplace, then that could one day become a real problem for the scientific world: the notion of quality skewed by just a handful of indicators (i.e., CiteScore, SNIP (Source Normalized Impact per Paper), and SJR (SCImago Journal Rank)). So far, I have not heard complaints about the functioning of these Scopus-powered metrics. But issues with the selection of journals for Scopus coverage are already emerging.”

Still, Vuong sees important value in the current citation system. “Replacing WoS (Web of Science) and Scopus is impossible in today’s world. Researchers’ stickiness dies hard; policymakers’, too. One reason is that it took a really long time for any society, and its scientific evaluation system, to truly appreciate the use of statistics, grow accustomed to this type of data, and become comfortable with the use of metrics; changing this ecosystem is almost impossible. Chances are that addicts of the JIF (Journal Impact Factor) will also become addicts of CiteScore. (It is like having a desktop PC AND a laptop.)

Web metrics,” Vuong explains, “are also useful – e.g., Google Scholar, a very, very powerful system that no researcher can live without for more than half a day (maybe that’s a bit of an exaggeration on my part). But they are there for other functions: finding a paper (or a version of it) for free, a quick check of first-sight credibility, a glance at the group of coauthors, an article-level comparison, etc. – not for replacing highly structured databases like Scopus. It is also worth noting that the Google system is mainly powered by AI, and its probabilistic methods, albeit useful, also create a lot of data problems (citation inflation, incorrect indexing, frequency errors, etc.).”

Mike Thelwall

Mike Thelwall, Professor of Data Science at the University of Wolverhampton, is a well-known expert in research metrics and citation searching. He tells Against the Grain readers that “I like Scopus for its wide coverage of academic literature and multiple clear field categories. I rarely need old data, so I’m not personally concerned by its lower coverage before 1996. A key advantage of Scopus over the Web of Science is its greater coverage of non-English sources as well as social sciences, arts and humanities content.

For mainly English language research,” Thelwall continues, “this can have disadvantages when using citation data to benchmark against field citation averages. This is because non-English journal articles tend to be less cited, lowering the benchmark in categories containing many of them. Scopus also indexes as journal articles some publications that do not seem to me to come from standard academic journals and that attract few citations. These also lower the benchmark figures in the categories containing them. These are part of the reason why the UK chose Clarivate’s Web of Science for its Research Excellence Framework citation data.”
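The arithmetic behind Thelwall’s point is simple: a field benchmark is essentially the mean citation count of everything in the category, so a batch of rarely cited items dilutes it. A toy Python illustration with invented numbers:

    # Toy illustration: adding many low-cited items lowers a category's
    # mean-citation benchmark. All numbers are invented.
    core_articles = [12, 8, 15, 10, 9]        # citations to typical articles
    low_cited_items = [0, 1, 0, 2, 0, 1, 0]   # items attracting few citations

    benchmark_without = sum(core_articles) / len(core_articles)
    benchmark_with = (sum(core_articles) + sum(low_cited_items)) / (
        len(core_articles) + len(low_cited_items))

    print(round(benchmark_without, 2))  # 10.8
    print(round(benchmark_with, 2))     # 4.83

Against the diluted benchmark, an article of merely average impact for its field suddenly looks well above average – exactly the distortion that matters when such figures feed formal evaluations.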

Andy Yeung

Hong Kong University Management Professor Andy Yeung has studied Scopus and believes the database “has wider coverage than Web of Science (WOS),” and that Dimensions and PubMed “can be great, but the former gives no publication type to journal articles, while the latter records no citation counts.”

Erwin Krauskopf

Erwin Krauskopf of Chile’s Universidad Andrés Bello in Santiago has written about citation analysis and believes that responsibility for data accuracy is shared by many. “Although Elsevier is accountable for missing or incomplete information, the editorial management teams of the journals are negligent, as it is their responsibility to confirm that the information uploaded by Scopus is correct. Furthermore, each researcher should be checking whether the Scopus record contains correct and complete data.”

Lokman Meho

Well-known researcher Lokman Meho, University Librarian at the American University of Beirut, suggests that “Scopus should standardize institutions’ names and merge their respective records into a single entry (as Web of Science did). Scopus should also fix data errors itself and not rely on institutions to do that. Be consistent in indexing or not indexing certain types of documents (e.g., abstracts, letters, etc.); indexing such document types for only a select list of journals is not wise. Also, CiteScore should not include in its calculation editorial material, letters, conference reviews, errata, and notes. FWCI (Field-Weighted Citation Impact) should not be computed for documents other than journal articles and review articles, because coverage of all other types of materials/documents in the Scopus database is minimal.”
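For readers who have not worked with it, FWCI is a ratio: a document’s actual citations divided by the average citations of comparable documents (same field, publication year, and document type), so a value of 1.0 means “cited exactly as expected.” A minimal Python sketch of that ratio, with invented numbers (Scopus’s real method also splits credit across documents assigned to multiple fields):

    # Minimal FWCI-style sketch: actual citations divided by the mean
    # citations of comparable documents (same field, year, and type).
    # All numbers are invented for illustration.

    def fwci(actual_citations, peer_citation_counts):
        """Ratio of a document's citations to its peer-group average."""
        expected = sum(peer_citation_counts) / len(peer_citation_counts)
        return actual_citations / expected

    # An article with 18 citations in a cohort averaging 12 citations:
    print(round(fwci(18, [10, 14, 12, 11, 13]), 2))  # 1.5

Meho’s objection follows directly: the expected value in the denominator is only trustworthy when the comparison cohort is well covered, which in Scopus holds mainly for journal and review articles.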

Saif Ryalat

Saif Ryalat, consultant and coordinator at The University of Jordan School of Medicine Research Office, has written several analytical articles. “Where I used Scopus and other bibliometric databases in my research methods, I found the Scopus database the most comprehensive bibliometric database for bibliometric analysis research: it provides several analytic tools that are not available in other databases (e.g., PubMed) and covers a larger number of journals (e.g., compared to Web of Science). Moreover, journal coverage has increased further in the latest release, reaching 40,503 journals. I recently published an article that compared different bibliometric databases in terms of their merits and demerits for bibliometric analysis.”

Thelwall advises Against the Grain readers to take a broad approach to citation analysis. “If citation benchmarking data is not needed, then I would be happy to use Dimensions for most citation analysis purposes, and I expect that the field of viable citation indexes will include several extra players within the next five years. The newer services still need some time to ensure that they cover the core needs of research evaluation effectively, and also to gain trust so that they can be used in large-scale formal research evaluations. For large-scale, important research evaluations that influence funding or decision making, I would still recommend WoS (Web of Science) and Scopus – of course, to support peer review or expert judgment rather than to replace it. For other purposes, the choice depends on the needs of the evaluation. Web-based services are attractive for non-English nations for their greater coverage, especially if international benchmarking of citation data is not needed. In the social sciences, arts and humanities, if citation data is used at all, then web-based services may also be attractive on the same basis.”

In a recent article in the Journal of Data and Information Science, Dag Aksnes and Gunnar Sivertsen studied how well Scopus and Web of Science covered Norwegian scientific and scholarly publications over two years, using the Norwegian Science Index as the criterion for analysis. Finding wide variance in “how well Scopus and Web of Science cover the publication output of individual institutions,” they observe that “after decades of letting commercial providers act as the ‘neutral guarantors of quality,’ we wish to empower the academic communities to take back responsibility for criteria and procedures also in the domain of bibliometrics for research evaluation and funding.” A bold mandate, but one that has yet to be realized.

Although Elsevier could clearly profit from more user interaction and input, researchers and institutions share an important role here as well – one that so far hasn’t gone much beyond criticism of pricing or coverage. Institutions might benefit from paying attention to the structure and requirements of research assessment, in order to give the industry a better definition of the factors, statistics, and data that assessment actually requires. Today, searching and citation research are done in the equivalent of a “black box,” which is good for no one; perhaps now is the time to change that.

Nancy K. Herther is Sociology/Anthropology Librarian at the University of Minnesota Libraries, Twin Cities campus. [email protected]
