Dealing with performance problems of the Wikidata API

Currently, I am testing the library and have run into the following performance problems concerning the Wikidata API.

Applying TIP to a use case

In the context of my research project Q-Aktiv, together with Tetyana Melnychuk and Lukas Galke, we examine the development of research activities on the lipid “cholesterol”. By analyzing the keywords assigned to papers on cholesterol, we aim to trace the development of this research. Early findings indicate an emphasis of publication activity on the topics of cardiovascular diseases and nutrition from the 1950s to the early 1970s, while from the late 1970s onwards an increasing number of gynecology-related keywords can be observed among the scientific papers dealing with cholesterol. This finding suggests a growing interest in gynecological studies from the 1970s to the present day.

Deploying the “Take it Personally” (TIP) library, we aim to understand whether this shifting (or expanding) interest is connected to a change (or expansion) in the group of researchers themselves. To this end, the LIVIVO database, which includes Medline, the world's leading database for medicine, supplemented by other life science databases, was filtered for scientific articles on cholesterol (indicated by MeSH vocabulary). More than 14 million papers from the food and pharma fields were identified in order to monitor the development of cholesterol papers over time.

Using the PubMed ID assigned by Medline and the DOI as identifiers, we queried the Wikidata API via SPARQL to find out whether the authors of the cholesterol-related scientific papers are known in Wikidata. If they are listed, we request the API for the identifier of the specific author(s); in Wikidata this identifier is called the Q-number (Q-Id).
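
To illustrate the kind of lookup involved, here is a minimal sketch of resolving a single PubMed ID against the public Wikidata SPARQL endpoint; it is not the TIP code itself, and the function name and example ID are only for illustration. P698 (PubMed ID) and P50 (author) are the Wikidata properties used; a DOI-based lookup would use P356 analogously.

```python
import requests

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def author_qids_for_pmid(pmid: str):
    """Return the Q-Ids of all authors of the paper with the given PubMed ID.

    P698 = PubMed ID, P50 = author (item-valued). Papers whose authors are
    only stored as name strings (P2093) yield no author bindings here.
    """
    query = """
    SELECT ?paper ?author WHERE {
      ?paper wdt:P698 "%s" .
      ?paper wdt:P50 ?author .
    }
    """ % pmid
    response = requests.get(
        WDQS_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "TIP-example/0.1"},
        timeout=60,
    )
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    # The author URI ends with the Q-number, e.g. .../entity/Q12345
    return [b["author"]["value"].rsplit("/", 1)[-1] for b in bindings]

if __name__ == "__main__":
    # Hypothetical example call; replace with a PubMed ID from the corpus.
    print(author_qids_for_pmid("12345678"))
```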

Unstable results

When confronting the API with 14 million PubMed IDs and DOIs, it was noticeable that Wikidata was only able to identify a small minority of a few thousand articles. Splitting the dump into smaller chunks already helped to raise that number substantially, to more than 400,000 – which is still few for an incoming request of 14 million documents. However, the Wikidata API still delivers unstable results: when requesting the Q-numbers for the same initial set of documents, the results sometimes deviate from each other by thousands.

The performance problems seem to be known within the Wikidata community (they are mentioned, for example, in the discussion connected to Mietchen/Taraborelli 2018). As a first workaround we chunked the input data into smaller units and decided to run no more than three queries simultaneously. The purpose was to obtain a greater number of results from the single data set, with the partial results supplementing each other, as sketched below.
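
The following sketch shows the idea of this workaround; the chunk size, helper names and use of the public endpoint are assumptions for illustration, not the TIP implementation. Identifiers are grouped into VALUES batches and at most three requests run in parallel.

```python
import requests
from concurrent.futures import ThreadPoolExecutor

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"
CHUNK_SIZE = 200      # assumed chunk size; tune against timeouts
MAX_PARALLEL = 3      # never more than three queries in flight

def chunked(ids, size=CHUNK_SIZE):
    """Yield successive chunks of at most `size` identifiers."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def query_chunk(pmid_chunk):
    """Resolve one chunk of PubMed IDs (P698) to paper/author Q-Ids in one VALUES query."""
    values = " ".join('"%s"' % pmid for pmid in pmid_chunk)
    query = """
    SELECT ?pmid ?paper ?author WHERE {
      VALUES ?pmid { %s }
      ?paper wdt:P698 ?pmid .
      OPTIONAL { ?paper wdt:P50 ?author . }
    }
    """ % values
    response = requests.get(
        WDQS_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "TIP-example/0.1"},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

def resolve(all_pmids):
    """Run the chunks with at most MAX_PARALLEL concurrent requests."""
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
        for bindings in pool.map(query_chunk, chunked(all_pmids)):
            yield from bindings
```

Repeating such a run over the same input and taking the union of the returned bindings is how the partial results can supplement each other.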

Current approach: creating our own Wikidata triple store

Apart from this workaround, we are focusing on building our own triple store for Wikidata with “Singularity” in order to obtain faster and more comprehensive answers from Wikidata. To anyone pursuing a similar approach of building their own Wikidata triple store, I recommend the blog post by Iazharichir (2018). As we are still in the process, I will report on our experiences soon.

References

Mietchen, Daniel; Taraborelli, Dario (2018): Wikidata, Wikibase, and a federated ecosystem of structured knowledge for open science. figshare. Presentation. https://doi.org/10.6084/m9.figshare.7195358.v3

Iazharichir (2018): Importing Wikidata Dumps — The Easy Part. Blog post, 14 May 2018, Topicseed, https://topicseed.com/blog/importing-wikidata-dumps