
Frequent items among the best Danish films


Bo Green Jensen has written the book De 25 bedste danske film, where one finds, among others, Vredens Dag, Kundskabens træ, Babettes gæstebud and Den eneste ene. I have just entered this short list of 25 films, published in 2002, into Wikidata via the “catalog” property. Once that is done, one can use the Wikidata Query Service to find, with a SPARQL database query, items that recur among the films. Such a SPARQL query could look like this:

SELECT (COUNT(?item) AS ?count) ?value ?valueLabel WHERE {
  ?item wdt:P972 wd:Q12307844 .
  ?item ?property ?value .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],da,en". }
}
GROUP BY ?value ?valueLabel
HAVING (COUNT(?item) > 1)
ORDER BY DESC(?count)

This version counts films and orders the items by how many films each item appears in. The information in Wikidata is probably not quite complete. With Magnus Manske's Listeria tool one can, however, have a table constructed which shows that each individual film is reasonably well covered.

The SPARQL query can be found here and the result can be seen here.
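The query above groups only on the shared value, so it does not show under which property a shared item occurs (cast member, director, country of origin, etc.). A variant that also returns the property label, restricted to direct claims, could look like the sketch below; I have not saved this one at the Query Service, so it may need small adjustments:

SELECT (COUNT(?item) AS ?count) ?propertyLabel ?valueLabel WHERE {
  ?item wdt:P972 wd:Q12307844 .
  ?item ?p ?value .
  # Map the direct-claim predicate back to its property item so it can be labelled
  ?property wikibase:directClaim ?p .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],da,en". }
}
GROUP BY ?property ?propertyLabel ?value ?valueLabel
HAVING (COUNT(?item) > 1)
ORDER BY DESC(?count)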

It is not surprising that one of the items found for all 25 films is that they are listed in De 25 bedste danske film. That is something of a tautology… If we move further down in frequency, we find that Bodil Kjer and Anne Marie Helger are the highest-ranked persons.

Bodil Kjer is probably associated mostly with the grey-toned films of the 1940s and 1950s – in the list one finds her as an actress in Otte akkorder, John og Irene and Mød mig på Cassiopeia – but in her later career she also made her mark, partly as the frail lady in Strømer, partly in the first Danish Oscar-winning feature film. She is not a surprise.

What I find surprising is that Anne Marie Helger comes in with 5 items, making her the second-highest placed person on the list. She is an actress in Strømer, Johnny Larsen, of course Koks i kulissen, and Erik Clausen's De frigjorte. She also figures as a screenwriter on Christian Braad Thomsen's film.

A notch further down come Erik Balling, Ebbe Rode, Ib Schønberg and Anders Refn. Balling is the producer of two films on the list and was responsible for both direction and screenplay on Poeten og Lillemor. Anders Refn is the film editor on two and also had a double role with direction and screenplay for Strømer.

My namesake Finn Nielsen is on the list in connection with three films: Strømer, Johnny Larsen and Babettes gæstebud. Incidentally, he also gave a fine performance in Kærlighedens smerte, which did not make the list since its director is already represented with Kundskabens træ.

Sweden is listed as co-production country on four films. It is mostly the films of the later years, but the first of them is actually Sult, which is from the 1960s.

And, by the way, Bodil Kjer should be counted one more time: as an extra, 26th entry, Bo Green Jensen lists the Far til fire series. That series features a toy elephant named Bodil Kjer…


“Og så er der fra 2018 og frem øremærket 0,5 mio. kr. til Dansk Sprognævn til at frikøbe Retskrivningsordbogen.”


From Peter Brodersen I hear that the budget of the Danish government for next year allocates funds to Dansk Sprognævn for the release of Retskrivningsordbogen – the official Danish spelling dictionary.

It is mentioned briefly in an announcement from the Ministry of Culture: “Og så er der fra 2018 og frem øremærket 0,5 mio. kr. til Dansk Sprognævn til at frikøbe Retskrivningsordbogen.” (“And from 2018 onwards, 0.5 million DKK is earmarked for Dansk Sprognævn to free Retskrivningsordbogen.”): 500,000 DKK allocated for the release of the dataset.

It is not clear under which conditions it will be released. An announcement from Dansk Sprognævn writes “til sprogteknologiske formål” (for language technology purposes). I trust it is not just for language technology purposes, but for every purpose!?

If it is to be used in free software/databases, then a CC0 or better license is a good idea. We are still waiting for Wikidata for Wiktionary, the as-yet vaporware multilingual, collaborative and structured dictionary. This resource is CC0-based. The “old” Wiktionary has, surprisingly, not been used that much by natural language processing researchers, perhaps because of the anarchistic structure of Wiktionary. Wikidata for Wiktionary could hopefully help us structure lexical data and improve the size and utility of lexical information. With Retskrivningsordbogen as CC0, it could be imported into Wikidata for Wiktionary and extended with multilingual links and semantic markup.

The problem with Andreas Krause


I first seem to have run into the name “Andreas Krause” in connection with NIPS 2017. Statistics from the Wikidata Query Service show “Andreas Krause” to be one of the most prolific authors at that particular conference.
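The kind of statistics I am referring to can be sketched with a query along the lines below. Note that wd:Q00000000 is only a placeholder for the Wikidata item of the NIPS 2017 proceedings, and works whose authors are linked as items (P50) rather than as author name strings (P2093) would need a similar count over the author labels:

# Count how many works in a given proceedings each author name string appears on
SELECT ?author_name (COUNT(?work) AS ?works) WHERE {
  ?work wdt:P1433 wd:Q00000000 .    # published in: placeholder proceedings item
  ?work wdt:P2093 ?author_name .
}
GROUP BY ?author_name
ORDER BY DESC(?works)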

But who is “Andreas Krause”?

Google Scholar lists five researchers named “Andreas Krause”: an ETH Zürich machine learning researcher, a pharmacometrics researcher, a wood researcher, an economics/networks researcher based in Bath, and a Dresden-based battery/nano researcher. All the NIPS Krause works should likely be attributed to the machine learning researcher, and a reading of the works reveals the affiliation to be ETH Zürich.

An ORCID search reveals six “Andreas Krause”. Three of them have no or almost no further information about them beyond the name and the ORCID identifier.

There is an Immanuel Krankenhaus Berlin rheumatologist who does not seem to be in Google Scholar.

There may even be more than these six “Andreas Krause”. For instance, the article Emotional Exhaustion and Job Satisfaction in Airport Security Officers – Work–Family Conflict as Mediator in the Job Demands–Resources Model carries the affiliation “School of Applied Psychology, University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland”, so the topic and affiliation do not quite fit any of the previously mentioned “Andreas Krause”.

One interesting ambiguity is Classification of rheumatoid joint inflammation based on laser imaging, which is obviously a rheumatology work but also has some machine learning aspects. The computer scientist/machine learner Volker Tresp is a co-author and the work is published in an IEEE venue. There is no affiliation for the “Andreas Krause” on the paper. It is likely the work of the rheumatologist, but one could also guess the machine learner.

Yet another ambiguity is Biomarker-guided clinical development of the first-in-class anti-inflammatory FPR2/ALX agonist ACT-389949. The topic overlaps somewhat between pharmacometrics and the domain of the Berlin researcher. The affiliation is to “Clinical Pharmacology, Actelion”, but interestingly, Google Scholar does not associate this paper with the pharmacometrics researcher.

In conclusion, author disambiguation may be very difficult.

Scholia can show the six Andreas Krauses. But I am not sure that helps us very much.

Female GitHubbers


In Wikidata, we can record the GitHub user name with the P2037 property. As we typically also have the gender of the person, we can make a SPARQL query that yields all female GitHub users recorded in Wikidata. There are not many of them: currently just 27.

The Python code below gets the SPARQL results into a Python Pandas DataFrame and queries the GitHub API for the follower count, adding the information to a dataframe column. Then we can rank the female GitHub users according to follower count and format the results in an HTML table.

Code

 

import re
from time import sleep

import requests
import pandas as pd

query = """
SELECT ?researcher ?researcherLabel ?github ?github_url WHERE {
  ?researcher wdt:P21 wd:Q6581072 .
  ?researcher wdt:P2037 ?github .
  BIND(URI(CONCAT("https://github.com/", ?github)) AS ?github_url)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
"""
response = requests.get("https://query.wikidata.org/sparql",
                        params={'query': query, 'format': 'json'})
researchers = pd.json_normalize(response.json()['results']['bindings'])

URL = "https://api.github.com/users/"
followers = []
for github in researchers['github.value']:
    # Only query GitHub for plain alphanumeric user names; skip the rest
    if not re.match('^[a-zA-Z0-9]+$', github):
        followers.append(0)
        continue
    url = URL + github
    try:
        response = requests.get(url,
                                headers={'Accept':'application/vnd.github.v3+json'})
        user_followers = response.json()['followers']
    except Exception:
        user_followers = 0
    followers.append(user_followers)
    print("{} {}".format(github, user_followers))
    sleep(5)    # stay well below the GitHub API rate limit

researchers['followers'] = followers

columns = ['followers', 'github.value', 'researcherLabel.value',
           'researcher.value']
print(researchers.sort_values(by='followers', ascending=False)[columns].to_html(index=False))

Results

The top one is Jennifer Bryan, a Vancouver statistician whom I do not know much about, but she seems to be involved in RStudio.

Number two, Jessica McKellar, is a well-known figure in the Python community. Numbers four and five, Olga Botvinnik and Vanessa Sochat, are a bioinformatician and a neuroinformatician, respectively (or were: Sochat apparently left the Poldrack lab in 2016 according to her CV). Further down the list we have people from the wiki world: Sumana Harihareswara, Lydia Pintscher and Lucie-Aimée Kaffee.

I was surprised to see that Isis Agora Lovecruft is not there, but there is no Wikidata item representing her. She would have been number three.

Jennifer Bryan and Vanessa Sochat are almost “all-greeners”. Sochat has just a single non-green day.

I suppose the Wikidata GitHub information is far from complete, so this analysis is quite limited.

followers github.value researcherLabel.value researcher.value
1675 jennybc Jennifer Bryan http://www.wikidata.org/entity/Q40579104
1299 jesstess Jessica McKellar http://www.wikidata.org/entity/Q19667922
475 triketora Tracy Chou http://www.wikidata.org/entity/Q24238925
347 olgabot Olga B. Botvinnik http://www.wikidata.org/entity/Q44163048
124 vsoch Vanessa V. Sochat http://www.wikidata.org/entity/Q30133235
84 brainwane Sumana Harihareswara http://www.wikidata.org/entity/Q18912181
75 lydiapintscher Lydia Pintscher http://www.wikidata.org/entity/Q18016466
56 agbeltran Alejandra González-Beltrán http://www.wikidata.org/entity/Q27824575
22 frimelle Lucie-Aimée Kaffee http://www.wikidata.org/entity/Q37860261
21 isabelleaugenstein Isabelle Augenstein http://www.wikidata.org/entity/Q30338957
20 cnap Courtney Napoles http://www.wikidata.org/entity/Q42797251
15 tudorache Tania Tudorache http://www.wikidata.org/entity/Q29053249
13 vedina Nina Jeliazkova http://www.wikidata.org/entity/Q27061849
11 mkutmon Martina Summer-Kutmon http://www.wikidata.org/entity/Q27987764
7 caoyler Catalina Wilmers http://www.wikidata.org/entity/Q38915853
7 esterpantaleo Ester Pantaleo http://www.wikidata.org/entity/Q28949490
6 NuriaQueralt Núria Queralt Rosinach http://www.wikidata.org/entity/Q29644228
2 rongwangnu Rong Wang http://www.wikidata.org/entity/Q35178434
2 lschiff Lisa Schiff http://www.wikidata.org/entity/Q38916007
1 SigridK Sigrid Klerke http://www.wikidata.org/entity/Q28152723
1 amrapalijz Amrapali Zaveri http://www.wikidata.org/entity/Q34315853
1 mesbahs Sepideh Mesbah http://www.wikidata.org/entity/Q30098458
1 ChristineChichester Christine Chichester http://www.wikidata.org/entity/Q19845665
1 BinaryStars Shima Dastgheib http://www.wikidata.org/entity/Q42091042
1 mollymking Molly M. King http://www.wikidata.org/entity/Q40705344
0 jannahastings Janna Hastings http://www.wikidata.org/entity/Q27902110
0 nmjakobsen Nina Munkholt Jakobsen http://www.wikidata.org/entity/Q38674430

Find titles of all works published by DTU Cognitive Systems in 2017


Find titles of all works published by DTU Cognitive Systems in 2017! How difficult can that be, to identify all titles of works from a research organization? With Wikidata and the Wikidata Query Service (WDQS) at hand, it shouldn't be that difficult. Nevertheless, I ran into a few hitches:

  1. There is what we can call the “Nathan Churchill Problem”: Nathan Churchill was at one point affiliated with our research section, Cognitive Systems, and wrote papers, e.g., together with our Morten Mørup. One paper clearly identifies him as affiliated with our section. Searching the DTU website yields no homepage for him, though. According to a newer paper he is now at St. Michael’s Hospital, Toronto. So is he no longer affiliated with the Cognitive Systems section? That is somewhat difficult to establish with credible and citable sources. If he is not, then any simple SPARQL query on the WDQS for Cognitive Systems papers will return his new papers, which should not be counted as Cognitive Systems section papers. If we could point to a source indicating that his affiliation with our section has ended, we could add a qualifier to the P1416 property in his Wikidata entry and extend the SPARQL query (see the sketch after this list). What I ended up doing was to explicitly filter out two of Churchill’s publications with the ugly line “FILTER(?work != wd:Q42595201 && ?work != wd:Q36384548)”. The problem is of course not confined to Churchill. For instance, Scholia currently lists new publications by our Søren Hauberg on the Scholia page for DIKU, a department where he was previously affiliated. We discussed the affiliation problem a bit in the Scholia paper, see page 253 (page 17).
  2. Datetime datatype conversion with xsd:dateTime. The filter on the date uses this line: “FILTER(?publication_datetime >= "2017-01-01"^^xsd:dateTime)”. Something like “FILTER(?publication_datetime >= xsd:dateTime(2017))” does not work.
  3. Missing data. It is difficult to establish how complete the Wikidata listing of publications is for our section. Scraping Google Scholar, PubMed and our local university publication database could be a possibility, but this is far from streamlined with the tools I have developed.
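For the first hitch, here is a sketch of how an end time qualifier (P582) on the affiliation statement could be used to exclude researchers whose affiliation has ended, once a citable source exists; this is not part of the query I actually used:

# Sketch: only keep researchers whose P1416 affiliation statement has no end time qualifier
SELECT ?work ?workLabel WHERE {
  ?researcher p:P1416 ?affiliation .
  ?affiliation ps:P1416 wd:Q24283660 .
  FILTER NOT EXISTS { ?affiliation pq:P582 [] }   # the affiliation has not ended
  ?work wdt:P50 ?researcher ;
        wdt:P31 wd:Q13442814 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,da". }
}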

The full query is listed below and the result is available from this link. Currently, 48 results are returned.

#defaultView:Table
SELECT ?workLabel 
WITH {
  SELECT 
    ?work (MIN(?publication_datetime) AS ?datetime)
  WHERE {
    # Find CogSys work
    ?researcher wdt:P108 | wdt:P463 | wdt:P1416/wdt:P361* wd:Q24283660 .
    ?work wdt:P50 ?researcher .
    ?work wdt:P31 wd:Q13442814 .
    
    # Nathan Churchill seems not longer to be affiliated!?
    FILTER(?work != wd:Q42595201 && ?work != wd:Q36384548)
    
    # Filter to year 2017
    ?work wdt:P577 ?publication_datetime .
    FILTER(?publication_datetime >= "2017-01-01"^^xsd:dateTime)
  }
  GROUP BY ?work 
} AS %results
WHERE {
  INCLUDE %results
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,da,de,es,fr,ja,nl,ru,zh". }
}

 

Do we have a final schema for Wikicite?


No, Virginia, we do not have a final schema for Wikicite IMHO.

Wikicite is a project that focuses on sources in the Wikimedia universe. Currently, the most active part of Wikicite is the setup of bibliographic data from scientific articles in Wikidata with the tools of Magnus Manske, the Fatameh duo and the GeneWiki people, and in particular James Hare, Daniel Mietchen and Magnus Manske have been active in the automatic and semi-automatic setup of data. Jakob Voß’ statistics say that we have – as of mid-October 2017 – metadata from almost 10 million publications in Wikidata and over 36 million recorded citations between the described works.

Given that so many bibliographic items have been set up in Wikidata, it may be worth asking whether we actually have a schema for the setup of this data. While we surely have a sort-of convention that tools and editors follow, it is not complete and probably subject to change.

Here are some Wikicite-related schema issues:

  1. What instance is a scientific article? Most tools use instance of Q13442814, currently “scientific article” in English. But what is this? In English, “scientific” means something different than the usual translation into Danish (“videnskabelig”) or German (“wissenschaftlicher”), and these words are used in the labels of Q13442814. “Scientific” usually only entails natural science, leaving out social science and the humanities (while “videnskabelig”/“wissenschaftlicher” entails social science and humanities too). An attempt to fix this problem is to call these articles “scholarly articles”. It is interesting that what is one of the most used classes in Wikidata – if not the most used class – has a language ambiguity. I see no reason to restrict Q13442814 to only the English sense of science. It is all too difficult to distinguish between scientific disciplines: think of computational humanities.
  2. What about the ontology of scientific works? Currently, Q13442814 is set as a subclass of academic journal articles, but this is not how we use it, as conference articles in proceedings are also set to Q13442814. Is a so-called abstract a “scientific article”? “Abstracts” appear, e.g., in neuroimaging conferences, where they are fully referenceable items published in proceedings or supplementary journal issues.
  3. What are the instances of scientific article in Wikidata describing? A work or an edition? What happens if the article is reprinted (it happens to important works)? Should we then create a new item? Or amend the old item? If we create a new item, then how should we link the two? Should we create a third item as a work item? Should items in preprint archives have their own item? Should that depend on whether the preprint version and the canonical version are more or less the same?
  4. How do we represent the language of an article? There are two generally used properties: original language of work and language of the work. There is a discussion about deleting one of them.
  5. How do we represent an author? Today an author can be linked to the article via the P50 property. However, the author label may differ from the name written in the article (we may refer to this issue as the “Natalie Portman Problem”, as she published a scientific article as “Natalie Hershlag”). P1932 as a qualifier to P50 may be used to capture the way the name is written in the article – a possible solution (see the query sketch after this list). Recently, Manske’s author name resolver has started to copy the short author name to the qualifier under P50. For referencing, there is still the problem that the referencing software would likely need to determine the surname, and this is not trivial for authors with suffixes and Spanish authors with multiple surnames.
  6. How do we record the affiliation of a paper? Publicly funded universities and other research entities would like to make statistics on, for instance, their paper production, but this is not possible to do precisely with today’s Wikidata, as papers are usually not affiliated with organizations – only indirectly through the author affiliation. And the author affiliation might change as the author moves between institutions. We already noted this problem in the first article we wrote about Scholia.
  7. How do we record the type of scientific publication? There are various subtypes, e.g., systematic review, original article, erratum, “letter”, etc. Or the state of the article: submitted, under review, peer-reviewed, not peer-reviewed. The “genre” and the “instance of” properties can be used, but I have seen no ruling convention.
  8. How do we record which software and which datasets have been used in the article, e.g., for digital preservation? Currently, we are using “used” (P2283). But should we have dedicated properties, e.g., “uses software”? Do we have a schema for datasets and software?
  9. How do we record the formatting of the title, e.g., case? Bibliographic reference management software may choose to capitalize some words. In BibTeX you have the possibility to format the title using LaTeX commands. Detailed formatting of titles in Wikidata is currently not done, and I do not believe we have dedicated properties to handle such cases.
  10. How do we manage journals that change titles? For instance, for BMJ we have several items covering the name changes: Q546003, Q15746654, and Q28464921. Is this how we should do it? There is the P156 property to connect subsequent versions.
  11. How should we handle series of conference proceedings? A particular article can be “published in” a proceedings, and such a proceedings may be part of a “series” that is a “conference proceedings series”. However, as far as I recall, some Wikidata bot(s) may link articles directly as “published in” the conference proceedings series: such series can have ISSNs and look like ordinary scientific journals.
  12. When is an article published? A number of publishers set a formal publication date in the future for an article that is actually published before that formal date. In Wikidata there is, to my knowledge, only a single property for the publication date. Preprints yield yet other publication dates.
  13. A minor issue is P820, arXiv classification. According to the documentation it should be used as a qualifier to P818, the arXiv identifier property. Embarrassingly, I overlooked that, and the Scholia arXiv extraction program and QuickStatements generator output it as a property in its own right rather than as a qualifier.
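As an illustration of the fifth point, the P1932 (“stated as”) qualifier can be read back with a query like the sketch below, where wd:Q00000001 is a placeholder for a specific article item:

# List the authors of one article together with the name as stated in the article, if recorded
SELECT ?author ?authorLabel ?name_as_stated WHERE {
  wd:Q00000001 p:P50 ?author_statement .
  ?author_statement ps:P50 ?author .
  OPTIONAL { ?author_statement pq:P1932 ?name_as_stated . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}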

Update:

Do we have a schema for datasets and software? Well, yes, Virginia. For software Katherine Thornton & Co. have produced Modeling the Domain of Digital Preservation in Wikidata.

Some information about Scholia


[Figure: co-author-normalized citations plot from Scholia]

Scholia is mostly a web service developed on GitHub at https://github.com/fnielsen/scholia in an open-source fashion. It was inspired by discussions at the WikiCite 2016 meeting in Berlin. Anyone can contribute as long as their contribution is under GPL.

I started to write the Scholia code back in October 2016, according to the initial commit at https://github.com/fnielsen/scholia/commit/484104fdf60e4d8384b9816500f2826dbfe064ce. Since then, particularly Daniel Mietchen and Egon Willighagen have joined in, and Egon has lately been quite active.

Users can download the code and run the web service from their own computer if they have a Python Flask development environment. Otherwise, the canonical website for Scholia is https://tools.wmflabs.org/scholia/, which anyone with an Internet connection should be able to view.

So what does Scholia do? The initial “application” was a “static” web page with a researcher profile/CV of myself based on data extracted from Wikidata. It is still available from http://people.compute.dtu.dk/faan/fnielsenwikidata.html. I added a static page for my research section, DTU Cognitive Systems, showing the scientific paper production and a coauthor graph. It is available here: http://people.compute.dtu.dk/faan/cognitivesystemswikidata.html.

The Scholia web application was an extension of these initial static pages, so that a profile page for any researcher or any organization could be made on the fly. And it is no longer just authors and organizations that have profile pages, but also works, venues (journals or proceedings), series, publishers, sponsors (funders) and awards. We also have “topics” and individual pages showing specialized information about chemicals, proteins, diseases and biological pathways. A rudimentary search interface is implemented.

The content of Scholia’s web pages, with plots and tables, is generated from queries to the Wikidata Query Service – the extended SPARQL endpoint provided by the Wikimedia Foundation. We also pull in text from the introduction of the corresponding articles in the English Wikipedia. We modify the table output of the Wikidata Query Service so that individual items displayed in table cells link back to other items in Scholia.

Egon Willighagen, Daniel Mietchen and I have described Scholia and Wikidata for scientometrics in the 16-page workshop paper “Scholia and scientometrics with Wikidata”, https://arxiv.org/pdf/1703.04222.pdf. The screenshots shown in the paper have been uploaded to Wikimedia Commons. These and other Scholia media files are available in the category page https://commons.wikimedia.org/wiki/Category:Scholia.

Working with Scholia has been a great way to explore what is possible with SPARQL and Wikidata. One plot that I like is the “Co-author-normalized citations per year” plot on the organization pages. There is an example on this page: https://tools.wmflabs.org/scholia/organization/Q24283660. Here the citations to works authored by authors affiliated with the organization in question are counted, normalized by the number of coauthors, and organized in a colored bar chart with respect to year of publication. The colored bar charts have been inspired by the “LEGOLAS” plots of Shubhanshu Mishra and Vetle Torvik.
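The actual query behind the plot is more elaborate, but the idea can be sketched roughly like this for the DTU Cognitive Systems item (this is not the exact query Scholia uses):

# Rough sketch: citations to an organization's works, normalized by the number of coauthors
# and grouped by the publication year of the cited work
SELECT ?year (SUM(?normalized) AS ?normalized_citations) WHERE {
  {
    SELECT ?work (MIN(YEAR(?date)) AS ?year)
           (COUNT(DISTINCT ?citing) AS ?citations)
           (COUNT(DISTINCT ?coauthor) AS ?n_authors) WHERE {
      ?author wdt:P1416 wd:Q24283660 .
      ?work wdt:P50 ?author ;
            wdt:P50 ?coauthor ;
            wdt:P577 ?date .
      OPTIONAL { ?citing wdt:P2860 ?work . }
    }
    GROUP BY ?work
  }
  BIND(?citations / ?n_authors AS ?normalized)
}
GROUP BY ?year
ORDER BY ?year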

Part of the Python Scholia code also works as a command-line script for reference management in the LaTeX/BibTeX environment, using Wikidata as the backend. I have used this Scholia scheme for a couple of scientific papers I have written in 2017. The particular script is currently not well developed, so users would need to be indulgent.

Scholia relies on users adding bibliographic data to Wikidata. Tools from Magnus Manske are a great help as are Fatameh of “T Arrow” and “Tobias1984” and the WikidataIntegrator of the GeneWiki people. Daniel Mietchen, James Hare and a user called “GZWDer” have been very active adding much of the science bibligraphic information and we are now past 2.3 million scientific articles on Wikidata. You can count them with this link: https://tinyurl.com/yaux3uac