
How to quickly generate word analogy datasets with Wikidata


One popular task in computational linguistics/natural language processing is the word analogy task: Copenhagen is to Denmark as Berlin is to …?

With queries to Wikidata Query Service (WDQS) it is reasonably easy to generate word analogy datasets in whatever (Wikidata-supported) language you like. For instance, for capitals and countries, a WDQS SPARQL query that returns results in Danish could go like this:

select
  ?country1Label ?capital1Label
  ?country2Label ?capital2Label
where {
  # First country: capital (P36), UN membership (P463 / Q1065: United Nations)
  # and population (P1082) over five million.
  ?country1 wdt:P36 ?capital1 .
  ?country1 wdt:P463 wd:Q1065 .
  ?country1 wdt:P1082 ?population1 .
  filter (?population1 > 5000000)
  # Second country with the same constraints.
  ?country2 wdt:P36 ?capital2 .
  ?country2 wdt:P463 wd:Q1065 .
  ?country2 wdt:P1082 ?population2 .
  filter (?population2 > 5000000)
  # Avoid pairing a country with itself.
  filter (?country1 != ?country2)
  # Return the labels in Danish.
  service wikibase:label
    { bd:serviceParam wikibase:language "da". }
}
limit 1000

Follow this link to get to the query and press “Run” to get the results. The table can be downloaded in CSV format (see under “Download”). One issue to note is that you get multiple entries for countries with multiple capital cities, e.g., Sydafrika (South Africa) is listed with Pretoria, Kapstaden (Cape Town) and Bloemfontein.
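
If you want the dataset programmatically rather than through the web interface, the query can also be sent directly to the WDQS endpoint. Below is a minimal Python sketch; the use of the requests package and the one-analogy-per-line output (similar to word2vec’s questions-words files) are my own choices here, not anything prescribed by the Query Service:

import requests

# The capital/country query from above, in compact form.
query = """
select ?country1Label ?capital1Label ?country2Label ?capital2Label
where {
  ?country1 wdt:P36 ?capital1 ; wdt:P463 wd:Q1065 ; wdt:P1082 ?population1 .
  ?country2 wdt:P36 ?capital2 ; wdt:P463 wd:Q1065 ; wdt:P1082 ?population2 .
  filter (?population1 > 5000000)
  filter (?population2 > 5000000)
  filter (?country1 != ?country2)
  service wikibase:label { bd:serviceParam wikibase:language "da". }
}
limit 1000
"""

# Ask the Wikidata Query Service for SPARQL JSON results.
response = requests.get('https://query.wikidata.org/sparql',
                        params={'query': query, 'format': 'json'})
bindings = response.json()['results']['bindings']

# Print one analogy question per line: capital1 country1 capital2 country2.
for row in bindings:
    print(row['capital1Label']['value'], row['country1Label']['value'],
          row['capital2Label']['value'], row['country2Label']['value'])

Note that multi-word labels will need special handling if the lines are to be fed to tools that expect single-token words.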

The Wikidata scholarly profile page


[Figure: my coauthor graph, rendered with the Wikidata Query Service]

Recently Lambert Heller wrote an overview piece on websites with scholarly profile pages: “What will the scholarly profile page of the future look like? Provision of metadata is enabling experimentation“. There he tabulated the features of the various online sites having scholarly profile pages. These sites include (with links to my entries): ORCID, ResearchGate, Mendeley, Pure and VIVO (I don’t know these two), Google Scholar and Impactstory. One site missing from the comparison is Wikidata. It can produce scholarly profile pages too. The default Wikidata editing interface may not present the data in a nice way – Magnus Manske’s Reasonator does better – but very much of the functionality needed for a scholarly profile page is there.

In terms of the features listed by Heller, I will here go through how Wikidata can be utilized:

  1. Portrait picture: The P18 property can record a Wikimedia Commons image related to a researcher. For instance, you can see a nice photo of neuroimaging professor Russ Poldrack.
  2. Researchers’ alternative names: This is possible with the alias functionality in Wikidata. Poldrack is presently recorded with the canonical label “Russell A. Poldrack” and the alternative names “Russell A Poldrack”, “R. A. Poldrack”, “Russ Poldrack” and “R A Poldrack”. It is straightforward to add more variations.
  3. IDs/profiles in other systems: There are absolutely loads of these links in Wikidata. To name a few deep-linking possibilities: Twitter, Google Scholar, VIAF, ISNI, ORCID, ResearchGate, GitHub and Scopus. Wikidata is very strong in interlinking databases.
  4. Papers and similar: Papers are represented as items in Wikidata, and these items can link to the author via P50. The reverse link is possible with a SPARQL query (see the sketch after this list). Furthermore, on the researcher’s item it is possible to list main works with the appropriate property. Full texts can be linked with the P953 property. PDFs of papers with a compatible license can be uploaded to Wikimedia Commons and/or included in Wikisource.
  5. Uncommon research products: I am not sure what this is, but the developer of software and services can be recorded in Wikidata. For instance, for the neuroinformatics database OpenfMRI it is specified that Poldrack is the creator. Backlinks are possible with SPARQL queries.
  6. Grants, third-party funding: Well, there is a sponsor property, but how it should be utilized for researchers is not clear. With the property, you can specify that a paper or research project was funded by an entity. For the paper The Center for Integrated Molecular Brain Imaging (Cimbi) database you can see that it is funded by the Lundbeck Foundation and Rigshospitalet.
  7. Current institution: Yes. The employer and affiliation properties are there for you. You can see an example of an incomplete list of people affiliated with research sections at my department, DTU Compute, here – automagically generated by Magnus Manske’s Listeria tool.
  8. Former employers, education, etc.: Yes. There are properties for employer, affiliation and education. With qualifiers you can specify the dates of employment.
  9. Self-assigned keywords: Well, as a Wikidata contributor you can create new items and use them to specify your field of work or to label your papers with a main theme.
  10. Concepts from a controlled vocabulary: Whether Wikidata is a controlled vocabulary is up for discussion. Wikidata items can be linked to controlled vocabularies, e.g., Dewey’s, so there you can get some level of control. For instance, the concept “engineer” in Wikidata is linked to BNCF, NDL, GND, ROME, LCNAF, BNF and FAST.
  11. Social graph of followers/friends: No, that is really not possible on Wikidata.
  12. Social graph of coauthors: Yes, that is possible. With Jonas Kress’ work on D3, you get on-the-fly graph rendering in the Wikidata Query Service. You can see my coauthor graph here (it is wobbly at the moment; there is some D3 parameter that needs a tweak).
  13. Citation/attention metadata from the platform itself: No, I don’t think so. You can get page view data from somewhere on the Wikimedia sites. You can also count the number of citations on the fly – to an author, to a paper, etc.
  14. Citation/attention metadata from other sources: No, not really.
  15. Comprehensive search to match/include own papers: Well, perhaps not. Or perhaps. Magnus Manske’s sourcemd and quickstatements tools allow you to copy-paste a PMID or DOI into a form field and press two buttons to grab bibliographic information from PubMed or a DOI source. One-click full-paper upload is not well supported, to my knowledge. Perhaps Daniel Mietchen knows something about this.
  16. Forums, Q&A, etc.: Well, yes and no. You can use the discussion pages on Wikidata, but these pages are perhaps mostly for discussion of editing, rather than the content of the described item. Perhaps Wikiversity could be used.
  17. Deposit own papers: You can upload appropriately licensed papers to Wikimedia Commons or perhaps Wikisource. Then you can link them from Wikidata.
  18. Research administration tools: No.
  19. Reuse of data from outside the service: You’d better believe it! Although Wikidata is there to be used, a mass download from the Wikidata Query Service can run into timeout problems, and to navigate the structure of an individual Wikidata item you need programming skills – at least for the moment. If you are really desperate, you can download the Wikidata dump and Blazegraph and try to set up your own SPARQL server.
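
As a concrete illustration of the reverse link mentioned in point 4, here is a minimal sketch in Python with the sparql-client package; the Q-identifier below is a placeholder that must be replaced with the researcher’s actual Wikidata item:

import sparql  # the sparql-client package

# List works whose P50 (author) statement points to a given researcher.
# wd:Q00000000 is a placeholder Q-identifier; substitute the researcher's.
statement = """
select ?work ?workLabel where {
  ?work wdt:P50 wd:Q00000000 .
  service wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

service = sparql.Service('https://query.wikidata.org/sparql')
for row in service.query(statement).fetchall():
    print(row[1])  # the work's label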

 

So what can we use Wikicite for?


[Figure: bubble chart of journal statistics for OpenfMRI papers, 2016-09-19]

Wikicite is a term for the combination of bibliographic information and Wikidata. While Wikipedia often records books of some notability, it rarely records bibliographic information for less notable works, i.e., individual scientific articles and books for which little third-party information (reviews, literary analyses, etc.) exists. This is not the case with Wikidata, which is now beginning to record lots of bibliographic information for such “lesser works”. What can we use this treasure trove for? Here are a few of my ideas:

  1. Wikidata may be used as a substitute for a reference manager. I record my own bibliographic information in a big BibTeX file and use the bibtex program together with LaTeX when I generate a scientific document with references. It might very well be that the job of the BibTeX file may be taken over by Wikidata. So far we have, to my knowledge, no proper program for extracting the data from Wikidata and formatting it for inclusion in a document. I have begun a “wibtex” program for this, but have only reached 44 lines so far, and it remains to be seen whether this is a viable avenue – whether the structure of Wikidata is good and convenient enough to record data for formatting references, or whether Wikidata is too flexible or too restricted for this kind of application.
  2. Wikidata may be used for “lists of publications” of individual researchers, institutions, research groups and sponsors. Nowadays, I keep a list of publications on a webpage, in a LaTeX document and on Google Scholar. My university has a separate list, and sometimes when I write a research application I need to format the data for inclusion in a Microsoft Word document. A flexible program on top of Wikidata could make dynamic lists of publications.
  3. Wikidata may be used to count citations. During the Wikicite 2016 Berlin meeting I suggested the P2860 property and Tobias quickly created it. P2860 allows us to describe citations between items in Wikidata. Though we managed to use the property a bit for scientific articles during the meeting, it has really been James Hare who has been running with the ball: based on public citation data he has added hundreds of thousands of citations. At the moment this is of course only a very small part of the total number of citations. There are probably tens of millions of scientific papers, each having tens, if not hundreds, of citations, so with the 499,750 citations that James Hare reported on 11 September 2016 we are still far from covering the field: James Hare tweeted that Web of Science claims to have over 1 milliard (billion) citations. The citation counts may be compared to a whole range of context data in Wikidata – author, affiliated institution, journal, year of publication, gender of author and sponsor (funding agency) – so we can get, e.g., the most cited Dane (or one affiliated with a Danish institution), the most cited woman with an image, etc. (see the sketch after this list).
  4. Wikidata may be used as a hub for information sources. Individual scientific articles may point to further resources, such as raw or result data. I have, for instance, added links to the neuroinformatics databases OpenfMRI, NeuroVault and Neurosynth; Wikidata records all papers recorded in OpenfMRI, as far as I can determine. Wikidata is then able to list, say, all OpenfMRI papers or all OpenfMRI authors with Magnus Manske’s Listeria tool.
  5. Wikicite information in Wikidata may be used to support claims in Wikidata itself. As Dario Taraborelli points out this would allow queries like “all statements citing journal articles by physicists at Oxford University in the 1970s”.
  6. Wikidata may be used for other scientometric analyses than counting, e.g., generation of coauthor graphs and cocitation graphs giving context to an author or paper. The bubble chart above shows statistics for the journals of papers in OpenfMRI, generated with the standard Wikidata Query Service bubble chart visualization tool.
  7. Wikidata could be used for citations in Wikipedia. This may very well be problematic, as a large Wikipedia article could have hundreds of references, and each reference would need to be fetched from Wikidata, generating lots of traffic. I tried a single citation on the “OpenfMRI” article (it has later been changed). Some form of inclusion of Wikidata identifiers in Wikipedia references could further Wikipedia bibliometrics, e.g., determine the most cited author across all Wikipedias.
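
To make point 3 concrete, here is a sketch of an on-the-fly citation count over the P2860 property, again in Python with the sparql-client package; the author Q-identifier is a placeholder:

import sparql  # the sparql-client package

# Count incoming P2860 ("cites") links for each work by a given author.
# wd:Q00000000 is a placeholder Q-identifier; substitute the researcher's.
statement = """
select ?workLabel (count(?citer) as ?citations) where {
  ?work wdt:P50 wd:Q00000000 .
  ?citer wdt:P2860 ?work .
  service wikibase:label { bd:serviceParam wikibase:language "en" . }
} group by ?workLabel
order by desc(?citations)
"""

service = sparql.Service('https://query.wikidata.org/sparql')
for work_label, citations in service.query(statement).fetchall():
    print(citations, work_label)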

Backup of directory


At times I cannot recall the command for backing up a directory, so here it is for my own reference:

$ rsync -au /home/fnielsen/Pictures/ /media/fnielsen/a4f11e07-3f63-45cc-bcab-e7d135c14b9c/backup/Billeder/

The -a is the standard archive option; the -u is ‘update’ (“skip files that are newer on the receiver”).

PageRank of scientific papers with citations in Wikidata – so far


A citation property was created just a few hours ago – and, as of writing, it has still not been deleted. It means we can describe citation networks, e.g., among scientific papers.

So far we have added a few citations – mostly from papers about Zika. Now we can plot the citation network or compute network measures such as PageRank.

Below is a Python program putting it all together with the sparql-client module, Pandas and NetworkX:

statement = """
select ?source ?sourceLabel ?target ?targetLabel where {
  ?source wdt:P2860 ?target .
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en" .
  }
} 
"""

service = sparql.Service('https://query.wikidata.org/sparql')
response = service.query(statement)
df = DataFrame(response.fetchall(),
    columns=response.variables)

df.sourceLabel = df.sourceLabel.astype(unicode)
df.targetLabel = df.targetLabel.astype(unicode)

g = nx.DiGraph()
g.add_edges_from(((row.sourceLabel, row.targetLabel)
    for n, row in df.iterrows()))

pr = nx.pagerank(g)
sorted_pageranks = sorted((rank, title)
    for title, rank in pr.items())[::-1]

for rank, title in sorted_pageranks[:10]:
    print("{:.4} {}".format(rank, title[:40]))

The result:

0.02647 Genetic and serologic properties of Zika
0.02479 READemption-a tool for the computational
0.02479 Intrauterine West Nile virus: ocular and
0.02479 Internet encyclopaedias go head to head
0.02479 A juvenile early hominin skeleton from D
0.01798 Quantitative real-time PCR detection of 
0.01755 Zika virus. I. Isolations and serologica
0.01755 Genetic characterization of Zika virus s
0.0175 Potential sexual transmission of Zika vi
0.01745 Zika virus in Gabon (Central Africa)--20

Occupations of persons from Panama Papers


Can we get an overview of the occupations of the persons associated with the Panama Papers? Well … that might be difficult, but we can get a biased plot by using the listing in Wikidata, where persons associated with the Panama Papers seem to be tagged and their occupation(s) listed. That produces the plot below.

[Figure: bubble chart of occupations of persons associated with the Panama Papers]

It is fairly straightforward to construct such a bubble chart given the new plotting capabilities in the Wikidata Query Service. Dutch Wikipedian Gerard Meijssen seems to have been the one who entered the information in Wikidata, linking the Panama Papers to persons via the ‘significant event‘ property. How complete his tagging is, I do not know. Our Danish Wikipedian Ole Palnatoke Andersen set up a page on the Danish Wikipedia at Diskussion:Panama-papirerne/Wikidata, tabulating the persons with Magnus Manske’s nice Listeria tool. Modifying Ole’s SPARQL query, we can get the count of occupations for the persons associated with the Panama Papers in Wikidata:

SELECT ?occupationLabel (count(distinct ?person) as ?count) WHERE {
  # P793: significant event; Q23702848: Panama Papers; P106: occupation
  ?person wdt:P793 wd:Q23702848 ;
          wdt:P106 ?occupation .
  service wikibase:label { bd:serviceParam wikibase:language "en" . }
} group by ?occupationLabel
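
For use outside the Query Service interface, the same counts can be fetched programmatically. Here is a small Python sketch with the requests package, sorting client-side so the most common occupations come first:

import requests

query = """
SELECT ?occupationLabel (count(distinct ?person) as ?count) WHERE {
  ?person wdt:P793 wd:Q23702848 ;
          wdt:P106 ?occupation .
  service wikibase:label { bd:serviceParam wikibase:language "en" . }
} group by ?occupationLabel
"""

data = requests.get('https://query.wikidata.org/sparql',
                    params={'query': query, 'format': 'json'}).json()

# Sort occupations by descending person count.
rows = sorted(((int(b['count']['value']), b['occupationLabel']['value'])
               for b in data['results']['bindings']), reverse=True)
for count, occupation in rows:
    print(count, occupation)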

Some may note that politicians are the largest group, but that might simply be an artifact of the notability criterion of Wikidata: only people who are somewhat notable or are linked to something notable are likely to be included in Wikidata, e.g., the common businessman/woman may not (yet?) be represented in Wikidata.

The bubble chart cuts off letters of the occupation words: ‘murd’ is ‘murderer’. Joaquín Guzmán has his occupation set to murderer in Wikidata – without a source…

 

Review and comment on Nick Bostrom’s book Superintelligence


Back in the 1990s I spent considerable computer time training and optimizing artificial neural networks. They were hot then. Around year 2000 artificial neural networks became unfashionable, with Gaussian processes and support vector machines taking over. During the 2000s computers got faster, and some engineers turned to see what graphics processing units (GPUs) could do besides rendering for computer games. GPUs are fast for the matrix computations which are central to artificial neural networks. Oh and Jung’s 2004 paper “GPU implementation of neural networks” seems, according to Jürgen Schmidhuber, to be the first to describe the use of GPUs for neural network computation, but it was perhaps first when Dan Ciresan from the Politehnica University of Timisoara began using GPUs that interesting advances began: in Schmidhuber’s lab he trained a GPU-based deep neural network system for traffic sign classification and managed to get superhuman performance in 2011.

Deep learning, i.e., computation with many-layered neural network systems, was already then taking off, and it is now broadly applied; the training of a system for computer gaming (classic Atari 2600 games) is perhaps the most illustrative example of how flexible and powerful modern neural networks are. So in limited domains deep neural networks are presently taking large steps.

A question is whether this will continue and whether we will see artificial intelligence systems with more general superhuman capabilities. Nick Bostrom‘s book ‘Superintelligence‘ presupposes so and then starts to discuss “what then”.

Bostrom’s book, written from the standpoint of an academic philosopher, can be regarded as an elaboration of Vernor Vinge’s classic “The coming technological singularity: how to survive in the post-human era” from 1993. It is generally thought that if or when artificial intelligence becomes near-human intelligent, the artificial intelligence system will be able to improve itself, and once improved it will be able to improve yet more, resulting in a quick escalation (Vinge’s ‘singularity’) with the artificial intelligence system becoming much more intelligent than humans (Bostrom’s ‘superintelligence’). Bostrom lists surveys among experts showing that the median estimate for human-level intelligence is around year 2040 to 2050; a share of experts even believe the singularity will appear in the 2020s.

The book lacks solid empirical work on the singularity. The changes around the industrial revolution are discussed a bit, and the horse in 20th-century society is mentioned: from having widespread use in transport, its function for humans was taken over by human-constructed machines, and the horses were sent to the butcher. Horses in the developed world are now mostly used for entertainment purposes. There are various examples in history where a more ‘advanced’ society competes with an established, less developed one: Neanderthals/modern humans, the age of colonization. It is possible, though, that a superintelligence/human encounter will be quite different.

The book discusses a number of issues from a theoretical and philosophical point of view: ‘the control problem’, ‘singleton’, equality, strategies for uploading values to the superintelligent entity. It is unclear to me whether a singleton is what we should aim at. In capitalism, a monopoly is not necessarily good for society, and market-economy societies put up regulation against monopolies. Even with a superintelligent singleton it appears to me that the system can run into problems when it tries to handle incompatible subgoals; e.g., an ordinary desktop computer – as a singleton – may have individual processes that require a resource which is not available because another process is using it.

Even if the singularity is avoided, there are numerous problems facing us in the future: warbots as autonomous machines with killing capability, do-it-yourself kitchen-table bioterrorism, generally intelligent programs and robots taking our jobs. Major problems with IT security occur nowadays with nasty ransomware. The development of intelligent technologies may foster further inequality, where a winner-takes-all company will reap all the benefits.

Bostrom’s take-home message is that superintelligence is a serious issue that we do not know how to tackle, so please send more money to superintelligence researchers. It is worth alerting society to the issue. There is general awareness of the evolution of society for some long-term issues such as demographics, future retirement benefits, natural resource depletion and climate change. It seems that the development in information technology might be much more profound and require much more attention than, say, climate change. I found Bostrom’s book a bit academically verbose, but I think it has quite important merit as a coherent work setting up the issue for the major task we have at hand.

 

(Review also published on LibraryThing).