Month: December 2019

What does “toxicity” mean?


A range of people now use “toxic” and “toxicity” in the context of messages on social media. I have had a problem with these words because I lacked a clear definition of the concept behind them. What is a toxic social media post? Negative sentiment, rudeness, harassment, cyberbullying, trolling, rumour spreading, false news, heated arguments and possibly more may be mixed together.

The English Wiktionary currently lists “Severely negative or harmful” as a gloss for a figurative sense of the word “toxic”.

For social media, a 2015 post in relation to League of Legends, “Doing Something About the ‘Impossible Problem’ of Abuse in Online Games”, mentions “toxicity” along with “online harassment”. They “classified online citizens from negative to positive”, apparently based on language from “trash talk” to “non-extreme but still generally offensive language”. What precisely “trash talk” is in the context of social media is not clear to me. The English Wikipedia describes “Trash-talk” in the context of sports. A related term, “Smack talk”, is defined for Internet behaviour.

There are now a few scholarly papers using these words.

For instance, “Detecting Toxicity Triggers in Online Discussions” from September 2019 writes “Detecting these toxicity triggers is vital because online toxicity has a contagious nature” and cites our paper “Good Friends, Bad News – Affect and Virality in Twitter”. I think that this citation has some issues. First of all, we do not use the word “toxicity” in our paper. Earlier in their paper the authors seem to equate toxicity with rudeness and harassment, but our paper did not specifically look at that. Our paper focused in particular on “newsness” and sentiment score. A simplified conclusion would be that negative news is more viral. News articles are rarely rude or harassing.

Julian Risch and Ralf Krestel in “Toxic Comment Detection in Online Discussions” write: “A toxic comment is defined as a rude, disrespectful, or unreasonable comment that is likely to make other users leave a discussion”. This phrase seems to originate from the Kaggle competition “Toxic Comment Classification Challenge” from 2018: “…negative online behaviors, like toxic comments (i.e. comments that are rude, disrespectful or otherwise likely to make someone leave a discussion)”. The aspects to be classified in the competition were “threats, obscenity, insults, and identity-based hate”.

Risch and Krestel are the first I have run into with a good discussion of the aspects of what they call toxicity. They seem to be inspired by the work on Wikipedia, citing Ellery Wulczyn et al.’s “Ex Machina: Personal Attacks Seen at Scale”. Wulczyn’s work goes back to 2016 with the Detox research project. This research project may have been spawned by an entry in the 2015 wishlist in the Wikimedia community. “Algorithms and insults: Scaling up our understanding of harassment on Wikipedia” is a blogpost on the research project.

The Wulczyn paper describes the construction of a corpus of comments from the article and user talk pages of the English Wikipedia. The labelling described in the paper focuses on “personal attack or harassment”. The authors define a “toxicity level” quantity as the number of personal attacks by a user (in the particular year examined). Why “personal attack level” is not used instead of the word “toxicity” is not clear to me.

It is interesting that the Kaggle competition defines “toxicity” via the likelihood that a comment would “make other users leave a discussion”. I would usually think that heated discussions tend to attract people to the discussion, at least on “discussion social media” such as Reddit, Twitter and Facebook, though I suppose this is an open question. I do not recall seeing any study modelling the relationship between user retention and personal attacks or obscene language.

The paper “Convolutional Neural Networks for Toxic Comment Classification” from 2018 cites a Pew report by Maeve Duggan, “Online Harassment”, in the context “Text arising from online interactive communication hides many hazards such as fake news, online harassment and toxicity”. If you look up the Pew report, the words “fake news” and “toxic” hardly appear (the latter only in a quoted user comment about “toxic masculinity”).

Google’s Perspective API can analyze a text and give back a “toxicity” score.
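As a small illustration, here is a minimal sketch of how the API could be called from Python. The API key is a placeholder, and the endpoint and response fields follow my reading of the Perspective API documentation, so treat the details as assumptions:

    # Minimal sketch of querying the Perspective API for a toxicity score.
    # The API key is a placeholder; endpoint and payload follow my reading
    # of the Perspective API documentation.
    import requests

    API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)
    payload = {
        "comment": {"text": "What a silly comment!"},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    print(score)  # a probability-like value between 0 and 1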

The current English Wikipedia article on “toxicity” only describes the chemical sense of the word. The “toxic” disambiguation page has three relevant links: “toxic leader”, “toxic masculinity” and “toxic workplace”.

It still seems to me that “toxicity” and “toxic” are too fuzzy to be used in serious contexts without a proper definition. It is also not clear to me whether, e.g., the expression of strong negative sentiment, which could potentially be classified as “toxic”, necessarily affects the productivity and health of a community negatively. The 2015 harassment survey from the Wikimedia Foundation examined “Effects of experiencing harassment on participation levels” (Figure 47), and at least here the effect on participation levels in the Wikimedia projects seems to be seriously negative. The word toxic was apparently not used in the survey, though among the example ideas for improvements from the respondents is listed: “Scoring the toxicity of users and watching toxic users’ actions in a community tool like the anti-vandal software.”

NeurIPS in Wikidata


[Figure] Co-authors in the NeurIPS 2019 conference based on data in Wikidata. Screenshot based on Scholia at https://tools.wmflabs.org/scholia/event/Q61582928.

The machine learning and neuroinformatics conference NeurIPS 2019 (NIPS 2019) takes place in the middle of December 2019. The conference series has always had a high standing and has grown considerably in reputation in recent years.

All papers from the conference are available online at papers.nips.cc. There is – to my knowledge – little structured metadata associated with the papers, though the website is consistently formatted in HTML and the metadata can thus relatively easily be scraped. There are no consistent identifiers that I know of that identify the papers on the site: no DOI, no ORCID iD or anything else. A few papers may be indexed here and there on third-party sites.
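To give an idea of what the scraping involves, here is a small sketch (not the actual Scholia code). The proceedings URL and the assumption that paper links start with “/paper/” are based on how the site appeared to be organized, so both are assumptions:

    # Sketch of scraping paper titles and links from papers.nips.cc.
    # The proceedings URL and the "/paper/" link pattern are assumptions
    # based on how the site appeared to be organized at the time.
    import requests
    from bs4 import BeautifulSoup

    BOOK_URL = ("https://papers.nips.cc/book/"
                "advances-in-neural-information-processing-systems-32-2019")

    soup = BeautifulSoup(requests.get(BOOK_URL).text, "html.parser")
    papers = [
        {"title": link.text.strip(),
         "url": "https://papers.nips.cc" + link["href"]}
        for link in soup.find_all("a", href=True)
        if link["href"].startswith("/paper/")
    ]
    print(len(papers), "papers found")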

In the Scholia Python package, I have made a module for scraping the papers.nips.cc website and converting the metadata to the QuickStatements format for Magnus Manske’s web application that submits the data to Wikidata. The entry of the basic metadata about the papers from NeurIPS is more or less complete; a check is needed to see whether everything has been entered. One issue that the Python code attempts to counter is the case where a scraped paper has already been entered in Wikidata. Given that there are no identifiers, the matching is somewhat heuristic.
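As an illustration of the target format, the sketch below (again not the actual Scholia module) builds QuickStatements V1 commands for a single paper. The publication date and the proceedings item Q68600639, referenced further down, are example values:

    # Illustrative sketch (not the actual Scholia module): build QuickStatements
    # V1 commands for a single paper. The proceedings item Q68600639 and the
    # publication date are example values.
    def paper_to_quickstatements(title, authors, proceedings="Q68600639"):
        lines = [
            "CREATE",
            "LAST\tP31\tQ13442814",                  # instance of: scholarly article
            'LAST\tLen\t"{}"'.format(title),         # English label
            'LAST\tP1476\ten:"{}"'.format(title),    # title
            "LAST\tP577\t+2019-12-01T00:00:00Z/10",  # publication date, month precision
            "LAST\tP1433\t" + proceedings,           # published in: the proceedings item
        ]
        for ordinal, author in enumerate(authors, start=1):
            # Authors are entered as plain strings (P2093) with a series ordinal,
            # to be resolved to items later with the Author Disambiguator.
            lines.append('LAST\tP2093\t"{}"\tP1545\t"{}"'.format(author, ordinal))
        return "\n".join(lines)

    print(paper_to_quickstatements("A Hypothetical Paper", ["Jane Doe", "John Doe"]))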

Authors have separate webpages on papers.nips.cc listing the papers they have published at the conference. This is quite well curated, though I have discovered authors with several associated webpages: the Bayesian Carl Edward Rasmussen is under http://papers.nips.cc/author/carl-edward-rasmussen-1254, https://papers.nips.cc/author/carl-e-rasmussen-2143 and http://papers.nips.cc/author/carl-edward-rasmussen-6617. Joshua B. Tenenbaum is also split.

Authors are not resolved with the code from Scholia; they are just represented as strings. The Author Disambiguator tool that Arthur P. Smith has built from a tool by Magnus Manske can semi-automatically resolve authors, i.e., associate the author of a paper with a specific Wikidata item representing a human. The Scholia website has special “missing” pages that make contextual links to the Author Disambiguator. For the NeurIPS 2019 proceedings the links can be seen at https://tools.wmflabs.org/scholia/venue/Q68600639/missing. There are currently over 1,400 authors that need to be resolved. Some of these are not easy: multiple authors may share the same name, e.g., the European name Andreas Krause, and I have difficulty knowing how unique East Asian names are. So far only 50 authors from the NeurIPS conference have been resolved.
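As a sketch of how one might list the remaining author name strings, the following query counts P2093 values on papers published in the NeurIPS 2019 proceedings, assuming that Q68600639 (the item in the Scholia URL above) is the item for the proceedings:

    # Sketch: list author name strings (P2093) on papers in the NeurIPS 2019
    # proceedings that have not yet been resolved to author items.
    # Q68600639 is assumed to be the Wikidata item for the proceedings.
    import requests

    QUERY = """
    SELECT ?name (COUNT(?paper) AS ?papers) WHERE {
      ?paper wdt:P1433 wd:Q68600639 ;  # published in: NeurIPS 2019 proceedings
             wdt:P2093 ?name .         # author name string, i.e. not yet resolved
    }
    GROUP BY ?name
    ORDER BY DESC(?papers)
    """

    response = requests.get("https://query.wikidata.org/sparql",
                            params={"query": QUERY, "format": "json"})
    for row in response.json()["results"]["bindings"][:10]:
        print(row["name"]["value"], row["papers"]["value"])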

There is no citation information when the data is first entered with the Scholia and QuickStatements tools, and there are currently no means to enter that information automatically. The NeurIPS proceedings are – as far as I know – not available through CrossRef.

Since there is little editorial control of the format of the references, they come in various shapes and may need “interpretation”. For instance, “Semi-Supervised Learning in Gigantic Image Collections” claims a citation to “[3] Y. Bengio, O. Delalleau, N. L. Roux, J.-F. Paiement, P. Vincent, and M. Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. In NIPS, pages 2197–2219, 2004.” But that is unlikely to be a NIPS paper, and the reference should probably go to Neural Computation.

The ontology of Wikidata for annotating what papers are about is not necessarily good. Some concepts in the cognitive sciences, including psychology, machine learning and neuroscience, become split or merged. Reinforcement learning is an example: the English Wikipedia article focuses on the machine learning aspect, while Wikidata also tags neuroscience-oriented articles with the concept. For many papers I find it difficult to link to the ontology because the topic of the paper is so specialized that it is hard to identify an appropriate Wikidata item.

With the data in Wikidata, it is possible to see many aspects of the data with the Wikidata Query Service and Scholia. For instance,

  1. Who has the most papers at NeurIPS 2019? A panel of a Scholia page readily shows this to be Sergey Levine, Francis Bach, Pieter Abbeel and Yoshua Bengio (a sketch of the kind of query behind such a panel is shown after this list).
  2. The heuristically computed topic scores on the event page for NeurIPS 2019 show that reinforcement learning, generative adversarial networks, deep learning, machine learning and meta-learning are central topics this year. (Here one needs to keep in mind that the annotation in Wikidata is incomplete.)
  3. Which Danish researcher has been listed as an author on most NeurIPS papers through time? This is possible to ask with a query to the Wikidata Query Service: https://w.wiki/DTp. It depends upon what is meant by “Danish”. Here it is based on employment/affiliation and gives Carl Edward Rasmussen, Ole Winther, Ricardo Henao, Yevgeny Seldin, Lars Kai Hansen and Anders Krogh.
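For the first of these, a sketch of the kind of query that could produce the author ranking is shown below. It is a simplification of what the Scholia panel does and again assumes Q68600639 for the proceedings; the query string can be pasted directly into the Wikidata Query Service at https://query.wikidata.org:

    # Sketch of a query behind point 1: count papers per resolved author (P50)
    # in the NeurIPS 2019 proceedings (again assuming Q68600639 is the
    # proceedings item). The string can be pasted into https://query.wikidata.org.
    QUERY = """
    SELECT ?author ?authorLabel (COUNT(?paper) AS ?papers) WHERE {
      ?paper wdt:P1433 wd:Q68600639 ;  # published in: NeurIPS 2019 proceedings
             wdt:P50 ?author .         # author resolved to a Wikidata item
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    GROUP BY ?author ?authorLabel
    ORDER BY DESC(?papers)
    LIMIT 10
    """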