Metallica fan Milena Penkowa rocks Danish University

Posted on Updated on


Accusations of fraud – both ordinary fraud and research fraud – target the high-profile, glossy Danish neuroscientist Milena Penkowa from the University of Copenhagen. It has been front-page news in Denmark for some time now. If you want an English introduction to the case as it stood at the beginning of January 2011, read the Nature News article. Since then the case has grown. Now the central damaging allegation is that she falsified documents stating that a Spanish company was involved in an experiment with many hundreds of rats.

I have tried to aggregate the different sources on the Danish Wikipedia page about Milena Penkowa. I have not yet managed to assemble all the material. During the writing I stumbled upon some “loose ends” and subjective thoughts about the case (I must warn you, though: I have a conflict of interest. I am from the Technical University of Denmark, a competing university in the Copenhagen area. I also know some of the professors that, just before Christmas, sent a letter requesting an investigation):

  1. An element of the Milena Penkowa case cannot be discussed in public because of “legalities”. Journalists and researchers generally know the details of that part of the case but are prohibited from mentioning them in public. Information seekers who want to find out about it may either need to seek out a person in the know or do a bit of triangulation with an Internet search engine. Hmmm… Doesn’t the Danish variant of free speech, ytringsfrihed, have a problem here?
  2. Some have questioned her overall scientific contribution. If you search PubMed for Penkowa you’ll see she has first-authored 33 PubMed papers and the total listing counts 98 PubMed papers. Most of her research seems to revolve around the protein metallothionein. The blog entry from Morten Garly Andersen states that “Penkowa’s first big article, which came in 2000 in the scientific journal Glia, has now been retracted by her co-author, the Spanish professor Juan Hidalgo.” That statement does not seem to be correct: “Strongly compromised inflammatory response to brain injury in interleukin-6-deficient mice” is from 1999 and is, with 114 citations, her most cited article on Google Scholar. As far as I know it has not been retracted. As far as I can determine her Google Scholar h-index is 32. That is quite high given her relatively young age; quite impressive, I would say. On the other hand, among her first-authored papers I find no journal I can recognize as a high-impact journal. From a medical researcher one would have expected at least one article in, e.g., The Journal of Neuroscience. So we may not be talking about ground-breaking science. She has two patents, but I am not presently aware of any application of these patents. Correction 22 August 2011: Here I am definitely wrong: she has an article in The Journal of Neuroscience called CNS wound healing is severely depressed in metallothionein I- and II-deficient mice!
  3. Penkowa has claimed that she was under contractual obligations with a company. But can a researcher sign such a contract without the university approving it? Has the university approved such a contract? Has the university investigated whether such a contract exists?
  4. One commentator noted that Helge Sander exited as Minister of Science one month prior to Penkowa’s suspension from the university. Is that a coincidence?
  5. In the beginning of January the Chairman of the Board of the University of Copenhagen found that “Penkowa’s research is already being treated by the relevant authority, The Danish Committees on Scientific Dishonesty (DCSD)”. I presume he will no longer stand by his own statement now that the university has involved the police in the investigation?
  6. The university has reported Penkowa for falsifying documents. Even if the allegation is correct, it is probably the case that the police cannot do anything about it because too many years have passed. The alleged falsification supposedly took place back in 2003. A document-falsification case that had expired in this way recently came up for a Danish businessman with a high-profile politician wife. In that case the Danish police simply rejected the case.
  7. Fraud? What fraud? The university has reported Penkowa to the police for fraud (real fraud, not science fraud). But to commit fraud you need to gain value from it. If the allegation is correct, what she gained is not clear. One “gain” was to avoid being dragged through a scientific dishonesty process, but is that a “gain” in the sense of that section of the law? Has the university lost any money on that? It is probably not the case that she gained her degree based on the documents; she simply left the problematic study out of the dissertation. So fraud? What fraud?
  8. Her collaborator from Barcelona has said little. Penkowa has two papers in Glia from 2000: Impaired inflammatory response and increased oxidative stress and neurodegeneration after brain injury in interleukin-6-deficient mice and Metallothionein I+II expression and their role in experimental autoimmune encephalomyelitis (as far as I can see the contested table is Table 1 with 784 rats; the method section reads “Female Lewis rats, weighing 180-200 g, were obtained from the animal facilities of the Panum Institute in Copenhagen”). I find his name on both papers. Has he anything to say? If Copenhagen researchers find it strange that over 700 rats were used in a study, why does this Spanish collaborator and co-author not find it strange? Now, hold on! Hold on! On 8 February 2011 he actually came forward: according to Danish news the Spanish collaborator has asked the editor of Glia to retract the article. So that clarifies that aspect. Then on 9 February the Danish newspaper BT got a strong statement from the Spanish researcher saying he is not for a second in doubt that she lied about the rats, and he used words such as “not a friendly person”, evil and demanding.
  9. If Penkowa writes “Female Lewis rats, weighing 180-200 g, were obtained from the animal facilities of the Panum Institute in Copenhagen” in her paper then why was the university satisfied with Penkowa’s explanation that part of the rat study was performed in Spain?
  10. IMK Almene Fond has supported Penkowa with 5.6 million kroner (around 1 million American dollars). They have demanded (some of) the money paid back. According to Politiken the foundation accepted to pay salaries (which had been paid), but not travel expenses and restaurant bills, lawyers’ bills and expenses for patent applications, as well as office furniture, clothes, (office?) rental, hardware, software etc. Now, a foundation can put any kind of restriction on the use of its donated money. But it seems strange that a foundation giving such a large grant does not support travel expenses and restaurant bills in connection with research. In standard research grants you usually get money for exactly that: money to travel to scientific conferences, money to pay for hotel and food while you are abroad at the conferences, money to pay for food in your home country if you have lunch or dinner with foreign scientific visitors or at internal scientific meetings within the group. Has the university paid money back to the foundation just to be on good terms with them for the prospect of future grants? That might be a good strategy, but is it legal? The question may be answered as our national financial auditor Henrik Otbo will now examine this aspect.
  11. Milena Penkowa received the EliteForsk prize. It is unclear who nominated her. Ralf Hemmingsen approved it even though he must have known about the suspicions against Penkowa. Minister of Science Helge Sander has personal ties to Penkowa. Has there been direct or indirect pressure from Sander on the people in the nomination committee? Who can investigate a former minister? Surely not the university.
  12. Penkowa stated in a letter that she had been to a funeral following a traffic accident involving her mother and sister. At a later party at the university her mother showed up. Did anyone at the university remember the letter? Did they write it off as a white lie composed by a stressed person?
  13. Ralf Hemmingsen has apologized for the treatment the three members of Penkowa’s original 2001-2003 doctoral committee got. However, it is still an open question whether the committee members did a reasonable scientific job. Prominent Nordic neuroscientists Per Andersen and Anders Björklund criticized the work of the committee. So where does that leave us? Was the work of the committee not good enough? Did Andersen and Björklund not get enough material or time to evaluate it? Is Andersen and Björklund’s criticism unfounded? Should we have an investigation of the investigation of the investigation?
  14. Committee members said in 2011 that they investigated the possibility of reporting Penkowa to the Danish Committees on Scientific Dishonesty back in the early 00s but were advised not to do so, as it could be regarded as a breach of confidentiality. Apparently the members’ reluctance to call in the DCSD put Ralf Hemmingsen in a catch-22 and was the reason he called in the investigation with Andersen and Björklund. Is the claim of the committee members really true? Shouldn’t such members be allowed to report to the DCSD?
  15. Some have criticized Ralf Hemmingsen for not involving Andersen and Björklund in the investigation of the 784 rats. But is that critique fair? The investigation would involve looking through bureaucratic documents (bills, invoices, lab reports) and not really scientific material. Do the critics think that the busy, widely known neuroscientists Andersen and Björklund should spend their valuable time looking into such things?
  16. Penkowa’s latest statement, from February 12th, says the following: “That company has of course existed, just as the persons who at that time were involved and performed the experiment also existed. The university called the company to get it confirmed. The one Weekendavisen has called in 2010 is in all likelihood not the same person”. So either the newspaper Weekendavisen and the Spanish lawyer the University of Copenhagen employed to investigate the whereabouts of the existing or nonexisting Spanish company have made a major blunder, or Penkowa has now shown a considerably strained relationship with reality.
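As an aside to point 2: the h-index mentioned there is simple to compute if you have a researcher's citation counts. A minimal sketch (the citation counts below are made up for illustration, not Penkowa's actual numbers):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with five papers; 4 papers have >= 4 citations.
print(h_index([114, 33, 20, 10, 3]))  # 4
```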

Care for a bit more science gossip? Here is some provided by a
commentator (translated from Danish):

I am now so old that I remember a case from around 40 years ago, where a quick and lovely lady was “carried forward” to a medical doctorate by older “benefactors” at the University of Copenhagen. Afterwards some young researchers picked the doctorate apart, with quite a few red ears as a result. The young doctors were, by the way, afterwards blacklisted as revenge, as far as I know. That could be done back then.

Willy Johannsen,

Photo: A Mazda MX-5 roadster, photographed by Mauricio Marchant, from Wikimedia Commons, licensed CC BY-SA. Penkowa has a similar car and has been photographed in it a couple of times.

(Typo fix: 14. February 2011)

(Factual correction: 22. August 2011: I was wrong to state that she does not have an article in a high-impact journal. The article CNS wound healing is severely depressed in metallothionein I- and II-deficient mice from 1999 was published in The Journal of Neuroscience, which is what I would call a high-impact journal.)


On the number of blog posts and PET/MRI scanners


How many blog posts or status updates can you write?

The limited space in this slot means that there is a finite number of different status updates. After we reach this limit, we will only be able to repeat or copy. — Daniela Balslev

Facebook status messages can apparently only be 420 characters long. If you disregard capital letters, “foreign” characters and punctuation you get something like 27^420 different Facebook status messages. This was discovered after some discussion on Facebook between people in Daniela Balslev’s network. It is a bit difficult to compute with such a large number, as it doesn’t fit in the standard IEEE floating-point representations that are ubiquitous in computing. However, the programming language Python is, with its arbitrary-precision “long int” representation, able to find the number:


>>> str(len('qwertyuiopasdfghjklzxcvbnm ')**420)[:5]
'14886'
>>> len(str(len('qwertyuiopasdfghjklzxcvbnm ')**420)) - 1
601


That is 1.4886 x 10^601. (This number does not reflect that consecutive spaces don’t really change the message)


So you are able to write an over-600-digit-long number of Facebook statuses, if they are supposed to be different, that is. I guess it is not so interesting to repeat a message. It will take many, many years before we repeat ourselves even if we type really fast: well over a 500-digit-long number of years.
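The “500-digit-long number of years” is easy to check with the same kind of Python long-integer arithmetic, assuming, generously, that someone posts one status update per second:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# 27 characters (26 letters plus the space), 420 positions.
n_messages = 27 ** 420
years = n_messages // SECONDS_PER_YEAR  # at one status update per second

print(len(str(years)))  # the year count has 594 digits
```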


I recently ran into an announcement stating that the Tübingen university hospital has got a combined MRI/PET scanner. Wow, I thought. This is really news. I had only heard of combined CT/PET scanners and wasn’t aware that MRI and PET could be combined. I should give such news a blog post or at least a tweet.


As I read the announcement it occurred to me that I had already done a blog post on combined MRI/PET scanners some years ago. That was on the late ‘Machine Culture’ blog (practically the only memory the Internet has of that post is a reference on a page in the Internet Archive: “Simultaneous PET/MR brain scanner”). So it seems that I am beginning to repeat myself at around 3.5-year intervals. That interval does not correspond with the number calculated above.


I note that the MRI/PET scanner they got in Tübingen is a “Ganzkörper” (whole-body) type. Maybe I should focus on “Ganzkörper” this time instead of repeating myself. Maybe I should also pop in and have my head examined: a “Ganzkopf” (whole-head) MR-PET scan.

Navigating the Natalie Portman graph: Finding a co-author path to a NeuroImage author



I first noticed Hollywood actress Natalie Portman in the Mike Nichols 2004 film Closer. According to rumor on the Internet, a few years before Closer she co-authored a functional neuroimaging scientific article called Frontal lobe activation during object permanence: data from near-infrared spectroscopy. She was credited as Natalie Hershlag.

I have written before about data mining a co-author graph for the Erdös number and the “Hayashi” number, and I wondered if it would be possible to find a co-author path from Portman to me. And indeed it is.

Abigail A. Baird first-authored Portman’s article, and the article Functional magnetic resonance imaging of facial affect recognition in children and adolescents has Abigail Baird and psychiatry professor Bruce M. Cohen among the authors. Bruce M. Cohen and Nicholas Lange are among the co-authors on Structural brain magnetic resonance imaging of limbic and thalamic volumes in pediatric bipolar disorder, and Lange and I are linked through our Plurality and resemblance in fMRI data analysis, an article that contrasted different fMRI analysis methods.

So the co-author path between Portman and me is: Portman – Baird – Cohen – Lange – me, which brings my “Portman number” to 4.

Navigating a graph is a general problem if you only know the local connections. Scientific articles have even been written about it, e.g., Jon Kleinberg’s Navigating in a small world. When a human (such as I) navigates a social graph such as the co-author graph of scientific articles, one can utilize auxiliary information: here the information about where a researcher has worked, what his/her interests are and how prominent the researcher is (how many co-authors s/he has). As Portman worked from Harvard, a good guess would be to start looking among my co-authors that are near Harvard. Nicholas Lange is from Harvard and we collaborated in the American-funded Human Brain Project. I knew that radiology professor Bruce R. Rosen was/is a central figure in Boston MRI research, so I thought that there might be a productive connection from him, both to Lange and to Portman. Portman’s co-author Baird is a professor and has written some neuroimaging papers, so among Portman’s co-authors Baird was probably the one that could lead to a path. While searching among Lange’s and Baird’s co-authors I confused Bruce Rosen and Bruce Cohen (their Hamming distance is not great). This error proved fertile.

If I hadn’t run into Cohen and really wanted to find a path between Portman and me, then I think a more automated, brute-force method would have been required. One way would be to query PubMed and put the co-author graph into NetworkX, which is a Python package. It has a shortest-path algorithm. Joe Celko, in his book SQL for Smarties: Advanced SQL Programming, shows a shortest-path algorithm in SQL. That might be an alternative to NetworkX.
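The shortest-path search itself is simple enough to sketch without NetworkX; here is a minimal breadth-first search over a toy co-author graph containing just the links mentioned above:

```python
from collections import deque

def shortest_path(graph, start, end):
    """Breadth-first search for a shortest path in an undirected graph."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path exists

# Toy co-author graph with only the links mentioned in the text.
coauthors = {
    'Portman': ['Baird'],
    'Baird': ['Portman', 'Cohen'],
    'Cohen': ['Baird', 'Lange'],
    'Lange': ['Cohen', 'me'],
    'me': ['Lange'],
}
path = shortest_path(coauthors, 'Portman', 'me')
print(path)           # ['Portman', 'Baird', 'Cohen', 'Lange', 'me']
print(len(path) - 1)  # the "Portman number": 4
```

A real version would of course first have to build the graph from PubMed author lists, which is the hard part.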

(Photo: gdcgraphics, CC-by, taken from Wikimedia Commons)

5-HTTLPR episode 17: The revenge of the neurocriticcritic


I am sort of a neuropessimist, believing that a large part of neuroscience results are more variable than we would like to think. I don’t think that I am as extreme as the paper Why Most Published Research Findings Are False. I still need to understand its mathematical details and its critiques.

The old-timer 5-HTTLPR genetic polymorphism has long been hailed and then dethroned as associated with anxiety-related personality traits. Quite a number of meta-analyses have examined its effect on a range of variables, and I recently listed some of these in tables for a 5-HTTLPR meta-meta-analysis. The results are somewhat – hmmm – well – perhaps there is an effect on depression, perhaps only a little effect or perhaps no effect. For the interaction between 5-HTTLPR and “stressful life events” on depression, two 2009 meta-analyses (one by Munafò and others) found no effect.

Anonymous neuroimaging blogger The Neurocritic had a piece in 2009 called Myth of the Depression Gene where he (probably not a she), with a certain amount of schadenfreude, dethroned the optimistic original 2003 study of Caspi, Sugden, Moffitt and all the others. Then yesterday neurocriticcritic nooffensebut pointed to a new meta-analysis published a few days ago, The serotonin transporter promoter variant (5-HTTLPR), stress, and depression meta-analysis revisited: evidence of genetic moderation, that claims a fair amount of effect from the 5-HTTLPR-stress interaction on depression.

Now I would say that you can’t trust the papers that say you can’t trust papers. But in the true spirit of neuropessimism I would say that you also shouldn’t trust that.

For you PubMed junkies: the next episode of 5-HTTLPR will come to a web page near you.

Secure multi-party computations in Python?


I am not into cryptography, but I recently heard through Professor Lars Kai Hansen of secure multi-party computations, where multiple persons compute on numbers they do not directly reveal to each other, only in encrypted form.

It turns out that Aarhus University has done some research in that area and has even released a Python package called VIFF (Virtual Ideal Functionality Framework).

The December 14th, 2009 1.0 release can be downloaded from their homepage. They provide a standard Python setup file:

python install --home=~/python/

The installation complained as it required the gmpy package which is in standard Ubuntu:

sudo aptitude install python-gmpy

The package comes with example files in the ‘apps’ directory. They require the generation of configuration files in which you specify hosts and ports for the ‘persons’ that need to communicate for the secure computation. To keep it simple I stayed on localhost:

./ localhost:5000 localhost:5001 localhost:5002

In three different terminals you can then type (with the working directory being viff-1.0/apps):

./ player-1.ini 42

./ player-2.ini 3

./ player-3.ini 5

This example program will sum 42, 3 and 5. Each of the running Python programs then reports the result:

Sum: {50}

The three values are private to each person (here each terminal) and the result is public. If you go into the middle of the Python program and write print str(x), thinking that you can reveal one of the private values (42, 3 or 5), you only get something like:

Share at 0x9751b4c current result: {805}

Close to pure magic.
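A rough idea of why it works can be sketched with additive secret sharing. Note that this is a simplification for illustration: VIFF itself is built on Shamir secret sharing and real network communication, not the toy scheme below.

```python
import random

PRIME = 2147483647  # a prime; all share arithmetic is done modulo this

def share(secret, n=3):
    """Split a secret into n random shares that sum to the secret mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

secrets = [42, 3, 5]                      # each party's private input
all_shares = [share(s) for s in secrets]  # row i: the shares of party i's secret

# Party j collects the j-th share from every party. Any single share looks
# like random noise, so no party learns another party's input.
partial_sums = [sum(row[j] for row in all_shares) % PRIME for j in range(3)]

# Only when the partial sums are combined does the total appear.
print(sum(partial_sums) % PRIME)  # 50
```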

Performance enhancement through TMS and TDCS


Today I heard on the Danish Radio (“Danmarks Radio”) that British researchers had improved the mathematical performance of subjects by sending electricity through their heads. It is presently even on the front page of the “P3” channel’s website with the headline “Is it ok to dope the brain?”.

Poor Thomas Z. Ramsøy, whom I know, was dragged out of bed early by the radio to comment on the story. He is a neuropsychologist, but I don’t think the story is in his line of research.

I got the impression that the research was performed with transcranial magnetic stimulation (TMS), a technique where you apply a strong magnetic field just outside the head. Performance enhancement through TMS has been carried out before. A few years ago neuroscientist Daniela Balslev and her colleagues applied TMS (or rather repetitive TMS, rTMS, if you are in the know) to the somatosensory hand area in the brain. You know, the area “located at 3 cm posterior to the motor hotspot”. With that she was able to enhance performance in the so-called mirror-tracing task. This is a task where you trace lines on a piece of paper or a computer screen, but through a mirror (actual or computer-programmed). If you turn the computer mouse 180 degrees around you will see how difficult that task is.

Danish Radio doesn’t link to the original article they talked about, as far as I can see. They should learn something from the British BBC in that respect. But luckily Google News manages to find a reference. New Scientist writes Electrical brain stimulation improves math skills and references research by Roi Cohen Kadosh. He has done a TMS experiment, but there mathematical performance fell. The new research is actually reported to be performance enhancement with so-called transcranial direct current stimulation (TDCS), a technique where you apply a small current through the brain.

The original article is called Modulating Neuronal Activity Produces Specific and Long-Lasting Changes in Numerical Competence. Danish science museum Experimentarium had an article a few days ago linking to that article.

2010-11-29: Minor correction

Hot or not or what: Data mining attractiveness



From the media we hear that women are most attractive at 31. That fact is based on a “poll of 2,000 men and women, commissioned by the shopping channel QVC to celebrate its Beauty Month.” So this is a kind of science that is part of a company’s media effort. We also see such use of science in neuromarketing research. However, in this case the results are likely to be reasonably ok.

The web site Hot or Not has, according to Wikipedia, been an inspiration for both YouTube and Facebook. The site allows you to rate men and women based on their uploaded photos.

Back in 2009 I became aware of Hot or Not in a nerdish way: The computer programming book Programming Collective Intelligence uses the site as a real-life example for prediction based on annotation in the social web. Hot or Not has an API, so you can get some data from the site. You need an API key, and last time I checked you couldn’t obtain new keys, but I could use the one given in the book.

So I started to download data. You don’t get the individual ratings, but the average rating for each person as well as a bit of demographics, e.g., the age. So there is really not so much you can do. The programming book tries to predict the rating based on gender, age and location (US state).

I tried to see how the rating varied with age. I managed to make a plot of a sample of men and women from Hot or Not, and the result somewhat surprised me. I was expecting a decay in rating for women and men as a function of age, with around 31 years as a good candidate for the maximum rating. However, when I look at the ratings for women there is very little decay; in fact, if you fit a second-order polynomial you actually see a slight rise for older women. With unscrupulous extrapolation you would say that 100-year-old women are maximally attractive. Men have the ‘correct’ decay, with the highest rating somewhere around 30 or before. But there is considerable variance within each year compared to the variation between the yearly averages.
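The second-order polynomial fit can be sketched without any numerical libraries by solving the normal equations directly. The age/rating data in the example below are synthetic (an exact quadratic, so the fit recovers the coefficients), not the actual Hot or Not sample:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            factor = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= factor * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def quadratic_fit(ages, ratings):
    """Least-squares fit of rating = a*age^2 + b*age + c via the normal equations."""
    S = [sum(x ** k for x in ages) for k in range(5)]  # sums of age^0 .. age^4
    A = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], S[0]]]
    rhs = [sum(y * x * x for x, y in zip(ages, ratings)),
           sum(y * x for x, y in zip(ages, ratings)),
           sum(ratings)]
    return solve3(A, rhs)  # [a, b, c]

# Synthetic data following an exact quadratic.
ages = list(range(18, 61))
ratings = [-0.002 * x * x + 0.1 * x + 6.0 for x in ages]
a, b, c = quadratic_fit(ages, ratings)
```

On real, noisy data the sign of the fitted a coefficient is what distinguishes the expected decay from the surprising rise described above.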

One explanation for the effect seen among women is that only beautiful older ladies would “dare” to upload their image, while ugly young women are not afraid. There is also the possibility that we really cannot trust the average ratings reported to us by Hot or Not. I have got an account myself and uploaded an image. Presently I have a rating of 7.7 based on 206 people (the scale goes from 1 to 10). Hot or Not reports that I am “hotter than 74% of men on this site!”. When I compare 7.7 with the data I can download, the percentage does not fit: around 90% of males score higher than my 7.7. Yet another possibility is that the way I call the Hot or Not API does not give a fair sample of the people actually in the Hot or Not database.
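That percentage check is straightforward to reproduce on any downloaded sample; the ratings below are made up for illustration, the real comparison would use the averages fetched through the API:

```python
def percent_scoring_higher(sample_ratings, my_rating):
    """Share of the sample with a strictly higher rating, in percent."""
    higher = sum(1 for r in sample_ratings if r > my_rating)
    return 100.0 * higher / len(sample_ratings)

# Made-up sample of average ratings for male profiles.
sample = [9.9, 9.8, 9.5, 9.1, 8.8, 8.4, 8.0, 7.6, 6.9, 5.2]
print(percent_scoring_higher(sample, 7.7))  # 70.0
```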

Hot or Not data has been used in a few scientific reports; see, e.g., Economic principles motivating social attention in humans, which made its own ratings, and If I’m Not Hot, Are You Hot or Not?, which has employees on the author list and thereby gained access to its unique data.