Latest Event Updates
Thomson Reuters has announced its predictions for the 2009 Nobel Prizes. Among the candidates is Seiji Ogawa. He is indeed a good guess, as the pioneer of functional magnetic resonance imaging (fMRI), which is now much used in cognitive neuroscience. Ogawa was not, however, the first to do functional neuroimaging. Niels A. Lassen and David Ingvar were considerably earlier – almost too early for their technique to have an impact on other laboratories. Much of the initial functional imaging (some 10-15 years of it) was performed with positron emission tomography (PET). Whereas there have been Nobel Prizes for MRI and CT scanning, there has not – as far as I recall – been a Nobel Prize for PET, or for SPECT for that matter. Another development that could deserve recognition is fludeoxyglucose (FDG). Whereas fMRI is still a bit of a toy for scientists, the combination of PET and FDG is dead serious, forming the workhorse of advanced cancer imaging. I am not too well versed in the history of FDG, but Alfred P. Wolf and Louis Sokoloff seem to have been important contributors. For PET, David Kuhl and Michael Phelps are often mentioned. Since PET predates fMRI and is used much in oncology – not just cognitive neuroscience – the Nobel might go to PET instead of fMRI, if a Nobel Prize in Physiology or Medicine is awarded for imaging at all.
I recently discovered Google Chart API. From URLs it is able to generate image files with plots of different sorts, e.g., line plots, pie charts or even QR codes. The pie chart here was generated with the following code:
du -sk /usr/* > stats.txt ; python -c "d = open('stats.txt').read().split(); s = sum(map(float, d[0::2])); print('http://chart.apis.google.com/chart?cht=p&chd=t:' + ','.join(map(lambda x : str(int(float(x)/s*100)),d[0::2])) + '&chs=600x300&chl=' + '|'.join(d[1::2]));"
Copying and pasting the returned URL into a web browser will show the Google-generated pie chart as a PNG. Alternatively, one could let Python download the file by modifying the code to use ‘urllib.urlretrieve()’.
For the data in the ‘chd’ parameter it seems that one needs to supply the values as percentages.
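The normalisation to percentages can also be done in a small stand-alone script rather than a one-liner. Below is a sketch with made-up directory sizes standing in for the ‘du’ output; the directory names and sizes are purely illustrative:

sizes = {'bin': 120, 'lib': 480, 'share': 400}  # hypothetical sizes in KB

total = float(sum(sizes.values()))
# The chd parameter appears to expect percentages, so normalise first.
percentages = [int(round(100 * v / total)) for v in sizes.values()]

url = ('http://chart.apis.google.com/chart?cht=p'
       '&chd=t:' + ','.join(map(str, percentages)) +
       '&chs=600x300'
       '&chl=' + '|'.join(sizes.keys()))
print(url)

The printed URL can then be pasted into a browser, or fetched directly with urllib as mentioned above.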
There is also a module called pycha, which I haven’t tried.
The MediaWiki page counter for my Brede Wiki now tells me that it has passed the 1000 “total pages” mark. The pages include, e.g., comments with data on scientific articles, pages for brain regions, and pages for “topics” such as neuroticism. The Brede Wiki is presently open for anonymous edits, and wiki spammers are quite interested in the article on Hidehiko Takahashi. I wonder if they are communicating something via the cryptic comment fields. Disregarding the spammers, the article on Richard S. J. Frackowiak seems to be the most popular article after posterior cingulate gyrus.
I would like to download comments from YouTube. This is possible via the gdata.youtube Python module. python-gdata is a Debian/Ubuntu package of GData, but it may not include the most recent additions, such as the youtube module, so it may be necessary to download the gdata-python-client package and install it with something like:
tar vfxz gdata-2.0.2.tar.gz
cd gdata-2.0.2
python setup.py install --home=~/python
With some help from the Python code of Giles Bowkett it is now possible to download some of the comments on a YouTube video with the following lines of Python code:
import gdata.youtube.service

yts = gdata.youtube.service.YouTubeService()
# 'VIDEO_ID' is a placeholder; substitute the id of the video of interest.
urlpattern = ('http://gdata.youtube.com/feeds/api/videos/'
              'VIDEO_ID/comments?start-index=%d&max-results=25')
index = 1
url = urlpattern % index
comments = []
while url:
    ytfeed = yts.GetYouTubeVideoCommentFeed(uri=url)
    comments.extend([comment.content.text for comment in ytfeed.entry])
    link = ytfeed.GetNextLink()
    url = link.href if link else None
It seems to be possible to download only the first 1000 comments – see also Stephen Mesa’s comment. So the small script will raise an error once 1000 comments have been downloaded…
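One defensive pattern is to cap the paging loop oneself instead of relying on the service to stop cleanly. Below is a self-contained sketch of that pattern, with a fake page-fetching function standing in for the gdata calls; the function, its page size, and the pretend comment count are all illustrative, not part of the gdata API:

MAX_COMMENTS = 1000  # the apparent hard limit of the service


def fetch_page(start_index, page_size=25):
    """Stand-in for GetYouTubeVideoCommentFeed: returns a page of fake
    comments, and an empty list once the source is exhausted."""
    total = 60  # pretend the video has 60 comments
    stop = min(start_index - 1 + page_size, total)
    return ['comment %d' % i for i in range(start_index, stop + 1)]


comments = []
index = 1
while len(comments) < MAX_COMMENTS:
    page = fetch_page(index)
    if not page:
        break
    comments.extend(page)
    index += len(page)

The same cap-and-break structure can be dropped into the gdata loop above, so the script stops before the service starts rejecting requests.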
The Omgili blog (Yoav Pridor) seems to be the first to have described the real-time search facility presently somewhat hidden in Google. By tweaking the search parameters it is possible to search for web pages from the past two minutes: http://www.google.dk/search?tbo=1&tbs=qdr%3An2&q=denmark It is not clear to me what the “two minutes” means: published? Or Google-crawled? I was alerted to this real-time search via Twitter by mia out.
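Such URLs can also be assembled programmatically rather than edited by hand. Below is a small sketch in modern Python 3; the parameter values are simply those from the URL above, where ‘qdr:n2’ appears to mean “within the last two minutes”:

from urllib.parse import urlencode

# Parameters taken from the hand-tweaked search URL above.
params = {'tbo': '1', 'tbs': 'qdr:n2', 'q': 'denmark'}
url = 'http://www.google.dk/search?' + urlencode(params)
print(url)

urlencode takes care of percent-encoding the colon in ‘qdr:n2’, producing the same ‘qdr%3An2’ as in the URL above.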
Direct links to the videos of Danmarks Radio are now possible: Et andet vidne fortæller om bilisternes adfærd (“Another witness tells of the motorists’ behaviour”). Previously it has been quite a task (sometimes impossible) to view videos on the Debian and Kubuntu systems that I have. The present video describes a tragic truck-and-train accident. The man, Niels Stæhr, explains that the gate could be closed for as long as 7 minutes, and impatient drivers would zigzag between the gates. My immediate impression is that Banedanmark has a problem.