Latest Event Updates

1000 total pages in the Brede Wiki

Posted on

The MediaWiki page counter for my Brede Wiki now tells me that it has passed the 1000 “total pages” mark. Pages include, e.g., comments with data on scientific articles, pages for brain regions, and pages for “topics” such as neuroticism. The Brede Wiki is presently open for anonymous edits, and wiki spammers are quite interested in the article on Hidehiko Takahashi; I wonder if they are communicating something via the cryptic comment fields. Disregarding the spammers, the article on Richard S. J. Frackowiak seems to be the most popular after the one on the posterior cingulate gyrus.

Getting comments from YouTube via Python’s gdata.youtube

Posted on

I would like to download comments from YouTube. This is possible via the gdata.youtube Python module. python-gdata is the Debian/Ubuntu package for GData, but it may not include the most recent additions, such as the youtube module, so it may be necessary to download the gdata-python-client package with something like:

wget http://gdata-python-client.googlecode.com/files/gdata-2.0.2.tar.gz
tar vfxz gdata-2.0.2.tar.gz
cd gdata-2.0.2
python setup.py install --home=~/python
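With a --home install, distutils places pure-Python modules under <home>/lib/python, so Python must be told where to look before the module can be imported. A minimal sketch, assuming the install prefix used above:

```shell
# distutils' --home scheme installs modules in <home>/lib/python,
# so add that directory to Python's module search path.
export PYTHONPATH=$HOME/python/lib/python:$PYTHONPATH
```

This line can go in ~/.bashrc so the setting survives new shell sessions.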

With some help from the Python code of Giles Bowkett, it is now possible to download some of the comments on a YouTube video with the following lines of Python:

import gdata.youtube.service

yts = gdata.youtube.service.YouTubeService()
# Comment feed for one video, 25 comments per page.
urlpattern = ('http://gdata.youtube.com/feeds/api/videos/'
              'JE5kkyucts8/comments?start-index=%d&max-results=25')
index = 1
url = urlpattern % index
comments = []
while url:
  ytfeed = yts.GetYouTubeVideoCommentFeed(uri=url)
  comments.extend([comment.content.text for comment in ytfeed.entry])
  # Follow the feed's "next" link to the next page of comments.
  url = ytfeed.GetNextLink().href
  print url

It seems that only 1000 comments can be downloaded, see also Stephen Mesa’s comment. So the small script will error once 1000 comments have been downloaded, presumably when GetNextLink() no longer returns a link object with an href attribute.
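The loop can instead stop gracefully when the feed runs out of pages. Below is a minimal, self-contained sketch of that stopping logic; FakeLink, FakeFeed, and fetch are stand-ins of my own (not part of gdata) that mimic the feed’s entry list and GetNextLink() interface, with plain strings in place of comment objects. With gdata installed, fetch would correspond to yts.GetYouTubeVideoCommentFeed.

```python
class FakeLink(object):
    def __init__(self, href):
        self.href = href

class FakeFeed(object):
    """Stand-in for a gdata comment feed: has .entry and GetNextLink()."""
    def __init__(self, comments, next_url):
        self.entry = comments
        self._next = next_url

    def GetNextLink(self):
        # gdata returns None when the feed has no rel="next" link.
        return FakeLink(self._next) if self._next else None

# Two fake pages of comments; page 1 links to page 2, page 2 is the last.
PAGES = {
    'page1': FakeFeed(['first', 'second'], 'page2'),
    'page2': FakeFeed(['third'], None),
}

def fetch(url):
    return PAGES[url]

comments = []
url = 'page1'
while url:
    feed = fetch(url)
    comments.extend(feed.entry)
    nextlink = feed.GetNextLink()          # None on the last page
    url = nextlink.href if nextlink else None

print(comments)  # ['first', 'second', 'third']
```

The only change to the original loop is checking GetNextLink() for None before reading its href, which ends the while loop cleanly instead of raising an exception.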

Google real-time search

Posted on

The Omgili blog (Yoav Pridor) seems to have been the first to describe the real-time search facility presently somewhat hidden in Google. By tweaking the search parameters it is possible to search for web pages from the past two minutes: http://www.google.dk/search?tbo=1&tbs=qdr%3An2&q=denmark. It is not clear to me what the “two minutes” means: published? Or Google-crawled? I was alerted to this real-time search via Twitter by mia out.
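Such URLs can be built programmatically. The sketch below uses a hypothetical helper of my own, realtime_search_url; the tbo and tbs parameters are undocumented and simply taken from the URL above, so they may change without notice.

```python
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

def realtime_search_url(query, minutes=2):
    # Hypothetical helper. tbo=1 appears to enable the search-options
    # sidebar; tbs=qdr:nN restricts results to the past N minutes
    # (undocumented parameters observed in the URL above).
    params = [('tbo', '1'), ('tbs', 'qdr:n%d' % minutes), ('q', query)]
    return 'http://www.google.dk/search?' + urlencode(params)

print(realtime_search_url('denmark'))
# http://www.google.dk/search?tbo=1&tbs=qdr%3An2&q=denmark
```

urlencode takes care of percent-encoding the colon in qdr:n2, reproducing the URL from the blog post.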

Now links to Danmarks Radio video

Posted on

Direct links to the videos of Danmarks Radio are now possible: Et andet vidne fortæller om bilisternes affærd (“Another witness tells of the motorists’ conduct”). Previously it was quite a task (sometimes impossible) to view the videos on the Debian and Kubuntu systems that I have. The present video describes a tragic truck and train accident. The man, Niels Stæhr, explains that the gate could be closed for as long as 7 minutes, and impatient drivers would zigzag between the gates. My immediate impression is that Banedanmark has a problem.

Yet another social media. That’s posterous.

Posted on

I heard of Posterous from nitoen of overskrift.dk. It seems to be yet another social media website, but with some kind of (tighter?) email integration. I created a website as fnielsen.posterous.com. Let’s see what happens when I send an email to post@posterous.com.