Month: September 2009
Generating pie chart with Python and Google Chart API
I recently discovered the Google Chart API. Given a URL, it generates image files with plots of various sorts, e.g., line plots, pie charts or even QR codes. The pie chart here was generated with the following code:
du -sk /usr/* > stats.txt ; python -c "d = open('stats.txt').read().split(); s = sum(map(float, d[0::2])); print('http://chart.apis.google.com/chart?cht=p&chd=t:' + ','.join(map(lambda x : str(int(float(x)/s*100)),d[0::2])) + '&chs=600x300&chl=' + '|'.join(d[1::2]));"
Copying and pasting the printed URL into a web browser will show the Google-generated pie chart as a PNG. Alternatively, one could let Python download the file by modifying the code to use ‘urllib.urlretrieve()’.
The data in the ‘chd’ parameter apparently need to be given as percentages, which is why the code normalizes the sizes by their sum.
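The dense one-liner can be unpacked into a more readable sketch. The sizes and directory names below are made-up example values, and in Python 3 the download function lives in urllib.request (the post’s Python 2 urllib.urlretrieve is the equivalent):

```python
from urllib.request import urlretrieve  # Python 2: urllib.urlretrieve

# Made-up example of `du -sk /usr/*` output tokens: alternating size, name.
tokens = ['838', 'bin', '112', 'games', '1224', 'include']
sizes = [float(x) for x in tokens[0::2]]
labels = tokens[1::2]
total = sum(sizes)

# The chd parameter takes percentages, hence the normalization by the sum.
percentages = ','.join(str(int(x / total * 100)) for x in sizes)
url = ('http://chart.apis.google.com/chart?cht=p'
       '&chd=t:' + percentages +
       '&chs=600x300&chl=' + '|'.join(labels))

# Fetch the PNG directly instead of pasting the URL into a browser:
# urlretrieve(url, 'piechart.png')
```

The urlretrieve call is left commented out, since it goes to the network.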
There is also a module called pycha, which I haven’t tried.
1000 total pages in the Brede Wiki
The MediaWiki page counter for my Brede Wiki now tells me that it has passed the 1000 “total pages” mark. Pages include, e.g., comments with data on scientific articles, pages for brain regions and pages for “topics” such as neuroticism.
The Brede Wiki is presently open for anonymous edits, and wiki spammers are quite interested in the article on Hidehiko Takahashi. I wonder if they are communicating something via the cryptic comment fields. Disregarding the spammers, the article on Richard S. J. Frackowiak seems to be the most popular article after posterior cingulate gyrus.
Getting comments from YouTube via Python’s gdata.youtube
I would like to download comments from YouTube. This is possible via the gdata.youtube Python module. python-gdata is the Debian/Ubuntu package of GData, but it may not include the most recent additions, such as the youtube module, so it may be necessary to download the gdata-python-client package with something like:
wget http://gdata-python-client.googlecode.com/files/gdata-2.0.2.tar.gz
tar xvfz gdata-2.0.2.tar.gz
cd gdata-2.0.2
python setup.py install --home=~/python
With some help from the Python code of Giles Bowkett, it is possible to download some of the comments on a YouTube video with the following lines of Python code:
import gdata.youtube.service

yts = gdata.youtube.service.YouTubeService()
urlpattern = ('http://gdata.youtube.com/feeds/api/videos/'
              'JE5kkyucts8/comments?start-index=%d&max-results=25')
index = 1
url = urlpattern % index
comments = []
while url:
    ytfeed = yts.GetYouTubeVideoCommentFeed(uri=url)
    comments.extend([comment.content.text for comment in ytfeed.entry])
    # GetNextLink() returns None on the last page, so guard against it.
    next_link = ytfeed.GetNextLink()
    url = next_link.href if next_link else None
    print url
It seems to be possible to download only 1000 comments; see also Stephen Mesa’s comment. So the small script will error once 1000 comments have been downloaded…
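One way to stop cleanly at the 1000-comment cap is to wrap the paging in a small helper. This is a sketch, not part of gdata: the fetch_feed callable and the blanket exception handler are my assumptions, chosen so the loop returns what it has instead of crashing.

```python
def fetch_all_comments(fetch_feed, first_url):
    """Collect comment texts from a paged feed, stopping cleanly.

    fetch_feed(url) is assumed to return a feed object with an .entry
    list (each entry having .content.text) and a .GetNextLink() method
    returning a link object with .href, or None on the last page.
    """
    comments = []
    url = first_url
    while url:
        try:
            feed = fetch_feed(url)
        except Exception:
            # The service errors when paging past the comment cap;
            # keep what has been downloaded so far instead of crashing.
            break
        comments.extend(entry.content.text for entry in feed.entry)
        next_link = feed.GetNextLink()
        url = next_link.href if next_link else None
    return comments
```

For the feed above one would call fetch_all_comments(lambda u: yts.GetYouTubeVideoCommentFeed(uri=u), urlpattern % 1).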
Google real-time search
The Omgili blog (Yoav Pridor) seems to be the first to describe the real-time search facility presently somewhat hidden in Google. By tweaking the search parameters it is possible to search for web pages from the past two minutes:

http://www.google.dk/search?tbo=1&tbs=qdr%3An2&q=denmark

It is not clear to me what the “two minutes” means: published? Or Google-crawled? I was alerted to this real-time search via the Twitter user mia out.
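The URL above can also be built programmatically; a minimal sketch using Python 3’s urlencode, where the parameter interpretations are my reading and may be off:

```python
from urllib.parse import urlencode  # Python 2: urllib.urlencode

# tbo=1 appears to enable the search-tools options; tbs=qdr:n2 restricts
# results to the past two minutes (my reading: qdr = date restriction,
# n = minutes, so n2 = two minutes; h and d would give hours and days).
params = {'tbo': '1', 'tbs': 'qdr:n2', 'q': 'denmark'}
url = 'http://www.google.dk/search?' + urlencode(params)
print(url)
```

urlencode percent-escapes the colon, reproducing the qdr%3An2 seen in the hand-written URL.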
Now links to Danmarks Radio video
Direct links to the videos of Danmarks Radio are now possible: Et andet vidne fortæller om bilisternes affærd (“Another witness tells of the motorists’ behaviour”). Previously it has been quite a task (sometimes impossible) to view the videos on the Debian and Kubuntu systems that I have. The present video describes a tragic truck and train accident. The man, Niels Stæhr, explains that the gate could be closed for as long as 7 minutes, and impatient drivers would zigzag between the gates. My immediate impression is that Banedanmark has a problem.