Python 2.5.1 in Arch

Recently I tried to compile ardour from the PKGBUILD in abs, but the build always failed for me. Very strange, since I didn't modify anything in that PKGBUILD. Later I suspected it might be caused by the default build of python 2.5.1 in Arch, but I can't be sure whether that is the root cause. So I'm posting the steps I used to get ardour to compile below:
1. I had the default builds of python-2.5.1-1 and scons-0.96.95-2 installed and tried to makepkg ardour-2.0.2-1 from my /var/abs. makepkg always failed, with the following error message:
patching file libs/libsndfile/configure
patching file libs/libsndfile/src/flac.c
scons: Reading SConscript files ...
scons: *** No tool named 'midl': not a Zip file
File "/root/src/extra/multimedia/ardour/src/ardour-2.0.2/SConstruct", line 16, in <module>
(There is a small bug in ardour's PKGBUILD: it doesn't check for errors during the build, so whether the build succeeds or not, you always get a .tar.gz package.)
2. First I browsed scons's source code and even installed the latest version from its official site (using makepkg & pacman -U). Still the same error.
3. I guessed the problem was related to python. I fired up a Python shell and was surprised to find an entry for the python 2.4 site-packages in sys.path. On a pure guess, I used pacman to remove all the packages under /usr/lib/python2.4/site-packages/. After that, even the directory /usr/lib/python2.4 no longer existed (actually, at this stage I was already using the scons I mention in step 5), but I still got the same error when running makepkg for ardour.
4. I checked the PKGBUILD and Makefile for python-2.5.1-1 and found that the SITEPATH in the Makefile affects other python path settings (though I don't know what these settings do to the build process). I commented out the following line in the PKGBUILD for python and ran makepkg:
sed -i 's#SITEPATH=#SITEPATH=:../python2.4/site-packages#' Makefile
5. I installed the newly compiled python, but, unfortunately, scons-0.96.95-2 was still installed to /usr/lib/python2.4/site-packages/. So I built scons from source with makepkg and installed it (this time everything landed in /usr/lib/python2.5/site-packages/).
6. Now ardour compiled!
Isn't "sed -i 's#SITEPATH=#SITEPATH=:../python2.4/site-packages#' Makefile" just a hack for the python version transition, one that should be removed once the transition is done? I found that some binary packages are compiled for python 2.4 while others are compiled for python 2.5. Does anyone have a suggestion on this?
Is there any other way to compile ardour from source without recompiling python and scons? (Or is it just me who has this problem?)
Thanks in advance.
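For reference, the stale-path situation from step 3 can be checked without rebuilding anything; this is only a sketch, and the paths printed will differ per system:

```shell
# Print the directories the default python searches for modules;
# a leftover /usr/lib/python2.4/site-packages entry is the red flag.
python -c 'import sys; print(sys.path)'

# Ask pacman which site-packages directory owns the scons files
# (pacman -Ql lists every file installed by a package).
pacman -Ql scons | grep site-packages
```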

no issues at all. been using ardourvst since the old ardour. never had an issue with python.
I've been running ardour with vst support for 2 years now.
ardour 2.0.2 builds and runs with no issue. I edit the PKGBUILD from extra, interrupt the build, add the vst zip package to ../libs/fst and re-run makepkg.
never ever an issue with python or scons. I'm running the latest of all the mentioned software with no issues.
from the ardour website:
This section applies only to people building Ardour 2.0 (not 0.99.X) and only for Linux/x86 platforms. At this time, you cannot run VST plugins in Ardour on OS X or Linux x86_64 platforms. Note that if you use your x86_64 system in 32 bit mode, that counts as x86, and things will work as expected.
Please note that it is illegal to build Ardour 2.0 with VST support and then distribute the binary to anyone else. This is because Steinberg continues to refuse to allow redistribution of the otherwise freely available VST SDK. It is therefore not possible for you to comply with the terms of the GPL (i.e. you cannot provide the person you distribute the binary to with all the source code required to build the binary). We hope that one day Steinberg/Yamaha will change the licensing to allow redistribution of the SDK, and then this silly restriction will go away.
Building Ardour 2.0 with VST support involves a few extra steps before the usual scons-based build.
1.    Download the VST 2.3 SDK from Steinberg. At this time, we cannot provide you with any advice on where to get this from. Steinberg seems to regularly change the URL required to get the SDK. We recommend that you use google to search for it. Do not download the 2.4 or upcoming 3.0 SDK packages, since Ardour cannot currently use them.
2.    put the VST SDK zip archive into libs/fst
3.    make sure you have the Wine "development" package installed (typically called "wine-devel")
4.    run scons VST=1
After a successful build, run scons install.
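Put together, the quoted steps look roughly like this (the SDK zip filename is an assumption; use whatever Steinberg named your download):

```shell
# Sketch of the VST build steps above; vst_sdk2_3.zip is a placeholder name.
cd ardour-2.0.2
cp ~/vst_sdk2_3.zip libs/fst/   # step 2: SDK archive into libs/fst
scons VST=1                     # step 4: build with VST support
scons install                   # after a successful build
```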
Running it
The command name for this version of Ardour is ardourvst, not ardour2, which is the non-VST-supporting version. In all other ways, it should behave identically.
Where to install VST plugins
Ardour looks for VST plugins in the location(s) indicated by your environment variable
Last edited by funkmuscle (2007-05-24 13:10:46)

Similar Messages

  • [SOLVED] tv_grab_nl_py works with python 2.6.5, fails on 3.1.2

    Hi All,
    I have updated my mediacenter. Now tv_grab_nl_py does not work anymore:
    [cedric@tv ~]$ tv_grab_nl_py --output ~/listings.xml --fast
    File "/usr/bin/tv_grab_nl_py", line 341
    print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
    ^
    SyntaxError: invalid syntax
    [cedric@tv ~]$
    the version of python on the mediacenter (running arch linux):
    [cedric@tv ~]$ python
    Python 3.1.2 (r312:79147, Oct 4 2010, 12:35:40)
    [GCC 4.5.1] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
I copied the file to my laptop, and there it seems to work:
    ./tv_grab_nl_py --output ~/listings.xml --fast
    Config file /home/cedric/.xmltv/tv_grab_nl_py.conf not found.
    Re-run me with the --configure flag.
    cedric@laptop:~$
    the version of python on my laptop (running arch linux):
    cedric@laptop:~$ python
    Python 2.6.5 (r265:79063, Apr 1 2010, 05:22:20)
    [GCC 4.4.3 20100316 (prerelease)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
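The SyntaxError on the mediacenter is just Python 3 rejecting the Python 2 print statement on line 341; the script needs a python 2.x interpreter. A minimal illustration, runnable under Python 3:

```python
# Python 2's print statement is a SyntaxError for Python 3's parser,
# which is exactly what the traceback above shows.
py2_line = "print 'tv_grab_nl_py: A grabber that grabs tvguide data'"
try:
    compile(py2_line, "<tv_grab_nl_py>", "exec")
    rejected = False
except SyntaxError:
    rejected = True
print(rejected)  # → True

# The function-call form compiles on Python 2.6+ and 3.x alike.
compile("print('hello')", "<tv_grab_nl_py>", "exec")  # no error
```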
    the script I'm trying to run:
    [cedric@tv ~]$ cat tv_grab_nl_py
    #!/usr/bin/env python
    # $LastChangedDate: 2009-11-14 10:06:41 +0100 (Sat, 14 Nov 2009) $
    # $Rev: 104 $
    # $Author: pauldebruin $
    """
    SYNOPSIS
    tv_grab_nl_py is a python script that trawls tvgids.nl for TV
    programming information and outputs it in XMLTV-formatted output (see
    http://membled.com/work/apps/xmltv). Users of MythTV
    (http://www.mythtv.org) will appreciate the output generated by this
    grabber, because it fills the category fields, i.e. colors in the EPG,
    and has logos for most channels automagically available. Check the
    website below for screenshots. The newest version of this script can be
    found here:
    http://code.google.com/p/tvgrabnlpy/
    USAGE
    Check the web site above and/or run script with --help and start from there
    HISTORY
    tv_grab_nl_py used to be called tv_grab_nl_pdb, first released on
    2003/07/09. The name change was necessary because more and more people
    are actively contributing to this script and I always disliked using my
    initials (I was just too lazy to change it). At the same time I switched
    from using CVS to SVN and as a result the version numbering scheme has
    changed. The lastest official release of tv_grab_nl_pdb is 0.48. The
    first official release of tv_grab_nl_py is 6.
    QUESTIONS
    Questions (and patches) are welcome at: paul at pwdebruin dot net.
    IMPORTANT NOTES
    If you were using tv_grab_nl from the XMLTV bundle then enable the
    compat flag or use the --compat command-line option. Otherwise, the
    xmltvid's are wrong and you will not see any new data in MythTV.
    CONTRIBUTORS
    Main author: Paul de Bruin (paul at pwdebruin dot net)
    Michel van der Laan made available his extensive collection of
    high-quality logos that is used by this script.
    Michael Heus has taken the effort to further enhance this script so that
    it now also includes:
    - Credit info: directors, actors, presenters and writers
    - removal of programs that are actually just groupings/broadcasters
    (e.g. "KETNET", "Wild Friday", "Z@pp")
    - Star-rating for programs tipped by tvgids.nl
    - Black&White, Stereo and URL info
    - Better detection of Movies
    - and much, much more...
    Several other people have provided feedback and patches (these are the
    people I could find in my email archive, if you are missing from this
    list let me know):
    Huub Bouma, Roy van der Kuil, Remco Rotteveel, Mark Wormgoor, Dennis van
    Onselen, Hugo van der Kooij, Han Holl, Ian Mcdonald, Udo van den Heuvel.
    """
    # Modules we need
    import re, urllib2, getopt, sys
    import time, random
    import htmlentitydefs, os, os.path, pickle
    from string import replace, split, strip
    from threading import Thread
    from xml.sax import saxutils
    # Extra check for the datetime module
    try:
        import datetime
    except:
        sys.stderr.write('This script needs the datetime module that was introduced in Python version 2.3.\n')
        sys.stderr.write('You are running:\n')
        sys.stderr.write('%s\n' % sys.version)
        sys.exit(1)
    # XXX: fix to prevent crashes in Snow Leopard [Robert Klep]
    if sys.platform == 'darwin' and sys.version_info[:3] == (2, 6, 1):
        try:
            urllib2.urlopen('http://localhost.localdomain')
        except:
            pass
    # do extra debug stuff
    debug = 1
    try:
        import redirect
    except:
        debug = 0
        pass
    # globals
    # compile only one time
    r_entity = re.compile(r'&(#x[0-9A-Fa-f]+|#[0-9]+|[A-Za-z]+);')
    tvgids = 'http://www.tvgids.nl/'
    uitgebreid_zoeken = tvgids + 'zoeken/'
    # how many seconds to wait before we timeout on a
    # url fetch, 10 seconds seems reasonable
    global_timeout = 10
    # Wait a random number of seconds between each page fetch.
    # We want to be nice and not hammer tvgids.nl (these are the
    # friendly people that provide our data...).
    # Also, it appears tvgids.nl throttles its output.
    # So there, there is not point in lowering these numbers, if you
    # are in a hurry, use the (default) fast mode.
    nice_time = [1, 2]
    # Maximum length in minutes of gaps/overlaps between programs to correct
    max_overlap = 10
    # Strategy to use for correcting overlapping prgramming:
    # 'average' = use average of stop and start of next program
    # 'stop' = keep stop time of current program and adjust start time of next program accordingly
    # 'start' = keep start time of next program and adjust stop of current program accordingly
    # 'none' = do not use any strategy and see what happens
    overlap_strategy = 'average'
    # Experimental strategy for clumping overlapping programming, all programs that overlap more
    # than max_overlap minutes, but less than the length of the shortest program are clumped
    # together. Highly experimental and disabled for now.
    do_clump = False
    # Create a category translation dictionary
    # Look in mythtv/themes/blue/ui.xml for all category names
    # The keys are the categories used by tvgids.nl (lowercase please)
    cattrans = { 'amusement' : 'Talk',
    'animatie' : 'Animated',
    'comedy' : 'Comedy',
    'documentaire' : 'Documentary',
    'educatief' : 'Educational',
    'erotiek' : 'Adult',
    'film' : 'Film',
    'muziek' : 'Art/Music',
    'informatief' : 'Educational',
    'jeugd' : 'Children',
    'kunst/cultuur' : 'Arts/Culture',
    'misdaad' : 'Crime/Mystery',
    'muziek' : 'Music',
    'natuur' : 'Science/Nature',
    'nieuws/actualiteiten' : 'News',
    'overige' : 'Unknown',
    'religieus' : 'Religion',
    'serie/soap' : 'Drama',
    'sport' : 'Sports',
    'theater' : 'Arts/Culture',
    'wetenschap' : 'Science/Nature'}
    # Create a role translation dictionary for the xmltv credits part
    # The keys are the roles used by tvgids.nl (lowercase please)
    roletrans = {'regie' : 'director',
    'acteurs' : 'actor',
    'presentatie' : 'presenter',
    'scenario' : 'writer'}
    # We have two sources of logos, the first provides the nice ones, but is not
    # complete. We use the tvgids logos to fill the missing bits.
    logo_provider = [ 'http://visualisation.tudelft.nl/~paul/logos/gif/64x64/',
    'http://static.tvgids.nl/gfx/zenders/' ]
    logo_names = {
    1 : [0, 'ned1'],
    2 : [0, 'ned2'],
    3 : [0, 'ned3'],
    4 : [0, 'rtl4'],
    5 : [0, 'een'],
    6 : [0, 'canvas_color'],
    7 : [0, 'bbc1'],
    8 : [0, 'bbc2'],
    9 : [0,'ard'],
    10 : [0,'zdf'],
    11 : [1, 'rtl'],
    12 : [0, 'wdr'],
    13 : [1, 'ndr'],
    14 : [1, 'srsudwest'],
    15 : [1, 'rtbf1'],
    16 : [1, 'rtbf2'],
    17 : [0, 'tv5'],
    18 : [0, 'ngc'],
    19 : [1, 'eurosport'],
    20 : [1, 'tcm'],
    21 : [1, 'cartoonnetwork'],
    24 : [0, 'canal+red'],
    25 : [0, 'mtv-color'],
    26 : [0, 'cnn'],
    27 : [0, 'rai'],
    28 : [1, 'sat1'],
    29 : [0, 'discover-spacey'],
    31 : [0, 'rtl5'],
    32 : [1, 'trt'],
    34 : [0, 'veronica'],
    35 : [0, 'tmf'],
    36 : [0, 'sbs6'],
    37 : [0, 'net5'],
    38 : [1, 'arte'],
    39 : [0, 'canal+blue'],
    40 : [0, 'at5'],
    46 : [0, 'rtl7'],
    49 : [1, 'vtm'],
    50 : [1, '3sat'],
    58 : [1, 'pro7'],
    59 : [1, 'kanaal2'],
    60 : [1, 'vt4'],
    65 : [0, 'animal-planet'],
    73 : [1, 'mezzo'],
    86 : [0, 'bbc-world'],
    87 : [1, 'tve'],
    89 : [1, 'nick'],
    90 : [1, 'bvn'],
    91 : [0, 'comedy_central'],
    92 : [0, 'rtl8'],
    99 : [1, 'sport1_1'],
    100 : [0, 'rtvu'],
    101 : [0, 'tvwest'],
    102 : [0, 'tvrijnmond'],
    103 : [1, 'tvnoordholland'],
    104 : [1, 'bbcprime'],
    105 : [1, 'spiceplatinum'],
    107 : [0, 'canal+yellow'],
    108 : [0, 'tvnoord'],
    109 : [0, 'omropfryslan'],
    114 : [0, 'omroepbrabant']}
    # A selection of user agents we will impersonate, in an attempt to be less
    # conspicuous to the tvgids.nl police.
    user_agents = [ 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.7) Gecko/20060909 Firefox/1.5.0.7',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.9) Gecko/20071105 Firefox/2.0.0.9',
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.8) Gecko/20071022 Ubuntu/7.10 (gutsy) Firefox/2.0.0.8' ]
    # Work in progress, the idea is to cache program categories and
    # descriptions to eliminate a lot of page fetches from tvgids.nl
    # for programs that do not have interesting/changing descriptions
    class ProgramCache:
        """
        A cache to hold program name and category info.
        TVgids stores the detail for each program on a separate URL with an
        (apparently unique) ID. This cache stores the fetched info with the ID.
        New fetches will use the cached info instead of doing an (expensive)
        page fetch.
        """
        def __init__(self, filename=None):
            """Create a new ProgramCache object, optionally from file"""
            # where we store our info
            self.filename = filename
            if filename == None:
                self.pdict = {}
            else:
                if os.path.isfile(filename):
                    self.load(filename)
                else:
                    self.pdict = {}
        def load(self, filename):
            """Loads a pickled cache dict from file"""
            try:
                self.pdict = pickle.load(open(filename,'r'))
            except:
                sys.stderr.write('Error loading cache file: %s (possibly corrupt)' % filename)
                sys.exit(2)
        def dump(self, filename):
            """Dumps a pickled cache, and makes sure it is valid"""
            if os.access(filename, os.F_OK):
                try:
                    os.remove(filename)
                except:
                    sys.stderr.write('Cannot remove %s, check permissions' % filename)
            pickle.dump(self.pdict, open(filename+'.tmp', 'w'))
            os.rename(filename+'.tmp', filename)
        def query(self, program_id):
            """Updates/gets/whatever."""
            try:
                return self.pdict[program_id]
            except:
                return None
        def add(self, program):
            """Adds a program"""
            self.pdict[program['ID']] = program
        def clear(self):
            """Clears the cache (i.e. empties it)"""
            self.pdict = {}
        def clean(self):
            """
            Removes all cached programming before today.
            Also removes erroneously cached programming.
            """
            now = time.localtime()
            dnow = datetime.datetime(now[0],now[1],now[2])
            for key in self.pdict.keys():
                try:
                    if self.pdict[key]['stop-time'] < dnow or self.pdict[key]['name'].lower() == 'onbekend':
                        del self.pdict[key]
                except:
                    pass
    def usage():
        print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
        print 'and stores it in XMLTV-combatible format.\n'
        print 'Usage:'
        print '--help, -h = print this info'
        print '--configure = create configfile (overwrites existing file)'
        print '--config-file = name of the configuration file (default = ~/.xmltv/tv_grab_py.conf'
        print '--capabilities = xmltv required option'
        print '--desc-length = maximum allowed length of programme descriptions in bytes.'
        print '--description = prints a short description of the grabber'
        print '--output = file where to put the output'
        print '--days = # number of days to grab'
        print '--preferredmethod = returns the preferred method to be called'
        print '--fast = do not grab descriptions of programming'
        print '--slow = grab descriptions of programming'
        print '--quiet = suppress all output'
        print '--compat = append tvgids.nl to the xmltv id (use this if you were using tv_grab_nl)'
        print '--logos 0/1 = insert urls to channel icons (mythfilldatabase will then use these)'
        print '--nocattrans = do not translate the grabbed genres into MythTV-genres'
        print '--cache = cache descriptions and use the file to store'
        print '--clean_cache = clean the cache file before fetching'
        print '--clear_cache = empties the cache file before fetching data'
        print '--slowdays = grab slowdays initial days and the rest in fast mode'
        print '--max_overlap = maximum length of overlap between programming to correct [minutes]'
        print '--overlap_strategy = what strategy to use to correct overlaps (check top of source code)'
    def filter_line_identity(m, defs=htmlentitydefs.entitydefs):
        # callback: translate one entity to its ISO Latin value
        k = m.group(1)
        if k.startswith("#") and k[1:] in xrange(256):
            return chr(int(k[1:]))
        try:
            return defs[k]
        except KeyError:
            return m.group(0) # use as is
    def filter_line(s):
        """Removes unwanted stuff in strings (adapted from tv_grab_be)"""
        # do the latin1 stuff
        s = r_entity.sub(filter_line_identity, s)
        s = replace(s,'&nbsp;',' ')
        # I suspect the following three lines are redundant, but they do
        # little harm -- Han Holl
        s = replace(s,'\r',' ')
        x = re.compile('(<.*?>)') # Udo
        s = x.sub('', s) #Udo
        s = replace(s, '~Q', "'")
        s = replace(s, '~R', "'")
        # Hmm, not sure if I understand this. Without it, mythfilldatabase barfs
        # on program names like "Steinbrecher &..."
        # We must create valid XML -- Han Holl
        s = saxutils.escape(s)
        return s
    def calc_timezone(t):
        """
        Takes a time from tvgids.nl and formats it with all the required
        timezone conversions.
        in: '20050429075000'
        out:'20050429075000 (CET|CEST)'
        Until I have figured out how to correctly do timezoning in python this method
        will bork if you are not in a zone that has the same DST rules as 'Europe/Amsterdam'.
        """
        year = int(t[0:4])
        month = int(t[4:6])
        day = int(t[6:8])
        hour = int(t[8:10])
        minute = int(t[10:12])
        #td = {'CET': '+0100', 'CEST': '+0200'}
        #td = {'CET': '+0100', 'CEST': '+0200', 'W. Europe Standard Time' : '+0100', 'West-Europa (standaardtijd)' : '+0100'}
        td = {0 : '+0100', 1 : '+0200'}
        pt = time.mktime((year,month,day,hour,minute,0,0,0,-1))
        timezone=''
        try:
            #timezone = time.tzname[(time.localtime(pt))[-1]]
            timezone = (time.localtime(pt))[-1]
        except:
            sys.stderr.write('Cannot convert time to timezone')
        return t+' %s' % td[timezone]
    def format_timezone(td):
        """Given a datetime object, returns a string in XMLTV format"""
        tstr = td.strftime('%Y%m%d%H%M00')
        return calc_timezone(tstr)
    def get_page_internal(url, quiet=0):
        """
        Retrieves the url and returns a string with the contents.
        Optionally, returns None if processing takes longer than
        the specified number of timeout seconds.
        """
        txtdata = None
        txtheaders = {'Keep-Alive' : '300',
                      'User-Agent' : user_agents[random.randint(0, len(user_agents)-1)] }
        try:
            #fp = urllib2.urlopen(url)
            rurl = urllib2.Request(url, txtdata, txtheaders)
            fp = urllib2.urlopen(rurl)
            lines = fp.readlines()
            page = "".join(lines)
            return page
        except:
            if not quiet:
                sys.stderr.write('Cannot open url: %s\n' % url)
            return None
    class FetchURL(Thread):
        """A simple thread to fetch a url with a timeout"""
        def __init__ (self, url, quiet=0):
            Thread.__init__(self)
            self.quiet = quiet
            self.url = url
            self.result = None
        def run(self):
            self.result = get_page_internal(self.url, self.quiet)
    def get_page(url, quiet=0):
        """
        Wrapper around get_page_internal to catch the
        timeout exception
        """
        try:
            fu = FetchURL(url, quiet)
            fu.start()
            fu.join(global_timeout)
            return fu.result
        except:
            if not quiet:
                sys.stderr.write('get_page timed out on (>%s s): %s\n' % (global_timeout, url))
            return None
    def get_channels(file, quiet=0):
        """
        Get a list of all available channels and store these
        in a file.
        """
        # store channels in a dict
        channels = {}
        # tvgids stores several instances of channels, we want to
        # find all the possibile channels
        channel_get = re.compile('<optgroup label=.*?>(.*?)</optgroup>', re.DOTALL)
        # this is how we will find a (number, channel) instance
        channel_re = re.compile('<option value="([0-9]+)" >(.*?)</option>', re.DOTALL)
        # this is where we will try to find our channel list
        total = get_page(uitgebreid_zoeken, quiet)
        if total == None:
            return
        # get a list of match objects of all the <select blah station>
        stations = channel_get.finditer(total)
        # and create a dict of number, channel_name pairs
        # we do this this way because several instances of the
        # channel list are stored in the url and not all of the
        # instances have all the channels, this way we get them all.
        for station in stations:
            m = channel_re.finditer(station.group(0))
            for p in m:
                try:
                    a = int(p.group(1))
                    b = filter_line(p.group(2))
                    channels[a] = b
                except:
                    sys.stderr.write('Oops, [%s,%s] does not look like a valid channel, skipping it...\n' % (p.group(1),p.group(2)))
        # sort on channel number (arbitrary but who cares)
        keys = channels.keys()
        keys.sort()
        # and create a file with the channels
        f = open(file,'w')
        for k in keys:
            f.write("%s %s\n" % (k, channels[k]))
        f.close()
    def get_channel_all_days(channel, days, quiet=0):
        """
        Get all available days of programming for channel number
        The output is a list of programming in order where each row
        contains a dictionary with program information.
        """
        now = datetime.datetime.now()
        programs = []
        # Tvgids shows programs per channel per day, so we loop over the number of days
        # we are required to grab
        for offset in range(0, days):
            channel_url = 'http://www.tvgids.nl/zoeken/?d=%i&z=%s' % (offset, channel)
            # For historic purposes, the old style url that gave us a full week in advance:
            # channel_url = 'http://www.tvgids.nl/zoeken/?trefwoord=Titel+of+trefwoord&interval=0&timeslot='+\
            # '&station=%s&periode=%i&genre=&order=0' % (channel,days-1)
            # Sniff, we miss you...
            if offset > 0:
                time.sleep(random.randint(nice_time[0], nice_time[1]))
            # get the raw programming for the day
            total = get_page(channel_url, quiet)
            if total == None:
                return programs
            # Setup a number of regexps
            # checktitle will match the title row in H2 tags of the daily overview page, e.g.
            # <h2>zondag 19 oktober 2008</h2>
            checktitle = re.compile('<h2>(.*?)</h2>',re.DOTALL)
            # getrow will locate each row with program details
            getrow = re.compile('<a href="/programma/(.*?)</a>',re.DOTALL)
            # parserow matches the required program info, with groups:
            # 1 = program ID
            # 2 = broadcast times
            # 3 = program name
            parserow = re.compile('(.*?)/.*<span class="time">(.*?)</span>.*<span class="title">(.*?)</span>', re.DOTALL)
            # normal begin and end times
            times = re.compile('([0-9]+:[0-9]+) - ([0-9]+:[0-9]+)?')
            # Get the day of month listed on the page as well as the expected date we are grabbing and compare these.
            # If these do not match, we skip parsing the programs on the page and issue a warning.
            #dayno = int(checkday.search(total).group(1))
            title = checktitle.search(total)
            if title:
                title = title.group(1)
                dayno = title.split()[1]
            else:
                sys.stderr.write('\nOops, there was a problem with page %s. Skipping it...\n' % (channel_url))
                continue
            expected = now + datetime.timedelta(days=offset)
            if (not dayno.isdigit() or int(dayno) != expected.day):
                sys.stderr.write('\nOops, did not expect page %s to list programs for "%s", skipping it...\n' % (channel_url,title))
                continue
            # and find relevant programming info
            allrows = getrow.finditer(total)
            for r in allrows:
                detail = parserow.search(r.group(1))
                if detail != None:
                    # default times
                    start_time = None
                    stop_time = None
                    # parse for begin and end times
                    t = times.search(detail.group(2))
                    if t != None:
                        start_time = t.group(1)
                        stop_time = t.group(2)
                    program_url = 'http://www.tvgids.nl/programma/' + detail.group(1) + '/'
                    program_name = detail.group(3)
                    # store time, name and detail url in a dictionary
                    tdict = {}
                    tdict['start'] = start_time
                    tdict['stop'] = stop_time
                    tdict['name'] = program_name
                    if tdict['name'] == '':
                        tdict['name'] = 'onbekend'
                    tdict['url'] = program_url
                    tdict['ID'] = detail.group(1)
                    tdict['offset'] = offset
                    #Add star rating if tipped by tvgids.nl
                    tdict['star-rating'] = '';
                    if r.group(1).find('Tip') != -1:
                        tdict['star-rating'] = '4/5'
                    # and append the program to the list of programs
                    programs.append(tdict)
        # done
        return programs
    def make_daytime(time_string, offset=0, cutoff='00:00', stoptime=False):
        """
        Given a string '11:35' and an offset from today,
        return a datetime object. The cuttoff specifies the point where the
        new day starts.
        Examples:
        In [2]:make_daytime('11:34',0)
        Out[2]:datetime.datetime(2006, 8, 3, 11, 34)
        In [3]:make_daytime('11:34',1)
        Out[3]:datetime.datetime(2006, 8, 4, 11, 34)
        In [7]:make_daytime('11:34',0,'12:00')
        Out[7]:datetime.datetime(2006, 8, 4, 11, 34)
        In [4]:make_daytime('11:34',0,'11:34',False)
        Out[4]:datetime.datetime(2006, 8, 3, 11, 34)
        In [5]:make_daytime('11:34',0,'11:34',True)
        Out[5]:datetime.datetime(2006, 8, 4, 11, 34)
        """
        h,m = [int(x) for x in time_string.split(':')];
        hm = int(time_string.replace(':',''))
        chm = int(cutoff.replace(':',''))
        # check for the cutoff, if the time is before the cutoff then
        # add a day
        extra_day = 0
        if (hm < chm) or (stoptime==True and hm == chm):
            extra_day = 1
        # and create a datetime object, DST is handled at a later point
        pt = time.localtime()
        dt = datetime.datetime(pt[0],pt[1],pt[2],h,m)
        dt = dt + datetime.timedelta(offset+extra_day)
        return dt
    def correct_times(programs, quiet=0):
        """
        Parse a list of programs as generated by get_channel_all_days() and
        convert begin and end times to xmltv compatible times in datetime objects.
        """
        if programs == []:
            return programs
        # the start time of programming for this day, times *before* this time are
        # assumed to be on the next day
        day_start_time = '06:00'
        # initialise using the start time of the first program on this day
        if programs[0]['start'] != None:
            day_start_time = programs[0]['start']
        for program in programs:
            if program['start'] == program['stop']:
                program['stop'] = None
            # convert the times
            if program['start'] != None:
                program['start-time'] = make_daytime(program['start'], program['offset'], day_start_time)
            else:
                program['start-time'] = None
            if program['stop'] != None:
                program['stop-time'] = make_daytime(program['stop'], program['offset'], day_start_time, stoptime=True)
                # extra correction, needed because the stop time of a program may be on the next day, after the
                # day cutoff. For example:
                # 06:00 - 23:40 Long Program
                # 23:40 - 00:10 Lala
                # 00:10 - 08:00 Wawa
                # This puts the end date of Wawa on the current, instead of the next day. There is no way to detect
                # this with a single cutoff in make_daytime. Therefore, check if there is a day difference between
                # start and stop dates and correct if necessary.
                if program['start-time'] != None:
                    # make two dates
                    start = program['start-time']
                    stop = program['stop-time']
                    single_day = datetime.timedelta(1)
                    startdate = datetime.datetime(start.year,start.month,start.day)
                    stopdate = datetime.datetime(stop.year,stop.month,stop.day)
                    if startdate - stopdate == single_day:
                        program['stop-time'] = program['stop-time'] + single_day
            else:
                program['stop-time'] = None
    def parse_programs(programs, offset=0, quiet=0):
    Parse a list of programs as generated by get_channel_all_days() and
    convert begin and end times to xmltv compatible times.
    # good programs
    good_programs = []

    # calculate absolute start and stop times
    correct_times(programs, quiet)

    # next, correct for missing end time and copy over all good programming to the
    # good_programs list
    for i in range(len(programs)):
        # Try to correct missing end time by taking start time from next program on schedule
        if (programs[i]['stop-time'] == None and i < len(programs)-1):
            if not quiet:
                sys.stderr.write('Oops, "%s" has no end time. Trying to fix...\n' % programs[i]['name'])
            programs[i]['stop-time'] = programs[i+1]['start-time']

        # The common case: start and end times are present and are not
        # equal to each other (yes, this can happen)
        if programs[i]['start-time'] != None and \
           programs[i]['stop-time'] != None and \
           programs[i]['start-time'] != programs[i]['stop-time']:
            good_programs.append(programs[i])

    # Han Holl: try to exclude programs that stop before they begin
    for i in range(len(good_programs)-1,-1,-1):
        if good_programs[i]['stop-time'] <= good_programs[i]['start-time']:
            if not quiet:
                sys.stderr.write('Deleting invalid stop/start time: %s\n' % good_programs[i]['name'])
            del good_programs[i]

    # Try to exclude programs that only identify a group or broadcaster and have
    # overlapping start/end times with the actual programs
    for i in range(len(good_programs)-2,-1,-1):
        if good_programs[i]['start-time'] <= good_programs[i+1]['start-time'] and \
           good_programs[i]['stop-time'] >= good_programs[i+1]['stop-time']:
            if not quiet:
                sys.stderr.write('Deleting grouping/broadcaster: %s\n' % good_programs[i]['name'])
            del good_programs[i]

    for i in range(len(good_programs)-1):
        # PdB: Fix tvgids start-before-end x minute interval overlap. An overlap
        # (positive or negative) is halved and each half is assigned to the
        # adjacent programmes. The maximum overlap length between programming is
        # set by the global variable 'max_overlap' and is default 10 minutes.
        # Examples:
        # Positive overlap (= overlap in programming):
        #   10:55 - 12:00 Lala
        #   11:55 - 12:20 Wawa
        # is transformed into:
        #   10:55 - 11:57 Lala
        #   11:57 - 12:20 Wawa
        # Negative overlap (= gap in programming):
        #   10:55 - 11:50 Lala
        #   12:00 - 12:20 Wawa
        # is transformed into:
        #   10:55 - 11:55 Lala
        #   11:55 - 12:20 Wawa
        stop = good_programs[i]['stop-time']
        start = good_programs[i+1]['start-time']
        dt = stop - start
        avg = start + dt / 2
        overlap = 24*60*60*dt.days + dt.seconds

        # check for the size of the overlap
        if 0 < abs(overlap) <= max_overlap*60:
            if not quiet:
                if overlap > 0:
                    sys.stderr.write('"%s" and "%s" overlap %s minutes. Adjusting times.\n' % \
                        (good_programs[i]['name'],good_programs[i+1]['name'],overlap / 60))
                else:
                    sys.stderr.write('"%s" and "%s" have gap of %s minutes. Adjusting times.\n' % \
                        (good_programs[i]['name'],good_programs[i+1]['name'],abs(overlap) / 60))

            # stop-time of previous program wins
            if overlap_strategy == 'stop':
                good_programs[i+1]['start-time'] = good_programs[i]['stop-time']
            # start-time of next program wins
            elif overlap_strategy == 'start':
                good_programs[i]['stop-time'] = good_programs[i+1]['start-time']
            # average the difference
            elif overlap_strategy == 'average':
                good_programs[i]['stop-time'] = avg
                good_programs[i+1]['start-time'] = avg
            # leave as is
            else:
                pass
    # Experimental strategy to make sure programming does not disappear. All
    # programs that overlap more than the maximum overlap length, but less than
    # the shortest length of the two programs, are clumped.
    if do_clump:
        for i in range(len(good_programs)-1):
            stop = good_programs[i]['stop-time']
            start = good_programs[i+1]['start-time']
            dt = stop - start
            overlap = 24*60*60*dt.days + dt.seconds
            length0 = good_programs[i]['stop-time'] - good_programs[i]['start-time']
            length1 = good_programs[i+1]['stop-time'] - good_programs[i+1]['start-time']
            l0 = length0.days*24*60*60 + length0.seconds
            l1 = length1.days*24*60*60 + length1.seconds
            if abs(overlap) >= max_overlap*60 <= min(l0,l1)*60 and \
               not good_programs[i].has_key('clumpidx') and \
               not good_programs[i+1].has_key('clumpidx'):
                good_programs[i]['clumpidx'] = '0/2'
                good_programs[i+1]['clumpidx'] = '1/2'
                good_programs[i]['stop-time'] = good_programs[i+1]['stop-time']
                good_programs[i+1]['start-time'] = good_programs[i]['start-time']

    # done, nothing to see here, please move on
    return good_programs
def get_descriptions(programs, program_cache=None, nocattrans=0, quiet=0, slowdays=0):
    """
    Given a list of programs, from get_channel, retrieve program information
    """
    # This regexp tries to find details such as Genre, Acteurs, Jaar van Premiere etc.
    detail = re.compile('<li>.*?<strong>(.*?):</strong>.*?<br />(.*?)</li>', re.DOTALL)
    # These regexps find the description area, the program type and descriptive text
    description = re.compile('<div class="description">.*?<div class="text"(.*?)<div class="clearer"></div>',re.DOTALL)
    descrtype = re.compile('<div class="type">(.*?)</div>',re.DOTALL)
    descrline = re.compile('<p>(.*?)</p>',re.DOTALL)

    # randomize detail requests
    nprograms = len(programs)
    fetch_order = range(0,nprograms)
    random.shuffle(fetch_order)

    counter = 0
    for i in fetch_order:
        counter += 1
        if programs[i]['offset'] >= slowdays:
            continue
        if not quiet:
            sys.stderr.write('\n(%3.0f%%) %s: %s ' % (100*float(counter)/float(nprograms), i, programs[i]['name']))

        # check the cache for this program's ID
        cached_program = program_cache.query(programs[i]['ID'])
        if (cached_program != None):
            if not quiet:
                sys.stderr.write(' [cached]')
            # copy the cached information, except the start/end times, rating and clumping,
            # these may have changed.
            tstart = programs[i]['start-time']
            tstop = programs[i]['stop-time']
            rating = programs[i]['star-rating']
            try:
                clump = programs[i]['clumpidx']
            except:
                clump = False
            programs[i] = cached_program
            programs[i]['start-time'] = tstart
            programs[i]['stop-time'] = tstop
            programs[i]['star-rating'] = rating
            if clump:
                programs[i]['clumpidx'] = clump
            continue
        else:
            # be nice to tvgids.nl
            time.sleep(random.randint(nice_time[0], nice_time[1]))

        # get the details page, and get all the detail nodes
        descriptions = ()
        details = ()
        try:
            if not quiet:
                sys.stderr.write(' [normal fetch]')
            total = get_page(programs[i]['url'])
            details = detail.finditer(total)
            descrspan = description.search(total)
            descriptions = descrline.finditer(descrspan.group(1))
        except:
            # if we cannot find the description page,
            # go to next in the loop
            if not quiet:
                sys.stderr.write(' [fetch failed or timed out]')
            continue

        # define containers
        programs[i]['credits'] = {}
        programs[i]['video'] = {}

        # now parse the details
        line_nr = 1

        # First, we try to find the program type in the description section.
        # Note that this is not the same as the generic genres (these are searched
        # later on), but a more descriptive one like "Culinair programma".
        # If present, we store this as first part of the regular description:
        programs[i]['detail1'] = descrtype.search(descrspan.group(1)).group(1).capitalize()
        if programs[i]['detail1'] != '':
            line_nr = line_nr + 1

        # Secondly, we add one or more lines of the program description that are present.
        for descript in descriptions:
            d_str = 'detail' + str(line_nr)
            programs[i][d_str] = descript.group(1)

            # Remove sponsored link from description if present.
            sponsor_pos = programs[i][d_str].rfind('<i>Gesponsorde link:</i>')
            if sponsor_pos > 0:
                programs[i][d_str] = programs[i][d_str][0:sponsor_pos]
            programs[i][d_str] = filter_line(programs[i][d_str]).strip()
            line_nr = line_nr + 1

        # Finally, we check out all program details. These are generically denoted as:
        #   <li><strong>(TYPE):</strong><br />(CONTENT)</li>
        # Some examples:
        #   <li><strong>Genre:</strong><br />16 oktober 2008</li>
        #   <li><strong>Genre:</strong><br />Amusement</li>
        for d in details:
            type = d.group(1).strip().lower()
            content_asis = d.group(2).strip()
            content = filter_line(content_asis).strip()
            if content == '':
                continue
            elif type == 'genre':
                # Fix detection of movies based on description as tvgids.nl sometimes
                # categorises a movie as e.g. "Komedie", "Misdaadkomedie", "Detectivefilm".
                genre = content
                if (programs[i]['detail1'].lower().find('film') != -1 \
                    or programs[i]['detail1'].lower().find('komedie') != -1)\
                    and programs[i]['detail1'].lower().find('tekenfilm') == -1 \
                    and programs[i]['detail1'].lower().find('animatiekomedie') == -1 \
                    and programs[i]['detail1'].lower().find('filmpje') == -1:
                    genre = 'film'
                if nocattrans:
                    programs[i]['genre'] = genre.title()
                else:
                    try:
                        programs[i]['genre'] = cattrans[genre.lower()]
                    except:
                        programs[i]['genre'] = ''
            # Parse persons and their roles for credit info
            elif roletrans.has_key(type):
                programs[i]['credits'][roletrans[type]] = []
                persons = content_asis.split(',')
                for name in persons:
                    if name.find(':') != -1:
                        name = name.split(':')[1]
                    if name.find('-') != -1:
                        name = name.split('-')[0]
                    if name.find('e.a') != -1:
                        name = name.split('e.a')[0]
                    programs[i]['credits'][roletrans[type]].append(filter_line(name.strip()))
            elif type == 'bijzonderheden':
                if content.find('Breedbeeld') != -1:
                    programs[i]['video']['breedbeeld'] = 1
                if content.find('Zwart') != -1:
                    programs[i]['video']['blackwhite'] = 1
                if content.find('Teletekst') != -1:
                    programs[i]['teletekst'] = 1
                if content.find('Stereo') != -1:
                    programs[i]['stereo'] = 1
            elif type == 'url':
                programs[i]['infourl'] = content
            else:
                # In unmatched cases, we still add the parsed type and content to the program details.
                # Some of these will lead to xmltv output during the xmlefy_programs step
                programs[i][type] = content

        # do not cache programming that is unknown at the time
        # of fetching.
        if programs[i]['name'].lower() != 'onbekend':
            program_cache.add(programs[i])

    if not quiet:
        sys.stderr.write('\ndone...\n\n')

    # done
def title_split(program):
    """
    Some channels have the annoying habit of adding the subtitle to the title of
    a program. This function attempts to fix this, by splitting the name at a ': '.
    """
    if (program.has_key('titel aflevering') and program['titel aflevering'] != '') \
       or (program.has_key('genre') and program['genre'].lower() in ['movies','film']):
        return

    colonpos = program['name'].rfind(': ')
    if colonpos > 0:
        program['titel aflevering'] = program['name'][colonpos+1:len(program['name'])].strip()
        program['name'] = program['name'][0:colonpos].strip()
def xmlefy_programs(programs, channel, desc_len, compat=0, nocattrans=0):
    """
    Given a list of programming (from get_channels())
    returns a string with the xml equivalent
    """
    output = []
    for program in programs:
        clumpidx = ''
        try:
            if program.has_key('clumpidx'):
                clumpidx = 'clumpidx="'+program['clumpidx']+'"'
        except:
            print program

        output.append(' <programme start="%s" stop="%s" channel="%s%s" %s> \n' % \
            (format_timezone(program['start-time']), format_timezone(program['stop-time']),\
             channel, compat and '.tvgids.nl' or '', clumpidx))
        output.append(' <title lang="nl">%s</title>\n' % filter_line(program['name']))

        if program.has_key('titel aflevering') and program['titel aflevering'] != '':
            output.append(' <sub-title lang="nl">%s</sub-title>\n' % filter_line(program['titel aflevering']))

        desc = []
        for detail_row in ['detail1','detail2','detail3']:
            if program.has_key(detail_row) and not re.search('[Gg]een detailgegevens be(?:kend|schikbaar)', program[detail_row]):
                desc.append('%s ' % program[detail_row])
        if desc != []:
            # join and remove newlines from descriptions
            desc_line = "".join(desc).strip()
            desc_line = desc_line.replace('\n', ' ')
            if len(desc_line) > desc_len:
                spacepos = desc_line[0:desc_len-3].rfind(' ')
                desc_line = desc_line[0:spacepos] + '...'
            output.append(' <desc lang="nl">%s</desc>\n' % desc_line)

        # Process credits section if present.
        # This will generate director/actor/presenter info.
        if program.has_key('credits') and program['credits'] != {}:
            output.append(' <credits>\n')
            for role in program['credits']:
                for name in program['credits'][role]:
                    if name != '':
                        output.append(' <%s>%s</%s>\n' % (role, name, role))
            output.append(' </credits>\n')

        if program.has_key('jaar van premiere') and program['jaar van premiere'] != '':
            output.append(' <date>%s</date>\n' % program['jaar van premiere'])

        if program.has_key('genre') and program['genre'] != '':
            output.append(' <category')
            if nocattrans:
                output.append(' lang="nl"')
            output.append('>%s</category>\n' % program['genre'])

        if program.has_key('infourl') and program['infourl'] != '':
            output.append(' <url>%s</url>\n' % program['infourl'])

        if program.has_key('aflevering') and program['aflevering'] != '':
            output.append(' <episode-num system="onscreen">%s</episode-num>\n' % filter_line(program['aflevering']))

        # Process video section if present
        if program.has_key('video') and program['video'] != {}:
            output.append(' <video>\n')
            if program['video'].has_key('breedbeeld'):
                output.append(' <aspect>16:9</aspect>\n')
            if program['video'].has_key('blackwhite'):
                output.append(' <colour>no</colour>\n')
            output.append(' </video>\n')

        if program.has_key('stereo'):
            output.append(' <audio><stereo>stereo</stereo></audio>\n')
        if program.has_key('teletekst'):
            output.append(' <subtitles type="teletext" />\n')

        # Set star-rating if applicable
        if program['star-rating'] != '':
            output.append(' <star-rating><value>%s</value></star-rating>\n' % program['star-rating'])

        output.append(' </programme>\n')
    return "".join(output)
def main():
    # Parse command line options
    try:
        opts, args = getopt.getopt(sys.argv[1:], "h", ["help", "output=", "capabilities",
                                                       "preferredmethod", "days=",
                                                       "configure", "fast", "slow",
                                                       "cache=", "clean_cache",
                                                       "slowdays=", "compat",
                                                       "desc-length=", "description",
                                                       "nocattrans", "config-file=",
                                                       "max_overlap=", "overlap_strategy=",
                                                       "clear_cache", "quiet", "logos="])
    except getopt.GetoptError:
        usage()
        sys.exit(2)

    # DEFAULT OPTIONS - Edit if you know what you are doing

    # where the output goes
    output = None
    output_file = None

    # the total number of days to fetch
    days = 6

    # Fetch data in fast mode, i.e. do NOT grab all the detail information.
    # Fast means fast, because it then does not have to fetch a web page for each program.
    # Default: fast=0
    fast = 0

    # number of days to fetch in slow mode. For example: --days 5 --slowdays 2 will
    # fetch the first two days in slow mode (with all the details) and the remaining
    # three days in fast mode.
    slowdays = 6

    # no output
    quiet = 0

    # insert url of channel logo into the xml data; this will be picked up by mythfilldatabase
    logos = 1

    # enable this option if you were using tv_grab_nl; it adjusts the generated
    # xmltvid's so that everything works.
    compat = 0

    # enable this option if you do not want the tvgids categories translated into
    # MythTV categories (genres)
    nocattrans = 0

    # Maximum number of characters to use for program description.
    # Different values may work better in different versions of MythTV.
    desc_len = 475

    # default configuration file locations
    hpath = ''
    if os.environ.has_key('HOME'):
        hpath = os.environ['HOME']
    # extra test for windows users
    elif os.environ.has_key('HOMEPATH'):
        hpath = os.environ['HOMEPATH']
    # hpath = ''
    xmltv_dir = hpath+'/.xmltv'
    program_cache_file = xmltv_dir+'/program_cache'
    config_file = xmltv_dir+'/tv_grab_nl_py.conf'

    # cache the detail information.
    program_cache = None
    clean_cache = 1
    clear_cache = 0

    # seed the random generator
    random.seed(time.time())

    for o, a in opts:
        if o in ("-h", "--help"):
            usage()
            sys.exit(1)
        if o == "--quiet":
            quiet = 1
        if o == "--description":
            print "The Netherlands (tv_grab_nl_py $Rev: 104 $)"
            sys.exit(0)
        if o == "--capabilities":
            print "baseline"
            print "cache"
            print "manualconfig"
            print "preferredmethod"
            sys.exit(0)
        if o == '--preferredmethod':
            print 'allatonce'
            sys.exit(0)
        if o == '--desc-length':
            # Use the requested length for programme descriptions.
            desc_len = int(a)
            if not quiet:
                sys.stderr.write('Using description length: %d\n' % desc_len)

    for o, a in opts:
        if o == "--config-file":
            # use the provided name for configuration
            config_file = a
            if not quiet:
                sys.stderr.write('Using config file: %s\n' % config_file)

    for o, a in opts:
        if o == "--configure":
            # check for the ~/.xmltv dir
            if not os.path.exists(xmltv_dir):
                if not quiet:
                    sys.stderr.write('You do not have the ~/.xmltv directory,')
                    sys.stderr.write('I am going to make a shiny new one for you...')
                os.mkdir(xmltv_dir)
            if not quiet:
                sys.stderr.write('Creating config file: %s\n' % config_file)
            get_channels(config_file)
            sys.exit(0)
        if o == "--days":
            # limit days to maximum supported by tvgids.nl
            days = min(int(a), 6)
        if o == "--compat":
            compat = 1
        if o == "--nocattrans":
            nocattrans = 1
        if o == "--fast":
            fast = 1
        if o == "--output":
            output_file = a
            try:
                output = open(output_file, 'w')
                # and redirect output
                if debug:
                    debug_file = open('/tmp/kaas.xml', 'w')
                    blah = redirect.Tee(output, debug_file)
                    sys.stdout = blah
                else:
                    sys.stdout = output
            except:
                if not quiet:
                    sys.stderr.write('Cannot write to outputfile: %s\n' % output_file)
                sys.exit(2)
        if o == "--slowdays":
            # limit slowdays to maximum supported by tvgids.nl
            slowdays = min(int(a), 6)
            # slowdays implies fast == 0
            fast = 0
        if o == "--logos":
            logos = int(a)
        if o == "--clean_cache":
            clean_cache = 1
        if o == "--clear_cache":
            clear_cache = 1
        if o == "--cache":
            program_cache_file = a
        if o == "--max_overlap":
            max_overlap = int(a)
        if o == "--overlap_strategy":
            overlap_strategy = a

    # get configfile if available
    try:
        f = open(config_file, 'r')
    except:
        sys.stderr.write('Config file %s not found.\n' % config_file)
        sys.stderr.write('Re-run me with the --configure flag.\n')
        sys.exit(1)

    # check for cache
    program_cache = ProgramCache(program_cache_file)
    if clean_cache != 0:
        program_cache.clean()
    if clear_cache != 0:
        program_cache.clear()

    # Go!
    channels = {}

    # Read the channel stuff
    for blah in f.readlines():
        blah = blah.lstrip()
        blah = blah.replace('\n', '')
        if blah:
            if blah[0] != '#':
                channel = blah.split()
                channels[channel[0]] = " ".join(channel[1:])

    # channels are now in channels dict keyed on channel id

    # print header stuff
    print '<?xml version="1.0" encoding="ISO-8859-1"?>'
    print '<!DOCTYPE tv SYSTEM "xmltv.dtd">'
    print '<tv generator-info-name="tv_grab_nl_py $Rev: 104 $">'

    # first do the channel info
    for key in channels.keys():
        print ' <channel id="%s%s">' % (key, compat and '.tvgids.nl' or '')
        print ' <display-name lang="nl">%s</display-name>' % channels[key]
        if (logos):
            ikey = int(key)
            if logo_names.has_key(ikey):
                full_logo_url = logo_provider[logo_names[ikey][0]]+logo_names[ikey][1]+'.gif'
                print ' <icon src="%s" />' % full_logo_url
        print ' </channel>'

    num_chans = len(channels.keys())
    channel_cnt = 0
    if program_cache != None:
        program_cache.clean()

    fluffy = channels.keys()
    nfluffy = len(fluffy)
    for id in fluffy:
        channel_cnt += 1
        if not quiet:
            sys.stderr.write('\n\nNow fetching %s(xmltvid=%s%s) (channel %s of %s)\n' % \
                (channels[id], id, (compat and '.tvgids.nl' or ''), channel_cnt, nfluffy))
        info = get_channel_all_days(id, days, quiet)
        blah = parse_programs(info, None, quiet)

        # fetch descriptions
        if not fast:
            get_descriptions(blah, program_cache, nocattrans, quiet, slowdays)

        # Split titles with a colon in them.
        # Note: this only takes place if all days retrieved are also grabbed with
        # details (slowdays=days); otherwise this function might change some titles
        # after a few grabs and thus may result in loss of programmed recordings
        # for these programs.
        if slowdays == days:
            for program in blah:
                title_split(program)

        print xmlefy_programs(blah, id, desc_len, compat, nocattrans)

        # save the cache after each channel fetch
        if program_cache != None:
            program_cache.dump(program_cache_file)

        # be nice to tvgids.nl
        time.sleep(random.randint(nice_time[0], nice_time[1]))

    if program_cache != None:
        program_cache.dump(program_cache_file)

    # print footer stuff
    print "</tv>"

    # close the outputfile if necessary
    if output != None:
        output.close()

    # and return success
    sys.exit(0)

# allow this to be a module
if __name__ == '__main__':
    main()

# vim:tw=0:et:sw=4
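    The overlap handling in parse_programs above can be exercised in isolation. A minimal sketch of the script's 'average' strategy (the dates are invented; names mirror the script):

    ```python
    from datetime import datetime

    def average_boundary(stop, start):
        # Halve the (positive or negative) overlap between two adjacent
        # programmes and return the shared boundary time, as the script's
        # 'average' strategy does with avg = start + dt / 2.
        dt = stop - start
        return start + dt // 2

    # Positive overlap: previous programme ends 12:00, next starts 11:56
    stop = datetime(2010, 10, 25, 12, 0)
    start = datetime(2010, 10, 25, 11, 56)
    boundary = average_boundary(stop, start)  # 11:58, assigned to both sides

    # Overlap in seconds, computed the same way as in the script
    dt = stop - start
    overlap = 24 * 60 * 60 * dt.days + dt.seconds
    ```

    With the default max_overlap of 10 minutes, this 4-minute overlap is within range, so both programmes get the averaged boundary.
    
    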
    Best regards,
    Cedric
    Last edited by cdwijs (2010-11-04 18:44:51)

    Running the script by python2 solves it for me:
    su - mythtv -c "nice -n 19 python2 /usr/bin/tv_grab_nl_py --output ~/listings.xml"
    Best regards,
    Cedric

  • Guide me in making AUR for a python package

    This is the package: http://pypi.python.org/pypi/BitTorrent-bencode/5.0.8
    - This Python package is packaged incorrectly upstream: http://stackoverflow.com/questions/2693 … ode-module
    - It also contains a test directory. Is that a concern?
    This is the AUR PKGBUILD (which is wrong):
    pkgname=python2-bencode
    _realname=BitTorrent-bencode
    pkgver=5.0.8
    pkgrel=1
    pkgdesc="The BitTorrent bencode python module as a lightweight, standalone package."
    url="http://pypi.python.org/pypi/BitTorrent-bencode"
    arch=('i686' 'x86_64')
    license=('GPL')
    depends=('python2')
    makedepends=('python2-distribute')
    source=(http://pypi.python.org/packages/source/B/BitTorrent-bencode/$_realname-$pkgver.tar.gz)
    md5sums=('5ad77003d18fc2e698d8d0d83be78d11')

    build() {
      cd "$srcdir/$_realname-$pkgver"

      # python2 fix
      for file in $(find . -name '*.py' -print); do
        sed -i 's_#!.*/usr/bin/python_#!/usr/bin/python2_' $file
        sed -i 's_#!.*/usr/bin/env.*python_#!/usr/bin/env python2_' $file
      done

      python2 setup.py install --root="$pkgdir" --optimize=1
    }
    Guide me in making a perfect PKGBUILD.
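    For context, bencode (the serialization format this module implements) is simple enough to sketch. This is a rough illustrative encoder, not the package's actual API:

    ```python
    def bencode(obj):
        # Encode ints, strings, lists, and dicts in BitTorrent's bencode
        # format: i<int>e, <len>:<str>, l...e, d...e (dict keys sorted).
        if isinstance(obj, bool):
            raise TypeError('bencode has no boolean type')
        if isinstance(obj, int):
            return 'i%de' % obj
        if isinstance(obj, str):
            return '%d:%s' % (len(obj), obj)
        if isinstance(obj, list):
            return 'l' + ''.join(bencode(x) for x in obj) + 'e'
        if isinstance(obj, dict):
            items = sorted(obj.items())
            return 'd' + ''.join(bencode(k) + bencode(v) for k, v in items) + 'e'
        raise TypeError('unsupported type: %r' % type(obj))
    ```

    For example, bencode({'spam': ['a', 1]}) produces 'd4:spaml1:ai1eee'.
    
    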

    What I do is get the sources and MD5s into the PKGBUILD, then makepkg -o.  I'll manually compile the sources but not install them, and when I have a few one-liners that make the magic happen, I put them into the build() section.  Then I'll rm -rf the $srcdir and actually try it with package() { true; } as the package() function.  If that works, I'll try some install lines (instead of true) and run find against the $pkgdir.  If it all looks good, I'll make sure my dependencies are in check and pack it up.

  • Python numeric goes numpy

    Hello everyone,
    I am using a lot of science packages for Python. The current 'standard' package for numerical computation in Python is 'python-numeric'. Tomorrow (October 25th) is the official release day for the replacement of Numeric, called numpy. It is from the same author, Travis Oliphant, and has a very active user community. A few Python packages that were using Numeric are already enabled to use numpy, e.g. matplotlib and pytables. Some packages need a recompile (e.g. pytables), while the pure Python packages, e.g. matplotlib, can use it at runtime.
    There is currently a 'numpy' package (rc2) in AUR, along with two dependencies in AUR, and even though I already voted for it, I would like to know if there are any immediate plans to include python-numpy into the official arch tree? This would make it a lot easier to build numeric python extensions for arch. I think it is already as important as python-numeric, judging from the traffic on their official mailing list.
    Best regards,
    Niklas.

    numpy works with 2.4 and (as far as I remember from their mailing list) with 2.5. I don't know about 2.3 though. The core of the numpy package (the array protocol) was actually meant to be put into python 2.5, but didn't make it on time.
    Numpy is the successor and a full replacement for both Numeric and numarray. On the C-API side there is a compatibility layer (or something similar), so that existing C extensions written for Numeric can easily switch to numpy. The biggest disadvantage of Numeric, the slow creation of new arrays, has been solved and is supposedly even faster than in numarray.
    A lot more information can be found on http://www.scipy.org.
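    A quick taste of the numpy API being discussed. This is a generic sketch of basic array creation and arithmetic, not tied to any particular version mentioned in the thread:

    ```python
    import numpy as np

    # Array creation, the operation whose speed reportedly improved over Numeric
    a = np.zeros((2, 3))
    b = np.arange(6).reshape(2, 3)

    # Elementwise arithmetic works much as it did in Numeric/numarray
    total = (a + b).sum()
    ```
    
    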

  • [Python] Building Extensions for Python in C

    I'm learning some of the more advanced features of Python (using 2.7 from the repos), and one feature I want to learn is building extensions in C, but not many reliable tutorials exist online.  They are either outdated or irrelevant (serving as ads for books).  The official documentation is very confusing as well; I've read it a few times and am still lost.  Is Python.h already installed with Arch, or is there a specific package needed?  Must I do a custom build of Python?

    Yannick_LM wrote:
    Just to be sure, by "official", you mean this doc:
    http://docs.python.org/extending/extending.html ?
    I personally found it quite clear :)
    You can still fall back to using swig if you don't mind trusting auto-generated code:
    http://www.swig.org/tutorial.html
    Exactly the documentation I was talking about.  I considered it official as it is hosted on their site along with the rest of the Python documentation.  I'll go back and reread it a little more carefully, and maybe give it a few days to "soak in."  Also, regarding SWIG (I haven't looked just yet): is it a recommended option for an extension development system?  Auto-generated code always seems to be hit-or-miss, and I don't want to spend time learning how to use it if it's buggy.
    >> Is Python.h installed already with Arch ?
    Yes:
    /usr/include/python3.1/Python.h
    I am running Python 2.7, but after a quick look, I see it's in the same location.
    >> Must I do a custom build of python?
    I don't think that's necessary.
    Thank goodness.  I was afraid of the time it might have taken to do this.  Thank you for your help so far, Yannick; it really pushed me forward.
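    One way to answer the Python.h question from Python itself, rather than hunting through /usr/include by hand: the stdlib sysconfig module (available from 2.7 onward) reports the interpreter's include directory. The exact path will vary by installation:

    ```python
    import sysconfig

    # Directory that should contain Python.h for the running interpreter,
    # e.g. /usr/include/python2.7 on the setup discussed above.
    include_dir = sysconfig.get_paths()['include']
    print(include_dir)
    ```
    
    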

  • Pip2Arch - PyPi packages to Arch PKGBUILDs

    Pip2Arch is a simple tool to convert PyPI packages (PyPI being the Python equivalent of CPAN) into Arch PKGBUILDs.
    You can get the code here: https://github.com/bluepeppers/pip2arch
    Run
    pip2arch.py --help
    for help, usage should be simple enough.
    Currently does not track dependencies, due to a problem with PyPi(?). Patches very welcome.
    I use pip2arch for Django servers, where I want to use obscure Django packages that are not usually in the AUR, but don't want to write a PKGBUILD for them. Using pip2arch, I just have to pay attention to the dependencies.
    Thanks,
    Laurie
    Last edited by Blue Peppers (2010-11-15 00:59:33)

    Couple of patches
    The 1st patch adds an -m option to allow for makedepends:
    pip2arch.py -m git some_package
    will create a PKGBUILD for some_package with a make dependency on git.
    2nd patch adds search functionality to pip2arch:
    pip2arch.py -s mathematics
    bessy
    bidict
    IntPy
    isodate
    munkres
    NodeBox
    PGAPy
    qombinatorics
    SciMath
    scipy
    tabular
    TANGO Project - ALGENCAN
    makedeps.patch:
    --- pip2arch.py 2010-11-17 02:16:41.380000528 -0600
    +++ pip2arch.py.new 2010-11-17 02:07:33.536667195 -0600
    @@ -13,7 +13,8 @@
    pkgrel=1
    pkgdesc="{pkg.description}"
    url="{pkg.url}"
    -depends=('{pkg.pyversion}' {depends})
    +depends=('{pkg.pyversion}'{depends})
    +makedepends=({makedepends})
    license=('{pkg.license}')
    arch=('any')
    source=('{pkg.download_url}')
    @@ -33,6 +34,7 @@
    logging.info('Creating Server Proxy object')
    client = xmlrpclib.ServerProxy('http://pypi.python.org/pypi')
    depends = []
    + makedepends = []
    def get_package(self, name, outname, version=None):
    if version is None:
    @@ -47,7 +49,7 @@
    raw_urls = self.client.release_urls(name, version)
    logging.info('Got release_urls from PiPy')
    if not len(raw_urls) and len(data):
    - raise LackOfInformation('PyPi did not return the neccisary information to create the PKGBUILD')
    + raise LackOfInformation('PyPi did not return the necessary information to create the PKGBUILD')
    elif len(data) and len(raw_urls):
    urls = {}
    for url in raw_urls:
    @@ -79,7 +81,7 @@
    self.url = data.get('home_page', '')
    self.license = data['license']
    except KeyError:
    - raise pip2archException('PiPy did not return needed information')
    + raise pip2archException('Pypi did not return needed information')
    logging.info('Parsed other data')
    def choose_version(self, versions):
    @@ -95,10 +97,14 @@
    def add_depends(self, depends):
    self.depends += depends
    +
    + def add_makedepends(self, makedepends):
    + self.makedepends += makedepends
    def render(self):
    - depends = '\'' + '\' \''.join(d for d in self.depends) + '\'' if self.depends else ''
    - return BLANK_PKGBUILD.format(pkg=self, date=datetime.date.today(), depends=depends)
    + depends = ' \'' + '\' \''.join(d for d in self.depends) + '\'' if self.depends else ''
    + makedepends = '\'' + '\' \''.join(d for d in self.makedepends) + '\'' if self.makedepends else ''
    + return BLANK_PKGBUILD.format(pkg=self, date=datetime.date.today(), depends=depends, makedepends=makedepends)
    if __name__ == '__main__':
    @@ -113,6 +119,7 @@
    default=open('PKGBUILD', 'w'),
    help='The file to output the generated PKGBUILD to')
    parser.add_argument('-d', '--dependencies', dest='depends', action='append')
    + parser.add_argument('-m', '--make-dependencies', dest='makedepends', action='append')
    parser.add_argument('-n', '--output-package-name', dest='outname', action='store', default=None,
    help='The name of the package that pip2arch will generate')
    @@ -125,6 +132,10 @@
    sys.exit('ERROR: {0}'.format(e))
    if args.depends:
    p.add_depends(args.depends)
    +
    + if args.makedepends:
    + p.add_makedepends(args.makedepends)
    +
    print "Got package information"
    args.outfile.write(p.render())
    - print "Written PKGBUILD"
    \ No newline at end of file
    + print "Written PKGBUILD"
    search.patch:
    --- pip2arch.py.old 2010-11-17 02:17:43.440000529 -0600
    +++ pip2arch.py.new 2010-11-17 02:57:03.863333862 -0600
    @@ -34,7 +34,13 @@
    client = xmlrpclib.ServerProxy('http://pypi.python.org/pypi')
    depends = []
    - def get_package(self, name, outname, version=None):
    + def get_package(self, name, outname, version=None, search=False):
    + if search:
    + results = self.client.search({'description': '%s' % name[1:]})
    + for result in results:
    + print result['name']
    + sys.exit(1)
    +
    if version is None:
    versions = self.client.package_releases(name)
    version = self.choose_version(versions)
    @@ -113,6 +119,7 @@
    default=open('PKGBUILD', 'w'),
    help='The file to output the generated PKGBUILD to')
    parser.add_argument('-d', '--dependencies', dest='depends', action='append')
    + parser.add_argument('-s', '--search', dest='search', action='store_true')
    parser.add_argument('-n', '--output-package-name', dest='outname', action='store', default=None,
    help='The name of the package that pip2arch will generate')
    @@ -120,11 +127,11 @@
    p = Package()
    try:
    - p.get_package(name=args.pkgname, version=args.version, outname=args.outname or args.pkgname)
    + p.get_package(name=args.pkgname, version=args.version, outname=args.outname or args.pkgname, search=args.search)
    except pip2archException as e:
    sys.exit('ERROR: {0}'.format(e))
    if args.depends:
    p.add_depends(args.depends)
    print "Got package information"
    args.outfile.write(p.render())
    - print "Written PKGBUILD"
    \ No newline at end of file
    + print "Written PKGBUILD"
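    The quoting that the makedeps patch adjusts in render() can be seen in isolation. A small sketch of how the template's depends/makedepends lines get filled in (template trimmed down, names invented; not pip2arch's actual code):

    ```python
    # Mimics how pip2arch's PKGBUILD template is filled in after the patch:
    # depends gets a leading space only when non-empty, so the line reads
    # depends=('python2') rather than depends=('python2' ).
    TEMPLATE = """\
    depends=('{pyversion}'{depends})
    makedepends=({makedepends})
    """

    def quote(items):
        # 'git', 'svn' -> "'git' 'svn'"
        return ' '.join("'%s'" % i for i in items)

    def render(pyversion, depends=(), makedepends=()):
        dep = (' ' + quote(depends)) if depends else ''
        return TEMPLATE.format(pyversion=pyversion, depends=dep,
                               makedepends=quote(makedepends))
    ```

    For example, render('python2', ['python2-setuptools'], ['git']) yields a depends line of ('python2' 'python2-setuptools') and makedepends of ('git'), while render('python2') emits an empty makedepends=().
    
    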

  • Python Virtualenvwrapper - Big Python Move

    Hi,
    I've just done a fresh install of Arch64, and I've tried to install python-virtualenvwrapper.
    I've added the following two lines to my ~/.bashrc file:
    export WORKON_HOME=$HOME/.virtualenvs
    source /usr/bin/virtualenvwrapper.sh
    and created the ~/.virtualenvs directory.
    However, when I startup a new bash shell or terminal, I get:
    Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/usr/lib/python2.7/site-packages/virtualenvwrapper/hook_loader.py", line 9, in <module>
    import inspect
    File "/usr/lib/python2.7/inspect.py", line 42, in <module>
    from collections import namedtuple
    ImportError: cannot import name namedtuple
    virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON=/usr/bin/python2 and that PATH is set properly.
    I'm assuming this has something to do with the big Python 2 to Python 3 move happening in Arch atm? I did notice python-virtualenvwrapper on the list at http://wiki.archlinux.org/index.php/Dev … _Todo_List, however, I'm not aware enough of what's happening to know how to fix this unfortunately.
    Have I done something wrong on my setup, and it should be working? Or if it is indeed behaving as intended, is there any way to fix this for now? Or any timeline on when the package might be fixed?
    Thanks,
    Victor

    i installed python-virtualenvwrapper and followed steps 1-4 of the setup detailed in /usr/bin/virtualenvwrapper.sh.
    unfortunately, i can't reproduce your problem.
    that traceback looks odd, because it is showing an error within python itself.
    what happens when you try this:
    $ python2
    Python 2.7 (r27:82500, Oct 6 2010, 12:18:19)
    [GCC 4.5.1] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import inspect
    >>>
    do you get any import errors?
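    A non-interactive version of the same check might look like this (run it with python2): it prints where the modules actually load from, which should expose any stale copy or stray PYTHONPATH entry shadowing the stdlib.

```python
# Print where inspect and collections resolve from; on a healthy install
# both should live under /usr/lib/python2.7. Anything else (an old
# python2.6 path, a PYTHONPATH entry) would explain the ImportError.
import inspect
import collections
import os

print(inspect.__file__)
print(collections.__file__)
print(os.environ.get('PYTHONPATH', '(PYTHONPATH not set)'))
```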

  • Mouse not fully functional

    I'm running arch on an Acer Aspire One.  I just installed X and i3 and they seem to function just fine, but the mouse isn't working.  Arch detects mouse clicks but not mouse movement, so the cursor is stuck in the middle of the screen. I tried reinstalling xf86-input-evdev but it didn't help.

    ewaller wrote:
    Is there a mouse on your system, or are you using a touch pad?
    If you are not using a mouse, you might try connecting a USB mouse as a sanity check.
    If you are using the touch pad, are you using a custom configuration file for it?
    Perhaps you could use wgetpaste to post your Xorg log and give us the link.  Like this:
    ewaller@turing ~ 1116 %wgetpaste ~/.local/share/xorg/Xorg.0.log
    Your paste can be seen here: http://paste.pound-python.org/show/Y1QwWYwO2Xdubgxw7oxw/
    ewaller@turing ~ 1117 %
    Then we can follow the link to see it http://paste.pound-python.org/show/Y1Qw … ubgxw7oxw/
    Arch says it doesn't have this command. What package is it under?

  • Conky "bar" across screen in Openbox.

    Hello, I am trying to configure Conky so it sits at the top of the screen and stretches all the way across. How would I do that?

    My conkyrc:
    alignment top_left
    border_margin 5
    border_width 1
    default_color bbbbbb
    double_buffer yes
    draw_borders no
    draw_outline no
    draw_shades no
    gap_x 0
    gap_y 0
    maximum_width 765
    no_buffers yes
    override_utf8_locale yes
    own_window_colour eeeeee
    own_window_hints undecorated,below,skip_taskbar,skip_pager,sticky
    own_window_transparent yes
    own_window_type override
    own_window yes
    stippled_borders 0
    update_interval 5
    uppercase no
    #use_spacer yes
    use_xft yes
    xftfont Nu:size=9
    TEXT
    ${offset 19}${color eeeeee}${time %I:%M %p}$color ${time %A, %b %d %Y} ${color} | CPU: ${color eeeeee}$cpu %$color | ${color eeeeee}${voffset 1}${membar 6,75}${offset -75}${voffset -1}${color 000044}Mem${color eeeeee}${offset 55}${color} | ${voffset 1}${color eeeeee}${fs_bar 6,75 /home}${offset -75}${color 000044}${voffset -1}Home${color eeeeee}${offset 55}${color} | ${voffset 1}${color eeeeee}${fs_bar 6,75 /}${offset -75}${voffset -1}${color 000044}Root${color eeeeee}${offset 55} ${color}| ${color}Battery: ${color eeeeee}${battery} ${color}| ${color}Temp: ${color eeeeee}${acpitemp}
    ${voffset -8}${font OpenLogos:size=15}A${font}${voffset -3}${offset 1}Email: ${color eeeeee}${execi 300 python ~/scripts/gmail.py} ${color}| Arch:${color eeeeee} ${texeci 3550 perl ~/scripts/conky-updates.pl} ${color}| Weather: ${color eeeeee}${execi 3550 python ~/scripts/conkyForecast.py --location=USPA0796 --datatype=HT -i} ${color}| ${color}$mpd_status ${color eeeeee} ${scroll 75 ${mpd_smart} }
    My tint2 rc (tint2 from the AUR):
    # TINT2 CONFIG FILE
    # BACKGROUND AND BORDER
    rounded = 7
    border_width = 0
    background_color = #000000 0
    border_color = #ffffff 18
    rounded = 5
    border_width = 0
    background_color = #666666 30
    border_color = #ffffff 18
    rounded = 5
    border_width = 0
    background_color = #333333 50
    border_color = #ffffff 70
    # PANEL
    panel_monitor = all
    panel_position = top right
    panel_size = 510 21
    panel_margin = 0 0
    panel_padding = 0 0
    font_shadow = 0
    panel_background_id = 1
    # TASKBAR
    taskbar_mode = single_desktop
    taskbar_padding = 0 0 2
    taskbar_background_id = 0
    # TASK
    task_icon = 1
    task_text = 1
    task_width = 0
    task_centered = 0
    task_padding = 1 1
    task_font = Liberation Sans 8
    task_font_color = #ffffff 70
    task_active_font_color = #ffffff 85
    task_background_id = 2
    task_active_background_id = 3
    # SYSTRAYBAR
    systray_padding = 4 3 4
    systray_background_id = 0
    # CLOCK
    #time1_format = %H:%M
    time1_font = (null)
    #time2_format = %A %d %B
    time2_font = (null)
    clock_font_color = #000000 0
    clock_padding = 2 2
    clock_background_id = 0
    # MOUSE ACTION ON TASK
    mouse_middle = none
    mouse_right = close
    mouse_scroll_up = toggle
    mouse_scroll_down = iconify
    It's all in one row up at the top of the screen. Try it out, then tweak as you see fit.
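    If you want the bar to span the full screen width rather than capping out at 765px, the key settings are the alignment, zero gaps, and a minimum width matching your resolution. A minimal sketch (1024 is an assumed screen width; substitute yours):

```
alignment top_left
gap_x 0
gap_y 0
# minimum_size: first value is width; set it to your screen resolution
minimum_size 1024 16
maximum_width 1024
own_window yes
own_window_type override
```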

  • X is eating RAM

    Hi. I've noticed that after some hours of using KDE 4.5.1, the X process uses a lot of RAM (right now it's at 400 MB).
    I don't know exactly what information you need to help me, so I'll post the detailed memory information about X from KDE's System Monitor (ksysguard). If you need anything else, please tell me.
    Process 20722 - X
    Summary
    The process X (with pid 20722) is using approximately 411.1 MB of memory.
    It is using 409.2 MB privately, and a further 3.9 MB that is, or could be, shared with other programs.
    Dividing up the shared memory between all the processes sharing that memory we get a reduced shared memory usage of 1878.0 KB. Adding that to the private usage, we get the above mentioned total memory footprint of 411.1 MB.
    Library Usage
    The memory usage of a process is found by adding up the memory usage of each of its libraries, plus the process's own heap, stack and any other mappings.
    Private
    more
    400520 KB    [heap]
    15328 KB    /usr/lib/libnvidia-glcore.so.256.53
    824 KB    /usr/bin/Xorg
    708 KB    /usr/lib/xorg/modules/drivers/nvidia_drv.so
    620 KB    /usr/lib/xorg/modules/extensions/libglx.so.256.53
    Shared
    more
    3692 KB    /SYSV00000000 (deleted)
    164 KB    /lib/libc-2.12.1.so
    64 KB    /usr/lib/libpixman-1.so.0.18.4
    36 KB    /usr/lib/libz.so.1.2.5
    28 KB    /lib/libm-2.12.1.so
    Totals
    Private    419060 KB    (= 1780 KB clean + 417280 KB dirty)
    Shared    4028 KB    (= 4016 KB clean + 12 KB dirty)
    Rss    423088 KB    (= Private + Shared)
    Pss    420938 KB    (= Private + Shared/Number of Processes)
    Swap    0 KB
    Full Details
    Information about the complete virtual space for the process is available, with sortable columns. An empty filename means that it is an anonymous mapping.
    Both the MMU page size and the kernel page size are 4 KB.

    Ok.
    ps aux:
    USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    root 1 0.0 0.0 1756 556 ? Ss 18:52 0:00 init [3]
    root 2 0.0 0.0 0 0 ? S 18:52 0:00 [kthreadd]
    root 3 0.0 0.0 0 0 ? S 18:52 0:00 [ksoftirqd/0]
    root 4 0.0 0.0 0 0 ? S 18:52 0:00 [migration/0]
    root 5 0.0 0.0 0 0 ? S 18:52 0:00 [watchdog/0]
    root 6 0.0 0.0 0 0 ? S 18:52 0:00 [migration/1]
    root 7 0.0 0.0 0 0 ? S 18:52 0:00 [ksoftirqd/1]
    root 8 0.0 0.0 0 0 ? S 18:52 0:00 [watchdog/1]
    root 9 0.0 0.0 0 0 ? S 18:52 0:00 [events/0]
    root 10 0.0 0.0 0 0 ? S 18:52 0:00 [events/1]
    root 11 0.0 0.0 0 0 ? S 18:52 0:00 [cpuset]
    root 12 0.0 0.0 0 0 ? S 18:52 0:00 [khelper]
    root 13 0.0 0.0 0 0 ? S 18:52 0:00 [netns]
    root 14 0.0 0.0 0 0 ? S 18:52 0:00 [async/mgr]
    root 15 0.0 0.0 0 0 ? S 18:52 0:00 [pm]
    root 16 0.0 0.0 0 0 ? S 18:52 0:00 [sync_supers]
    root 17 0.0 0.0 0 0 ? S 18:52 0:00 [bdi-default]
    root 18 0.0 0.0 0 0 ? S 18:52 0:00 [kblockd/0]
    root 19 0.0 0.0 0 0 ? S 18:52 0:00 [kblockd/1]
    root 20 0.0 0.0 0 0 ? S 18:52 0:00 [kacpid]
    root 21 0.0 0.0 0 0 ? S 18:52 0:00 [kacpi_notify]
    root 22 0.0 0.0 0 0 ? S 18:52 0:00 [kacpi_hotplug]
    root 23 0.0 0.0 0 0 ? S 18:52 0:00 [kseriod]
    root 24 0.0 0.0 0 0 ? S 18:52 0:00 [khungtaskd]
    root 25 0.0 0.0 0 0 ? S 18:52 0:00 [kswapd0]
    root 26 0.0 0.0 0 0 ? SN 18:52 0:00 [ksmd]
    root 27 0.0 0.0 0 0 ? S 18:52 0:00 [aio/0]
    root 28 0.0 0.0 0 0 ? S 18:52 0:00 [aio/1]
    root 29 0.0 0.0 0 0 ? S 18:52 0:00 [crypto/0]
    root 30 0.0 0.0 0 0 ? S 18:52 0:00 [crypto/1]
    root 466 0.0 0.0 0 0 ? S 18:52 0:00 [ata_aux]
    root 475 0.0 0.0 0 0 ? S 18:52 0:00 [ata_sff/0]
    root 506 0.0 0.0 0 0 ? S 18:52 0:00 [ata_sff/1]
    root 545 0.0 0.0 0 0 ? S 18:52 0:00 [scsi_eh_0]
    root 552 0.0 0.0 0 0 ? S 18:52 0:00 [scsi_eh_1]
    root 562 0.0 0.0 0 0 ? S 18:52 0:00 [scsi_eh_2]
    root 568 0.0 0.0 0 0 ? S 18:52 0:00 [scsi_eh_3]
    root 640 0.0 0.0 0 0 ? S 18:52 0:00 [jbd2/sda1-8]
    root 641 0.0 0.0 0 0 ? S 18:52 0:00 [ext4-dio-unwrit]
    root 642 0.0 0.0 0 0 ? S 18:52 0:00 [ext4-dio-unwrit]
    root 665 0.0 0.0 0 0 ? S 18:52 0:00 [flush-8:0]
    root 676 0.0 0.0 2164 964 ? S<s 18:52 0:00 /sbin/udevd --daemon
    root 954 0.0 0.0 0 0 ? S 18:52 0:00 [kpsmoused]
    root 1001 0.0 0.0 0 0 ? S 18:52 0:00 [khubd]
    root 1055 0.0 0.0 0 0 ? S 18:52 0:00 [scsi_eh_4]
    root 1056 0.0 0.0 0 0 ? S 18:52 0:00 [usb-storage]
    root 1065 0.0 0.0 0 0 ? S 18:52 0:00 [i915]
    root 1067 0.0 0.0 0 0 ? S< 18:52 0:00 [kslowd000]
    root 1068 0.0 0.0 0 0 ? S< 18:52 0:00 [kslowd001]
    root 1070 2.9 0.0 0 0 ? S 18:52 1:33 [hd-audio0]
    root 1193 0.0 0.0 0 0 ? S 18:52 0:00 [usbhid_resumer]
    root 1361 0.0 0.0 5092 424 ? S 18:52 0:00 supervising syslog-ng
    root 1362 0.0 0.1 5272 1724 ? Ss 18:52 0:00 /usr/sbin/syslog-ng
    root 1387 0.0 0.0 1804 596 ? Ss 18:52 0:00 /usr/sbin/crond -S -l info
    dbus 1402 0.0 0.1 2620 1308 ? Ss 18:52 0:00 /usr/bin/dbus-daemon --system
    hal 1410 0.0 0.3 14976 3172 ? Ssl 18:52 0:00 /usr/sbin/hald
    root 1411 0.0 0.1 3520 1164 ? S 18:52 0:00 hald-runner
    root 1443 0.0 0.0 3584 1000 ? S 18:52 0:00 hald-addon-input: Listening on /dev/input/event0 /dev/input/even
    root 1457 0.0 0.0 3584 992 ? S 18:52 0:00 hald-addon-storage: polling /dev/sdc (every 2 sec)
    root 1462 0.0 0.0 3584 996 ? S 18:52 0:00 hald-addon-storage: no polling on /dev/fd0 because it is explici
    hal 1464 0.0 0.0 3248 1008 ? S 18:52 0:00 hald-addon-acpi: listening on acpi kernel interface /proc/acpi/e
    root 1466 0.0 0.1 3584 1216 ? S 18:52 0:00 hald-addon-storage: polling /dev/sr0 (every 2 sec)
    root 1475 0.0 0.0 3772 636 ? Ss 18:52 0:00 /usr/bin/kdm
    root 1479 0.0 0.0 1756 560 tty1 Ss+ 18:52 0:00 /sbin/agetty -8 38400 tty1 linux
    root 1480 0.0 0.0 1756 568 tty2 Ss+ 18:52 0:00 /sbin/agetty -8 38400 tty2 linux
    root 1481 0.0 0.0 1756 560 tty3 Ss+ 18:52 0:00 /sbin/agetty -8 38400 tty3 linux
    root 1482 0.0 0.0 1756 564 tty4 Ss+ 18:52 0:00 /sbin/agetty -8 38400 tty4 linux
    root 1483 0.0 0.0 1756 564 tty5 Ss+ 18:52 0:00 /sbin/agetty -8 38400 tty5 linux
    root 1484 0.0 0.0 1756 568 tty6 Ss+ 18:52 0:00 /sbin/agetty -8 38400 tty6 linux
    root 1485 8.2 33.7 359228 347180 tty7 Ss+ 18:52 4:21 /usr/bin/X :0 vt7 -nolisten tcp -auth /var/run/xauth/A:0-vsCcfb
    root 1500 0.0 0.0 1948 544 ? Ss 18:53 0:00 /sbin/dhcpcd -t 30 -h arch-live eth0
    root 1502 0.0 0.0 2184 916 ? S< 18:53 0:00 /sbin/udevd --daemon
    root 1506 0.0 0.1 4116 1656 ? S 18:53 0:00 -:0
    root 1514 0.0 0.0 3176 504 ? S 18:53 0:00 dbus-launch --autolaunch 101e08cc1c3aaf54927f3a504a3b4f56 --bina
    root 1515 0.0 0.0 2356 856 ? Ss 18:53 0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --se
    root 1544 0.0 0.2 18344 2448 ? Sl 18:53 0:00 /usr/sbin/console-kit-daemon --no-daemon
    arch 1616 0.0 0.1 4740 1452 ? Ss 18:53 0:00 /bin/sh /usr/bin/startkde
    arch 1645 0.0 0.0 4464 480 ? Ss 18:53 0:00 /usr/bin/gpg-agent --daemon --pinentry-program /usr/bin/pinentry
    arch 1648 0.0 0.0 3544 420 ? Ss 18:53 0:00 /usr/bin/ssh-agent -s
    arch 1659 0.0 0.0 3176 500 ? S 18:53 0:00 dbus-launch --sh-syntax --exit-with-session
    arch 1660 0.0 0.1 2856 1648 ? Ss 18:53 0:01 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --se
    root 1667 0.0 0.0 1600 56 ? S 18:53 0:00 /usr/lib/kde4/libexec/start_kdeinit +kcminit_startup
    arch 1668 0.0 3.1 93304 32704 ? Ss 18:53 0:00 kdeinit4: kdeinit4 Running...
    arch 1669 0.0 2.6 96204 27580 ? S 18:53 0:00 kdeinit4: klauncher [kdeinit] --fd=9
    arch 1671 0.0 3.6 153128 37472 ? Sl 18:53 0:01 kdeinit4: kded4 [kdeinit]
    arch 1678 0.0 3.2 130088 33180 ? S 18:53 0:00 kdeinit4: kglobalaccel [kdeinit]
    root 1686 0.0 0.0 2160 784 ? S< 18:53 0:00 /sbin/udevd --daemon
    arch 1688 0.0 0.0 1736 232 ? S 18:53 0:00 kwrapper4 ksmserver
    arch 1689 0.0 3.2 139260 33212 ? Sl 18:53 0:00 kdeinit4: ksmserver [kdeinit]
    arch 1691 23.6 7.8 274128 80732 ? Rl 18:53 12:26 kwin -session 10f3deccde000127156781200000017050000_1284967326_3
    root 1693 0.0 0.0 2796 732 ? Ss 18:53 0:00 /sbin/mount.ntfs-3g /dev/sdb1 /media/disk -o rw,nosuid,nodev,uhe
    root 1695 0.0 0.0 0 0 ? S 18:53 0:00 [jbd2/sda2-8]
    root 1696 0.0 0.0 0 0 ? S 18:53 0:00 [ext4-dio-unwrit]
    root 1697 0.0 0.0 0 0 ? S 18:53 0:00 [ext4-dio-unwrit]
    arch 1702 1.8 4.1 160300 42236 ? SLl 18:53 0:59 /usr/bin/knotify4
    arch 1706 0.0 1.1 75084 11960 ? S 18:53 0:00 /usr/bin/kuiserver
    arch 1711 0.0 0.3 36488 4044 ? Sl 18:53 0:00 /usr/bin/akonadi_control
    arch 1713 0.0 0.6 136752 6504 ? Sl 18:53 0:00 akonadiserver
    arch 1715 0.0 1.8 193724 18792 ? Sl 18:53 0:01 /usr/sbin/mysqld --defaults-file=/home/arch/.local/share/akonadi
    arch 1750 0.0 1.5 78724 15720 ? S 18:53 0:00 /usr/bin/akonadi_contacts_resource --identifier akonadi_contacts
    arch 1751 0.0 1.4 78288 15092 ? S 18:53 0:00 /usr/bin/akonadi_contacts_resource --identifier akonadi_contacts
    arch 1752 0.0 1.5 80292 15828 ? S 18:53 0:00 /usr/bin/akonadi_ical_resource --identifier akonadi_ical_resourc
    arch 1753 0.0 1.5 80292 15832 ? S 18:53 0:00 /usr/bin/akonadi_ical_resource --identifier akonadi_ical_resourc
    arch 1754 0.0 1.5 78772 15764 ? S 18:53 0:00 /usr/bin/akonadi_maildir_resource --identifier akonadi_maildir_r
    arch 1755 0.0 1.5 79168 16148 ? S 18:53 0:00 /usr/bin/akonadi_maildispatcher_agent --identifier akonadi_maild
    arch 1756 0.0 1.4 85996 15084 ? Sl 18:53 0:00 /usr/bin/akonadi_nepomuk_contact_feeder --identifier akonadi_nep
    arch 1757 0.0 1.5 78644 15584 ? S 18:53 0:00 /usr/bin/akonadi_vcard_resource --identifier akonadi_vcard_resou
    arch 1770 0.0 0.6 34140 6756 ? S 18:53 0:00 /usr/bin/nepomukserver
    arch 1773 0.0 3.1 129240 31988 ? S 18:53 0:00 kdeinit4: kaccess [kdeinit]
    arch 1792 0.0 0.7 32800 7300 ? S 18:53 0:00 /usr/bin/kwrited
    arch 1795 0.0 4.9 239732 50600 ? S 18:53 0:02 kdeinit4: krunner [kdeinit]
    arch 1802 0.0 3.8 215960 39712 ? S 18:53 0:00 kdeinit4: kmix [kdeinit] -session 10f3deccde00012715678170000001
    arch 1804 0.3 2.0 92628 20912 ? Sl 18:53 0:10 /usr/bin/yakuake -session 10f3deccde000128458099300000017460017_
    arch 1805 0.0 3.6 127944 37880 ? S 18:53 0:00 /usr/bin/colibri -session 10f3deccde000128466712900000017050014_
    arch 1808 0.0 0.1 4928 1808 pts/1 Ss 18:53 0:00 /bin/bash
    arch 1812 0.0 3.5 113820 36732 ? S 18:53 0:00 python /usr/bin/printer-applet
    arch 1816 0.0 3.2 129576 33088 ? S 18:53 0:00 kdeinit4: klipper [kdeinit]
    arch 1817 0.0 2.7 95416 27772 ? S 18:53 0:00 kdeinit4: kio_http_cache_cleaner [kdeinit]
    arch 1846 0.0 0.1 4928 1808 pts/2 Ss+ 18:56 0:00 /bin/bash
    arch 2727 0.0 0.3 6472 3164 ? S 18:57 0:00 /usr/lib/telepathy/mission-control-5
    arch 2731 0.0 0.3 32484 3184 ? SLl 18:57 0:00 /usr/bin/gnome-keyring-daemon --start --foreground --components=
    arch 2737 0.0 0.2 6576 2088 ? S 18:57 0:00 /usr/lib/gvfs/gvfsd
    arch 2746 0.0 0.1 29996 1940 ? Ssl 18:57 0:00 /usr/lib/gvfs//gvfs-fuse-daemon /home/arch/.gvfs
    arch 3156 14.1 8.5 322720 87448 ? Sl 19:32 1:52 kdeinit4: plasma-desktop [kdeinit] --nocrashhandler
    arch 3160 0.0 0.0 2136 888 ? S 19:32 0:00 ksysguardd
    arch 3184 0.0 2.5 94044 26560 ? S 19:40 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3185 0.0 2.5 94044 26560 ? S 19:40 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3186 0.0 2.5 94044 26560 ? S 19:40 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3187 0.0 2.5 94044 26560 ? S 19:40 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3199 0.0 2.5 94044 26536 ? S 19:42 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3200 0.0 2.5 94044 26536 ? S 19:42 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3201 0.0 2.5 94044 26536 ? S 19:42 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3202 0.0 2.5 94044 26536 ? S 19:42 0:00 kdeinit4: kio_file [kdeinit] file local:/tmp/ksocket-arch/klaunc
    arch 3206 2.8 4.1 268456 42528 ? Sl 19:42 0:05 /usr/lib/chromium/chromium
    arch 3207 0.0 0.2 67976 2892 ? S 19:42 0:00 /usr/lib/chromium/chromium
    arch 3209 0.0 1.1 71920 11712 ? S 19:42 0:00 /usr/lib/chromium/chromium --type=zygote
    arch 3225 0.0 1.3 115504 13416 ? Sl 19:42 0:00 /usr/lib/chromium/chromium --type=extension --lang=es --force-fi
    arch 3233 0.1 1.6 118764 17440 ? Sl 19:42 0:00 /usr/lib/chromium/chromium --type=extension --lang=es --force-fi
    arch 3240 0.3 2.4 123816 24776 ? Sl 19:42 0:00 /usr/lib/chromium/chromium --type=renderer --lang=es --force-fie
    arch 3251 4.0 6.2 252972 64236 ? Sl 19:43 0:07 /usr/bin/systemsettings -caption Preferencias del sistema -icon
    arch 3264 4.3 5.3 413132 55176 ? Sl 19:44 0:02 /usr/bin/firefox
    arch 3266 0.0 0.3 7064 3356 ? S 19:44 0:00 /usr/lib/GConf/gconfd-2
    arch 3322 0.1 2.8 95596 29028 ? S 19:45 0:00 kdeinit4: kio_http [kdeinit] http local:/tmp/ksocket-arch/klaunc
    arch 3327 0.0 0.1 4108 1036 pts/1 R+ 19:46 0:00 ps aux
    free:
    total used free shared buffers cached
    Mem: 1027228 992928 34300 0 48760 219004
    -/+ buffers/cache: 725164 302064
    Swap: 0 0 0
    I had to wait until my X process reached 300 MB or more. It starts on 3x MB.
    Last edited by soaliar (2010-09-20 23:41:49)
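    To track that growth without ksysguard, one option is to read the server's resident set straight from /proc (Linux-only; with the ps output above, the X server's PID would be 1485):

```python
import os

def rss_kb(pid):
    """Return VmRSS in KB for a process, read from /proc/<pid>/status."""
    with open('/proc/%d/status' % pid) as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
    return None

# Sample our own process as a demonstration; replace os.getpid()
# with the X server's PID to watch it grow over time.
print(rss_kb(os.getpid()))
```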

  • [SOLVED] fatal: repository '' does not exist

    I've written a nice little Python library for ADB and Fastboot and figured I would use the opportunity to create my first AUR package; however, I'm having some (hopefully) minor issues.
    # Maintainer: Edvard Holst <edvard.holst at gmail>
    pkgname=python2-pyand-git
    pkgver=0.9.1.2
    pkgrel=3
    pkgdesc="A python wrapper library for ADB and Fastboot"
    arch=('any')
    url="https://github.com/Zyg0te/pyand"
    license=('MIT')
    depends=('python2' 'python-setuptools')
    makedepends=('git')
    source=('git+https://github.com/Zyg0te/pyand.git')
    md5sums=('SKIP')
    package() {
    msg "Connecting to github server...."
    if [[ -d $_gitname ]] ; then
    ( cd "$_gitname" && git pull origin )
    msg "The local files are updated."
    else
    git clone "$_gitroot" --depth=1
    fi
    msg "GIT checkout done or server timeout"
    cd "$_gitname/../../"
    sudo easy_install-2.7 pyand
    mkdir -p "$pkgdir/usr/share/licenses/$pkgname/"
    cd "src/$_gitname/"
    cp LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
    }
    With this, I get the following error when installing:
    ==> Building and installing package
    ==> Making package: python2-pyand-git 0.9.1.2-3 (Wed Apr 2 14:34:27 CEST 2014)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving sources...
    -> Cloning pyand git repo...
    Cloning into bare repository '/tmp/yaourt-tmp-myuser/aur-python2-pyand-git/pyand'...
    remote: Reusing existing pack: 96, done.
    remote: Counting objects: 46, done.
    remote: Compressing objects: 100% (46/46), done.
    remote: Total 142 (delta 22), reused 0 (delta 0)
    Receiving objects: 100% (142/142), 28.61 KiB | 0 bytes/s, done.
    Resolving deltas: 100% (65/65), done.
    Checking connectivity... done.
    ==> Validating source files with md5sums...
    pyand ... Skipped
    ==> Extracting sources...
    -> Creating working copy of pyand git repo...
    Cloning into 'pyand'...
    done.
    ==> Entering fakeroot environment...
    ==> Starting package()...
    ==> Connecting to github server....
    fatal: repository '' does not exist
    ==> ERROR: A failure occurred in package().
    Aborting...
    ==> ERROR: Makepkg was unable to build python2-pyand-git.
    Any ideas? And of course, any other feedback, suggestions, etc, is most welcome.
    Last edited by Zygote (2014-04-02 15:35:19)

    Further improvements can still be made. For example, you should still double quote all instances of $pkgdir and $srcdir.
    cd $srcdir/$pkgname/../
    Is completely redundant -- you always start in $srcdir, and that is a really odd way of getting to it even if you didn't.
    You should write a pkgver function to automatically update the pkgver value each time you run makepkg (explanation is in the wiki link I posted a few posts ago)
    I'll see if your package builds fine in a clean chroot when I get home. In the meantime, you could check your package and PKGBUILD with namcap to see if there are any other improvements you can make.
    The first thing I found was your pkgver doesn't work
    ==> Starting pkgver()...
    fatal: No names found, cannot describe anything.
    The repository doesn't have any tags, so git describe isn't very useful here. You would be better off using the final git example on the wiki page.
    The next problem I found, is that you haven't included the package that provides easy_install-2.7 in the makedepends array. You have, instead, declared the wrong package in the depends array. Note that packages with "python-" at the start of their name are actually python3 packages, whereas your package needs python2.
    The next thing I noticed is that you've included pacman as a dependency. You don't need to do that, you can assume that anyone installing the package has pacman installed, this is the Arch User Repository, after all.
    Finally, the completed package doesn't have any binary files in it, so nothing is compiled for a specific architecture. In this case, you can drop 'i686' and 'x86_64' from the arch array, and replace them with 'any'.
    A further point of interest: what I assume to be the main python file, ADB.py, has a shebang calling /usr/bin/python. This may not be a problem if it isn't meant to be called directly, but it will cause problems if it is and expects to get python 2.x. On Arch, /usr/bin/python is a symbolic link to /usr/bin/python3. Ideally, this shebang should be replaced with '#!/usr/bin/python2', or even better '#!/usr/bin/env python2'. This should be done upstream, but it can also be done in the PKGBUILD, in a prepare function, using sed.
    If the script can be called with either python2 or python3, then the shebang doesn't need to change, and actually complies with PEP394.
    Last edited by WorMzy (2014-04-02 20:25:27)
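    The sed-based shebang fix described above could look roughly like this in a prepare() function (the directory and file name are taken from the pyand repo as described in the thread; treat this as a sketch, not the final PKGBUILD):

```shell
# Hypothetical prepare() for the pyand PKGBUILD: point the ADB.py
# shebang at python2 before packaging.
prepare() {
  cd "$srcdir/pyand"
  # Rewrite only line 1, and only if it is exactly '#!/usr/bin/python'.
  sed -i '1s|^#!/usr/bin/python$|#!/usr/bin/env python2|' ADB.py
}
```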

  • Patch to update Jython to 2.5.1

    Installed Jython today and noticed it was woefully out-of-date.  It is in community and without a maintainer.  I have here a PKGBUILD that seems to work fine in my limited testing:
    # $Id: PKGBUILD 82 2009-07-17 19:56:55Z aaron $
    # Maintainer: Geoffroy Carrier <[email protected]>
    # Contributor: Richard Murri <[email protected]>
    pkgname=jython
    pkgver=2.5.1
    pkgrel=1
    pkgdesc="An implementation of the Python language written in Java"
    arch=('i686' 'x86_64')
    url="http://www.jython.org/"
    license=('python' 'apache' 'unknown')
    depends=('java-runtime')
    source=(http://downloads.sourceforge.net/$pkgname/${pkgname}_installer-${pkgver}.jar)
    build() {
    cd "$srcdir"
    java -jar ${pkgname}_installer-${pkgver}.jar -s -t standard -d "$pkgdir"/opt/jython
    sed -i s*"${pkgdir}"**g "${pkgdir}"/opt/jython/jython
    sed -i 's#/opt/java/jre/bin/java#java#g' "${pkgdir}"/opt/jython/jython
    }
    md5sums=('2ee978eff4306b23753b3fe9d7af5b37')
    I wasn't quite sure what I should do with this, since [community] things aren't supposed to go in flyspray, right?
    If this package gets bumped out of [community] and back into the AUR, I'd be glad to maintain it.
    Last edited by Xiong Chiamiov (2009-10-01 20:35:08)

    Xyne wrote:I think that there are still some maintained packages that haven't yet been adopted on the web interface by their maintainers. I'd recommend posting a message to aur-general about this to find out if it's actually maintained or not.
    Mail sent.
    Is there anything that actually gets compiled during the installation? If there isn't, maybe arch=('i686' 'x86_64') could be replaced with arch=('any').
    It just unarchives the .jar file, and then seds a bit.  I think 'any' would work.
    Thanks for posting the patch. It might be simpler though to just post the updated PKGBUILD considering how small it is.
    Right you are.  I've updated the original post.
    If someone knows of a more robust way to test Jython, other than just running some code through it, I'd appreciate knowing it.

  • Gpacsearch - simple gtk program to search your packages

    It's my first program written in Python, and I've used Arch for 4 days now, so it's probably full of errors.
    But I thought some people might find it useful. (I will keep on fiddling with it for a while.)
    http://download.qballcow.nl/gpacsearch.tar.gz
    extract it and run ./gpacsearch (from inside the gpacsearch directory).
    it depends on pygtk.

    of course there are a few variables to consider when running something like this. for one, you might end up building an older version from ABS if the tree hasn't been updated to the newer version yet, as it's only synced once(?) a day; i've also found that sometimes i'll install something from, say, [staging] or [testing] and it won't be in the ABS tree yet, so when i try to recompile it from ABS i won't be able to find it. just something to think about if you're really particular about having the exact version rebuilt as what's currently installed. i could be wrong on some aspects of this, but i build basically all my packages from PKGBUILDs (usually from ABS), so i've come across this a few times.
    i think the script will be good for most packages, but i don't know about <all> packages. there's a bit of room for error when doing an entire system, especially since a list scraped from pacman will be in alphabetical order(?) instead of dependency order, and i like to (re)build dependencies first. so you might need something a bit more complicated.
    regardless, i still like the idea, and like i said i think it will work well for small batches of programs. maybe add a case on $1 for command-line-defined packages so you won't always have to edit the script, or have it read from a txt file; should be pretty simple to do. glad you followed up on this though, and good job :}
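    the "read from a txt file" idea could be as simple as a loop like this (sketch only: the file name and the echo placeholder are mine, and the echo would be replaced by the actual ABS checkout and makepkg steps):

```shell
# Sketch: iterate over package names listed one per line in a file,
# skipping blank lines. Replace the echo with the real rebuild steps.
rebuild_from_list() {
  while read -r pkg; do
    [ -n "$pkg" ] && echo "would rebuild: $pkg"
  done < "$1"
}
```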

  • Script creates+deletes Google Calendar-events. (free SMS-notification)

    I posted this in another thread too, but since a discussion would be off-topic there (http://bbs.archlinux.org/viewtopic.php?id=64933):
    Here is a Python script that creates a Google Calendar event in the main calendar, with the contents of a specified file as its subject.
    Feel free to post modifications/optimizations! I use it this way to get notified of unanswered conversations in finch (as in the thread above); it was easier to work with a file than with specifying arguments.
    PS. The script uses python-mechanize, which is in community.
    #!/usr/bin/python
    # From: gondil
    # To: the Arch Linux forums ;)
    # Version 2 (deletion added)
    import mechanize
    import sys
    import urllib2
    import httplib
    from urllib2 import HTTPError
    # Create a mechanize browser
    mech = mechanize.Browser()
    mech.set_handle_robots(False)
    # With Google's desired User-Agent
    mech.addheaders = [("User-agent", "Java/1.5.0_06")]
    # See the G-API: we need some token here, which is displayed by /~pkelchte/ (my personal php site on ESAT that only does that)
    mech.open("https://www.google.com/accounts/AuthSubRequest?scope=http://www.google.com/calendar/feeds/&session=1&secure=0&next=http://homes.esat.kuleuven.be/~pkelchte/index.php")
    # But before we get the token, we need to submit our user-data
    mech.select_form(nr=0)
    mech["Email"] = "" # REPLACE THIS WITH YOUR GOOGLE ACCOUNT
    mech["Passwd"] = "" # AND THIS WITH YOUR PASSWORD
    try:
    mech.submit()
    except:
    print "Did not submit credentials"
    sys.exit("Error in submitting credentials")
    # Because that's not enough, we need to confirm that we want to go trough with this and submit another form...
    mech.select_form(nr=0)
    mech.submit(id='allow')
    # By now, we have one token, but it's not the final token! We need *another* token, called the AuthSubSessionToken, sigh...
    mech.addheaders = [("Authorization", "AuthSub token=\"" + mech.response().read() + "\""), ("User-agent", "Java/1.5.0_06")]
    mech.open("http://www.google.com/accounts/AuthSubSessionToken")
    # A bunch of tokens later...
    # Let's use urllib2 to do this POST request (some xml-y thing is the string you would manually type in the "New event" box on Google Calendar)
    # Encore some headers
    authsub = "AuthSub token=\"" + mech.response().read().replace("\n","").split("=")[1] + "\""
    headers = {"Authorization": authsub, "User-Agent": "Java/1.5.0_06", "Content-Type": "application/atom+xml", "GData-Version": "2"}
    # Read the file that we're interested in! Damn, it's so interesting!!
    file = open('/home/gondil/public_html/fifo', 'r') # CHANGE THIS FILE WITH YOUR THING
    message = file.read()
    file.close()
    # The actual event
    event = """
    <entry xmlns='http://www.w3.org/2005/Atom' xmlns:gCal='http://schemas.google.com/gCal/2005'>
    <content type="html">""" + message + """</content>
    <gCal:quickadd value="true"/>
    </entry>
    """
    req = urllib2.Request("http://www.google.com/calendar/feeds/default/private/full", event, headers)
    calresponse = urllib2.urlopen(req)
    # Normally, we stop here... but since Google likes traffic, we need to go to a slightly different url, with the same headers and POST data
    req2 = urllib2.Request(calresponse.geturl(), event, headers)
    try:
    calresponse2 = urllib2.urlopen(req2)
    # You can check but normally this is a 201 CREATED response or something, I don't really care... It's my code, right :P
    except HTTPError, e :
    # I placed this sleep to give the event at least a 20 second lifetime (poor, poor event...)
    import time
    time.sleep(20)
    # Retrieve the event's edit url
    eventurl = e.read().split("<link rel='edit' type='application/atom+xml' href='http://www.google.com")[1].split("'/>")[0]
    # The Deletion has to be done via httplib, because Google wants a DELETE request (urllib2 only handles GET and POST)
    conn = httplib.HTTPConnection("www.google.com")
    conn.request("DELETE", eventurl, "", headers)
    calresponse3 = conn.getresponse()
    # Again, they like to have a little more traffic, we need to append a session ID to that last url (we can find it in the redirect page)
    eventurl2 = calresponse3.read().split("HREF=\"")[1].split("\"")[0]
    # Ooh and here there is need of a new header, no questions please
    headers2 = {"Authorization": authsub, "User-Agent": "Java/1.5.0_06", "Content-Type": "application/atom+xml", "GData-Version": "2", "If-Match": "*"}
    conn.request("DELETE", eventurl2, "", headers2)
    calresponse4 = conn.getresponse()
    # No errors? Ok we can close the connection
    conn.close()
    index.php looks like this (again I don't guarantee that I'll keep my index.php page like that for eternity, but I'll notify you when the url changes):
    <?php
    print $_POST['token'];
    print $_GET['token'];
    ?>
    Last edited by gondil (2009-04-23 22:45:38)

    I posted this in another thread too, but since a discussion would be off-topic there (http://bbs.archlinux.org/viewtopic.php?id=64933):
    Here is a Python script that creates a Google Calendar event in the main calendar, with the contents of a specified file as the subject.
    Feel free to post modifications/optimizations! I use it this way to get notified of unanswered conversations in finch (as in the thread above); it was easier to work with a file than with command-line arguments.
    PS. The script uses python-mechanize, which is in community.
    #!/usr/bin/python
    # From: gondil
    # To: the Arch Linux forums ;)
    # Version 2 (deletion added)
    import mechanize
    import sys
    import urllib2
    import httplib
    from urllib2 import HTTPError
    # Create a mechanize browser
    mech = mechanize.Browser()
    mech.set_handle_robots(False)
    # With Google's desired User-Agent
    mech.addheaders = [("User-agent", "Java/1.5.0_06")]
    # See the G-API: We need a token here, which is displayed by /~pkelchte/ (my personal php site on ESAT that only does that)
    mech.open("https://www.google.com/accounts/AuthSubRequest?scope=http://www.google.com/calendar/feeds/&session=1&secure=0&next=http://homes.esat.kuleuven.be/~pkelchte/index.php")
    # But before we get the token, we need to submit our user-data
    mech.select_form(nr=0)
    mech["Email"] = "" # REPLACE THIS WITH YOUR GOOGLE ACCOUNT
    mech["Passwd"] = "" # AND THIS WITH YOUR PASSWORD
    try:
        mech.submit()
    except:
        print "Did not submit credentials"
        sys.exit("Error in submitting credentials")
    # Because that's not enough, we need to confirm that we want to go through with this and submit another form...
    mech.select_form(nr=0)
    mech.submit(id='allow')
    # By now, we have one token, but it's not the final token! We need *another* token, called the AuthSubSessionToken, sigh...
    mech.addheaders = [("Authorization", "AuthSub token=\"" + mech.response().read() + "\""), ("User-agent", "Java/1.5.0_06")]
    mech.open("http://www.google.com/accounts/AuthSubSessionToken")
    # A bunch of tokens later...
    # Let's use urllib2 to do this POST request (some xml-y thing is the string you would manually type in the "New event" box on Google Calendar)
    # Encore some headers
    authsub = "AuthSub token=\"" + mech.response().read().replace("\n","").split("=")[1] + "\""
    headers = {"Authorization": authsub, "User-Agent": "Java/1.5.0_06", "Content-Type": "application/atom+xml", "GData-Version": "2"}
    # Read the file that we're interested in! Damn, it's so interesting!!
    file = open('/home/gondil/public_html/fifo', 'r') # CHANGE THIS PATH TO YOUR FILE
    message = file.read()
    file.close()
    # The actual event
    event = """
    <entry xmlns='http://www.w3.org/2005/Atom' xmlns:gCal='http://schemas.google.com/gCal/2005'>
    <content type="html">""" + message + """</content>
    <gCal:quickadd value="true"/>
    </entry>"""
    req = urllib2.Request("http://www.google.com/calendar/feeds/default/private/full", event, headers)
    calresponse = urllib2.urlopen(req)
    # Normally, we stop here... but since Google likes traffic, we need to go to a slightly different url, with the same headers and POST data
    req2 = urllib2.Request(calresponse.geturl(), event, headers)
    try:
        calresponse2 = urllib2.urlopen(req2)
        # You can check, but normally this is a 201 CREATED response or something; I don't really care... It's my code, right :P
    except HTTPError, e:
        # I placed this sleep to give the event at least a 20 second lifetime (poor, poor event...)
        import time
        time.sleep(20)
        # Retrieve the event's edit url
        eventurl = e.read().split("<link rel='edit' type='application/atom+xml' href='http://www.google.com")[1].split("'/>")[0]
        # The deletion has to be done via httplib, because Google wants a DELETE request (urllib2 only handles GET and POST)
        conn = httplib.HTTPConnection("www.google.com")
        conn.request("DELETE", eventurl, "", headers)
        calresponse3 = conn.getresponse()
        # Again, they like to have a little more traffic; we need to append a session ID to that last url (we can find it in the redirect page)
        eventurl2 = calresponse3.read().split("HREF=\"")[1].split("\"")[0]
        # Ooh, and here we need a new header, no questions please
        headers2 = {"Authorization": authsub, "User-Agent": "Java/1.5.0_06", "Content-Type": "application/atom+xml", "GData-Version": "2", "If-Match": "*"}
        conn.request("DELETE", eventurl2, "", headers2)
        calresponse4 = conn.getresponse()
        # No errors? Ok, we can close the connection
        conn.close()
    index.php looks like this (again I don't guarantee that I'll keep my index.php page like that for eternity, but I'll notify you when the url changes):
    <?php
    print $_POST['token'];
    print $_GET['token'];
    ?>
    Last edited by gondil (2009-04-23 22:45:38)
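
    The token juggling and the Atom payload above are plain string work, so they are easy to check offline once pulled out as functions. A minimal sketch (Python 3; the sample response body and token value are made up, the real one comes from AuthSubSessionToken):

    ```python
    # Hypothetical response body in the shape the AuthSubSessionToken endpoint returns
    sample_response = "Token=CXunfu4d\n"

    def extract_token(body):
        # Same trick as the script above: strip newlines, keep what follows '='
        return body.replace("\n", "").split("=", 1)[1]

    def build_event(message):
        # The Atom entry the script POSTs for Calendar's "quick add"
        return ("<entry xmlns='http://www.w3.org/2005/Atom' "
                "xmlns:gCal='http://schemas.google.com/gCal/2005'>"
                "<content type=\"html\">" + message + "</content>"
                "<gCal:quickadd value=\"true\"/></entry>")

    print(extract_token(sample_response))  # CXunfu4d
    print(build_event("lunch tomorrow at noon"))
    ```

    The real script then sends the result of build_event(...) as the POST body, with the AuthSub Authorization header attached.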

  • [SOLVED]Conky Gmail Script Incorrect

    The Python Gmail script on the Arch wiki no longer works with python3 or python2 for me. This error is shown in Wing:
    builtins.ValueError: invalid literal for int() with base 10:
    I'm not sure how to fix the incorrect int value, but I would love to see it fixed and posted to the arch wiki again.
    # Enter your username and password below within double quotes
    # eg. username="username" and password="password"
    import os
    username="****"
    password="****"
    com="wget -q -O - https://"+username+":"+password+"@mail.google.com/mail/feed/atom --no-check-certificate"
    temp=os.popen(com)
    msg=temp.read()
    index=msg.find("<fullcount>")
    index2=msg.find("</fullcount>")
    fc=int(msg[index+11:index2])
    if fc==0:
        print("0 new")
    else:
        print(str(fc)+" new")
    Thanks
    Last edited by duke11235 (2011-11-21 04:55:28)
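
    For what it's worth, the ValueError happens when the slice between <fullcount> and </fullcount> isn't a number, i.e. when wget hands back an error page or nothing at all. A more defensive sketch (Python 3; the sample feed text below is made up, the real input would be the Gmail Atom feed) parses the XML instead of slicing the string:

    ```python
    import xml.etree.ElementTree as ET

    # A made-up snippet in the shape of Gmail's Atom feed
    sample_feed = """<?xml version="1.0"?>
    <feed xmlns="http://purl.org/atom/ns#" version="0.3">
      <title>Gmail - Inbox</title>
      <fullcount>3</fullcount>
    </feed>"""

    def unread_count(feed_text):
        root = ET.fromstring(feed_text)
        node = root.find('{http://purl.org/atom/ns#}fullcount')
        # If wget returned an error page instead of the feed, there is no
        # <fullcount> element; report that instead of crashing in int()
        if node is None or node.text is None or not node.text.strip().isdigit():
            return None
        return int(node.text)

    fc = unread_count(sample_feed)
    print("{0} new".format(fc) if fc is not None else "?? new")  # 3 new
    ```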

    Your problem may be with wget.  Try curl:
    com = 'curl -s -u "{}:{}" https://mail.google.com/mail/feed/atom'.format( username, password )
    The URL you are using is for the Gmail Atom feed, and it contains a ':' in the embedded credentials. The python urllib.request module that lunar used expects a port number to follow the ':'. That is why lunar's script is failing for you.
    The script I use does not use the RSS feed.  It uses the python IMAP library and connects to imap.gmail.com at port 993.
    #!/usr/bin/env python
    # -*- coding: UTF-8 -*-
    import sys, imaplib
    port = 993
    server = 'imap.gmail.com'
    username = '...'
    passwd = '...'
    imap_server = imaplib.IMAP4_SSL(server, port)
    try:
        imap_server.login(username, passwd)
    except:
        print('?? new')
        sys.exit(1)
    typ, data = imap_server.select('Inbox', True)
    if typ == 'OK':
        total = int(data[0])
        typ, data = imap_server.search(None, 'SEEN')
        if typ == 'OK':
            seen = len(data[0].split())
            print('{}/{} new'.format(total, total - seen))
    if typ != 'OK':
        print('?? new')
    imap_server.logout()
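
    If the ':' diagnosis above is right, it is easy to see where the confusion comes from: urllib.parse separates the userinfo from any port, whereas naive splitting of the raw string on ':' does not. A small illustration (Python 3; the credentials are placeholders):

    ```python
    from urllib.parse import urlsplit

    # Placeholder credentials, just to show how the feed URL decomposes
    url = "https://user:secret@mail.google.com/mail/feed/atom"
    parts = urlsplit(url)
    print(parts.username)  # user
    print(parts.password)  # secret
    print(parts.hostname)  # mail.google.com
    print(parts.port)      # None -- this ':' belongs to the userinfo, not a port
    ```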
