Python 2.5?

I noticed that Arch hasn't updated it yet, and it's been a while...

You aren't getting it yet, though; big packages like Python take a while to upgrade. Be patient, Python 2.4 works perfectly fine.

Similar Messages

  • Unable to capture stdout, stderr from python.

    I am trying to read the stdout and stderr of a Python program.
    When I invoke anything other than Python, my class works fine.
    The Python program takes about 1 second to start. It then prints one line of text, then it prints a prompt and waits for input from the user.
    I tried another program that behaves the same way, and there I got the first line (though not the prompt); that one was a native C program, not Python.
    I have tried using BufferedReader, but without success.
    My class looks like this:
    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    public class Executer extends Thread {
        // read the standard output of the external executable
        // in a separate thread to avoid I/O deadlocks
        final int BUFFER = 2048;
        Process p;
        BufferedInputStream in;
        OutputStream out;
        String[] cmdarr = null;
        public Executer(String path) {
            this.cmdarr = new String[] { path };
        }
        public Executer(String[] path) {
            this.cmdarr = path;
        }
        public void run() {
            try {
                System.out.println("doing call");
                p = Runtime.getRuntime().exec(cmdarr);
                in = new BufferedInputStream(p.getInputStream());
                out = p.getOutputStream();
                int i;
                byte[] buf = new byte[BUFFER];
                while ((i = in.read(buf, 0, BUFFER)) != -1)
                    System.out.write(buf, 0, i);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        public int killProcess() throws InterruptedException {
            p.destroy();
            try {
                in.close();
                System.out.println("Closed");
            } catch (IOException e) {
                e.printStackTrace();
            }
            return p.waitFor();
        }
    }
    Any ideas are appreciated.

    When Runtime Exec Won't - a good article on running processes from a Java program...
    Hope that helps;
    lutha
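    A likely culprit here (my suggestion, not stated in the thread): Python block-buffers stdout when it is connected to a pipe rather than a terminal, so the child's first line only arrives once the buffer fills or the process exits. Starting the interpreter with -u disables that buffering. A minimal sketch from the Python side, using a hypothetical one-line child script:

```python
# Sketch: with -u, the child's stdout is unbuffered, so a reader on the
# other end of the pipe sees each line as soon as it is printed.
import subprocess
import sys

# Hypothetical child program standing in for the poster's script.
child = "print('hello from child')"

p = subprocess.Popen([sys.executable, "-u", "-c", child],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
first_line = p.stdout.readline().decode().rstrip()
print(first_line)  # hello from child
p.wait()
```

    The same flag can be added wherever the Java side builds its command array, e.g. `{"python", "-u", "script.py"}`.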

  • Calling a .py script inside a .VI, using anaconda's Libraries or specific python version (SystemExec.vi(?), MAC OSX)

    Hi,
    I'm new to both mac and LabView, so it may well be something obvious I'm missing.
    I'm trying to use System Exec (http://zone.ni.com/reference/en-XX/help/371361L-01/glang/system_exec/ with LabVIEW 2014 on Mac OS X Mavericks) to call a python script to do some image processing after taking some data and saving images via LabView.
    My problem is that I cannot figure out how to point it to the correct version/install of python to use, as it seems to use the pre-installed 2.7.5 apple version no matter whether that's set to be my default or not (and hence, it can't see my SciPy, PIL etc. packages that come with my Anaconda install).
    It doesn't seem to matter whether I have certain packages installed for 2.7 accessible via the terminal; the LabVIEW System Exec command line can't seem to find them either way, and throws up an error if a script that imports them is called.
    I've tried changing the working directory to the relevant anaconda/bin folder, I've also tried
    cd /Applications/anaconda/bin; python <file.py> 
    as the CL input to system exec, but it's not wanting to use anything other than the apple install. I've tried Anaconda installs with both python 2.7 and 3.4, so it doesn't seem to be down to the 3.* compatibility issues alone.
    Printing the python version to a string indicator box in LV via python script gives the following:
    2.7.5 (default, Mar 9 2014, 22:15:05)
    [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
    Whereas via the terminal, the default version is the anaconda install I'd like to use, currently
    Python 3.4.1 |Anaconda 2.1.0 (x86_64)| (default, Sep 10 2014, 17:24:09)
    [GCC 4.2.1 (Apple Inc. build 5577)] on darwin
    I can do which python3.4 on the sys exec CL input to get the correct /anaconda/bin location, but if I try  python3.4 <file.py>, I just get a Seg fault.
    I've found examples for System Exec use and instructions for calling python scripts with specific versions, but neither works with Mac (frustratingly).
    I've also tried editing the Path Environment Variable, and tried getting rid of the python 2.7 install entirely, but none of these work either (the latter just causing errors all over the shop).
    So, in summary
    I'd just like to know how to specify the python version LabView uses and to have use of the corresponding python libraries.
    Either Python version (2 or 3) will do; I'd just like to be able to use the Anaconda libraries I'll be writing my scripts with (I installed Anaconda because the packages I need are only compatible with a specific Python version on Mac, which is not the version Xcode was compatible with, argh).
    All packages work normally when using the Anaconda version with the terminal or with Spyder.
    Is there a function I can use other than System Exec? What am I missing?
    Please help.
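    One thing worth trying (my suggestion, not from the thread): give System Exec the absolute path of the interpreter, e.g. /Applications/anaconda/bin/python script.py, so no PATH lookup happens at all. The idea can be sanity-checked from Python itself, where sys.executable is the running interpreter's absolute path:

```python
# Sketch: invoking an interpreter by absolute path bypasses PATH entirely,
# which is what a System Exec command line needs. sys.executable stands in
# here for e.g. "/Applications/anaconda/bin/python" (path from the post).
import subprocess
import sys

out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.version.split()[0])"])
print(out.decode().strip())  # version of the interpreter you pointed at
```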

    janedoe777 wrote:
    Accidental. Also, that's not terribly helpful.
    What exactly does that mean?
    I'm not debating whether it was accidental or intentional.  It certainly isn't helpful to post the same message twice.
    It is helpful to point out this message is a duplicate so people don't waste their time answering this one if you are already being helped in the other one.

  • USB 6009 Python Control

     Hello,
    I am attempting to use Python (with ctypes) to control my USB-6009. The goal is to write out an analog voltage and then read the voltage back through an analog input, with the ability to change the number of points averaged and the number of times this is repeated (sweeps), averaging the analog input array. The issue is that as we increase the number of times the voltage ramp is repeated (sweeps), we get a timeout error (nidaq call failed with error -200284: 'Some or all of the samples requested have not yet been acquired'). This has us confused, because the sweeps are in a larger loop and the DAQ function should be the same whether there are 10 sweeps (works) or 1000 sweeps (crashes). Any insight would be greatly appreciated. I have included the code below for reference.
    import ctypes
    import numpy
    from time import *
    from operator import add
    nidaq = ctypes.windll.nicaiu # load the DLL
    # Setup some typedefs and constants
    # to correspond with values in
    # C:\Program Files\National Instruments\NI-DAQ\DAQmx ANSI C Dev\include\NIDAQmx.h
    # Scan Settings
    aoDevice = "Dev2/ao0"
    aiDevice = "Dev2/ai0"
    NumAvgPts = 10
    NumSweeps = 50
    NumSpecPts = 100
    filename = '12Feb15_CO2 Test_12.txt'
    Readrate = 40000.0
    Samplerate = 1000
    StartVolt = 0.01
    FinalVolt = 1.01
    voltInc = (FinalVolt - StartVolt)/NumSpecPts
    # the typedefs
    int32 = ctypes.c_long
    uInt32 = ctypes.c_ulong
    uInt64 = ctypes.c_ulonglong
    float64 = ctypes.c_double
    TaskHandle = uInt32
    # the constants
    DAQmx_Val_Cfg_Default = int32(-1)
    DAQmx_Val_Volts = 10348
    DAQmx_Val_Rising = 10280
    DAQmx_Val_FiniteSamps = 10178
    DAQmx_Val_GroupByChannel = 0
    def CHK_ao(err):
        """a simple error checking routine"""
        if err < 0:
            buf_size = 100
            buf = ctypes.create_string_buffer('\000' * buf_size)
            nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
            raise RuntimeError('nidaq call failed with error %d: %s'%(err,repr(buf.value)))
        if err > 0:
            buf_size = 100
            buf = ctypes.create_string_buffer('\000' * buf_size)
            nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
            raise RuntimeError('nidaq generated warning %d: %s'%(err,repr(buf.value)))
    def CHK_ai(err):
        """a simple error checking routine"""
        if err < 0:
            buf_size = NumAvgPts*10
            buf = ctypes.create_string_buffer('\000' * buf_size)
            nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
            raise RuntimeError('nidaq call failed with error %d: %s'%(err,repr(buf.value)))
    def Analog_Output():
        taskHandle = TaskHandle(0)
        (nidaq.DAQmxCreateTask("",ctypes.byref(taskHandle)))
        (nidaq.DAQmxCreateAOVoltageChan(taskHandle,
                                        aoDevice,
                                        float64(0),
                                        float64(5),
                                        DAQmx_Val_Volts,
                                        None))
        (nidaq.DAQmxCfgSampClkTiming(taskHandle,"",float64(Samplerate),
                                     DAQmx_Val_Rising,DAQmx_Val_FiniteSamps,
                                     uInt64(NumAvgPts)))  # means we could turn this into continuously ramping and reading
        (nidaq.DAQmxStartTask(taskHandle))
        (nidaq.DAQmxWriteAnalogScalarF64(taskHandle, True, float64(10.0), float64(CurrentVolt), None))
        nidaq.DAQmxStopTask(taskHandle)
        nidaq.DAQmxClearTask(taskHandle)
    def Analog_Input():
        global average
        # initialize variables
        taskHandle = TaskHandle(0)
        data = numpy.zeros((NumAvgPts,),dtype=numpy.float64)
        # now, on with the program
        CHK_ai(nidaq.DAQmxCreateTask("",ctypes.byref(taskHandle)))
        CHK_ai(nidaq.DAQmxCreateAIVoltageChan(taskHandle,aiDevice,"",
                                              DAQmx_Val_Cfg_Default,
                                              float64(-10.0),float64(10.0),
                                              DAQmx_Val_Volts,None))
        CHK_ai(nidaq.DAQmxCfgSampClkTiming(taskHandle,"",float64(Readrate),
                                           DAQmx_Val_Rising,DAQmx_Val_FiniteSamps,
                                           uInt64(NumAvgPts)))
        CHK_ai(nidaq.DAQmxStartTask(taskHandle))
        read = int32()
        CHK_ai(nidaq.DAQmxReadAnalogF64(taskHandle,NumAvgPts,float64(10.0),
                                        DAQmx_Val_GroupByChannel,data.ctypes.data,
                                        NumAvgPts,ctypes.byref(read),None))
        #print "Acquired %d points"%(read.value)
        if taskHandle.value != 0:
            nidaq.DAQmxStopTask(taskHandle)
            nidaq.DAQmxClearTask(taskHandle)
        average = sum(data)/data.size
    Thanks, 
    Erin
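    For what it's worth, a quick sanity check on the numbers in the post (my own arithmetic, using the posted settings): one finite acquisition of NumAvgPts samples at Readrate samples/s takes far less than the 10 s timeout passed to DAQmxReadAnalogF64, so the timeout is not explained by the acquisition length itself; something accumulating across sweeps (task setup/teardown, for example) is a more plausible guess.

```python
# Back-of-envelope check using the settings from the post: how long one
# finite acquisition should take, versus the 10 s read timeout.
Readrate = 40000.0   # samples per second (from the post)
NumAvgPts = 10       # samples per finite acquisition (from the post)

acq_seconds = NumAvgPts / Readrate
print(acq_seconds)   # 0.00025 -- nowhere near the 10 s timeout
```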

    This forum is for LabVIEW questions.

  • USB-6009, mac OS 10.6.8 and python

    Hello!
    I am using USB-6009 under mac OS 10.6.8 and python.
    I am trying to run the following command:
    analogOutputTask = nidaqmx.AnalogOutputTask(name='MyAOTask')
    I get the following message
    File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/nidaqmx/libnidaqmx.py", line 180, in CALL
    AttributeError: 'NoneType' object has no attribute 'DAQmxCreateTask'
    Any help would be very much appreciated,
    Thanks 
    j.

    Hello Jamyamdy, 
    Honestly, I am not too sure about this, but were you able to acquire data before or is this your first trial? 
    From the usb-6009 product page, it says the following. 
    For Mac OS X and Linux users, download the NI-DAQmx Base driver software and program the USB-6009 with NI LabVIEW or C.
     http://sine.ni.com/nips/cds/view/p/lang/en/nid/201987
    Regards,
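    To add a diagnostic angle (my reading of the traceback, not from the thread): the wrapper's module-level library handle being None means the DAQmx shared library was never loaded, which matches the product page's note that plain NI-DAQmx is not available for Mac OS X (only NI-DAQmx Base). A quick way to see what the dynamic loader can find; the library names below are assumptions about what the wrapper might look for:

```python
# Sketch: "'NoneType' object has no attribute 'DAQmxCreateTask'" means the
# wrapper's handle to the DAQmx shared library is None, i.e. loading failed.
# Library names here are guesses, not confirmed wrapper behavior.
import ctypes.util

for name in ("nidaqmx", "nidaqmxbase"):
    path = ctypes.util.find_library(name)
    print(name, "->", path)  # None means the loader cannot locate it
```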

  • [SOLVED] tv_grab_nl_py works with python 2.6.5, fails on 3.1.2

    Hi All,
    I have updated my mediacenter. Now tv_grab_nl_py does not work anymore:
    [cedric@tv ~]$ tv_grab_nl_py --output ~/listings.xml --fast
    File "/usr/bin/tv_grab_nl_py", line 341
    print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
    ^
    SyntaxError: invalid syntax
    [cedric@tv ~]$
    the version of python on the mediacenter (running arch linux):
    [cedric@tv ~]$ python
    Python 3.1.2 (r312:79147, Oct 4 2010, 12:35:40)
    [GCC 4.5.1] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
    I have copied the file to my laptop, there it looks like it's working:
    ./tv_grab_nl_py --output ~/listings.xml --fast
    Config file /home/cedric/.xmltv/tv_grab_nl_py.conf not found.
    Re-run me with the --configure flag.
    cedric@laptop:~$
    the version of python on my laptop (running arch linux):
    cedric@laptop:~$ python
    Python 2.6.5 (r265:79063, Apr 1 2010, 05:22:20)
    [GCC 4.4.3 20100316 (prerelease)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
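    The difference between the two machines explains the error (my diagnosis, consistent with the tracebacks above): the grabber is written for Python 2, where print is a statement; Python 3 removed that form in favor of the print() function, so the interpreter stops at the first print line with SyntaxError before running anything. Pointing the script at a python2 binary, if one is installed, avoids touching the grabber at all. A tiny self-check:

```python
# Sketch: the Python 2 print *statement* fails to even compile on Python 3,
# which is exactly the SyntaxError shown above, raised before any code runs.
import sys

src = "print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl'"
try:
    compile(src, "tv_grab_nl_py", "exec")
    print("compiles: Python %d accepts the print statement" % sys.version_info[0])
except SyntaxError:
    print("SyntaxError: Python %d removed the print statement" % sys.version_info[0])
```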
    the script I'm trying to run:
    [cedric@tv ~]$ cat tv_grab_nl_py
    #!/usr/bin/env python
    # $LastChangedDate: 2009-11-14 10:06:41 +0100 (Sat, 14 Nov 2009) $
    # $Rev: 104 $
    # $Author: pauldebruin $
    """
    SYNOPSIS
    tv_grab_nl_py is a python script that trawls tvgids.nl for TV
    programming information and outputs it in XMLTV-formatted output (see
    http://membled.com/work/apps/xmltv). Users of MythTV
    (http://www.mythtv.org) will appreciate the output generated by this
    grabber, because it fills the category fields, i.e. colors in the EPG,
    and has logos for most channels automagically available. Check the
    website below for screenshots. The newest version of this script can be
    found here:
    http://code.google.com/p/tvgrabnlpy/
    USAGE
    Check the web site above and/or run script with --help and start from there
    HISTORY
    tv_grab_nl_py used to be called tv_grab_nl_pdb, first released on
    2003/07/09. The name change was necessary because more and more people
    are actively contributing to this script and I always disliked using my
    initials (I was just too lazy to change it). At the same time I switched
    from using CVS to SVN and as a result the version numbering scheme has
    changed. The latest official release of tv_grab_nl_pdb is 0.48. The
    first official release of tv_grab_nl_py is 6.
    QUESTIONS
    Questions (and patches) are welcome at: paul at pwdebruin dot net.
    IMPORTANT NOTES
    If you were using tv_grab_nl from the XMLTV bundle then enable the
    compat flag or use the --compat command-line option. Otherwise, the
    xmltvid's are wrong and you will not see any new data in MythTV.
    CONTRIBUTORS
    Main author: Paul de Bruin (paul at pwdebruin dot net)
    Michel van der Laan made available his extensive collection of
    high-quality logos that is used by this script.
    Michael Heus has taken the effort to further enhance this script so that
    it now also includes:
    - Credit info: directors, actors, presenters and writers
    - removal of programs that are actually just groupings/broadcasters
      (e.g. "KETNET", "Wild Friday", "Z@pp")
    - Star-rating for programs tipped by tvgids.nl
    - Black&White, Stereo and URL info
    - Better detection of Movies
    - and much, much more...
    Several other people have provided feedback and patches (these are the
    people I could find in my email archive, if you are missing from this
    list let me know):
    Huub Bouma, Roy van der Kuil, Remco Rotteveel, Mark Wormgoor, Dennis van
    Onselen, Hugo van der Kooij, Han Holl, Ian Mcdonald, Udo van den Heuvel.
    """
    # Modules we need
    import re, urllib2, getopt, sys
    import time, random
    import htmlentitydefs, os, os.path, pickle
    from string import replace, split, strip
    from threading import Thread
    from xml.sax import saxutils
    # Extra check for the datetime module
    try:
        import datetime
    except:
        sys.stderr.write('This script needs the datetime module that was introduced in Python version 2.3.\n')
        sys.stderr.write('You are running:\n')
        sys.stderr.write('%s\n' % sys.version)
        sys.exit(1)
    # XXX: fix to prevent crashes in Snow Leopard [Robert Klep]
    if sys.platform == 'darwin' and sys.version_info[:3] == (2, 6, 1):
        try:
            urllib2.urlopen('http://localhost.localdomain')
        except:
            pass
    # do extra debug stuff
    debug = 1
    try:
        import redirect
    except:
        debug = 0
        pass
    # globals
    # compile only one time
    r_entity = re.compile(r'&(#x[0-9A-Fa-f]+|#[0-9]+|[A-Za-z]+);')
    tvgids = 'http://www.tvgids.nl/'
    uitgebreid_zoeken = tvgids + 'zoeken/'
    # how many seconds to wait before we timeout on a
    # url fetch, 10 seconds seems reasonable
    global_timeout = 10
    # Wait a random number of seconds between each page fetch.
    # We want to be nice and not hammer tvgids.nl (these are the
    # friendly people that provide our data...).
    # Also, it appears tvgids.nl throttles its output.
    # So there, there is not point in lowering these numbers, if you
    # are in a hurry, use the (default) fast mode.
    nice_time = [1, 2]
    # Maximum length in minutes of gaps/overlaps between programs to correct
    max_overlap = 10
    # Strategy to use for correcting overlapping programming:
    # 'average' = use average of stop and start of next program
    # 'stop' = keep stop time of current program and adjust start time of next program accordingly
    # 'start' = keep start time of next program and adjust stop of current program accordingly
    # 'none' = do not use any strategy and see what happens
    overlap_strategy = 'average'
    # Experimental strategy for clumping overlapping programming, all programs that overlap more
    # than max_overlap minutes, but less than the length of the shortest program are clumped
    # together. Highly experimental and disabled for now.
    do_clump = False
    # Create a category translation dictionary
    # Look in mythtv/themes/blue/ui.xml for all category names
    # The keys are the categories used by tvgids.nl (lowercase please)
    cattrans = { 'amusement' : 'Talk',
    'animatie' : 'Animated',
    'comedy' : 'Comedy',
    'documentaire' : 'Documentary',
    'educatief' : 'Educational',
    'erotiek' : 'Adult',
    'film' : 'Film',
    'muziek' : 'Art/Music',
    'informatief' : 'Educational',
    'jeugd' : 'Children',
    'kunst/cultuur' : 'Arts/Culture',
    'misdaad' : 'Crime/Mystery',
    'muziek' : 'Music',
    'natuur' : 'Science/Nature',
    'nieuws/actualiteiten' : 'News',
    'overige' : 'Unknown',
    'religieus' : 'Religion',
    'serie/soap' : 'Drama',
    'sport' : 'Sports',
    'theater' : 'Arts/Culture',
    'wetenschap' : 'Science/Nature'}
    # Create a role translation dictionary for the xmltv credits part
    # The keys are the roles used by tvgids.nl (lowercase please)
    roletrans = {'regie' : 'director',
    'acteurs' : 'actor',
    'presentatie' : 'presenter',
    'scenario' : 'writer'}
    # We have two sources of logos, the first provides the nice ones, but is not
    # complete. We use the tvgids logos to fill the missing bits.
    logo_provider = [ 'http://visualisation.tudelft.nl/~paul/logos/gif/64x64/',
    'http://static.tvgids.nl/gfx/zenders/' ]
    logo_names = {
    1 : [0, 'ned1'],
    2 : [0, 'ned2'],
    3 : [0, 'ned3'],
    4 : [0, 'rtl4'],
    5 : [0, 'een'],
    6 : [0, 'canvas_color'],
    7 : [0, 'bbc1'],
    8 : [0, 'bbc2'],
    9 : [0,'ard'],
    10 : [0,'zdf'],
    11 : [1, 'rtl'],
    12 : [0, 'wdr'],
    13 : [1, 'ndr'],
    14 : [1, 'srsudwest'],
    15 : [1, 'rtbf1'],
    16 : [1, 'rtbf2'],
    17 : [0, 'tv5'],
    18 : [0, 'ngc'],
    19 : [1, 'eurosport'],
    20 : [1, 'tcm'],
    21 : [1, 'cartoonnetwork'],
    24 : [0, 'canal+red'],
    25 : [0, 'mtv-color'],
    26 : [0, 'cnn'],
    27 : [0, 'rai'],
    28 : [1, 'sat1'],
    29 : [0, 'discover-spacey'],
    31 : [0, 'rtl5'],
    32 : [1, 'trt'],
    34 : [0, 'veronica'],
    35 : [0, 'tmf'],
    36 : [0, 'sbs6'],
    37 : [0, 'net5'],
    38 : [1, 'arte'],
    39 : [0, 'canal+blue'],
    40 : [0, 'at5'],
    46 : [0, 'rtl7'],
    49 : [1, 'vtm'],
    50 : [1, '3sat'],
    58 : [1, 'pro7'],
    59 : [1, 'kanaal2'],
    60 : [1, 'vt4'],
    65 : [0, 'animal-planet'],
    73 : [1, 'mezzo'],
    86 : [0, 'bbc-world'],
    87 : [1, 'tve'],
    89 : [1, 'nick'],
    90 : [1, 'bvn'],
    91 : [0, 'comedy_central'],
    92 : [0, 'rtl8'],
    99 : [1, 'sport1_1'],
    100 : [0, 'rtvu'],
    101 : [0, 'tvwest'],
    102 : [0, 'tvrijnmond'],
    103 : [1, 'tvnoordholland'],
    104 : [1, 'bbcprime'],
    105 : [1, 'spiceplatinum'],
    107 : [0, 'canal+yellow'],
    108 : [0, 'tvnoord'],
    109 : [0, 'omropfryslan'],
    114 : [0, 'omroepbrabant']}
    # A selection of user agents we will impersonate, in an attempt to be less
    # conspicuous to the tvgids.nl police.
    user_agents = [ 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.7) Gecko/20060909 Firefox/1.5.0.7',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.9) Gecko/20071105 Firefox/2.0.0.9',
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.8) Gecko/20071022 Ubuntu/7.10 (gutsy) Firefox/2.0.0.8' ]
    # Work in progress, the idea is to cache program categories and
    # descriptions to eliminate a lot of page fetches from tvgids.nl
    # for programs that do not have interesting/changing descriptions
    class ProgramCache:
        """
        A cache to hold program name and category info.
        TVgids stores the detail for each program on a separate URL with an
        (apparently unique) ID. This cache stores the fetched info with the ID.
        New fetches will use the cached info instead of doing an (expensive)
        page fetch.
        """
        def __init__(self, filename=None):
            """Create a new ProgramCache object, optionally from file"""
            # where we store our info
            self.filename = filename
            if filename == None:
                self.pdict = {}
            else:
                if os.path.isfile(filename):
                    self.load(filename)
                else:
                    self.pdict = {}
        def load(self, filename):
            """Loads a pickled cache dict from file"""
            try:
                self.pdict = pickle.load(open(filename,'r'))
            except:
                sys.stderr.write('Error loading cache file: %s (possibly corrupt)' % filename)
                sys.exit(2)
        def dump(self, filename):
            """Dumps a pickled cache, and makes sure it is valid"""
            if os.access(filename, os.F_OK):
                try:
                    os.remove(filename)
                except:
                    sys.stderr.write('Cannot remove %s, check permissions' % filename)
            pickle.dump(self.pdict, open(filename+'.tmp', 'w'))
            os.rename(filename+'.tmp', filename)
        def query(self, program_id):
            """Updates/gets/whatever."""
            try:
                return self.pdict[program_id]
            except:
                return None
        def add(self, program):
            """Adds a program"""
            self.pdict[program['ID']] = program
        def clear(self):
            """Clears the cache (i.e. empties it)"""
            self.pdict = {}
        def clean(self):
            """
            Removes all cached programming before today.
            Also removes erroneously cached programming.
            """
            now = time.localtime()
            dnow = datetime.datetime(now[0],now[1],now[2])
            for key in self.pdict.keys():
                try:
                    if self.pdict[key]['stop-time'] < dnow or self.pdict[key]['name'].lower() == 'onbekend':
                        del self.pdict[key]
                except:
                    pass
    def usage():
        print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
        print 'and stores it in XMLTV-compatible format.\n'
        print 'Usage:'
        print '--help, -h = print this info'
        print '--configure = create configfile (overwrites existing file)'
        print '--config-file = name of the configuration file (default = ~/.xmltv/tv_grab_py.conf)'
        print '--capabilities = xmltv required option'
        print '--desc-length = maximum allowed length of programme descriptions in bytes.'
        print '--description = prints a short description of the grabber'
        print '--output = file where to put the output'
        print '--days = # number of days to grab'
        print '--preferredmethod = returns the preferred method to be called'
        print '--fast = do not grab descriptions of programming'
        print '--slow = grab descriptions of programming'
        print '--quiet = suppress all output'
        print '--compat = append tvgids.nl to the xmltv id (use this if you were using tv_grab_nl)'
        print '--logos 0/1 = insert urls to channel icons (mythfilldatabase will then use these)'
        print '--nocattrans = do not translate the grabbed genres into MythTV-genres'
        print '--cache = cache descriptions and use the file to store'
        print '--clean_cache = clean the cache file before fetching'
        print '--clear_cache = empties the cache file before fetching data'
        print '--slowdays = grab slowdays initial days and the rest in fast mode'
        print '--max_overlap = maximum length of overlap between programming to correct [minutes]'
        print '--overlap_strategy = what strategy to use to correct overlaps (check top of source code)'
    def filter_line_identity(m, defs=htmlentitydefs.entitydefs):
        # callback: translate one entity to its ISO Latin value
        k = m.group(1)
        if k.startswith("#") and k[1:] in xrange(256):
            return chr(int(k[1:]))
        try:
            return defs[k]
        except KeyError:
            return m.group(0) # use as is
    def filter_line(s):
        """Removes unwanted stuff in strings (adapted from tv_grab_be)"""
        # do the latin1 stuff
        s = r_entity.sub(filter_line_identity, s)
        s = replace(s,'&nbsp;',' ')
        # I suspect the following three lines are redundant, but they do
        # little harm -- Han Holl
        s = replace(s,'\r',' ')
        x = re.compile('(<.*?>)') # Udo
        s = x.sub('', s) #Udo
        s = replace(s, '~Q', "'")
        s = replace(s, '~R', "'")
        # Hmm, not sure if I understand this. Without it, mythfilldatabase barfs
        # on program names like "Steinbrecher &..."
        # We must create valid XML -- Han Holl
        s = saxutils.escape(s)
        return s
    def calc_timezone(t):
        """
        Takes a time from tvgids.nl and formats it with all the required
        timezone conversions.
        in: '20050429075000'
        out:'20050429075000 (CET|CEST)'
        Until I have figured out how to correctly do timezoning in python this method
        will bork if you are not in a zone that has the same DST rules as 'Europe/Amsterdam'.
        """
        year = int(t[0:4])
        month = int(t[4:6])
        day = int(t[6:8])
        hour = int(t[8:10])
        minute = int(t[10:12])
        #td = {'CET': '+0100', 'CEST': '+0200'}
        #td = {'CET': '+0100', 'CEST': '+0200', 'W. Europe Standard Time' : '+0100', 'West-Europa (standaardtijd)' : '+0100'}
        td = {0 : '+0100', 1 : '+0200'}
        pt = time.mktime((year,month,day,hour,minute,0,0,0,-1))
        timezone=''
        try:
            #timezone = time.tzname[(time.localtime(pt))[-1]]
            timezone = (time.localtime(pt))[-1]
        except:
            sys.stderr.write('Cannot convert time to timezone')
        return t+' %s' % td[timezone]
    def format_timezone(td):
        """Given a datetime object, returns a string in XMLTV format"""
        tstr = td.strftime('%Y%m%d%H%M00')
        return calc_timezone(tstr)
    def get_page_internal(url, quiet=0):
        """
        Retrieves the url and returns a string with the contents.
        Optionally, returns None if processing takes longer than
        the specified number of timeout seconds.
        """
        txtdata = None
        txtheaders = {'Keep-Alive' : '300',
                      'User-Agent' : user_agents[random.randint(0, len(user_agents)-1)] }
        try:
            #fp = urllib2.urlopen(url)
            rurl = urllib2.Request(url, txtdata, txtheaders)
            fp = urllib2.urlopen(rurl)
            lines = fp.readlines()
            page = "".join(lines)
            return page
        except:
            if not quiet:
                sys.stderr.write('Cannot open url: %s\n' % url)
            return None
    class FetchURL(Thread):
        """A simple thread to fetch a url with a timeout"""
        def __init__ (self, url, quiet=0):
            Thread.__init__(self)
            self.quiet = quiet
            self.url = url
            self.result = None
        def run(self):
            self.result = get_page_internal(self.url, self.quiet)
    def get_page(url, quiet=0):
        """
        Wrapper around get_page_internal to catch the
        timeout exception
        """
        try:
            fu = FetchURL(url, quiet)
            fu.start()
            fu.join(global_timeout)
            return fu.result
        except:
            if not quiet:
                sys.stderr.write('get_page timed out on (>%s s): %s\n' % (global_timeout, url))
            return None
    def get_channels(file, quiet=0):
        """
        Get a list of all available channels and store these
        in a file.
        """
        # store channels in a dict
        channels = {}
        # tvgids stores several instances of channels, we want to
        # find all the possible channels
        channel_get = re.compile('<optgroup label=.*?>(.*?)</optgroup>', re.DOTALL)
        # this is how we will find a (number, channel) instance
        channel_re = re.compile('<option value="([0-9]+)" >(.*?)</option>', re.DOTALL)
        # this is where we will try to find our channel list
        total = get_page(uitgebreid_zoeken, quiet)
        if total == None:
            return
        # get a list of match objects of all the <select blah station>
        stations = channel_get.finditer(total)
        # and create a dict of number, channel_name pairs
        # we do this this way because several instances of the
        # channel list are stored in the url and not all of the
        # instances have all the channels, this way we get them all.
        for station in stations:
            m = channel_re.finditer(station.group(0))
            for p in m:
                try:
                    a = int(p.group(1))
                    b = filter_line(p.group(2))
                    channels[a] = b
                except:
                    sys.stderr.write('Oops, [%s,%s] does not look like a valid channel, skipping it...\n' % (p.group(1),p.group(2)))
        # sort on channel number (arbitrary but who cares)
        keys = channels.keys()
        keys.sort()
        # and create a file with the channels
        f = open(file,'w')
        for k in keys:
            f.write("%s %s\n" % (k, channels[k]))
        f.close()
    def get_channel_all_days(channel, days, quiet=0):
        """
        Get all available days of programming for channel number
        The output is a list of programming in order where each row
        contains a dictionary with program information.
        """
        now = datetime.datetime.now()
        programs = []
        # Tvgids shows programs per channel per day, so we loop over the number of days
        # we are required to grab
        for offset in range(0, days):
            channel_url = 'http://www.tvgids.nl/zoeken/?d=%i&z=%s' % (offset, channel)
            # For historic purposes, the old style url that gave us a full week in advance:
            # channel_url = 'http://www.tvgids.nl/zoeken/?trefwoord=Titel+of+trefwoord&interval=0&timeslot='+\
            # '&station=%s&periode=%i&genre=&order=0' % (channel,days-1)
            # Sniff, we miss you...
            if offset > 0:
                time.sleep(random.randint(nice_time[0], nice_time[1]))
            # get the raw programming for the day
            total = get_page(channel_url, quiet)
            if total == None:
                return programs
            # Setup a number of regexps
            # checktitle will match the title row in H2 tags of the daily overview page, e.g.
            # <h2>zondag 19 oktober 2008</h2>
            checktitle = re.compile('<h2>(.*?)</h2>',re.DOTALL)
            # getrow will locate each row with program details
            getrow = re.compile('<a href="/programma/(.*?)</a>',re.DOTALL)
            # parserow matches the required program info, with groups:
            # 1 = program ID
            # 2 = broadcast times
            # 3 = program name
            parserow = re.compile('(.*?)/.*<span class="time">(.*?)</span>.*<span class="title">(.*?)</span>', re.DOTALL)
            # normal begin and end times
            times = re.compile('([0-9]+:[0-9]+) - ([0-9]+:[0-9]+)?')
            # Get the day of month listed on the page as well as the expected date we are grabbing and compare these.
            # If these do not match, we skip parsing the programs on the page and issue a warning.
            #dayno = int(checkday.search(total).group(1))
            title = checktitle.search(total)
            if title:
                title = title.group(1)
                dayno = title.split()[1]
            else:
                sys.stderr.write('\nOops, there was a problem with page %s. Skipping it...\n' % (channel_url))
                continue
            expected = now + datetime.timedelta(days=offset)
            if (not dayno.isdigit() or int(dayno) != expected.day):
                sys.stderr.write('\nOops, did not expect page %s to list programs for "%s", skipping it...\n' % (channel_url,title))
                continue
            # and find relevant programming info
            allrows = getrow.finditer(total)
            for r in allrows:
                detail = parserow.search(r.group(1))
                if detail != None:
                    # default times
                    start_time = None
                    stop_time = None
                    # parse for begin and end times
                    t = times.search(detail.group(2))
                    if t != None:
                        start_time = t.group(1)
                        stop_time = t.group(2)
                    program_url = 'http://www.tvgids.nl/programma/' + detail.group(1) + '/'
                    program_name = detail.group(3)
                    # store time, name and detail url in a dictionary
                    tdict = {}
                    tdict['start'] = start_time
                    tdict['stop'] = stop_time
                    tdict['name'] = program_name
                    if tdict['name'] == '':
                        tdict['name'] = 'onbekend'
                    tdict['url'] = program_url
                    tdict['ID'] = detail.group(1)
                    tdict['offset'] = offset
                    # Add star rating if tipped by tvgids.nl
                    tdict['star-rating'] = ''
                    if r.group(1).find('Tip') != -1:
                        tdict['star-rating'] = '4/5'
                    # and append the program to the list of programs
                    programs.append(tdict)
        # done
        return programs
def make_daytime(time_string, offset=0, cutoff='00:00', stoptime=False):
    """
    Given a string '11:35' and an offset from today, return a datetime
    object. The cutoff specifies the point where the new day starts.

    Examples:
    In [2]: make_daytime('11:34',0)
    Out[2]: datetime.datetime(2006, 8, 3, 11, 34)
    In [3]: make_daytime('11:34',1)
    Out[3]: datetime.datetime(2006, 8, 4, 11, 34)
    In [7]: make_daytime('11:34',0,'12:00')
    Out[7]: datetime.datetime(2006, 8, 4, 11, 34)
    In [4]: make_daytime('11:34',0,'11:34',False)
    Out[4]: datetime.datetime(2006, 8, 3, 11, 34)
    In [5]: make_daytime('11:34',0,'11:34',True)
    Out[5]: datetime.datetime(2006, 8, 4, 11, 34)
    """
    h, m = [int(x) for x in time_string.split(':')]
    hm = int(time_string.replace(':', ''))
    chm = int(cutoff.replace(':', ''))
    # check for the cutoff; if the time is before the cutoff then add a day
    extra_day = 0
    if (hm < chm) or (stoptime == True and hm == chm):
        extra_day = 1
    # and create a datetime object, DST is handled at a later point
    pt = time.localtime()
    dt = datetime.datetime(pt[0], pt[1], pt[2], h, m)
    dt = dt + datetime.timedelta(offset + extra_day)
    return dt
def correct_times(programs, quiet=0):
    """
    Parse a list of programs as generated by get_channel_all_days() and
    convert begin and end times to xmltv compatible times in datetime objects.
    """
    if programs == []:
        return programs
    # the start time of programming for this day; times *before* this time
    # are assumed to be on the next day
    day_start_time = '06:00'
    # initialise using the start time of the first program on this day
    if programs[0]['start'] != None:
        day_start_time = programs[0]['start']
    for program in programs:
        if program['start'] == program['stop']:
            program['stop'] = None
        # convert the times
        if program['start'] != None:
            program['start-time'] = make_daytime(program['start'], program['offset'], day_start_time)
        else:
            program['start-time'] = None
        if program['stop'] != None:
            program['stop-time'] = make_daytime(program['stop'], program['offset'], day_start_time, stoptime=True)
            # Extra correction, needed because the stop time of a program may
            # be on the next day, after the day cutoff. For example:
            #   06:00 - 23:40 Long Program
            #   23:40 - 00:10 Lala
            #   00:10 - 08:00 Wawa
            # This puts the end date of Wawa on the current day instead of the
            # next day. There is no way to detect this with a single cutoff in
            # make_daytime, so check whether there is a day difference between
            # the start and stop dates and correct if necessary.
            if program['start-time'] != None:
                # make two dates
                start = program['start-time']
                stop = program['stop-time']
                single_day = datetime.timedelta(1)
                startdate = datetime.datetime(start.year, start.month, start.day)
                stopdate = datetime.datetime(stop.year, stop.month, stop.day)
                if startdate - stopdate == single_day:
                    program['stop-time'] = program['stop-time'] + single_day
        else:
            program['stop-time'] = None
def parse_programs(programs, offset=0, quiet=0):
    """
    Parse a list of programs as generated by get_channel_all_days() and
    convert begin and end times to xmltv compatible times.
    """
    # good programs
    good_programs = []
    # calculate absolute start and stop times
    correct_times(programs, quiet)
    # next, correct for missing end times and copy over all good programming
    # to the good_programs list
    for i in range(len(programs)):
        # try to correct a missing end time by taking the start time of the
        # next program on the schedule
        if (programs[i]['stop-time'] == None and i < len(programs)-1):
            if not quiet:
                sys.stderr.write('Oops, "%s" has no end time. Trying to fix...\n' % programs[i]['name'])
            programs[i]['stop-time'] = programs[i+1]['start-time']
        # The common case: start and end times are present and are not
        # equal to each other (yes, this can happen)
        if programs[i]['start-time'] != None and \
           programs[i]['stop-time'] != None and \
           programs[i]['start-time'] != programs[i]['stop-time']:
            good_programs.append(programs[i])
    # Han Holl: try to exclude programs that stop before they begin
    for i in range(len(good_programs)-1, -1, -1):
        if good_programs[i]['stop-time'] <= good_programs[i]['start-time']:
            if not quiet:
                sys.stderr.write('Deleting invalid stop/start time: %s\n' % good_programs[i]['name'])
            del good_programs[i]
    # Try to exclude programs that only identify a group or broadcaster and
    # have overlapping start/end times with the actual programs
    for i in range(len(good_programs)-2, -1, -1):
        if good_programs[i]['start-time'] <= good_programs[i+1]['start-time'] and \
           good_programs[i]['stop-time'] >= good_programs[i+1]['stop-time']:
            if not quiet:
                sys.stderr.write('Deleting grouping/broadcaster: %s\n' % good_programs[i]['name'])
            del good_programs[i]
    for i in range(len(good_programs)-1):
        # PdB: Fix tvgids start-before-end x minute interval overlap. An overlap
        # (positive or negative) is halved and each half is assigned to the
        # adjacent programmes. The maximum overlap length between programming is
        # set by the global variable 'max_overlap' and defaults to 10 minutes.
        # Examples:
        # Positive overlap (= overlap in programming):
        #   10:55 - 12:00 Lala
        #   11:55 - 12:20 Wawa
        # is transformed into:
        #   10:55 - 11:57 Lala
        #   11:57 - 12:20 Wawa
        # Negative overlap (= gap in programming):
        #   10:55 - 11:50 Lala
        #   12:00 - 12:20 Wawa
        # is transformed into:
        #   10:55 - 11:55 Lala
        #   11:55 - 12:20 Wawa
        stop = good_programs[i]['stop-time']
        start = good_programs[i+1]['start-time']
        dt = stop - start
        avg = start + dt / 2
        overlap = 24*60*60*dt.days + dt.seconds
        # check the size of the overlap
        if 0 < abs(overlap) <= max_overlap*60:
            if not quiet:
                if overlap > 0:
                    sys.stderr.write('"%s" and "%s" overlap %s minutes. Adjusting times.\n' % \
                        (good_programs[i]['name'], good_programs[i+1]['name'], overlap / 60))
                else:
                    sys.stderr.write('"%s" and "%s" have a gap of %s minutes. Adjusting times.\n' % \
                        (good_programs[i]['name'], good_programs[i+1]['name'], abs(overlap) / 60))
            # stop-time of previous program wins
            if overlap_strategy == 'stop':
                good_programs[i+1]['start-time'] = good_programs[i]['stop-time']
            # start-time of next program wins
            elif overlap_strategy == 'start':
                good_programs[i]['stop-time'] = good_programs[i+1]['start-time']
            # average the difference
            elif overlap_strategy == 'average':
                good_programs[i]['stop-time'] = avg
                good_programs[i+1]['start-time'] = avg
            # leave as is
            else:
                pass
    # Experimental strategy to make sure programming does not disappear. All
    # programs that overlap more than the maximum overlap length, but less
    # than the shortest length of the two programs, are clumped.
    if do_clump:
        for i in range(len(good_programs)-1):
            stop = good_programs[i]['stop-time']
            start = good_programs[i+1]['start-time']
            dt = stop - start
            overlap = 24*60*60*dt.days + dt.seconds
            length0 = good_programs[i]['stop-time'] - good_programs[i]['start-time']
            length1 = good_programs[i+1]['stop-time'] - good_programs[i+1]['start-time']
            l0 = length0.days*24*60*60 + length0.seconds
            l1 = length1.days*24*60*60 + length1.seconds
            if abs(overlap) >= max_overlap*60 <= min(l0, l1)*60 and \
               not good_programs[i].has_key('clumpidx') and \
               not good_programs[i+1].has_key('clumpidx'):
                good_programs[i]['clumpidx'] = '0/2'
                good_programs[i+1]['clumpidx'] = '1/2'
                good_programs[i]['stop-time'] = good_programs[i+1]['stop-time']
                good_programs[i+1]['start-time'] = good_programs[i]['start-time']
    # done, nothing to see here, please move on
    return good_programs
    def get_descriptions(programs, program_cache=None, nocattrans=0, quiet=0, slowdays=0):
    Given a list of programs, from get_channel, retrieve program information
    # This regexp tries to find details such as Genre, Acteurs, Jaar van Premiere etc.
    detail = re.compile('<li>.*?<strong>(.*?):</strong>.*?<br />(.*?)</li>', re.DOTALL)
    # These regexps find the description area, the program type and descriptive text
    description = re.compile('<div class="description">.*?<div class="text"(.*?)<div class="clearer"></div>',re.DOTALL)
    descrtype = re.compile('<div class="type">(.*?)</div>',re.DOTALL)
    descrline = re.compile('<p>(.*?)</p>',re.DOTALL)
    # randomize detail requests
    nprograms = len(programs)
    fetch_order = range(0,nprograms)
    random.shuffle(fetch_order)
    counter = 0
    for i in fetch_order:
    counter += 1
    if programs[i]['offset'] >= slowdays:
    continue
    if not quiet:
    sys.stderr.write('\n(%3.0f%%) %s: %s ' % (100*float(counter)/float(nprograms), i, programs[i]['name']))
    # check the cache for this program's ID
    cached_program = program_cache.query(programs[i]['ID'])
    if (cached_program != None):
    if not quiet:
    sys.stderr.write(' [cached]')
    # copy the cached information, except the start/end times, rating and clumping,
    # these may have changed.
    tstart = programs[i]['start-time']
    tstop = programs[i]['stop-time']
    rating = programs[i]['star-rating']
    try:
    clump = programs[i]['clumpidx']
    except:
    clump = False
    programs[i] = cached_program
    programs[i]['start-time'] = tstart
    programs[i]['stop-time'] = tstop
    programs[i]['star-rating'] = rating
    if clump:
    programs[i]['clumpidx'] = clump
    continue
    else:
    # be nice to tvgids.nl
    time.sleep(random.randint(nice_time[0], nice_time[1]))
    # get the details page, and get all the detail nodes
    descriptions = ()
    details = ()
    try:
    if not quiet:
    sys.stderr.write(' [normal fetch]')
    total = get_page(programs[i]['url'])
    details = detail.finditer(total)
    descrspan = description.search(total);
    descriptions = descrline.finditer(descrspan.group(1))
    except:
    # if we cannot find the description page,
    # go to next in the loop
    if not quiet:
    sys.stderr.write(' [fetch failed or timed out]')
    continue
    # define containers
    programs[i]['credits'] = {}
    programs[i]['video'] = {}
    # now parse the details
    line_nr = 1;
    # First, we try to find the program type in the description section.
    # Note that this is not the same as the generic genres (these are searched later on), but a more descriptive one like "Culinair programma"
    # If present, we store this as first part of the regular description:
    programs[i]['detail1'] = descrtype.search(descrspan.group(1)).group(1).capitalize()
    if programs[i]['detail1'] != '':
    line_nr = line_nr + 1
    # Secondly, we add one or more lines of the program description that are present.
    for descript in descriptions:
    d_str = 'detail' + str(line_nr)
    programs[i][d_str] = descript.group(1)
    # Remove sponsored link from description if present.
    sponsor_pos = programs[i][d_str].rfind('<i>Gesponsorde link:</i>')
    if sponsor_pos > 0:
    programs[i][d_str] = programs[i][d_str][0:sponsor_pos]
    programs[i][d_str] = filter_line(programs[i][d_str]).strip()
    line_nr = line_nr + 1
    # Finally, we check out all program details. These are generically denoted as:
    # <li><strong>(TYPE):</strong><br />(CONTENT)</li>
    # Some examples:
    # <li><strong>Genre:</strong><br />16 oktober 2008</li>
    # <li><strong>Genre:</strong><br />Amusement</li>
    for d in details:
    type = d.group(1).strip().lower()
    content_asis = d.group(2).strip()
    content = filter_line(content_asis).strip()
    if content == '':
    continue
    elif type == 'genre':
    # Fix detection of movies based on description as tvgids.nl sometimes
    # categorises a movie as e.g. "Komedie", "Misdaadkomedie", "Detectivefilm".
    genre = content;
    if (programs[i]['detail1'].lower().find('film') != -1 \
    or programs[i]['detail1'].lower().find('komedie') != -1)\
    and programs[i]['detail1'].lower().find('tekenfilm') == -1 \
    and programs[i]['detail1'].lower().find('animatiekomedie') == -1 \
    and programs[i]['detail1'].lower().find('filmpje') == -1:
    genre = 'film'
    if nocattrans:
    programs[i]['genre'] = genre.title()
    else:
    try:
    programs[i]['genre'] = cattrans[genre.lower()]
    except:
    programs[i]['genre'] = ''
    # Parse persons and their roles for credit info
    elif roletrans.has_key(type):
    programs[i]['credits'][roletrans[type]] = []
    persons = content_asis.split(',');
    for name in persons:
    if name.find(':') != -1:
    name = name.split(':')[1]
    if name.find('-') != -1:
    name = name.split('-')[0]
    if name.find('e.a') != -1:
    name = name.split('e.a')[0]
    programs[i]['credits'][roletrans[type]].append(filter_line(name.strip()))
    elif type == 'bijzonderheden':
    if content.find('Breedbeeld') != -1:
    programs[i]['video']['breedbeeld'] = 1
    if content.find('Zwart') != -1:
    programs[i]['video']['blackwhite'] = 1
    if content.find('Teletekst') != -1:
    programs[i]['teletekst'] = 1
    if content.find('Stereo') != -1:
    programs[i]['stereo'] = 1
    elif type == 'url':
    programs[i]['infourl'] = content
    else:
    # In unmatched cases, we still add the parsed type and content to the program details.
    # Some of these will lead to xmltv output during the xmlefy_programs step
    programs[i][type] = content
    # do not cache programming that is unknown at the time
    # of fetching.
    if programs[i]['name'].lower() != 'onbekend':
    program_cache.add(programs[i])
    if not quiet:
    sys.stderr.write('\ndone...\n\n')
    # done
def title_split(program):
    """
    Some channels have the annoying habit of adding the subtitle to the title
    of a program. This function attempts to fix this, by splitting the name
    at a ': '.
    """
    if (program.has_key('titel aflevering') and program['titel aflevering'] != '') \
       or (program.has_key('genre') and program['genre'].lower() in ['movies','film']):
        return
    colonpos = program['name'].rfind(': ')
    if colonpos > 0:
        program['titel aflevering'] = program['name'][colonpos+1:].strip()
        program['name'] = program['name'][0:colonpos].strip()
def xmlefy_programs(programs, channel, desc_len, compat=0, nocattrans=0):
    """
    Given a list of programming (from get_channels())
    returns a string with the xml equivalent.
    """
    output = []
    for program in programs:
        clumpidx = ''
        try:
            if program.has_key('clumpidx'):
                clumpidx = 'clumpidx="' + program['clumpidx'] + '"'
        except:
            print program
        output.append('  <programme start="%s" stop="%s" channel="%s%s" %s>\n' % \
            (format_timezone(program['start-time']), format_timezone(program['stop-time']), \
             channel, compat and '.tvgids.nl' or '', clumpidx))
        output.append('    <title lang="nl">%s</title>\n' % filter_line(program['name']))
        if program.has_key('titel aflevering') and program['titel aflevering'] != '':
            output.append('    <sub-title lang="nl">%s</sub-title>\n' % filter_line(program['titel aflevering']))
        desc = []
        for detail_row in ['detail1','detail2','detail3']:
            if program.has_key(detail_row) and not re.search('[Gg]een detailgegevens be(?:kend|schikbaar)', program[detail_row]):
                desc.append('%s ' % program[detail_row])
        if desc != []:
            # join and remove newlines from descriptions
            desc_line = "".join(desc).strip()
            desc_line = desc_line.replace('\n', ' ')
            if len(desc_line) > desc_len:
                spacepos = desc_line[0:desc_len-3].rfind(' ')
                desc_line = desc_line[0:spacepos] + '...'
            output.append('    <desc lang="nl">%s</desc>\n' % desc_line)
        # Process credits section if present.
        # This will generate director/actor/presenter info.
        if program.has_key('credits') and program['credits'] != {}:
            output.append('    <credits>\n')
            for role in program['credits']:
                for name in program['credits'][role]:
                    if name != '':
                        output.append('      <%s>%s</%s>\n' % (role, name, role))
            output.append('    </credits>\n')
        if program.has_key('jaar van premiere') and program['jaar van premiere'] != '':
            output.append('    <date>%s</date>\n' % program['jaar van premiere'])
        if program.has_key('genre') and program['genre'] != '':
            output.append('    <category')
            if nocattrans:
                output.append(' lang="nl"')
            output.append('>%s</category>\n' % program['genre'])
        if program.has_key('infourl') and program['infourl'] != '':
            output.append('    <url>%s</url>\n' % program['infourl'])
        if program.has_key('aflevering') and program['aflevering'] != '':
            output.append('    <episode-num system="onscreen">%s</episode-num>\n' % filter_line(program['aflevering']))
        # Process video section if present
        if program.has_key('video') and program['video'] != {}:
            output.append('    <video>\n')
            if program['video'].has_key('breedbeeld'):
                output.append('      <aspect>16:9</aspect>\n')
            if program['video'].has_key('blackwhite'):
                output.append('      <colour>no</colour>\n')
            output.append('    </video>\n')
        if program.has_key('stereo'):
            output.append('    <audio><stereo>stereo</stereo></audio>\n')
        if program.has_key('teletekst'):
            output.append('    <subtitles type="teletext" />\n')
        # Set star-rating if applicable
        if program['star-rating'] != '':
            output.append('    <star-rating><value>%s</value></star-rating>\n' % program['star-rating'])
        output.append('  </programme>\n')
    return "".join(output)
    def main():
    # Parse command line options
    try:
    opts, args = getopt.getopt(sys.argv[1:], "h", ["help", "output=", "capabilities",
    "preferredmethod", "days=",
    "configure", "fast", "slow",
    "cache=", "clean_cache",
    "slowdays=","compat",
    "desc-length=","description",
    "nocattrans","config-file=",
    "max_overlap=", "overlap_strategy=",
    "clear_cache", "quiet","logos="])
    except getopt.GetoptError:
    usage()
    sys.exit(2)
    # DEFAULT OPTIONS - Edit if you know what you are doing
    # where the output goes
    output = None
    output_file = None
    # the total number of days to fetch
    days = 6
    # Fetch data in fast mode, i.e. do NOT grab all the detail information,
    # fast means fast, because as it then does not have to fetch a web page for each program
    # Default: fast=0
    fast = 0
    # number of days to fetch in slow mode. For example: --days 5 --slowdays 2, will
    # fetch the first two days in slow mode (with all the details) and the remaining three
    # days in fast mode.
    slowdays = 6
    # no output
    quiet = 0
    # insert url of channel logo into the xml data, this will be picked up by mythfilldatabase
    logos = 1
    # enable this option if you were using tv_grab_nl, it adjusts the generated
    # xmltvid's so that everything works.
    compat = 0
    # enable this option if you do not want the tvgids categories being translated into
    # MythTV-categories (genres)
    nocattrans = 0
    # Maximum number of characters to use for program description.
    # Different values may work better in different versions of MythTV.
    desc_len = 475
    # default configuration file locations
    hpath = ''
    if os.environ.has_key('HOME'):
    hpath = os.environ['HOME']
    # extra test for windows users
    elif os.environ.has_key('HOMEPATH'):
    hpath = os.environ['HOMEPATH']
    # hpath = ''
    xmltv_dir = hpath+'/.xmltv'
    program_cache_file = xmltv_dir+'/program_cache'
    config_file = xmltv_dir+'/tv_grab_nl_py.conf'
    # cache the detail information.
    program_cache = None
    clean_cache = 1
    clear_cache = 0
    # seed the random generator
    random.seed(time.time())
    for o, a in opts:
    if o in ("-h", "--help"):
    usage()
    sys.exit(1)
    if o == "--quiet":
    quiet = 1;
    if o == "--description":
    print "The Netherlands (tv_grab_nl_py $Rev: 104 $)"
    sys.exit(0)
    if o == "--capabilities":
    print "baseline"
    print "cache"
    print "manualconfig"
    print "preferredmethod"
    sys.exit(0)
    if o == '--preferredmethod':
    print 'allatonce'
    sys.exit(0)
    if o == '--desc-length':
    # Use the requested length for programme descriptions.
    desc_len = int(a)
    if not quiet:
    sys.stderr.write('Using description length: %d\n' % desc_len)
    for o, a in opts:
    if o == "--config-file":
    # use the provided name for configuration
    config_file = a
    if not quiet:
    sys.stderr.write('Using config file: %s\n' % config_file)
    for o, a in opts:
    if o == "--configure":
    # check for the ~.xmltv dir
    if not os.path.exists(xmltv_dir):
    if not quiet:
    sys.stderr.write('You do not have the ~/.xmltv directory,')
    sys.stderr.write('I am going to make a shiny new one for you...')
    os.mkdir(xmltv_dir)
    if not quiet:
    sys.stderr.write('Creating config file: %s\n' % config_file)
    get_channels(config_file)
    sys.exit(0)
    if o == "--days":
    # limit days to maximum supported by tvgids.nl
    days = min(int(a),6)
    if o == "--compat":
    compat = 1
    if o == "--nocattrans":
    nocattrans = 1
    if o == "--fast":
    fast = 1
    if o == "--output":
    output_file = a
    try:
    output = open(output_file,'w')
    # and redirect output
    if debug:
    debug_file = open('/tmp/kaas.xml','w')
    blah = redirect.Tee(output, debug_file)
    sys.stdout = blah
    else:
    sys.stdout = output
    except:
    if not quiet:
    sys.stderr.write('Cannot write to outputfile: %s\n' % output_file)
    sys.exit(2)
    if o == "--slowdays":
    # limit slowdays to maximum supported by tvgids.nl
    slowdays = min(int(a),6)
    # slowdays implies fast == 0
    fast = 0
    if o == "--logos":
    logos = int(a)
    if o == "--clean_cache":
    clean_cache = 1
    if o == "--clear_cache":
    clear_cache = 1
    if o == "--cache":
    program_cache_file = a
    if o == "--max_overlap":
    max_overlap = int(a)
    if o == "--overlap_strategy":
    overlap_strategy = a
    # get configfile if available
    try:
    f = open(config_file,'r')
    except:
    sys.stderr.write('Config file %s not found.\n' % config_file)
    sys.stderr.write('Re-run me with the --configure flag.\n')
    sys.exit(1)
    #check for cache
    program_cache = ProgramCache(program_cache_file)
    if clean_cache != 0:
    program_cache.clean()
    if clear_cache != 0:
    program_cache.clear()
    # Go!
    channels = {}
    # Read the channel stuff
    for blah in f.readlines():
    blah = blah.lstrip()
    blah = blah.replace('\n','')
    if blah:
    if blah[0] != '#':
    channel = blah.split()
    channels[channel[0]] = " ".join(channel[1:])
    # channels are now in channels dict keyed on channel id
    # print header stuff
    print '<?xml version="1.0" encoding="ISO-8859-1"?>'
    print '<!DOCTYPE tv SYSTEM "xmltv.dtd">'
    print '<tv generator-info-name="tv_grab_nl_py $Rev: 104 $">'
    # first do the channel info
    for key in channels.keys():
    print ' <channel id="%s%s">' % (key, compat and '.tvgids.nl' or '')
    print ' <display-name lang="nl">%s</display-name>' % channels[key]
    if (logos):
    ikey = int(key)
    if logo_names.has_key(ikey):
    full_logo_url = logo_provider[logo_names[ikey][0]]+logo_names[ikey][1]+'.gif'
    print ' <icon src="%s" />' % full_logo_url
    print ' </channel>'
    num_chans = len(channels.keys())
    channel_cnt = 0
    if program_cache != None:
    program_cache.clean()
    fluffy = channels.keys()
    nfluffy = len(fluffy)
    for id in fluffy:
    channel_cnt += 1
    if not quiet:
    sys.stderr.write('\n\nNow fetching %s(xmltvid=%s%s) (channel %s of %s)\n' % \
    (channels[id], id, (compat and '.tvgids.nl' or ''), channel_cnt, nfluffy))
    info = get_channel_all_days(id, days, quiet)
    blah = parse_programs(info, None, quiet)
    # fetch descriptions
    if not fast:
    get_descriptions(blah, program_cache, nocattrans, quiet, slowdays)
    # Split titles with colon in it
    # Note: this only takes place if all days retrieved are also grabbed with details (slowdays=days)
    # otherwise this function might change some titles after a few grabs and thus may result in
    # loss of programmed recordings for these programs.
    if slowdays == days:
    for program in blah:
    title_split(program)
    print xmlefy_programs(blah, id, desc_len, compat, nocattrans)
    # save the cache after each channel fetch
    if program_cache != None:
    program_cache.dump(program_cache_file)
    # be nice to tvgids.nl
    time.sleep(random.randint(nice_time[0], nice_time[1]))
    if program_cache != None:
    program_cache.dump(program_cache_file)
    # print footer stuff
    print "</tv>"
    # close the outputfile if necessary
    if output != None:
    output.close()
    # and return success
    sys.exit(0)
    # allow this to be a module
    if __name__ == '__main__':
    main()
    # vim:tw=0:et:sw=4
    Best regards,
    Cedric
    Last edited by cdwijs (2010-11-04 18:44:51)

Running the script with python2 solves it for me:
    su - mythtv -c "nice -n 19 python2 /usr/bin/tv_grab_nl_py --output ~/listings.xml"
    Best regards,
    Cedric

  • Compatibility problem between Java and Python RSA algorithm implementations

    I have a client/server application. The server is written in Python, the client in Java. The client receives messages from the server encrypted with RSA (http://stuvel.eu/rsa), and I'm unable to decrypt them. It seems to be an RSA compatibility problem. I'm using the algorithm from the java.security package, instantiating the Cipher object like this: c = Cipher.getInstance("RSA"); . I noticed that this algorithm produces, for input blocks of length <= 117, output blocks of length 128. The server, I guess, uses the most trivial implementation of RSA (1 byte is encrypted to 1 byte). So I want to make my Java algorithm compatible with the one the server uses. How do I do that? Do I have to instantiate the Cipher object in a different way? Or use another library?

    azedor wrote:
    First you said it was no good because it could only handle <= 117 byte inputs, now you say it is no good because it produces a 128-byte output. You're not making sense.
    First I said that these two RSA implementations are not compatible, and the first reason I noticed is that the Python implementation produces, for input of length N, a cryptogram of the same length.
    Not true. In general, RSA encryption of any number of bytes less than the length of the modulus will produce a result of length near that of the modulus. When N is less than the length of the modulus, it is rare that N bytes of cleartext produce N bytes of ciphertext.
    The Java implementation always produces 128 bytes of output for a data block of length <= 117.
    Pretty much correct, and very much desirable. This is primarily a function of the PKCS1 padding, which is used to solve two basic problems. First, as I alluded to in my first response, it is in the nature of the algorithm that leading zeros are not preserved; second, when the cleartext is very small (a few bytes) the exponentiation does not roll over and it is easy to decrypt the result. Both of these problems are addressed by PKCS1 padding.
    After what sabre150 said, I am thinking of giving up on translating the Python code to Java and am considering another asymmetric cryptography algorithm on both sides. Can you recommend something that would be compatible with Python?
    This seems to be at odds with your statement in reply #3, "Also I have access only to the client code, so I have to change something in Java." That statement is why I said "I suspect ... you have dug a deep hole."
    In your position I would use the Python bindings for openssl. Once more, Google is your friend.
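
    To see concretely why ciphertext size tracks the modulus rather than the input (sabre150's point above), here is a toy textbook-RSA sketch in Python. The numbers are the classic small-key example, not anything from the stuvel.eu library; real keys use 1024-bit-plus moduli, and this deliberately omits the PKCS1 padding that Java's Cipher applies on top:

    ```python
    def make_toy_key():
        # classic small-number RSA example; real keys use ~1024-bit primes
        p, q = 61, 53
        n = p * q        # modulus: 3233, so ciphertexts need two bytes
        e = 17           # public exponent
        d = 2753         # private exponent: (e * d) % lcm(p-1, q-1) == 1
        return n, e, d

    def encrypt(m, n, e):
        # raw ("textbook") RSA: no PKCS1 padding, leading zeros are lost
        return pow(m, e, n)

    def decrypt(c, n, d):
        return pow(c, d, n)

    n, e, d = make_toy_key()
    c = encrypt(65, n, e)            # encrypt a single byte (0x41)
    assert decrypt(c, n, d) == 65    # round-trips fine
    assert c > 255                   # but the result no longer fits in one byte
    ```

    Even one input byte comes back as a number that needs the full width of the modulus, which is why the Java side emits fixed 128-byte blocks for a 1024-bit key.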

  • Python openbox pipe menu

    I somewhat hijacked a different thread and my question is more suited here.
    I'm using a python script to check gmail in a pipe menu. At first it was creating problems: it would create a cache file but would then fail to load until that file was removed. To fix this, I removed the last line (which created the cache) and it all works. However, I would prefer to have it work as intended.
    The script:
    #!/usr/bin/python
    # Authors: [email protected] [email protected]
    # License: GPL 2.0
    # Usage:
    # Put an entry in your ~/.config/openbox/menu.xml like this:
    # <menu id="gmail" label="gmail" execute="~/.config/openbox/scripts/gmail-openbox.py" />
    # And inside <menu id="root-menu" label="openbox">, add this somewhere (wherever you want it on your menu)
    # <menu id="gmail" />
    import os
    import sys
    import logging
    name = "111111"
    pw = "000000"
    browser = "firefox3"
    filename = "/tmp/.gmail.cache"
    login = "\'https://mail.google.com/mail\'"
    # Allow us to run using installed `libgmail` or the one in parent directory.
    try:
        import libgmail
    except ImportError:
        # Urghhh...
        sys.path.insert(1,
            os.path.realpath(os.path.join(os.path.dirname(__file__),
                                          os.path.pardir)))
        import libgmail
    if __name__ == "__main__":
        import sys
        from getpass import getpass
        if not os.path.isfile(filename):
            ga = libgmail.GmailAccount(name, pw)
            try:
                ga.login()
            except libgmail.GmailLoginFailure:
                print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                print "<openbox_pipe_menu>"
                print "  <item label=\"login failed.\">"
                print "    <action name=\"Execute\"><execute>" + browser + " " + login + "</execute></action>"
                print "  </item>"
                print "</openbox_pipe_menu>"
                raise SystemExit
        else:
            ga = libgmail.GmailAccount(
                state = libgmail.GmailSessionState(filename = filename))
        msgtotals = ga.getUnreadMsgCount()
        print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
        print "<openbox_pipe_menu>"
        print "<separator label=\"Gmail\"/>"
        if msgtotals == 0:
            print "  <item label=\"no new messages.\">"
        elif msgtotals == 1:
            print "  <item label=\"1 new message.\">"
        else:
            print "  <item label=\"" + str(msgtotals) + " new messages.\">"
        print "    <action name=\"Execute\"><execute>" + browser + " " + login + "</execute></action>"
        print "  </item>"
        print "</openbox_pipe_menu>"
        state = libgmail.GmailSessionState(account = ga).save(filename)
    The line I removed:
    state = libgmail.GmailSessionState(account = ga).save(filename)
    The error I'd get if the cache existed:
Traceback (most recent call last):
  File "/home/shawn/.config/openbox/scripts/gmail.py", line 56, in <module>
    msgtotals = ga.getUnreadMsgCount()
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 547, in getUnreadMsgCount
    q = "is:" + U_AS_SUBSET_UNREAD)
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 428, in _parseSearchResult
    return self._parsePage(_buildURL(**params))
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 401, in _parsePage
    items = _parsePage(self._retrievePage(urlOrRequest))
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 369, in _retrievePage
    if self.opener is None:
AttributeError: GmailAccount instance has no attribute 'opener'
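For what it's worth, the traceback points at a structural problem in libgmail itself: `GmailAccount.__init__` only builds `self.opener` in the name/password branch, so an account restored from saved session state never gets the attribute at all, and the first page fetch blows up. Here's a rough, runnable sketch of the problem and one defensive fix — the class and names are illustrative stand-ins, not libgmail's real internals:

```python
# Minimal sketch (assumed simplification of libgmail's GmailAccount):
# the constructor only creates `self.opener` on the name/password path,
# so an instance restored from saved state lacks the attribute entirely.
class Account:
    def __init__(self, name=None, state=None):
        if name is not None:
            # stand-in for urllib2.build_opener(...)
            self.opener = "opener-for-%s" % name
        elif state is not None:
            # restore branch: opener is never created -- the bug
            self.name = state

    def retrieve(self):
        # Defensive fix: rebuild the opener when it's missing instead of
        # hitting AttributeError like the traceback above.
        if getattr(self, "opener", None) is None:
            self.opener = "rebuilt-opener"
        return self.opener

fresh = Account(name="shawn")
restored = Account(state="saved-session")
print(fresh.retrieve())
print(restored.retrieve())
```

The real fix in libgmail would be to build the opener unconditionally in `__init__` (it doesn't depend on name/pw), or guard `_retrievePage` the way `retrieve()` does here.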
EDIT - you might need the libgmail.py source as well:
    #!/usr/bin/env python
    # libgmail -- Gmail access via Python
    ## To get the version number of the available libgmail version.
    ## Reminder: add date before next release. This attribute is also
    ## used in the setup script.
    Version = '0.1.8' # (Nov 2007)
    # Original author: [email protected]
    # Maintainers: Waseem ([email protected]) and Stas Z ([email protected])
    # License: GPL 2.0
    # NOTE:
    # You should ensure you are permitted to use this script before using it
    # to access Google's Gmail servers.
    # Gmail Implementation Notes
    # ==========================
    # * Folders contain message threads, not individual messages. At present I
    # do not know any way to list all messages without processing thread list.
    LG_DEBUG=0
    from lgconstants import *
    import os,pprint
    import re
    import urllib
    import urllib2
    import mimetypes
    import types
    from cPickle import load, dump
    from email.MIMEBase import MIMEBase
    from email.MIMEText import MIMEText
    from email.MIMEMultipart import MIMEMultipart
    GMAIL_URL_LOGIN = "https://www.google.com/accounts/ServiceLoginBoxAuth"
    GMAIL_URL_GMAIL = "https://mail.google.com/mail/?ui=1&"
    # Set to any value to use proxy.
    PROXY_URL = None # e.g. libgmail.PROXY_URL = 'myproxy.org:3128'
    # TODO: Get these on the fly?
    STANDARD_FOLDERS = [U_INBOX_SEARCH, U_STARRED_SEARCH,
    U_ALL_SEARCH, U_DRAFTS_SEARCH,
    U_SENT_SEARCH, U_SPAM_SEARCH]
    # Constants with names not from the Gmail Javascript:
    # TODO: Move to `lgconstants.py`?
    U_SAVEDRAFT_VIEW = "sd"
    D_DRAFTINFO = "di"
    # NOTE: All other DI_* field offsets seem to match the MI_* field offsets
    DI_BODY = 19
    versionWarned = False # If the Javascript version is different have we
    # warned about it?
    RE_SPLIT_PAGE_CONTENT = re.compile("D\((.*?)\);", re.DOTALL)
class GmailError(Exception):
    """Exception thrown upon gmail-specific failures, in particular a
    failure to log in and a failure to parse responses."""
    pass

class GmailSendError(Exception):
    """Exception to throw if we're unable to send a message."""
    pass

def _parsePage(pageContent):
    """Parse the supplied HTML page and extract useful information from
    the embedded Javascript."""
    lines = pageContent.splitlines()
    data = '\n'.join([x for x in lines if x and x[0] in ['D', ')', ',', ']']])
    #data = data.replace(',,',',').replace(',,',',')
    data = re.sub(',{2,}', ',', data)

    result = []
    try:
        exec data in {'__builtins__': None}, {'D': lambda x: result.append(x)}
    except SyntaxError,info:
        print info
        raise GmailError, 'Failed to parse data returned from gmail.'

    items = result
    itemsDict = {}
    namesFoundTwice = []
    for item in items:
        name = item[0]
        try:
            parsedValue = item[1:]
        except Exception:
            parsedValue = ['']
        if itemsDict.has_key(name):
            # This handles the case where a name key is used more than
            # once (e.g. mail items, mail body etc) and automatically
            # places the values into a list.
            # TODO: Check this actually works properly, it's early... :-)
            if len(parsedValue) and type(parsedValue[0]) is types.ListType:
                for item in parsedValue:
                    itemsDict[name].append(item)
            else:
                itemsDict[name].append(parsedValue)
        else:
            if len(parsedValue) and type(parsedValue[0]) is types.ListType:
                itemsDict[name] = []
                for item in parsedValue:
                    itemsDict[name].append(item)
            else:
                itemsDict[name] = [parsedValue]
    return itemsDict

def _splitBunches(infoItems): # Is this still needed?? Stas
    """Utility to help make it easy to iterate over each item separately,
    even if they were bunched on the page."""
    result = []
    # TODO: Decide if this is the best approach.
    for group in infoItems:
        if type(group) == tuple:
            result.extend(group)
        else:
            result.append(group)
    return result
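Side note on `_parsePage` above, since it's the function that ultimately fails in my traceback: it collapses the runs of commas Gmail emits for empty JS array slots, then collects the argument of every `D(...)` call. A quick Python 3 illustration of the same two steps, using a regex instead of `exec` — the `D(...)` payloads here are made up for the demo, not real Gmail output:

```python
import re

# Same pattern libgmail defines as RE_SPLIT_PAGE_CONTENT.
RE_D_CALL = re.compile(r"D\((.*?)\);", re.DOTALL)

def parse_calls(page):
    # Gmail elides empty JS array slots, leaving runs of commas that are
    # not valid Python/JSON: collapse them first, as _parsePage does.
    cleaned = re.sub(r",{2,}", ",", page)
    # Collect the raw argument of each D(...) call.
    return RE_D_CALL.findall(cleaned)

page = 'D(["ts",5,,,,10]);\nD(["qu","25 MB"]);'
print(parse_calls(page))
```

The real code goes one step further and `exec`s the cleaned lines with `D` bound to a list-appending lambda, which is clever but is also why a syntax hiccup in Gmail's output raises `GmailError`.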
class SmartRedirectHandler(urllib2.HTTPRedirectHandler):
    def __init__(self, cookiejar):
        self.cookiejar = cookiejar

    def http_error_302(self, req, fp, code, msg, headers):
        # The location redirect doesn't seem to change
        # the hostname header appropriately, so we do it
        # by hand. (Is this a bug in urllib2?)
        new_host = re.match(r'http[s]*://(.*?\.google\.com)',
                            headers.getheader('Location'))
        if new_host:
            req.add_header("Host", new_host.groups()[0])
        result = urllib2.HTTPRedirectHandler.http_error_302(
            self, req, fp, code, msg, headers)
        return result

class CookieJar:
    """A rough cookie handler, intended to only refer to one domain.
    Does no expiry or anything like that.

    (The only reason this is here is so I don't have to require
    the `ClientCookie` package.)"""

    def __init__(self):
        self._cookies = {}

    def extractCookies(self, headers, nameFilter = None):
        # TODO: Do this all more nicely?
        for cookie in headers.getheaders('Set-Cookie'):
            name, value = (cookie.split("=", 1) + [""])[:2]
            if LG_DEBUG: print "Extracted cookie `%s`" % (name)
            if not nameFilter or name in nameFilter:
                self._cookies[name] = value.split(";")[0]
                if LG_DEBUG: print "Stored cookie `%s` value `%s`" % (name, self._cookies[name])
                if self._cookies[name] == "EXPIRED":
                    if LG_DEBUG:
                        print "We got an expired cookie: %s:%s, deleting." % (name, self._cookies[name])
                    del self._cookies[name]

    def addCookie(self, name, value):
        self._cookies[name] = value

    def setCookies(self, request):
        request.add_header('Cookie',
                           ";".join(["%s=%s" % (k,v)
                                     for k,v in self._cookies.items()]))

def _buildURL(**kwargs):
    return "%s%s" % (URL_GMAIL, urllib.urlencode(kwargs))
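The hand-rolled `CookieJar` above is simple enough to demonstrate standalone. A Python 3 sketch of the same idea — `MiniCookieJar` and its method names are mine, not libgmail's, and the headers are invented sample data:

```python
# Py3 sketch of libgmail's CookieJar idea: keep name=value pairs from
# Set-Cookie headers, drop EXPIRED ones, and emit one Cookie header.
class MiniCookieJar:
    def __init__(self):
        self._cookies = {}

    def extract(self, set_cookie_headers):
        for cookie in set_cookie_headers:
            # "GV=abc123; Path=/" -> name "GV", value "abc123"
            name, value = (cookie.split("=", 1) + [""])[:2]
            self._cookies[name] = value.split(";")[0]
            if self._cookies[name] == "EXPIRED":
                del self._cookies[name]

    def header(self):
        # sorted() only to make the output deterministic for the demo
        return ";".join("%s=%s" % kv for kv in sorted(self._cookies.items()))

jar = MiniCookieJar()
jar.extract(["GV=abc123; Path=/", "SID=EXPIRED; Path=/"])
print(jar.header())
```

No expiry dates, no domains, no paths — which is fine for talking to a single host, exactly as the docstring admits.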
def _paramsToMime(params, filenames, files):
    mimeMsg = MIMEMultipart("form-data")
    for name, value in params.iteritems():
        mimeItem = MIMEText(value)
        mimeItem.add_header("Content-Disposition", "form-data", name=name)
        # TODO: Handle this better...?
        for hdr in ['Content-Type','MIME-Version','Content-Transfer-Encoding']:
            del mimeItem[hdr]
        mimeMsg.attach(mimeItem)

    if filenames or files:
        filenames = filenames or []
        files = files or []
        for idx, item in enumerate(filenames + files):
            # TODO: This is messy, tidy it...
            if isinstance(item, str):
                # We assume it's a file path...
                filename = item
                contentType = mimetypes.guess_type(filename)[0]
                payload = open(filename, "rb").read()
            else:
                # We assume it's an `email.Message.Message` instance...
                # TODO: Make more use of the pre-encoded information?
                filename = item.get_filename()
                contentType = item.get_content_type()
                payload = item.get_payload(decode=True)
            if not contentType:
                contentType = "application/octet-stream"
            mimeItem = MIMEBase(*contentType.split("/"))
            mimeItem.add_header("Content-Disposition", "form-data",
                                name="file%s" % idx, filename=filename)
            # TODO: Encode the payload?
            mimeItem.set_payload(payload)
            # TODO: Handle this better...?
            for hdr in ['MIME-Version','Content-Transfer-Encoding']:
                del mimeItem[hdr]
            mimeMsg.attach(mimeItem)

    del mimeMsg['MIME-Version']
    return mimeMsg

class GmailLoginFailure(Exception):
    """Raised whenever the login process fails--could be wrong username/password,
    or a Gmail service error, for example.

    Extract the error message like this:

        try:
            foobar
        except GmailLoginFailure,e:
            mesg = e.message # or
            print e          # uses the __str__
    """
    def __init__(self,message):
        self.message = message
    def __str__(self):
        return repr(self.message)
class GmailAccount:
    def __init__(self, name = "", pw = "", state = None, domain = None):
        global URL_LOGIN, URL_GMAIL
        self.domain = domain
        if self.domain:
            URL_LOGIN = "https://www.google.com/a/" + self.domain + "/LoginAction"
            URL_GMAIL = "http://mail.google.com/a/" + self.domain + "/?"
        else:
            URL_LOGIN = GMAIL_URL_LOGIN
            URL_GMAIL = GMAIL_URL_GMAIL
        if name and pw:
            self.name = name
            self._pw = pw
            self._cookieJar = CookieJar()
            if PROXY_URL is not None:
                import gmail_transport
                self.opener = urllib2.build_opener(gmail_transport.ConnectHTTPHandler(proxy = PROXY_URL),
                                                   gmail_transport.ConnectHTTPSHandler(proxy = PROXY_URL),
                                                   SmartRedirectHandler(self._cookieJar))
            else:
                self.opener = urllib2.build_opener(
                    urllib2.HTTPHandler(debuglevel=0),
                    urllib2.HTTPSHandler(debuglevel=0),
                    SmartRedirectHandler(self._cookieJar))
        elif state:
            # TODO: Check for stale state cookies?
            self.name, self._cookieJar = state.state
        else:
            raise ValueError("GmailAccount must be instantiated with " \
                             "either GmailSessionState object or name " \
                             "and password.")

        self._cachedQuotaInfo = None
        self._cachedLabelNames = None

    def login(self):
        # TODO: Throw exception if we were instantiated with state?
        if self.domain:
            data = urllib.urlencode({'continue': URL_GMAIL,
                                     'at': 'null',
                                     'service': 'mail',
                                     'userName': self.name,
                                     'password': self._pw,
                                     })
        else:
            data = urllib.urlencode({'continue': URL_GMAIL,
                                     'Email': self.name,
                                     'Passwd': self._pw,
                                     })
        headers = {'Host': 'www.google.com',
                   'User-Agent': 'Mozilla/5.0 (Compatible; libgmail-python)'}
        req = urllib2.Request(URL_LOGIN, data=data, headers=headers)
        pageData = self._retrievePage(req)

        if not self.domain:
            # The GV cookie no longer comes in this page for
            # "Apps", so this bottom portion is unnecessary for it.
            # This requests the page that provides the required "GV" cookie.
            RE_PAGE_REDIRECT = 'CheckCookie\?continue=([^"\']+)'
            # TODO: Catch more failure exceptions here...?
            try:
                link = re.search(RE_PAGE_REDIRECT, pageData).group(1)
                redirectURL = urllib2.unquote(link)
                redirectURL = redirectURL.replace('\\x26', '&')
            except AttributeError:
                raise GmailLoginFailure("Login failed. (Wrong username/password?)")

            # We aren't concerned with the actual content of this page,
            # just the cookie that is returned with it.
            pageData = self._retrievePage(redirectURL)

    def _retrievePage(self, urlOrRequest):
        if self.opener is None:
            raise "Cannot find urlopener"
        if not isinstance(urlOrRequest, urllib2.Request):
            req = urllib2.Request(urlOrRequest)
        else:
            req = urlOrRequest
        self._cookieJar.setCookies(req)
        req.add_header('User-Agent',
                       'Mozilla/5.0 (Compatible; libgmail-python)')
        try:
            resp = self.opener.open(req)
        except urllib2.HTTPError,info:
            print info
            return None
        pageData = resp.read()
        # Extract cookies here
        self._cookieJar.extractCookies(resp.headers)
        # TODO: Enable logging of page data for debugging purposes?
        return pageData
    def _parsePage(self, urlOrRequest):
        """Retrieve & then parse the requested page content."""
        items = _parsePage(self._retrievePage(urlOrRequest))
        # Automatically cache some things like quota usage.
        # TODO: Cache more?
        # TODO: Expire cached values?
        # TODO: Do this better.
        try:
            self._cachedQuotaInfo = items[D_QUOTA]
        except KeyError:
            pass
        #pprint.pprint(items)
        try:
            self._cachedLabelNames = [category[CT_NAME] for category in items[D_CATEGORIES][0]]
        except KeyError:
            pass
        return items

    def _parseSearchResult(self, searchType, start = 0, **kwargs):
        params = {U_SEARCH: searchType,
                  U_START: start,
                  U_VIEW: U_THREADLIST_VIEW,
                  }
        params.update(kwargs)
        return self._parsePage(_buildURL(**params))

    def _parseThreadSearch(self, searchType, allPages = False, **kwargs):
        """Only works for thread-based results at present. # TODO: Change this?"""
        start = 0
        tot = 0
        threadsInfo = []
        # Option to get *all* threads if multiple pages are used.
        while (start == 0) or (allPages and
                               len(threadsInfo) < threadListSummary[TS_TOTAL]):
            items = self._parseSearchResult(searchType, start, **kwargs)
            # TODO: Handle single & zero result case better? Does this work?
            try:
                threads = items[D_THREAD]
            except KeyError:
                break
            else:
                for th in threads:
                    if not type(th[0]) is types.ListType:
                        th = [th]
                    threadsInfo.append(th)
                # TODO: Check if the total or per-page values have changed?
                threadListSummary = items[D_THREADLIST_SUMMARY][0]
                threadsPerPage = threadListSummary[TS_NUM]
                start += threadsPerPage
        # TODO: Record whether or not we retrieved all pages..?
        return GmailSearchResult(self, (searchType, kwargs), threadsInfo)

    def _retrieveJavascript(self, version = ""):
        """Note: `version` seems to be ignored."""
        return self._retrievePage(_buildURL(view = U_PAGE_VIEW,
                                            name = "js",
                                            ver = version))
    def getMessagesByFolder(self, folderName, allPages = False):
        """Folders contain conversation/message threads.

        `folderName` -- As set in Gmail interface.

        Returns a `GmailSearchResult` instance.

        *** TODO: Change all "getMessagesByX" to "getThreadsByX"? ***
        """
        return self._parseThreadSearch(folderName, allPages = allPages)

    def getMessagesByQuery(self, query, allPages = False):
        """Returns a `GmailSearchResult` instance."""
        return self._parseThreadSearch(U_QUERY_SEARCH, q = query,
                                       allPages = allPages)

    def getQuotaInfo(self, refresh = False):
        """Return MB used, Total MB and percentage used."""
        # TODO: Change this to a property.
        if not self._cachedQuotaInfo or refresh:
            # TODO: Handle this better...
            self.getMessagesByFolder(U_INBOX_SEARCH)
        return self._cachedQuotaInfo[0][:3]

    def getLabelNames(self, refresh = False):
        # TODO: Change this to a property?
        if not self._cachedLabelNames or refresh:
            # TODO: Handle this better...
            self.getMessagesByFolder(U_INBOX_SEARCH)
        return self._cachedLabelNames

    def getMessagesByLabel(self, label, allPages = False):
        return self._parseThreadSearch(U_CATEGORY_SEARCH,
                                       cat=label, allPages = allPages)

    def getRawMessage(self, msgId):
        # U_ORIGINAL_MESSAGE_VIEW seems the only one that returns a page.
        # All the other U_* result in a 404 exception. Stas
        PageView = U_ORIGINAL_MESSAGE_VIEW
        return self._retrievePage(
            _buildURL(view=PageView, th=msgId))

    def getUnreadMessages(self):
        return self._parseThreadSearch(U_QUERY_SEARCH,
                                       q = "is:" + U_AS_SUBSET_UNREAD)

    def getUnreadMsgCount(self):
        items = self._parseSearchResult(U_QUERY_SEARCH,
                                        q = "is:" + U_AS_SUBSET_UNREAD)
        try:
            result = items[D_THREADLIST_SUMMARY][0][TS_TOTAL_MSGS]
        except KeyError:
            result = 0
        return result

    def _getActionToken(self):
        try:
            at = self._cookieJar._cookies[ACTION_TOKEN_COOKIE]
        except KeyError:
            self.getLabelNames(True)
            at = self._cookieJar._cookies[ACTION_TOKEN_COOKIE]
        return at
    def sendMessage(self, msg, asDraft = False, _extraParams = None):
        """`msg` -- `GmailComposedMessage` instance.

        `_extraParams` -- Dictionary containing additional parameters
                          to put into POST message. (Not officially
                          for external use, more to make feature
                          additions a little easier to play with.)

        Note: Now returns `GmailMessageStub` instance with populated
              `id` (and `_account`) fields on success or None on failure.
        """
        # TODO: Handle drafts separately?
        params = {U_VIEW: [U_SENDMAIL_VIEW, U_SAVEDRAFT_VIEW][asDraft],
                  U_REFERENCED_MSG: "",
                  U_THREAD: "",
                  U_DRAFT_MSG: "",
                  U_COMPOSEID: "1",
                  U_ACTION_TOKEN: self._getActionToken(),
                  U_COMPOSE_TO: msg.to,
                  U_COMPOSE_CC: msg.cc,
                  U_COMPOSE_BCC: msg.bcc,
                  "subject": msg.subject,
                  "msgbody": msg.body,
                  }
        if _extraParams:
            params.update(_extraParams)
        # Amongst other things, I used the following post to work this out:
        # <http://groups.google.com/groups?
        #  selm=mailman.1047080233.20095.python-list%40python.org>
        mimeMessage = _paramsToMime(params, msg.filenames, msg.files)

        #### TODO: Ughh, tidy all this up & do it better...
        ## This horrible mess is here for two main reasons:
        ##  1. The `Content-Type` header (which also contains the boundary
        ##     marker) needs to be extracted from the MIME message so
        ##     we can send it as the request `Content-Type` header instead.
        ##  2. It seems the form submission needs to use "\r\n" for new
        ##     lines instead of the "\n" returned by `as_string()`.
        ##     I tried changing the value of `NL` used by the `Generator` class
        ##     but it didn't work so I'm doing it this way until I figure
        ##     out how to do it properly. Of course, first try, if the payloads
        ##     contained "\n" sequences they got replaced too, which corrupted
        ##     the attachments. I could probably encode the submission,
        ##     which would probably be nicer, but in the meantime I'm kludging
        ##     this workaround that replaces all non-text payloads with a
        ##     marker, changes all "\n" to "\r\n" and finally replaces the
        ##     markers with the original payloads.
        ## Yeah, I know, it's horrible, but hey it works doesn't it? If you've
        ## got a problem with it, fix it yourself & give me the patch!
        origPayloads = {}
        FMT_MARKER = "&&&&&&%s&&&&&&"
        for i, m in enumerate(mimeMessage.get_payload()):
            if not isinstance(m, MIMEText): # Do we care if we change text ones?
                origPayloads[i] = m.get_payload()
                m.set_payload(FMT_MARKER % i)
        mimeMessage.epilogue = ""
        msgStr = mimeMessage.as_string()
        contentTypeHeader, data = msgStr.split("\n\n", 1)
        contentTypeHeader = contentTypeHeader.split(":", 1)
        data = data.replace("\n", "\r\n")
        for k,v in origPayloads.iteritems():
            data = data.replace(FMT_MARKER % k, v)

        req = urllib2.Request(_buildURL(), data = data)
        req.add_header(*contentTypeHeader)
        items = self._parsePage(req)
        # TODO: Check composeid?
        # Sometimes we get the success message
        # but the id is 0 and no message is sent
        result = None
        resultInfo = items[D_SENDMAIL_RESULT][0]
        if resultInfo[SM_SUCCESS]:
            result = GmailMessageStub(id = resultInfo[SM_NEWTHREADID],
                                      _account = self)
        else:
            raise GmailSendError, resultInfo[SM_MSG]
        return result
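The "horrible mess" comment block in `sendMessage` describes a real three-step trick: swap each binary payload for a marker, convert "\n" to "\r\n" for the form post, then substitute the untouched payloads back so attachments aren't corrupted. A tiny Python 3 sketch of just that trick — `crlf_encode` is my name for it, and toy strings stand in for the MIME parts:

```python
# Sketch of sendMessage's newline kludge: binary payloads are swapped
# for markers, "\n" becomes "\r\n" for the form submission, then the
# original payloads are substituted back unchanged.
def crlf_encode(body, payloads):
    originals = {}
    marker = "&&&&&&%s&&&&&&"
    for i, p in enumerate(payloads):
        originals[i] = p
        body = body.replace(p, marker % i)      # protect the payload
    body = body.replace("\n", "\r\n")           # CRLF for the form post
    for i, p in originals.items():
        body = body.replace(marker % i, p)      # restore it untouched
    return body

msg = "header\nline\nBIN\nARY"
print(repr(crlf_encode(msg, ["BIN\nARY"])))
```

Note how the headers get CRLF but the embedded "binary" bytes keep their original "\n" — that's the whole point of the markers.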
    def trashMessage(self, msg):
        # TODO: Decide if we should make this a method of `GmailMessage`.
        # TODO: Should we check we have been given a `GmailMessage` instance?
        params = {
            U_ACTION: U_DELETEMESSAGE_ACTION,
            U_ACTION_MESSAGE: msg.id,
            U_ACTION_TOKEN: self._getActionToken(),
            }
        items = self._parsePage(_buildURL(**params))
        # TODO: Mark as trashed on success?
        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)

    def _doThreadAction(self, actionId, thread):
        # TODO: Decide if we should make this a method of `GmailThread`.
        # TODO: Should we check we have been given a `GmailThread` instance?
        params = {
            U_SEARCH: U_ALL_SEARCH, # TODO: Check this search value always works.
            U_VIEW: U_UPDATE_VIEW,
            U_ACTION: actionId,
            U_ACTION_THREAD: thread.id,
            U_ACTION_TOKEN: self._getActionToken(),
            }
        items = self._parsePage(_buildURL(**params))
        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)

    def trashThread(self, thread):
        # TODO: Decide if we should make this a method of `GmailThread`.
        # TODO: Should we check we have been given a `GmailThread` instance?
        result = self._doThreadAction(U_MARKTRASH_ACTION, thread)
        # TODO: Mark as trashed on success?
        return result

    def _createUpdateRequest(self, actionId): #extraData):
        """Helper method to create a Request instance for an update (view)
        action.

        Returns a populated `Request` instance."""
        params = {
            U_VIEW: U_UPDATE_VIEW,
            }
        data = {
            U_ACTION: actionId,
            U_ACTION_TOKEN: self._getActionToken(),
            }
        #data.update(extraData)
        req = urllib2.Request(_buildURL(**params),
                              data = urllib.urlencode(data))
        return req

    # TODO: Extract additional common code from handling of labels?
    def createLabel(self, labelName):
        req = self._createUpdateRequest(U_CREATECATEGORY_ACTION + labelName)
        # Note: Label name cache is updated by this call as well. (Handy!)
        items = self._parsePage(req)
        print items
        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)

    def deleteLabel(self, labelName):
        # TODO: Check labelName exists?
        req = self._createUpdateRequest(U_DELETECATEGORY_ACTION + labelName)
        # Note: Label name cache is updated by this call as well. (Handy!)
        items = self._parsePage(req)
        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)

    def renameLabel(self, oldLabelName, newLabelName):
        # TODO: Check oldLabelName exists?
        req = self._createUpdateRequest("%s%s^%s" % (U_RENAMECATEGORY_ACTION,
                                                     oldLabelName, newLabelName))
        # Note: Label name cache is updated by this call as well. (Handy!)
        items = self._parsePage(req)
        return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)

    def storeFile(self, filename, label = None):
        # TODO: Handle files larger than single attachment size.
        # TODO: Allow file data objects to be supplied?
        FILE_STORE_VERSION = "FSV_01"
        FILE_STORE_SUBJECT_TEMPLATE = "%s %s" % (FILE_STORE_VERSION, "%s")
        subject = FILE_STORE_SUBJECT_TEMPLATE % os.path.basename(filename)
        msg = GmailComposedMessage(to="", subject=subject, body="",
                                   filenames=[filename])
        draftMsg = self.sendMessage(msg, asDraft = True)
        if draftMsg and label:
            draftMsg.addLabel(label)
        return draftMsg
    ## CONTACTS SUPPORT
    def getContacts(self):
        """Returns a GmailContactList object
        that has all the contacts in it as
        GmailContacts."""
        contactList = []
        # pnl = a is necessary to get *all* contacts
        myUrl = _buildURL(view='cl', search='contacts', pnl='a')
        myData = self._parsePage(myUrl)
        # This comes back with a dictionary
        # with entry 'cl'
        addresses = myData['cl']
        for entry in addresses:
            if len(entry) >= 6 and entry[0]=='ce':
                newGmailContact = GmailContact(entry[1], entry[2], entry[4], entry[5])
                #### new code used to get all the notes
                #### not used yet due to lockdown problems
                ##rawnotes = self._getSpecInfo(entry[1])
                ##print rawnotes
                ##newGmailContact = GmailContact(entry[1], entry[2], entry[4], rawnotes)
                contactList.append(newGmailContact)
        return GmailContactList(contactList)
    def addContact(self, myContact, *extra_args):
        """Attempts to add a GmailContact to the Gmail
        address book. Returns True if successful,
        False otherwise.

        Please note that after version 0.1.3.3,
        addContact takes one argument of type
        GmailContact, the contact to add.

        The old signature of:
        addContact(name, email, notes='') is still
        supported, but deprecated."""
        if len(extra_args) > 0:
            # The user has passed in extra arguments
            # He/she is probably trying to invoke addContact
            # using the old, deprecated signature of:
            # addContact(self, name, email, notes='')
            # Build a GmailContact object and use that instead
            (name, email) = (myContact, extra_args[0])
            if len(extra_args) > 1:
                notes = extra_args[1]
            else:
                notes = ''
            myContact = GmailContact(-1, name, email, notes)

        # TODO: In the ideal world, we'd extract these specific
        # constants into a nice constants file
        # This mostly comes from the Johnvey Gmail API,
        # but also from the gmail.py cited earlier
        myURL = _buildURL(view='up')
        myDataList = [ ('act','ec'),
                       ('at', self._cookieJar._cookies['GMAIL_AT']), # Cookie data?
                       ('ct_nm', myContact.getName()),
                       ('ct_em', myContact.getEmail()),
                       ('ct_id', -1 )
                       ]
        notes = myContact.getNotes()
        if notes != '':
            myDataList.append( ('ctf_n', notes) )

        validinfokeys = [
            'i', # IM
            'p', # Phone
            'd', # Company
            'a', # ADR
            'e', # Email
            'm', # Mobile
            'b', # Pager
            'f', # Fax
            't', # Title
            'o', # Other
            ]
        moreInfo = myContact.getMoreInfo()
        ctsn_num = -1
        if moreInfo != {}:
            for ctsf,ctsf_data in moreInfo.items():
                ctsn_num += 1
                # data section header, WORK, HOME,...
                sectionenum = 'ctsn_%02d' % ctsn_num
                myDataList.append( ( sectionenum, ctsf ))
                ctsf_num = -1
                if isinstance(ctsf_data[0],str):
                    ctsf_num += 1
                    # data section
                    subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, ctsf_data[0]) # ie. ctsf_00_01_p
                    myDataList.append( (subsectionenum, ctsf_data[1]) )
                else:
                    for info in ctsf_data:
                        if validinfokeys.count(info[0]) > 0:
                            ctsf_num += 1
                            # data section
                            subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, info[0]) # ie. ctsf_00_01_p
                            myDataList.append( (subsectionenum, info[1]) )
        myData = urllib.urlencode(myDataList)
        request = urllib2.Request(myURL,
                                  data = myData)
        pageData = self._retrievePage(request)

        if pageData.find("The contact was successfully added") == -1:
            print pageData
            if pageData.find("already has the email address") > 0:
                raise Exception("Someone with the same email already exists in Gmail.")
            elif pageData.find("https://www.google.com/accounts/ServiceLogin"):
                raise Exception("Login has expired.")
            return False
        else:
            return True
    def _removeContactById(self, id):
        """Attempts to remove the contact that occupies
        id "id" from the gmail address book.
        Returns True if successful,
        False otherwise.

        This is a little dangerous since you don't really
        know who you're deleting. Really,
        this should return the name or something of the
        person we just killed.

        Don't call this method.
        You should be using removeContact instead."""
        myURL = _buildURL(search='contacts', ct_id = id, c=id, act='dc', at=self._cookieJar._cookies['GMAIL_AT'], view='up')
        pageData = self._retrievePage(myURL)

        if pageData.find("The contact has been deleted") == -1:
            return False
        else:
            return True

    def removeContact(self, gmailContact):
        """Attempts to remove the GmailContact passed in.
        Returns True if successful, False otherwise."""
        # Let's re-fetch the contact list to make
        # sure we're really deleting the guy
        # we think we're deleting
        newContactList = self.getContacts()
        newVersionOfPersonToDelete = newContactList.getContactById(gmailContact.getId())
        # Ok, now we need to ensure that gmailContact
        # is the same as newVersionOfPersonToDelete
        # and then we can go ahead and delete him/her
        if (gmailContact == newVersionOfPersonToDelete):
            return self._removeContactById(gmailContact.getId())
        else:
            # We have a cache coherency problem -- someone
            # else now occupies this ID slot.
            # TODO: Perhaps signal this in some nice way
            #       to the end user?
            print "Unable to delete."
            print "Has someone else been modifying the contacts list while we have?"
            print "Old version of person:", gmailContact
            print "New version of person:", newVersionOfPersonToDelete
            return False

    ## Don't remove this. contact stas
    ## def _getSpecInfo(self,id):
    ##     Return all the notes data.
    ##     This is currently not used due to the fact that it requests pages in
    ##     a dos attack manner.
    ##     myURL =_buildURL(search='contacts',ct_id=id,c=id,\
    ##         at=self._cookieJar._cookies['GMAIL_AT'],view='ct')
    ##     pageData = self._retrievePage(myURL)
    ##     myData = self._parsePage(myURL)
    ##     #print "\nmyData form _getSpecInfo\n",myData
    ##     rawnotes = myData['cov'][7]
    ##     return rawnotes
class GmailContact:
    """Class for storing a Gmail Contacts list entry."""
    def __init__(self, name, email, *extra_args):
        """Returns a new GmailContact object
        (you can then call addContact on this to commit
        it to the Gmail addressbook, for example).

        Consider calling setNotes() and setMoreInfo()
        to add extended information to this contact."""
        # Support populating other fields if we're trying
        # to invoke this the old way, with the old constructor
        # whose signature was __init__(self, id, name, email, notes='')
        id = -1
        notes = ''
        if len(extra_args) > 0:
            (id, name) = (name, email)
            email = extra_args[0]
            if len(extra_args) > 1:
                notes = extra_args[1]
            else:
                notes = ''
        self.id = id
        self.name = name
        self.email = email
        self.notes = notes
        self.moreInfo = {}

    def __str__(self):
        return "%s %s %s %s" % (self.id, self.name, self.email, self.notes)

    def __eq__(self, other):
        if not isinstance(other, GmailContact):
            return False
        return (self.getId() == other.getId()) and \
               (self.getName() == other.getName()) and \
               (self.getEmail() == other.getEmail()) and \
               (self.getNotes() == other.getNotes())

    def getId(self):
        return self.id
    def getName(self):
        return self.name
    def getEmail(self):
        return self.email
    def getNotes(self):
        return self.notes

    def setNotes(self, notes):
        """Sets the notes field for this GmailContact.
        Note that this does NOT change the notes
        field on Gmail's end; only adding or removing
        contacts modifies them."""
        self.notes = notes

    def getMoreInfo(self):
        return self.moreInfo

    def setMoreInfo(self, moreInfo):
        """moreInfo format.
        Use special key values::

            'i' = IM
            'p' = Phone
            'd' = Company
            'a' = ADR
            'e' = Email
            'm' = Mobile
            'b' = Pager
            'f' = Fax
            't' = Title
            'o' = Other

        Simple example::

            moreInfo = {'Home': ( ('a','852 W Barry'),
                                  ('p', '1-773-244-1980'),
                                  ('i', 'aim:brianray34') ) }

        Complex example::

            moreInfo = {
                'Personal': (('e', 'Home Email'),
                             ('f', 'Home Fax')),
                'Work': (('d', 'Sample Company'),
                         ('t', 'Job Title'),
                         ('o', 'Department: Department1'),
                         ('o', 'Department: Department2'),
                         ('p', 'Work Phone'),
                         ('m', 'Mobile Phone'),
                         ('f', 'Work Fax'),
                         ('b', 'Pager')) }
        """
        self.moreInfo = moreInfo

    def getVCard(self):
        """Returns a vCard 3.0 for this
        contact, as a string."""
        # The \r is to comply with RFC 2425, section 5.8.1.
        vcard = "BEGIN:VCARD\r\n"
        vcard += "VERSION:3.0\r\n"
        ## Deal with multiline notes
        ##vcard += "NOTE:%s\n" % self.getNotes().replace("\n","\\n")
        vcard += "NOTE:%s\r\n" % self.getNotes()
        # Fake-out N by splitting up whatever we get out of getName.
        # This might not always do 'the right thing'
        # but it's a *reasonable* compromise.
        fullname = self.getName().split()
        fullname.reverse()
        vcard += "N:%s" % ';'.join(fullname) + "\r\n"
        vcard += "FN:%s\r\n" % self.getName()
        vcard += "EMAIL;TYPE=INTERNET:%s\r\n" % self.getEmail()
        vcard += "END:VCARD\r\n\r\n"
        # Final newline in case we want to put more than one in a file
        return vcard
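Aside on `getVCard` above: the output shape is simple enough to show standalone. A Python 3 sketch of the same card layout — `make_vcard` and the sample contact are mine, not libgmail's — with the RFC-mandated CRLF line endings and the reversed-name `N:` trick:

```python
# Py3 sketch of getVCard's output shape; vCard lines end in CRLF
# per RFC 2425/2426.
def make_vcard(name, email, notes=""):
    parts = name.split()
    parts.reverse()                      # "First Last" -> "Last;First" for N:
    lines = ["BEGIN:VCARD",
             "VERSION:3.0",
             "NOTE:%s" % notes,
             "N:%s" % ";".join(parts),
             "FN:%s" % name,
             "EMAIL;TYPE=INTERNET:%s" % email,
             "END:VCARD"]
    # trailing blank line so multiple cards can share one file
    return "\r\n".join(lines) + "\r\n\r\n"

card = make_vcard("Shawn Example", "shawn@example.com")
print(card)
```

As the comment in `getVCard` admits, splitting and reversing the display name only approximates the structured `N:` field (it mishandles middle names and suffixes), but it's a reasonable compromise for an export.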
class GmailContactList:
    """Class for storing an entire Gmail contacts list
    and retrieving contacts by Id, Email address, and name."""
    def __init__(self, contactList):
        self.contactList = contactList

    def __str__(self):
        return '\n'.join([str(item) for item in self.contactList])

    def getCount(self):
        """Returns the number of contacts."""
        return len(self.contactList)

    def getAllContacts(self):
        """Returns a list of all the
        GmailContacts."""
        return self.contactList

    def getContactByName(self, name):
        """Gets the first contact in the
        address book whose name is 'name'.

        Returns False if no contact
        could be found."""
        nameList = self.getContactListByName(name)
        if len(nameList) > 0:
            return nameList[0]
        else:
            return False

    def getContactByEmail(self, email):
        """Gets the first contact in the
        address book whose email is 'email'.

        As of this writing, Gmail insists
        upon a unique email; i.e. two contacts
        cannot share an email address.

        Returns False if no contact
        could be found."""
        emailList = self.getContactListByEmail(email)
        if len(emailList) > 0:
            return emailList[0]
        else:
            return False

    def getContactById(self, myId):
        """Gets the first contact in the
        address book whose id is 'myId'.

        REMEMBER: ID IS A STRING.

        Returns False if no contact
        could be found."""
        idList = self.getContactListById(myId)
        if len(idList) > 0:
            return idList[0]
        else:
            return False

    def getContactListByName(self, name):
        """Returns a LIST
        of GmailContacts whose name is
        'name'.

        Returns an empty list if no contacts
        were found."""
        nameList = []
        for entry in self.contactList:
            if entry.getName() == name:
                nameList.append(entry)
        return nameList

    def getContactListByEmail(self, email):
        """Returns a LIST
        of GmailContacts whose email is
        'email'. As of this writing, two contacts
        cannot share an email address, so this
        should only return just one item.
        But it doesn't hurt to be prepared?

        Returns an empty list if no contacts
        were found."""
        emailList = []
        for entry in self.contactList:
            if entry.getEmail() == email:
                emailList.append(entry)
        return emailList

    def getContactListById(self, myId):
        """Returns a LIST
        of GmailContacts whose id is
        'myId'. We expect there only to
        be one, but just in case!

        Remember: ID IS A STRING.

        Returns an empty list if no contacts
        were found."""
        idList = []
        for entry in self.contactList:
            if entry.getId() == myId:
                idList.append(entry)
        return idList

    def search(self, searchTerm):
        """Returns a LIST
        of GmailContacts whose name or
        email address matches the 'searchTerm'.

        Returns an empty list if no matches
        were found."""
        searchResults = []
        for entry in self.contactList:
            p = re.compile(searchTerm, re.IGNORECASE)
            if p.search(entry.getName()) or p.search(entry.getEmail()):
                searchResults.append(entry)
        return searchResults
    class GmailSearchResult:
        def __init__(self, account, search, threadsInfo):
            """`threadsInfo` -- As returned from Gmail but unbunched."""
            #print "\nthreadsInfo\n",threadsInfo
            try:
                if not type(threadsInfo[0]) is types.ListType:
                    threadsInfo = [threadsInfo]
            except IndexError:
                print "No messages found"

            self._account = account
            self.search = search # TODO: Turn into object + format nicely.
            self._threads = []
            for thread in threadsInfo:
                self._threads.append(GmailThread(self, thread[0]))

        def __iter__(self):
            return iter(self._threads)

        def __len__(self):
            return len(self._threads)

        def __getitem__(self, key):
            return self._threads.__getitem__(key)

    class GmailSessionState:
        def __init__(self, account = None, filename = ""):
            if account:
                self.state = (account.name, account._cookieJar)
            elif filename:
                self.state = load(open(filename, "rb"))
            else:
                raise ValueError("GmailSessionState must be instantiated with " \
                                 "either GmailAccount object or filename.")

        def save(self, filename):
            dump(self.state, open(filename, "wb"), -1)
    class _LabelHandlerMixin(object):
        """
        Note: Because a message id can be used as a thread id this works for
        messages as well as threads.
        """
        def __init__(self):
            self._labels = None

        def _makeLabelList(self, labelList):
            self._labels = labelList

        def addLabel(self, labelName):
            # Note: It appears this also automatically creates new labels.
            result = self._account._doThreadAction(U_ADDCATEGORY_ACTION+labelName,
                                                   self)
            if not self._labels:
                self._makeLabelList([])
            # TODO: Caching this seems a little dangerous; suppress duplicates maybe?
            self._labels.append(labelName)
            return result

        def removeLabel(self, labelName):
            # TODO: Check label is already attached?
            # Note: An error is not generated if the label is not already attached.
            result = \
                self._account._doThreadAction(U_REMOVECATEGORY_ACTION+labelName,
                                              self)
            removeLabel = True
            try:
                self._labels.remove(labelName)
            except:
                removeLabel = False
            # If we don't check both, we might end up in some weird inconsistent state
            return result and removeLabel

        def getLabels(self):
            return self._labels
    class GmailThread(_LabelHandlerMixin):
        """
        Note: As far as I can tell, the "canonical" thread id is always the same
        as the id of the last message in the thread. But it appears that
        the id of any message in the thread can be used to retrieve
        the thread information.
        """
        def __init__(self, parent, threadsInfo):
            _LabelHandlerMixin.__init__(self)
            # TODO: Handle this better?
            self._parent = parent
            self._account = self._parent._account

            self.id = threadsInfo[T_THREADID] # TODO: Change when canonical updated?
            self.subject = threadsInfo[T_SUBJECT_HTML]
            self.snippet = threadsInfo[T_SNIPPET_HTML]
            #self.extraSummary = threadsInfo[T_EXTRA_SNIPPET] #TODO: What is this?
            # TODO: Store other info?
            # Extract number of messages in thread/conversation.
            self._authors = threadsInfo[T_AUTHORS_HTML]
            self.info = threadsInfo

            try:
                # TODO: Find out if this information can be found another way...
                # (Without another page request.)
                self._length = int(re.search("\((\d+?)\)\Z",
                                             self._authors).group(1))
            except AttributeError, info:
                # If there's no message count then the thread only has one message.
                self._length = 1

            # TODO: Store information known about the last message (e.g. id)?
            self._messages = []

            # Populate labels
            self._makeLabelList(threadsInfo[T_CATEGORIES])

        def __getattr__(self, name):
            """Dynamically dispatch some interesting thread properties."""
            attrs = { 'unread': T_UNREAD,
                      'star': T_STAR,
                      'date': T_DATE_HTML,
                      'authors': T_AUTHORS_HTML,
                      'flags': T_FLAGS,
                      'subject': T_SUBJECT_HTML,
                      'snippet': T_SNIPPET_HTML,
                      'categories': T_CATEGORIES,
                      'attach': T_ATTACH_HTML,
                      'matching_msgid': T_MATCHING_MSGID,
                      'extra_snippet': T_EXTRA_SNIPPET }
            if name in attrs:
                return self.info[ attrs[name] ]
            raise AttributeError("no attribute %s" % name)

        def __len__(self):
            return self._length

        def __iter__(self):
            if not self._messages:
                self._messages = self._getMessages(self)
            return iter(self._messages)

        def __getitem__(self, key):
            if not self._messages:
                self._messages = self._getMessages(self)
            try:
                result = self._messages.__getitem__(key)
            except IndexError:
                result = []
            return result

        def _getMessages(self, thread):
            # TODO: Do this better.
            # TODO: Specify the query folder using our specific search?
            items = self._account._parseSearchResult(U_QUERY_SEARCH,
                                                     view = U_CONVERSATION_VIEW,
                                                     th = thread.id,
                                                     q = "in:anywhere")
            result = []
            # TODO: Handle this better?
            # Note: This handles both draft & non-draft messages in a thread...
            for key, isDraft in [(D_MSGINFO, False), (D_DRAFTINFO, True)]:
                try:
                    msgsInfo = items[key]
                except KeyError:
                    # No messages of this type (e.g. draft or non-draft)
                    continue
                else:
                    # TODO: Handle special case of only 1 message in thread better?
                    if type(msgsInfo[0]) != types.ListType:
                        msgsInfo = [msgsInfo]
                    for msg in msgsInfo:
                        result += [GmailMessage(thread, msg, isDraft = isDraft)]
            return result
    class GmailMessageStub(_LabelHandlerMixin):
        """
        Intended to be used where not all message information is known/required.

        NOTE: This may go away.
        """
        # TODO: Provide way to convert this to a full `GmailMessage` instance
        #       or allow `GmailMessage` to be created without all info?
        def __init__(self, id = None, _account = None):
            _LabelHandlerMixin.__init__(self)
            self.id = id
            self._account = _account

    class GmailMessage(object):
        def __init__(self, parent, msgData, isDraft = False):
            """Note: `msgData` can be from either D_MSGINFO or D_DRAFTINFO."""
            # TODO: Automatically detect if it's a draft or not?
            # TODO: Handle this better?
            self._parent = parent
            self._account = self._parent._account

            self.author = msgData[MI_AUTHORFIRSTNAME]
            self.id = msgData[MI_MSGID]
            self.number = msgData[MI_NUM]
            self.subject = msgData[MI_SUBJECT]
            self.to = msgData[MI_TO]
            self.cc = msgData[MI_CC]
            self.bcc = msgData[MI_BCC]
            self.sender = msgData[MI_AUTHOREMAIL]
            self.attachments = [GmailAttachment(self, attachmentInfo)
                                for attachmentInfo in msgData[MI_ATTACHINFO]]
            # TODO: Populate additional fields & cache...(?)
            # TODO: Handle body differently if it's from a draft?
            self.isDraft = isDraft
            self._source = None

        def _getSource(self):
            if not self._source:
                # TODO: Do this more nicely...?
                # TODO: Strip initial white space & fix up last line ending
                #       to make it legal as per RFC?
                self._source = self._account.getRawMessage(self.id)
            return self._source

        source = property(_getSource, doc = "")

    class GmailAttachment:
        def __init__(self, parent, attachmentInfo):
            # TODO: Handle this better?
            self._parent = parent
            self._account = self._parent._account

            self.id = attachmentInfo[A_ID]
            self.filename = attachmentInfo[A_FILENAME]
            self.mimetype = attachmentInfo[A_MIMETYPE]
            self.filesize = attachmentInfo[A_FILESIZE]
            self._content = None

        def _getContent(self):
            if not self._content:
                # TODO: Do this more nicely...?
                self._content = self._account._retrievePage(
                    _buildURL(view=U_ATTACHMENT_VIEW, disp="attd",
                              attid=self.id, th=self._parent._parent.id))
            return self._content

        content = property(_getContent, doc = "")

        def _getFullId(self):
            """
            Returns the "full path"/"full id" of the attachment. (Used
            to refer to the file when forwarding.)

            The id is of the form: "<thread_id>_<msg_id>_<attachment_id>"
            """
            return "%s_%s_%s" % (self._parent._parent.id,
                                 self._parent.id,
                                 self.id)

        _fullId = property(_getFullId, doc = "")

    class GmailComposedMessage:
        def __init__(self, to, subject, body, cc = None, bcc = None,
                     filenames = None, files = None):
            """
            `filenames` - list of the file paths of the files to attach.
            `files` - list of objects implementing sub-set of
                      `email.Message.Message` interface (`get_filename`,
                      `get_content_type`, `get_payload`). This is to
                      allow use of payloads from Message instances.
                      TODO: Change this to be simpler class we define ourselves?
            """
            self.to = to
            self.subject = subject
            self.body = body
            self.cc = cc
            self.bcc = bcc
            self.filenames = filenames
            self.files = files
    if __name__ == "__main__":
        import sys
        from getpass import getpass
        try:
            name = sys.argv[1]
        except IndexError:
            name = raw_input("Gmail account name: ")
        pw = getpass("Password: ")
        domain = raw_input("Domain? [leave blank for Gmail]: ")

        ga = GmailAccount(name, pw, domain=domain)

        print "\nPlease wait, logging in..."

        try:
            ga.login()
        except GmailLoginFailure, e:
            print "\nLogin failed. (%s)" % e.message
        else:
            print "Login successful.\n"

            # TODO: Use properties instead?
            quotaInfo = ga.getQuotaInfo()
            quotaMbUsed = quotaInfo[QU_SPACEUSED]
            quotaMbTotal = quotaInfo[QU_QUOTA]
            quotaPercent = quotaInfo[QU_PERCENT]
            print "%s of %s used. (%s)\n" % (quotaMbUsed, quotaMbTotal, quotaPercent)

            searches = STANDARD_FOLDERS + ga.getLabelNames()
            name = None
            while 1:
                try:
                    print "Select folder or label to list: (Ctrl-C to exit)"
                    for optionId, optionName in enumerate(searches):
                        print "  %d. %s" % (optionId, optionName)
                    while not name:
                        try:
                            name = searches[int(raw_input("Choice: "))]
                        except ValueError, info:
                            print info
                            name = None
                    if name in STANDARD_FOLDERS:
                        result = ga.getMessagesByFolder(name, True)
                    else:
                        result = ga.getMessagesByLabel(name, True)

                    if not len(result):
                        print "No threads found in `%s`." % name
                        break
                    name = None

                    tot = len(result)
                    i = 0
                    for thread in result:
                        print "%s messages in thread" % len(thread)
                        print thread.id, len(thread), thread.subject
                        for msg in thread:
                            print "\n  ", msg.id, msg.number, msg.author, msg.subject
                            # Just as an example of other useful things
                            #print "  ", msg.cc, msg.bcc, msg.sender
                            i += 1
                        print
                    print "number of threads:", tot
                    print "number of messages:", i
                except KeyboardInterrupt:
                    break

            print "\n\nDone."
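    As a quick illustration of what the getVCard method above produces, here is a self-contained sketch of the same assembly. The contact fields are hard-coded stand-ins, not real GmailContact objects:

```python
# Standalone sketch of the vCard assembly done by getVCard above.
# The name/email/notes values are invented examples, not libgmail data.

def make_vcard(name, email, notes):
    # \r\n line endings per RFC 2425 section 5.8.1
    fullname = name.split()
    fullname.reverse()  # "John Smith" -> N:Smith;John
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        "NOTE:%s" % notes,
        "N:%s" % ';'.join(fullname),
        "FN:%s" % name,
        "EMAIL;TYPE=INTERNET:%s" % email,
        "END:VCARD",
    ]
    # Trailing blank line so several cards can be concatenated in one file.
    return "\r\n".join(lines) + "\r\n\r\n"

vcard = make_vcard("John Smith", "john@example.com", "A test contact")
```

    The same reversed-name compromise applies here: anything fancier than "First Last" will not split into N components correctly.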
    Last edited by Reasons (2008-03-20 01:18:27)

    Thought it might help to give the relevant lines of libgmail (from line 369 onwards) so it's easier to read:
    def _retrievePage(self, urlOrRequest):
        if self.opener is None:
            raise "Cannot find urlopener"

        if not isinstance(urlOrRequest, urllib2.Request):
            req = urllib2.Request(urlOrRequest)
        else:
            req = urlOrRequest

        self._cookieJar.setCookies(req)
        req.add_header('User-Agent',
                       'Mozilla/5.0 (Compatible; libgmail-python)')

        try:
            resp = self.opener.open(req)
        except urllib2.HTTPError, info:
            print info
            return None
        pageData = resp.read()

        # Extract cookies here
        self._cookieJar.extractCookies(resp.headers)

        # TODO: Enable logging of page data for debugging purposes?

        return pageData

    def _parsePage(self, urlOrRequest):
        """Retrieve & then parse the requested page content."""
        items = _parsePage(self._retrievePage(urlOrRequest))
        # Automatically cache some things like quota usage.
        # TODO: Cache more?
        # TODO: Expire cached values?
        # TODO: Do this better.
        try:
            self._cachedQuotaInfo = items[D_QUOTA]
        except KeyError:
            pass
        #pprint.pprint(items)

        try:
            self._cachedLabelNames = [category[CT_NAME]
                                      for category in items[D_CATEGORIES][0]]
        except KeyError:
            pass

        return items

    def _parseSearchResult(self, searchType, start = 0, **kwargs):
        params = {U_SEARCH: searchType,
                  U_START: start,
                  U_VIEW: U_THREADLIST_VIEW,
                  }
        params.update(kwargs)
        return self._parsePage(_buildURL(**params))
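    For context, _parseSearchResult hands its keyword arguments to _buildURL, which lives elsewhere in libgmail. Here is a simplified stand-in built on the standard library, just to illustrate what such a helper does; the base URL and parameter names are made up, not libgmail's real values:

```python
# Simplified stand-in for a _buildURL-style helper; the base URL and the
# parameters passed below are illustrative only.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

BASE_URL = "https://mail.example.com/mail/"  # placeholder endpoint

def build_url(**params):
    # Sort the items for a deterministic query-string order
    # (the original helper need not sort).
    return BASE_URL + "?" + urlencode(sorted(params.items()))

url = build_url(search="query", start=0, view="tl")
```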

  • Launching programs from python and ncurses

    I've made a little menu launcher with Python and ncurses. It works most of the time, but occasionally it does nothing when I select a program to launch.
    This is how it works:
    1. A keyboard shortcut launches an xterm window that runs the program.
    2. I select some program, which is launched via this command:
    os.system("nohup " + "program_name" + "> /dev/null &")
    3. ncurses cleans up and python calls "sys.exit()"
    The whole idea with nohup is that the xterm window will close after I select an application, but the application will still start normally (this was a bit of a problem). This works most of the time, but sometimes nothing happens (it never works the first time I try launching a program after starting the computer, the other times are more random).
    So, has anyone got an idea of what's going on or a better solution than the nohup hack?
    The whole code for the menu launcher is below. It's quite hackish, but then again it was just meant for me. The file's called "curmenu.py", and I launch it with an xbindkeys shortcut that runs "xterm -e curmenu.py".
    #!/usr/bin/env python
    import os, sys
    import curses

    ## Variables one might like to configure
    programs = ["Sonata", "sonata", "Ncmpc", "xterm -e ncmpc", "Emacs", "emacs",
                "Firefox", "swiftfox", "Pidgin", "pidgin", "Screen", "xterm -e screen",
                "Thunar", "thunar", "Gimp", "gimp", "Vlc", "vlc", "Skype", "skype"]
    highlight = 3
    on_screen = 7

    ## Functions
    # Gets a list of strings, figures out the middle one
    # and highlights it. Draws strings on screen.
    def drawStrings(strings):
        length = len(strings)
        middle = (length - 1)/2
        for num in range(length):
            addString(strings[num], middle, num, length)
        stdscr.refresh()

    def addString(string, middle, iter_step, iter_max):
        if iter_step < iter_max:
            string = string + "\n"
        if iter_step == middle:
            stdscr.addstr(iter_step + 1, 1, string, curses.A_REVERSE)
        else:
            stdscr.addstr(iter_step + 1, 1, string)

    # Returns a list of strings to draw on screen. The
    # strings chosen are centered around position.
    def listStrings(strings, position, on_screen):
        length = len(strings)
        low = (on_screen - 1)/2
        start = position - low
        str = []
        for num in range(start, start + on_screen):
            str = str + [strings[num % length]]
        return str

    ## Start doing stuff
    names = programs[::2]
    longest = max(map(lambda x: len(x), names))

    # Start our screen
    stdscr = curses.initscr()

    # Enable noecho and keyboard input
    curses.curs_set(0)
    curses.noecho()
    curses.cbreak()
    stdscr.keypad(1)

    # Display strings
    drawStrings(listStrings(names, highlight, on_screen))

    # Wait for response
    num_progs = len(names)
    low = (on_screen - 1)/2
    while 1:
        c = stdscr.getch()
        if c == ord("q") or c == 27:  # 27 = "Escape"
            break
        elif c == curses.KEY_DOWN:
            highlight = (highlight + 1) % num_progs
        elif c == curses.KEY_UP:
            highlight = (highlight - 1) % num_progs
        elif c == curses.KEY_NPAGE:
            highlight = (highlight + low) % num_progs
        elif c == curses.KEY_PPAGE:
            highlight = (highlight - low) % num_progs
        elif c == 10:  # actually "Enter", but hey
            os.system("nohup " + programs[2*highlight + 1] + "> /dev/null &")
            break
        drawStrings(listStrings(names, highlight, on_screen))

    # Close the program
    curses.nocbreak()
    stdscr.keypad(0)
    curses.echo()
    curses.endwin()
    sys.exit()
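    The wrap-around behaviour of listStrings is the interesting bit: it picks a window of entries centred on the highlighted one and uses modular indexing so the menu cycles past either end of the list. The same idea as a standalone pure function (the program names below are just examples):

```python
# The wrap-around window used by listStrings above, as a pure function:
# pick `count` consecutive items centred on `position`, wrapping past
# either end of the list via modular indexing.
def window(items, position, count):
    length = len(items)
    start = position - (count - 1) // 2
    return [items[n % length] for n in range(start, start + count)]

names = ["Sonata", "Ncmpc", "Emacs", "Firefox", "Pidgin"]
visible = window(names, 0, 3)  # centred on "Sonata", wraps to the end
```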

    Try:
    http://docs.python.org/lib/module-subprocess.html
    Should let you fork programs off into the background.
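    A minimal sketch of the subprocess route (the command passed in is a placeholder; preexec_fn=os.setsid puts the child in its own session so it should survive the xterm window closing, without the nohup hack):

```python
# Launch a program detached from the terminal with subprocess instead of
# "nohup ... &" through os.system. The command is supplied by the caller.
import os
import subprocess

def launch_detached(command):
    # Discard the child's output and start it in a new session so it
    # isn't killed when the launching terminal goes away.
    devnull = open(os.devnull, "wb")
    return subprocess.Popen(command,
                            stdout=devnull,
                            stderr=devnull,
                            preexec_fn=os.setsid)  # POSIX only
```

    Because Popen returns immediately, the curses script can clean up and exit right after calling it.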

  • Can't run helloworld of Google App Engine (python 2.7)

    I looked and searched for the bug and found this post -> see comment 29.
    But it seems that it's still not fixed in google-appengine 1.6.4 for Python 2.7.
    The exact error is (trying to run the helloworld given in the docs):
    WARNING 2012-04-06 23:26:00,928 py_zipimport.py:139] Can't open zipfile /usr/lib/python2.7/site-packages/setuptools-0.6c11.egg-info: IOError: [Errno 13] file not accessible: '/usr/lib/python2.7/site-packages/setuptools-0.6c11.egg-info'
    Full terminal output:
    shadyabhi@MBP-archlinux ~/codes/gae $ dev_appserver.py helloworld/
    INFO 2012-04-06 23:25:55,030 appengine_rpc.py:160] Server: appengine.google.com
    INFO 2012-04-06 23:25:55,034 appcfg.py:582] Checking for updates to the SDK.
    INFO 2012-04-06 23:25:56,709 appcfg.py:616] This SDK release is newer than the advertised release.
    WARNING 2012-04-06 23:25:56,710 datastore_file_stub.py:513] Could not read datastore data from /tmp/dev_appserver.datastore
    INFO 2012-04-06 23:25:56,773 dev_appserver_multiprocess.py:647] Running application dev~helloworld on port 8080: http://localhost:8080
    INFO 2012-04-06 23:25:56,774 dev_appserver_multiprocess.py:649] Admin console is available at: http://localhost:8080/_ah/admin
    WARNING 2012-04-06 23:26:00,928 py_zipimport.py:139] Can't open zipfile /usr/lib/python2.7/site-packages/setuptools-0.6c11.egg-info: IOError: [Errno 13] file not accessible: '/usr/lib/python2.7/site-packages/setuptools-0.6c11.egg-info'
    ERROR 2012-04-06 23:26:01,101 wsgi.py:189]
    Traceback (most recent call last):
    File "/opt/google-appengine-python/google/appengine/runtime/wsgi.py", line 187, in Handle
    handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
    File "/opt/google-appengine-python/google/appengine/runtime/wsgi.py", line 239, in _LoadHandler
    raise ImportError('%s has no attribute %s' % (handler, name))
    ImportError: <module 'helloworld' from '/home/shadyabhi/codes/gae/helloworld/helloworld.pyc'> has no attribute app
    INFO 2012-04-06 23:26:01,110 dev_appserver.py:2884] "GET / HTTP/1.1" 500 -
    ERROR 2012-04-06 23:26:01,479 wsgi.py:189]
    Traceback (most recent call last):
    File "/opt/google-appengine-python/google/appengine/runtime/wsgi.py", line 187, in Handle
    handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
    File "/opt/google-appengine-python/google/appengine/runtime/wsgi.py", line 239, in _LoadHandler
    raise ImportError('%s has no attribute %s' % (handler, name))
    ImportError: <module 'helloworld' from '/home/shadyabhi/codes/gae/helloworld/helloworld.pyc'> has no attribute app
    INFO 2012-04-06 23:26:01,486 dev_appserver.py:2884] "GET /favicon.ico HTTP/1.1" 500 -
    Any kind of help would be highly appreciated.
    Last edited by shadyabhi (2012-04-07 15:46:54)

    If you're using the python2.7 runtime, there's an error in the tutorial.
    This line is incorrect:
    application = webapp2.WSGIApplication([('/', MainPage)], debug=True)
    The correct line should be:
    app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
    If the loader is asking for an "app" attribute, your helloworld module must define that attribute.
    See http://stackoverflow.com/questions/1005 … python-2-7
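    That "has no attribute app" error is just the dev server doing a getattr on your module for the handler name given in app.yaml. The sketch below shows the shape of an `app` callable using plain WSGI instead of webapp2, purely to illustrate the attribute lookup; it is not GAE code:

```python
# helloworld.py (sketch): with the python27 runtime the dev server
# effectively does getattr(helloworld, "app"), so the module must define
# a module-level `app`. Plain WSGI stands in for webapp2 here.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, world!"]
```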

  • Teaching Myself Python. How Can I Stop The Program From Closing :)

    I wrote a program today that will log into a router here at work and check whether the line protocols are up and whether I can see their MAC address.  If I run it from IDLE it works fine.  When I run it as just crosnet.py it runs fine but closes before I can see the output.  I imagine there is some simple line that will make it run and just sit?  Or even better, loop back to the beginning and wait for me to give it a new number to telnet with.
    #import getpass
    import sys
    import telnetlib
    HOST = "111.111.111.111"
    password = "password"
    vpc = raw_input("Enter The VPC Number: ")
    #password = getpass.getpass()
    tn = telnetlib.Telnet(HOST)
    #tn.read_until("login: ")
    #tn.write(user + "\n")
    #if password:
    tn.read_until("Password: ")
    tn.write(password + "\n")
    #tn.write("sh int atm6/0.vpc\n")
    tn.write("sh int atm6/0." + str(vpc) + "\n")
    tn.write("sh arp | in " + str(vpc) + "\n")
    #tn.write("exit\n")
    print tn.read_all()
    Sorry the code is bad; I don't know Python. I was shown an outline of how to telnet and adjusted it to my needs.

    while True:
    and just indent everything else under it. Ends when you press ^C.
    If you wish to have a way to break out of the loop, say, when you press enter at the VPC number without entering anything, add the following after your vpc input:
    if not vpc:
        break
    which will break out of the while loop and end your program.
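    Putting the suggestion together, the looping structure might look like this sketch. The telnet session is replaced by a placeholder function, and input/output are passed in as parameters (in the real script they'd be raw_input and print):

```python
# Sketch of the loop structure: prompt repeatedly, quit on empty input.
# check_vpc stands in for the telnet session from the original script.
def check_vpc(vpc):
    return "status for VPC %s" % vpc  # placeholder for the telnet output

def run(read_input, write_output):
    # read_input/write_output are injected so the loop is easy to test.
    while True:
        vpc = read_input("Enter The VPC Number: ")
        if not vpc:
            break  # empty input ends the program
        write_output(check_vpc(vpc))
```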
    Last edited by buttons (2007-12-14 22:03:06)

  • Not able to run Python programs on OS X Mavericks

    I bought a MacBook Pro 1.5 years back, and it came with OS X 10.6. A few months back I upgraded to OS X Mavericks, and since then I am not able to run Python tools properly. Just importing something in the Python command line gives me segmentation fault 11.
    Now I just tried
    import os
    and I got a segmentation fault with the below error:
    Process:             Python [1230]
    Path:                /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
    Identifier:          Python
    Version:             2.7.4 (2.7.4)
    Code Type:           X86-64 (Native)
    Parent Process:      bash [883]
    Responsible:         Terminal [749]
    User ID:             501
    Date/Time:           2015-01-15 09:46:08.784 +0530
    OS Version:          Mac OS X 10.9.2 (13C64)
    Report Version:      11
    Anonymous UUID:      A975CB81-1635-0611-4AD9-0DC61C4EC9A0
    Crashed Thread:      0  Dispatch queue: com.apple.main-thread
    Exception Type:      EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes:     KERN_INVALID_ADDRESS at 0x0000000000000000
    VM Regions Near 0:
    --> __TEXT  0000000100000000-0000000100001000 [4K] r-x/rwx SM=COW  /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
    Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
    0   readline.so          0x00000001002f2f97 call_readline + 647
    1   org.python.python    0x0000000100008e92 PyOS_Readline + 274
    2   org.python.python    0x000000010000a6e8 tok_nextc + 104
    3   org.python.python    0x000000010000ae93 PyTokenizer_Get + 147
    4   org.python.python    0x0000000100005a8a parsetok + 218
    5   org.python.python    0x00000001000e8af2 PyParser_ASTFromFile + 146
    6   org.python.python    0x00000001000e9dd3 PyRun_InteractiveOneFlags + 243
    7   org.python.python    0x00000001000ea0be PyRun_InteractiveLoopFlags + 78
    8   org.python.python    0x00000001000ea8d1 PyRun_AnyFileExFlags + 161
    9   org.python.python    0x000000010010155d Py_Main + 3101
    10  org.python.python    0x0000000100000f14 0x100000000 + 3860
    Thread 0 crashed with X86 Thread State (64-bit):
      rax: 0x0000000000000000  rbx: 0x0000000100322290  rcx: 0x0000000102000000  rdx: 0x0000000000000a00
      rdi: 0x0000000000000000  rsi: 0x00000001002f3254  rbp: 0x00007fff5fbff2d0  rsp: 0x00007fff5fbff200
       r8: 0x0000000102000000   r9: 0x0000000000000000  r10: 0x0080002140000003  r11: 0x0000000000000001
      r12: 0x0000000000000001  r13: 0x0000000000000009  r14: 0x00007fff5fbff290  r15: 0x00007fff5fbff210
      rip: 0x00000001002f2f97  rfl: 0x0000000000010206  cr2: 0x0000000000000000
    Logical CPU: 1
    Error Code:  0x00000004
    Trap Number: 14
    Binary Images:
        0x100000000 - 0x100000fff +org.python.python (2.7.4 - 2.7.4) <A450FF23-FAE2-B1D7-E70F-5511C3FAD379> /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
        0x100003000 - 0x10016fff7 +org.python.python (2.7.4, [c] 2004-2013 Python Software Foundation. - 2.7.4) <D4B4F67F-D7AA-AECF-2E7C-6C605A9B1EA9> /Library/Frameworks/Python.framework/Versions/2.7/Python
        0x1002f1000 - 0x1002f3fff +readline.so (???) <CDBF1919-7992-8A7E-5F30-D429DB67EF51> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/readline.so
        0x100700000 - 0x10071effb  libedit.2.dylib (39) <1B0596DB-F336-32E7-BB9F-51BF70DB5305> /usr/lib/libedit.2.dylib
        0x10072f000 - 0x100783fe7 +libncursesw.5.dylib (5) <3F0079C0-01C1-3CB8-19CA-F9B49AA4F4A4> /Library/Frameworks/Python.framework/Versions/2.7/lib/libncursesw.5.dylib
        0x7fff64d5d000 - 0x7fff64d90817  dyld (239.4) <2B17750C-ED1B-3060-B64E-21897D08B28B> /usr/lib/dyld
        0x7fff88329000 - 0x7fff8833aff7  libsystem_asl.dylib (217.1.4) <655FB343-52CF-3E2F-B14D-BEBF5AAEF94D> /usr/lib/system/libsystem_asl.dylib
        0x7fff8833b000 - 0x7fff88340ff7  libunwind.dylib (35.3) <78DCC358-2FC1-302E-B395-0155B47CB547> /usr/lib/system/libunwind.dylib
        0x7fff88b16000 - 0x7fff88b3dff7  libsystem_network.dylib (241.3) <8B1E1F1D-A5CC-3BAE-8B1E-ABC84337A364> /usr/lib/system/libsystem_network.dylib
        0x7fff88b3e000 - 0x7fff88b5aff7  libsystem_kernel.dylib (2422.90.20) <20E00C54-9222-359F-BD98-CB79ABED769A> /usr/lib/system/libsystem_kernel.dylib
        0x7fff88de4000 - 0x7fff88e14fff  libncurses.5.4.dylib (42) <BF763D62-9149-37CB-B1D2-F66A2510E6DD> /usr/lib/libncurses.5.4.dylib
        0x7fff88e15000 - 0x7fff88e1bff7  libsystem_platform.dylib (24.90.1) <3C3D3DA8-32B9-3243-98EC-D89B9A1670B3> /usr/lib/system/libsystem_platform.dylib
        0x7fff88e8c000 - 0x7fff88ea7ff7  libsystem_malloc.dylib (23.10.1) <A695B4E4-38E9-332E-A772-29D31E3F1385> /usr/lib/system/libsystem_malloc.dylib
        0x7fff891a9000 - 0x7fff891f7fff  libcorecrypto.dylib (161.1) <F3973C28-14B6-3006-BB2B-00DD7F09ABC7> /usr/lib/system/libcorecrypto.dylib
        0x7fff892a2000 - 0x7fff892acfff  libcommonCrypto.dylib (60049) <8C4F0CA0-389C-3EDC-B155-E62DD2187E1D> /usr/lib/system/libcommonCrypto.dylib
        0x7fff892c0000 - 0x7fff89312fff  libc++.1.dylib (120) <4F68DFC5-2077-39A8-A449-CAC5FDEE7BDE> /usr/lib/libc++.1.dylib
        0x7fff8a0de000 - 0x7fff8a105ffb  libsystem_info.dylib (449.1.3) <7D41A156-D285-3849-A2C3-C04ADE797D98> /usr/lib/system/libsystem_info.dylib
        0x7fff8b079000 - 0x7fff8b080fff  libcompiler_rt.dylib (35) <4CD916B2-1B17-362A-B403-EF24A1DAC141> /usr/lib/system/libcompiler_rt.dylib
        0x7fff8b081000 - 0x7fff8b082ffb  libremovefile.dylib (33) <3543F917-928E-3DB2-A2F4-7AB73B4970EF> /usr/lib/system/libremovefile.dylib
        0x7fff8b083000 - 0x7fff8b23bff3  libicucore.A.dylib (511.31) <167DDD0A-A935-31AF-B5B9-940268EC3A3C> /usr/lib/libicucore.A.dylib
        0x7fff8bb00000 - 0x7fff8bb2ffd2  libsystem_m.dylib (3047.16) <B7F0E2E4-2777-33FC-A787-D6430B630D54> /usr/lib/system/libsystem_m.dylib
        0x7fff8c3a5000 - 0x7fff8c3a6ff7  libsystem_blocks.dylib (63) <FB856CD1-2AEA-3907-8E9B-1E54B6827F82> /usr/lib/system/libsystem_blocks.dylib
        0x7fff8cd63000 - 0x7fff8cd65ff7  libquarantine.dylib (71) <7A1A2BCB-C03D-3A25-BFA4-3E569B2D2C38> /usr/lib/system/libquarantine.dylib
        0x7fff8d8fa000 - 0x7fff8d923ff7  libc++abi.dylib (49.1) <21A807D3-6732-3455-B77F-743E9F916DF0> /usr/lib/libc++abi.dylib
        0x7fff8dbcc000 - 0x7fff8dbd3ff7  libsystem_pthread.dylib (53.1.4) <AB498556-B555-310E-9041-F67EC9E00E2C> /usr/lib/system/libsystem_pthread.dylib
        0x7fff8dbfb000 - 0x7fff8dbffff7  libcache.dylib (62) <BDC1E65B-72A1-3DA3-A57C-B23159CAAD0B> /usr/lib/system/libcache.dylib
        0x7fff8ddf9000 - 0x7fff8de00ff8  liblaunch.dylib (842.90.1) <38D1AB2C-A476-385F-8EA8-7AB604CA1F89> /usr/lib/system/liblaunch.dylib
        0x7fff8de01000 - 0x7fff8de43ff7  libauto.dylib (185.5) <F45C36E8-B606-3886-B5B1-B6745E757CA8> /usr/lib/libauto.dylib
        0x7fff8de4d000 - 0x7fff8de51ff7  libsystem_stats.dylib (93.90.3) <1A55AF8A-B6C4-3163-B557-3AD25DA643A8> /usr/lib/system/libsystem_stats.dylib
        0x7fff8ecd8000 - 0x7fff8eebdfff  com.apple.CoreFoundation (6.9 - 855.14) <617B8A7B-FAB2-3271-A09B-C542E351C532> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
        0x7fff90c81000 - 0x7fff90c89fff  libsystem_dnssd.dylib (522.90.2) <A0B7CF19-D9F2-33D4-8107-A62184C9066E> /usr/lib/system/libsystem_dnssd.dylib
        0x7fff90c8b000 - 0x7fff90c8cff7  libDiagnosticMessagesClient.dylib (100) <4CDB0F7B-C0AF-3424-BC39-495696F0DB1E> /usr/lib/libDiagnosticMessagesClient.dylib
        0x7fff91b4f000 - 0x7fff91b54fff  libmacho.dylib (845) <1D2910DF-C036-3A82-A3FD-44FF73B5FF9B> /usr/lib/system/libmacho.dylib
        0x7fff91c4e000 - 0x7fff91c55ff3  libcopyfile.dylib (103) <5A881779-D0D6-3029-B371-E3021C2DDA5E> /usr/lib/system/libcopyfile.dylib
        0x7fff91d91000 - 0x7fff91e1aff7  libsystem_c.dylib (997.90.3) <6FD3A400-4BB2-3B95-B90C-BE6E9D0D78FA> /usr/lib/system/libsystem_c.dylib
        0x7fff91e6e000 - 0x7fff91e77ff3  libsystem_notify.dylib (121) <52571EC3-6894-37E4-946E-064B021ED44E> /usr/lib/system/libsystem_notify.dylib
        0x7fff9272c000 - 0x7fff9272dff7  libsystem_sandbox.dylib (278.11) <5E5A6E09-33A9-391A-AB34-E57D93BB1551> /usr/lib/system/libsystem_sandbox.dylib
        0x7fff92ede000 - 0x7fff92f02fff  libxpc.dylib (300.90.2) <AB40CD57-F454-3FD4-B415-63B3C0D5C624> /usr/lib/system/libxpc.dylib
        0x7fff93601000 - 0x7fff93602ff7  libSystem.B.dylib (1197.1.1) <BFC0DC97-46C6-3BE0-9983-54A98734897A> /usr/lib/libSystem.B.dylib
        0x7fff9363c000 - 0x7fff937e9f27  libobjc.A.dylib (551.1) <AD7FD984-271E-30F4-A361-6B20319EC73B> /usr/lib/libobjc.A.dylib
        0x7fff93b8e000 - 0x7fff93b8eff7  libkeymgr.dylib (28) <3AA8D85D-CF00-3BD3-A5A0-E28E1A32A6D8> /usr/lib/system/libkeymgr.dylib
        0x7fff949e8000 - 0x7fff949ebff7  libdyld.dylib (239.4) <CF03004F-58E4-3BB6-B3FD-BE4E05F128A0> /usr/lib/system/libdyld.dylib
        0x7fff94df0000 - 0x7fff94df2ff3  libsystem_configuration.dylib (596.13) <B51C8C22-C455-36AC-952D-A319B6545884> /usr/lib/system/libsystem_configuration.dylib
        0x7fff95357000 - 0x7fff95358fff  libunc.dylib (28) <62682455-1862-36FE-8A04-7A6B91256438> /usr/lib/system/libunc.dylib
        0x7fff95380000 - 0x7fff95391ff7  libz.1.dylib (53) <42E0C8C6-CA38-3CA4-8619-D24ED5DD492E> /usr/lib/libz.1.dylib
        0x7fff95392000 - 0x7fff953acfff  libdispatch.dylib (339.90.1) <F3CBFE1B-FCE8-3F33-A53D-9092AB382DBB> /usr/lib/system/libdispatch.dylib
    External Modification Summary:
      Calls made by other processes targeting this process:
        task_for_pid: 0
        thread_create: 0
        thread_set_state: 0
      Calls made by this process:
        task_for_pid: 0
        thread_create: 0
        thread_set_state: 0
      Calls made by all processes on this machine:
        task_for_pid: 1192
        thread_create: 0
        thread_set_state: 0
    VM Region Summary:
    ReadOnly portion of Libraries: Total=76.9M resident=147.6M(192%) swapped_out_or_unallocated=16777216.0T(22877170565120%)
    Writable regions: Total=46.0M written=2644K(6%) resident=3112K(7%) swapped_out=0K(0%) unallocated=42.9M(93%)
    REGION TYPE             VIRTUAL
    ===========             =======
    Kernel Alloc Once            4K
    MALLOC                    37.6M
    MALLOC (admin)              16K
    STACK GUARD               56.0M
    Stack                     8192K
    VM_ALLOCATE                  8K
    __DATA                    1432K
    __LINKEDIT                66.2M
    __TEXT                    10.7M
    __UNICODE                  544K
    shared memory                4K
    ===========             =======
    TOTAL                    180.5M
    Model: MacBookPro9,2, BootROM MBP91.00D3.B06, 2 processors, Intel Core i5, 2.5 GHz, 4 GB, SMC 2.2f38
    Graphics: Intel HD Graphics 4000, Intel HD Graphics 4000, Built-In, 1024 MB
    Memory Module: BANK 0/DIMM0, 2 GB, DDR3, 1600 MHz, 0x02FE, 0x45424A3230554638424455302D474E2D4620
    Memory Module: BANK 1/DIMM0, 2 GB, DDR3, 1600 MHz, 0x02FE, 0x45424A3230554638424455302D474E2D4620
    AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0xF5), Broadcom BCM43xx 1.0 (5.106.98.100.22)
    Bluetooth: Version 4.2.3f10 13477, 3 services, 15 devices, 1 incoming serial ports
    Network Service: Wi-Fi, AirPort, en1
    Serial ATA Device: APPLE HDD HTS547550A9E384, 500.11 GB
    Serial ATA Device: HL-DT-ST DVDRW  GS31N
    USB Device: Hub
    USB Device: Android
    USB Device: FaceTime HD Camera (Built-in)
    USB Device: Hub
    USB Device: Hub
    USB Device: IR Receiver
    USB Device: Apple Internal Keyboard / Trackpad
    USB Device: BRCM20702 Hub
    USB Device: Bluetooth USB Host Controller
    Thunderbolt Bus: MacBook Pro, Apple Inc., 25.1
    I would really appreciate it if someone could help solve the above issue.

    That is always a classpath issue. Fix your classpath so it uses the J2ee.jar. If you are using an IDE you will have to look up how to add JARs to the IDE's classpath, which is usually not the same as the system classpath.

  • How can I use automator to open terminal change directory run a python script?

    Hi all,
    I don't really ever use Automator (which may be my big problem) and I am trying to make an application that seems fairly simple in theory. I want something that will launch Terminal, cd to the place where my Python script lives, and then run it. I have tried a number of suggested ways and can get Terminal to open and the directory to change, but can't figure out how to run the script. This is my workflow if I did it manually each time:
    - open terminal
    - type cd "the place where my file lives"
    - type python uploadr.py -d
    I can get Terminal to open and then cd to the right place, but can't get the "python uploadr.py -d" to run without an error.
    Any help would be very appreciated.
    Thanks!!!
    J

    This is the script I got closest with. I know it's something to do with separating those two commands, but I just don't know how to do it:
    on run {input, parameters}
        tell application "Terminal"
            activate
            if (the (count of the window) = 0) or ¬
                (the busy of window 1 = true) then
                tell application "System Events"
                    keystroke "n" using command down
                end tell
            end if
            do script "cd /Users/behole/Desktop/FlickrUpload/" & "python uploadr.py -d" in window 1
        end tell
        return input
    end run
    This is the error I get in Terminal after I run it:
    Last login: Mon Jul 23 15:37:17 on ttys000
    be-holes-MacBook-Pro:~ behole$ cd /Users/behole/Desktop/FlickrUpload/python uploadr.py -d
    -bash: cd: /Users/behole/Desktop/FlickrUpload/python: No such file or directory
    be-holes-MacBook-Pro:~ behole$
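
    The error above happens because the cd path and the python command are concatenated into one string with no separator, so the shell sees a single bogus path. A likely fix (an untested sketch, keeping the same paths) is to join the two commands with " ; " so Terminal runs them in sequence:

```
do script "cd /Users/behole/Desktop/FlickrUpload/ ; python uploadr.py -d" in window 1
```

    Using " && " instead of " ; " would run the script only if the cd succeeds.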

  • How to use a COM server developed in Python

    Hi,
    I am trying to use a COM server, developed with Python, to communicate with a piece of equipment.
    The server works fine if called from VBScript, e.g.:
    Set device = CreateObject("My_server")
    msg = device.DetectTactic(NomTACTIC,portCom)
    I have tried through the ActiveX Automation Open, but my COM server doesn't appear in the list.
    How can I correctly register this server? (I have only run a regsvr32 Path/Myserver.dll command.)
    Any help will be welcome.
    Best regards
    Marc Baulin

    Hi,
    To select an ActiveX class, you have to place an Automation Refnum on your front panel and then right-click >> Select ActiveX Class >> Browse. Your server should appear in the list.
    At the following link, you will find how to manually register Type Libraries, ActiveX Controls, and ActiveX Servers:
    http://digital.ni.com/public.nsf/websearch/4F811A9B23F1D46E862566F700615B7A?OpenDocument
    Hope it helps.
    Regards,
    Isabelle
    Ingénieur d'applications
    National Instruments France

  • Berkeley DB XML, Python, TEI and DTDs

    Dear BDB XML experts,
    I am developing a web application (Django based) that uses python to manipulate and query TEI based documents within a BDB XML container.
    The XML documents I am loading into the container contain a reference to an external DTD, which in turn references .ent files. The documents contain entity references that require resolving.
    I can successfully put the XML documents into the container if I include the contents of the DTD within each file, but the process fails if I instead reference the DTD externally. XQuery queries also do not seem to work properly when the DTD contents are included within each XML document.
    I have found references in the forums to the following JAVA code which appears to allow BDB XML to reference an external DTD in the file system from within the database (I may be wrong on this though):
    XmlManagerConfig config = new XmlManagerConfig();
    config.setAllowExternalAccess(true);
    XmlManager manager = new XmlManager(config);
    My query is: can the same be achieved using Python? The BDB documentation describes this as a Java-only operation and does not mention whether it is possible from Python.
    Thanks for reading
    AL

    Hi AL,
    I have gathered quite a bit of experience with the Python bindings now, but I haven't been exactly in your situation. My experience has been that pretty much anything that you can do in Java you can do in Python but sometimes the syntax is a little different. From what I see Python does not have an XmlManagerConfig class but instead uses flags given to the XmlManager constructor. To do what you want to do I think you want (assuming from dbxml import *)
    mgr = XmlManager(env, DBXML_ALLOW_EXTERNAL_ACCESS)

  • How can I replace the Python cgi script with CF

    I am testing the sample app and want to just use CF to track peer IDs.
    Can you give me the API for the web service calls?
    I can then create the CFC to handle this.
    In order to use the sample application that you built, you
    need to setup your web server and host the provided Python script
    (reg.cgi) for exchanging peer IDs. Please note that setting up the
    web service is not needed for the hosted VideoPhone sample. In
    VideoPhoneLabs.mxml, please set WebServiceUrl accordingly. You may
    also use Google Apps to host this web service (minimal
    modifications required).
    The Python script should be placed in the cgi-bin location
    according to your web server installation. The database is an
    SQLite3 database. In reg.cgi, please edit the location of the
    database in variable dbFile.
    You also need to create the database schema using the following:
    CREATE TABLE registrations (
        m_username VARCHAR COLLATE NOCASE,
        m_identity VARCHAR,
        m_updatetime DATETIME,
        PRIMARY KEY (m_username)
    );
    CREATE INDEX registrations_updatetime ON registrations
        (m_updatetime ASC);
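
    The schema above can be exercised from Python with the standard sqlite3 module. A minimal sketch (an in-memory database rather than the dbFile path used by reg.cgi, and a hypothetical username/peerID pair):

```python
import sqlite3

# In-memory database for illustration; reg.cgi would use the path in dbFile.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE registrations (
    m_username VARCHAR COLLATE NOCASE,
    m_identity VARCHAR,
    m_updatetime DATETIME,
    PRIMARY KEY (m_username)
);
CREATE INDEX registrations_updatetime ON registrations (m_updatetime ASC);
""")

# Record peerID "1234" for username "foo", as the update call would.
conn.execute(
    "INSERT OR REPLACE INTO registrations VALUES (?, ?, datetime('now'))",
    ("foo", "1234"),
)
conn.commit()

# COLLATE NOCASE makes the lookup case-insensitive but case-preserving:
# searching for "Foo" still finds the row registered as "foo".
row = conn.execute(
    "SELECT m_username, m_identity FROM registrations WHERE m_username = ?",
    ("Foo",),
).fetchone()
print(row)  # ('foo', '1234')
```

    The PRIMARY KEY on m_username plus INSERT OR REPLACE gives the "one row per username, latest peerID wins" behavior the service describes.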

    the sample web service we provided has the following API:
    to record peerID "1234" for username "foo", do
    http://your.domain/cgi-bin/reg.cgi?username=foo&identity=1234
    you should get back an XML document with result.update=true
    or false depending on whether the database was updated.
    to look up the peerID for username "foo", do
    http://your.domain/cgi-bin/reg.cgi?friends=foo
    you should get back an XML document like:
    <?xml version="1.0" encoding="utf-8"?>
    <result>
    <friend>
    <user>foo</user>
    <identity>1234</identity>
    </friend>
    </result>
    there will be no "identity" entry if the user is not found.
    the lookup is case-insensitive but case-preserving, so if you
    looked up "Foo", the user field would be "Foo" but there'd be an
    extra "registered" field with the username as it was registered.
    -mike
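
    For a Python client, the lookup response is plain XML and parses with the standard library. A sketch (your.domain is the placeholder from above, and the response body here is the sample document from the post rather than a live HTTP call):

```python
from urllib.parse import urlencode
from xml.etree import ElementTree

# Build the lookup URL (your.domain is a placeholder, not a real host).
url = "http://your.domain/cgi-bin/reg.cgi?" + urlencode({"friends": "foo"})

# Sample response body from the post; a real client would instead fetch
# `url` with urllib.request.urlopen and read the body.
sample = b"""<?xml version="1.0" encoding="utf-8"?>
<result>
<friend>
<user>foo</user>
<identity>1234</identity>
</friend>
</result>"""

root = ElementTree.fromstring(sample)
friend = root.find("friend")
user = friend.findtext("user")          # "foo"
identity = friend.findtext("identity")  # None when the user is not found
print(user, identity)  # foo 1234
```

    Checking for a missing identity element is how the client distinguishes a registered user from an unknown one.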

Maybe you are looking for

  • I have a web service that I am trying to use an MDB with.

    I have a web service where I request using soapUI, and the the response from the soapUI that I get is correct. However, in addition to this "regular" program flow, I am sending data to a queue where 2 things should happen: 1. my message should get pu

  • How to capture the screen in AS3

    When I click on a button I want to capture the entire screen and make it available in the background of the new frame I display.How to do this?

  • Western Digital Diagnostics with Vista

    Hi, Does anyone know how to get the Western Digital diagnostics software for my N200 hard drive working? They don't provide a Windows Vista version and the DOS CD ISO seems to think it's really a floppy and will only run from drive A:. http://support

  • Process Order Screen Controls

    Hi, Does any one has used "Filter Criteria for Components" in Process order Config. I want to configure it and our business scenario is as follows - Our process order contains alternate components which have Usage Probability = 0, hence "Requirement

  • How to install Apex on small business server with IIS already installed?

    I am building an Oracle Application Express application for a small not-for-profit and they only have a single small business server that is already running the microsoft IIS server. Can I install the Oracle Http server and run both the Oracle and II