Pipes

Hi everyone,
here is where I am stuck: after reading the article I have a couple of questions. How can the mkfifo command be implemented in Java?
Besides, what I have done so far is as follows:
I have an app which uses the runtime to create another process in order to receive all the messages from some libraries. For the moment it receives all the messages and copies them into a file (this part works perfectly).
<Master Code>
FileOutputStream fos = new FileOutputStream("text.txt"); //args[0]
Runtime rt = Runtime.getRuntime();
//Process pipe = rt.exec("mkfifo pipe");
Process proc = rt.exec("java MainInterface");
StreamGobbler outputGobbler = new StreamGobbler(proc.getInputStream(), "OUTPUT", fos);
// kick them off
outputGobbler.start();
<end Master Code>
<Thread which copies into a file>
class StreamGobbler extends Thread {
    InputStream is;
    String type;
    OutputStream os;

    StreamGobbler(InputStream is, String type, OutputStream redirect) {
        this.is = is;
        this.type = type;
        this.os = redirect;
    }

    public void run() {
        try {
            PrintWriter pw = null;
            if (os != null)
                pw = new PrintWriter(os);
            InputStreamReader isr = new InputStreamReader(is);
            BufferedReader br = new BufferedReader(isr);
            String line = null;
            while ((line = br.readLine()) != null) {
                if (pw != null) {
                    pw.println(line);
                    pw.flush();
                }
                System.out.println(type + ">" + line);
            }
            if (pw != null)
                pw.flush();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }
}
<End Thread which copies into a file>
And the app called by this master program, which should show the messages incoming in the pipe in a text area:
<mainInterface>
public static void main(String[] args) throws IOException {
    javax.swing.SwingUtilities.invokeLater(new Runnable() {
        // Here the pipe should be read
        public void run() {
            createAndShowGUI();
        }
    });
}
<End mainInterface>
Now the question is: how can this be changed so that, instead of writing into a file, it writes into a pipe which MainInterface could access?
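For what it's worth, Runtime.exec() already gives you an inter-process pipe: the child's stdin and stdout are OS-level pipes, so the master can write to proc.getOutputStream() instead of a file, and the child simply reads System.in. Below is a minimal sketch of that idea; it uses `cat` as a stand-in for MainInterface (an assumption, since the full MainInterface isn't shown) and assumes a Unix-like system.

```java
import java.io.*;

public class PipeDemo {
    public static void main(String[] args) throws Exception {
        // The child's stdin/stdout are OS-level pipes created by the JVM.
        // "cat" just echoes its stdin, standing in for MainInterface here.
        Process proc = new ProcessBuilder("cat").start();

        // Write into the pipe connected to the child's stdin.
        PrintWriter toChild = new PrintWriter(
                new OutputStreamWriter(proc.getOutputStream()), true);
        toChild.println("hello through the pipe");
        toChild.close(); // closing the stream signals EOF to the child

        // Read whatever the child writes back on its stdout.
        BufferedReader fromChild = new BufferedReader(
                new InputStreamReader(proc.getInputStream()));
        System.out.println("child echoed: " + fromChild.readLine());
        proc.waitFor();
    }
}
```

As for mkfifo itself: there is no Java API for it, but you can run it with rt.exec(new String[]{"mkfifo", "/tmp/pipe"}) and then open the FIFO path with a plain FileOutputStream in one process and a FileInputStream in the other. Be aware that opening a FIFO blocks until both ends are open.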


Similar Messages

  • Jackd + guitar: "timeouts and broken pipes"

    Hi friends! I'm trying to run my electric guitar through a rack/effects program (like Guitarix or Creox), with no luck. I've got this sound card:
    01:06.0 Multimedia audio controller: Creative Labs [SB Live! Value] EMU10k1X
    01:06.1 Input device controller: Creative Labs [SB Live! Value] Input device controller
    I tried with QJackCtl and by invoking jackd from the terminal, without any luck:
    jackd -d alsa -C -P
    jackd 0.121.3
    Copyright 2001-2009 Paul Davis, Stephane Letz, Jack O'Quinn, Torben Hohn and others.
    jackd comes with ABSOLUTELY NO WARRANTY
    This is free software, and you are welcome to redistribute it
    under certain conditions; see the file COPYING for details
    could not open driver .so '/usr/lib/jack/jack_net.so': libcelt0.so.2: cannot open shared object file: No such file or directory
    could not open driver .so '/usr/lib/jack/jack_firewire.so': libffado.so.2: cannot open shared object file: No such file or directory
    JACK compiled with System V SHM support.
    loading driver ..
    creating alsa driver ... hw:0|hw:0|1024|2|48000|0|0|nomon|swmeter|-|32bit
    control device hw:0
    configuring for 48000Hz, period = 1024 frames (21.3 ms), buffer = 2 periods
    ALSA: final selected sample format for capture: 16bit little-endian
    ALSA: use 2 periods for capture
    ALSA: final selected sample format for playback: 16bit little-endian
    ALSA: use 2 periods for playback
    jackd watchdog: timeout - killing jackd
    [gabo@machina ~]$
    This is the output from QJackCtl:
    00:12:07.126 Client deactivated.
    00:12:07.130 JACK is being forced...
    cannot read server event (Success)
    cannot continue execution of the processing graph (Bad file descriptor)
    zombified - calling shutdown handler
    cannot send request type 7 to server
    cannot read result for request type 7 from server (Broken pipe)
    cannot send request type 7 to server
    cannot read result for request type 7 from server (Broken pipe)
    00:12:07.339 JACK was stopped with exit status=1.
    I can hear my guitar and record with Audacity, but when jackd enters the picture everything blows up. I read that nowadays almost any sound card will work with QJackCtl with the default options. I played with the parameters and sometimes jack refuses to start. With the default options on I can make it run, but I get no sound out of the racks or guitar effects processors, and the guitar tuners that use jack don't pick up the sound from the guitar either. My line input is set to capture via alsamixer, but still no luck. Any clue on this? Am I skipping steps?
    Thanks in advance.
    iamgabo

    Hi!
    groups && cat /proc/asound/cards && cat ~/.asoundrc && cat '/etc/security/limits.d/audio.conf' && jackd -v
    adm disk lp wheel http network video audio optical storage power users polkitd vboxusers wireshark kismet
    0 [Live ]: EMU10K1X - Dell Sound Blaster Live!
    Dell Sound Blaster Live! at 0xcc00 irq 17
    #pcm.upmix71 {
    #    type upmix
    #    slave.pcm "surround71"
    #    delay 15
    #    channels 8
    #}
    pcm.!default {
        type hw
        card 0
    }
    ctl.!default {
        type hw
        card 0
    }
    # convert alsa API over jack API
    # use it with
    # % aplay foo.wav
    # use this as default
    pcm.!default {
        type plug
        slave { pcm "jack" }
    }
    ctl.mixer0 {
        type hw
        card 1
    }
    # pcm type jack
    pcm.jack {
        type jack
        playback_ports {
            0 system:playback_1
            1 system:playback_2
        }
        capture_ports {
            0 system:capture_1
            1 system:capture_2
        }
    }
    cat: /etc/security/limits.d/audio.conf: No such file or directory
    I have a file called 99-audio.conf
    cat /etc/security/limits.d/99-audio.conf
    @audio - rtprio 99
    @audio - memlock unlimited
    Also i've seen some guys changing this file too:
    cat /etc/security/limits.conf
    # /etc/security/limits.conf
    #Each line describes a limit for a user in the form:
    #<domain> <type> <item> <value>
    #Where:
    #<domain> can be:
    # - an user name
    # - a group name, with @group syntax
    # - the wildcard *, for default entry
    # - the wildcard %, can be also used with %group syntax,
    # for maxlogin limit
    #<type> can have the two values:
    # - "soft" for enforcing the soft limits
    # - "hard" for enforcing hard limits
    #<item> can be one of the following:
    # - core - limits the core file size (KB)
    # - data - max data size (KB)
    # - fsize - maximum filesize (KB)
    # - memlock - max locked-in-memory address space (KB)
    # - nofile - max number of open files
    # - rss - max resident set size (KB)
    # - stack - max stack size (KB)
    # - cpu - max CPU time (MIN)
    # - nproc - max number of processes
    # - as - address space limit (KB)
    # - maxlogins - max number of logins for this user
    # - maxsyslogins - max number of logins on the system
    # - priority - the priority to run user process with
    # - locks - max number of file locks the user can hold
    # - sigpending - max number of pending signals
    # - msgqueue - max memory used by POSIX message queues (bytes)
    # - nice - max nice priority allowed to raise to values: [-20, 19]
    # - rtprio - max realtime priority
    #<domain> <type> <item> <value>
    #* soft core 0
    #* hard rss 10000
    #@student hard nproc 20
    #@faculty soft nproc 20
    #@faculty hard nproc 50
    #ftp hard nproc 0
    #@student - maxlogins 4
    * - rtprio 0
    * - nice 0
    @audio - rtprio 65
    @audio - nice -10
    @audio - memlock unlimited
    jackd 0.121.3
    Here are the snaps for QJackCtl.
    Also, check out this stuff that I've recorded with Audacity, only from the line and nothing else:
    http://ompldr.org/vZ3A2eg
    Thanks!
    Last edited by iamgabo (2012-12-15 02:21:08)

  • Unable to see pipeline steps in SXMB_MONI

    Hi,
    I have completed the development and quality work for my PI 7.1.
    I was testing the messages in the quality system, so I went to SXMB_MONI to see the messages.
    After double-clicking on the successfully processed message, it shows the pipeline steps.
    In that I am able to see:
    1. Inbound Message (CENTRAL)
    2. XML Validation Inbound Channel Request
    3. Call Adapter
    4. Response
    but I am unable to see:
    1. Receiver Determination
    2. Interface Determination
    3. Receiver Grouping
    4. Request Message Mapping
    5. Technical Routing
    whereas in the development system I am able to see all of the above.
    Is there any configuration I have to do in SXMB_ADM?

    Were the configuration objects created directly, or imported using the transport mechanism?
    If created directly, check the cache status and re-activate the objects you mentioned in the thread.
    If transported, follow what was posted earlier.

  • What is the benefit of a pipelined function

    Hi,
    I agree that a pipelined function returns rows as soon as they are ready.
    But suppose my main query's WHERE clause uses the pipelined function as a filter, like below:
    select p_name a1, p_add a2 from tname, tadd where tname.id = tadd.id and tname in (select * from table(pipe_fun_return_some_names));
    Now my question is: with a pipelined function, will my main query start executing without waiting for all the data returned by the function?
    If yes, how does my main query get executed?
    If no, what will it cost me from a performance point of view?
    So, will using a pipelined function like this improve the performance of my SQL, or can I simply use a table type and object to query:
    select p_name a1, p_add a2 from tname, tadd where tname.id = tadd.id and tname in (select * from table(table_fun_return_some_names));
    Thanks, gurus...

    Ora_Is_Not_Magic wrote:
    Hi,
    I agree that piping function return some set of rows when it is ready.
    But if my main query where clause is using the piping function for filter like below
    select p_name a1, p_add a2 from tname, tadd where tname.id = tadd.id and tname in (select * from table(pipe_fun_return_some_names));
    Now my question is "using pipe function will my main query starts executing without waiting all data return by pipe function"
    If yes then how my main query gets executed.
    If No how can then will it cost me performance point of view.
    Do you mean something along the lines that, if the value is found early in the subquery (pipelined function) results, it will return the main query data quicker than if the value is found later in those results?
    SQL> create or replace type t_nums as table of number;
      2  /
    Type created.
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace function f_pipe_asc return t_nums pipelined as
      2  begin
      3    for i in 1..1000000
      4    loop
      5      pipe row(i);
      6    end loop;
      7    return;
      8* end;
    SQL> /
    Function created.
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace function f_pipe_desc return t_nums pipelined as
      2  begin
      3    for i in 1..1000000
      4    loop
      5      pipe row(1000001-i);
      6    end loop;
      7    return;
      8* end;
    SQL> /
    Function created.
    SQL> @c:\statson
    SQL> select 1 from dual where 1 in (select * from table(f_pipe_asc()));
             1
             1
    Elapsed: 00:00:00.29
    Execution Plan
    Plan hash value: 4199234228
    | Id  | Operation                            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |            |    82 |       |    28   (8)| 00:00:01 |
    |   1 |  NESTED LOOPS                        |            |    82 |       |    28   (8)| 00:00:01 |
    |   2 |   FAST DUAL                          |            |     1 |       |     2   (0)| 00:00:01 |
    |   3 |   VIEW                               | VW_NSO_1   |    82 |       |    26   (8)| 00:00:01 |
    |   4 |    SORT UNIQUE                       |            |    82 |   164 |    26   (8)| 00:00:01 |
    |*  5 |     COLLECTION ITERATOR PICKLER FETCH| F_PIPE_ASC |       |       |            |          |
    Predicate Information (identified by operation id):
       5 - filter(VALUE(KOKBF$)=1)
    Statistics
            100  recursive calls
              0  db block gets
             88  consistent gets
              0  physical reads
            116  redo size
            404  bytes sent via SQL*Net to client
            396  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> select 1 from dual where 1 in (select * from table(f_pipe_desc()));
             1
             1
    Elapsed: 00:00:00.31
    Execution Plan
    Plan hash value: 2978834354
    | Id  | Operation                            | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |             |    82 |       |    28   (8)| 00:00:01 |
    |   1 |  NESTED LOOPS                        |             |    82 |       |    28   (8)| 00:00:01 |
    |   2 |   FAST DUAL                          |             |     1 |       |     2   (0)| 00:00:01 |
    |   3 |   VIEW                               | VW_NSO_1    |    82 |       |    26   (8)| 00:00:01 |
    |   4 |    SORT UNIQUE                       |             |    82 |   164 |    26   (8)| 00:00:01 |
    |*  5 |     COLLECTION ITERATOR PICKLER FETCH| F_PIPE_DESC |       |       |            |          |
    Predicate Information (identified by operation id):
       5 - filter(VALUE(KOKBF$)=1)
    Statistics
             28  recursive calls
              0  db block gets
             48  consistent gets
              0  physical reads
              0  redo size
            404  bytes sent via SQL*Net to client
            396  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    I don't have your data to test, but I'm guessing that because the optimiser does a SORT UNIQUE on the pipelined results in order to perform the "IN", it will take the same amount of time whether the required value comes out of the pipeline first or last.

  • Scan for and connect to networks from an openbox pipe menu (netcfg)

    So the other day when I was using wifi-select (awesome tool) to connect to a friend's hot-spot, I realized "hey! this would be great as an openbox pipe menu." I'm fairly decent in bash, and I knew both netcfg and wifi-select were in bash, so why not rewrite it that way?
    Wifi-Pipe
    A simplified version of wifi-select which scans for networks and populates an openbox right-click menu item with the available networks, displaying security type and signal strength. Click on a network to connect via netcfg the same way wifi-select does it.
    zenity is used to ask for a password and to notify of a bad connection. One can optionally remove the netcfg profile if the connection fails.
    What's needed
    -- you have to be using netcfg to manage your wireless
    -- you have to install zenity
    -- you have to save the script as ~/.config/openbox/wifi-pipe and make it executable:
    chmod +x ~/.config/openbox/wifi-pipe
    -- you have to add a sudoers entry to allow passwordless sudo on this script and netcfg (!)
    USERNAME ALL=(ALL) NOPASSWD: /usr/bin/netcfg
    USERNAME ALL=(ALL) NOPASSWD: /home/USERNAME/.config/openbox/wifi-pipe
    -- you have to adjust  ~/.config/openbox/menu.xml like so:
    <menu id="root-menu" label="Openbox 3">
    <menu id="pipe-wifi" label="Wifi" execute="sudo /home/USERNAME/.config/openbox/wifi-pipe INTERFACE" />
    <menu id="term-menu"/>
    <item label="Run...">
    <action name="Execute">
    <command>gmrun</command>
    </action>
    </item>
    </menu>
    where USERNAME is you and INTERFACE is probably wlan0 or similar
    openbox --reconfigure and you should be good to go.
    The script
    #!/bin/bash
    # pbrisbin 2009
    # simplified version of wifi-select designed to output as an openbox pipe menu
    # required:
    # netcfg
    # zenity
    # NOPASSWD entries for this and netcfg through visudo
    # the following in menu.xml:
    # <menu id="pipe-wifi" label="Wifi" execute="sudo /path/to/wifi.pipe interface"/>
    # the idea is to run this script once to scan/print, then again immediately to connect.
    # therefore, if you scan but don't connect, a temp file is left in /tmp. the next scan
    # will overwrite it, and the next connect will remove it.
    # source this just to get PROFILE_DIR
    . /usr/lib/network/network
    [ -z "$PROFILE_DIR" ] && PROFILE_DIR='/etc/network.d/'
    # awk code for parsing iwlist output
    # putting it here removes the wifi-select dependency
    # and allows for my own tweaking
    # prints a list "essid=security=quality_as_percentage"
    PARSER='
    BEGIN { FS=":"; OFS="="; }
    /\<Cell/ { if (essid) print essid, security, quality[2]/quality[3]*100; security="none" }
    /\<ESSID:/ { essid=substr($2, 2, length($2) - 2) } # discard quotes
    /\<Quality=/ { split($1, quality, "[=/]") }
    /\<Encryption key:on/ { security="wep" }
    /\<IE:.*WPA.*/ { security="wpa" }
    END { if (essid) print essid, security, quality[2]/quality[3]*100 }
    '
    errorout() {
    echo "<openbox_pipe_menu>"
    echo "<item label=\"$1\" />"
    echo "</openbox_pipe_menu>"
    exit 1
    }
    create_profile() {
    ESSID="$1"; INTERFACE="$2"; SECURITY="$3"; KEY="$4"
    PROFILE_FILE="$PROFILE_DIR$ESSID"
    cat > "$PROFILE_FILE" << END_OF_PROFILE
    CONNECTION="wireless"
    ESSID="$ESSID"
    INTERFACE="$INTERFACE"
    DESCRIPTION="Automatically generated profile"
    SCAN="yes"
    IP="dhcp"
    TIMEOUT="10"
    SECURITY="$SECURITY"
    END_OF_PROFILE
    # i think wifi-select should adopt these perms too...
    if [ -n "$KEY" ]; then
    echo "KEY=\"$KEY\"" >> "$PROFILE_FILE"
    chmod 600 "$PROFILE_FILE"
    else
    chmod 644 "$PROFILE_FILE"
    fi
    }
    print_menu() {
    # scan for networks
    iwlist $INTERFACE scan 2>/dev/null | awk "$PARSER" | sort -t= -nrk3 > /tmp/networks.tmp
    # exit if none found
    if [ ! -s /tmp/networks.tmp ]; then
    rm /tmp/networks.tmp
    errorout "no networks found."
    fi
    # otherwise print the menu
    local IFS='='
    echo "<openbox_pipe_menu>"
    while read ESSID SECURITY QUALITY; do
    echo "<item label=\"$ESSID ($SECURITY) ${QUALITY/.*/}%\">" # trim decimals
    echo " <action name=\"Execute\">"
    echo " <command>sudo $0 $INTERFACE connect \"$ESSID\"</command>"
    echo " </action>"
    echo "</item>"
    done < /tmp/networks.tmp
    echo "</openbox_pipe_menu>"
    }
    connect() {
    # check for an existing profile
    PROFILE_FILE="$(grep -REl "ESSID=[\"']?$ESSID[\"']?" "$PROFILE_DIR" | grep -v '~$' | head -n1)"
    # if found use it, else create a new profile
    if [ -n "$PROFILE_FILE" ]; then
    PROFILE=$(basename "$PROFILE_FILE")
    else
    PROFILE="$ESSID"
    SECURITY="$(awk -F '=' "/$ESSID/"'{print $2}' /tmp/networks.tmp | head -n1)"
    # ask for the security key if needed
    if [ "$SECURITY" != "none" ]; then
    KEY="$(zenity --entry --title="Authentication" --text="Please enter $SECURITY key for $ESSID" --hide-text)"
    fi
    # create the new profile
    create_profile "$ESSID" "$INTERFACE" "$SECURITY" "$KEY"
    fi
    # connect
    netcfg2 "$PROFILE" >/tmp/output.tmp
    # if failed, ask about removal of created profile
    if [ $? -ne 0 ]; then
    zenity --question \
    --title="Connection failed" \
    --text="$(grep -Eo "[\-\>]\ .*$" /tmp/output.tmp) \n Remove $PROFILE_FILE?" \
    --ok-label="Remove profile"
    [ $? -eq 0 ] && rm $PROFILE_FILE
    fi
    rm /tmp/output.tmp
    rm /tmp/networks.tmp
    }
    [ $(id -u) -ne 0 ] && errorout "root access required."
    [ -z "$1" ] && errorout "usage: $0 [interface]"
    INTERFACE="$1"; shift
    # i added a sleep if we need to explicitly bring it up
    # b/c youll get "no networks found" when you scan right away
    # this only happens if we aren't up already
    if ! ifconfig | grep -q $INTERFACE; then
    ifconfig $INTERFACE up &>/dev/null || errorout "$INTERFACE not up"
    while ! ifconfig | grep -q $INTERFACE; do sleep 1; done
    fi
    if [ "$1" = "connect" ]; then
    ESSID="$2"
    connect
    else
    print_menu
    fi
    Screenshots
    removed -- Hi-res shots available on my site
    NOTE - I have not tested this extensively, but it was working for me in most cases. Any updates/fixes will be edited right into this original post. Enjoy!
    UPDATE - 10/24/2009: I moved the awk statement from wifi-select directly into the script. This did two things: wifi-select is no longer needed on the system, and I could tweak the awk statement to be more accurate. It now prints a true percentage. iwlist prints something like Quality=17/70, and the original awk statement would just output 17 as the quality. I changed it to print (17/70)*100, then bash trims the decimals so you get a true percentage.
    Last edited by brisbin33 (2010-05-09 01:28:20)

    froli wrote:
    I think the script's not working ... When I type
    sh wifi-pipe
    in a term it returns nothing
    well, just to be sure you're doing it right...
    the above is only an adjustment to the OB script's print_menu() function; it's not an entire script by itself. So, if the original OB script shows output for you with
    sh ./wifi-pipe
    then using the above print_menu() function (with all the other supporting code) should also show output (it really only changes the echoes so they print the info in the pekwm format).
    oh, and if neither version shows output when you run it in a term, then you've got other issues... ;P
    here's an entire [untested] pekwm script:
    #!/bin/bash
    # pbrisbin 2009
    # simplified version of wifi-select designed to output as an pekwm pipe menu
    # required:
    # netcfg
    # zenity
    # NOPASSWD entries for this and netcfg through visudo
    # the following in pekwm config file:
    # SubMenu = "WiFi" {
    # Entry = { Actions = "Dynamic /path/to/wifi-pipe" }
    # }
    # the idea is to run this script once to scan/print, then again immediately to connect.
    # therefore, if you scan but don't connect, a temp file is left in /tmp. the next scan
    # will overwrite it, and the next connect will remove it.
    # source this to get PROFILE_DIR and SUBR_DIR
    . /usr/lib/network/network
    errorout() {
    echo "Dynamic {"
    echo " Entry = \"$1\""
    echo "}"
    exit 1
    }
    create_profile() {
    ESSID="$1"; INTERFACE="$2"; SECURITY="$3"; KEY="$4"
    PROFILE_FILE="$PROFILE_DIR$ESSID"
    cat > "$PROFILE_FILE" << END_OF_PROFILE
    CONNECTION="wireless"
    ESSID="$ESSID"
    INTERFACE="$INTERFACE"
    DESCRIPTION="Automatically generated profile"
    SCAN="yes"
    IP="dhcp"
    TIMEOUT="10"
    SECURITY="$SECURITY"
    END_OF_PROFILE
    # i think wifi-select should adopt these perms too...
    if [ -n "$KEY" ]; then
    echo "KEY=\"$KEY\"" >> "$PROFILE_FILE"
    chmod 600 "$PROFILE_FILE"
    else
    chmod 644 "$PROFILE_FILE"
    fi
    }
    print_menu() {
    # scan for networks
    iwlist $INTERFACE scan 2>/dev/null | awk -f $SUBR_DIR/parse-iwlist.awk | sort -t= -nrk3 > /tmp/networks.tmp
    # exit if none found
    if [ ! -s /tmp/networks.tmp ]; then
    rm /tmp/networks.tmp
    errorout "no networks found."
    fi
    # otherwise print the menu
    echo "Dynamic {"
    IFS='='
    cat /tmp/networks.tmp | while read ESSID SECURITY QUALITY; do
    echo "Entry = \"$ESSID ($SECURITY) $QUALITY%\" {"
    echo " Actions = \"Exec sudo $0 $INTERFACE connect \\\"$ESSID\\\"\""
    echo "}"
    done
    unset IFS
    echo "}"
    }
    connect() {
    # check for an existing profile
    PROFILE_FILE="$(grep -REl "ESSID=[\"']?$ESSID[\"']?" "$PROFILE_DIR" | grep -v '~$' | head -n1)"
    # if found use it, else create a new profile
    if [ -n "$PROFILE_FILE" ]; then
    PROFILE=$(basename "$PROFILE_FILE")
    else
    PROFILE="$ESSID"
    SECURITY="$(awk -F '=' "/$ESSID/"'{print $2}' /tmp/networks.tmp | head -n1)"
    # ask for the security key if needed
    if [ "$SECURITY" != "none" ]; then
    KEY="$(zenity --entry --title="Authentication" --text="Please enter $SECURITY key for $ESSID" --hide-text)"
    fi
    # create the new profile
    create_profile "$ESSID" "$INTERFACE" "$SECURITY" "$KEY"
    fi
    # connect
    netcfg2 "$PROFILE" >/tmp/output.tmp
    # if failed, ask about removal of created profile
    if [ $? -ne 0 ]; then
    zenity --question \
    --title="Connection failed" \
    --text="$(grep -Eo "[\-\>]\ .*$" /tmp/output.tmp) \n Remove $PROFILE_FILE?" \
    --ok-label="Remove profile"
    [ $? -eq 0 ] && rm $PROFILE_FILE
    fi
    rm /tmp/output.tmp
    rm /tmp/networks.tmp
    }
    [ $(id -u) -ne 0 ] && errorout "root access required."
    [ -z "$1" ] && errorout "usage: $0 [interface]"
    INTERFACE="$1"; shift
    # i added a sleep if we need to explicitly bring it up
    # b/c youll get "no networks found" when you scan right away
    # this only happens if we aren't up already
    if ! ifconfig | grep -q $INTERFACE; then
    ifconfig $INTERFACE up &>/dev/null || errorout "$INTERFACE not up"
    sleep 3
    fi
    if [ "$1" = "connect" ]; then
    ESSID="$2"
    connect
    else
    print_menu
    fi
    exit 0

  • Recovering the failed server to the Cluster - BROKEN PIPE

    Hi,
    I have a WebLogic cluster with 2 managed servers on 2 different machines.
    I have an application.ear which is kept on the first server; while deploying, I choose the option "copy to the other server" and deploy this EAR to the cluster.
    Mine is an applet-servlet based application, and the applet gets downloaded onto the client browser.
    When both servers are up, load balancing and failover both work. The problem comes when rejoining the failed server back to the cluster.
    I have set up the classpath and argument entries in the "Remote Start" tab of the server configuration.
    I bring the server back through the WebLogic console and it joins the cluster without any problem.
    If some users had already opened the application, then it still works fine after the rejoin.
    But if a new user tries to access the application, we get:
              java.net.SocketException: Broken pipe
              at java.net.SocketOutputStream.socketWrite0(Native Method)
              at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
              at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
              at weblogic.servlet.internal.ChunkUtils.writeChunkTransfer(ChunkUtils.java:267)
              at weblogic.servlet.internal.ChunkUtils.writeChunks(ChunkUtils.java:239)
              at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:311)
              at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:387)
              at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:254)
              at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:125)
              at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:184)
              at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1639)
              at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1603)
              at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1325)
              at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1304)
              at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1247)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
              at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1224)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1050)
              at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1224)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1050)
              at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1332)
              at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1304)
              at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1247)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
              at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1332)
              at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1304)
              at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1247)
              at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
              at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:278)
              at WebRequestController.doPost(WebRequestController.java:184)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
                   at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1006)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:419)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:315)
              at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:6718)
              at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
              at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3764)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2644)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)
    We use the iPlanet proxy server to route to the cluster.
    Somebody please help.
    Regards,
    Suresh

    Davin Czukoski wrote:
    We have been switching back and forth between the BEA MS SQL server driver
    and the Data Direct driver ever since the SP3 problem started.
    This seems to be a problem with the BEA MS SQL server driver. We switched to
    Data Direct and the problem went away.
    There were no memory messages in our logs for SQL server.
    Ok. And this only happens during a big sequence of inserts? Let me see your code,
    and please describe the table too. I'll try to duplicate it, but if the data direct driver
    or the free MS driver works for you, go with them.
    Joe
    "Joseph Weinstein" <[email protected]> wrote in message
    news:[email protected]..
    Davin Czukoski wrote:
    I am getting this error part way through a 1000-row update.
    Exception: I/O exception while talking to the server,
    java.io.IOException:
    Broken pipe
    Is it a driver or network issue?
    Probably neither. I'm guessing the DBMS ran out of memory for the insert-logging
    or something like that, and killed the client connection. Check your DBMS log.
    Joe Weinstein at BEA
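    Joe's diagnosis is worth illustrating: a "Broken pipe" IOException on the client almost always means the peer (here, the DBMS) closed the connection first. A minimal sketch (Python here for brevity, using a local socketpair to stand in for the driver's TCP connection; this is not BEA/WebLogic code):

```python
import socket

# Simulate the server side killing the connection while the client
# keeps writing: the kernel answers with EPIPE/ECONNRESET, which
# JDBC drivers surface to the application as "Broken pipe".
client, server = socket.socketpair()
server.close()                      # the "DBMS" drops the connection

err = None
try:
    for _ in range(16):             # the first write may still be buffered
        client.sendall(b"INSERT ..." * 1024)
except OSError as exc:              # BrokenPipeError / ConnectionResetError
    err = exc
finally:
    client.close()

print("write failed with:", type(err).__name__)
```

    The moral matches Joe's advice: the error is a symptom on the writing side; the cause (why the server hung up) is in the server's own log.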

  • Pipes (Input- and Output-streams) to a Process are not closed

    Hello experts!
    I have a long-living server process that opens external programs with
    Runtime.getRuntime().exec() all the time. For communication with one
    of the external programs I use stdin and stdout. I close all streams in
    finally blocks, so there should not be any open streams.
    The problem is that most of the time the used streams are closed
    correctly, but sometimes some of them are left open. I checked that with
    the Linux command-line tool lsof. So over time the number of open
    pipes increases and eats up all file handles until an IOException (too
    many open files) is thrown.
    If I watch the -verbosegc output, I see that most of the open pipes
    are cleaned up after a GC run. But over time - not all.
    I start the external program in a thread, which could explain why
    it happens only sometimes.
    I have been hunting this bug for quite a long time now. Are there any known
    problems with using pipes to/from other processes (under Linux)?
    thanks
    lukas

    Hi!
    Now I did some heavy logging and I saw that the remaining pipes are the ones I DON'T
    open to read or write! No joke! For example, for one process I don't read or write any of
    stdin, stdout, stderr - then in one out of X executions all three pipes are left open (shown by
    lsof for the Java process, after many GC runs).
    To test it I read the stdin of this process - then this stream was closed in the error case,
    but stdout and stderr are still open. This is really strange!
    Has anybody seen this before?
    lukas
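    For the record, the usual remedy in Java is to close all three of proc.getInputStream(), proc.getOutputStream() and proc.getErrorStream() in a finally block even when you never use them, since the OS-level pipes exist whether or not you touch them. The same discipline, sketched with Python's subprocess module rather than Java (so this is an analogue of the fix, not lukas's actual code):

```python
import subprocess

def run_and_reap(cmd):
    """Run a child process and guarantee every pipe is closed,
    used or not -- unclosed pipes are exactly what lsof reports
    as leaked file handles over time."""
    proc = subprocess.Popen(cmd,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    try:
        out, _ = proc.communicate(input=b"")  # drains stdout/stderr to EOF
        return proc.returncode, out
    finally:
        # Defensive close of all three pipes, mirroring the Java fix of
        # closing getInputStream()/getOutputStream()/getErrorStream()
        # in a finally block.
        for pipe in (proc.stdin, proc.stdout, proc.stderr):
            if pipe is not None and not pipe.closed:
                pipe.close()
        proc.wait()  # reap the child so it cannot linger as a zombie

rc, out = run_and_reap(["echo", "pipes closed"])
```

    If the child is started from a worker thread, the finally block still runs on that thread, so the close-everything discipline holds regardless of where exec() happened.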

  • Production order with pipe line material

    A production order was created and confirmed, and its pipeline materials got stuck in COGI. On analysis we found that in that specific order the pipeline material is not activated with the special stock indicator (i.e. 4).
    Because of this it got stuck in COGI. For the same material we created another order and found that there the allocation is activated with the special stock indicator (i.e. 4).
    Please help me resolve the existing goods issue in COGI and find out why the special stock indicator (i.e. 4) is not activated in the production order.

    Dear,
    Is the pipeline material showing in the production reservation as one of the BOM components?
    If that is the case, then cancel the confirmation in CO13 and try to confirm the operation with the Clear Reservation option.
    Refer the below thread which may help you to figure out the issue :
    Re : How to consider Power and Steam in power plants and paper mills
    Re : Power and steam
    Regards
    JH

  • Sales orders Dispatch Actual No of pipes do not match with R3 report

    Dear Gurus,
    Looking for your assistance with an issue.
    Issue: For some of the sales orders, the Dispatch Actual No. of pipes does not match the R3 report. The Actual No. of Pipes matches the LIKP table values in R3. The query needs to be modified to restrict records as per the logic in the WGSRLDES report/program.
    Can you help me in this regard?
    Your responses are most appreciated.

    Hey Pathak,
    The problem is, the BI report matches the details in the R/3 LIKP table. But the business user is using the WGSRLDES report/program, where the data is mismatching.
    Hope you got my point.

  • Python openbox pipe menu

    I somewhat hijacked a different thread and my question is more suited here.
    I'm using a Python script to check Gmail in a pipe menu. At first it was creating problems: it would create a cache file but then fail to load until the file was removed. To work around this, I removed the last line (which created the cache) and it all works. However, I would prefer to have it work as intended.
    The script:
    #!/usr/bin/python
    # Authors: [email protected] [email protected]
    # License: GPL 2.0
    # Usage:
    # Put an entry in your ~/.config/openbox/menu.xml like this:
    # <menu id="gmail" label="gmail" execute="~/.config/openbox/scripts/gmail-openbox.py" />
    # And inside <menu id="root-menu" label="openbox">, add this somewhere (wherever you want it on your menu)
    # <menu id="gmail" />
    import os
    import sys
    import logging

    name = "111111"
    pw = "000000"
    browser = "firefox3"
    filename = "/tmp/.gmail.cache"
    login = "\'https://mail.google.com/mail\'"

    # Allow us to run using installed `libgmail` or the one in parent directory.
    try:
        import libgmail
    except ImportError:
        # Urghhh...
        sys.path.insert(1,
                        os.path.realpath(os.path.join(os.path.dirname(__file__),
                                                      os.path.pardir)))
        import libgmail

    if __name__ == "__main__":
        import sys
        from getpass import getpass
        if not os.path.isfile(filename):
            ga = libgmail.GmailAccount(name, pw)
            try:
                ga.login()
            except libgmail.GmailLoginFailure:
                print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                print "<openbox_pipe_menu>"
                print " <item label=\"login failed.\">"
                print " <action name=\"Execute\"><execute>" + browser + " " + login + "</execute></action>"
                print " </item>"
                print "</openbox_pipe_menu>"
                raise SystemExit
        else:
            ga = libgmail.GmailAccount(
                state = libgmail.GmailSessionState(filename = filename))
        msgtotals = ga.getUnreadMsgCount()
        print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
        print "<openbox_pipe_menu>"
        print "<separator label=\"Gmail\"/>"
        if msgtotals == 0:
            print " <item label=\"no new messages.\">"
        elif msgtotals == 1:
            print " <item label=\"1 new message.\">"
        else:
            print " <item label=\"" + str(msgtotals) + " new messages.\">"
        print " <action name=\"Execute\"><execute>" + browser + " " + login + "</execute></action>"
        print " </item>"
        print "</openbox_pipe_menu>"
        state = libgmail.GmailSessionState(account = ga).save(filename)
    The line I removed:
    state = libgmail.GmailSessionState(account = ga).save(filename)
    The error I'd get if the cache existed:
    Traceback (most recent call last):
    File "/home/shawn/.config/openbox/scripts/gmail.py", line 56, in <module>
    msgtotals = ga.getUnreadMsgCount()
    File "/home/shawn/.config/openbox/scripts/libgmail.py", line 547, in getUnreadMsgCount
    q = "is:" + U_AS_SUBSET_UNREAD)
    File "/home/shawn/.config/openbox/scripts/libgmail.py", line 428, in _parseSearchResult
    return self._parsePage(_buildURL(**params))
    File "/home/shawn/.config/openbox/scripts/libgmail.py", line 401, in _parsePage
    items = _parsePage(self._retrievePage(urlOrRequest))
    File "/home/shawn/.config/openbox/scripts/libgmail.py", line 369, in _retrievePage
    if self.opener is None:
    AttributeError: GmailAccount instance has no attribute 'opener'
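    That traceback looks consistent with a constructor bug rather than with the cache file itself: in the libgmail source below, GmailAccount.__init__ only creates self.opener on the name-and-password branch, so an account restored from a saved GmailSessionState (the `elif state:` branch) never gets one, and the first _retrievePage() call dies before its `if self.opener is None` guard can help. A stripped-down reproduction of that pattern (Python 3, illustrative names, not libgmail's real API):

```python
class Account:
    """Mimics libgmail.GmailAccount's two construction paths."""
    def __init__(self, name=None, pw=None, state=None):
        if name and pw:
            self.opener = object()   # only this branch ever sets .opener
        elif state:
            self.name = state        # restored session: .opener never set
        else:
            raise ValueError("need credentials or saved state")

    def retrieve_page(self):
        # The guard is useless: if .opener was never assigned, the
        # attribute lookup itself raises AttributeError, exactly as
        # in the traceback above.
        if self.opener is None:
            raise RuntimeError("cannot find urlopener")
        return "page"

err = None
try:
    Account(state="/tmp/.gmail.cache").retrieve_page()
except AttributeError as exc:
    err = exc
print("restored account fails with:", err)
```

    If that is the cause, the fix would be to assign self.opener unconditionally in __init__ (e.g. build the opener before the name/state branching, or default it to None at the top), rather than deleting the line that saves the session cache.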
    EDIT - you might need the libgmail.py
    #!/usr/bin/env python
    # libgmail -- Gmail access via Python
    ## To get the version number of the available libgmail version.
    ## Reminder: add date before next release. This attribute is also
    ## used in the setup script.
    Version = '0.1.8' # (Nov 2007)
    # Original author: [email protected]
    # Maintainers: Waseem ([email protected]) and Stas Z ([email protected])
    # License: GPL 2.0
    # NOTE:
    # You should ensure you are permitted to use this script before using it
    # to access Google's Gmail servers.
    # Gmail Implementation Notes
    # ==========================
    # * Folders contain message threads, not individual messages. At present I
    # do not know any way to list all messages without processing thread list.
    LG_DEBUG=0
    from lgconstants import *
    import os,pprint
    import re
    import urllib
    import urllib2
    import mimetypes
    import types
    from cPickle import load, dump
    from email.MIMEBase import MIMEBase
    from email.MIMEText import MIMEText
    from email.MIMEMultipart import MIMEMultipart
    GMAIL_URL_LOGIN = "https://www.google.com/accounts/ServiceLoginBoxAuth"
    GMAIL_URL_GMAIL = "https://mail.google.com/mail/?ui=1&"
    # Set to any value to use proxy.
    PROXY_URL = None # e.g. libgmail.PROXY_URL = 'myproxy.org:3128'
    # TODO: Get these on the fly?
    STANDARD_FOLDERS = [U_INBOX_SEARCH, U_STARRED_SEARCH,
    U_ALL_SEARCH, U_DRAFTS_SEARCH,
    U_SENT_SEARCH, U_SPAM_SEARCH]
    # Constants with names not from the Gmail Javascript:
    # TODO: Move to `lgconstants.py`?
    U_SAVEDRAFT_VIEW = "sd"
    D_DRAFTINFO = "di"
    # NOTE: All other DI_* field offsets seem to match the MI_* field offsets
    DI_BODY = 19
    versionWarned = False # If the Javascript version is different have we
    # warned about it?
    RE_SPLIT_PAGE_CONTENT = re.compile("D\((.*?)\);", re.DOTALL)
    class GmailError(Exception):
    Exception thrown upon gmail-specific failures, in particular a
    failure to log in and a failure to parse responses.
    pass
    class GmailSendError(Exception):
    Exception to throw if we're unable to send a message
    pass
    def _parsePage(pageContent):
    Parse the supplied HTML page and extract useful information from
    the embedded Javascript.
    lines = pageContent.splitlines()
    data = '\n'.join([x for x in lines if x and x[0] in ['D', ')', ',', ']']])
    #data = data.replace(',,',',').replace(',,',',')
    data = re.sub(',{2,}', ',', data)
    result = []
    try:
    exec data in {'__builtins__': None}, {'D': lambda x: result.append(x)}
    except SyntaxError,info:
    print info
    raise GmailError, 'Failed to parse data returned from gmail.'
    items = result
    itemsDict = {}
    namesFoundTwice = []
    for item in items:
    name = item[0]
    try:
    parsedValue = item[1:]
    except Exception:
    parsedValue = ['']
    if itemsDict.has_key(name):
    # This handles the case where a name key is used more than
    # once (e.g. mail items, mail body etc) and automatically
    # places the values into list.
    # TODO: Check this actually works properly, it's early... :-)
    if len(parsedValue) and type(parsedValue[0]) is types.ListType:
    for item in parsedValue:
    itemsDict[name].append(item)
    else:
    itemsDict[name].append(parsedValue)
    else:
    if len(parsedValue) and type(parsedValue[0]) is types.ListType:
    itemsDict[name] = []
    for item in parsedValue:
    itemsDict[name].append(item)
    else:
    itemsDict[name] = [parsedValue]
    return itemsDict
    def _splitBunches(infoItems):# Is this still needed ?? Stas
    Utility to help make it easy to iterate over each item separately,
    even if they were bunched on the page.
    result= []
    # TODO: Decide if this is the best approach.
    for group in infoItems:
    if type(group) == tuple:
    result.extend(group)
    else:
    result.append(group)
    return result
    class SmartRedirectHandler(urllib2.HTTPRedirectHandler):
    def __init__(self, cookiejar):
    self.cookiejar = cookiejar
    def http_error_302(self, req, fp, code, msg, headers):
    # The location redirect doesn't seem to change
    # the hostname header appropriately, so we do
    # by hand. (Is this a bug in urllib2?)
    new_host = re.match(r'http[s]*://(.*?\.google\.com)',
    headers.getheader('Location'))
    if new_host:
    req.add_header("Host", new_host.groups()[0])
    result = urllib2.HTTPRedirectHandler.http_error_302(
    self, req, fp, code, msg, headers)
    return result
    class CookieJar:
    A rough cookie handler, intended to only refer to one domain.
    Does no expiry or anything like that.
    (The only reason this is here is so I don't have to require
    the `ClientCookie` package.)
    def __init__(self):
    self._cookies = {}
    def extractCookies(self, headers, nameFilter = None):
    # TODO: Do this all more nicely?
    for cookie in headers.getheaders('Set-Cookie'):
    name, value = (cookie.split("=", 1) + [""])[:2]
    if LG_DEBUG: print "Extracted cookie `%s`" % (name)
    if not nameFilter or name in nameFilter:
    self._cookies[name] = value.split(";")[0]
    if LG_DEBUG: print "Stored cookie `%s` value `%s`" % (name, self._cookies[name])
    if self._cookies[name] == "EXPIRED":
    if LG_DEBUG:
    print "We got an expired cookie: %s:%s, deleting." % (name, self._cookies[name])
    del self._cookies[name]
    def addCookie(self, name, value):
    self._cookies[name] = value
    def setCookies(self, request):
    request.add_header('Cookie',
    ";".join(["%s=%s" % (k,v)
    for k,v in self._cookies.items()]))
    def _buildURL(**kwargs):
    return "%s%s" % (URL_GMAIL, urllib.urlencode(kwargs))
    def _paramsToMime(params, filenames, files):
    mimeMsg = MIMEMultipart("form-data")
    for name, value in params.iteritems():
    mimeItem = MIMEText(value)
    mimeItem.add_header("Content-Disposition", "form-data", name=name)
    # TODO: Handle this better...?
    for hdr in ['Content-Type','MIME-Version','Content-Transfer-Encoding']:
    del mimeItem[hdr]
    mimeMsg.attach(mimeItem)
    if filenames or files:
    filenames = filenames or []
    files = files or []
    for idx, item in enumerate(filenames + files):
    # TODO: This is messy, tidy it...
    if isinstance(item, str):
    # We assume it's a file path...
    filename = item
    contentType = mimetypes.guess_type(filename)[0]
    payload = open(filename, "rb").read()
    else:
    # We assume it's an `email.Message.Message` instance...
    # TODO: Make more use of the pre-encoded information?
    filename = item.get_filename()
    contentType = item.get_content_type()
    payload = item.get_payload(decode=True)
    if not contentType:
    contentType = "application/octet-stream"
    mimeItem = MIMEBase(*contentType.split("/"))
    mimeItem.add_header("Content-Disposition", "form-data",
    name="file%s" % idx, filename=filename)
    # TODO: Encode the payload?
    mimeItem.set_payload(payload)
    # TODO: Handle this better...?
    for hdr in ['MIME-Version','Content-Transfer-Encoding']:
    del mimeItem[hdr]
    mimeMsg.attach(mimeItem)
    del mimeMsg['MIME-Version']
    return mimeMsg
    class GmailLoginFailure(Exception):
    Raised whenever the login process fails--could be wrong username/password,
    or Gmail service error, for example.
    Extract the error message like this:
    try:
    foobar
    except GmailLoginFailure,e:
    mesg = e.message# or
    print e# uses the __str__
    def __init__(self,message):
    self.message = message
    def __str__(self):
    return repr(self.message)
    class GmailAccount:
    def __init__(self, name = "", pw = "", state = None, domain = None):
    global URL_LOGIN, URL_GMAIL
    self.domain = domain
    if self.domain:
    URL_LOGIN = "https://www.google.com/a/" + self.domain + "/LoginAction"
    URL_GMAIL = "http://mail.google.com/a/" + self.domain + "/?"
    else:
    URL_LOGIN = GMAIL_URL_LOGIN
    URL_GMAIL = GMAIL_URL_GMAIL
    if name and pw:
    self.name = name
    self._pw = pw
    self._cookieJar = CookieJar()
    if PROXY_URL is not None:
    import gmail_transport
    self.opener = urllib2.build_opener(gmail_transport.ConnectHTTPHandler(proxy = PROXY_URL),
    gmail_transport.ConnectHTTPSHandler(proxy = PROXY_URL),
    SmartRedirectHandler(self._cookieJar))
    else:
    self.opener = urllib2.build_opener(
    urllib2.HTTPHandler(debuglevel=0),
    urllib2.HTTPSHandler(debuglevel=0),
    SmartRedirectHandler(self._cookieJar))
    elif state:
    # TODO: Check for stale state cookies?
    self.name, self._cookieJar = state.state
    else:
    raise ValueError("GmailAccount must be instantiated with " \
    "either GmailSessionState object or name " \
    "and password.")
    self._cachedQuotaInfo = None
    self._cachedLabelNames = None
    def login(self):
    # TODO: Throw exception if we were instantiated with state?
    if self.domain:
    data = urllib.urlencode({'continue': URL_GMAIL,
    'at' : 'null',
    'service' : 'mail',
    'userName': self.name,
    'password': self._pw,
    else:
    data = urllib.urlencode({'continue': URL_GMAIL,
    'Email': self.name,
    'Passwd': self._pw,
    headers = {'Host': 'www.google.com',
    'User-Agent': 'Mozilla/5.0 (Compatible; libgmail-python)'}
    req = urllib2.Request(URL_LOGIN, data=data, headers=headers)
    pageData = self._retrievePage(req)
    if not self.domain:
    # The GV cookie no longer comes in this page for
    # "Apps", so this bottom portion is unnecessary for it.
    # This requests the page that provides the required "GV" cookie.
    RE_PAGE_REDIRECT = 'CheckCookie\?continue=([^"\']+)'
    # TODO: Catch more failure exceptions here...?
    try:
    link = re.search(RE_PAGE_REDIRECT, pageData).group(1)
    redirectURL = urllib2.unquote(link)
    redirectURL = redirectURL.replace('\\x26', '&')
    except AttributeError:
    raise GmailLoginFailure("Login failed. (Wrong username/password?)")
    # We aren't concerned with the actual content of this page,
    # just the cookie that is returned with it.
    pageData = self._retrievePage(redirectURL)
    def _retrievePage(self, urlOrRequest):
    if self.opener is None:
    raise "Cannot find urlopener"
    if not isinstance(urlOrRequest, urllib2.Request):
    req = urllib2.Request(urlOrRequest)
    else:
    req = urlOrRequest
    self._cookieJar.setCookies(req)
    req.add_header('User-Agent',
    'Mozilla/5.0 (Compatible; libgmail-python)')
    try:
    resp = self.opener.open(req)
    except urllib2.HTTPError,info:
    print info
    return None
    pageData = resp.read()
    # Extract cookies here
    self._cookieJar.extractCookies(resp.headers)
    # TODO: Enable logging of page data for debugging purposes?
    return pageData
    def _parsePage(self, urlOrRequest):
    Retrieve & then parse the requested page content.
    items = _parsePage(self._retrievePage(urlOrRequest))
    # Automatically cache some things like quota usage.
    # TODO: Cache more?
    # TODO: Expire cached values?
    # TODO: Do this better.
    try:
    self._cachedQuotaInfo = items[D_QUOTA]
    except KeyError:
    pass
    #pprint.pprint(items)
    try:
    self._cachedLabelNames = [category[CT_NAME] for category in items[D_CATEGORIES][0]]
    except KeyError:
    pass
    return items
    def _parseSearchResult(self, searchType, start = 0, **kwargs):
    params = {U_SEARCH: searchType,
    U_START: start,
    U_VIEW: U_THREADLIST_VIEW,
    params.update(kwargs)
    return self._parsePage(_buildURL(**params))
    def _parseThreadSearch(self, searchType, allPages = False, **kwargs):
    Only works for thread-based results at present. # TODO: Change this?
    start = 0
    tot = 0
    threadsInfo = []
    # Option to get *all* threads if multiple pages are used.
    while (start == 0) or (allPages and
    len(threadsInfo) < threadListSummary[TS_TOTAL]):
    items = self._parseSearchResult(searchType, start, **kwargs)
    #TODO: Handle single & zero result case better? Does this work?
    try:
    threads = items[D_THREAD]
    except KeyError:
    break
    else:
    for th in threads:
    if not type(th[0]) is types.ListType:
    th = [th]
    threadsInfo.append(th)
    # TODO: Check if the total or per-page values have changed?
    threadListSummary = items[D_THREADLIST_SUMMARY][0]
    threadsPerPage = threadListSummary[TS_NUM]
    start += threadsPerPage
    # TODO: Record whether or not we retrieved all pages..?
    return GmailSearchResult(self, (searchType, kwargs), threadsInfo)
    def _retrieveJavascript(self, version = ""):
    Note: `version` seems to be ignored.
    return self._retrievePage(_buildURL(view = U_PAGE_VIEW,
    name = "js",
    ver = version))
    def getMessagesByFolder(self, folderName, allPages = False):
    Folders contain conversation/message threads.
    `folderName` -- As set in Gmail interface.
    Returns a `GmailSearchResult` instance.
    *** TODO: Change all "getMessagesByX" to "getThreadsByX"? ***
    return self._parseThreadSearch(folderName, allPages = allPages)
    def getMessagesByQuery(self, query, allPages = False):
    Returns a `GmailSearchResult` instance.
    return self._parseThreadSearch(U_QUERY_SEARCH, q = query,
    allPages = allPages)
    def getQuotaInfo(self, refresh = False):
    Return MB used, Total MB and percentage used.
    # TODO: Change this to a property.
    if not self._cachedQuotaInfo or refresh:
    # TODO: Handle this better...
    self.getMessagesByFolder(U_INBOX_SEARCH)
    return self._cachedQuotaInfo[0][:3]
    def getLabelNames(self, refresh = False):
    # TODO: Change this to a property?
    if not self._cachedLabelNames or refresh:
    # TODO: Handle this better...
    self.getMessagesByFolder(U_INBOX_SEARCH)
    return self._cachedLabelNames
    def getMessagesByLabel(self, label, allPages = False):
    return self._parseThreadSearch(U_CATEGORY_SEARCH,
    cat=label, allPages = allPages)
    def getRawMessage(self, msgId):
    # U_ORIGINAL_MESSAGE_VIEW seems the only one that returns a page.
    # All the other U_* results in a 404 exception. Stas
    PageView = U_ORIGINAL_MESSAGE_VIEW
    return self._retrievePage(
    _buildURL(view=PageView, th=msgId))
    def getUnreadMessages(self):
    return self._parseThreadSearch(U_QUERY_SEARCH,
    q = "is:" + U_AS_SUBSET_UNREAD)
    def getUnreadMsgCount(self):
    items = self._parseSearchResult(U_QUERY_SEARCH,
    q = "is:" + U_AS_SUBSET_UNREAD)
    try:
    result = items[D_THREADLIST_SUMMARY][0][TS_TOTAL_MSGS]
    except KeyError:
    result = 0
    return result
    def _getActionToken(self):
    try:
    at = self._cookieJar._cookies[ACTION_TOKEN_COOKIE]
    except KeyError:
    self.getLabelNames(True)
    at = self._cookieJar._cookies[ACTION_TOKEN_COOKIE]
    return at
    def sendMessage(self, msg, asDraft = False, _extraParams = None):
    `msg` -- `GmailComposedMessage` instance.
    `_extraParams` -- Dictionary containing additional parameters
    to put into POST message. (Not officially
    for external use, more to make feature
    additional a little easier to play with.)
    Note: Now returns `GmailMessageStub` instance with populated
    `id` (and `_account`) fields on success or None on failure.
    # TODO: Handle drafts separately?
    params = {U_VIEW: [U_SENDMAIL_VIEW, U_SAVEDRAFT_VIEW][asDraft],
    U_REFERENCED_MSG: "",
    U_THREAD: "",
    U_DRAFT_MSG: "",
    U_COMPOSEID: "1",
    U_ACTION_TOKEN: self._getActionToken(),
    U_COMPOSE_TO: msg.to,
    U_COMPOSE_CC: msg.cc,
    U_COMPOSE_BCC: msg.bcc,
    "subject": msg.subject,
    "msgbody": msg.body,
    if _extraParams:
    params.update(_extraParams)
    # Amongst other things, I used the following post to work out this:
    # <http://groups.google.com/groups?
    # selm=mailman.1047080233.20095.python-list%40python.org>
    mimeMessage = _paramsToMime(params, msg.filenames, msg.files)
    #### TODO: Ughh, tidy all this up & do it better...
    ## This horrible mess is here for two main reasons:
    ## 1. The `Content-Type` header (which also contains the boundary
    ## marker) needs to be extracted from the MIME message so
    ## we can send it as the request `Content-Type` header instead.
    ## 2. It seems the form submission needs to use "\r\n" for new
    ## lines instead of the "\n" returned by `as_string()`.
    ## I tried changing the value of `NL` used by the `Generator` class
    ## but it didn't work so I'm doing it this way until I figure
    ## out how to do it properly. Of course, first try, if the payloads
    ## contained "\n" sequences they got replaced too, which corrupted
    ## the attachments. I could probably encode the submission,
    ## which would probably be nicer, but in the meantime I'm kludging
    ## this workaround that replaces all non-text payloads with a
    ## marker, changes all "\n" to "\r\n" and finally replaces the
    ## markers with the original payloads.
    ## Yeah, I know, it's horrible, but hey it works doesn't it? If you've
    ## got a problem with it, fix it yourself & give me the patch!
    origPayloads = {}
    FMT_MARKER = "&&&&&&%s&&&&&&"
    for i, m in enumerate(mimeMessage.get_payload()):
    if not isinstance(m, MIMEText): #Do we care if we change text ones?
    origPayloads[i] = m.get_payload()
    m.set_payload(FMT_MARKER % i)
    mimeMessage.epilogue = ""
    msgStr = mimeMessage.as_string()
    contentTypeHeader, data = msgStr.split("\n\n", 1)
    contentTypeHeader = contentTypeHeader.split(":", 1)
    data = data.replace("\n", "\r\n")
    for k,v in origPayloads.iteritems():
    data = data.replace(FMT_MARKER % k, v)
    req = urllib2.Request(_buildURL(), data = data)
    req.add_header(*contentTypeHeader)
    items = self._parsePage(req)
    # TODO: Check composeid?
    # Sometimes we get the success message
    # but the id is 0 and no message is sent
    result = None
    resultInfo = items[D_SENDMAIL_RESULT][0]
    if resultInfo[SM_SUCCESS]:
    result = GmailMessageStub(id = resultInfo[SM_NEWTHREADID],
    _account = self)
    else:
    raise GmailSendError, resultInfo[SM_MSG]
    return result
    def trashMessage(self, msg):
    # TODO: Decide if we should make this a method of `GmailMessage`.
    # TODO: Should we check we have been given a `GmailMessage` instance?
    params = {
    U_ACTION: U_DELETEMESSAGE_ACTION,
    U_ACTION_MESSAGE: msg.id,
    U_ACTION_TOKEN: self._getActionToken(),
    items = self._parsePage(_buildURL(**params))
    # TODO: Mark as trashed on success?
    return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
    def _doThreadAction(self, actionId, thread):
    # TODO: Decide if we should make this a method of `GmailThread`.
    # TODO: Should we check we have been given a `GmailThread` instance?
    params = {
    U_SEARCH: U_ALL_SEARCH, #TODO:Check this search value always works.
    U_VIEW: U_UPDATE_VIEW,
    U_ACTION: actionId,
    U_ACTION_THREAD: thread.id,
    U_ACTION_TOKEN: self._getActionToken(),
    items = self._parsePage(_buildURL(**params))
    return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
    def trashThread(self, thread):
    # TODO: Decide if we should make this a method of `GmailThread`.
    # TODO: Should we check we have been given a `GmailThread` instance?
    result = self._doThreadAction(U_MARKTRASH_ACTION, thread)
    # TODO: Mark as trashed on success?
    return result
    def _createUpdateRequest(self, actionId): #extraData):
    Helper method to create a Request instance for an update (view)
    action.
    Returns populated `Request` instance.
    params = {
    U_VIEW: U_UPDATE_VIEW,
    data = {
    U_ACTION: actionId,
    U_ACTION_TOKEN: self._getActionToken(),
    #data.update(extraData)
    req = urllib2.Request(_buildURL(**params),
    data = urllib.urlencode(data))
    return req
    # TODO: Extract additional common code from handling of labels?
    def createLabel(self, labelName):
    req = self._createUpdateRequest(U_CREATECATEGORY_ACTION + labelName)
    # Note: Label name cache is updated by this call as well. (Handy!)
    items = self._parsePage(req)
    print items
    return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
    def deleteLabel(self, labelName):
    # TODO: Check labelName exits?
    req = self._createUpdateRequest(U_DELETECATEGORY_ACTION + labelName)
    # Note: Label name cache is updated by this call as well. (Handy!)
    items = self._parsePage(req)
    return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
    def renameLabel(self, oldLabelName, newLabelName):
    # TODO: Check oldLabelName exits?
    req = self._createUpdateRequest("%s%s^%s" % (U_RENAMECATEGORY_ACTION,
    oldLabelName, newLabelName))
    # Note: Label name cache is updated by this call as well. (Handy!)
    items = self._parsePage(req)
    return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
    def storeFile(self, filename, label = None):
    # TODO: Handle files larger than single attachment size.
    # TODO: Allow file data objects to be supplied?
    FILE_STORE_VERSION = "FSV_01"
    FILE_STORE_SUBJECT_TEMPLATE = "%s %s" % (FILE_STORE_VERSION, "%s")
    subject = FILE_STORE_SUBJECT_TEMPLATE % os.path.basename(filename)
    msg = GmailComposedMessage(to="", subject=subject, body="",
    filenames=[filename])
    draftMsg = self.sendMessage(msg, asDraft = True)
    if draftMsg and label:
    draftMsg.addLabel(label)
    return draftMsg
    ## CONTACTS SUPPORT
    def getContacts(self):
    Returns a GmailContactList object
    that has all the contacts in it as
    GmailContacts
    contactList = []
    # pnl = a is necessary to get *all* contacts
    myUrl = _buildURL(view='cl',search='contacts', pnl='a')
    myData = self._parsePage(myUrl)
    # This comes back with a dictionary
    # with entry 'cl'
    addresses = myData['cl']
    for entry in addresses:
    if len(entry) >= 6 and entry[0]=='ce':
    newGmailContact = GmailContact(entry[1], entry[2], entry[4], entry[5])
    #### new code used to get all the notes
    #### not used yet due to lockdown problems
    ##rawnotes = self._getSpecInfo(entry[1])
    ##print rawnotes
    ##newGmailContact = GmailContact(entry[1], entry[2], entry[4],rawnotes)
    contactList.append(newGmailContact)
    return GmailContactList(contactList)
    def addContact(self, myContact, *extra_args):
    Attempts to add a GmailContact to the gmail
    address book. Returns true if successful,
    false otherwise
    Please note that after version 0.1.3.3,
    addContact takes one argument of type
    GmailContact, the contact to add.
    The old signature of:
    addContact(name, email, notes='') is still
    supported, but deprecated.
    if len(extra_args) > 0:
    # The user has passed in extra arguments
    # He/she is probably trying to invoke addContact
    # using the old, deprecated signature of:
    # addContact(self, name, email, notes='')
    # Build a GmailContact object and use that instead
    (name, email) = (myContact, extra_args[0])
    if len(extra_args) > 1:
    notes = extra_args[1]
    else:
    notes = ''
    myContact = GmailContact(-1, name, email, notes)
    # TODO: In the ideal world, we'd extract these specific
    # constants into a nice constants file
    # This mostly comes from the Johnvey Gmail API,
    # but also from the gmail.py cited earlier
    myURL = _buildURL(view='up')
    myDataList = [ ('act','ec'),
    ('at', self._cookieJar._cookies['GMAIL_AT']), # Cookie data?
    ('ct_nm', myContact.getName()),
    ('ct_em', myContact.getEmail()),
    ('ct_id', -1 )
    notes = myContact.getNotes()
    if notes != '':
    myDataList.append( ('ctf_n', notes) )
    validinfokeys = [
    'i', # IM
    'p', # Phone
    'd', # Company
    'a', # ADR
    'e', # Email
    'm', # Mobile
    'b', # Pager
    'f', # Fax
    't', # Title
    'o', # Other
    moreInfo = myContact.getMoreInfo()
    ctsn_num = -1
    if moreInfo != {}:
    for ctsf,ctsf_data in moreInfo.items():
    ctsn_num += 1
    # data section header, WORK, HOME,...
    sectionenum ='ctsn_%02d' % ctsn_num
    myDataList.append( ( sectionenum, ctsf ))
    ctsf_num = -1
    if isinstance(ctsf_data[0],str):
    ctsf_num += 1
    # data section
    subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, ctsf_data[0]) # ie. ctsf_00_01_p
    myDataList.append( (subsectionenum, ctsf_data[1]) )
    else:
    for info in ctsf_data:
    if validinfokeys.count(info[0]) > 0:
    ctsf_num += 1
    # data section
    subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, info[0]) # ie. ctsf_00_01_p
    myDataList.append( (subsectionenum, info[1]) )
    myData = urllib.urlencode(myDataList)
    request = urllib2.Request(myURL,
    data = myData)
    pageData = self._retrievePage(request)
    if pageData.find("The contact was successfully added") == -1:
    print pageData
    if pageData.find("already has the email address") > 0:
    raise Exception("Someone with same email already exists in Gmail.")
    elif pageData.find("https://www.google.com/accounts/ServiceLogin"):
    raise Exception("Login has expired.")
    return False
    else:
    return True
    def _removeContactById(self, id):
"""Attempts to remove the contact that occupies
id "id" from the gmail address book.
Returns True if successful,
False otherwise.
This is a little dangerous since you don't really
know who you're deleting. Really,
this should return the name or something of the
person we just killed.
Don't call this method.
You should be using removeContact instead.
"""
    myURL = _buildURL(search='contacts', ct_id = id, c=id, act='dc', at=self._cookieJar._cookies['GMAIL_AT'], view='up')
    pageData = self._retrievePage(myURL)
    if pageData.find("The contact has been deleted") == -1:
    return False
    else:
    return True
    def removeContact(self, gmailContact):
"""Attempts to remove the GmailContact passed in.
Returns True if successful, False otherwise.
"""
    # Let's re-fetch the contact list to make
    # sure we're really deleting the guy
    # we think we're deleting
    newContactList = self.getContacts()
    newVersionOfPersonToDelete = newContactList.getContactById(gmailContact.getId())
    # Ok, now we need to ensure that gmailContact
    # is the same as newVersionOfPersonToDelete
    # and then we can go ahead and delete him/her
    if (gmailContact == newVersionOfPersonToDelete):
    return self._removeContactById(gmailContact.getId())
    else:
    # We have a cache coherency problem -- someone
    # else now occupies this ID slot.
    # TODO: Perhaps signal this in some nice way
    # to the end user?
    print "Unable to delete."
    print "Has someone else been modifying the contacts list while we have?"
    print "Old version of person:",gmailContact
    print "New version of person:",newVersionOfPersonToDelete
    return False
    ## Don't remove this. contact stas
    ## def _getSpecInfo(self,id):
    ## Return all the notes data.
    ## This is currently not used due to the fact that it requests pages in
    ## a dos attack manner.
    ## myURL =_buildURL(search='contacts',ct_id=id,c=id,\
    ## at=self._cookieJar._cookies['GMAIL_AT'],view='ct')
    ## pageData = self._retrievePage(myURL)
    ## myData = self._parsePage(myURL)
    ## #print "\nmyData form _getSpecInfo\n",myData
    ## rawnotes = myData['cov'][7]
    ## return rawnotes
    class GmailContact:
"""Class for storing a Gmail Contacts list entry"""
    def __init__(self, name, email, *extra_args):
"""Returns a new GmailContact object
(you can then call addContact on this to commit
it to the Gmail addressbook, for example).
Consider calling setNotes() and setMoreInfo()
to add extended information to this contact.
"""
    # Support populating other fields if we're trying
    # to invoke this the old way, with the old constructor
    # whose signature was __init__(self, id, name, email, notes='')
    id = -1
    notes = ''
    if len(extra_args) > 0:
    (id, name) = (name, email)
    email = extra_args[0]
    if len(extra_args) > 1:
    notes = extra_args[1]
    else:
    notes = ''
    self.id = id
    self.name = name
    self.email = email
    self.notes = notes
    self.moreInfo = {}
    def __str__(self):
    return "%s %s %s %s" % (self.id, self.name, self.email, self.notes)
    def __eq__(self, other):
    if not isinstance(other, GmailContact):
    return False
    return (self.getId() == other.getId()) and \
    (self.getName() == other.getName()) and \
    (self.getEmail() == other.getEmail()) and \
    (self.getNotes() == other.getNotes())
    def getId(self):
    return self.id
    def getName(self):
    return self.name
    def getEmail(self):
    return self.email
    def getNotes(self):
    return self.notes
    def setNotes(self, notes):
"""Sets the notes field for this GmailContact.
Note that this does NOT change the note
field on Gmail's end; only adding or removing
contacts modifies them.
"""
    self.notes = notes
    def getMoreInfo(self):
    return self.moreInfo
    def setMoreInfo(self, moreInfo):
"""moreInfo format:
Use special key values::
'i' = IM
'p' = Phone
'd' = Company
'a' = ADR
'e' = Email
'm' = Mobile
'b' = Pager
'f' = Fax
't' = Title
'o' = Other
Simple example::
moreInfo = {'Home': ( ('a','852 W Barry'),
('p', '1-773-244-1980'),
('i', 'aim:brianray34') ) }
Complex example::
moreInfo = {
'Personal': (('e', 'Home Email'),
('f', 'Home Fax')),
'Work': (('d', 'Sample Company'),
('t', 'Job Title'),
('o', 'Department: Department1'),
('o', 'Department: Department2'),
('p', 'Work Phone'),
('m', 'Mobile Phone'),
('f', 'Work Fax'),
('b', 'Pager')) }
"""
    self.moreInfo = moreInfo
    def getVCard(self):
    """Returns a vCard 3.0 for this
    contact, as a string"""
    # The \r is is to comply with the RFC2425 section 5.8.1
    vcard = "BEGIN:VCARD\r\n"
    vcard += "VERSION:3.0\r\n"
    ## Deal with multiline notes
    ##vcard += "NOTE:%s\n" % self.getNotes().replace("\n","\\n")
    vcard += "NOTE:%s\r\n" % self.getNotes()
    # Fake-out N by splitting up whatever we get out of getName
    # This might not always do 'the right thing'
    # but it's a *reasonable* compromise
    fullname = self.getName().split()
    fullname.reverse()
    vcard += "N:%s" % ';'.join(fullname) + "\r\n"
    vcard += "FN:%s\r\n" % self.getName()
    vcard += "EMAIL;TYPE=INTERNET:%s\r\n" % self.getEmail()
    vcard += "END:VCARD\r\n\r\n"
    # Final newline in case we want to put more than one in a file
    return vcard
    class GmailContactList:
"""Class for storing an entire Gmail contacts list
and retrieving contacts by Id, Email address, and name.
"""
    def __init__(self, contactList):
    self.contactList = contactList
    def __str__(self):
    return '\n'.join([str(item) for item in self.contactList])
    def getCount(self):
"""Returns number of contacts"""
    return len(self.contactList)
    def getAllContacts(self):
"""Returns an array of all the
GmailContacts.
"""
    return self.contactList
    def getContactByName(self, name):
"""Gets the first contact in the
address book whose name is 'name'.
Returns False if no contact
could be found.
"""
    nameList = self.getContactListByName(name)
    if len(nameList) > 0:
    return nameList[0]
    else:
    return False
    def getContactByEmail(self, email):
"""Gets the first contact in the
address book whose email address is 'email'.
As of this writing, Gmail insists
upon a unique email; i.e. two contacts
cannot share an email address.
Returns False if no contact
could be found.
"""
    emailList = self.getContactListByEmail(email)
    if len(emailList) > 0:
    return emailList[0]
    else:
    return False
    def getContactById(self, myId):
"""Gets the first contact in the
address book whose id is 'myId'.
REMEMBER: ID IS A STRING.
Returns False if no contact
could be found.
"""
    idList = self.getContactListById(myId)
    if len(idList) > 0:
    return idList[0]
    else:
    return False
    def getContactListByName(self, name):
"""This function returns a LIST
of GmailContacts whose name is
'name'.
Returns an empty list if no contacts
were found.
"""
    nameList = []
    for entry in self.contactList:
    if entry.getName() == name:
    nameList.append(entry)
    return nameList
    def getContactListByEmail(self, email):
"""This function returns a LIST
of GmailContacts whose email is
'email'. As of this writing, two contacts
cannot share an email address, so this
should only return just one item.
But it doesn't hurt to be prepared?
Returns an empty list if no contacts
were found.
"""
    emailList = []
    for entry in self.contactList:
    if entry.getEmail() == email:
    emailList.append(entry)
    return emailList
    def getContactListById(self, myId):
"""This function returns a LIST
of GmailContacts whose id is
'myId'. We expect there only to
be one, but just in case!
Remember: ID IS A STRING.
Returns an empty list if no contacts
were found.
"""
    idList = []
    for entry in self.contactList:
    if entry.getId() == myId:
    idList.append(entry)
    return idList
    def search(self, searchTerm):
"""This function returns a LIST
of GmailContacts whose name or
email address matches the 'searchTerm'.
Returns an empty list if no matches
were found.
"""
    searchResults = []
    for entry in self.contactList:
    p = re.compile(searchTerm, re.IGNORECASE)
    if p.search(entry.getName()) or p.search(entry.getEmail()):
    searchResults.append(entry)
    return searchResults
    class GmailSearchResult:
    def __init__(self, account, search, threadsInfo):
"""`threadsInfo` -- As returned from Gmail but unbunched."""
    #print "\nthreadsInfo\n",threadsInfo
    try:
    if not type(threadsInfo[0]) is types.ListType:
    threadsInfo = [threadsInfo]
    except IndexError:
    print "No messages found"
    self._account = account
    self.search = search # TODO: Turn into object + format nicely.
    self._threads = []
    for thread in threadsInfo:
    self._threads.append(GmailThread(self, thread[0]))
    def __iter__(self):
    return iter(self._threads)
    def __len__(self):
    return len(self._threads)
    def __getitem__(self,key):
    return self._threads.__getitem__(key)
    class GmailSessionState:
    def __init__(self, account = None, filename = ""):
    if account:
    self.state = (account.name, account._cookieJar)
    elif filename:
    self.state = load(open(filename, "rb"))
    else:
    raise ValueError("GmailSessionState must be instantiated with " \
    "either GmailAccount object or filename.")
    def save(self, filename):
    dump(self.state, open(filename, "wb"), -1)
    class _LabelHandlerMixin(object):
"""Note: Because a message id can be used as a thread id this works for
messages as well as threads.
"""
    def __init__(self):
    self._labels = None
    def _makeLabelList(self, labelList):
    self._labels = labelList
    def addLabel(self, labelName):
    # Note: It appears this also automatically creates new labels.
    result = self._account._doThreadAction(U_ADDCATEGORY_ACTION+labelName,
    self)
    if not self._labels:
    self._makeLabelList([])
    # TODO: Caching this seems a little dangerous; suppress duplicates maybe?
    self._labels.append(labelName)
    return result
    def removeLabel(self, labelName):
    # TODO: Check label is already attached?
    # Note: An error is not generated if the label is not already attached.
    result = \
    self._account._doThreadAction(U_REMOVECATEGORY_ACTION+labelName,
    self)
    removeLabel = True
    try:
    self._labels.remove(labelName)
    except:
    removeLabel = False
    pass
    # If we don't check both, we might end up in some weird inconsistent state
    return result and removeLabel
    def getLabels(self):
    return self._labels
    class GmailThread(_LabelHandlerMixin):
"""Note: As far as I can tell, the "canonical" thread id is always the same
as the id of the last message in the thread. But it appears that
the id of any message in the thread can be used to retrieve
the thread information.
"""
    def __init__(self, parent, threadsInfo):
    _LabelHandlerMixin.__init__(self)
    # TODO Handle this better?
    self._parent = parent
    self._account = self._parent._account
    self.id = threadsInfo[T_THREADID] # TODO: Change when canonical updated?
    self.subject = threadsInfo[T_SUBJECT_HTML]
    self.snippet = threadsInfo[T_SNIPPET_HTML]
    #self.extraSummary = threadInfo[T_EXTRA_SNIPPET] #TODO: What is this?
    # TODO: Store other info?
    # Extract number of messages in thread/conversation.
    self._authors = threadsInfo[T_AUTHORS_HTML]
    self.info = threadsInfo
    try:
    # TODO: Find out if this information can be found another way...
    # (Without another page request.)
    self._length = int(re.search("\((\d+?)\)\Z",
    self._authors).group(1))
    except AttributeError,info:
    # If there's no message count then the thread only has one message.
    self._length = 1
    # TODO: Store information known about the last message (e.g. id)?
    self._messages = []
    # Populate labels
    self._makeLabelList(threadsInfo[T_CATEGORIES])
    def __getattr__(self, name):
"""Dynamically dispatch some interesting thread properties."""
    attrs = { 'unread': T_UNREAD,
    'star': T_STAR,
    'date': T_DATE_HTML,
    'authors': T_AUTHORS_HTML,
    'flags': T_FLAGS,
    'subject': T_SUBJECT_HTML,
    'snippet': T_SNIPPET_HTML,
    'categories': T_CATEGORIES,
    'attach': T_ATTACH_HTML,
    'matching_msgid': T_MATCHING_MSGID,
    'extra_snippet': T_EXTRA_SNIPPET }
    if name in attrs:
    return self.info[ attrs[name] ];
    raise AttributeError("no attribute %s" % name)
    def __len__(self):
    return self._length
    def __iter__(self):
    if not self._messages:
    self._messages = self._getMessages(self)
    return iter(self._messages)
    def __getitem__(self, key):
    if not self._messages:
    self._messages = self._getMessages(self)
    try:
    result = self._messages.__getitem__(key)
    except IndexError:
    result = []
    return result
    def _getMessages(self, thread):
    # TODO: Do this better.
    # TODO: Specify the query folder using our specific search?
    items = self._account._parseSearchResult(U_QUERY_SEARCH,
    view = U_CONVERSATION_VIEW,
    th = thread.id,
    q = "in:anywhere")
    result = []
    # TODO: Handle this better?
    # Note: This handles both draft & non-draft messages in a thread...
    for key, isDraft in [(D_MSGINFO, False), (D_DRAFTINFO, True)]:
    try:
    msgsInfo = items[key]
    except KeyError:
    # No messages of this type (e.g. draft or non-draft)
    continue
    else:
    # TODO: Handle special case of only 1 message in thread better?
    if type(msgsInfo[0]) != types.ListType:
    msgsInfo = [msgsInfo]
    for msg in msgsInfo:
    result += [GmailMessage(thread, msg, isDraft = isDraft)]
    return result
    class GmailMessageStub(_LabelHandlerMixin):
"""Intended to be used where not all message information is known/required.
NOTE: This may go away.
"""
    # TODO: Provide way to convert this to a full `GmailMessage` instance
    # or allow `GmailMessage` to be created without all info?
    def __init__(self, id = None, _account = None):
    _LabelHandlerMixin.__init__(self)
    self.id = id
    self._account = _account
    class GmailMessage(object):
    def __init__(self, parent, msgData, isDraft = False):
"""Note: `msgData` can be from either D_MSGINFO or D_DRAFTINFO."""
    # TODO: Automatically detect if it's a draft or not?
    # TODO Handle this better?
    self._parent = parent
    self._account = self._parent._account
    self.author = msgData[MI_AUTHORFIRSTNAME]
    self.id = msgData[MI_MSGID]
    self.number = msgData[MI_NUM]
    self.subject = msgData[MI_SUBJECT]
    self.to = msgData[MI_TO]
    self.cc = msgData[MI_CC]
    self.bcc = msgData[MI_BCC]
    self.sender = msgData[MI_AUTHOREMAIL]
    self.attachments = [GmailAttachment(self, attachmentInfo)
    for attachmentInfo in msgData[MI_ATTACHINFO]]
    # TODO: Populate additional fields & cache...(?)
    # TODO: Handle body differently if it's from a draft?
    self.isDraft = isDraft
    self._source = None
    def _getSource(self):
    if not self._source:
    # TODO: Do this more nicely...?
    # TODO: Strip initial white space & fix up last line ending
    # to make it legal as per RFC?
    self._source = self._account.getRawMessage(self.id)
    return self._source
    source = property(_getSource, doc = "")
    class GmailAttachment:
    def __init__(self, parent, attachmentInfo):
    # TODO Handle this better?
    self._parent = parent
    self._account = self._parent._account
    self.id = attachmentInfo[A_ID]
    self.filename = attachmentInfo[A_FILENAME]
    self.mimetype = attachmentInfo[A_MIMETYPE]
    self.filesize = attachmentInfo[A_FILESIZE]
    self._content = None
    def _getContent(self):
    if not self._content:
    # TODO: Do this a more nicely...?
    self._content = self._account._retrievePage(
    _buildURL(view=U_ATTACHMENT_VIEW, disp="attd",
    attid=self.id, th=self._parent._parent.id))
    return self._content
    content = property(_getContent, doc = "")
    def _getFullId(self):
"""Returns the "full path"/"full id" of the attachment. (Used
to refer to the file when forwarding.)
The id is of the form: "<thread_id>_<msg_id>_<attachment_id>"
"""
    return "%s_%s_%s" % (self._parent._parent.id,
    self._parent.id,
    self.id)
    _fullId = property(_getFullId, doc = "")
    class GmailComposedMessage:
    def __init__(self, to, subject, body, cc = None, bcc = None,
    filenames = None, files = None):
"""`filenames` - list of the file paths of the files to attach.
`files` - list of objects implementing sub-set of
`email.Message.Message` interface (`get_filename`,
`get_content_type`, `get_payload`). This is to
allow use of payloads from Message instances.
TODO: Change this to be simpler class we define ourselves?
"""
    self.to = to
    self.subject = subject
    self.body = body
    self.cc = cc
    self.bcc = bcc
    self.filenames = filenames
    self.files = files
    if __name__ == "__main__":
    import sys
    from getpass import getpass
    try:
    name = sys.argv[1]
    except IndexError:
    name = raw_input("Gmail account name: ")
    pw = getpass("Password: ")
    domain = raw_input("Domain? [leave blank for Gmail]: ")
    ga = GmailAccount(name, pw, domain=domain)
    print "\nPlease wait, logging in..."
    try:
    ga.login()
    except GmailLoginFailure,e:
    print "\nLogin failed. (%s)" % e.message
    else:
    print "Login successful.\n"
    # TODO: Use properties instead?
    quotaInfo = ga.getQuotaInfo()
    quotaMbUsed = quotaInfo[QU_SPACEUSED]
    quotaMbTotal = quotaInfo[QU_QUOTA]
    quotaPercent = quotaInfo[QU_PERCENT]
    print "%s of %s used. (%s)\n" % (quotaMbUsed, quotaMbTotal, quotaPercent)
    searches = STANDARD_FOLDERS + ga.getLabelNames()
    name = None
    while 1:
    try:
    print "Select folder or label to list: (Ctrl-C to exit)"
    for optionId, optionName in enumerate(searches):
    print " %d. %s" % (optionId, optionName)
    while not name:
    try:
    name = searches[int(raw_input("Choice: "))]
    except ValueError,info:
    print info
    name = None
    if name in STANDARD_FOLDERS:
    result = ga.getMessagesByFolder(name, True)
    else:
    result = ga.getMessagesByLabel(name, True)
    if not len(result):
    print "No threads found in `%s`." % name
    break
    name = None
    tot = len(result)
    i = 0
    for thread in result:
    print "%s messages in thread" % len(thread)
    print thread.id, len(thread), thread.subject
    for msg in thread:
    print "\n ", msg.id, msg.number, msg.author,msg.subject
    # Just as an example of other usefull things
    #print " ", msg.cc, msg.bcc,msg.sender
    i += 1
    print
    print "number of threads:",tot
    print "number of messages:",i
    except KeyboardInterrupt:
    break
    print "\n\nDone."
    Last edited by Reasons (2008-03-20 01:18:27)

Thought it might help to give lines 369 onward of libgmail (the relevant part) so it's easier to read:
    def _retrievePage(self, urlOrRequest):
    if self.opener is None:
    raise "Cannot find urlopener"
    if not isinstance(urlOrRequest, urllib2.Request):
    req = urllib2.Request(urlOrRequest)
    else:
    req = urlOrRequest
    self._cookieJar.setCookies(req)
    req.add_header('User-Agent',
    'Mozilla/5.0 (Compatible; libgmail-python)')
    try:
    resp = self.opener.open(req)
    except urllib2.HTTPError,info:
    print info
    return None
    pageData = resp.read()
    # Extract cookies here
    self._cookieJar.extractCookies(resp.headers)
    # TODO: Enable logging of page data for debugging purposes?
    return pageData
    def _parsePage(self, urlOrRequest):
"""Retrieve & then parse the requested page content."""
    items = _parsePage(self._retrievePage(urlOrRequest))
    # Automatically cache some things like quota usage.
    # TODO: Cache more?
    # TODO: Expire cached values?
    # TODO: Do this better.
    try:
    self._cachedQuotaInfo = items[D_QUOTA]
    except KeyError:
    pass
    #pprint.pprint(items)
    try:
    self._cachedLabelNames = [category[CT_NAME] for category in items[D_CATEGORIES][0]]
    except KeyError:
    pass
    return items
    def _parseSearchResult(self, searchType, start = 0, **kwargs):
    params = {U_SEARCH: searchType,
    U_START: start,
U_VIEW: U_THREADLIST_VIEW,
}
params.update(kwargs)
    return self._parsePage(_buildURL(**params))

  • The pipe "|" does not work in the Windows command console.

This is a very weird problem. I have been using the 64-bit Windows Vista command window for a very long time, and it suddenly stopped working with the pipe character. I have checked PATH and PATHEXT in Administrator mode and they look OK.
Details below:
For example, "dir" works as expected.
    C:\newcmd>dir *.txt
     Volume in drive C has no label.
     Volume Serial Number is 4CD2-5B84
     Directory of C:\newcmd
    10/31/2014  11:33 PM               737 adder.txt
    04/05/2015  03:21 PM         1,890,696 au01.txt
    12/17/2014  10:40 PM             4,204 bintree.txt
    07/26/2011  10:47 PM         3,298,568 ke30-07-2011.txt
If I pipe to the VBScript file below, an error message shows up. The error message is actually the content of the %PATHEXT% environment variable:
    C:\newcmd>dir *.txt|cscript //nologo cut.vbs -c "1-12"
    '.COM;.EXE;.BAT;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.REX;.REXG;.REXP;.CMD;'
    is not recognized as an internal or external command, operable program or batch file.
    The VBScript works fine without the "|":
    C:\newcmd>cscript //nologo cut.vbs
    Usage:cut "<delimiter>" "<column 0>,<column 1>,..."
    Example:cut "|" "0,1,8"
What is going on, and how do I fix it? Help?
    C:\newcmd>echo %PATH%
    C:\Windows\system32;C:\Windows\System32\Wbem;C:\Program Files (x86)\CyberLink\Power2Go;C:\Program Files (x86)\Java\jre1.6.0_07\bin;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files\nodejs\;C:\Program Files\ooRexx;c:\
    C:\newcmd>echo %PATHEXT%
    .COM;.EXE;.BAT;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.REX;.REXG;.REXP;.CMD;
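One thing worth checking (an assumption based on how cmd.exe evaluates pipelines, not a confirmed diagnosis for this machine): each side of a "|" is executed by a fresh command-processor instance located via %ComSpec%, and any AutoRun registry value is run by those child instances too. A corrupted ComSpec or AutoRun can therefore break only piped commands while plain commands keep working:

```bat
C:\newcmd>echo %ComSpec%

C:\newcmd>reg query "HKCU\Software\Microsoft\Command Processor" /v AutoRun

C:\newcmd>reg query "HKLM\Software\Microsoft\Command Processor" /v AutoRun
```

If an AutoRun value shows up that you don't recognize, deleting it (reg delete with the same key and /v AutoRun) and opening a new console is a low-risk test.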


  • Ping timeouts at server and broken pipes at client

I am experiencing unexpected broken pipe exceptions at the client side of a Coherence server. These exceptions prevent our code from establishing a connection to the cache server.
    Due to network restrictions, we are connecting client and server through Coherence*Extend.
    At the server side, the logs show the following message:
    DEBUG Coherence:3 - 2013-02-01 11:12:20.584/85.322 Oracle Coherence GE 3.7.1.5 <D6> (thread=Proxy:TcpProxyServicePof:TcpAcceptor, member=1): Closed: TcpConnection(Id=0x0000013C953D8F150AA202D9F632E5EAFD6BDDE1A713F026C65BC1E7CC1E5952, Open=false, Member(Id=0, Timestamp=2013-02-01 11:11:44.404, Address=127.0.0.1:0, MachineId=0, Location=site:,process:1612, Role=WeblogicServer), LocalAddress=10.162.2.217:28088, RemoteAddress=10.162.2.231:45202) due to:
    com.tangosol.net.messaging.ConnectionException: TcpConnection(Id=0x0000013C953D8F150AA202D9F632E5EAFD6BDDE1A713F026C65BC1E7CC1E5952, Open=true, Member(Id=0, Timestamp=2013-02-01 11:11:44.404, Address=127.0.0.1:0, MachineId=0, Location=site:,process:1612, Role=WeblogicServer), LocalAddress=10.162.2.217:28088, RemoteAddress=10.162.2.231:45202): did not receive a response to a ping within 500 millis
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.checkPingTimeout(Peer.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.Acceptor.checkPingTimeouts(Acceptor.CDB:7)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:115)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:662)
    And at the client side:
    ERROR (01/02/2013) 11:12:20 [threadsafe]Thread-15/AbstractComponentHandler Unable to create new instance 45067 ms
    com.tangosol.net.messaging.ConnectionException: TcpConnection(Id=0x0000013C953D8F150AA202D9F632E5EAFD6BDDE1A713F026C65BC1E7CC1E5952, Open=true, Member(Id=0, Timestamp=2013-02-01 11:11:44.404, Address=127.0.0.1:0, MachineId=0, Location=site:,process:1612, Role=WeblogicServer), LocalAddress=10.162.2.231:45202, RemoteAddress=10.162.2.217:28088)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.initiator.TcpInitiator$TcpConnection.send(TcpInitiator.CDB:35)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.send(Peer.CDB:29)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.post(Peer.CDB:23)
    at com.tangosol.coherence.component.net.extend.Channel.post(Channel.CDB:25)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:18)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:1)
    at com.tangosol.coherence.component.net.extend.RemoteNamedCache$BinaryCache.putAll(RemoteNamedCache.CDB:10)
    at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1708)
    at com.tangosol.coherence.component.net.extend.RemoteNamedCache.putAll(RemoteNamedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
    at com.tangosol.net.cache.CachingMap.putAll(CachingMap.java:1023)
    Caused by: java.net.SocketException: Write failed: Broken pipe
    at jrockit.net.SocketNativeIO.writeBytesPinned(Native Method)
    at jrockit.net.SocketNativeIO.socketWrite(SocketNativeIO.java:46)
    at java.net.SocketOutputStream.socketWrite0(SocketOutputStream.java)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:104)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.initiator.TcpInitiator$TcpConnection.send(TcpInitiator.CDB:27)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.send(Peer.CDB:29)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.post(Peer.CDB:23)
    at com.tangosol.coherence.component.net.extend.Channel.post(Channel.CDB:25)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:18)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:1)
    at com.tangosol.coherence.component.net.extend.RemoteNamedCache$BinaryCache.putAll(RemoteNamedCache.CDB:10)
    at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1703)
    at com.tangosol.coherence.component.net.extend.RemoteNamedCache.putAll(RemoteNamedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
    at com.tangosol.net.cache.CachingMap.putAll(CachingMap.java:1023)
    I can ping the client machine from the server without a problem.
    Has anybody seen this before?
    Edited by: 982740 on Feb 1, 2013 2:29 AM

    Hi user,
    It seems from the log that when the proxy node accepts your client connection the client fails to respond to a ping request down the same pipe. Does this happen consistently? If so, no putAll would ever succeed.
I have to admit I've never seen this behaviour before. I can't see how this would be a firewall port issue, as the connection has been established... the only things I can think of are: a) your WebLogic client node is running way too hot and doesn't respond to the grid's ping request in time, b) some very aggressive network infrastructure is killing the connection just after it's created, or c) aliens are interfering with your system.
    sorry I can't be of more help,
    Andy
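If the clients genuinely can't answer within the default 500 ms, one knob to experiment with is the proxy acceptor's heartbeat settings. A sketch using the documented <outgoing-message-handler> elements of Coherence*Extend; the values are made up and should be checked against your version's configuration reference:

```xml
<proxy-scheme>
  <service-name>TcpProxyServicePof</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <!-- existing address/port configuration goes here -->
    </tcp-acceptor>
    <outgoing-message-handler>
      <!-- hypothetical values: ping every 10 s, allow 5 s (not 500 ms) for a reply -->
      <heartbeat-interval>10s</heartbeat-interval>
      <heartbeat-timeout>5s</heartbeat-timeout>
    </outgoing-message-handler>
  </acceptor-config>
</proxy-scheme>
```

That only relaxes the symptom, of course; if the client JVM is pausing (GC, overload), the pause itself is still worth chasing.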

  • Ssh configuration to avoid connection timeouts / broken pipes ?

I'm running irssi through screen on my server via ssh. Recently, my parents' internet connection has become very dodgy, and because of this, my terminal freezes altogether once every 15 minutes or so, and resumes with some "broken pipe" message after a very long time (around 10 minutes). I usually just kill the terminal when I notice the freezing and open a new one with another ssh connection, but this seems to be getting more and more frequent, so it's very annoying. I was wondering if there is a way to get around this? Server configuration, client configuration, a different ssh client...?
By googling, I found that disabling "TCPKeepAlive" might do the trick, and so I did. It seemed to work at first, my connection was up for about half an hour, but then the same thing occurred again.
The saddest part of this is that by using PuTTY on my phone over 3G, the connection stays up forever, but with the wired broadband, it won't stay up for more than 15 minutes.
    EDIT: This time I got this message: "Timeout, server <myaddress> not responding", after 29 minutes.
    Last edited by pauligrinder (2011-04-24 00:50:30)

yep, this connection used to be reliable too, but now I get timeouts all the time, and also if I connect the Deluge-GTK to a daemon running on my server, it will randomly freeze and I have to reconnect to get it to work again. Luckily I'm getting out of here tomorrow.
Still, it would be nice to solve this problem, because I will be coming here every once in a while... I would call the ISP and complain (I don't think the problem can be with our routers, because ssh connections inside the LAN work just fine), but because it's Easter, their customer support is closed. Besides, I don't know how to explain the problem to them, because most likely they won't even know what ssh is...
    I tried rwd's configs, and they didn't help either. The only difference is that it seems to timeout faster now, instead of freezing the whole terminal for a long time...
    Last edited by pauligrinder (2011-04-25 14:13:11)
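For reference, the standard client-side mitigation is to send application-level keepalives through the encrypted channel and give up quickly when they go unanswered, so a dead session fails fast instead of freezing the terminal for ten minutes. These are stock OpenSSH ssh_config options; the host name below is a placeholder:

```
# ~/.ssh/config
Host myserver
    HostName myserver.example.com   # placeholder
    ServerAliveInterval 15          # probe the server every 15 seconds
    ServerAliveCountMax 3           # disconnect after ~45 s of silence
    TCPKeepAlive no                 # skip TCP-level keepalives; the probes above suffice
```

The matching server-side options are ClientAliveInterval/ClientAliveCountMax in sshd_config. None of this fixes a flaky line, but it turns a ten-minute freeze into a quick, reconnectable drop, and screen keeps irssi running in the meantime.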

  • How to read the content of a blob col along with other cols as pipe delimit

    Hi,
I would like to read the BLOB content along with the other columns. Assume table TAB1 has columns Response_log, Empcode and Ename. The Response_log column is a BLOB, and its content is an XML file. Now I would like to read the content of the XML file in the Response_log column, along with Empcode and Ename, as pipe-delimited output; or, even better, write the data pipe-delimited to a text file named extract.txt.
    create  table tab1(
    response_log blob,
    empcode  number,
    ename  varchar2(50 byte)
    )Sample code goes something like the one below .
    select xmltype( response_log, nls_charset_id( 'char_cs' ) ).getclobval() || '|' || empcode || '|' || ename
    from tab1 Can I have any other alternate way for this.
    Please advice
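    Outside the database, the same pipe-delimited assembly can be sketched in Python. The rows and column values below are hypothetical stand-ins for what a query against TAB1 would return; only the joining and BLOB-decoding logic is the point:

    ```python
    def to_pipe_delimited(rows):
        """Join each row's fields with '|'; BLOB bytes (the XML payload) are decoded as UTF-8 text."""
        lines = []
        for response_log, empcode, ename in rows:
            # Decode the BLOB payload (an XML document) to text first
            if isinstance(response_log, bytes):
                xml_text = response_log.decode("utf-8")
            else:
                xml_text = str(response_log)
            lines.append("|".join([xml_text, str(empcode), str(ename)]))
        return lines

    # Hypothetical rows mimicking (response_log BLOB, empcode NUMBER, ename VARCHAR2)
    sample = [(b"<resp>ok</resp>", 101, "SCOTT")]
    print(to_pipe_delimited(sample)[0])  # <resp>ok</resp>|101|SCOTT
    ```

    The same lines could then be written to extract.txt with an ordinary file write.
    
    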

    An example of this was just given in HOW TO WRITE, SAVE A FILE IN BLOB COLUMN.

  • How can I load an Excel File to a pipe-delimited .csv File

    In SSIS I am attempting to process an .xls file, and I have a C# script that reads it. My issue is this: some fields have an embedded comma in them, and those fields are wrapped in double quotes ("). I have included my C# script. I'm just not sure whether I have to indicate a field delimiter to it. The double quote is only used when there is indeed an embedded comma, like "Company Name, Inc" or "Main St., Apt. 101".
    How can I read this .xls worksheet, account for the double quotes and the embedded commas, and produce a pipe-delimited file?
    public void Main()
    {
        // Create NEW .CSV Files for each .XLS File in the Directory as .CSV stubs
        // to store the records that will be re-formatted in this Script.
        try
        {
            string StringExcelPath = Dts.Variables["User::FilePath"].Value.ToString();
            string StringExcelFileName = Dts.Variables["User::FileName"].Value.ToString();
            string StringFileNameExtension = Path.GetExtension(StringExcelFileName);
            if (StringFileNameExtension != ".xls")
            {
                return;
            }
            string StringCSVFileName = Path.GetFileNameWithoutExtension(StringExcelFileName);
            StringCSVFileName = StringExcelPath + StringCSVFileName + ".csv";
            string StringExcelWorksheetName = Dts.Variables["User::ExcelWorksheetName"].Value.ToString();
            string StringColumnDelimeter = "|";
            int IntHeaderRowsToSkip = 0;
            //FileStream stream = File.Open(StringFullyQualifiedPathFileName, FileMode.Open, FileAccess.Read);
            //// Reading from a binary Excel file ('97-2003 format; *.xls)
            //IExcelDataReader excelReader = ExcelReaderFactory.CreateBinaryReader(stream);
            //// Reading from a OpenXml Excel file (2007 format; *.xlsx)
            //IExcelDataReader excelReader = ExcelReaderFactory.CreateOpenXmlReader(stream);
            //// DataSet - The result of each spreadsheet will be created in the result.Tables
            //DataSet result = excelReader.AsDataSet();
            //// Free resources (IExcelDataReader is IDisposable)
            //excelReader.Close();
            if (ConvertExcelToCSV(StringExcelFileName, StringCSVFileName, StringExcelWorksheetName, StringColumnDelimeter, IntHeaderRowsToSkip))
            {
                Dts.TaskResult = (int)ScriptResults.Success;
            }
            else
            {
                Dts.TaskResult = (int)ScriptResults.Failure;
            }
        }
        catch (Exception)
        {
            Dts.TaskResult = (int)ScriptResults.Failure;
        }
    }
    public static bool ConvertExcelToCSV(string sourceExcelPathAndName, string targetCSVPathAndName, string excelSheetName, string columnDelimeter, int headerRowsToSkip)
    {
        try
        {
            Microsoft.Office.Interop.Excel.Application ExcelApp = new Microsoft.Office.Interop.Excel.Application();
            Excel.Workbook ExcelWorkBook = ExcelApp.Workbooks.Open(
                sourceExcelPathAndName,     // Filename
                0,                          // UpdateLinks ===> http://msdn.microsoft.com/en-us/library/office/ff194819(v=office.15).aspx
                true,                       // ReadOnly
                5,                          // Format ===> http://msdn.microsoft.com/en-us/library/office/ff194819(v=office.15).aspx
                "",                         // Password
                "",                         // WriteResPassword
                true,                       // IgnoreReadOnlyRecommended
                Excel.XlPlatform.xlWindows, // Origin
                "",                         // Delimiter
                true,                       // Editable
                false,                      // Notify
                0,                          // Converter
                false,                      // AddToMru
                false,                      // Local
                false);                     // CorruptLoad
            // Gets the List of ALL Excel Worksheets within the Excel Spreadsheet
            Excel.Sheets ExcelWorkSheets = ExcelWorkBook.Worksheets;
            // Retrieves the Data from the EXACT Excel Worksheet that you want to process
            Excel.Worksheet ExcelWorksheetToProcess = ExcelWorkSheets.get_Item(excelSheetName);
            // Gets the Range of Data from that Excel Worksheet
            Excel.Range ExcelWorksheetRange = ExcelWorksheetToProcess.UsedRange;
            Excel.Range ExcelRangeCurrentRow;
            // Deletes the Header Row and however many rows as specified in headerRowsToSkip
            for (int ExcelRowCount = 0; ExcelRowCount < headerRowsToSkip; ExcelRowCount++)
            {
                ExcelRangeCurrentRow = ExcelWorksheetRange.get_Range("A1", Type.Missing).EntireRow;
                ExcelRangeCurrentRow.Delete(XlDeleteShiftDirection.xlShiftUp);
            }
            // Replace ENTER, "\n", with a Space " "
            //ExcelWorksheetRange.Replace("\n", " ", Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
            // Replace comma "," with the indicated column delimiter variable, columnDelimeter
            ExcelWorksheetRange.Replace(",", columnDelimeter, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
            // Saves Data File as .csv Format
            ExcelWorkBook.SaveAs(
                targetCSVPathAndName,    // Filename (see http://msdn.microsoft.com/en-us/library/microsoft.office.tools.excel.workbook.saveas.aspx)
                XlFileFormat.xlCSVMSDOS, // FileFormat
                Type.Missing,            // Password
                Type.Missing,            // WriteResPassword
                Type.Missing,            // ReadOnlyRecommended
                Type.Missing,            // CreateBackup
                Microsoft.Office.Interop.Excel.XlSaveAsAccessMode.xlExclusive, // AccessMode
                Type.Missing,            // ConflictResolution
                Type.Missing,            // AddToMru
                Type.Missing,            // TextCodepage
                Type.Missing,            // TextVisualLayout
                false);                  // Local
            ExcelWorkBook.Close(false, Type.Missing, Type.Missing);
            ExcelApp.Quit();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            System.Runtime.InteropServices.Marshal.FinalReleaseComObject(ExcelWorkSheets);
            System.Runtime.InteropServices.Marshal.FinalReleaseComObject(ExcelWorkBook);
            System.Runtime.InteropServices.Marshal.FinalReleaseComObject(ExcelApp);
            return true;
        }
        catch (Exception exc)
        {
            Console.WriteLine(exc.ToString());
            Console.ReadLine();
            return false; // a failed conversion should report failure, not true
        }
    }
    #region ScriptResults declaration
    /// <summary>
    /// This enum provides a convenient shorthand within the scope of this class for setting the
    /// result of the script.
    /// This code was generated automatically.
    /// </summary>
    enum ScriptResults
    {
        Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
        Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
    };
    #endregion
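    As an aside, the quoted-field problem the question describes (commas embedded inside double-quoted values) is exactly what a CSV parser handles for you. A minimal, tool-agnostic sketch in Python, with hypothetical data standing in for the .xls contents:

    ```python
    import csv
    import io

    def csv_to_pipe(csv_text):
        """Parse comma-separated text, honoring "..." quoting, and re-emit pipe-delimited lines."""
        reader = csv.reader(io.StringIO(csv_text), delimiter=",", quotechar='"')
        return ["|".join(row) for row in reader]

    # "Company Name, Inc" keeps its embedded comma because it is quoted
    text = 'name,address\n"Company Name, Inc","Main St., Apt. 101"\n'
    for line in csv_to_pipe(text):
        print(line)
    # name|address
    # Company Name, Inc|Main St., Apt. 101
    ```

    The key point is that a blanket Replace of "," with "|" (as in the interop script above) would also mangle the embedded commas; quote-aware parsing splits on field boundaries only.
    
    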

    I would prefer doing this with a standard data flow task in SSIS, using an Excel source and a flat file destination.
    See how you can handle inconsistent/embedded delimiters within files inside an SSIS data flow:
    http://visakhm.blogspot.in/2014/06/ssis-tips-handling-embedded-text.html
    http://visakhm.blogspot.in/2014/07/ssis-tips-handling-inconsistent-text.html
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Can I invoke a SubVI in an event? and how do I set the background color of a pipe to #0000ff?

    When I click an image or a glass pipe (which belongs to the Industry/Chemistry category in the palette), I want a SubVI to be invoked.
    The purpose is to fetch OPC-UA data from a web service and to write to it via the service.
    We are building an HMI solution which displays an interactive water plant diagram.
    When users click pipes and motors in the diagram, the clicked devices should be turned on and off and change their animations or colors accordingly.
    OPC-UA is used for communication with the devices.
    I couldn't even set the background color of a pipe to "#0000ff", though setting it to "Red" or "Blue" was possible, and I don't know how to invoke SubVIs in event scripts.
    The documentation on NI.com is confusing and lacks depth.
    Even the Silverlight references are confusing.
    How do I do all of this?
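    On the color question specifically: many graphics APIs accept a color as a single packed RGB integer rather than a "#RRGGBB" string, so the hex string may need converting first. Whether the LabVIEW property in question takes an integer is an assumption here; the conversion itself is generic, sketched in Python:

    ```python
    def hex_to_rgb_int(color):
        """Convert a '#RRGGBB' string (e.g. '#0000ff') to a single 0xRRGGBB integer."""
        return int(color.lstrip("#"), 16)

    print(hex(hex_to_rgb_int("#0000ff")))  # 0xff  (i.e. blue = 0x0000FF)
    ```

    Named colors like "Red" or "Blue" working while "#0000ff" fails is consistent with the property expecting either a known name or a numeric value, not a hex string.
    
    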

    Hi iCat,
    Can you provide some more information about your current implementation so that we can help answer your questions? Some questions I have for you are:
    Are you creating this project in the NI LabVIEW Web UI Builder or in LabVIEW?
    How are you publishing your webservice? Is this also in LabVIEW?
    How is your webservice interacting with an OPC-UA server?
    How is the certification set up with OPC-UA so that you can communicate between the server and the client?
    Best Regards,
    Allison M.
    Applications Engineer
    National Instruments
    ni.com/support
