Exception passing
I have a problem passing exceptions from my RMI server to the client.
Whenever an exception is thrown within the RMI server, it triggers a RemoteException (due to a buffer overflow, I think), and it is this RemoteException that is caught by the client. The original exception is never caught, but it is the original exception that I am interested in.
Has anyone had this problem before? Any ideas?
Cheers
See http://java.sun.com/j2se/1.5.0/docs/api/java/rmi/ServerError.html
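If the server chains the original exception into the RemoteException (which the RMI runtime does for exceptions raised inside the remote call), the client can usually dig it out with getCause(). A minimal sketch; the IllegalStateException here just stands in for whatever the server actually threw:

```java
import java.rmi.RemoteException;

public class UnwrapRemote {
    public static void main(String[] args) {
        // Simulate the wrapping the RMI runtime performs: the server-side
        // exception travels to the client as the cause of a RemoteException.
        Exception original = new IllegalStateException("server-side failure");
        RemoteException remote = new RemoteException("error in server", original);

        try {
            throw remote;
        } catch (RemoteException e) {
            // Walk the cause chain to recover the exception you care about.
            Throwable cause = e.getCause();
            while (cause != null && cause instanceof RemoteException) {
                cause = cause.getCause();
            }
            System.out.println(cause); // the original IllegalStateException
        }
    }
}
```

Note that RemoteException also exposes the wrapped throwable as the public field detail, which predates the getCause() chaining API.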
Similar Messages
-
Exceptions passing internal tables to memory...!
Code in program 1:
DATA :IT_FINAL2 LIKE WA_FINAL OCCURS 10 WITH HEADER LINE.
DATA : I_SORT TYPE SLIS_T_SORTINFO_ALV WITH HEADER LINE.
DATA : I_FCAT TYPE TABLE OF SLIS_FIELDCAT_ALV WITH NON-UNIQUE
DEFAULT KEY WITH HEADER LINE INITIAL SIZE 0.
EXPORT (IT_FINAL2[]) TO MEMORY ID 'main'.
EXPORT (I_FCAT[]) TO MEMORY ID 'i_fcat'.
EXPORT (I_SORT) to MEMORY ID 'sort'.
SY-SUBRC is 0 for the first EXPORT.
While exporting the other two tables, the system throws an exception whose analysis is as follows:
Error analysis
The table "(itab)" has an illegal row type at position "comp. 1" in statement
"EXPORT (itab) TO ...".
The following types are allowed:
"C, CSTRING"
However, the field "(itab)" has the type:
8
( Meanings in type description:
- Values in brackets: Type length in bytes
- 'DEC' : Number of decimal places at type P
At the type description, there is partly only a technical type
description displayed.)
Code in program 2, with the same table declarations:
import (I_FCAT_S) FROM MEMORY id 'i_fcat'.
import (CH_ITAB) FROM MEMORY id 'main'.
import (I_SORT_S) FROM MEMORY id 'sort'.
Hey experts, please help me resolve this query, or suggest an alternative way to pass an internal table from one program to another.
Edited by: Anup Deshmukh on Jun 16, 2009 7:22 PM
Hey Anup, just try as Rich said: firstly, remove the brackets.
I tried your code and it is working fine for me!
DATA :IT_FINAL2 LIKE WA_FINAL OCCURS 10 WITH HEADER LINE.
DATA : I_SORT TYPE SLIS_T_SORTINFO_ALV WITH HEADER LINE.
DATA : I_FCAT TYPE TABLE OF SLIS_FIELDCAT_ALV WITH NON-UNIQUE
DEFAULT KEY WITH HEADER LINE INITIAL SIZE 0.
EXPORT it_final2[] TO MEMORY ID 'main'.
EXPORT i_fcat[] TO MEMORY ID 'i_fcat'.
EXPORT i_sort TO MEMORY ID 'sort'.
And while importing use like below:
IMPORT i_fcat[] = i_fcat[] FROM MEMORY ID 'i_fcat'.
IMPORT it_final2[] = it_final2[] FROM MEMORY ID 'main'.
IMPORT i_sort = i_sort FROM MEMORY ID 'sort'.
And also remember to FREE the memory ID after importing. -
What do they mean ...exception passed up call stack
For the project I am working on, we throw a created exception. This exception is "passed up the call stack if thrown by the object". I understand how to throw exceptions and try/catch, and rethrow. What are they talking about...passing it up the call stack?? I can't find any reference to this anywhere. Am I supposed to just ignore the exception and hope it gets caught in another class or method?
look at:
public class A {
    public static void main(String args[]) {
        try {
            new A().foo();
        } catch (Exception e) { }
    }
    public void foo() throws Exception {
        bar(1);
    }
    public void bar(int i) throws Exception {
        if (i >= 0) bar(i - 1); else gar();
    }
    public void gar() throws Exception {
        goo();
    }
    public void goo() throws Exception {
        throw new Exception("ouch!");
    }
} // end of class A
When the program is run, main creates an A, and calls foo()
foo() then calls bar()
bar() then calls bar() again
bar() then calls gar()
gar() then calls goo()
This is called the "call stack". It is (something like) the stack of method calls that are performed. The terminology comes from the fact that when a method calls another, the information about the calling method is actually pushed onto a Stack data structure. When the called method returns, the information is popped back off, so the calling method continues to run from where it left off.
Now goo() throws an Exception up to gar()
gar() then throws it up to bar()
bar() then throws it up to bar() again
bar() then throws it up to foo()
foo() then throws it up to main()
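You can watch this happen by printing the exception's stack trace at the top: every frame the exception passed through on its way up is recorded, innermost first. A small self-contained sketch, condensed from the class above (foo omitted, methods made static for brevity):

```java
public class CallStackDemo {
    static void goo() throws Exception { throw new Exception("ouch!"); }
    static void gar() throws Exception { goo(); }
    static void bar(int i) throws Exception { if (i >= 0) bar(i - 1); else gar(); }

    public static void main(String[] args) {
        try {
            bar(1);
        } catch (Exception e) {
            // The trace lists frames innermost-first: goo, gar,
            // then one bar frame per recursive call, then main.
            for (StackTraceElement frame : e.getStackTrace()) {
                System.out.println(frame.getMethodName());
            }
        }
    }
}
```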
That's what it means that the Exception is passed up the call stack. -
OCI 22303 exception - Pass object to type Record in oracle procedure
Recently I had my first encounter with ODP.NET and Oracle. I'm developing a data layer that can access a stored procedure on an Oracle database.
The problem I'm having is the following:
I'm using this method to pass my parameters to the procedure: http://www.codeproject.com/KB/cs/CustomObject_Oracle.aspx
I have also attempted this approach:
http://developergeeks.com/article/48/3-steps-to-implement-oracle-udt-in-odpnet
I always get this message (literally):
Oracle.DataAccess.Client.OracleException: OCI-22303: type &quot;PAC$WEBSHOP_PROCS&quot;.&quot;CUSTOMER_IN_RECTYPE&quot; not found.
It sounds weird to me, but what are the &quot;s doing here in the error message I see?
Some code I use:
OracleParameter objParam = new OracleParameter
{
OracleDbType = OracleDbType.Object,
Direction = ParameterDirection.Input,
ParameterName = "PAC$WEBSHOP_PROCS.P_CUSTOMER_IN",
UdtTypeName = "PAC$WEBSHOP_PROCS.WEBSHOP_PROCS.CUSTOMER_IN_RECTYPE",
Value = card
};
The information i have about the Oracle procedure:
CREATE OR REPLACE PACKAGE PAC$WEBSHOP_PROCS IS
TYPE CUSTOMER_IN_RECTYPE IS RECORD
(CUS_STO_IDENTIFIER NUMBER(2)
,CUS_IDENTIFIER NUMBER(6)
,CH_IDENTIFIER NUMBER(2)
,CH_CARD_VERSION NUMBER(1)
);
PROCEDURE PRC$WS_VALIDATE_CARD
(P_CUSTOMER_IN IN PAC$WEBSHOP_PROCS.CUSTOMER_IN_RECTYPE
,P_RETURN_CODE IN OUT NUMBER
);
Any help to cover my problem would be greatly appreciated.
Thx
Edited by: 836497 on 14-feb-2011 4:36
The only way to call it as is would be via an anonymous PL/SQL block, where you create the record type inside the block. Interacting with the block via ODP would be limited to scalar values.
Here's a PL/SQL example just to demonstrate. Here, v1 and v2 are bind variables of scalar type, which you'd set up and bind via ODP instead of at the SQL prompt as I did, but I thought this might keep things simpler for the example.
The other choice would be to write a wrapper procedure that takes type OBJECT that you can call from ODP, and inside that procedure convert them to/from RECORD and call the original procedure.
Hope it helps,
Greg
SQL> drop package somepack;
Package dropped.
SQL> create package somepack as
2 type somerectype is record(n1 number);
3 function somefunc (v1 somerectype) return somerectype;
4 end;
5 /
Package created.
SQL>
SQL> create package body somepack as
2 function somefunc (v1 somerectype) return somerectype is
3 begin
4 return v1;
5 end;
6 end;
7 /
Package body created.
SQL>
SQL>
SQL> var v1 number;
SQL> exec :v1 := 5;
PL/SQL procedure successfully completed.
SQL> var v2 number;
SQL>
SQL>
SQL> declare
2 localvar1 somepack.somerectype;
3 localvar2 somepack.somerectype;
4 begin
5 localvar1.n1 := :v1;
6 localvar2 := somepack.somefunc(localvar1);
7 :v2 := localvar2.n1;
8 end;
9 /
PL/SQL procedure successfully completed.
SQL> print v2;
V2
5
SQL> -
Exception passing an object from a bean to a JSP
I have a class on /lib, here is:
package auxiliar;
public class Tabla {
public String oid;
public String ip;
public int frecuencia;
public int inicio;
public int fin;
public Tabla (String oid, String ip, int frecuencia, int inicio, int fin) {
this.oid = oid;
this.ip = ip;
this.frecuencia = frecuencia;
this.inicio = inicio;
this.fin = fin;
}
}
And then I have a bean, something like this:
public class VerOid {
    public ArrayList ver () {
        ArrayList salida = new ArrayList();
        // rs is the ResultSet from the (omitted) query
        while( rs.next() ) {
            salida.add(new Tabla(_oid, ip, frecuencia, inicio, fin));
        }
        return salida;
    }
}
And at last, I have a JSP like this:
<jsp:useBean id="id1" scope="session" class="bean.VerOid"/>
<%
ArrayList d = id1.ver();
out.println( d.size());
out.println( ( (Tabla) (d.get(0)) ).oid );
ArrayList ff = new ArrayList();
ff.add(new Tabla ("1.1.","127.",3,4,5));
out.println( ( (Tabla) ff.get(0) ).frecuencia );
%>
With d.size() there is no problem.
With ff neither.
But out.println( ( (Tabla) (d.get(0)) ).oid ); throws a java.lang.ClassCastException and I don't know why.
I have JDK 1.4.
Did you import the Tabla class into the JSP?
<%@ page import="auxiliar.Tabla" %> -
How to add websites to the Options then to Security to Remember password on sites to Exception pass words?
Sorry, I don't understand what you are asking. See if this add-on has the features you want.
https://addons.mozilla.org/en-US/firefox/addon/saved-password-editor/ <br />
Adds the ability to create and edit entries in the password manager. -
Exception handling in calling procedure
Hi,
I have a package where I am currently making calls to private procedures from a public procedure.
The scenario is:
create package body p_tst
is
ex_failed exception;
-- this is private proc
procedure p_private
is
begin
raise ex_failed;
exception
when ex_failed
then
raise;
end p_private;
procedure p_public
is
begin
-- making call to private
-- procedure
p_private;
-- here I need to catch the
-- raised exception propagated
-- from the called procedure
exception
when ex_failed then
raise;
end p_public;
end;
Basically I want to catch the exception being passed from the called procedure in the calling procedure, and raise the same exception in the calling procedure.
Is it possible to catch the same exception in the calling procedure?
Yes, you can catch the same exception in the calling procedure; exceptions are propagated to the caller if they are not handled in the called procedure.
Is this what you are trying to do?
CREATE OR REPLACE PACKAGE p_tst
AS
PROCEDURE p_public;
ex_failed EXCEPTION;
END;
CREATE OR REPLACE PACKAGE BODY p_tst
IS
PROCEDURE p_private
IS
BEGIN
RAISE ex_failed;
END p_private;
PROCEDURE p_public
IS
BEGIN
p_private;
EXCEPTION
WHEN ex_failed
THEN
DBMS_OUTPUT.put_line ('error');
END p_public;
END;
SQL> set serveroutput on;
SQL> exec p_tst.p_public;
error
PL/SQL procedure successfully completed. -
CS_BOM_EXPLOSION - Exception Handling
Dear Friends
I have used the CS_BOM_EXPLOSION FM to explode material BOMs. For some items it works properly,
but for some materials it raises exception 1 (ALT_NOT_FOUND).
I can't work out what the error is.
Please let me know how to solve this problem.
I hope my question is clear to you.
Thanks in advance.
Edited by: Nelson Rodrigo on May 11, 2009 1:27 AM
Dear Friends,
I use it as follows; alternative BOMs are available in my system:
CALL FUNCTION 'CS_BOM_EXPLOSION'
EXPORTING
CAPID = C_CAPID ('PP01')
DATUV = SY-DATUM
EMENG = C_EMENG ('1')
MTNRV = V_MTNRV ('50.10.12')
WERKS = V_WERKS ('1030')
STLAL = V_STLAL ('1030') Shipping Point
STLAN = C_STLAN ('1')
MEHRS = C_MEHRS ('X')
MDMPS = C_MDMPS ('1')
TABLES
STBD = IT_STBD
STBE = IT_STBE
STBK = IT_STBK
STBM = IT_STBM
STBP = IT_STBP
STBT = IT_STBT
EXCEPTIONS
ALT_NOT_FOUND = 1
CALL_INVALID = 2
MISSING_AUTHORIZATION = 3
NO_BOM_FOUND = 4
NO_PLANT_DATA = 5
NO_SUITABLE_BOM_FOUND = 6
OBJECT_NOT_FOUND = 7
CONVERSION_ERROR = 8
OTHERS = 9. -
Newbie: Passing Structures between DLLs via Java
A fairly high level question for you good people:
I have two DLLs. The first DLL is called by Java via JNI, and needs to return something akin to a C Structure (say an int, a double and a string for simplicity).
My Java code does not need to do anything with this data, except pass it to a second DLL for more processing (the second DLL for example needs to do something with just the double and the string).
My question is, is this sort of thing practical to do using JNI (I have the DLLs already but can easily add the JNIEXPORT etc functions) ?
(I have used JNI for a passing single values and arrays back and forth, but nothing with anything akin to structures).
Many thanks in advance.
Dave
My Java code does not need to do anything with this
data, except pass it to a second DLL for more
processing (the second DLL for example needs to do
something with just the double and the string).
My question is, is this sort of thing practical to do
using JNI (I have the DLLs already but can easily add
the JNIEXPORT etc functions) ?
Somewhere in memory a hunk is reserved for this. Your code does it or some external piece does.
You have a pointer to that.
It is very important that the memory does NOT go away until you tell it to.
You cast the pointer to a long and return that to java. Your java piece keeps track of that.
It passes the long off to the other dll. That dll casts it back to what is needed (a pointer) and uses it.
If the second dll disposes of the pointer then you are done. If not then you must dispose of the pointer.
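The handle idea above can be sketched in plain Java. Here a Map stands in for native memory and all names are made up for illustration; in real JNI the long would be a C pointer cast to jlong, and create/use/destroy would be native methods:

```java
import java.util.HashMap;
import java.util.Map;

// A pure-Java sketch of the native-handle pattern described above.
// A Map simulates the native heap so the idea is runnable on its own.
public class HandleSketch {
    private static final Map<Long, double[]> nativeHeap = new HashMap<>();
    private static long nextHandle = 1;

    // Stands in for the first DLL: allocates the "struct", returns a handle.
    static long createStruct(double value) {
        long handle = nextHandle++;
        nativeHeap.put(handle, new double[] { value });
        return handle;
    }

    // Stands in for the second DLL: dereferences the handle, uses the data.
    static double useStruct(long handle) {
        return nativeHeap.get(handle)[0];
    }

    // The explicit destroy() the poster recommends: frees "native" memory.
    static void destroyStruct(long handle) {
        nativeHeap.remove(handle);
    }

    public static void main(String[] args) {
        long h = createStruct(3.14);      // first DLL returns opaque handle
        System.out.println(useStruct(h)); // Java just passes the long along
        destroyStruct(h);                 // and frees it when done
    }
}
```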
If you want to keep it then you should add a 'destroy()' method in Java that frees the pointer (native call) appropriately. This is also one of the few times where using finalize is probably appropriate as well. -
Hi All,
I have updated my mediacenter. Now tv_grab_nl_py does not work anymore:
[cedric@tv ~]$ tv_grab_nl_py --output ~/listings.xml --fast
File "/usr/bin/tv_grab_nl_py", line 341
print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
^
SyntaxError: invalid syntax
[cedric@tv ~]$
the version of python on the mediacenter (running arch linux):
[cedric@tv ~]$ python
Python 3.1.2 (r312:79147, Oct 4 2010, 12:35:40)
[GCC 4.5.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
I have copied the file to my laptop, there it looks like it's working:
./tv_grab_nl_py --output ~/listings.xml --fast
Config file /home/cedric/.xmltv/tv_grab_nl_py.conf not found.
Re-run me with the --configure flag.
cedric@laptop:~$
the version of python on my laptop (running arch linux):
cedric@laptop:~$ python
Python 2.6.5 (r265:79063, Apr 1 2010, 05:22:20)
[GCC 4.4.3 20100316 (prerelease)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
the script I'm trying to run:
[cedric@tv ~]$ cat tv_grab_nl_py
#!/usr/bin/env python
# $LastChangedDate: 2009-11-14 10:06:41 +0100 (Sat, 14 Nov 2009) $
# $Rev: 104 $
# $Author: pauldebruin $
SYNOPSIS
tv_grab_nl_py is a python script that trawls tvgids.nl for TV
programming information and outputs it in XMLTV-formatted output (see
http://membled.com/work/apps/xmltv). Users of MythTV
(http://www.mythtv.org) will appreciate the output generated by this
grabber, because it fills the category fields, i.e. colors in the EPG,
and has logos for most channels automagically available. Check the
website below for screenshots. The newest version of this script can be
found here:
http://code.google.com/p/tvgrabnlpy/
USAGE
Check the web site above and/or run script with --help and start from there
HISTORY
tv_grab_nl_py used to be called tv_grab_nl_pdb, first released on
2003/07/09. The name change was necessary because more and more people
are actively contributing to this script and I always disliked using my
initials (I was just too lazy to change it). At the same time I switched
from using CVS to SVN and as a result the version numbering scheme has
changed. The lastest official release of tv_grab_nl_pdb is 0.48. The
first official release of tv_grab_nl_py is 6.
QUESTIONS
Questions (and patches) are welcome at: paul at pwdebruin dot net.
IMPORTANT NOTES
If you were using tv_grab_nl from the XMLTV bundle then enable the
compat flag or use the --compat command-line option. Otherwise, the
xmltvid's are wrong and you will not see any new data in MythTV.
CONTRIBUTORS
Main author: Paul de Bruin (paul at pwdebruin dot net)
Michel van der Laan made available his extensive collection of
high-quality logos that is used by this script.
Michael Heus has taken the effort to further enhance this script so that
it now also includes:
- Credit info: directors, actors, presenters and writers
- removal of programs that are actually just groupings/broadcasters
(e.g. "KETNET", "Wild Friday", "Z@pp")
- Star-rating for programs tipped by tvgids.nl
- Black&White, Stereo and URL info
- Better detection of Movies
- and much, much more...
Several other people have provided feedback and patches (these are the
people I could find in my email archive, if you are missing from this
list let me know):
Huub Bouma, Roy van der Kuil, Remco Rotteveel, Mark Wormgoor, Dennis van
Onselen, Hugo van der Kooij, Han Holl, Ian Mcdonald, Udo van den Heuvel.
# Modules we need
import re, urllib2, getopt, sys
import time, random
import htmlentitydefs, os, os.path, pickle
from string import replace, split, strip
from threading import Thread
from xml.sax import saxutils
# Extra check for the datetime module
try:
import datetime
except:
sys.stderr.write('This script needs the datetime module that was introduced in Python version 2.3.\n')
sys.stderr.write('You are running:\n')
sys.stderr.write('%s\n' % sys.version)
sys.exit(1)
# XXX: fix to prevent crashes in Snow Leopard [Robert Klep]
if sys.platform == 'darwin' and sys.version_info[:3] == (2, 6, 1):
try:
urllib2.urlopen('http://localhost.localdomain')
except:
pass
# do extra debug stuff
debug = 1
try:
import redirect
except:
debug = 0
pass
# globals
# compile only one time
r_entity = re.compile(r'&(#x[0-9A-Fa-f]+|#[0-9]+|[A-Za-z]+);')
tvgids = 'http://www.tvgids.nl/'
uitgebreid_zoeken = tvgids + 'zoeken/'
# how many seconds to wait before we timeout on a
# url fetch, 10 seconds seems reasonable
global_timeout = 10
# Wait a random number of seconds between each page fetch.
# We want to be nice and not hammer tvgids.nl (these are the
# friendly people that provide our data...).
# Also, it appears tvgids.nl throttles its output.
# So there, there is not point in lowering these numbers, if you
# are in a hurry, use the (default) fast mode.
nice_time = [1, 2]
# Maximum length in minutes of gaps/overlaps between programs to correct
max_overlap = 10
# Strategy to use for correcting overlapping prgramming:
# 'average' = use average of stop and start of next program
# 'stop' = keep stop time of current program and adjust start time of next program accordingly
# 'start' = keep start time of next program and adjust stop of current program accordingly
# 'none' = do not use any strategy and see what happens
overlap_strategy = 'average'
# Experimental strategy for clumping overlapping programming, all programs that overlap more
# than max_overlap minutes, but less than the length of the shortest program are clumped
# together. Highly experimental and disabled for now.
do_clump = False
# Create a category translation dictionary
# Look in mythtv/themes/blue/ui.xml for all category names
# The keys are the categories used by tvgids.nl (lowercase please)
cattrans = { 'amusement' : 'Talk',
'animatie' : 'Animated',
'comedy' : 'Comedy',
'documentaire' : 'Documentary',
'educatief' : 'Educational',
'erotiek' : 'Adult',
'film' : 'Film',
'muziek' : 'Art/Music',
'informatief' : 'Educational',
'jeugd' : 'Children',
'kunst/cultuur' : 'Arts/Culture',
'misdaad' : 'Crime/Mystery',
'muziek' : 'Music',
'natuur' : 'Science/Nature',
'nieuws/actualiteiten' : 'News',
'overige' : 'Unknown',
'religieus' : 'Religion',
'serie/soap' : 'Drama',
'sport' : 'Sports',
'theater' : 'Arts/Culture',
'wetenschap' : 'Science/Nature'}
# Create a role translation dictionary for the xmltv credits part
# The keys are the roles used by tvgids.nl (lowercase please)
roletrans = {'regie' : 'director',
'acteurs' : 'actor',
'presentatie' : 'presenter',
'scenario' : 'writer'}
# We have two sources of logos, the first provides the nice ones, but is not
# complete. We use the tvgids logos to fill the missing bits.
logo_provider = [ 'http://visualisation.tudelft.nl/~paul/logos/gif/64x64/',
'http://static.tvgids.nl/gfx/zenders/' ]
logo_names = {
1 : [0, 'ned1'],
2 : [0, 'ned2'],
3 : [0, 'ned3'],
4 : [0, 'rtl4'],
5 : [0, 'een'],
6 : [0, 'canvas_color'],
7 : [0, 'bbc1'],
8 : [0, 'bbc2'],
9 : [0,'ard'],
10 : [0,'zdf'],
11 : [1, 'rtl'],
12 : [0, 'wdr'],
13 : [1, 'ndr'],
14 : [1, 'srsudwest'],
15 : [1, 'rtbf1'],
16 : [1, 'rtbf2'],
17 : [0, 'tv5'],
18 : [0, 'ngc'],
19 : [1, 'eurosport'],
20 : [1, 'tcm'],
21 : [1, 'cartoonnetwork'],
24 : [0, 'canal+red'],
25 : [0, 'mtv-color'],
26 : [0, 'cnn'],
27 : [0, 'rai'],
28 : [1, 'sat1'],
29 : [0, 'discover-spacey'],
31 : [0, 'rtl5'],
32 : [1, 'trt'],
34 : [0, 'veronica'],
35 : [0, 'tmf'],
36 : [0, 'sbs6'],
37 : [0, 'net5'],
38 : [1, 'arte'],
39 : [0, 'canal+blue'],
40 : [0, 'at5'],
46 : [0, 'rtl7'],
49 : [1, 'vtm'],
50 : [1, '3sat'],
58 : [1, 'pro7'],
59 : [1, 'kanaal2'],
60 : [1, 'vt4'],
65 : [0, 'animal-planet'],
73 : [1, 'mezzo'],
86 : [0, 'bbc-world'],
87 : [1, 'tve'],
89 : [1, 'nick'],
90 : [1, 'bvn'],
91 : [0, 'comedy_central'],
92 : [0, 'rtl8'],
99 : [1, 'sport1_1'],
100 : [0, 'rtvu'],
101 : [0, 'tvwest'],
102 : [0, 'tvrijnmond'],
103 : [1, 'tvnoordholland'],
104 : [1, 'bbcprime'],
105 : [1, 'spiceplatinum'],
107 : [0, 'canal+yellow'],
108 : [0, 'tvnoord'],
109 : [0, 'omropfryslan'],
114 : [0, 'omroepbrabant']}
# A selection of user agents we will impersonate, in an attempt to be less
# conspicuous to the tvgids.nl police.
user_agents = [ 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.7) Gecko/20060909 Firefox/1.5.0.7',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.9) Gecko/20071105 Firefox/2.0.0.9',
'Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.8) Gecko/20071022 Ubuntu/7.10 (gutsy) Firefox/2.0.0.8' ]
# Work in progress, the idea is to cache program categories and
# descriptions to eliminate a lot of page fetches from tvgids.nl
# for programs that do not have interesting/changing descriptions
class ProgramCache:
"""A cache to hold program name and category info.
TVgids stores the detail for each program on a separate URL with an
(apparently unique) ID. This cache stores the fetched info with the ID.
New fetches will use the cached info instead of doing an (expensive)
page fetch.
"""
def __init__(self, filename=None):
"""Create a new ProgramCache object, optionally from file."""
# where we store our info
self.filename = filename
if filename == None:
self.pdict = {}
else:
if os.path.isfile(filename):
self.load(filename)
else:
self.pdict = {}
def load(self, filename):
"""Loads a pickled cache dict from file."""
try:
self.pdict = pickle.load(open(filename,'r'))
except:
sys.stderr.write('Error loading cache file: %s (possibly corrupt)' % filename)
sys.exit(2)
def dump(self, filename):
"""Dumps a pickled cache, and makes sure it is valid."""
if os.access(filename, os.F_OK):
try:
os.remove(filename)
except:
sys.stderr.write('Cannot remove %s, check permissions' % filename)
pickle.dump(self.pdict, open(filename+'.tmp', 'w'))
os.rename(filename+'.tmp', filename)
def query(self, program_id):
"""Updates/gets/whatever."""
try:
return self.pdict[program_id]
except:
return None
def add(self, program):
"""Adds a program."""
self.pdict[program['ID']] = program
def clear(self):
"""Clears the cache (i.e. empties it)."""
self.pdict = {}
def clean(self):
"""Removes all cached programming before today.
Also removes erroneously cached programming.
"""
now = time.localtime()
dnow = datetime.datetime(now[0],now[1],now[2])
for key in self.pdict.keys():
try:
if self.pdict[key]['stop-time'] < dnow or self.pdict[key]['name'].lower() == 'onbekend':
del self.pdict[key]
except:
pass
def usage():
print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
print 'and stores it in XMLTV-combatible format.\n'
print 'Usage:'
print '--help, -h = print this info'
print '--configure = create configfile (overwrites existing file)'
print '--config-file = name of the configuration file (default = ~/.xmltv/tv_grab_py.conf'
print '--capabilities = xmltv required option'
print '--desc-length = maximum allowed length of programme descriptions in bytes.'
print '--description = prints a short description of the grabber'
print '--output = file where to put the output'
print '--days = # number of days to grab'
print '--preferredmethod = returns the preferred method to be called'
print '--fast = do not grab descriptions of programming'
print '--slow = grab descriptions of programming'
print '--quiet = suppress all output'
print '--compat = append tvgids.nl to the xmltv id (use this if you were using tv_grab_nl)'
print '--logos 0/1 = insert urls to channel icons (mythfilldatabase will then use these)'
print '--nocattrans = do not translate the grabbed genres into MythTV-genres'
print '--cache = cache descriptions and use the file to store'
print '--clean_cache = clean the cache file before fetching'
print '--clear_cache = empties the cache file before fetching data'
print '--slowdays = grab slowdays initial days and the rest in fast mode'
print '--max_overlap = maximum length of overlap between programming to correct [minutes]'
print '--overlap_strategy = what strategy to use to correct overlaps (check top of source code)'
def filter_line_identity(m, defs=htmlentitydefs.entitydefs):
# callback: translate one entity to its ISO Latin value
k = m.group(1)
if k.startswith("#") and k[1:] in xrange(256):
return chr(int(k[1:]))
try:
return defs[k]
except KeyError:
return m.group(0) # use as is
def filter_line(s):
"""Removes unwanted stuff in strings (adapted from tv_grab_be)."""
# do the latin1 stuff
s = r_entity.sub(filter_line_identity, s)
s = replace(s,' ',' ')
# Ik vermoed dat de volgende drie regels overbodig zijn, maar ze doen
# niet veel kwaad -- Han Holl
s = replace(s,'\r',' ')
x = re.compile('(<.*?>)') # Udo
s = x.sub('', s) #Udo
s = replace(s, '~Q', "'")
s = replace(s, '~R', "'")
# Hmm, not sure if I understand this. Without it, mythfilldatabase barfs
# on program names like "Steinbrecher &..."
# We most create valid XML -- Han Holl
s = saxutils.escape(s)
return s
def calc_timezone(t):
"""Takes a time from tvgids.nl and formats it with all the required
timezone conversions.
in: '20050429075000'
out: '20050429075000 (CET|CEST)'
Until I have figured out how to correctly do timezoning in python this method
will bork if you are not in a zone that has the same DST rules as 'Europe/Amsterdam'.
"""
year = int(t[0:4])
month = int(t[4:6])
day = int(t[6:8])
hour = int(t[8:10])
minute = int(t[10:12])
#td = {'CET': '+0100', 'CEST': '+0200'}
#td = {'CET': '+0100', 'CEST': '+0200', 'W. Europe Standard Time' : '+0100', 'West-Europa (standaardtijd)' : '+0100'}
td = {0 : '+0100', 1 : '+0200'}
pt = time.mktime((year,month,day,hour,minute,0,0,0,-1))
timezone=''
try:
#timezone = time.tzname[(time.localtime(pt))[-1]]
timezone = (time.localtime(pt))[-1]
except:
sys.stderr.write('Cannot convert time to timezone')
return t+' %s' % td[timezone]
def format_timezone(td):
"""Given a datetime object, returns a string in XMLTV format."""
tstr = td.strftime('%Y%m%d%H%M00')
return calc_timezone(tstr)
def get_page_internal(url, quiet=0):
"""Retrieves the url and returns a string with the contents.
Optionally, returns None if processing takes longer than
the specified number of timeout seconds.
"""
txtdata = None
txtheaders = {'Keep-Alive' : '300',
'User-Agent' : user_agents[random.randint(0, len(user_agents)-1)] }
try:
#fp = urllib2.urlopen(url)
rurl = urllib2.Request(url, txtdata, txtheaders)
fp = urllib2.urlopen(rurl)
lines = fp.readlines()
page = "".join(lines)
return page
except:
if not quiet:
sys.stderr.write('Cannot open url: %s\n' % url)
return None
class FetchURL(Thread):
"""A simple thread to fetch a url with a timeout."""
def __init__ (self, url, quiet=0):
Thread.__init__(self)
self.quiet = quiet
self.url = url
self.result = None
def run(self):
self.result = get_page_internal(self.url, self.quiet)
def get_page(url, quiet=0):
"""Wrapper around get_page_internal to catch the
timeout exception.
"""
try:
fu = FetchURL(url, quiet)
fu.start()
fu.join(global_timeout)
return fu.result
except:
if not quiet:
sys.stderr.write('get_page timed out on (>%s s): %s\n' % (global_timeout, url))
return None
def get_channels(file, quiet=0):
"""Get a list of all available channels and store these
in a file.
"""
# store channels in a dict
channels = {}
# tvgids stores several instances of channels, we want to
# find all the possibile channels
channel_get = re.compile('<optgroup label=.*?>(.*?)</optgroup>', re.DOTALL)
# this is how we will find a (number, channel) instance
channel_re = re.compile('<option value="([0-9]+)" >(.*?)</option>', re.DOTALL)
# this is where we will try to find our channel list
total = get_page(uitgebreid_zoeken, quiet)
if total == None:
return
# get a list of match objects of all the <select blah station>
stations = channel_get.finditer(total)
# and create a dict of number, channel_name pairs
# we do this this way because several instances of the
# channel list are stored in the url and not all of the
# instances have all the channels, this way we get them all.
for station in stations:
m = channel_re.finditer(station.group(0))
for p in m:
try:
a = int(p.group(1))
b = filter_line(p.group(2))
channels[a] = b
except:
sys.stderr.write('Oops, [%s,%s] does not look like a valid channel, skipping it...\n' % (p.group(1),p.group(2)))
# sort on channel number (arbitrary but who cares)
keys = channels.keys()
keys.sort()
# and create a file with the channels
f = open(file,'w')
for k in keys:
f.write("%s %s\n" % (k, channels[k]))
f.close()
def get_channel_all_days(channel, days, quiet=0):
"""Get all available days of programming for channel number.
The output is a list of programming in order where each row
contains a dictionary with program information.
"""
now = datetime.datetime.now()
programs = []
# Tvgids shows programs per channel per day, so we loop over the number of days
# we are required to grab
for offset in range(0, days):
channel_url = 'http://www.tvgids.nl/zoeken/?d=%i&z=%s' % (offset, channel)
# For historic purposes, the old style url that gave us a full week in advance:
# channel_url = 'http://www.tvgids.nl/zoeken/?trefwoord=Titel+of+trefwoord&interval=0&timeslot='+\
# '&station=%s&periode=%i&genre=&order=0' % (channel,days-1)
# Sniff, we miss you...
if offset > 0:
time.sleep(random.randint(nice_time[0], nice_time[1]))
# get the raw programming for the day
total = get_page(channel_url, quiet)
if total == None:
return programs
# Setup a number of regexps
# checktitle will match the title row in H2 tags of the daily overview page, e.g.
# <h2>zondag 19 oktober 2008</h2>
checktitle = re.compile('<h2>(.*?)</h2>',re.DOTALL)
# getrow will locate each row with program details
getrow = re.compile('<a href="/programma/(.*?)</a>',re.DOTALL)
# parserow matches the required program info, with groups:
# 1 = program ID
# 2 = broadcast times
# 3 = program name
parserow = re.compile('(.*?)/.*<span class="time">(.*?)</span>.*<span class="title">(.*?)</span>', re.DOTALL)
# normal begin and end times
times = re.compile('([0-9]+:[0-9]+) - ([0-9]+:[0-9]+)?')
# Get the day of month listed on the page as well as the expected date we are grabbing and compare these.
# If these do not match, we skip parsing the programs on the page and issue a warning.
#dayno = int(checkday.search(total).group(1))
title = checktitle.search(total)
if title:
title = title.group(1)
dayno = title.split()[1]
else:
sys.stderr.write('\nOops, there was a problem with page %s. Skipping it...\n' % (channel_url))
continue
expected = now + datetime.timedelta(days=offset)
if (not dayno.isdigit() or int(dayno) != expected.day):
sys.stderr.write('\nOops, did not expect page %s to list programs for "%s", skipping it...\n' % (channel_url,title))
continue
# and find relevant programming info
allrows = getrow.finditer(total)
for r in allrows:
detail = parserow.search(r.group(1))
if detail != None:
# default times
start_time = None
stop_time = None
# parse for begin and end times
t = times.search(detail.group(2))
if t != None:
start_time = t.group(1)
stop_time = t.group(2)
program_url = 'http://www.tvgids.nl/programma/' + detail.group(1) + '/'
program_name = detail.group(3)
# store time, name and detail url in a dictionary
tdict = {}
tdict['start'] = start_time
tdict['stop'] = stop_time
tdict['name'] = program_name
if tdict['name'] == '':
tdict['name'] = 'onbekend'
tdict['url'] = program_url
tdict['ID'] = detail.group(1)
tdict['offset'] = offset
#Add star rating if tipped by tvgids.nl
tdict['star-rating'] = '';
if r.group(1).find('Tip') != -1:
tdict['star-rating'] = '4/5'
# and append the program to the list of programs
programs.append(tdict)
# done
return programs
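Each entry appended to `programs` above is a plain dictionary. For orientation, a single parsed row produces something shaped like this (illustrative values, not taken from a real tvgids.nl page):

```python
# Illustrative shape of one parsed programme entry (all values hypothetical).
tdict = {
    'start': '20:30',            # begin time as scraped, may be None
    'stop': '21:35',             # end time as scraped, may be None
    'name': 'Journaal',          # empty names become 'onbekend'
    'url': 'http://www.tvgids.nl/programma/1234567/',
    'ID': '1234567',
    'offset': 0,                 # days from today
    'star-rating': '',           # '4/5' when tipped by tvgids.nl
}
print(sorted(tdict))
```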
def make_daytime(time_string, offset=0, cutoff='00:00', stoptime=False):
"""Given a string '11:35' and an offset from today,
return a datetime object. The cutoff specifies the point where the
new day starts.
Examples:
In [2]: make_daytime('11:34',0)
Out[2]: datetime.datetime(2006, 8, 3, 11, 34)
In [3]: make_daytime('11:34',1)
Out[3]: datetime.datetime(2006, 8, 4, 11, 34)
In [7]: make_daytime('11:34',0,'12:00')
Out[7]: datetime.datetime(2006, 8, 4, 11, 34)
In [4]: make_daytime('11:34',0,'11:34',False)
Out[4]: datetime.datetime(2006, 8, 3, 11, 34)
In [5]: make_daytime('11:34',0,'11:34',True)
Out[5]: datetime.datetime(2006, 8, 4, 11, 34)
"""
h,m = [int(x) for x in time_string.split(':')];
hm = int(time_string.replace(':',''))
chm = int(cutoff.replace(':',''))
# check for the cutoff, if the time is before the cutoff then
# add a day
extra_day = 0
if (hm < chm) or (stoptime==True and hm == chm):
extra_day = 1
# and create a datetime object, DST is handled at a later point
pt = time.localtime()
dt = datetime.datetime(pt[0],pt[1],pt[2],h,m)
dt = dt + datetime.timedelta(offset+extra_day)
return dt
def correct_times(programs, quiet=0):
"""Parse a list of programs as generated by get_channel_all_days() and
convert begin and end times to xmltv compatible times in datetime objects.
"""
if programs == []:
return programs
# the start time of programming for this day, times *before* this time are
# assumed to be on the next day
day_start_time = '06:00'
# initialise using the start time of the first program on this day
if programs[0]['start'] != None:
day_start_time = programs[0]['start']
for program in programs:
if program['start'] == program['stop']:
program['stop'] = None
# convert the times
if program['start'] != None:
program['start-time'] = make_daytime(program['start'], program['offset'], day_start_time)
else:
program['start-time'] = None
if program['stop'] != None:
program['stop-time'] = make_daytime(program['stop'], program['offset'], day_start_time, stoptime=True)
# extra correction, needed because the stop time of a program may be on the next day, after the
# day cutoff. For example:
# 06:00 - 23:40 Long Program
# 23:40 - 00:10 Lala
# 00:10 - 08:00 Wawa
# This puts the end date of Wawa on the current, instead of the next day. There is no way to detect
# this with a single cutoff in make_daytime. Therefore, check if there is a day difference between
# start and stop dates and correct if necessary.
if program['start-time'] != None:
# make two dates
start = program['start-time']
stop = program['stop-time']
single_day = datetime.timedelta(1)
startdate = datetime.datetime(start.year,start.month,start.day)
stopdate = datetime.datetime(stop.year,stop.month,stop.day)
if startdate - stopdate == single_day:
program['stop-time'] = program['stop-time'] + single_day
else:
program['stop-time'] = None
def parse_programs(programs, offset=0, quiet=0):
"""Parse a list of programs as generated by get_channel_all_days() and
convert begin and end times to xmltv compatible times.
"""
# good programs
good_programs = []
# calculate absolute start and stop times
correct_times(programs, quiet)
# next, correct for missing end time and copy over all good programming to the
# good_programs list
for i in range(len(programs)):
# Try to correct missing end time by taking start time from next program on schedule
if (programs[i]['stop-time'] == None and i < len(programs)-1):
if not quiet:
sys.stderr.write('Oops, "%s" has no end time. Trying to fix...\n' % programs[i]['name'])
programs[i]['stop-time'] = programs[i+1]['start-time']
# The common case: start and end times are present and are not
# equal to each other (yes, this can happen)
if programs[i]['start-time'] != None and \
programs[i]['stop-time'] != None and \
programs[i]['start-time'] != programs[i]['stop-time']:
good_programs.append(programs[i])
# Han Holl: try to exclude programs that stop before they begin
for i in range(len(good_programs)-1,-1,-1):
if good_programs[i]['stop-time'] <= good_programs[i]['start-time']:
if not quiet:
sys.stderr.write('Deleting invalid stop/start time: %s\n' % good_programs[i]['name'])
del good_programs[i]
# Try to exclude programs that only identify a group or broadcaster and have overlapping start/end times with
# the actual programs
for i in range(len(good_programs)-2,-1,-1):
if good_programs[i]['start-time'] <= good_programs[i+1]['start-time'] and \
good_programs[i]['stop-time'] >= good_programs[i+1]['stop-time']:
if not quiet:
sys.stderr.write('Deleting grouping/broadcaster: %s\n' % good_programs[i]['name'])
del good_programs[i]
for i in range(len(good_programs)-1):
# PdB: Fix tvgids start-before-end x minute interval overlap. An overlap (positive or
# negative) is halved and each half is assigned to the adjacent programmes. The maximum
# overlap length between programming is set by the global variable 'max_overlap' and is
# default 10 minutes. Examples:
# Positive overlap (= overlap in programming):
# 10:55 - 12:00 Lala
# 11:55 - 12:20 Wawa
# is transformed in:
# 10:55 - 11.57 Lala
# 11:57 - 12:20 Wawa
# Negative overlap (= gap in programming):
# 10:55 - 11:50 Lala
# 12:00 - 12:20 Wawa
# is transformed in:
# 10:55 - 11.55 Lala
# 11:55 - 12:20 Wawa
stop = good_programs[i]['stop-time']
start = good_programs[i+1]['start-time']
dt = stop-start
avg = start + dt / 2
overlap = 24*60*60*dt.days + dt.seconds
# check for the size of the overlap
if 0 < abs(overlap) <= max_overlap*60:
if not quiet:
if overlap > 0:
sys.stderr.write('"%s" and "%s" overlap %s minutes. Adjusting times.\n' % \
(good_programs[i]['name'],good_programs[i+1]['name'],overlap / 60))
else:
sys.stderr.write('"%s" and "%s" have gap of %s minutes. Adjusting times.\n' % \
(good_programs[i]['name'],good_programs[i+1]['name'],abs(overlap) / 60))
# stop-time of previous program wins
if overlap_strategy == 'stop':
good_programs[i+1]['start-time'] = good_programs[i]['stop-time']
# start-time of next program wins
elif overlap_strategy == 'start':
good_programs[i]['stop-time'] = good_programs[i+1]['start-time']
# average the difference
elif overlap_strategy == 'average':
good_programs[i]['stop-time'] = avg
good_programs[i+1]['start-time'] = avg
# leave as is
else:
pass
# Experimental strategy to make sure programming does not disappear. All programs that overlap more
# than the maximum overlap length, but less than the shortest length of the two programs are
# clumped.
if do_clump:
for i in range(len(good_programs)-1):
stop = good_programs[i]['stop-time']
start = good_programs[i+1]['start-time']
dt = stop-start
overlap = 24*60*60*dt.days + dt.seconds
length0 = good_programs[i]['stop-time'] - good_programs[i]['start-time']
length1 = good_programs[i+1]['stop-time'] - good_programs[i+1]['start-time']
l0 = length0.days*24*60*60 + length0.seconds
l1 = length1.days*24*60*60 + length1.seconds
if max_overlap*60 <= abs(overlap) <= min(l0,l1) and \
not good_programs[i].has_key('clumpidx') and \
not good_programs[i+1].has_key('clumpidx'):
good_programs[i]['clumpidx'] = '0/2'
good_programs[i+1]['clumpidx'] = '1/2'
good_programs[i]['stop-time'] = good_programs[i+1]['stop-time']
good_programs[i+1]['start-time'] = good_programs[i]['start-time']
# done, nothing to see here, please move on
return good_programs
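The 'average' overlap strategy above meets both programmes at the midpoint of their overlap. A minimal standalone check of that arithmetic (hypothetical times; Python 3 syntax, unlike the Python 2 script itself):

```python
import datetime

# Hypothetical adjacent programmes with a five-minute positive overlap:
#   10:55 - 12:00  Lala
#   11:55 - 12:20  Wawa
stop_prev = datetime.datetime(2008, 10, 19, 12, 0)    # Lala stops
start_next = datetime.datetime(2008, 10, 19, 11, 55)  # Wawa starts

dt = stop_prev - start_next      # the overlap as a timedelta (5 minutes)
avg = start_next + dt // 2       # midpoint: both programmes meet here
print(avg.time())                # 11:57:30
```

The script itself computes `avg = start + dt / 2`, which relies on Python 2's timedelta division; `//` makes the same intent explicit in Python 3.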
def get_descriptions(programs, program_cache=None, nocattrans=0, quiet=0, slowdays=0):
"""Given a list of programs, from get_channel, retrieve program information."""
# This regexp tries to find details such as Genre, Acteurs, Jaar van Premiere etc.
detail = re.compile('<li>.*?<strong>(.*?):</strong>.*?<br />(.*?)</li>', re.DOTALL)
# These regexps find the description area, the program type and descriptive text
description = re.compile('<div class="description">.*?<div class="text"(.*?)<div class="clearer"></div>',re.DOTALL)
descrtype = re.compile('<div class="type">(.*?)</div>',re.DOTALL)
descrline = re.compile('<p>(.*?)</p>',re.DOTALL)
# randomize detail requests
nprograms = len(programs)
fetch_order = range(0,nprograms)
random.shuffle(fetch_order)
counter = 0
for i in fetch_order:
counter += 1
if programs[i]['offset'] >= slowdays:
continue
if not quiet:
sys.stderr.write('\n(%3.0f%%) %s: %s ' % (100*float(counter)/float(nprograms), i, programs[i]['name']))
# check the cache for this program's ID
cached_program = program_cache.query(programs[i]['ID'])
if (cached_program != None):
if not quiet:
sys.stderr.write(' [cached]')
# copy the cached information, except the start/end times, rating and clumping,
# these may have changed.
tstart = programs[i]['start-time']
tstop = programs[i]['stop-time']
rating = programs[i]['star-rating']
try:
clump = programs[i]['clumpidx']
except:
clump = False
programs[i] = cached_program
programs[i]['start-time'] = tstart
programs[i]['stop-time'] = tstop
programs[i]['star-rating'] = rating
if clump:
programs[i]['clumpidx'] = clump
continue
else:
# be nice to tvgids.nl
time.sleep(random.randint(nice_time[0], nice_time[1]))
# get the details page, and get all the detail nodes
descriptions = ()
details = ()
try:
if not quiet:
sys.stderr.write(' [normal fetch]')
total = get_page(programs[i]['url'])
details = detail.finditer(total)
descrspan = description.search(total);
descriptions = descrline.finditer(descrspan.group(1))
except:
# if we cannot find the description page,
# go to next in the loop
if not quiet:
sys.stderr.write(' [fetch failed or timed out]')
continue
# define containers
programs[i]['credits'] = {}
programs[i]['video'] = {}
# now parse the details
line_nr = 1;
# First, we try to find the program type in the description section.
# Note that this is not the same as the generic genres (these are searched later on), but a more descriptive one like "Culinair programma"
# If present, we store this as first part of the regular description:
programs[i]['detail1'] = descrtype.search(descrspan.group(1)).group(1).capitalize()
if programs[i]['detail1'] != '':
line_nr = line_nr + 1
# Secondly, we add one or more lines of the program description that are present.
for descript in descriptions:
d_str = 'detail' + str(line_nr)
programs[i][d_str] = descript.group(1)
# Remove sponsored link from description if present.
sponsor_pos = programs[i][d_str].rfind('<i>Gesponsorde link:</i>')
if sponsor_pos > 0:
programs[i][d_str] = programs[i][d_str][0:sponsor_pos]
programs[i][d_str] = filter_line(programs[i][d_str]).strip()
line_nr = line_nr + 1
# Finally, we check out all program details. These are generically denoted as:
# <li><strong>(TYPE):</strong><br />(CONTENT)</li>
# Some examples:
# <li><strong>Genre:</strong><br />16 oktober 2008</li>
# <li><strong>Genre:</strong><br />Amusement</li>
for d in details:
type = d.group(1).strip().lower()
content_asis = d.group(2).strip()
content = filter_line(content_asis).strip()
if content == '':
continue
elif type == 'genre':
# Fix detection of movies based on description as tvgids.nl sometimes
# categorises a movie as e.g. "Komedie", "Misdaadkomedie", "Detectivefilm".
genre = content;
if (programs[i]['detail1'].lower().find('film') != -1 \
or programs[i]['detail1'].lower().find('komedie') != -1)\
and programs[i]['detail1'].lower().find('tekenfilm') == -1 \
and programs[i]['detail1'].lower().find('animatiekomedie') == -1 \
and programs[i]['detail1'].lower().find('filmpje') == -1:
genre = 'film'
if nocattrans:
programs[i]['genre'] = genre.title()
else:
try:
programs[i]['genre'] = cattrans[genre.lower()]
except:
programs[i]['genre'] = ''
# Parse persons and their roles for credit info
elif roletrans.has_key(type):
programs[i]['credits'][roletrans[type]] = []
persons = content_asis.split(',');
for name in persons:
if name.find(':') != -1:
name = name.split(':')[1]
if name.find('-') != -1:
name = name.split('-')[0]
if name.find('e.a') != -1:
name = name.split('e.a')[0]
programs[i]['credits'][roletrans[type]].append(filter_line(name.strip()))
elif type == 'bijzonderheden':
if content.find('Breedbeeld') != -1:
programs[i]['video']['breedbeeld'] = 1
if content.find('Zwart') != -1:
programs[i]['video']['blackwhite'] = 1
if content.find('Teletekst') != -1:
programs[i]['teletekst'] = 1
if content.find('Stereo') != -1:
programs[i]['stereo'] = 1
elif type == 'url':
programs[i]['infourl'] = content
else:
# In unmatched cases, we still add the parsed type and content to the program details.
# Some of these will lead to xmltv output during the xmlefy_programs step
programs[i][type] = content
# do not cache programming that is unknown at the time
# of fetching.
if programs[i]['name'].lower() != 'onbekend':
program_cache.add(programs[i])
if not quiet:
sys.stderr.write('\ndone...\n\n')
# done
def title_split(program):
"""Some channels have the annoying habit of adding the subtitle to the title of a program.
This function attempts to fix this by splitting the name at a ': '.
"""
if (program.has_key('titel aflevering') and program['titel aflevering'] != '') \
or (program.has_key('genre') and program['genre'].lower() in ['movies','film']):
return
colonpos = program['name'].rfind(': ')
if colonpos > 0:
program['titel aflevering'] = program['name'][colonpos+1:len(program['name'])].strip()
program['name'] = program['name'][0:colonpos].strip()
def xmlefy_programs(programs, channel, desc_len, compat=0, nocattrans=0):
"""Given a list of programming (from get_channels()),
return a string with the xml equivalent.
"""
output = []
for program in programs:
clumpidx = ''
try:
if program.has_key('clumpidx'):
clumpidx = 'clumpidx="'+program['clumpidx']+'"'
except:
print program
output.append(' <programme start="%s" stop="%s" channel="%s%s" %s> \n' % \
(format_timezone(program['start-time']), format_timezone(program['stop-time']),\
channel, compat and '.tvgids.nl' or '', clumpidx))
output.append(' <title lang="nl">%s</title>\n' % filter_line(program['name']))
if program.has_key('titel aflevering') and program['titel aflevering'] != '':
output.append(' <sub-title lang="nl">%s</sub-title>\n' % filter_line(program['titel aflevering']))
desc = []
for detail_row in ['detail1','detail2','detail3']:
if program.has_key(detail_row) and not re.search('[Gg]een detailgegevens be(?:kend|schikbaar)', program[detail_row]):
desc.append('%s ' % program[detail_row])
if desc != []:
# join and remove newlines from descriptions
desc_line = "".join(desc).strip()
desc_line = desc_line.replace('\n', ' ')
if len(desc_line) > desc_len:
spacepos = desc_line[0:desc_len-3].rfind(' ')
desc_line = desc_line[0:spacepos] + '...'
output.append(' <desc lang="nl">%s</desc>\n' % desc_line)
# Process credits section if present.
# This will generate director/actor/presenter info.
if program.has_key('credits') and program['credits'] != {}:
output.append(' <credits>\n')
for role in program['credits']:
for name in program['credits'][role]:
if name != '':
output.append(' <%s>%s</%s>\n' % (role, name, role))
output.append(' </credits>\n')
if program.has_key('jaar van premiere') and program['jaar van premiere'] != '':
output.append(' <date>%s</date>\n' % program['jaar van premiere'])
if program.has_key('genre') and program['genre'] != '':
output.append(' <category')
if nocattrans:
output.append(' lang="nl"')
output.append ('>%s</category>\n' % program['genre'])
if program.has_key('infourl') and program['infourl'] != '':
output.append(' <url>%s</url>\n' % program['infourl'])
if program.has_key('aflevering') and program['aflevering'] != '':
output.append(' <episode-num system="onscreen">%s</episode-num>\n' % filter_line(program['aflevering']))
# Process video section if present
if program.has_key('video') and program['video'] != {}:
output.append(' <video>\n');
if program['video'].has_key('breedbeeld'):
output.append(' <aspect>16:9</aspect>\n')
if program['video'].has_key('blackwhite'):
output.append(' <colour>no</colour>\n')
output.append(' </video>\n')
if program.has_key('stereo'):
output.append(' <audio><stereo>stereo</stereo></audio>\n')
if program.has_key('teletekst'):
output.append(' <subtitles type="teletext" />\n')
# Set star-rating if applicable
if program['star-rating'] != '':
output.append(' <star-rating><value>%s</value></star-rating>\n' % program['star-rating'])
output.append(' </programme>\n')
return "".join(output)
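For reference, the pieces assembled above follow the XMLTV DTD; a single generated `<programme>` element might look like this (illustrative values, with compat mode appending `.tvgids.nl` to the channel id):

```xml
<programme start="20081019203000 +0200" stop="20081019213500 +0200" channel="1.tvgids.nl">
  <title lang="nl">Journaal</title>
  <desc lang="nl">Voorbeeldomschrijving van het programma...</desc>
  <credits>
    <presenter>Jan Jansen</presenter>
  </credits>
  <category>News</category>
</programme>
```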
def main():
# Parse command line options
try:
opts, args = getopt.getopt(sys.argv[1:], "h", ["help", "output=", "capabilities",
"preferredmethod", "days=",
"configure", "fast", "slow",
"cache=", "clean_cache",
"slowdays=","compat",
"desc-length=","description",
"nocattrans","config-file=",
"max_overlap=", "overlap_strategy=",
"clear_cache", "quiet","logos="])
except getopt.GetoptError:
usage()
sys.exit(2)
# DEFAULT OPTIONS - Edit if you know what you are doing
# where the output goes
output = None
output_file = None
# the total number of days to fetch
days = 6
# Fetch data in fast mode, i.e. do NOT grab all the detail information.
# Fast means fast, because the grabber then does not have to fetch a web page for each program.
# Default: fast=0
fast = 0
# number of days to fetch in slow mode. For example: --days 5 --slowdays 2, will
# fetch the first two days in slow mode (with all the details) and the remaining three
# days in fast mode.
slowdays = 6
# no output
quiet = 0
# insert url of channel logo into the xml data, this will be picked up by mythfilldatabase
logos = 1
# enable this option if you were using tv_grab_nl, it adjusts the generated
# xmltvid's so that everything works.
compat = 0
# enable this option if you do not want the tvgids categories being translated into
# MythTV-categories (genres)
nocattrans = 0
# Maximum number of characters to use for program description.
# Different values may work better in different versions of MythTV.
desc_len = 475
# default configuration file locations
hpath = ''
if os.environ.has_key('HOME'):
hpath = os.environ['HOME']
# extra test for windows users
elif os.environ.has_key('HOMEPATH'):
hpath = os.environ['HOMEPATH']
# hpath = ''
xmltv_dir = hpath+'/.xmltv'
program_cache_file = xmltv_dir+'/program_cache'
config_file = xmltv_dir+'/tv_grab_nl_py.conf'
# cache the detail information.
program_cache = None
clean_cache = 1
clear_cache = 0
# seed the random generator
random.seed(time.time())
for o, a in opts:
if o in ("-h", "--help"):
usage()
sys.exit(1)
if o == "--quiet":
quiet = 1;
if o == "--description":
print "The Netherlands (tv_grab_nl_py $Rev: 104 $)"
sys.exit(0)
if o == "--capabilities":
print "baseline"
print "cache"
print "manualconfig"
print "preferredmethod"
sys.exit(0)
if o == '--preferredmethod':
print 'allatonce'
sys.exit(0)
if o == '--desc-length':
# Use the requested length for programme descriptions.
desc_len = int(a)
if not quiet:
sys.stderr.write('Using description length: %d\n' % desc_len)
for o, a in opts:
if o == "--config-file":
# use the provided name for configuration
config_file = a
if not quiet:
sys.stderr.write('Using config file: %s\n' % config_file)
for o, a in opts:
if o == "--configure":
# check for the ~.xmltv dir
if not os.path.exists(xmltv_dir):
if not quiet:
sys.stderr.write('You do not have the ~/.xmltv directory,')
sys.stderr.write('I am going to make a shiny new one for you...')
os.mkdir(xmltv_dir)
if not quiet:
sys.stderr.write('Creating config file: %s\n' % config_file)
get_channels(config_file)
sys.exit(0)
if o == "--days":
# limit days to maximum supported by tvgids.nl
days = min(int(a),6)
if o == "--compat":
compat = 1
if o == "--nocattrans":
nocattrans = 1
if o == "--fast":
fast = 1
if o == "--output":
output_file = a
try:
output = open(output_file,'w')
# and redirect output
if debug:
debug_file = open('/tmp/kaas.xml','w')
blah = redirect.Tee(output, debug_file)
sys.stdout = blah
else:
sys.stdout = output
except:
if not quiet:
sys.stderr.write('Cannot write to outputfile: %s\n' % output_file)
sys.exit(2)
if o == "--slowdays":
# limit slowdays to maximum supported by tvgids.nl
slowdays = min(int(a),6)
# slowdays implies fast == 0
fast = 0
if o == "--logos":
logos = int(a)
if o == "--clean_cache":
clean_cache = 1
if o == "--clear_cache":
clear_cache = 1
if o == "--cache":
program_cache_file = a
if o == "--max_overlap":
max_overlap = int(a)
if o == "--overlap_strategy":
overlap_strategy = a
# get configfile if available
try:
f = open(config_file,'r')
except:
sys.stderr.write('Config file %s not found.\n' % config_file)
sys.stderr.write('Re-run me with the --configure flag.\n')
sys.exit(1)
#check for cache
program_cache = ProgramCache(program_cache_file)
if clean_cache != 0:
program_cache.clean()
if clear_cache != 0:
program_cache.clear()
# Go!
channels = {}
# Read the channel stuff
for blah in f.readlines():
blah = blah.lstrip()
blah = blah.replace('\n','')
if blah:
if blah[0] != '#':
channel = blah.split()
channels[channel[0]] = " ".join(channel[1:])
# channels are now in channels dict keyed on channel id
# print header stuff
print '<?xml version="1.0" encoding="ISO-8859-1"?>'
print '<!DOCTYPE tv SYSTEM "xmltv.dtd">'
print '<tv generator-info-name="tv_grab_nl_py $Rev: 104 $">'
# first do the channel info
for key in channels.keys():
print ' <channel id="%s%s">' % (key, compat and '.tvgids.nl' or '')
print ' <display-name lang="nl">%s</display-name>' % channels[key]
if (logos):
ikey = int(key)
if logo_names.has_key(ikey):
full_logo_url = logo_provider[logo_names[ikey][0]]+logo_names[ikey][1]+'.gif'
print ' <icon src="%s" />' % full_logo_url
print ' </channel>'
num_chans = len(channels.keys())
channel_cnt = 0
if program_cache != None:
program_cache.clean()
fluffy = channels.keys()
nfluffy = len(fluffy)
for id in fluffy:
channel_cnt += 1
if not quiet:
sys.stderr.write('\n\nNow fetching %s(xmltvid=%s%s) (channel %s of %s)\n' % \
(channels[id], id, (compat and '.tvgids.nl' or ''), channel_cnt, nfluffy))
info = get_channel_all_days(id, days, quiet)
blah = parse_programs(info, None, quiet)
# fetch descriptions
if not fast:
get_descriptions(blah, program_cache, nocattrans, quiet, slowdays)
# Split titles with colon in it
# Note: this only takes place if all days retrieved are also grabbed with details (slowdays=days)
# otherwise this function might change some titles after a few grabs and thus may result in
# loss of programmed recordings for these programs.
if slowdays == days:
for program in blah:
title_split(program)
print xmlefy_programs(blah, id, desc_len, compat, nocattrans)
# save the cache after each channel fetch
if program_cache != None:
program_cache.dump(program_cache_file)
# be nice to tvgids.nl
time.sleep(random.randint(nice_time[0], nice_time[1]))
if program_cache != None:
program_cache.dump(program_cache_file)
# print footer stuff
print "</tv>"
# close the outputfile if necessary
if output != None:
output.close()
# and return success
sys.exit(0)
# allow this to be a module
if __name__ == '__main__':
main()
# vim:tw=0:et:sw=4
Best regards,
Cedric
Last edited by cdwijs (2010-11-04 18:44:51)

Running the script by python2 solves it for me:
su - mythtv -c "nice -n 19 python2 /usr/bin/tv_grab_nl_py --output ~/listings.xml"
Best regards,
Cedric -
Peculiar behavior of Shared Variable RT FIFO
I'm trying to "leverage" the enhanced TCP/IP and Shared Variable properties of LabView 8.5. My application involves (among other things) doing continuous sampling (16 channels, 1KHz/channel) using 6-year-old PXIs (Pentium III) and streaming data to the host. I developed a small test routine that was more than capable of handling this data rate, even when I had the host put a 20msec wait between attending to the PXI (to simulate other processing on the host). To do this, I enabled the "RT FIFO" property of the Shared Variable (which was an array of 16 I16 integers) and specified a buffer size of 50 (that's 50 arrays). Key to making this work was figuring out the "error codes" associated with the SV RT FIFO, particularly the one that says the FIFO is empty (so don't save the "non-data" that is present).
Flushed with success, I started developing a more realistic routine that involves rather more traffic between Host and Remote, including the passing back and forth of "event" data. These include, among other things, "state variables" to enable both host and remote to run state machines that stay "in sync"; in addition, the PXI also acquires digital data (button pushes, etc.) which are other "events" to be sent to the Host and streamed to disk. I developed the dual state-machine model without including the "analog data" machine, just to get the design of the Host/Remote system down and deal with exchanging digital data through other Shared Variables. Along the way, I decided to make these also use an RT FIFO, as I didn't want to "miss" any data. One problem I had noticed when using Shared Variables is the difficulty of telling "is this new?", i.e. is the variable present one that has been already read (and processed) or something that needs processing. I ended up adopting something of a kludge for the events by including an incrementing "event ID" that could be tested to see if it was "new".
Today, I put the two routines together by adding the "generate 16-channels of integer data at 1 KHz and send it to the Host via the Shared Variable" code to my existing Host/Remote state machine. I used exactly the same logic I'd previously employed to monitor the RT FIFO associated with this Shared Variable (basically, the Host reads the SV, then looks at the error code -- a value of -2220 means "Shared Variable FIFO Read Buffer Empty", so the value you just read is an "old" value, so throw it away). Very sad -- my code threw EVERYTHING away! No matter how slowly the Host ran, the indicator always said that the Shared Variable FIFO Read Buffer was empty! This wasn't true -- if I ignored the flag, and saved anyway, I saw reasonable-looking data (I was generating a sinusoid, and I saw numbers going up and down). The trouble was that I read many more points than were actually generated, since I read the same values multiple times!
Looking at the code, the error line coming into the Shared Variable (before it was read) was -2220, and it remained so after it was read. How could this be? One possibility is that my other Shared Variables were mucking up the error line, but I would have thought that the SV Engine handling the reading of my "analog data" SV would have set the error line appropriately for my variable. On a hunch, I turned off the RT FIFO on the two Event shared variables, and wouldn't you know, this more-or-less fixed it!
But why? What is the point of having a shared variable "attached" to an error line and having it return "Shared Variable FIFO Read Buffer Empty" if it doesn't apply to its own Read Buffer? This seems to me to be a very serious bug that renders this extremely useful feature almost worthless (certainly mega-frustrating). The beauty of the new Shared Variable structure and the new code in Version 8.5 is that it does seem to allow better and faster communication in real-time using TCP/IP, so we can devote the PXI to "real-time" chores (data acquisition, perhaps stimulus generation) and let the PC handle data streaming, displays, controls, etc.
Has anyone been successful in developing a data-streaming application using shared variables between a PXI and a PC, particularly one with multiple real-time streams (such as mine, where I have an analog stream from the PXI at 16 * 1KHz, a digital stream from the PXI at irregular intervals, but possibly up to 300 Hz, and "control" information going between PC and PXI to keep them in step)? Note that I'm attempting to "modernize" some Version 7 code that (in the absence of a good communication mechanism) is something of a nightmare, with data being kept in PXI memory, written on occasion to the PXI hard drive (!), and then eventually being written up to the PC; in addition, because the data "stayed" on the PXI, we split the signal and ran a second A/D board in the PC just so we could "see" the signal and create a display. How much better to get the PXI to send the data to the PC, which can sock it away and take samples from the data stream to display as they fly by on their way to the hard drive!
But I need to get Shared Variables (or something similar) working more "understandably" first ...
Bob Schor

Bob,
The error lines passed into and out of functions are just clusters with a status boolean, an error code, and an error string, and are not "attached" to a particular function as you describe in your post. Most functions have an error in input and an error out output, and most functions will simply do nothing except pass through the error cluster if the error in status is True (to verify this for yourself, double click on a function such as a DAQmx Read or Write and look at the block diagram. If there is an error passed in, no read/write occurs). This helps prevent unwanted code from executing when an error does arise in your program. By wiring the error cluster from your other shared variables to your analog data variable, you're essentially telling LabVIEW that these functions are related and that your analog data variable requires that the other shared variables are functioning properly. The error wire is a great way to enforce the flow of your program, but you must always consider how it will affect other functions if an error does arise.
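The pass-through behaviour described here can be mimicked in a textual language. A hypothetical Python sketch of a node honouring an incoming error cluster (an analogy only, not LabVIEW code):

```python
def read_shared_variable(value, error_in):
    """Mimic a LabVIEW node: skip the operation if an upstream error arrives."""
    if error_in['status']:
        return None, error_in          # pass the error cluster straight through
    return value, {'status': False, 'code': 0, 'source': ''}

# An upstream -2220 ("FIFO empty") error suppresses the downstream read:
upstream = {'status': True, 'code': -2220, 'source': 'SV FIFO read buffer empty'}
data, error_out = read_shared_variable(42, upstream)
print(data, error_out['code'])         # the read never executed
```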
Anyways, it's great that you have things more or less working at the moment. Keep us all updated! -
Statement.cancel() no longer works with WLS 8.1?
Has anyone else had a problem with the cancel() method in oracle.jdbc.OracleCallableStatement
on WLS 8.1 not doing a very good job of killing the thread in Oracle? We were
using cancel() with weblogic.jdbc.pool.CallableStatement on WLS 6.1 and it does
a very good job of killing the query on the Oracle side. But almost all of this
package has been removed and we've been told we need to use the Oracle vendor
package to get our stuff to work on 8.1. I decompiled both classes and they each
have very different implementations of the cancel() method.
I think I've managed to convert all of our DB access classes to the Oracle vendor
package successfully, with the exception of this issue. We use cancel() let users
cancel large queries and to cancel an existing query if a user tries to run a
new one. Since we implemented it we've had a huge performance boost from the lack
of runaway queries in the DB. It would be a real pain if we can't find a way to
get the Oracle version to work. Any help or tales of similar experiences would
be greatly appreciated.
thx a lot,
Matt Savino

I did a search on Oracle MetaLink. I didn't see any Oracle-confirmed
bugs related to cancel. I did see one user in the JDBC forum that
was having a problem with a remote (not on the local machine)
cancel. At this point, this case will need to be worked through
support - they will need to generate a reproducer and assuming they
do, file a tar with Oracle.
"Stephen Felts" <[email protected]> wrote in message
news:[email protected]...
Regarding the wrappers - now I remember discussing this with you in the beta newsgroup about a month ago.
The goal was to make it transparent that everything was changed to be passthrough in 8.1.
The good news is that you can now use the vendor interface directly and get their extensions
directly. That means that the wrapper for Oracle will be different from the wrapper for DB2.
Further, any new extensions in the interface will become visible to the application.
The bad news is that this only works when the vendor has a defined interface; it doesn't work
if only a class is defined.
This is a problem for some of the Oracle data type classes that don't have defined interfaces
and the reason why weblogic.vendor.oracle interfaces still need to be used for these classes.
That also applies to BEA classes and weblogic.jdbc.pool only had classes defined, not interfaces.
We should have documented this change.
Regarding the real problem of cancel(), the WLS JDBC code is not doing anything here
except passing the cancel call through to the thin driver. My guess is that this is
a problem in the thin driver. I haven't had a chance to research this on the Oracle site yet.
"Matt Savino" <[email protected]> wrote in message
news:[email protected]...
Thanks a ton for your quick reply on this. On recommendation from your support
team (case 426562), here is the primary line that we had to change:
[old] weblogic.jdbc.pool.CallableStatement cStat
    = (weblogic.jdbc.pool.CallableStatement) connection.prepareCall(call);
[new] oracle.jdbc.OracleCallableStatement cStat
    = (oracle.jdbc.OracleCallableStatement) connection.prepareCall(call);
(FYI - we need this to take advantage of some of the advanced Oracle features,
like returning multiple ResultSets. We'd like to avoid using the OCI client if
possible, assuming that would even solve this.)
Further down in the code, here is the cancel() method:
public void cancelCall() {
    try {
        if (cStat != null) cStat.cancel();
        releaseConnection();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
In both cases the cancel() call throws no error, but only on the old version do
we actually see the thread die promptly in Oracle.
Thanks again for your help on this,
Matt
"Stephen Felts" <[email protected]> wrote:
In versions prior to WLS 8.1, each Oracle extension had to be individually,
explicitly wrapped (and not all extensions were supported).
In WLS 8.1, it is a clean passthrough directly of all Oracle interfaces
using a dynamic proxy, so that all Oracle extensions show through. The only
additional work that WLS is doing is to ensure that transactions are managed
correctly (which shouldn't have an impact here).
Note that in versions prior to WLS 8.1, you were using classes12.zip
for the client, and now you are using ojdbc14.jar. There are
some big differences in this client implementation.
Maybe you should try testing this standalone, without WLS in the picture,
to see if this is a driver problem. WLS doesn't have the code that is responsible
for killing the thread in Oracle.
I'm not sure I understand what code you are changing. The goal was to
preserve the interfaces provided in releases prior to 8.1. Could you show me
an old code line and what you are changing it to? Thanks.
"Matt Savino" <[email protected]> wrote in message
news:[email protected]...
Has anyone else had a problem with the cancel() method in oracle.jdbc.OracleCallableStatement
on WLS 8.1 not doing a very good job of killing the thread in Oracle? We were
using cancel() with weblogic.jdbc.pool.CallableStatement on WLS 6.1, and it does
a very good job of killing the query on the Oracle side. But almost all of this
package has been removed, and we've been told we need to use the Oracle vendor
package to get our stuff to work on 8.1. I decompiled both classes and they each
have very different implementations of the cancel() method.
I think I've managed to convert all of our DB access classes to the Oracle vendor
package successfully, with the exception of this issue. We use cancel() to let users
cancel large queries and to cancel an existing query if a user tries to run a
new one. Since we implemented it we've had a huge performance boost from the lack
of runaway queries in the DB. It would be a real pain if we can't find a way to
get the Oracle version to work. Any help or tales of similar experiences would
be greatly appreciated.
thx a lot,
Matt Savino -
ACS appliance 1120, ACS 4.2.1.15: syslog messages to syslog server
Hi All,
I am using an ACS 1120 appliance running ACS version 4.2.1.15. I am pointing all syslog messages (passed authentication, failed authentication, database replication, administration audit, TACACS accounting) to my external syslog server, but only passed-authentication messages arrive there; no other message type is pushed to the external log server. I can, however, see the failed attempts, database replication, and administration audit logs locally on the ACS appliance as CSV files.
The syslog server is configured under all logging categories (passed, failed, administration, TACACS accounting), but I am surprised to see that only the passed-authentication log is sent out from the appliance. Is there a patch to be installed for log message scripting? Please advise.
Refer to the link: https://supportforums.cisco.com/discussion/11513026/migrating-acs-420-421
You can directly upgrade from 4.2.0.124 to 5.6: http://www.cisco.com/c/en/us/td/docs/net_mgmt/cisco_secure_access_control_system/5-6/user/guide/acsuserguide/migrate.html#98379 -
Voximp help [solved]
Hello, I was just trying to install and run voximp. I have followed each step from
http://ardoris.wordpress.com/2008/08/09 … ol-voximp/
Here is voximpconf.py. I did run voximp -c, and the file voximpconf.pyc does exist:
languagemodel = '9882' #set this to something sensible
keycommand = {
    'RIGHT': "super+Right",    # move one tag to the right
    'LEFT': "super+Left",      # move one tag to the left
    'TERMINAL': "ctrl+grave",  # spawn the terminal
    'CLOSE': "alt+F4",         # close window
    'ENTER': "Return",
    'SAVE': "ctrl+s",
    'NEW': "ctrl+n",
    'TAB': "ctrl+Tab",         # for seeing next firefox tab
    'BACKSPACE': "BackSpace",
    'CUT': "ctrl+x",
    'COPY': "ctrl+c",
    'PASTE': "ctrl+v"
}
for letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
    keycommand[letter] = letter.lower() # add all the letters - yes this is a true python file, you can do w/e you want in here
programcommand = {
    'FIREFOX': "firefox",
    'NOTEPAD': "medit",
    'GOOGLE': "firefox www.google.com", # open google in a new tab in firefox
    'HIBERNATE': "sudo hibernate",
    'PLAY': "xmms2 play",
    'STOP': "xmms2 stop"
}
mousecommand = {
    'CLICK': '1',      # left click
    'RIGHTCLICK': '3'  # right click
}
progswithargs = {
    'ALERT': "notify-send" # just to demonstrate with arguments
}
confirm = [ # anything listed here produces a confirm dialog before being executed
    'HIBERNATE'
]
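Before the full script, here is a minimal, self-contained sketch of how a recognized word from these tables is turned into a shell command, mirroring the dispatch that final_result() in the script performs. The dicts and the command_for() helper are stand-ins for illustration only, not part of voximp:

```python
# Stand-in command tables; the real ones come from voximpconf.py above.
keycommand = {'COPY': "ctrl+c", 'ENTER': "Return"}
programcommand = {'FIREFOX': "firefox"}

def command_for(hyp):
    # Program words launch the program directly; key words become an
    # xdotool key press, just as in Voximp.final_result().
    if hyp in programcommand:
        return programcommand[hyp]
    if hyp in keycommand:
        return "xdotool key %s" % keycommand[hyp]
    return None

print(command_for('COPY'))     # xdotool key ctrl+c
print(command_for('FIREFOX'))  # firefox
```

Unrecognized words fall through to None here; the real script additionally splits multi-word hypotheses and handles confirmation dialogs.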
Here is /usr/bin/voximp
#!/usr/bin/env python2
# Copyright (c) 2008 Ben Duffield
# Licensing - no idea
# Probably w/e sphinx is, think it's MIT
# Be nice!
#REQUIRES:
# gstreamer
# pygtk
# pocketsphinx
# xdotool
import pygtk
pygtk.require('2.0')
import gtk
import gobject
import pygst
pygst.require('0.10')
gobject.threads_init()
import gst
from subprocess import Popen
import os
import sys
import getopt
config_dir = os.path.join(os.path.expanduser("~"), '.config/voximp/')
try:
os.makedirs(config_dir)
except:
pass
sys.path.append(config_dir)
from voximpconf import *
language_file = os.path.join(config_dir, str(languagemodel))
config = {
    'hmm': '/usr/share/pocketsphinx/model/hmm/wsj1',
    'lm': '%s.lm' % language_file,
    'dict': '%s.dic' % language_file
}
class Voximp(object):
dial = None
def __init__(self):
self.init_gst()
self.pipeline.set_state(gst.STATE_PLAYING)
def init_gst(self):
self.pipeline = gst.parse_launch('alsasrc device="hw:0,1" ! audioconvert ! audioresample '
+ '! vader name=vad auto-threshold=true '
+ '! pocketsphinx name=asr ! fakesink')
asr = self.pipeline.get_by_name('asr')
asr.connect('partial_result', self.asr_partial_result)
asr.connect('result', self.asr_result)
asr.set_property('lm', config['lm'])
asr.set_property('dict', config['dict'])
asr.set_property('configured', True)
bus = self.pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message::application', self.application_message)
self.pipeline.set_state(gst.STATE_PAUSED)
def asr_partial_result(self, asr, text, uttid):
struct = gst.Structure('partial_result')
struct.set_value('hyp', text)
struct.set_value('uttid', uttid)
asr.post_message(gst.message_new_application(asr, struct))
def asr_result(self, asr, text, uttid):
struct = gst.Structure('result')
struct.set_value('hyp', text)
struct.set_value('uttid', uttid)
asr.post_message(gst.message_new_application(asr, struct))
def application_message(self, bus, msg):
msgtype = msg.structure.get_name()
if msgtype == 'partial_result':
self.partial_result(msg.structure['hyp'], msg.structure['uttid'])
elif msgtype == 'result':
self.final_result(msg.structure['hyp'], msg.structure['uttid'])
#self.pipeline.set_state(gst.STATE_PAUSED)
#self.button.set_active(False)
def partial_result(self, hyp, uttid):
print "partial: %s" % hyp
def final_result(self, hyp, uttid):
print "final: %s" % hyp
prog = ''
command = None
if self.dial is not None:
if hyp == 'YES':
self.dial.response(gtk.RESPONSE_YES)
else:
self.dial.response(gtk.RESPONSE_NO)
elif hyp in programcommand:
prog = programcommand[hyp]
command = hyp
elif hyp in keycommand:
prog = "xdotool key %s" % keycommand[hyp]
command = hyp
elif hyp in mousecommand:
prog = "xdotool click %s" % mousecommand[hyp]
command = hyp
else:
values = hyp.split(' ')
if len(values) <= 1:
return
if values[0] in progswithargs:
prog = progswithargs[values[0]] + ' ' + ' '.join(values)
command = values[0]
else:
for value in values:
self.final_result(value, 0)
if prog:
print "command is %s" % command
if command in confirm:
self.confirm(prog)
else:
p = Popen(prog, shell=True)
def confirm(self, prog):
print "Confirming %s" % prog
self.dial = gtk.MessageDialog(message_format = "Confirm?", type=gtk.MESSAGE_QUESTION)
self.dial.format_secondary_markup("Say <b><i>yes</i></b> or <b><i>no</i></b>")
self.dial.prog = prog
self.dial.show_all()
self.dial.connect("response", self.confirmCallback)
def confirmCallback(self, dialog, response_id):
print "callback called back"
if response_id == gtk.RESPONSE_YES:
p = Popen(dialog.prog, shell=True)
self.dial.destroy()
self.dial = None
versionNumber = '0.0.1'
usageInfo = '''Usage: voximp [options]
Options:
  -v, --version show program version and exit
  -h, --help show this help message and exit
  -c, --corpus create a corpus.txt in current directory - used for generating language model files
'''
def usage():
print "Voximp version %s" % versionNumber
print usageInfo
def version():
print "Version %s" % versionNumber
def corpus():
words = []
words.extend(keycommand.keys())
words.extend(programcommand.keys())
words.extend(mousecommand.keys())
words.extend(progswithargs.keys())
corpusText = "\n".join(words)
filename = os.path.join(os.getcwd(), 'corpus.txt')
print "Saving to %s" % filename
corp = open(filename, 'w')
corp.write(corpusText)
corp.flush()
corp.close()
print "Corpus saved"
print "Now visit http://www.speech.cs.cmu.edu/tools/lmtool.html"
print " ==> choose the corpus file, click COMPILE KNOWLEDGE BASE"
print " ==> save the three files to ~/.config/voximp/"
print " ==> edit ~/.config/voximp/voximpconf.py and set the languagemodel string to the appropriate value \n\t- e.g. if the files are named 4766.dic, 4766.lm and 4766.sent, set languagemodel = '4766'"
if __name__ == '__main__':
try:
opts, args = getopt.getopt(sys.argv[1:], "hcv", ["help", "corpus", "version"])
except getopt.GetoptError:
print "error"
usage()
sys.exit(2)
for opt, arg in opts:
if opt in ("-h", "--help"):
usage()
sys.exit()
elif opt in ("-c", "--corpus"):
corpus()
sys.exit()
elif opt in ("-v", "--version"):
version()
sys.exit()
app = Voximp()
gtk.main()
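For reference, the corpus file that `voximp -c` writes (and that lmtool compiles into the .lm/.dic files named by `languagemodel`) is nothing more than every command word on its own line. A tiny stand-alone sketch of that step, using stand-in dicts rather than the real config:

```python
# Every command word, one per line, becomes the corpus that lmtool
# compiles into the language model. Sorted here only to make the
# output deterministic; corpus() above does not sort.
keycommand = {'COPY': "ctrl+c"}
programcommand = {'FIREFOX': "firefox"}

words = sorted(list(keycommand) + list(programcommand))
corpus_text = "\n".join(words)
print(corpus_text)
```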
here is my hardware
**** List of CAPTURE Hardware Devices ****
card 0: Intel [HDA Intel], device 0: ALC268 Analog [ALC268 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: Microphone [Logitech USB Microphone], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
I have also unmuted and turned up the mic in alsa mixer.
Here is the output of voximp
** Message: pygobject_register_sinkfunc is deprecated (GstObject)
INFO: cmd_ln.c(691): Parsing command line:
gst-pocketsphinx \
-samprate 8000 \
-cmn prior \
-fwdflat no \
-bestpath no \
-maxhmmpf 2000 \
-maxwpf 20
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes no
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current prior
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes no
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm
-lmctl
-lmname default default
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf -1 2000
-maxnewoov 20 20
-maxwpf -1 20
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-round_filters yes yes
-samprate 16000 8.000000e+03
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: cmd_ln.c(691): Parsing command line:
-nfilt 20 \
-lowerf 1 \
-upperf 4000 \
-wlen 0.025 \
-transform dct \
-round_filters no \
-remove_dc yes \
-svspec 0-12/13-25/26-38 \
-feat 1s_c_d_dd \
-agc none \
-cmn current \
-cmninit 56,-3,1 \
-varnorm no
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 56,-3,1
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 0
-logspec no no
-lowerf 133.33334 1.000000e+00
-ncep 13 13
-nfft 512 512
-nfilt 40 20
-remove_dc no yes
-round_filters yes no
-samprate 16000 8.000000e+03
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 4.000000e+03
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.500000e-02
INFO: acmod.c(246): Parsed model-specific feature parameters from /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/feat.params
INFO: feat.c(713): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
INFO: cmn.c(142): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(167): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(517): Reading model definition: /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/mdef
INFO: mdef.c(528): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/mdef
INFO: bin_mdef.c(513): 50 CI-phone, 143047 CD-phone, 3 emitstate/phone, 150 CI-sen, 5150 Sen, 27135 Sen-Seq
INFO: tmat.c(205): Reading HMM transition probability matrices: /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/transition_matrices
INFO: acmod.c(121): Attempting to use SCHMM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(903): Loading senones from dump file /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/sendump
INFO: s2_semi_mgau.c(927): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(1022): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1296): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: dict.c(317): Allocating 4158 * 32 bytes (129 KiB) for word entries
INFO: dict.c(332): Reading main dictionary: /home/tron/.config/voximp/9882.dic
INFO: dict.c(211): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(335): 51 words read
INFO: dict.c(341): Reading filler dictionary: /usr/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/noisedict
INFO: dict.c(211): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(344): 11 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(404): Allocating 50^3 * 2 bytes (244 KiB) for word-initial triphones
INFO: dict2pid.c(131): Allocated 60400 bytes (58 KiB) for word-final triphones
INFO: dict2pid.c(195): Allocated 60400 bytes (58 KiB) for single-phone word triphones
INFO: ngram_model_arpa.c(477): ngrams 1=49, 2=94, 3=47
INFO: ngram_model_arpa.c(135): Reading unigrams
INFO: ngram_model_arpa.c(516): 49 = #unigrams created
INFO: ngram_model_arpa.c(195): Reading bigrams
INFO: ngram_model_arpa.c(533): 94 = #bigrams created
INFO: ngram_model_arpa.c(534): 3 = #prob2 entries
INFO: ngram_model_arpa.c(542): 3 = #bo_wt2 entries
INFO: ngram_model_arpa.c(292): Reading trigrams
INFO: ngram_model_arpa.c(555): 47 = #trigrams created
INFO: ngram_model_arpa.c(556): 2 = #prob3 entries
INFO: ngram_search_fwdtree.c(99): 41 unique initial diphones
INFO: ngram_search_fwdtree.c(147): 0 root, 0 non-root channels, 17 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(191): before: 0 root, 0 non-root channels, 17 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 191
INFO: ngram_search_fwdtree.c(338): after: 41 root, 63 non-root channels, 16 single-phone words
Last edited by mich04 (2013-12-19 20:34:23)
I figured out what I was doing wrong: the capture device is hw:1,0, not hw:0,1.
-
Hello,
What follows is a test program I have written. I am attempting to do multi-process inserts. Sometimes the program appears to deadlock, and every other time, when it completes, it segfaults at the end, complaining that a database handle is still in use. I am opening and deleting the container in the parent process and using the multiprocessing module to handle the forking and IPC for me.
from bsddb3.db import *
from dbxml import *
import time
from multiprocessing import Process, Pool, Queue
numberOfItems = 100000
xml = """<item><type/></item>"""
def strAsDocument(mgr, str):
doc = mgr.createDocument()
doc.setContent(str)
return doc
def insertDoc(container, environment, mgr, number):
xtxn = mgr.createTransaction()
uc = mgr.createUpdateContext()
names = [];
print "inserting " + str(number) + " records"
for i in xrange(number):
name = container.putDocument(xtxn, 'item', xml, uc, DBXML_GEN_NAME)
names.append(name)
xtxn.commit()
print "done";
del uc
del xtxn
def go():
environment = DBEnv()
environment.set_cachesize(0, 25 * 1024 * 1024)
environment.open("env", DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER|DB_THREAD, 0)
try:
config = XmlContainerConfig()
config.setAllowCreate(True)
config.setTransactional(True)
mgr = XmlManager(environment, 0)
uc = mgr.createUpdateContext()
try:
mgr.removeContainer("test.dbxml")
except:
pass
container = mgr.openContainer("test.dbxml", config)
container.setAutoIndexing(False, uc)
before = time.time()
ps = []
for i in range(5):
p = Process(target=insertDoc, args=(container, environment, mgr, 2000))
p.start()
ps.append(p)
for p in ps:
p.join()
print time.time() - before
del container
del mgr
del uc
except XmlException, inst:
print "XmlException (", inst.exceptionCode,"): ", inst.what
if inst.exceptionCode == DATABASE_ERROR:
print "Database error code:",inst.dbError
environment.close(0)
for i in range(5):
go()
gives me:
[root@vladivar python]# python test.py
inserting 2000 records
inserting 2000 records
inserting 2000 records
inserting 2000 records
inserting 2000 records
done
done
done
done
done
2.8827149868
Traceback (most recent call last):
File "test.py", line 72, in <module>
go()
File "test.py", line 69, in go
environment.close(0)
bsddb3.db.DBInvalidArgError: (22, 'Invalid argument -- Open database handle: test.dbxml/secondary_configuration')
Segmentation fault
Thanks.
This is very helpful, thanks.
w.r.t. the GIL, it was this posting that put me off: http://www.dabeaz.com/blog/2010/01/python-gil-visualized.html. But on reflection I think you're right; it's probably not an issue, because the application will most likely be IO-bound. I will try the threading module on your recommendation (I much prefer threading anyway). I wasn't worried about the expense of forking, as I would do it rarely (when a worker was started, which would then service multiple requests), and it'll never be running on Windows. It appears, though, that controlling the database handles through a fork() is harder than I expected.
Considering I am now going to try the threading module, this might seem a moot point, but I did rewrite my test case to open the environment in each process instead of before fork()ing, and I still had deadlock issues; I'd like to understand why. Do I understand you correctly that I need to serialize the opening of the environment and containers? I understand that creation needs to be serialized, but opening too? In case you are interested, here is my deadlocking (non-segfaulting) test case:
from bsddb3.db import *
from dbxml import *
import time
from multiprocessing import Process
xml = """<item><type/></item>"""
class DBTest:
def insertDoc(self, number):
uc = self.mgr.createUpdateContext()
try:
names = [];
print "inserting " + str(number) + " records"
for i in xrange(number):
name = self.container.putDocument('item', xml, uc, DBXML_GEN_NAME)
names.append(name)
print "done";
finally:
del uc
def joinEnvironment(self):
self.environment = DBEnv()
self.environment.open("env", DB_JOINENV|DB_THREAD)
@staticmethod
def createEnvironment():
environment = DBEnv()
environment.set_cachesize(0, 25 * 1024 * 1024)
environment.open("env", DB_CREATE|
DB_INIT_LOCK|
DB_INIT_MPOOL|
DB_THREAD, 0)
environment.close(0)
def createContainers(self):
mgr = XmlManager(self.environment, 0)
uc = mgr.createUpdateContext()
config = XmlContainerConfig()
config.setAllowCreate(True)
config.setThreaded(True)
try:
mgr.removeContainer("test.dbxml")
except:
pass
container = mgr.openContainer("test.dbxml", config)
container.setAutoIndexing(False, uc)
del container
def openContainers(self):
config = XmlContainerConfig()
config.setAllowCreate(False)
config.setThreaded(True)
self.mgr = XmlManager(self.environment, 0)
self.container = self.mgr.openContainer("test.dbxml", config)
def cleanup(self):
if hasattr(self, 'container'):
del self.container
if hasattr(self, 'mgr'):
del self.mgr
if hasattr(self, 'environment'):
self.environment.close(0)
del self.environment
# called by fork()ed process
def doProcess(num):
test = DBTest()
try:
test.joinEnvironment()
test.openContainers()
test.insertDoc(num)
except XmlException, inst:
print "XmlException (", inst.exceptionCode,"): ", inst.what
if inst.exceptionCode == DATABASE_ERROR:
print "Database error code:",inst.dbError
finally:
test.cleanup()
# main
DBTest.createEnvironment()
test = DBTest()
test.joinEnvironment()
test.createContainers()
test.cleanup()
ps = []
for i in range(3):
p = Process(target=doProcess, args=(5000,))
p.start()
ps.append(p)
for p in ps:
p.join()
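Since the advice is to move from multiprocessing to the threading module and to serialize opening of the environment and containers, the intended shape can be sketched like this. The snippet uses plain Python stand-ins (a list and locks) instead of the real DBEnv/XmlManager calls, since the actual open/close semantics depend on bsddb3 and dbxml; open_container and insert_docs are hypothetical names for illustration:

```python
import threading

open_lock = threading.Lock()   # serialize container opening across workers
store = []                     # stand-in for the shared container
store_lock = threading.Lock()  # stand-in for the database's own locking

def open_container():
    # In the real code this would be mgr.openContainer(...); the lock
    # ensures only one thread performs the open at a time.
    with open_lock:
        return store

def insert_docs(number):
    container = open_container()
    for _ in range(number):
        with store_lock:
            container.append("<item><type/></item>")

threads = [threading.Thread(target=insert_docs, args=(2000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(store))  # 10000
```

With threads the environment and manager are opened once and shared, which avoids the forked-handle problem entirely; the only parts that need serializing are the open calls and any non-thread-safe handle use.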