Python raw_input in nxos 7.0
After upgrading to 7.0 I lost the ability to use raw_input in my scripts. There is no echo while I am typing. Does anybody know a workaround?
Many of my scripts are interactive and ask the user questions; none of them work after the upgrade.
It is hard to tell which version carries the fix; it may be fixed in the next 7.1 release. It would be best to open a TAC case so this can be properly tracked and to make sure the fix lands in the N5K branch.
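Until the defect is fixed, one workaround sketch (assuming the terminal still delivers the keystrokes and only the echo is lost) is to wrap input handling so the script reads stdin directly and echoes the line back itself. The helper name `ask` is hypothetical, not part of NX-OS:

```python
import sys

def ask(prompt):
    # raw_input() replacement for shells where terminal echo is broken:
    # print the prompt, read the line ourselves, then write it back so
    # the user at least sees what was typed once Enter is pressed.
    sys.stdout.write(prompt)
    sys.stdout.flush()
    line = sys.stdin.readline().rstrip("\n")
    sys.stdout.write(line + "\n")  # manual echo after Enter
    return line
```

This does not restore per-keystroke echo, only a confirmation of the full line, but it keeps interactive scripts usable.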
Similar Messages
I somewhat hijacked a different thread, and my question is better suited here.
I'm using a Python script to check Gmail from an Openbox pipe menu. At first it caused problems: it would create a cache file and then fail to load until that file was removed. To work around this I removed the last line (the one that saves the cache), and now it all works. However, I would prefer to have it work as intended.
The script:
#!/usr/bin/python
# Authors: [email protected] [email protected]
# License: GPL 2.0
# Usage:
# Put an entry in your ~/.config/openbox/menu.xml like this:
# <menu id="gmail" label="gmail" execute="~/.config/openbox/scripts/gmail-openbox.py" />
# And inside <menu id="root-menu" label="openbox">, add this somewhere (wherever you want it on your menu)
# <menu id="gmail" />
import os
import sys
import logging

name = "111111"
pw = "000000"
browser = "firefox3"
filename = "/tmp/.gmail.cache"
login = "\'https://mail.google.com/mail\'"

# Allow us to run using installed `libgmail` or the one in parent directory.
try:
    import libgmail
except ImportError:
    # Urghhh...
    sys.path.insert(1,
                    os.path.realpath(os.path.join(os.path.dirname(__file__),
                                                  os.path.pardir)))
    import libgmail

if __name__ == "__main__":
    import sys
    from getpass import getpass
    if not os.path.isfile(filename):
        ga = libgmail.GmailAccount(name, pw)
        try:
            ga.login()
        except libgmail.GmailLoginFailure:
            print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
            print "<openbox_pipe_menu>"
            print "  <item label=\"login failed.\">"
            print "    <action name=\"Execute\"><execute>" + browser + " " + login + "</execute></action>"
            print "  </item>"
            print "</openbox_pipe_menu>"
            raise SystemExit
    else:
        ga = libgmail.GmailAccount(
            state = libgmail.GmailSessionState(filename = filename))
    msgtotals = ga.getUnreadMsgCount()
    print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
    print "<openbox_pipe_menu>"
    print "<separator label=\"Gmail\"/>"
    if msgtotals == 0:
        print "  <item label=\"no new messages.\">"
    elif msgtotals == 1:
        print "  <item label=\"1 new message.\">"
    else:
        print "  <item label=\"" + str(msgtotals) + " new messages.\">"
    print "    <action name=\"Execute\"><execute>" + browser + " " + login + "</execute></action>"
    print "  </item>"
    print "</openbox_pipe_menu>"
    state = libgmail.GmailSessionState(account = ga).save(filename)
The line I removed:
state = libgmail.GmailSessionState(account = ga).save(filename)
The error I'd get if the cache existed:
Traceback (most recent call last):
  File "/home/shawn/.config/openbox/scripts/gmail.py", line 56, in <module>
    msgtotals = ga.getUnreadMsgCount()
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 547, in getUnreadMsgCount
    q = "is:" + U_AS_SUBSET_UNREAD)
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 428, in _parseSearchResult
    return self._parsePage(_buildURL(**params))
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 401, in _parsePage
    items = _parsePage(self._retrievePage(urlOrRequest))
  File "/home/shawn/.config/openbox/scripts/libgmail.py", line 369, in _retrievePage
    if self.opener is None:
AttributeError: GmailAccount instance has no attribute 'opener'
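The traceback points at GmailAccount.__init__: when the account is restored from a saved session state it takes the `elif state:` branch, which never builds `self.opener`, so the next page fetch dies with the AttributeError above. A minimal sketch of the pattern (simplified stand-in classes, not the real libgmail API):

```python
class Account(object):
    # Simplified model of GmailAccount.__init__: the opener is only
    # built on the name/password branch, so instances restored from a
    # cached session never get the attribute at all.
    def __init__(self, name=None, state=None):
        if name is not None:
            self.opener = "opener-built-from-credentials"
        elif state is not None:
            self.name = state  # opener is never assigned on this branch
        else:
            raise ValueError("need credentials or saved state")

class FixedAccount(Account):
    # One possible fix: default the attribute before branching, and
    # rebuild the opener whenever the restored path left it unset.
    def __init__(self, name=None, state=None):
        self.opener = None
        Account.__init__(self, name=name, state=state)
        if self.opener is None:
            self.opener = "opener-rebuilt-for-restored-session"

broken = Account(state="cached-session")
assert not hasattr(broken, "opener")  # reproduces the AttributeError
fixed = FixedAccount(state="cached-session")
assert fixed.opener is not None
```

In the real library the equivalent fix would be to move the `urllib2.build_opener(...)` call out of the `if name and pw:` branch so the state-restored path builds an opener too.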
EDIT - you might need the libgmail.py
#!/usr/bin/env python
# libgmail -- Gmail access via Python
## To get the version number of the available libgmail version.
## Reminder: add date before next release. This attribute is also
## used in the setup script.
Version = '0.1.8' # (Nov 2007)
# Original author: [email protected]
# Maintainers: Waseem ([email protected]) and Stas Z ([email protected])
# License: GPL 2.0
# NOTE:
# You should ensure you are permitted to use this script before using it
# to access Google's Gmail servers.
# Gmail Implementation Notes
# ==========================
# * Folders contain message threads, not individual messages. At present I
# do not know any way to list all messages without processing thread list.
LG_DEBUG=0
from lgconstants import *
import os,pprint
import re
import urllib
import urllib2
import mimetypes
import types
from cPickle import load, dump
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.MIMEMultipart import MIMEMultipart
GMAIL_URL_LOGIN = "https://www.google.com/accounts/ServiceLoginBoxAuth"
GMAIL_URL_GMAIL = "https://mail.google.com/mail/?ui=1&"
# Set to any value to use proxy.
PROXY_URL = None # e.g. libgmail.PROXY_URL = 'myproxy.org:3128'
# TODO: Get these on the fly?
STANDARD_FOLDERS = [U_INBOX_SEARCH, U_STARRED_SEARCH,
U_ALL_SEARCH, U_DRAFTS_SEARCH,
U_SENT_SEARCH, U_SPAM_SEARCH]
# Constants with names not from the Gmail Javascript:
# TODO: Move to `lgconstants.py`?
U_SAVEDRAFT_VIEW = "sd"
D_DRAFTINFO = "di"
# NOTE: All other DI_* field offsets seem to match the MI_* field offsets
DI_BODY = 19
versionWarned = False # If the Javascript version is different have we
# warned about it?
RE_SPLIT_PAGE_CONTENT = re.compile("D\((.*?)\);", re.DOTALL)
class GmailError(Exception):
    """Exception thrown upon gmail-specific failures, in particular a
    failure to log in and a failure to parse responses."""
    pass

class GmailSendError(Exception):
    """Exception to throw if we're unable to send a message."""
    pass
def _parsePage(pageContent):
    """Parse the supplied HTML page and extract useful information from
    the embedded Javascript."""
lines = pageContent.splitlines()
data = '\n'.join([x for x in lines if x and x[0] in ['D', ')', ',', ']']])
#data = data.replace(',,',',').replace(',,',',')
data = re.sub(',{2,}', ',', data)
result = []
try:
exec data in {'__builtins__': None}, {'D': lambda x: result.append(x)}
except SyntaxError,info:
print info
raise GmailError, 'Failed to parse data returned from gmail.'
items = result
itemsDict = {}
namesFoundTwice = []
for item in items:
name = item[0]
try:
parsedValue = item[1:]
except Exception:
parsedValue = ['']
if itemsDict.has_key(name):
# This handles the case where a name key is used more than
# once (e.g. mail items, mail body etc) and automatically
# places the values into list.
# TODO: Check this actually works properly, it's early... :-)
if len(parsedValue) and type(parsedValue[0]) is types.ListType:
for item in parsedValue:
itemsDict[name].append(item)
else:
itemsDict[name].append(parsedValue)
else:
if len(parsedValue) and type(parsedValue[0]) is types.ListType:
itemsDict[name] = []
for item in parsedValue:
itemsDict[name].append(item)
else:
itemsDict[name] = [parsedValue]
return itemsDict
def _splitBunches(infoItems): # Is this still needed ?? Stas
    """Utility to help make it easy to iterate over each item separately,
    even if they were bunched on the page."""
result= []
# TODO: Decide if this is the best approach.
for group in infoItems:
if type(group) == tuple:
result.extend(group)
else:
result.append(group)
return result
class SmartRedirectHandler(urllib2.HTTPRedirectHandler):
def __init__(self, cookiejar):
self.cookiejar = cookiejar
def http_error_302(self, req, fp, code, msg, headers):
# The location redirect doesn't seem to change
# the hostname header appropriately, so we do
# by hand. (Is this a bug in urllib2?)
new_host = re.match(r'http[s]*://(.*?\.google\.com)',
headers.getheader('Location'))
if new_host:
req.add_header("Host", new_host.groups()[0])
result = urllib2.HTTPRedirectHandler.http_error_302(
self, req, fp, code, msg, headers)
return result
class CookieJar:
    """A rough cookie handler, intended to only refer to one domain.
    Does no expiry or anything like that.

    (The only reason this is here is so I don't have to require
    the `ClientCookie` package.)"""
def __init__(self):
self._cookies = {}
def extractCookies(self, headers, nameFilter = None):
# TODO: Do this all more nicely?
for cookie in headers.getheaders('Set-Cookie'):
name, value = (cookie.split("=", 1) + [""])[:2]
if LG_DEBUG: print "Extracted cookie `%s`" % (name)
if not nameFilter or name in nameFilter:
self._cookies[name] = value.split(";")[0]
if LG_DEBUG: print "Stored cookie `%s` value `%s`" % (name, self._cookies[name])
if self._cookies[name] == "EXPIRED":
if LG_DEBUG:
print "We got an expired cookie: %s:%s, deleting." % (name, self._cookies[name])
del self._cookies[name]
def addCookie(self, name, value):
self._cookies[name] = value
def setCookies(self, request):
request.add_header('Cookie',
";".join(["%s=%s" % (k,v)
for k,v in self._cookies.items()]))
def _buildURL(**kwargs):
return "%s%s" % (URL_GMAIL, urllib.urlencode(kwargs))
def _paramsToMime(params, filenames, files):
mimeMsg = MIMEMultipart("form-data")
for name, value in params.iteritems():
mimeItem = MIMEText(value)
mimeItem.add_header("Content-Disposition", "form-data", name=name)
# TODO: Handle this better...?
for hdr in ['Content-Type','MIME-Version','Content-Transfer-Encoding']:
del mimeItem[hdr]
mimeMsg.attach(mimeItem)
if filenames or files:
filenames = filenames or []
files = files or []
for idx, item in enumerate(filenames + files):
# TODO: This is messy, tidy it...
if isinstance(item, str):
# We assume it's a file path...
filename = item
contentType = mimetypes.guess_type(filename)[0]
payload = open(filename, "rb").read()
else:
# We assume it's an `email.Message.Message` instance...
# TODO: Make more use of the pre-encoded information?
filename = item.get_filename()
contentType = item.get_content_type()
payload = item.get_payload(decode=True)
if not contentType:
contentType = "application/octet-stream"
mimeItem = MIMEBase(*contentType.split("/"))
mimeItem.add_header("Content-Disposition", "form-data",
name="file%s" % idx, filename=filename)
# TODO: Encode the payload?
mimeItem.set_payload(payload)
# TODO: Handle this better...?
for hdr in ['MIME-Version','Content-Transfer-Encoding']:
del mimeItem[hdr]
mimeMsg.attach(mimeItem)
del mimeMsg['MIME-Version']
return mimeMsg
class GmailLoginFailure(Exception):
    """Raised whenever the login process fails -- could be wrong
    username/password, or a Gmail service error, for example.

    Extract the error message like this:

        try:
            foobar
        except GmailLoginFailure, e:
            mesg = e.message  # or
            print e           # uses the __str__
    """
def __init__(self,message):
self.message = message
def __str__(self):
return repr(self.message)
class GmailAccount:
def __init__(self, name = "", pw = "", state = None, domain = None):
global URL_LOGIN, URL_GMAIL
self.domain = domain
if self.domain:
URL_LOGIN = "https://www.google.com/a/" + self.domain + "/LoginAction"
URL_GMAIL = "http://mail.google.com/a/" + self.domain + "/?"
else:
URL_LOGIN = GMAIL_URL_LOGIN
URL_GMAIL = GMAIL_URL_GMAIL
if name and pw:
self.name = name
self._pw = pw
self._cookieJar = CookieJar()
if PROXY_URL is not None:
import gmail_transport
self.opener = urllib2.build_opener(gmail_transport.ConnectHTTPHandler(proxy = PROXY_URL),
gmail_transport.ConnectHTTPSHandler(proxy = PROXY_URL),
SmartRedirectHandler(self._cookieJar))
else:
self.opener = urllib2.build_opener(
urllib2.HTTPHandler(debuglevel=0),
urllib2.HTTPSHandler(debuglevel=0),
SmartRedirectHandler(self._cookieJar))
elif state:
# TODO: Check for stale state cookies?
self.name, self._cookieJar = state.state
else:
raise ValueError("GmailAccount must be instantiated with " \
"either GmailSessionState object or name " \
"and password.")
self._cachedQuotaInfo = None
self._cachedLabelNames = None
def login(self):
# TODO: Throw exception if we were instantiated with state?
        if self.domain:
            data = urllib.urlencode({'continue': URL_GMAIL,
                                     'at': 'null',
                                     'service': 'mail',
                                     'userName': self.name,
                                     'password': self._pw,
                                     })
        else:
            data = urllib.urlencode({'continue': URL_GMAIL,
                                     'Email': self.name,
                                     'Passwd': self._pw,
                                     })
headers = {'Host': 'www.google.com',
'User-Agent': 'Mozilla/5.0 (Compatible; libgmail-python)'}
req = urllib2.Request(URL_LOGIN, data=data, headers=headers)
pageData = self._retrievePage(req)
if not self.domain:
# The GV cookie no longer comes in this page for
# "Apps", so this bottom portion is unnecessary for it.
# This requests the page that provides the required "GV" cookie.
RE_PAGE_REDIRECT = 'CheckCookie\?continue=([^"\']+)'
# TODO: Catch more failure exceptions here...?
try:
link = re.search(RE_PAGE_REDIRECT, pageData).group(1)
redirectURL = urllib2.unquote(link)
redirectURL = redirectURL.replace('\\x26', '&')
except AttributeError:
raise GmailLoginFailure("Login failed. (Wrong username/password?)")
# We aren't concerned with the actual content of this page,
# just the cookie that is returned with it.
pageData = self._retrievePage(redirectURL)
def _retrievePage(self, urlOrRequest):
if self.opener is None:
            raise GmailError("Cannot find urlopener")
if not isinstance(urlOrRequest, urllib2.Request):
req = urllib2.Request(urlOrRequest)
else:
req = urlOrRequest
self._cookieJar.setCookies(req)
req.add_header('User-Agent',
'Mozilla/5.0 (Compatible; libgmail-python)')
try:
resp = self.opener.open(req)
except urllib2.HTTPError,info:
print info
return None
pageData = resp.read()
# Extract cookies here
self._cookieJar.extractCookies(resp.headers)
# TODO: Enable logging of page data for debugging purposes?
return pageData
    def _parsePage(self, urlOrRequest):
        """Retrieve & then parse the requested page content."""
items = _parsePage(self._retrievePage(urlOrRequest))
# Automatically cache some things like quota usage.
# TODO: Cache more?
# TODO: Expire cached values?
# TODO: Do this better.
try:
self._cachedQuotaInfo = items[D_QUOTA]
except KeyError:
pass
#pprint.pprint(items)
try:
self._cachedLabelNames = [category[CT_NAME] for category in items[D_CATEGORIES][0]]
except KeyError:
pass
return items
    def _parseSearchResult(self, searchType, start = 0, **kwargs):
        params = {U_SEARCH: searchType,
                  U_START: start,
                  U_VIEW: U_THREADLIST_VIEW,
                  }
        params.update(kwargs)
return self._parsePage(_buildURL(**params))
    def _parseThreadSearch(self, searchType, allPages = False, **kwargs):
        """Only works for thread-based results at present."""  # TODO: Change this?
start = 0
tot = 0
threadsInfo = []
# Option to get *all* threads if multiple pages are used.
while (start == 0) or (allPages and
len(threadsInfo) < threadListSummary[TS_TOTAL]):
items = self._parseSearchResult(searchType, start, **kwargs)
#TODO: Handle single & zero result case better? Does this work?
try:
threads = items[D_THREAD]
except KeyError:
break
else:
for th in threads:
if not type(th[0]) is types.ListType:
th = [th]
threadsInfo.append(th)
# TODO: Check if the total or per-page values have changed?
threadListSummary = items[D_THREADLIST_SUMMARY][0]
threadsPerPage = threadListSummary[TS_NUM]
start += threadsPerPage
# TODO: Record whether or not we retrieved all pages..?
return GmailSearchResult(self, (searchType, kwargs), threadsInfo)
    def _retrieveJavascript(self, version = ""):
        """Note: `version` seems to be ignored."""
return self._retrievePage(_buildURL(view = U_PAGE_VIEW,
name = "js",
ver = version))
    def getMessagesByFolder(self, folderName, allPages = False):
        """Folders contain conversation/message threads.

        `folderName` -- As set in Gmail interface.

        Returns a `GmailSearchResult` instance.

        *** TODO: Change all "getMessagesByX" to "getThreadsByX"? ***
        """
return self._parseThreadSearch(folderName, allPages = allPages)
    def getMessagesByQuery(self, query, allPages = False):
        """Returns a `GmailSearchResult` instance."""
return self._parseThreadSearch(U_QUERY_SEARCH, q = query,
allPages = allPages)
    def getQuotaInfo(self, refresh = False):
        """Return MB used, total MB and percentage used."""
# TODO: Change this to a property.
if not self._cachedQuotaInfo or refresh:
# TODO: Handle this better...
self.getMessagesByFolder(U_INBOX_SEARCH)
return self._cachedQuotaInfo[0][:3]
def getLabelNames(self, refresh = False):
# TODO: Change this to a property?
if not self._cachedLabelNames or refresh:
# TODO: Handle this better...
self.getMessagesByFolder(U_INBOX_SEARCH)
return self._cachedLabelNames
def getMessagesByLabel(self, label, allPages = False):
return self._parseThreadSearch(U_CATEGORY_SEARCH,
cat=label, allPages = allPages)
def getRawMessage(self, msgId):
# U_ORIGINAL_MESSAGE_VIEW seems the only one that returns a page.
# All the other U_* results in a 404 exception. Stas
PageView = U_ORIGINAL_MESSAGE_VIEW
return self._retrievePage(
_buildURL(view=PageView, th=msgId))
def getUnreadMessages(self):
return self._parseThreadSearch(U_QUERY_SEARCH,
q = "is:" + U_AS_SUBSET_UNREAD)
def getUnreadMsgCount(self):
items = self._parseSearchResult(U_QUERY_SEARCH,
q = "is:" + U_AS_SUBSET_UNREAD)
try:
result = items[D_THREADLIST_SUMMARY][0][TS_TOTAL_MSGS]
except KeyError:
result = 0
return result
def _getActionToken(self):
try:
at = self._cookieJar._cookies[ACTION_TOKEN_COOKIE]
except KeyError:
self.getLabelNames(True)
at = self._cookieJar._cookies[ACTION_TOKEN_COOKIE]
return at
    def sendMessage(self, msg, asDraft = False, _extraParams = None):
        """`msg` -- `GmailComposedMessage` instance.

        `_extraParams` -- Dictionary containing additional parameters
                          to put into the POST message. (Not officially
                          for external use, more to make feature
                          additions a little easier to play with.)

        Note: Now returns a `GmailMessageStub` instance with populated
              `id` (and `_account`) fields on success, or None on failure.
        """
        # TODO: Handle drafts separately?
        params = {U_VIEW: [U_SENDMAIL_VIEW, U_SAVEDRAFT_VIEW][asDraft],
                  U_REFERENCED_MSG: "",
                  U_THREAD: "",
                  U_DRAFT_MSG: "",
                  U_COMPOSEID: "1",
                  U_ACTION_TOKEN: self._getActionToken(),
                  U_COMPOSE_TO: msg.to,
                  U_COMPOSE_CC: msg.cc,
                  U_COMPOSE_BCC: msg.bcc,
                  "subject": msg.subject,
                  "msgbody": msg.body,
                  }
if _extraParams:
params.update(_extraParams)
# Amongst other things, I used the following post to work out this:
# <http://groups.google.com/groups?
# selm=mailman.1047080233.20095.python-list%40python.org>
mimeMessage = _paramsToMime(params, msg.filenames, msg.files)
#### TODO: Ughh, tidy all this up & do it better...
## This horrible mess is here for two main reasons:
## 1. The `Content-Type` header (which also contains the boundary
## marker) needs to be extracted from the MIME message so
## we can send it as the request `Content-Type` header instead.
## 2. It seems the form submission needs to use "\r\n" for new
## lines instead of the "\n" returned by `as_string()`.
## I tried changing the value of `NL` used by the `Generator` class
## but it didn't work so I'm doing it this way until I figure
## out how to do it properly. Of course, first try, if the payloads
## contained "\n" sequences they got replaced too, which corrupted
## the attachments. I could probably encode the submission,
## which would probably be nicer, but in the meantime I'm kludging
## this workaround that replaces all non-text payloads with a
## marker, changes all "\n" to "\r\n" and finally replaces the
## markers with the original payloads.
## Yeah, I know, it's horrible, but hey it works doesn't it? If you've
## got a problem with it, fix it yourself & give me the patch!
origPayloads = {}
FMT_MARKER = "&&&&&&%s&&&&&&"
for i, m in enumerate(mimeMessage.get_payload()):
if not isinstance(m, MIMEText): #Do we care if we change text ones?
origPayloads[i] = m.get_payload()
m.set_payload(FMT_MARKER % i)
mimeMessage.epilogue = ""
msgStr = mimeMessage.as_string()
contentTypeHeader, data = msgStr.split("\n\n", 1)
contentTypeHeader = contentTypeHeader.split(":", 1)
data = data.replace("\n", "\r\n")
for k,v in origPayloads.iteritems():
data = data.replace(FMT_MARKER % k, v)
req = urllib2.Request(_buildURL(), data = data)
req.add_header(*contentTypeHeader)
items = self._parsePage(req)
# TODO: Check composeid?
# Sometimes we get the success message
# but the id is 0 and no message is sent
result = None
resultInfo = items[D_SENDMAIL_RESULT][0]
if resultInfo[SM_SUCCESS]:
result = GmailMessageStub(id = resultInfo[SM_NEWTHREADID],
_account = self)
else:
raise GmailSendError, resultInfo[SM_MSG]
return result
def trashMessage(self, msg):
# TODO: Decide if we should make this a method of `GmailMessage`.
# TODO: Should we check we have been given a `GmailMessage` instance?
        params = {
            U_ACTION: U_DELETEMESSAGE_ACTION,
            U_ACTION_MESSAGE: msg.id,
            U_ACTION_TOKEN: self._getActionToken(),
            }
items = self._parsePage(_buildURL(**params))
# TODO: Mark as trashed on success?
return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
def _doThreadAction(self, actionId, thread):
# TODO: Decide if we should make this a method of `GmailThread`.
# TODO: Should we check we have been given a `GmailThread` instance?
        params = {
            U_SEARCH: U_ALL_SEARCH, # TODO: Check this search value always works.
            U_VIEW: U_UPDATE_VIEW,
            U_ACTION: actionId,
            U_ACTION_THREAD: thread.id,
            U_ACTION_TOKEN: self._getActionToken(),
            }
items = self._parsePage(_buildURL(**params))
return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
def trashThread(self, thread):
# TODO: Decide if we should make this a method of `GmailThread`.
# TODO: Should we check we have been given a `GmailThread` instance?
result = self._doThreadAction(U_MARKTRASH_ACTION, thread)
# TODO: Mark as trashed on success?
return result
    def _createUpdateRequest(self, actionId): # extraData):
        """Helper method to create a Request instance for an update (view)
        action.

        Returns a populated `Request` instance.
        """
        params = {
            U_VIEW: U_UPDATE_VIEW,
            }
        data = {
            U_ACTION: actionId,
            U_ACTION_TOKEN: self._getActionToken(),
            }
        # data.update(extraData)
req = urllib2.Request(_buildURL(**params),
data = urllib.urlencode(data))
return req
# TODO: Extract additional common code from handling of labels?
def createLabel(self, labelName):
req = self._createUpdateRequest(U_CREATECATEGORY_ACTION + labelName)
# Note: Label name cache is updated by this call as well. (Handy!)
items = self._parsePage(req)
print items
return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
def deleteLabel(self, labelName):
# TODO: Check labelName exits?
req = self._createUpdateRequest(U_DELETECATEGORY_ACTION + labelName)
# Note: Label name cache is updated by this call as well. (Handy!)
items = self._parsePage(req)
return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
def renameLabel(self, oldLabelName, newLabelName):
# TODO: Check oldLabelName exits?
req = self._createUpdateRequest("%s%s^%s" % (U_RENAMECATEGORY_ACTION,
oldLabelName, newLabelName))
# Note: Label name cache is updated by this call as well. (Handy!)
items = self._parsePage(req)
return (items[D_ACTION_RESULT][0][AR_SUCCESS] == 1)
def storeFile(self, filename, label = None):
# TODO: Handle files larger than single attachment size.
# TODO: Allow file data objects to be supplied?
FILE_STORE_VERSION = "FSV_01"
FILE_STORE_SUBJECT_TEMPLATE = "%s %s" % (FILE_STORE_VERSION, "%s")
subject = FILE_STORE_SUBJECT_TEMPLATE % os.path.basename(filename)
msg = GmailComposedMessage(to="", subject=subject, body="",
filenames=[filename])
draftMsg = self.sendMessage(msg, asDraft = True)
if draftMsg and label:
draftMsg.addLabel(label)
return draftMsg
## CONTACTS SUPPORT
    def getContacts(self):
        """Returns a GmailContactList object that has all the contacts
        in it as GmailContacts."""
contactList = []
# pnl = a is necessary to get *all* contacts
myUrl = _buildURL(view='cl',search='contacts', pnl='a')
myData = self._parsePage(myUrl)
# This comes back with a dictionary
# with entry 'cl'
addresses = myData['cl']
for entry in addresses:
if len(entry) >= 6 and entry[0]=='ce':
newGmailContact = GmailContact(entry[1], entry[2], entry[4], entry[5])
#### new code used to get all the notes
#### not used yet due to lockdown problems
##rawnotes = self._getSpecInfo(entry[1])
##print rawnotes
##newGmailContact = GmailContact(entry[1], entry[2], entry[4],rawnotes)
contactList.append(newGmailContact)
return GmailContactList(contactList)
    def addContact(self, myContact, *extra_args):
        """Attempts to add a GmailContact to the Gmail address book.
        Returns True if successful, False otherwise.

        Please note that after version 0.1.3.3, addContact takes one
        argument of type GmailContact, the contact to add.

        The old signature of addContact(name, email, notes='') is still
        supported, but deprecated.
        """
if len(extra_args) > 0:
# The user has passed in extra arguments
# He/she is probably trying to invoke addContact
# using the old, deprecated signature of:
# addContact(self, name, email, notes='')
# Build a GmailContact object and use that instead
(name, email) = (myContact, extra_args[0])
if len(extra_args) > 1:
notes = extra_args[1]
else:
notes = ''
myContact = GmailContact(-1, name, email, notes)
# TODO: In the ideal world, we'd extract these specific
# constants into a nice constants file
# This mostly comes from the Johnvey Gmail API,
# but also from the gmail.py cited earlier
myURL = _buildURL(view='up')
        myDataList = [ ('act', 'ec'),
                       ('at', self._cookieJar._cookies['GMAIL_AT']), # Cookie data?
                       ('ct_nm', myContact.getName()),
                       ('ct_em', myContact.getEmail()),
                       ('ct_id', -1),
                       ]
notes = myContact.getNotes()
if notes != '':
myDataList.append( ('ctf_n', notes) )
        validinfokeys = [
            'i', # IM
            'p', # Phone
            'd', # Company
            'a', # ADR
            'e', # Email
            'm', # Mobile
            'b', # Pager
            'f', # Fax
            't', # Title
            'o', # Other
            ]
moreInfo = myContact.getMoreInfo()
ctsn_num = -1
if moreInfo != {}:
for ctsf,ctsf_data in moreInfo.items():
ctsn_num += 1
# data section header, WORK, HOME,...
sectionenum ='ctsn_%02d' % ctsn_num
myDataList.append( ( sectionenum, ctsf ))
ctsf_num = -1
if isinstance(ctsf_data[0],str):
ctsf_num += 1
# data section
subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, ctsf_data[0]) # ie. ctsf_00_01_p
myDataList.append( (subsectionenum, ctsf_data[1]) )
else:
for info in ctsf_data:
if validinfokeys.count(info[0]) > 0:
ctsf_num += 1
# data section
subsectionenum = 'ctsf_%02d_%02d_%s' % (ctsn_num, ctsf_num, info[0]) # ie. ctsf_00_01_p
myDataList.append( (subsectionenum, info[1]) )
myData = urllib.urlencode(myDataList)
request = urllib2.Request(myURL,
data = myData)
pageData = self._retrievePage(request)
if pageData.find("The contact was successfully added") == -1:
print pageData
if pageData.find("already has the email address") > 0:
raise Exception("Someone with same email already exists in Gmail.")
elif pageData.find("https://www.google.com/accounts/ServiceLogin"):
raise Exception("Login has expired.")
return False
else:
return True
    def _removeContactById(self, id):
        """Attempts to remove the contact that occupies id "id" from the
        Gmail address book. Returns True if successful, False otherwise.

        This is a little dangerous since you don't really know who you're
        deleting. Really, this should return the name or something of the
        person we just killed.

        Don't call this method; you should be using removeContact instead.
        """
myURL = _buildURL(search='contacts', ct_id = id, c=id, act='dc', at=self._cookieJar._cookies['GMAIL_AT'], view='up')
pageData = self._retrievePage(myURL)
if pageData.find("The contact has been deleted") == -1:
return False
else:
return True
    def removeContact(self, gmailContact):
        """Attempts to remove the GmailContact passed in.
        Returns True if successful, False otherwise.
        """
# Let's re-fetch the contact list to make
# sure we're really deleting the guy
# we think we're deleting
newContactList = self.getContacts()
newVersionOfPersonToDelete = newContactList.getContactById(gmailContact.getId())
# Ok, now we need to ensure that gmailContact
# is the same as newVersionOfPersonToDelete
# and then we can go ahead and delete him/her
if (gmailContact == newVersionOfPersonToDelete):
return self._removeContactById(gmailContact.getId())
else:
# We have a cache coherency problem -- someone
# else now occupies this ID slot.
# TODO: Perhaps signal this in some nice way
# to the end user?
print "Unable to delete."
print "Has someone else been modifying the contacts list while we have?"
print "Old version of person:",gmailContact
print "New version of person:",newVersionOfPersonToDelete
return False
## Don't remove this. contact stas
## def _getSpecInfo(self,id):
## Return all the notes data.
## This is currently not used due to the fact that it requests pages in
## a dos attack manner.
## myURL =_buildURL(search='contacts',ct_id=id,c=id,\
## at=self._cookieJar._cookies['GMAIL_AT'],view='ct')
## pageData = self._retrievePage(myURL)
## myData = self._parsePage(myURL)
## #print "\nmyData form _getSpecInfo\n",myData
## rawnotes = myData['cov'][7]
## return rawnotes
class GmailContact:
    """Class for storing a Gmail Contacts list entry."""
    def __init__(self, name, email, *extra_args):
        """Returns a new GmailContact object
        (you can then call addContact on this to commit it to the Gmail
        address book, for example).

        Consider calling setNotes() and setMoreInfo() to add extended
        information to this contact.
        """
# Support populating other fields if we're trying
# to invoke this the old way, with the old constructor
# whose signature was __init__(self, id, name, email, notes='')
id = -1
notes = ''
if len(extra_args) > 0:
(id, name) = (name, email)
email = extra_args[0]
if len(extra_args) > 1:
notes = extra_args[1]
else:
notes = ''
self.id = id
self.name = name
self.email = email
self.notes = notes
self.moreInfo = {}
def __str__(self):
return "%s %s %s %s" % (self.id, self.name, self.email, self.notes)
def __eq__(self, other):
if not isinstance(other, GmailContact):
return False
return (self.getId() == other.getId()) and \
(self.getName() == other.getName()) and \
(self.getEmail() == other.getEmail()) and \
(self.getNotes() == other.getNotes())
def getId(self):
return self.id
def getName(self):
return self.name
def getEmail(self):
return self.email
def getNotes(self):
return self.notes
    def setNotes(self, notes):
        """Sets the notes field for this GmailContact.

        Note that this does NOT change the notes field on Gmail's end;
        only adding or removing contacts modifies them.
        """
self.notes = notes
def getMoreInfo(self):
return self.moreInfo
    def setMoreInfo(self, moreInfo):
        """moreInfo format

        Use special key values::
            'i' = IM
            'p' = Phone
            'd' = Company
            'a' = ADR
            'e' = Email
            'm' = Mobile
            'b' = Pager
            'f' = Fax
            't' = Title
            'o' = Other

        Simple example::
            moreInfo = {'Home': ( ('a', '852 W Barry'),
                                  ('p', '1-773-244-1980'),
                                  ('i', 'aim:brianray34') ) }

        Complex example::
            moreInfo = {
                'Personal': (('e', 'Home Email'),
                             ('f', 'Home Fax')),
                'Work': (('d', 'Sample Company'),
                         ('t', 'Job Title'),
                         ('o', 'Department: Department1'),
                         ('o', 'Department: Department2'),
                         ('p', 'Work Phone'),
                         ('m', 'Mobile Phone'),
                         ('f', 'Work Fax'),
                         ('b', 'Pager')) }
        """
self.moreInfo = moreInfo
def getVCard(self):
"""Returns a vCard 3.0 for this
contact, as a string"""
        # The \r is to comply with RFC 2425 section 5.8.1
vcard = "BEGIN:VCARD\r\n"
vcard += "VERSION:3.0\r\n"
## Deal with multiline notes
##vcard += "NOTE:%s\n" % self.getNotes().replace("\n","\\n")
vcard += "NOTE:%s\r\n" % self.getNotes()
# Fake-out N by splitting up whatever we get out of getName
# This might not always do 'the right thing'
# but it's a *reasonable* compromise
fullname = self.getName().split()
fullname.reverse()
vcard += "N:%s" % ';'.join(fullname) + "\r\n"
vcard += "FN:%s\r\n" % self.getName()
vcard += "EMAIL;TYPE=INTERNET:%s\r\n" % self.getEmail()
vcard += "END:VCARD\r\n\r\n"
# Final newline in case we want to put more than one in a file
return vcard
class GmailContactList:
    """Class for storing an entire Gmail contacts list
    and retrieving contacts by id, email address, and name."""
def __init__(self, contactList):
self.contactList = contactList
def __str__(self):
return '\n'.join([str(item) for item in self.contactList])
    def getCount(self):
        """Returns the number of contacts."""
        return len(self.contactList)
    def getAllContacts(self):
        """Returns a list of all the GmailContacts."""
        return self.contactList
    def getContactByName(self, name):
        """Gets the first contact in the address book whose name is
        'name'. Returns False if no contact could be found.
        """
nameList = self.getContactListByName(name)
if len(nameList) > 0:
return nameList[0]
else:
return False
    def getContactByEmail(self, email):
        """Gets the first contact in the address book whose email is
        'email'. As of this writing, Gmail insists upon a unique email;
        i.e. two contacts cannot share an email address. Returns False
        if no contact could be found.
        """
emailList = self.getContactListByEmail(email)
if len(emailList) > 0:
return emailList[0]
else:
return False
def getContactById(self, myId):
"""Gets the first contact in the
address book whose id is 'myId'.
REMEMBER: ID IS A STRING
Returns False if no contact
could be found"""
idList = self.getContactListById(myId)
if len(idList) > 0:
return idList[0]
else:
return False
def getContactListByName(self, name):
"""This function returns a LIST
of GmailContacts whose name is
'name'.
Returns an empty list if no contacts
were found"""
nameList = []
for entry in self.contactList:
if entry.getName() == name:
nameList.append(entry)
return nameList
def getContactListByEmail(self, email):
"""This function returns a LIST
of GmailContacts whose email is
'email'. As of this writing, two contacts
cannot share an email address, so this
should only return just one item.
But it doesn't hurt to be prepared?
Returns an empty list if no contacts
were found"""
emailList = []
for entry in self.contactList:
if entry.getEmail() == email:
emailList.append(entry)
return emailList
def getContactListById(self, myId):
"""This function returns a LIST
of GmailContacts whose id is
'myId'. We expect there only to
be one, but just in case!
Remember: ID IS A STRING
Returns an empty list if no contacts
were found"""
idList = []
for entry in self.contactList:
if entry.getId() == myId:
idList.append(entry)
return idList
def search(self, searchTerm):
"""This function returns a LIST
of GmailContacts whose name or
email address matches the 'searchTerm'.
Returns an empty list if no matches
were found."""
searchResults = []
for entry in self.contactList:
p = re.compile(searchTerm, re.IGNORECASE)
if p.search(entry.getName()) or p.search(entry.getEmail()):
searchResults.append(entry)
return searchResults
class GmailSearchResult:
def __init__(self, account, search, threadsInfo):
"""`threadsInfo` -- As returned from Gmail but unbunched."""
#print "\nthreadsInfo\n",threadsInfo
try:
if not type(threadsInfo[0]) is types.ListType:
threadsInfo = [threadsInfo]
except IndexError:
print "No messages found"
self._account = account
self.search = search # TODO: Turn into object + format nicely.
self._threads = []
for thread in threadsInfo:
self._threads.append(GmailThread(self, thread[0]))
def __iter__(self):
return iter(self._threads)
def __len__(self):
return len(self._threads)
def __getitem__(self,key):
return self._threads.__getitem__(key)
class GmailSessionState:
def __init__(self, account = None, filename = ""):
if account:
self.state = (account.name, account._cookieJar)
elif filename:
self.state = load(open(filename, "rb"))
else:
raise ValueError("GmailSessionState must be instantiated with " \
"either GmailAccount object or filename.")
def save(self, filename):
dump(self.state, open(filename, "wb"), -1)
class _LabelHandlerMixin(object):
"""Note: Because a message id can be used as a thread id this works for
messages as well as threads."""
def __init__(self):
self._labels = None
def _makeLabelList(self, labelList):
self._labels = labelList
def addLabel(self, labelName):
# Note: It appears this also automatically creates new labels.
result = self._account._doThreadAction(U_ADDCATEGORY_ACTION+labelName,
self)
if not self._labels:
self._makeLabelList([])
# TODO: Caching this seems a little dangerous; suppress duplicates maybe?
self._labels.append(labelName)
return result
def removeLabel(self, labelName):
# TODO: Check label is already attached?
# Note: An error is not generated if the label is not already attached.
result = \
self._account._doThreadAction(U_REMOVECATEGORY_ACTION+labelName,
self)
removeLabel = True
try:
self._labels.remove(labelName)
except:
removeLabel = False
pass
# If we don't check both, we might end up in some weird inconsistent state
return result and removeLabel
def getLabels(self):
return self._labels
class GmailThread(_LabelHandlerMixin):
"""Note: As far as I can tell, the "canonical" thread id is always the same
as the id of the last message in the thread. But it appears that
the id of any message in the thread can be used to retrieve
the thread information."""
def __init__(self, parent, threadsInfo):
_LabelHandlerMixin.__init__(self)
# TODO Handle this better?
self._parent = parent
self._account = self._parent._account
self.id = threadsInfo[T_THREADID] # TODO: Change when canonical updated?
self.subject = threadsInfo[T_SUBJECT_HTML]
self.snippet = threadsInfo[T_SNIPPET_HTML]
#self.extraSummary = threadInfo[T_EXTRA_SNIPPET] #TODO: What is this?
# TODO: Store other info?
# Extract number of messages in thread/conversation.
self._authors = threadsInfo[T_AUTHORS_HTML]
self.info = threadsInfo
try:
# TODO: Find out if this information can be found another way...
# (Without another page request.)
self._length = int(re.search("\((\d+?)\)\Z",
self._authors).group(1))
except AttributeError,info:
# If there's no message count then the thread only has one message.
self._length = 1
# TODO: Store information known about the last message (e.g. id)?
self._messages = []
# Populate labels
self._makeLabelList(threadsInfo[T_CATEGORIES])
def __getattr__(self, name):
"""Dynamically dispatch some interesting thread properties."""
attrs = { 'unread': T_UNREAD,
'star': T_STAR,
'date': T_DATE_HTML,
'authors': T_AUTHORS_HTML,
'flags': T_FLAGS,
'subject': T_SUBJECT_HTML,
'snippet': T_SNIPPET_HTML,
'categories': T_CATEGORIES,
'attach': T_ATTACH_HTML,
'matching_msgid': T_MATCHING_MSGID,
'extra_snippet': T_EXTRA_SNIPPET }
if name in attrs:
return self.info[ attrs[name] ];
raise AttributeError("no attribute %s" % name)
def __len__(self):
return self._length
def __iter__(self):
if not self._messages:
self._messages = self._getMessages(self)
return iter(self._messages)
def __getitem__(self, key):
if not self._messages:
self._messages = self._getMessages(self)
try:
result = self._messages.__getitem__(key)
except IndexError:
result = []
return result
def _getMessages(self, thread):
# TODO: Do this better.
# TODO: Specify the query folder using our specific search?
items = self._account._parseSearchResult(U_QUERY_SEARCH,
view = U_CONVERSATION_VIEW,
th = thread.id,
q = "in:anywhere")
result = []
# TODO: Handle this better?
# Note: This handles both draft & non-draft messages in a thread...
for key, isDraft in [(D_MSGINFO, False), (D_DRAFTINFO, True)]:
try:
msgsInfo = items[key]
except KeyError:
# No messages of this type (e.g. draft or non-draft)
continue
else:
# TODO: Handle special case of only 1 message in thread better?
if type(msgsInfo[0]) != types.ListType:
msgsInfo = [msgsInfo]
for msg in msgsInfo:
result += [GmailMessage(thread, msg, isDraft = isDraft)]
return result
class GmailMessageStub(_LabelHandlerMixin):
"""Intended to be used where not all message information is known/required.
NOTE: This may go away."""
# TODO: Provide way to convert this to a full `GmailMessage` instance
# or allow `GmailMessage` to be created without all info?
def __init__(self, id = None, _account = None):
_LabelHandlerMixin.__init__(self)
self.id = id
self._account = _account
class GmailMessage(object):
def __init__(self, parent, msgData, isDraft = False):
"""Note: `msgData` can be from either D_MSGINFO or D_DRAFTINFO."""
# TODO: Automatically detect if it's a draft or not?
# TODO Handle this better?
self._parent = parent
self._account = self._parent._account
self.author = msgData[MI_AUTHORFIRSTNAME]
self.id = msgData[MI_MSGID]
self.number = msgData[MI_NUM]
self.subject = msgData[MI_SUBJECT]
self.to = msgData[MI_TO]
self.cc = msgData[MI_CC]
self.bcc = msgData[MI_BCC]
self.sender = msgData[MI_AUTHOREMAIL]
self.attachments = [GmailAttachment(self, attachmentInfo)
for attachmentInfo in msgData[MI_ATTACHINFO]]
# TODO: Populate additional fields & cache...(?)
# TODO: Handle body differently if it's from a draft?
self.isDraft = isDraft
self._source = None
def _getSource(self):
if not self._source:
# TODO: Do this more nicely...?
# TODO: Strip initial white space & fix up last line ending
# to make it legal as per RFC?
self._source = self._account.getRawMessage(self.id)
return self._source
source = property(_getSource, doc = "")
class GmailAttachment:
def __init__(self, parent, attachmentInfo):
# TODO Handle this better?
self._parent = parent
self._account = self._parent._account
self.id = attachmentInfo[A_ID]
self.filename = attachmentInfo[A_FILENAME]
self.mimetype = attachmentInfo[A_MIMETYPE]
self.filesize = attachmentInfo[A_FILESIZE]
self._content = None
def _getContent(self):
if not self._content:
# TODO: Do this more nicely...?
self._content = self._account._retrievePage(
_buildURL(view=U_ATTACHMENT_VIEW, disp="attd",
attid=self.id, th=self._parent._parent.id))
return self._content
content = property(_getContent, doc = "")
def _getFullId(self):
"""Returns the "full path"/"full id" of the attachment. (Used
to refer to the file when forwarding.)
The id is of the form: "<thread_id>_<msg_id>_<attachment_id>"
"""
return "%s_%s_%s" % (self._parent._parent.id,
self._parent.id,
self.id)
_fullId = property(_getFullId, doc = "")
class GmailComposedMessage:
def __init__(self, to, subject, body, cc = None, bcc = None,
filenames = None, files = None):
"""`filenames` - list of the file paths of the files to attach.
`files` - list of objects implementing sub-set of
`email.Message.Message` interface (`get_filename`,
`get_content_type`, `get_payload`). This is to
allow use of payloads from Message instances.
TODO: Change this to be simpler class we define ourselves?
"""
self.to = to
self.subject = subject
self.body = body
self.cc = cc
self.bcc = bcc
self.filenames = filenames
self.files = files
if __name__ == "__main__":
import sys
from getpass import getpass
try:
name = sys.argv[1]
except IndexError:
name = raw_input("Gmail account name: ")
pw = getpass("Password: ")
domain = raw_input("Domain? [leave blank for Gmail]: ")
ga = GmailAccount(name, pw, domain=domain)
print "\nPlease wait, logging in..."
try:
ga.login()
except GmailLoginFailure,e:
print "\nLogin failed. (%s)" % e.message
else:
print "Login successful.\n"
# TODO: Use properties instead?
quotaInfo = ga.getQuotaInfo()
quotaMbUsed = quotaInfo[QU_SPACEUSED]
quotaMbTotal = quotaInfo[QU_QUOTA]
quotaPercent = quotaInfo[QU_PERCENT]
print "%s of %s used. (%s)\n" % (quotaMbUsed, quotaMbTotal, quotaPercent)
searches = STANDARD_FOLDERS + ga.getLabelNames()
name = None
while 1:
try:
print "Select folder or label to list: (Ctrl-C to exit)"
for optionId, optionName in enumerate(searches):
print " %d. %s" % (optionId, optionName)
while not name:
try:
name = searches[int(raw_input("Choice: "))]
except ValueError,info:
print info
name = None
if name in STANDARD_FOLDERS:
result = ga.getMessagesByFolder(name, True)
else:
result = ga.getMessagesByLabel(name, True)
if not len(result):
print "No threads found in `%s`." % name
break
name = None
tot = len(result)
i = 0
for thread in result:
print "%s messages in thread" % len(thread)
print thread.id, len(thread), thread.subject
for msg in thread:
print "\n ", msg.id, msg.number, msg.author,msg.subject
# Just as an example of other usefull things
#print " ", msg.cc, msg.bcc,msg.sender
i += 1
print
print "number of threads:",tot
print "number of messages:",i
except KeyboardInterrupt:
break
print "\n\nDone."
Last edited by Reasons (2008-03-20 01:18:27)
Thought it might help to give lines 369 and following of libgmail so it's easier to read:
def _retrievePage(self, urlOrRequest):
if self.opener is None:
raise RuntimeError("Cannot find urlopener")
if not isinstance(urlOrRequest, urllib2.Request):
req = urllib2.Request(urlOrRequest)
else:
req = urlOrRequest
self._cookieJar.setCookies(req)
req.add_header('User-Agent',
'Mozilla/5.0 (Compatible; libgmail-python)')
try:
resp = self.opener.open(req)
except urllib2.HTTPError,info:
print info
return None
pageData = resp.read()
# Extract cookies here
self._cookieJar.extractCookies(resp.headers)
# TODO: Enable logging of page data for debugging purposes?
return pageData
def _parsePage(self, urlOrRequest):
"""Retrieve & then parse the requested page content."""
items = _parsePage(self._retrievePage(urlOrRequest))
# Automatically cache some things like quota usage.
# TODO: Cache more?
# TODO: Expire cached values?
# TODO: Do this better.
try:
self._cachedQuotaInfo = items[D_QUOTA]
except KeyError:
pass
#pprint.pprint(items)
try:
self._cachedLabelNames = [category[CT_NAME] for category in items[D_CATEGORIES][0]]
except KeyError:
pass
return items
def _parseSearchResult(self, searchType, start = 0, **kwargs):
params = {U_SEARCH: searchType,
U_START: start,
U_VIEW: U_THREADLIST_VIEW}
params.update(kwargs)
return self._parsePage(_buildURL(**params))
-
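Stripped of the class plumbing, the `search` method above is just a case-insensitive regex scan over (name, email) pairs. A minimal standalone sketch of the same idea (the contact data here is made up for illustration):

```python
import re

def search_contacts(contacts, term):
    """Return the (name, email) pairs whose name or email matches term."""
    pattern = re.compile(term, re.IGNORECASE)
    return [c for c in contacts
            if pattern.search(c[0]) or pattern.search(c[1])]

# Hypothetical sample data, not from a real address book:
book = [("Brian Ray", "brianray34@example.com"),
        ("Jane Doe", "jane@example.org")]
print(search_contacts(book, "brian"))
```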
Teaching Myself Python. How Can I Stop The Program From Closing :)
I wrote a program today that will log into a router here at work and check if the line protocols are up and if I can see their MAC address. If I run it from the IDLE program it works fine. When I run it as just crosnet.py it runs fine but closes before I can see the output. I imagine there is some simple line that will make it run and just sit? Or even better, loop back to the beginning and wait for me to give it a new number to then telnet with.
#import getpass
import sys
import telnetlib
HOST = "111.111.111.111"
password = "password"
vpc = raw_input("Enter The VPC Number: ")
#password = getpass.getpass()
tn = telnetlib.Telnet(HOST)
#tn.read_until("login: ")
#tn.write(user + "\n")
#if password:
tn.read_until("Password: ")
tn.write(password + "\n")
#tn.write("sh int atm6/0.vpc\n")
tn.write("sh int atm6/0." + str(vpc) + "\n")
tn.write("sh arp | in " + str(vpc) + "\n")
#tn.write("exit\n")
print tn.read_all()
Sorry the code is bad; I don't know Python and was shown an outline of how to telnet and adjusted it to my needs.
while True:
and just indent everything else under it
Ends when you press ^C.
if you wish to have a way to break out of the loop, say, when you press enter at the VPC number without entering anything, add the following after your vpc input:
if not vpc:
break
which will break out of the while loop and end your program.
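Putting that suggestion together, the prompt/telnet sequence can sit inside a loop that exits on an empty answer. A minimal Python 2/3-compatible sketch; the `ask` parameter stands in for `raw_input` so the loop can also be driven programmatically, and the telnet work itself is left as a placeholder:

```python
try:
    raw_input            # Python 2 provides raw_input
except NameError:
    raw_input = input    # Python 3: input() is the equivalent

def prompt_loop(ask=raw_input):
    """Ask for VPC numbers until the user answers with an empty line;
    returns the list of answers that were processed."""
    handled = []
    while True:
        vpc = ask("Enter The VPC Number: ")
        if not vpc:          # plain enter -> leave the loop
            break
        handled.append(vpc)  # the telnet session would go here
    return handled
```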
Last edited by buttons (2007-12-14 22:03:06) -
[Solved] Help needed to interpret errors (python)
Have the following code (Haven't written it my self and don't know much about how it does what it's supposed to do.):
#!/usr/bin/python
import sys
import os
import re
import logging
import json
if sys.version_info < (3, 0):
from urllib2 import Request
from urllib2 import urlopen
from urlparse import urlparse
else:
from urllib.request import Request
from urllib.request import urlopen
from urllib.parse import urlparse
raw_input = input
useragent = 'Mozilla/5.0'
headers = {'User-Agent': useragent}
intro = """
Usage: drdown.py url
This script finds the stream URL from a dr.dk page so you can
download the tv program.
"""
def fetch(url):
"""Download body from url"""
req = Request(url, headers=headers)
response = urlopen(req)
body = response.read()
response.close()
# convert to string - it is easier to convert here
if isinstance(body, bytes):
body = body.decode('utf8')
return body
class StreamExtractor:
def __init__(self, url):
self.url = url.lower()
self.urlp = urlparse(self.url)
def get_stream_data(self):
"""supply a URL to a video page on dr.dk and get back a streaming
url"""
if self.urlp.netloc not in ('dr.dk', 'www.dr.dk'):
raise Exception("Must be an URL from dr.dk")
self.html = fetch(self.url)
logging.info("Player fetched: " + self.url)
# Standalone show?
if self.urlp.path.startswith('/tv/se/'):
return self.get_stream_data_from_standalone()
# Bonanza?
elif self.urlp.path.startswith('/bonanza/'):
return self.get_stream_data_from_bonanza()
# Live tv?
elif self.urlp.path.startswith('/tv/live'):
return self.get_stream_data_from_live()
else:
return self.get_stream_data_from_series()
def get_stream_data_from_rurl(self, rurl):
"""Helper method to parse resource JSON document"""
body = fetch(rurl)
resource_data = json.loads(body)
qualities = resource_data.get('links')
# sort by quality
qualities = sorted(qualities, key=lambda x: x['bitrateKbps'],
reverse=True)
stream_data = qualities[0]
stream_url = stream_data.get('uri')
logging.info("Stream data fetched: " + stream_url)
playpath, filename = self.get_metadata_from_stream_url(stream_url)
stream_data = {'stream_url': stream_url,
'playpath': playpath,
'filename': filename,
'is_live': False}
return stream_data
def get_metadata_from_stream_url(self, stream_url):
"""Helper method to extacts playpath and filename suggestion from a
rtmp url"""
parsed = urlparse(stream_url)
playpath_s = parsed.path.split('/')[2:]
playpath = '/'.join(playpath_s)
# rerun to split the parameter
path = urlparse(parsed.path).path
filename = path.split('/')[-1]
return playpath, filename
def get_stream_data_from_standalone(self):
"""Extracts stream data from a normal single program page.
The data is hidden in a resource URL that we need to download
and parse."""
mu_regex = re.compile('resource: "([^"]+)"')
m = mu_regex.search(self.html)
if m and m.groups():
resource_meta_url = m.groups()[0]
return self.get_stream_data_from_rurl(resource_meta_url)
def get_stream_data_from_bonanza(self):
"""Finds stream URL from bonanza section. Just pick up the first RTMP
url."""
stream_regex = re.compile('rtmp://.*?\.mp4')
m = stream_regex.search(self.html)
if m and m.group():
stream_url = m.group()
else:
raise Exception("Could not find Bonanza stream URL")
playpath, filename = self.get_metadata_from_stream_url(stream_url)
stream_data = {'stream_url': stream_url,
'playpath': playpath,
'filename': filename,
'is_live': False}
return stream_data
def get_stream_data_from_live(self):
stream_url = 'rtmp://livetv.gss.dr.dk/live'
quality = '3'
playpaths = {'dr1': 'livedr01astream',
'dr2': 'livedr02astream',
'dr-ramasjang': 'livedr05astream',
'dr-k': 'livedr04astream',
'dr-update-2': 'livedr03astream',
'dr3': 'livedr06astream'}
urlend = self.urlp.path.split('/')[-1]
playpath = playpaths.get(urlend)
filename = 'live.mp4'
if playpath:
playpath += quality
filename = urlend + '.mp4'
stream_data = {'stream_url': stream_url,
'playpath': playpath,
'filename': filename,
'is_live': True}
return stream_data
def get_stream_data_from_series(self):
"""dr.dk has a special player for multi episode series. This is the
fall back parser, as there seems to be no pattern in the URL."""
slug_regex = re.compile('seriesSlug=([^"]+)"')
m = slug_regex.search(self.html)
if m and m.groups():
slug_id = m.groups()[0]
else:
raise Exception("Could not find program slug")
logging.info("found slug: " + slug_id)
program_meta_url = 'http://www.dr.dk/nu/api/programseries/%s/videos'\
% slug_id
body = fetch(program_meta_url)
program_data = json.loads(body)
if not program_data:
raise Exception("Could not find data about the program series")
fragment = self.urlp.fragment
if fragment.startswith('/'):
fragment = fragment[1:]
fragment = fragment.split('/')
video_id = fragment[0]
logging.info("Video ID: " + video_id)
video_data = None
if video_id:
for item in program_data:
if item['id'] == int(video_id):
video_data = item
if not video_data:
video_data = program_data[0]
resource_meta_url = video_data.get('videoResourceUrl')
return self.get_stream_data_from_rurl(resource_meta_url)
def generate_cmd(self):
"""Build command line to download stream with the rtmpdump tool"""
sdata = self.get_stream_data()
if not sdata:
return "Not found"
filename = sdata['filename']
custom_filename = raw_input("Type another filename or press <enter> to keep default [%s]: " % filename)
if custom_filename:
filename = custom_filename
cmd_live = 'rtmpdump --live --rtmp="%s" --playpath="%s" -o %s'
cmd_rec = 'rtmpdump -e --rtmp="%s" --playpath="%s" -o %s'
if sdata['is_live'] is True:
cmd = cmd_live % (sdata['stream_url'], sdata['playpath'], filename)
else:
cmd = cmd_rec % (sdata['stream_url'], sdata['playpath'], filename)
return cmd
def main():
if len(sys.argv) > 1:
url = sys.argv[1]
extractor = StreamExtractor(url)
cmd = extractor.generate_cmd()
os.system(cmd)
else:
print(intro)
if __name__ == "__main__":
main()
When I run the script with an appropriate URL as parameter, I get this:
Traceback (most recent call last):
File "./drdown.py", line 243, in <module>
main()
File "./drdown.py", line 235, in main
cmd = extractor.generate_cmd()
File "./drdown.py", line 211, in generate_cmd
sdata = self.get_stream_data()
File "./drdown.py", line 65, in get_stream_data
return self.get_stream_data_from_standalone()
File "./drdown.py", line 123, in get_stream_data_from_standalone
return self.get_stream_data_from_rurl(resource_meta_url)
File "./drdown.py", line 87, in get_stream_data_from_rurl
reverse=True)
TypeError: 'NoneType' object is not iterable
I should note that I didn't expect it to work out of the box, but so far I've been unable even to figure out what the problem is, let alone solve it.
I've tried my best to go through the code, looking for typos and such, but I haven't had any luck and much of what happens I don't really understand.
So please, if you can, I'd very much appreciate a little help.
Best regards.
NB:
Some might conclude that this script does something illegal. In actuality that is not so; I just want to make that clear. Also, testing from outside Denmark will probably not be possible.
Last edited by zacariaz (2013-11-08 18:00:33)
Trilby wrote:
zacariaz wrote: Have the following code (Haven't written it myself and don't know much about how it does what it's supposed to do.)
You know a lot more than we do: like where you then found this code, and what it is supposed to do.
My only first thought without looking through random code that allegedly serves some unknown purpose and comes from some unknown source is that the shebang is ambiguous: it may be a python 2 script while /usr/bin/python is now python 3.
I've pretty much concluded that python 3.3 is the right choice, so that should not be a problem. As for what it does, it uses "rtmpdump" to record or "rip" video from a Danish site, "dr.dk", which is a public service television network.
I have it from reliable sources that this actually works, or at least did at some point, on Windows machines, but of course I'm on Linux, and that complicates matters.
All in all, as I understand it, what the script really does is retrieve the relevant rtmp link. The rest is up to "rtmpdump", which I should be able to manage.
In any case, I'm not really that concerned with learning what the script does at this point, but rather what the error means.
The only actual error description I see is "TypeError: 'NoneType' object is not iterable" and I haven't a clue what that implies. I've read through the script, but haven't been able to find any obvious flaws.
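Judging from the traceback, the error comes from get_stream_data_from_rurl: `resource_data.get('links')` returns None when the JSON document has no 'links' key, and `sorted(None, ...)` then raises exactly this TypeError. A minimal reproduction and guard; the JSON shape here is assumed from the script above, not verified against dr.dk:

```python
import json

def pick_best_quality(body):
    qualities = json.loads(body).get('links')
    if qualities is None:   # missing key: fail with a clear message
        raise ValueError("no 'links' key in resource document")
    return sorted(qualities, key=lambda x: x['bitrateKbps'],
                  reverse=True)[0]

# sorted(None, ...) is what raises "'NoneType' object is not iterable":
try:
    sorted(None, reverse=True)
except TypeError as exc:
    print(exc)
```

So the real question is why the resource document fetched from the page has no 'links' entry, e.g. because dr.dk changed its page format since the script was written.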
Quick question that I just can't figure out. Say I'm writing a for statement to decode mp3's in python basically the code will look something like this:
for MP3 in glob.glob('*.mp3'):
... os.system('lame --decode ' + MP3 + '.mp3')
But I keep getting this error message:
sh: -c: line 0: unexpected EOF while looking for matching `''
sh: -c: line 1: syntax error: unexpected end of file
As far as I can tell, the problem lies with the MP3 variable, but I can't figure out how to fix it. Any help would be appreciated. Thanks!
Well, I got a completed draft of the script; here it is if anyone wants to take a look for any reason:
#!/usr/bin/env python
import os
import glob
breakvble='+'
print 'Hey Brad, welcome to CDBURNERIN! What do you want to do?'
print breakvble*30
print "1: Turn OGG's into a CD."
print "2: Turn MP3's into a CD."
print "3: Turn Both into a CD."
print '4: Nothing! Exit the program.'
print breakvble*30
x = raw_input('Please choose a number[1-4]:')
#if '1' not x
# x = raw_input('Please choose a number[1-4]:')
if x == '1':
os.system('oggdec *.ogg')
elif x == '2':
for MP3 in glob.glob('*.mp3'):
print 'name is: "%s"' % (MP3)
os.system('lame --decode "%s"' % (MP3))
elif x == '3':
os.system('oggdec *.ogg')
for MP3 in glob.glob('*.mp3'):
print 'name is: "%s"' % (MP3)
os.system('lame --decode "%s"' % (MP3))
elif x == '4':
exit
if glob.glob('*.wav'):
os.system('normalize *.wav')
if glob.glob('*.wav'):
os.system('cdrecord -dao dev=ATAPI:0,0,0 gracetime=2 driveropts=burnfree -pad -audio -v -eject *.wav')
if glob.glob('*.wav'):
print 'Would you like to delete all files now?'
a = raw_input('Please choose Y/N:')
if a == 'y':
os.system('rm *.wav *.mp3 *.ogg')
else:
exit
-
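The original quoting error ("unexpected EOF while looking for matching `'`") happens whenever a filename contains a quote character, because os.system hands the whole command string to the shell for re-parsing. Passing the arguments as a list to subprocess avoids shell parsing entirely; a minimal sketch:

```python
def decode_cmd(mp3_path):
    # Each element is passed to lame verbatim; quotes, spaces and
    # apostrophes in the filename never reach a shell.
    return ['lame', '--decode', mp3_path]

# subprocess.call(decode_cmd(path)) would run it without any quoting:
print(decode_cmd("Don't Stop.mp3"))
```

`subprocess.call(decode_cmd(path))` then runs lame directly, so the `'name is: "%s"'` quoting workaround in the draft above becomes unnecessary.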
Python script for Fabric Path Vlan audit?
Has anyone written any kind of scripts/automation to audit a NXOS FP domain and ensure CU vlans are properly configured?
Have you tried looking on GitHub? You might find a Python script that you can re-use.
-
[Solved] Write a --help for Python program.
Hello.
I write programs in Python and I don't know one thing.
How to write --help command for python program (like calc.py --help).
And after this command you will see help.
Anyone know how to do something like that?
Thanks
Last edited by SpeedVin (2009-09-26 16:53:14)
genisis300 wrote:
You need to use a module called optparse.
http://www.alexonlinux.com/pythons-optp … man-beings
If you still need help let me know and I'll post an example.
Regards
Matthew
Thanks, that's awesome!
But I added some arguments and when I try to run program I got:
File "calculator.py", line 7
(opts, args) = parser.parse_args()
^
SyntaxError: invalid syntax
Here is my code:
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 import optparse
4 parser = optparse.OptionParser()
5 parser.add_option('-h', '--help', help='Show this help message'
6
7 (opts, args) = parser.parse_args()
8 #
9 #def help():
10 # print "+ = plus"
11 # print "- = minus"
12 # print "* = multiplication"
13 # print "/ = division"
14 # print "// = floor division"
15 # End of function.
16 print "If you need help just write --help or -h."
17 number1 = int(raw_input('Write a number: '))
18 number2 = int(raw_input('Write a number nr.2: '))
19 znak = raw_input('What to do?')
20 if znak == '+':
21 print number1 + number2
22 elif znak == '-':
23 print number1 - number2
24 elif znak == '*':
25 print number1 * number2
26 elif znak == '/':
27 print number1 / number2
28 elif znak == '**':
29 print number1 ** number2
30 elif znak == '//':
31 print number1 // number2
32 else:
33 print "I don't know what to do with " + znak
34 #if raw_input() == help:
35 # help()
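For reference, two separate problems are visible in the listing above: line 5 is missing its closing parenthesis, which is why the SyntaxError is reported on line 7 (the next statement), and optparse installs -h/--help automatically, so declaring your own '-h' option raises an OptionConflictError even once the parenthesis is fixed. A minimal working sketch (the -v option is hypothetical, just to show add_option):

```python
import optparse

parser = optparse.OptionParser()   # -h/--help is installed automatically
parser.add_option('-v', '--verbose', action='store_true',
                  help='hypothetical extra option, for illustration only')
(opts, args) = parser.parse_args([])   # pass a list so sys.argv is untouched
print(parser.has_option('--help'))
```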
Last edited by SpeedVin (2009-09-26 09:40:26) -
Unable to capture stdout, stderr from python.
I am trying to read the stdout and stderr of a python program.
When I invoke anything else apart from python my class works fine.
The python program takes about 1 sec to start. It then prints one line of text, then it prints a prompt and waits for input from the user.
I tried with another program that behaves the same way, and there I got the first line but not the prompt; that one is not a Python program but a native C program.
I have tried using BufferedReader but without success.
My class looks like this.
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.OutputStream;
public class Executer extends Thread {
//read standard out output of the external executable
//in a separate thread to avoid I/O deadlocks
final int BUFFER = 2048;
Process p;
BufferedInputStream in;
OutputStream out;
StringBuffer strbuf;
String[] cmdarr = null;
public Executer(String path) {
this.cmdarr = new String[] { path };
public Executer(String[] path) {
this.cmdarr = path;
public void run() {
try {
System.out.println("doing call");
p = Runtime.getRuntime().exec(cmdarr);
in = new BufferedInputStream(p.getInputStream());
strbuf = new StringBuffer();
out = p.getOutputStream();
int i;
byte[] buf = new byte[BUFFER];
String str;
while ((i = in.read(buf, 0, BUFFER)) != -1)
System.out.write(buf, 0, i);
} catch (IOException e) {
e.printStackTrace();
public int killProcess() throws InterruptedException {
p.destroy();
try {
in.close();
System.out.println("Closed");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
return p.waitFor();
}
Any ideas are appreciated.
When Runtime Exec Won't - a good article on running processes from a Java program...
Hope that helps;
lutha -
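One likely cause when the child is a Python script: Python block-buffers stdout when it is not attached to a terminal, so a short prompt printed without a newline never reaches the parent's reader while the child is waiting for input. Running the child with `python -u`, or flushing explicitly after each prompt, avoids this. A small demonstration driving a child process from Python; the principle is the same when the parent is a Java program:

```python
import subprocess
import sys

# The child writes a prompt with no trailing newline but flushes
# explicitly; without the flush, a parent reading while the child
# is still alive would see nothing until the buffer fills or the
# child exits.
child = 'import sys; sys.stdout.write("prompt> "); sys.stdout.flush()'
out = subprocess.check_output([sys.executable, '-c', child])
print(out)
```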
Hi,
I'm new to both mac and LabView, so it may well be something obvious I'm missing.
I'm trying to use System Exec (http://zone.ni.com/reference/en-XX/help/371361L-01/glang/system_exec/ with LabVIEW 2014 on Mac OS X Mavericks) to call a python script to do some image processing after taking some data and saving images via LabView.
My problem is that I cannot figure out how to point it to the correct version/install of python to use, as it seems to use the pre-installed 2.7.5 apple version no matter whether that's set to be my default or not (and hence, it can't see my SciPy, PIL etc. packages that come with my Anaconda install).
It doesn't seem to matter whether I have certain packages installed for 2.7 accessible via the terminal; the LabVIEW System Exec command line can't seem to find them either way and throws up an error if a script is called that imports them.
I've tried changing the working directory to the relevant anaconda/bin folder, I've also tried
cd /Applications/anaconda/bin; python <file.py>
as the CL input to system exec, but it's not wanting to use anything other than the apple install. I've tried Anaconda installs with both python 2.7 and 3.4, so it doesn't seem to be down to the 3.* compatibility issues alone.
Printing the python version to a string indicator box in LV via python script gives the following:
2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
Whereas via the terminal, the default version is the anaconda install I'd like to use, currently
Python 3.4.1 |Anaconda 2.1.0 (x86_64)| (default, Sep 10 2014, 17:24:09)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
I can do which python3.4 on the sys exec CL input to get the correct /anaconda/bin location, but if I try python3.4 <file.py>, I just get a Seg fault.
I've found examples for System Exec use and instructions for calling python scripts with specific versions, but neither works with Mac (frustratingly).
I've also tried editing the Path Environment Variable, and tried getting rid of the python 2.7 install entirely, but none of these work either (the latter just causing errors all over the shop).
So, in summary
I'd just like to know how to specify the python version LabView uses and to have use of the corresponding python libraries.
Either python version (2 or 3) will do, I'd just like to be able to use my libraries which came with Anaconda which I'll be writing my scripts with (which I installed as the correct packages are only compatible with a specific python version on Mac, which is not the specific version that Xcode was compatible with, argh).
All packages work normally when using the Anaconda version with the terminal or with Spyder.
Is there a function I can use other than System Exec? What am I missing?
Please help.
janedoe777 wrote:
Accidental. Also, that's not terribly helpful.
What exactly does that mean?
I'm not debating whether it was accidental or intentional. It certainly isn't helpful to post the same message twice.
It is helpful to point out this message is a duplicate so people don't waste their time answering this one if you are already being helped in the other one. -
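Back on the original System Exec question: a quick way to see exactly which interpreter is being launched is to have the called script print sys.executable; calling the interpreter by absolute path (e.g. /Applications/anaconda/bin/python, per the install described above) then removes any dependence on PATH or shell configuration:

```python
import sys

# Run this from System Exec: it reports which Python actually ran
# and its version, independent of what the terminal's PATH says.
print(sys.executable)
print(sys.version)
```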
Hello,
I am attempting to use python (with ctypes) to control my USB-6009. The goal is to write out an analog voltage and then read the voltage back through an analog input, with the ability to change the number of points averaged and the number of times this is repeated (sweeps), averaging the analog input array. The issue is that as we increase the number of times the voltage ramp is repeated (sweeps), we get a timeout error (nidaq call failed with error -200284: 'Some or all of the samples requested have not yet been acquired'). This has us confused because the sweeps are in a larger loop, and the DAQ function should be the same whether there are 10 sweeps (works) or 1000 sweeps (crashes). Any insight would be greatly appreciated. I have included the code below for reference.
import ctypes
import numpy
from time import *
from operator import add
nidaq = ctypes.windll.nicaiu # load the DLL
# Setup some typedefs and constants
# to correspond with values in
# C:\Program Files\National Instruments\NI-DAQ\DAQmx ANSI C Dev\include\NIDAQmx.h
# Scan Settings
aoDevice = "Dev2/ao0"
aiDevice = "Dev2/ai0"
NumAvgPts = 10
NumSweeps = 50
NumSpecPts = 100
filename = '12Feb15_CO2 Test_12.txt'
Readrate = 40000.0
Samplerate = 1000
StartVolt = 0.01
FinalVolt = 1.01
voltInc = (FinalVolt - StartVolt)/NumSpecPts
# the typedefs
int32 = ctypes.c_long
uInt32 = ctypes.c_ulong
uInt64 = ctypes.c_ulonglong
float64 = ctypes.c_double
TaskHandle = uInt32
# the constants
DAQmx_Val_Cfg_Default = int32(-1)
DAQmx_Val_Volts = 10348
DAQmx_Val_Rising = 10280
DAQmx_Val_FiniteSamps = 10178
DAQmx_Val_GroupByChannel = 0
def CHK_ao( err ):
    """a simple error checking routine"""
    if err < 0:
        buf_size = 100
        buf = ctypes.create_string_buffer('\000' * buf_size)
        nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
        raise RuntimeError('nidaq call failed with error %d: %s'%(err,repr(buf.value)))
    if err > 0:
        buf_size = 100
        buf = ctypes.create_string_buffer('\000' * buf_size)
        nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
        raise RuntimeError('nidaq generated warning %d: %s'%(err,repr(buf.value)))
def CHK_ai(err):
    """a simple error checking routine"""
    if err < 0:
        buf_size = NumAvgPts*10
        buf = ctypes.create_string_buffer('\000' * buf_size)
        nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
        raise RuntimeError('nidaq call failed with error %d: %s'%(err,repr(buf.value)))
def Analog_Output():
    taskHandle = TaskHandle(0)
    (nidaq.DAQmxCreateTask("",ctypes.byref(taskHandle)))
    (nidaq.DAQmxCreateAOVoltageChan(taskHandle,
                                    aoDevice,
                                    "",
                                    float64(0),
                                    float64(5),
                                    DAQmx_Val_Volts,
                                    None))
    (nidaq.DAQmxCfgSampClkTiming(taskHandle,"",float64(Samplerate),
                                 DAQmx_Val_Rising,DAQmx_Val_FiniteSamps,
                                 uInt64(NumAvgPts))); # means we could turn this into continuously ramping and reading
    (nidaq.DAQmxStartTask(taskHandle))
    (nidaq.DAQmxWriteAnalogScalarF64(taskHandle, True, float64(10.0), float64(CurrentVolt), None))
    nidaq.DAQmxStopTask( taskHandle )
    nidaq.DAQmxClearTask( taskHandle )
def Analog_Input():
    global average
    # initialize variables
    taskHandle = TaskHandle(0)
    data = numpy.zeros((NumAvgPts,),dtype=numpy.float64)
    # now, on with the program
    CHK_ai(nidaq.DAQmxCreateTask("",ctypes.byref(taskHandle)))
    CHK_ai(nidaq.DAQmxCreateAIVoltageChan(taskHandle,aiDevice,"",
                                          DAQmx_Val_Cfg_Default,
                                          float64(-10.0),float64(10.0),
                                          DAQmx_Val_Volts,None))
    CHK_ai(nidaq.DAQmxCfgSampClkTiming(taskHandle,"",float64(Readrate),
                                       DAQmx_Val_Rising,DAQmx_Val_FiniteSamps,
                                       uInt64(NumAvgPts)));
    CHK_ai(nidaq.DAQmxStartTask(taskHandle))
    read = int32()
    CHK_ai(nidaq.DAQmxReadAnalogF64(taskHandle,NumAvgPts,float64(10.0),
                                    DAQmx_Val_GroupByChannel,data.ctypes.data,
                                    NumAvgPts,ctypes.byref(read),None))
    #print "Acquired %d points"%(read.value)
    if taskHandle.value != 0:
        nidaq.DAQmxStopTask(taskHandle)
        nidaq.DAQmxClearTask(taskHandle)
    average = sum(data)/data.size
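Since Analog_Output and Analog_Input above create and clear both DAQmx tasks on every pass, one hedged guess at the -200284 failure (not confirmed in this thread) is that thousands of create/clear cycles over USB eventually delay the read past its 10-second timeout. A common restructuring is to create each task once and only start/read/stop it inside the sweep loop. The sketch below shows only the control flow, using a stand-in object since the Windows-only nicaiu DLL (and hardware) is not available here:

```python
class FakeDaq:
    """Stand-in for the nicaiu DLL so the control flow can be run anywhere."""
    def create_task(self):    return object()
    def start(self, task):    pass
    def read(self, task, n):  return [0.0] * n
    def stop(self, task):     pass
    def clear(self, task):    pass

def run_sweeps(daq, num_sweeps, num_avg_pts):
    # Create the task once, outside the sweep loop...
    task = daq.create_task()
    averages = []
    try:
        for _ in range(num_sweeps):
            # ...and only start/read/stop it per sweep.
            daq.start(task)
            data = daq.read(task, num_avg_pts)
            daq.stop(task)
            averages.append(sum(data) / len(data))
    finally:
        daq.clear(task)  # tear the task down exactly once
    return averages

result = run_sweeps(FakeDaq(), 1000, 10)  # 1000 sweeps without re-creating tasks
print(len(result))
```

With the real driver, the create/configure calls from Analog_Input would replace `create_task`, and only `DAQmxStartTask`/`DAQmxReadAnalogF64`/`DAQmxStopTask` would sit inside the loop.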
Thanks,
Erin
Solved! Go to Solution.

This forum is for LabVIEW questions.
-
USB-6009, mac OS 10.6.8 and python
Hello!
I am using USB-6009 under mac OS 10.6.8 and python.
I am trying to run the following command:
analogOutputTask = nidaqmx.AnalogOutputTask(name='MyAOTask')
I get the following message
File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/nidaqmx/libnidaqmx.py", line 180, in CALL
AttributeError: 'NoneType' object has no attribute 'DAQmxCreateTask'
Any help would be very much appreciated,
Thanks
j.

Hello Jamyamdy,
Honestly, I am not too sure about this, but were you able to acquire data before or is this your first trial?
From the USB-6009 product page, it says the following:
For Mac OS X and Linux users, download the NI-DAQmx Base driver software and program the USB-6009 with NI LabVIEW or C.
http://sine.ni.com/nips/cds/view/p/lang/en/nid/201987
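For what it's worth (an inference from the traceback, not something confirmed in the thread): the `'NoneType' object has no attribute 'DAQmxCreateTask'` error usually means the nidaqmx wrapper failed to load the driver's shared library, leaving its internal ctypes handle as None, so installing NI-DAQmx Base first is the likely prerequisite. A quick stdlib check, where the library name "nidaqmxbase" is a guess to adjust for your wrapper:

```python
import ctypes.util

# If no driver library is found, the wrapper's ctypes handle stays None
# and every DAQmx* attribute lookup fails exactly as in the traceback.
path = ctypes.util.find_library("nidaqmxbase")  # hypothetical library name
print("NI-DAQmx Base library:", path if path else "not found -- install the driver first")
```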
Regards, -
Hi All,
I have updated my mediacenter. Now tv_grab_nl_py does not work anymore:
[cedric@tv ~]$ tv_grab_nl_py --output ~/listings.xml --fast
File "/usr/bin/tv_grab_nl_py", line 341
print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
^
SyntaxError: invalid syntax
[cedric@tv ~]$
the version of python on the mediacenter (running arch linux):
[cedric@tv ~]$ python
Python 3.1.2 (r312:79147, Oct 4 2010, 12:35:40)
[GCC 4.5.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
I have copied the file to my laptop, there it looks like it's working:
./tv_grab_nl_py --output ~/listings.xml --fast
Config file /home/cedric/.xmltv/tv_grab_nl_py.conf not found.
Re-run me with the --configure flag.
cedric@laptop:~$
the version of python on my laptop (running arch linux):
cedric@laptop:~$ python
Python 2.6.5 (r265:79063, Apr 1 2010, 05:22:20)
[GCC 4.4.3 20100316 (prerelease)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
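The two version printouts point at the root cause: Arch's `/usr/bin/python` on the mediacenter is Python 3, and tv_grab_nl_py is written for Python 2, whose print statement (`print 'text'`) is a SyntaxError under Python 3. A small sketch that demonstrates the incompatibility on either interpreter:

```python
import sys

# The statement form of print used throughout tv_grab_nl_py:
SNIPPET = "print 'tv_grab_nl_py'"

try:
    compile(SNIPPET, "<tv_grab_nl_py>", "exec")
    accepted = True
except SyntaxError:
    accepted = False

# Python 2 compiles it; Python 3 raises SyntaxError at exactly the kind
# of line the traceback above points to.
print("Python %d accepts the print statement: %s" % (sys.version_info[0], accepted))
```

Running the script with an explicit Python 2 interpreter (e.g. `python2`, if installed) avoids the error without modifying the script.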
the script I'm trying to run:
[cedric@tv ~]$ cat tv_grab_nl_py
#!/usr/bin/env python
# $LastChangedDate: 2009-11-14 10:06:41 +0100 (Sat, 14 Nov 2009) $
# $Rev: 104 $
# $Author: pauldebruin $
"""
SYNOPSIS

tv_grab_nl_py is a python script that trawls tvgids.nl for TV
programming information and outputs it in XMLTV-formatted output (see
http://membled.com/work/apps/xmltv). Users of MythTV
(http://www.mythtv.org) will appreciate the output generated by this
grabber, because it fills the category fields, i.e. colors in the EPG,
and has logos for most channels automagically available. Check the
website below for screenshots. The newest version of this script can be
found here:

http://code.google.com/p/tvgrabnlpy/

USAGE

Check the web site above and/or run script with --help and start from there

HISTORY

tv_grab_nl_py used to be called tv_grab_nl_pdb, first released on
2003/07/09. The name change was necessary because more and more people
are actively contributing to this script and I always disliked using my
initials (I was just too lazy to change it). At the same time I switched
from using CVS to SVN and as a result the version numbering scheme has
changed. The latest official release of tv_grab_nl_pdb is 0.48. The
first official release of tv_grab_nl_py is 6.

QUESTIONS

Questions (and patches) are welcome at: paul at pwdebruin dot net.

IMPORTANT NOTES

If you were using tv_grab_nl from the XMLTV bundle then enable the
compat flag or use the --compat command-line option. Otherwise, the
xmltvid's are wrong and you will not see any new data in MythTV.

CONTRIBUTORS

Main author: Paul de Bruin (paul at pwdebruin dot net)

Michel van der Laan made available his extensive collection of
high-quality logos that is used by this script.

Michael Heus has taken the effort to further enhance this script so that
it now also includes:
- Credit info: directors, actors, presenters and writers
- removal of programs that are actually just groupings/broadcasters
  (e.g. "KETNET", "Wild Friday", "Z@pp")
- Star-rating for programs tipped by tvgids.nl
- Black&White, Stereo and URL info
- Better detection of Movies
- and much, much more...

Several other people have provided feedback and patches (these are the
people I could find in my email archive, if you are missing from this
list let me know):
Huub Bouma, Roy van der Kuil, Remco Rotteveel, Mark Wormgoor, Dennis van
Onselen, Hugo van der Kooij, Han Holl, Ian Mcdonald, Udo van den Heuvel.
"""
# Modules we need
import re, urllib2, getopt, sys
import time, random
import htmlentitydefs, os, os.path, pickle
from string import replace, split, strip
from threading import Thread
from xml.sax import saxutils
# Extra check for the datetime module
try:
import datetime
except:
sys.stderr.write('This script needs the datetime module that was introduced in Python version 2.3.\n')
sys.stderr.write('You are running:\n')
sys.stderr.write('%s\n' % sys.version)
sys.exit(1)
# XXX: fix to prevent crashes in Snow Leopard [Robert Klep]
if sys.platform == 'darwin' and sys.version_info[:3] == (2, 6, 1):
try:
urllib2.urlopen('http://localhost.localdomain')
except:
pass
# do extra debug stuff
debug = 1
try:
import redirect
except:
debug = 0
pass
# globals
# compile only one time
r_entity = re.compile(r'&(#x[0-9A-Fa-f]+|#[0-9]+|[A-Za-z]+);')
tvgids = 'http://www.tvgids.nl/'
uitgebreid_zoeken = tvgids + 'zoeken/'
# how many seconds to wait before we timeout on a
# url fetch, 10 seconds seems reasonable
global_timeout = 10
# Wait a random number of seconds between each page fetch.
# We want to be nice and not hammer tvgids.nl (these are the
# friendly people that provide our data...).
# Also, it appears tvgids.nl throttles its output.
# So there, there is not point in lowering these numbers, if you
# are in a hurry, use the (default) fast mode.
nice_time = [1, 2]
# Maximum length in minutes of gaps/overlaps between programs to correct
max_overlap = 10
# Strategy to use for correcting overlapping programming:
# 'average' = use average of stop and start of next program
# 'stop' = keep stop time of current program and adjust start time of next program accordingly
# 'start' = keep start time of next program and adjust stop of current program accordingly
# 'none' = do not use any strategy and see what happens
overlap_strategy = 'average'
# Experimental strategy for clumping overlapping programming, all programs that overlap more
# than max_overlap minutes, but less than the length of the shortest program are clumped
# together. Highly experimental and disabled for now.
do_clump = False
# Create a category translation dictionary
# Look in mythtv/themes/blue/ui.xml for all category names
# The keys are the categories used by tvgids.nl (lowercase please)
cattrans = { 'amusement' : 'Talk',
'animatie' : 'Animated',
'comedy' : 'Comedy',
'documentaire' : 'Documentary',
'educatief' : 'Educational',
'erotiek' : 'Adult',
'film' : 'Film',
'muziek' : 'Art/Music',
'informatief' : 'Educational',
'jeugd' : 'Children',
'kunst/cultuur' : 'Arts/Culture',
'misdaad' : 'Crime/Mystery',
'muziek' : 'Music',
'natuur' : 'Science/Nature',
'nieuws/actualiteiten' : 'News',
'overige' : 'Unknown',
'religieus' : 'Religion',
'serie/soap' : 'Drama',
'sport' : 'Sports',
'theater' : 'Arts/Culture',
'wetenschap' : 'Science/Nature'}
# Create a role translation dictionary for the xmltv credits part
# The keys are the roles used by tvgids.nl (lowercase please)
roletrans = {'regie' : 'director',
'acteurs' : 'actor',
'presentatie' : 'presenter',
'scenario' : 'writer'}
# We have two sources of logos, the first provides the nice ones, but is not
# complete. We use the tvgids logos to fill the missing bits.
logo_provider = [ 'http://visualisation.tudelft.nl/~paul/logos/gif/64x64/',
'http://static.tvgids.nl/gfx/zenders/' ]
logo_names = {
1 : [0, 'ned1'],
2 : [0, 'ned2'],
3 : [0, 'ned3'],
4 : [0, 'rtl4'],
5 : [0, 'een'],
6 : [0, 'canvas_color'],
7 : [0, 'bbc1'],
8 : [0, 'bbc2'],
9 : [0,'ard'],
10 : [0,'zdf'],
11 : [1, 'rtl'],
12 : [0, 'wdr'],
13 : [1, 'ndr'],
14 : [1, 'srsudwest'],
15 : [1, 'rtbf1'],
16 : [1, 'rtbf2'],
17 : [0, 'tv5'],
18 : [0, 'ngc'],
19 : [1, 'eurosport'],
20 : [1, 'tcm'],
21 : [1, 'cartoonnetwork'],
24 : [0, 'canal+red'],
25 : [0, 'mtv-color'],
26 : [0, 'cnn'],
27 : [0, 'rai'],
28 : [1, 'sat1'],
29 : [0, 'discover-spacey'],
31 : [0, 'rtl5'],
32 : [1, 'trt'],
34 : [0, 'veronica'],
35 : [0, 'tmf'],
36 : [0, 'sbs6'],
37 : [0, 'net5'],
38 : [1, 'arte'],
39 : [0, 'canal+blue'],
40 : [0, 'at5'],
46 : [0, 'rtl7'],
49 : [1, 'vtm'],
50 : [1, '3sat'],
58 : [1, 'pro7'],
59 : [1, 'kanaal2'],
60 : [1, 'vt4'],
65 : [0, 'animal-planet'],
73 : [1, 'mezzo'],
86 : [0, 'bbc-world'],
87 : [1, 'tve'],
89 : [1, 'nick'],
90 : [1, 'bvn'],
91 : [0, 'comedy_central'],
92 : [0, 'rtl8'],
99 : [1, 'sport1_1'],
100 : [0, 'rtvu'],
101 : [0, 'tvwest'],
102 : [0, 'tvrijnmond'],
103 : [1, 'tvnoordholland'],
104 : [1, 'bbcprime'],
105 : [1, 'spiceplatinum'],
107 : [0, 'canal+yellow'],
108 : [0, 'tvnoord'],
109 : [0, 'omropfryslan'],
114 : [0, 'omroepbrabant']}
# A selection of user agents we will impersonate, in an attempt to be less
# conspicuous to the tvgids.nl police.
user_agents = [ 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.7) Gecko/20060909 Firefox/1.5.0.7',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.9) Gecko/20071105 Firefox/2.0.0.9',
'Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.8) Gecko/20071022 Ubuntu/7.10 (gutsy) Firefox/2.0.0.8' ]
# Work in progress, the idea is to cache program categories and
# descriptions to eliminate a lot of page fetches from tvgids.nl
# for programs that do not have interesting/changing descriptions
class ProgramCache:
"""
A cache to hold program name and category info.
TVgids stores the detail for each program on a separate URL with an
(apparently unique) ID. This cache stores the fetched info with the ID.
New fetches will use the cached info instead of doing an (expensive)
page fetch.
"""
def __init__(self, filename=None):
"""Create a new ProgramCache object, optionally from file"""
# where we store our info
self.filename = filename
if filename == None:
self.pdict = {}
else:
if os.path.isfile(filename):
self.load(filename)
else:
self.pdict = {}
def load(self, filename):
"""Loads a pickled cache dict from file"""
try:
self.pdict = pickle.load(open(filename,'r'))
except:
sys.stderr.write('Error loading cache file: %s (possibly corrupt)' % filename)
sys.exit(2)
def dump(self, filename):
"""Dumps a pickled cache, and makes sure it is valid"""
if os.access(filename, os.F_OK):
try:
os.remove(filename)
except:
sys.stderr.write('Cannot remove %s, check permissions' % filename)
pickle.dump(self.pdict, open(filename+'.tmp', 'w'))
os.rename(filename+'.tmp', filename)
def query(self, program_id):
"""Updates/gets/whatever."""
try:
return self.pdict[program_id]
except:
return None
def add(self, program):
"""Adds a program"""
self.pdict[program['ID']] = program
def clear(self):
"""Clears the cache (i.e. empties it)"""
self.pdict = {}
def clean(self):
"""
Removes all cached programming before today.
Also removes erroneously cached programming.
"""
now = time.localtime()
dnow = datetime.datetime(now[0],now[1],now[2])
for key in self.pdict.keys():
try:
if self.pdict[key]['stop-time'] < dnow or self.pdict[key]['name'].lower() == 'onbekend':
del self.pdict[key]
except:
pass
def usage():
print 'tv_grab_nl_py: A grabber that grabs tvguide data from tvgids.nl\n'
print 'and stores it in XMLTV-compatible format.\n'
print 'Usage:'
print '--help, -h = print this info'
print '--configure = create configfile (overwrites existing file)'
print '--config-file = name of the configuration file (default = ~/.xmltv/tv_grab_py.conf'
print '--capabilities = xmltv required option'
print '--desc-length = maximum allowed length of programme descriptions in bytes.'
print '--description = prints a short description of the grabber'
print '--output = file where to put the output'
print '--days = # number of days to grab'
print '--preferredmethod = returns the preferred method to be called'
print '--fast = do not grab descriptions of programming'
print '--slow = grab descriptions of programming'
print '--quiet = suppress all output'
print '--compat = append tvgids.nl to the xmltv id (use this if you were using tv_grab_nl)'
print '--logos 0/1 = insert urls to channel icons (mythfilldatabase will then use these)'
print '--nocattrans = do not translate the grabbed genres into MythTV-genres'
print '--cache = cache descriptions and use the file to store'
print '--clean_cache = clean the cache file before fetching'
print '--clear_cache = empties the cache file before fetching data'
print '--slowdays = grab slowdays initial days and the rest in fast mode'
print '--max_overlap = maximum length of overlap between programming to correct [minutes]'
print '--overlap_strategy = what strategy to use to correct overlaps (check top of source code)'
def filter_line_identity(m, defs=htmlentitydefs.entitydefs):
# callback: translate one entity to its ISO Latin value
k = m.group(1)
if k.startswith("#") and k[1:] in xrange(256):
return chr(int(k[1:]))
try:
return defs[k]
except KeyError:
return m.group(0) # use as is
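Note a subtle bug in filter_line_identity above: `k[1:]` is a string, so the membership test against `xrange(256)` is never true and numeric entities fall through unchanged (hex entities like `&#x41;` are not handled either, despite r_entity matching them). A self-contained sketch of the intended behaviour:

```python
import re

try:
    from htmlentitydefs import entitydefs   # Python 2, as the script imports it
except ImportError:
    from html.entities import entitydefs    # Python 3 location

r_entity = re.compile(r'&(#x[0-9A-Fa-f]+|#[0-9]+|[A-Za-z]+);')

def decode(m):
    k = m.group(1)
    if k.startswith('#'):
        # Handle both decimal (&#65;) and hex (&#x41;) numeric entities.
        return chr(int(k[2:], 16) if k[1] in 'xX' else int(k[1:]))
    return entitydefs.get(k, m.group(0))    # named entity, or leave as is

print(r_entity.sub(decode, 'Steinbrecher &amp; Co &#65;'))  # Steinbrecher & Co A
```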
def filter_line(s):
"""Removes unwanted stuff in strings (adapted from tv_grab_be)"""
# do the latin1 stuff
s = r_entity.sub(filter_line_identity, s)
s = replace(s,' ',' ')
# I suspect the following three lines are redundant, but they
# do little harm -- Han Holl
s = replace(s,'\r',' ')
x = re.compile('(<.*?>)') # Udo
s = x.sub('', s) #Udo
s = replace(s, '~Q', "'")
s = replace(s, '~R', "'")
# Hmm, not sure if I understand this. Without it, mythfilldatabase barfs
# on program names like "Steinbrecher &..."
# We must create valid XML -- Han Holl
s = saxutils.escape(s)
return s
def calc_timezone(t):
"""
Takes a time from tvgids.nl and formats it with all the required
timezone conversions.
in: '20050429075000'
out: '20050429075000 (CET|CEST)'
Until I have figured out how to correctly do timezoning in python this method
will bork if you are not in a zone that has the same DST rules as 'Europe/Amsterdam'.
"""
year = int(t[0:4])
month = int(t[4:6])
day = int(t[6:8])
hour = int(t[8:10])
minute = int(t[10:12])
#td = {'CET': '+0100', 'CEST': '+0200'}
#td = {'CET': '+0100', 'CEST': '+0200', 'W. Europe Standard Time' : '+0100', 'West-Europa (standaardtijd)' : '+0100'}
td = {0 : '+0100', 1 : '+0200'}
pt = time.mktime((year,month,day,hour,minute,0,0,0,-1))
timezone=''
try:
#timezone = time.tzname[(time.localtime(pt))[-1]]
timezone = (time.localtime(pt))[-1]
except:
sys.stderr.write('Cannot convert time to timezone')
return t+' %s' % td[timezone]
def format_timezone(td):
"""Given a datetime object, returns a string in XMLTV format"""
tstr = td.strftime('%Y%m%d%H%M00')
return calc_timezone(tstr)
def get_page_internal(url, quiet=0):
"""
Retrieves the url and returns a string with the contents.
Optionally, returns None if processing takes longer than
the specified number of timeout seconds.
"""
txtdata = None
txtheaders = {'Keep-Alive' : '300',
'User-Agent' : user_agents[random.randint(0, len(user_agents)-1)] }
try:
#fp = urllib2.urlopen(url)
rurl = urllib2.Request(url, txtdata, txtheaders)
fp = urllib2.urlopen(rurl)
lines = fp.readlines()
page = "".join(lines)
return page
except:
if not quiet:
sys.stderr.write('Cannot open url: %s\n' % url)
return None
class FetchURL(Thread):
"""A simple thread to fetch a url with a timeout"""
def __init__ (self, url, quiet=0):
Thread.__init__(self)
self.quiet = quiet
self.url = url
self.result = None
def run(self):
self.result = get_page_internal(self.url, self.quiet)
def get_page(url, quiet=0):
"""
Wrapper around get_page_internal to catch the
timeout exception
"""
try:
fu = FetchURL(url, quiet)
fu.start()
fu.join(global_timeout)
return fu.result
except:
if not quiet:
sys.stderr.write('get_page timed out on (>%s s): %s\n' % (global_timeout, url))
return None
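The FetchURL/get_page pair above implements fetch-with-timeout by joining the worker thread with a deadline. One caveat worth knowing: `Thread.join(timeout)` only stops *waiting*; the fetch itself keeps running in the background. A minimal runnable illustration of the same pattern, with a sleep standing in for the network call:

```python
import threading
import time

class Fetch(threading.Thread):
    """Same shape as FetchURL above: do the work in run(), stash the result."""
    def __init__(self, delay):
        threading.Thread.__init__(self)
        self.daemon = True   # do not keep the process alive if we time out
        self.delay = delay
        self.result = None
    def run(self):
        time.sleep(self.delay)   # stand-in for the urllib2 fetch
        self.result = 'page'

fast = Fetch(0.01); fast.start(); fast.join(2.0)
slow = Fetch(30.0); slow.start(); slow.join(0.05)
print(fast.result)  # finished within the deadline
print(slow.result)  # None: join() gave up waiting; the thread is still sleeping
```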
def get_channels(file, quiet=0):
"""
Get a list of all available channels and store these
in a file.
"""
# store channels in a dict
channels = {}
# tvgids stores several instances of channels, we want to
# find all the possibile channels
channel_get = re.compile('<optgroup label=.*?>(.*?)</optgroup>', re.DOTALL)
# this is how we will find a (number, channel) instance
channel_re = re.compile('<option value="([0-9]+)" >(.*?)</option>', re.DOTALL)
# this is where we will try to find our channel list
total = get_page(uitgebreid_zoeken, quiet)
if total == None:
return
# get a list of match objects of all the <select blah station>
stations = channel_get.finditer(total)
# and create a dict of number, channel_name pairs
# we do this this way because several instances of the
# channel list are stored in the url and not all of the
# instances have all the channels, this way we get them all.
for station in stations:
m = channel_re.finditer(station.group(0))
for p in m:
try:
a = int(p.group(1))
b = filter_line(p.group(2))
channels[a] = b
except:
sys.stderr.write('Oops, [%s,%s] does not look like a valid channel, skipping it...\n' % (p.group(1),p.group(2)))
# sort on channel number (arbitrary but who cares)
keys = channels.keys()
keys.sort()
# and create a file with the channels
f = open(file,'w')
for k in keys:
f.write("%s %s\n" % (k, channels[k]))
f.close()
def get_channel_all_days(channel, days, quiet=0):
"""
Get all available days of programming for channel number
The output is a list of programming in order where each row
contains a dictionary with program information.
"""
now = datetime.datetime.now()
programs = []
# Tvgids shows programs per channel per day, so we loop over the number of days
# we are required to grab
for offset in range(0, days):
channel_url = 'http://www.tvgids.nl/zoeken/?d=%i&z=%s' % (offset, channel)
# For historic purposes, the old style url that gave us a full week in advance:
# channel_url = 'http://www.tvgids.nl/zoeken/?trefwoord=Titel+of+trefwoord&interval=0&timeslot='+\
# '&station=%s&periode=%i&genre=&order=0' % (channel,days-1)
# Sniff, we miss you...
if offset > 0:
time.sleep(random.randint(nice_time[0], nice_time[1]))
# get the raw programming for the day
total = get_page(channel_url, quiet)
if total == None:
return programs
# Setup a number of regexps
# checktitle will match the title row in H2 tags of the daily overview page, e.g.
# <h2>zondag 19 oktober 2008</h2>
checktitle = re.compile('<h2>(.*?)</h2>',re.DOTALL)
# getrow will locate each row with program details
getrow = re.compile('<a href="/programma/(.*?)</a>',re.DOTALL)
# parserow matches the required program info, with groups:
# 1 = program ID
# 2 = broadcast times
# 3 = program name
parserow = re.compile('(.*?)/.*<span class="time">(.*?)</span>.*<span class="title">(.*?)</span>', re.DOTALL)
# normal begin and end times
times = re.compile('([0-9]+:[0-9]+) - ([0-9]+:[0-9]+)?')
# Get the day of month listed on the page as well as the expected date we are grabbing and compare these.
# If these do not match, we skip parsing the programs on the page and issue a warning.
#dayno = int(checkday.search(total).group(1))
title = checktitle.search(total)
if title:
title = title.group(1)
dayno = title.split()[1]
else:
sys.stderr.write('\nOops, there was a problem with page %s. Skipping it...\n' % (channel_url))
continue
expected = now + datetime.timedelta(days=offset)
if (not dayno.isdigit() or int(dayno) != expected.day):
sys.stderr.write('\nOops, did not expect page %s to list programs for "%s", skipping it...\n' % (channel_url,title))
continue
# and find relevant programming info
allrows = getrow.finditer(total)
for r in allrows:
detail = parserow.search(r.group(1))
if detail != None:
# default times
start_time = None
stop_time = None
# parse for begin and end times
t = times.search(detail.group(2))
if t != None:
start_time = t.group(1)
stop_time = t.group(2)
program_url = 'http://www.tvgids.nl/programma/' + detail.group(1) + '/'
program_name = detail.group(3)
# store time, name and detail url in a dictionary
tdict = {}
tdict['start'] = start_time
tdict['stop'] = stop_time
tdict['name'] = program_name
if tdict['name'] == '':
tdict['name'] = 'onbekend'
tdict['url'] = program_url
tdict['ID'] = detail.group(1)
tdict['offset'] = offset
#Add star rating if tipped by tvgids.nl
tdict['star-rating'] = '';
if r.group(1).find('Tip') != -1:
tdict['star-rating'] = '4/5'
# and append the program to the list of programs
programs.append(tdict)
# done
return programs
def make_daytime(time_string, offset=0, cutoff='00:00', stoptime=False):
"""
Given a string '11:35' and an offset from today,
return a datetime object. The cutoff specifies the point where the
new day starts.
Examples:
In [2]: make_daytime('11:34',0)
Out[2]: datetime.datetime(2006, 8, 3, 11, 34)
In [3]: make_daytime('11:34',1)
Out[3]: datetime.datetime(2006, 8, 4, 11, 34)
In [7]: make_daytime('11:34',0,'12:00')
Out[7]: datetime.datetime(2006, 8, 4, 11, 34)
In [4]: make_daytime('11:34',0,'11:34',False)
Out[4]: datetime.datetime(2006, 8, 3, 11, 34)
In [5]: make_daytime('11:34',0,'11:34',True)
Out[5]: datetime.datetime(2006, 8, 4, 11, 34)
"""
h,m = [int(x) for x in time_string.split(':')];
hm = int(time_string.replace(':',''))
chm = int(cutoff.replace(':',''))
# check for the cutoff, if the time is before the cutoff then
# add a day
extra_day = 0
if (hm < chm) or (stoptime==True and hm == chm):
extra_day = 1
# and create a datetime object, DST is handled at a later point
pt = time.localtime()
dt = datetime.datetime(pt[0],pt[1],pt[2],h,m)
dt = dt + datetime.timedelta(offset+extra_day)
return dt
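The cutoff comparison above decides whether a listed time belongs to the next calendar day. A condensed, self-contained restatement of just that rule (the helper name is hypothetical), useful for checking the decision on its own:

```python
def rolls_to_next_day(time_string, cutoff='00:00', stoptime=False):
    # Same comparison make_daytime performs: times before the cutoff
    # (or equal to it, for stop times) belong to the next day.
    hm = int(time_string.replace(':', ''))
    chm = int(cutoff.replace(':', ''))
    return hm < chm or (stoptime and hm == chm)

print(rolls_to_next_day('05:00', '06:00'))                 # True: before cutoff
print(rolls_to_next_day('11:34', '06:00'))                 # False: same day
print(rolls_to_next_day('11:34', '11:34', stoptime=True))  # True: stop at cutoff
```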
def correct_times(programs, quiet=0):
"""
Parse a list of programs as generated by get_channel_all_days() and
convert begin and end times to xmltv compatible times in datetime objects.
"""
if programs == []:
return programs
# the start time of programming for this day, times *before* this time are
# assumed to be on the next day
day_start_time = '06:00'
# initialise using the start time of the first program on this day
if programs[0]['start'] != None:
day_start_time = programs[0]['start']
for program in programs:
if program['start'] == program['stop']:
program['stop'] = None
# convert the times
if program['start'] != None:
program['start-time'] = make_daytime(program['start'], program['offset'], day_start_time)
else:
program['start-time'] = None
if program['stop'] != None:
program['stop-time'] = make_daytime(program['stop'], program['offset'], day_start_time, stoptime=True)
# extra correction, needed because the stop time of a program may be on the next day, after the
# day cutoff. For example:
# 06:00 - 23:40 Long Program
# 23:40 - 00:10 Lala
# 00:10 - 08:00 Wawa
# This puts the end date of Wawa on the current, instead of the next day. There is no way to detect
# this with a single cutoff in make_daytime. Therefore, check if there is a day difference between
# start and stop dates and correct if necessary.
if program['start-time'] != None:
# make two dates
start = program['start-time']
stop = program['stop-time']
single_day = datetime.timedelta(1)
startdate = datetime.datetime(start.year,start.month,start.day)
stopdate = datetime.datetime(stop.year,stop.month,stop.day)
if startdate - stopdate == single_day:
program['stop-time'] = program['stop-time'] + single_day
else:
program['stop-time'] = None
def parse_programs(programs, offset=0, quiet=0):
"""
Parse a list of programs as generated by get_channel_all_days() and
convert begin and end times to xmltv compatible times.
"""
# good programs
good_programs = []
# calculate absolute start and stop times
correct_times(programs, quiet)
# next, correct for missing end time and copy over all good programming to the
# good_programs list
for i in range(len(programs)):
# Try to correct missing end time by taking start time from next program on schedule
if (programs[i]['stop-time'] == None and i < len(programs)-1):
if not quiet:
sys.stderr.write('Oops, "%s" has no end time. Trying to fix...\n' % programs[i]['name'])
programs[i]['stop-time'] = programs[i+1]['start-time']
# The common case: start and end times are present and are not
# equal to each other (yes, this can happen)
if programs[i]['start-time'] != None and \
programs[i]['stop-time'] != None and \
programs[i]['start-time'] != programs[i]['stop-time']:
good_programs.append(programs[i])
# Han Holl: try to exclude programs that stop before they begin
for i in range(len(good_programs)-1,-1,-1):
if good_programs[i]['stop-time'] <= good_programs[i]['start-time']:
if not quiet:
sys.stderr.write('Deleting invalid stop/start time: %s\n' % good_programs[i]['name'])
del good_programs[i]
# Try to exclude programs that only identify a group or broadcaster and have overlapping start/end times with
# the actual programs
for i in range(len(good_programs)-2,-1,-1):
if good_programs[i]['start-time'] <= good_programs[i+1]['start-time'] and \
good_programs[i]['stop-time'] >= good_programs[i+1]['stop-time']:
if not quiet:
sys.stderr.write('Deleting grouping/broadcaster: %s\n' % good_programs[i]['name'])
del good_programs[i]
for i in range(len(good_programs)-1):
# PdB: Fix tvgids start-before-end x minute interval overlap. An overlap (positive or
# negative) is halved and each half is assigned to the adjacent programmes. The maximum
# overlap length between programming is set by the global variable 'max_overlap' and is
# default 10 minutes. Examples:
# Positive overlap (= overlap in programming):
# 10:55 - 12:00 Lala
# 11:55 - 12:20 Wawa
# is transformed in:
# 10:55 - 11.57 Lala
# 11:57 - 12:20 Wawa
# Negative overlap (= gap in programming):
# 10:55 - 11:50 Lala
# 12:00 - 12:20 Wawa
# is transformed in:
# 10:55 - 11.55 Lala
# 11:55 - 12:20 Wawa
stop = good_programs[i]['stop-time']
start = good_programs[i+1]['start-time']
dt = stop-start
avg = start + dt / 2
overlap = 24*60*60*dt.days + dt.seconds
# check for the size of the overlap
if 0 < abs(overlap) <= max_overlap*60:
if not quiet:
if overlap > 0:
sys.stderr.write('"%s" and "%s" overlap %s minutes. Adjusting times.\n' % \
(good_programs[i]['name'],good_programs[i+1]['name'],overlap / 60))
else:
sys.stderr.write('"%s" and "%s" have gap of %s minutes. Adjusting times.\n' % \
(good_programs[i]['name'],good_programs[i+1]['name'],abs(overlap) / 60))
# stop-time of previous program wins
if overlap_strategy == 'stop':
good_programs[i+1]['start-time'] = good_programs[i]['stop-time']
# start-time of next program wins
elif overlap_strategy == 'start':
good_programs[i]['stop-time'] = good_programs[i+1]['start-time']
# average the difference
elif overlap_strategy == 'average':
good_programs[i]['stop-time'] = avg
good_programs[i+1]['start-time'] = avg
# leave as is
else:
pass
# Experimental strategy to make sure programming does not disappear. All programs that overlap more
# than the maximum overlap length, but less than the shortest length of the two programs are
# clumped.
if do_clump:
for i in range(len(good_programs)-1):
stop = good_programs[i]['stop-time']
start = good_programs[i+1]['start-time']
dt = stop-start
overlap = 24*60*60*dt.days + dt.seconds
length0 = good_programs[i]['stop-time'] - good_programs[i]['start-time']
length1 = good_programs[i+1]['stop-time'] - good_programs[i+1]['start-time']
l0 = length0.days*24*60*60 + length0.seconds
l1 = length1.days*24*60*60 + length1.seconds
if abs(overlap) >= max_overlap*60 <= min(l0,l1)*60 and \
not good_programs[i].has_key('clumpidx') and \
not good_programs[i+1].has_key('clumpidx'):
good_programs[i]['clumpidx'] = '0/2'
good_programs[i+1]['clumpidx'] = '1/2'
good_programs[i]['stop-time'] = good_programs[i+1]['stop-time']
good_programs[i+1]['start-time'] = good_programs[i]['start-time']
# done, nothing to see here, please move on
return good_programs
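The 'average' strategy above can be checked by hand on the example from the comments: a five-minute overlap between Lala (10:55-12:00) and Wawa (11:55-12:20) is split evenly, moving the shared boundary to 11:57:30 (which the comment rounds to 11:57):

```python
import datetime

stop  = datetime.datetime(2008, 10, 19, 12, 0)   # stop of Lala
start = datetime.datetime(2008, 10, 19, 11, 55)  # start of Wawa
dt = stop - start                                # the 5-minute overlap
avg = start + dt / 2                             # same arithmetic as parse_programs
print(avg.time())  # 11:57:30
```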
def get_descriptions(programs, program_cache=None, nocattrans=0, quiet=0, slowdays=0):
"""Given a list of programs, from get_channel, retrieve program information"""
# This regexp tries to find details such as Genre, Acteurs, Jaar van Premiere etc.
detail = re.compile('<li>.*?<strong>(.*?):</strong>.*?<br />(.*?)</li>', re.DOTALL)
# These regexps find the description area, the program type and descriptive text
description = re.compile('<div class="description">.*?<div class="text"(.*?)<div class="clearer"></div>',re.DOTALL)
descrtype = re.compile('<div class="type">(.*?)</div>',re.DOTALL)
descrline = re.compile('<p>(.*?)</p>',re.DOTALL)
# randomize detail requests
nprograms = len(programs)
fetch_order = range(0,nprograms)
random.shuffle(fetch_order)
counter = 0
for i in fetch_order:
counter += 1
if programs[i]['offset'] >= slowdays:
continue
if not quiet:
sys.stderr.write('\n(%3.0f%%) %s: %s ' % (100*float(counter)/float(nprograms), i, programs[i]['name']))
# check the cache for this program's ID
cached_program = program_cache.query(programs[i]['ID'])
if (cached_program != None):
if not quiet:
sys.stderr.write(' [cached]')
# copy the cached information, except the start/end times, rating and clumping,
# these may have changed.
tstart = programs[i]['start-time']
tstop = programs[i]['stop-time']
rating = programs[i]['star-rating']
try:
clump = programs[i]['clumpidx']
except:
clump = False
programs[i] = cached_program
programs[i]['start-time'] = tstart
programs[i]['stop-time'] = tstop
programs[i]['star-rating'] = rating
if clump:
programs[i]['clumpidx'] = clump
continue
else:
# be nice to tvgids.nl
time.sleep(random.randint(nice_time[0], nice_time[1]))
# get the details page, and get all the detail nodes
descriptions = ()
details = ()
try:
if not quiet:
sys.stderr.write(' [normal fetch]')
total = get_page(programs[i]['url'])
details = detail.finditer(total)
descrspan = description.search(total);
descriptions = descrline.finditer(descrspan.group(1))
except:
# if we cannot find the description page,
# go to next in the loop
if not quiet:
sys.stderr.write(' [fetch failed or timed out]')
continue
# define containers
programs[i]['credits'] = {}
programs[i]['video'] = {}
# now parse the details
line_nr = 1;
# First, we try to find the program type in the description section.
# Note that this is not the same as the generic genres (these are searched later on), but a more descriptive one like "Culinair programma"
# If present, we store this as first part of the regular description:
programs[i]['detail1'] = descrtype.search(descrspan.group(1)).group(1).capitalize()
if programs[i]['detail1'] != '':
line_nr = line_nr + 1
# Secondly, we add one or more lines of the program description that are present.
for descript in descriptions:
d_str = 'detail' + str(line_nr)
programs[i][d_str] = descript.group(1)
# Remove sponsored link from description if present.
sponsor_pos = programs[i][d_str].rfind('<i>Gesponsorde link:</i>')
if sponsor_pos > 0:
programs[i][d_str] = programs[i][d_str][0:sponsor_pos]
programs[i][d_str] = filter_line(programs[i][d_str]).strip()
line_nr = line_nr + 1
# Finally, we check out all program details. These are generically denoted as:
# <li><strong>(TYPE):</strong><br />(CONTENT)</li>
# Some examples:
# <li><strong>Genre:</strong><br />16 oktober 2008</li>
# <li><strong>Genre:</strong><br />Amusement</li>
for d in details:
type = d.group(1).strip().lower()
content_asis = d.group(2).strip()
content = filter_line(content_asis).strip()
if content == '':
continue
elif type == 'genre':
# Fix detection of movies based on description as tvgids.nl sometimes
# categorises a movie as e.g. "Komedie", "Misdaadkomedie", "Detectivefilm".
genre = content;
if (programs[i]['detail1'].lower().find('film') != -1 \
or programs[i]['detail1'].lower().find('komedie') != -1)\
and programs[i]['detail1'].lower().find('tekenfilm') == -1 \
and programs[i]['detail1'].lower().find('animatiekomedie') == -1 \
and programs[i]['detail1'].lower().find('filmpje') == -1:
genre = 'film'
if nocattrans:
programs[i]['genre'] = genre.title()
else:
try:
programs[i]['genre'] = cattrans[genre.lower()]
except:
programs[i]['genre'] = ''
# Parse persons and their roles for credit info
elif roletrans.has_key(type):
programs[i]['credits'][roletrans[type]] = []
persons = content_asis.split(',');
for name in persons:
if name.find(':') != -1:
name = name.split(':')[1]
if name.find('-') != -1:
name = name.split('-')[0]
if name.find('e.a') != -1:
name = name.split('e.a')[0]
programs[i]['credits'][roletrans[type]].append(filter_line(name.strip()))
elif type == 'bijzonderheden':
if content.find('Breedbeeld') != -1:
programs[i]['video']['breedbeeld'] = 1
if content.find('Zwart') != -1:
programs[i]['video']['blackwhite'] = 1
if content.find('Teletekst') != -1:
programs[i]['teletekst'] = 1
if content.find('Stereo') != -1:
programs[i]['stereo'] = 1
elif type == 'url':
programs[i]['infourl'] = content
else:
# In unmatched cases, we still add the parsed type and content to the program details.
# Some of these will lead to xmltv output during the xmlefy_programs step
programs[i][type] = content
# do not cache programming that is unknown at the time
# of fetching.
if programs[i]['name'].lower() != 'onbekend':
program_cache.add(programs[i])
if not quiet:
sys.stderr.write('\ndone...\n\n')
# done
def title_split(program):
"""Some channels have the annoying habit of adding the subtitle to the title of a program.
This function attempts to fix this, by splitting the name at a ': '.
"""
if (program.has_key('titel aflevering') and program['titel aflevering'] != '') \
or (program.has_key('genre') and program['genre'].lower() in ['movies','film']):
return
colonpos = program['name'].rfind(': ')
if colonpos > 0:
program['titel aflevering'] = program['name'][colonpos+1:len(program['name'])].strip()
program['name'] = program['name'][0:colonpos].strip()
def xmlefy_programs(programs, channel, desc_len, compat=0, nocattrans=0):
"""Given a list of programming (from get_channels())
returns a string with the xml equivalent
"""
output = []
for program in programs:
clumpidx = ''
try:
if program.has_key('clumpidx'):
clumpidx = 'clumpidx="'+program['clumpidx']+'"'
except:
print program
output.append(' <programme start="%s" stop="%s" channel="%s%s" %s> \n' % \
(format_timezone(program['start-time']), format_timezone(program['stop-time']),\
channel, compat and '.tvgids.nl' or '', clumpidx))
output.append(' <title lang="nl">%s</title>\n' % filter_line(program['name']))
if program.has_key('titel aflevering') and program['titel aflevering'] != '':
output.append(' <sub-title lang="nl">%s</sub-title>\n' % filter_line(program['titel aflevering']))
desc = []
for detail_row in ['detail1','detail2','detail3']:
if program.has_key(detail_row) and not re.search('[Gg]een detailgegevens be(?:kend|schikbaar)', program[detail_row]):
desc.append('%s ' % program[detail_row])
if desc != []:
# join and remove newlines from descriptions
desc_line = "".join(desc).strip()
desc_line = desc_line.replace('\n', ' ')
if len(desc_line) > desc_len:
spacepos = desc_line[0:desc_len-3].rfind(' ')
desc_line = desc_line[0:spacepos] + '...'
output.append(' <desc lang="nl">%s</desc>\n' % desc_line)
# Process credits section if present.
# This will generate director/actor/presenter info.
if program.has_key('credits') and program['credits'] != {}:
output.append(' <credits>\n')
for role in program['credits']:
for name in program['credits'][role]:
if name != '':
output.append(' <%s>%s</%s>\n' % (role, name, role))
output.append(' </credits>\n')
if program.has_key('jaar van premiere') and program['jaar van premiere'] != '':
output.append(' <date>%s</date>\n' % program['jaar van premiere'])
if program.has_key('genre') and program['genre'] != '':
output.append(' <category')
if nocattrans:
output.append(' lang="nl"')
output.append ('>%s</category>\n' % program['genre'])
if program.has_key('infourl') and program['infourl'] != '':
output.append(' <url>%s</url>\n' % program['infourl'])
if program.has_key('aflevering') and program['aflevering'] != '':
output.append(' <episode-num system="onscreen">%s</episode-num>\n' % filter_line(program['aflevering']))
# Process video section if present
if program.has_key('video') and program['video'] != {}:
output.append(' <video>\n');
if program['video'].has_key('breedbeeld'):
output.append(' <aspect>16:9</aspect>\n')
if program['video'].has_key('blackwhite'):
output.append(' <colour>no</colour>\n')
output.append(' </video>\n')
if program.has_key('stereo'):
output.append(' <audio><stereo>stereo</stereo></audio>\n')
if program.has_key('teletekst'):
output.append(' <subtitles type="teletext" />\n')
# Set star-rating if applicable
if program['star-rating'] != '':
output.append(' <star-rating><value>%s</value></star-rating>\n' % program['star-rating'])
output.append(' </programme>\n')
return "".join(output)
def main():
# Parse command line options
try:
opts, args = getopt.getopt(sys.argv[1:], "h", ["help", "output=", "capabilities",
"preferredmethod", "days=",
"configure", "fast", "slow",
"cache=", "clean_cache",
"slowdays=","compat",
"desc-length=","description",
"nocattrans","config-file=",
"max_overlap=", "overlap_strategy=",
"clear_cache", "quiet","logos="])
except getopt.GetoptError:
usage()
sys.exit(2)
# DEFAULT OPTIONS - Edit if you know what you are doing
# where the output goes
output = None
output_file = None
# the total number of days to fetch
days = 6
# Fetch data in fast mode, i.e. do NOT grab all the detail information,
# fast means fast, because it does not have to fetch a web page for each program
# Default: fast=0
fast = 0
# number of days to fetch in slow mode. For example: --days 5 --slowdays 2, will
# fetch the first two days in slow mode (with all the details) and the remaining three
# days in fast mode.
slowdays = 6
# no output
quiet = 0
# insert url of channel logo into the xml data, this will be picked up by mythfilldatabase
logos = 1
# enable this option if you were using tv_grab_nl, it adjusts the generated
# xmltvid's so that everything works.
compat = 0
# enable this option if you do not want the tvgids categories being translated into
# MythTV-categories (genres)
nocattrans = 0
# Maximum number of characters to use for program description.
# Different values may work better in different versions of MythTV.
desc_len = 475
# default configuration file locations
hpath = ''
if os.environ.has_key('HOME'):
hpath = os.environ['HOME']
# extra test for windows users
elif os.environ.has_key('HOMEPATH'):
hpath = os.environ['HOMEPATH']
# hpath = ''
xmltv_dir = hpath+'/.xmltv'
program_cache_file = xmltv_dir+'/program_cache'
config_file = xmltv_dir+'/tv_grab_nl_py.conf'
# cache the detail information.
program_cache = None
clean_cache = 1
clear_cache = 0
# seed the random generator
random.seed(time.time())
for o, a in opts:
if o in ("-h", "--help"):
usage()
sys.exit(1)
if o == "--quiet":
quiet = 1;
if o == "--description":
print "The Netherlands (tv_grab_nl_py $Rev: 104 $)"
sys.exit(0)
if o == "--capabilities":
print "baseline"
print "cache"
print "manualconfig"
print "preferredmethod"
sys.exit(0)
if o == '--preferredmethod':
print 'allatonce'
sys.exit(0)
if o == '--desc-length':
# Use the requested length for programme descriptions.
desc_len = int(a)
if not quiet:
sys.stderr.write('Using description length: %d\n' % desc_len)
for o, a in opts:
if o == "--config-file":
# use the provided name for configuration
config_file = a
if not quiet:
sys.stderr.write('Using config file: %s\n' % config_file)
for o, a in opts:
if o == "--configure":
# check for the ~.xmltv dir
if not os.path.exists(xmltv_dir):
if not quiet:
sys.stderr.write('You do not have the ~/.xmltv directory,\n')
sys.stderr.write('I am going to make a shiny new one for you...\n')
os.mkdir(xmltv_dir)
if not quiet:
sys.stderr.write('Creating config file: %s\n' % config_file)
get_channels(config_file)
sys.exit(0)
if o == "--days":
# limit days to maximum supported by tvgids.nl
days = min(int(a),6)
if o == "--compat":
compat = 1
if o == "--nocattrans":
nocattrans = 1
if o == "--fast":
fast = 1
if o == "--output":
output_file = a
try:
output = open(output_file,'w')
# and redirect output
if debug:
debug_file = open('/tmp/kaas.xml','w')
blah = redirect.Tee(output, debug_file)
sys.stdout = blah
else:
sys.stdout = output
except:
if not quiet:
sys.stderr.write('Cannot write to outputfile: %s\n' % output_file)
sys.exit(2)
if o == "--slowdays":
# limit slowdays to maximum supported by tvgids.nl
slowdays = min(int(a),6)
# slowdays implies fast == 0
fast = 0
if o == "--logos":
logos = int(a)
if o == "--clean_cache":
clean_cache = 1
if o == "--clear_cache":
clear_cache = 1
if o == "--cache":
program_cache_file = a
if o == "--max_overlap":
max_overlap = int(a)
if o == "--overlap_strategy":
overlap_strategy = a
# get configfile if available
try:
f = open(config_file,'r')
except:
sys.stderr.write('Config file %s not found.\n' % config_file)
sys.stderr.write('Re-run me with the --configure flag.\n')
sys.exit(1)
#check for cache
program_cache = ProgramCache(program_cache_file)
if clean_cache != 0:
program_cache.clean()
if clear_cache != 0:
program_cache.clear()
# Go!
channels = {}
# Read the channel stuff
for blah in f.readlines():
blah = blah.lstrip()
blah = blah.replace('\n','')
if blah:
if blah[0] != '#':
channel = blah.split()
channels[channel[0]] = " ".join(channel[1:])
# channels are now in channels dict keyed on channel id
# print header stuff
print '<?xml version="1.0" encoding="ISO-8859-1"?>'
print '<!DOCTYPE tv SYSTEM "xmltv.dtd">'
print '<tv generator-info-name="tv_grab_nl_py $Rev: 104 $">'
# first do the channel info
for key in channels.keys():
print ' <channel id="%s%s">' % (key, compat and '.tvgids.nl' or '')
print ' <display-name lang="nl">%s</display-name>' % channels[key]
if (logos):
ikey = int(key)
if logo_names.has_key(ikey):
full_logo_url = logo_provider[logo_names[ikey][0]]+logo_names[ikey][1]+'.gif'
print ' <icon src="%s" />' % full_logo_url
print ' </channel>'
num_chans = len(channels.keys())
channel_cnt = 0
if program_cache != None:
program_cache.clean()
fluffy = channels.keys()
nfluffy = len(fluffy)
for id in fluffy:
channel_cnt += 1
if not quiet:
sys.stderr.write('\n\nNow fetching %s(xmltvid=%s%s) (channel %s of %s)\n' % \
(channels[id], id, (compat and '.tvgids.nl' or ''), channel_cnt, nfluffy))
info = get_channel_all_days(id, days, quiet)
blah = parse_programs(info, None, quiet)
# fetch descriptions
if not fast:
get_descriptions(blah, program_cache, nocattrans, quiet, slowdays)
# Split titles with colon in it
# Note: this only takes place if all days retrieved are also grabbed with details (slowdays=days)
# otherwise this function might change some titles after a few grabs and thus may result in
# loss of programmed recordings for these programs.
if slowdays == days:
for program in blah:
title_split(program)
print xmlefy_programs(blah, id, desc_len, compat, nocattrans)
# save the cache after each channel fetch
if program_cache != None:
program_cache.dump(program_cache_file)
# be nice to tvgids.nl
time.sleep(random.randint(nice_time[0], nice_time[1]))
if program_cache != None:
program_cache.dump(program_cache_file)
# print footer stuff
print "</tv>"
# close the outputfile if necessary
if output != None:
output.close()
# and return success
sys.exit(0)
# allow this to be a module
if __name__ == '__main__':
main()
# vim:tw=0:et:sw=4
Best regards,
Cedric
Last edited by cdwijs (2010-11-04 18:44:51)

Running the script with python2 solves it for me:
su - mythtv -c "nice -n 19 python2 /usr/bin/tv_grab_nl_py --output ~/listings.xml"
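Since the root cause here is that the grabber is Python 2-only code, a small interpreter guard at the top of the script makes the failure mode obvious instead of a pile of SyntaxErrors or silent misbehaviour. This is an illustrative sketch (the function name and message are mine, not part of tv_grab_nl_py):

```python
import sys

# Illustrative guard, not part of the original grabber: fail fast with a
# clear message when the script is started under Python 3, since the code
# relies on Python 2-only constructs (print statements, dict.has_key()).
def running_under_python2(version_info=sys.version_info):
    """Return True when the interpreter is a Python 2.x release."""
    return version_info[0] == 2

if not running_under_python2():
    sys.stderr.write('tv_grab_nl_py requires Python 2; '
                     'start it as "python2 tv_grab_nl_py".\n')
```

On systems where /usr/bin/python now points at Python 3, this turns a confusing crash into a one-line hint to use the python2 binary.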
Best regards,
Cedric -
Issues with using the output redirection character with newer NXOS versions?
Has anyone seen any issues with using the output redirection character with newer NXOS versions?
Am receiving "Error 0x40870004 while copying."
Simply copying a file from bootflash to tftp is ok.
This occurs for both 3CDaemon and Tftpd32 softwares.
Have tried it on multiple switches - same issue.
Any known bugs?
thanks!
The following is an example of bad (NXOS4.1.1b) and good (SANOS3.2.1a)
MDS2# sho ver | inc system
system: version 4.1(1b)
system image file is: bootflash:///m9200-s2ek9-mz.4.1.1b.bin
system compile time: 10/7/2008 13:00:00 [10/11/2008 09:52:55]
MDS2# sh int br > tftp://10.73.54.194
Trying to connect to tftp server......
Connection to server Established. Copying Started.....
TFTP put operation failed:Access violation
Error 0x40870004 while copying tftp://10.73.54.194/
MDS2# copy bootflash:cpu_logfile tftp://10.73.54.194
Trying to connect to tftp server......
Connection to server Established. Copying Started.....
|
TFTP put operation was successful
MDS2#
ck-ci9216-001# sho ver | inc system
system: version 3.2(1a)
system image file is: bootflash:/m9200-ek9-mz.3.2.1a.bin
system compile time: 9/25/2007 18:00:00 [10/06/2007 06:46:51]
ck-ci9216-001# sh int br > tftp://10.73.54.194
Trying to connect to tftp server......
|
TFTP put operation was successful

Please check with a newer version of the Tftpd32 server. The error may be due to an older version of the TFTP server; the newer version available resolved this error, and files are uploaded with no issues.
1. Download tftpd32b.zip from:
http://tftpd32.jounin.net/tftpd32_download.html
2. Copy the tftpd32b.zip file into an empty directory and extract it.
3. Copy the file you want to transfer into the directory containing tftpd32.exe.
4. Run tftpd32.exe from that directory. The "Base Directory" field should show the path to the directory containing the file you want to transfer.
At this point, the tftpserver is ready to begin serving files. As devices request files, the main tftpd32 window will log the requests.
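As background on the error itself: per RFC 1350, the switch opens the transfer with a write request (WRQ) packet, and "Access violation" is TFTP error code 2, sent when the server will not create the target file. A minimal sketch of the WRQ layout (the filename here is just an example, not tied to any particular switch):

```python
import struct

def build_wrq(filename, mode='octet'):
    """Build a TFTP write-request (WRQ) packet per RFC 1350:
    2-byte opcode (2 = WRQ) + filename + NUL + transfer mode + NUL."""
    return (struct.pack('!H', 2)
            + filename.encode('ascii') + b'\x00'
            + mode.encode('ascii') + b'\x00')

pkt = build_wrq('cpu_logfile')
```

Watching the server's reply to this packet in a capture (opcode 5 with error code 2) confirms whether the failure is a server-side permission/create problem rather than an NXOS bug.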
Best Regards... -
Compatibility problem with Java and Python RSA algorithm implementation
I have a client-server application. The server is written in Python, the client in Java. The client receives messages from the server encrypted with RSA (http://stuvel.eu/rsa), and I'm unable to decrypt them. It seems to be an RSA algorithm compatibility problem. I'm using the algorithm from the java.security package, instantiating the Cipher object like this: c = Cipher.getInstance("RSA");. I noticed that this algorithm produces, for input blocks of length <= 117, output blocks of length 128. The server, I guess, uses the most trivial implementation of RSA (1 byte is encrypted to 1 byte). So I want to make my Java algorithm compatible with the one the server uses. How do I do that? Do I have to instantiate the Cipher object differently? Or use another library?
azedor wrote:
First you said it was no good because it could only handle <= 117 byte inputs, now you say it is no good because it produces a 128-byte output. You're not making sense.

First I said that these two RSA implementations are not compatible, and the first reason I noticed is that the Python implementation produces, for input of length N, a cryptogram of the same length.

Not true. In general, RSA encryption of any number of bytes less than the length of the modulus will produce a result of length near that of the modulus. When N is less than the length of the modulus, it is rare that N bytes of cleartext produce N bytes of ciphertext.
The Java implementation always produces 128 bytes of output for a data block of length <= 117.

Pretty much correct, and very much desirable. This is primarily a function of the PKCS#1 padding, which is used to solve two basic problems. First, as I alluded to in my first response, it is in the nature of the algorithm that leading zeros are not preserved; second, when the cleartext is very small (a few bytes) the exponentiation does not roll over, and it is easy to decrypt the result. Both problems are addressed by PKCS#1 padding.
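The "does not roll over" problem is easy to demonstrate: with unpadded (textbook) RSA and e = 3, a small message satisfies m**3 < n, so the ciphertext is literally m**3 and an integer cube root recovers m without the private key. A toy sketch in Python (the modulus below is an arbitrary large number standing in for a real 1024-bit key, so this is an illustration, not an attack on an actual key):

```python
def int_nth_root(x, n):
    """Largest integer r with r**n <= x, found by binary search."""
    lo, hi = 0, 1 << (x.bit_length() // n + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Textbook RSA with no padding: c = m^e mod n.
n = 2**1024 + 1          # stand-in for a real modulus
e = 3
m = int.from_bytes(b'attack at dawn', 'big')
c = pow(m, e, n)         # m**3 is far below n, so no modular wrap occurs
recovered = int_nth_root(c, e)
# recovered == m: the ciphertext gives the plaintext away without the key.
```

PKCS#1 padding prevents exactly this by expanding every block to nearly the modulus length, so the exponentiation always wraps.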
After what sabre150 said, I'm thinking of giving up on translating the Python code to Java and considering another asymmetric cryptography algorithm on both sides. Can you recommend something that should be compatible with Python?

This seems to be at odds with your statement in reply #3, "Also have acces only to client code so i have to change sth in java."! That statement is why I said "I suspect ... you have dug a deep hole".
In your position I would use the Python bindings for openssl. Once more, Google is your friend. -
Launching programs from python and ncurses
I've made a little menu launcher with python and ncurses. It works most of the time, but occasionally it does nothing when I select a program to launch.
This is how it works:
1. A keyboard shortcut launches an xterm window that runs the program.
2. I select some program, which is launched via this command:
os.system("nohup " + "program_name" + "> /dev/null &")
3. ncurses cleans up and python calls "sys.exit()"
The whole idea with nohup is that the xterm window will close after I select an application, but the application will still start normally (this was a bit of a problem). This works most of the time, but sometimes nothing happens (it never works the first time I try launching a program after starting the computer, the other times are more random).
So, has anyone got an idea of what's going on or a better solution than the nohup hack?
The whole code for the menu launcher is below. It's quite hackish, but then again it was just meant for me. The file's called "curmenu.py", and I launch it with an xbindkeys shortcut that runs "xterm -e curmenu.py".
#!/usr/bin/env python
import os,sys
import curses
## Variables one might like to configure
programs = ["Sonata", "sonata", "Ncmpc", "xterm -e ncmpc", "Emacs", "emacs", "Firefox", "swiftfox",\
"Pidgin", "pidgin", "Screen", "xterm -e screen", "Thunar", "thunar", \
"Gimp", "gimp", "Vlc", "vlc", "Skype", "skype"]
highlight = 3
on_screen = 7
## Functions
# Gets a list of strings, figures out the middle one
# and highlights it. Draws strings on screen.
def drawStrings(strings):
length = len(strings)
middle = (length - 1)/2
for num in range(length):
addString(strings[num], middle, num, length)
stdscr.refresh()
def addString(string, middle, iter_step, iter_max):
if iter_step < iter_max:
string = string + "\n"
if iter_step == middle:
stdscr.addstr(iter_step + 1, 1, string, curses.A_REVERSE)
else:
stdscr.addstr(iter_step + 1, 1, string)
# Returns a list of strings to draw on screen. The
# strings chosen are centered around position.
def listStrings(strings, position, on_screen):
length = len(strings)
low = (on_screen - 1)/2
start = position - low
str = []
for num in range(start, start + on_screen):
str = str + [strings[num % length]]
return str
## Start doing stuff
names = programs[::2]
longest = max(map(lambda x: len(x), names))
# Start our screen
stdscr=curses.initscr()
# Enable noecho and keyboard input
curses.curs_set(0)
curses.noecho()
curses.cbreak()
stdscr.keypad(1)
# Display strings
drawStrings(listStrings(names, highlight, on_screen))
# Wait for response
num_progs = len(names)
low = (on_screen - 1)/2
while 1:
c = stdscr.getch()
if c == ord("q") or c == 27: # 27 = "Escape"
break
elif c == curses.KEY_DOWN:
highlight = (highlight + 1)%num_progs
elif c == curses.KEY_UP:
highlight = (highlight - 1)%num_progs
elif c == curses.KEY_NPAGE:
highlight = (highlight + low)%num_progs
elif c == curses.KEY_PPAGE:
highlight = (highlight - low)%num_progs
elif c == 10: # actually "Enter", but hey
os.system("nohup " + programs[2*highlight + 1] + "> /dev/null &")
break
drawStrings(listStrings(names, highlight, on_screen))
# Close the program
curses.nocbreak()
stdscr.keypad(0)
curses.echo()
curses.endwin()
sys.exit()

Try:
http://docs.python.org/lib/module-subprocess.html
Should let you fork programs off into the background.
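Concretely, the nohup/os.system hack can be replaced with subprocess.Popen: start_new_session=True (Python 3.2+; on the Python 2 of that era, preexec_fn=os.setsid did the same) puts the child in its own session so it survives the xterm closing, and DEVNULL replaces the shell redirection. A sketch, with the command list standing in for an entry from the programs table above:

```python
import subprocess
import sys

def launch_detached(cmd):
    """Start cmd in its own session with output discarded, so the child
    keeps running after the launching terminal exits (nohup-style)."""
    return subprocess.Popen(
        cmd,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,  # POSIX: child calls setsid() before exec
    )

# In the menu's Enter branch this would be e.g.:
#     launch_detached(['xterm', '-e', 'ncmpc'])
# Passing a list also sidesteps the shell-quoting issues of os.system().
proc = launch_detached([sys.executable, '-c', 'pass'])
```

Because the child never shares the terminal's session, there is no race between the xterm closing and the program starting, which would explain the intermittent failures.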