AUR3 [aur-pyjs] implementation in python (pyjs) + JSON-RPC

Welcome to AUR3.
available (pre-alpha)...
http://aur.archlinux.org/packages.php?ID=38359
QUICKSTART
1) install
     # http://aur.archlinux.org/packages.php?ID=38359
2) run
     # /usr/lib/aur-pyjs-git/aur serve -f
3) view
     # http://127.0.0.1:8000/Aur.html
INTRODUCTION
This is a reimplementation of the AUR using a widget toolkit known as Pyjamas (http://pyjs.org).  Client-side code (to run in web browsers) is written and maintained in python; when deploying to production, the python sources are compiled/translated to 100% pure javascript.  The resulting javascript code can be run on any major browser.  When deployed, this is a 100% pure javascript application... thus, javascript is required, else you will see only a blank page.
WHY PYJS?
1) anyone who knows python can write clean, maintainable client-side code
2) eliminates the need to know the umpteen inconsistencies amongst browsers
3) via pyjamas-desktop, the app runs as 100% pure python, as a true desktop app, with no modifications
4) back-ends are JSON-RPC; allows back-ends to be written in any language, and enforces a clean separation
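Point 4 is just plain JSON-RPC, so a back-end can be prototyped in a few lines of any language.  A minimal sketch in python; the `search` method and the package names are illustrative, not the actual aur-pyjs API:

```python
import json

# a hypothetical back-end: any language works, as long as it speaks JSON-RPC
def handle_request(raw):
    """Dispatch a JSON-RPC request string to a handler; return the response string."""
    methods = {"search": lambda term: [p for p in ("aur-pyjs-git", "pyjamas") if term in p]}
    req = json.loads(raw)
    result = methods[req["method"]](*req["params"])
    return json.dumps({"id": req["id"], "result": result, "error": None})

# the browser side (compiled from python) would POST something like this:
request = json.dumps({"id": 1, "method": "search", "params": ["pyjs"]})
response = json.loads(handle_request(request))
print(response["result"])
```

The clean separation comes for free: the front-end only ever sees the JSON envelope, never the back-end's language or storage.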
PROJECT STATUS
1) FRAMEWORK
     [complete] basic URL dispatcher
     [complete] load new content pages with cache support
     [new] create generic URL-to-module dispatcher (like cherrypy/etc.)
2) SHELL
     [complete] pixel perfect to other sections of the site
     [complete] links/titles/copyright in place
3) BASIC SEARCH
     [complete] search front-end
     [incomplete] perform search front-end
     [incomplete] perform search back-end
4) ADVANCED SEARCH
     [complete] define toggles
     [incomplete] define filters
     [incomplete] define sorts
     [incomplete] enable/enforce limits
     [complete] search front-end
     [incomplete] perform search front-end
     [incomplete] perform search back-end
5) LANGUAGES
     [incomplete] Language.py module methods
     [incomplete] language JSON-RPC backend (preferred), or hardcode in Language.py
     [incomplete] replace hardcoded text with calls to Language module
6) HOME
     [complete] create page
     [incomplete] introduction/disclaimer
     [incomplete] recent updates front-end
     [incomplete] recent updates back-end
     [incomplete] statistics front-end
     [incomplete] statistics back-end
7) BROWSE/SEARCH/MY PACKAGES
     [complete] create page
     [incomplete] paginating results front-end
     [incomplete] paginating results back-end
8) VIEW PACKAGE
     [incomplete] create page
     [incomplete] package details front-end (name/link/desc/deps/files/etc.)
     [incomplete] package details back-end
     [incomplete] list comments front-end
     [incomplete] list comments back-end
     [incomplete] add comment front-end
     [incomplete] add comment back-end
9) ACCOUNTS
     [incomplete] create page
     [incomplete] create/edit account front-end
     [incomplete] create/edit account back-end
10) SUBMIT
     [incomplete] create page
     [incomplete] submit package front-end
     [incomplete] submit package back-end
PYJAMAS-DESKTOP ONLY (as python)
1) INSTALL
     [incomplete] add 'install' links (view/browse/search pages)
     [incomplete] install status/details front-end (GUI)
     [incomplete] python module "back-end" to download packages + dependencies and install
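The [new] framework item above (a generic URL-to-module dispatcher in the cherrypy style) could be sketched like this; the `Home`/`Packages` module names and the URL scheme are hypothetical, not the real aur-pyjs layout:

```python
# map /section/action/args onto ROOTS[section].action(*args), cherrypy-style
class Home:
    def index(self):
        return "recent updates + statistics"

class Packages:
    def view(self, name):
        return "details for " + name

ROOTS = {"": Home(), "packages": Packages()}

def dispatch(url):
    """Resolve a URL path to a module method; unknown paths raise AttributeError."""
    parts = [p for p in url.strip("/").split("/") if p]
    root = ROOTS.get(parts[0] if parts else "", ROOTS[""])
    if parts and parts[0] in ROOTS:
        parts = parts[1:]          # consume the section segment
    action = getattr(root, parts[0] if parts else "index")
    return action(*parts[1:])

print(dispatch("/"))                       # Home.index()
print(dispatch("/packages/view/pyjamas"))  # Packages.view("pyjamas")
```

New content pages then become new entries in `ROOTS` instead of new branches in a hand-written URL switch.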
GET INVOLVED
1) set up a development environment + pyjs sandbox at ~/aur-pyjs...
     # /usr/lib/aur-pyjs-git/aur sync ~/aur-pyjs
     # cd ~/aur-pyjs
2) update pyjamas anytime by running...
     # ./aur sync
3) generate the AUR...
     # ./aur trans
4) view the AUR...
     # ./aur serve -f
     # http://127.0.0.1:8000/Aur.html
5) create a package based on remote git master...
     # ./aur pkg
6) create a package based on your local git master...
     # ./aur pkg -l
7) create a package based on your local git master AND install it...
     # ./aur pkg -li
All of the commands support a 'help' and 'usage' parameter:
# ./aur help
Usage: aur-pyjs COMMAND [help|OPTION]...
Develop, translate, package, and serve AUR3
COMMANDs:
  pkg     build package from local/remote master; opt. install; opt. upload
  sync    bootstrap/update local/target devel environment
  trans   translate python sources to javascript for deployment
  serve   latest local build at http://localhost:8000
# ./aur pkg help
Usage: aur-pyjs pkg [-l] [-i] [-u]
Package local/remote build and optionally install and/or upload to AUR legacy
Options:
  -l    favor local source over remote
  -i    install package locally
  -u    upload to AUR legacy after building
LAST WORDS
yes.  I hope this becomes the official AUR, and/or takes some other direction [1]
yes.  running as a python desktop app, you will be able to install packages directly
yes.  project is active; tentative completion date, ~October 2010
possibly.  I'd like to make it capable of running independently, managing the local system
no.  project is not a revived incarnation of other attempts
) advanced search will be restricted
) SOMEONE CAN START BUILDING JSON-RPC BACKENDS NOW
) PHP and MySQL s_ck.   If I do it, the back-ends will be python ( + git/sqlite/couchdb)
) feedback appreciated
) to contribute, fork on github and send me a request.
C Anthony
[1] http://bbs.archlinux.org/viewtopic.php?id=90970
Last edited by extofme (2010-09-12 23:37:47)

Xyne wrote:Sorry for being too lazy right now to check myself, but would this require that Javascript be enabled to use the AUR or does Pyjamas gracefully fall back to plain HTML?
heh, yeah i purposefully wasn't mentioning that at this point.
no, it is not able to fall back to plain old HTML.  there was/is a project, pyjamas-server or something, where the backend/server would run the app as python (i think, might have been spidermonkey), compute the DOM, then spit the result to the client/browser as an HTML document.  every time you did an "action" (clicked something/whatever), you would make a new request to the server, it would do the action for you, then spit back the result.  i'm not sure completely how it worked, but the biggest problem was that in order to maintain a "state" from the client's perspective (simply retaining variables/etc.), a permanent/daemon process was required on the server.  this daemon would essentially be doing the same thing a client browser would have done, had javascript been enabled, but the daemon is doing it for _every_ client that doesn't support JS.  the other option was to serialize/deserialize the entire state after each request, but that's slow.
i have some hope for this in the future, but ATM it's not here.  although, it may be easy to "snapshot" points in a running pyjamas app, and save the DOM.  this could then be served up, and could reuse the same CSS as the JS version; i believe this is sort of how the new test suite in pyjs works, but i'm not sure.
the other option would be to just create a super simple HTML only version, maybe try to reuse the CSS, i dunno.  i've done manual web development for so long, that frankly i stopped caring about supporting 15 browsers and degrading gracefully to 10 different levels of functionality, it's just too much, and drives you mad wasting time.  this is why i love pyjamas; it takes care of the top 5 browsers or so, and others usually work fine.  so, i make a fully functioning version, and a super crappy basic HTML version that does the bare minimum; if you don't like it, enable JS.  with the advent of HTML5 and the JS APIs, JS will start to become a hard requirement anyways i think.
that was my long winded way of saying "no".
Xyne wrote:Regardless, it seems that you have a good vision for this project and I hope that your enthusiasm for it continues. I know that it can be somewhat discouraging when initially met with nay-saying and/or a seeming lack of interest.
thanks, and yeah it can be.  haha i thought people would be all over the idea of a new AUR, esp. one capable of running as a desktop app/installing with one click.  that's why i made it so easy to bootstrap a dev environment.  but meh, i code because i enjoy it, so i keep doing it cuz it feels gooooood.
C Anthony

Similar Messages

  • Trouble with aur/kdevelop-extra-plugins-python-svn

    I can't get the kdevelop python plug-in to compile.
    <snip>
    [ 65%] Building CXX object duchain/CMakeFiles/kdev4pythonduchain.dir/contextbuilder.o
    /tmp/yaourt-tmp-root/aur-kdevelop-extra-plugins-python-svn/src/kdevelop-python/duchain/contextbuilder.cpp:32:45: fatal error: language/duchain/smartconverter.h: No such file or directory
    compilation terminated.
    make[2]: *** [duchain/CMakeFiles/kdev4pythonduchain.dir/contextbuilder.o] Error 1
    make[1]: *** [duchain/CMakeFiles/kdev4pythonduchain.dir/all] Error 2
    make: *** [all] Error 2
    Aborting...
    ==> ERROR: Makepkg was unable to build kdevelop-extra-plugins-python-svn.
    After some searching, I came to discover smartconverter.h has been removed so I commented it out in contextbuilder.cpp.
    <snip>
    [ 65%] Building CXX object duchain/CMakeFiles/kdev4pythonduchain.dir/contextbuilder.o
    In file included from /var/abs/local/yaourtbuild/kdevelop-extra-plugins-python-svn/src/kdevelop-python/duchain/contextbuilder.cpp:33:0:
    /var/abs/local/yaourtbuild/kdevelop-extra-plugins-python-svn/src/kdevelop-python/duchain/pythoneditorintegrator.h:26:46: fatal error: language/editor/editorintegrator.h: No such file or directory
    compilation terminated.
    make[2]: *** [duchain/CMakeFiles/kdev4pythonduchain.dir/contextbuilder.o] Error 1
    make[1]: *** [duchain/CMakeFiles/kdev4pythonduchain.dir/all] Error 2
    make: *** [all] Error 2
    Aborting...
    Does anyone have this working?  Any ideas?

    I have the same problem, so I would also like to know.

  • Implement and deploy a JSON rendering extension for SSRS.

    Hi,
    Can anyone please let me know how to implement and deploy a JSON extension for SSRS 2005?

    Hi Anoopmulamoodu,
    In Reporting Services, we can implement custom rendering extensions to generate reports in expected formats. After doing some research, I am afraid there is no example of JSON Rendering Extension for Reporting Services. Generally, it is difficult to write
    a custom rendering extension because it must typically support all possible combinations of report elements and requires that you implement hundreds of classes, interfaces, methods, and properties. Before you decide to create a custom rendering extension,
    you should evaluate simpler alternatives:
    Customize rendered output by specifying device information settings for existing extensions.
    Add custom formatting and presentation features by combining XSL Transformations (XSLT) with the output of the XML rendering format.
    If you intend to implement a custom rendering extension, please go through the following MSDN document as well as a code example of Zip Rendering Extension:
    Implementing a Rendering Extension
    Zip Rendering Extension for SQL Server Reporting Services 2005/2008/2012
    Hope this helps.
    Regards,
    Mike Yin
    TechNet Community Support

  • JSON/RPC

    Hello everyone
    Can anybody point me to a working implementation of JSON-RPC for iPhone? I found a couple of JSON libraries (BSJSON and TouchJSON) - but not any implementation of the "RPC" part. I need to do JSON-RPC over http-post, btw.
    help would be highly appreciated!
    /morten

    Anyone ?

  • Json rpc and json rpc cpp integration

    hi friends,
    I am new to json rpc and json rpc cpp.  All I need is a way to talk to C++ code from my servlet/JSP without using JNI.  I got the following link by searching on google but was not able to implement it, so I am looking for json rpc with json rpc cpp:
    http://www.ibm.com/developerworks/webservices/library/ws-xml-rpc/
    thanks in advance
    Irfan

    Anyone ?

  • JSF/AJAX vs JSON-RPC/AJAX

    Has anyone directly compared these two approaches to rich internet client applications? I am in the process of choosing a technology base for AJAX applications, and see some attraction in the lighter weight of JSON, and the direct access to java objects from javascript looks appealing and much simpler than JSF. It does appear that the JSON-RPC-Java approach will require the client js to handle all of the UI events. How much effort in this area does JSF save?

    I should have been more clear.
    On the https://bpcatalog.dev.java.net page in the left navbar is a link to the CVS repository "Version control - CVS", click on it. Then there is a link "Setup CVS command line client" that leads you to the page ( https://bpcatalog.dev.java.net/servlets/ProjectSource) that you will have to log in to see. This page tells you how to connect and download with CVS.
    I have summarized the page content below:
    Hope this helps - Thanks - Mark
    To use WinCvs to check out your own set of source code files, you must first set up the correct cvs root using the following steps.
    1. Launch WinCvs and select Admin - Preferences. Enter the CVSroot:
    :pserver:[email protected]:/cvs
    Click OK.
    2. If this is your first cvs checkout, create a folder in Windows Explorer to hold all of your cvs project folders. Then create a subfolder for this project. (You may even want to create separate subfolders for each module if you're working in more than one.)
    3. In WinCvs, select Admin - Login and enter your CVS password.
    4. Click on the left window in the program and select a folder. Then select Create - Checkout Module. Select the project folder you created earlier.
    5. Enter the project module name and click OK. You should see a scrolling list of filenames as these are created in your folder(s).
    6. Repeat the module creation process for each additional cvs module you wish to check out.

  • Distrib -e "Arch(org|code|pkgs|aur|forum|wiki|bugs|.*)?" -- thoughts

    design the output of every tool we use to look like merge-able chains in a DSCM.
    this is something i've been thinking about long before i came to Arch.  i want to see next-generation package/configuration/change management in a distributed distribution.
    I am familiar with git, and most of what i have tried relates to it; however, that's only because i know it well.  other possibilities would be bazaar/mercurial/fossil/etc.  i like fossil; i have not tried it but it looks closest to what i want to achieve.  i don't think it could scale to the levels we'd need however.
    ASSERTIONS
    ) all PKGBUILD are DSCM (git/?) based
    ) bugs should ride along with software, and be merge-able when branches merge
    ) cryptographic signatures for each user
    ) wiki for each software
    ) forum "channels" for each software
    ) P2P sharing of SCM (blobs/trees/commits in git) units
    ) P2P sharing of common SCM (packs in git) pack
    ) P2P sharing of user configs and ABS build trees; each user may host their own binary/source repo, and sign their packages)
    ) P2P and distribution are good
    essentially, everything is a branch/tree and we use facilities of DSCM with a P2P layer above.  the arch servers could become another node in the system and a long term record keeping peer.  others could add servers.  you could open the wiki/bugs/etc offline, in a web browser, and merge later.  when you edit your PKGBUILDS, they can be forked by others and improved, maybe pushed to the core/community repos.  official repo builds could be signed by an official Arch GPG key.  bring everything as close to source as possible, and spread it out.
    this is completely brainstorming right now, but i have done some tricky cool stuff with git.  i want to keep all/most of the logic/information within the git DAG (commit graph).  i think we could do neat stuff with the git index, git grafts, and several operations could safely be done in parallel.  we could do mapreduce type calculations on the "ArchNet" to get crazy statistics and visualizations.
    i intend to actually build something soon-ish-awhile.  right now im working on an app that can produce 3D visualizations in VPython from any kind of input stream... i want to hook that kind of stuff up and visualize the arch/linux/gnu/buzz.
    another offshoot project for me was to use VPython (that app is really fun) to navigate and manipulate git repositories in real time.  imagine visualizing your system in 3D while working on it.  like a 3D admin panel where you overlay others configs and entire systems on to your own to see what changes/etc. DSCM can do this.
    thoughts?  what other kinds of things could we do if everything Arch behaved like a P2P super-repository?
    Last edited by extofme (2010-02-14 01:34:37)

    Anntoin wrote:Some interesting ideas. But you need to start small first and make a proof of concept before you try and tackle everything. A detailed plan of how packages are handled and a basic implementation would be a start for example (still not a small job though), then testing the behaviours that you are interested with that framework. You have an idea where you want to go with this but you will need to focus on a few core features and show their benefit before anyone will consider this.
    ah yes of course.  the first step is getting a distributed index, and a "package" format (packages become fuzzy items, below/above); this will be realized in the form of:
    "AUR3 [aur-pyjs] implementation in python (pyjs) + JSON-RPC"
    https://bbs.archlinux.org/viewtopic.php?pid=823972
    i have a package in the AUR for that [aur-pyjs], but it's old as i haven't been able to update in a while, and won't be able to until i secure a development job in my new city (next week hopefully).  aur-pyjs will be built on top of the concepts i have outlined in this thread, and will in time become the prototype.  check it out; pretty neat even though it can't do much yet :-).  soon though, i will update the package, and it will then be able to run as a native python desktop app (pyjamas allows the same code to run as a website or a desktop app).  at that point, it will be trivial to implement connectivity to the old AUR/repos, and we will in effect have a pacman+aur replacement.  from there i will tackle bugs+forum, of which there are already several implementations on top of DSCM sub-systems to research and learn from.
    stefanwilkens wrote:
    Dieter@be wrote:Interesting ideas, I think i like them.
    but trying to store too many big files (ie package files) inside a VCS seems like a bad idea.  space requirements will be much bigger than what we have now, unless you make the VCS "forget" about older versions or something...
    mostly this.
    the base of what you're proposing is a tremendous amount of data that would never be touched after a new version is released.  how do your suggestions fit the rolling release model?  Especially relatively large packages updated with high frequency (nvidia binary drivers, for instance) could cause the space requirement to increase rapidly unless moderated.
    packages are not stored in the DSCM, their contents are.  the package itself is simply a top-level tree object in git, linking to all other trees and blobs comprising the package state, and a reference to said tree.  this means everything and anything that is common between _any_ package and _any_ version will be reused; if ten unrelated packages reference the same file, only one copy will ever exist; blobs are the same.  however, some packages may indeed create gigantic, singular blob type objects that always change, and this will be addressed (next...).
    git compresses the individual objects itself, in gz format; this could be changed to use the xz format, or anything else.  it also generates pack files full of differentiated objects, also compressed. it would not always be necessary to have the full history of a package (if you look somewhere above, i briefly touch this point with various "kinds" of packages, some capable of source rebuild, some capable of becoming any past source/binary version, some a single version/binary only, etc.).  you would not have to retain all versions of "packages" if you did not want, but you could retrieve them at anytime so long as their components existed somewhere on the network.  servers could be set up to provide all packs, all versions, effectively and automatically performing the intended duty of the "arch rollback machine".  the exact mechanism is not defined yet, but it will likely involve some sort of SHA routing protocol, to resolve missing chunks.
    git's data model is stupid simple; structures can be created to represent a package, its history, its bugs/status, and its information (wiki/etc.), in an independent way so they do not depend on each other, but still relate to each other, and possess knowledge of how to "complete" and find each other.  it will not be structured in the typical way git is used now.  unfortunately this is very low level git stuff, and difficult to explain properly, so i won't go there; just know that ultimately the system will only pull the objects you need to fulfill the directive you gave it, and there will be rules to control your object cache.  your object cache can then be used to fulfill the requests of others; i.e. P2P.
    since git itself is in a rather poor state when it comes to bindings, i will be using the pure python git library, dulwich, instead.  while in time this could be changed to use proper bindings, or some bits written as C modules, it's possible pypy will make all that unnecessary.  i don't need anything git core offers except its data structures and concepts; although, i intend to make the entire system (adding bugs/updating packages/editing wiki/editing forum/etc.) _completely_ 100% accessible from a basic git client.  for example, you could write a post in the forum by "committing" to a special branch; you could search the entire wiki, and its history from the terminal while installing; you could add a bug, and link a patch to it, directly usable and buildable by others for testing; this could all be done offline, and pushed once a connection was available... this will lead to all sorts of interesting paths...
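The reuse claim above (ten packages referencing one file store it once) falls straight out of git's content addressing.  A sketch using only hashlib, with the same blob format `git hash-object` uses; the toy `store` dict stands in for the object database:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """SHA-1 of a git blob: header 'blob <len>\\0' + content, as git hashes it."""
    return hashlib.sha1(b"blob " + str(len(content)).encode() + b"\x00" + content).hexdigest()

store = {}  # object id -> content; a toy stand-in for the git object database

def add_package(files):
    """'Store' a package's files; identical contents collapse to one object."""
    ids = {}
    for name, data in files.items():
        oid = git_blob_id(data)
        store.setdefault(oid, data)  # already-present content is not stored again
        ids[name] = oid
    return ids

# two unrelated "packages" sharing one file: three objects exist, not four
add_package({"PKGBUILD": b"pkgname=foo", "LICENSE": b"GPL"})
add_package({"PKGBUILD": b"pkgname=bar", "LICENSE": b"GPL"})
print(len(store))
```

A real package would add tree objects linking these blobs together, but the dedup property is the same at every level of the graph.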
    in one super run-on sentence:
    i intend to construct a "social", 100% distributed distribution platform, where everyone is a [potentially] contributing node and has [nearly] full access to all informations, each node's contributions are cryptographically verifiable, each node may easily participate in testing/discussion or lend computing resources, each node may republish variations of any object or collection under their own signature, each node may "track" or "follow" any number of signatures with configurable aggressiveness (no such thing as "official repos"; your personal "repo" is the unique overlay of other nodes you trust, and by proxy some nodes they trust; "official repos" degrade into an Arch signature, a Debian signature, Fedora, etc.), and finally, do all of this is a way that is agnostic to the customized distribution (or other package managers) above it, or it's goals, and eventually spread to other distros, thus creating a monstrous pool of shared bandwidth, space, ideas, and workload, whilst at the same time converging user/developer/tester/vendor/packager/contributor/etc. toward: person.
    piece of cake
    C Anthony
    Last edited by extofme (2010-09-11 05:23:46)

  • Slurpy - An AUR search/download/update helper in Python

    Some of you guys from IRC already know about slurpy, but I've tagged a release and uploaded a stable PKGBUILD to the AUR.
    AUR packages:
    slurpy - should remain usable, but will not contain latest bug fixes until the next release
    slurpy-git - latest and greatest, but will also contain the latest bugs I've introduced
    (Below is shamelessly ripped from my project page)
    Preamble
    slurpy is another AUR helper script written in Python. I've been an advocate of arson since its inception but the fact that it's written in Ruby always bugged me. Since I am much more comfortable in Python I decided to write a port. The arson code base changed a lot as I worked on this and I decided to continue the direction I was heading rather than rewriting the port to match. slurpy is where I ended up.
    What it is
        * Faster searching, downloading, retrieving info, and checking for updates for AUR packages.
        * Dependency resolution for packages in the AUR.
        * Written in Python with only one optional dependency - python-cjson (makes processing faster with large result sets).
        * Colorized output based on pacman-color's color.conf. Color is disabled by default and must be enabled with -c|--color.
        * Easy downloading of package source through the ABS for packages in the official repositories. (Very experimental!!)
    What it isn't
        * slurpy is not a way to automate the download-build-install process. It is only a means to manage PKGBUILDS. If you are looking for an automatic installer, check out yaourt.
        * slurpy is not a `prettyfier'. Output is mostly modeled after pacman output to keep a uniform feel across tools.
    Other thoughts
        * I've tested this quite a bit, but I know there are many bugs I've missed. I could use a few testers giving quality feedback. If you've got a minute, check it out and let me know what you think.

    Evanlec and I talked about this issue on IRC.  I know why it breaks, I'm not sure how to handle it.  That chromium package uses a pkgver that is grabbed from the chromium webpage, but the AUR doesn't know how to handle that pkgver.  This causes the AUR to return a version number of -7 (negative $pkgrel).  This is more of an issue with the way the PKGBUILD is written and what the AUR can support, but I'll see if I can implement an elegant work around for this.
    I've run into this before with other packages too.  My conclusion was that the best way to deal with it is to ignore any packages whose version number returns as negative, or whose version number slurpy doesn't understand.  This would mean that there are possibly packages installed on your system that need to be updated that slurpy isn't showing updates for.  In that case slurpy should show a warning message saying that it can't determine the latest version of that package.  Thoughts?
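That skip-and-warn idea could look roughly like this; `safe_newer` and the dotted-tuple comparison are illustrative only (pacman's real vercmp rules are more involved):

```python
# sketch of the proposed guard: skip updates whose AUR version can't be trusted
def safe_newer(installed, remote):
    """Return True only when `remote` parses cleanly and is newer than `installed`."""
    def parse(v):
        ver, _, rel = v.partition("-")
        if not ver:  # e.g. the AUR handing back "-7": nothing before the dash
            raise ValueError(v)
        return tuple(int(x) for x in ver.split("."))
    try:
        return parse(remote) > parse(installed)
    except ValueError:
        print("warning: cannot determine latest version, skipping")
        return False

print(safe_newer("1.2-1", "1.3-1"))  # True
print(safe_newer("5.0-1", "-7"))     # warning, then False
```

The point is that an unparsable remote version degrades to "no update shown plus a warning" rather than a bogus downgrade or a crash.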
    Last edited by rson451 (2009-09-30 12:27:14)

  • Java interoperability solution to use C++,Python etc.

    I am working on a generic scripting/programming architecture for a Java application.
    There is a set of classes that I want to expose via API so that it can be used to developed plugins for interpreted languages. The final aim is to expose the Java classes as COM interfaces via a COM server or as Python packages etc. To achieve this I would have to use a form of RPC that is statefull. By statefull what I mean is that if a method in a Java class returns other Java object (non primitive) then that object should be available as proxy at the client side and it should not be passed by value. So any method invoked on the proxy object would result in a call to the actual Java object.
    Can someone comment on how I can achieve the above behavior? Which of these technologies could be used:
    JNI, XML-RPC, SOAP, CORBA etc.
    Are there any other useful options?
    Probably JNI could be used but its pretty complex and all the object reference handling would have to be done manually along with creation of proxy classes on the C++ side.
    I have looked at Jython, but it's way behind the current Python versions.
    The idea is to develop a generic set of API so that plugins can be developed for any language on any platform. Any comments appreciated.

    Hey .. can someone help me here?  People have replied that CORBA would suit my problem, and I would like to know why. It's not that I am questioning the experts, but I have to convince the other developers working with me. Most of the developers above me have heard of the recent (come on, they are more recent than CORBA) buzzwords like SOAP, XML-RPC etc. and would like me to use them ... But I couldn't find implementations based on SOAP/XML-RPC that would provide the true RPC mechanism I have described above. Please help me.
    Thanks.
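The "stateful" behavior being asked for (non-primitive results stay on the server and come back as proxies) is what CORBA provides natively, but the pattern itself is small.  An in-process Python sketch with all names hypothetical and the transport omitted:

```python
# stateful-RPC sketch: method results live on the "server"; the client gets a
# handle wrapped in a Proxy, so every call goes back to the real object
class Server:
    def __init__(self):
        self.objects, self.next_id = {}, 0

    def register(self, obj):
        self.next_id += 1
        self.objects[self.next_id] = obj
        return self.next_id

    def invoke(self, handle, method, *args):
        result = getattr(self.objects[handle], method)(*args)
        if isinstance(result, (int, float, str, bool, type(None))):
            return ("value", result)          # primitives pass by value
        return ("handle", self.register(result))  # objects pass by reference

class Proxy:
    def __init__(self, server, handle):
        self._server, self._handle = server, handle

    def __getattr__(self, method):
        def call(*args):
            kind, payload = self._server.invoke(self._handle, method, *args)
            return Proxy(self._server, payload) if kind == "handle" else payload
        return call

# a toy remote object graph
class Account:
    def __init__(self, owner):
        self.owner = owner
    def name(self):
        return self.owner

class Bank:
    def open(self, owner):
        return Account(owner)  # non-primitive: crosses the wire as a handle

server = Server()
bank = Proxy(server, server.register(Bank()))
acct = bank.open("irfan")  # acct is a Proxy, not a copied Account
print(acct.name())         # the invocation runs on the server-side object
```

Over a real wire the `invoke` call would be serialized (XML-RPC, SOAP, IIOP...); the hard part CORBA solves for you is object lifetime and handle management, which this sketch ignores.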

  • Parallel Backup script written in python

    I'm writing a backup script in python, based on one I wrote in BASH earlier on. Both of these leverage rsync, but I decided to move to a python implementation because python is much more flexible than BASH. My goals with the new implementation are to back up the system, home folder, and my documents folder to a set of multiple primary, secondary, and tertiary disks, respectively. The method is as follows:
    1. check for the existence of disks and create folders which will contain mountpoints for each category of disk.
    2. decrypt and mount disks found under subfolders within those category folders, creating the mountpoints if they don't exist.
    3. synchronize the aforementioned data to the mounted disks using rsync, doing all three classes of disk in parallel.
    4. unmount and close disks
    This is really my first serious python program, and I realize that it's a bit complicated. My code is rather sloppy, as well, perhaps understandably so given my novice status. My only other programming experience is with BASH scripts, but I digress.
    Here is the code for the script (about 250 lines). It is written as a series of functions, and I'm uncertain as to whether functions or objects would work better. Additionally, I'm sure there's a python function provided by the os module analogous to the sync system call, but I've yet to find it in my python desk reference. The backup functions need work, and I'm still trying to figure out how to get them to loop through the mounted disks in each folder and copy to them. I require assistance in determining how to write the backup functions to do as outlined above, and how to run them in parallel. This is still a work in progress, mind.
    #!/usr/bin/python
    # Backup Script
    #### preferences ####
    # set your primary/secondary backup disks, encryption, and keyfile (if applicable) here.
    # backup disks
    # primary and secondary backups. As used here,
    # primary backups refer to backups of the entire system made to an external drive
    # secondary backups refer to backups made of individual folders, such as documents
    # primary backup disks by UUID:
    global PDISKS
    PDISKS = ("/dev/disk/by-uuid/16d64026-28bd-4e1f-a452-74e76bb4d47b","")
    # secondary backups by UUID.
    global SDISKS
    SDISKS = ()
    # tertiary disks by UUID:
    global TDISKS
    TDISKS = ("/dev/disk/by-uuid/39543e6e-cf50-4416-9669-e97a6abd2a37","")
    # backup paths
    # these are the paths of the folders you wish to back up to secondary
    # and tertiary disks, respectively. Primary disks are set to back up the
    # contents of the root filesystem (/*). NO TRAILING SLASHES.
    global SBACKUP
    SBACKUP = "/home/bryant"
    global TBACKUP
    TBACKUP = "/home/bryant/docs"
    # use encryption:
    use_encryption = True
    # keyfile
    # set the full path to your keyfile here
    # this assumes a single keyfile for all backup disks
    # set this to None if you don't have a single keyfile for all of your backups
    keyfile = "/usr/local/bin/backup.keyfile"
    # import modules
    import os, subprocess, sys
### preliminary functions ###
# these do the setup and post-copy work
def check_dirs():
    """checks that the folders which contain the mountpoints exist, creates them if they don't"""
    print("checking for mountpoints...")
    if os.path.isdir("/mnt/pbackup"):
        print("primary mountpoint exists.")
    else:
        print("mountpoint /mnt/pbackup does not exist.\n\tcreating...")
        os.mkdir("/mnt/pbackup")
    if os.path.isdir("/mnt/sbackup"):
        print("secondary mountpoint exists.")
    else:
        print("mountpoint /mnt/sbackup does not exist.\n\tcreating...")
        os.mkdir("/mnt/sbackup")
    if os.path.isdir("/mnt/tbackup"):
        print("tertiary mountpoint exists.")
    else:
        print("mountpoint /mnt/tbackup does not exist.\n\tcreating...")
        os.mkdir("/mnt/tbackup")
def mount_disks():
    """mounts available backup disks in their respective subdirectories"""
    pfolder = 1
    sfolder = 1
    tfolder = 1
    for pdisk in PDISKS:
        if os.path.islink(pdisk):
            subprocess.call("sync", shell=True)
            pmapper = "pbackup" + str(pfolder)
            if os.path.isfile(keyfile):
                print("keyfile found. Using keyfile to decrypt...")
                subprocess.call("sudo cryptsetup luksOpen " + pdisk + " " + pmapper + " --key-file " + keyfile, shell=True)
            else:
                print("keyfile not found or keyfile not set.\n\tAsking for passphrase...")
                subprocess.call("sudo cryptsetup luksOpen " + pdisk + " " + pmapper, shell=True)
            if not os.path.isdir("/mnt/pbackup/pbak" + str(pfolder)):
                os.mkdir("/mnt/pbackup/pbak" + str(pfolder))
            subprocess.call("mount /dev/mapper/" + pmapper + " /mnt/pbackup/pbak" + str(pfolder), shell=True)
            pfolder += 1
    for sdisk in SDISKS:
        if os.path.islink(sdisk):
            subprocess.call("sync", shell=True)
            smapper = "sbackup" + str(sfolder)
            if os.path.isfile(keyfile):
                print("keyfile found. Using keyfile to decrypt...")
                subprocess.call("sudo cryptsetup luksOpen " + sdisk + " " + smapper + " --key-file " + keyfile, shell=True)
            else:
                print("keyfile not found or keyfile not set.\n\tAsking for passphrase...")
                subprocess.call("sudo cryptsetup luksOpen " + sdisk + " " + smapper, shell=True)
            if not os.path.isdir("/mnt/sbackup/sbak" + str(sfolder)):
                os.mkdir("/mnt/sbackup/sbak" + str(sfolder))
            subprocess.call("mount /dev/mapper/" + smapper + " /mnt/sbackup/sbak" + str(sfolder), shell=True)
            sfolder += 1
    for tdisk in TDISKS:
        if os.path.islink(tdisk):
            subprocess.call("sync", shell=True)
            tmapper = "tbackup" + str(tfolder)
            if os.path.isfile(keyfile):
                print("keyfile found. Using keyfile to decrypt...")
                subprocess.call("sudo cryptsetup luksOpen " + tdisk + " " + tmapper + " --key-file " + keyfile, shell=True)
            else:
                print("keyfile not found or keyfile not set.\n\tAsking for passphrase...")
                subprocess.call("sudo cryptsetup luksOpen " + tdisk + " " + tmapper, shell=True)
            if not os.path.isdir("/mnt/tbackup/tbak" + str(tfolder)):
                os.mkdir("/mnt/tbackup/tbak" + str(tfolder))
            subprocess.call("mount /dev/mapper/" + tmapper + " /mnt/tbackup/tbak" + str(tfolder), shell=True)
            tfolder += 1
def umount_disks():
    """unmounts and relocks disks"""
    # the globs are expanded by the shell (shell=True)
    subprocess.call("umount /mnt/pbackup/*", shell=True)
    subprocess.call("umount /mnt/sbackup/*", shell=True)
    subprocess.call("umount /mnt/tbackup/*", shell=True)
    subprocess.call("cryptsetup luksClose /dev/mapper/pbackup*", shell=True)
    subprocess.call("cryptsetup luksClose /dev/mapper/sbackup*", shell=True)
    subprocess.call("cryptsetup luksClose /dev/mapper/tbackup*", shell=True)
def check_disks():
    """checks to see how many disks exist, exits the program if none are attached"""
    pdisknum = 0
    sdisknum = 0
    tdisknum = 0
    for pdisk in PDISKS:
        if os.path.islink(pdisk):
            pdisknum += 1
        else:
            print("disk " + pdisk + " not detected.")
    for sdisk in SDISKS:
        if os.path.islink(sdisk):
            sdisknum += 1
        else:
            print("disk " + sdisk + " not detected.")
    for tdisk in TDISKS:
        if os.path.islink(tdisk):
            tdisknum += 1
        else:
            print("disk " + tdisk + " not detected.")
    total = pdisknum + sdisknum + tdisknum
    if total == 0:
        print("ERROR: no disks detected.")
        sys.exit()
    print("found " + str(total) + " attached backup disks")
    print(str(pdisknum) + " primary")
    print(str(sdisknum) + " secondary")
    print(str(tdisknum) + " tertiary")
    return total, pdisknum, sdisknum, tdisknum
### backup functions ###
# these need serious work. Need to get them to loop through available mounted
# disks in their categories and then execute rsync
def pbackup():
    """calls rsync to back up the entire system to all pdisks"""
    for dir in os.listdir("/mnt/pbackup"):
        if os.path.ismount("/mnt/pbackup/" + dir):
            subprocess.call("sync", shell=True)
            print("syncing disks with rsync...")
            # subprocess.call("rsync --progress --human-readable --numeric-ids --inplace --verbose --archive --delete-after --hard-links --xattrs --delete --compress --skip-compress={*.jpg,*.bz2,*.gz,*.tar,*.tar.gz,*.ogg,*.mp3,*.tar.xz,*.avi} /* /mnt/pbackup/" + dir + "/ --exclude={/sys/*,/mnt/*,/proc/*,/dev/*,/lost+found,/media/*,/tmp/*,/home/*/.gvfs/*,/home/*/downloads/*,/opt/*,/run/*}", shell=True)
            print("test1")
            subprocess.call("sync", shell=True)
            print("sync with /mnt/pbackup/" + dir + " complete.")
def sbackup():
    """calls rsync to back up everything under the SBACKUP folder to all sdisks"""
    for dir in os.listdir("/mnt/sbackup"):
        if os.path.ismount("/mnt/sbackup/" + dir):
            subprocess.call("sync", shell=True)
            # subprocess.call("rsync --progress --human-readable --numeric-ids --inplace --verbose --archive --delete-after --hard-links --xattrs --delete --compress --skip-compress={*.jpg,*.bz2,*.gz,*.tar,*.tar.gz,*.ogg,*.mp3,*.tar.xz,*.avi} " + SBACKUP + "/* /mnt/sbackup/" + dir + "/", shell=True)
            print("test2")
            subprocess.call("sync", shell=True)
            print("sync with /mnt/sbackup/" + dir + " complete.")
def tbackup():
    """calls rsync to back up everything under the TBACKUP folder to all tdisks"""
    for dir in os.listdir("/mnt/tbackup"):
        if os.path.ismount("/mnt/tbackup/" + dir):
            subprocess.call("sync", shell=True)
            # subprocess.call("rsync --progress --human-readable --numeric-ids --inplace --verbose --archive --delete-after --hard-links --xattrs --delete --compress --skip-compress={*.jpg,*.bz2,*.gz,*.tar,*.tar.gz,*.ogg,*.mp3,*.tar.xz,*.avi} " + TBACKUP + "/* /mnt/tbackup/" + dir + "/", shell=True)
            print("test3")
            subprocess.call("sync", shell=True)
            print("sync with /mnt/tbackup/" + dir + " complete.")
#### main ####
# check for root access:
if os.getuid() != 0:
    print("ERROR: script not run as root.\n\tThis script MUST be run as the root user.")
    sys.exit()
# program body
check_dirs()
check_disks()
mount_disks()
# pbackup()
# sbackup()
tbackup()
umount_disks()
print("backup process complete.")
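A side note on all the subprocess.call(..., shell=True) string building above: the same commands can be run without a shell by passing argument lists, which avoids quoting problems if a path ever contains spaces. A minimal sketch (the function name and the example device/mapper names are mine, mirroring the cryptsetup calls in the script; this is illustrative, not the script's actual code):

```python
import subprocess

def luks_open_cmd(device, mapper, keyfile=None):
    """Build the argv list for a cryptsetup luksOpen invocation."""
    cmd = ["cryptsetup", "luksOpen", device, mapper]
    if keyfile is not None:
        cmd += ["--key-file", keyfile]
    return cmd

# usage (requires root and a real LUKS device):
# subprocess.call(luks_open_cmd("/dev/disk/by-uuid/XXXX", "pbackup1", keyfile))
```

With argument lists there is also no need for sudo inside the command string, since the script already refuses to run as non-root.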
    Last edited by ParanoidAndroid (2013-08-07 20:01:07)

I've run into a problem on line 149. I'm asking the program to list the directories under the top-level backup directories under /mnt, check whether each one is a mountpoint, and unmount it if it is. It does this, but it appears to recurse into the directories below the ones I'm asking it to check. The output is:
    checking for mountpoints...
    primary mountpoint exists.
    secondary mountpoint exists.
    tertiary mountpoint exists.
    disk /dev/disk/by-uuid/16d64026-28bd-4e1f-a452-74e76bb4d47b not detected.
    found 1 attached backup disks
    0 Primary
    0 secondary
    1 tertiary
    keyfile found. Using keyfile to decrypt...
    mounting tbackup1 at /mnt/tbak1
    test3
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    not a mountpoint
    Device /dev/mapper/pbackup* is not active.
    Device /dev/mapper/sbackup* is not active.
    backup process complete.
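For what it's worth, os.listdir() does not recurse, and os.path.ismount() must be given the joined path, so a run of "not a mountpoint" lines usually means extra top-level entries really exist under that directory, e.g. files copied into an unmounted mountpoint by an earlier step. A minimal sketch of the check (function and message text are mine, not from the script):

```python
import os

def report_mounts(base):
    """Check each immediate child of base; listdir returns bare names,
    so the path must be joined before calling ismount."""
    for name in os.listdir(base):
        path = os.path.join(base, name)
        if os.path.ismount(path):
            print(path + " is a mountpoint")
        else:
            print(path + " is not a mountpoint")
```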
Here is the code for the entire script. It has been much modified from the previously posted version, so I've included all of the code rather than just the section in question, for reference. As I said, the section that seems to be causing the issue is on line 149.
    #!/usr/bin/python
    # Backup Script
#### preferences ####
# set your primary/secondary backup disks, encryption, and keyfile (if applicable) here.
# backup disks
# As used here:
#   primary backups are backups of the entire system made to an external drive
#   secondary and tertiary backups are backups of individual folders, such as documents
# primary backup disks by UUID:
PDISKS = ["/dev/disk/by-uuid/16d64026-28bd-4e1f-a452-74e76bb4d47b"]
# secondary backup disks by UUID:
SDISKS = []
# tertiary backup disks by UUID:
TDISKS = ["/dev/disk/by-uuid/39543e6e-cf50-4416-9669-e97a6abd2a37"]
# backup paths
# these are the paths of the folders you wish to back up to secondary
# and tertiary disks, respectively. Primary disks back up the
# contents of the root filesystem (/*). NO TRAILING SLASHES.
SBACKUP = "/home/bryant"
TBACKUP = "/home/bryant/docs"
# use encryption:
use_encryption = True
# keyfile
# full path to your keyfile; this assumes a single keyfile for all backup disks.
# set this to None if you don't have a single keyfile for all of your backups.
keyfile = "/usr/local/bin/backup.keyfile"
# import modules
import os
import subprocess
import sys
### preliminary functions ###
# these do the setup and post-copy work
def check_dirs():
    """checks that the folders which contain the mountpoints exist, creates them if they don't"""
    print("checking for mountpoints...")
    if os.path.isdir("/mnt/pbackup"):
        print("primary mountpoint exists.")
    else:
        print("mountpoint /mnt/pbackup does not exist.\n\tcreating...")
        os.mkdir("/mnt/pbackup")
    if os.path.isdir("/mnt/sbackup"):
        print("secondary mountpoint exists.")
    else:
        print("mountpoint /mnt/sbackup does not exist.\n\tcreating...")
        os.mkdir("/mnt/sbackup")
    if os.path.isdir("/mnt/tbackup"):
        print("tertiary mountpoint exists.")
    else:
        print("mountpoint /mnt/tbackup does not exist.\n\tcreating...")
        os.mkdir("/mnt/tbackup")
def mount_disks(wdisk):
    """mounts the available backup disks of one category ("p", "s" or "t") in their respective subdirectories"""
    pfolder = 1
    sfolder = 1
    tfolder = 1
    pmapper = "pbackup"
    smapper = "sbackup"
    tmapper = "tbackup"
    if wdisk == "p":
        for pdisk in PDISKS:
            if os.path.islink(pdisk):
                subprocess.call("sync", shell=True)
                if os.path.isfile(keyfile):
                    print("keyfile found. Using keyfile to decrypt...")
                    subprocess.call("sudo cryptsetup luksOpen " + pdisk + " " + pmapper + str(pfolder) + " --key-file " + keyfile, shell=True)
                else:
                    print("keyfile not found or keyfile not set.\n\tAsking for passphrase...")
                    subprocess.call("sudo cryptsetup luksOpen " + pdisk + " " + pmapper + str(pfolder), shell=True)
                if not os.path.isdir("/mnt/pbackup/pbak" + str(pfolder)):
                    os.mkdir("/mnt/pbackup/pbak" + str(pfolder))
                print("mounting " + pmapper + str(pfolder) + " at /mnt/pbackup/pbak" + str(pfolder))
                subprocess.call("mount /dev/mapper/" + pmapper + str(pfolder) + " /mnt/pbackup/pbak" + str(pfolder), shell=True)
                pfolder += 1
    elif wdisk == "s":
        for sdisk in SDISKS:
            if os.path.islink(sdisk):
                subprocess.call("sync", shell=True)
                if os.path.isfile(keyfile):
                    print("keyfile found. Using keyfile to decrypt...")
                    subprocess.call("sudo cryptsetup luksOpen " + sdisk + " " + smapper + str(sfolder) + " --key-file " + keyfile, shell=True)
                else:
                    print("keyfile not found or keyfile not set.\n\tAsking for passphrase...")
                    subprocess.call("sudo cryptsetup luksOpen " + sdisk + " " + smapper + str(sfolder), shell=True)
                if not os.path.isdir("/mnt/sbackup/sbak" + str(sfolder)):
                    os.mkdir("/mnt/sbackup/sbak" + str(sfolder))
                print("mounting " + smapper + str(sfolder) + " at /mnt/sbackup/sbak" + str(sfolder))
                subprocess.call("mount /dev/mapper/" + smapper + str(sfolder) + " /mnt/sbackup/sbak" + str(sfolder), shell=True)
                sfolder += 1
    elif wdisk == "t":
        for tdisk in TDISKS:
            if os.path.islink(tdisk):
                subprocess.call("sync", shell=True)
                if os.path.isfile(keyfile):
                    print("keyfile found. Using keyfile to decrypt...")
                    subprocess.call("sudo cryptsetup luksOpen " + tdisk + " " + tmapper + str(tfolder) + " --key-file " + keyfile, shell=True)
                else:
                    print("keyfile not found or keyfile not set.\n\tAsking for passphrase...")
                    subprocess.call("sudo cryptsetup luksOpen " + tdisk + " " + tmapper + str(tfolder), shell=True)
                if not os.path.isdir("/mnt/tbackup/tbak" + str(tfolder)):
                    os.mkdir("/mnt/tbackup/tbak" + str(tfolder))
                print("mounting " + tmapper + str(tfolder) + " at /mnt/tbackup/tbak" + str(tfolder))
                subprocess.call("mount /dev/mapper/" + tmapper + str(tfolder) + " /mnt/tbackup/tbak" + str(tfolder), shell=True)
                tfolder += 1
def umount_disks():
    """unmounts and relocks disks"""
    for pdir in os.listdir("/mnt/pbackup"):
        if os.path.ismount("/mnt/pbackup/" + pdir):
            subprocess.call("umount /mnt/pbackup/" + pdir, shell=True)
        else:
            print("not a mountpoint")
    for sdir in os.listdir("/mnt/sbackup"):
        if os.path.ismount("/mnt/sbackup/" + sdir):
            subprocess.call("umount /mnt/sbackup/" + sdir, shell=True)
        else:
            print("not a mountpoint")
    for tdir in os.listdir("/mnt/tbackup"):
        if os.path.ismount("/mnt/tbackup/" + tdir):
            subprocess.call("umount /mnt/tbackup/" + tdir, shell=True)
        else:
            print("not a mountpoint")
    subprocess.call("cryptsetup luksClose /dev/mapper/pbackup*", shell=True)
    subprocess.call("cryptsetup luksClose /dev/mapper/sbackup*", shell=True)
    subprocess.call("cryptsetup luksClose /dev/mapper/tbackup*", shell=True)
def check_disks():
    """checks to see how many disks exist, exits the program if none are attached"""
    pdisknum = 0
    sdisknum = 0
    tdisknum = 0
    for pdisk in PDISKS:
        if os.path.islink(pdisk):
            pdisknum += 1
        else:
            print("\ndisk " + pdisk + " not detected.")
    for sdisk in SDISKS:
        if os.path.islink(sdisk):
            sdisknum += 1
        else:
            print("\ndisk " + sdisk + " not detected.")
    for tdisk in TDISKS:
        if os.path.islink(tdisk):
            tdisknum += 1
        else:
            print("\ndisk " + tdisk + " not detected.")
    total = pdisknum + sdisknum + tdisknum
    if total == 0:
        print("\nERROR: no disks detected.")
        sys.exit()
    print("found " + str(total) + " attached backup disks")
    print(str(pdisknum) + " primary")
    print(str(sdisknum) + " secondary")
    print(str(tdisknum) + " tertiary")
    return total, pdisknum, sdisknum, tdisknum
### backup functions ###
# these loop through the mounted disks in their categories and then execute rsync
def pbackup():
    """calls rsync to back up the entire system to all pdisks"""
    for dir in os.listdir("/mnt/pbackup"):
        if os.path.ismount("/mnt/pbackup/" + dir):
            subprocess.call("sync", shell=True)
            print("syncing disks with rsync...")
            subprocess.call("rsync --progress --human-readable --numeric-ids --inplace --verbose --archive --delete-after --hard-links --xattrs --delete --compress --skip-compress={*.jpg,*.bz2,*.gz,*.tar,*.tar.gz,*.ogg,*.mp3,*.tar.xz,*.avi} /* /mnt/pbackup/" + dir + "/ --exclude={/sys/*,/mnt/*,/proc/*,/dev/*,/lost+found,/media/*,/tmp/*,/home/*/.gvfs/*,/home/*/downloads/*,/opt/*,/run/*}", shell=True)
            subprocess.call("sync", shell=True)
def sbackup():
    """calls rsync to back up everything under the SBACKUP folder to all sdisks"""
    for dir in os.listdir("/mnt/sbackup"):
        if os.path.ismount("/mnt/sbackup/" + dir):
            subprocess.call("sync", shell=True)
            subprocess.call("rsync --progress --human-readable --numeric-ids --inplace --verbose --archive --delete-after --hard-links --xattrs --delete --compress --skip-compress={*.jpg,*.bz2,*.gz,*.tar,*.tar.gz,*.ogg,*.mp3,*.tar.xz,*.avi} " + SBACKUP + "/* /mnt/sbackup/" + dir + "/", shell=True)
            subprocess.call("sync", shell=True)
def tbackup():
    """calls rsync to back up everything under the TBACKUP folder to all tdisks"""
    for dir in os.listdir("/mnt/tbackup"):
        if os.path.ismount("/mnt/tbackup/" + dir):
            subprocess.call("sync", shell=True)
            subprocess.call("rsync --progress --human-readable --numeric-ids --inplace --verbose --archive --delete-after --hard-links --xattrs --delete --compress --skip-compress={*.jpg,*.bz2,*.gz,*.tar,*.tar.gz,*.ogg,*.mp3,*.tar.xz,*.avi} " + TBACKUP + "/* /mnt/tbackup/" + dir + "/", shell=True)
            subprocess.call("sync", shell=True)
#### main ####
# check for root access:
if os.getuid() != 0:
    print("ERROR: script not run as root.\n\tThis script MUST be run as the root user.")
    sys.exit()
# program body
check_dirs()
d = check_disks()
if d[1] > 0:
    mount_disks("p")
    pbackup()
if d[2] > 0:
    mount_disks("s")
    sbackup()
if d[3] > 0:
    mount_disks("t")
    tbackup()
umount_disks()
print("backup process complete.")
    Last edited by ParanoidAndroid (2013-08-11 00:32:02)

  • Python tool for keeping track of strings

    I wrote this just now. It associates keys to strings; basically a centralized means of storing values.
#!/usr/bin/env python
from cPickle import load, dump
from sys import argv
from os.path import expanduser

strings_file = expanduser('~/lib/cfg-strings')
try:
    with open(strings_file) as f:
        strings = load(f)
except (IOError, EOFError):
    strings = {}

if len(argv) < 2:
    print('''usage:
    {0} dump
    {0} get <key>
    {0} del <key>
    {0} set <key> <val>'''.format(argv[0]))
elif len(argv) == 2:
    if argv[1] == 'dump':
        for k in strings:
            print(k + ': ' + strings[k])
elif len(argv) == 3:
    if argv[1] == 'get':
        if argv[2] in strings:
            print(strings[argv[2]])
    elif argv[1] == 'del':
        if argv[2] in strings:
            del strings[argv[2]]
elif len(argv) == 4:
    if argv[1] == 'set':
        strings[argv[2]] = argv[3]

# write the store back out so that both 'set' and 'del' persist
with open(strings_file, 'w') as f:
    dump(strings, f)
    Replace '~/lib/cfg-strings' with your preferred destination for the pickle file.
    As an example, I have this at the end of my .xinitrc:
    exec $(cfg get wm)
    so all I have to do is type "cfg set wm ..." to change my window manager. Note that on my system, the script is named 'cfg', so you'll want to change that depending on what you call it.
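The same store can also be opened from other Python code; here is a minimal sketch using Python 3's pickle module (the script above uses Python 2's cPickle), with the file path left as something you would adjust yourself:

```python
import pickle

def read_strings(path):
    """Load the key/value store, returning {} if the file is missing or unreadable."""
    try:
        with open(path, 'rb') as f:
            return pickle.load(f)
    except (OSError, EOFError, pickle.PickleError):
        return {}

def write_strings(path, strings):
    """Write the whole store back out."""
    with open(path, 'wb') as f:
        pickle.dump(strings, f)
```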
    To be honest, though, I think everyone has written something like this at least once.
    Last edited by Peasantoid (2010-01-18 01:29:14)

Nice idea Peasantoid! I have wanted something similar for a while now, but wasn't exactly sure how best to do it. Here's my version. It's based on yours, though since I prefer plain text for the settings file I used JSON.
#!/usr/bin/python
import json
import os.path
import sys

SETTINGS_FILE = os.path.expanduser('~/configs/settings.json')

def dump(s):
    print json.dumps(s, sort_keys=True, indent=2)

def get(s, key):
    if key in s:
        print s[key]

def set(s, key, val):
    s[key] = val
    save(s)

def delete(s, key):
    if key in s:
        del s[key]
        save(s)

def save(s):
    with open(SETTINGS_FILE, 'w') as f:
        json.dump(s, f)

def usage():
    lines = [
        "usage: %s dump (default)",
        "       %s get <key>",
        "       %s set <key> <val>",
        "       %s delete <key>"
    ]
    for x in lines:
        print x % sys.argv[0]

def main():
    try:
        settings = json.load(open(SETTINGS_FILE))
    except (IOError, ValueError):
        settings = {}
    a = sys.argv
    n = len(a)
    if n == 1 or (n == 2 and a[1] == 'dump'):
        dump(settings)
    elif n == 3 and a[1] == 'get':
        get(settings, a[2])
    elif n == 3 and a[1] == 'delete':
        delete(settings, a[2])
    elif n == 4 and a[1] == 'set':
        set(settings, a[2], a[3])
    else:
        usage()

if __name__ == "__main__":
    main()

  • AUR Helper Scripts

    This thread is intended to provide a "locator" resource for all the AUR helper scripts out there - for searching and/or building from the AUR.  This does include some of the GUI pacman frontends, and it is ok to repost those here.
I would appreciate it if the author of each front end posted a small (2-3 line) description of their creation, along with a homepage link and an AUR link (where applicable).  Screenshots would also be nice (if applicable).
    This thread is only intended for this purpose and will be cleaned of other posts.  If this thread ends up locked and you have a front end to add, feel free to PM any of the moderator staff and we will unlock it for you.

    aur-get
aur-get is a simple AUR manager written in Python. It helps with searching, downloading, installing and updating packages from the AUR.
aur-get is no longer maintained. The code is ugly and buggy. Don't use it.
    Homepage: http://husio.arch-linux.pl/aur_get/about.html
    PKGBUILD: http://aur.archlinux.org/packages/aur-g … t/PKGBUILD
    Screenshot 1: http://img100.imageshack.us/img100/6125 … 8schg5.png
    Screenshot 2: http://img152.imageshack.us/img152/4695 … 0scsu7.png
    Last edited by Husio (2008-04-01 15:09:08)

  • PYTHON in HTMLDB

I know nothing about Python, so please pretend that you are talking to a 6-year-old ...
    my setup: APEX 2.2.1 on ORACLE 10.2.0.2 on SOLARIS 10
I would like to implement this Python library (or, better said, these two libraries): http://bitworking.org/projects/sparklines
Since I really know nothing about Python, I'm really not sure where to start and would really appreciate any help.
    thank you
    jiri

    Jiri,
based on the information on the web page, you can also use their system to generate the graphic; you just have to generate a URL in the following format.
    http://bitworking.org/projects/sparklines/spark.cgi?type=smooth&d=88,84,82,92,82,86,66,82,44,64,66,88,96,80,24,26,14,0,0,26,8,6,6,24,52,66,36,6,10,14,30&height=20&limits=0,100&min-m=false&max-m=false&last-m=false&min-color=red&max-color=blue&last-color=green&step=2
Then you don't have the hassle of installing Python on your local Apache. A detailed description of the parameters can be found on the web page.
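Since the sparkline is just a URL with query parameters, it can also be generated programmatically; a hedged Python sketch (parameter names taken from the URL above, the function itself is illustrative):

```python
from urllib.parse import urlencode  # Python 3; use urllib.urlencode on Python 2

def sparkline_url(values, kind='smooth', height=20, limits=(0, 100), step=2):
    """Build a spark.cgi URL for a sequence of numeric values."""
    params = [
        ('type', kind),
        ('d', ','.join(str(v) for v in values)),
        ('height', height),
        ('limits', '%d,%d' % limits),
        ('step', step),
    ]
    return 'http://bitworking.org/projects/sparklines/spark.cgi?' + urlencode(params)
```

In APEX this string could be assembled the same way in PL/SQL and used as the src of an img tag.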
    Patrick
    Check out my APEX-blog: http://inside-apex.blogspot.com
    Check out the ApexLib Framework: http://apexlib.sourceforge.net

  • Has anyone implemented JavaScript Object Notation in LabVIEW?

    Hi Guys,
    I was looking at writing an application using JSON-RPC
    http://en.wikipedia.org/wiki/JSON-RPC
    Has anyone implemented something similar, or are there any examples of this?
    I was going to write it with TCP VIs, and use string parsing to work through any responses, but if anyone had some advice, it would be appreciated.
    Cheers,
    Anthony

    Anthony,
    This sounds like a very interesting project and I would like to hear how it goes for you.
    You can of course choose to implement a JSON-RPC system using either TCP or HTTP as the transport. TCP would give you a potentially more responsive system but you would have to implement much more of the system from scratch as it were.
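For reference, the wire format itself is tiny; a Python sketch of building a request and unpacking a response in the JSON-RPC 1.0 style (method/params/id, with error null on success), which is essentially what a string-parsing implementation over TCP VIs would have to produce and consume:

```python
import json

def make_request(method, params, req_id=1):
    """Serialize a JSON-RPC 1.0-style request body."""
    return json.dumps({'method': method, 'params': params, 'id': req_id})

def parse_response(body):
    """Return the result field, raising if the peer reported an error."""
    msg = json.loads(body)
    if msg.get('error'):
        raise RuntimeError(str(msg['error']))
    return msg['result']
```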
    As LaRisa_s was saying, if LabVIEW 8.6 is available to you, you can take advantage of the Web Services feature which will do a lot of the work for you. Your VI that runs as a web service would have to parse the data sent from the client to determine the correct VI to call and then convert the parameters and call the VI.
    You are completely correct that there is no direct support for JSON-RPC. We evaluated several Web Service and RPC mechanisms before deciding on RESTful web services for LabVIEW 8.6. In fact, if you have LV8.6 and JSON-RPC isn't a hard requirement, I would strongly recommend looking at using the RESTful mechanism that is built in. If you can use it then much less work will be required of you on the server side of your application.
    If you do need to go with JSON-RPC I would be interested to hear what factors went into the decision so we can improve LabVIEW's built in web services.
    Either way- let us know how your project goes.

  • Review my PySide (python 3) package

Hello, I noticed that the PySide AUR packages still use Python 2. Since I can't find their Python 3 counterparts, I thought I might use this as an opportunity to build my own packages.
    This is my first time writing an AUR package, so please review them and give me your thoughts on my work so far.
    - - - EDIT - - -
Thank you for trying out my package; the latest version can be found in my repository:
    https://bitbucket.org/arsooy/pyside_aur/
I'll try to answer any questions as fast as I can. Please keep in mind that I can't log in to the Arch Linux forum regularly at this time, so I might be a little bit late (or maybe waaay too late) to respond if you have an issue. I usually push my changes to bitbucket before going to bed, so people should just check for new commits at my repository instead of waiting for posts from me in this thread.
    Thank you.
    Last edited by arlx_ignacy (2012-09-06 12:16:44)

First, I'm sorry for the late reply; I'm one of those (rare) people who check their e-mail once every two weeks or so.
    andre.vmatos wrote:
    Hi.
    Thanks very much for this effort, since I was waiting for a pyside python3 capable pkgbuild.
    From hg repository, python-shibokengenerator and python-libshiboken compiled fine. But python-pyside package failed in make with the following error:
    [ 54%] Building CXX object PySide/QtGui/CMakeFiles/QtGui.dir/PySide/QtGui/qtexttablecellformat_wrapper.cpp.o
    [ 54%] Building CXX object PySide/QtGui/CMakeFiles/QtGui.dir/PySide/QtGui/qtexttablecell_wrapper.cpp.o
    [ 54%] Building CXX object PySide/QtGui/CMakeFiles/QtGui.dir/PySide/QtGui/qtexttableformat_wrapper.cpp.o
    In file included from /sec/Dev/PKGBUILDs/pyside_aur/python-pyside/src/build/PySide/QtGui/PySide/QtGui/qtexttableformat_wrapper.cpp:36:0:
    /sec/Dev/PKGBUILDs/pyside_aur/python-pyside/src/build/PySide/QtGui/PySide/QtGui/pyside_qtgui_python.h: In function ‘PyTypeObject* Shiboken::SbkType() [with T = QStyleOptionButton, PyTypeObject = _typeobject]’:
    /sec/Dev/PKGBUILDs/pyside_aur/python-pyside/src/build/PySide/QtGui/PySide/QtGui/pyside_qtgui_python.h:1181:65: internal compiler error: Falha de segmentação
    Please submit a full bug report,
    with preprocessed source if appropriate.
    See <https://bugs.archlinux.org/> for instructions.
    make[2]: ** [PySide/QtGui/CMakeFiles/QtGui.dir/PySide/QtGui/qtexttableformat_wrapper.cpp.o] Erro 1
    make[1]: ** [PySide/QtGui/CMakeFiles/QtGui.dir/all] Erro 2
    make: ** [all] Erro 2
    Can you help fixing it? Thnx again
As for your problem, I cannot reproduce this. I just removed my python-shiboken* + python-pyside* packages, rebuilt them using the AUR scripts from my bitbucket, and managed to install them (yet again) successfully. Please provide more detail about your system.
    andre.vmatos wrote:
    Hi.
Trying to compile again, but now with pyside for python2 (from the AUR) installed, python-pyside compiled fine. However, the Python libraries were installed into /usr/lib/python2.7. The contents of the package follow:
    [root@midichlorian python-pyside #] tar -tf python-pyside-1.1.0-1-x86_64.pkg.tar.xz
    .PKGINFO
    usr/
    usr/include/
    usr/lib/
    usr/share/
    usr/share/PySide-py3/
    usr/share/PySide-py3/typesystems/
    usr/share/PySide-py3/typesystems/typesystem_help.xml
    usr/share/PySide-py3/typesystems/typesystem_core_mac.xml
    usr/share/PySide-py3/typesystems/typesystem_webkit.xml
    usr/share/PySide-py3/typesystems/typesystem_opengl.xml
    usr/share/PySide-py3/typesystems/typesystem_xml.xml
    usr/share/PySide-py3/typesystems/typesystem_gui_win.xml
    usr/share/PySide-py3/typesystems/typesystem_core_win.xml
    usr/share/PySide-py3/typesystems/typesystem_gui.xml
    usr/share/PySide-py3/typesystems/typesystem_webkit_simulator.xml
    usr/share/PySide-py3/typesystems/typesystem_core_common.xml
    usr/share/PySide-py3/typesystems/typesystem_svg.xml
    usr/share/PySide-py3/typesystems/typesystem_test.xml
    usr/share/PySide-py3/typesystems/typesystem_gui_x11.xml
    usr/share/PySide-py3/typesystems/typesystem_phonon.xml
    usr/share/PySide-py3/typesystems/typesystem_gui_simulator.xml
    usr/share/PySide-py3/typesystems/typesystem_network.xml
    usr/share/PySide-py3/typesystems/typesystem_declarative.xml
    usr/share/PySide-py3/typesystems/typesystem_core_x11.xml
    usr/share/PySide-py3/typesystems/typesystem_core.xml
    usr/share/PySide-py3/typesystems/typesystem_core_maemo.xml
    usr/share/PySide-py3/typesystems/typesystem_sql.xml
    usr/share/PySide-py3/typesystems/typesystem_scripttools.xml
    usr/share/PySide-py3/typesystems/typesystem_gui_maemo.xml
    usr/share/PySide-py3/typesystems/typesystem_multimedia.xml
    usr/share/PySide-py3/typesystems/typesystem_gui_mac.xml
    usr/share/PySide-py3/typesystems/typesystem_templates.xml
    usr/share/PySide-py3/typesystems/typesystem_uitools.xml
    usr/share/PySide-py3/typesystems/typesystem_xmlpatterns.xml
    usr/share/PySide-py3/typesystems/typesystem_gui_common.xml
    usr/share/PySide-py3/typesystems/typesystem_script.xml
    usr/lib/libpyside-py3-python2.7.so.1.1.0
    usr/lib/cmake/
    usr/lib/libpyside-py3-python2.7.so
    usr/lib/pkgconfig/
    usr/lib/python2.7/
    usr/lib/libpyside-py3-python2.7.so.1.1
    usr/lib/python2.7/site-packages/
    usr/lib/python2.7/site-packages/PySide/
    usr/lib/python2.7/site-packages/PySide/phonon.so
    usr/lib/python2.7/site-packages/PySide/QtUiTools.so
    usr/lib/python2.7/site-packages/PySide/QtXml.so
    usr/lib/python2.7/site-packages/PySide/QtMultimedia.so
    usr/lib/python2.7/site-packages/PySide/QtScript.so
    usr/lib/python2.7/site-packages/PySide/QtSvg.so
    usr/lib/python2.7/site-packages/PySide/QtScriptTools.so
    usr/lib/python2.7/site-packages/PySide/QtGui.so
    usr/lib/python2.7/site-packages/PySide/__init__.py
    usr/lib/python2.7/site-packages/PySide/QtNetwork.so
    usr/lib/python2.7/site-packages/PySide/QtXmlPatterns.so
    usr/lib/python2.7/site-packages/PySide/QtDeclarative.so
    usr/lib/python2.7/site-packages/PySide/QtCore.so
    usr/lib/python2.7/site-packages/PySide/QtHelp.so
    usr/lib/python2.7/site-packages/PySide/QtTest.so
    usr/lib/python2.7/site-packages/PySide/QtWebKit.so
    usr/lib/python2.7/site-packages/PySide/QtOpenGL.so
    usr/lib/python2.7/site-packages/PySide/QtSql.so
    usr/lib/pkgconfig/pyside-py3.pc
    usr/lib/cmake/PySide-py3-1.1.0/
    usr/lib/cmake/PySide-py3-1.1.0/PySideConfig.cmake
    usr/lib/cmake/PySide-py3-1.1.0/PySideConfig-python2.7.cmake
    usr/lib/cmake/PySide-py3-1.1.0/PySideConfigVersion.cmake
    usr/include/PySide-py3/
    usr/include/PySide-py3/QtGui/
    usr/include/PySide-py3/QtNetwork/
    usr/include/PySide-py3/pysidemetafunction.h
    usr/include/PySide-py3/dynamicqmetaobject.h
    usr/include/PySide-py3/phonon/
    usr/include/PySide-py3/pysideconversions.h
    usr/include/PySide-py3/QtXmlPatterns/
    usr/include/PySide-py3/pysideweakref.h
    usr/include/PySide-py3/pysidemacros.h
    usr/include/PySide-py3/QtCore/
    usr/include/PySide-py3/QtDeclarative/
    usr/include/PySide-py3/QtScriptTools/
    usr/include/PySide-py3/destroylistener.h
    usr/include/PySide-py3/pysideproperty.h
    usr/include/PySide-py3/QtMultimedia/
    usr/include/PySide-py3/QtSvg/
    usr/include/PySide-py3/QtScript/
    usr/include/PySide-py3/QtWebKit/
    usr/include/PySide-py3/pyside.h
    usr/include/PySide-py3/globalreceiver.h
    usr/include/PySide-py3/QtOpenGL/
    usr/include/PySide-py3/QtHelp/
    usr/include/PySide-py3/pysidesignal.h
    usr/include/PySide-py3/pysideqflags.h
    usr/include/PySide-py3/pyside_global.h
    usr/include/PySide-py3/QtUiTools/
    usr/include/PySide-py3/signalmanager.h
    usr/include/PySide-py3/QtXml/
    usr/include/PySide-py3/pysideclassinfo.h
    usr/include/PySide-py3/QtSql/
    usr/include/PySide-py3/QtTest/
    usr/include/PySide-py3/QtTest/pyside_qttest_python.h
    usr/include/PySide-py3/QtSql/pyside_qtsql_python.h
    usr/include/PySide-py3/QtXml/pyside_qtxml_python.h
    usr/include/PySide-py3/QtUiTools/pyside_qtuitools_python.h
    usr/include/PySide-py3/QtHelp/pyside_qthelp_python.h
    usr/include/PySide-py3/QtOpenGL/pyside_qtopengl_python.h
    usr/include/PySide-py3/QtWebKit/pyside_qtwebkit_python.h
    usr/include/PySide-py3/QtScript/pyside_qtscript_python.h
    usr/include/PySide-py3/QtSvg/pyside_qtsvg_python.h
    usr/include/PySide-py3/QtMultimedia/pyside_qtmultimedia_python.h
    usr/include/PySide-py3/QtScriptTools/pyside_qtscripttools_python.h
    usr/include/PySide-py3/QtDeclarative/pyside_qtdeclarative_python.h
    usr/include/PySide-py3/QtCore/pyside_qtcore_python.h
    usr/include/PySide-py3/QtXmlPatterns/pyside_qtxmlpatterns_python.h
    usr/include/PySide-py3/phonon/pyside_phonon_python.h
    usr/include/PySide-py3/QtNetwork/pyside_qtnetwork_python.h
    usr/include/PySide-py3/QtGui/pyside_qtgui_python.h
    usr/include/PySide-py3/QtGui/qpytextobject.h
    Thanks again.
    I see you are running as root; I don't know if this is related to the problem, but maybe you should try running makepkg as a normal user?
    As for the files, they are going to the right locations. It seems your system uses Python 2 modules (usr/lib/libpyside-py3-python2.7.so) when building pyside; this should not happen if python-shibokengenerator is installed.
    Are you sure you had python-shibokengenerator and python-libshiboken installed before building python-pyside? Most likely cmake failed to detect your Python 3 module. If you have python-shibokengenerator (which uses Python 3) installed, it will tell cmake to use your Python 3 interpreter.
    As a comparison, this is how I install my pyside packages:
    1) build python-shibokengenerator
    2) build python-libshiboken
    3) install python-shibokengenerator
    4) install python-libshiboken
    5) build python-pyside
    6) install python-pyside
    7) build python-pyside-tools
    8) install python-pyside-tools
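    The steps above can be sketched as a short script. The commands are illustrative, not a definitive recipe: it assumes each AUR package has been checked out into a directory of the same name, and that you install each built package with pacman -U before building the packages that depend on it.

    ```shell
    # Build each package with makepkg, then install it before moving on,
    # so cmake can find the installed Python 3 shiboken when building
    # python-pyside (otherwise it may fall back to Python 2 modules).
    build_and_install() {
        ( cd "$1" && makepkg -s )          # build, resolving dependencies
        sudo pacman -U "$1"/*.pkg.tar.xz   # install before the next build
    }

    if command -v makepkg >/dev/null 2>&1; then
        # Generator and library first, then the bindings, then the tools.
        build_and_install python-shibokengenerator
        build_and_install python-libshiboken
        build_and_install python-pyside
        build_and_install python-pyside-tools
    else
        echo "makepkg not available; run these steps on an Arch system"
    fi
    ```

    The key point is the interleaving: installing python-shibokengenerator and python-libshiboken before the python-pyside build is what lets cmake detect the Python 3 interpreter.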
    Thanks.
