ODI loop in package

Hi
I am moving data from a table into a file, so I am performing a loop in a package.
The steps I did were:
1> Took a variable (var1) and assigned its value using SQL.
2> Took a counter variable (var2) and set its value to 1.
3> Evaluated var1 in an Evaluate Variable step (condition var1>var2).
4> Loaded the data to the file.
5> Incremented var2 by 1 and moved back to step 3.
The problem is with step 3: even when the condition var1>var2 fails, the loop keeps executing.
What should I do to break out of it?
Thanks
Sri

Thanks Phani,
Actually the issue got solved in a different way.
At step 3, while evaluating var1>var2, it was going into an infinite loop because the condition was not picking up the value of var2. So at step 3, reference the variable as project_name.var2 (i.e. fully qualified with the project name) and the problem is solved.
Sri
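For reference, the package logic above is equivalent to this minimal Python sketch (var1 and var2 mirror the ODI variables; load_chunk_to_file is a hypothetical stand-in for the "load data to file" step):

def load_chunk_to_file(i):
    # hypothetical stand-in for step 4 (the interface that writes to the file)
    print("loading chunk %d to file" % i)

var1 = 10  # step 1: value refreshed via SQL (placeholder)
var2 = 1   # step 2: counter starts at 1
while var1 > var2:            # step 3: the Evaluate Variable condition
    load_chunk_to_file(var2)  # step 4
    var2 = var2 + 1           # step 5: increment and loop back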

Similar Messages

  • Event based Automatic execution of ODI scenario (or package)

    My requirement is that whenever my source XML data file is replaced with a new file (the file name remains the same, by the way; the new file is FTP'ed to this location), my ODI scenario (or package) should be executed automatically.
    I don't wish to achieve this using web-services.
    I am using ODI 11g.
    Any help appreciated.

    Thanks a lot Sutirtha Roy.
    I achieved what I was trying to, but one issue is still there.
    These are the steps I follow in my ODI package:
    Step 1:
    Odi File Wait (Waits for XML input file)
    Step 2:
    Execute interface (this interface actually does transfer of Src data of XML to Oracle db)
    Step 3:
    ODI File Move (Moves processed file to some other dir)
    ---- At this point I see a .lck lock file created for the input file
    Step 4:
    ODI File Delete (deletes .lck file)
    Step 5:
    Loop Back to "Odi File Wait" step 1
    But after the first file is processed and a second file is received in the source directory, the ODI package could not load its data into the DB.
    The interface executed as if it had nothing to process.
    I suspect the file is locked by the driver.
    The deletion in step 4 didn't help.
    How can I resolve this?
    Is there a way to execute an 'UNLOCK FILE <fileName>' command on the driver?
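    A minimal Jython sketch of what such a step could look like, assuming the XML driver URL and schema from your topology (the path and schema name below are placeholders; UNLOCK FILE is the driver command quoted above, and SYNCHRONIZE asks the driver to re-read the file):

    from java.lang import Class
    from java.sql import DriverManager

    # connect straight to the ODI XML driver, release the lock on the file,
    # then force a re-read of the replaced file before looping back to File Wait
    Class.forName("com.sunopsis.jdbc.driver.xml.SnpsXmlDriver")
    conn = DriverManager.getConnection("jdbc:snps:xml?f=/data/in/source.xml&s=MYSCHEMA")
    stmt = conn.createStatement()
    stmt.execute("UNLOCK FILE /data/in/source.xml")
    stmt.execute("SYNCHRONIZE ALL")
    stmt.close()
    conn.close()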

  • WinRM endless loop script/package sccm 2012

    I'm trying to enable WinRM in a task sequence for SCCM 2012 during installation of Windows 7 clients.
    winrm quickconfig -quiet -force
    or
    cmd /c winrm quickconfig -quiet -force
    I have tried variations of this, but I always get an endless loop; it doesn't matter whether I use a command line or a package with a bat file.
    Anyone else having this issue?

    What I mean by a loop is that it keeps rerunning the same command over and over again. In Task Manager I see multiple cmd processes running. If I run the command manually in cmd, it configures everything correctly.
    Here is the essential part of execmgr.log:
    <![LOG[Successfully prepared command line "C:\WINDOWS\system32\cmd.exe" /c winrm quickconfig -quiet -force]LOG]!><time="14:26:03.416-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2572" file="scriptexecution.cpp:650">
    <![LOG[Command line = "C:\WINDOWS\system32\cmd.exe" /c winrm quickconfig -quiet -force, Working Directory = C:\Windows\ccmcache\13\]LOG]!><time="14:26:03.416-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2572" file="scriptexecution.cpp:352">
    <![LOG[Created Process for the passed command line]LOG]!><time="14:26:03.427-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2572" file="scriptexecution.cpp:513">
    <![LOG[Raising event:
    [SMS_CodePage(850), SMS_LocaleID(1053)]
    instance of SoftDistProgramStartedEvent
     AdvertisementId = "S012000E";
     ClientID = "GUID:5b5d231d-116e-47f0-81d0-c93839a752e2";
     CommandLine = "\"C:\\WINDOWS\\system32\\cmd.exe\" /c winrm quickconfig -quiet -force";
     DateTime = "20141204132603.431000+000";
     MachineName = "SCCMTESTVMSTHLM";
     PackageName = "S010005C";
     ProcessID = 1344;
     ProgramName = "WinRM";
     SiteCode = "S01";
     ThreadID = 2572;
     UserContext = "NT instans\\SYSTEM";
     WorkingDirectory = "C:\\Windows\\ccmcache\\13\\";
    ]LOG]!><time="14:26:03.435-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2572" file="event.cpp:715">
    <![LOG[Raised Program Started Event for Ad:S012000E, Package:S010005C, Program: WinRM]LOG]!><time="14:26:03.437-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2572" file="executioncontext.cpp:459">
    <![LOG[Raising client SDK event for class CCM_Program, instance CCM_Program.PackageID="S010005C",ProgramID="WinRM", actionType 1l, value NULL, user NULL, session 4294967295l, level 0l, verbosity 30l]LOG]!><time="14:26:03.439-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2572" file="event.cpp:405">
    <![LOG[Raising client SDK event for class CCM_Program, instance CCM_Program.PackageID="S010005C",ProgramID="WinRM", actionType 1l, value , user NULL, session 4294967295l, level 0l, verbosity 30l]LOG]!><time="14:26:03.478-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2572" file="event.cpp:405">
    <![LOG[MTC task with id {4BC725EF-6456-4C9E-AEEA-F3639600D829}, changed state from 4 to 5]LOG]!><time="14:26:03.511-60" date="12-04-2014" component="execmgr" context="" type="1" thread="2912" file="execreqmgr.cpp:6288">
    <![LOG[Program exit code 1]LOG]!><time="14:28:40.165-60" date="12-04-2014" component="execmgr" context="" type="1" thread="296" file="scriptexecution.cpp:676">
    <![LOG[Looking for MIF file to get program status]LOG]!><time="14:28:40.177-60" date="12-04-2014" component="execmgr" context="" type="1" thread="296" file="executionstatus.cpp:282">
    <![LOG[Script for Package:S010005C, Program: WinRM failed with exit code 1]LOG]!><time="14:28:40.178-60" date="12-04-2014" component="execmgr" context="" type="3" thread="296" file="executionstatus.cpp:252">

  • Issue with oracle.odi.sdk.invocation package to run scenario through PL/SQL

    Hi,
    I am new to calling ODI scenarios through PL/SQL.
    Actually, just to test, I tried the following code:
    1. create or replace and compile java source named "Run_Scen_DCP" as
    import oracle.odi.sdk.invocation.*;
    public class Run_Scen_DCP {
        public static void Run_Scen() {
            OdiCommandScenario cmd = new OdiCommandScenario();
        }
    }
    Output- Warning: execution completed with warning; the Java source compiled.
    2. create or replace procedure run_scen as language java name 'Run_Scen_DCP.Run_Scen()';
    Output- procedure run_scen Compiled.
    3. EXECUTE run_scen;
    Output- Error starting at line 1 in command:
    EXECUTE run_scen;
    Error report:
    ORA-29541: class TEST_JAVA.Run_Scen_DCP could not be resolved
    ORA-06512: at "TEST_JAVA.RUN_SCEN", line 1
    ORA-06512: at line 1
    29541. 00000 - "class %s.%s could not be resolved"
    *Cause: An attempt was made to execute a method in a Java class
    that had not been previously and cannot now be compiled
    or resolved successfully.
    *Action: Adjust the call or make the class resolvable.
    I am getting the error when calling the wrapper PL/SQL procedure.
    I have set the classpath for the SDK jar files to C:\oracle\product\11.1.1\Oracle_ODI_1\oracledi.sdk\lib\*.jar.
    Please help me understand what I am doing wrong here.
    Regards,
    Vipin

    Waiting for the solution....
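    ORA-29541 means the class inside the database could not be resolved, and Java stored procedures resolve classes against what is loaded in the database schema, not against the operating-system classpath. So one likely fix (assuming the jars are not yet in the database) is to load the ODI SDK jars into the schema with Oracle's loadjava utility. A minimal sketch, with placeholder credentials:

    # load every ODI SDK jar into the schema so the java source can resolve;
    # loadjava's -user and -resolve options are standard, the rest is placeholder
    import glob, subprocess
    jars = glob.glob(r"C:\oracle\product\11.1.1\Oracle_ODI_1\oracledi.sdk\lib\*.jar")
    subprocess.check_call(["loadjava", "-user", "test_java/password@ORCL", "-resolve"] + jars)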

  • Error while using Oracle Table as source file :- ODQ for ODI

    Hi All,
    I am getting some errors while working on ODQ with an Oracle table as the source.
    If I try with text files (*.txt) as source and output, it works fine.
    Please let me know how we can connect to an Oracle table, which is my source.
    In the exported project -> “settings” folder, in the file named eN_transfmr_pXX.stx, for
    /CATEGORY/INPUT/PARAMETER/INPUT_SETTINGS/ARGUMENTS/ENTRY/DATA_FILE_NAME=
    what do I need to give? (the URL of the source file)
    I tried with
    1. jdbc:oracle:thin:@xxx.xxx.x.xx:1521:ORCL
    2. jdbc:oracle:thin:UserName/Password@// xxx.xxx.x.xx:1521:ORCL
    I am not sure; is there anything missing?
    (Note: for text file I am giving “D:\Sourcefolder\customer.txt”)
    If I run the batch file directly from the CMD prompt it displays the error message
    “Cannot open file”
    If I connect with ODI it displays the error
    com.sunopsis.dwg.function.SnpsFunctionBaseException: OS command returned 3503. …………………….
    Thanks in advance…
    Rathish A M

    Hi Rathish,
    ODQ supports files as inputs, not Oracle tables. What you should do is:
    - define an ODQ process that takes a file as an input.
    - create an ODI process that dumps your Oracle table into a file that will be used by ODQ. (interface or OdiSqlUnload step)
    - run the ODQ process in ODI (in a package)
    - create an ODI interface that will load your ODQ output file into a DB.
    You can profile Oracle tables directly using Oracle Data Profiling.
    Thanks,
    Julien
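    The "dump your Oracle table into a file" step Julien mentions could also be done in a Jython procedure step; a minimal sketch, assuming placeholder connection details, table, and columns (an OdiSqlUnload step or an interface is the more standard route):

    # Jython sketch: unload an Oracle table to a delimited file for ODQ
    from java.sql import DriverManager

    conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@xxx.xxx.x.xx:1521:ORCL", "UserName", "Password")
    stmt = conn.createStatement()
    rs = stmt.executeQuery("select CUSTOMER_ID, CUSTOMER_NAME from CUSTOMER")
    out = open("D:/Sourcefolder/customer.txt", "w")
    while rs.next():
        out.write("%s;%s\n" % (rs.getString(1), rs.getString(2)))
    out.close()
    conn.close()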

  • The speed of the "for loop iteration"

    Hi all:
    I have written a short mini program just to test out the speed of a "for loop iteration"
    package generator;

    /**
     * <p>Title: </p>
     * <p>Description: </p>
     * <p>Copyright: Copyright (c) 2004</p>
     * <p>Company: </p>
     * @author not attributable
     * @version 1.0
     */
    public class Generator {  // class declaration restored; the name is assumed
      private static long tmp = 0;
      public static int count = 0;

      public static synchronized long getValue() {  // made static so main() can call it
        long value = 10 * System.currentTimeMillis();
        if (value == tmp) {
          value++;
          count++;
        }
        tmp = value;
        return value;
      }

      public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
          getValue();
        }
        System.out.println("count is " + count);
      }
    }
    And I find that on average count is a value between 498 and 500, which probably means that on average one iteration of the for loop takes around 0.5 ms. But I guess the speed of the for-loop iteration is also very much CPU dependent; could somebody possibly give me some education on this issue :) ?
    Also, anybody know exactly how System.currentTimeMillis works? For example, how does it get the current time from the OS, etc. let me know.
    Again, Many many thanks...

    "Hi all: I have written a short mini program just to test out the speed of a 'for loop iteration'"
    You will find that the overhead of the getValue() call, the increments, and most importantly the System.currentTimeMillis() call is what you are actually timing.
        public static void main(String args[])  {
            int ITER = 1000000000;
            long start = System.currentTimeMillis();
            for(int i=0;i<ITER;i++);
            long end = System.currentTimeMillis();
            System.out.println("Iterations per sec="+ITER*1000L/(end - start));
        }
    This prints: Iterations per sec=576368876, running on a 1.8 GHz Intel / Windows 2000.
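    For comparison, the same empty-loop micro-benchmark in Python (absolute numbers are machine and runtime dependent, like the Java figure quoted above):

    # time an empty loop and report iterations per second
    import time

    ITER = 10000000
    start = time.time()
    for i in range(ITER):
        pass
    end = time.time()
    print("Iterations per sec = %d" % (ITER / (end - start)))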

  • ODI 11g : How to migrate ODI objects ?

    Hello,
    We need to migrate our ODI 11g objects (Packages, Interfaces, DataServers, DataStores and so on) from the UAT to the PROD environment. For that, we have the following option in mind.
    Option:
    - create the ODI Master and Work repositories on the PROD environment;
    - import the Master & Work repositories from UAT into the new ones (via ODI Studio);
    - configure the Physical Schemas according to the database connection details for the production environment.
    What is the best way to do this? What elements should be considered in this kind of migration? Any special recommendations?
    Any input will be appreciated.
    Thanks in advance.
    Iulia

    If you are migrating to a production environment, I would suggest that you create your work repository as an execution-only repository. This will limit access to the Operator interface, and you will only have to import your compiled scenarios rather than all of the other work-repository-specific objects, i.e. projects, models, interfaces etc. You will still have to create a master repository, which you can do as you have already detailed: create a master repository schema, import the repository you exported from UAT, and update the physical topology settings to match your new environment.
    You can even generate an ODI package (using the OdiExportScen and OdiImportScen tools) to automate the export and import of scenarios between environments.

  • ODI Documentation

    Hi.
    Any suggestions on how to write documentation for ODI interfaces and packages? Is there any feature within ODI that generates documentation templates?
    Regards

    Hi there,
    There are "print" options available in the ODI client tools. In Designer you can right-click on a project or model object and it will generate a PDF for you automatically. In Topology Manager you can print off the physical architecture, logical architecture, or contexts through the File | Print menu, which will also generate a PDF for you.
    Thanks,
    OracleSeeker

  • ODI Execute Button Acting Different than Right-Click - Execute in Diagram

    We have noticed that clicking the 'Green Arrow' Execute button in ODI, to begin execution from the first step with only one procedure in the package, acts differently from simply right-clicking the step and selecting 'Execute' within the Diagram tab. We have noticed this issue only with a SQL procedure.
    In other words, when clicking the 'Green Arrow', ODI compiles the package as a whole before executing, which seems to make the procedure execute differently than when right-clicking and executing it individually.
    Has anyone else noticed this issue?

    Executing with the Green Arrow means the entire package is compiled and executed. In that case, if you have any special configuration for that step, it takes effect and thus might show different behavior.
    Running a particular step by right-clicking on it is equivalent to running the procedure or interface directly.
    Do you also see a difference in behavior if you run the procedure directly vs. running it within the package?

  • ODI send mail problem

    Hi,
    I am trying to send mail via the ODI send mail step in a package and I am getting the error below:
    com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.1 Client was not authenticated
    Any advice please?
    thank you

    ODI sends the email without a password, which is the behavior of OdiSendMail.
    Your mail server, however, expects authenticated mail (i.e. with a password), which is why you are getting this error.
    For OdiSendMail, use a mail server which does not require authentication.
    Otherwise, use a customized Jython mail-sending procedure ... see details in the thread "ODIsendmail error".
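    A minimal sketch of such a Jython mail-sending step using smtplib, which performs the SMTP login the server is demanding (server, port, credentials, and addresses are placeholders; very old Jython versions may lack newer smtplib features such as STARTTLS):

    # authenticate against the SMTP server before sending, to avoid 530 5.7.1
    import smtplib

    body = "Subject: ODI load finished\r\n\r\nThe package completed successfully."
    server = smtplib.SMTP("smtp.example.com", 25)
    server.login("odi_user", "odi_password")
    server.sendmail("odi@example.com", ["admin@example.com"], body)
    server.quit()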

  • ODI Conditions

    Hi ,
    I am moving data from a flat file ----> stage table ----> target table.
    interface1 ----> flat file to stage table (Oracle) ----> no transformations, just a simple load of everything from file to table
    interface2 ----> load data from stage table to target (Oracle)
    I have included this ODI Condition (constraint) on stage table.
    REGEXP_LIKE(BILL_TO_PHONE,
    '^[0-9]{3}-[0-9]{3}-[0-9]{4}$')
    Please let me know where I should check this constraint (in the 1st or the 2nd interface).
    As of now I am checking this constraint in interface1, because the constraint is not showing up in interface2.
    Thanks

    1.) If the purpose of the stage table is only to perform validations on the data, then you don't need a stage table.
    You can put the constraint on the target table and set FLOW_CONTROL to Yes in the interface.
    2.) ODI doesn't provide any out-of-the-box functionality for correcting E$ tables.
    I can think of approaches to do these corrections, but this will require some design and coding effort, both in terms of database tables and KMs. These tables would provide metadata on the correction rules, which the KMs could use to perform corrections. Then you can use RECYCLE_ERRORS = Yes to process the corrected records into the target table.
    However, ODI has a supporting module known as Data Quality + Data Profiling. This comes in the same install as ODI. It can supposedly correct your data based on business rules. Data Profiling is an easy-to-use tool, but I do not know much about Data Quality. Also, I could not figure out how to integrate Data Profiling and Data Quality into ODI interfaces and packages.
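    For reference, the ODI condition above corresponds to this check (same regular expression), shown here in Python with made-up sample values:

    # rows matching the pattern pass flow control; the rest go to the E$ table
    import re

    phone_ok = re.compile(r'^[0-9]{3}-[0-9]{3}-[0-9]{4}$')
    print(phone_ok.match("123-456-7890") is not None)  # True: row passes
    print(phone_ok.match("1234567890") is not None)    # False: row is rejected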

  • Backing up ODI

    Hello, I am looking to ensure all needed items in ODI are being backed up properly. ODI is being used to pull data from Oracle EBS into a SQL SSRS reporting environment, and currently only the ODI Master and Work repository databases are being backed up. Is there anything else that is needed if ODI ever had to be restored? I have read about the OdiExportMaster tool but am unsure whether it is the same thing as just backing up the database. Any information to help would be great, thanks.

    OdiExportMaster and OdiExportWork are good ODI tools to package up and put on a daily backup schedule.
    The reason is that even if the database backup fails or some kind of corruption happens, the Master and Work repositories can be re-imported from the OdiExportMaster and OdiExportWork exports.
    These tools create the required XML files, zip them, and store them at the location you specify.
    Hope this helps.

  • Get data from a JMS XML queue and save it in a file

    Hi,
    Here I come with my issue again.
    I'm trying to test a JMS XML implementation with ODI 10.1.3.4.0 and my target is a single delimited file.
    I kept things simple; my XML contains the following code:
    <?xml version="1.0" encoding="UTF-8"?>
    <test><id>456</id><value>789</value></test>
    In the target file, I specified that I wanted a header line, the id and the value, separated by a tab.
    I send only one message, before executing the ODI interface or package. The queue is persistent. There's no problem with the queue; I manage to read the JMS messages with a little program I made.
    The result is also simple: there's no data in the file (not even the header), though the file is created by ODI, and there's no error in the Operator; it looks as if everything went well.
    Here are the settings I used in ODI:
    Topology : JMS XML Queue config
    Name : JMSXML_TEST
    Technology : JMS XML Queue
    I've a user and password set.
    JNDI Auth : simple
    The user and password are the same as above.
    JNDI protocol : not defined
    JNDI Driver : org.jnp.interfaces.NamingContextFactory
    JNDI URL : jnp://localhost:1099/?re=test&d=<DTD_FILE>&s=JMSXML_TEST_SCH&JMS_DESTINATION=queue/TestQueue1
    The connection test is OK.
    Model
    The JMS XML model is reversed from the DTD.
    There is only one datastore named TEST with ID and VALUE columns, and other columns useful to ODI.
    The target is a file represented by a datastore also named TEST with ID and VALUE columns.
    Project
    I created an interface with the following configurations :
    - The staging area is the sunopsis engine,
    - The source datastore and target datastore are the two described above,
    - The LKM is JMS XML to SQL, the IKM is SQL to File Append
    - The JMS_COMMIT option is set to yes
    - The SYNCHRO_JMS_TO_XML is set to yes
    - I manually added a NEXTMESSAGETIMEOUT option to the options list because it was missing
    - The IKM settings are all set to yes (INSERT, TRUNCATE, GENERATE_HEADER).
    I also created a package containing this interface.
    Everything is done in the global context.
    Everything is installed locally on my computer : the program sending the messages, the JMS provider and ODI.
    The problem is that I don't know where the issue lies, and neither does ODI.
    In the operator, there is 0 insert and the error code is 0.
    Thanks in advance for any insights.
    Marie
    Edited by: Marie123456 on 21 Aug 2012 10:36

    Hi,
    Since I still have problems on this subject, I would like to share how it is progressing.
    Currently I have a timeout problem in the "Truncate XML Schema" step with the URL that I mentioned above.
    The exact error is the following : 7000 : null : com.sunopsis.sql.l: Oracle Data Integrator TimeOut : connection with URL [...]
    The connection test is still OK.
    I tried to increase the timeout value in the user preferences but there's no change.

  • Pacproxy (or something that vaguely resembles an apt-proxy clone)

    this may not be the right forum for this, but here is a little Python app to proxy packages from a mirror, and (eventually) to automatically create repos from available packages in the local ABS tree.  I didn't like the suggested solution of network-mounting /var/cache/pacman/pkg/, and I wanted my ABS-built packages to auto-update without running repo-add manually all the time... and I detest cron jobs.
    as stated, this started as a project to automatically create repos from any available binary packages existing within the ABS tree; I haven't quite finished that yet.  I am using the proxy part for 4 Arch machines in my home and it seems to be doing pretty well.  when the ABS stuff is done it will behave like this:
    /var/abs/<repo_name>/..../..../{pkg/,src/}
    where any packages in directory <repo_name> will be advertised as being part of a repo with the same name (the <repo_name>.db.tar.gz file will be dynamically created and cached; this is the part I'm not done with).  it won't matter how deep the pkg file is, and the architecture will be automatically accounted for by reading the .PKGINFO file.
    right now though, proxying from ONE mirror (I will probably add support for a mirror list like in pacman.d, but I'm not sure how this is handled exactly; any info on that would be great) seems to work pretty well, and it will proxy both architectures.  as is, it will store packages and a small "cache" file in .pacproxy/<repo_name>/<arch>/.  the cache file has the same name as the package, and simply holds the ETag or Last-Modified header from when the package was pulled from the mirror.  every time a file is requested, a HEAD request is sent to the mirror using that information, and if a 304 (Not Modified) is returned, the cached copy is used, else the new copy is pulled and the cache file updated.  looks something like this:
    [cr@extOFme-d0 ~]$ tree .pacproxy
    .pacproxy
    |-- community
    |   |-- i686
    |   |   |-- community.db.tar.gz
    |   |   `-- community.db.tar.gz.cache
    |   `-- x86_64
    |       |-- community.db.tar.gz
    |       `-- community.db.tar.gz.cache
    |-- core
    |   |-- i686
    |   |   |-- core.db.tar.gz
    |   |   |-- core.db.tar.gz.cache
    |   |   |-- coreutils-8.2-1-i686.pkg.tar.gz
    |   |   |-- coreutils-8.2-1-i686.pkg.tar.gz.cache
    |   |   |-- filesystem-2009.11-1-any.pkg.tar.gz
    |   |   |-- filesystem-2009.11-1-any.pkg.tar.gz.cache
    |   |   |-- glib2-2.22.3-1-i686.pkg.tar.gz
    |   |   `-- glib2-2.22.3-1-i686.pkg.tar.gz.cache
    |   `-- x86_64
    |       |-- core.db.tar.gz
    |       `-- core.db.tar.gz.cache
    `-- extra
        |-- i686
        |   |-- boost-1.41.0-2-i686.pkg.tar.gz
        |   |-- boost-1.41.0-2-i686.pkg.tar.gz.cache
        |   |-- extra.db.tar.gz
        |   |-- extra.db.tar.gz.cache
        |   |-- xdg-utils-1.0.2.20091216-1-any.pkg.tar.gz
        |   |-- xdg-utils-1.0.2.20091216-1-any.pkg.tar.gz.cache
        |   |-- xf86-input-synaptics-1.2.1-1-i686.pkg.tar.gz
        |   |-- xf86-input-synaptics-1.2.1-1-i686.pkg.tar.gz.cache
        |   |-- xulrunner-1.9.1.6-1-i686.pkg.tar.gz
        |   `-- xulrunner-1.9.1.6-1-i686.pkg.tar.gz.cache
        `-- x86_64
            |-- extra.db.tar.gz
            `-- extra.db.tar.gz.cache
    I am still relatively new to the Python scene, and I know there are several optimizations and probably a lot of refactoring that will happen before I'm satisfied with it, but I think it is useful enough at this point to release to everyone here.  any ideas are very welcome, and once I finally get my server back to the datacenter, I'll host this (in git) on extof.me along with some other goodies TBA at a later date :).  see POSSIBLE CAVEATS and DEVELOPMENT for ideas as to where I'm going and some issues that are definitely present right now.
    DEPENDENCIES
    $ pacman -S cherrypy
    HOW TO USE
    ...point pacman.conf to it (port 8080 by default)...
    Server = http://localhost:8080/archlinux/$repo/os/x86_64
    ...edit pacproxy.py and change "mirrors" to an appropriate one for you (use $arch variable!)...
    mirrors = {'mirrors.gigenet.com': '/archlinux/$repo/os/$arch'}
    $ python pacproxy.py
    POSSIBLE CAVEATS
    1) multiple requests from multiple machines at the same time will probably cause some problems right now, as the cache's state will be inconsistent.  I *think* concurrent tools like powerpill will still work correctly from the same machine, since it's not pulling the same package twice, and cherrypy will handle the threading.
    2) I'm pretty sure the caching stuff is working correctly, but I'm not sure if pacman is realizing that its local copy (specifically the db.tar.gz files) is up to date.  I may need to send some additional headers.
    3) if it's not obvious, this only proxies HTTP requests, not FTP.
    4) there is no cache cleaning in .pacproxy; as files become out of date, they will just stay there and take up space.  not a huge deal, I'm just not sure how to address this; maybe remove them after X days of not being accessed.
    5) there are some security issues with eval'ing the .cache file (it's just a dict); I should maybe do that differently (a safer approach is sketched after the code below).
    6) I'm sure there are many other problems and security flaws; I'll list/remove them as they show up / are fixed.
    DEVELOPMENT
    1) anyone looking to mess with this (please do!): you can use cherrypy.log(msg) to send stuff to the log file (stdout unless you've changed it).
    2) fix some concurrency issues by stalling one thread's download of a pkg until another thread has finished writing the pkg to the cache (hopefully lockfiles + timeouts will take care of this).
    3) finish the ABS autobuilder, and maybe look into pulling in packages from another machine's ABS tree using ssh/paramiko or similar (that would be cool).
    4) make the code cleaner and avoid some of the duplication; move to multiple files (cherrypy supports automatically monitoring modules for changes and reloading them), otherwise changes to the main file cause a reload of the entire server, which could interrupt downloads.  plus right now it's pretty much one huge function.
    5) Python/cherrypy isn't the most efficient way to deliver a large file; maybe there is a way to use Python for the logic and a better server to actually read out the file to the client?
    6) we probably don't need to ping the server for updates to pkg files... the name of the file *should* change when the file is updated.  this was mainly for ABS-derived packages.
    7) shower me with ideas!
    and now some code!
    PACPROXY.PY
    import os, fnmatch, httplib, cherrypy

    # we use this to know what to skip in local_dbs
    abs_standard = ['core','extra','community','community-testing']
    # what dbs should we proxy, and which are locally derived from custom ABS directories
    # /var/abs/[db_name]/ are local dbs
    proxy_dbs = abs_standard
    local_dbs = [p for p in os.listdir('/var/abs') if os.path.isdir('/var/abs/' + p) and p not in abs_standard]
    mirrors = {'mirrors.gigenet.com': '/archlinux/$repo/os/$arch'}
    valid_arch = ['i686', 'x86_64', 'any']
    # we'll put stuff here
    cache_root = os.getenv('HOME') + '/.pacproxy'

    def locate_pkgs(pattern):
        top_exclude_dirs = ['core','extra','community','community-testing']
        rel_exclude_dirs = ['pkg','src']
        for path, dirs, files in os.walk('/var/abs'):
            # no reason to look thru folders provided by ABS
            if path=='/var/abs':
                for d in [dir for dir in dirs if dir in top_exclude_dirs]: dirs.remove(d)
            # or folders created by makepkg
            for d in [dir for dir in dirs if dir in rel_exclude_dirs]: dirs.remove(d)
            for filename in fnmatch.filter(files, pattern):
                yield os.path.join(path, filename)

    def gen_proxy(remote_fd, local_fd=None):
        def read_chunks(fd, chunk=1024):
            while True:
                bytes = fd.read(chunk)
                if not bytes: break
                yield bytes
        for bytes in read_chunks(remote_fd):
            if local_fd is not None:
                local_fd.write(bytes)
            yield bytes

    def serve_repository(repo, future, arch, target):
        # couple sanity checks
        if arch not in valid_arch or repo not in proxy_dbs + local_dbs:
            raise cherrypy.HTTPError(404)
        is_db = fnmatch.fnmatch(target, repo + '.db.tar.gz')
        is_pkg = fnmatch.fnmatch(target, '*.pkg.tar.gz')
        is_proxy = repo in proxy_dbs
        is_local = repo in local_dbs
        if not any((is_db, is_pkg)) or not any((is_proxy, is_local)):
            raise cherrypy.HTTPError(404)
        active_mirror = mirrors.iterkeys().next()
        remote_target = '/'.join([mirrors[active_mirror].replace('$repo', repo).replace('$arch', arch), target])
        remote_file = 'http://' + '/'.join([active_mirror, remote_target])
        local_file = '/'.join([cache_root, repo, arch, target])
        cache_dir = os.path.dirname(local_file)
        if not os.path.exists(cache_dir): os.makedirs(cache_dir)
        # find out if there is a cached copy, and if its still good
        if is_proxy:
            if os.path.exists(local_file) and os.path.exists(local_file + '.cache'):
                cache = eval(open(local_file + '.cache').read())
                req = httplib.HTTPConnection(active_mirror)
                req.request('HEAD', remote_target, headers=cache)
                res = req.getresponse()
                if res.status==304:
                    remote_fd = open(local_file, 'rb')
                    local_fd = None
                elif res.status==200:
                    map(os.unlink, [local_file, local_file + '.cache'])
                    etag = res.getheader('etag')
                    last_mod = res.getheader('last-modified')
                    cache_dict = {}
                    if etag is not None:
                        # try etag first
                        cache_dict['If-None-Match'] = etag
                    elif last_mod is not None:
                        cache_dict['If-Modified-Since'] = last_mod
                    if len(cache_dict)>0:
                        cache_fd = open(local_file + '.cache', 'wb')
                        cache_fd.write(repr(cache_dict))
                        cache_fd.close()
                    req2 = httplib.HTTPConnection(active_mirror)
                    req2.request('GET', remote_target)
                    remote_fd = req2.getresponse()
                    local_fd = open(local_file, 'wb')
                else:
                    raise cherrypy.HTTPError(res.status)
            else:
                if os.path.exists(local_file): os.unlink(local_file)
                if os.path.exists(local_file + '.cache'): os.unlink(local_file + '.cache')
                req = httplib.HTTPConnection(active_mirror)
                req.request('GET', remote_target)
                remote_fd = req.getresponse()
                if remote_fd.status!=200:
                    raise cherrypy.HTTPError(remote_fd.status)
                local_fd = open(local_file, 'wb')
                etag = remote_fd.getheader('etag')
                last_mod = remote_fd.getheader('last-modified')
                cache_dict = {}
                if etag is not None:
                    # try etag first
                    cache_dict['If-None-Match'] = etag
                elif last_mod is not None:
                    cache_dict['If-Modified-Since'] = last_mod
                if len(cache_dict)>0:
                    cache_fd = open(local_file + '.cache', 'wb')
                    cache_fd.write(repr(cache_dict))
                    cache_fd.close()
        cherrypy.response.headers['Content-Type'] = 'application/octet-stream'
        if repo in proxy_dbs:
            return gen_proxy(remote_fd, local_fd)
        if repo in local_dbs:
            pass
        # nothing seems valid? throw a 404
        raise cherrypy.HTTPError(404)
    serve_repository.exposed = True

    conf = {'server.socket_host': '0.0.0.0',
            'server.socket_port': 8080,
            'request.show_tracebacks': False}
    cherrypy.config.update(conf)
    cherrypy.quickstart(serve_repository,'/archlinux')
    Last edited by extofme (2009-12-20 02:57:41)
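    Regarding caveat 5 above: a safer alternative to repr()/eval() for the .cache files is a plain header-per-line format, so loading the cache never executes code. A sketch (not part of the script above):

    # persist the conditional-request headers as "Name: value" lines instead of
    # repr()/eval(), so reading a .cache file never evaluates code
    def write_cache(cache_path, headers):
        f = open(cache_path, 'w')
        for name, value in headers.items():
            f.write('%s: %s\n' % (name, value))
        f.close()

    def read_cache(cache_path):
        headers = {}
        for line in open(cache_path):
            name, value = line.split(': ', 1)
            headers[name] = value.rstrip('\n')
        return headers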

    well, I didn't get it past downloading the first two packages and then hanging (with pacproxy looping on the glibc package), even on a subsequent restart (so that pacproxy would already have those cached), but I did figure out an AIF profile for this:
    # aif -p automatic -c pacinst.aif
    SOURCE=net
    SYNC_URL=http://192.168.0.35:8080/archlinux/core/os/i686
    HARDWARECLOCK=localtime
    TIMEZONE=Europe/Berlin
    # Do you want to have additional pacman repositories or packages available at runtime (during installation)?
    # RUNTIME_REPOSITORIES = array like this ('name1' 'location of repo 1' ['name2' 'location of repo2',..])
    RUNTIME_REPOSITORIES=
    # space separated list
    RUNTIME_PACKAGES=
    # packages to install
    TARGET_GROUPS=base # all packages in this group will be installed (defaults to base if no group and no packages are specified)
    TARGET_PACKAGES_EXCLUDE= # Exclude these packages if they are member of one of the groups in TARGET_GROUPS. example: 'nano reiserfsprogs' (they are in base)
    TARGET_PACKAGES= # you can also specify separate packages to install (this is empty by default)
    # you can optionally also override some functions...
    #worker_intro () {
    #bug ? following gives: inform command not found
    #inform "Automatic procedure running the generic-install-on-sda example config. THIS WILL ERASE AND OVERWRITE YOUR /DEV/SDA. IF YOU DO NOT WANT THIS PRESS CTRL+C WITHIN 10 SECONDS"
    #sleep 10
    worker_configure_system () {
    prefill_configs
    sed -i 's/^HOSTNAME="myhost"/HOSTNAME="arch-generic-install"/' $var_TARGET_DIR/etc/rc.conf
    }
    # These variables are mandatory
    GRUB_DEVICE=/dev/sda
    PARTITIONS='/dev/sda *:ext3'
    BLOCKDATA='/dev/sda1 raw no_label ext3;yes;/;target;no_opts;no_label;no_params'

  • HD computer-based slide show with music

    I've been asked to do a 1024x768 looped slide show with background music. I know that using an Encore slide show to make a DVD will not meet the HD requirement. If I can play the show on a PC, I can output to a digital projector at that resolution and to an audio amplifier and speakers. Many programs will do PC slide shows for 1024x768 images, but I haven't found any that will sync to WAV or MP3 audio and loop the package. Does anyone have a recommendation, within or outside Adobe software?

    Premiere Pro might do what you need here.
    It will certainly handle the HD part, but you're gonna need a serious system to set this up.
    Premiere's HD editing will go to the following resolutions:
    1440x1080 @ 16:9
    960x720 @ 16:9
    After Effects will do 1280x1080 @ 16:9
    1024x768 is not an HD spec resolution.
    It might be supported under WMV though.
