Executing a job chain "on demand"

Hello,
I have an overall job chain that contains several job chains. This overall job chain executes every day.
The job chains included in the overall job chain have preconditions and different time windows.
Now I want to include job chains (in this overall job chain) that execute "on demand".
How can I do that?
Thanks.

Hello Shashi,
here is the code for an exit function that can be used to trigger any process chain.
FUNCTION Z_RSPC_API_CHAIN_START.
*"*"Local interface:
*"  IMPORTING
*"     VALUE(I_AREA) TYPE  UPC_Y_AREA
*"     VALUE(I_PLEVEL) TYPE  UPC_Y_PLEVEL
*"     VALUE(I_PACKAGE) TYPE  UPC_Y_PACKAGE
*"     VALUE(I_METHOD) TYPE  UPC_Y_METHOD
*"     VALUE(I_PARAM) TYPE  UPC_Y_PARAM
*"     REFERENCE(IT_EXITP) TYPE  UPF_YT_EXITP
*"     REFERENCE(ITO_CHASEL) TYPE  UPC_YTO_CHASEL
*"     REFERENCE(ITO_CHA) TYPE  UPC_YTO_CHA
*"     REFERENCE(ITO_KYF) TYPE  UPC_YTO_KYF
*"  EXPORTING
*"     REFERENCE(ETO_CHAS) TYPE  ANY TABLE
*"     REFERENCE(ET_MESG) TYPE  UPC_YT_MESG
* SEM-BPS: Exit function for starting a BW process chain
* Parameter:
* P_CHAIN     RSPC_CHAIN
* Fields to be changed:
* All
* Run the function for a planning package that contains no data.
  DATA:
    ls_exitp TYPE upf_ys_exitp,
    l_pchain TYPE rspc_chain.
* Get name of process chain from parameter
  READ TABLE it_exitp INTO ls_exitp WITH KEY parnm = 'P_CHAIN'.
  IF sy-subrc = 0.
    l_pchain = ls_exitp-chavl.
  ELSE.
    EXIT.
  ENDIF.
* Start process chain
  CALL FUNCTION 'RSPC_API_CHAIN_START'
    EXPORTING
      i_chain = l_pchain.
ENDFUNCTION.
It looks simple, but you have to provide additional logic to prevent users from running this function too many times.
Regards,
Marc
SAP NetWeaver RIG

Similar Messages

  • How to Schedule a Job Chain to start automatically on SAP CPS.

    Hi,
    I created a job chain and I want it to run automatically on SAP CPS Tuesday through Saturday at 6:00 a.m. I created a calendar on SAP CPS with these specific options, but the job chain doesn't start running. I don't know if I need to do something more, so if someone can give me a little help with this I will appreciate it a lot.
    Thanks,
    Omar

    It finished OK, but in the operator message I got the following:
    Unable to resubmit this job.
    Details:
    com.redwood.scheduler.api.exception.TimeWindowExpectedOpenWindowException: CalculateNextClose should only be called on an open time window
    at com.redwood.scheduler.model.method.impl.TimeWindowMethodImpl.calculateNextCloseIntersectionInt(TimeWindowMethodImpl.java:388)
    at com.redwood.scheduler.model.method.impl.TimeWindowMethodImpl.calculateNextCloseIntersectInt(TimeWindowMethodImpl.java:249)
    at com.redwood.scheduler.model.TimeWindowImpl.calculateNextCloseIntersectInt(TimeWindowImpl.java:212)
    at com.redwood.scheduler.model.method.impl.SubmitFrameMethodImpl.calculateNextInt(SubmitFrameMethodImpl.java:178)
    at com.redwood.scheduler.model.SubmitFrameImpl.calculateNext(SubmitFrameImpl.java:176)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.resubmitSubmitFrameJob(JobStatusChangePrepareListener.java:763)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.resubmitJob(JobStatusChangePrepareListener.java:637)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.processJobToFinalState(JobStatusChangePrepareListener.java:520)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.modelModified(JobStatusChangePrepareListener.java:233)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.informListeners(LowLevelPersistenceImpl.java:728)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.writeDirtyObjectListRetry(LowLevelPersistenceImpl.java:207)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.access$000(LowLevelPersistenceImpl.java:38)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl$WriteDirtyObjectListUnitOfWork.execute(LowLevelPersistenceImpl.java:79)
    at com.redwood.scheduler.persistence.impl.PersistenceUnitOfWorkManager.execute(PersistenceUnitOfWorkManager.java:34)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.writeDirtyObjectList(LowLevelPersistenceImpl.java:102)
    at com.redwood.scheduler.cluster.persistence.ClusteredLowLevelPersistence.writeDirtyObjectList(ClusteredLowLevelPersistence.java:59)
    at com.redwood.scheduler.model.SchedulerSessionImpl.writeDirtyListLocal(SchedulerSessionImpl.java:648)
    at com.redwood.scheduler.model.SchedulerSessionImpl.persist(SchedulerSessionImpl.java:626)
    at com.redwood.scheduler.apiint.model.UnitOfWorkManager.perform(UnitOfWorkManager.java:32)
    at com.redwood.scheduler.apiint.model.UnitOfWorkManager.perform(UnitOfWorkManager.java:13)
    at com.redwood.scheduler.jobchainservice.JobChainService.childJobFinalStatus(JobChainService.java:223)
    at com.redwood.scheduler.core.processserver.ProcessServerRuntime.childJobFinalStatus(ProcessServerRuntime.java:836)
    at com.redwood.scheduler.core.processserver.ProcessServerRuntime.onMessage(ProcessServerRuntime.java:248)
    at com.redwood.scheduler.infrastructure.work.MessageEnabledWork.run(MessageEnabledWork.java:104)
    at com.redwood.scheduler.infrastructure.work.WorkerImpl.run(WorkerImpl.java:109)
    at java.lang.Thread.run(Thread.java:534)

  • Scheduling a BI job chain in Redwood

    The problem I am having is that we are trying to schedule a BI job chain via the Redwood software and are not getting any response. Within Redwood, I have executed the jobs IMPORT_BW_CHAINS, IMPORT_BW_CHAIN_DEFINITION, and IMPORT_BW_INFOPACKAGES using BI job chain 0fcsm_cm_10, which is defined in BI as a job chain. These jobs run to completion, but nothing is moved into Redwood to schedule, as you would see from an import of a CCMS job. When I run the job RUN_BW_CHAIN using the same BI job chain ID, I receive the error below. I am not sure what I am missing or doing wrong in the process of getting the BI job chains scheduled with Redwood.
    ORA-06502: PL/SQL: numeric or value error
    ORA-06512: at "RSI.RSIEXEC", line 1638
    ORA-06512: at "RSI.RSIEXEC", line 1759
    ORA-06512: at "RSI.RSI_RUN_BW_CHAIN", line 21
    ORA-06512: at "RSI.RSI_RUN_BW_CHAIN", line 80
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 1200
    ORA-06512: at "SYS.DBMS_SQL", line 323
    ORA-06512: at "SYSJCS.DDL", line 1085
    ORA-06512: at "SYSJCS.DDL", line 1118
    ORA-06512: at "SYSJCS.DDL", line 1177
    ORA-06512: at line 3
    JCS-00215: in statement RSOJ_EXECUTE_JOB

    I am also seeing the same issue.
    Anton, here is the last information you requested.
    The following products are installed in the Cronacle repository:
    Product                                  Version    Status
    Cronacle for SAP solutions               7.0.3      Production 
    Cronacle Forecast Module                 7.0.3      Production 
    Cronacle Reports Module                  7.0.3      Production 
    Cronacle &module Module                  7.0.2      development
    Cronacle Mail Module                     7.0.3      Production 
    Cronacle Audit Module                    7.0.2 r2.2 Production 
    Cronacle Process Manager for Web         7.0.3      Production 
    Cronacle Module Installer                7.0.3      Production 
    Cronacle Repository                      7.0.3.34   Production 
    Cronacle Monitor Module                  7.0.3      Production

  • Error in redwood job chain for Infopackage

    Hi,
    We have recently installed Redwood for handling SAP jobs and are able to run all the job chains with ABAP program job steps successfully.
    However, for APO and BW job chains we have an intermediate step that executes a BW InfoPackage, and there the job fails with the error below:
    SAP/BW Error Message: rfc call failed 089: Job BI_BTC<infopackage_name>has not (yet ?) been started        
    The preceding ABAP job steps execute successfully. After the InfoPackage step fails, all subsequent steps fail as well.
    This problem is common to all job chains that contain an InfoPackage.
    Any help is greatly appreciated.
    Regards,
    Sandeep.

    Hello Anton,
    We are facing the same problem, with the same log error message.
    The infopackage is correctly started and ended in BW.
    Here are our versions:
    Redwood Explorer 7.0.4.2 SP2
    BW :   SAP_BASIS 70016
               SAP_BW 70018
    Do you think applying SAP CPS SP3 would solve the problem?
    Or can we solve it by modifying some specific parameters?
    Thanks in advance.
    Regards,
    Mathieu

  • Backing up Jobs, Chains and Programs in Oracle Job Scheduler

    What is the best way to back up Jobs, Chains and Programs created in the Oracle Job Scheduler via Enterprise Manager, and also the best way to get them from one database to another? I am creating quite a long chain which executes many programs in our test database and wish to back everything up along the way. I will also then need to migrate it to the production database.
    Thanks for any advice,
    Susan

    Hi Susan,
    Unfortunately there are not too many options.
    To back up a job you can use dbms_scheduler.copy_job. I believe EM has a button called "create like" for jobs and programs, which can also be used to create backups, but I am not sure about chains.
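    For example, a minimal sketch of copy_job (the job names here are placeholders, not objects from your system):
    BEGIN
      -- COPY_JOB creates an identical but disabled copy of an existing job,
      -- which can serve as a backup before you make changes.
      DBMS_SCHEDULER.COPY_JOB(
        old_job => 'MY_CHAIN_JOB',
        new_job => 'MY_CHAIN_JOB_BACKUP');
    END;
    /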
    A more general-purpose solution, which should also cover chains, is to do a schema-level export using expdp, i.e. a dump of an entire schema.
    e.g.
    SQL> create directory dumpdir as '/tmp';
    SQL> grant all on directory dumpdir to public;
    # expdp scott/tiger DUMPFILE=scott_backup.dmp directory=dumpdir
    You can then import into a SQL text file e.g.
    # impdp scott/tiger DIRECTORY=dumpdir DUMPFILE=scott_backup SQLFILE=scott_backup.out
    or import into another database (and even another schema) e.g.
    # impdp scott/tiger DIRECTORY=dumpdir DUMPFILE=scott_backup
    Hope this helps,
    Ravi.

  • Execute 4 jobs at the same time

    Hello guys,
    Is it possible with DBMS_SCHEDULER to execute 4 jobs at the same time? I'd like to use DBMS_SCHEDULER to refresh 4 different materialized views at the same time. After that, I'd like to start another job if they all refreshed correctly.
    So, can this be achieved?
    Thanks for your help

    It can be achieved using a chain job. Here is how you would do it (a rough sketch of the calls follows after these steps):
    Call DBMS_SCHEDULER.create_program to create 5 programs corresponding to the 4 MV jobs + the one extra job.
    Call DBMS_SCHEDULER.create_chain to create a new disabled chain.
    Call DBMS_SCHEDULER.define_chain_step to create 5 chain steps corresponding to the 5 programs you created earlier.
    Call DBMS_SCHEDULER.define_chain_rule to create 3 chain rules:
    -- Rule1:
    condition => 'TRUE'
    action => 'START step1, step2, step3, step4'
    -- Rule2:
    condition => 'step1 SUCCEEDED and step2 SUCCEEDED and step3 SUCCEEDED and step4 SUCCEEDED'
    action => 'START step5'
    -- Rule3:
    condition => 'step5 SUCCEEDED'
    action => 'END'
    Call DBMS_SCHEDULER.enable to enable the chain.
    Call DBMS_SCHEDULER.create_job to create the corresponding chain job of type chain.
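    Here is that sketch in PL/SQL; the chain, program, step and job names, the materialized views MV1-MV4 and the post_refresh_proc procedure are placeholders you would replace with your own objects:
    BEGIN
      -- 4 programs that each refresh one materialized view, plus one follow-up program
      FOR i IN 1..4 LOOP
        DBMS_SCHEDULER.CREATE_PROGRAM(
          program_name   => 'REFRESH_MV' || i,
          program_type   => 'PLSQL_BLOCK',
          program_action => 'BEGIN DBMS_MVIEW.REFRESH(''MV' || i || '''); END;',
          enabled        => TRUE);
      END LOOP;
      DBMS_SCHEDULER.CREATE_PROGRAM(
        program_name   => 'AFTER_REFRESH',
        program_type   => 'PLSQL_BLOCK',
        program_action => 'BEGIN post_refresh_proc; END;',  -- placeholder procedure
        enabled        => TRUE);
      -- new disabled chain with 5 steps
      DBMS_SCHEDULER.CREATE_CHAIN(chain_name => 'MV_CHAIN');
      FOR i IN 1..4 LOOP
        DBMS_SCHEDULER.DEFINE_CHAIN_STEP('MV_CHAIN', 'STEP' || i, 'REFRESH_MV' || i);
      END LOOP;
      DBMS_SCHEDULER.DEFINE_CHAIN_STEP('MV_CHAIN', 'STEP5', 'AFTER_REFRESH');
      -- the 3 rules described above
      DBMS_SCHEDULER.DEFINE_CHAIN_RULE('MV_CHAIN', 'TRUE',
        'START STEP1, STEP2, STEP3, STEP4', 'RULE1');
      DBMS_SCHEDULER.DEFINE_CHAIN_RULE('MV_CHAIN',
        'STEP1 SUCCEEDED AND STEP2 SUCCEEDED AND STEP3 SUCCEEDED AND STEP4 SUCCEEDED',
        'START STEP5', 'RULE2');
      DBMS_SCHEDULER.DEFINE_CHAIN_RULE('MV_CHAIN', 'STEP5 SUCCEEDED', 'END', 'RULE3');
      DBMS_SCHEDULER.ENABLE('MV_CHAIN');
      -- chain job of type CHAIN that runs the chain (add a schedule if it should not run immediately)
      DBMS_SCHEDULER.CREATE_JOB(
        job_name   => 'MV_CHAIN_JOB',
        job_type   => 'CHAIN',
        job_action => 'MV_CHAIN',
        enabled    => TRUE);
    END;
    /
    Note that creating chains additionally requires the CREATE RULE, CREATE RULE SET and CREATE EVALUATION CONTEXT privileges.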
    Hope that helps.

  • SAP CS 8.0 - question about parallel tasks in one job chain

    Dear all,
    I have a question about job chains in SAP CPS/Redwood.
    We have a job chain like this:
    Job 1
         Job 1.1
                   Job 1.1.1
                   Job 1.1.2
         Job 1.2
                    Job 1.2.1
    Jobs 1.1.1 and 1.1.2 should start when Job 1.1 is completed and should not have to wait until Job 1.2 is complete.
    How can I implement this in Redwood in one job chain? Should I use a precondition in the job definitions of 1.1 and 1.2?
    Thank you for your help.
    Best regards,
    Hans

    Assumptions :
    Job 1.1.1 and Job 1.1.2 are running in parallel.
    Job 1.1, job 1.2 are also running in parallel.
    Chain A -
    Step 1 - Job 1.1
    Step 2 - Job 1.1.1, Job 1.1.2 (both of these will start as soon as Job 1.1 completes)
    Chain B -
    Step 1 - Job 1.2
    Step 2 - Job 1.2.1
    Chain C
    Step 1 - Job 1
    Step 2 - Chain A, Chain B (Chain A and Chain B will start as soon as Job 1 completes).
    Submit Chain C.
    Preconditions are usually used to check the time window, for example if you want to execute a step in a job chain only on Fridays.
    thanks
    Nanda

  • Need information about a dependency scenario with job chains

    Hi,
    I created several job chains containing one or more scripts.
    Now I want to create dependencies between these job chains. For example: job chain 2 must begin when job chain 1 has finished, job chain 3 must begin when job chain 2 has finished, etc.
    So my first question: how can I do that (create dependencies between job chains)?
    Second point: these job chains do not all have the same time periods. For example: job chain 1 must execute every day, job chain 2 every Monday, job chain 3 every day, and so on. So taking Tuesday as an example: job chain 1 must execute, and when it has finished, job chain 3 must execute. On Monday: job chain 1, then job chain 2, and finally job chain 3. How can I do that?

    Hi Yves,
    If your chains are not dependent on each other, you should submit them as separate jobs.
    So if you have a chain that runs every workday at 18:00, and another chain that runs on Tuesday and Friday at 14:00, and there are no dependencies between the chains, then you should:
    - submit the first chain at 18:00, time window "Workdays", submit frame "EveryDay"
    - submit the second chain at 14:00, time window "TuesdaysAndFridays", submit frame "EveryDay" (it will skip the days that the time window is closed anyway)
    Same for the other independent chains with different schedules. If there are no dependencies, there is no need to combine chains together.
    The first reply was for the scenario where you have chain A and B, A starts at some time and B starts after A. Additionally, B only runs on Tuesdays, while A runs every workday. In that case there are dependencies between the two chains, and then a parent chain is very convenient:
    - chain "RunAandB", with two steps
    - step 1: call chain A
    - step 2: call chain B, with a precondition so that B only runs on Tuesdays.
    Regards,
    Anton Goselink.

  • How can we determine the time taken to execute a process chain

    Hi,
    I have one meta chain which consists of several local process chains. It was executed successfully. How can I find out the total time taken to execute this meta chain? Is there any way to find out that the execution time was, for example, so many hours and minutes?

    The ST13 transaction description is "Analysis & Service Tools Launch Pad" and it is part of the ST-A/PI plugin. See Note 69455 - Servicetools for Applications ST-A/PI (ST14, RTCCTOOL, ST12) and talk to your Basis folks to find out if they can install it, since it sounds like it is not installed at your site.
    Here's a list of functions the tool provides; only some are BW-specific, so if you cannot get ST13 authorization, see if you can at least run the program:
    Analysis/Service Tools
    Name                     Title                                          
    RTCCTOOL              SAP Servicetools Update
    ST12                        Single transaction analysis
    SDCCVIEWER          SDCC: View data from ST-A/PI subroutines
    SOS_CUSTOMER_DATA                Customer Checks for Security Optimitation
    BATCH_JOB_ANALYSIS               Batch job analysis
    CMO_SYSTEM_MONITORING            CMO - System Monitoring
    ITSTRACEVIEW                     ITS trace viewer (SAPjulep) for EBP/CRM
    STAD_DATA_GET                    Select STAD data and put to download
    BPSTOOLS                         BW-BPS Performance Toolset
    BIIPTOOLS                        BI-IP Performance Toolset
    BW_QUERY_ACCESSES                BW: aggregate/InfoCube accesses of queries
    BW_QUERY_USAGE                   BW: query usage statistics
    BW-TOOLS                         BW Tools
    SEM_BCS_STATISTIC_ANALYSIS       SEM-BCS: Statistic Analysis
    BWQUAC_CUST                      BW Query Alert Collector Customizing
    TABLE_ANALYSIS                   Table Analysis Tools
    UPGRADE_TOOLS                    Tools for Upgrade
    MASS_MAN_MONITORING              Monitoring for Mass Data
    CUST_DMA_TAANA                   DVM: Settings for ST14 CA
    PROJBROWSER                      Service Software: Local project browser
    ANALYSISBROWSER                  Service Software: Analysis browser
    SDCC_DOWNLOAD_SIMULATION         SDCC download simulation for ST-A/PI collectors
    TEXTBROWSER                      Service Software: Short&Longtext browser
    SET_SDCC_PRODUCTIVE_CLIENT       Manually set analysis client for SDCC datacollection
    There were a few OSS Notes relating to ST03N dumping that you might want to review to see if they need to be applied.
    Nothing is ever as easy as it ought to be.

  • Job Chain question

    Hi,
    I want to schedule a job chain.
    I've created 4 chain steps.
    The first performs an action, and the second must be executed only if the first completes successfully.
    The third must be executed only if the second ends successfully, etc.
    The Chain_rule_2 is
    CHAIN_STEP_1 Completed and CHAIN_STEP_1 SUCCEEDED
    The chain_rule_3 is
    CHAIN_STEP_2 Completed and CHAIN_STEP_2 SUCCEEDED
    But the job just keeps running, because there isn't an END step (I suppose).
    The END step should be reached if all steps work fine, or if any one of them fails.
    Any ideas?
    Thanks.

    You can add rules in Chains:
    Chain rules define when steps run, and define dependencies between steps. Each rule has a condition and an action. Whenever rules are evaluated, if a rule's condition evaluates to TRUE, its action is performed. The condition can contain Scheduler chain condition syntax or any syntax that is valid in a SQL WHERE clause. The syntax can include references to attributes of any chain step, including step completion status. A typical action is to run a specified step or to run a list of steps.
    All chain rules work together to define the overall action of the chain. When the chain job starts and at the end of each step, all rules are evaluated to see what action or actions occur next. If more than one rule has a TRUE condition, multiple actions can occur. You can cause rules to also be evaluated at regular intervals by setting the evaluation_interval attribute of a chain.
    Conditions are usually based on the outcome of one or more previous steps. For example, you might want one step to run if the two previous steps succeeded, and another to run if either of the two previous steps failed.
    Scheduler chain condition syntax takes one of the following two forms:
    stepname [NOT] SUCCEEDED
    stepname ERROR_CODE {comparison_operator|[NOT] IN} integer
    You can combine conditions with boolean operators AND, OR, and NOT() to create conditional expressions. You can employ parentheses in your expressions to determine order of evaluation.
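    For example, to make the chain end (which seems to be what is missing in your case), you could add rules along these lines; the chain name is a placeholder and I am assuming your steps are named CHAIN_STEP_1 through CHAIN_STEP_4 (FAILED is also accepted by the chain condition syntax):
    BEGIN
      -- end the chain normally once the last step has succeeded
      DBMS_SCHEDULER.DEFINE_CHAIN_RULE(
        chain_name => 'MY_CHAIN',
        condition  => 'CHAIN_STEP_4 SUCCEEDED',
        action     => 'END',
        rule_name  => 'RULE_END_OK');
      -- end the chain with an error code if any step fails
      DBMS_SCHEDULER.DEFINE_CHAIN_RULE(
        chain_name => 'MY_CHAIN',
        condition  => 'CHAIN_STEP_1 FAILED OR CHAIN_STEP_2 FAILED OR CHAIN_STEP_3 FAILED OR CHAIN_STEP_4 FAILED',
        action     => 'END 1',
        rule_name  => 'RULE_END_FAIL');
    END;
    /
    Without at least one rule whose action is END, the chain job never finishes on its own.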

  • Any WebI reports / graphs for job chain performance over time?

    I am trying to get reports over time that contain job and job chain information, such as job chain, job description, elapsed time, job status, etc., to allow for trend reporting against the job schedules. Thanks for any help. I would have thought that there was some basic reporting that could be executed and saved for reporting and reference.

    Hi Robert,
    For a WebI/graph report, you need to create a dashboard, but it will not give the details you require. It is created on the basis of day, time and dialer. You can find the options there for Graph/Techno etc.
    If you want to create the report, please refer to the threads below:
    Re: Question on Reporting
    CPS Report with Schedule and Job Parameter Information
    Regards,
    Abhishek Singh

  • Error while scheduling the Email Alert JOB chain

    Hi All,
    I have defined a job chain in CPS, and when I go to schedule it, it gives me an error message. We are using the trial version.
    Please find the log attached below.
    11:18:31 PM:
    JCS-111004: Queue ETD.sapetd00_Queue has no ProcessServer with the required JobDefinitionType/Service/Resource for Job 932 (submitted from ETD.Z_MONI_BATCH_DP copy from 2009/12/30 18:22:23,113 Australia/Sydney) (submitted from Job Definition ETD.Z_MONI_BATCH_DP (Copy from 2009/12/30 18:22:23,113 Australia/Sydney)): Job Definition Type CSH/Service PlatformAgentService/"Empty"
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Job) to an object in an isolation group
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Chain Step) to an object in an isolation group
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Parent Job) to an object in an isolation group Show error details
    Thanks
    Rishi Abrol

    Hi
    Are you logged into the correct isolation group?
    Ensure the process server is also assigned to the queue.
    Regards

  • Error while executing a JOB

    Post Author: rkolturu
    CA Forum: Data Integration
    I tried to execute a job which is compiled and error-free, but I get the error below. Does anyone have a workaround for this problem?
    5308 4880 PAR-010102 12.12.2007 08:44:52 |SessionJOB_XYZ
    5308 4880 PAR-010102 12.12.2007 08:44:52 Syntax error at line : 00297-182
    5308 4880 PAR-010102 12.12.2007 08:44:52 0-189393172: near found expecting .
    5308 4880 PAR-010102 12.12.2007 08:44:52 Syntax error at line : <>: near found .
    5308 4880 PAR-010102 12.12.2007 08:44:52 2 error(s), 0 warning(s).
    5308 4880 PAR-010102 12.12.2007 08:44:52 . Please check and fix the syntax and retry the operation.

    Hi,
    If the CI and DB are on different hosts, this error normally occurs. SAP return code 236 means no connection to the gateway; can you check it?
    Also check whether the TCP/IP RFC connection SAPXPG_DBDEST_* is working. If it is not, please check whether you have maintained the details below:
    Program              sapxpg
    Target Host          DB Host name
    Gateway Host    DB host/gateway host
    Gateway service sapgw##
    Also refer to SAP Note 108777 - CCMS: Message 'SAPXPG failed for BRTOOLS' and Note 446172 - SXPG_COMMAND_EXECUTE (program_start_error) in DB13.
    Hope this helps.
    Thanks,
    Sushil

  • Error on auto execute of job 1032656. Where can I get the details?

    ORA-12012: error on auto execute of job 1032656
    ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
    ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
    ORA-06512: at line 1
    ORA-1580 signalled during: alter database backup controlfile to 'backup_control_file.ctl'..
    Hi All,
    I am getting the above error in my alert log.
    When I check dba_jobs, there are only two jobs, with job column values 1 and 21.
    Where can I see job 1032656 and its details?
    Regards
    Arun

    Hi Arun,
    This is due to invalid objects in the ORACLE_OCM schema.
    Please read MetaLink note:
    Invalid Objects After Revoking EXECUTE Privilege From PUBLIC [ID 797706.1]
    Symptoms
    OEM recommends that the EXECUTE privilege be revoked from PUBLIC. After revoking the privilege, the following errors appeared in the alert log:
    ORA-12012: error on auto execute of job 66171
    ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
    ORA-06508: PL/SQL: could not find program unit being called: ORACLE_OCM.MGMT_DB_LL_METRICS"
    ORA-06512: at line 1 has errors .
    Also, the query below returns the invalid objects (approximately 46 rows):
    SQL> select object_name, owner, object_type, status from dba_objects where status = 'INVALID';
    The owners of the invalid objects are ORACLE_OCM and WMSYS.
    *Cause*
    At the time of installation of the database, Oracle executes scripts that test whether PUBLIC access is granted to all of the routines that ORACLE_OCM and WMSYS need access to. If PUBLIC access is NOT available, the Oracle scripts grant specific access rights. However, if the EXECUTE privilege is revoked from PUBLIC after installation, then those specific access rights need to be granted manually.
    *Solution*
    You will need to grant execute on UTL_FILE and DBMS_SCHEDULER to ORACLE_OCM and WMSYS. The action plan below should solve the issue:
    SQL> grant execute on UTL_FILE to oracle_ocm;
    SQL> grant execute on DBMS_SCHEDULER to oracle_ocm;
    SQL> grant execute on UTL_FILE to wmsys;
    SQL> grant execute on DBMS_SCHEDULER to wmsys;
    SQL> shutdown immediate;
    SQL> startup restrict;
    SQL> @utlrp.sql /* $ORACLE_HOME/rdbms/admin/utlrp.sql */
    SQL> shutdown immediate;
    SQL> startup;
    Regards
    Rajesh

  • Job Chaining and Quickcluster

    I always get
    Status: Failed - HOST [Macintosh.local] QuickTime file not found.
    after the first part of the job is successful.
    If I just submit with "This Computer" it works fine. The original file is ProRes 422; the first job uses ProRes 422 to scale it down to 480x270, and the second job compresses to h.264. I found some info on this board from 2008 saying that job chaining and Quickclusters don't work together. Is that still how it is? That's really useless.
    I also found this from Jan 2009
    David M Brewer said:
    The reason the second rendering is failing is... this has happened to me a few times until I figured it out... make sure you set the dimensions of the video in the h.264 settings to the same size as the ProRes dimensions.
    For the most part the dimensions are left blank for the second link, h.264. And don't use 100% of source; put the actual numbers into the fields. When you link one video to another, the second codec doesn't know the settings you made for the first video.
    Also make sure you (at least check that you) set the audio for the second video. I usually have the ProRes do the audio conversion and just pass it through to the second video's settings. Again, it can happen that the audio is disabled in the h.264 settings. This has happened a few times for me. Check and double-check your settings!
    He doesn't mention anything about using Quickclusters or not, but I tried what he said and could not get it to work with Quickclusters...
    Anyone got any new info on this?

    Studio X,
    Thanks for taking the time to run some tests and post your results.
    I'm finding the same results with converting ProRes422 to mp4, But...
    Other codecs are giving me very different results.
    I've run some random tests to try to get a grip on what's happening.
    First I was playing around with the # of instances. I've read here and on Barefeats that (at least for my model MacPro) the instances should be set to (# of processors/2), so I've been using 4 for quite a while now and thought I'd test it for myself.
    A single 5min ProRes422 1920x1080 29.97 file to h.264
    This Computer- 15:28
    2 Instances- 14:56
    3 Instances- 13:52
    4 Instances- 14:48
    5 Instances- 13:43
    6 Instances- 13:48
    7 Instances- 13:58
    In this case 5 instances was the fastest, but not using a Quickcluster wasn't far off.
    A single 2m30s ProRes422 1920x1080 29.97 file to h.264
    This Computer- 3:19
    2 Instances- 3:45
    3 Instances- 3:45
    4 Instances- 3:45
    5 Instances- 3:50
    6 Instances- 4:00
    7 Instances- 4:00
    Interesting...not using a Quickcluster is fastest
    A single 2m30s ProRes422 1920x1080 29.97 file Scaled down using original codec
    This Computer- 5:20
    4 Instances- 4:10
    5 Instances- 4:10
    7 Instances- 4:11
    A single 1m30s ProRes422 1920x1080 29.97 file to mpeg-2
    This Computer- 2:12
    5 Instances- 2:10
    When Quickclusters are faster, 4-5 instances does seem to be the sweet spot (again, for my setup).
    In the mpeg-2 test I should have used a longer clip to get a better test, but it was getting late and I was just trying to get an idea of the codec's usage of my resources. I was also monitoring CPU usage with Activity Monitor in all tests.
    Now multiclip batches:
    I forgot to write down the length of the clips in this first test, but it consisted of 8 ProRes 422 clips: 3 about 1m long and the rest between 13s and 30s.
    8 ProRes 422 clips to mp4
    This Computer- 11:25
    4 Instances- 5:16
    Same results as Studio X
    Next tests with 5 clips(total 1m51s)
    5 ProRes 422 clips to h.264
    This Computer- 5:00
    4 Instances- 4:52
    5 ProRes 422 clips to mpeg-2
    This Computer- 2:55
    4 Instances- 3:01
    5 ProRes 422 clips to DV NTSC
    This Computer- 6:40
    4 Instances- 5:12
    5 ProRes 422 clips to Photo Jpeg
    This Computer- 2:44
    4 Instances- 2:46
    I re-ran the last test with 7 clips because of the time it took reassemble the segmented clips
    7 ProRes 422 clips to Photo Jpeg(total 3m14s)
    This Computer- 4:43
    4 Instances- 3:41
    One last test,
    A single ProRes 422 clip to Photo Jpeg(4:05;23)
    This Computer- 5:52
    4 Instances- 4:10
    Let me start off by saying it is clear that there are many factors that affect compression times, such as the number of clips, length of clips, and codecs, but here are some of the things I noted:
    1) Some codecs themselves seem to be "more aware" of the computer's resources than others.
    When I compress to h.264 w/o a cluster it will use about 80-85% of all resources
    When I compress to h.264 with a cluster it will use about 90-95% of all resources
    When I compress to PhotoJpeg w/o a cluster it will use about 20-25% of all resources
    When I compress to PhotoJpeg with a cluster it will use about 80-85% of all resources
    2) The time it takes to reassemble clips can be quite long and could affect overall speed.
    In the very last test, compressing a single file to Photo Jpeg using 4 instances took 4m10s. Watching Batch Monitor, I noted that it took 2m0s to compress and 2m10s to reassemble. Wow...
    It would be interesting to see how the disassembly/reassembly of bigger and larger batches using clusters affects overall time. But that would take some time.
    I think the thing I will take with me from all of this is that your workflow is your own. If you want to optimize it, you should inspect it, test it and adjust it where it needs adjusting. Now, if anyone has the time and were to run similar tests with very different results, I'd love to know about it...
