Event job with dbms_scheduler after dml on table

Is it possible to start an event job using dbms_scheduler every time a record is inserted or updated in a table?
After a DML statement, that job has to start an external command (a shell script).
In other words, can a DML statement be the event?

It would scale badly, but yes, if you are looking for methods to develop a disaster of an application, this is one way to do it. It is definitely possible. But it has BAD IDEA inscribed all over it.
If you insist on misusing Oracle for things it was not designed for, you may want to consider calling O/S commands using Java. Examples are on http://asktom.oracle.com
Sybrand Bakker
Senior Oracle DBA
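For completeness, here is how this is typically wired up with DBMS_SCHEDULER: an event-based job listening on an Advanced Queuing queue, with a table trigger enqueuing a message. This is only a minimal sketch; the queue, payload type, table, and script path are hypothetical, the schema is assumed to have the necessary AQ and CREATE JOB privileges, and, as the reply above notes, the per-DML overhead makes it scale badly.
CREATE OR REPLACE TYPE dml_event_t AS OBJECT (event_name VARCHAR2(30));
/
BEGIN
  -- Queue infrastructure (multi-consumer so the Scheduler can subscribe to it)
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'dml_event_qt',
    queue_payload_type => 'DML_EVENT_T',
    multiple_consumers => TRUE);
  DBMS_AQADM.CREATE_QUEUE('dml_event_q', 'dml_event_qt');
  DBMS_AQADM.START_QUEUE('dml_event_q');

  -- Event-based job: runs the external script whenever a matching message arrives
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'run_script_on_dml',
    job_type        => 'EXECUTABLE',
    job_action      => '/u01/scripts/notify.sh',   -- hypothetical shell script
    event_condition => 'tab.user_data.event_name = ''MYTAB_CHANGED''',
    queue_spec      => 'dml_event_q',
    enabled         => TRUE);
END;
/
-- Statement-level trigger that raises the event after each insert/update on mytab
CREATE OR REPLACE TRIGGER mytab_dml_trg
AFTER INSERT OR UPDATE ON mytab
DECLARE
  l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'dml_event_q',
    enqueue_options    => l_opts,        -- default visibility: on commit
    message_properties => l_props,
    payload            => dml_event_t('MYTAB_CHANGED'),
    msgid              => l_msgid);
END;
/
The message is only delivered on commit, so the job fires once per committed transaction rather than once per row, which softens (but does not remove) the scalability concern.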

Similar Messages

  • Creating backup Job with DBMS_SCHEDULER

    Hello,
    Can someone please help me out here:
    I'm using Oracle10g release 1 on windowsXP
    I'm trying to create a backup job with dbms_scheduler and it's not working.
    This is what I did:
    I created a job as follows:
    BEGIN
    dbms_scheduler.create_job (
    job_name => 'RMAN_FULL',
    job_type => 'EXCUTABLE',
    job_action => 'E:\wkdir\rman_bkp',
    enabled => TRUE,
    start_date => '24-NOV-2007 2:10:00 PM',
    repeat_interval => 'FREQ=WEEKLY',
    comments => 'Full Database Backup');
    END;
    rman_bkp is an RMAN command file, but the job isn't working.
    Where am I getting it wrong?
    Kindly walk me through the EXECUTABLE job type, or should I use PLSQL_BLOCK, and how?
    Thanks.
    Regards,
    Cherish

    Hi,
    There is a guide to running external jobs using the Scheduler here
    Guide to External Jobs on 10g with dbms_scheduler e.g. scripts,batch files
    You need to use the full path to a real Windows executable and the arguments to it e.g. for a batch script you would have to do something like
    c:\windows\cmd.exe /q /c c:\myscript.bat
    There is a forum dedicated to the Scheduler here
    Scheduler
    Hope this helps,
    Ravi.
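    For illustration, a minimal sketch along the lines Ravi describes, assuming the action is really a batch script at E:\wkdir\rman_bkp.bat (the .bat extension, the use of job arguments, and the SYSTIMESTAMP start date are assumptions, not from the original post):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name            => 'RMAN_FULL',
        job_type            => 'EXECUTABLE',
        job_action          => 'C:\WINDOWS\SYSTEM32\CMD.EXE',
        number_of_arguments => 3,
        start_date          => SYSTIMESTAMP,
        repeat_interval     => 'FREQ=WEEKLY',
        enabled             => FALSE,
        comments            => 'Full Database Backup');
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('RMAN_FULL', 1, '/q');
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('RMAN_FULL', 2, '/c');
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('RMAN_FULL', 3, 'E:\wkdir\rman_bkp.bat');
      DBMS_SCHEDULER.ENABLE('RMAN_FULL');
    END;
    /
    The job is created disabled, the cmd.exe arguments are attached, and only then is it enabled, so the first run already sees all three arguments. On 10g the OracleJobScheduler Windows service must also be running (see the external jobs guide further down).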

  • Schedule Job with Job_close after successful job doesn't work

    Hi guys,
    I'm using FM JOB_CLOSE with parameters:
                JOBCOUNT             = w_JobId
                JOBNAME              = w_JobName
                PREDJOB_CHECKSTAT    = 'X'
                PRED_JOBCOUNT        = w_oldJobId
                PRED_JOBNAME         = w_oldJobName
    I have about 9 jobs which must run one after the other. The first starts without the PRED* parameters, but the 8 others are filled with CHECKSTAT and the previous job name and ID.
    The "one after the other" part works fine, BUT whatever result the previous job gives (cancelled or finished), the next one starts even though I pass the parameter CHECKSTAT as 'X'.
    Any idea what the problem is and how to solve it?
    Thanks in advance for your answers.

    Here is my code with explanations:
    REPORT  YCOMJ023.
    start-of-selection.
    "Initialization of my vars
    w_StepCount = 0. >> maximum number of steps in a job
    w_jobCount = 1.  >> number to make the job order easier to see in SM37
    CONCATENATE pe_name '_STEPS' w_jobCountC INTO w_JobName. (example : TOTO_STEP1)
    CONDENSE w_JobName NO-GAPS.
    "I open my first job
      CALL FUNCTION 'JOB_OPEN' (OPEN job TOTO_STEP1)
        EXPORTING
          jobname          = w_JobName
        IMPORTING
          jobcount         = w_JobID
        EXCEPTIONS
          cant_create_job  = 1
          invalid_job_data = 2
          jobname_missing  = 3
          OTHERS           = 4.
    "We keep in memory first job IDs to close it at the end of the prg
      w_firstjobName = w_JobName.
      w_firstjobID = w_JobID.
    "imagine you do the bellow code in a loop and it makes several jobs TOTO_STEP2 TOTO_STEP3 TOTO_STEP4...
    ADD 1 TO w_StepCount.
    IF w_StepCount GT 250.
        "I call close job eatch time I reach 250 steps
         PERFORM fx_jobclose.
    ENDIF.
    submit RKGALKEUB to sap-spool and return
                                       without spool dynpro
                                       spool parameters print_parameters
                                        VIA JOB w_JobName NUMBER w_JobID.
    "End of the programmI close the current Job and the first one :
    "Current
            CALL FUNCTION 'JOB_CLOSE'
              EXPORTING
                JOBCOUNT             = w_JobId
                JOBNAME              = w_JobName
                PREDJOB_CHECKSTAT    = 'X'
                PRED_JOBCOUNT        = w_oldJobId
                PRED_JOBNAME         = w_oldJobName.
    "First one + launch with STRIMMED
            CALL FUNCTION 'JOB_CLOSE'
            EXPORTING
              JOBCOUNT             = w_firstjobId
              JOBNAME              = w_firstjobName
              STRTIMMED            = 'X'
            EXCEPTIONS
              cant_start_immediate = 1
              invalid_startdate    = 2
              jobname_missing      = 3
              job_close_failed     = 4
              job_nosteps          = 5
              job_notex            = 6
              lock_failed          = 7
              OTHERS               = 8.
    *&      Form  fx_jobclose
    *       text
    FORM fx_jobclose.
      "Step to zero to do a new loop after this form
      w_StepCount = 0.
      DATA : w_job_released TYPE CHAR1.
      "If the flag IsFirst, we don't do nothing, because it's the first JOb, and it should not be closed
      IF w_IsFirst = 'X'.
        "Flag is set to blank
        w_IsFirst = ''.
      ELSE.
          "Else it mean we are closing a job with predecessor :
          CALL FUNCTION 'JOB_CLOSE'
            EXPORTING
              JOBCOUNT             = w_JobId
              JOBNAME              = w_JobName
              PREDJOB_CHECKSTAT    = 'X'
              PRED_JOBCOUNT        = w_oldJobId
              PRED_JOBNAME         = w_oldJobName
              "SDLSTRTDT            = sy-datum
              "SDLSTRTTM            = sy-timlo
            IMPORTING
              JOB_WAS_RELEASED = w_job_released.
      ENDIF.
      "Vars get the value of current job, witch will become the older one
      w_oldJobId = w_jobID.
      w_oldJobName = w_JobName.
    "I make the new TOTO_STEPX job name
      ADD 1 TO w_jobCount.
      w_jobCountC = w_jobCount.
      CONCATENATE pe_name '_STEPS' w_jobCountC INTO w_JobName.
      CONDENSE w_JobName NO-GAPS.
      "I open the new job
      CALL FUNCTION 'JOB_OPEN'
        EXPORTING
          jobname          = w_JobName
        IMPORTING
          jobcount         = w_JobId
        EXCEPTIONS
          cant_create_job  = 1
          invalid_job_data = 2
          jobname_missing  = 3
          OTHERS           = 4.
    ENDFORM.                    "fx_jobclose
    I hope my code is clear enough; I tried to remove the superfluous code.

  • Scheduling jobs with condition-after job programatically

    Hi,
    Could anybody please tell me how we can schedule jobs from programs (programmatically) with the condition "start after job" (i.e. after a particular job has completed), like the option we have in SM36.
    Thanks,
    Rahul.

    Hello Rahul,
    Check the following link, page 41:
    "Sample Program: Wait for Predecessor Job with JOB_CLOSE"
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCDWBLIB/BCDWBLIB.pdf
    Regards,
    Abhishek Jolly

  • Creating a job with DBMS_SCHEDULE  from Forms?

    I'm new to DBMS_SCHEDULER. I'd like to run a stored procedure from an Oracle Forms application, passing some parameters from the form to the procedure, then run the procedure immediately and, after it has finished, drop it from the job list. The procedure may take some time to run and the user doesn't want to wait before exiting the form. Can this be done?

    Sure, no problem. The job will be dropped after termination by default (autodrop). If privileges are ok it should work. Things to think about are error reporting and retries.
    regards,
    Ronald
    http://ronr.nl/unix-dba
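    A minimal sketch of what that could look like from the Forms side (the procedure name, schema, and bind items are hypothetical): the job is created disabled, the form values are attached as arguments, and enabling it starts it immediately in the background so the form can be closed.
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name            => 'RUN_MY_PROC_ONCE',
        job_type            => 'STORED_PROCEDURE',
        job_action          => 'MY_SCHEMA.MY_PROC',
        number_of_arguments => 2,
        enabled             => FALSE,
        auto_drop           => TRUE);   -- dropped automatically after it completes
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('RUN_MY_PROC_ONCE', 1, :blk.parm1);
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('RUN_MY_PROC_ONCE', 2, :blk.parm2);
      DBMS_SCHEDULER.ENABLE('RUN_MY_PROC_ONCE');   -- runs now; the form does not wait
    END;
    In practice you would generate a unique job name per run, for example with DBMS_SCHEDULER.GENERATE_JOB_NAME, so two users submitting at the same time do not collide.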

  • MVC Event Handling with different rows of same table view

    Hello friends,
    I have a tableview with some rows and columns. The first column contains a dropdown list box and the other columns are output fields.
    I am already able to populate the columns according to the value selected in the dropdown list box of the same row.
    In the view layout I have given 'lineedit' for the selectionMode attribute of the tableview.
    In DO_HANDLE_EVENT I am also able to get the currently selected row of the tableview.
    DATA: tv_data TYPE REF TO cl_htmlb_event_tableview.
    selection = tv_data->selectedrowindex.
    Scenario: I select the first row, then select a value in its dropdown, and I am able to populate the corresponding value in the corresponding output field of the same row.
    So the first row has the selected value in the dropdown list box, and the corresponding output field of the same row also has its value. I do this by inserting into the final internal table for the tableview according to the currently selected row.
    Now I select the second row, choose some other value in its dropdown list box and fill the other columns correspondingly. I can do this for the second row. What I now need is for the first row to keep showing its selected values as well when the final view comes, i.e. I need both rows with their corresponding values.
    The same applies when selecting the third row, and so on.
    how to do this?
    Thanks & Regards,
    Renju.

    Hi Renju,
    Try using MULTILINEDIT - this is used when you want to edit multiple rows all at once.
    Cheers

  • Job triggered by event created with BP_EVENT_RAISE

    Hello,
    In the exit for sales orders we raise an event to call a program, using FM BP_EVENT_RAISE. We have a job that starts after this event is raised, and looking in SM37 the job stays in released status under this condition (as it also does in table BTCEVTJOB). The event is raised correctly and the process had been working fine until a couple of days ago. I create new sales orders and see that this FM is still called, but the job does not start.
    I am trying to track the raised events with transaction SWEL, but it does not work. How can I check that the event is being raised and processed? Are there any other transactions or programs to check this?
    Thanks.

    hi,
    In my experience, in some user exits you cannot call BP_EVENT_RAISE (or the static class method CL_BATCH_EVENT=>RAISE). Even if the event is defined correctly and the job is scheduled correctly, the call has no effect on event raising and the job stays in released status.
    You can instead create a job with the needed steps and execute it directly (JOB_OPEN, SUBMIT 'your_program', JOB_CLOSE), so I suggest creating a custom FM that does this work (JOB_OPEN, SUBMIT 'your_program', JOB_CLOSE) and calling it from the user exit.
    I hope this is useful input for you.
    Best regards, Luigi.
    Edited by: luigi la motta on Jun 14, 2010 4:14 PM

  • Guide to External Jobs on 10g with dbms_scheduler e.g. scripts,batch files

    GUIDE TO RUNNING EXTERNAL JOBS ON 10g WITH DBMS_SCHEDULER
    NOTE: Users using 11g should use the new method of specifying a credential which eliminates many of the issues mentioned in this note.
    This guide covers several common questions and problems encountered when using
    dbms_scheduler to run external jobs, either on Windows or on UNIX.
    What operating system (OS) user does the job run as ?
    External jobs which have a credential (available in 11g) run as the user
    specified in the credential. But for jobs without credentials including
    all jobs in 10gR1 and 10gR2 there are several cases.
    - On UNIX systems, in releases including and after 10.2.0.2 there is a file $ORACLE_HOME/rdbms/admin/externaljob.ora . All external jobs not in the SYS schema and with no credential run as the user and group specified in this file. This should be nobody:nobody by default.
    - On UNIX systems, in releases prior to 10.2.0.2 there was no "externaljob.ora" file. In this case all external jobs not in the SYS schema and with no credential run as the owner and group of the $ORACLE_HOME/bin/extjob file which should be setuid and setgid. By default extjob is owned by nobody:nobody except for oracle-xe where it is owned by oracle:oraclegroup and is not setuid/setgid.
    - On Windows, external jobs not in the SYS schema and with no credential run as the user that the OracleJobScheduler Windows service runs as. This service must be started before these jobs can run.
    - In all releases on both Windows and UNIX systems, external jobs in the SYS schema without a credential run as the oracle user.
    What errors are reported in the *_SCHEDULER_JOB_RUN_DETAILS views?
    If a job fails, the first place to look for diagnostic information is the *_SCHEDULER_JOB_RUN_DETAILS set of views (e.g. DBA_SCHEDULER_JOB_RUN_DETAILS). In 10gR2 and up the first 200 characters of the standard error stream is included in the additional_info column.
    In all releases, the error number returned by the job is converted into a
    system error message (e.g. errno.h on UNIX or net helpmsg on Windows) and that
    system error message is recorded in the additional info column. If there is no
    corresponding message the number is displayed.
    In 11g and up the error number returned by the job is additionally recorded in
    the error# column. In earlier releases 27369 would always be recorded in the
    error# column.
    Generic Issues Applicable to UNIX and Windows
    - The job action (script or executable) must return 0 or the job run will be marked as failed.
    - Always use the full pathname to executables and scripts.
    - Do not count on environment variables being set in your job. Make sure that the script or executable that your jobs runs sets all required environment variables including ORACLE_HOME, ORACLE_SID, PATH etc.
    - It is not recommended to pass in a complete command line including arguments as the action. Instead it is recommended to pass in only the path to and name of the executable and to pass in arguments as job argument values.
    - Scripts with special characters in the execution path or script name may give problems.
    - Ensure that the OS user your job runs as has the required privileges/permissions to run your job. See above for how to tell who the job runs as.
    - External job actions cannot contain redirection operators e.g. > < >> | && ||
    - In general try getting a simple external job working first e.g. /bin/echo or ipconfig.exe on Windows. Also try running the job action directly from the commandline as the OS user that the job will run as.
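    For example, a minimal "does external execution work at all" test on UNIX might look like the following sketch (the job name and argument are arbitrary):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name            => 'TEST_ECHO',
        job_type            => 'EXECUTABLE',
        job_action          => '/bin/echo',
        number_of_arguments => 1,
        enabled             => FALSE);
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('TEST_ECHO', 1, 'hello');
      DBMS_SCHEDULER.ENABLE('TEST_ECHO');
    END;
    /
    If even this simple job fails, check the run details view and the permissions described below before debugging your own script.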
    Windows-specific Issues
    - The OracleJobScheduler Windows service must be started before external jobs will run (except for jobs in the SYS schema and jobs with credentials).
    - The user that the OracleJobScheduler Windows service runs as must have the "Log on as batch job" Windows privilege.
    - A batch file (ending in .bat) cannot be called directly by the Scheduler. Instead cmd.exe must be used and the name of the batch file passed in as an argument. For example
    begin
    dbms_scheduler.create_job('myjob',
       job_action=>'C:\WINDOWS\SYSTEM32\CMD.EXE',
       number_of_arguments=>3,
       job_type=>'executable', enabled=>false);
    dbms_scheduler.set_job_argument_value('myjob',1,'/q');
    dbms_scheduler.set_job_argument_value('myjob',2,'/c');
    dbms_scheduler.set_job_argument_value('myjob',3,'c:\temp\test.bat');
    dbms_scheduler.enable('myjob');
    end;
    /
    - In 10gR1 external jobs that wrote to standard output or standard error streams would sometimes return errors. Redirect to files or suppress all output and error messages when using 10gR1 to run external jobs.
    UNIX-specific Issues
    - When running scripts, make sure that the executable bit is set.
    - When running scripts directly, make sure that the first line of the script is a valid shebang line - starting with "#!" and containing the interpreter for the script.
    - In release 10.2.0.1, jobs creating a large amount of standard error text may hang when running (this was fixed in the first 10.2.0.2 patchset). If you are seeing this issue, redirect standard error to a file in your job. This issue has been seen when running the expdp utility which may produce large amounts of standard error text.
    - the user that the job runs as (see above section) must have execute access on $ORACLE_HOME/bin and all parent directories. If this is not the case the job may be reported as failed or hang in a running state. For example if your $ORACLE_HOME is /opt/oracle/db then you would have to make sure that
    chmod a+rx /opt
    chmod a+rx /opt/oracle
    chmod a+rx /opt/oracle/db
    chmod a+rx /opt/oracle/db/bin
    - On oracle-xe, the primary group of your oracle user (if it exists) must be dba before you install oracle-xe for external jobs to work. If you have an oracle user from a regular Oracle installation it may have the primary group set to oinstall.
    - On oracle-xe, the extjobo executable is missing so external jobs in the SYS schema will not work properly. This can be fixed by copying the extjob executable to extjobo in the same directory ($ORACLE_HOME/bin).
    - Check that correct permissions are set for external job files - extjob and externaljob.ora (see below)
    Correct permissions for extjob and externaljob.ora on UNIX
    There is some confusion as to what correct permissions are for external job related files.
    In 10gR1 and 10.2.0.1 :
    - rdbms/admin/externaljob.ora should not exist
    - bin/extjob should be setuid and setgid 6550 (r-sr-s---). It should be owned by the user that jobs should run as and by the group that jobs should run as.
    - bin/extjobo should have normal 755 (rwxr-xr-x) permissions and be owned by oracle:oraclegroup
    In 10.2.0.2 and higher
    - rdbms/admin/externaljob.ora file must be owned by root:oraclegroup and be writable only by the owner, i.e. 644 (rw-r--r--). It must contain at least two lines: one specifying the run-user and one specifying the run-group.
    - bin/extjob file must be also owned by root:oraclegroup but must be setuid i.e. 4750 (-rwsr-x---)
    - bin/extjobo should have normal 755 (rwxr-xr-x) permissions and be owned by oracle:oraclegroup
    In 11g and higher
    Same as 10.2.0.2 but additionally bin/jssu should exist with root setuid
    permissions i.e. owned by root:oraclegroup with 4750 (-rwsr-x---)
    Internal Error numbers for UNIX on 10.2.0.2 or 10.1.0.6 or higher
    If you are not using a credential and are using version 10.2.0.2 or higher or 10.1.0.6 or higher you may come across an internal error number. Here are the meanings for the internal error numbers.
    274661 - can't get owner of or permissions of externaljob.ora file
    274662 - not running as root or externaljob.ora file is writable by group or other or externaljob.ora file not owned by root (can't switch user)
    274663 - setting the group or effective group failed
    274664 - setting the user or effective user failed
    274665 - a user or group id was not changed successfully
    274666 - cannot access or open externaljob.ora file
    274667 - invalid run_user specified in externaljob.ora file
    274668 - invalid run_group specified in externaljob.ora file
    274669 - error parsing externaljob.ora file
    274670 - extjobo is running as root user or group

    Hi Ravi,
    Can you help me...
    Hi All,
    I planned to create a job to do rman backup daily at 04:00 AM.
    1. I created a program as follows
    BEGIN
    DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name => 'rman_backup_prg',
    program_action => '/u02/rmanback/rman.sh',
    program_type => 'EXECUTABLE',
    comments => 'RMAN BACKUP');
    END;
    my rman script is
    #!/usr/bin/ksh
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    export PATH=$PATH:/u01/app/oracle/product/10.2.0/db_1/bin
    /u01/app/oracle/product/10.2.0/db_1/bin/exp rman/cat@catdb file=/u02/rmanback/rman_220108.dmp log=/u02/rmanback/rman_220108.log owner=rman statistics=none compress=n buffer=400000
    compress *.dmp
    exit
    2. I created a schedule as follows
    BEGIN
    DBMS_SCHEDULER.CREATE_SCHEDULE(
    schedule_name => 'rman_backup_schedule',
    start_date => SYSTIMESTAMP,
    end_date => '31-DEC-16 05.00.00 AM',
    repeat_interval => 'FREQ=DAILY; BYHOUR=4',
    comments => 'Every day at 4 am');
    END;
    3. I created a job as follows.
    BEGIN
    DBMS_SCHEDULER.CREATE_JOB (
    job_name => 'rman_backup_job',
    program_name => 'rman_backup_prg',
    schedule_name => 'rman_backup_schedule',
    enabled=> true,
    auto_drop=> false);
    END;
    When I run the job I get the following error. Can anybody help me?
    ORA-27369: job of type EXECUTABLE failed with exit code: Not owner
    ORA-06512: at "SYS.DBMS_ISCHED", line 150
    ORA-06512: at "SYS.DBMS_SCHEDULER", line 441
    ORA-06512: at line 2
    If I remove the "compress *.dmp" line from the script, it works fine.
    /* additional Info from dba_scheduler_job_run_details as follows */
    ORA-27369: job of type EXECUTABLE failed with exit code: Not owner
    STANDARD_ERROR="
    Export: Release 10.2.0.3.0 - Production on Tue Jan 22 14:30:08 2008
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Release 10.2.0.3.0 - Production
    Export"
    Regards,
    Kiran

  • STOPPED JOBS with expdp and dbms_scheduler

    Hello.
    I am working with 10g Release 2 in a RAC environment, and I am trying to put an export job in the Scheduler.
    To launch the export I have made a shell script that first runs the export process and then launches a bzip2 command to compress the resulting dmp file.
    The problem is that the export process finishes OK, but the file is not compressed, because the Scheduler marks the job as STOPPED.
    The log says:
    REASON="Stop job with force called by user: 'SYS'"
    and the expdp OS process launched by extjobo stays running forever, as if it were waiting for expdp to exit and it can't, so the script never reaches the part that compresses the file.
    Here is the script I made to export the schema:
    #!/bin/bash
    export ORACLE_HOME=/opt/oracle/product/10.2.0/db
    export PATH=$PATH:$ORACLE_HOME/bin
    export DIRBACK=/ORACLE/BACKUPS/BMR/Dumps
    export dia=`date +%d_%m_%Y_%H_%M_%S`
    export LOG=dump_backup_bmr_$dia.log
    cd $DIRBACK
    $ORACLE_HOME/bin/expdp userid=oracle_backup/orabck@BMR dumpfile="BMR_BMR_$dia.dmp" schemas=BMR directory=Dumps logfile=$LOG
    cd $DIRBACK
    /usr/bin/bzip2 -f --best ./BMR_BMR_$dia.dmp
    cd $DIRBACK
    /bin/mail -s "DUMP BACKUP BMR DIARIO [$dia]" [email protected] < ./dump_backup_bmr_$dia.log
    I have put in several cd $DIRBACK commands to see whether it fails because the script can't find the dmp file.
    Any idea why it gets STOPPED after the script finishes?
    PS: sorry for my poor English.
    Regards

    Hi,
    A stop is only done in two cases - if the user calls dbms_scheduler.stop_job or if the database is shutdown while a job is running. Make sure the database is not being shutdown while the job is running or inside of the job.
    If expdp is still running then this suggests that it is hanging. One possibility for that is that expdp is generating a lot of standard error messages and hanging the job (this is a known issue in 10gR2). You can try redirecting standard output and error to files to see if this helps.
    e.g.
    $ORACLE_HOME/bin/expdp > /tmp/output 2> /tmp/errors
    Hope this helps,
    Ravi.

  • Job with multiple event schedules

    Is it possible to create a job with multiple schedules? Can you have multiple schedule names?
    DBMS_SCHEDULER.CREATE_JOB (
    job_name => 'my_new_job2',
    job_type => 'PLSQL_BLOCK',
    job_action => 'BEGIN SALES_PKG.UPDATE_SALES_SUMMARY; END;',
    schedule_name => 'my_saved_schedule, my_saved_schedule2'); <------------------ like this?
    END;
    thanks.

    I am using Oracle 10g and have installed the file arrival package. I want my job to run when multiple files arrive. I have created the file arrival event schedules. I know I can create chain event steps to respond, but my chain has to be running for the steps to respond to the events. I want the chain (or rather the job that starts the chain) to kick off when 2 or more files arrive.
    thanks.
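    For reference, CREATE_JOB accepts only a single schedule_name, so a comma-separated list as in the snippet above will not work. The usual DBMS_SCHEDULER mechanism for event-driven starts is an event-based job (queue_spec/event_condition); a minimal sketch, assuming the file-arrival events are enqueued on a queue called file_arrival_q whose payload exposes the file name (all names here are hypothetical):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'start_chain_on_file',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN SALES_PKG.UPDATE_SALES_SUMMARY; END;',
        event_condition => 'tab.user_data.file_name = ''FILE1.DAT''',
        queue_spec      => 'file_arrival_q',
        enabled         => TRUE);
    END;
    /
    Note that event_condition is evaluated against one event at a time, so "run only after 2 or more files have arrived" still needs something that counts arrivals, e.g. a chain step or a small tracking table that the event handler updates.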

  • Table name for background job with report, variant and step user id list.

    Hello All,
    I need to generate a list of the scheduled background jobs along with the report name, variant, and step user ID. Can anyone please tell me the SAP table name from which I can get this data?
    Thanks in Advance,
    Amit

    Hi Rohit,
    Thanks for your reply. But in TBTCO I can't find the program/report name and variant; I can only see the list of background jobs.
    Regards,
    Amit

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, we have users uploading files, resulting in inserts into a table. A file could contain anywhere from 10,000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats for these two tables during an off-peak hour, apart from the nightly automated Oracle job.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace
    PROCEDURE p_manual_gather_table_stats AS
    TYPE ttab
    IS
        TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(ownname => USER, tabname => ltab(i) , estimate_percent => dbms_stats.auto_sample_size,
            method_opt => 'for all indexed columns size auto', degree =>
            dbms_stats.auto_degree ,CASCADE => TRUE );
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
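    As a hedged illustration of keeping the automatic collection job away from these two tables while still gathering stats yourself (one possible approach, not necessarily what Dan suggested, which is not shown in this thread): lock the statistics and have your scheduled procedure gather with force => TRUE.
    BEGIN
      DBMS_STATS.LOCK_TABLE_STATS(USER, 'TAB1');
      DBMS_STATS.LOCK_TABLE_STATS(USER, 'TAB2');
      -- the automatic stats job skips locked tables; your own job overrides the lock
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'TAB1',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE,
        force            => TRUE);
    END;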

  • Not able to run the job with user id - is

    Hello experts,
    We have problem.
    Every day we run the job.
    (The job contains two programs called ZTIBCOPRG and J_5HJSTP.)
    I would like to know why the above job runs only with the user ID TIBCOADM, because this user has German settings for the size conversion in program ZTIBCOPRG and we are having some issues.
    We changed the user to TIBCOUSA and the job would not run.
    We have changed the user back to TIBCOADM, and it runs.
    Kindly help me with your valuable inputs; your help will be highly appreciated.
    I have checked both users in the AGR_USERS table.
    In AGR_USERS the old user (TIBCOADM) has many roles compared to the user ID TIBCOUSA.
    Where do we need to look? Is it an authorization problem with the other user, who is unable to run the job?
    Or is it an error in the program ZTIBCOPRG, which is not able to do the size conversion and is causing some issues?
    Thanks and Regards,

    Hi,
         After running the job with user ID TIBCOUSA, it should be in a cancelled state, as per your comments.
    Just select the cancelled job, type JDBG in the command box and press Enter; it takes you into debug mode.
    There you may get some information about where it is failing.
    Sudheer. A

  • How to pull the job with the latest effective date in PSFT?

    Hello,
    I am running the PSFT 9.1.1 connector workforcefullsync.
    I ran into a problem when the user has multiple job records. Each time the user's job is updated, a new job record is created. So when the data comes from PSFT, for a user we have multiple job records with different effective dates (for example <EFFDT IsChanged="Y">1996-10-21</EFFDT>). OIM is only interested in the latest EFFDT.
    Question 1: is there something we can set on the PeopleSoft side so that only the job with the highest EFFDT is sent to OIM?
    Question 1b: if this is not easy to accomplish from the PSFT side, can I use a transformation in the connector to pull only the job record with the highest effective date?
    Question 2: if the EFFDT shows a date in the future (HR wants to disable a person in the future), does OIM ignore this change until the sysdate is after that EFFDT date?
    Thanks
    PS: workforcefullsync worked fine if the file contains only 1 job per employee. When the xml files contain multiple job records, I got the error:
    ERROR QuartzWorkerThread-1 OIMCP.PSFTER - oracle.iam.connectors.psft.common.handler.impl.PSFTWorkForceSyncReconMessageHandlerImpl : handleMessage
    ERROR QuartzWorkerThread-1 OIMCP.PSFTER - 1
    ERROR QuartzWorkerThread-1 OIMCP.PSFTER - Description : 1
    ERROR QuartzWorkerThread-1 OIMCP.PSFTER - java.lang.ArrayIndexOutOfBoundsException: 1
    Edited by: user12049102 on Mar 22, 2010 12:06 PM

    Hello,
    When the PSFT team sends a message over to OIM to disable a user with today's date, the user got disabled fine in OIM. A reconciliation event is created.
    When the PSFT team sends a message over to OIM to disable a user with a future date for EFFDT, let's say 3/31/2010, no reconciliation event was created.
    Does OIM store this information somewhere so that it will process it later on?
    It's a good thing that OIM does not disable the user who has an EFFDT in the future, but we don't want that record to be forgotten.
    Please help.
    Thanks

  • BI_PROCESS_TRIGGER event job in released status ONLY

    Hi all,
    I want to trigger a PC (process chain) twice a day, at 5 PM and 2 AM.
    1. I created an event in SM62 (ZSDPC_EVENT).
    2. I created an SE38 program (zsdpp_pc_event_program) as below:
    REPORT  zsdpp_pc_event_program.
    DATA: gv_time TYPE sy-uzeit,
          eventid TYPE btceventid.
    gv_time = sy-uzeit.
    eventid = 'ZSDPC_EVENT'.
    IF ( gv_time GE '170000' AND gv_time LE '170000' )
       OR ( gv_time EQ '020000' AND gv_time LE '200000' ).
      CALL METHOD cl_batch_event=>raise
        EXPORTING
          i_eventid                      = eventid
        EXCEPTIONS
          excpt_raise_failed             = 1
          excpt_server_accepts_no_events = 2
          excpt_raise_forbidden          = 3
          excpt_unknown_event            = 4
          excpt_no_authority             = 5
          OTHERS                         = 6.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                   WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    ENDIF.
    3. I specified After Event in the start variant, then activated and scheduled the PC.
    4. Then I ran this program via SM36.
    But my problem is that in SM37 the BI_PROCESS_TRIGGER event job is in released status ONLY.
    Please guide me if I am doing something wrong.
    REGARDS,
    KP

    It is supposed to be in Released status.
    The BI_PROCESS_TRIGGER job is scheduled to run based upon the event being triggered (from your ABAP program).  The only thing this job does is trigger the event that runs the next process in your process chain.  If you set up the Start variant in your PC to schedule "After Event" and you set the periodic flag, then after the BI_PROCESS_TRIGGER job finishes, it will reschedule itself.  So, there will always be a BI_PROCESS_TRIGGER job in a Released status.
    Again, the BI_PROCESS_TRIGGER job does not run the entire chain.  It only triggers the next processes in your chain.
    Does this help?
    PS.  Are there any BI_PROCESS_TRIGGER jobs in a Complete status?  If not, then there is an issue with your ABAP program.
    Edited by: Geo on May 4, 2009 11:13 AM
