Write a DBMS_SCHEDULER job to run frequently

Hi,
I need to write a DBMS_SCHEDULER job that runs on a monthly basis and deletes data from the database that is more than 2 years old relative to SYSDATE.
For example: I have 50,000 records in my table, stored over the last 3 years with different dates.
If I run the job today, records created more than 2 years before today's date should be deleted.
Can anyone help me with the step-by-step process, as I am new to writing jobs?
Thanks in advance.

Hi, thanks for the info.
I want to run this job from the front-end application code (calling it as a procedure from Java or Oracle BPM) instead of running it at the database level.
How can I do that? And how can I pass arguments, since my procedure takes a parameter? My procedure and job are given below.
create or replace
procedure requests_delete_proc(p_request_date varchar2)
as
request_count number;
nodatafound exception;
begin
select count(request_id) into request_count from max_request_dtls
where requested_date < add_months(to_date(p_request_date,'dd/mm/yyyy'), -24); -- 24 months = 2 years back
if request_count <> 0 then
delete from max_req_history_dtls
where request_id in
(select request_id from max_request_dtls
where requested_date < add_months(to_date(p_request_date,'dd/mm/yyyy'), -24));
delete from max_request_dtls
where requested_date < add_months(to_date(p_request_date,'dd/mm/yyyy'), -24);
dbms_output.put_line('requests deleted');
commit;
else
raise nodatafound;
end if;
exception
when nodatafound then
dbms_output.put_line('no records found for the given request date');
end requests_delete_proc;
/
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'JOB_DELETEOLD',
job_type => 'STORED_PROCEDURE',
job_action => 'requests_delete_proc',
number_of_arguments => 1,
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=MONTHLY; BYMONTHDAY=1; BYHOUR=1; BYMINUTE=0',
end_date => NULL,
enabled => FALSE, -- create disabled: the job cannot be enabled until its argument has a value
comments => 'JOB_DELETEOLD');
END;
/
BEGIN
-- an inline stored-procedure job has no named-argument metadata, so bind by position,
-- and pass the date as a string matching the procedure's format mask
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
job_name => 'JOB_DELETEOLD',
argument_position => 1,
argument_value => to_char(sysdate,'dd/mm/yyyy'));
DBMS_SCHEDULER.ENABLE('JOB_DELETEOLD');
END;
/
Can I combine these two, put them in a package, and call that from the application code?
Please suggest.
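One way to do this, sketched below on the assumption that the procedure and job above are already in place (the wrapper name is illustrative): wrap the argument binding and the run call in a single stored procedure, which Java or Oracle BPM can then call like any other procedure.
create or replace procedure run_requests_delete_job
as
begin
    -- bind the date argument by position, as a string in the procedure's expected format
    dbms_scheduler.set_job_argument_value(
        job_name          => 'JOB_DELETEOLD',
        argument_position => 1,
        argument_value    => to_char(sysdate,'dd/mm/yyyy'));
    -- run the job in the calling session so any error is raised back to the caller
    dbms_scheduler.run_job('JOB_DELETEOLD', use_current_session => true);
end run_requests_delete_job;
/
From Java this could then be invoked as an ordinary callable statement, e.g. connection.prepareCall("{call run_requests_delete_job}").execute(); the monthly repeat_interval on the job itself still fires independently of the front end.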

Similar Messages

  • Analytics: Content Item metrics - run your jobs frequently!

    In our environment the analytics job for "Content Item Sync Job" was set to run once daily.
    As it turns out, this job needs to run before analytics can start to be collected on a particular content item.
In our case, because we were running it in the middle of the night, we would miss all the analytics that would be captured on the very first day a content item was posted (which for us can be the bulk of an item's hits).
Upon realizing the significance, we've boosted the job to now run every 15 minutes.
    Hope that helps someone!

Nice contribution.
Thanks a lot :)
    Joe

  • Error ORA-01017 happened when dbms_scheduler run a job.

    Hi All,
I got a problem when I use dbms_scheduler to run a job: I get error code 1017 when the job is run by the scheduler. Please find my steps below:
    Oracle version is : Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    1. Created a job successfully by using the code below:
    begin
    dbms_scheduler.create_job(
    job_name => 'monthly_refresh_elec_splits',
    job_type => 'PLSQL_BLOCK',
    job_action => 'BEGIN TRADINGANALYSIS.PKG_IM_REPORTING_ERM.REFRESH_ELEC_SPLITS_TEST; commit; END;',
    start_date => SYSTIMESTAMP,
    repeat_interval => 'freq=monthly;bymonthday=25;byhour=10;byminute=35;bysecond=0;',
    end_date => NULL,
    enabled => TRUE,
    comments => 'monthly_refresh_elec_splits.',
auto_drop => FALSE);
end;
/
2. Got the job run details from the table user_scheduler_job_run_details after the job finished:
    select * from user_scheduler_job_run_details where job_name = 'MONTHLY_REFRESH_ELEC_SPLITS' order by log_id desc;
LOG_ID:            2054804
LOG_DATE:          25/06/2012 10:35:01.086000 AM +10:00
OWNER:             TRADINGANALYSIS
JOB_NAME:          MONTHLY_REFRESH_ELEC_SPLITS
STATUS:            FAILED
ERROR#:            1017
REQ_START_DATE:    25/06/2012 10:35:00.300000 AM +10:00
ACTUAL_START_DATE: 25/06/2012 10:35:00.400000 AM +10:00
RUN_DURATION:      +00 00:00:01.000000
INSTANCE_ID:       1
SESSION_ID:        1025,37017
SLAVE_PID:         129396
CPU_USED:          +00 00:00:00.030000
ADDITIONAL_INFO:   ORA-01017: invalid username/password; logon denied
                   ORA-02063: preceding line from NETS
                   ORA-06512: at "TRADINGANALYSIS.PKG_IM_REPORTING_ERM", line 574
                   ORA-06512: at line 1
3. If I run the job directly, it finishes successfully.
    begin
    dbms_scheduler.run_job('monthly_refresh_elec_splits',TRUE);
    end;
LOG_ID:            2054835
LOG_DATE:          25/06/2012 11:05:38.515000 AM +10:00
OWNER:             TRADINGANALYSIS
JOB_NAME:          MONTHLY_REFRESH_ELEC_SPLITS
STATUS:            SUCCEEDED
ERROR#:            0
REQ_START_DATE:    25/06/2012 11:04:35.787000 AM +10:00
ACTUAL_START_DATE: 25/06/2012 11:04:35.787000 AM +10:00
RUN_DURATION:      +00 00:01:03.000000
INSTANCE_ID:       1
SESSION_ID:        1047,700
CPU_USED:          +00 00:00:00.030000
    Additional Info:
    PL/SQL Code in procedure
    PROCEDURE Refresh_Elec_Splits_Test IS
    BEGIN
    --Refresh im_fact_nets_genvol from v_im_facts_nets_genvol in NETS
    DELETE FROM IM_FACT_NETS_GENVOL;
    --the local NETS_GENVOL table has an additional column providing volume splits by generator and month.
    --INSERT INTO IM_FACT_NETS_GENVOL values ('test',sysdate,'test',1,2,3,4,5,6,7);
    INSERT INTO IM_FACT_NETS_GENVOL
    select ngv.*,
    ratio_to_report (net_mwh) OVER (PARTITION BY settlementmonth, state)
    gen_percent
from v_im_facts_nets_genvol@nets ngv;
    commit;
    END;
Can anyone advise where I should check and how I can solve the problem?
Thanks in advance
    Edited by: user13244529 on 24/06/2012 18:33
    Edited by: user13244529 on 24/06/2012 18:43

I apologize if you already solved this, but see Metalink (MOS) note ID 790221.1.
+*<Moderator Edit - deleted contents of MOS Doc - pl do NOT post such content - it is a violation of your Support agreement>*+
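Not from the removed answer, but one thing commonly worth checking in this pattern (works interactively, fails with ORA-01017/ORA-02063 under the scheduler): the insert selects over the NETS database link, and a link that depends on the calling session's own authentication can behave differently in a scheduler slave session. A fixed-user link, sketched here with purely illustrative credentials, removes that dependency:
-- illustrative only: a fixed-user link stores its own remote credentials,
-- so it does not depend on how the calling session was authenticated
CREATE DATABASE LINK nets
    CONNECT TO nets_reader IDENTIFIED BY "remote_password"
    USING 'NETS';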

  • How to run the job using DBMS_SCHEDULER

Please give me some samples; I am very new to DBMS_SCHEDULER.

    Hi
    DBMS_SCHEDULER
In Oracle 10g the DBMS_JOB package is replaced by the DBMS_SCHEDULER package. The DBMS_JOB package is now deprecated and in Oracle 10g is only provided for backward compatibility. From Oracle 10g onwards the DBMS_JOB package should no longer be used, because it may not exist in a future version of Oracle.
With DBMS_SCHEDULER, Oracle procedures and functions can be executed. Binary files and shell scripts can also be scheduled.
    Rights
If you have DBA rights you can do all the scheduling. For administering job scheduling you need the privileges belonging to the SCHEDULER_ADMIN role. To create and run jobs in your own schema you need the CREATE JOB privilege.
With DBMS_JOB you needed to set an initialization parameter to start a job coordinator background process. With Oracle 10g DBMS_SCHEDULER this is no longer needed.
If you want to use resource plans and/or consumer groups you need to set a system parameter:
    ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
Basic Parts: Job
    A job instructs the scheduler to run a specific program at a specific time on a specific date.
    Programs
A program contains the code (or a reference to the code) that needs to be run to accomplish a task. It also contains the parameters that should be passed to the program at runtime, and it is an independent object that can be referenced by many jobs.
    Schedules
A schedule contains a start date, an optional end date, and a repeat interval; with these elements an execution schedule can be calculated.
    Windows
    A window identifies a recurring block of time during which a specific resource plan should be enabled to govern resource allocation for the database.
    Job groups
    A job group is a logical method of classifying jobs with similar characteristics.
    Window groups
A window group is a logical method of grouping windows. They simplify the management of windows by allowing the members of the group to be manipulated as one object. Unlike job groups, window groups don't set default characteristics for windows that belong to the group.
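To tie these pieces together, here is a minimal sketch (all object names are illustrative, and the action mirrors the update used in the example that follows) of creating a named program and a named schedule, and then a job that references both instead of defining everything inline:
begin
    dbms_scheduler.create_program(
        program_name   => 'prog_raise_sal',
        program_type   => 'PLSQL_BLOCK',
        program_action => 'begin update emp set esal = esal*10; end;',
        enabled        => true);
    dbms_scheduler.create_schedule(
        schedule_name   => 'sched_daily',
        start_date      => systimestamp,
        repeat_interval => 'FREQ=DAILY');
    dbms_scheduler.create_job(
        job_name      => 'job_raise_sal',
        program_name  => 'prog_raise_sal',
        schedule_name => 'sched_daily',
        enabled       => true);
end;
/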
    Using Job Scheduler
    SQL> drop table emp;
    SQL> Create table emp (eno int, esal int);
SQL> begin
dbms_scheduler.create_job (
job_name => 'test_abc',
job_type => 'PLSQL_BLOCK',
job_action => 'begin update emp set esal=esal*10; end;',
start_date => SYSDATE,
repeat_interval => 'FREQ=DAILY; INTERVAL=10',
comments => 'I am testing the scheduler');
end;
/
    PL/SQL procedure successfully completed.
    Verification
    To verify that job was created, the DBA | ALL | USER_SCHEDULER_JOBS view can be queried.
    SQL> select job_name,enabled,run_count from user_scheduler_jobs;
    JOB_NAME ENABL RUN_COUNT
    TEST_abc FALSE 0
    Note :
    As you can see from the results, the job was indeed created, but is not enabled because the ENABLE attribute was not explicitly set in the CREATE_JOB procedure.
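To have the scheduler pick the job up on its repeat interval, it can be enabled explicitly afterwards; a minimal sketch using the job name from the example above:
SQL> begin
dbms_scheduler.enable('test_abc');
end;
/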
    Run your job
    SQL> begin
    2 dbms_scheduler.run_job('TEST_abc',TRUE);
    3* end;
    SQL> /
    PL/SQL procedure successfully completed.
    SQL> select job_name,enabled,run_count from user_scheduler_jobs;
    JOB_NAME ENABL RUN_COUNT
    TEST_ABC FALSE 0
    Copying Jobs
    SQL> begin
    2 dbms_scheduler.copy_job('TEST_ABC','NEW_TEST_ABC');
    3 END;
    4 /
PL/SQL procedure successfully completed.
Hope it helps you to some extent!
    Regards
    K

  • Run a Job (DBMS_Scheduler) at 11:30 am and 5:30 pm every day

    Version:10gR2
I want to run a job at 11:30 am and 5:30 pm every day. After setting
start_date        =>  sysdate,
repeat_interval   =>  'freq=daily; ...'
I don't know how to set the BYHOUR and BYMINUTE parameters twice in a day.

    You'll probably want a repeat interval like this:
FREQ=DAILY;BYHOUR=11,17;BYMINUTE=30;BYSECOND=0
The DBMS_SCHEDULER package provides a procedure called EVALUATE_CALENDAR_STRING which allows you to test different repeat intervals and see if they meet your requirements. They even provide example code that you can easily copy and modify.
    The documentation also goes into great detail about the Calendaring Syntax. It's worth a read.
    For example here is the code I used to test your requirement.
    DECLARE
            start_date        TIMESTAMP;
            return_date_after TIMESTAMP;
            next_run_date     TIMESTAMP;
    BEGIN
            start_date := systimestamp;
            return_date_after := start_date;
            FOR i IN 1..5 LOOP
                    DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING
                ( 'FREQ=DAILY;BYHOUR=11,17;BYMINUTE=30;BYSECOND=0'
                , start_date
                , return_date_after
                , next_run_date
                );
                DBMS_OUTPUT.PUT_LINE('next_run_date: ' || next_run_date);
                    return_date_after := next_run_date;
            END LOOP;
    END;
    /

ALV shows in report but when seen in spool (after running background job) there is no data

My program has a problem: when I run it, the ALV result shows in the report, but when I look in the spool (after running it as a background job) there is no data (other programs can see their results in the spool).
    Please help
    here is some example of my program
    ********************************declare internal table*****************************
* internal table output for BDC
    data : begin of t_output occurs 0,
    bukrs type anla-bukrs,
    anln1 type anla-anln1,
    anln2 type anla-anln2,
    zugdt type anla-zugdt,
    result(70) type c,
    end of t_output.
    *****get data from loop********************************
      loop at t_anla.
        CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
             EXPORTING
                  INPUT  = t_anla-anln1
             IMPORTING
                  OUTPUT = t_anla-anln1.
        CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
             EXPORTING
                  INPUT  = t_anla-anln2
             IMPORTING
                  OUTPUT = t_anla-anln2.
* check whether the record is correct or not
    select single bukrs anln1 anln2 zugdt
    into w_output
    from anla
    where bukrs = t_anla-bukrs and
    anln1 = t_anla-anln1 and
    anln2 = t_anla-anln2.
*   zugdt = '00000000'
* if the record is correct
    if sy-subrc = 0 and w_output-zugdt = '00000000'.
          w_output-bukrs = t_anla-bukrs.
          w_output-anln1 = t_anla-anln1.
          w_output-anln2 = t_anla-anln2.
      w_output-result = 'Yes : this asset can be deleted'.
          append w_output to t_output.
* if the record is not correct
    elseif sy-subrc = 0 and w_output-zugdt <> '00000000'.
* error record: this asset already has a value
      v_have_error = 'X'.
      w_output-bukrs = t_anla-bukrs.
      w_output-anln1 = t_anla-anln1.
      w_output-anln2 = t_anla-anln2.
      w_output-result = 'Error : this asset already has a value'.
          append w_output to t_output.
        else.
* error record: this asset does not exist in table anla
      v_have_error = 'X'.
      w_output-bukrs = t_anla-bukrs.
      w_output-anln1 = t_anla-anln1.
      w_output-anln2 = t_anla-anln2.
      w_output-result = 'Error : this asset does not exist'.
          append w_output to t_output.
        endif.
    *end of check record is correct or not
        clear w_output.
      endloop.
    ******************************show data in ALV***************************************************
* show data from file in ALV
      perform display_report_ALV.
    *&      Form  display_report_ALV
    form display_report_ALV.
      DATA: LT_FIELD_CAT TYPE SLIS_T_FIELDCAT_ALV,
          LT_EVENTS TYPE SLIS_T_EVENT,
          LV_REPID LIKE SY-REPID.
      PERFORM ALV_DEFINE_FIELD_CAT USING LT_FIELD_CAT.
      PERFORM ALV_HEADER_BUILD USING T_LIST_TOP_OF_PAGE[].
      PERFORM ALV_EVENTTAB_BUILD USING LT_EVENTS[].
      LV_REPID = SY-REPID.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
           EXPORTING
                I_CALLBACK_PROGRAM = LV_REPID
                IT_FIELDCAT        = LT_FIELD_CAT
                I_SAVE             = 'A'
                IT_EVENTS          = LT_EVENTS[]
           TABLES
                T_OUTTAB           = t_output
           EXCEPTIONS
                PROGRAM_ERROR      = 1
                OTHERS             = 2.
      IF SY-SUBRC NE 0.
        WRITE: / 'Return Code : ', SY-SUBRC,
          'from FUNCTION REUSE_ALV_GRID_DISPLAY'.
      ENDIF.
    endform.
    *&      Form  alv_define_field_cat
*       text
*      -->P_LT_FIELD_CAT  text
    FORM ALV_DEFINE_FIELD_CAT USING  TB_FCAT TYPE SLIS_T_FIELDCAT_ALV.
      DATA: WA_FIELDCAT LIKE LINE OF TB_FCAT,
        LV_COL_POS TYPE I.
      DEFINE FIELD_CAT.
        CLEAR WA_FIELDCAT.
        ADD 1 TO LV_COL_POS.
        WA_FIELDCAT-FIELDNAME = &1.
        WA_FIELDCAT-REF_TABNAME = &2.
        WA_FIELDCAT-COL_POS = LV_COL_POS.
        WA_FIELDCAT-KEY = &3.
        WA_FIELDCAT-NO_OUT = &4.
        WA_FIELDCAT-REF_FIELDNAME = &5.
        WA_FIELDCAT-DDICTXT = 'M'.
        IF NOT &6 IS INITIAL.
          WA_FIELDCAT-SELTEXT_L = &6.
          WA_FIELDCAT-SELTEXT_M = &6.
          WA_FIELDCAT-SELTEXT_S = &6.
        ENDIF.
        WA_FIELDCAT-DO_SUM = &7.
        WA_FIELDCAT-OUTPUTLEN = &8.
        APPEND WA_FIELDCAT TO TB_FCAT.
      END-OF-DEFINITION.
      FIELD_CAT  'BUKRS'  'ANLA'     'X' '' 'BUKRS' 'Company Code' '' ''.
      FIELD_CAT  'ANLN1'  'ANLA'     'X' '' 'ANLN1' 'Asset Number' '' ''.
      FIELD_CAT  'ANLN2'  'ANLA'     'X' '' 'ANLN2' 'Asset Sub Number' '' ''.
    FIELD_CAT  'ATEXT'   'T5EAE'     'X' '' 'ATEXT' 'Result' '' ''.
      FIELD_CAT  'RESULT'  ''     'X' '' 'RESULT' 'RESULT' '' ''.
    ENDFORM.                    " alv_define_field_cat

    Hi,
    Check this code..
    FORM display_report_alv.
      DATA: lt_field_cat TYPE slis_t_fieldcat_alv,
      lt_events TYPE slis_t_event,
      lv_repid LIKE sy-repid.
      PERFORM alv_define_field_cat USING lt_field_cat.
      PERFORM alv_header_build USING t_list_top_of_page[].
      PERFORM alv_eventtab_build USING lt_events[].
      lv_repid = sy-repid.
  IF sy-batch EQ 'X'. " sy-batch = 'X' in a background job: use list display instead of grid display
        CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
          EXPORTING
            i_callback_program = lv_repid
            it_fieldcat        = lt_field_cat
            i_save             = 'A'
            it_events          = lt_events[]
          TABLES
            t_outtab           = t_output
          EXCEPTIONS
            program_error      = 1
            OTHERS             = 2.
      ELSE.
        CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
          EXPORTING
            i_callback_program = lv_repid
            it_fieldcat        = lt_field_cat
            i_save             = 'A'
            it_events          = lt_events[]
          TABLES
            t_outtab           = t_output
          EXCEPTIONS
            program_error      = 1
            OTHERS             = 2.
      ENDIF.
      IF sy-subrc NE 0.
        WRITE: / 'Return Code : ', sy-subrc,
        'from FUNCTION REUSE_ALV_GRID_DISPLAY'.
      ENDIF.
    ENDFORM.                    "display_report_ALV

How do you register an agent on an 11.2 DB to run remote jobs on another?

There are examples available showing how to install a remote agent on a host that doesn't have an Oracle database (using the gateway CD in 11.1 or the Oracle Client in 11.2), and from reading the documentation it suggests you only need to install the remote agent if there is not an Oracle database on the host. If I have two Oracle databases and want to run remote jobs from DBMS_SCHEDULER on host A against data on host B, how can I register the agent? I can't find any examples that do this.
With the remote agent installed, there is schagent on either Unix or Windows, but I can't find this in the $ORACLE_HOME/bin of an 11.2 Enterprise install. I've run the prvtrsch.sql script as SYS and it has created the REMOTE_SCHEDULER_AGENT user and objects (which I think might be the equivalent of running schagent on the remote client), but when I then want to register the agent on the calling Oracle database, I don't know what agent name to specify in the CREATE_SCHEDULER_DESTINATION call.
I've added the TNS entries for both directions but just don't have enough information to find the missing bit that lets me connect them.
Any help appreciated, or just some pointers to confirm whether I am heading in the right direction would be great.
    Thanks

    Hi Ronald
I have your book, which has been very useful in other areas I have been investigating on DBMS_SCHEDULER (I certainly recommend it to anyone doing any serious work with DBMS_SCHEDULER), but it's not in there either. I've read the chapter 'Getting out of the database' several times, and whilst it goes into great detail on how to install the remote agent on a machine without a database, I could only find a brief mention of running an agent in the database, starting on page 113 where it talks about 'preparing the database for remote agent usage'.
I've done these things on the second database, but the later part of the chapter is back to running jobs on a machine without Oracle installed and the use of schagent, which doesn't exist in the $ORACLE_HOME/bin on a machine that has Oracle installed, so I am stuck on how to proceed.
You also mention the enhancement request, so I would be interested to know what happened with that.
    The first thing that comes to my mind when a registration has been done is: "How
    can I check this?" Unfortunately, there appears to be no way to check the status of the
    agent's registration—not even in the database. It would be very convenient to have
    an Oracle view that gives an oversight of which agents are talking with the database.
    I filed an enhancement request (7462577) for this. So with a little luck, we can check
    the status of remote agents in the near future.
I figure if I have the name of the agent, I can use it in the CREATE_DATABASE_DESTINATION call on my calling database, but I can't find the name anywhere. In SQL Developer, on the SQL tab of the create-destination dialog, it shows this as SYS."" and inserts whatever you select from a dropdown list, but I don't know how to get any values into the dropdown, so possibly the registration wasn't complete; it did, however, create the database objects in the schema and I got no errors when running it.
Any advice on how to proceed is welcome, and perhaps it can be added to the next version of the book.
    Regards
    Trevor

  • USER_SCHEDULER_JOBS has LAST_START_DATE as null even after running the jobs

    Hi,
I have tried creating and running multiple jobs using the following statements. The jobs execute the given procedure, but USER_SCHEDULER_JOBS's LAST_START_DATE is null.
    //creation of job
    DBMS_SCHEDULER.CREATE_JOB (job_name => 'demo'||i,job_type => 'STORED_PROCEDURE',number_of_arguments => 2,job_action => 'POPULATE_DATA');
    //running the job using these commands..
    dbms_scheduler.set_job_argument_value(PROCESS_NAME,1,''||START_RANGE);
    dbms_scheduler.set_job_argument_value(PROCESS_NAME,2,''||END_RANGE);
    dbms_scheduler.set_job_argument_value(PROCESS_NAME,3,''||PROCESS_NAME);
    DBMS_SCHEDULER.RUN_JOB(PROCESS_NAME);
I am able to see data getting populated by each job, but USER_SCHEDULER_JOBS has LAST_START_DATE as null.
    Any help will be highly appreciated.
    Thanks,
    Vaseem Saeed.

Hi, read this link; there is some explanation about start_date being null:
    http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_sched.htm
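Independent of LAST_START_DATE, the individual executions can also be confirmed from the run log view; a minimal query sketch (the job-name pattern assumes the 'demo'||i names used above):
select job_name, status, actual_start_date, run_duration
from user_scheduler_job_run_details
where job_name like 'DEMO%'
order by actual_start_date;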

  • Use_current_session = FALSE does not run my job correctly

We have a custom scheduler that invokes jobs based on schedules/conditions. Until now, the jobs were all kicked off in the same session. Since the record set to be processed is increasing, we want the jobs submitted in parallel.
So the main job is split into several discrete jobs that are run in different sessions (dbms_scheduler.run_job with use_current_session = FALSE). The programs and jobs get created successfully.
The program has around 12 arguments defined.
The jobs run; however, they error out with "ORA-06502: PL/SQL: numeric or value error ORA-06502: PL/SQL: numeric or value error: character to number conversion error" (from DBA_SCHEDULER_JOB_RUN_DETAILS).
If I run the jobs with this parameter = TRUE, the jobs run successfully. Any pointers greatly appreciated.
    Here are additional details..
    DB: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    dba_scheduler_global_attribute
    MAX_JOB_SLAVE_PROCESSES
    LOG_HISTORY 30
    DEFAULT_TIMEZONE US/Pacific
    LAST_OBSERVED_EVENT
    EVENT_EXPIRY_TIME
    CURRENT_OPEN_WINDOW WEEKEND_WINDOW
    v$parameter where name like '%process%'
    processes 150
    gcs_server_processes 0
    db_writer_processes 1
    log_archive_max_processes 2
    job_queue_processes 20
    aq_tm_processes 0
    Thanks
    Kiran.

    Hi,
    This error seems clear,
    character to number conversion error : at "XXA.XXX_ANP_ENGINE_MASTER_PKG", line 24
    This is application code which the scheduler did run but the application code is throwing the error.
    You will have to debug the issue occurring at line 24 of package "XXA.XXX_ANP_ENGINE_MASTER_PKG". You may be relying on something in your session which is not available in a background session - so the job fails when run in the background.
    Hope this helps,
    Ravi.

  • Get Running Timer Jobs by PowerShell

    Dear All,
Kindly, do you know if it is possible to get the currently running timer jobs using PowerShell?
We cannot write server-side code in production as we cannot have downtime.
Is there an API for it? Is there a limitation? Is there a workaround?
We have tried getting the latest job and checking whether its date is greater, which should indicate the running one, but it did not work.
    Regards,
    Mai
    Mai Omar Desouki | Software Consultant | Infusion | MCP, MCTS, MCPD, MCITP, MCT Microsoft Certified Trainer & MCC Microsoft Community Contributor | Email: [email protected] | Blog: http://moresharepoint.wordpress.com

    Hi
Use the command below if you want a specific timer job:
$JobName = "mytimer"
$WebApp  = Get-SPWebApplication http://mywebappurl
$job = Get-SPTimerJob | ?{ $_.Name -match $JobName } | ?{ $_.Parent -eq $WebApp }
For all timer jobs related to a web application:
$job = Get-SPTimerJob -WebApplication $WebApp
    Regards,
    Rajendra Singh
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful
    http://sharepointundefind.wordpress.com/

Can a long-running batch job causing deadlocks bring server performance down?

    Hi
I have a customer with a long-running batch job (approx 6 hrs); recently we experienced a performance issue where the job now takes >12 hrs. The database server is crawling, and looking at the alert.log shows some deadlocks.
The batch job is in fact many parallel child batch jobs running at the same time, which would explain the deadlocks.
Thus, I am just wondering whether there is any possibility that deadlocks could cause the whole server to crawl, to the point where even connecting to the database using Toad is slow, as is doing an ls -lrt.
    Thanks
    Rgds
    Ung

    Kok Aik wrote:
According to the documentation, a complex deadlock can make the job appear hung and affect throughput, but it doesn't mention how it would make the whole server slow down. My initial thought would be the rolling back and reconstruction of CR copies, which would have used up the CPU.
    I think your ideas on rolling back, CR construction etc. are good guesses. If you have deadlocks, then you have multiple processes working in the same place in the database at the same time, so there may be other "near-deadlocks" that cause all sorts of interference problems.
    Obviously you could have processes queueing for the same resource for some time without getting into a deadlock.
You can have a long-running update hit a row which was changed by another user after the update started - which would cause the long-running update to roll back and start again (Tom Kyte refers to this as 'write consistency' if you want to search his website for a discussion on the topic).
    Once concurrent processes start sliding out of their correct sequences because of a few delays, it's possible for reports that used to run when nothing else was going on suddenly finding themselves running while updates are going on - and doing lots more reads (physical I/O) of the undo tablespace to take blocks a long way back into the past.
    And so on...
Anyway, according to the customer, the problem seems to be related to the lgpr_size, as the problem disappeared after they reverted it back to its original default value, 0. I couldn't figure out what the lgpr_size is - can you explain?
    Thanks
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" Carl Sagan

  • Funds Commitment Batch-Input running in Job

    Hi Folks,
I'm having some trouble with a batch input running in a job for transaction FMZ1.
When I run the batch input in a dialog process (no job) it works perfectly; however, when I execute the same program in a job, the job doesn't finish, and when I debug the job it freezes when calling transaction FMZ1.
Does anyone know if I'm missing something? Or is there any restriction on executing this transaction in a job?
    Thanks in advance.
    Regards,
    Gilberto Li

    Hi Vijay,
    Using the following function modules you can create a batch input session.
    BDC_OPEN_GROUP
    BDC_INSERT
    BDC_CLOSE_GROUP
    Example - http://www.sap-img.com/abap/auto-disallowed-back-posting-to-previous-period.htm
    You can use the program RSBDCSUB to schedule batch input sessions in background.
    http://www.sap-img.com/abap/learning-bdc-programming.htm - How to write BDC program
    Executing batch-input sessions in background jobs - Example for how to call the bdc session.
    Hope these are helpful.
    Thanks
    Vinod

  • Running ksh jobs directly on EXADATA compute node.

My organization is in the process of considering the acquisition of an Exadata appliance.
We have a 100TB database (V11.2.0.1.0) running on a 48-processor AIX P770 box. We process approximately 2000 jobs/day (using CONTROL-M) against our PROD Oracle instance. All of the jobs run via AIX ksh scripts directly on the database server that hosts our PROD database. We want to simply forklift our current database to Exadata, and we want to be able to run our current job set unchanged. Can we move our AIX ksh scripts over to the Exadata compute nodes and simply execute the jobs directly under Linux on one of the compute nodes?
    We know that there are likely to be some syntactical differences between LINUX ksh and AIX ksh and are prepared to deal with that.
    NOTE: We do a lot of processing using external flat files (usually loaded as external tables), and we write out hundreds of flat files for export to other systems using UTL_FILE. Given that, we would like to do our job processing directly on the database server / compute node where the files would be loaded from / written to.
    Any thoughts are appreciated.

    Hi user6201670,
    The short answer to your question is yes: it is possible to run ksh jobs directly on a database server.  As for file storage, there is a limited amount of space on the OS disks, but for large data and I/O volumes consider mounting an external file system:  be it DBFS, an external NFS device, or (as of GI 12.1.0.2) ACFS.
    A few things to keep in mind:
    - For critical services, plan on how to offer high availability in case of a node failure
    - If data loads are a big part of your workload, think about how to load-balance between multiple database servers
    - File processing by script with UTL_FILE is considerably slower than direct-path inserts via something like SQL*Loader
    - When paying for Oracle licenses by processor, using database server CPU time to run dataload scripts can be quite expensive
    HTH,
    Marc

  • Privileges needed to run a job?

I have a database procedure that I have put in the 10gR2 job queue with a disabled status. I would like to allow a user to pass some parameters to the database procedure using dbms_scheduler.set_job_argument_value and then enable the job with dbms_scheduler.enable in order to run it, but when I try this the user gets access errors. What privileges do I need to grant to the user to get this to work?

    Hi,
    You can use the SQL ALTER privilege on the job e.g.
    GRANT ALTER ON JOB SCOTT.J1 to BLAKE ;
    If you do not want the user to be able to do other modifications to the job you will have to create a procedure which does only the required steps, have it run with definer's rights and then grant access to that procedure to the user.
    Hope this helps,
    Ravi.
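A minimal sketch of the wrapper approach described above (the job, procedure, and grantee names are illustrative): a definer's-rights procedure owned by the job owner binds the argument and enables the job, and only EXECUTE on that procedure is granted to the other user.
-- owned by the job owner; definer's rights is the default AUTHID
create or replace procedure start_my_job(p_value in varchar2)
as
begin
    dbms_scheduler.set_job_argument_value(
        job_name          => 'MY_DISABLED_JOB',
        argument_position => 1,
        argument_value    => p_value);
    dbms_scheduler.enable('MY_DISABLED_JOB');
end start_my_job;
/
grant execute on start_my_job to some_user;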

  • How to run a job automatically with file watcher

    Hi,
I want to execute the job below automatically when a file arrives in the Oracle directory path.
I am able to run the job manually.
    version details
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    "CORE 11.2.0.3.0 Production"
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    BEGIN
        SYS.DBMS_SCHEDULER.CREATE_JOB (
                job_name => '"UPN_COMMON"."EMPJOBS"',
                job_type => 'STORED_PROCEDURE',
                job_action => '"UPN_COMMON"."INS_EMP"',
                number_of_arguments => 0,
                start_date => TO_TIMESTAMP_TZ('2013-09-30 09:22:20 America/New_York','YYYY-MM-DD HH24.MI.SS TZR'),
                event_condition => '(1=1)',
                queue_spec => '"UPN_COMMON"."FILE_WATCHER"',
                end_date => TO_TIMESTAMP_TZ('2013-09-30 10:52:20 America/New_York','YYYY-MM-DD HH24.MI.SS TZR'),
                job_class => '"SYS"."DEFAULT_JOB_CLASS"',
                enabled => FALSE,
                auto_drop => FALSE,
                comments => 'TESTING A PROCEDURE',
                credential_name => NULL,
                destination_name => NULL);
        SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
                 name => '"UPN_COMMON"."EMPJOBS"',
                 attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);
        SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
                 name => '"UPN_COMMON"."EMPJOBS"',
                 attribute => 'max_run_duration', value => INTERVAL '1' MINUTE);
        SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
                 name => '"UPN_COMMON"."EMPJOBS"',
                 attribute => 'schedule_limit', value => INTERVAL '1' MINUTE);  
        SYS.DBMS_SCHEDULER.enable(
                 name => '"UPN_COMMON"."EMPJOBS"');
    END;

    >I want to execute below job automatically when the file arrived in the oracle directory path..
The code which results in the file arriving in the Oracle directory path needs to be enhanced to invoke the desired PL/SQL procedure.
File arrival is an OS operation that Oracle knows nothing about and is completely oblivious to.
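For completeness, the event-based job above relies on a file watcher object (and usually an OS credential) already existing under the name referenced in queue_spec. A minimal sketch of creating them, where the directory path, file-name pattern, and OS credential values are purely illustrative assumptions:
BEGIN
    -- OS credential the file watcher uses to poll the directory (illustrative values)
    DBMS_SCHEDULER.CREATE_CREDENTIAL(
            credential_name => '"UPN_COMMON"."WATCH_CRED"',
            username        => 'oracle',
            password        => 'os_password');
    -- the file watcher the job subscribes to via queue_spec
    DBMS_SCHEDULER.CREATE_FILE_WATCHER(
            file_watcher_name => '"UPN_COMMON"."FILE_WATCHER"',
            directory_path    => '/u01/app/incoming',
            file_name         => 'emp*.csv',
            credential_name   => '"UPN_COMMON"."WATCH_CRED"',
            destination       => NULL,
            enabled           => TRUE);
END;
/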
