Background Job Trace

Hi Gurus,
One of my consultants has scheduled a background job that runs on a daily basis. I noticed that it last ran successfully on 4th November; after that it did not run until it was rescheduled today.
From SM37 I can't see why the job didn't run. So is it possible to trace the jobs?
Regards,
Mofizur

Hi Juan,
There is no entry between 4th Nov and 12th Dec.
JOB Name                Status       Date
RMMRP000     Finished     01.11.2007
RMMRP000     Finished     01.11.2007
RMMRP000     Finished     02.11.2007
RMMRP000     Finished     02.11.2007
RMMRP000     Finished     03.11.2007
RMMRP000     Finished     03.11.2007
RMMRP000     Finished     04.11.2007
RMMRP000     Finished     04.11.2007
RMMRP000     Finished     12.12.2007
RMMRP000     Finished     12.12.2007
I think you have understood the problem. I have checked all the job status options in SM37: Scheduled, Released, and so on.
Regards,
Mofizur

Similar Messages

  • Setting background job

    Hi, I am a beginner in ABAP. I want to schedule a background job for a particular date and time through a program only, and to stop the program within a certain time: the job should run for at most 10 seconds, after which it is cancelled.
    Please help me.

    Hi Kiran,
    You can schedule your job with the JOB_OPEN / JOB_SUBMIT / JOB_CLOSE function modules.
    Check this code.
    DATA:
      RDD_JOBNAME   LIKE TBTCO-JOBNAME,
      RDD_USERNAME  LIKE TBTCO-AUTHCKNAM VALUE 'DDIC',
      RDD_REPORT    LIKE SY-REPID,
      RDD_VARIANT   LIKE RALDB-VARIANT   VALUE '1',
      RDD_PRDDAYS   LIKE TBTCO-PRDDAYS,
      RDD_PRDMONTHS LIKE TBTCO-PRDMONTHS,
      RDD_STRTTIME  LIKE TBTCO-SDLSTRTTM VALUE 28800,
      RDD_STRTDATE  LIKE TBTCO-SDLSTRTDT,
      RDD_JOBCOUNT  LIKE TBTCO-JOBCOUNT,
      COUNT  LIKE TBTCO-SDLSTRTTM.
    COUNT = 28800.
      DO 12 TIMES.
        RDD_STRTDATE = SY-DATUM + 1.       " tomorrow
        RDD_STRTTIME = COUNT.              " first run at 08:00:00 (28800 seconds)
        RDD_JOBNAME = '#SAP_COLLECTOR_FOR_PERFMONITOR'.
        RDD_REPORT = 'RSCOLL00'.
        PERFORM SCHEDULE_JOB.
        COUNT = COUNT + 3600.
      ENDDO.
    FORM SCHEDULE_JOB.
      CALL FUNCTION 'JOB_OPEN'
           EXPORTING
                JOBNAME          = RDD_JOBNAME
           IMPORTING
                JOBCOUNT         = RDD_JOBCOUNT
           EXCEPTIONS
                CANT_CREATE_JOB  = 1
                INVALID_JOB_DATA = 2
                JOBNAME_MISSING  = 3
                OTHERS           = 4.
      IF SY-SUBRC EQ 0.
              CALL FUNCTION 'JOB_SUBMIT'
               EXPORTING
                    JOBNAME                 = RDD_JOBNAME
                    JOBCOUNT                = RDD_JOBCOUNT
                    AUTHCKNAM               = RDD_USERNAME
                    REPORT                  = RDD_REPORT
                  variant                 = rdd_variant
               EXCEPTIONS
                    BAD_PRIPARAMS           = 1
                    BAD_XPGFLAGS            = 2
                    INVALID_JOBDATA         = 3
                    JOBNAME_MISSING         = 4
                    JOB_NOTEX               = 5
                    JOB_SUBMIT_FAILED       = 6
                    LOCK_FAILED             = 7
                    PROGRAM_MISSING         = 8
                    PROG_ABAP_AND_EXTPG_SET = 9
                    OTHERS                  = 10.
        ENDIF.
        IF SY-SUBRC EQ 0.
              CALL FUNCTION 'JOB_CLOSE'
                 EXPORTING
                      JOBNAME              = RDD_JOBNAME
                      JOBCOUNT             = RDD_JOBCOUNT
                      PRDDAYS              = RDD_PRDDAYS
                       SDLSTRTDT            = RDD_STRTDATE
                       SDLSTRTTM            = RDD_STRTTIME
                  EXCEPTIONS
                      CANT_START_IMMEDIATE = 1
                      INVALID_STARTDATE    = 2
                      JOBNAME_MISSING      = 3
                      JOB_CLOSE_FAILED     = 4
                      JOB_NOSTEPS          = 5
                      JOB_NOTEX            = 6
                      LOCK_FAILED          = 7
                      OTHERS               = 8.
            IF SY-SUBRC <> 0.
              MESSAGE S587 WITH RDD_JOBNAME.
            ENDIF.
           ENDIF.
    ENDFORM.
    Thanks,
    Susmitha

  • Server Group not working when one of Job Servers is down

    I have a Server Group of two job servers. They have the same version 12.2.2.3 and are attached to the same Local Repository of version 12.2.2.0.
    When I execute a batch job (from the Designer) and explicitly specify on which job server it should run, it works fine for either job server. Also, when I specify Batch Job execution on Server Group, it works fine.
    However, when I shutdown one of the Job Servers, and then try to execute the job on Server Group, I'm getting two error messages, BODI-1111011 that one Job Server is down, and BODI-1111009 that another Job Server has failed.
    At the same time, when I check the Server Group in the Management Console, it shows that the allegedly failed Job Server is in the status green.
    That error is not reflected in the job server event log, nor is there anything written to the webadmin log or the job trace (the latter isn't created at all).
    Is there anything I can do at this point except raise a support message?

    The issue was with different users configured for the Local Repository in the Admin and Job Server configs. I discovered it when trying to run the job from the Admin Console. Designer is probably not the best diagnostic tool for this kind of issue.

  • Regarding status of job

    Hi friends,
    We have some RapidMarts, and one of the RapidMart job servers shows red in the Data Services Management Console.
    I am confused whether the job has been scheduled successfully or not, or whether it was scheduled successfully but has some errors.
    When I check the job trace log in the DSMC, it shows:
    DATAFLOW: Run as separate process flag is set to YES for the Hierarchy_flattening transform <Hierarchy_Flattening> in data flow  <DF_xx>. This is because the dataflow is set to run in Pageable cache and Hierarchy flattening cannot run in pageable mode. To avoid running as separate process, set data flow cache type to in_memory_cache.
    (So is this an error? If it is, please guide me on how to resolve it.)
    Some data flows show successful and some still show "started".
    Please assist; I am new to this.
    Thanks in advance

    Are you getting a red mark against a local repository? If so, one of the jobs in the repository has failed. That can be either a scheduled job or one that was triggered manually.
    DATAFLOW: Run as separate process flag is set to YES
    This is not an error but a warning that the Hierarchy Flattening transform has been set to run as a separate process. If you want to avoid this warning, right-click on the data flow and change the cache type from Pageable to In Memory. The performance impact can be negative if the memory consumed by the data flow is higher than the available virtual memory.
    If it is against the job server that you see a red status, you have probably set up a server group and are seeing the status against that. It means the specified job server cannot connect to its repository.
    Regards,
    Suneer

  • Please advise me about ETL Job slowly

    Hi experts,
    I wonder why the ETL job takes longer than expected, so I checked the Job Trace Log below.
    I checked the data flow and found that it uses lookup data.
    I checked the database, but I cannot use count() to see it.
    I think this is probably the reason the ETL runs slowly. Thank you for your advice.

    I think you are using the pageable cache type. Select "Collect statistics for monitoring", but do not select "Use collected statistics"; if that option is selected, the job will size its cache based on the previous run. Also refer to these documents for more information:
    http://wiki.scn.sap.com/wiki/display/EIM/DI+Caching+Example
    http://wiki.scn.sap.com/wiki/display/EIM/Pageable+Cache+and+DSConfig

  • Performance Degradation - High Fetches and Parses

    Hello,
    My analysis on a particular job trace file drew my attention towards:
    1) High rate of Parses instead of Bind variables usage.
    2) High fetches and poor number/ low number of rows being processed
    Please let me know how the performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and round trips with the client.
    EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1)  */ * FROM  SAPNXP.INOB
    WHERE MANDT = :A0
    AND KLART = :A1
    AND OBTAB = :A2
    AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      119      0.00       0.00          0          0          0           0
    Execute    239      0.16       0.13          0          0          0           0
    Fetch      239   2069.31    2127.88          0   13738804          0           0
    total      597   2069.47    2128.01          0   13738804          0           0
    PLAN_TABLE_OUTPUT
    Plan hash value: 1235313998
    | Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |        |     2 |   268 |     1   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY               |        |       |       |            |          |
    |*  2 |   TABLE ACCESS BY INDEX ROWID| INOB   |     2 |   268 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX SKIP SCAN           | INOB~2 |  7514 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=TO_NUMBER(:A4))
       2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
       3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
           filter("OBTAB"=:A2)
    18 rows selected.
    SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
    INDEX_NAME      TABLE_NAME                     COLUMN_NAME
    INOB~2          INOB                           MANDT
    INOB~2          INOB                           CLINT
    INOB~2          INOB                           OBTAB
    Is it possible to maximise the rows per fetch?
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      163      0.03       0.00          0          0          0           0
    Execute    163      0.01       0.03          0          0          0           0
    Fetch   174899     55.26      59.14          0    1387649          0     4718932
    total   175225     55.30      59.19          0    1387649          0     4718932
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 27
    Rows     Row Source Operation
      28952  TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
      28952   INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                  174899        0.00          0.16
      SQL*Net more data to client                155767        0.01          5.69
      SQL*Net message from client                174899        0.11        208.21
      latch: cache buffers chains                     2        0.00          0.00
      latch free                                      4        0.00          0.00
    ********************************************************************************

    user4566776 wrote:
    My analysis on a particular job trace file drew my attention towards:
    1) High rate of Parses instead of Bind variables usage.
    But if you look at the text you are using bind variables.
    The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
    2) High fetches and poor number/ low number of rows being processed
    The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
    You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
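    The arithmetic behind the array-fetch-size point can be checked directly. This is an illustrative sketch only; the figures are copied from the tkprof output quoted earlier in this thread.

```python
# Rough arithmetic behind the array-fetch-size observation,
# using the figures from the quoted tkprof output.
rows_fetched = 4_718_932   # total rows returned by the second query
fetch_calls = 174_899      # fetch calls reported by tkprof

rows_per_fetch = rows_fetched / fetch_calls
print(f"rows per fetch: {rows_per_fetch:.1f}")

# A larger array fetch size reduces the number of round trips (and hence
# the SQL*Net message waits), but the per-row CPU work is unchanged,
# which is why the expected gain is bounded.
for array_size in (50, 100, 200):
    round_trips = -(-rows_fetched // array_size)  # ceiling division
    print(f"array size {array_size}: ~{round_trips} fetch calls")
```

    The ratio works out to about 27 rows per call, in line with the "roughly 25" estimate in the reply above.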
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Streams propagation error

    I have 2 Oracle 11.2 standard edition databases and I am attempting to perform synchronous capture replication from one to the other.
    I have a propagation rule created with:
    begin
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name => 'duptest.test1',
    streams_name => 'send_test1',
    source_queue_name => 'strmadmin.streams_capt_q',
    destination_queue_name => '[email protected]',
    include_dml => true,
    include_ddl => false,
    inclusion_rule => true);
    end;
    As strmadmin on the source database I can query the streams_apply_q_table at orac11g.world (the destination database).
    However the propagation fails and produces a job trace file stating:
    kwqpdest: exception 24033
    kwqpdest: Error 24033 propagating to "STRMADMIN"."STREAMS_APPLY_Q"
    94272890B6B600FFE040007F01006D4C
    Can anyone suggest what is wrong and how to fix this?

    Paul,
    The capture process is:
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'duptest.test1',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name => 'streams_capt_q');
    END;
    The apply process on the destination database is:
    begin
    dbms_apply_adm.create_apply(
    queue_name => 'streams_apply_q',
    apply_name => 'sync_apply',
    apply_captured => false,
    source_database => 'ORALIN1');
    end;
    begin
    dbms_streams_adm.add_table_rules(
    table_name => 'duptest.test1',
    streams_type => 'apply',
    streams_name => 'sync_apply',
    queue_name => 'streams_apply_q',
    include_dml => true,
    include_ddl => false,
    include_tagged_lcr => false,
    source_database => 'ORALIN1',
    inclusion_rule => true);
    end;
    There is an entry in the AQ$_STREAMS_CAPT_Q_TABLE_E table, and no entries in dba_apply_error or streams_apply_q_table so I am assuming that the message does not make it to the destination queue.
    I have been assuming that the propagation and apply steps are independent, i.e. the apply queue separates the propagation activity from the apply activity. Is this wrong?
    Thanks

  • How is delivery date calculated

    Can anyone please explain how the delivery date is calculated using forward and backward scheduling?
    I want it broken down into the following steps.
    For example, the following dates are used in delivery date calculation:
    Material Availability Date
    Material Staging Date
    Pick/Pack Time
    Transportation Planning Date
    Loading Date
    Goods Issue Date
    Transit Time
    Delivery Date
    Can someone please give me an example and explain what these dates are?
    For example, if a customer needs a delivery date of 11/20/2008,
    how would the system calculate whether it can meet that delivery date using backward scheduling,
    and if it cannot meet it, how does the system do the forward scheduling?
    Also, I am not clear about the following dates:
    Material availability date
    Material staging date
    Transportation date
    Can someone please explain all this in detail?
    I also have another question: at sales order creation, when are the shipping point and route determined?
    Because the material availability date is determined based on the ATP check, and if we have a background job running every hour for ATP, then when we create a sales order, are the route and shipping point determined immediately (just before we save the sales order)?
    Let me be more clear.
    Suppose a customer representative receives an order on the phone.
    He enters the sold-to party, ship-to party, PO number, delivery date and material number, and then hits Enter.
    Are the shipping point and route determined at that time?
    Also, when an ATP check runs and the delivery date cannot be met, the system will propose a new delivery date. But suppose we have a different route configured, say overnight, so that we can meet the delivery date; if we want to change the route to overnight, what must we do?
    Should we change the shipping condition in the header?
    I am not very sure about the process; can you please explain this in detail too?
    Thanks

    Hi there,
    When a sales order is logged and the user enters the requested delivery date, the system first does backward scheduling. Please note that the factory calendar maintained in the shipping point and route plays a crucial role in determining the working days and the confirmed delivery date.
    For example, the customer raised an order on 11/15 and requests delivery on 11/20/2008.
    The following times are important in delivery scheduling:
    Transit time: maintained in the route.
    Loading time: maintained in the shipping point.
    Transportation planning time: maintained in the transportation planning point.
    Pick/pack time: maintained in the shipping point.
    Material availability time: maintained in the MM02 MRP screens. This is the time until the material can be manufactured (for in-house produced items) or the external processing time (for externally procured materials like TAS items).
    From the requested delivery date 11/20 system does the backward scheduling & determines the following dates:
    Goods issue date, loading date, pick pack date, transportation planning date & material availability date.
    Time between:
    Goods issue date - requested delivery date: transit time
    Goods issue date - loading date: loading time
    Transportation planning date - pick/pack date: pick/pack time
    Material availability date - transportation planning date: transportation planning time
    Assume the factory calendar has all days of the week as working days (to keep the explanation simple). Also, transit time is 3 days, loading time is 1 day, pick/pack time is 1 day, and material availability time is 3 days.
    From 11/20, using backward scheduling, the system determines the following dates:
    Goods issue date: 11/17
    Loading date: 11/16
    Pick pack date: 11/15
    The system will check if the material is available on 11/15 to start pick/pack. If it is available, the system will confirm the requested delivery date. Otherwise it checks when the material will be available. Suppose that, based on the MRP settings, the material is available only on 11/18. From the 18th the system does forward scheduling and redetermines all the dates: pick/pack date is 11/18, loading date is 11/19, goods issue date is 11/20, and the possible delivery date is 11/23. So the system will confirm a delivery date of 11/23. This is when complete delivery is required. If partial delivery is allowed, the system will check how much quantity is available on 11/15 and accordingly give two delivery dates.
    In the above example, now include a factory calendar with a 5-day week (Fri and Sat as holidays); the dates will change accordingly.
    Replenishment lead time also plays an important role here. Please refer to http://help.sap.com/erp2005_ehp_03/helpdata/EN/6b/2785347860ea35e10000009b38f83b/frameset.htm for further information.
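    The worked example above can be sketched as a small script. This is illustrative only: it assumes the simplified all-working-days calendar and the example lead times from this post, and ignores the factory calendar and transportation planning time that real SAP scheduling also applies.

```python
from datetime import date, timedelta

# Example lead times from the post (in days).
TRANSIT, LOADING, PICK_PACK = 3, 1, 1

def backward(requested_delivery: date):
    """Backward scheduling: requested delivery -> GI, loading, pick/pack dates."""
    goods_issue = requested_delivery - timedelta(days=TRANSIT)
    loading = goods_issue - timedelta(days=LOADING)
    pick_pack = loading - timedelta(days=PICK_PACK)
    return goods_issue, loading, pick_pack

def forward(material_available: date):
    """Forward scheduling: availability date -> pick/pack, loading, GI, delivery."""
    pick_pack = material_available
    loading = pick_pack + timedelta(days=PICK_PACK)
    goods_issue = loading + timedelta(days=LOADING)
    delivery = goods_issue + timedelta(days=TRANSIT)
    return pick_pack, loading, goods_issue, delivery

# Backward scheduling from the requested delivery date 11/20:
print(backward(date(2008, 11, 20)))  # GI 11/17, loading 11/16, pick/pack 11/15

# Material only available on 11/18 -> forward scheduling:
print(forward(date(2008, 11, 18)))   # 11/18, 11/19, GI 11/20, delivery 11/23
```

    Running it reproduces the dates in the example: backward scheduling yields goods issue 11/17, loading 11/16, pick/pack 11/15; with material available only on 11/18, forward scheduling confirms delivery on 11/23.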
    Regards,
    Sivanand

  • Unable to send emails using SMTP Plug In

    Hi all.
    I am trying to configure SAPConnect so that I can send emails from SAP to
    external domains. I have done all the settings in SCOT, but our outgoing SMTP server requires authentication, so in my job trace I am getting the error '530 Authentication Required'.
    In transaction RZ10 where I have maintained Instance Profile IDS_DVEBMGS00_IDES-SERVER
    I have set the parameter icm/server_port_0 with the value PROT=SMTP, PORT=25. Is there any way I can pass "Authentication Required" as a value here, or in transaction SCOT?
    If anyone has a solution, please help me.
    Thanks and regards.
    Vipin Varghese.

    Thanks all for the help.
    The server that I am supposed to configure is the email server you are talking about, right?
    I'm fairly new to WF, and these Basis steps are new to me.
    There are 4 profiles created in RZ10 - 2 start profiles and 2 instance profiles.
    I am making changes in an instance profile, but after the changes, when I try to activate it, it shows some warnings and errors. Somehow it still gets activated, and an instance profile with version 2 is saved as active.
    Also, when I open MS Outlook and go to the account settings, for the outgoing server (SMTP) the "Authentication Required" checkbox is marked.
    Now, is any one of these the problem?
    Regards.
    Vipin Varghese.

  • Which exceptions are really catchable?

    Hi,
    In the documentation, following exception groups are listed:
    • Execution errors (1001)
    • Database access errors (1002)
    • Database connection errors (1003)
    • Flat file processing errors (1004)
    • File access errors (1005)
    • Repository access errors (1006)
    • SAP system errors (1007)
    • System resource exception (1008)
    • SAP BW execution errors (1009)
    • XML processing errors (1010)
    • COBOL copybook errors (1011)
    • Excel book errors (1012)
    • Data Quality transform errors (1013)
    From this list, I understand that the number in parentheses represents a group of numbers in a certain range.
    Yet I am puzzled by the numbers that are thrown and caught (or not) by Data Services at runtime.
    Example 1
    In a try/catch, I have selected ALL errors to be catchable. The script in the catch editor prints the error number and message. I have an audit rule which fails. The number of the message is NOT within the groups mentioned above, yet it is caught. What is the meaning of those numbers exactly? The error in this example is RUN-050450, which seems to cover the audit rule failure. Why is this caught but not in the groups above? Unless those numbers mean something else?
    Example 2
    In a try/catch, I have selected ALL errors to be catchable. In this case, there is an issue with the mail_to function. The job does log an error, BUT it is not caught by the try/catch even though ALL errors is set. Why are there some errors that the job logs but that are not catchable?
    Thanks a lot & Best Regards
    Isabelle

    Thanks for sharing your experience. Indeed, message handling is not so obvious in Data Services.
    Two other things I would like to point out:
    - In the Reference Documentation, it is mentioned under the Error Log section that if the error log is empty (that is, the button is dimmed), the job completed successfully. This does not seem to be true in fact: I have an example where the error log is not empty and yet the last statement of the job trace is "Job <job_name> is completed successfully". It is not clear what the intent is exactly.
    - The Reference Documentation also has a table of errors which has no relation to the catchable errors mentioned in the Designer Documentation under the section "Categories of available exceptions". Yet those two topics are related. Once again, it is not clear how the catchable exceptions relate to the error numbers.
    Any hints to understand the concepts and usage is appreciated.

  • Background job running for a long time - trace is not capturing.

    hi guys,
    My background job sometimes runs for more than 4 hours (normally it completes within one hour).
    I tried to trace it using ST12 and executed the program, but I did not get a complete trace report covering the 4 hours.
    What is the best way to trace long-running programs like this?
    Is there any alternative?
    regards
    girish

    Giri,
    There is no need to trace a program for the full 4 hours. There is usually just one piece of rogue code causing the problem, and it can be identified quite quickly.
    If I were you, I would just watch the job run via SM50 to see if it lingers on any database tables. Also check whether the memory is filling up - that is the sign of a memory leak.
    You can also try stopping the program (via SM50 debugging) at random points to see which piece of code it is stuck in.
    The issue should reveal itself fairly quickly.
    Tell us what you find!
    cheers
    Paul

  • SQL trace for Background jobs

    Hi
    Can anyone suggest how to get an SQL trace for background jobs?
    Thanks in advance.
    Regards
    D.Vadivukkarasi

    Hi
    Check the transaction ST05.

  • DeskI Job Server Hangs with Connection timed out error in trace file

    Hi, these days I have a problem with the Desktop Intelligence Job Server. Some hours after it starts (it is restarted every morning to allow a DB backup), I see errors like this in the trace file:
    [Wed Apr  8 09:58:40 2009]      28116   1457753008      (../iinfoprocessing_implmixedproc.cpp:607): trace message: Failed to send message to child: 28527, err 0, PipeListener: timed out reading from pipe: Connection timed out
    [Wed Apr  8 09:58:43 2009]      28116   1457753008      (csipipe.cpp:242): trace message: PipeListener: ReadString(1) timed out: Connection timed out
    [Wed Apr  8 09:58:43 2009]      28116   1457753008      (../iinfoprocessing_implmixedproc.cpp:607): trace message: Failed to send message to child: 28529, err 0, PipeListener: timed out reading from pipe: Connection timed out
    [Wed Apr  8 09:58:46 2009]      28116   1457753008      (csipipe.cpp:242): trace message: PipeListener: ReadString(1) timed out: Connection timed out
    [Wed Apr  8 09:58:46 2009]      28116   1457753008      (../iinfoprocessing_implmixedproc.cpp:607): trace message: Failed to send message to child: 28553, err 0, PipeListener: timed out reading from pipe: Connection timed out
    [Wed Apr  8 09:58:49 2009]      28116   1457753008      (csipipe.cpp:242): trace message: PipeListener: ReadString(1) timed out: Connection timed out
    [Wed Apr  8 09:58:49 2009]      28116   1457753008      (../iinfoprocessing_implmixedproc.cpp:607): trace message: Failed to send message to child: 28555, err 0, PipeListener: timed out reading from pipe: Connection timed out
    [Wed Apr  8 09:58:52 2009]      28116   1457753008      (csipipe.cpp:242): trace message: PipeListener: ReadString(1) timed out: Connection timed out
    [Wed Apr  8 09:58:52 2009]      28116   1457753008      (../iinfoprocessing_implmixedproc.cpp:607): trace message: Failed to send message to child: 28591, err 0, PipeListener: timed out reading from pipe: Connection timed out
    All submitted jobs remain in "Suspend" mode and never pass to "Executing".
    To work around the problem I have to manually stop and restart the DeskI Job Server, but after some hours the problem reappears.
    How can I solve it?
    I've installed Business Objects XI R2 SP5 on Linux SLES 10
    Thanks

    Checking the jobcd trace of the child process in connection timeout, I can see the following at the time the error occurs in the jobsd process:
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      (../ijob_implmixedproc.cpp:143): trace message: MixedProcMgr: msg = GetLoad
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      (../ijob_implmixedproc.cpp:128): trace message: MixedProcMgr: timed out wating for new jobs, shutting down.
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      (pp_procFC.cpp:3716): trace message: IJobDllUnInitialize
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      (pp_procFC.cpp:951): trace message: ReleaseStaticSubsystem()
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      (pp_procFC.cpp:168): trace message: t_shutdown_ptr::reset()
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      (pp_procFC.cpp:172): trace message: Shutting down INTERFACE
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      trace message: TraceLog: [ENTER] BOLApp::ExitInstance
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      trace message: TraceLog: [ENTER] ~_BOUserUniverseManager
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      trace message: TraceLog: [ENTER] ReleaseUniverseCache
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      trace message: TraceLog: [EXIT]  ReleaseUniverseCache: 0
    [Thu Apr  9 09:14:08 2009]      8940    1452690272      trace message: TraceLog: [EXIT]  ~_BOUserUniverseManager: 0
    The error occurs exactly 2 hours after the child process starts ... I don't think it's a coincidence ...
    Edited by: David Loreto on Apr 9, 2009 11:41 AM

  • DS Job Fails with Print All Trace Logs turned on

    Hey all,
    I have a real-time DS 12.2 job which uses a Global Address Cleanse transform and a few Query transforms. If I add a Data Cleanse transform to the job, it fails but does not provide any errors in the log. It only has:
    16105     4008583808     VAL-030910     5/10/2011 8:08:11 AM     |Session RT_Job_DQ_CR_Address_Cleanse_Match
    16105     4008583808     VAL-030910     5/10/2011 8:08:11 AM     Transform <PEP_CUST_USARegulatoryNonCertified_AddressCleanse>:Option Error(Option:
    16105     4008583808     VAL-030910     5/10/2011 8:08:11 AM     NON_CERTIFIED_OPTIONS/DISABLE_CERTIFICATION): PSFORM 3553 will not be generated  because some Non-Certified options are enabled.
    16105     1100020032     DQX-058306     5/10/2011 8:08:14 AM     |Session RT_Job_DQ_CR_Address_Cleanse_Match|Data flow DF_CR_Address_Cleanse_Match|Transform PEP_Cust_EnglishNorthAmerica_DataCleanse
    16105     1100020032     DQX-058306     5/10/2011 8:08:14 AM     Transform <PEP_Cust_EnglishNorthAmerica_DataCleanse>: Synchronizing the entire performance dictionary due to an unexpected
    16105     1100020032     DQX-058306     5/10/2011 8:08:14 AM     environment; this can be a time-consuming process.
    Then, if I turn on Print All Trace Messages, the error log looks like this:
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     |Session Copy_1_RT_Job_DQ_CR_Address_Cleanse_Match|Data flow Copy_1_DF_CR_Address_Cleanse_Match|Reader READ MESSAGE Input_Realtime_DQ_Adrs_Clean_Match_NS OUTPUT(Input_
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     XML parser failed: Error <An exception occurred! Type:TranscodingException, Message:Could not create a converter for encoding:
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     > at line <0>, char <0> in <<?xml version="1.0" encoding="utf-8"?>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <AddressCleanseMatchRequest xmlns="http://businessobjects.com/service/RT_Job_DQ_CR_Address_Cleanse_Match/input">
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <Record>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <Name1>JEDD</Name1>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <Address1>1 CASCADE PLAZA</Address1>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <Address2></Address2>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <City>AKROn</City>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <State>OHIO</State>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <PostalCode></PostalCode>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <Country></Country>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <StoreNum>321</StoreNum>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <POBox></POBox>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     <RequestSysID>CM</RequestSysID>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     </Record>
    17487     1086335296     XML-240304     5/10/2011 8:13:50 AM     </AddressCleanseMatchRequest>>, file <>.
    17487     1086335296     XML-240307     5/10/2011 8:13:51 AM     |Session Copy_1_RT_Job_DQ_CR_Address_Cleanse_Match|Data flow Copy_1_DF_CR_Address_Cleanse_Match|Reader READ MESSAGE Input_Realtime_DQ_Adrs_Clean_Match_NS OUTPUT(Input_
    17487     1086335296     XML-240307     5/10/2011 8:13:51 AM     XML parser failed: See previously displayed error message.
    17487     1451250304     XML-240307     5/10/2011 8:14:01 AM     |Session Copy_1_RT_Job_DQ_CR_Address_Cleanse_Match|Data flow Copy_1_DF_CR_Address_Cleanse_Match|Reader READ MESSAGE Input_Realtime_DQ_Adrs_Clean_Match_NS OUTPUT(Input_
    17487     1451250304     XML-240307     5/10/2011 8:14:01 AM     XML parser failed: See previously displayed error message.
    Here is something really strange: if I drop the Data Cleanse datastore, the job finishes, except when I leave Print All Trace Messages turned on, in which case it fails with the same XML parser error above.
    Thanks for all help.
    Ryan

    It turned out I had a Global Address engine with the US engine enabled, and a US Address engine in the same job. It did not like that.
    I changed the Global Address engine to not handle US addresses, and it worked.

  • Dbms_monitor.SERV_MOD_ACT_TRACE_ENABLE cannot trace service in job class

    Hi All,
    I have an issue with dbms_monitor.SERV_MOD_ACT_TRACE_ENABLE: it can trace a SQL*Plus query, but it cannot trace the service used by a job class.
    I would appreciate it if anyone can help.
    Below are the test steps:
    1. add service
    srvctl add service -d rdb -s oltp -r rdb1 -a rdb2
    2. startup service
    srvctl start service -d rdb -s oltp
    3. using service in tnsnames.ora
    oltp =
      (DESCRIPTION =
        (LOAD_BALANCE = ON)
        (FAILOVER = ON)
        (ADDRESS = (PROTOCOL = TCP)(HOST = rdb1)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = rdb2)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = OLTP)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 20)
            (DELAY = 1)
          )
        )
      )
    4. enable trace
    dbms_monitor.SERV_MOD_ACT_TRACE_ENABLE('oltp');
    5. run one query and check trace file
    sqlplus /nolog
    conn sys/sys@oltp as sysdba
    select * from test;
    in $ORACLE_BASE/admin/rdb/udump/rdb1_ora_22960.trc
    There is the query: select * from test
    6. create job class with service
    BEGIN
    DBMS_SCHEDULER.create_job_class(
    job_class_name => 'OLTP_JOB_CLASS',
    service => 'OLTP');
    END;
    7. create job
    BEGIN
    DBMS_SCHEDULER.create_job (
    job_name => 'my_job',
    job_type => 'PLSQL_BLOCK',
    job_action => 'insert into test values (sysdate);',
    start_date => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=10;',
    job_class => 'OLTP_JOB_CLASS',
    end_date => SYSDATE + 7,
    enabled => TRUE,
    comments => 'Job linked to the OLTP_JOB_CLASS.');
    END;
    8. verify the job result
    select to_char(c1, 'YYYY-MM-DD HH24:MI') from test;
    2010-05-16 17:17
    2010-05-16 17:27
    9. check the trace files
    In the $ORACLE_BASE/admin/rdb/udump/ directory there is no trace containing a query like
    insert into test values (sysdate);
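    For reference, the enable call in step 4 can be written out with its full (documented) parameters; this is only a sketch of the same call I used, with the DBMS_MONITOR defaults made explicit so it is clear that waits are captured and binds are not:

    ```sql
    BEGIN
      -- Enable SQL trace for every session that connects through the
      -- 'oltp' service; module_name/action_name are left at their
      -- ANY_MODULE/ANY_ACTION defaults so all sessions of the service match.
      DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
        service_name => 'oltp',
        waits        => TRUE,
        binds        => FALSE);
    END;
    /
    ```

    One thing worth checking: the service_name value is matched case-sensitively against the name the service is actually registered under, so if it registered as 'OLTP' (the job class above uses 'OLTP'), enabling trace for 'oltp' may silently match nothing.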
    Thanks in advance
    Jacky

    Thank you.
    I see that we can use ALTER SESSION to enable tracing, but I don't want to add that kind of statement to the job itself; it is not convenient for performance work.
    I want to enable tracing for the service 'oltp' so that I get trace files for all queries, including those run through the job class.
    Thanks for your suggestion; I am using SYS for the test on a test DB.
    Thanks
    Jacky
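
    One way to see what tracing is actually in effect (a diagnostic sketch; DBA_ENABLED_TRACES is the standard dictionary view that records DBMS_MONITOR settings):

    ```sql
    -- List every trace enabled through DBMS_MONITOR, with the exact
    -- (case-sensitive) name each one was registered under.
    SELECT trace_type, primary_id, waits, binds
      FROM dba_enabled_traces;
    ```

    If the PRIMARY_ID shown here does not match, character for character, the service name the job class sessions connect under, those sessions will not be traced even though the trace appears enabled.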
