Performance Problem While Updating Data in Customized TMS System

Dear guys,
I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table. There is an if/else condition which checks whether each record's status is Late, Present on time, etc.
Any clue on how to improve performance?
Thanks in advance.
Regards,
Furqan

Furqan wrote:
Dear guys,
I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table. There is an if/else condition which checks whether each record's status is Late, Present on time, etc. Any clue on how to improve performance?
From that description, and without any database version, code, explain plans, execution traces or anything else that would be remotely useful... erm... no.
Hint:
How to Post an SQL statement tuning request...
HOW TO: Post a SQL statement tuning request - template posting
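That said, the usual culprit behind this pattern is row-by-row processing: fetching each attendance record, evaluating the if/else in application code, and issuing one UPDATE per row. If the status rules can be expressed declaratively, a single set-based statement with a CASE expression tends to be far faster. A minimal sketch, assuming Oracle and entirely hypothetical table and column names (monthly_roster, attendance_log, punch_time, shift_start):

-- Sketch only: every identifier below is hypothetical, not taken from the original post.
-- One set-based MERGE replaces a per-row if/else loop in application code.
MERGE INTO monthly_roster r
USING attendance_log a
ON (    a.emp_id       = r.emp_id
    AND a.work_date    = r.work_date
    AND r.roster_month = :month_start)
WHEN MATCHED THEN UPDATE SET
  r.status = CASE
               WHEN a.punch_time <= a.shift_start           THEN 'PRESENT'
               WHEN a.punch_time <= a.shift_start + 15/1440 THEN 'GRACE'  -- 15-minute grace window
               ELSE 'LATE'
             END;

If the logic has to stay procedural in PL/SQL, switching the loop to BULK COLLECT with FORALL updates usually recovers most of the difference.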

Similar Messages

  • Problem in data update to BW

    Hi All,
We are facing a very peculiar problem while updating data in BW. Data for GL and AR gets updated twice every day through a process chain. But sometimes, even after a normal load, the data does not get updated properly, and the changed data arrives only after 2-3 loads, though it should have come earlier. Can you please suggest possible reasons for this behaviour?
    Thanks in advance,
    Sananda

    Hi Sananda,
Please go through the below article:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a00ae8f2-03ad-2d10-71b7-962915661a93?quicklink=index&overridelayout=true
Also, can you tell me which type of delta update you are using? This behaviour is most likely down to your delta update mode.
Hope this helps.
    Regards,
    Venkatesh

  • Problem while processing TRANSACTION data from DSO to CUBE

    Hi Guru's,
    We are facing a problem while processing transaction data from DSO to CUBE: the data packets are processing and updating very slowly. Please help me with this.
    Thanks and regards,
    Sridhar

    Hi,
    I suggest you check a few places where you can see the status:
    1) SM37 job log (give the BI request name); it should give you the details of the request. If it is active, make sure the job log is being updated at frequent intervals.
    2) SM66: get the job details (server name, PID, etc. from SM37) and check in SM66 whether the job is running. See whether it is accessing/updating some tables or not doing anything at all. If it is running and you can see it active in SM66, you can wait for some time to let it finish.
    3) RSMO: see what is available in the Details tab. It may be stuck in the update rules.
    4) ST22: check whether any short dump has occurred.
    You can also try SM50 / SM51 to see what is happening at the system level, e.g. reading from or inserting into tables.
    If you believe it is active and running, you can verify this by checking whether the number of records in the cube has increased.
    Thanks,
    JituK

  • Performance Problem While Aggregating

    Performance problem while aggregating.
    These are my dimensions and cube. I wrote a customized aggregation map, and I am aggregating all dimensions (every level except the last, unique one (PK)) plus the cube.
    My system configuration is good, but aggregation deployment (calculation) is really very slow compared to other vendors' products:
    it took me 3 hours to aggregate all dimensions (all levels except the last) and the cube (containing only 1,000 rows; I deleted all other rows to check).
    Dimension            Number of rows
    dim_product          156,0
    t_time               730
    dim_promotion        186,4
    dim_store            25
    dim_distributor      102,81
    Cube                 Number of rows
    Cube_SalesFact       300,000
    Please solve my problem, because if it takes that much time then I must say the performance of the software is not where it should be, and I must suggest Oracle do something about this serious problem.
    Thanks,
    Well-wisher of Oracle Corporation

    BEGIN
      cwm2_olap_manager.set_echo_on;
      CWM2_OLAP_MANAGER.BEGIN_LOG('D:\', 'AggMap_CubeSalesfact.log');
      DBMS_AW.EXECUTE('aw attach RTTARGET.AW_WH_SALES RW');
      -- Drop any existing load specs and agg map first; errors are deliberately
      -- swallowed here so the script also runs against a clean schema.
      BEGIN
        DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_DISTRIBUTOR', 'RTTARGET', 'DIM_DISTRIBUTOR');
        DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_PRODUCT', 'RTTARGET', 'DIM_PRODUCT');
        DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_PROMOTION', 'RTTARGET', 'DIM_PROMOTION');
        DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_STORE', 'RTTARGET', 'DIM_STORE');
        DBMS_AWM.DELETE_AWDIMLOAD_SPEC('T_TIME', 'RTTARGET', 'T_TIME');
        -- Delete the AW cube load spec
        DBMS_AWM.DELETE_AWCUBELOAD_SPEC('CUBESALESFACT', 'RTTARGET', 'CUBE_SALESFACT');
        DBMS_AW.EXECUTE('upd RTTARGET.AW_WH_SALES; commit');
        COMMIT;
        -- Delete the agg map
        DBMS_AWM.DELETE_AWCUBEAGG_SPEC('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT');
        DBMS_AW.EXECUTE('upd RTTARGET.AW_WH_SALES; commit');
        COMMIT;
      EXCEPTION WHEN OTHERS THEN NULL;
      END;
      -- Create the agg map for cube CUBE_SALESFACT
      -- DBMS_AWM.CREATE_AWCUBEAGG_SPEC(aggmap_name, user, aw_name, cube_name);
      DBMS_AWM.CREATE_AWCUBEAGG_SPEC('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT');
      -- Register the measures of the cube for aggregation
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'STORECOST');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'STORESALES');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'UNITSALES');
      -- Register the aggregation levels for each dimension
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_ALLYEARS');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_YEAR');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_QUARTER');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_MONTH');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_ALLCOUNTRIES');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_COUNTRY');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_PROVINCE');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_CITY');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_ALLPRODUCTS');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRANDCLASS');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRANDCATEGORY');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRAND');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_DISTRIBUTOR', 'L_ALLDIST');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_DISTRIBUTOR', 'L_DISTINCOME');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PROMOTION', 'L_ALLPROM');
      DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PROMOTION', 'L_PROMOTIONMEDIA');
      BEGIN
        -- aw_dim.sql: (re)create the dimension load specs and refresh each dimension
        DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_DISTRIBUTOR', 'RTTARGET', 'DIM_DISTRIBUTOR', 'FULL_LOAD_ADDITIONS_ONLY');
        DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_DISTRIBUTOR', 'DIM_DISTRIBUTOR');
        COMMIT;
        DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_PRODUCT', 'RTTARGET', 'DIM_PRODUCT', 'FULL_LOAD_ADDITIONS_ONLY');
        DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_PRODUCT', 'DIM_PRODUCT');
        COMMIT;
        DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_PROMOTION', 'RTTARGET', 'DIM_PROMOTION', 'FULL_LOAD_ADDITIONS_ONLY');
        DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_PROMOTION', 'DIM_PROMOTION');
        COMMIT;
        DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_STORE', 'RTTARGET', 'DIM_STORE', 'FULL_LOAD_ADDITIONS_ONLY');
        DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_STORE', 'DIM_STORE');
        COMMIT;
        DBMS_AWM.CREATE_AWDIMLOAD_SPEC('T_TIME', 'RTTARGET', 'T_TIME', 'FULL_LOAD_ADDITIONS_ONLY');
        DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_T_TIME', 'T_TIME');
        COMMIT;
        -- aw_cube.sql: create the cube load spec, map the measures, refresh the cube
        DBMS_AWM.CREATE_AWCUBELOAD_SPEC('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'LOAD_DATA');
        DBMS_AWM.ADD_AWCUBELOAD_SPEC_MEASURE('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'STORECOST', 'STORECOST', 'STORECOST');
        DBMS_AWM.ADD_AWCUBELOAD_SPEC_MEASURE('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'STORESALES', 'STORESALES', 'STORESALES');
        DBMS_AWM.ADD_AWCUBELOAD_SPEC_MEASURE('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'UNITSALES', 'UNITSALES', 'UNITSALES');
        DBMS_AWM.REFRESH_AWCUBE('RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'CUBE_SALESFACT');
      -- WARNING: this swallows any load error, so a failed load would go unnoticed
      EXCEPTION WHEN OTHERS THEN NULL;
      END;
      -- Now build the cube. This may take some time on large cubes.
      -- DBMS_AWM.AGGREGATE_AWCUBE(user, aw_name, cube_name, aggmap_name);
      DBMS_AWM.AGGREGATE_AWCUBE('RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'AGG_CUBESALESFACT');
      DBMS_AW.EXECUTE('upd RTTARGET.AW_WH_SALES; commit');
      COMMIT;
      CWM2_OLAP_METADATA_REFRESH.MR_REFRESH();
      CWM2_OLAP_METADATA_REFRESH.MR_AC_REFRESH();
      DBMS_AW.EXECUTE('aw detach RTTARGET.AW_WH_SALES');
      CWM2_OLAP_MANAGER.END_LOG;
      cwm2_olap_manager.set_echo_off;
    -- WARNING: again hides every error; WHEN OTHERS THEN RAISE would be safer
    EXCEPTION WHEN OTHERS THEN NULL;
    -- EXCEPTION WHEN OTHERS THEN RAISE;
    END;

  • Problem while determining receivers using interface mapping: "SYSTEM FAILURE" during JCo call. Bean SMPP_CALL_JAVA_RUNTIME3 not found

    We have a SOAP to proxy scenario which is in production.
    We keep getting the error:
    "Problem while determining receivers using interface mapping: "SYSTEM FAILURE" during JCo call. Bean SMPP_CALL_JAVA_RUNTIME3 not found on host XXXXXX, ProgId = AI_RUNTIME_XXX."
    We are using a standard receiver determination with a single receiver and no condition, and no mapping is used in the interface determination.
    In what situations can we face an issue like this in production?

    Please check the SAP notes:
    # 1706936 - messages fails with error java.lang.RuntimeException Bean SMPP_CALL_JAVA_RUNTIME3 not found
    1944248 - PI unstable due to JCO_SYSTEM_FAILURE mapping issues

  • Performance problem while CPU is 80% idle?

    Hi,
    My end users are complaining about performance during execution of a batch process.
    As you can see, there are 1,745 statements executing each second.
    The AWR report shows 98.1% of DB time spent on CPU, yet it also shows the host CPU as 79.9% idle.
    The second timed event shows only 212 seconds of waits on db file sequential read, and 4 minutes in a 1-hour period hardly seems an issue.
    Please advise.
    DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
    QERP          xxx        erp                 1 21-Jan-13 15:40 11.2.0.2.0  NO
    Host Name        Platform                         CPUs Cores Sockets Memory(GB)
    erptst           HP-UX IA (64-bit)                  16    16       4     127.83
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     40066 22-Jan-13 20:00:52       207       9.6
      End Snap:     40067 22-Jan-13 21:00:05       210       9.6
       Elapsed:               59.21 (mins)
       DB Time:              189.24 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     8,800M     8,800M  Std Block Size:         8K
               Shared Pool Size:     1,056M     1,056M      Log Buffer:    49,344K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                3.2                0.1       0.00       0.05
           DB CPU(s):                3.1                0.1       0.00       0.05
           Redo size:          604,285.1           27,271.3
       Logical reads:          364,792.3           16,463.0
       Block changes:            3,629.5              163.8
      Physical reads:               21.5                1.0
     Physical writes:               95.3                4.3
          User calls:               68.7                3.1
              Parses:              212.9                9.6
         Hard parses:                0.3                0.0
    W/A MB processed:                1.2                0.1
              Logons:                0.3                0.0
            Executes:            1,745.2               78.8
           Rollbacks:                1.2                0.1
        Transactions:               22.2
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.99    In-memory Sort %:  100.00
                Library Hit   %:   99.95        Soft Parse %:   99.85
             Execute to Parse %:   87.80         Latch Hit %:   99.99
    Parse CPU to Parse Elapsd %:   74.76     % Non-Parse CPU:   99.89
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.37   76.85
        % SQL with executions>1:   95.31   85.98
      % Memory for SQL w/exec>1:   90.33   82.84
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                           11,144          98.1
    db file sequential read              52,714         214      4    1.9 User I/O
    SQL*Net break/reset to client        29,050           6      0     .1 Application
    log file sync                         2,536           6      2     .0 Commit
    buffer busy waits                     4,338           2      1     .0 Concurrency
    Host CPU (CPUs:   16 Cores:   16 Sockets:    4)
    ~~~~~~~~         Load Average
                   Begin       End     %User   %System      %WIO     %Idle
                    0.34      0.33      19.7       0.4       1.8      79.9

    Nikolay Savvinov wrote:
    if the users are complaining about performance of the batch process, then that's what you should be looking at, not the entire system.
    I find it strange to see "end users" and "the batch process" in the same sentence (as it was in the first post). "End users" gives me the feeling of a significant number of concurrent sessions with people waiting for results in real time at the far end, while "batch process" carries the image of a small number of large-scale processes running overnight to prepare the data for the following morning.
    I mention this because my first view of the AWR output was: you've got 16 CPUs, only three in use, virtually no users, and doing very little work, so how can the users complain? (One answer, of course, is that the other 13 CPUs could be locked out of use as far as Oracle is concerned.) On the second read I decided that the "users" had gone home, and the complaint was simply that the batch process wasn't completing in time.
    In this case I think "the entire system" IS "the batch process".
    Determine which stored procedures and/or SQL statements took longer than usual and then find out why. Most likely you'll be able to find everything you need in the AWR views (DBA_HIST_SQL%) and the ASH archive (DBA_HIST_ACTIVE_SESS_HISTORY).
    If the batch process has changed dramatically and recently, then a simple first step might be to look at the current AWR report, find the few most time-consuming SQL statements, and use the awrsqrpt.sql script to find their history of execution plans.
    But I'd also just look at the expensive SQL - bearing in mind, particularly, that there are very few user calls per second, yet many hundred executions per second: it strikes me that there could be quite a lot of PL/SQL going on doing something a little bit expensive many times or some PL/SQL function that calls some SQL that used to be called rarely from an SQL statement but is now (due, perhaps to a change in plan) being called much more frequently - so check SQL Ordered by Executions.
    Regards
    Jonathan Lewis
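    As a concrete starting point for the AWR digging described above, a query along these lines (a sketch; the snapshot ids are the ones from the report, so adjust to taste, and add dbid/instance_number predicates on a RAC or multi-database repository) lists the most expensive statements in the interval:

    -- Sketch: top 20 SQL by elapsed time for the snapshot interval 40066 -> 40067.
    -- In DBA_HIST_SQLSTAT the delta columns for snap_id N cover the interval ending at snap N.
    SELECT *
    FROM  (SELECT st.sql_id,
                  st.plan_hash_value,
                  SUM(st.executions_delta)                 AS execs,
                  ROUND(SUM(st.elapsed_time_delta)/1e6, 1) AS elapsed_s,
                  ROUND(SUM(st.cpu_time_delta)/1e6, 1)     AS cpu_s
           FROM   dba_hist_sqlstat st
           WHERE  st.snap_id = 40067
           GROUP BY st.sql_id, st.plan_hash_value
           ORDER BY SUM(st.elapsed_time_delta) DESC)
    WHERE rownum <= 20;

    For any interesting sql_id, $ORACLE_HOME/rdbms/admin/awrsqrpt.sql then shows its execution-plan history across snapshots.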

  • Performance Problem While Signing into Application

    Hello,
    Could someone please throw some light on which area I can look into for my performance problem? It is an E-Business Suite version 11.5.10.2 which was upgraded from 11.5.8.
    The problem: when the sign-in page is displayed, after the username/password is entered it takes forever for the system to actually log the user in. Sometimes I have to click twice on the Sign In button.
    I have run Purge Signon Audit, Purge Concurrent Request/Manager logs, and Gather Schema Stats, but it is still slow. Is there any way to check whether the middle tier is the bottleneck?
    Thanks,
    Nini

    Can you check whether the profile option FND%diagnostic% is enabled?
    fadi
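    If it is easier to check from SQL*Plus than from the Forms profile screen, a query along these lines should work (a sketch against the standard FND tables; the view and column names are from memory, so verify them in your release):

    -- Sketch: show diagnostics-related profile options and the levels at which they are set.
    SELECT p.user_profile_option_name,
           v.level_id,
           v.profile_option_value
    FROM   fnd_profile_options_vl    p,
           fnd_profile_option_values v
    WHERE  p.profile_option_id = v.profile_option_id
    AND    UPPER(p.user_profile_option_name) LIKE 'FND%DIAGNOSTIC%';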

  • Update rule problem - while data load

    Hi friends,
    I got the following error while doing initialisation for 2LIS_02_SGR:
    "ABORT was set in the customer routine 9998
    Error 1 in the update"
    I searched the forum for this error, and it is something related to the start routine in my update rule, but I don't know what's wrong with my routine.
    I'm giving the start routine below; please go through it and give me your suggestions.
    PROGRAM UPDATE_ROUTINE.
    *$*$ begin of global - insert your declaration only below this line  *-*
    * TABLES: ...
    TABLES /bic/AZMM_PUR100.
    DATA: T_PUR1 LIKE /bic/AZMM_PUR100 OCCURS 0 WITH HEADER LINE.
    *$*$ end of global - insert your declaration only before this line   *-*
    * The following definition is new in BW 3.x
    TYPES:
      BEGIN OF DATA_PACKAGE_STRUCTURE.
         INCLUDE STRUCTURE /BIC/CS2LIS_02_SGR.
    TYPES:
         RECNO   LIKE sy-tabix,
      END OF DATA_PACKAGE_STRUCTURE.
    DATA:
      DATA_PACKAGE TYPE STANDARD TABLE OF DATA_PACKAGE_STRUCTURE
           WITH HEADER LINE
           WITH NON-UNIQUE DEFAULT KEY INITIAL SIZE 0.
    FORM startup
      TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
               MONITOR_RECNO STRUCTURE RSMONITORS "monitoring with record number
               DATA_PACKAGE STRUCTURE DATA_PACKAGE
      USING    RECORD_ALL LIKE SY-TABIX
               SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
      CHANGING ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
    *$*$ begin of routine - insert your code only below this line        *-*
    * fill the internal tables "MONITOR" and/or "MONITOR_RECNO"
    * to make monitor entries
      CLEAR: T_PUR1[],
             T_PUR1,
             ABORT.
      SELECT * INTO TABLE T_PUR1 FROM /bic/AZMM_PUR100.
      IF SY-SUBRC EQ 0.
        SORT T_PUR1 BY DOC_DATE
                       DOC_ITEM
                       DOC_NUM.
      ELSE.
        MONITOR-msgid = sy-msgid.
        MONITOR-msgty = sy-msgty.
        MONITOR-msgno = sy-msgno.
        MONITOR-msgv1 = sy-msgv1.
        MONITOR-msgv2 = sy-msgv2.
        MONITOR-msgv3 = sy-msgv3.
        MONITOR-msgv4 = sy-msgv4.
        APPEND MONITOR.
    *   if ABORT is not equal to zero, the update process will be cancelled
        ABORT = 1.
      ENDIF.
    * BUG: this line unconditionally resets ABORT, so the cancellation above never takes effect
      ABORT = 0.
    *$*$ end of routine - insert your code only before this line         *-*
    ENDFORM.
    Thanks & Regards
    Ragu

    Thanks gimmo and a.h.p,
    I have done the correction as you said; please verify it.
    Also, could you kindly explain the purpose of this start routine and what exactly it does?
      CLEAR: T_PUR1[],
             T_PUR1,
             ABORT.
      SELECT * INTO TABLE T_PUR1 FROM /bic/AZMM_PUR100.
      IF SY-SUBRC EQ 0.
        SORT T_PUR1 BY DOC_DATE
                       DOC_ITEM
                       DOC_NUM.
        ABORT = 0.                 "added ABORT = 0 as per your suggestion
      ELSE.
        MONITOR-msgid = sy-msgid.
        MONITOR-msgty = sy-msgty.
        MONITOR-msgno = sy-msgno.
        MONITOR-msgv1 = sy-msgv1.
        MONITOR-msgv2 = sy-msgv2.
        MONITOR-msgv3 = sy-msgv3.
        MONITOR-msgv4 = sy-msgv4.
        APPEND MONITOR.
    *   if ABORT is not equal to zero, the update process will be cancelled
        ABORT = 1.
        EXIT.                      "added EXIT as per your suggestion
      ENDIF.
      ABORT = 0.
    *$*$ end of routine - insert your code only before this line         *-*
    ENDFORM.
    Thanks & Regards
    ragu

  • Performance Problem While Running a Query....

    Hi users,
    I have a performance problem with an application using a D2K (Oracle Forms) front end and an Oracle 9i back end.
    We have a validation button that takes a lot of time to check the data (around 500 records).
    I would like some links about the performance-tuning process; please help.
    Here are some of the queries taking 1 second or more:
    1)
    SELECT /*+ INDEX(A IND1_CHKSHTCARD) */
           COUNT(DISTINCT A.KBCK_CHKSHEET_NO)
    FROM   KBS_CHKSHTCARDTB A,
           KBS_CHKSHTHDRTB  B
    WHERE  A.KBCK_CHKSHEET_NO = B.KBCH_CHKSHEET_NO
    AND    KBCK_E_DATE        = TRUNC(SYSDATE)
    AND    KBCH_PRINT_STATUS  = 'P';
    -- Note: the hint in the original post had unbalanced parentheses and referenced
    -- the table name instead of the alias, so Oracle ignored it; corrected above.
    Output: 206
    Time: 1 sec
    2)
    UPDATE KBS_CARDMASTERTB
    SET    KBCM_LOCK_FROM   = KBCM_LOCK_FROM_CONTROL,
           KBCM_LOCK_STATUS = NULL
    WHERE  KBCM_LOCK_FROM_CONTROL IS NOT NULL
    AND    KBCM_LOCK_FROM IS NULL
    AND    KBCM_LOCK_FROM_CONTROL <= TRUNC(SYSDATE)
    AND    KBCM_LOCK_STATUS = 'Y'
    AND    KBCM_UNIQUE_IDNO IN
           (SELECT DISTINCT KBSA_UNIQUE_IDNO
            FROM   KBS_SCANTB
            WHERE  TRUNC(KBSA_E_DATE) = TRUNC(:KANBAN_CTRL_BLK.DAT)
            AND    KBSA_TRUCK_SQ_NO   = :KANBAN_CTRL_BLK.CYCL
            AND    KBSA_ERROR_CODE IS NULL)
    AND    (KBCM_VENDOR_NO, KBCM_PLANT_CODE) IN
           (SELECT DISTINCT KBSA_VENDOR_NO, KBSA_PLANT_CODE
            FROM   KBS_SCANTB
            WHERE  TRUNC(KBSA_E_DATE) = TRUNC(:KANBAN_CTRL_BLK.DAT)
            AND    KBSA_TRUCK_SQ_NO   = :KANBAN_CTRL_BLK.CYCL
            AND    KBSA_ERROR_CODE IS NULL)
    AND    KBCM_PROCESS_CODE IN
           (SELECT DISTINCT KBSA_PROCESS_CODE
            FROM   KBS_SCANTB
            WHERE  TRUNC(KBSA_E_DATE) = TRUNC(:KANBAN_CTRL_BLK.DAT)
            AND    KBSA_TRUCK_SQ_NO   = :KANBAN_CTRL_BLK.CYCL
            AND    KBSA_ERROR_CODE IS NULL);
    Output: total number of rows in the table: 29,288
    Time: more than 5 sec
    3)
    CURSOR GET_TEMP_CARDS_SWIPED_CUR IS
      SELECT KBCM_VENDOR_NO,
             KBCM_PLANT_CODE,
             KBCM_FAMILY,    -- added by SUJITH.C to support PSMS2
             KBCM_BACK_NO,
             KBCM_UNIQUE_IDNO,
             KBCM_KANBAN_TYPE
      FROM   KBS_CARDMASTERTB
      WHERE  KBCM_KANBAN_TYPE IN ('T', 'B')
      AND    KBCM_UNIQUE_IDNO IN
             (SELECT KBSA_UNIQUE_IDNO
              FROM   KBS_SCANTB
              WHERE  KBSA_E_DATE      = :DAT
              AND    KBSA_TRUCK_SQ_NO = :CYCL
              AND    KBSA_ERROR_CODE IS NULL
              AND    KBSA_TYPE IS NULL);
    Thanks in advance.

    [When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597#1812597]
    [How to post a SQL statement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
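    Besides following the tuning-request templates above, one structural point stands out in the posted SQL: predicates of the form TRUNC(KBSA_E_DATE) = TRUNC(:DAT) wrap the column in a function, which stops a plain index on KBSA_E_DATE from being range-scanned. Rewriting each of them as a half-open date range returns the same rows and stays index-friendly:

    -- Equivalent to TRUNC(KBSA_E_DATE) = TRUNC(:KANBAN_CTRL_BLK.DAT), but leaves the
    -- column bare so an ordinary index on KBSA_E_DATE can be used.
    WHERE KBSA_E_DATE >= TRUNC(:KANBAN_CTRL_BLK.DAT)
    AND   KBSA_E_DATE <  TRUNC(:KANBAN_CTRL_BLK.DAT) + 1

    The same rewrite applies to all three KBS_SCANTB subqueries in query 2.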

  • Performance problem in data replication program written in Java

    Dear all,
    I need your valuable ideas on improving the logic below for replicating data from DB2 to Oracle 9i. We have huge tables in DB2 to replicate to the Oracle side, and for one table this is taking a lot of time. The whole application is written in Java. The current logic: set a soft-delete flag on a specific set of records in the Oracle table, then read all records from the DB2 table and reset only those records in the Oracle table to 'N', so that deleted records end up soft-deleted on the Oracle side. The DB2 query has a 3-table join and takes nearly 1 minute. We are updating the Oracle table in batches of 100,000. For a 610,275-record update in batch mode it takes 2.25 hours, which has to be reduced to under 1 hour. The first update (setting everything to 'Y') plus the second update using the DB2 query takes 2.85 hours in total.
    Do you have any clever ideas to reduce this time? Kindly help us; we are in a critical situation now. Even a new approach to the replication logic is welcome.

    Hi,
    Just remove the joins and use FOR ALL ENTRIES instead. After the first SELECT:
    if sy-subrc = 0.
      DELETE ADJACENT DUPLICATES FROM itab COMPARING <key fields>. "improves performance
      then write the second SELECT statement.
    endif.
    Some tips:
    some tips:
    Always check the driver internal tables is not empty, while using FOR ALL ENTRIES
    Avoid for all entries in JOINS
    Try to avoid joins and use FOR ALL ENTRIES.
    Try to restrict the joins to 1 level only ie only for tables
    Avoid using Select *.
    Avoid having multiple Selects from the same table in the same object.
    Try to minimize the number of variables to save memory.
    The sequence of fields in 'where clause' must be as per primary/secondary index ( if any)
    Avoid creation of index as far as possible
    Avoid operators like <>, > , < & like % in where clause conditions
    Avoid select/select single statements in loops.
    Try to use 'binary search' in READ internal table. Ensure table is sorted before using BINARY SEARCH.
    Avoid using aggregate functions (SUM, MAX etc) in selects ( GROUP BY , HAVING,)
    Avoid using ORDER BY in selects
    Avoid Nested Selects
    Avoid Nested Loops of Internal Tables
    Try to use FIELD SYMBOLS.
    Try to avoid into Corresponding Fields of
    Avoid using Select Distinct, Use DELETE ADJACENT
    Go through the following document and check the following links:
    Re: performance tuning
    Re: Performance tuning of program
    http://www.sapgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTunin
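    Back on the original Oracle-side question (the tips above are ABAP-oriented): the two-pass approach, flagging everything and then resetting the survivors, roughly doubles the write volume. If the current DB2 key set can first be bulk-loaded into an Oracle work table (JDBC batch insert, SQL*Loader, or a gateway), the soft delete collapses into one set-based statement. A sketch with hypothetical names (stage_keys as the work table, delete_flag = 'Y' meaning soft-deleted):

    -- Sketch: one pass marks exactly the rows that have disappeared from DB2,
    -- instead of flagging all 610,275 rows and then un-flagging the survivors.
    UPDATE target_table t
    SET    t.delete_flag = 'Y'
    WHERE  t.delete_flag = 'N'
    AND    NOT EXISTS (SELECT 1
                       FROM   stage_keys s
                       WHERE  s.id = t.id);

    If rows can reappear in DB2, a mirror-image UPDATE (or a single MERGE) handles the flag in the other direction.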

  • Problem while generating Update program for a Change Document Object

    Hi,
    I'm trying to work with the Change Document concept in an R/3 4.6C environment and to create a new Change Document object for my (Z-)table. I wasn't using any namespace and created the object with the name ZTEST. Following the online documentation I came to the point where I have to generate the include program. I made all the necessary inputs (using the Z prefix) but got:
    "Function module name is reserved for SAP"
    Creating everything similarly using one of our registered namespaces (/somenamespace/), I succeeded in generating the update program and integrating it into my Z-programs as well.
    My question is: is it possible to use Change Document object names without a predefined namespace, being a regular customer developer rather than an SAP developer, i.e. if I am only allowed to manage programs in the customer namespace (Z, Y, X)? If yes, how?
    Further (I decided not to open a new thread): the generated update program uses
    CALL FUNCTION 'xxxxx' IN UPDATE TASK
    to create change numbers for the generated Change Document object. This CALL doesn't work when I implement it in my Z-program, but when changed (IN UPDATE TASK commented out) everything goes well and the system creates records in CDHDR/CDPOS for my object class and table.
    Why is that? According to the documentation I should only fill the appropriate variables for the Change Document object (class, transaction code, etc.) and call the generated FM; nothing is pointed out about any possible problems. Am I doing something wrong?
    Well, to be precise, I think I have to give a sample:
    1. I have a sample Z-table with a few fields (their data elements are marked as change-document relevant).
    2. Using my own Z-program I create a new record in this Z-table. I also fill all the necessary variables included in the interface of the generated FM for the Change Document object (for example, FM name CD_CALL_my_object).
    3. I call the CDO FM.
    4. I check what happens (directly in both tables CDHDR/CDPOS, or using FM CHANGEDOCUMENT_READ).
    Then I perform steps 1 to 4 again, updating the record created in the previous loop in my Z-table.
    In both cases, if the FM 'CD_CALL_my_object' is called IN UPDATE TASK nothing happens, but in the case of a direct call (without the addition IN UPDATE TASK) the system behaves as expected. Obviously I could change the generated code for our production needs, but that doesn't seem to be the correct decision, given any future modification of the Change Document object definition and the corresponding need to re-generate the code.
    Any hints are welcome.
    Thanks in advance.
    Ivaylo Mutafchiev
    Edited by: Ivaylo Mutafchiev on Jan 24, 2008 10:24 AM

    "IN UPDATE TASK" resolved by myself. The one should explicitly call 'COMMIT WORK' from Z-program after CALL FUNCTION '...' IN UPDATE TASK in order to get changes in the CDHDR/CDPOS commited. The key in this issue for me was to check the documentation of CALL FUNCTION :-).
    Regards,
    Ivo

  • Problem in data update with multiple windows in JSF

    Hi,
    I am facing a problem while working in JSF. As per my project scenario I have a list page which shows a list of entities. I can click on the id of an entity and open a new window containing the details of that particular entity. In that window I have the option to update the details of the entity. I can open as many windows as I wish (every window opens the same JSP page) and can update them at the same time.
    If I open 3 windows containing the details of 3 entities and update them one by one, clicking Edit and then Update, it works fine for all 3 windows. But if I click Edit in all 3 windows simultaneously and then click Update on each of them, it works fine for the first updated entity; for the next windows I get the same object back, because the object in the request is not updated with the new data for that particular entity.
    Can anyone please suggest a solution?
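    This is the classic lost-update problem: all the windows share one session-scoped backing object, so the last read wins. Two usual remedies are to move the entity (or at least its id) into request/view scope so each window carries its own state, and to guard the UPDATE itself with optimistic locking so that a stale window fails visibly instead of silently overwriting. A database-side sketch of the latter, with hypothetical table and column names:

    -- Sketch: optimistic locking via a version column (all names hypothetical).
    -- Each window keeps the version it originally read; a stale window matches 0 rows.
    UPDATE entity
    SET    details = :new_details,
           version = version + 1
    WHERE  id      = :id
    AND    version = :version_seen_by_this_window;
    -- If executeUpdate() returns 0, the row changed underneath this window:
    -- reload the entity and ask the user to re-apply the edit.

    If the persistence layer is JPA/Hibernate, a version mapping gives you this same check without hand-written SQL.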

    Hi Deepak,
    Good to hear that your first problem is solved. As for the problem you are facing now: whenever some inbound request is stuck in either waiting or partial state, any sync that you do from the client is useless, because the middleware doesn't accept any further inbound requests until the previous requests stuck in partial or waiting state move to finished, ignored or error state.
    This can happen for many reasons. One reason could be that the particular SyncBO is not active. If it is active and the requests are still stuck in partial state, then the problem could be with the middleware system: under heavy load it will take some time to process the request, and if the system goes down in the middle of processing it does not restart processing.
    So I would like you to check whether all MAM SyncBOs are active (status should be green). If you are sure the middleware system is also working fine, then move that request to the ignored state, do a sync from the client, and check whether the middleware processes the new requests.
    If the new requests still get stuck in partial state, the only solution may be to reinstall the application on the middleware.
    Regards,
    Siva.

  • Problem when updating data through a JSP

    Hello,
    I'm using Hibernate 3 with a PostgreSQL database and Proxool.
    I have an edit-list JSP which has a form and a list of data. When I update the data and reload the page, the data in the list is updated; but when I refresh a few more times, it starts to show the old data again.
    I have checked the data in the database: the rows are changed, and the change stays the same.
    It's not a session problem; I think it's something to do with Proxool getting data from the database.
    Can anyone give me some tips to localize this issue, please?

    Hi Sanjaya,
    This was a bug that was introduced in 6.22 or 6.23 (can't remember which).
    It should be fixed if you upgrade to 6.24

  • Shared pool size problem while data capture

    Sir,
    I am using Oracle 10g Streams for data replication, but I am facing some problems after replicating 2-3 tables: while adding a rule of type capture I get errors ORA-04031 and ORA-01280.
    I then set the following:
    database mode             = archivelog
    shared_pool_size          = 52M
    shared_pool_reserved_size = 5M (10% of the shared pool area)
    processes                 = 500
    parallel_max_servers      = 35
    parallel_min_servers      = 1
    job_queue_processes       = 1
    aq_tm_processes           = 1
    global_names              = true

    This can be done in Oracle 9i, at least 9.2, if you are using an SPFILE.
    Brgds
    JE
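    One 10g-specific note to add: Streams capture buffers live in the streams pool, and when streams_pool_size is left at zero they are carved out of the shared pool instead, which is a common way to run into ORA-04031 exactly as described. Before growing shared_pool_size further, it is usually worth giving Streams its own pool (a sketch; the size is illustrative, and as the reply above says, an SPFILE is needed for SCOPE=BOTH):

    -- Sketch: give Streams its own buffer pool instead of carving it from the shared pool.
    ALTER SYSTEM SET streams_pool_size = 100M SCOPE = BOTH;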

  • IKM Problem While Data Loading

    Hi everyone,
    I am new to ODI, and I am trying to change some things in an existing ODI IKM.
    I have an IKM with several steps. In the step that inserts into the I$ table I am encountering a problem.
    First, you need to know that I have two different schemas: TEMP_SCHEMA for creating temporary tables and TARGET_SCHEMA for target tables. Our client wants to have all temporary tables in TEMP_SCHEMA.
    What is happening here is that when I run the following code, odiRef.getTable("L","INT_NAME","A") picks the correct schema, i.e. TEMP_SCHEMA, while odiRef.getFrom(i) picks the target schema, even though I have specified TEMP_SCHEMA on the steps in the IKM.
    Code:
    insert into <%=odiRef.getTable("L","INT_NAME","A")%>
    (
        <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "(((INS or UPD) and !TRG) and REW)")%>,
        IND_UPDATE
    )
    <%for (int i=0; i < odiRef.getDataSetCount(); i++){%>
    <%=odiRef.getDataSet(i, "Operator")%>
    select  <%=odiRef.getUserExit("OPTIMIZER_HINT")%>
        <%=odiRef.getPop("DISTINCT_ROWS")%>
        <%=odiRef.getColList(i, "", "[EXPRESSION]", ",\n\t", "", "(((INS or UPD) and !TRG) and REW)")%>,
    <% if (odiRef.getDataSet(i, "HAS_JRN").equals("1")) { %>
        JRN_FLAG IND_UPDATE
    <%} else {%>
        'I' IND_UPDATE
    <%}%>
    from    <%=odiRef.getFrom(i)%>
    where   (1=1)
    <%=odiRef.getJoin(i)%>
    <%=odiRef.getFilter(i)%>
    <%=odiRef.getJrnFilter(i)%>
    <%=odiRef.getGrpBy(i)%>
    <%=odiRef.getHaving(i)%>
    <%}%>
    I really need odiRef.getFrom(i) to resolve to TEMP_SCHEMA in this step.
    Any help would be appreciated.
    Edited by: MaheshGx on Dec 7, 2011 1:12 AM
    Edited by: MaheshGx on Dec 7, 2011 1:14 AM

    Hi,
    Go to the target physical schema and set the work schema to OMPODI, then use the default LKM. Alternatively, go into the IKM and change steps 200, 210 and 220 to your target schema, i.e. OMP (not recommended). Since you already declare a separate work schema in your physical schema, there is no need to change the IKM; use the default one.
    Thanks
