BIREQU_* job consuming more time in R/3 Source system

Hi Experts,
I am facing performance issues while extracting data from SAP R/3 after an Oracle upgrade.
The R/3 job BIREQU_* takes a long time in the data selection,
particularly in this step:
02.08.2008 15:32:38 *************************************************************************
02.08.2008 16:37:04 533 LUWs confirmed and 533 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
02.08.2008 15:32:38 Job started                                                                               
02.08.2008 15:32:38 Step 001 started (program SBIE0001, variant &0000000013512, user ID BW_BG)              
02.08.2008 15:32:38 Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)                
02.08.2008 15:32:38 DATASOURCE = 0UC_BILLORD                                                                
02.08.2008 15:32:38 *************************************************************************               
02.08.2008 15:32:38 *          Current Values for Selected Profile Parameters               *               
02.08.2008 15:32:38 *************************************************************************               
02.08.2008 15:32:38 * abap/heap_area_nondia......... 2000683008                              *              
02.08.2008 15:32:38 * abap/heap_area_total.......... 2000683008                              *              
02.08.2008 15:32:38 * abap/heaplimit................ 40894464                                *              
02.08.2008 15:32:38 * zcsa/installed_languages...... DEN                                     *              
02.08.2008 15:32:38 * zcsa/system_language.......... N                                       *              
02.08.2008 15:32:38 * ztta/max_memreq_MB............ 2047                                    *              
02.08.2008 15:32:38 * ztta/roll_area................ 6500352                                 *              
02.08.2008 15:32:38 * ztta/roll_extension........... 2000683008                              *              
02.08.2008 15:32:38 *************************************************************************               
02.08.2008 16:37:04 533 LUWs confirmed and 533 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
02.08.2008 16:38:18 Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 6.597 records               
02.08.2008 16:38:18 Result of customer enhancement: 6.597 records                                           
02.08.2008 16:38:18 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 6.597 records                   
02.08.2008 16:38:18 Result of customer enhancement: 6.597 records                                           
02.08.2008 16:38:18 Asynchronous send of data package 1 in task 0002 (1 parallel tasks)                     
02.08.2008 16:38:18 IDOC: Info IDoc 2, IDoc No. 3256989, Duration 00:00:00                                  
02.08.2008 16:38:18 IDoc: Start = 02.08.2008 15:32:38, End = 02.08.2008 15:32:38                            
02.08.2008 16:38:19 Altogether, 0 records were filtered out through selection conditions                    
02.08.2008 16:38:19 Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)                
02.08.2008 16:38:19 IDOC: Info IDoc 3, IDoc No. 3256996, Duration 00:00:00                                  
02.08.2008 16:38:19 IDoc: Start = 02.08.2008 16:38:19, End = 02.08.2008 16:38:19                            
02.08.2008 16:38:28 tRFC: Data Package = 1, TID = 0AB50A6764EE4894715A019E, Duration = 00:00:10, ARFCSTATE =
02.08.2008 16:38:28 tRFC: Start = 02.08.2008 16:38:18, End = 02.08.2008 16:38:28                            
02.08.2008 16:38:28 Synchronized transmission of info IDoc 4 (0 parallel tasks)  
02.08.2008 16:38:29 IDOC: Info IDoc 4, IDoc No. 3256997, Duration 00:00:01       
02.08.2008 16:38:29 IDoc: Start = 02.08.2008 16:38:28, End = 02.08.2008 16:38:29 
02.08.2008 16:38:29 Job finished                                                 
I am facing this problem only while extracting data with delta uploads;
full uploads work fine.
What might be the problem?
Please advise.

Hi all,
Has anyone found a solution to the original problem, i.e. the step taking too long to finish:
n LUWs confirmed and n LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
Regards,
Sanjyot
Edited by: Surekha Shembekar/ Sanjyot Mishra on Jul 14, 2009 9:07 AM

Similar Messages

  • Background job is running for a long time in source system (ECC)

    Hi All,
    A background job is running for a long time in the source system (ECC) while extracting data to the PSA.
    In the ECC system I checked SM66 and SM50 and the job is still running; in SM37 the job is Active.
    There are at most about 7000 records and the extractor is 2LIS_02_ITM, yet it takes 11 to 13 hours daily to load to the PSA.
    I have checked the enhancements and everything is correct.
    Please help me with how to solve this issue.
    Regards
    Supraja K

    Hi sudhi,
    The difference between "Call customer enhancement ..." and "Result of customer enhancement ..." is very small; we can call it a second.
    The big difference is between "LUWs confirmed ..." and "Call customer enhancement ...".
    Please find the job log details below and help me resolve this:
    01:06:43 * ztta/roll_extension........... 2000000000                              *                 R8           050
    01:06:43 1 LUWs confirmed and 1 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA     RSQU          036
    06:56:31 Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 5.208 records                  R3           407
    06:56:31 Result of customer enhancement: 5.208 records                                              R3           408
    06:56:31 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 5.208 records                      R3           407
    06:56:31 Result of customer enhancement: 5.208 records                                              R3           408
    06:56:31 PSA=1 USING SMQS SCHEDULER / IF [tRFC=ON] STARTING qRFC ELSE STARTING SAPI                 R3           299
    06:56:31 Synchronous send of data package 1 (0 parallel tasks)                                      R3           410
    06:56:32 tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =                           R3           038
    06:56:32 tRFC: Start = 00.00.0000 00:00:00, End = 00.00.0000 00:00:00                               R3           039
    06:56:32 Synchronized transmission of info IDoc 3 (0 parallel tasks)                                R3           414
    06:56:32 IDOC: Info IDoc 3, IDoc No. 1549822, Duration 00:00:00                                     R3           088
    06:56:32 IDoc: Start = 04.10.2011 06:56:32, End = 04.10.2011 06:56:32                               R3           089
    06:56:32 Altogether, 0 records were filtered out through selection conditions                      RSQU          037
    06:56:32 Synchronized transmission of info IDoc 4 (0 parallel tasks)                                R3           414
    06:56:32 IDOC: Info IDoc 4, IDoc No. 1549823, Duration 00:00:00                                     R3           088
    06:56:32 IDoc: Start = 04.10.2011 06:56:32, End = 04.10.2011 06:56:32                               R3           089
    06:56:32 Job finished                                                                               00           517
    Regards
    Supraja

  • Issue with background job taking more time

    Hi,
    We have a custom program which runs as a background job every 2 hours.
    It is taking more time than expected on ECC6 SR2 & SR3 on Oracle 10.2.0.4. We found that it takes the extra time while executing native SQL on DBA_EXTENTS; when we fetch a smaller number of records from DBA_EXTENTS it works fine, but we need the program to fetch all the records.
    It works fine on ECC5 on 10.2.0.2 & 10.2.0.4.
    Here is the SQL statement:
    EXEC SQL PERFORMING SAP_GET_EXT_PERF.
      SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
             SEGMENT_TYPE, TABLESPACE_NAME,
             EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
        INTO :EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
             :EXTENTS_TBL-PARTITION_NAME,
             :EXTENTS_TBL-SEGMENT_TYPE, :EXTENTS_TBL-TABLESPACE_NAME,
             :EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
             :EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
        FROM SYS.DBA_EXTENTS
       WHERE OWNER LIKE 'SAP%'
    ENDEXEC.
    Can somebody suggest what has to be done?
    Has something changed in SAP 7 (with respect to background jobs etc.), or do we need to fine-tune the SQL statement?
    Regards,
    Vivdha

    Hi,
    there was an issue with LMTs, but that was fixed in 10.2.0.4, besides missing system statistics.
    But WHY do you collect this information every 2 hours? The DBA_EXTENTS view is based on really heavily used system tables.
    Normally, you would only run queries of this type against dba_extents for tasks such as identifying corrupt blocks:
    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = &AFN
       AND &BLOCKNO BETWEEN block_id AND block_id + blocks - 1
    Not sure what you want to achieve with it.
    There are monitoring tools (OEM?) around that may cover your needs.
    Bye
    yk
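    If the goal is just periodic space monitoring, a much cheaper query is to read one row per segment from DBA_SEGMENTS instead of one row per extent from DBA_EXTENTS. This is a sketch, not from the original thread; adjust the owner filter to your schema naming:

    ```sql
    -- One row per segment instead of one per extent: far less load
    -- on the underlying dictionary tables than a full DBA_EXTENTS scan.
    SELECT owner,
           segment_name,
           segment_type,
           tablespace_name,
           bytes,     -- total allocated size of the segment
           extents    -- extent count, if the count itself is what is needed
      FROM dba_segments
     WHERE owner LIKE 'SAP%';
    ```

    Only fall back to DBA_EXTENTS when you actually need per-extent detail (file_id/block_id), e.g. for the corrupt-block lookup shown above.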

  • MRP job takes more time (Duration / Sec)

    Hello PP Gurus,
    I have the following situation in a production environment:
    The MRP job for plant A takes more time (duration 14.650) than the MRP job for plant B (duration 4.512).
    When I compare the variants/attributes of plants A and B, the only difference I observe is the scheduling attribute: plant A uses 2 (Lead Time Scheduling and Capacity Planning), while plant B uses 1 (Determination of Basic Dates for Planned Orders).
    So my observation is that this scheduling setting plays a major role in the MRP job taking more time for plant A.
    I am in the process of changing the variant attribute for plant A from scheduling 2 to scheduling 1.
    I would like to know from the experts whether this change could cause any impact or problem for plant A in the future.
    Please let me know all the hidden impacts of changing the scheduling in the variant attribute.
    I look forward to your valuable input on reducing the runtime of my MRP job.
    Regards,
    Kumar S

    Hi Kumar,
    You do not need to change the in-house production time; you just need to update the lot-size-dependent in-house production time in the Work Scheduling view of the material master. You can do that by scheduling the routing/recipe:
    transactions CA97 or CA97N can be used to update the in-house production time with the information from the routing.
    If the business does not want capacity planning for planned orders, then change the scheduling from 2 to 1 (basic date scheduling).
    Expert Caetano has already answered your query:
    The reports listed below can be used to compare past MRP executions regarding runtime:
    RMMDMONI: This report compares the runtime of MRP executions and also provides the total of planning elements (planned orders, purchase requisitions, etc.) changed, created or deleted. It also shows which planning parameters were used and how much time MRP spent on each step (database read, BOM explosion, MRP calculation, scheduling, BAdIs, etc.). With this information it is possible to observe the relation between runtime and the number of elements changed/created/deleted, and also to see on which step MRP is taking more time.
    RMMDPERF: This report shows the "material hit list", that is, which materials had the highest runtime during the MRP execution and on which step MRP is taking more time. Knowing which materials have the highest runtime allows you to isolate the problem and reproduce it in MD03, where it is possible to run an ABAP or SQL trace for a more detailed analysis.
    Regards,
    R.Brahmankar

  • Job-scheduling on a specific instance on source system

    Hello All,
    Is there any possibility to schedule an InfoPackage in BW and tell it that the corresponding job on the source system (e.g. an R/3 system) should run on a specific instance?
    Thanks,
    Pavan

    Hi,
    I guess the job is the extraction job in R/3 that relates to the BI load.
    If this job always runs under a particular RFC user, then depending on that user you can restrict the instance on which it is supposed to run.
    Take help from BASIS; they can fix the issue.
    rgds,

  • MRP Job taking more time

    Dear Folks,
    We run MRP at MRP area level with 72 plants in total. Normally this job takes 3 to 4 hours to complete, but for the last two weeks it has suddenly been taking more than 9 hours. Because of this delay the business cannot send the schedules on time, and it has become a critical issue.
    Does anybody have an idea how to check the root cause of this delay, and how to reduce the runtime? We already run this job with parallel processing.
    Reasonable answers will get full points.
    Regards
    TAJUDDIN

    Hi TAJUDDIN
    Unfortunately, I do not have any documents on parallel processing, but I can explain how to do it, so I hope the following explanation helps you.
    1. First, check whether the current parallel MRP works well. To do this, check the MRP result in the spool:
       the last page of the spool shows the task usage of each work process.
       To open the last page, go to SM37, find the MRP job and press the spool button; you will see the spool overview.
       Before opening the spool, change the page setting (from the menu: Goto => Display Requests => Settings) and select the last 10 pages, so that you can see the last 10 pages.
    2. At the bottom of the spool pages you will see the task usage of MRP.
       For example, if you use 2 application servers and assign 3 work processes to each, you will see 6 tasks:
                              Number of calculated items   Runtime
       AP1 WP1        1000                                  30 min
       AP1 WP2        500                                     5  min
       AP1 WP3        200                                     3  min
       AP2 WP1        200                                     3  min
       AP2 WP2        200                                     3  min
       AP2 WP3        100                                     2  min
      If you observe the situation above in the log, it indicates unbalanced system use
      (this situation occurs depending on your BOM structure), so there is a possibility
      that dispatching the work more equally will improve MRP performance.
      To get an equal dispatch, you need to deactivate the MRP package logic for the bottleneck
      low-level code. (You can see the bottleneck items in the spool; if you observe
      10 items belonging to low-level code 5, it is better to deactivate the package logic
      for low-level code 5. Then no package logic runs for this low-level code, which
      brings an equal distribution of task usage.)
    The way to deactivate it is described in SAP Notes 568593 and 52033
       (up to 4.6C you need a modification <manual change of coding>; from
        Enterprise onwards you can use a BAdI).
    Regarding the package logic, I recommend reading SAP Note 52033.
    (Depending on the runtime of the former task, MRP combines several items into one package.
    So if task 1 finished its previous MRP step in around 60 seconds, the next package for this
      task will contain more materials. If the bottleneck items are put together in one package,
      task 1 may take much longer compared to the others. MRP calculation proceeds low-level
      code by low-level code, so if task 1 has not finished its calculation while the other
      tasks already have, the other tasks cannot start MRP for the next low-level code and
      have to wait until task 1 finishes. Because of this, you will see big differences in
      task usage and runtime in the spool.)
    But this behaviour depends on the BOM structure, so you may not see it in your spool;
    in that case there is no need to consider balancing.
    I hope this helps you.
    best regards
    Keiji

  • Report Developed in Webi Rich Client Consuming more time in Data Retrieval

    Dear All,
    I am a BO consultant; in my current project I developed a report in Webi Rich Client. During development and on the following days the report worked fine (data retrieval took less than 1 minute), but after some days it started taking much longer, increasing day by day, and now it takes more than 11 minutes.
    Can anybody point out what the reason could be?
    We are using,
    1. SAP BI 7.0
    2. SAP BO XI 3.1 Edge
    3. Webi Rich Client version 12.3.0, build 601
    This report is built on a MultiProvider (Sales).
    What are the important points to consider to improve the performance of WebI reports?
    Waiting for a suitable solution.
    Regards,
    Arun Krishnan.G
    SAP BO Consultant
    Edited by: ArunKG on Oct 11, 2011 3:50 PM

    Hi,
    Please come back here with a copy/paste of the 2 MDX statements from the MDA.log to compare the good/bad runtimes,
    and the 2 equivalent DPCOMMANDS clauses (good and bad) from the WebI trace logs.
    Can you explain what you really mean in the bold text above? Actually I didn't get you.
    Pardon me, I have only 3 months of experience in BO.
    Regards,
    Arun
    Edited by: ArunKG on Oct 11, 2011 4:28 PM

  • BW Job Taking more time than normal execution time

    Hi,
    Customer is trying to extract data from R/3 with the BW OCPA extractor.
    The selections within the InfoPackage under the Data Selection tab are:
    0FISCPER = 010.2000
    ZVKORG = DE
    The InfoPackage is then scheduled. When the monitor is checked for the scheduled date and time, gathering the information from the R/3 system takes approximately 2 hours, where it normally took minutes.
    This pulls data from R/3 and updates the PSA; the concern is the time taken to pull the data from R/3 to BW.
    If any input is required please let me know; the earliest solution is appreciated.
    Thanks
    Vijay

    Hi Vijay,
    If you think the data transfer is the problem (i.e. the extractor runs for a long time), try to locate the job on the R/3 side using SM37 (user ALEREMOTE), or look in SM50 to see if the extraction is still running.
    You can also test the extraction in R/3 using transaction RSA3 with the same selection criteria.
    If this runs fast (as expected), the problem must be on the BW side. Another thing you can do is check whether a short dump occurred in either R/3 or BW (transaction ST22); a short dump will often keep the traffic light on yellow.
    Hope this helps to solve your problem.
    Grtx,
    Marco

  • Background job taking more time.

    Hi All,
    I have a background job for the standard program RBDAPP01 which is scheduled to run every 3 minutes.
    The problem is that it takes a very long time at one particular time every day:
    normally it takes 1 or 2 seconds,
    but at 11:14 it takes approximately 1500 seconds to execute.
    Can anybody help me understand what the reason for this may be?
    Regards,
    VIkas Maurya

    Has it been successfully executed? If not, there may be an open loop in the background program. An open loop is sometimes put into a background program for debugging purposes; if it is not removed, the program gets stuck.
    If it has been executed successfully, then you have to check the performance; you can use ST05 or SE30 for that.

  • Connect by level with regular expression is consuming more time

    Oracle 11g R2,
    Dear EXPERTS/GURUS,
    I have a table with 4 columns, say:
    ID NUMBER, OBJECT_NAME VARCHAR2, OBJECT_MANUFACTURER VARCHAR2, REGIONS VARCHAR2. The REGIONS column holds information like EMEA,AMERICA,CCC, etc.
    The problem is that this column holds redundant copies of the same data, like EMEA,AMERICA,CCC,EMEA,AMERICA,CCC,EMEA,AMERICA,CCC,EMEA,AMERICA,CCC.
    All I want to do is remove that redundancy and keep one copy, like EMEA,AMERICA,CCC.
    If I run a query like
    select distinct regexp_substr(REGIONS,'[[:alpha:]]+',1,level), ID, OBJECT_NAME, OBJECT_MANUFACTURER from table_name connect by level <= regexp_count(REGIONS,'[[:alpha:]]+');
    then I get the data as expected with distinct REGION information, but the catch is that this REGIONS column holds up to 300 copies of the same data and the table has 10000 records, so the query does not complete at all; even when I tried to limit it to 1000 rows (where rownum < 1001), it was still running after more than 30 minutes.
    I need a query that does the same as above, but with an alternative, faster approach.

    902629 wrote:
    I need some query, which does the same as above, but with an alternative, faster approach.
    Sounds like a great time to revisit the data model and fix the design.
    With a sub-optimal design, there's only so much performance you can coax out of anything, at some point it becomes necessary to end the madness and address the source of the problem. Perhaps you've hit that point in time?
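    That said, the usual way to keep a CONNECT BY LEVEL split from exploding across all 10000 rows is to correlate the hierarchy to the current row. This is only a sketch, assuming the table/column names from the post and that ID is unique:

    ```sql
    -- PRIOR t.id = t.id restricts the row generator to the current row,
    -- so each row is split only against itself instead of the whole table.
    -- PRIOR SYS_GUID() is non-deterministic, which stops Oracle from
    -- raising a CONNECT BY loop error on the self-referencing condition.
    SELECT DISTINCT
           t.id,
           t.object_name,
           t.object_manufacturer,
           REGEXP_SUBSTR(t.regions, '[[:alpha:]]+', 1, LEVEL) AS region
      FROM table_name t
    CONNECT BY LEVEL <= REGEXP_COUNT(t.regions, '[[:alpha:]]+')
           AND PRIOR t.id = t.id
           AND PRIOR SYS_GUID() IS NOT NULL;
    ```

    This avoids the combinatorial cross-join behind the 30-minute runtime, though the design fix (a child table with one region per row) is still the better long-term answer.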

  • Query consuming more time

    Hi,
    DB: Oracle 9.2.0
    OS: AIX
    We are running the report below, and it takes around 6 hours.
    I want to know exactly where the issue is; otherwise, please point me to a good document on query tuning.
    SELECT FLT.MEANING "Bank Name",
           PPV.EMPLOYEE_NUMBER "Emp Number",
           PPV.FULL_NAME "Emp Name",
           PEA.SEGMENT4 "Bank Account Number",
           PPP.VALUE "Net Pay",
           NVL(COST.segment1, '00000') "BU Code",
           (SELECT vl.description
              FROM fnd_flex_values_vl vl
             WHERE vl.flex_value_set_id = '1002603'
               AND vl.flex_value = NVL(COST.segment1, '00000')) "BU Name"
      FROM PAY_ASSIGNMENT_ACTIONS PAC,
           PAY_PAYROLL_ACTIONS PPA,
           PAY_PRE_PAYMENTS PPP,
           PER_ASSIGNMENTS_V PAV,
           PER_PEOPLE_V PPV,
           PAY_PERSONAL_PAYMENT_METHODS_V PPPM,
           PAY_EXTERNAL_ACCOUNTS PEA,
           FND_LOOKUP_VALUES FLT,
           APPS.PER_TIME_PERIODS PTP,
           hr_all_organization_units org_us,
           pay_cost_allocation_keyflex COST
     WHERE PAC.PAYROLL_ACTION_ID = PPA.PAYROLL_ACTION_ID
       AND PAC.PRE_PAYMENT_ID = PPP.PRE_PAYMENT_ID
       AND PAC.ASSIGNMENT_ID = PAV.ASSIGNMENT_ID
       AND PAV.PERSON_ID = PPV.PERSON_ID
       AND PPA.PAYROLL_ID = PPPM.PAYROLL_ID
       AND PPA.PAYROLL_ID = :P_PAYROLL_ID
       AND PPA.CONSOLIDATION_SET_ID = :P_CONSOLIDATION_SET_ID
       AND PAV.ASSIGNMENT_ID = PPPM.ASSIGNMENT_ID
       AND PPP.PERSONAL_PAYMENT_METHOD_ID = PPPM.PERSONAL_PAYMENT_METHOD_ID
       AND PPPM.EXTERNAL_ACCOUNT_ID = PEA.EXTERNAL_ACCOUNT_ID
       AND PEA.SEGMENT1 = FLT.LOOKUP_CODE
       AND FLT.LOOKUP_TYPE = 'SA_BANKS'
       AND FLT.LANGUAGE = USERENV('LANG')
       AND FLT.LOOKUP_CODE = NVL(:P_BANK_NAME, FLT.LOOKUP_CODE)
       AND pay_assignment_actions_pkg.get_payment_status_code(PPP.assignment_action_id, PPP.pre_payment_id) = 'P'
       AND PTP.END_DATE = :P_PERIOD_END
       AND PTP.PAYROLL_ID = PPA.PAYROLL_ID
       AND PAV.organization_id = org_us.organization_id
       AND org_us.cost_allocation_keyflex_id = COST.cost_allocation_keyflex_id (+)
       AND ppa.effective_date BETWEEN ptp.start_date AND ptp.end_date
       AND ppa.effective_date BETWEEN ppv.effective_start_date AND ppv.effective_end_date
       AND ppa.effective_date BETWEEN pav.effective_start_date AND pav.effective_end_date
       AND DECODE(hr_security.view_all, 'Y', 'TRUE',
                  hr_security.show_record('PER_ALL_ASSIGNMENTS_F',
                      PAV.assignment_id, PAV.person_id, PAV.assignment_type)) = 'TRUE'
       AND DECODE(hr_general.get_xbg_profile, 'Y', PAV.business_group_id,
                  hr_general.get_business_group_id) = PAV.business_group_id
       AND DECODE(hr_security.view_all, 'Y', 'TRUE',
                  hr_security.show_person(PPV.person_id,
                      PPV.current_applicant_flag, PPV.current_employee_flag,
                      PPV.current_npw_flag, PPV.employee_number,
                      PPV.applicant_number, PPV.npw_number)) = 'TRUE'
       AND DECODE(hr_general.get_xbg_profile, 'Y', PPV.business_group_id,
                  hr_general.get_business_group_id) = PPV.business_group_id
     ORDER BY 1
    regards,

    For purely SQL tuning questions you can try the PL/SQL forum. Before posting there, please read and follow "HOW TO: Post a SQL statement tuning request - template posting".
    Sandeep Gandhi

  • Collection function taking more time to execute

    Hi all,
    I am using a collection function in my SQL report and it is taking plenty of time to return rows. Is there any way to get the resulting rows (using the collection) without consuming so much time?
    SELECT (SELECT tab_to_string(CAST(COLLECT(wot_vw."Name") AS t_varchar2_tab))
              FROM REPORT_VW wot_vw
             WHERE wot_vw."Task ID" = wot."task_id"
             GROUP BY wot_vw."Task ID") AS "WO"
      FROM TASK_TBL wot
           INNER JOIN (SELECT "name", MAX("task_version") AS MaxVersion
                         FROM TASK_TBL
                        GROUP BY "name") q
              ON wot."name" = q."name" AND wot."task_version" = q.MaxVersion
     ORDER BY NLSSORT(wot."name", 'NLS_SORT=generic_m')
    Here the ORDER BY is causing the problem.
    Apex version is 4.0
    Thanks.
    Edited by: apex on Feb 21, 2012 7:24 PM

    'My car doesn't start, please help me to start my car'
    Do you think we are clairvoyant?
    Or is your salary subtracted for every letter you type here?
    Please be aware this is not a chatroom, and we can not see your webcam.
    Sybrand Bakker
    Senior Oracle DBA
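    Setting the tone of that reply aside: if the ORDER BY NLSSORT(...) is indeed the slow part, one standard remedy is a function-based index that matches the linguistic sort, so Oracle can return rows already ordered instead of sorting the whole result set. This is only a sketch; the index name is hypothetical and whether the optimizer uses it depends on the full query and session NLS settings:

    ```sql
    -- Hypothetical function-based index matching the ORDER BY expression.
    -- An ORDER BY NLSSORT("name", 'NLS_SORT=generic_m') can then be
    -- satisfied by an index walk rather than a full sort.
    CREATE INDEX task_tbl_name_gm_ix
        ON TASK_TBL (NLSSORT("name", 'NLS_SORT=generic_m'));
    ```

    Checking the execution plan before and after (e.g. with EXPLAIN PLAN) will show whether the SORT ORDER BY step actually disappears.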

  • Why does Garbage Collection take more time on JRockit?

    My company uses:
    BEA WebLogic 8.1.2
    JRockit version 1.4.2
    Windows 2003 32-bit
    RAM 4 GB
    -Xms = 1300
    -Xmx = 1300
    running an EJB application.
    My problem is that JRockit garbage collection takes a long time. How can I solve this problem? Because of it my application will go down again.
    This is my information from JRockit:
    GC Algorithm: JRockit Garbage Collection System currently running strategy: Single generational, parallel mark, parallel sweep.
    Total Garbage Collection Count: 10340
    Last GC Start: Wed May 10 13:55:35 ICT 2006
    Last GC End: Wed May 10 13:55:37 ICT 2006
    Total Garbage Collection Time: 2:53:13.1
    GC Handles Compaction: true
    Concurrent: false
    Generational: false
    Incremental: false
    Parallel: true


  • Delta loading takes more time

    Hi
    The daily delta load from 2LIS_03_BF to cube 0IC_C03 normally takes at most 2 hours, but today it has already been running for 5 hours and has not finished yet, with 0 of 0 records.
    Why? How can I find the reason?
    In RSA7 it is showing 302 records.
    Regards
    Ogeti
    Edited by: Ogeti on May 8, 2008 7:47 AM

    Hi,
    I will suggest you to check a few places where you can see the status
    1) SM37 job log (In source system if load is from R/3 or in BW if its a datamart load) (give request name) and it should give you the details about the request. If its active make sure that the job log is getting updated at frequent intervals.
    Also see if there is any 'sysfail' for any datapacket in SM37.
    2) SM66 get the job details (server name PID etc from SM37) and see in SM66 if the job is running or not. (In source system if load is from R/3 or in BW if its a datamart load). See if its accessing/updating some tables or is not doing anything at all.
    3) RSMO see what is available in details tab. It may be in update rules.
    4) ST22 check if any short dump has occurred (in the source system if the load is from R/3, or in BW if it is a datamart load).
    5) SM58 and BD87 for pending tRFCs and IDOCS.
    Once you identify you can rectify the error.
    If all the records are in PSA you can pull it from the PSA to target. Else you may have to pull it again from source infoprovider.
    If its running and if you are able to see it active in SM66 you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
    If you feel its active and running you can verify by checking if the number of records has increased in the data tables.
    SM21 - System log can also be helpful.
    Also, RSA7 will show the LUWs, which can mean more than one record.
    Thanks,
    JituK

  • Data loading from source system takes long time.

    Hi,
    I am loading data from R/3 to BW and I am getting the following message in the monitor:
    Request still running
    Diagnosis
    No errors could be found. The current process has probably not finished yet.
    System response
    The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
    and/or
    the maximum wait time for this request has not yet run out
    and/or
    the batch job in the source system has not yet ended.
    Current status
    in the source system
    Is there anything wrong with the partner profile maintenance in the source system?
    Cheers
    Senthil

    Hi,
    I will suggest you to check a few places where you can see the status
    1) SM37 job log (In source system if load is from R/3 or in BW if its a datamart load) (give request name) and it should give you the details about the request. If its active make sure that the job log is getting updated at frequent intervals.
    Also see if there is any 'sysfail' for any datapacket in SM37.
    2) SM66 get the job details (server name PID etc from SM37) and see in SM66 if the job is running or not. (In source system if load is from R/3 or in BW if its a datamart load). See if its accessing/updating some tables or is not doing anything at all.
    3) RSMO see what is available in details tab. It may be in update rules.
    4) ST22 check if any short dump has occurred (in the source system if the load is from R/3, or in BW if it is a datamart load).
    5) SM58 and BD87 for pending tRFCs and IDOCS.
    Once you identify you can rectify the error.
    If all the records are in PSA you can pull it from the PSA to target. Else you may have to pull it again from source infoprovider.
    If its running and if you are able to see it active in SM66 you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
    If you feel its active and running you can verify by checking if the number of records has increased in the data tables.
    SM21 - System log can also be helpful.
    Thanks,
    JituK
