Query takes much time while summing amounts on a yearly basis

I have made a query based on joins to get payroll data. It works fine, but when we accumulate the amounts on a yearly basis by giving from-date and to-date parameters, it takes much time. How can we optimise this?
Please advise.

This is the query:
SELECT paa.assignment_id,MAX(EFFECTIVE_DATE) effective_date,
paypa.business_group_id,paypa.payroll_id,
nvl(SUM(decode(pet.element_type_id,10,decode(pivf.name,'Pay Value',TO_NUMBER(NVL(prrv.result_value, 0))))),0) AMOUNT
FROM pay_assignment_actions paa,
pay_payroll_actions paypa,
pay_run_results prr,
pay_element_types_f pet,
pay_element_classifications pec,
pay_run_result_values prrv,
pay_input_values_f pivf
where paypa.payroll_ACTION_id = paa.payroll_ACTION_id
AND prr.assignment_ACTION_id = paa.assignment_ACTION_id
AND paypa.action_status = 'C'
AND paa.action_status = 'C'
and paypa.action_type in ('Q','R')
AND pet.element_type_id = prr.element_type_id
AND pec.classification_id = pet.classification_id
AND pivf.input_value_id=prrv.input_value_id
AND prr.run_result_id = prrv.run_result_id
AND pivf.element_type_id = pet.element_type_id
AND paypa.effective_date BETWEEN pivf.effective_start_date AND pivf.effective_end_date
AND paypa.effective_date BETWEEN pet.effective_start_date AND pet.effective_end_date
AND paypa.effective_date between to_date('01-JUL-2010') AND TO_DATE('30-JUN-2011')
group by paa.assignment_id,paypa.business_group_id,paypa.payroll_id
Any idea how we can improve performance? Although it works fine without the GROUP BY.
Edited by: oracle0282 on Mar 31, 2011 11:36 PM
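A minimal sketch of one possible restructuring (not from the thread). It assumes element_type_id 10 with the 'Pay Value' input value is the only combination being summed, so those filters can move from inside the DECODE into the WHERE clause, and it drops the pay_element_classifications join because nothing from that table is selected or filtered. Note the behaviour difference: assignments whose only run results belong to other elements no longer appear with a zero amount.

SELECT paa.assignment_id,
       MAX(paypa.effective_date) effective_date,
       paypa.business_group_id,
       paypa.payroll_id,
       NVL(SUM(TO_NUMBER(NVL(prrv.result_value, 0))), 0) amount
FROM   pay_assignment_actions paa,
       pay_payroll_actions    paypa,
       pay_run_results        prr,
       pay_element_types_f    pet,
       pay_run_result_values  prrv,
       pay_input_values_f     pivf
WHERE  paypa.payroll_action_id  = paa.payroll_action_id
AND    prr.assignment_action_id = paa.assignment_action_id
AND    paypa.action_status      = 'C'
AND    paa.action_status        = 'C'
AND    paypa.action_type IN ('Q', 'R')
AND    pet.element_type_id      = prr.element_type_id
AND    pet.element_type_id      = 10            -- filtered here instead of inside DECODE
AND    pivf.element_type_id     = pet.element_type_id
AND    pivf.name                = 'Pay Value'   -- filtered here instead of inside DECODE
AND    pivf.input_value_id      = prrv.input_value_id
AND    prr.run_result_id        = prrv.run_result_id
AND    paypa.effective_date BETWEEN pivf.effective_start_date AND pivf.effective_end_date
AND    paypa.effective_date BETWEEN pet.effective_start_date  AND pet.effective_end_date
AND    paypa.effective_date BETWEEN TO_DATE('01-JUL-2010', 'DD-MON-YYYY')
                                AND TO_DATE('30-JUN-2011', 'DD-MON-YYYY')
GROUP  BY paa.assignment_id, paypa.business_group_id, paypa.payroll_id;

With the non-matching rows filtered before the join and the GROUP BY, far fewer run result rows reach the aggregation step; the remaining cost is usually the date-range access on pay_payroll_actions, which the execution plan will show.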

Similar Messages

  • Update query takes much time to execute

    Hi Experts,
    I need help regarding performance of the query.
    update TEST_TAB
    set fail=1, msg='HARD'
    where id in (
    select src.id from TEST_TAB src
    inner join TEST_TAB l_1 on src.email=l_1.email and l_1.database_id=335090 and l_1.msg='HARD' and l_1.fail=1
    inner join TEST_TAB l_2 on src.email=l_2.email and l_2.database_id=338310 and l_2.msg='HARD' and l_2.fail=1
    inner join TEST_TAB l_3 on src.email=l_3.email and l_3.database_id=338470 and l_3.msg='HARD' and l_3.fail=1
    where src.database_id=1111111);
    This query runs for too long: it takes more than 1 hour and updates 26,000 records.
    But if we run the inner select query on its own:
    select src.id from TEST_TAB src
    inner join TEST_TAB l_1 on src.email=l_1.email and l_1.database_id=335090 and l_1.msg='HARD' and l_1.fail=1
    inner join TEST_TAB l_2 on src.email=l_2.email and l_2.database_id=338310 and l_2.msg='HARD' and l_2.fail=1
    inner join TEST_TAB l_3 on src.email=l_3.email and l_3.database_id=338470 and l_3.msg='HARD' and l_3.fail=1
    where src.database_id=1111111
    It takes <1 minute to execute.
    Please give me suggestions on the update query so that I can improve its performance.

    SELECT src.id FROM lead src
            inner join lead l_1 ON src.email=l_1.email AND
    l_1.database_id=335090 AND l_1.bounce_msg_t='HARD' AND l_1.failed=1
            inner join lead l_2 ON src.email=l_2.email AND
    l_2.database_id=338310 AND l_2.bounce_msg_t='HARD' AND l_2.failed=1
            inner join lead l_3 ON src.email=l_3.email AND
    l_3.database_id=338470 AND l_3.bounce_msg_t='HARD' AND l_3.failed=1
        WHERE src.database_id=264170;
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS          1             10453                              
      TABLE ACCESS BY INDEX ROWID     LEAD     1       32       27                              
        NESTED LOOPS          1       130       10453                              
          HASH JOIN          1       98       10426                              
            HASH JOIN          199       12 K     6950                              
              TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
                INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
              TABLE ACCESS BY INDEX ROWID     LEAD     94 K     3 M     3473                              
                INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
            TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
              INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
          INDEX RANGE SCAN     LEAD_IDX_4     24             3
    Update for one row:
    UPDATE lead SET failed=1, bounce_msg_t='HARD'
    WHERE id IN (SELECT src.id FROM lead src
    inner join lead l_1 ON src.email=l_1.email AND
    l_1.database_id=335090 AND l_1.bounce_msg_t='HARD' AND l_1.failed=1
    inner join lead l_2 ON src.email=l_2.email AND
    l_2.database_id=338310 AND l_2.bounce_msg_t='HARD' AND l_2.failed=1
    inner join lead l_3 ON src.email=l_3.email AND
    l_3.database_id=338470 AND l_3.bounce_msg_t='HARD' AND l_3.failed=1
    WHERE src.database_id=264170
         AND ROWNUM=1)
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=ALL_ROWS          1             10456                              
      UPDATE     LEAD                                               
        NESTED LOOPS          1       32       10456                              
          VIEW     VW_NSO_1     1       13       10453                              
            SORT UNIQUE          1       130                                    
              COUNT STOPKEY                                                    
                TABLE ACCESS BY INDEX ROWID     LEAD     1       32       27                              
                  NESTED LOOPS          1       130       10453                              
                    HASH JOIN          1       98       10426                              
                      HASH JOIN          199       12 K     6950                              
                        TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
                          INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
                        TABLE ACCESS BY INDEX ROWID     LEAD     94 K     3 M     3473                              
                          INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
                      TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
                        INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
                    INDEX RANGE SCAN     LEAD_IDX_4     24             3                              
          TABLE ACCESS BY INDEX ROWID     LEAD     1       19       2                              
            INDEX UNIQUE SCAN     LEADS_PK     1             1 
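    One rewrite often suggested for this pattern (a hedged sketch, not from the thread, using the table and column names from the posted statement; verify them against the real schema) is to do the join once with MERGE instead of re-probing the IN subquery for each candidate row:

    MERGE INTO lead tgt
    USING (SELECT DISTINCT src.id
             FROM lead src
             JOIN lead l_1 ON src.email = l_1.email
                  AND l_1.database_id = 335090 AND l_1.bounce_msg_t = 'HARD' AND l_1.failed = 1
             JOIN lead l_2 ON src.email = l_2.email
                  AND l_2.database_id = 338310 AND l_2.bounce_msg_t = 'HARD' AND l_2.failed = 1
             JOIN lead l_3 ON src.email = l_3.email
                  AND l_3.database_id = 338470 AND l_3.bounce_msg_t = 'HARD' AND l_3.failed = 1
            WHERE src.database_id = 264170) sel
    ON (tgt.id = sel.id)
    WHEN MATCHED THEN
      UPDATE SET tgt.failed = 1, tgt.bounce_msg_t = 'HARD';

    The DISTINCT keeps the USING set unique on id, which MERGE needs to avoid ORA-30926. Comparing the plan of this statement with the plans above shows whether the optimizer still re-drives the three-way join per row.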

  • While running the query, how much time will it take? I want to see the time

    Hi Folks
    I would like to know: while running a query, how much time will it take? I want to see the time, in WebI XI R2.
    Please let me know the answer.

    Hi Ravi,
    The time a report runs is estimated based on the last time it was run, so you need to run the report once before you can see how long it will take. It also depends on several factors: the database server could cache some queries, so running it a second time immediately after the first could be quicker, and changing filters can bring back different sets of data.
    You could also schedule a report and then check the scheduled instance's status properties and view how long a report actually ran.
    Good luck

  • Query takes more time on TimesTen

    Hi
    One query takes a lot of time on TimesTen, while the same query takes less time on Oracle.
    Query :-
    select *
                    from (SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE,
                                NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(sum(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(sum(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(sum(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(sum(C.reld_m2m_exp)),
                                              -1,
                                              abs(sum(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER         A,
                                 ORDER_LMT_MASTER      B,
                                 RMS_ENTITY_LIMIT_DTLS C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = C.RELD_EM_ENTITY_ID
                             AND C.RELD_EM_ENTITY_ID = B.OLM_EPM_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'E'
                             AND C.RELD_EXM_EXCH_ID = B.OLM_EXCH_ID(+)
                             AND C.RELD_EXM_EXCH_ID <> 'ALL'
                             AND B.OLM_SEM_SMST_SECURITY_ID(+) =
                                 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                             AND B.OLM_PRODUCT_ID(+) = 'M' --Added by Harshit Shah on 4th June 09
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID,
                                    OLM_PRODUCT_ID
                          UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
                          SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE SEGMENTID,
                               NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(SUM(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(SUM(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(SUM(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(SUM(C.reld_m2m_exp)),
                                              -1,
                                              abs(SUM(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER         A,
                                 ORDER_LMT_MASTER      B,
                                 RMS_ENTITY_LIMIT_DTLS C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = B.OLM_EPM_EM_ENTITY_ID
                             AND B.OLM_EPM_EM_ENTITY_ID = C.RELD_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'E'
                             AND B.OLM_EXCH_ID = 'ALL'
                             AND B.OLM_SEM_SMST_SECURITY_ID(+) =
                                 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                             AND B.OLM_PRODUCT_ID(+) = 'M' --Added by Harshit Shah on 4th June 09
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID,
                                    OLM_PRODUCT_ID
                          UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
                          SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE,
                                 NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(sum(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(sum(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(sum(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(sum(C.reld_m2m_exp)),
                                              -1,
                                              abs(sum(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLIM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER             A,
                                 DRV_ORDER_INST_LMT_MASTER B,
                                 RMS_ENTITY_LIMIT_DTLS     C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = C.RELD_EM_ENTITY_ID
                             AND C.RELD_EM_ENTITY_ID =
                                 B.OLIM_EPM_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'D'
                             AND C.RELD_EXM_EXCH_ID = B.OLIM_EXCH_ID(+)
                             AND C.RELD_EXM_EXCH_ID <> 'ALL'
                             AND B.OLIM_INSTRUMENT_ID(+) = 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLIM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID
                          UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
                          SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE SEGMENTID,
                                 NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(SUM(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(SUM(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(SUM(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(SUM(C.reld_m2m_exp)),
                                              -1,
                                              abs(SUM(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLIM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER             A,
                                 DRV_ORDER_INST_LMT_MASTER B,
                                 RMS_ENTITY_LIMIT_DTLS     C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = B.OLIM_EPM_EM_ENTITY_ID
                             AND B.OLIM_EPM_EM_ENTITY_ID =
                                 C.RELD_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'D'
                             AND B.OLIM_EXCH_ID = 'ALL'
                             AND B.OLIM_INSTRUMENT_ID(+) = 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLIM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID)
                   ORDER BY RELD_EM_ENTITY_ID,
                            RELD_SEGMENT_TYPE,
                            RELD_EXM_EXCH_ID;
    Please suggest what I should check for this.

    As always when examining SQL performance, start by checking the query execution plan. If you use ttIsql you can just prepend EXPLAIN to the query and the plan will be displayed. e.g.
    EXPLAIN  select ...........;
    Check that the plan is optimal and all necessary indexes are in place. You may need to add indexes depending on what the plan shows.
    Please note that Oracle database can, and usually does, execute many types of query in parallel using multiple CPU cores. TimesTen does not currently support parallelisation of individual queries. Hence in some cases Oracle database may indeed be faster than TimesTen due to the parallel execution that occurs in Oracle.
    Chris
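    If the plan shows serial full-table scans on the big tables, supporting indexes are usually the first thing to add. A hedged sketch, not from the thread: the index names and column choices below are assumptions based only on the posted predicates (the driving filter on ENTITY_MASTER and the join/filter columns on RMS_ENTITY_LIMIT_DTLS).

    CREATE INDEX em_controller_ix
      ON ENTITY_MASTER (EM_CONTROLLER_ID, EM_ENTITY_ID);

    CREATE INDEX reld_entity_seg_ix
      ON RMS_ENTITY_LIMIT_DTLS (RELD_EM_ENTITY_ID, RELD_SEGMENT_TYPE, RELD_EXM_EXCH_ID);

    After adding indexes, refresh the optimizer statistics (for example with ttOptUpdateStats) and re-check the EXPLAIN output before timing the query again.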

  • Why does the update query take a long time?

    Hello everyone;
    My update query takes a long time. In emp (self testing) there are just 2 records.
    When I issue an update query, it takes a long time:
    SQL> select  *  from  emp;
      EID  ENAME  EQUAL  ESALARY  ECITY      EPERK  ECONTACT_NO
        2  rose   mca      22000  calacutta         9999999999
        1  sona   msc      17280  pune              9999999999
    Elapsed: 00:00:00.05
    SQL> update emp set esalary=12000 where eid='1';
    update emp set esalary=12000 where eid='1'
    * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:01:11.72
    SQL> update emp set esalary=15000;
    update emp set esalary=15000
      * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:02:22.27

    Hi BCV;
    Thanks for your reply, but it doesn't provide the output. Please see this:
    SQL> update emp set esalary=15000;
    ........... Lock already occurred.
    >> trying to trace  >>
    SQL> select HOLDING_SESSION from dba_blockers;
    HOLDING_SESSION
                144
    SQL> select sid , username, event from v$session where username='HR';
    SID USERNAME     EVENT
       144   HR    SQL*Net message from client
       151   HR    enq: TX - row lock contention
       159   HR    SQL*Net message from client
    >> It doesn't provide clear output about the transaction lock >>
    SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
      2  BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
      3  request
      4  FROM v$lock, v$session
      5  WHERE v$lock.TYPE = 'TX'
      6  AND v$lock.SID = v$session.SID
      7  AND v$session.username = USER;
      no rows selected
    SQL> select MACHINE from v$session where sid = :sid;
    SP2-0552: Bind variable "SID" not declared.
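    The SP2-0552 error at the end is only because the :sid bind variable was never declared in SQL*Plus. A hedged sketch of finishing the diagnosis (session 144 is the blocker reported by dba_blockers above; the KILL is a last resort if the blocking transaction cannot simply be committed or rolled back in its own session):

    VARIABLE sid NUMBER
    EXEC :sid := 144

    SELECT sid, serial#, username, machine, program, status
      FROM v$session
     WHERE sid = :sid;

    -- Release the row lock by committing or rolling back in session 144, or as a last resort:
    -- ALTER SYSTEM KILL SESSION '144,<serial#>';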

  • Select query takes more time

    Hi All,
    I have cloned the KSB1 tcode into a custom one, as required by the business.
    The query below takes more time than expected.
    Here V_DB_TABLE = COVP.
    The values in the WHERE clause are as follows:
    OBJNR in ( KSBB010000001224  BT  KSBB012157221571 )
    GJAHR in blank
    VERSN in '000'
    WRTTP in '04' and '11'
    all others are blank
    VT_VAR_COND = ( CPUDT BETWEEN '20091201' and '20091208' )
    SELECT (VT_FIELDS) INTO CORRESPONDING FIELDS OF GS_COVP_EXT      
                        FROM (V_DB_TABLE)                             
                        WHERE LEDNR = '00'                            
                        AND   OBJNR IN LR_OBJNR                       
                        AND   GJAHR IN GR_GJAHR                       
                        AND   VERSN IN GR_VERSN                       
                        AND   WRTTP IN GR_WRTTP                       
                        AND   KSTAR IN LR_KSTAR                       
                        AND   PERIO IN GR_PERIO                       
                        AND   BUDAT IN GR_BUDAT                       
                        AND   PAROB IN GR_PAROB                       
                        AND   (VT_VAR_COND).    
    Checked in the table for this condition: it has only 92 entries.
    But when I execute the program it takes as long as 3 hours.
    Could anyone help me on this?

    >1.Dont use SELECT/ENDSELECT instead use INTO TABLE addition .
    > 2.Avoid using corresponding addition.create a type and reference it.
    > If the select is going for dump beacause of storage limitations ,then use Cursors.
    You got three large NOs .... all three recommendations are wrong!
    The SE16 test is going in the right direction ... but what was filled? Nobody knows!!!!
    Select options:
    Did you ever try to trace the SE16? The generic statement has an in-condition for every field!
    Without the information about what was actually filled, nobody can say anything; there
    are at least 2**n combinations possible!
    Use ST05 for SE16 and check actual statement plus explain!

  • Takes much time

    hi all
    When I try to retrieve the data from the outline using the OLAP outline extractor it takes much time and returns less data.
    I don't know what the reason is, whereas in razzza I get it in a fraction of a second, but it is not the format I am looking for.
    Help appreciated.
    regards

    You may need to make some registry settings, have a look at this post and there is a link to Tim's blog - Re: Extracting large dimension using outline extractor
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • MacBook Pro takes much time to shut down (more than 1 min.)

    I have a MacBook Pro, and for some weeks it has been taking much time to shut down.
    I've tried installing the 10.5.2 update, but that solved nothing.
    My Mac takes 1 min & 30 seconds to shut down every time. I don't understand why.
    What can I do? I tried repairing permissions and resetting the PMU, but the problem is still there.
    Help me! Please!
    Thanks
    Sorry, but I'm Spanish and it's possible that I've made some mistakes writing in English.

    Thanks. I tried disabling an EyeTV option, and now EyeConnect doesn't appear in the Activity Monitor. This is the result:
    Now my MacBook Pro takes only 35 seconds to shut down. I think that's better than yesterday, but it is more than the 5 seconds you described in the last post. Is 35 seconds a good time? During these 35 seconds I can only see the background image, without icons and without the Finder menu bar. After 35 seconds, the Mac is off.
    How can I disable iDisk sync? I have a free Mac account, but it has expired, so I can't use iDisk, only the name in iChat. But is iDisk still enabled?

  • Problem: SELECT from LTAP table takes much time (sort in database layer)

    Guys,
    I'm having a problem with this select statement. It takes much time just to get a single record.
    The problem is with accessing the LTAP table and the ORDER BY DESCENDING statement.
    The objective of this select statement is to get the non-blocked storage bin used by the latest transfer order number.
    If the latest transfer order's storage bin is blocked, it loops and gets the 2nd latest transfer order's storage bin and
    checks whether it is blocked or not. It keeps looping.
    The secondary index has been created but it is still taking much time (3 minutes for 10K records in LTAP).
    Secondary indexes:
    a) LTAP_M -> MANDT, LGNUM, PQUIT, MATNR
    b) LTAP_L -> LGNUM, PQUIT, VLTYP, VLPLA
    Below is the coding.
    ******************Start of DEVK9A14JW**************************
        SELECT ltap~tanum ltap~nlpla ltap~wdatu INTO (ltap-tanum, ltap-nlpla, ltap-wdatu)
              UP TO 1 ROWS
              FROM ltap INNER JOIN lagp                         "DEVK9A15OA
              ON lagp~lgnum =  ltap~lgnum
              AND lagp~lgtyp =  ltap~nltyp
              AND lagp~lgpla =  ltap~nlpla
                WHERE lagp~skzue = ' '
                AND ltap~pquit = 'X'
                AND ltap~matnr = ls_9001_scrn-matnr
                AND ltap~lgort = ls_9001_scrn-to_lgort
                AND ltap~lgnum = ls_9001_scrn-lgnum
                AND ltap~nltyp = ls_9001_scrn-nltyp
          ORDER BY tanum DESCENDING.
         ENDSELECT.
        IF sy-subrc EQ 0.
          ls_9001_scrn-nlpla = ltap-nlpla.
          EXIT.
        ENDIF.
    ******************End of DEVK9A14JW**************************

    > I'm having a problem with this select statement. It takes much time just to get a single record.
    This is not true. Together with the ORDER BY, the UP TO 1 ROWS does not read 1 record: it prepares all records, orders them, and returns one record, i.e. the largest in sort order.
    You must check what you need. If you need the largest record, then this can be your only possible solution.
    If you need just any one record, then the ORDER BY does not make sense.
    If you need the single largest record, then sometimes the aggregate function MAX can be an alternative.
    I did not look at the index support; this can always be a problem.
    Siegfried
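    For illustration only, a hedged sketch in plain SQL (the original is ABAP Open SQL) of the aggregate idea mentioned above: read the single largest transfer order number directly instead of sorting every matching row. It returns only the key, so the bin still has to be read afterwards, which is why the aggregate is only sometimes an alternative. Bind values stand in for the ls_9001_scrn screen fields.

    SELECT MAX(ltap.tanum)
      FROM ltap
      JOIN lagp ON lagp.lgnum = ltap.lgnum
               AND lagp.lgtyp = ltap.nltyp
               AND lagp.lgpla = ltap.nlpla
     WHERE lagp.skzue = ' '
       AND ltap.pquit = 'X'
       AND ltap.matnr = :matnr
       AND ltap.lgort = :lgort
       AND ltap.lgnum = :lgnum
       AND ltap.nltyp = :nltyp;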

  • Primavera Contract Management takes much time when opening large PDF files

    Dears,
    I have a big problem!
    I made an integration between PCM and SharePoint 2010 and migrated from the file system to SharePoint.
    The SharePoint database has reached 355 GB.
    After that, unfortunately, when I try to open a large PDF attachment through PCM (Primavera Contract Management), it takes much more time than when it was opened from the file server.
    I have tried everything, upgrading the RAM and the processor, but the problem still exists.
    Please help!
    Edited by: 948060 on Sep 19, 2012 1:48 AM

    We started storing attachments in 2007. All of these files have now been migrated to SharePoint 2010 on the staging environment.
    But we face the performance issue mentioned above.
    The large files (from 5 MB) take a lot of time to open through PCM.

  • What is the reason a query takes more time to execute?

    Hi,
    What is the reason the query takes more time inside the procedure,
    but if I execute that query alone then it executes within minutes?
    The query includes an update and an insert.

    > I have an insert and update query. When I execute it without the procedure, the query executes faster, but if I execute it inside the procedure then it takes 2 hours to execute.
    Put your watch 2 hours back and the problem will disappear.
    > Do you understand what I want to say?
    I understood what you wanted to say, and I understood that you didn't understand what I said.
    What does the procedure do, what does the query do, and how can you say the query does the same thing as the procedure that takes longer? You didn't say anything useful to give an idea of what you're talking about.
    Everyone knows what it means that something is slower than something else, but it means nothing if you don't say what you're talking about.
    To begin with, take a look at this:
    When your query takes too long ...
    especially the part regarding the trace.
    Bye Alessandro
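    A minimal sketch of tracing the slow call, along the lines of the note linked above; the procedure name below is a placeholder for whatever the application actually runs:

    ALTER SESSION SET tracefile_identifier = 'slow_proc';
    ALTER SESSION SET events '10046 trace name context forever, level 12';

    -- run the slow procedure here (hypothetical name)
    EXEC my_procedure

    ALTER SESSION SET events '10046 trace name context off';

    Formatting the resulting trace file with tkprof and comparing it with a trace of the stand-alone statement shows whether the execution plan or the wait events differ inside the procedure.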

  • Select query taking much time

    Dear all ,
    I am fetching data from pool table A005. The select query is mentioned below.
    select * from a005 into table i_a005 for all entries in it_table
                 where  kappl  = 'V'
                 and      kschl   IN  s_kschl
                 and     vkorg   in   s_vkorg
                 and     vtweg  in   s_vtgew
                 and     matnr   in s_matnr
                 and    knumh  =  it_table-knumh .
    Here every field is a primary key field except one field, KNUMH, which is compared against table it_table. Because of this field the query takes too much time, as KNUMH is not a primary key. And A005 is a pool table, so I can't create an index on it. If there is an alternate solution, please let me know.
    Thank you,
    And in the technical settings the table is marked as fully buffered with size category 0, but there are around 9,000,000 records. Is that the issue, or what? Can somebody give a genuine reason, or an improvement to my select query?
    Edited by: TVC6784 on Jun 30, 2011 3:31 PM

    TVC6784 wrote:
    Hi Yuri ,
    >
    > Thanks for your reply....I will check as per your comment...
    > bUT if i remove field KNUMH  From selection condition and also for all entries in it_itab ,  than data fetch very fast As KNUMH is not primary key..
    > .  the example is below
    >
    > select * from a005 into table i_a005
    > where kappl = 'V'
    > and kschl IN s_kschl
    > and vkorg in s_vkorg
    > and vtweg in s_vtgew
    > and matnr in s_matnr.
    >
    > Can you comment anything about it ?
    >
    > And can you please say how can i check its size as you mention that is  2-3 Mb More   ?
    >
    > Edited by: TVC6784 on Jun 30, 2011 7:37 PM
    I cannot see the trace and other information about the table, so I cannot judge why the select without KNUMH is faster.
    Basically, if the table is buffered and its contents are in the SAP application server memory, the access should be really fast. It does not really matter whether it is with KNUMH or without.
    I would really like to see at least an ST05 trace of your report that is doing this select. This would clarify many things.
    You can check the size by multiplying the entries in the A005 table by 138. This is (in my test system) the ABAP width of the structure.
    If you have 9,000,000 records in A005, then it would take 1.24 GB in the buffer (which is a clear sign to unbuffer).

  • Query taking much time on Oracle 9i

    Hi,
    How can we tune the SQL query in Oracle 9i?
    The select query takes more than 1 hour and 30 minutes to return the result.
    Because of this,
    we have created a materialized view on the select query and also submitted a job in dba_jobs to refresh the materialized view daily.
    When we try to retrieve the data from the materialized view we get the result very quickly.
    But the job assigned in dba_jobs takes as long to complete as the query used to take.
    We feel that since the job takes much time in the test database, it may cause load if we move the same scripts to the production environment.
    Please suggest how to resolve the issue and also how to tune the SQL.
    With Regards,
    Srinivas
    Edited by: Srinivas.. on Dec 17, 2009 6:29 AM

    Hi Srinivas;
    Please follow this search and see if it is helpful.
    Regards
    Helios
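    One thing worth checking (not from the thread; a hedged sketch with hypothetical object names): a complete refresh re-runs the whole defining query, which is why the dba_jobs job takes as long as the query itself. If the defining query meets the fast-refresh restrictions, materialized view logs plus an incremental refresh avoid that:

    -- a log on each base table of the MV's defining query (hypothetical table and column names)
    CREATE MATERIALIZED VIEW LOG ON base_table
      WITH ROWID (col1, col2) INCLUDING NEW VALUES;

    -- switch the MV to fast refresh and refresh it incrementally
    ALTER MATERIALIZED VIEW my_mv REFRESH FAST;
    EXEC DBMS_MVIEW.REFRESH('MY_MV', 'F')

    DBMS_MVIEW.EXPLAIN_MVIEW reports whether the defining query is fast-refreshable at all; if it is not, the remaining option is to tune the query itself (execution plan, indexes) so the complete refresh becomes cheaper.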

  • Query takes a long time to execute

    Hi,
    These are interview questions:
    1. If a query takes a lot of time to execute, what preliminary analysis should be done by a DBA?
    2. If we have system, users, rollback and temporary tablespaces, and 2 disks (disk1 and disk2), how should the tablespaces be distributed on the disks?
    I am interested to know the answers. Kindly respond.
    Thanks and Regards
    Sumit Sharma

    2. If we have system, users, rollback and temporary tablespaces, and 2 disks (disk1 and disk2), how should the tablespaces be distributed on the disks?
    Two disks are not enough for a good distribution, but you could do the best possible with them.
    Really, the optimal distribution over disks would be:
    DISK1: Tablespaces for data, one controlfile, redo logs (1 member per group)
    DISK2: Redo logs (1 member per group), one controlfile
    DISK3: Tablespaces for indexes, redo logs (1 member per group), one controlfile
    DISK4: Archives, rollback segments
    Joel Pérez
    http://otn.oracle.com/experts

  • ABAP QUERY taking much time after ERP Upgrade from 4.6 to 6.0

    Hi All,
    I have an ABAP Query which uses the infoset INVOICE_INBOUND and the user group InvoiceVerif. The infoset uses the tables RBKP and RSEG, connected with a JOIN on the BELNR and GJAHR fields.
    The query was working fine in version 4.6C. Now the system has been upgraded to version 6.0.
    Now it takes so much time that the processing does not complete. Do we have to make any changes to the existing queries for an upgrade?
    Thanks a lot in advance.
    Gautham.

    Did you regenerate the query, the infoset and the program before transporting them to ECC 6.0?
