INSTR() function is taking more time

I'm using the select query below in my procedure.
select distinct SUPPLIER_CIRCUIT_ID from SUPPLIER_DATA
where INSTR(i.SYSTEM_CIRCUIT_ID, SUPPLIER_CIRCUIT_ID) > 0;
I am taking SYSTEM_CIRCUIT_ID from a cursor.
This query is taking a long time. Is it possible to create a function-based index to speed up the query?

Hi,
Welcome to the forum!
993620 wrote:
I'm using the select query below in my procedure.
select distinct SUPPLIER_CIRCUIT_ID from SUPPLIER_DATA
where INSTR(i.SYSTEM_CIRCUIT_ID, SUPPLIER_CIRCUIT_ID) > 0;
I am taking SYSTEM_CIRCUIT_ID from a cursor.
Show exactly what you're doing.
Whenever you have a problem, post a complete test script that people can run to re-create the problem and test their ideas.
See the forum FAQ {message:id=9360002}
This query is taking a long time. Is it possible to create a function-based index to speed up the query?
Sorry, unless you're doing something very specific (such as always looking for the same sub-string), a function-based index won't help.
Oracle sells a separate product, called Oracle Text, for this kind of searching.
You might try LIKE:
WHERE  i.SYSTEM_CIRCUIT_ID LIKE  '%' || SUPPLIER_CIRCUIT_ID || '%'
If you're using a cursor, then that's probably slowing the process down much more than the part you're showing.
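For illustration only, here is a minimal set-based sketch of the same matching done in a single statement rather than row by row from a cursor. The table name SYSTEM_DATA is an assumption standing in for whatever table the cursor actually reads SYSTEM_CIRCUIT_ID from; adjust it to your schema.
SELECT DISTINCT s.SYSTEM_CIRCUIT_ID,
       d.SUPPLIER_CIRCUIT_ID
FROM   SYSTEM_DATA   s      -- hypothetical source of the cursor rows
JOIN   SUPPLIER_DATA d
ON     s.SYSTEM_CIRCUIT_ID LIKE '%' || d.SUPPLIER_CIRCUIT_ID || '%';
Even as one statement this is still a full comparison of every pair of rows, so it will not be fast on large volumes; it merely avoids the per-row context switches of a PL/SQL loop.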

Similar Messages

  • Custom Report taking more time to complete Normal

    Hi All,
    A custom report (Aging Report) in Oracle is taking a long time to complete with status Normal.
    In one instance the same report takes 5 minutes, while in the other instance it takes 40-50 minutes to complete.
    We have enabled trace and checked the trace file, but all the queries are working fine.
    Could you please advise on this issue?
    Thanks in advance!!

    TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Sort options: prsela exeela fchela
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
    ORA-00922: missing or invalid option
    parse error offset: 1049
    EXPLAIN PLAN option disabled.
    SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
    FROM
    HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.05 11 22 0 3
    total 3 0.00 0.05 11 22 0 3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
    3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
    3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
    3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
    3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
    18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
    3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
    3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
    3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
    3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 11 0.01 0.05
    asynch descriptor resize 2 0.00 0.00
    INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
    MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
    LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
    AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
    VALUES
    ( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
    :B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
    NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
    60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
    SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 20 0.00 0.03 4 1 304 20
    Fetch 0 0.00 0.00 0 0 0 0
    total 21 0.00 0.03 4 1 304 20
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
    1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.01 0.03
    SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
    SEVERITY
    FROM
    FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
    M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
    A.APPLICATION_ID
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.03 4 12 0 2
    total 5 0.00 0.03 4 12 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
    1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.00 0.03
    DELETE FROM MO_GLOB_ORG_ACCESS_TMP
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.02 3 4 4 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.02 3 4 4 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
    1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 3 0.01 0.02
    SET TRANSACTION READ ONLY
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.01 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.01 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    begin Fnd_Concurrent.Init_Request; end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 148 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 148 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    log file sync 1 0.01 0.01
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
    :DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
    end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 9 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 9 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 8 0.00 0.00
    SQL*Net message from client 8 0.00 0.00
    begin fnd_global.bless_next_init('FND_PERMIT_0000');
    fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
    :security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
    :conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
    :form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
    :server_id); fnd_profile.put('ORG_ID', :org_id);
    fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
    fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
    fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 8 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 8 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
    FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
    FPI.SIZING_FACTOR
    FROM
    FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
    WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
    FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.00 0 7 0 1
    total 4 0.00 0.00 0 7 0 1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
    1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
    1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
    1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
    1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
    1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
    SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
    USERENV('LANGUAGE'), '.') + 1)
    FROM
    V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
    USERENV('SESSIONID')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 0 0 1
    total 3 0.00 0.00 0 0 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
    1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
    182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
    1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
    181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
    1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 2 0.00 0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 6 0.00 0.00 0 0 0 0
    Execute 6 0.01 0.02 0 165 0 4
    Fetch 1 0.00 0.00 0 0 0 1
    total 13 0.01 0.02 0 165 0 5
    Misses in library cache during parse: 0
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 37 0.00 0.00
    SQL*Net message from client 37 1.21 2.19
    log file sync 1 0.01 0.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 49 0.00 0.00 0 0 0 0
    Execute 89 0.01 0.07 7 38 336 24
    Fetch 29 0.00 0.09 15 168 0 27
    total 167 0.02 0.16 22 206 336 51
    Misses in library cache during parse: 3
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 6 0.00 0.00
    db file sequential read 22 0.01 0.14
    48 user SQL statements in session.
    1 internal SQL statements in session.
    49 SQL statements in session.
    0 statements EXPLAINed in this session.
    Trace file compatibility: 10.01.00
    Sort options: prsela exeela fchela
    1 session in tracefile.
    48 user SQL statements in trace file.
    1 internal SQL statements in trace file.
    49 SQL statements in trace file.
    48 unique SQL statements in trace file.
    928 lines in trace file.
    1338833753 elapsed seconds in trace file.
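    Since the same report behaves so differently on the two instances, one way forward is to trace the slow run on each instance and compare the TKPROF output side by side. A minimal sketch, assuming you can identify the SID and SERIAL# of the session running the report (DBMS_MONITOR is available from 10g onwards; the 1234 / 56789 values are placeholders):
    BEGIN
       -- enable extended SQL trace (with wait events) for that session
       DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 1234,
                                         serial_num => 56789,
                                         waits      => TRUE,
                                         binds      => FALSE);
    END;
    /
    -- ... let the report run, then switch tracing off again:
    BEGIN
       DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 56789);
    END;
    /
    Comparing the two TKPROF reports sorted the same way (prsela exeela fchela, as above) usually shows which statement accounts for the extra 35-45 minutes.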

  • Trunc taking more time to execute..........

    Hi,
    I am running a query that includes TRUNC in the WHERE condition, but it is taking a long time to execute. The query is:
    SELECT POD.REQ_DISTRIBUTION_ID, X.*
    FROM
    (SELECT MSI.SEGMENT1||'.'||MSI.SEGMENT2||'.'||MSI.SEGMENT3||'.'||MSI.SEGMENT4 ITEM, RT.TRANSACTION_TYPE,
         MSI.DESCRIPTION, rt.TRANSACTION_ID,RT.PARENT_TRANSACTION_ID,
         RSH.RECEIPT_NUM,
         RSH.SHIP_TO_ORG_ID,
         TRUNC(RT.TRANSACTION_DATE) RCP_DATE,
         RSL.QUANTITY_RECEIVED RCV_QTY,
         PLLA.SHIPMENT_NUM,
         PLA.LINE_NUM PO_LINE,
         PHA.SEGMENT1 PO_NUM,
         PHA.CREATION_DATE,
         PHA.APPROVED_DATE,
         PLA.QUANTITY PO_QTY,
         RSH.SHIPMENT_HEADER_ID,
         RSL.SHIPMENT_LINE_ID,
         PLLA.LINE_LOCATION_ID
    FROM PO_HEADERS_ALL PHA,
         PO_LINES_ALL PLA,
         MTL_SYSTEM_ITEMS MSI,
         RCV_SHIPMENT_HEADERS RSH,
         RCV_SHIPMENT_LINES RSL,
         RCV_TRANSACTIONS RT,
         PO_LINE_LOCATIONS_ALL PLLA
    WHERE PHA.PO_HEADER_ID = PLA.PO_HEADER_ID
    AND     PHA.PO_HEADER_ID = RSL.PO_HEADER_ID
    AND     PHA.PO_HEADER_ID = PLLA.PO_HEADER_ID
    AND     PHA.ORG_ID = PLLA.ORG_ID
    AND     PLA.ITEM_ID = MSI.INVENTORY_ITEM_ID
    AND     PLA.PO_LINE_ID = RSL.PO_LINE_ID
    AND     MSI.INVENTORY_ITEM_ID = RSL.ITEM_ID
    AND     MSI.ORGANIZATION_ID = RSH.SHIP_TO_ORG_ID
    AND     RSH.SHIPMENT_HEADER_ID = RSL.SHIPMENT_HEADER_ID
    AND     RSH.SHIPMENT_HEADER_ID = RT.SHIPMENT_HEADER_ID
    AND     RSL.SHIPMENT_LINE_ID = RT.SHIPMENT_LINE_ID
    AND     RSL.PO_LINE_ID = PLLA.PO_LINE_ID
    AND     RT.TRANSACTION_TYPE = 'RECEIVE'
    AND     NVL(MSI.ENABLED_FLAG,'N') = 'Y'
    AND     NVL(RSL.QUANTITY_RECEIVED,0) > 0
    AND     PHA.ORG_ID = :P_ORG_ID
    AND        TRUNC(RT.TRANSACTION_DATE) BETWEEN :P_FROM_DATE AND :P_TO_DATE ) X, PO_DISTRIBUTIONS_ALL POD
    WHERE POD.LINE_LOCATION_ID = X.LINE_LOCATION_ID
    How can I make it execute faster? Is there any alternative to TRUNC?
    PS

    You could use a function-based index:
    create index idx_trunc_trans_date on RCV_TRANSACTIONS(TRUNC(TRANSACTION_DATE));
    That said, the TRUNC you are using will not be of any use unless your p_from_date and p_to_date carry a time component, or you enter the same date for both parameters.
    TRUNC just strips the time portion from the date and resets it to the beginning of the day. I don't think you need TRUNC at all.
    Instead, if both P_FROM_DATE and P_TO_DATE are supplied, compute an adjusted upper bound in a local variable:
    l_to_date := (p_to_date + 1) - (1/(24*3600));
    and then filter with
    AND RT.TRANSACTION_DATE BETWEEN :P_FROM_DATE AND L_TO_DATE
    (a fuller sketch of this rewrite follows below).
    I believe you should get the same results unless there is a time portion in your from and to date parameters.
    If you get the same results, just remove the TRUNC.
    G.
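    A minimal sketch of that rewrite, assuming :P_FROM_DATE and :P_TO_DATE are DATE parameters without a time portion (table and column names taken from the query above):
    -- compute the inclusive upper bound once, in a local variable
    l_to_date := :P_TO_DATE + 1 - (1 / (24 * 60 * 60));   -- 23:59:59 on :P_TO_DATE
    -- then filter the raw column instead of TRUNC(column)
    AND RT.TRANSACTION_DATE BETWEEN :P_FROM_DATE AND l_to_date
    Because the column is no longer wrapped in TRUNC, an ordinary index on RCV_TRANSACTIONS(TRANSACTION_DATE) can support the range predicate, and the function-based index mentioned above becomes unnecessary.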

  • Standard tcode is taking more time

    Hi,
    When I try to delete or save shipments through the standard tcodes VT01N and VT02N, it takes a long time to process. Please tell me how to improve the performance.

    Hi Shruti,
    Whenever you face an issue with a standard SAP functionality, it's wise to consult SAP Notes at http://service.sap.com. I did a search using the string "Saving shipment VT01N VT02N performance" and got some very useful notes on the issue.
    Some useful notes are:
    859483, 391287, 391287 .... you would want to check the relevant version and apply them with BASIS help (SNOTE transaction, and only if applicable for your version/release).
    Regards,
    Aditya

  • BCC push taking more time

    Hi Experts,
    We are using ATG 9.4 and we have around 70 thousand products, which I think is not a large number, but when we push projects from the BCC it takes a long time.
    Here is my problem: we have 2 data centres, and each data centre has more than 100 instances (agents). We need to push to each data centre. The BCC takes hours to push. Can you suggest what can best be done to avoid these issues? Any solution would be much appreciated.
    Thanks
    Krish

    We usually find performance problems can be traced back to the database layer.
    Have you considered cleaning up the versioned schemas, either by using the OOTB purging functionality:
    http://docs.oracle.com/cd/E26180_01/Platform.94/ATGCAProgGuide/html/s1301purgingassetversions01.html
    or a complete rebaselining of your Publishing environment
    For the two data centres you may want to consider using a product like GoldenGate
    FAQ: Using Oracle GoldenGate with Oracle Commerce (Doc ID 1670439.1)
    https://support.oracle.com/rs?type=doc&id=1670439.1
    Whitepaper: Oracle ATG Web Commerce Maximum Availability Architecture (MAA) on Exadata and Exalogic (Doc ID 1590928.1)
    https://support.oracle.com/rs?type=doc&id=1590928.1
    ++++
    Thanks
    Gareth
    Please mark any update as "Correct Answer" or "Helpful Answer" if that update helps/answers your question, so that
    others can identify the Correct/helpful update between many updates.

  • PGI Taking more time approximate 30 to 45 minutes

    Dear Sir,
    While doing post goods issue against a delivery document, the system takes a lot of time. This issue is very urgent; can anyone resolve it or provide a suitable solution?
    We create approximately 160 sales orders / deliveries every day and post goods issue against them using transaction code VL06O; the system takes a long time for PGI.
    Kindly provide suitable solution for the same.
    Regards,
    Vijay Sanguri

    Hello Vijay,
    I've just found SAP Note 1459217, which definitely refers to your issue. Please have a look at it (the relevant SAP Note text is below).
    In case you have question let me know!
    Best Regards,
    Marcel Mizt
    Symptom
    Long runtimes occur when using transaction VL06G or VL06O in order to post goods issue (PGI) deliveries.
    Poor response times occur when using transaction VL06G or VL06O in order to PGI deliveries.
    Poor performance occurs with transaction VL06G / VL06O.
    Performance issues occur with transaction VL06G / VL06O.
    Environment
    SAP R/3 All Release Levels
    Reproducing the Issue
    Execute transaction VL06O.
    Choose "For Goods Issue". (Transaction VL06G).
    Long runtimes occur.
    Cause
    There are too many documents in the database that need to be accessed.
    The customising settings in the activity "set updating of partner index" are not activated.
    (IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index).                                                                               
    Resolution
    If there are too many documents in the database to access, archiving them improves the performance of VL06G.
    The customising settings in the activity "set updating of partner index" can be updated to improve the performance of VL06G. (IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index). In this transaction, check the entries for the transaction group 6 (= delivery). The effect of these settings is that the table VLKPA (SD index: deliveries by partner functions) is only filled with entries based on the partner functions listed (for example WE = ship-to party). In transaction VL06O the system checks this customising in order to access the table VBPA or VLKPA.
    If you change the settings of the activity "updating of partner index", run the report RVV05IVB to reorganize the index, selecting only the partner index in the delivery section of the screen (see note 128947).
    Flag the checkbox "display forwarding agent" (available in the display options section of the selection screen). When the list is generated, use the "set filter" functionality (menu path: edit -> set filter) in order to select the deliveries corresponding to one forwarding agent.

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen is taking more time
    than the same query in the Oracle database?
    I describe below in detail what my settings are and what I have done,
    step by step.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2.THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen is taking more time
    than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar
    user542575
    Message was edited by:
    user542575
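    One thing the steps above never create is an index on first_name, so the TimesTen selects by first_name most likely scan all 2.6 million cached rows. Purely as an assumption to check (not something stated in the thread), adding an index on the cache table and re-running the timed queries would show whether that is the difference:
    Command> CREATE INDEX student_fn_ix ON student (first_name);
    Command> select * from student where first_name='2155666f';
    If the index helps, the timings from step 6 should drop well below the 0.5-0.7 seconds shown above.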

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No Columns     ORACLE     TimesTen     
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • CDP Performance Issue-- Taking more time fetch data

    Hi,
    I'm working on Stellent 7.5.1.
    For one of the portlets in the portal it is taking a long time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
    public void getManager(final HashMap binderMap)
        throws VistaInvalidInputException, VistaDataNotFoundException,
               DataException, ServiceException, VistaTemplateException {
        String collectionID =
            getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
        long firstStartTime = System.currentTimeMillis();
        HashMap resultSetMap = null;
        String isNonRecursive = getStringLocal(VistaFolderConstants
            .ISNONRECURSIVE_KEY);
        if (isNonRecursive != null
                && isNonRecursive.equalsIgnoreCase(
                    VistaContentFetchHelperConstants.STRING_TRUE)) {
            VistaLibraryContentFetchManager libraryContentFetchManager =
                new VistaLibraryContentFetchManager(binderMap);
            SystemUtils.trace(
                VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                "The input Parameters for Content Fetch = " + binderMap);
            resultSetMap = libraryContentFetchManager
                .getFolderContentItems(m_workspace);
            // used to add the resultset to the binder.
            addResultSetToBinder(resultSetMap, true);
        } else {
            long startTime = System.currentTimeMillis();
            // isStandard is used to decide whether the call is for Standard
            // or Extended.
            SystemUtils.trace(
                VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                "The input Parameters for Content Fetch = " + binderMap);
            String isStandard = getTemplateInformation(binderMap);
            long endTimeTemplate = System.currentTimeMillis();
            binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
            long endTimebinderMap = System.currentTimeMillis();
            VistaContentFetchManager contentFetchManager
                = new VistaContentFetchManager(binderMap);
            long endTimeFetchManager = System.currentTimeMillis();
            resultSetMap = contentFetchManager
                .getAllFolderContentItems(m_workspace);
            long endTimeresultSetMap = System.currentTimeMillis();
            // used to add the resultset and the total no of content items
            // to the binder.
            addResultSetToBinder(resultSetMap, false);
            long endTime = System.currentTimeMillis();
            if (perfLogEnable.equalsIgnoreCase("true")) {
                Log.info("Time taken to execute " +
                         "getTemplateInformation=" +
                         (endTimeTemplate - startTime) +
                         "ms binderMap=" +
                         (endTimebinderMap - startTime) +
                         "ms contentFetchManager=" +
                         (endTimeFetchManager - startTime) +
                         "ms resultSetMap=" +
                         (endTimeresultSetMap - startTime) +
                         "ms getManager:getAllFolderContentItems = " +
                         (endTime - startTime) +
                         "ms overallTime=" +
                         (endTime - firstStartTime) +
                         "ms folderID =" +
                         collectionID);
            }
        }
    }
    Edited by: 838623 on Feb 22, 2011 1:43 AM

    Hi.
    The SELECT statement accessing the MSEG table is often slow.
    To improve the performance of the MSEG access:
    1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
    2. Index the MSEG table.
    3. Check and limit the columns in the SELECT statement.
    A possible way:
    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
    EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
    FROM MSEG
    INTO CORRESPONDING FIELDS OF TABLE ITAB
    WHERE WERKS EQ P_WERKS AND
    MBLNR IN S_MBLNR AND
    BWART EQ '105' .
    Delete itab where itab EQ '5002361303'
    Delete itab where itab EQ  '5003501080' 
    Delete itab where itab EQ  '5002996300'
    Delete itab where itab EQ '5002996407'
    Delete itab where itab EQ '5003587026'
    Delete itab where itab EQ  '5003587026'
    Delete itab where itab EQ  '5003493186'
    Delete itab where itab EQ  '5002720583'
    Delete itab where itab EQ '5002928122'
    Delete itab where itab EQ '5002628263'.
    Select
    Regards
    Bala.M
    Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM

  • Report rdf with size 8mb taking more time to open

    Hello All,
    I have an rdf (Reports 6i) report of size 8.5 MB that takes a long time to open and a long time to access each field.
    Please let me know how I can solve this issue.
    Please do the needful.
    Thanks.

    Thanks for the immediate response.
    Please let me know how I can find that out.
    Right now I have the details below from Report -> Help:
    Report Builder 6.0.8.11.3
    ORACLE Server Release 8.0.6.0.0
    Oracle Procedure Builder 6.0.8.11.0
    Oracle ORACLE PL/SQL V8.0.6.0.0 - Production
    Oracle CORE Version 4.0.6.0.0 - Production
    Oracle Tools Integration Services 6.0.8.10.2
    Oracle Tools Common Area 6.0.5.32.1
    Oracle Toolkit 2 for Windows 32-bit platforms 6.0.5.35.0
    Resource Object Store 6.0.5.0.1
    Oracle Help 6.0.5.35.0
    Oracle Sqlmgr 6.0.8.11.3
    Oracle Query Builder 6.0.7.0.0 - Production
    PL/SQL Editor (c) WinMain Software (www.winmain.com), v1.0 (Production)
    Oracle ZRC 6.0.8.11.3
    Oracle Express 6.0.8.3.5
    Oracle XML Parser     1.0.2.1.0     Production
    Oracle Virtual Graphics System 6.0.5.35.0
    Oracle Image 6.0.5.34.0
    Oracle Multimedia Widget 6.0.5.34.0
    Oracle Tools GUI Utilities 6.0.5.35.0
    Thanks
    Edited by: Abdul Khan on Jan 26, 2010 11:54 PM

  • XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD

    HI
    XML Publisher (XDODTEXE) in EBS is taking more time than the same SQL in TOAD.
    The SQL has 5 UNION clauses.
    It takes 20-30 minutes in TOAD, compared to around 4-5 hours when run through a Concurrent Program in XML Publisher in EBS.
    The Scalable Flag at report level is turned on, with the JVM options set to -Xmx1024m -Xmx1024m in the Concurrent Program definition.
    Other configurations for the Data Template, like XSLT, Scalable and Optimization, are turned on, though I didn't bounce the OPP server for these to take effect as I am not sure whether that is needed.
    Thanks in advance for your help.

    But the question is: how come it works in TOAD and takes only 15-20 minutes?
    With initialization of the session?
    What about SQL*Plus?
    Do I have to set up the temp directory for the XML Publisher report to make it faster?
    look at
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
    BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)

  • Post Goods Issue (VL06O) - taking more time approximate 30 to 45 minutes

    Dear Sir,
    While doing post goods issue against a delivery document, the system takes a lot of time. This issue is very urgent; can anyone resolve it or provide a suitable solution?
    We create approximately 160 sales orders / deliveries every day and post goods issue against them using transaction code VL06O; the system takes a long time for PGI.
    Kindly provide suitable solution for the same.
    Regards,
    Vijay Sanguri

    Hi
    See Note 113048 - Collective note on delivery monitor and search notes related with performance.
    Do a trace with tcode ST05 (look for help from a basis consultant) and search the bottleneck. Search possible sources of performance problems in userexits, enhancements and so on.
    I hope this helps you
    Regards
    Eduardo

  • Create index is taking more time

    Hi,
    One of the concurrent programs is taking a long time. We generated the trace file and found that the CREATE INDEX is taking most of the time.
    Below is an extract from the trace file; this type of index creation happens many times in the Oracle standard program.
    Can somebody let me know why there is such a big difference between CPU and elapsed time?
    We are seeing the PX Deq: Execute Reply event as well, which looks like idle time for the database.
    Please let me know which parameter of the database is affecting this.
    CREATE INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
    SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL TABLESPACE MSCX
    STORAGE( INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
    MAXTRANS 255
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 3 0 0
    Execute 1 0.35 364.82 131168 117945 60324 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.35 364.83 131168 117948 60324 0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 80 (recursive depth: 2)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    reliable message 1 0.00 0.00
    enq: KO - fast object checkpoint 1 0.01 0.01
    PX Deq: Join ACK 6 0.00 0.00
    PX Deq Credit: send blkd 112 0.00 0.01
    PX qref latch 7 0.00 0.00
    PX Deq: Parse Reply 3 0.00 0.00
    PX Deq: Execute Reply 604 1.96 364.42
    log file sync 1 0.00 0.00
    PX Deq: Signal ACK 1 0.00 0.00
    latch: session allocation 2 0.00 0.00
    Regards,

    user12121524 wrote:
    CREATE  INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
    SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL  TABLESPACE MSCX
    STORAGE(  INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
    MAXTRANS 255
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          3          0           0
    Execute      1      0.35     364.82     131168     117945      60324           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.35     364.83     131168     117948      60324           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 80     (recursive depth: 2)
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    reliable message                                1        0.00          0.00
    enq: KO - fast object checkpoint                1        0.01          0.01
    PX Deq: Join ACK                                6        0.00          0.00
    PX Deq Credit: send blkd                      112        0.00          0.01
    PX qref latch                                   7        0.00          0.00
    PX Deq: Parse Reply                             3        0.00          0.00
    PX Deq: Execute Reply                         604        1.96        364.42
    log file sync                                   1        0.00          0.00
    PX Deq: Signal ACK                              1        0.00          0.00
    latch: session allocation                       2        0.00          0.00
    What you've given us is the query co-ordinator trace, which basically tells us that the coordinator waited 364 seconds for the PX slaves to tell it that they had completed their tasks ("PX Deq: Execute Reply" time). You need to look at the slave traces to find out where they spent their time - and that's probably not going to be easy if there are lots of parallel pieces of processing going on.
    If you want to do some debugging (in general) one option is to add a query against V$pq_tqstat after each piece of parallel processing and log the results to a named file, or write them to a table with a tag, as this will tell you how many slaves were involved, how, and what the distribution of work and time was.
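    A minimal sketch of that kind of check, run from the same session immediately after the parallel statement (column list trimmed; V$PQ_TQSTAT only reflects the most recent parallel statement in the session, so capture it straight away):
    SELECT dfo_number, tq_id, server_type, process, num_rows, bytes
    FROM   v$pq_tqstat
    ORDER  BY dfo_number, tq_id, server_type, process;
    Very uneven num_rows across the slave processes of a single table queue usually points at a skewed distribution of work.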
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Snapshot Refresh taking More Time

    Dear All,
    We are facing a Snapshot refresh problem currently in Production Environment.
    Oracle Version : Oracle8i Enterprise Edition Release 8.1.6.1.0
    Currently we have created a Snapshot on a Join with 2 remote tables using Synonyms.
    ex:
    CREATE SNAPSHOT XYZ REFRESH COMPLETE WITH ROWID
    AS
    SELECT a.* FROM SYN1 a, SYN2 b
    Where b.ACCT_NO=a.ACCT_NO;
    We have created a Index on the above Snapshot XYZ.
    Create index XYZ_IDX1 on XYZ (ACCT_NO);
    a. The explain plan of the above query shows an index scan on SYN1.
    If we run the above SELECT statement, it hardly takes 2 seconds to execute.
    b. But the complete refresh of snapshot XYZ takes almost 20 minutes just to truncate and insert 500 records, and it generates huge disk reads, as the remote table behind SYN1 contains 32 million records whereas SYN2 contains only 500 records.
    If we truncate and insert into a table ourselves, as performed by the complete refresh of the snapshot, it hardly takes 4 seconds to refresh the table.
    Please let me know what might be the possible reasons for the complete refresh of the snapshot taking so much time.

    Dear All,
    While refreshing the Snapshot XYZ,I could find the following.
    a. Sort/Merge operation was performed while inserting the data into Snapshot.
    INSERT /*+ APPEND */ INTO "XYZ"
    SELECT a.* FROM SYN1 a, SYN2 b Where b.ACCT_NO=a.ACCT_NO;
    The above operation performed huge disk reads.
    b. By changing the session parameter sort_area_size, the time decreased by 50%, but the disk reads are still huge.
    I would like to know why a Sort/Merge operation is performed for the above INSERT.
    Edited by: Prashanth Deshmukh on Mar 13, 2009 10:54 AM
    Edited by: Prashanth Deshmukh on Mar 13, 2009 10:55 AM
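    Purely as a hedged suggestion (not from the original thread): since SYN2 has only 500 rows and SYN1 maps to 32 million, it may be worth testing a snapshot definition that is hinted to drive from the small table with nested loops, so that the refresh fetches only the matching SYN1 rows instead of sorting/merging the whole remote table. A sketch, assuming SYN1.ACCT_NO is indexed on the remote site:
    CREATE SNAPSHOT XYZ REFRESH COMPLETE WITH ROWID
    AS
    SELECT /*+ ORDERED USE_NL(a) */ a.*
    FROM   SYN2 b, SYN1 a
    WHERE  a.ACCT_NO = b.ACCT_NO;
    Check the plan of the refresh INSERT again afterwards; with distributed queries the hints are not always honoured across the database link, so this needs to be verified rather than assumed.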

  • Expdp taking more time to start export

    Hi Gurus
    my database oracle 10.2.0.3 in AIX
    I have started expdp to export a selection of around 130 tables, but it is taking a long time to start; almost 20 minutes have passed since it started.
    $ expdp system/*** parfile=parfile.par
    Export: Release 10.2.0.3.0 - 64bit Production on Friday, 22 June, 2012 17:21:26
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_02": system/******** parfile=parfile.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Please share any tips and let me know the reason.
    Regards
    Rabi

    user623166 wrote:
    Hi Gurus
    my database oracle 10.2.0.3 in AIX
    I have started expdp to export a selection of around 130 tables, but it is taking a long time to start; almost 20 minutes have passed since it started.
    $ expdp system/*** parfile=parfile.par
    Export: Release 10.2.0.3.0 - 64bit Production on Friday, 22 June, 2012 17:21:26
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_02": system/******** parfile=parfile.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    I am not sure that this is enough information for us to suggest something. For starters, can you try selecting fewer tables than 120 and see what's going on?
    Aman....
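    While the job sits in the estimate phase it can also help to watch what the Data Pump workers are doing from another session. A minimal sketch (run as a DBA):
    SELECT owner_name, job_name, operation, job_mode, state
    FROM   dba_datapump_jobs;
    SELECT sid, serial#, opname, sofar, totalwork, message
    FROM   v$session_longops
    WHERE  totalwork > 0 AND sofar <> totalwork;
    If the BLOCKS estimate itself is what takes so long, adding ESTIMATE=STATISTICS to the parfile is another thing worth testing, assuming the table statistics are reasonably current.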

  • Sql query is taking more time

    Hi all,
    db:oracle 9i
    I am facing the query problem below.
    The problem is that the query is now taking 45 minutes, where earlier it took 10 seconds.
    Please can anyone offer a suggestion?
    SQL> SELECT MAX (tdar1.ID) ID, tdar1.request_id, tdar1.lolm_transaction_id,
    2 tdar1.transaction_version
    3 FROM transaction_data_arc tdar1
    4 WHERE tdar1.transaction_name ='O96U '
    5 AND tdar1.transaction_type = 'REQUEST'
    6 AND tdar1.message_type_code ='PCN'
    7 AND NOT EXISTS (
    8 SELECT NULL
    9 FROM transaction_data_arc tdar2
    10 WHERE tdar2.request_id = tdar1.request_id
    11 AND tdar2.lolm_transaction_id != tdar1.lolm_transaction_id
    12 AND tdar2.ID > tdar1.ID)
    13 GROUP BY tdar1.request_id,
    14 tdar1.lolm_transaction_id,
    15 tdar1.transaction_version;
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=42)
    1 0 SORT (GROUP BY) (Cost=12 Card=1 Bytes=42)
    2 1 FILTER
    3 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
    ' (Cost=1 Card=1 Bytes=42)
    4 3 INDEX (RANGE SCAN) OF 'NK_TDAR_2' (NON-UNIQUE) (Cost
    =3 Card=1)
    5 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
    ' (Cost=5 Card=918 Bytes=20196)
    6 5 INDEX (RANGE SCAN) OF 'NK_TDAR_7' (NON-UNIQUE) (Cost
    =8 Card=4760)

    The problem is that the query is now taking 45 minutes, where earlier it took 10 seconds.
    Then something must have changed (data growth / stale statistics / ...?).
    You should post as many details as possible; how and what to post is described in the FAQ, see:
    *3. How to improve the performance of my query? / My query is running slow*.
    When your query takes too long...
    How to post a SQL statement tuning request
    SQL and PL/SQL FAQ
    Also, given your database version, using NOT IN instead of NOT EXISTS might make a difference (but they're not the same).
    See: SQL and PL/SQL FAQ
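    Since stale statistics are one of the suspects mentioned above, a minimal sketch of refreshing them for the table in the query (the schema name is a placeholder; adjust to your environment):
    BEGIN
       DBMS_STATS.GATHER_TABLE_STATS(ownname => 'YOUR_SCHEMA',          -- placeholder
                                     tabname => 'TRANSACTION_DATA_ARC',
                                     cascade => TRUE);                  -- indexes too
    END;
    /
    After gathering, re-run the query and compare the new execution plan and its cardinality estimates with the old one.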
