CDP Performance Issue -- Taking more time to fetch data

Hi,
I'm working on Stellent 7.5.1.
One of the portlets in our portal is taking a long time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
public void getManager(final HashMap binderMap)
    throws VistaInvalidInputException, VistaDataNotFoundException,
        DataException, ServiceException, VistaTemplateException {
    String collectionID =
        getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
    long firstStartTime = System.currentTimeMillis();
    HashMap resultSetMap = null;
    String isNonRecursive = getStringLocal(
        VistaFolderConstants.ISNONRECURSIVE_KEY);
    if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE)) {
        VistaLibraryContentFetchManager libraryContentFetchManager =
            new VistaLibraryContentFetchManager(binderMap);
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        resultSetMap = libraryContentFetchManager
            .getFolderContentItems(m_workspace);
        // Add the result set to the binder.
        addResultSetToBinder(resultSetMap, true);
    } else {
        long startTime = System.currentTimeMillis();
        // isStandard decides whether the call is for Standard or Extended.
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        String isStandard = getTemplateInformation(binderMap);
        long endTimeTemplate = System.currentTimeMillis();
        binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
        long endTimeBinderMap = System.currentTimeMillis();
        VistaContentFetchManager contentFetchManager =
            new VistaContentFetchManager(binderMap);
        long endTimeFetchManager = System.currentTimeMillis();
        resultSetMap = contentFetchManager
            .getAllFolderContentItems(m_workspace);
        long endTimeResultSetMap = System.currentTimeMillis();
        // Add the result set and the total number of content items
        // to the binder.
        addResultSetToBinder(resultSetMap, false);
        long endTime = System.currentTimeMillis();
        if (perfLogEnable.equalsIgnoreCase("true")) {
            Log.info("Time taken to execute " +
                "getTemplateInformation=" +
                (endTimeTemplate - startTime) +
                "ms binderMap=" +
                (endTimeBinderMap - startTime) +
                "ms contentFetchManager=" +
                (endTimeFetchManager - startTime) +
                "ms resultSetMap=" +
                (endTimeResultSetMap - startTime) +
                "ms getManager:getAllFolderContentItems=" +
                (endTime - startTime) +
                "ms overallTime=" +
                (endTime - firstStartTime) +
                "ms folderID=" + collectionID);
        }
    }
}
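A note on the instrumentation above: every logged value subtracts the same startTime, so the figures printed for binderMap, contentFetchManager and resultSetMap are cumulative elapsed times, not per-step costs, and they cannot tell you which single step is slow. A minimal per-step timing sketch, reusing the names from the code above (the surrounding class, fields and exceptions are assumed):

// Hedged sketch: subtract the previous checkpoint rather than the common
// start time, so each log entry is the cost of exactly one step. All
// Vista* names come from the posted code; the rest is illustrative.
long t0 = System.currentTimeMillis();
String isStandard = getTemplateInformation(binderMap);
long t1 = System.currentTimeMillis();
binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
VistaContentFetchManager contentFetchManager =
        new VistaContentFetchManager(binderMap);
long t2 = System.currentTimeMillis();
HashMap resultSetMap = contentFetchManager
        .getAllFolderContentItems(m_workspace);
long t3 = System.currentTimeMillis();
Log.info("getTemplateInformation=" + (t1 - t0) + "ms"
        + " fetchManagerSetup=" + (t2 - t1) + "ms"
        + " getAllFolderContentItems=" + (t3 - t2) + "ms");

In a fetch path of this shape the getAllFolderContentItems call usually dominates, so that is where tuning effort (and any server-side query tracing) is best spent once the per-step numbers confirm it.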
Edited by: 838623 on Feb 22, 2011 1:43 AM

Hi.
The SELECT statement accessing the MSEG table is often slow.
To improve performance on MSEG:
1. Check for the relevant notes in the Service Marketplace if you are working on a CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
A possible way:
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' .
DELETE itab WHERE mblnr EQ '5002361303'.
DELETE itab WHERE mblnr EQ '5003501080'.
DELETE itab WHERE mblnr EQ '5002996300'.
DELETE itab WHERE mblnr EQ '5002996407'.
DELETE itab WHERE mblnr EQ '5003587026'.
DELETE itab WHERE mblnr EQ '5003493186'.
DELETE itab WHERE mblnr EQ '5002720583'.
DELETE itab WHERE mblnr EQ '5002928122'.
DELETE itab WHERE mblnr EQ '5002628263'.
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM

Similar Messages

  • Issue with background job--taking more time

    Hi,
    We have a custom program which runs as the background job. It runs every 2 hours.
    It's taking more time than expected on ECC6 SR2 & SR3 on Oracle 10.2.0.4. We found that it takes more time while executing native SQL on DBA_EXTENTS. When we tried to fetch fewer records from DBA_EXTENTS, it worked fine.
    But we need the program to fetch all the records.
    It works fine on ECC5 on 10.2.0.2 & 10.2.0.4.
    Here is the SQL statement:
    EXEC SQL PERFORMING SAP_GET_EXT_PERF.
      SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
             SEGMENT_TYPE, TABLESPACE_NAME,
             EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
       FROM SYS.DBA_EXTENTS
       WHERE OWNER LIKE 'SAP%'
       INTO
       :EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
       :EXTENTS_TBL-PARTITION_NAME,
       :EXTENTS_TBL-SEGMENT_TYPE , :EXTENTS_TBL-TABLESPACE_NAME,
       :EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
       :EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
    ENDEXEC.
    Can somebody suggest what has to be done?
    Has something changed in SAP7 (wrt background job etc) or do we need to fine tune the SQL statement?
    Regards,
    Vivdha

    Hi,
    there was an issue with LMTs, but that was fixed in 10.2.0.4, besides missing system statistics.
    But WHY do you collect this information every 2 hours? The DBA_EXTENTS view is based on heavily used system tables.
    Normally, you would run queries of this type against DBA_EXTENTS only occasionally, e.g. to identify corrupt blocks:
    SELECT  owner , segment_name , segment_type
            FROM  dba_extents
           WHERE  file_id = &AFN
             AND  &BLOCKNO BETWEEN block_id AND block_id + blocks -1
    Not sure what you want to achieve with it.
    There are monitoring tools (OEM ?) around that may cover your needs.
    Bye
    yk

  • Post Goods Issue (VL06O) - taking more time approximate 30 to 45 minutes

    Dear Sir,
    While doing post goods issue against a delivery document the system is taking a lot of time. This issue is very urgent; can anyone resolve it or provide a suitable solution?
    We create approximately 160 sales orders / deliveries every day and post goods issue against them using transaction code VL06O; the system is taking more time for PGI.
    Kindly provide suitable solution for the same.
    Regards,
    Vijay Sanguri

    Hi
    See Note 113048 - Collective note on delivery monitor, and search notes related to performance.
    Do a trace with tcode ST05 (ask a Basis consultant for help) and find the bottleneck. Look for possible sources of performance problems in user exits, enhancements and so on.
    I hope this helps you
    Regards
    Eduardo

  • Deletion of Data Mart Request taking more time

    Hi all,
    One of the Data Mart processes (ODS to InfoCube) has failed.
    I'm not able to delete the request in the InfoCube.
    When the delete option is executed, it takes a long time; while I monitor the job in SM37 it does not get completed.
    The details seen in the Job Log is as shown below,
    Job started                                                                  
    Step 001 started (program RSDELPART1, variant &0000000006553, user ID SSRIN90)
    Delete is running: Data target BUS_CONT, from 376,447 to 376,447             
    Please let me know your suggestions.
    Thanks,
    Sowrabh

    Hi,
    How many records are there in that request? Usually when you try to delete a request it takes more time, and the deletion time will vary depending on the data volume in the request.
    Give it some more time and see if it finishes. To actually know if the job is doing anything, go to SM50/51 and look at what's happening.
    Cheers,
    Kedar

  • Taking more time for retreving data from nested table

    Hi
    we have two databases, db1 and db2; in database db2 we have a number of nested tables.
    We have a link between the two databases, and whenever you fire any query in db1 it internally accesses the nested tables in db2.
    Fetching records takes much more time even though there are few records in the table. What could be the reason?
    Please help; we are facing this problem daily.

    Please avoid duplicate threads:
    quaries taking more time
    Nicolas.
    < mod. action: thread locked >

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time
    than the same query in the Oracle database?
    I describe below, step by step, my settings and what I have done.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USED TO
    POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS):
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar user542575
    Message was edited by: user542575

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. of columns   ORACLE   TimesTen
     1               1.05     0.94
     2               1.07     1.47
     3               2.04     1.8
     4               2.06     2.08
     5               2.09     2.4
     6               3.01     2.67
     7               4.02     3.06
     8               4.03     3.37
     9               4.04     3.62
    10               4.06     4.02
    11               4.08     4.31
    12               4.09     4.61
    13               5.01     4.76
    14               5.02     5.06
    15               5.04     5.25
    16               5.05     5.48
    17               5.08     5.84
    18               6        6.21
    19               6.02     6.34
    20               6.04     6.75

  • Select query taking more time..

    Hi friends,
    The inner join statement below is taking more time; can anybody suggest how to improve the performance? I tried FOR ALL ENTRIES also, but that also took more time than the inner join statement.
    SELECT a~vbeln from vbap as a inner join vakpa as b
          on a~vbeln = b~vbeln
          into corresponding fields of table IT_VAKPA
          where a~WERKS IN S_IWERKS
          and a~pstyv NE 'ZRS'
          and b~vkorg = IVKORG
          and b~audat IN IAUDAT
          and b~vtweg IN IVTWEG.
    Regards
    Chetan

    Hi Chetan,
    VAKPA is an index table. From the select query, it can be seen that you are not fetching any data from VAKPA; you have only added some selection parameters in the WHERE clause of the select query.
    My suggestion is that instead of joining VAKPA you use VBAK along with VBAP. All the fields that you are using as selection conditions from VAKPA are there in VBAK.
    I am sure the performance of the query will improve.
    If, due to business logic, you still need to use VAKPA, try to create a secondary non-unique index on fields VKORG, AUDAT and VTWEG on table VAKPA.
    However, I recommend you go for the first option only; if that does not work, then go for the second option.
    Hopefully this will help you.
    Regards,
    Nikhil

  • Oracle 11g: Oracle insert/update operation is taking more time.

    Hello All,
    In Oracle 11g (Windows 2008 32-bit environment) we are facing the following issue.
    1) We are inserting/updating data in some tables (4-5 tables) and firing queries at a very high rate.
    2) After some time (say 15 days with the same load) we feel that the Oracle operations (insert/update) are taking more time.
    Query 1: How to find out whether Oracle is actually taking more time in insert/update operations.
    Query 2: How to rectify the problem.
    We have a multithreaded environment.
    Thanks
    With Regards
    Hemant.

    Liron Amitzi wrote:
    Hi Nicolas,
    Just a short explanation:
    If you have a table with 1 column (let's say a number). The table is empty and you have an index on the column.
    When you insert a row, the value of the column will be inserted to the index. To insert 1 value to an index with 10 values in it will be fast. It will take longer to insert 1 value to an index with 1 million values in it.
    My second example was if I take the same table and let's say I insert 10 rows and delete the previous 10 from the table. I always have 10 rows in the table so the index should be small. But this is not correct. If I insert values 1-10 and then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30 and so on, because the index is sorted, where 1-10 were stored I'll now have empty spots. Oracle will not fill them up. So the index will become larger and larger as I insert more rows (even though I delete the old ones).
    The solution here is simply to rebuild the index once in a while.
    Hope it is clear.
    Liron Amitzi
    Senior DBA consultant
    [www.dbsnaps.com]
    [www.orbiumsoftware.com]
    Hmmm, index space not reused? Index rebuild once in a while? That was what I understood from your previous post, but nothing is less sure.
    This is a misconception of how indexes are working.
    I would suggest reading the following interesting doc; there are a lot of nice examples (including index space reuse) to understand, and in conclusion:
    http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
    "Index Rebuild Summary
    • The vast majority of indexes do not require rebuilding
    • 'Oracle B-tree indexes can become “unbalanced” and need to be rebuilt' is a myth
    • 'Deleted space in an index is “deadwood” and over time requires the index to be rebuilt' is a myth
    • 'If an index reaches “x” number of levels, it becomes inefficient and requires the index to be rebuilt' is a myth
    • 'If an index has a poor clustering factor, the index needs to be rebuilt' is a myth
    • 'To improve performance, indexes need to be regularly rebuilt' is a myth"
    Good reading,
    Nicolas.
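    The first point in the quoted explanation (inserting into a large index costs more than inserting into a small one) is easy to observe directly. A hedged JDBC sketch in Java, assuming a reachable Oracle instance; the URL, credentials and the throwaway table IDX_DEMO are placeholders, not from this thread:

    import java.sql.*;

    // Hedged sketch: time batches of inserts into a table with a primary
    // key index and watch the per-batch cost as the index grows.
    public class IndexInsertTiming {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger")) {
                con.setAutoCommit(false);
                try (Statement st = con.createStatement()) {
                    st.execute("CREATE TABLE idx_demo (n NUMBER PRIMARY KEY)");
                }
                long next = 0;
                for (int batch = 0; batch < 10; batch++) {
                    long start = System.currentTimeMillis();
                    try (PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO idx_demo (n) VALUES (?)")) {
                        for (int i = 0; i < 100_000; i++) {
                            ps.setLong(1, next++);
                            ps.addBatch();
                        }
                        ps.executeBatch();
                    }
                    con.commit();
                    System.out.println("batch " + batch + ": "
                            + (System.currentTimeMillis() - start) + " ms");
                }
            }
        }
    }

    Whether the growth such a test shows ever justifies periodic rebuilds is exactly what the Richard Foote paper above argues against; measure before rebuilding.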

  • Taking More Time while inserting into the table (With foriegn key)

    Hi All,
    I am facing a problem while inserting values into the master table.
    The problem:
    Table A -- User Master Table (Reg No, Name, etc.)
    Table B -- Transaction Table (foreign key reference to Table A).
    While inserting data in Table B, I need to insert the reg no also, which is mandatory. I followed the logic mentioned in the SRDemo.
    While inserting, we need to query Table A first to have the values in TableABean.java:
    final TableA tableA = (TableA) uow.executeQuery("findUser", TableA.class, regNo);
    Then we need to create the instance for TableB:
    TableB tableB = (TableB) uow.newInstance(TableB.class);
    tableB.setID(bean.getID());
    tableA.addTableB(tableB); // this inserts the regNo of TableA in TableB. This line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
    This query takes too much time if there are many rows in TableB for that particular registration no. Because of this it takes more time to insert into TableB.
    For example: regNo 101, having few entries in TableB, means inserting a record takes less than 1 sec;
    regNo 102, having more entries in TableB, means inserting a record takes more than 2 sec.
    Different users see different time delays when they enter transactions in TableB.
    I need to avoid this since in future it will take even more time, from 2 sec to 10 sec, as the volume of data increases.
    Please help me to resolve this issue...I am facing it now in production.
    Thanks & Regards
    VB

    Hello,
    Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing you delays that you want to avoid there might be two quick ways I can see:
    1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for relationship when you do need it. This means you do not need to call tableA.addTableB(tableB), and instead only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. Might not be the best option, but it depends on your application's usage. It does allow you to potentially page the TableB results or add other query query performance options when you do need the data though.
    2) You are currently using lazy loading for the TableA->TableB relationship - if it is untriggered, don't bother calling tableA.addTableB(tableB); instead only call tableB.setTableA(tableA). This of course requires using the TopLink API to a) verify the collection is an IndirectCollection type, and b) verify that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems though in highly concurrent environments, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date - this might require refreshing the TableA if so.
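    A minimal sketch of option 1, using the class names and UnitOfWork calls already shown in this thread (the session variable and the mapping changes themselves are assumptions):

    // Hedged sketch of option 1: leave the TableA -> TableB one-to-many
    // unmapped and set only the TableB -> TableA back pointer, so the
    // "select * from TableB where RegNo = ..." is never issued on insert.
    UnitOfWork uow = session.acquireUnitOfWork(); // session: assumed TopLink Session
    TableA tableA = (TableA) uow.executeQuery("findUser", TableA.class, regNo);
    TableB tableB = (TableB) uow.newInstance(TableB.class);
    tableB.setID(bean.getID());
    tableB.setTableA(tableA); // back pointer only; the 1:M is never traversed
    uow.commit();

    Because the one-to-many is never traversed, inserting into TableB no longer depends on how many rows already exist for that registration number.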
    Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
    http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
    Best Regards,
    Chris

  • Taking more time!

    hi all,
    I am using Forms [32 Bit] Version 6.0.8.24.1 (Production).
    My ultimate task is to read data from Excel and insert it into the table; if there is any duplication of records, stop the process and tell the user which records are repeating.
    I have used the OLE2 package to read the data from Excel.
    I have three PL/SQL tables for further processing:
    one table to insert whatever data comes from Excel, a 2nd table to check the current record against the previously inserted records, and a 3rd table to store the duplicated records.
    I insert each record as it is into table 1, and in table 2 I check for the existence of the record; if the current record matches a previous record, I store that record in the 3rd table.
    At the end, if the error table (3rd table) row count is 0 there is no duplication, so I write the 1st table's data into the form and save; else I display the table 3 (error table) data in the form and tell the user that these records are duplicated.
    But it is taking more time.
    Can anybody tell me logic which reduces the time consumption (better performance)?
    Thanks..

    hi,
    I don't think it's a SQL issue because if I enter the data manually (not through the Excel upload) it works fine. The form is a database block, so I have not written any SQL.
    What I am doing is uploading data from Excel to a PL/SQL table and checking for duplication; if any duplication is found I display the error PL/SQL table, else I display the first table as I mentioned earlier. The 2nd table is to check for the existence of the current record (since I am reading from Excel record by record).
    Thanks..
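    Checking each new record against everything read so far is an O(n²) scan, which is why the upload slows down as the sheet grows; a hash-based lookup makes the whole check O(n). The form logic here is PL/SQL, but the idea is language-independent; a hedged Java sketch (the String row key is hypothetical and stands in for whatever uniquely identifies a spreadsheet row):

    import java.util.*;

    // Hedged sketch: one-pass duplicate detection with a HashSet instead
    // of re-scanning all previously read records for every new one.
    public class DupCheck {
        // Returns the keys that occur more than once, in first-seen order.
        static List<String> findDuplicates(List<String> rowKeys) {
            Set<String> seen = new HashSet<>();
            List<String> duplicates = new ArrayList<>();
            for (String key : rowKeys) {
                if (!seen.add(key)) {   // add() returns false if key was seen
                    duplicates.add(key);
                }
            }
            return duplicates;
        }

        public static void main(String[] args) {
            // Prints [A1, B2]: the two repeated row keys.
            System.out.println(findDuplicates(
                    Arrays.asList("A1", "B2", "A1", "C3", "B2")));
        }
    }

    In the Forms/PL/SQL version the same effect comes from probing an associative array indexed by the record key instead of looping over a second PL/SQL table.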

  • Source System Extraction taking more time

    Hi All,
    I Have an Issue, please help me out in this.
    We have two source systems in our process. One is working fine; we don't have any issue with that. But the other source system is taking more time than usual to extract the data. We have been facing this problem only for a few days. The record count is normal. I am unable to find the root cause for this; please help me out regarding this issue.
    It is Full update. We are using Custom program to extract the data. It is running daily.
    Thanks in Advance.
    Regards,
    Suresh
    Edited by: sureshd on Jul 14, 2011 6:04 PM
    Edited by: sureshd on Jul 14, 2011 6:05 PM

    When a long-running job is running on the source system side you can trace it through ST05. Simultaneously you can monitor SM50/SM66 to see which tables are being accessed during the job.
    After the load has completed, you can deactivate the trace and then go to the Display Trace option, where you will see the various SQL statements along with their tables. Check which particular statement is expensive by looking at the time stamps.
    The next job is to work out why that table is being accessed for so much time and how to improve the run time. Check the fields inside the table and their selectivity and see whether any index already exists on them; you can do this in SE11. If no index is available on those specific fields, ask your Basis guy to create a secondary one on top of it, which may help performance. Hope it helps.

  • Dataloading taking more time from PSA to infocube through DTP

    Hi All,
    when I am loading data to an InfoCube through PSA with a DTP, it is taking more time. It has already taken 4 days to load. Irrespective of the data, the previous loads completed in 2-3 hrs. This is the first time I am facing this issue.
    Note: there is a start routine written in the transformation.
    Please let me know how to identify whether the code is written in the global declaration or locally, and if so, what the procedure is to correct it.
    Thanks,
    Jack

    Hi Jack,
    To improve the performance of the data load, you can do the below:
    1. Compress old data.
    2. Delete and rebuild indexes.
    3. Read with binary search.
    4. Do not use LOOP within LOOP (see the sketch below).
    5. Check sy-subrc after READ etc.
    Hope this helps you.
    -Vikram
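    Tips 3-5 above amount to replacing a scan inside a loop with a keyed lookup and checking the result of each read. The start routine itself is ABAP, but the pattern is the same in any language; a hedged Java sketch with made-up record types:

    import java.util.*;

    // Hedged sketch: instead of scanning the whole customer list for every
    // order (a LOOP within a LOOP), index customers by key once, then each
    // order resolves its customer in O(1). Types are illustrative only.
    public class LookupDemo {
        record Customer(String id, String name) {}
        record Order(String customerId, double amount) {}

        public static void main(String[] args) {
            List<Customer> customers = List.of(
                    new Customer("C1", "Acme"), new Customer("C2", "Globex"));
            List<Order> orders = List.of(
                    new Order("C2", 10.0), new Order("C1", 25.0));

            // One pass to index customers by id...
            Map<String, Customer> byId = new HashMap<>();
            for (Customer c : customers) byId.put(c.id(), c);

            // ...then each order resolves its customer directly.
            for (Order o : orders) {
                Customer c = byId.get(o.customerId());
                if (c == null) continue;   // the sy-subrc check, in spirit
                System.out.println(c.name() + ": " + o.amount());
            }
        }
    }

    The ABAP equivalents are a sorted or hashed internal table read with READ TABLE ... WITH TABLE KEY (or BINARY SEARCH on a sorted standard table), followed by an immediate sy-subrc check.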

  • BCC push taking more time

    Hi Experts,
    We are using ATG 9.4 and we have around 70 thousand products; I think that's not a large number, but when we push projects from the BCC it takes more time.
    Here is my problem: we have 2 data centers, each having more than 100 instances (agents), and we need to push to each data center. The BCC takes hours to push. Can you suggest what best can be done to avoid these issues? Any solution would be much appreciated.
    Thanks
    Krish

    We usually find performance problems can be traced back to the database layer.
    Have you considered cleaning up the versioned schemas, either by using the OOTB purging functionality:
    http://docs.oracle.com/cd/E26180_01/Platform.94/ATGCAProgGuide/html/s1301purgingassetversions01.html
    or a complete rebaselining of your Publishing environment.
    For the two data centres you may want to consider using a product like GoldenGate
    FAQ: Using Oracle GoldenGate with Oracle Commerce (Doc ID 1670439.1)
    https://support.oracle.com/rs?type=doc&id=1670439.1
    Whitepaper: Oracle ATG Web Commerce Maximum Availability Architecture (MAA) on Exadata and Exalogic (Doc ID 1590928.1)
    https://support.oracle.com/rs?type=doc&id=1590928.1
    ++++
    Thanks
    Gareth
    Please mark any update as "Correct Answer" or "Helpful Answer" if that update helps/answers your question, so that
    others can identify the Correct/helpful update between many updates.

  • Custom Report taking more time to complete Normat

    Hi All,
    A custom report (Aging Report) in Oracle is taking more time to complete with status Normal.
    In one instance the same report takes 5 min, while in another instance it takes 40-50 min to complete.
    We have enabled trace and checked the trace file, but all the queries look fine.
    Could you please advise regarding this issue?
    Thanks in advance!!

    TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Sort options: prsela exeela fchela
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
    ORA-00922: missing or invalid option
    parse error offset: 1049
    EXPLAIN PLAN option disabled.
    SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
    FROM
    HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.05 11 22 0 3
    total 3 0.00 0.05 11 22 0 3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
    3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
    3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
    3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
    3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
    18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
    3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
    3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
    3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
    3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 11 0.01 0.05
    asynch descriptor resize 2 0.00 0.00
    INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
    MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
    LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
    AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
    VALUES
    ( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
    :B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
    NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
    60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
    SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 20 0.00 0.03 4 1 304 20
    Fetch 0 0.00 0.00 0 0 0 0
    total 21 0.00 0.03 4 1 304 20
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
    1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.01 0.03
    SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
    SEVERITY
    FROM
    FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
    M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
    A.APPLICATION_ID
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.03 4 12 0 2
    total 5 0.00 0.03 4 12 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
    1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.00 0.03
    DELETE FROM MO_GLOB_ORG_ACCESS_TMP
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.02 3 4 4 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.02 3 4 4 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
    1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 3 0.01 0.02
    SET TRANSACTION READ ONLY
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.01 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.01 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    begin Fnd_Concurrent.Init_Request; end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 148 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 148 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    log file sync 1 0.01 0.01
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
    :DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
    end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 9 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 9 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 8 0.00 0.00
    SQL*Net message from client 8 0.00 0.00
    begin fnd_global.bless_next_init('FND_PERMIT_0000');
    fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
    :security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
    :conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
    :form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
    :server_id); fnd_profile.put('ORG_ID', :org_id);
    fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
    fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
    fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 8 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 8 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
    FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
    FPI.SIZING_FACTOR
    FROM
    FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
    WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
    FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.00 0 7 0 1
    total 4 0.00 0.00 0 7 0 1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
    1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
    1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
    1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
    1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
    1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
    SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
    USERENV('LANGUAGE'), '.') + 1)
    FROM
    V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
    USERENV('SESSIONID')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 0 0 1
    total 3 0.00 0.00 0 0 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
    1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
    182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
    1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
    181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
    1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 2 0.00 0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 6 0.00 0.00 0 0 0 0
    Execute 6 0.01 0.02 0 165 0 4
    Fetch 1 0.00 0.00 0 0 0 1
    total 13 0.01 0.02 0 165 0 5
    Misses in library cache during parse: 0
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 37 0.00 0.00
    SQL*Net message from client 37 1.21 2.19
    log file sync 1 0.01 0.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 49 0.00 0.00 0 0 0 0
    Execute 89 0.01 0.07 7 38 336 24
    Fetch 29 0.00 0.09 15 168 0 27
    total 167 0.02 0.16 22 206 336 51
    Misses in library cache during parse: 3
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 6 0.00 0.00
    db file sequential read 22 0.01 0.14
    48 user SQL statements in session.
    1 internal SQL statements in session.
    49 SQL statements in session.
    0 statements EXPLAINed in this session.
    Trace file compatibility: 10.01.00
    Sort options: prsela exeela fchela
    1 session in tracefile.
    48 user SQL statements in trace file.
    1 internal SQL statements in trace file.
    49 SQL statements in trace file.
    48 unique SQL statements in trace file.
    928 lines in trace file.
    1338833753 elapsed seconds in trace file.

  • XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD

    HI
    XML Publisher (XDODTEXE) in EBS is taking more time with the same SQL than in TOAD.
    The SQL has 5 union clauses.
    It takes 20-30 minutes in TOAD, compared to around 4-5 hours when run through a concurrent program in XML Publisher in EBS.
    The Scalable flag at report level is turned on, with the JVM options set to -Xms1024m -Xmx1024m in the concurrent program definition.
    Other configurations for the data template, like XSLT, Scalable and Optimization, are turned on, though I didn't bounce the OPP server for these to take effect as I am not sure whether that is needed.
    Thanks in advance for your help.

    But the question is: how come it works in TOAD and takes only 15-20 minutes?
    With initialization of the session?
    What about SQL*Plus?
    Do I have to set up the temp directory for the XML Publisher report to make it faster?
    Look at:
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
    BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)
