PGI taking approximately 30 to 45 minutes

Dear Sir,
While doing post goods issue against a delivery document, the system takes a very long time. This issue is extremely urgent; can anyone provide a suitable solution?
We create approximately 160 sales orders/deliveries every day and post goods issue against them using transaction code VL06O, and the system takes a long time for PGI.
Kindly provide a suitable solution.
Regards,
Vijay Sanguri

Hello Vijay,
I've just found SAP Note 1459217, which refers directly to your issue. Please have a look at it (the relevant SAP note text is below).
In case you have questions, let me know!
Best Regards,
Marcel Mizt
Symptom
Long runtimes and poor response times occur when using transaction VL06G or VL06O to post goods issue (PGI) for deliveries.
Environment
SAP R/3 All Release Levels
Reproducing the Issue
Execute transaction VL06O.
Choose "For Goods Issue". (Transaction VL06G).
Long runtimes occur.
Cause
There are too many documents in the database that need to be accessed.
The customising settings in the activity "set updating of partner index" are not activated.
(IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index).                                                                               
Resolution
If there are too many documents in the database to access, archiving them improves the performance of VL06G.
The customising settings in the activity "Set Updating of Partner Index" can be updated to improve the performance of VL06G (IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index). In this activity, check the entries for transaction group 6 (= delivery). The effect of these settings is that the table VLKPA (SD index: deliveries by partner functions) is only filled with entries for the partner functions listed (for example, WE = ship-to party). In transaction VL06O the system checks this customising in order to decide whether to access the table VBPA or VLKPA.
If you change the settings of the activity "updating of partner index", run the report RVV05IVB to reorganize the index, selecting only the partner index in the delivery section of the screen (see Note 128947).
Flag the checkbox "display forwarding agent" (available in the display options section of the selection screen). When the list is generated, use the "set filter" functionality (menu path: Edit -> Set Filter) to select the deliveries corresponding to one forwarding agent.
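As a concrete illustration of the two access paths the note describes, here is a minimal SQL sketch (not part of the SAP note; the column names KUNDE, KUNNR, and PARVW are assumptions based on the standard SD partner tables):

-- Lookup via the slim partner index VLKPA, which is only filled for the
-- partner functions maintained in "Set Updating Of Partner Index":
SELECT vbeln
  FROM vlkpa
 WHERE parvw = 'WE'             -- ship-to party (assumed column name)
   AND kunde = '0000100001';    -- hypothetical customer number

-- Without those index entries, the delivery monitor must read the much
-- larger document partner table VBPA instead:
SELECT vbeln
  FROM vbpa
 WHERE parvw = 'WE'
   AND kunnr = '0000100001';

The fewer partner functions you maintain in the activity, the smaller VLKPA stays and the cheaper the indexed lookup becomes.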

Similar Messages

  • Post Goods Issue (VL06O) - taking approximately 30 to 45 minutes

    Dear Sir,
    While doing post goods issue against a delivery document, the system takes a very long time. This issue is extremely urgent; can anyone provide a suitable solution?
    We create approximately 160 sales orders/deliveries every day and post goods issue against them using transaction code VL06O, and the system takes a long time for PGI.
    Kindly provide a suitable solution.
    Regards,
    Vijay Sanguri

    Hi
    See Note 113048 - Collective note on delivery monitor - and search for notes related to performance.
    Do a trace with tcode ST05 (ask a Basis consultant for help) and find the bottleneck. Look for possible sources of performance problems in user exits, enhancements, and so on.
    I hope this helps you
    Regards
    Eduardo

  • XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD

    Hi,
    XML Publisher (XDODTEXE) in EBS is taking more time with the same SQL than in TOAD.
    The SQL has 5 UNION clauses.
    It takes 20-30 minutes in TOAD, while running through the concurrent program in XML Publisher in EBS takes around 4-5 hours.
    The Scalable flag at report level is turned on, with the JVM options set to -Xmx1024m -Xmx1024m in the concurrent program definition.
    Other configurations for the data template, like XSLT, Scalable, and Optimization, are turned on, though I didn't bounce the OPP server for these to take effect, as I am not sure whether that is needed.
    Thanks in advance for your help.

    But the question is: how come it works in TOAD and takes only 15-20 minutes?
    Does that include initialization of the session?
    What about SQL*Plus?
    Do I have to set up the temp directory for the XML Publisher report to make it faster?
    Look at:
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
    BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)

  • Currency translation taking more time at first time in the report

    Hi All,
    Here is an issue regarding currency translation in a report. When we click on "currency translation" in the context menu on the report, it takes more time the first time to get the selection options, but it is fast from the second time onwards. It is not user friendly if it takes so long the first time. Is there any way to speed up the processing the first time I use the "currency translation" option at report level?
    Please let me know why it takes more time the first time we use this option at report level.
    Thanks,
    Jack

    Once again from the top.
    This is undoubtedly not a database issue.
    I doubt you can fix it in the database.
    You can, however, spend a lot of hours asking questions and poking around if you are a consultant paid by the hour.
    But as long as you are looking for something to do, here are some things that won't fix the problem but may be worth a few minutes of your time.
    WHERE 1 = 1 -- are you kidding? Drop this nonsense. The Oracle optimizer is not brain dead. If it were my database, I would also drop the parallel hint like a bad habit.
    /*+ parallel(rp1, 8) */ -- again, the Oracle optimizer is not brain dead. Ask for parallel threads that don't exist and the database will just queue your SQL and let it sit there until sufficient resources become available.
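    To make the two points above concrete, here is a hedged before/after sketch (the table name report_positions and the date predicate are hypothetical; rp1 is the alias from the hint):

    -- Before: a no-op predicate and a parallel hint that can leave the
    -- statement queued while it waits for 8 parallel servers:
    SELECT /*+ parallel(rp1, 8) */ *
      FROM report_positions rp1
     WHERE 1 = 1
       AND rp1.created_date >= :start_date;

    -- After: drop both and let the optimizer choose the plan:
    SELECT *
      FROM report_positions rp1
     WHERE rp1.created_date >= :start_date;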

  • Essbase Calculation Script taking more time in new environment

    Hi Everyone:
    We have four environments in our implementation.
    1. DEV Environment - 64 bit Essbase Version 11.1.1.3
    2. PreProd Environment - 32 bit Essbase Version 9.3.0
    3. PreProd Environment - 64 bit Essbase Version 11.1.1.3
    Of these, PreProd Environment - 64 bit Essbase Version 11.1.1.3 is a newly installed environment.
    We have migrated our application from PreProd Environment - 32 bit Essbase Version 9.3.0 to PreProd Environment - 64 bit Essbase Version 11.1.1.3. A calculation script that takes only 20 minutes in the 32-bit PreProd is taking more than 5 and a half hours in the newly installed 64-bit PreProd.
    We have also migrated our application from DEV Environment - 64 bit Essbase Version 11.1.1.3 to PreProd Environment - 64 bit Essbase Version 11.1.1.3. The calculation script that takes only 20 minutes in the 64-bit DEV is taking more than 5 and a half hours in the newly installed 64-bit PreProd.
    All the server settings and cache settings look similar in all three environments.
    Please advise us on all the possibilities that could create this issue.
    Thanks and Regards,
    Prabhakar.

    Hi Cameron,
    Thanks for your reply.
    I have cross-checked the virtual memory on both servers; on the new server it is set higher.
    Please find below the essbase.cfg settings we are using in our application:
    AGENTPORT 1423
    SERVERPORTBEGIN 32768
    SERVERPORTEND 33768
    AGENTDESC hypservice_1
    ;CSSREFRESHLEVEL auto
    ;SHAREDSERVICESREFRESHINTERVAL 30
    CALCCACHEHIGH 199999999
    CALCCACHEDEFAULT 150000000
    CALCCACHELOW     10000000
    CALCLOCKBLOCKDEFAULT 3000
    DATAERRORLIMIT 10000
    UPDATECALC FALSE
    EXCEPTIONLOGOVERWRITE FALSE
    CALCREUSEDYNCALCBLOCKS FALSE
    PORTUSAGELOGINTERVAL 15
    QRYGOVEXECTIME 600
    LOGMESSAGELEVEL INFO
    CALCPARALLEL 6
    MAXLOGINS 100000
    AGENTDELAY 100
    AGENTTHREADS 30
    AGTSVRCONNECTIONS 10
    SERVERTHREADS 25
    EXPORTTHREADS 1
    SSLOGUNKNOWN FALSE
    CALCNOTICEDEFAULT 10
    NETRETRYCOUNT 3000
    NETDELAY 2000
    __SM__BUFFERED_IO TRUE
    __SM__WAITED_IO TRUE
    Also, these are the caches we defined:
    Index cache:250000
    Data Cache:250000
    Data file cache:32768
    All the above settings are identical on both servers.
    On the new server, only one script is taking more time; the remaining scripts complete fine in less time.
    We also ran a test where we split the script into multiple scripts and executed them; in this case, the script in which we directly assign the value of one member (say A1) to another member (say A2) is the one taking more time. The same script executes fine on the old server.
    We are still not able to find the exact root cause of this issue.
    Could anyone please help me resolve this issue?
    Regards,
    Prabhakar.

  • Level1 backup is taking more time than Level0

    The Level 1 backup is taking more time than Level 0, and I am really frustrated; how could that happen? I have a 6.5 GB database. Level 0 took 8 hours, but Level 1 is taking more than 8 hours. Please help me in this regard.

    Ogan Ozdogan wrote:
    Charles,
    Enabling block change tracking will indeed be faster than what he has now. I think this does not address the question of the OP, unless you are saying that an incremental backup without block change tracking is slower than a level 0 (full) backup?
    Thank you in anticipation.
    Ogan
    Ogan,
    I can't explain why a 6.5GB level 0 RMAN backup would require 8 hours to complete (maybe a very slow destination device connected by 10Mb/s Ethernet) - I would expect it to complete in a couple of minutes.
    An incremental level 1 backup without a block change tracking file can take longer than a level 0 backup. I once encountered a well-written description of why that can happen, but I can't seem to locate the source at the moment. The longer run time might be related to the additional code paths required to constantly compare the SCN of each block, and to the variable write rate, which may affect some devices, such as tape drives.
    A paraphrase from the book "Oracle Database 10g RMAN Backup & Recovery"
    "Incremental backups must check the header of each block to discover if it has changed since the last incremental backup - that means an incremental backup may not complete much faster than a full backup."
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • LCM utility taking more time to start

    Hi All,
    In Hyperion System 9.3.1, when running the LCMUtility.bat file in the Hyperion Home\BIPLUS\bin folder, it takes nearly 8-10 minutes to start the process. Shared Services, IR, and Core Services are on a single box. Even though it takes time to initialize, it ends with success.
    Could anyone please let me know the possible reason why it takes 8-10 minutes to start the process?
    Thanks,
    Sree

    user623166 wrote:
    Hi Gurus
    my database oracle 10.2.0.3 in AIX
    I have started expdp to selectively export around 130 tables, but it is taking a long time to start; almost 20 minutes have passed since it started.
    $ expdp system/*** parfile=parfile.par
    Export: Release 10.2.0.3.0 - 64bit Production on Friday, 22 June, 2012 17:21:26
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_02": system/******** parfile=parfile.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    I am not sure this is enough information for us to suggest something. For starters, can you try selecting fewer than 120 tables and see what's going on?
    Aman....

  • Transport released it taking more time

    Hi Every One,
    I want to transport a program with its package, smartforms, etc.
    I created one transport request as a transport of copies, included the whole package in it, and released it.
    It is taking more time.
    The status in SE01 shows the request is released.
    The log shows:
    Checks at Operating System Level         23.04.2010 12:45:48    (0) Successfully Completed
    Pre-Export Methods                       23.04.2010 12:45:54    (0) Successfully Completed
    Checks at Operating System Level         23.04.2010 12:47:23    (0) Successfully Completed
    Pre-Export Methods                       23.04.2010 12:47:27    (4) Ended with Warning
    Export                                   23.04.2010 15:28:29 Not yet executed
    The RDDIMPDP program is running in parallel in the background.
    What do I have to check? Any suggestions, please.
    Regards
    Shashi

    Hi Niraj,
    I scheduled the RDDIMPDP job to run every 2 minutes and it is running fine.
    What I did is create another request and transport it; it went fine and the issue is solved.
    But the original request is still showing the same status as before, and now I want to clear it. How can I do this? Shall I delete the request directly? Would there be any problem with that?
    I am pasting the last few lines of the log report:
    WARNING:
    sapprd\sapmnt\trans\tmp\TRPKK900160.TRP is already in use (10), I'm waiting 4 sec (20100423124802). My name: pid 3788 on sapprd (APServiceTRP)
    WARNING:
    sapprd\sapmnt\trans\tmp\TRPKK900160.TRP is already in use (20), I'm waiting 1 sec (20100423124826). My name: pid 3788 on sapprd (APServiceTRP)
    WARNING:
    sapprd\sapmnt\trans\tmp\TRPKK900160.TRP is already in use (30), I'm waiting 4 sec (20100423124850). My name: pid 3788 on sapprd (APServiceTRP)
    START INFORM SAP-SYSTEM OF TRP Q      20100423155230              BIPLAB       sapprd 20100423155222     
    START tp_getprots          TRP Q      20100423155230              BIPLAB       sapprd 20100423155222     
    STOP  tp_getprots          TRP Q      20100423155240              BIPLAB       sapprd 20100423155222     
    STOP  INFORM SAP-SYSTEM OF TRP Q      20100423155240              BIPLAB       sapprd 20100423155222  
    Regards
    Shashi

  • Self Service Password Registration Page taking more time for loading in FIM 2010 R2

    Hi,
    I have successfully installed FIM 2010 R2 SSPR and it is working fine,
    but my problem is that the Self Service Password Registration page takes more time to load when I provide Windows credentials; it takes approximately 50 to 60 seconds to load the page in FIM 2010 R2.
    This is a very urgent requirement.
    Regards
    Anil Kumar

    Double-check that the objectSid, accountname, and domain are populated for the users in the FIM portal, and that each user is connected to their AD counterpart.
    Check here for more info:
    http://social.technet.microsoft.com/wiki/contents/articles/20213.troubleshooting-fim-sspr-error-3003-the-current-user-account-is-not-recognized-by-forefront-identity-manager-please-contact-your-help-desk-or-system-administrator.aspx

  • Huge volume of records are routing to the remote user other than his position and organization records. Synchronization and DB initialization taking more time around 36 hours.

    Huge volume of records are routing to the remote user other than his position and organization records. Synchronization and DB initialization taking more time around 36 hours.
    Only around 2000 accounts and 3000 contacts should be routed, but we have observed lakhs of records routing into the local DB.
    We have verified all the Assignment Rules, Views.
    We ran the docking object visibility rules and observed that some other accounts are routing because the Organization rule passes (these records are not supposed to route).
    Version Siebel 7.7.2.12,
    OS Solaris.

    let me know what would be the reason that the 1st million takes only 15 minutes and the time goes on increasing gradually with the increase of data
    Yes, that's a little strange. I can only guess:
    1. You are in archivelog mode and the archiver is not able to archive the redo logs fast enough.
    2. You don't use direct load and DBWR is not able to write the dirty blocks to disk fast enough. You could create more DBWR processes in that case.
    3. Make a snapshot of v$system_event:
    create table begin as select * from v$system_event;
    After the import, run:
    create table end as select * from v$system_event;
    Now compare the values:
    select * from begin order by TIME_WAITED_MICRO desc;
    with the values given to you by:
    select * from end order by TIME_WAITED_MICRO desc;
    That way you can see where your DB spent so much time waiting for something.
    Alternatively, you could start a 10046 trace on the loading session and use tkprof.
    Dim
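    A hedged sketch of the 10046 suggestion above (level 12 captures bind values and wait events; the trace file name is hypothetical):

    -- In the loading session:
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
    -- ... run the import/load ...
    ALTER SESSION SET EVENTS '10046 trace name context off';

    -- Then format the raw trace file with tkprof at the OS prompt, e.g.:
    -- tkprof orcl_ora_12345.trc load_report.prf sort=exeela,fchela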

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain to me why a query in TimesTen takes more time than the same query in the Oracle database?
    I describe below in detail what my settings are and what I have done, step by step.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USED TO
    POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS):
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY.
    I SET THE TIMING:
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain to me why the query in TimesTen takes more time
    than the query in the Oracle database?
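    A hedged observation (not from the thread): in both databases the predicate on first_name forces a full scan, since only id is indexed. In TimesTen, an index on the filtered column would turn the scan into a direct lookup. A minimal sketch (the index name is hypothetical):

    -- Index the column used in the WHERE clause:
    CREATE INDEX student_fn_ix ON student (first_name);

    -- The earlier query can then use the index instead of scanning all
    -- 2599999 rows:
    SELECT * FROM student WHERE first_name = '2155666f';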

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. of columns     ORACLE     TimesTen     
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • Custom Report taking more time to complete Normal

    Hi All,
    A custom report (Aging Report) in Oracle is taking more time to complete with status Normal.
    In one instance, the same report takes 5 minutes; in the other instance it takes 40-50 minutes to complete.
    We have enabled trace and checked the trace file, but all the queries are working fine.
    Could you please advise me regarding this issue?
    Thanks in advance!!

    TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Sort options: prsela exeela fchela
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
    ORA-00922: missing or invalid option
    parse error offset: 1049
    EXPLAIN PLAN option disabled.
    SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
    FROM
    HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.05 11 22 0 3
    total 3 0.00 0.05 11 22 0 3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
    3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
    3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
    3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
    3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
    18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
    3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
    3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
    3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
    3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 11 0.01 0.05
    asynch descriptor resize 2 0.00 0.00
    INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
    MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
    LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
    AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
    VALUES
    ( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
    :B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
    NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
    60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
    SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 20 0.00 0.03 4 1 304 20
    Fetch 0 0.00 0.00 0 0 0 0
    total 21 0.00 0.03 4 1 304 20
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
    1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.01 0.03
    SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
    SEVERITY
    FROM
    FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
    M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
    A.APPLICATION_ID
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.03 4 12 0 2
    total 5 0.00 0.03 4 12 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
    1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.00 0.03
    DELETE FROM MO_GLOB_ORG_ACCESS_TMP
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.02 3 4 4 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.02 3 4 4 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
    1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 3 0.01 0.02
    SET TRANSACTION READ ONLY
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.01 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.01 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    begin Fnd_Concurrent.Init_Request; end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 148 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 148 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    log file sync 1 0.01 0.01
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
    :DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
    end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 9 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 9 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 8 0.00 0.00
    SQL*Net message from client 8 0.00 0.00
    begin fnd_global.bless_next_init('FND_PERMIT_0000');
    fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
    :security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
    :conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
    :form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
    :server_id); fnd_profile.put('ORG_ID', :org_id);
    fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
    fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
    fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 8 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 8 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
    FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
    FPI.SIZING_FACTOR
    FROM
    FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
    WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
    FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.00 0 7 0 1
    total 4 0.00 0.00 0 7 0 1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
    1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
    1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
    1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
    1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
    1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
    SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
    USERENV('LANGUAGE'), '.') + 1)
    FROM
    V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
    USERENV('SESSIONID')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 0 0 1
    total 3 0.00 0.00 0 0 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
    1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
    182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
    1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
    181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
    1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 2 0.00 0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 6 0.00 0.00 0 0 0 0
    Execute 6 0.01 0.02 0 165 0 4
    Fetch 1 0.00 0.00 0 0 0 1
    total 13 0.01 0.02 0 165 0 5
    Misses in library cache during parse: 0
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 37 0.00 0.00
    SQL*Net message from client 37 1.21 2.19
    log file sync 1 0.01 0.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 49 0.00 0.00 0 0 0 0
    Execute 89 0.01 0.07 7 38 336 24
    Fetch 29 0.00 0.09 15 168 0 27
    total 167 0.02 0.16 22 206 336 51
    Misses in library cache during parse: 3
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 6 0.00 0.00
    db file sequential read 22 0.01 0.14
    48 user SQL statements in session.
    1 internal SQL statements in session.
    49 SQL statements in session.
    0 statements EXPLAINed in this session.
    Trace file compatibility: 10.01.00
    Sort options: prsela exeela fchela
    1 session in tracefile.
    48 user SQL statements in trace file.
    1 internal SQL statements in trace file.
    49 SQL statements in trace file.
    48 unique SQL statements in trace file.
    928 lines in trace file.
    1338833753 elapsed seconds in trace file.

  • CDP Performance Issue-- Taking more time fetch data

    Hi,
    I'm working on Stellent 7.5.1.
    One of the portlets in the portal is taking more time to fetch data. Can someone please help me solve this issue so that performance can be improved? Kindly provide a solution. This is my code for fetching data from the server:
    public void getManager(final HashMap binderMap)
        throws VistaInvalidInputException, VistaDataNotFoundException,
            DataException, ServiceException, VistaTemplateException {
        String collectionID =
            getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
        long firstStartTime = System.currentTimeMillis();
        HashMap resultSetMap = null;
        String isNonRecursive = getStringLocal(VistaFolderConstants
            .ISNONRECURSIVE_KEY);
        if (isNonRecursive != null
                && isNonRecursive.equalsIgnoreCase(
                    VistaContentFetchHelperConstants.STRING_TRUE)) {
            VistaLibraryContentFetchManager libraryContentFetchManager =
                new VistaLibraryContentFetchManager(binderMap);
            SystemUtils.trace(
                VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                "The input Parameters for Content Fetch = " + binderMap);
            resultSetMap = libraryContentFetchManager
                .getFolderContentItems(m_workspace);
            // Used to add the result set to the binder.
            addResultSetToBinder(resultSetMap, true);
        } else {
            long startTime = System.currentTimeMillis();
            // isStandard is used to decide whether the call is for
            // Standard or Extended.
            SystemUtils.trace(
                VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                "The input Parameters for Content Fetch = " + binderMap);
            String isStandard = getTemplateInformation(binderMap);
            long endTimeTemplate = System.currentTimeMillis();
            binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
            long endTimebinderMap = System.currentTimeMillis();
            VistaContentFetchManager contentFetchManager
                = new VistaContentFetchManager(binderMap);
            long endTimeFetchManager = System.currentTimeMillis();
            resultSetMap = contentFetchManager
                .getAllFolderContentItems(m_workspace);
            long endTimeresultSetMap = System.currentTimeMillis();
            // Used to add the result set and the total number of content
            // items to the binder.
            addResultSetToBinder(resultSetMap, false);
            long endTime = System.currentTimeMillis();
            if (perfLogEnable.equalsIgnoreCase("true")) {
                Log.info("Time taken to execute " +
                    "getTemplateInformation=" +
                    (endTimeTemplate - startTime) +
                    "ms binderMap=" +
                    (endTimebinderMap - startTime) +
                    "ms contentFetchManager=" +
                    (endTimeFetchManager - startTime) +
                    "ms resultSetMap=" +
                    (endTimeresultSetMap - startTime) +
                    "ms getManager:getAllFolderContentItems = " +
                    (endTime - startTime) +
                    "ms overallTime=" +
                    (endTime - firstStartTime) +
                    "ms folderID =" +
                    collectionID);
            }
        }
    }

    Hi.
    The SELECT statement accessing the MSEG table is slow many times.
    To improve the performance on MSEG:
    1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
    2. Index the MSEG table.
    3. Check and limit the columns in the SELECT statement.
    Possible way:
    SELECT mblnr mjahr zeile bwart matnr werks lifnr menge meins
           ebeln ebelp lgort smbln bukrs gsber insmk xauto
      FROM mseg
      INTO CORRESPONDING FIELDS OF TABLE itab
      WHERE werks EQ p_werks AND
            mblnr IN s_mblnr AND
            bwart EQ '105'.
    " Exclude specific material documents. Note: the original post compared
    " the table itself ("WHERE itab EQ ..."), which does not compile; the
    " field mblnr is assumed here based on the document numbers used.
    DELETE itab WHERE mblnr EQ '5002361303'.
    DELETE itab WHERE mblnr EQ '5003501080'.
    DELETE itab WHERE mblnr EQ '5002996300'.
    DELETE itab WHERE mblnr EQ '5002996407'.
    DELETE itab WHERE mblnr EQ '5003587026'.
    DELETE itab WHERE mblnr EQ '5003493186'.
    DELETE itab WHERE mblnr EQ '5002720583'.
    DELETE itab WHERE mblnr EQ '5002928122'.
    DELETE itab WHERE mblnr EQ '5002628263'.
    Regards
    Bala.M

  • Report rdf with size 8mb taking more time to open

    Hello All,
    I have an .rdf (Reports 6i) report, 8.5 MB in size, that takes more time to open and more time to access each field.
    Please let me know how I can solve this issue.
    Please do the needful.
    Thanks.

    Thanks for the immediate response.
    Please let me know how I can find this out.
    Right now I have the below details from Report -> Help:
    Report Builder 6.0.8.11.3
    ORACLE Server Release 8.0.6.0.0
    Oracle Procedure Builder 6.0.8.11.0
    Oracle ORACLE PL/SQL V8.0.6.0.0 - Production
    Oracle CORE Version 4.0.6.0.0 - Production
    Oracle Tools Integration Services 6.0.8.10.2
    Oracle Tools Common Area 6.0.5.32.1
    Oracle Toolkit 2 for Windows 32-bit platforms 6.0.5.35.0
    Resource Object Store 6.0.5.0.1
    Oracle Help 6.0.5.35.0
    Oracle Sqlmgr 6.0.8.11.3
    Oracle Query Builder 6.0.7.0.0 - Production
    PL/SQL Editor (c) WinMain Software (www.winmain.com), v1.0 (Production)
    Oracle ZRC 6.0.8.11.3
    Oracle Express 6.0.8.3.5
    Oracle XML Parser     1.0.2.1.0     Production
    Oracle Virtual Graphics System 6.0.5.35.0
    Oracle Image 6.0.5.34.0
    Oracle Multimedia Widget 6.0.5.34.0
    Oracle Tools GUI Utilities 6.0.5.35.0
    Thanks

  • INSTR() function is taking more time

    I'm using the below select query in my procedure:
    select distinct SUPPLIER_CIRCUIT_ID from SUPPLIER_DATA
    where INSTR(i.SYSTEM_CIRCUIT_ID,SUPPLIER_CIRCUIT_ID) > 0;
    I am taking SYSTEM_CIRCUIT_ID from a cursor.
    This query is taking more time. Is it possible to create a function-based index to speed up the query?

    Hi,
    Welcome to the forum!
    993620 wrote:
    im using the below query select query in my procedure.
    select distinct SUPPLIER_CIRCUIT_ID from SUPPLIER_DATA
    where INSTR(i.SYSTEM_CIRCUIT_ID,SUPPLIER_CIRCUIT_ID) > 0;
    I am taking SYSTEM_CIRCUIT_ID in cursor.
    Show exactly what you're doing.
    Whenever you have a problem, post a complete test script that people can run to re-create the problem and test their ideas.
    See the forum FAQ {message:id=9360002}
    This query is taking more time. Is it possible to create a function-based index and speed up the query?
    Sorry, unless you're doing something very specific (such as always looking for the same sub-string), a function-based index won't help.
    Oracle sells a separate product, called Oracle Text, for this kind of searching.
    You might try LIKE:
    WHERE i.SYSTEM_CIRCUIT_ID LIKE '%' || SUPPLIER_CIRCUIT_ID || '%'
    If you're using a cursor, then that's probably slowing the process down much more than the part you're showing.
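    A hedged sketch of that suggestion applied to the original query (the bind variable :system_circuit_id stands in for the cursor value i.SYSTEM_CIRCUIT_ID):

    SELECT DISTINCT supplier_circuit_id
      FROM supplier_data
     WHERE :system_circuit_id LIKE '%' || supplier_circuit_id || '%';

    Note that neither INSTR nor this LIKE can use a normal index, so the statement still scans SUPPLIER_DATA; the bigger win is usually replacing the row-by-row cursor loop with a single set-based statement.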
