Oracle interview questions: query very slow

Hi
Most of the time in interviews they ask:
What are the steps you take when your query is running very slow?
I usually say:
1) First I check whether the table is properly indexed
2) Next, whether it is properly normalized
Interviewers are not fully satisfied with these answers,
so kindly give me more suggestions.
S

Also, when checking the execution plan, get the actual plan using DBMS_XPLAN.DISPLAY_CURSOR rather than the predicted one (EXPLAIN PLAN FOR). If you use a configurable IDE such as SQL Developer or PL/SQL Developer, it is worth taking the time to set this up in the session browser so that you can easily capture it while the query is running. You might also compare the estimated vs actual cardinalities. While you're at it you could check v$session_longops, v$session_wait and (if you have the Diagnostic and Tuning packs licensed) v$active_session_history and the various dba_hist% views.
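For example (a minimal sketch - the table and column names are placeholders), run the statement once with rowsource statistics and then pull the actual plan from the cursor cache:

SELECT /*+ GATHER_PLAN_STATISTICS */ *
  FROM some_table
 WHERE some_col = :b1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

The DISPLAY_CURSOR call reports the last statement executed in the session, including estimated (E-Rows) and actual (A-Rows) cardinalities.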
You might try the SQL Tuning Advisor (DBMS_SQLTUNE) which generates profiles for SQL statements (requires ADVISOR system privilege to run a tuning task, and CREATE ANY SQL PROFILE to apply a profile).
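A hedged sketch of that workflow, assuming you already know the SQL_ID of the slow statement ('&sql_id' and the task name are placeholders):

DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- create and run an advisor task against a cursor already in the shared pool
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id     => '&sql_id',
                                            task_name  => 'slow_query_task',
                                            time_limit => 300);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('slow_query_task') FROM dual;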
In 11g look at SQL Monitor.
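For example (hedged sketch, '&sql_id' is a placeholder; the Tuning Pack licence applies here too):

SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&sql_id', type => 'TEXT', report_level => 'ALL')
  FROM dual;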
Tracing is all very well if you can get access to the tracefile in a reasonable timeframe, though in many sites (including my current one) it's just too much trouble unless you're a DBA.
Sorry Rob, should probably have replied to oraclehema rather than you.

Similar Messages

  • Query very slow!

    I have Oracle 9i and SUN OS 5.8
    I have a Java application that has a query against the Customer table. This table has 2,000,000 records and I have to show them in pages (20 records per page).
    The user queries, for example, the Customers whose last name begins with “O”. Then the application shows the first 20 records matching this condition, ordered by Name.
    So I have to create 2 queries:
    1)
    SELECT id_customer,Name
    FROM Customers
    WHERE Name like 'O%'
    ORDER BY id_customer
    But when I tried this query in TOAD it took a long time (about 15 minutes).
    I have an index on the NAME field!
    Besides, if the user wants to go to the second page the query is executed again (the Java programmers told me that).
    What is your recommendation to optimize it? I need to obtain the information in a few seconds.
    2)
    SELECT count(*) FROM Customers WHERE NAME like 'O%'
    I have to run this query because I need to know how many pages (of 20 records) I need to show.
    For example, with 5000 records I would have 250 pages.
    But when I tried this query in TOAD it also took a long time (about 30 seconds).
    What is your recommendation to optimize it? I need to obtain the information in a few seconds.
    Thanks in advance!
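    One common approach to this kind of paging on 9i is the nested ROWNUM (top-N) pattern; a minimal sketch, with :first_row and :last_row as placeholder binds for the page boundaries (e.g. 20 and 40 for the second page), ordering by name as described above:

    SELECT id_customer, name
      FROM (SELECT a.*, ROWNUM rn
              FROM (SELECT id_customer, name
                      FROM customers
                     WHERE name LIKE 'O%'
                     ORDER BY name) a
             WHERE ROWNUM <= :last_row)
     WHERE rn > :first_row;

    Because the inner ORDER BY is capped by ROWNUM, Oracle can stop after :last_row rows instead of sorting and returning the whole result set.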

    This appears to be a duplicate of a post in the Query very slow! forum.
    Claudio, since the same folks tend to read both forums, it is generally preferred that you post questions in only one forum. That way, multiple people don't spend time writing generally equivalent replies.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Oracle 11G - Update is very slow on View

    I am having big trouble with an UPDATE query on Oracle 11g.
    I have a set of 5 tables of identical structure and a view that consists of a UNION ALL of the 5 tables.
    None of these tables contains more than 20,000 rows.
    Let's call the view V_INTE_NE. Each of the base tables has a PRIMARY KEY defined on 3 NUMBER(10,0) columns -> INTE_REF / NE_REF / INSTANCE.
    Now, I have 6 rows in another table and I want to update my view from the data of this small table (let's call it SMALL). This table has the 3 columns INTE_REF / NE_REF / INSTANCE.
    When I try to join the two tables :
    SELECT * FROM T_INTE_NE T2
    WHERE EXISTS ( SELECT 1 FROM SMALL T1 WHERE T2.INTE_REF = T1.INTEREF AND T2.NE_REF = T1.NEREF AND T2.INTE_INST = T1.INSTANCE )
    I get the 6 rows in 0.037 seconds.
    When I try to update the view (I have an INSTEAD OF trigger that does nothing; for testing it just returns without modifying anything), I execute the following query:
    UPDATE T_INTE_NE T2
    SET INTE_STATE = -11 WHERE
    EXISTS ( SELECT 1 FROM SMALL T1 WHERE T2.INTE_REF = T1.INTEREF AND T2.NE_REF = T1.NEREF AND T2.INTE_INST = T1.INSTANCE )
    The 6 rows are updated (at least the trigger is called) in 20 seconds.
    However, in the execution plan, I can't see where Oracle spends the time:
    Plan hash value: 907176690
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | UPDATE STATEMENT | | 6 | 36870 | 153 (1)| 00:00:02 |
    | 1 | UPDATE | T_INTE_NE | | | | |
    |* 2 | HASH JOIN RIGHT SEMI | | 6 | 36870 | 153 (1)| 00:00:02 |
    | 3 | TABLE ACCESS FULL | SMALL | 6 | 234 | 9 (0)| 00:00:01 |
    | 4 | VIEW | T_INTE_NE | 6 | 36636 | 143 (0)| 00:00:02 |
    | 5 | VIEW | X_V_T_INTE_NE | 6 | 18636 | 143 (0)| 00:00:02 |
    | 6 | UNION-ALL | | | | | |
    | 7 | TABLE ACCESS FULL| SECNODE1_T_INTE_NE | 1 | 3106 | 60 (0)| 00:00:01 |
    | 8 | TABLE ACCESS FULL| SECNODE2_T_INTE_NE | 1 | 3106 | 60 (0)| 00:00:01 |
    | 9 | TABLE ACCESS FULL| SECNODE3_T_INTE_NE | 1 | 3106 | 2 (0)| 00:00:01 |
    | 10 | TABLE ACCESS FULL| SECNODE4_T_INTE_NE | 1 | 3106 | 2 (0)| 00:00:01 |
    | 11 | TABLE ACCESS FULL| SECNODE5_T_INTE_NE | 1 | 3106 | 2 (0)| 00:00:01 |
    | 12 | TABLE ACCESS FULL| SYS_T_INTE_NE | 1 | 3106 | 17 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("T2"."INTE_REF"="T1"."INTEREF" AND "T2"."NE_REF"="T1"."NEREF" AND
    "T2"."INTE_INST"="T1"."INSTANCE")
    Note
    - dynamic sampling used for this statement (level=2)
    Statistics
    3 user calls
    0 physical read total bytes
    0 physical write total bytes
    0 spare statistic 3
    0 commit cleanout failures: cannot pin
    0 TBS Extension: bytes extended
    0 total number of times SMON posted
    0 SMON posted for undo segment recovery
    0 SMON posted for dropping temp segment
    0 segment prealloc tasks
    What could explain the difference?
    I get exactly the same execution plan (when autotrace is ON).
    Furthermore, if I try to do the same update on each of the basic tables, I get the rows updated instantaneously.
    Is there any reason to avoid this kind of query?
    Any help would be greatly appreciated :-)
    Regards,
    Patrick

    Sorry for this, I lost myself in conjectures and I didn't think I would have to explain the whole case.
    So, I wrote a small piece of PL/SQL that reproduces the same issue.
    It seems that my issue is not due to the UPDATE but to the use of the IN predicate.
    As you can see at the end of the script, I try to join the 2 tables using different techniques.
    The first query is very fast, the second is very slow.
    I need the second one if I want to do any update.
    DROP TABLE Part1;
    DROP TABLE Part2;
    DROP TABLE Part3;
    DROP TABLE Part4;
    CREATE TABLE Part1 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 1 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part1 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE TABLE Part2 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 2 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part2 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE TABLE Part3 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 3 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part3 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE TABLE Part4 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 4 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part4 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE OR REPLACE FUNCTION Decrypt (
    x_in IN VARCHAR2
    ) RETURN VARCHAR2
    AS
    x_out VARCHAR2(2000);
    BEGIN
    SELECT REVERSE( x_in ) INTO x_out FROM DUAL;
    RETURN ( x_out );
    END;
    /
    CREATE OR REPLACE VIEW AllParts AS
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part1
    UNION ALL
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part2
    UNION ALL
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part3
    UNION ALL
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part4;
    DROP TABLE Small;
    CREATE TABLE Small ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), Data1 VARCHAR2(1000) );
    BEGIN
    DECLARE
    n_Key NUMBER(10, 0 ) := 0;
    BEGIN
    WHILE ( n_Key < 50000 )
    LOOP
    INSERT INTO Part1( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    INSERT INTO Part2( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    INSERT INTO Part3( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    INSERT INTO Part4( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    n_Key := n_Key + 1;
    END LOOP;
    INSERT INTO Small( Key1, Key2, Key3, Data1 ) VALUES ( 1000, 100, 10, 'Test 1000' );
    INSERT INTO Small( Key1, Key2, Key3, Data1 ) VALUES ( 3000, 300, 30, 'Test 3000' );
    INSERT INTO Small( Key1, Key2, Key3, Data1 ) VALUES ( 5000, 500, 50, 'Test 5000' );
    COMMIT;
    END;
    END;
    /
    SELECT T2.*
    FROM Small T1, AllParts T2
    WHERE T2.Key1 = T1.Key1 AND T2.Key2 = T1.Key2 AND T2.Key3 = T1.Key3;
    SELECT T1.*
    FROM AllParts T1
    WHERE ( T1.Key1, T1.Key2, T1.Key3 ) IN ( SELECT T2.Key1, T2.Key2, T2.Key3 FROM Small T2 );
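    A hedged way to see where the two forms differ is to run each once with rowsource statistics and compare the estimated and actual row counts per plan step (the DBMS_XPLAN.DISPLAY_CURSOR approach shown earlier on this page):

    SELECT /*+ GATHER_PLAN_STATISTICS */ T1.*
      FROM AllParts T1
     WHERE ( T1.Key1, T1.Key2, T1.Key3 ) IN ( SELECT T2.Key1, T2.Key2, T2.Key3 FROM Small T2 );

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

    Repeating this for the fast join version should show whether the Key predicates are pushed into each branch of the UNION ALL view in one case but not the other.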

  • SPATIAL QUERY VERY SLOW

    I can execute this query but it is very slow. I have 2 tables: one (A) with 250,000 sites and one (B) with 250,000 points. I want to determine how many risks are inside the sites.
    Thanks
    JGS
    SELECT B.ID, A.ID, A.GC, A.SUMA
    FROM DBG_RIESGOS_CUMULOS_SITE A, DBG_RIESGOS_CUMULOS B
    WHERE A.GC = 'PATRIMONIAL FENOMENOS SISMICOS' AND A.GC=B.GC
    AND SDO_RELATE(B.GEOMETRY, A.GEOMETRY, 'MASK=INSIDE') = 'TRUE';
    100 records in 220 seconds - very slow.

    I would do two things:
    1) Ensure Oracle is patched with the latest 10.2.0.4 patches
    This is the list I've been working with:
    Patch 7003151
    Patch 6989483
    Patch 7237687
    Patch 7276032
    Patch 7307918
    2) Write the query like this
    SELECT /*+ ORDERED*/ B.ID, A.ID, A.GC, A.SUMA
    FROM DBG_RIESGOS_CUMULOS B, DBG_RIESGOS_CUMULOS_SITE A
    WHERE B.GC = 'PATRIMONIAL FENOMENOS SISMICOS'
    AND A.GC=B.GC
    AND SDO_ANYINTERACT(A.GEOMETRY, B.GEOMETRY) = 'TRUE';
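    As a further hedged suggestion (not from the original reply): SDO_RELATE and SDO_ANYINTERACT rely on a spatial index on the geometry column, so it is worth confirming one exists on each table. A minimal sketch, assuming USER_SDO_GEOM_METADATA is already populated for both GEOMETRY columns (the index names are placeholders):

    CREATE INDEX dbg_riesgos_cumulos_site_sx ON DBG_RIESGOS_CUMULOS_SITE (GEOMETRY)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    CREATE INDEX dbg_riesgos_cumulos_sx ON DBG_RIESGOS_CUMULOS (GEOMETRY)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX;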

  • User Defined Type - Array bind Query very slow

    Hi.
    I have the following problem. I am trying to use Oracle Instant Client 11 and ODP.NET to pass arrays to SELECT statements as bind parameters. It works, but it runs very, very slowly. Example:
    - Initial query:
    SELECT tbl1.field1, tbl1.field2, tbl2.field1, tbl2.field2 ... FROM tbl1
    LEFT JOIN tbl2 ON tbl1.field11=tbl2.field0
    LEFT JOIN tbl3 ON tbl2.field11=tbl3.field0 AND tbl1.field5=tbl3.field1
    ...and more LEFT JOINs
    WHERE
    tbl1.field0 IN ('id01', 'id02', 'id03'...)
    this query, with 100 elements in the IN list, takes 3 seconds on my database.
    - Query with Array bind:
    in Oracle I created a UDT: create or replace type myschema.mytype as table of varchar2(1000)
    then, as described in the Oracle example, I wrote a few classes (a factory and one implementing IOracleCustomType) and used the type in the query;
    instead of IN ('id01', 'id02', 'id03'...) I have tbl1.field0 IN (select column_value from table(:prmTable)), and :prmTable is the bound array.
    This query takes 190 seconds!!! Why? It works, but the HDD of the Oracle server works very hard, and it takes too long.
    The Oracle server is 10g.
    PS: I tried using only 5 elements in the array - the same result, it also takes 190 seconds...
    Please help!

    I recommend you generate an explain plan for each query and post them here. Based on what you have given the following MAY be happening:
    Your first query has a static IN list when it is submitted to the server. Therefore when Oracle generates the execution plan the CBO can accurately determine it based on a KNOWN set of input parameters. However, the second query has a bind variable for this list of parameters and Oracle has no way of knowing, at the time the execution plan is generated, what that list contains. If it does not know what the list contains it cannot generate the optimal execution plan. Therefore I would guess that it is probably doing some sort of full table scan (although these aren't always bad, remember that!).
    Again please post the execution plans for each.
    HTH!
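    A minimal sketch of what that looks like for the array-bind form (the table and column names follow the original post; the CAST is there because EXPLAIN PLAN cannot infer the collection type of a plain bind variable):

    EXPLAIN PLAN FOR
    SELECT tbl1.*
      FROM tbl1
     WHERE tbl1.field0 IN (SELECT column_value
                             FROM TABLE(CAST(:prmTable AS myschema.mytype)));

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    Comparing this plan with the plan of the literal IN-list version should show whether the bound-array version has switched from index access to a full table scan.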

  • Oracle Interview Questions

    Hi All,
    Could you please answer the important interview questions below:
    1) Let's say many Oracle DBs are running on one Unix server. How would you determine how many are running?
    2) How would you determine which shared memory segments/semaphores belong to a particular instance?
    3) Why is it not recommended to use PCTUSED with indexes?
    4) Which query is better in terms of performance - SQL using the IN operator or SQL using the EXISTS operator?
    5) What does the shrink command do?
    6) What are the steps to migrate a DB from one version to another?

    1) ipcs -mob | grep <oracle account> | wc -l
    2) disconnect all users from all instances.
    Connect to one instance. Run ipcs -mob | grep <oracle account>:
    the shared memory segment that is used by the instance you are connected to has its NATTACH value incremented by 1 if the database server is configured with dedicated server process.
    Message was edited by:
    Pierre Forstmann

  • Oracle-11g connection is very slow

    Hi Team,
    Installed Oracle 11g with a database yesterday, but the connection to the database using tnsnames is very slow, even from the host server, whereas sys / as sysdba is normal on the host server.
    I also checked other database (10g) connections on the same host server and they are normal. Herewith I have spooled the alert log file and the parameter list. Please help.
    Alert log file from shutdown to startup:
    Sat Aug 06 11:28:54 2011
    Shutting down instance (immediate)
    Stopping background process SMCO
    Shutting down instance: further logons disabled
    Sat Aug 06 11:28:55 2011
    Stopping background process CJQ0
    Stopping background process QMNC
    Stopping background process MMNL
    Stopping background process MMON
    License high water mark = 10
    ALTER DATABASE CLOSE NORMAL
    Sat Aug 06 11:28:58 2011
    SMON: disabling tx recovery
    SMON: disabling cache recovery
    Sat Aug 06 11:28:58 2011
    Shutting down archive processes
    Archiving is disabled
    Sat Aug 06 11:28:58 2011
    ARCH shutting down
    Sat Aug 06 11:28:58 2011
    ARCH shutting down
    Sat Aug 06 11:28:58 2011
    ARCH shutting down
    ARC1: Archival stopped
    ARC0: Archival stopped
    ARC3: Archival stopped
    Sat Aug 06 11:28:58 2011
    ARCH shutting down
    ARC2: Archival stopped
    Thread 1 closed at log sequence 9
    Successful close of redo thread 1
    Completed: ALTER DATABASE CLOSE NORMAL
    ALTER DATABASE DISMOUNT
    Completed: ALTER DATABASE DISMOUNT
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    Sat Aug 06 11:29:01 2011
    Stopping background process VKTM:
    Sat Aug 06 11:29:05 2011
    Instance shutdown complete
    Sat Aug 06 11:32:12 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =118
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up:
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production.
    Using parameter settings in client-side pfile /oracle/ora11g/apps/dbs/initrakshak.ora on machine abml01
    System parameters with non-default values:
    processes                = 700
    sga_max_size             = 30G
    sga_target               = 30G
    control_files            = "/barch10g_db/ora11g/rakshak_control/rkdatabase/control1/rakshak_control01.ctl"
    control_files            = "/barch10g_db/ora11g/rakshak_redo/rkdatabase/control2/rakshak_control02.ctl"
    control_files            = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/control3/rakshak_control03.ctl"
    db_block_size            = 16384
    compatible               = "11.2.0"
    log_archive_dest         = "/barch10g_db/ora11g/rakshak_archive/rkdatabase/rakshak"
    db_recovery_file_dest    = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/flash_recovery_area"
    db_recovery_file_dest_size= 2G
    undo_management          = "AUTO"
    undo_tablespace          = "UNDOTBS1"
    sec_case_sensitive_logon = FALSE
    remote_login_passwordfile= "EXCLUSIVE"
    utl_file_dir             = "/barch10g_db/ora11g/ldoutput/"
    plsql_code_type          = "native"
    job_queue_processes      = 100
    cursor_sharing           = "FORCE"
    audit_file_dest          = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/adump"
    audit_trail              = "DB"
    db_name                  = "rakshak"
    open_cursors             = 700
    diagnostic_dest          = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/"
    Sat Aug 06 11:32:33 2011
    PMON started with pid=2, OS id=9463
    Sat Aug 06 11:32:34 2011
    VKTM started with pid=3, OS id=9465 at elevated priority
    VKTM running at (10)millisec precision with DBRM quantum (100)ms
    Sat Aug 06 11:32:34 2011
    GEN0 started with pid=4, OS id=9469
    Sat Aug 06 11:32:34 2011
    DIAG started with pid=5, OS id=9471
    Sat Aug 06 11:32:34 2011
    DBRM started with pid=6, OS id=9473
    Sat Aug 06 11:32:34 2011
    PSP0 started with pid=7, OS id=9475
    Sat Aug 06 11:32:34 2011
    DIA0 started with pid=8, OS id=9477
    Sat Aug 06 11:32:34 2011
    MMAN started with pid=9, OS id=9479
    Sat Aug 06 11:32:34 2011
    DBW0 started with pid=10, OS id=9481
    Sat Aug 06 11:32:34 2011
    DBW1 started with pid=11, OS id=9483
    Sat Aug 06 11:32:34 2011
    DBW2 started with pid=12, OS id=9485
    Sat Aug 06 11:32:34 2011
    LGWR started with pid=13, OS id=9487
    Sat Aug 06 11:32:34 2011
    CKPT started with pid=14, OS id=9489
    Sat Aug 06 11:32:34 2011
    SMON started with pid=15, OS id=9491
    Sat Aug 06 11:32:34 2011
    RECO started with pid=16, OS id=9493
    Sat Aug 06 11:32:34 2011
    MMON started with pid=17, OS id=9495
    Sat Aug 06 11:32:34 2011
    MMNL started with pid=18, OS id=9497
    Sat Aug 06 11:32:34 2011
    ORACLE_BASE not set in environment. It is recommended
    that ORACLE_BASE be set in the environment
    Sat Aug 06 11:34:34 2011
    Shutting down instance (immediate)
    Shutting down instance: further logons disabled
    Stopping background process MMNL
    Stopping background process MMON
    License high water mark = 1
    ALTER DATABASE CLOSE NORMAL
    ORA-1507 signalled during: ALTER DATABASE CLOSE NORMAL...
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    Sat Aug 06 11:34:37 2011
    Stopping background process VKTM:
    Sat Aug 06 11:34:40 2011
    Instance shutdown complete
    Sat Aug 06 11:35:55 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =118
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up:
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production.
    Using parameter settings in client-side pfile /oracle/ora11g/apps/dbs/initrakshak.ora on machine abml01
    System parameters with non-default values:
    processes                = 700
    sga_max_size             = 30G
    sga_target               = 30G
    control_files            = "/barch10g_db/ora11g/rakshak_control/rkdatabase/control1/rakshak_control01.ctl"
    control_files            = "/barch10g_db/ora11g/rakshak_redo/rkdatabase/control2/rakshak_control02.ctl"
    control_files            = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/control3/rakshak_control03.ctl"
    db_block_size            = 16384
    compatible               = "11.2.0"
    log_archive_dest         = "/barch10g_db/ora11g/rakshak_archive/rkdatabase/rakshak"
    db_recovery_file_dest    = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/flash_recovery_area"
    db_recovery_file_dest_size= 2G
    undo_management          = "AUTO"
    undo_tablespace          = "UNDOTBS1"
    sec_case_sensitive_logon = FALSE
    remote_login_passwordfile= "EXCLUSIVE"
    utl_file_dir             = "/barch10g_db/ora11g/ldoutput/"
    plsql_code_type          = "native"
    job_queue_processes      = 100
    cursor_sharing           = "FORCE"
    audit_file_dest          = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/adump"
    audit_trail              = "DB"
    db_name                  = "rakshak"
    open_cursors             = 700
    diagnostic_dest          = "/barch10g_db/ora11g/rakshak_idx1/rkdatabase/"
    Sat Aug 06 11:36:16 2011
    PMON started with pid=2, OS id=9648
    Sat Aug 06 11:36:16 2011
    VKTM started with pid=3, OS id=9657 at elevated priority
    VKTM running at (10)millisec precision with DBRM quantum (100)ms
    Sat Aug 06 11:36:16 2011
    GEN0 started with pid=4, OS id=9669
    Sat Aug 06 11:36:16 2011
    DIAG started with pid=5, OS id=9678
    Sat Aug 06 11:36:16 2011
    DBRM started with pid=6, OS id=9686
    Sat Aug 06 11:36:16 2011
    PSP0 started with pid=7, OS id=9697
    Sat Aug 06 11:36:16 2011
    DIA0 started with pid=8, OS id=9704
    Sat Aug 06 11:36:16 2011
    MMAN started with pid=9, OS id=9711
    Sat Aug 06 11:36:16 2011
    DBW0 started with pid=10, OS id=9713
    Sat Aug 06 11:36:16 2011
    DBW1 started with pid=11, OS id=9715
    Sat Aug 06 11:36:16 2011
    DBW2 started with pid=12, OS id=9717
    Sat Aug 06 11:36:16 2011
    LGWR started with pid=13, OS id=9719
    Sat Aug 06 11:36:16 2011
    CKPT started with pid=14, OS id=9721
    Sat Aug 06 11:36:16 2011
    SMON started with pid=15, OS id=9723
    Sat Aug 06 11:36:16 2011
    RECO started with pid=16, OS id=9725
    Sat Aug 06 11:36:16 2011
    MMON started with pid=17, OS id=9727
    Sat Aug 06 11:36:16 2011
    MMNL started with pid=18, OS id=9729
    Sat Aug 06 11:36:16 2011
    ORACLE_BASE from environment = /oracle/ora11g/home
    Sat Aug 06 11:36:40 2011
    alter database mount
    Sat Aug 06 11:36:44 2011
    Successful mount of redo thread 1, with mount id 3292194824
    Database mounted in Exclusive Mode
    Lost write protection disabled
    Completed: alter database mount
    Sat Aug 06 11:36:54 2011
    alter database open
    LGWR: STARTING ARCH PROCESSES
    Sat Aug 06 11:36:54 2011
    ARC0 started with pid=20, OS id=9743
    Sat Aug 06 11:36:55 2011
    ARC0: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARC0: STARTING ARCH PROCESSES
    Sat Aug 06 11:36:55 2011
    ARC1 started with pid=21, OS id=9745
    Sat Aug 06 11:36:55 2011
    ARC2 started with pid=22, OS id=9747
    Sat Aug 06 11:36:55 2011
    ARC3 started with pid=23, OS id=9749
    ARC1: Archival started
    ARC2: Archival started
    ARC2: Becoming the 'no FAL' ARCH
    ARC2: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Thread 1 opened at log sequence 9
    Current log# 3 seq# 9 mem# 0: /barch10g_db/ora11g/rakshak_idx1/rkdatabase/redo3/rakshak_redolog3a.log
    Current log# 3 seq# 9 mem# 1: /barch10g_db/ora11g/rakshak_idx1/rkdatabase/redo3/rakshak_redolog3b.log
    Successful open of redo thread 1
    Sat Aug 06 11:36:55 2011
    SMON: enabling cache recovery
    Successfully onlined Undo Tablespace 2.
    Verifying file header compatibility for 11g tablespace encryption..
    Verifying 11g file header compatibility for tablespace encryption completed
    SMON: enabling tx recovery
    Database Characterset is WE8ISO8859P1
    No Resource Manager plan active
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    Sat Aug 06 11:36:55 2011
    QMNC started with pid=25, OS id=9753
    Completed: alter database open
    Sat Aug 06 11:36:56 2011
    db_recovery_file_dest_size of 2048 MB is 0.99% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    ARC3: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    Sat Aug 06 11:36:58 2011
    Starting background process CJQ0
    Sat Aug 06 11:36:58 2011
    CJQ0 started with pid=24, OS id=9768
    Setting Resource Manager plan SCHEDULER[0x2FF9]:DEFAULT_MAINTENANCE_PLAN via scheduler window
    Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
    Sat Aug 06 11:37:01 2011
    Starting background process VKRM
    Sat Aug 06 11:37:01 2011
    VKRM started with pid=26, OS id=9770
    Sat Aug 06 11:41:55 2011
    Starting background process SMCO
    Sat Aug 06 11:41:55 2011
    SMCO started with pid=29, OS id=9920
    parameter list
    db_name='rakshak'
    #memory_target=30G
    processes = 700
    audit_file_dest='/barch10g_db/ora11g/rakshak_idx1/rkdatabase/adump'
    audit_trail ='db'
    db_block_size=16384
    db_recovery_file_dest='/barch10g_db/ora11g/rakshak_idx1/rkdatabase/flash_recovery_area'
    db_recovery_file_dest_size=2G
    diagnostic_dest='/barch10g_db/ora11g/rakshak_idx1/rkdatabase/'
    #dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
    open_cursors=700
    job_queue_processes=100
    remote_login_passwordfile='EXCLUSIVE'
    undo_management='AUTO'
    undo_tablespace='UNDOTBS1'
    # You may want to ensure that control files are created on separate physical
    # devices
    control_files = '/barch10g_db/ora11g/rakshak_control/rkdatabase/control1/rakshak_control01.ctl','/barch10g_db/ora11g/rakshak_redo/rkdatabase/control2/rakshak_control02.ctl','/barch10g_db/ora11g/rakshak_idx1/rkdatabase/control3/rakshak_control03.ctl'
    compatible ='11.2.0'
    SGA_MAX_SIZE=30G
    SGA_TARGET=30G
    Utl_file_dir='/barch10g_db/ora11g/ldoutput/'
    sec_case_sensitive_logon=FALSE
    plsql_code_type=native
    cursor_sharing='FORCE'
    log_archive_dest='/barch10g_db/ora11g/rakshak_archive/rkdatabase/rakshak'
    If any information is needed, pls let me know.
    thanks in advance
    Regards
    Phani Kumar

    Phani wrote:
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.247.27)(PORT=1522)))
    Why use port 1522?
    It is always a good idea to use the standard ports for a network application. There is no logic in obfuscating ports for security purposes. It also makes network management and dealing with quality of service issues, for example, much more complex if you do not stick to the registered application ports.
    Also note that if you provide a dotted IP address, only that address will be used for binding the tcp port as a listening end point. This means no connections will be accepted on localhost and other IP addresses of that server. Make sure that this is what is technically required.
    ifconfig
    bond6     Link encap:Ethernet  HWaddr 00:26:55:D3:02:B6
    Why are you using bonding? How many bonded interfaces are there and how many physical NICs? Bond6 suggests that it is the 7th bonded interface - and at 2 NICs per bonded interface that implies your server has 14 physical Ethernet interfaces, which I doubt is true.
    RX packets:1309675596 errors:5 dropped:0 overruns:0 frame:3
    Not good to see any errors. What does ethtool stats show? Also check that the physical interfaces are enabled for full duplex. Some Cisco switches do not negotiate it properly and the NIC could be running half duplex.
    Also - using bonding... does not seem right. The 1st and default bonded interface should be /dev/bond0 - and not bond6.
    Check the server's network configuration (the ifcfg-* files in the /etc/sysconfig/networking-scripts directory). Suggest that you get a network engineer (or the like) to assist with reviewing the network setup of that server.

  • 10g Form - first execute query - very slow

    I have the following issue:
    Enter an application
    open a form in enter query mode
    first time execute query is very slow (several minutes)
    every other time it's quick (couple seconds or less)
    I can leave the form, use other forms within the app, come back and query is still quick. It's only the first time after initially launching the app.
    Any ideas what might be causing this?

    We have the same application running in 6i client/server DB-9i in production. We are testing the upgraded application that is 10g forms on OAS DB-10g. We don't have the issue in the current production client/server app.

  • UNION making query very slow... solution?

    Hi Guys,
    I want to get the records of two tables in one view. The option available in Oracle is UNION.
    I have used UNION between two SELECT statements. There are over 15,000 records in one table and around 200 in the other one.
    But after using this UNION between the SELECT statements my view has become very slow.
    Can I use an ORDER BY in the following view? I have tried but it gives an error. What is the alternative to a UNION?
    Please help. All of our reports depend on this view and it is very slow.
    the script of the view is as follows:
    CREATE OR REPLACE VIEW "COMMON"."V_SEL_SYS_EMP" AS
    Select Employee.Emp_Employees.Employee_ID,
    trim(Employee.Emp_Employees.Emp_F_Name) ||' '|| trim(Employee.Emp_Employees.Emp_L_Name) As
    Emp_Name, Employee.Emp_Employees.Branch_ID,
    Common.Com_Branches.Br_Name, COMMON.COM_BRANCHES.REGION_ID,
    COMMON.COM_REGIONS.REGION_NAME, COMMON.COM_BRANCHES.CHAPTER_ID,
    COMMON.COM_CHAPTERS.CHAPTER_NAME, Employee.Emp_Employees.Company_ID,
    Common.Com_Companies.Comp_Name, Employee.Emp_Employees.Department_ID,
    Common.Com_Departments.Dept_Name, Employee.Emp_Employees.Religion_ID,
    Common.Com_Religions.Religion_Name, Employee.Emp_Employees.Premises_ID,
    Common.Com_Premises.Premises_name, Employee.Emp_Employees.Categ_ID,
    Employee.Emp_Categories.Categ_Name, Employee.Emp_Employees.Desig_ID,
    Employee.Emp_Employees.Desig_Suffix, Employee.Emp_Designations.Designation,
    EMPLOYEE.EMP_EMPLOYEES.PAY_SCALE, EMPLOYEE.EMP_EMPLOYEES.BASIC_SAL,
    Employee.Emp_Employees.HEAD_OF_DEPT, Employee.Emp_Employees.Birth_Date,
    Employee.Emp_Employees.Emp_Gender, Employee.Emp_Employees.Emp_Status,
    Employee.Emp_Employees.Hire_Date, Employee.Emp_Employees.Conf_Date,
    Employee.Emp_Employees.Left_Date, Employee.Emp_Employees.Emp_Photo,
    Employee.Emp_Emp_Info.E_Mail,Employee.Emp_Employees.Dept_Head_Id FROM Employee.Emp_Employees, Common.Com_Branches,
    Common.Com_Companies, Common.Com_Departments, Common.Com_Religions, Common.Com_Premises,
    Employee.Emp_categories,
    Employee.Emp_Designations, Employee.Emp_Emp_Info, COMMON.COM_REGIONS,common.com_chapters
    Where (Employee.Emp_Employees.Branch_ID = Common.Com_Branches.Branch_ID(+))
    and (Employee.Emp_Employees.Company_ID = Common.Com_Companies.Company_ID(+))
    AND (COM_BRANCHES.REGION_ID = COM_REGIONS.REGION_ID(+))
    AND (COM_BRANCHES.CHAPTER_ID = COM_CHAPTERS.CHAPTER_ID(+))
    and (Employee.Emp_Employees.Department_ID = Common.Com_Departments.Department_ID(+))
    and (Employee.Emp_Employees.Religion_ID = Common.Com_Religions.Religion_ID(+))
    and (Employee.Emp_Employees.Premises_ID = Common.Com_Premises.Premises_ID(+))
    and (Employee.Emp_Employees.Categ_ID = Employee.Emp_Categories.Categ_ID(+))
    and (Employee.Emp_Employees.Desig_ID = Employee.Emp_Designations.Desig_ID(+))
    and (Employee.Emp_Employees.Employee_ID = Employee.Emp_Emp_Info.Employee_ID(+))
    UNION
    Select Common.Com_Non_Employees.Non_Employee_ID,
    trim(Common.Com_Non_Employees.First_Name) ||' '|| trim(Common.Com_Non_Employees.Last_Name)
    As Emp_Name, Common.Com_Non_Employees.Branch_ID,
    Common.Com_Branches.Br_Name, COMMON.COM_BRANCHES.REGION_ID,
    COMMON.COM_REGIONS.REGION_NAME, COMMON.COM_BRANCHES.CHAPTER_ID,
    COMMON.COM_CHAPTERS.CHAPTER_NAME, Common.Com_Non_Employees.Company_ID,
    Common.Com_Companies.Comp_Name, Common.Com_Non_Employees.Department_ID,
    Common.Com_Departments.Dept_Name, Common.Com_Non_Employees.Religion_ID,
    Common.Com_Religions.Religion_Name, NULL as Premises_ID,
    NULL as Premises_name, NULL as Categ_ID, NULL as Categ_Name,
    Common.Com_Non_Employees.Desig_ID, Common.Com_Non_Employees.Desig_Suffix,
    Employee.Emp_Designations.Designation, NULL as PAY_SCALE,
    NULL as BASIC_SAL, NULL as HEAD_OF_DEPT, NULL as Birth_Date,
    Common.Com_Non_Employees.Emp_Gender, NULL as Emp_Status,
    NULL as Hire_Date,NULL as Conf_Date,NULL as Left_Date,NULL as Emp_Photo,
    Employee.Emp_Emp_Info.E_Mail,Null as Dept_Head_ID
    FROM Common.Com_Non_Employees, Common.Com_Branches,
    Common.Com_Companies,
    Common.Com_Departments, Common.Com_Religions, Common.Com_Premises,
    Employee.Emp_categories, Employee.Emp_Designations, Employee.Emp_Emp_Info, COMMON.COM_REGIONS,
    common.com_chapters
    Where (Common.Com_Non_Employees.Branch_ID = Common.Com_Branches.Branch_ID(+))
    and (Common.Com_Non_Employees.Company_ID = Common.Com_Companies.Company_ID(+))
    AND (COM_BRANCHES.REGION_ID = COM_REGIONS.REGION_ID(+))
    AND (COM_BRANCHES.CHAPTER_ID = COM_CHAPTERS.CHAPTER_ID(+))
    and (Common.Com_Non_Employees.Department_ID = Common.Com_Departments.Department_ID(+))
    and (Common.Com_Non_Employees.Religion_ID = Common.Com_Religions.Religion_ID(+))
    and (Common.Com_Non_Employees.Desig_ID = Employee.Emp_Designations.Desig_ID(+))
    and (Common.Com_Non_Employees.NOn_Employee_ID = Employee.Emp_Emp_Info.Employee_ID(+))
    Without UNION the two SELECT statements retrieve data quickly.
    Please help!
    Imran Baig

    Use UNION ALL instead of UNION.
    If it still feels slow, generate a trace and see where the bottleneck is:
    alter session set events '10046 trace name context forever, level 8';
    select * from veww;
    alter session set events '10046 trace name context off';
    Use tkprof to format the trace file generated by the event; you can find the trace in your udump directory. Then see what the wait events are.
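    A small hedged addition to the above: setting a trace file identifier makes the resulting file easy to spot in udump (the identifier value and the COUNT query are just examples; the view name is taken from the post above):

    alter session set tracefile_identifier = 'v_sel_sys_emp_trace';
    alter session set events '10046 trace name context forever, level 8';
    select count(*) from common.v_sel_sys_emp;
    alter session set events '10046 trace name context off';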
    Jaffar
    OCP DBA

  • Oracle HTTP JSP gets very slow on 9i

    Platform: Windows 2000
    I have changed from Oracle 8i to Oracle 9i. Then I
    installed Oracle Chartbuilder from
    http://otn.oracle.com/software/tech/java/servlets/htdocs/utilsoft.htm
    On the 9i Apache this application runs very, very slowly.
    It looks a little bit like it is compiling the JSP
    every time it is called.
    Does anybody know what's wrong or where I have to configure something?
    Peter S.

    1. Are the database parameters the same on each of the systems (ie. LARGE_POOL_SIZE)? You may want to check on any potential bottlenecks during the restore by looking in V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO.
    2. You can limit the amount of datafiles included in a backup set by setting the MAXSETSIZE parameter. Be careful not to set this value too low. If a datafile is bigger than MAXSETSIZE, then the backup will fail.

  • Oracle table insertion is very slow - Very Imp

    I have an Oracle 9i DB installed on Windows 2000 Adv. Server. The server is single processor, 2GB RAM.
    I have a table that has one LONG RAW field and 4 other fields. It contains 10k records and the table is indexed.
    I have a VB application using ADO connected to the Oracle DB. I am saving a binary file to the LONG RAW field. For me retrieval is very fast, but when I am inserting a record it is very slow - it takes 4 minutes for one record.
    Please help me to solve this issue.

    Is it possible for you to capture the execution plan, as well as the session wait events?
    If you have buffer busy waits and are not using ASSM (Automatic Segment Space Management), playing with freelists also helps.
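    As a hedged illustration of the wait-event check (the :sid bind is a placeholder for the inserting session's SID from V$SESSION):

    SELECT event, total_waits, time_waited
      FROM v$session_event
     WHERE sid = :sid
     ORDER BY time_waited DESC;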
    Jaffar

  • SQL Query very slow.

    I have a table which has 40 million rows in it. Of course it is partitioned!
    begin
    pk_cm_entity_context.set_entity_in_context(1);
    end;
    SELECT COUNT(1) FROM XFACE_ADDL_DETAILS_TXNLOG;
    alter table XFACE_ADDL_DETAILS_TXNLOG rename to XFACE_ADDLDTS_TXNLOG_PTPART;
    SELECT COUNT(1) FROM XFACE_ADDLDTS_TXNLOG_PTPART;
    -- Create table
    create table XFACE_ADDL_DETAILS_TXNLOG
    (
    REF_TXN_NO CHAR(40),
    REF_USR_NO CHAR(40),
    REF_KEY_NO VARCHAR2(50),
    REF_TXN_NO_ORG CHAR(40),
    REF_USR_NO_ORG CHAR(40),
    RECON_CODE VARCHAR2(25),
    COD_TASK_DERIVED VARCHAR2(5),
    COD_CHNL_ID VARCHAR2(6),
    COD_SERVICE_ID VARCHAR2(10),
    COD_USER_ID VARCHAR2(30),
    COD_AUTH_ID VARCHAR2(30),
    COD_ACCT_NO CHAR(22),
    TYP_ACCT_NO VARCHAR2(4),
    COD_SUB_ACCT_NO CHAR(16),
    COD_DEP_NO NUMBER(5),
    AMOUNT NUMBER(15,2),
    COD_CCY VARCHAR2(3),
    DAT_POST DATE,
    DAT_VALUE DATE,
    TXT_TXN_NARRATIVE VARCHAR2(60),
    DATE_CHEQUE_ISSUE DATE,
    TXN_BUSINESS_TYPE VARCHAR2(10),
    CARD_NO CHAR(20),
    INVENTORY_CODE CHAR(10),
    INVENTORY_NO CHAR(20),
    CARD_PASSBOOK_NO CHAR(30),
    COD_CASH_ANALYSIS CHAR(20),
    BANK_INFORMATION_NO CHAR(8),
    BATCH_NO CHAR(10),
    SUMMARY VARCHAR2(60),
    MAIN_IC_TYPE CHAR(1),
    MAIN_IC_NO CHAR(48),
    MAIN_IC_NAME CHAR(64),
    MAIN_IC_CHECK_RETURN_CODE CHAR(1),
    DEPUTY_IC_TYPE CHAR(1),
    DEPUTY_IC_NO CHAR(48),
    DEPUTY_NAME CHAR(64),
    DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
    ACCOUNT_PROPERTY CHAR(4),
    CHEQUE_NO CHAR(20),
    COD_EXT_TASK CHAR(10),
    COD_MODULE CHAR(4),
    ACC_PURPOSE_CODE VARCHAR2(15),
    NATIONALITY CHAR(3),
    CUSTOMER_NAME CHAR(192),
    COD_INCOME_EXPENSE CHAR(6),
    COD_EXT_BRANCH CHAR(6),
    COD_ACCT_TITLE CHAR(192),
    FLG_CA_TT CHAR(1),
    DAT_EXT_LOCAL DATE,
    ACCT_OWNER_VALID_RESULT CHAR(1),
    FLG_DR_CR CHAR(1),
    FLG_ONLINE_UPLOAD CHAR(1),
    FLG_STMT_DISPLAY CHAR(1),
    COD_TXN_TYPE NUMBER(1),
    DAT_TS_TXN TIMESTAMP(6),
    LC_BG_GUARANTEE_NO VARCHAR2(20),
    COD_OTHER_ACCT_NO CHAR(22),
    COD_MOD_OTHER_ACCT_NO CHAR(4),
    COD_CC_BRN_SUB_ACCT NUMBER(5),
    COD_CC_BRN_OTHR_ACCT NUMBER(5),
    COD_ENTITY_VPD NUMBER(5) default NVL(sys_context('CLIENTCONTEXT','entity_code'),11),
    COD_EXT_TASK_REV VARCHAR2(10)
    )
    partition by hash (REF_TXN_NO)
    PARTITIONS 128
    store in (FCHDATA1,FCHDATA2,FCHDATA3,FCHDATA4, FCHDATA5, FCHDATA6, FCHDATA7, FCHDATA8);
    insert /*+APPEND NOLOGGING */ into XFACE_ADDL_DETAILS_TXNLOG
    select /*+PARALLEL */ * from XFACE_ADDLDTS_TXNLOG_PTPART;
    -- Add comments to the table
    comment on table XFACE_ADDL_DETAILS_TXNLOG
    is ' Additional Data log table ';
    -- Add comments to the columns
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO
    is 'Transaction Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO
    is 'User Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_KEY_NO
    is 'Unique key to identify a leg of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO_ORG
    is 'Original Transaction Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO_ORG
    is 'Original Transaction User Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.RECON_CODE
    is 'Reconciliation of transactions in future';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TASK_DERIVED
    is 'Transaction mnemonic for the request';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CHNL_ID
    is 'Channel ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SERVICE_ID
    is 'Service ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_USER_ID
    is 'User ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_AUTH_ID
    is 'Authorizer ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_NO
    is 'It can be Card number or MCA or GL or CASH GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TYP_ACCT_NO
    is 'Type of input (Valid values CARD, MCA, GL, CASH, LN)';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SUB_ACCT_NO
    is 'MC Sub Account Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_DEP_NO
    is 'Deposit Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.AMOUNT
    is 'Transaction Amount';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CCY
    is 'Currency Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_POST
    is 'Posting Date of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_VALUE
    is 'Value Date of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TXT_TXN_NARRATIVE
    is 'Text Transaction Narrative';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DATE_CHEQUE_ISSUE
    is 'Date of Issue of Cheque';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TXN_BUSINESS_TYPE
    is 'Transaction Business Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_NO
    is 'Card Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_CODE
    is 'Inventory Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_NO
    is 'Inventory Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_PASSBOOK_NO
    is 'Card Passbook Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CASH_ANALYSIS
    is 'Cash Analysis Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.BANK_INFORMATION_NO
    is 'Bank Information Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.BATCH_NO
    is 'Batch Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.SUMMARY
    is 'Summary';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_TYPE
    is 'IC Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NO
    is 'IC Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NAME
    is 'IC Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_CHECK_RETURN_CODE
    is 'IC Check Return Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_TYPE
    is 'Deputy IC Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_NO
    is 'Deputy IC Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_NAME
    is 'Deputy Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_CHECK_RETURN_CODE
    is 'Deputy IC Check Return Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCOUNT_PROPERTY
    is 'Account Property';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CHEQUE_NO
    is 'Cheque Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_TASK
    is 'External Task Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MODULE
    is 'Module Code - CH, TD, RD , LN, CASH, GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACC_PURPOSE_CODE
    is 'Account Purpose Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.NATIONALITY
    is 'Nationality';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CUSTOMER_NAME
    is 'Customer Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_INCOME_EXPENSE
    is 'Income Expense Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_BRANCH
    is 'External Branch Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_TITLE
    is 'Account Title Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_CA_TT
    is 'Cash or Funds Transfer flag';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_EXT_LOCAL
    is 'Local Date';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCT_OWNER_VALID_RESULT
    is 'Account Owner Valid Result';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_DR_CR
    is 'Flag Debit Credit - D, C.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_ONLINE_UPLOAD
    is 'Flag Online Upload - O, U.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_STMT_DISPLAY
    is 'Statement Display Flag - Y/N, Y(Normal Reversal), N(Correction Reversal)';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TXN_TYPE
    is 'To denote the kind of transaction:
    1 - Cash Credit Transaction
    2 - Cash Debit Transaction
    3 - Funds Transfer Credit Transaction
    4 - Funds Transfer Debit Transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_TS_TXN
    is 'Date and Timestamp of the record being inserted';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.LC_BG_GUARANTEE_NO
    is 'LC/BG Guarantee Number for which the request for the Liquidation has been initiated.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_OTHER_ACCT_NO
    is 'Other Account No';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MOD_OTHER_ACCT_NO
    is 'Module Code of Other Account No - CH, TD, RD , LN, CASH, GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_SUB_ACCT
    is 'Branch Code for Sub Account';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_OTHR_ACCT
    is 'Branch Code for Other Account';
    -- Create/Recreate indexes
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_1;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_2;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_3;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_4;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_5;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_6;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_7;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_8;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_1 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_2 on XFACE_ADDL_DETAILS_TXNLOG (REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_3 on XFACE_ADDL_DETAILS_TXNLOG (COD_SUB_ACCT_NO, FLG_STMT_DISPLAY, DAT_POST, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_SUB_ACCT_NO, FLG_STMT_DISPLAY) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_4 on
    XFACE_ADDL_DETAILS_TXNLOG (COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH)
    PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_5 on XFACE_ADDL_DETAILS_TXNLOG (COD_USER_ID, DAT_POST, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_USER_ID) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_6 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO_ORG, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(REF_TXN_NO_ORG) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_7 on XFACE_ADDL_DETAILS_TXNLOG (DAT_EXT_LOCAL, DAT_POST,TXN_BUSINESS_TYPE, FLG_ONLINE_UPLOAD, COD_CHNL_ID, REF_TXN_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(DAT_EXT_LOCAL) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    /* Previous Key order: (COD_EXT_BRANCH,DAT_POST,REF_TXN_NO_ORG,COD_SERVICE_ID,COD_ENTITY_VPD) */
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_8 on XFACE_ADDL_DETAILS_TXNLOG (DAT_POST, COD_EXT_BRANCH, REF_TXN_NO_ORG, COD_SERVICE_ID, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(DAT_POST) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    ALTER TABLE XFACE_ADDL_DETAILS_TXNLOG NOPARALLEL PCTFREE 50 INITRANS 128 LOGGING;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_1 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_2 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_3 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_4 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_5 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_6 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_7 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_8 NOPARALLEL INITRANS 128;
    BEGIN
    DBMS_RLS.ADD_POLICY(OBJECT_SCHEMA => UPPER('FCR44HOST'),
    OBJECT_NAME => UPPER('XFACE_ADDL_DETAILS_TXNLOG'),
    POLICY_NAME => 'FC_ENTITY_POLICY',
    FUNCTION_SCHEMA => UPPER('FCR44HOST'),
    POLICY_FUNCTION => 'pk_cm_vpd_policy.get_entity_predicate',
    STATEMENT_TYPES => 'select,insert,update,delete',
    UPDATE_CHECK => TRUE,
    ENABLE => TRUE,
    STATIC_POLICY => FALSE,
    POLICY_TYPE => DBMS_RLS.SHARED_STATIC,
    LONG_PREDICATE => FALSE,
    SEC_RELEVANT_COLS => NULL,
    SEC_RELEVANT_COLS_OPT => NULL);
    END;
    begin
    dbms_stats.gather_table_stats(ownname => 'FCR44HOST',tabname => 'XFACE_ADDL_DETAILS_TXNLOG', cascade=>true,method_opt=>'for all columns size 1',degree => 32, GRANULARITY => 'PARTITION');
    end;
    Query which takes time.
    INSERT INTO xface_addl_dtls_tlog_temp
    (ref_txn_no,
    ref_usr_no,
    ref_key_no,
    ref_txn_no_org,
    ref_usr_no_org,
    recon_code,
    cod_task_derived,
    cod_chnl_id,
    cod_service_id,
    cod_user_id,
    cod_auth_id,
    cod_acct_no,
    typ_acct_no,
    cod_sub_acct_no,
    cod_dep_no,
    amount,
    cod_ccy,
    dat_post,
    dat_value,
    txt_txn_narrative,
    date_cheque_issue,
    txn_business_type,
    card_no,
    inventory_code,
    inventory_no,
    card_passbook_no,
    cod_cash_analysis,
    bank_information_no,
    batch_no,
    summary,
    main_ic_type,
    main_ic_no,
    main_ic_name,
    main_ic_check_return_code,
    deputy_ic_type,
    deputy_ic_no,
    deputy_name,
    deputy_ic_check_return_code,
    account_property,
    cheque_no,
    cod_ext_task,
    cod_module,
    acc_purpose_code,
    nationality,
    customer_name,
    cod_income_expense,
    cod_ext_branch,
    cod_acct_title,
    flg_ca_tt,
    dat_ext_local,
    acct_owner_valid_result,
    flg_dr_cr,
    flg_online_upload,
    flg_stmt_display,
    cod_txn_type,
    dat_ts_txn,
    lc_bg_guarantee_no,
    cod_other_acct_no,
    cod_mod_other_acct_no,
    cod_cc_brn_sub_acct,
    cod_cc_brn_othr_acct,
    cod_ext_task_rev,
    sessionid)
    SELECT ref_txn_no,
    ref_usr_no,
    ref_key_no,
    ref_txn_no_org,
    ref_usr_no_org,
    recon_code,
    cod_task_derived,
    cod_chnl_id,
    cod_service_id,
    cod_user_id,
    cod_auth_id,
    cod_acct_no,
    typ_acct_no,
    cod_sub_acct_no,
    cod_dep_no,
    amount,
    cod_ccy,
    dat_post,
    dat_value,
    txt_txn_narrative,
    date_cheque_issue,
    txn_business_type,
    card_no,
    inventory_code,
    inventory_no,
    card_passbook_no,
    cod_cash_analysis,
    bank_information_no,
    batch_no,
    summary,
    main_ic_type,
    main_ic_no,
    main_ic_name,
    main_ic_check_return_code,
    deputy_ic_type,
    deputy_ic_no,
    deputy_name,
    deputy_ic_check_return_code,
    account_property,
    cheque_no,
    cod_ext_task,
    cod_module,
    acc_purpose_code,
    nationality,
    customer_name,
    cod_income_expense,
    cod_ext_branch,
    cod_acct_title,
    flg_ca_tt,
    dat_ext_local,
    acct_owner_valid_result,
    flg_dr_cr,
    flg_online_upload,
    flg_stmt_display,
    cod_txn_type,
    dat_ts_txn,
    lc_bg_guarantee_no,
    cod_other_acct_no,
    cod_mod_other_acct_no,
    cod_cc_brn_sub_acct,
    cod_cc_brn_othr_acct,
    cod_ext_task_rev,
    var_l_sessionid
    FROM xface_addl_details_txnlog
    WHERE cod_sub_acct_no = var_pi_cod_acct_no
    AND dat_post between var_pi_start_dat AND var_pi_end_dat;
    The index used is in_xface_addl_details_txnlog_3.
    The first time I execute the query it takes a huge amount of time, but subsequent runs are faster. That only holds if I pass the same account and criteria again.
    I observed that the first run does a lot of physical reads, which is where the time goes; subsequent runs need far fewer physical reads.
    Requesting suggestions. This is an account statement inquiry, and an account may have 10000 txns in a day.
    By mistake I earlier raised this in "Oracle -> Text" as
    "Slow inserts due to physical reads every time for a fresh account I am passing".
    They suggested using bind variables, but as far as I know we are already using bind variables for the account number and the start and end dates.
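    One way to put numbers on the "first run does physical reads" observation (an editor's sketch; the text match is an assumption, and since the insert runs from PL/SQL the cursor text may differ) is to compare disk reads with buffer gets for the cursor in V$SQL:
    -- Editor's sketch: how much of the work per execution is physical I/O?
    SELECT sql_id, executions, buffer_gets, disk_reads,
           ROUND(disk_reads / NULLIF(executions, 0), 1) AS disk_reads_per_exec
    FROM v$sql
    WHERE sql_text LIKE 'INSERT INTO xface_addl_dtls_tlog_temp%';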

    My replies are below.
    Whenever you post provide your 4 digit Oracle version (SELECT * FROM V$VERSION).
    Ans :
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    "CORE     11.2.0.3.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    1. If your question is about the INSERT query into xface_addl_dtls_tlog_temp why didn't you post any information about the DDL for that table? Is it the same structure as the table you did post DDL for?
    Ans :
    -- Create table
    create global temporary table XFACE_ADDL_DTLS_TLOG_TEMP
    (
    REF_TXN_NO CHAR(40) not null,
    REF_USR_NO CHAR(40) not null,
    REF_KEY_NO VARCHAR2(50),
    REF_TXN_NO_ORG CHAR(40),
    REF_USR_NO_ORG CHAR(40),
    RECON_CODE VARCHAR2(25),
    COD_TASK_DERIVED VARCHAR2(5),
    COD_CHNL_ID VARCHAR2(6),
    COD_SERVICE_ID VARCHAR2(10),
    COD_USER_ID VARCHAR2(30),
    COD_AUTH_ID VARCHAR2(30),
    COD_ACCT_NO CHAR(22),
    TYP_ACCT_NO VARCHAR2(4),
    COD_SUB_ACCT_NO CHAR(16),
    COD_DEP_NO NUMBER(5),
    AMOUNT NUMBER(15,2),
    COD_CCY VARCHAR2(3),
    DAT_POST DATE,
    DAT_VALUE DATE,
    TXT_TXN_NARRATIVE VARCHAR2(60),
    DATE_CHEQUE_ISSUE DATE,
    TXN_BUSINESS_TYPE VARCHAR2(10),
    CARD_NO CHAR(20),
    INVENTORY_CODE CHAR(10),
    INVENTORY_NO CHAR(20),
    CARD_PASSBOOK_NO CHAR(30),
    COD_CASH_ANALYSIS CHAR(20),
    BANK_INFORMATION_NO CHAR(8),
    BATCH_NO CHAR(10),
    SUMMARY VARCHAR2(60),
    MAIN_IC_TYPE CHAR(1),
    MAIN_IC_NO VARCHAR2(150),
    MAIN_IC_NAME VARCHAR2(192),
    MAIN_IC_CHECK_RETURN_CODE CHAR(1),
    DEPUTY_IC_TYPE CHAR(1),
    DEPUTY_IC_NO VARCHAR2(150),
    DEPUTY_NAME VARCHAR2(192),
    DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
    ACCOUNT_PROPERTY CHAR(4),
    CHEQUE_NO CHAR(20),
    COD_EXT_TASK CHAR(10),
    COD_MODULE CHAR(4),
    ACC_PURPOSE_CODE VARCHAR2(15),
    NATIONALITY CHAR(3),
    CUSTOMER_NAME CHAR(192),
    COD_INCOME_EXPENSE CHAR(6),
    COD_EXT_BRANCH CHAR(6),
    COD_ACCT_TITLE VARCHAR2(360),
    FLG_CA_TT CHAR(1),
    DAT_EXT_LOCAL DATE,
    ACCT_OWNER_VALID_RESULT CHAR(1),
    FLG_DR_CR CHAR(1),
    FLG_ONLINE_UPLOAD CHAR(1),
    FLG_STMT_DISPLAY CHAR(1),
    COD_TXN_TYPE NUMBER(1),
    DAT_TS_TXN TIMESTAMP(6),
    LC_BG_GUARANTEE_NO VARCHAR2(20),
    COD_OTHER_ACCT_NO CHAR(22),
    COD_MOD_OTHER_ACCT_NO CHAR(4),
    COD_CC_BRN_SUB_ACCT NUMBER(5),
    COD_CC_BRN_OTHR_ACCT NUMBER(5),
    COD_EXT_TASK_REV VARCHAR2(10),
    SESSIONID NUMBER default USERENV('SESSIONID') not null
    )
    on commit delete rows;
    -- Create/Recreate indexes
    create index IN_XFACE_ADDL_DTLS_TLOG_TEMP on XFACE_ADDL_DTLS_TLOG_TEMP (COD_SUB_ACCT_NO, REF_TXN_NO, COD_SERVICE_ID, REF_KEY_NO, SESSIONID);
    2. Why doesn't your INSERT query use APPEND, NOLOGGING and PARALLEL like the first query you posted? If those help for the first query why didn't you try them for the query you are now having problems with?
    Ans :
    I will try to use APPEND, but I cannot use PARALLEL since I have hardware limitations.
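    One caveat before relying on APPEND here (an editor's note, not from the thread): after a direct-path insert the table cannot be queried in the same transaction until you commit, and this temporary table is declared ON COMMIT DELETE ROWS, so the commit would discard the rows just loaded. A self-contained sketch of the behaviour, using a hypothetical demo table:
    -- Hypothetical demo objects, not part of the application schema
    CREATE GLOBAL TEMPORARY TABLE gtt_append_demo (n NUMBER) ON COMMIT DELETE ROWS;
    INSERT /*+ APPEND */ INTO gtt_append_demo
    SELECT level FROM dual CONNECT BY level <= 10;
    -- Querying in the same transaction now fails with
    -- ORA-12838: cannot read/modify an object after modifying it in parallel
    SELECT COUNT(*) FROM gtt_append_demo;
    COMMIT;                                -- needed before the table can be read again...
    SELECT COUNT(*) FROM gtt_append_demo;  -- ...but ON COMMIT DELETE ROWS has now emptied it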
    3. What does this mean: 'Index referred is in_xface_addl_details_txnlog_3.'? You haven't posted any plan that refers to any index. Do you have an execution plan? Why didn't you post it?
    Ans :
    Plan hash value: 4081844790
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | INSERT STATEMENT | | | | 5 (100)| | | |
    | 1 | LOAD TABLE CONVENTIONAL | | | | | | | |
    | 2 | FILTER | | | | | | | |
    | 3 | PARTITION HASH ALL | | 1 | 494 | 5 (0)| 00:00:01 | 1 | 128 |
    | 4 | TABLE ACCESS BY GLOBAL INDEX ROWID| XFACE_ADDL_DETAILS_TXNLOG | 1 | 494 | 5 (0)| 00:00:01 | ROWID | ROWID |
    | 5 | INDEX RANGE SCAN | IN_XFACE_ADDL_DETAILS_TXNLOG_3 | 1 | | 3 (0)| 00:00:01 | 1 | 128 |
    4. Why are you defining 37 columns as CHAR datatypes? Are you aware that CHAR data REQUIRES the use of the designated number of BYTES/CHARACTERS?
    Ans :
    I understand and appreciate your points, but it is a huge application built up over a period of time, and I am afraid I will not be allowed to change the datatypes; there are a lot of queries over this table.
    5. Are you aware that #4 means those 37 CHAR columns, even if all of them are NULL, give you a MINIMUM record length of 1012 bytes? Care to guess how many of those records Oracle can fit into an 8k block? And that is ignoring the other 26 VARCHAR2, NUMBER and DATE columns.
    Two of your columns take 192 bytes MINIMUM even if they are null
    CUSTOMER_NAME CHAR(192),
    COD_ACCT_TITLE CHAR(192)
    Why are you wasting all of that space? If you are using a multi-byte character set and your data is multi-byte those 37 columns are using even more space because some characters will use more than one byte.
    If the name and title average 30 characters/bytes, those two columns alone carry 300+ unused bytes per row. Across 40 million records the unused bytes in just those two columns take about 12 GB of space.
    With a block size of 8k that wastes roughly 1.5 million blocks that Oracle has to read just to skip over empty space.
    I highly suspect that your use of CHAR is a large part of this performance problem and probably other performance problems in your system. Not only for this table but for any other table that uses similar CHAR datatypes and wastes space.
    Please reconsider your use of CHAR datatypes like this. I can't imagine what justification you have for using them.
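    To see the cost concretely, here is a small self-contained sketch (the demo table is hypothetical, not part of the application) showing how CHAR pads to its declared length while VARCHAR2 does not:
    -- Hypothetical demo table illustrating the padding cost of CHAR versus VARCHAR2
    CREATE TABLE char_vs_varchar2_demo (c CHAR(192), v VARCHAR2(192));
    INSERT INTO char_vs_varchar2_demo VALUES ('SMITH', 'SMITH');
    SELECT VSIZE(c) AS char_bytes, VSIZE(v) AS varchar2_bytes
    FROM char_vs_varchar2_demo;
    -- char_bytes = 192 (blank-padded to the declared length), varchar2_bytes = 5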
    Ans :
    I understand your points, but it is a huge application built up over a period of time and I am afraid I will not be allowed to change the datatypes.
    I have to manage within the current situation. I am not expecting the query to respond in milliseconds, but it should not take the 40 seconds it currently does.
    Edited by: Rohit Jadhav on Dec 30, 2012 6:44 PM

  • First query very slow, subsequent queries fine

    In our 9i application, the first query we run is extremely slow if the database has not been used for some time. This happens reliably overnight, but also after an hour or so of inactivity during the day.
    After the initial query eventually completes, subsequent queries are fast with no problem. The problem is only with the first query.
    This does not happen with all data, just a particular group of data in our database.
    any suggestions?
    Thanks
    John

    Hi John !
    To me, it looks like a data cache effect.
    A database needs to manipulate data, and it uses a data cache to avoid reading from and writing to disk too often.
    If a query does not find the data in the cache, the database has to read it from disk and put it in the data cache (your first query). If the data is already in the cache, there is no need to read it from disk, so the response time is far better (your following queries).
    So if this is an important problem, what can you do?
    - Check your query execution plan and try to reduce the number of data reads (avoid full table scans, for example).
    - Raise the size of your db cache (check the cache hit ratio (1)).
    - You can keep a table's data in the cache (the table CACHE option; see the sketch after this list), but only if the data set is small (check dba_segments, and dba_tables after gathering statistics). If the data set is large it can evict other data from the cache, so your query will be fast but other queries will become very slow.
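    A minimal sketch of the CACHE option mentioned in the last point (the table name is hypothetical; use it only for small, frequently read tables):
    ALTER TABLE small_reference_table CACHE;
    -- Confirm the setting; the CACHE column shows 'Y' when the option is set
    SELECT table_name, cache FROM user_tables WHERE table_name = 'SMALL_REFERENCE_TABLE';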
    It could be a library cache effect too (the same kind of problem: entries are kept for queries that have already been parsed, so re-executing the same query can avoid a hard parse), for example if you handle queries with 5,000 bind variables.
    You can check the library cache hit ratio too (2).
    To be sure of your problem, I think the best approach is to trace your query
    1) when executed the first time (cold run)
    2) and when executed the 4th time (hot run).
    Run tkprof on the two trace files and look at where the difference is. There are 3 phases: parse, execute and fetch.
    A data cache problem shows up as high fetch time; a library cache problem shows up as high parse time. You will also see, on the execution plan, which step of your query does the disk reads.
    You can post here the statistics and timings for your 1st run and the following runs, and even your trace files, if you want me to check your conclusions.
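    A minimal sketch of the tracing approach described above (the trace file identifiers and tkprof options are only examples):
    ALTER SESSION SET tracefile_identifier = 'cold_run';
    ALTER SESSION SET sql_trace = TRUE;
    -- run the slow query here, then repeat the whole exercise with tracefile_identifier = 'hot_run'
    ALTER SESSION SET sql_trace = FALSE;
    -- then, at the operating system prompt on the server:
    -- tkprof <trace_file>.trc cold_run.prf sys=no sort=prsela,exeela,fchela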
    Regards,
    Jean-Luc
    (1)
    Cache hit ratio.
    Warning 1: it is calculated from statistics accumulated since the last startup (so if the last startup was a few weeks ago, you would need to restart, wait for a good sample of your workload to run, and then try the following query).
    Warning 2: there is no universal ">98% is good" and "<90% is bad"; it depends on your applications. For example, if the same data is frequently accessed in a transactional database, you should raise the ratio as high as you can.
    But imagine a database whose clients only need their data once a day or once a week (a database or schema of client information like this very forum [a good example, because I suspect they use Oracle databases, you know :)]). There you can accept a higher response time, lots of disk reads, and so a hit ratio < 90.
    Cache hit ratio :
    select round((1-(pr.value/(bg.value+cg.value)))*100,2) cachehit
    from v$sysstat pr, v$sysstat bg, v$sysstat cg
    where pr.name = 'physical reads'
    and bg.name = 'db block gets'
    and cg.name = 'consistent gets';
    (2)
    The same warning as (1) Warning 1 applies,
    but not (1) Warning 2: the library cache hit ratio is generally higher than the data cache hit ratio, typically > 98.
    Library cache hit ratio :
    select round(sum(pinhits)/sum(pins) * 100,2) efficacite from v$librarycache;

  • Query very slow on Windows 2003 Server

    Hi,
    Our customer is running 10g on Windows 2003 Server. Some queries perform badly. I imported the data into a 10g DB on Linux (at our office) to analyze and test. Strangely enough, the same query takes more than 10 times longer to run on the Windows machine than on the Linux one. The Windows machine is dedicated to Oracle; it is not overloaded.
    SELECT
    plan_task.id_task,
    plan_task.task_id,
    plan_task.taskdef_y_n,
    plan_task.description,
    plan_task.status,
    plan_task.team_id,
    plan_task.activity_id,
    plan_task.task_start_datetime,
    plan_task.district_id,
    plan_task.task_end_datetime,
    plan_task.taskdef_freq_code,
    plan_task.taskdef_start_time,
    plan_task.taskdef_end_time,
    plan_task.order_nr
    d,
    PREVENT.PLAN_PERSONS_AVAILABLE_SHORT(ID_TASK) PERS_OK,
    PREVENT.PLAN_MATERIALS_AVAILABLE_SHORT(ID_TASK) MAT_OK
    FROM PREVENT.PLAN_TASK
    WHERE (TASKDEF_Y_N='N') AND (TASK_START_DATETIME>=to_date(to_char(SYSDATE,'dd-mm-yyyy'),'dd-mm-yyyy'))
    AND (TASK_START_DATETIME<to_date(to_char(SYSDATE,'dd-mm-yyyy'),'dd-mm-yyyy')+1)
    ORDER BY DESCRIPTION;
    On LINUX (Intel 2 GHz, 2 Gb mem) takes 0,5 seconds to execute (46 rows returned)
    ================================================================================
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 27 Bytes: 2,436 Cardinality: 29
         4 SORT ORDER BY Cost: 27 Bytes: 2,436 Cardinality: 29                
              3 FILTER           
                   2 TABLE ACCESS BY INDEX ROWID TABLE PREVENT.PLAN_TASK Cost: 26 Bytes: 2,436 Cardinality: 29      
                        1 INDEX RANGE SCAN INDEX PREVENT.PLAN_TASK_START_DATETIME_I Cost: 2 Cardinality: 30
    On WINDOWS (Intel 2 GHz, 2 Gb mem) takes 11 seconds to execute (46 rows returned)
    =================================================================================
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 35 Bytes: 3,276 Cardinality: 39
         4 SORT ORDER BY Cost: 35 Bytes: 3,276 Cardinality: 39                
              3 FILTER           
                   2 TABLE ACCESS BY INDEX ROWID TABLE PREVENT.PLAN_TASK Cost: 34 Bytes: 3,276 Cardinality: 39      
                        1 INDEX RANGE SCAN INDEX PREVENT.PLAN_TASK_START_DATETIME_I Cost: 2 Cardinality: 40
    NOTEs:
    - The data is exactly the same on both machines
    - I analyzed_schema on both machines/DB's before running the query
    - The SGA size and DB_BUFFERS are (almost) set to the same value
    - Oracle version is the same: 10g
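    Since the plans are effectively identical, the difference is likely in the actual work done (I/O, or the per-row calls to PLAN_PERSONS_AVAILABLE_SHORT and PLAN_MATERIALS_AVAILABLE_SHORT). A sketch, assuming SQL*Plus, that compares run-time statistics on each machine and times the statement without the two function calls to see how much of the elapsed time they account for:
    -- Editor's sketch: run on both machines and compare consistent gets / physical reads
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT plan_task.id_task,
           plan_task.description
    FROM prevent.plan_task
    WHERE taskdef_y_n = 'N'
    AND task_start_datetime >= TRUNC(SYSDATE)
    AND task_start_datetime < TRUNC(SYSDATE) + 1
    ORDER BY description;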

    On Windows:
    - I exported the data
    - Re-created my 2 tablespaces, to set: extent management local, uniform 1M
    - imported the data
    - dbms_stats.gather_schema_stats( option=>'GATHER')
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 36 Bytes: 3,32 Cardinality: 40
         4 SORT ORDER BY Cost: 36 Bytes: 3,32 Cardinality: 40                
              3 FILTER           
                   2 TABLE ACCESS BY INDEX ROWID TABLE PREVENT.PLAN_TASK Cost: 35 Bytes: 3,32 Cardinality: 40      
                        1 INDEX RANGE SCAN INDEX PREVENT.PLAN_TASK_START_DATETIME_I Cost: 2 Cardinality: 41
    Tested, takes 7 seconds now (was 11), but this is still slow compared to the run on Linux
    Grt, Stephan
    Edited by: Stephan van Hoof on Jan 9, 2009 8:41 PM

  • Update Query very slow

    Hi All
    I have three setups on which I have to run the same query, shown below. The execution plan is the same on all three setups, yet on one of them the query takes almost 8 hours to complete, while on the other two it takes 2 hours. The RAM available on each setup is the same (16 GB). I tried increasing the SGA size but did not get the expected results, and I do not have DBA support. I have also analysed and changed the Index_Optimizer_Cost parameter and made sure it is the same on all three setups.
    The main problem is that I cannot modify the query, as it is generated by one of the processes; but as mentioned, the generated query is the same on all three setups. I also changed the log buffer size. The query is:
    UPDATE /*+ BYPASS_UJVC */ (SELECT Main_Table.n_exp_covered_amt_irb AS T0, CASE WHEN COND0 = 0 THEN BP0 ELSE BP1 END AS T1 FROM Global_Temp_Table, Main_Table WHERE Main_Table.n_gaap_skey = Global_Temp_Table.n_gaap_skey AND Main_Table.n_run_skey = Global_Temp_Table.n_run_skey AND Main_Table.n_acct_skey = Global_Temp_Table.n_acct_skey AND Main_Table.fic_mis_date = Global_Temp_Table.fic_mis_date) SET T0 = T1
    The indexes are the same on all three setups, and an index exists on the columns named in the WHERE clause of the query.
    The Oracle version is 10.0.1.0 on the first setup, 10.0.2.0 on the second and 10.0.4.0 on the third. The query is slow on the setup running 10.0.2.0.
    When I looked at the session while the query was executing, SORT OUTPUT was taking most of the time.
    Thanks in advance. It is very critical for me to get this resolved; any suggestions are extremely welcome.
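    The poster says the statement cannot be changed, but for reference here is the same update expressed as a MERGE (an editor's sketch using the placeholder names from the post; COND0, BP0 and BP1 are assumed to come from the temporary table), which avoids the undocumented BYPASS_UJVC hint:
    MERGE INTO Main_Table m
    USING Global_Temp_Table g
    ON (m.n_gaap_skey = g.n_gaap_skey
        AND m.n_run_skey = g.n_run_skey
        AND m.n_acct_skey = g.n_acct_skey
        AND m.fic_mis_date = g.fic_mis_date)
    WHEN MATCHED THEN UPDATE
       SET m.n_exp_covered_amt_irb = CASE WHEN g.cond0 = 0 THEN g.bp0 ELSE g.bp1 END;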

    Hi,
    Please check the indexes on the columns of the table where the sort is happening; if they are missing, create them, or rebuild them if they already exist. Also look at the SQL Tuning Advisor recommendations in dbconsole (a sketch of running the advisor manually is below).
    thanks
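    A minimal sketch of running the SQL Tuning Advisor manually for one statement (the sql_id and task name are placeholders; this requires the Tuning Pack licence and the ADVISOR privilege):
    DECLARE
      l_task_name VARCHAR2(64);
    BEGIN
      l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id    => '&slow_sql_id',
                                                     task_name => 'upd_tuning_task');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task_name);
    END;
    /
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('upd_tuning_task') FROM dual;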
