Oracle 10g Performance

Hi All,
We are facing a performance issue with Oracle. We created a table with some fields and a BLOB column. Our application continuously inserts and deletes data in that table at a very high rate. We have enabled NOCACHE for the BLOB, and the BLOB is stored in a separate tablespace. Using a monitoring tool we observed that inserts into the table take a long time; on further digging we found that the time is spent on the BLOB column. We also observed long waits on 'db file sequential read' during inserts. As mentioned earlier, other threads are also deleting records from this table. Is this problem occurring because we are inserting and deleting the same data at a high rate? Also, Oracle is doing a full table scan on inserts.
Please Help
Thanks in Advance.

No database version?
No table structures?
No example data?
No code?
No way we can really help unfortunately.
[How to post a SQL statement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
However, as it sounds more like a whole-application issue rather than a single SQL statement, you may want to post all your details over in the Database - General forum where the DBAs tend to hang out.

Similar Messages

  • Oracle 10G Performance Tuning Documents

    Hi all
Can anyone tell me where I can get the Oracle 10g performance tuning materials (PDFs, documents)?
    Thanks in advance

    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211.pdf

  • Oracle 10g performance is slow

Dear Experts,
How can we improve Oracle 10g performance? We are upgrading from Oracle 8 to Oracle 10g on the Windows platform, using Oracle Developer 6 as the front end.
    thanks in advance

    Do you have statistics gathered on the tables in the 8i database? Can you post the explain plan for the query in both databases?
    Since you know what SQL is having poor performance you can use TKPROF and SQL TRACE to see where your query is spending its time.
    Try the following:
    alter session set timed_statistics=true;
    alter session set max_dump_file_size=unlimited;
    alter session set tracefile_identifier='BAD_SQL';
    alter session set events '10046 trace name context forever, level 12';
    <insert sql with poor response time>
    disconnect
    Use the TKPROF utility on the file found in USER_DUMP_DEST that contains the string BAD_SQL.
For information on how to interpret the TKPROF output, see the following link.
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm
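To tie the steps above together, here is a minimal sketch (Python used only for illustration; the directory path and file-name pattern are assumptions) of locating the trace file that carries the BAD_SQL identifier before running TKPROF on it. When tracefile_identifier is set, Oracle appends it to the trace file name, so a name match is enough:

```python
import glob
import os

def find_trace_files(user_dump_dest, identifier="BAD_SQL"):
    """Return trace files whose name contains the tracefile_identifier.

    Oracle appends the value of tracefile_identifier to the trace file
    name, so matching on the name finds the session's trace file.
    """
    pattern = os.path.join(user_dump_dest, f"*{identifier}*.trc")
    return sorted(glob.glob(pattern))

# Usage sketch (hypothetical path) - then run TKPROF on each match, e.g.:
#   tkprof orcl_ora_1234_BAD_SQL.trc report.txt sys=no sort=prsela,exeela,fchela
for trc in find_trace_files("/u01/app/oracle/admin/orcl/udump"):
    print(trc)
```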

  • Oracle 10g performance issues

    Hi,
We were using Oracle 9i on Solaris 5.8 and it was working fine, with some minor performance issues. We rebuilt the server with the new Solaris 5.10 and installed Oracle 10g.
Now we are experiencing performance issues in Oracle 10g. The issue arises when accessing the database through WebSphere 5.1.
We have analyzed the schema and rebuilt the indexes; the SGA is 4.5 GB, the PGA is 2.0 GB, and the Solaris server has 16 GB of RAM. We also have some materialized views (possibly a cause of the performance issues, due to refreshes - not sure).
I have also changed some parameters in the init.ora file, such as query_rewrite_integrity = STALE_TOLERATED and open_cursors = 1500.
Could it be something to do with the driver through which the data is accessed? I suspect it is not utilizing the indexes on the tables.
    Can anyone please suggest, what could be the issue ?

    <p>There are a lot of changes to the optimizer in the upgrade from 9i to 10g, and you need to be aware of them. There are also a number of changes to the default stats collection mechanism, so after your upgrade your statistics (hence execution paths) could change dramatically.
    </p>
    <p>
Greg Rahn has a useful entry on his blog about stats collection, and the blog also points to an Oracle white paper which will give you a lot of ideas about where the optimizer changed - which may help you spot your critical issues.
    </p>
    <p>Otherwise, follow triggb's advice about using Statspack to find the SQL that is the most expensive - it's reasonably likely to be this SQL that has changed execution plans in the upgrade.
    </p>
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Oracle 10g performance tuning tools

    hi,
Can anyone please suggest a database tuning tool to use for improving the performance of an Oracle 10g database?

    Hi,
    Do you want a tuning tool that does not require the user to have in-depth Oracle knowledge? If so, try here:
    http://images.google.com/images?&q=ouija+board&um=1&ie=UTF-8&sa=N&tab=wi
    Seriously, I like to use AWR and STATSPACK reports, and there are some freeware tools to help analyze them, one that I sponsor:
    http://www.statspackanalyzer.com
For online tools, Oracle SQL Developer is a great way to get started, as is the Oracle performance pack:
    http://www.oracle.com/technology/products/database/sql_developer/index.html
    For third-party tuning tools, look at Confio, quite good:
    http://www.confio.com/
    Hope this helps. . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • Oracle 10G performance problems

    Hello,
we have a lot of performance problems with Oracle 10g. In particular, table scans on DRAW or AEN1 have long response times. It seems that the CBO uses the wrong strategy. The latest merge fix is already installed. Any idea to solve the problem is welcome.
    Best regards
    Juergen Remmert

    We had similar performance issues in our environment, once we upgraded from 9.2.0.2 to 10.2.0.2.
    Oracle: 10.2.0.2
    SAP: 4.7x110
    OS: SOLARIS 9 64bit
The above mentioned notes were very helpful. We had to install an Oracle patch as well (found on the marketplace): 6321245.
We also made the following Oracle parameter changes:
pga_aggregate_target = 144MB (default = 25MB)
    *.event="10027 trace name context forever, level 1"
    *.event="10028 trace name context forever, level 1"
    *.event="10162 trace name context forever, level 1"
    *.event="10183 trace name context forever, level 1"
    *.event="10191 trace name context forever, level 1"
    *.event="10629 trace name context forever, level 32"
    *.event="38068 trace name context forever, level 100"
    *.event="38043 trace name context forever, level 1"
    *.optimizer_index_caching=50
    *.optimizer_index_cost_adj=20
    *.parallel_execution_message_size=16384
    *._b_tree_bitmap_plans=FALSE
    *._index_join_enabled=FALSE
    *._optim_peek_user_binds=FALSE
    *._optimizer_mjc_enabled=FALSE
    *._sort_elimination_cost_ratio=10
    Remove
    *.optimizer_features_enable='9.2.0'
    HTH

  • ORACLE 10g PERFORMANCE ON SOLARIS 10

    Hi all,
    In sunfire v890 we have installed oracle 10g release 2 on solaris 10.
    prstat -a command shows :
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    105 root 9268M 6324M 20% 1:21:57 0.4%
    59 oracle 24G 22G 71% 0:04:33 0.1%
    2 nobody4 84M 69M 0.2% 0:11:32 0.0%
    2 esbuser 13M 9000K 0.0% 0:00:46 0.0%
    1 smmsp 7560K 1944K 0.0% 0:00:00 0.0%
    4 daemon 12M 7976K 0.0% 0:00:00 0.0%
    and top utility shows :
    last pid: 8639; load avg: 0.09, 0.09, 0.09; up 2+06:05:29 17:07:50
    171 processes: 170 sleeping, 1 on cpu
    CPU states: 98.7% idle, 0.7% user, 0.7% kernel, 0.0% iowait, 0.0% swap
    Memory: 32G phys mem, 22G free mem, 31G swap, 31G free swap
From prstat we conclude that the memory used by Oracle is 71%, whereas top says 31.25% is used.
Which one is true in this scenario?
Shall we go ahead and trust the top utility?
    Advance thanks to you.

> therefore from prstat we come to know that memory used by oracle is 71%, whereas top says 31.25% used... which one is true in this scenario... shall we go ahead in trusting top utility?

In this case top is more accurate. prstat pretends that all the memory used by each Oracle process is used only by that process, but much of the memory used by Oracle is shared between several processes. prstat counts that shared memory over and over for each process, resulting in the higher figure.
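The double counting can be illustrated with a toy calculation (all numbers below are invented for illustration, not taken from this system):

```python
def prstat_style_total(shared_mb, private_mb_per_proc, n_procs):
    """Sum of per-process RSS, counting the shared SGA once per process
    (roughly what adding up prstat's RSS column does)."""
    return n_procs * (shared_mb + private_mb_per_proc)

def actual_total(shared_mb, private_mb_per_proc, n_procs):
    """Real footprint: the shared segment is resident only once."""
    return shared_mb + n_procs * private_mb_per_proc

# A 4 GB SGA shared by 50 server processes, each with 20 MB private memory:
sga_mb, private_mb, procs = 4096, 20, 50
print(prstat_style_total(sga_mb, private_mb, procs))  # 205800 MB - wildly inflated
print(actual_total(sga_mb, private_mb, procs))        # 5096 MB
```

This is why summing per-process figures overstates Oracle's footprint, while top's system-wide free/used numbers do not.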
    http://forum.java.sun.com/thread.jspa?threadID=5114263&tstart=105
    Regards,
    [email protected]
    http://www.HalcyonInc.com

  • Oracle 10g performance issue

    Hi all,
    In sunfire v890 we have installed oracle 10g release 2 on solaris 10.
    prstat -a command shows :
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    105 root 9268M 6324M 20% 1:21:57 0.4%
    59 oracle 24G 22G 71% 0:04:33 0.1%
    2 nobody4 84M 69M 0.2% 0:11:32 0.0%
    2 esbuser 13M 9000K 0.0% 0:00:46 0.0%
    1 smmsp 7560K 1944K 0.0% 0:00:00 0.0%
    4 daemon 12M 7976K 0.0% 0:00:00 0.0%
    and top utility shows :
    last pid: 8639; load avg: 0.09, 0.09, 0.09; up 2+06:05:29 17:07:50
    171 processes: 170 sleeping, 1 on cpu
    CPU states: 98.7% idle, 0.7% user, 0.7% kernel, 0.0% iowait, 0.0% swap
    Memory: 32G phys mem, 22G free mem, 31G swap, 31G free swap
From prstat we conclude that the memory used by Oracle is 71%, whereas top says 31.25% is used.
Which one is true in this scenario?
Shall we go ahead and trust the top utility?
    Advance thanks to you.

> Hi Darren,
> The main thing is, the prstat -a command shows the oracle user occupying 70%, while the top utility shows 22 GB of memory free out of 32 GB. That means 10 GB is occupied by all users; as a percentage that comes to 31.25%, i.e. top shows all users occupying only 31.25%.

Right. That's all memory in use, correct? From your first message I thought you meant it said that was the amount used by Oracle.
It's easy to calculate total memory in use.
It's hard to calculate memory in use by a subset of processes (perhaps those owned by a particular user).

> but the prstat -t command shows 70% occupied by oracle. Which one should I believe?

The prstat figure for memory in use by a user will be incorrect, because it does not account for shared pages properly.
As far as I am aware, 'top' has no similar display.
Darren

  • Oracle 10G Performance Tuning

    A colleague of mine supplied me with a tuning script to help in my performance analysis of a 10.2.0.1 Oracle database. The script is called:
    responsetimebreakdown.sql
Apparently this was designed for 8i, as it cannot find the sys.x_$ksles (session events) view in my Oracle 10.2.0.1 database. I receive:
ORA-00942: table or view does not exist
Does anyone know the equivalent of this object in 10g, or have access to a version of this script designed for use against a 10.2.0.1 Oracle database?
    Thanks.

I don't know what your script does, but here's the table you're looking for.

SQL> select name from v$fixed_table where name like '%KSLES%';

NAME
------------------------------
X$KSLES

If you really want to tune, you should also try

SQL> @$ORACLE_HOME/rdbms/admin/awrrpt

Dave
Lehr.servehttp.com

  • Oracle 10g – Performance with BIG CONTEXT indexes

I would like to use Oracle XE 10.2.0.1.0 solely for full-text searching of files residing outside the database on an FTP server.
Recently I found out that the size of the files to be indexed is 5 GB.
As I read somewhere on this forum, the size of the index should be 30-40% of the indexed text files (so with formatted documents like PDF or DOC, even less).
Let's say the CONTEXT index over these files will be 1.5-2 GB.
The number of concurrent users will be at most 5.
I cannot easily test it myself yet.
Does anybody have experience with Oracle XE, or another Oracle Database edition, and a CONTEXT index this big?
Will the Oracle XE hardware resource license limitations (1 GB RAM and 1 CPU) be sufficient to handle one CONTEXT index this big?
Regards.

    That depends on at least three things:
(1) what is the range of words that will appear in the document set (wide range of words = smaller resultsets = better performance)
(2) how precise are the users' queries likely to be (more precise = smaller resultsets = better performance)
    (3) how many milliseconds are your users willing to wait for results
    So, unfortunately, you'll probably have to experiment a bit before you'll know...
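As a back-of-the-envelope check, the 30-40% rule of thumb quoted in the question can be sketched as follows (a hypothetical helper, for illustration only):

```python
def estimate_context_index_size(corpus_gb, low=0.30, high=0.40):
    """Estimate CONTEXT index size from the rule of thumb that the index
    takes roughly 30-40% of the plain-text volume (less for formatted
    documents such as PDF or DOC, where much of the byte count is markup)."""
    return corpus_gb * low, corpus_gb * high

lo_gb, hi_gb = estimate_context_index_size(5.0)
print(f"{lo_gb:.1f}-{hi_gb:.1f} GB")  # 1.5-2.0 GB, matching the poster's estimate
```

Note this estimates only disk footprint; whether the 1 GB RAM cap of XE gives acceptable query times is exactly the part that needs the experiment suggested above.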

  • Oracle 10g performance degrades while concurrent inserts into a table

    Hello Team,
I am trying to insert into a single table via multiple threads. Some of these threads perform reasonably well, but some take a really long time, and as time goes on performance degrades drastically (down by a factor of 500 to 600). In the AWR report I see quite a huge number of buffer gets, and I am not sure how to reduce them. If I run this operation on a single thread, it is consistent.
    I tried quite a few options like
    1. Increasing SGA Memory
    2. Moving redo logs to another disk drive.
3. Trying it on an empty table
4. Trying it on a table which has huge data (4 million rows)
5. I have even tried partitioning the table with the HASH algorithm
Note: Each thread is pumping an equal amount of data (say 25K rows).
Can anybody suggest a clue as to what could be the culprit here?
    Thanks in Advance
    Satish Kumar Ballepu

user11150696 wrote:
> Can you please guide me how I do that? I am not aware of how to generate an explain plan for that query.

Since you have the trace file already (and I don't mean the tkprof output), you could do the following:
Read the trace file to find the statement you're interested in - the line above it will be a "PARSING" line, and will include a reference to the statement hash_value, looking like 'hv=3838377475845'.
Use the hash_value to query v$sql to get the sql_id and child_number.
Use the sql_id and child_number in a call to dbms_xplan.display_cursor:
PARSING IN CURSOR #7 len=68 dep=0 uid=55 oct=3 lid=55 tim=448839952404 hv=3413100263 ad='2f6ede48'
select ... etc.  (the statement I want the plan for)

SQL> select sql_id, child_number from v$sql where hash_value = 3413100263;

SQL_ID        CHILD_NUMBER
------------- ------------
053tyaz5qzjr7            0

SQL> select * from table(dbms_xplan.display_cursor('053tyaz5qzjr7', 0));
    PLAN_TABLE_OUTPUT
    SQL_ID  053tyaz5qzjr7, child number 0
    select  /*+ use_concat */  small_vc from  t1 where  n1 = 1 or n2 = 2
    Plan hash value: 82564388
    | Id  | Operation                    | Name  | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT             |       |       |       |     4 |
    |   1 |  CONCATENATION               |       |       |       |       |
    |   2 |   TABLE ACCESS BY INDEX ROWID| T1    |    10 |   190 |     2 |
    |*  3 |    INDEX RANGE SCAN          | T1_N2 |    10 |       |     1 |
    |*  4 |   TABLE ACCESS BY INDEX ROWID| T1    |    10 |   190 |     2 |
    |*  5 |    INDEX RANGE SCAN          | T1_N1 |    10 |       |     1 |
    Predicate Information (identified by operation id):
       3 - access("N2"=2)
       4 - filter(LNNVL("N2"=2))
       5 - access("N1"=1)
    Note
   - cpu costing is off (consider enabling it)

Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke
To post code, statspack/AWR reports, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
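The first step of the recipe above - scanning the raw trace for PARSING IN CURSOR lines and pulling out the hash value - can be sketched in Python (the regex is an assumption based on the trace line shown above; the exact trace format can vary by Oracle version):

```python
import re

def extract_hash_values(trace_text):
    """Pull the hv= value from each 'PARSING IN CURSOR' line of a raw
    SQL trace, pairing it with the statement text on the following line."""
    results = []
    lines = trace_text.splitlines()
    for i, line in enumerate(lines):
        m = re.search(r"PARSING IN CURSOR #\d+ .*\bhv=(\d+)", line)
        if m and i + 1 < len(lines):
            results.append((int(m.group(1)), lines[i + 1].strip()))
    return results

trace = """PARSING IN CURSOR #7 len=68 dep=0 uid=55 oct=3 lid=55 tim=448839952404 hv=3413100263 ad='2f6ede48'
select /*+ use_concat */ small_vc from t1 where n1 = 1 or n2 = 2"""
print(extract_hash_values(trace))
# → [(3413100263, 'select /*+ use_concat */ small_vc from t1 where n1 = 1 or n2 = 2')]
```

Each extracted hash value can then be used in the v$sql query shown above to obtain the sql_id and child_number.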

  • Oracle 10g Performance between (10.2.0.3.0 and 10.2.0.5.0): Platform solari

    Hello all,
We have built our new UAT (10.2.0.5.0) using a disk copy from the old UAT (10.2.0.3.0). System stats are all up to date. But one of the queries takes a very long time, almost 6 hours, in the new UAT, while in the old UAT it takes only 20 minutes.
Can anybody tell me why, or what I should check for?

    Plan@OLD UAT
    ===============
    1 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_3 . The index was scanned in ascending order..
    2 The rows from step 1 were inserted into using direct-load insert.
    3 Rows were retrieved using the unique index SCHEMANMMAP.XPK_SOURCE_SYSTEM_FEED .
    4 Rows from table SCHEMANMMAP.SCHEMANMF_SOURCE_SYSTEM_FEED were accessed using rowid got from an index.
    5 Every row in the table SYS.SYS_TEMP_0FD9D6651_E9A50186 is read.
    6 A view definition was processed, either from a stored view WEIJ. or as defined by steps 5.
    7 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_4 . The index was scanned in ascending order..
    8 For each row retrieved by step 6, the operation in step 7 was performed to find a matching row.
    9 The rows were sorted to support a group operation (MAX,MIN,AVERAGE, SUM, etc).
    10 A view definition was processed, either from a stored view WEIJ. or as defined by steps 9.
    11 Every row in the table SYS.SYS_TEMP_0FD9D6651_E9A50186 is read.
    12 A view definition was processed, either from a stored view WEIJ. or as defined by steps 11.
    13 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_5 . The index was scanned in ascending order..
    14 For each row retrieved by step 12, the operation in step 13 was performed to find a matching row.
    15 The rows were sorted to support a group operation (MAX,MIN,AVERAGE, SUM, etc).
    16 A view definition was processed, either from a stored view WEIJ. or as defined by steps 15.
    17 For each row retrieved by step 10, the operation in step 16 was performed to find a matching row.
    18 Every row in the table SYS.SYS_TEMP_0FD9D6651_E9A50186 is read.
    19 A view definition was processed, either from a stored view WEIJ. or as defined by steps 18.
    20 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_5 . The index was scanned in ascending order..
    21 For each row retrieved by step 19, the operation in step 20 was performed to find a matching row.
    22 The rows were sorted to support a group operation (MAX,MIN,AVERAGE, SUM, etc).
    23 A view definition was processed, either from a stored view WEIJ. or as defined by steps 22.
    24 For each row retrieved by step 17, the operation in step 23 was performed to find a matching row.
    25 One or more rows were retrieved using index SCHEMANMERR.XIE_BATCH_CONTROL_6 . The index was scanned in ascending order..
    26 The rows were sorted to support a group operation (MAX,MIN,AVERAGE, SUM, etc).
    27 For the rows returned by step 26, filter out rows depending on filter criteria.
    28 Return all rows from steps 24, 27 - including duplicate rows.
    29 A view definition was processed, either from a stored view WEIJ. or as defined by steps 28.
    30 For each row retrieved by step 4, the operation in step 29 was performed to find a matching row.
    31 The rows were sorted to support the join at step 37.
    32 Rows were retrieved from concatenated index Partitions determined by Key Values without using the leading column(s).
    33 Rows from table Partitions determined by Key Values were accessed using rowid got from a local (single-partition) index.
    34 PARTITION LIST SINGLE
    35 PARTITION RANGE ALL
    36 The rows were sorted to support the join at step 37.
    37 Join the sorted results sets provided from steps 31, 36.
    38 Rows were retrieved using the unique index Partitions determined by Key Values.
    39 Rows from table Partitions determined by Key Values were accessed using rowid got from a local (single-partition) index.
    40 PARTITION LIST SINGLE
    41 A range of partitions of steps 40 were accessed..
    42 For each row retrieved by step 37, the operation in step 41 was performed to find a matching row.
    43 Rows were retrieved using the unique index SCHEMANMADM.XPK_INSTRUMENT .
    44 Rows from table SCHEMANMADM.INSTRUMENT were accessed using rowid got from an index.
    45 For each row retrieved by step 42, the operation in step 44 was performed to find a matching row.
    46 One or more rows were retrieved using index Partitions determined by Key Values. The index was scanned in ascending order..
    47 PARTITION LIST SINGLE
    48 A range of partitions of steps 47 were accessed..
    49 For each row retrieved by step 45, the operation in step 48 was performed to find a matching row.
    50 Rows from table SCHEMANMADM.INSTRUMENT_BALANCE were accessed using rowid got from a local (single-partition) index.
    51 Every row in the table SCHEMANMINT.SCHEMANMF_ACCOUNTING_PERIOD is read.
    52 The result sets from steps 50, 51 were joined (hash).
    53 Every row in the table SYS.SYS_TEMP_0FD9D6651_E9A50186 is read.
    54 A view definition was processed, either from a stored view WEIJ. or as defined by steps 53.
    55 For each row retrieved by step 52, the operation in step 54 was performed to find a matching row.
    56 The rows from step 55 were inserted into using direct-load insert.
    57 Every row in the table SYS.SYS_TEMP_0FD9D6652_E9A50186 is read.
    58 A view definition was processed, either from a stored view WEIJ. or as defined by steps 57.
    59 Rows were retrieved using the unique index Partitions determined by Key Values.
    60 Rows from table Partitions determined by Key Values were accessed using rowid got from a local (single-partition) index.
    61 PARTITION LIST SINGLE
    62 A range of partitions of steps 61 were accessed..
    63 For each row retrieved by step 58, the operation in step 62 was performed to find a matching row.
    64 Every row in the table SYS.SYS_TEMP_0FD9D6652_E9A50186 is read.
    65 A view definition was processed, either from a stored view WEIJ. or as defined by steps 64.
    66 Return all rows from steps 63, 65 - including duplicate rows.
    67 TEMP TABLE TRANSFORMATION
    68 Rows were returned by the SELECT statement.
    ++++++++++++++++++++++++++++++++++
    Plan@NEW UAT
    ===============
    1 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_3 . The index was scanned in ascending order..
    2 The rows from step 1 were inserted into using direct-load insert.
    3 Rows were retrieved using the unique index SCHEMANMMAP.XPK_SOURCE_SYSTEM_FEED .
    4 Rows from table SCHEMANMMAP.SCHEMANMF_SOURCE_SYSTEM_FEED were accessed using rowid got from an index.
    5 Every row in the table Partitions determined by Key Values is read.
    6 PARTITION LIST SINGLE
    7 PARTITION RANGE ALL
    8 For each row retrieved by step 4, the operation in step 7 was performed to find a matching row.
    9 Rows were retrieved using the unique index Partitions determined by Key Values.
    10 Rows from table Partitions determined by Key Values were accessed using rowid got from a local (single-partition) index.
    11 PARTITION LIST SINGLE
    12 A range of partitions of steps 11 were accessed..
    13 For each row retrieved by step 8, the operation in step 12 was performed to find a matching row.
    14 Rows were retrieved using the unique index SCHEMANMADM.XPK_INSTRUMENT .
    15 Rows from table SCHEMANMADM.INSTRUMENT were accessed using rowid got from an index.
    16 For each row retrieved by step 13, the operation in step 15 was performed to find a matching row.
    17 One or more rows were retrieved using index Partitions determined by Key Values. The index was scanned in ascending order..
    18 PARTITION LIST SINGLE
    19 A range of partitions of steps 18 were accessed..
    20 For each row retrieved by step 16, the operation in step 19 was performed to find a matching row.
    21 Rows from table SCHEMANMADM.INSTRUMENT_BALANCE were accessed using rowid got from a local (single-partition) index.
    22 Every row in the table SCHEMANMINT.SCHEMANMF_ACCOUNTING_PERIOD is read.
    23 The result sets from steps 21, 22 were joined (hash).
    24 Every row in the table SYS.SYS_TEMP_0FD9FC8FB_DC4EE804 is read.
    25 A view definition was processed, either from a stored view WEIJ. or as defined by steps 24.
    26 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_4 . The index was scanned in ascending order..
    27 The result sets from steps 25 and 26 were joined (hash).
    28 The rows were sorted to support a group operation (MAX, MIN, AVERAGE, SUM, etc.).
    29 A view definition was processed, either from a stored view WEIJ. or as defined by step 28.
    30 Every row in the table SYS.SYS_TEMP_0FD9FC8FB_DC4EE804 is read.
    31 A view definition was processed, either from a stored view WEIJ. or as defined by step 30.
    32 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_5. The index was scanned in ascending order.
    33 The result sets from steps 31 and 32 were joined (hash).
    34 The rows were sorted to support a group operation (MAX, MIN, AVERAGE, SUM, etc.).
    35 A view definition was processed, either from a stored view WEIJ. or as defined by step 34.
    36 For each row retrieved by step 29, the operation in step 35 was performed to find a matching row.
    37 Every row in the table SYS.SYS_TEMP_0FD9FC8FB_DC4EE804 is read.
    38 A view definition was processed, either from a stored view WEIJ. or as defined by step 37.
    39 One or more rows were retrieved using index SCHEMANMERR.XAK_BATCH_CONTROL_5. The index was scanned in ascending order.
    40 The result sets from steps 38 and 39 were joined (hash).
    41 The rows were sorted to support a group operation (MAX, MIN, AVERAGE, SUM, etc.).
    42 A view definition was processed, either from a stored view WEIJ. or as defined by step 41.
    43 For each row retrieved by step 36, the operation in step 42 was performed to find a matching row.
    44 One or more rows were retrieved using index SCHEMANMERR.XIE_BATCH_CONTROL_6. The index was scanned in ascending order.
    45 The rows were sorted to support a group operation (MAX, MIN, AVERAGE, SUM, etc.).
    46 For the rows returned by step 45, filter out rows depending on filter criteria.
    47 Return all rows from steps 43 and 46 - including duplicate rows.
    48 A view definition was processed, either from a stored view WEIJ. or as defined by step 47.
    49 For each row retrieved by step 23, the operation in step 48 was performed to find a matching row.
    50 Every row in the table SYS.SYS_TEMP_0FD9FC8FB_DC4EE804 is read.
    51 A view definition was processed, either from a stored view WEIJ. or as defined by step 50.
    52 For each row retrieved by step 49, the operation in step 51 was performed to find a matching row.
    53 The rows from step 52 were inserted using direct-load insert.
    54 Every row in the table SYS.SYS_TEMP_0FD9FC8FC_DC4EE804 is read.
    55 A view definition was processed, either from a stored view WEIJ. or as defined by step 54.
    56 Rows were retrieved using the unique index (partitions determined by key values).
    57 Rows from the table (partitions determined by key values) were accessed using a rowid obtained from a local (single-partition) index.
    58 PARTITION LIST SINGLE
    59 A range of partitions from step 58 was accessed.
    60 For each row retrieved by step 55, the operation in step 59 was performed to find a matching row.
    61 Every row in the table SYS.SYS_TEMP_0FD9FC8FC_DC4EE804 is read.
    62 A view definition was processed, either from a stored view WEIJ. or as defined by step 61.
    63 Return all rows from steps 60 and 62 - including duplicate rows.
    64 TEMP TABLE TRANSFORMATION
    65 Rows were returned by the SELECT statement.
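
    A plain-English plan like the one above is usually a tuning tool's paraphrase of Oracle's raw row-source plan. The raw plan can be produced directly with EXPLAIN PLAN and DBMS_XPLAN; a minimal sketch (the statement and table name below are placeholders, not taken from the post):

    ```sql
    -- Sketch only: explain the statement under investigation.
    -- Replace the SELECT with the actual slow statement.
    EXPLAIN PLAN FOR
    SELECT *
    FROM   batch_control          -- placeholder table name
    WHERE  batch_id = :b1;

    -- Display the plan just explained (DBMS_XPLAN is available from 9i on).
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    Comparing this raw output with the narrative version makes it easier to spot which steps (the hash joins and temp-table scans above) dominate the cost.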

  • Oracle 10g on HP-UX, Terrible Poor Performance!!

    Hi All,
    I set up Oracle 10g on HP-UX 11iv1. The server is an HP 9000 with 4 CPUs
    (750 MHz). It is connected to a Disk System 2405 (Virtual Array 7110); the
    Fibre Channel links run at 2 Gb.
    I installed a clustered 10g database: first CRS, and after that the Oracle
    database software (I want to test a clustered database with one instance).
    I installed everything step by step exactly as the Oracle documentation
    describes. All the settings - kernel parameters, patches - are as Oracle
    specifies in its documents.
    I installed the Golden quality package from June 2004.
    I increased shmmax to 2.3 GB; my SGA is 1.7 GB. I also changed some other
    parameters as Sandy Gruver recommends in "Best Practices for Oracle on
    HP-UX", and I used Oracle's new storage system, ASM, for this case.
    When I put the system under load, I monitored it carefully.
    I started glance (gpm). When we sent some queries to the database (it is
    not a heavy load - I tested it on a Linux system on a ProLiant ML570
    without any problem), the DISK section in gpm suddenly turned red
    (critical). The warning said "Disk bottleneck probability = 100%". I
    changed the disk report output to "Report IO by Disk": "DISK%" was 100%
    and "RAW IO RT" was about 1000 for the two disks dedicated to ASM. In this
    situation CPU idle time was 1% or 2% for all the CPUs, but the load
    average was about 1. Performance is not acceptable at all (in comparison
    with Oracle installed on Linux). Glance reported the disks were in a
    critical state.
    I think the problem is I/O or something related to the disks. I am using
    an HP Disk System 2405. The Fibre Channel links on both the server side
    and the disk array side are configured at 2 Gb, and the topology is
    PTTOPT_FABRIC.
    Is it OK for RAW IO RT to be about 1000 for each LUN?
    Why was Disk% in glance (Report IO by Disk) 100%?
    I also found an I/O error in the STM logs:
    Entry type: I/O error
    Product: Fiber Channel Interface
    Logger: td
    It logged this error about 12 times during the test. Any comments?
    Regards,
    Hasan
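
    Not a fix, but one way to see whether the ASM disks themselves are the bottleneck is to query the ASM instance's own cumulative I/O counters and compare them with what glance reports. A sketch, assuming a 10g +ASM instance (column names taken from the 10g reference; verify on your release):

    ```sql
    -- Run while connected to the +ASM instance.
    -- READ_TIME / WRITE_TIME are cumulative time spent on I/O per disk.
    SELECT path, reads, writes, read_time, write_time
    FROM   v$asm_disk
    ORDER  BY read_time + write_time DESC;
    ```

    If the two ASM disks dominate this ranking while the rest of the array is idle, the load is concentrated on too few spindles rather than being a Fibre Channel problem.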

    Sorry, I don't have a solution for your problem, but similar things happen on our installation of Oracle 10 on Solaris 5.8:
    I have migrated a banking business solution from Windows/SQL Server 2000 to Sun Solaris/Oracle 10g. In the test environment everything worked fine, but on the production system we have very poor DB performance - about 100 times slower than SQL Server 2000!
    Environment on the customer's server side:
    Hardware: Sun Fire, 4 CPUs; OS: Solaris 5.8; DB: Oracle 8 and 10
    Data Storage: Em2
    DB access through OCCI [Environment: OBJECT, connection pool, create connection]
    Because of older applications it is necessary to run Oracle 8 on the same server as well. Since we have been running the new solution, which uses Oracle 10, the listener for Oracle 8 is frequently gone (or killed by someone?). The performance of the whole Oracle 10 environment is very poor. As a result of my analysis I found that creating a connection in the connection pool takes up to 14 seconds. Now I am wondering whether it is a problem to run different Oracle versions on the same server. The customer installed/created the new Oracle 10 DB under the same user account (oracle) as the older version, so to run the new solution we have to change the Oracle environment settings manually. All hints/suggestions to solve this problem are welcome. Thanks in advance.
    Anton

  • Oracle 10g vs Oracle 11g query performance

    Hi everyone,
    We are moving from Oracle 10g to Oracle 11g database.
    I have a query which in Oracle 10g takes 85 seconds to run, but when I run the same query against the Oracle 11g database it takes 635 seconds.
    I have confirmed that all indexes on the tables involved are enabled.
    Does anyone have any pointers on what I should look into? I have compared the explain plans and they are clearly different: Oracle 11g is taking a different approach than Oracle 10g.
    Thanks

    Please post details of the OS versions, exact database versions (to 4 digits), and init.ora parameters of the 10g and 11g databases. Have statistics been gathered after the upgrade?
    For posting tuning requests, please see these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long ...
    Also see if the SQL Performance Analyzer can help - MOS Doc 562899.1 (TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER)
    HTH
    Srini
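
    In particular, stale optimizer statistics after an upgrade are a common cause of plan regressions. A minimal sketch of re-gathering schema statistics with DBMS_STATS (the schema name APP_SCHEMA is a placeholder, not from the post):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'APP_SCHEMA',               -- placeholder schema
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);                      -- include index stats
    END;
    /
    ```

    After fresh statistics, compare the two explain plans again before changing anything else; the 11g plan may converge on its own.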

  • Oracle 10g Express Edition performances

    I'm looking for something about the performance of Oracle 10g Express Edition to make a little presentation for the University. Can anyone help me?
    Thanks

    I'm looking for something about the performance of Oracle 10g Express Edition to make a little presentation for the University. Can anyone help me?
    What's the matter with the documentation? One processor only, 1 GB of RAM, one database, and 4 GB of disk.
    As to how this will affect *your* performance, that can only be determined by you, testing under given conditions. However, if you are doing university work, why not use the EE - you only have to pay for deployment, AFAIK.
