Tuning Queries

1) While using Toad for SQL tuning, it adds +0 to the WHERE clause columns.
What's the significance of adding zero?
2) One more thing I observed: it rewrites joins.
For example:
select *
FROM tablea a,
tableb b,
tablec c,
tabled d
WHERE a.CREATED = b.USER
AND a.UPDATED = c.USER
AND a.BOOK_ID = d.BOOK_ID(+)
Toad rewrites that query as follows:
select *
FROM tablea a1,
tableb b1,
tablec c1,
tabled d1
WHERE a1.CREATED = b1.USER
AND a1.UPDATED = c1.USER
AND a1.BOOK_ID = d1.BOOK_ID
UNION ALL
SELECT *
FROM tablea a3,
tableb b3,
tablec c3
WHERE a3.CREATED = b3.USER
AND a3.UPDATED = c3.USER
AND (NOT EXISTS (SELECT 'X'
FROM tabled d3
WHERE a3.BOOK_ID = d3.BOOK_ID))
3) Can I join the main view with another view for reporting purposes, or join the other view's tables directly? In my requirement, using the main view is mandatory and using the other view is optional.
Thanks,
Kiran

1&2 seem to be related to using rules of thumb and tuning techniques that are at least 10 years old.
I wouldn't trust any such tool to provide reliable "advice".
Adding +0 to a numeric column or concatenating a null to a VARCHAR2 column will prevent an index on that column from being considered.
The rewrite of the query to use a UNION ALL is to prevent using an OUTER JOIN which people used to advise avoiding at all costs.
Tuning by heuristics is rarely effective and there's a reason that the optimizer is no longer rule-based.
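To make the index-suppression point concrete, here is a minimal sketch (the table, index, and values are hypothetical, not from the thread):

```sql
-- Hypothetical table with a plain B-tree index on BOOK_ID.
-- This predicate can use the index:
SELECT * FROM books WHERE book_id = 42;

-- Wrapping the column in an expression hides it from the index,
-- so these force the optimizer to ignore it:
SELECT * FROM books WHERE book_id + 0 = 42;       -- numeric column
SELECT * FROM books WHERE title || '' = 'Dune';   -- VARCHAR2 column
```

Note that on modern Oracle releases a function-based index can cover even the wrapped form, which is one more reason the old +0 trick has aged poorly.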
3) Can I join the main view with another view for reporting purposes, or join the other view's tables directly? In my requirement, using the main view is mandatory and using the other view is optional.
You could join the main view with another view.
Or with the other view's tables directly.
Whether you should do one or the other depends on the specific circumstances.

Similar Messages

  • Performance tuning queries/stored proc

    Hi,
    What are the key things to be considered in improving the performance of a query/stored procedure?
    We can get the info by using EXPLAIN PLAN and V$SESSION.
    What is a hash value?
    Thanks,
    Naren

    Hi,
    When you write any query:
    1) Check its cost so that it does not raise any performance issues.
    2) Check whether the required columns are indexed.
    3) Do not use TO_CHAR or TO_NUMBER in unnecessary places.
    4) Use UPPER(TRIM()) only when there is a chance of receiving a value in either upper or lower case.
    5) If you have joins in your query, check whether the required primary key/foreign key relations are equated properly.
    If you join n tables, then n-1 join conditions should be present in the WHERE clause.
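    As a quick sketch of point 5 (table and column names are made up for illustration): joining three tables needs 3 - 1 = 2 join predicates, otherwise the unjoined table produces a Cartesian product.

    ```sql
    -- Three tables, so two join predicates (n - 1 = 2).
    SELECT o.order_id, c.name, p.description
    FROM   orders o, customers c, products p
    WHERE  o.customer_id = c.customer_id   -- join predicate 1
    AND    o.product_id  = p.product_id;   -- join predicate 2
    ```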
    Regards,
    V.S

  • Performance Tuning 10gR2 RHEL5.5

    Can we remove or nullify the db file sequential read and db file scattered read wait events in a high-transaction OLTP environment?
    I've read many Tom Kyte articles but still have not found the answer (scattered read is the multiblock I/O of full table scans; sequential read is single-block I/O).
    So, if it is possible, please tell me how to nullify them.
    All help is very much appreciated.
    Best Regards,
    Han

    SigCle wrote:
    Tuning queries is an obligation.
    But my problem is how to tune queries coming from Java apps. The Java programmers use Hibernate (whose built-in SQL clauses do not include, e.g., EXISTS or CONTAINS), so I can't change the query.
    Eg.
    select * from table_A where c1 like '%SSA%';
    c1 is varchar2(20 byte).
    Typically I create index idx1 on table_A(c1) indextype is ctxsys.context and then change the query to
    select * from table_A where contains(c1,'%SSA%')>0;
    While for select * from table_A where c2 is null, I create index idx2 on table_A(c2, -1), because otherwise Oracle will not use an index for the IS NULL condition (a single-column B-tree index does not store all-NULL entries).
    Unfortunately, I can't change this query to use CONTAINS, because the CONTAINS clause is not included in the Java SQL built-ins.
    The other problem is that the IBM hardware production server environment is 4 GB of RAM with 2 processors.
    On the software side, Oracle DB 10.2.0.1 on RHEL 5.5 is the Standard Edition (I can't use the partitioning and Advanced Queuing features).
    Internet banking needs 4000 concurrent users, so I changed the Oracle processes parameter to 4000, hash_area_size=655360, sort_area_size=300M,
    SGA_max_size=1904M, sga_target=1472M, pga_target=662M, aq_tm_processes=1, session_cached_cursors=20, open_cursor=300.
    I've tried to tune pctfree, pctused, initrans, and freelists for each table, but left a few tables at the defaults (pctfree=10, pctused=40) because inserts, updates, deletes, and selects all happen on those tables. I've run gather_schema_stats, analyzed indexes, etc.
    Overall performance from EM is quite satisfying, but it feels like it is not enough for tuning perfection.
    Maybe I want to ask: how do I tune a query like this, or create an index for it:
    select * from table_A where c1 > :1 and c2 = :2 and c3 <= :3 and c3 >= :4;
    I can see that you are on 10g (10.2.0.1 though; you need to patch it immediately), but you are still managing many features manually when a better approach is available: let the more refined automatic algorithms manage them. For example, if you used a locally managed tablespace with ASSM (Automatic Segment Space Management), the entire task of managing pctfree and the rest would be gone. The same is true for the sort/hash areas: they are much easier to manage through PGA_AGGREGATE_TARGET.
    So why not try these things out?
    Aman....
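    A minimal sketch of the two automatic-management suggestions above (the tablespace name and datafile path are made up; the PGA size echoes the poster's own figure):

    ```sql
    -- ASSM tablespace: pctfree/pctused/freelist tuning is no longer
    -- needed for segments created here.
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/app_data01.dbf' SIZE 1G
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;

    -- Automatic PGA management replaces manual sort_area_size and
    -- hash_area_size settings.
    ALTER SYSTEM SET workarea_size_policy = AUTO;
    ALTER SYSTEM SET pga_aggregate_target = 662M;
    ```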

  • How to setup shortcut to insert a command in Oracle sql shell?

    Hi,
    Under the SQL*Plus shell, if I want to frequently insert a long command while
    tuning queries, is there any way I can press a couple of keys to insert a
    pre-prepared command?
    For example, I need to frequently insert:
    sqlplus> alter session set events '10053 trace name context forever';
    anyone knows some tips to avoid typing?
    thanks,
    Rick

    user10217806 wrote:
    I mean the sqlplus command-line input interface under Linux.
    Well, Ivan showed how to just use some SQL files to do it .. not exactly pasting your command at the command line, but useful, nevertheless.
    As for actually pasting text at the command prompt, as I said, what you are asking has nothing to do with sqlplus itself and everything to do with the environment in which you are running it. For instance, if I use PuTTY to connect to my *nix boxes, it is a feature of PuTTY to paste whatever is in the clipboard by simply right-clicking the mouse. I can do the same thing at a Windows command prompt window if I have the properties set for 'quick edit'.
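    The SQL-file approach mentioned above can be sketched like this (the file name is made up; save it in a directory on your SQLPATH):

    ```sql
    -- Save as t10053.sql; then typing "@t10053" at the SQL*Plus prompt
    -- runs the long command with a handful of keystrokes.
    alter session set events '10053 trace name context forever';
    ```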

  • Column p3 (id#/block#) in v$session_wait/v$active_session_history

    The environment is 10.2.0.3 in AIX 5.3.0.0 in a two node cluster.
    In v$session_wait/v$active_session_history, the column P3 is referred to as:
    For gc buffer busy waits - id#
    and
    For buffer busy waits - class#
    (1) Could somebody please explain what exactly these values (id#/class#) mean, and how we can associate them with the other statistics available to troubleshoot the issue?
    In my system, when I queried v$active_session_history, I found that file# (P1) 16 with block# (P2) 287724 has a high count of "gc buffer busy" waits, with two different values in id# (P3) (please find the result below). File 16 with block 287724 has the most counts for gc buffer busy (with two different id#s (??)).
    EVENT P1 P2 P3 COUNT(*)
    gc buffer busy 18 1091724 65537 2
    gc buffer busy 16 287724 131073 58
    gc buffer busy 7 575153 65537 2
    gc buffer busy 13 1528666 65537 1
    gc buffer busy 14 843396 65537 2
    gc buffer busy 12 1157771 65537 1
    gc buffer busy 16 287724 65537 86
    gc buffer busy 12 12231 65537 1
    gc buffer busy 18 1091732 65537 1
    gc buffer busy 11 1642482 65537 2
    gc buffer busy 10 1527484 65537 2
    gc buffer busy 11 1642497 65537 1
    gc buffer busy 14 843396 131073 1
    And I found that this is a primary key index with the segment_statistics as follows (from V$SEGMENT_STATISTICS)
    OBJECT_NAME STATISTIC_NAME VALUE
    PROCESS_LOCK_PK logical reads 18176
    PROCESS_LOCK_PK buffer busy waits 33
    PROCESS_LOCK_PK gc buffer busy 776
    PROCESS_LOCK_PK db block changes 6624
    PROCESS_LOCK_PK physical reads 2
    PROCESS_LOCK_PK physical writes 64
    PROCESS_LOCK_PK physical reads direct 0
    PROCESS_LOCK_PK physical writes direct 0
    PROCESS_LOCK_PK gc cr blocks received 1991
    PROCESS_LOCK_PK gc current blocks receive 2771
    PROCESS_LOCK_PK ITL waits 0
    PROCESS_LOCK_PK row lock waits 0
    PROCESS_LOCK_PK space used 0
    PROCESS_LOCK_PK space allocated 0
    PROCESS_LOCK_PK segment scans 0
    With this available information, how can I solve this problem of "gc buffer busy" waits?
    I am not able to find anywhere what exactly this "id#" is, or how to use it to solve the problem.
    Any help will be highly appreciated.
    Thanks
    N. Sethi

    Hi,
    gc buffer busy
    This wait event, also known as global cache buffer busy prior to Oracle 10g, specifies the time the remote instance locally spends accessing the requested data block. This wait event is very similar to the buffer busy waits wait event in a single-instance database, and is often the result of:
    Hot blocks - multiple sessions may be requesting a block that is either not in the buffer cache or is in an incompatible mode. Deleting some of the hot rows and re-inserting them back into the table may alleviate the problem; most of the time the rows will be placed into a different block, reducing contention on the block. The DBA may also need to adjust the pctfree and/or pctused parameters for the table to ensure the rows are placed into a different block.
    Inefficient queries - as with the gc cr request wait event, the more blocks requested from the buffer cache, the more likely a session is to wait for other sessions. Tuning queries to access fewer blocks will often result in less contention for the same block.
    Please find the link below; I hope it will help you, since I do not have much experience with RAC:
    http://www.ardentperf.com/2007/09/12/gc-buffer-busy-waits-in-rac-finding-hot-blocks/
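    To turn the P1/P2 values above into an object name, the usual lookup (shown here against the poster's own hot file 16 / block 287724) is a sketch like:

    ```sql
    -- Map the hot file#/block# from v$active_session_history to the
    -- owning segment via DBA_EXTENTS.
    SELECT owner, segment_name, segment_type
    FROM   dba_extents
    WHERE  file_id = 16
    AND    287724 BETWEEN block_id AND block_id + blocks - 1;
    ```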
    Thanks
    Pavan Kumar N

  • Design questions - beginner

    Hello,
    I'm just getting started with BDB XML and the resources here have been very helpful. To play around with it, I imported 100,000 records from my relational db into BDB XML. I imported it as one document that sort of looks like this:
    <people>
    <person><name>John</name><age>22</age></person>
    <person>..</person>
    ...100k times
    </people>
    Querying this database using dbxml.exe has been extremely slow, even after using indexes. So, I have the following questions:
    1. Should I import it as a single document containing 100k children, or is it better to import it as 100k different documents?
    2. Are there any resources available for best practices in designing the XML database, especially from the PoV of a relational database designer?
    Thanks
    Amit

    Amit,
    Using the correct indexes and release 2.3.10 your performance should be reasonably good even with a single, large document. In general, it can be better to use individual documents, especially if you want to add/remove them individually. If there is no need to keep them as one document, I'd recommend using separate documents.
    Also, if you want better answers on tuning queries, you need to provide more information, such as the indexes you've declared and the queries you are using.
    Regards,
    George

  • How to use REGEXP_LIKE or REGEXP_INSTR in a query

    Hello All,
    I would like to do a query on a column of a table to see if it has any combination of ALL of up to 5 words. So for example, if I search for (Apple, Banana, Blueberry), I would like to see the following data returned
    Apples are better than Bananas and Blueberrys.
    Blueberry recipes contain apples and bananas.
    Bananas can be baked into bread with Apples but not Blueberrys.
    So the criteria I would like to meet are
    1. All three words are in the data returned
    2. The three words can be in any order
    3. There can be any or no other text in between the three words.
    4. The query is case insensitive.
    So far I have come up with this
        select * from hc_work_items where REGEXP_LIKE(wki_name, '(Apple)', 'i') AND REGEXP_LIKE(wki_name, '(Banana)', 'i') AND REGEXP_LIKE(wki_name, '(Blueberry)', 'i')
    This does the trick, but I am wondering if it looks OK (I am new to REGEXP and also to tuning queries for efficiency). I did also try
        select * from hc_work_items where REGEXP_INSTR(wki_name, '(Apples|Blueberrys|Bananas)') > 0
    but this was returning only an OR selection of the words, not all three.
    Thank you for any advice !

    Hi,
    Welcome to the forum!
    991003 wrote:
    Hello All,
    I would like to do a query on a column of a table to see if it has any combination of ALL of up to 5 words. So for example, if I search for (Apple, Banana, Blueberry), I would like to see the following data returned
    Apples are better than Bananas and Blueberrys.
    Blueberry recipes contain apples and bananas.
    Bananas can be baked into bread with Apples but not Blueberrys.
    It doesn't seem like you're really looking for words. In most of these cases, the text you are looking for (e.g. 'Apple') is not a separate word, but a sub-string embedded in a longer word (e.g. 'Apples').
    What if someone uses the correct plural of 'Blueberry', that is, 'Blueberries'? You might have to instruct your users to search for only the common part; in this case 'Blueberr'.
    So the criteria I would like to meet are
    1. All three words are in the data returned
    2. The three words can be in any order
    3. There can be any or no other text in between the three words.
    4. The query is case insensitive.
    So far I have come up with this
    select * from hc_work_items where REGEXP_LIKE(wki_name, '(Apple)', 'i') AND REGEXP_LIKE(wki_name, '(Banana)', 'i') AND REGEXP_LIKE(wki_name, '(Blueberry)', 'i')
    Yes, I think you'll have to do separate searches for each of the 3 targets.
    Regular expressions might not be the most efficient way. INSTR or LIKE will probably be faster, e.g.
    WHERE   UPPER (wki_name) LIKE '%APPLE%'
    AND     UPPER (wki_name) LIKE '%BANANA%'
    AND     UPPER (wki_name) LIKE '%BLUEBERRY%'
    Oracle is smart enough to "short circuit" compound conditions like this. For example, if it looks for 'Apple' first and doesn't find it on a given row, it doesn't waste time looking for the other targets on the same row.
    This does the trick but I am wondering if it looks ok (I am new to REGEXP and also tuning queries for efficiency). I did also try
    select * from hc_work_items where REGEXP_INSTR(wki_name, '(Apples|Blueberrys|Bananas)') > 0 but this was returning only an OR selection of the words, not all three.
    Exactly. You could look for any of the 6 possible permutations, but that's really ugly, inefficient, and unscalable. (If you ever need 4 targets, there are 24 permutations; with 5 targets there are 120.) You were better off the first time, with 3 separate conditions.
    Oracle Text is a separate product that was designed for jobs like this. It requires a separate license.
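    A hedged sketch of the Oracle Text approach mentioned above (the index name is made up; this requires the Oracle Text option and appropriate CTXSYS privileges):

    ```sql
    -- Build a text index on the searched column once:
    CREATE INDEX wki_name_ctx ON hc_work_items (wki_name)
      INDEXTYPE IS CTXSYS.CONTEXT;

    -- CONTAINS with AND finds rows holding all three terms, in any
    -- order, case-insensitively:
    SELECT *
    FROM   hc_work_items
    WHERE  CONTAINS(wki_name, 'apple AND banana AND blueberr%') > 0;
    ```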

  • Can we convert a SQL (Select Statement) to Procedure.?

    Hi
    I am using a select sql for retrieving the results - Below is a sample sql select query.
    select TableC.DATEFIELD as QUERY_DATE,
    TableB.COLUMN1 PROCESS,
    TableC.COLUMN1 PRODUCT,
    sum(TableC.COLUMN4) as OPEN_INSTANCES
    from      TableA, TableB, TableC
    where TableB.COLUMN1      = TableA.COLUMN2
    and      TableA.COLUMN2      = TableC.COLUMN2
    and      DATEFIELD <= to_date('2011-02-02' ,'YYYY-MM-DD')
    and      DATEFIELD >= to_date('2011-02-02' ,'YYYY-MM-DD')
    and      TableC.COLUMN4 <= (24 * 3600 )
    and      TableB.COLUMN1 like 'PROCESSID'
    and      TableC.COLUMN1 in ('OSRCITR')
    group by TableC.DATEFIELD as QUERY_DATE,
    TableA.COLUMN1 PROCESS,
    TableC.COLUMN1 PRODUCT
    I believe that if we use a procedure, it would be much faster. Is there any way we can convert the above SELECT into a procedure? If yes, how can it be done?
    Thanks in Advance.
    -Sreekant

    Sreekant wrote:
    select TableC.DATEFIELD as QUERY_DATE,
    TableB.COLUMN1 PROCESS,
    TableC.COLUMN1 PRODUCT,
    sum(TableC.COLUMN4) as OPEN_INSTANCES
    from      TableA, TableB, TableC
    where TableB.COLUMN1      = TableA.COLUMN2
    and      TableA.COLUMN2      = TableC.COLUMN2
    and      DATEFIELD <= to_date('2011-02-02' ,'YYYY-MM-DD')
    and      DATEFIELD >= to_date('2011-02-02' ,'YYYY-MM-DD')
    and      TableC.COLUMN4 <= (24 * 3600 )
    and      TableB.COLUMN1 like 'PROCESSID'
    and      TableC.COLUMN1 in ('OSRCITR')
    group by TableC.DATEFIELD as QUERY_DATE,
    TableA.COLUMN1 PROCESS,
    TableC.COLUMN1 PRODUCT
    I believe if we use a Procedure, It would be much faster. Is there any way that we can convert the above select sql to a procedure. If yes, how can it be.
    Using the code tags would make the query easier to read :)
    What version of Oracle are you on?
    Under the right conditions deconstructing a huge query into smaller components sometimes can offer performance increases, but this is more true of older versions of Oracle than recent ones. Lately I get better results from tuning queries in place - as Aman pointed out you introduce context switching (moving between the SQL and PL/SQL engines to do work) which can also hurt performance.
    Try tuning the query first. Get an execution plan. Things you can look for include
    * make sure the driving table is the best one
    * are the join columns properly indexed? Are existing indexes being suppressed due to the functions?
    Is "and      TableB.COLUMN1 like 'PROCESSID' " correct? Without a wildcard, LIKE should evaluate to =.
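    To illustrate the last point (table and column names are from the posted query; the data is hypothetical): without a wildcard, LIKE is just a slower way to spell equality.

    ```sql
    -- These two predicates match exactly the same rows; '=' states the
    -- intent plainly and is easier for the optimizer to reason about.
    SELECT COUNT(*) FROM tableb WHERE column1 LIKE 'PROCESSID';
    SELECT COUNT(*) FROM tableb WHERE column1 =    'PROCESSID';
    ```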

  • SQL Developer responding slowly

    Hi,
    I am connecting to Oracle DB from SQL Developer using remote connection through LDAP. I work with sql server management studio regularly but through sql developer this is my first time.
    I find SQL Developer slow compared to SQL Server Management Studio - is this true, or is it because of the remote connection?
    What steps can I take to improve its speed? Are there any tools or settings that I need to change?
    I have a simple insert query into a temporary table which ran for 5 hours. The application does not stop executing after that and I need to end it from Task Manager. I think this has to do with the buffer size, so I need to commit data in batches to process it more speedily. Are there any settings for running data in batches?
    Thanks for your time and help

    >
    I find SQl developer slow compared to sql server management studio- IS this true or is it because of the remote connection.?
    What steps can I take to improve its speed . Are there any tools or settings that I need to change?
    I have a simple insert query into a temporary table which ran for 5 hours.The application does not stop executing after that and I need to end it from task manager. I think this is to do with the buffer size, so I need to commit data in batches to process it more speedily.Are there any settings to do to run data in batches?
    >
    Based on the code you posted in a later reply this does NOT appear to be a sql developer issues at all and should be posted in the SQL and PL/SQL forum.
    PL/SQL
    Before you post there please read the thread 'How to post a tuning request' located at
    HOW TO: Post a SQL statement tuning request - template posting
    That thread discusses the information that you need to provide in order for volunteers to be able to make meaningful suggestions. That information includes
    1. the DDL for the tables and indexes involved
    2. the row counts for the tables
    3. the row counts for the query predicates
    4. the execution plan for the query
    5. the expected row count of the result set
    Based of this code that you posted
    INSERT INTO Temp
    SELECT
    VisitNo,
    EmployeeNo,
    (SELECT EmployeeNo FROM Employee e WHERE e.CaseNo = clin.CaseNo) emp,
    ROW_NUMBER() OVER (ORDER BY VisitNo, CaseNo) rnum
    FROM
    ClinicVisit clin
    ORDER BY
    VisitNo,
    CaseNo;
    the query will simply be sent by sql developer to Oracle for processing. Sql developer will play no further role until the query is actually completed. So based on this example code there is no basis at all for saying:
    1. sql developer is slow
    2. the application does not stop executing - please clarify what you mean by 'application' and by 'does not stop'
    3. I think this is to do with the buffer size
    4. I need to commit data in batches to process it more speedily
    5. the remote connection is in any way involved
    The code (insert into temp) suggests that you are using the typical sql server practice of populating a temporary table whose data will then, typically, be used for further processing and population of the target table.
    In Oracle you rarely need to use temporary tables and when you do use them you use them differently than the way they are used in sql server.
    The SQL and PL/SQL forum is the place to gets answers about tuning queries but you will get better help if you provide additional information about what you are really trying to do.
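    As a sketch of that last point, the posted per-row scalar subquery can usually be folded into a join and the temporary table skipped entirely (this assumes Employee.CaseNo returns at most one row per case; names are taken from the posted code):

    ```sql
    -- Equivalent to the posted INSERT when Employee.CaseNo is unique;
    -- the outer join keeps ClinicVisit rows with no matching Employee,
    -- mirroring the NULL a scalar subquery would return.
    INSERT INTO Temp
    SELECT c.VisitNo,
           c.EmployeeNo,
           e.EmployeeNo AS emp,
           ROW_NUMBER() OVER (ORDER BY c.VisitNo, c.CaseNo) AS rnum
    FROM   ClinicVisit c
           LEFT OUTER JOIN Employee e ON e.CaseNo = c.CaseNo;
    ```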

  • Data Access Object for Data Warehouse?

    Hi,
    Does anyone know how the DAO pattern looks like when it is used for a data warehouse rather than a normal transactional database?
    Normally we have something like CustomerDAO or ProductDAO in the DAO pattern, but for data warehouse applications, JOINs are used and multiple tables are queried. For example, a query may contain data from the Customer, Product and Time tables; what should the DAO class be named? CustomerProductTimeDAO? Any difference in other parts of the pattern?
    Thanks in advance.
    SK

    In my opinion, there are no differences in the Data Access Object design pattern which have any thing to do with any characteristic of its implementation or the storage format of the data the pattern is designed to function with.
    The core purpose of the DAO design pattern is to encapsulate data-access code and separate it from the business-logic code of the application. A DAO implementation might vary from application to application; the design pattern does not specify any implementation details. A DAO implementation can be applied to a group of XML data files, an Excel-based CSV file, a relational database, or an OS file system. The design is the same for all of these; it is the implementation that varies.
    The core difference between an operational database and a strategic data warehouse is the purpose of why and how the data is used; it is not so much a technical difference. The relational design may vary, however: there may be more tables and ternary relationships in a data warehouse to support more fine-tuned queries, and fewer tables in an operational database to support insert/update efficiency.
    The DAO implementation for a data warehouse would be based on the model of the databases. However the tables are set up, that is how the DAO is coded.

  • Best approach to store collection for html:options

    I am using struts to develop my web application. I am trying to figure out the best way to store a Collection for html:options inside a html:select element.
    My JSP page has a html:select statement which lists all the groups in a system. I have a member variable called getGroupNames() in my ActionForm class. When the JSP page is first requested, I populate the groupNames attribute of ActionForm class with a list of names. The html:options reads the groupNames attribute and lists all the group names.
    The problem: Once the user selects a group name and then submits, if there is any validation problems in other fields in the JSP, I build ActionErrors object and forward the request back to the same JSP. However, now I get an error saying the groupNames is NULL.
    Option1: One way to avoid this error is to AGAIN populate the groupNames list if there is a validation problem.
    Option 2: Another way is to store the list of groupnames in a seperate bean (and NOT as an attrib of the ActionForm) and store it in the REQUEST scope. This way I think the I don't have reload the groupName list in case of a validation failure.
    Could someone please help me in deciding which will be the right approach.
    Thanks


  • Best Approach to store 1.2 billion records

    Hello All,
    In our OLTP database there will be few tables with 1.2 billion records. The row length is approximate 50 bytes.
    Please can I know what are best options to store this historical data.
    Would it be advisable to store all this records go in one partitioned table, or create a separate schema in the same database for storage.
    Would it impact performance of the database?
    thanks

    Hi,
    1.2 billion rows at 50 bytes each comes to approximately 56 GB: a huge segment. Since you state it is an OLTP database, check with the developers what percentage of queries touch this data, which columns they most often query, and how the application uses it.
    Based on that input we can decide how best to partition and on which columns. You mention historical data: verify what percentage of users go back to historical information; based on that we can estimate the number of partitions to create on the segment. Looking forward, you should also estimate the growth rate of new/incoming data per day and month, so that you can plan what to do with the old historical data.
    HTH
    Note - part of performance comes down to writing efficient, tuned queries.
    - Pavan Kumar N
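    A minimal sketch of the partitioning option discussed above (table, column, and partition names are made up): range-partitioning historical data by date lets old partitions be archived or dropped without touching current data.

    ```sql
    -- Hypothetical history table, range-partitioned by date.
    CREATE TABLE txn_history (
      txn_id   NUMBER,
      txn_date DATE,
      payload  VARCHAR2(40)
    )
    PARTITION BY RANGE (txn_date) (
      PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
      PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );

    -- Retiring a year of history is then a metadata operation:
    -- ALTER TABLE txn_history DROP PARTITION p2010;
    ```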

  • Fastest way to select many rows?

    I wonder why a simple select that only joins 2 tables takes so long to execute. It takes up to 30 seconds to read 50,000 rows with the 9.2.0.5 driver against an Oracle 8.0.5 server. In another project we read 100,000 rows with a similar select over two tables in 2 seconds on MS SQL Server 7 with the free jTDS JDBC driver.

    You cannot directly compare the cost of two different queries. It is quite possible that a query with a cost of 100 will be significantly slower than a query with a cost of 1,000,000. Cost is generally meaningless outside of a very detailed CBO trace. In other words, for your purposes, you probably want to ignore the cost.
    When you're looking at a query plan, the part that you ought to care about is the estimate of the number of rows that will be returned from a single step. If the value Oracle is estimating is significantly different than the value you know will be returned, that is an indicator that something may be amiss.
    What is your OPTIMIZER_MODE set to? The CBO was in its infancy in 8.0.5, so you might want to try using the RBO instead (set OPTIMIZER_MODE to RULE). Advice for tuning queries on modern versions of Oracle is going to concentrate on making sure that the statistics for the CBO are accurate-- because you're on such an old version, you may be better off abandoning the CBO entirely.
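    The suggested switch is a one-liner, and doing it at session level means it can be tried without changing the instance:

    ```sql
    -- Try the rule-based optimizer for this session only; on 8.0.x the
    -- CBO often misbehaved without carefully gathered statistics.
    ALTER SESSION SET optimizer_mode = RULE;
    ```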
    When you time the queries, are you measuring the time it takes to return the first row? Or the last row?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • What features I can disable?

    Hi, some of the queries in our db consumes 100% of CPU for a couple of seconds. I've got a quest to find out, which oracle features can be disabled e.g. how to simplify the Oracle to the least necessary minimum to get rid of the CPU's peaks. It's not about tuning queries or the database parameters. Thanks a lot...

    I guess what the first two posters are trying to say is that you have the wrong approach. The first two posters are me, right? Then yes, the approach is wrong. It is not by watching CPU that we tune a database.
    And a lot of the time, it IS about tuning queries or the database parameters. No more than Hans has already said earlier, but he is right. And once more, it is not by checking CPU that we determine which parameters and which queries need to be tuned.
    Unless you want to fall into Compulsive Tuning Disorder, some serious study based on database reports is required.
    Nicolas.

  • Can we *_PLEASE_* have a .sig for Oracle forums?

    I have requested this before and I will go on doing so (every year, I suppose)
    until the people at Oracle listen.
    Why - OH WHY - can we not have a 5 line .sig for our posts to this forum?
    I am blue in the face from trying to tell people to put
    Software - DB Server version - OS (+ version)
    Hardware - CPU config - Disk config
    into their posts re. Oracle databases and running/tuning queries &c.
    Please support my campaign - SOS - (Start Our .Sig!!!) NOW!
    Paul...

    Paulie wrote:
    > Please include all of the following information with database queries.
    > Software - OS (+ version), Server,
    > Hardware - CPU + Disk configuration.
    >> In order to adequately respond, you really should be collecting an RDA <g>.
    > A Recursive Decision Algorithm? Or a Relational Distributed Architecture? Possibly a Random Duplicate Allocation? I think you probably mean a Requirement Development Analysis?
    No. I mean the Remote Diagnostic Agent output. The one that you are asked to submit with an Oracle Support technical Service Request.
    Surely you are aware of the Support requirement for that information.
    >> At the very least you should get both client
    > Maybe I'm just incredibly naive - I start by assuming that the client can connect with the server... having said that, I appreciate that there can be compatibility issues.
    I ain't talking about compatibility. Having an 'I'm gonna be an Oracle guru'-type tell you that he's installed the Oracle client on XP Home and has no idea what a firewall is can be useful troubleshooting information.
    And then there is the "I can't get Forms 4.0 to connect to my 11g database" challenge.
    >> and server OS (including family, edition, version
    > I'll go with this.
    >> and patch level ),
    > Patch level? I don't think the .sig should be longer than the post - if the debate has to get down to the level of patch levels, then it's gone way beyond the scope of a .sig. The vast majority of problems IMHO aren't related to patches, but are (normally) far simpler than that!
    Knowing whether the user is at 10.2.0.1 or 10.2.0.4 is not important to you? Knowing whether the user is running Windows XP Pro SP2 or SP3 is not important? Whether the database is on XP Home or Vista Bliss and Pleasure Edition could be as uninteresting as whether the user has updated to the latest Windows Defender.
    >> Oracle product (you are assuming database, but ...)
    > Well, database would be my interest area, and I normally only post to those fora!
    And the rest of us are schmucks? <g>
    >> Memory config might be more important than disk, especially if looking at Java issues, and that includes any OEM issues.
    > Well, yes, obviously RAM would come into the equation - I would have thought that would fall under the rubric of Hardware.
    SGA and PGA are memory configurations. In Linux and Unix, the SHMEM and SEM parameters are memory config. Last time I looked they were not part of hardware.
    Knowing whether the user has 1G of RAM and the hardware steals 512M of that for video could also be useful. But is that hardware or software? The typical newb is gonna tell you "but my machine has 1GB RAM".
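    As an aside, the memory configuration Hans is talking about is visible from inside the database itself. A minimal sketch, assuming a 10g-or-later instance and SELECT privilege on V$PARAMETER (parameter names per the standard initialization parameter list; MEMORY_TARGET exists only from 11g):

    ```sql
    -- Show the instance's memory sizing parameters, which is often the
    -- information a poster should include alongside OS and hardware details.
    SELECT name, display_value
      FROM v$parameter
     WHERE name IN ('sga_target', 'sga_max_size',
                    'pga_aggregate_target', 'memory_target');
    ```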
    >> You may want to simplify by setting up a blog with your rant (a bit like mine at http://hansforbrich.blogspot.com/2008/07/why-do-they-always-ask-for-version.html) and just linking your sig to the blog.
    > The problem with links is that those who don't bother to read forum guidelines are even less likely to bother to go further and follow a link before posting - though I will, myself, certainly be casting an eye over yours from time to time.
    The problem with ranting in .sigs is that people will ignore that too.
    If you look very carefully, 99% of the problems in the forums come from people who don't read AT ALL. It doesn't matter whether you use a .sig (I used 'em for years in CDOS) or provide a link.
    Seems like many of those who have trouble and turn to the forums go deaf, dumb, blind and schtoopyd, just when the most important tools they need are reading and thinking.
    > Rgs.
    > Paul...
    Do take my response with a grain of salt. It has more to do with playing Devil's Advocate and what feels like twenty-umpteen-two years of responding to Oracle questions. And using .sigs even back in my CompuServe days, early to mid 90s.
    /Hans
    (aka Fuzzy Greybeard)
