Oracle performance with increased data

I have a table1 that is being accessed by process 1 (a stored proc). This process runs for 1 to 2 hours, and that is normal. Now I am going to add a new table2 and create a new process 2 (again a stored proc). Will this slow down process 1 in any way? I will not be running the two processes at the same time (if I did, it would obviously mean process 1 would slow down). I am just looking at the increased data volume in my database. Will the addition of more data slow down Oracle even though I am adding it in a different table?
The data I am talking about here is huge. Both table1 and table2 each occupy almost 500GB. Each table has 200+ partitions. BTW, I am using 10g - 10.2.0.3.0.
Edited by: user6794035 on Aug 12, 2009 4:26 AM
Edited by: user6794035 on Aug 12, 2009 4:27 AM

user6794035 wrote:
Why should CPU and resources come into play here? As I said, nothing has changed except the data volume, and the two processes will not run at the same time.
Consider there is no process2. I just have table 1 and process1. I just added another table of the size I mentioned above (500GB). Will the mere fact that I added more data in another table slow down Oracle's processing speed in any way?
Not until and while you are actually performing processing against that table. Your question seems to indicate that you fear that the mere presence of a table in the database causes a performance impact.
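If you want to verify that nothing but segment space has changed, a quick sanity check against the dictionary can quantify the volumes (a sketch; 'MYSCHEMA', 'TABLE1' and 'TABLE2' are placeholders):
-- Space occupied by each table across all of its partitions
SELECT segment_name, segment_type,
       ROUND(SUM(bytes)/1024/1024/1024, 1) AS gb
  FROM dba_segments
 WHERE owner = 'MYSCHEMA'
   AND segment_name IN ('TABLE1', 'TABLE2')
 GROUP BY segment_name, segment_type
 ORDER BY gb DESC;
As long as process 1's SQL never references table2, the optimizer never reads those segments, so the extra 500GB sits idle as far as process 1 is concerned.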

Similar Messages

  • Problem with updating oracle DB with java date thru resultset.updateDate()

    URGENT Please
    I am facing a problem updating an Oracle database with a Java date through the resultset.updateDate() method. Can anybody help me, please?
    The following code is saving the wrong date value (Dec 4, 2006 instead of the Java date Jul 4, 2007) in the database:
    ResultSet rs = stmt.executeQuery("SELECT myDate FROM myTable");
    rs.first();
    SimpleDateFormat sqlFormat = new SimpleDateFormat("yyyy-mm-dd");
    java.util.Date myDate = new Date();
    rs.updateDate("myDate", java.sql.Date.valueOf(sqlFormat.format(myDate)));
    rs.updateRow();

    I believe you should use yyyy-MM-dd instead of yyyy-mm-dd. I think MM stands for month while mm stands for minute as per
    http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html
    (If this works, after spending so much of your time trying to solve it, don't hit yourself in the head too hard. I find running out of the room laughing hysterically feels better).
    Here is a more standard(?) way of updating:
    String sqlStatement =
        "update myTable set myDate=? where personID=?";
    PreparedStatement p1 = connection.prepareStatement(sqlStatement);
    p1.setDate(1, new java.sql.Date(System.currentTimeMillis())); // java.sql.Date, not java.sqlDate()
    p1.setInt(2, personID);
    p1.executeUpdate();

  • Problem when trying to refresh oracle screens with latest data

    hello experts,
    I have one problem: I want to refresh the Oracle screen with the latest data from the database.
    It is a two-stage process. In the first step, a user selects a row from the screen and then presses a button.
    Now the second screen appears and the details of the employee are displayed.
    The first step has been completed; the data is coming into the second form via parameters, and I can see the full information of the employee.
    Now I want to refresh the Oracle form. I.e., suppose my DBA has made changes in the Oracle table (the EMP table); I want the user to see the latest data from the database after pressing the refresh button.
    In the WHEN_BUTTON_PRESSED trigger I have written this code:
    enter_query;
    execute_query;
    but it is not giving the expected result.
    And one more thing: please suggest whether in the second form I should use a database item or a non-database item.
    When I am using a database item and try to close the second form, a pop-up appears asking whether I want to save the changes.
    Please suggest how I can remove this message from my application.
    Regards
    Anutosh

    Hi,
    what data did you transfer via parameters to the second form?
    how did you populate the datablock in the second form?
    Typical solution would be:
    (For my example the block in both forms is named EMP, and is based on the table SCOTT.EMP)
    In Form 1, transfer the primary key-value of the current record to a global or parameter (will use global in my example):
    e.g. you have a WHEN-BUTTON-PRESSED-Trigger with the following code:
    :GLOBAL.EMPNO := :EMP.EMPNO;
    CALL_FORM('FORM2');
    In Form 2, you have a WHEN-NEW-FORM-INSTANCE-Trigger with code:
    DEFAULT_VALUE('GLOBAL.EMPNO', NULL);
    IF :GLOBAL.EMPNO IS NOT NULL THEN
      GO_BLOCK('EMP');
      EXECUTE_QUERY;
      :GLOBAL.EMPNO := NULL;
    END IF;
    On block EMP in Form 2 there is a PRE-QUERY-Trigger with following code:
    IF :GLOBAL.EMPNO IS NOT NULL THEN
      :EMP.EMPNO := :GLOBAL.EMPNO;
    END IF;
    And at last, in your refresh-button would be the following code:
    :GLOBAL.EMPNO := :EMP.EMPNO;
    GO_BLOCK('EMP');
    EXECUTE_QUERY;
    :GLOBAL.EMPNO := NULL;
    Hope this helps

  • Safest Way to Penetration Test an Oracle DB with Potential Data Loss

    Hi,
    I was wondering what the safest way is to protect Oracle from data loss when running a web application scan. We currently have an external company about to perform a web application scan, and they warned us of potential data loss. However, we can't afford much downtime, and our storage doesn't support features such as copy-on-write. What would you recommend? Do you think that something like putting the database in read-only mode for the duration of the test (2 hours) and enabling audit on all actions would be sufficient (we could then review the audit to see if any unauthorized calls were made)? Thanks.

    If not running live, you might consider restoring your database to a point before the test. But you need to have confidence this would work.
    I assume you're running live for the duration of the test.
    Going read-only might invalidate the test, and your application might not be able to run read-only without generating errors.
    Examine and be aware of the flashback technologies available at your database version and which ones might be useful. In this context, increasing the undo space/retention target might be helpful, but don't dash off doing something at the last minute.
    Ensure you have checked out how to use LogMiner.
    Consider not continuously updating any standby database you have until the test is complete.
    Ensure your most recent backup is successful, check your restore procedures, and have contingency plans in place.
    In practice the web penetration test may attempt to change a small amount of data in a small number of records, but the agreement probably means they are not liable even if they dropped a schema in the database!
    If you have to correct data following their test, then do so carefully. Doing the wrong thing (especially in a panic) can make a situation worse, especially if you are doing something you are not familiar with. Often it may be better to correct the data loss through the application itself.
    If you do turn on auditing, be aware of what it gives you before you turn it on, and beware of any space implications.
    I notice you are recently registered on the site... this may mean you don't have much experience with Oracle; you may be more of a system administrator, for instance. No disrespect in that whatsoever. However, especially if this is the case, remember that in my opinion dashing to change something at the last minute statistically often does more harm than good overall and may be harder to undo.
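    One concrete flashback option along these lines (a sketch, assuming 10.2 or later with a flash recovery area configured and the database in ARCHIVELOG mode) is a guaranteed restore point taken just before the scan:
    -- Before the test: mark a rewind target that is guaranteed to be honoured
    CREATE RESTORE POINT before_pen_test GUARANTEE FLASHBACK DATABASE;
    -- After the test, only if data was damaged (database mounted, not open):
    -- FLASHBACK DATABASE TO RESTORE POINT before_pen_test;
    -- ALTER DATABASE OPEN RESETLOGS;
    -- Either way, drop it afterwards so it stops pinning flashback logs:
    -- DROP RESTORE POINT before_pen_test;
    But as above, rehearse this procedure beforehand rather than reaching for it for the first time in a panic.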
    Hope this helps.
    bigdelboy
    Edited by: bigdelboy on 28-Mar-2009 01:18
    Edited by: bigdelboy on 28-Mar-2009 01:22

  • Oracle 9i with 500GB data

    Hi,
    our company is planning a data warehouse and we are going to propose an Oracle solution. The database of the warehouse would initially be 200GB and will grow up to 1 terabyte. Can someone tell me what kind of hardware configuration we should go for? Can we look at Oracle9i on Windows 2000 Advanced Server for it?
    Regards,
    Harini

    My Priorities for this environment would be:
    Multiple file volumes with multiple RAID Channels (You are going to want to partition or stripe your data as much as possible)
    Multiple CPUs (will allow for Parallel Queries and Parallel Direct Data Loads; see the sketch after this list)
    64 bit CPUs that will allow for more memory addressing and thus more physical memory (More DB Buffers, Larger Shared Pool, More in memory sorts)
    A secure and reliable 64 bit OS that will not require constant reboots to clear problems.
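    To follow up the point about multiple CPUs, here is a minimal illustration of putting them to work (a sketch; the table names and degree of parallelism are hypothetical):
    -- Let the optimizer use parallel slaves when scanning a big fact table
    ALTER TABLE sales_fact PARALLEL 8;
    -- Parallel direct-path load from a staging table
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(sales_fact, 8) */ INTO sales_fact
    SELECT * FROM sales_staging;
    COMMIT;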

  • Call Oracle procedure with custom data type within Java and Hibernate

    I have a custom date TYPE in Oracle
    like
    CREATE TYPE DATEARRAY AS TABLE OF DATE;
    and I have a Oracle function also
    like
    CREATE OR REPLACE FUNCTION doesContain (list DATEARRAY, val VARCHAR2) RETURN NUMBER
    IS
    END doesContain;
    In my Java class,
    I have a collection which contains a list of java.util.Date objects.
    When I call the Oracle function "doesContain", how do I pass my Java collection to this Oracle function?
    Can anyone provide solutions?
    Please !!!
    Thanks,
    Pulikkottil

    Vu,
    First of all you need to define your types as database types, for example:
    create or replace type T_ID as table of number(5);
    Then you need to use the "oracle.sql.ARRAY" class. You can search this forum's archives for the term "ARRAY" in order to find more details, and you can also find some samples via the JDBC Web page at the OTN Web site.
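    To sanity-check the type and function on the database side before wiring up the oracle.sql.ARRAY call from Java, something like this should work (a sketch; it assumes doesContain compiles with a real body, and the literal dates are made up):
    DECLARE
      l_dates DATEARRAY := DATEARRAY(DATE '2007-07-04', DATE '2007-07-05');
      l_found NUMBER;
    BEGIN
      l_found := doesContain(l_dates, '04-JUL-07');
      DBMS_OUTPUT.PUT_LINE('doesContain returned ' || l_found);
    END;
    /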
    Good Luck,
    Avi.

  • Importing data from Microsoft Excel file to Oracle Database with Multiple Data Tables. Need expert advice and guidance

    I posted a query on importing data from a Microsoft Excel file to an Oracle database (multiple data tables). I got some answers and references from the forum.
    I presented them to my Oracle consultant and a representative from Oracle Malaysia. They said it was impossible. I do not believe what they said; I do believe it can be done.
    Can someone help or direct me to an expert who can help me with this?

    e90f478a-c529-4c48-b189-51eebeaed477 wrote:
    I posted a query on importing data from a Microsoft Excel file to an Oracle database (multiple data tables). I got some answers and references from the forum.
    I presented them to my Oracle consultant and a representative from Oracle Malaysia. They said it was impossible. I do not believe what they said; I do believe it can be done.
    Can someone help or direct me to an expert who can help me with this?
    We don't know the "query on importing data from Microsoft Excel to Oracle Database (multiple data tables)."
    We don't know where you posted said query.
    We don't know what "some answers and references" you received "from the forum."
    We don't know what it was that your "Oracle consultant and representative from Oracle Malaysia" said was "impossible".
    So on what basis are we supposed to "help or direct" you "to an expert that can help"?

  • Trouble with increased data usage since 5.1 update

    Since updating my iPhone 4 to 5.1 I seem to be using much more data. My phone will show that it's on WiFi... and my AT&T phone bill is showing data charges at the same time. Does anyone know a solution to this problem? It is happening on all 3 of the iPhones I own.

    I am also having the exact same problem.

  • How to tune performance of a cube with multiple date dimension?

    Hi, 
    I have a cube with a measure. Now, for a turn-time report, I am taking the difference of two dates and computing the average, max, and min of that difference. The graph is taking a long time to load. I am using Telerik report controls.
    Is there any way to tune up the performance of a cube with multiple date dimensions? What are the key rules and best practices for a cube to perform well?
    Thanks, 
    Amit

    Hi amit2015,
    According to your description, you want to improve the performance of an SSAS cube with multiple date dimensions. Right?
    In Analysis Services, there are many tips for improving the performance of a cube. In this scenario, I suggest you keep only one date dimension and include only the columns that are required for your calculation. Please refer to "dimension design" in the link below:
    http://www.mssqltips.com/sqlservertip/2567/ssas--best-practices-and-performance-optimization--part-3-of-4/
    If you have any question, please feel free to ask.
    Simon Hou
    TechNet Community Support

  • Updating Oracle table with info from Sybase query

    I hope this is the correct forum for this question.
    I am fairly new to Java and JDBC. I am trying to figure out the best method for updating information in Oracle tables with data from a Sybase table. I would prefer to use Oracle's transparent gateway, but this is not an option my company will pay for, so I am creating a Java stored procedure and using JDBC to connect to the Sybase database.
    The process I think I need to go through is:
    1.     Query an Oracle table to get the records that need to be updated and the “key” information to query the Sybase table with.
    2.     Use that result to query the Sybase database to get the fields that need to be updated in the Oracle table for those records.
    3.     Update the records on the Oracle table with the data from the Sybase query.
    I know I can just do this procedurally, row-by-row, but I was wondering if anyone knows of a way to accomplish this with SQL and no loops. Is there a way to make a result set available as a “SQL table” for another JDBC query?
    Basically what I would like to do is:
    OraQuery = "select sybinfo from sometable where updated_date is null";
    Statement orastmt1 = OraConn.createStatement();
    ResultSet Orars1 = orastmt1.executeQuery(OraQuery);
    SybQuery = "select update_date, sybinfo from sybtable where sybinfo = Orars1.sybinfo";
    Statement sybstmt = SybConn.createStatement();
    ResultSet Sybrs = sybstmt.executeQuery(SybQuery);
    OraUpdate = "update (select update_date from sometable, Sybrs where sometable.sybinfo = Sybrs.sybinfo) set update_date = Sybrs.update_date";
    Statement orastmt2 = OraConn.createStatement();
    int updated = orastmt2.executeUpdate(OraUpdate);
    This may not be possible but if anyone has done something similar and wouldn’t mind sharing I would appreciate it. If there is a “better” way of accomplishing this I am open to suggestions.
    Thanks

    You can try using a CachedRowSet for the Oracle-side query.
    Its rows could be populated from the Sybase-side query's result set, and then all of it could be updated into Oracle in one shot.
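    Another route that avoids row-by-row logic (a sketch, not something from this thread: it assumes you first stage the Sybase rows into an Oracle work table, e.g. with batched inserts over JDBC) is to let a single MERGE do the set-based update; all table and column names below are hypothetical:
    MERGE INTO sometable t
    USING sybase_stage s
    ON (t.sybinfo = s.sybinfo)
    WHEN MATCHED THEN
      UPDATE SET t.update_date = s.update_date
      WHERE t.update_date IS NULL;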

  • Export report with graphics data

    Hi,
    I would like to get a long-period report (about 2 months) of performance data with graphics, the same as the data on the Performance tab of Grid Control. Is it possible to export a performance report with graphics instead of an ASH report, which exports only data without graphs?
    thanks very much
    Andrea

    Hi
    Good. The information that you need is stored in the MGMT tables, but the query can change a lot if you change your requirements.
    Example:
    For CPU % Utilization, GC retrieves information from the sar command:
    sar 60 1440
    For I/O waits, GC retrieves information from iostat.
    But the performance page in GC is the most complex to rebuild as a query, because that information is built by PL/SQL executions and Perl scripts.
    Other metrics are easier:
    - Blocking session count in the database
    SELECT blocking_sid, num_blocked
      FROM (SELECT blocking_sid, SUM(num_blocked) num_blocked
              FROM (SELECT l.id1, l.id2,
                           MAX(DECODE(l.block, 1, i.instance_name||'-'||l.sid,
                                               2, i.instance_name||'-'||l.sid, 0)) blocking_sid,
                           SUM(DECODE(l.request, 0, 0, 1)) num_blocked
                      FROM gv$lock l, gv$instance i
                     WHERE (l.block != 0 OR l.request > 0)
                       AND l.inst_id = i.inst_id
                     GROUP BY l.id1, l.id2)
             GROUP BY blocking_sid
             ORDER BY num_blocked DESC)
     WHERE num_blocked != 0
    The CPU load chart on the database performance page establishes a maximum CPU line that is equal to the number of processors.
    The chart plots the active sessions for each wait class.
    You can read the Enterprise Manager Grid Control Extensibility guide to check all the MGMT views, and the Metric Reference Manual to see the data source of each metric.
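    As a starting point for a long-period report, the hourly rollup views described in the Extensibility guide can be queried directly. A sketch; I'm assuming the MGMT$METRIC_HOURLY repository view here, and the target/metric names are placeholders you would need to adjust for your environment:
    -- Two months of hourly CPU utilization rollups from the repository
    SELECT rollup_timestamp, average, maximum
      FROM mgmt$metric_hourly
     WHERE target_name = 'myhost.example.com'
       AND metric_name = 'Load'
       AND metric_column = 'cpuUtil'
       AND rollup_timestamp >= SYSDATE - 60
     ORDER BY rollup_timestamp;
    The graphing itself would still have to be done by the reporting tool on top of this data.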

  • Does the iPhone 4S's new modem only increase data speed for AT&T customers?

    I'm wondering if the new upgraded modem that is touted with the release will increase data speeds across both GSM and CDMA networks.

    Only for GSM carriers who support HSPA+.
    EVDO on Verizon phones is already operating at the limits of the capability of the technology, and the 4S does not support LTE, Verizon's (and AT&T's) next generation network.

  • Oracle Spatial Performance with 10-20.000 users

    Does anyone have any experience with Oracle Spatial being used by, say, 20.000 concurrent users? I am not interested in MapViewer response time, but let's say there is:
    - an app using 800 different tables each having an sdo_geometry column
    - the app is configured with different tables visible on different view scales
    - let's say an average of 40-50 tables is visible at any given time
    - some tables will have only a few records, while other can hold millions.
    - there is no client side caching
    - clients can zoom in/out and pan.
    Answers I am interested in:
    - What sort of server would be required
    - How can Oracle serve all that data (each refresh renders the map and retrieves the data over the wire, as there is no client-side caching)?
    - What sort of network infrastructure would be required.
    - Can clients connect to different servers and hence use load balancing or does Oracle have an automatic mechanism for that?
    Thanks in advance,
    Patrick

    Patrick, et al.
    There are lots of things one can do to improve performance in mapping environments because a lot of the visualisation is based on "background" or read-only data. Here are some "tips":
    1. Spatially sort read-only data.
    This tip makes sure that data that are close to each other in space are also next to each other on disk! Dan gave a good suggestion when he referenced Chapter 14, "Reorganize the Table Data to Minimize I/O", pp 580-582, Pro Oracle Spatial. But just as easily one can create a table as select ... where sdo_filter(), where the filtering object is an optimized rectangle across the whole of the dataset. (This is quite quick on 10g and above but much slower on earlier releases.)
    When implementing this, make sure that the table is created such that its blocks are next to each other in the tablespace. (Consider tablespace defragmentation beforehand.) Also, if the data is READ ONLY, set PCTFREE to 0 in order to pack the data into as small a number of blocks as possible.
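    A minimal version of that rewrite might look like this (a sketch; the table and column names are hypothetical, the geometry column is assumed to be spatially indexed, and the rectangle assumes longitude/latitude data with a NULL SRID):
    -- Packed, spatially ordered copy of a read-only layer
    CREATE TABLE roads_sorted PCTFREE 0 AS
    SELECT r.*
      FROM roads r
     WHERE SDO_FILTER(r.geom,
             SDO_GEOMETRY(2003, NULL, NULL,
               SDO_ELEM_INFO_ARRAY(1, 1003, 3),      -- optimized rectangle
               SDO_ORDINATE_ARRAY(-180, -90, 180, 90))) = 'TRUE';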
    2. Generalise data
    Rendering spatial data can be expensive where the data is geometrically detailed (many vertices), especially where the data is being visualised at smaller scales than it was captured at. So, if your "zoom thresholds" allow 1:10,000 data to be used at 1:100,000 then you are going to have problems. Consider pre-generalising the data (see sdo_util.simplify) before deployment. You can add multiple columns to your base table to hold this data. Be careful with polygon data because generalising polygons that share boundaries will create gaps etc as the data is more generalised. Often it is better to export the data to a GIS which can maintain the boundary relationships when generalising (say via topological relationships).
    Oracle's MapViewer has excellent on-the-fly generalisation but here one needs to be careful. Application tier caching (cf Bryan's comments) can help here a lot.
    3. Don't draw data that is sub-pixel.
    As one zooms out objects become smaller and smaller until they reach a point where the whole object can be drawn within a single pixel. If you have control over your map visualisation application you might want to consider setting the SDO_FILTER parameter "min_resolution" flag dynamically so that its value is the same as the number of meters / pixel (eg min_resolution=10). If this is set Oracle Spatial will only include spatial objects in the returned search set if one side of a geometry's MBR is greater than or equal to this value. Thus any geometries smaller than a pixel will not be returned. Very useful for large scale data being drawn at small scales and for which no selection (eg identify) is required. With Oracle MapViewer this behaviour can be set via the generalized_pixels parameter.
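    For example, the filter-level version of this might look like the following (a sketch; the table, column and bind names are hypothetical, using the 10g parameter string):
    -- Skip geometries whose MBR is smaller than ~10 map-meters per pixel
    SELECT r.id, r.geom
      FROM roads r
     WHERE SDO_FILTER(r.geom, :map_window,
                      'querytype=WINDOW min_resolution=10') = 'TRUE';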
    4. SDO_TOLERANCE, Clean Data
    If you are querying data other than via MBR (eg find all land parcels that touch each other) then make sure that your sdo_tolerance values are appropriate. I have seen sites where data captured to 1cm had an sdo_tolerance value set to a millionth of a meter!
    A corollary to this is make sure that all your data passes validation at the chosen sdo_tolerance value before deploying to visualisation. Run sdo_geom.validate_geometry()/validate_layer()...
    5. RTree Spatial Indexing
    At 10g and above lots of great work went into the RTree indexing. So, make sure you are using RTrees and not QuadTrees. Also, many GIS applications create sub-optimal RTrees by not using the additional parameters available at 10g and above.
    5.1 If your table/column sdo_geometry data contains only points, lines or polygons then let the RTree indexer know (via layer_gtype) as it can implement certain optimizations based on this knowledge.
    5.2 With 10g you can set the RTree's spatial index data block use via sdo_pct_free. Consider setting this parameter to 0 if the table/column sdo_geometry data is read only.
    5.3 If a table/column is in high demand (eg it is the most commonly used table in all visualisations) you can consider loading (a part of) the RTree index into memory. Now, with the RTree indexing, the sdo_non_leaf_tbl=true parameter will split the RTree index into its leaf (contains actual rowid reference) and non-leaf (the tree built on the leaves) components. Most RTrees are built without this so only the MDRT*** secondary tables are built. But if sdo_non_leaf_tbl is set to true you will see the creation of an additional MDNT*** secondary table (for the non_leaf part of the rtree index). Now, if appropriate, the non_leaf table can be loaded into memory via the following:
    ALTER TABLE MDNT*** STORAGE (BUFFER_POOL KEEP);
    This is NOT a general panacea for all performance problems. One should investigate other options before embarking on this (cf Tom Kyte's books such as Expert Oracle Database Architecture, 9i and 10g Programming Techniques and Solutions.)
    5.4 Don't forget to check your spatial index data quality regularly. Because many sites use GIS package GUI tools to create tables, load data and index them, there is a real tendency not to check what they have done or regularly monitor the objects. Check the SDO_RTREE_QUALITY column in USER_SDO_INDEX_METADATA and look for indexes with an SDO_RTREE_QUALITY setting that is > 2. If > 2, consider rebuilding or recreating the index.
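    That check is a small query against the metadata view mentioned above (a sketch):
    -- Spatial indexes whose RTree quality suggests a rebuild
    SELECT sdo_index_name, sdo_rtree_quality
      FROM user_sdo_index_metadata
     WHERE sdo_rtree_quality > 2;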
    6. The rendering engine.
    Whatever rendering engine one uses make sure you try and understand fully what it can and cannot do. AutoDesk's MapGuide is an excellent product but I have seen it simply cache table/column data and never dynamically access it. Also, I have been at one site which was running Deegree and MapViewer and MapViewer was so fast in comparison to Deegree that I was called in to find out why. I discovered that Deegree was using SDO_RELATE(... ANYINTERACT ...) for all MBR queries while MapViewer was using SDO_FILTER. Just this difference was causing some queries to perform at < 10% of the speed of MapViewer!!!!
    7. Consider "denormalising" data
    There is an old adage in databases that is "normalise for edit, denormalise for performance". When we load spatial data we often get it from suppliers in a fairly flat or normalised form. In consort with spatial sorting, consider denormalising the data via aggregations based on a rendering attribute and some sort of spatial unit. For example, if you have 1 million points stored as single points in SDO_GEOMETRY.SDO_POINT which you want to render by a single attribute containing 20 values, consider aggregating the data using this attribute AND some sort of spatial BUCKET or BIN. So, consider using SDO_AGGR_UNION coupled with Spatial Analysis and Mining package functions to GROUP the data BY <<column_name>> and a set of spatial extents.
    8. Tablespace use
    Finally, talk to your DBA in order to find out how the oracle database's physical and logical storage is organised. Is a SAN being used or SAME arranged disk arrays? Knowing this you can organise your spatial data and indexes using more effective and efficient methods that will ensure greater scalability.
    9. Network fetch
    If your rendering engine (app server) and database are on separate machines you need to investigate what sort of fetch sizes are being used when returning data from queries to the middle-tier. Fetch sizes for attribute-only data rows and rows containing spatial data can be, and normally are, radically different. Accepting the default settings for these sizes could be killing you (as could the sort_area_size of the Oracle session the application server has created on the database). For example I have been informed that MapInfo Pro uses a fixed value of 25 records per fetch when communicating with Oracle. I have done some testing to show that this value can be too small for certain types of spatial data. SQL Developer's GeoRaptor uses 100, which is generally better (but one can modify this). Most programmers accept defaults for network properties when programming in ADO/ODBC/OLEDB/JDBC: just be careful as to what is being set here. (This is one of the great strengths of ArcSDE: its TCP/IP network transport is well written, tuneable and very efficient.)
    10. Physical Format
    Finally, while Oracle's excellent MapViewer requires its spatial data to be in Oracle, other commercial rendering engines do not. So, consider using alternate physical file formats that are more optimal for your rendering engine. For example, Google Earth Enterprise "compiles" all the source data into an optimal format which the server then serves to Google Earth Enterprise clients. Similarly, a shapefile on disk local to the application server (with spatial indexing) may be faster than storing the data back in Oracle on a database server that is being shared with other business databases (eg Oracle Financials). If you don't like this approach and want to use Oracle only, consider using a dedicated Oracle XE on the application server for the data that is read only and used in most of your generated maps, eg contour or drainage data.
    Just some things to think about.
    regards
    Simon

  • Problem increasing dates with REGEXP_REPLACE and TO_DATE

    Hello all,
    I'm trying to increase dates in a string with a single statement, but I ran into the problem that Oracle doesn't interpret the regex backreferences when they are used in TO_DATE:
    SELECT
      REGEXP_REPLACE(
        'Test: 01.01.2001 Test: 02.02.2002',
        '([0-9]{2})\.([0-9]{2})\.([0-9]{4})',
        TO_CHAR(TO_DATE('\1\2\3', 'DDMMYYYY') + 1, 'DD.MM.YYYY'))
    FROM DUAL
    This leads to ORA-01858: "a non-numeric character was found where a numeric was expected".
    However, using other functions on the backreferences, like UPPER('\1'), works like a charm. Is this a limitation of the TO_DATE function?
    Is there any possibility to replace the dates in a small, simple statement?
    Thanks in advance,
    -sd

    How should I show you an example, when it's simply not possible? I know I mixed up parameters and callbacks mistakenly and took the wrong approach.
    Ok. In your last post you'd added a question mark, so it did look as if you weren't aware of the problem.
    However, I thought there was some possibility in Oracle regex to apply a callback function to the regex replacement like other languages have (preg_replace_callback in PHP for example), like:
    FUNCTION chg_date(
    p_string
    REGEXP_REPLACE_CALLBACK('Test: 2001', '[0-9]{4}',
    'chg_date')
    ...
    Interesting. Unfortunately PL/SQL is not PHP and there's (yet) no callback feature implemented.
    I too miss the possibility to extend the use of backreference parameters, maybe in one of the future versions.
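    Absent a callback feature, one workaround is a small PL/SQL function that walks the string with REGEXP_SUBSTR, shifts each date, and splices the result back in. A sketch (the function name and length cap are mine; it relies on the replacement having the same length as the match):
    CREATE OR REPLACE FUNCTION shift_dates (p_str IN VARCHAR2)
      RETURN VARCHAR2
    IS
      l_result VARCHAR2(4000) := p_str;
      l_match  VARCHAR2(10);
      l_pos    PLS_INTEGER := 1;
    BEGIN
      LOOP
        -- next DD.MM.YYYY occurrence at or after l_pos
        l_match := REGEXP_SUBSTR(l_result, '[0-9]{2}\.[0-9]{2}\.[0-9]{4}', l_pos);
        EXIT WHEN l_match IS NULL;
        l_pos := REGEXP_INSTR(l_result, '[0-9]{2}\.[0-9]{2}\.[0-9]{4}', l_pos);
        -- add one day and splice the new date over the old one
        l_result := SUBSTR(l_result, 1, l_pos - 1)
                 || TO_CHAR(TO_DATE(l_match, 'DD.MM.YYYY') + 1, 'DD.MM.YYYY')
                 || SUBSTR(l_result, l_pos + LENGTH(l_match));
        l_pos := l_pos + LENGTH(l_match);
      END LOOP;
      RETURN l_result;
    END shift_dates;
    /
    -- SELECT shift_dates('Test: 01.01.2001 Test: 02.02.2002') FROM DUAL;
    -- => Test: 02.01.2001 Test: 03.02.2002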
    C.

  • Performance with dates in the where clause

    CREATE TABLE TEST_DATA
    (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plan, I see that query 2 & 3 is better than query 1. I do not see any difference between execution plan 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259
    259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND
    "FDATE"<=TRUNC(SYSDATE@!)+.999988425925925925925925925925925925925
    9)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10
    23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND
    "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
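    For illustration, such a function-based index might look like this (a sketch; the index name is mine):
    create index t_trunc_indx on test_data (trunc(fdate));
    With it in place, Query 1's trunc(fdate) = trunc(sysdate) predicate can use an index range scan instead of a full table scan.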
    2) From the execution plan, I see that query 2 & 3 is better than query 1. I do not see any difference between execution plan 2 & 3. Which one is better?
    That depends on what you mean by "better".
    If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE     fdate >= TRUNC (SYSDATE)
    AND     fdate <  TRUNC (SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plans 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Sorry, I can't.
