Delete on very large table

Hi all,
One table in my database has grown to 20 GB. It holds application logs going back to 2005, so we decided to archive and delete all the log entries from 2005 and 2006.
The table:
CREATE TABLE WORKMG.PP_TRANSFAUX (
    NID_TRANSF   NUMBER(28)          NOT NULL,
    VTRANSFTYPE  VARCHAR2(200 BYTE)  NOT NULL,
    VTRANSF      VARCHAR2(4000 BYTE) NOT NULL,
    DTRANSFDATE  DATE                NOT NULL
)
TABLESPACE TBS_TABWM_ALL;
The command:
delete from workmg.pp_transfaux
where dtransfdate < to_date('20070101 00:00', 'yyyymmdd hh24:mi');
My question is: what are the best practices for this operation? Such a huge delete can flood the redo logs, and I can't avoid that with "alter table pp_transfaux nologging" (NOLOGGING does not apply to conventional DELETEs, which are always logged).
I could delete the data in smaller chunks, say six months at a time, but then I'd end up with a badly fragmented table.
My environment:
Oracle 9.2.0.1 on Windows 2000
Best Regards
Rui Madaleno

Since this is log data I am assuming you don't need it all online at a given time, and that you don't have a partitioning license (a rough SQL sketch of this first recipe follows after the two lists):
<Online>
0. Backup the database.
1. Create an empty duplicate table 'A'.
<maintenance window>
2. Exchange A and the primary table.
<Online>
3. insert-as-select-compress-nologging the data to keep from the primary table to A.
4. create-table-as-select-compress-nologging the data to archive from the primary table to B.
5. Drop the primary table.
6. Archive and drop B.
7. Backup the database.
<Future>
8. Upgrade to 11g.
9. License and utilize the automated partition generation (interval partitioning) feature to give each period its own partition.
10. Periodically archive and drop partitions from the log table.
If you have partitioning:
<Online>
1. Create an empty duplicate table, we'll call it A. It should be partitioned either by month or year.
2. insert-as-select-compress-nologging the primary table into A.
<maintenance window>
3. exchange A and the primary table.
<Online>
4. Drop the primary table.
5. archive and drop the partitions of A you no longer need.
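For the no-partitioning recipe, here is a rough SQL sketch. The _new/_old/_arch names are made up, and since a non-partitioned table cannot use partition exchange, step 2's "exchange" is done with plain renames; indexes, grants and constraints would have to be recreated on the new table, so test on a copy first.
-- 1. Empty duplicate of the log table
CREATE TABLE workmg.pp_transfaux_new
TABLESPACE tbs_tabwm_all
AS SELECT * FROM workmg.pp_transfaux WHERE 1 = 0;
-- 2. (Maintenance window) swap the empty table in
ALTER TABLE workmg.pp_transfaux RENAME TO pp_transfaux_old;
ALTER TABLE workmg.pp_transfaux_new RENAME TO pp_transfaux;
-- 3. Copy back the rows to keep (2007 onwards), direct-path to minimize redo
INSERT /*+ APPEND */ INTO workmg.pp_transfaux
SELECT * FROM workmg.pp_transfaux_old
WHERE dtransfdate >= TO_DATE('20070101', 'yyyymmdd');
COMMIT;
-- 4. Build the compressed archive copy (table B in the steps above)
CREATE TABLE workmg.pp_transfaux_arch COMPRESS NOLOGGING
AS SELECT * FROM workmg.pp_transfaux_old
WHERE dtransfdate < TO_DATE('20070101', 'yyyymmdd');
-- 5/6. Drop the old table, then export and drop the archive copy
DROP TABLE workmg.pp_transfaux_old;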

Similar Messages

  • Deleting rows from very large table

    Hello,
    I need to delete rows from a large table, but not all of them, so I can't use truncate. The delete condition is based on one column, something like this:
    delete from very_large_table where col1=100;
    There's an index (valid, B-tree) on col1, but it still goes very slow. Is there any instruction which can help delete rows faster?
Thx in adv.
    A.

    Your manager doesn't agree to your running an EXPLAIN PLAN? What is his objection? Sounds like the prototypical 'pointy-hair boss'.
    Take a look at these:
-- do_explain.sql
spool explain.txt
-- do EXPLAIN PLAN on target queries with current index definitions
truncate table plan_table;
set echo on
explain plan for
<insert query here>;
set echo off
@get_explain.sql
-- get_explain.sql
set linesize 120
set pagesize 70
column operation   format a25
column query_plan  format a35
column options     format a15
column object_name format a20
column order       format a12
column opt         format a6
select lpad(' ', level) || operation "OPERATION",
       options "OPTIONS",
       decode(to_char(id), '0', 'COST = ' || nvl(to_char(position), 'n/a'), object_name) "OBJECT NAME",
       cardinality "rows",
       substr(optimizer, 1, 6) "OPT"
from   plan_table
start  with id = 0
connect by prior id = parent_id;
    There are probably newer, better ways, but this should work with all living versions of Oracle and is something I've had in my back pocket for several years now. It's not actually executing the query or dml in question, just running an explain plan on it.
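For what it's worth, on 9iR2 and later the formatting can also be handed off to DBMS_XPLAN - a minimal sketch, using the query from the original post:
explain plan for
delete from very_large_table where col1 = 100;
select * from table(dbms_xplan.display);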

  • Creation of very large table

    Dear all, I need to create a table that will host about 40 GB of data with a very simple structure (varchar2(20), varchar2(10), VARCHAR2(4000)), thus there will be millions of records. The table should be indexed by the first two fields to allow online reporting.
    I am not a DBA and our DBA is on leave so we have a little trouble there (;C). Loading the table is not a problem, however I will need to partition it if I want it to deliver queries with a reasonable response time. The same for indexes.
    Is there a recommendation for creation of such table?
The first field is a code that indicates the dossier number. The second field indicates the type of data that is contained in the third field. Typically, a query would be of the kind "show me the name of manufacturers (thus, second code=xx) for year 2006 (thus, second code=yy and data=2006)". I have thought that partitioning by year would be a good idea and then maybe hash sub-partitioning in order to reduce the size of the yearly partitions.
    Regarding performance, what needs to be done in order to optimise the execution plan?
    Thanks in advance.

    Thanks for the reminder, but licensing is covered by my organisation. I am really concerned about the technical aspects of my question.
    Regarding Vit's post, I do not understand fully why "Well, partitioning on "data" seems to be a bad idea, if it contains other values than just dates (as you indicate)." I would partition on data, the problem is that the governing field is not a column per se, but rather contained in the data column together with other information. I'll make it more clear:
    column 1: fiche id
    column 2: data code
    column 3: data itself
    Thus, a "fiche" would consist of several records, each containing diverse information, for example:
fiche  code  data
1      10    01/01/2007      (i.e. the date I would use for partitioning)
1      20    Acme            (e.g. name of manufacturer)
1      30    1000000         (e.g. revenue in 2007)
1      40    WaltDisney st.  (e.g. address)
and so on
    So, it is not so straight forward to use a column for partitioning, I actually need the combination of code and data to know what data to use. Otherwise I would use hash partitioning.
    Regarding the keys, the obvious columns are fiche and code, being the combination of both, unique. A typical query would want to search for data in the data column (i.e. year), for which I also need column code (I need to know what code is the type of data year before I can ask for it). Indexing of data would be necessary.
    Finally, the data would be accessed through an SQL editor. The end user would have freedom to build own queries. Just for information, a select count on the partially loaded table (bit more than 95 million records = ~9% of foreseen load) without indexes takes about 1 min 20 seconds to complete.
    One more thing I forgot to mention: the table will have inserts (not frequent, but maybe massive) and not updates.
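Since the real partitioning key is buried inside the data column, one workable sketch is to extract it into a dedicated DATE column at load time and partition on that. Everything below (names, periods, subpartition count) is illustrative only:
CREATE TABLE fiche_data (
  fiche_id   VARCHAR2(20)   NOT NULL,
  data_code  VARCHAR2(10)   NOT NULL,
  data       VARCHAR2(4000),
  part_date  DATE           NOT NULL   -- parsed out of "data" during the load
)
PARTITION BY RANGE (part_date)
SUBPARTITION BY HASH (fiche_id) SUBPARTITIONS 8
( PARTITION p2006 VALUES LESS THAN (TO_DATE('01/01/2007', 'DD/MM/YYYY')),
  PARTITION p2007 VALUES LESS THAN (TO_DATE('01/01/2008', 'DD/MM/YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE) );
-- local index to support reporting queries on the first two fields
CREATE INDEX fiche_data_ix ON fiche_data (fiche_id, data_code) LOCAL;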

  • How do I delete a very Large "Other" File

    I erroneously loaded a ton of music files to the "Other" file on my ipod classic 80g...What is the easiest way to delete this large file from the ipod?

This method deleted the huge "Other" file, but of course results in a long-term issue of re-loading the iPod with a large original file. On an external drive, I have my iTunes music folder and other music in MP3/CD formats. I've tried to load these songs to the iPod, but it seems to want to place them in the "Other" file and not Music. I have been manually loading these songs from the external drive and sliding them right into the music library on the iTunes page. Is this the right way to do this?

  • Working with VERY LARGE tables - is it possible to bypass row counting?

    Hello!
    For working with large result sets ADF provides the `Range Paging` mechanism for views, described in the 27.1.5 part of the Developer’s Guide For Forms/4GL Developers.
It works well, but in its common mode it counts the total row count to allow paging. In some cases the query `select count(1) from (SELECT ...)...` can take a very, very long time.
But if a view object doesn't know the row count (for example, if we override the getEstimatedRowCount() method), the paging controls don't appear in the user interface.
Meanwhile, I suggest that it should be possible to display just two paging links - Prev and Next - without knowing the row count. Is there a way to do it?
    Thank in advance,
    Ilya Rodionov.

    Hi Ilya,
while you wait for Frank to dig up the right sample, you can read this thread:
    Re: ADF BC: Performance issue with getEstimatedRowCount (ER?)
    There we discuss the exact issue.
    Timo

  • Error when creating index with parallel option on very large table

    I am getting a
    "7:15:52 AM ORA-00600: internal error code, arguments: [kxfqupp_bad_cvl], [7940], [6], [0], [], [], [], []"
    error when creating an index with parallel option. Which is strange because this has not been a problem until now. We just hit 60 million rows in a 45 column table, and I wonder if we've hit a bug.
    Version 10.2.0.4
    O/S Linux
    As a test I removed the parallel option and several of the indexes were created with no problem, but many still threw the same error... Strange. Do I need a patch update of some kind?

    This is most certainly a bug.
    From metalink it looks like bug 4695511 - fixed in 10.2.0.4.1
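Until the patch is in place, a common workaround is to build the index serially and only then mark it for parallel query - a sketch, with placeholder names:
CREATE INDEX big_tab_ix ON big_tab (some_col) NOLOGGING;
ALTER INDEX big_tab_ix PARALLEL 8;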

  • Composite primary key on very large tables...

    Hello
    So I'm building a database where one of the tables will eventually have > 1 billion rows and was wondering about the primary key. The table will have 3 columns (sample_id, object_id, value) and so I was thinking instead of creating a surrogate key I would create a composite primary key on those 3 columns.... People will query either for sample_id = X or object_id = Y. I've created a composite index on object_id, sample_id and value and the query times have been fast < 2-3s per object_id. Although building that index takes some time (7-8 hrs) what would be the pros/cons to composite PK vs a unique index? I plan to do massive bulk uploads (50M records at a time) so I'll disable the constraints before loading.... These records will also be loaded in order so would a clustered table be appropriate?
    thanks
    steve

    Hi,
As Steve said, partitioning the table is the way to go.
Create a unique index on a single column; then, even if your query does not reference the indexed column, you can add a HINT to make it use the index.
If you are joining this table to other tables, the USE_HASH hint can improve performance.
    Hope it helps.
    Thanks
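On the PK-versus-unique-index question itself: a primary key is enforced through an index anyway, so you can get both in one step and control how that index is built. A sketch, with a made-up table name (the original post never names the table):
ALTER TABLE measurements ADD CONSTRAINT measurements_pk
PRIMARY KEY (sample_id, object_id, value)
USING INDEX (CREATE UNIQUE INDEX measurements_pk_ix
             ON measurements (sample_id, object_id, value) NOLOGGING);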

  • How to check duplicate in a very large table

How to check for a duplicate value (say, a mobile number) in a table having 50 million records?

I need to know whether the data (mobile number) is already there in the table whenever a new row is inserted.
    You can also use the MERGE statement and the WHEN NOT MATCHED clause to insert rows only if that value is not there.
    But if you are just checking the one column then a unique index would do it.
    Does that column allow a NULL value?
    Do you care about existing duplicate values?
    If you only want to check NEW inserts then create a UNIQUE constraint and use the ENABLE NOVALIDATE clause:
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/clauses002.htm
      ENABLE NOVALIDATE ensures that all new DML operations on the constrained data comply with the constraint. This clause does not ensure that existing data in the table complies with the constraint.
    That will check new data but leave the old data alone - no performance hit for the old data.
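A sketch of both suggestions, with invented table and column names. Note the non-unique index in the USING INDEX clause: it allows the constraint to be enabled even though old duplicates remain in the table:
-- police new rows only; existing duplicates are left alone
ALTER TABLE subscribers ADD CONSTRAINT subscribers_mob_uq
UNIQUE (mobile_no)
USING INDEX (CREATE INDEX subscribers_mob_ix ON subscribers (mobile_no))
ENABLE NOVALIDATE;
-- insert only when the number is not already present
MERGE INTO subscribers s
USING (SELECT :new_mobile_no AS mobile_no FROM dual) n
ON (s.mobile_no = n.mobile_no)
WHEN NOT MATCHED THEN
  INSERT (mobile_no) VALUES (n.mobile_no);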

  • Break a (very) large table into pages interactively

    Here's the problem: I have a number of reports which, depending on parameters and the nature of the underlying data, may produce a table (tablix) with 10's of 1000's of rows.
    The non-interactive renderers deal with this kind of table reasonably - Word & PDF, for example, simply start a new page when the current page is full.
    Sadly, interactive use of the report uses the RPL renderer which, for some strange and incomprehensible reason, ignores the page size and always tries to render a single table without intervening page breaks.
Unfortunately, when you ask the Web or WinForms ReportViewer clients to render a 250,000-row table on a single page, they fail badly - the Web viewer just gives up after a few minutes, while the WinForms viewer tries like a champ, but even after 90 minutes of execution and consumption of 10+ GB of RAM it is still stuck compute-bound trying to lay out that giant table.
    Looking for ideas - I've been looking for a solution for 10 years now, but SSRS has progressed so little since 2005, the time lapsed doesn't really mean much.
    In case it matters - the underlying query is MDX.  SSRS 2012 (or 2008 R2 or 2014 - they all behave the same).
    -cd Mark the best replies as answers!

    Setting the InteractiveSize property should resolve this issue.
    Since you say you are getting no page breaks at all I am guessing you have the interactive height set to zero. 
Note that you can't see this property by right-clicking the report background and going to report properties - you need to use the Properties pane instead.
    http://www.magnetismsolutions.com/blog/nathaneccles/2013/10/30/ssrs-using-page-size-and-interactive-size-to-manage-printing
    LucasF

  • Efficiently Querying Large Table

I have to query a recordset of 14k records against (join to) a very large table: billions of rows - even a count on the table does not return any result after 15 minutes.
I tried a PL/SQL procedure to store the first recordset in a temp table and then prepare two cursors: one on the temp table and the other on the large table.
However, the PL/SQL procedure runs for a long time and just gives up with this error:
    SQL> exec match;
    ERROR:
    ORA-01041: internal error. hostdef extension doesn't exist
    BEGIN match; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
Is there a way through which I can query more efficiently?
- Using chunks of records from the large table at a time - how do I do that? (ROWID?)
- Or just ask the DBA to partition the table - but the whole table would still need to be queried.
    The temp table is:
    CREATE TABLE test AS SELECT a.mon_ord_no, a.mo_type_id, a.p2a_pbu_id, a.creation_date,b.status_date_time,
    a.expiry_date, a.current_mo_status_desc_id, a.amount,
    a.purchaser_name, a.recipent_name, a.mo_id_type_id,
    a.mo_redeemed_by_id, a.recipient_type, c.pbu_id, c.txn_seq_no, c.txn_date_time
    FROM mon_order a, mo_status b, host_txn_log c
    where a.mon_ord_no = b.mon_ord_no
    and a.mon_ord_no = c.mon_ord_no
    and b.status_date_time = c.txn_date_time
    and b.status_desc_id = 7
    and a.current_mo_status_desc_id = 7
    and a.amount is not null
    and a.amount > 0
    order by b.status_date_time;
    and the PL/SQL Procedure is:
CREATE OR REPLACE PROCEDURE match
IS
    deleted INTEGER := 0;
    counter INTEGER := 0;
    -- driving cursor: one row per record to remove from the temp table
    CURSOR v_table IS
        SELECT DISTINCT pbu_id, txn_seq_no, create_date
        FROM host_v
        WHERE status = 4;
    v_table_record v_table%ROWTYPE;
    CURSOR temp_table (v_pbu_id NUMBER, v_txn_seq_no NUMBER, v_create_date DATE) IS
        SELECT * FROM test
        WHERE pbu_id = v_pbu_id
        AND txn_seq_no = v_txn_seq_no
        AND creation_date = v_create_date;
    temp_table_record temp_table%ROWTYPE;
BEGIN
    OPEN v_table;
    LOOP
        FETCH v_table INTO v_table_record;   -- was "voucher_table_record", which is never declared
        EXIT WHEN v_table%NOTFOUND;
        OPEN temp_table (v_table_record.pbu_id, v_table_record.txn_seq_no, v_table_record.create_date);
        LOOP
            FETCH temp_table INTO temp_table_record;
            EXIT WHEN temp_table%NOTFOUND;   -- was "%FOUND", which exits after the first fetch
            -- compare table columns (not record fields) against the driving values
            DELETE FROM test
            WHERE pbu_id = v_table_record.pbu_id
            AND txn_seq_no = v_table_record.txn_seq_no
            AND creation_date = v_table_record.create_date;
        END LOOP;
        CLOSE temp_table;
    END LOOP;
    CLOSE v_table;
END match;
/

    Many thanks,
I can get the explain plan for the SQL statement, but I am not sure how to get it for the PL/SQL. For which section of the PL/SQL do I get the explain plan? I am using SQL Navigator.
I can create the cursor with the join, and if the delete statement is not needed, then there is no requirement for the procedure itself. Should I just run the query as a SQL statement?
You have not said what I should do with the rowid.
    Regards
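For what it's worth, the cursor loops above collapse into a single set-based DELETE, which is usually the faster route - a sketch, assuming the join columns used in the procedure:
DELETE FROM test t
WHERE EXISTS (SELECT 1
              FROM host_v v
              WHERE v.status      = 4
              AND   v.pbu_id      = t.pbu_id
              AND   v.txn_seq_no  = t.txn_seq_no
              AND   v.create_date = t.creation_date);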

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
    (SELECT RID, rownum rnum
     FROM
        (SELECT rowid AS RID
         FROM members
         WHERE last_name = 'Smith'
         ORDER BY joindate)
     WHERE rownum <= 100)
WHERE rnum >= 1
AND RID = members.rowid

The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
SELECT rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate

This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate

SELECT /*+ first_rows(100) */ rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate

Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.*          -- Select all data from members table
FROM members,             -- members table added to FROM clause
    (SELECT RID, rownum rnum
     FROM
        (SELECT /*+ index(members, joindate_idx) */ rowid AS RID   -- Hint is ignored now that I am joining in the outer query
         FROM members
         WHERE last_name = 'Smith'
         ORDER BY joindate)
     WHERE rownum <= 100)
WHERE rnum >= 1
AND RID = members.rowid   -- Merge the members table on the rowid we pulled from the inner queries

Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
|*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
|*  2 |   COUNT STOPKEY          |      |       |       |            |          |
|   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
|*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
|   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
          175  recursive calls
            0  db block gets
          388  consistent gets
            0  physical reads
            0  redo size
         1063  bytes sent via SQL*Net to client
          385  bytes received via SQL*Net from client
            2  SQL*Net roundtrips to/from client
            4  sorts (memory)
            0  sorts (disk)
           11  rows processed
    SQL> set autotrace off
SQL> spool off

As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOPS join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve the join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query.

  • Select All in large table

    My product has some very large tables (using RANGE_PAGING). I want to allow the end user to do a Select All operation, and then press a command button to act on all selected rows, but want my backing code to detect a Select All has been done, rather than attempt to retrieve all rows from the table. Is this doable with a RichTable?
    I'm porting an existing UI over to ADF -- the old UI handled this by having a separate select all button (column header), which would get unset if any row got unselected, and the backing code could interrogate that. I was wondering if there was a more ADF-ish way of handling this.
    Using Oracle JDeveloper 11g Release 1 (11.1.1.6.0)
    Edited by: user12614476 on Dec 5, 2011 1:58 PM
    (added JDeveloper version info)

    Are you really using 11.1.1.6? If so, I guess you should be asking in one of the internal Oracle forums.
    But, no, there's no "more ADF-ish" way to my knowledge :)
    John

  • Using dbms_metadata to get defination of large table

    Hi,
I have a very large table definition (a table with many partitions and sub-partitions) and I want to create that table in some other DB.
I am trying the spooled query below, but it is not retrieving the complete definition of the table. Are there any other settings I should use?
    spool t.sql
    set pagesize 0
    set long 2000000
    select dbms_metadata.get_ddl
    I am using oracle 10g r2
    Thanks,
    Pankaj

Thank you all for your replies. Following are my observations after applying the suggested settings:
    With setting
    ==================
    set longchunksize 2000000
    set long 2000000
    set lines 2000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    10819 23419 21635001
    =================
    With setting
    set long 100000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    3452 7326 279269
    Result => None worked for me :(
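For reference, the usual complete recipe looks like the sketch below (owner and table name are placeholders). TRIMSPOOL matters too, because very long lines are otherwise padded in the spool file:
set long 2000000
set longchunksize 2000000
set pagesize 0
set linesize 2000
set trimspool on
spool t.sql
select dbms_metadata.get_ddl('TABLE', 'BIG_TABLE', 'SOME_OWNER') from dual;
spool off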

  • Updating a large table

    Hello,
We need to update 2 columns on a very large table (20,000,000 records). Every row in the table is to be updated, and the client wants to be able to update the records by year. Below is the procedure that has been developed:
DECLARE
    l_year VARCHAR2 (4) := '2008';

    CURSOR c_1 (l_year1 VARCHAR2)
    IS
        SELECT ROWID l_rowid,
               (SELECT tmp.new_code_x
                FROM new_mapping_code_x tmp
                WHERE tmp.old_code_x = l.code_x) code_x,
               (SELECT tmp.new_code_x
                FROM new_mapping_code_x tmp
                WHERE tmp.old_code_x = l.code_x_ori) code_x_ori
        FROM tableX l
        WHERE TO_CHAR (created_date, 'YYYY') = l_year1;

    TYPE typec1 IS TABLE OF c_1%ROWTYPE
        INDEX BY PLS_INTEGER;

    l_c1 typec1;
BEGIN
    DBMS_OUTPUT.put_line ('Update start - ' || TO_CHAR (SYSDATE, 'DD/MM/YYYY HH24:MI:SS'));
    OPEN c_1 (l_year);
    LOOP
        FETCH c_1 BULK COLLECT INTO l_c1 LIMIT 100000;
        EXIT WHEN l_c1.COUNT = 0;
        FOR indx IN 1 .. l_c1.COUNT
        LOOP
            UPDATE tableX
            SET code_x = NVL (l_c1 (indx).code_x, code_x),
                code_x_ori = NVL (l_c1 (indx).code_x_ori, code_x_ori)
            WHERE ROWID = l_c1 (indx).l_rowid;
        END LOOP;
        COMMIT;
    END LOOP;
    CLOSE c_1;
    DBMS_OUTPUT.put_line ('Update end - ' || TO_CHAR (SYSDATE, 'DD/MM/YYYY HH24:MI:SS'));
END;
We do not want to do a single update by year, as we fear the update might fail with, for example, a rollback segment error.
It seems to me that the above is not the most efficient approach. Any comments on the above, or does anyone have a better solution?
    Thanks

Everything is wrong with the sample code and the approach used. This is not how one uses Oracle. This is not how one designs performant and scalable code.
Transactions must be consistent and logical. A commit in the middle of "doing something" is wrong. Period. (And no, the reasons for committing often and frequently in something like SQL Server do not, and never have, applied to Oracle.)
Also, as I/O is the slowest and most expensive operation that one can perform in a database, it simply makes sense to reduce I/O as far as possible. This means not doing this:
WHERE TO_CHAR (created_date, 'YYYY') = l_year1;
Why? Because an index on created_date is now rendered utterly useless... and in this specific case will result in a full table scan.
It means using the columns in their native data types. If the column is a date then use it as a date! E.g.:
where created_date between :startDate and :endDate
The proper approach to this problem is to determine the most effective logical transaction that can be done, given the available resources (redo/undo/etc.).
    This could very likely be daily - dealing and updating with a single day's data at a time. So then one will write a procedure that updates a single day as a single transaction.
    One can also create a process log table - and have this procedure update this table with the day being updated, the time started, the time completed, and the number of rows updated.
    One now has a discrete business process that can be run. This allows one to run 10 or 30 or more of these processes at the same time using DBMS_JOB - thus doing the updates for a month using parallel processing.
    The process log table can be used to manage the entire update. It will also provide basic execution time details allowing one to estimate the average time for updating a day and the total time it will take for all the data in the large table to be updated.
    This is a structured approach. An approach that ensures the integrity of the data (all rows for a single day is treated as a single transaction). One that also provides management data that gives a clear picture of the state of the data in the large table.
I'm a firm believer that if something is worth doing, it is worth doing well. Using a hacked approach of blindly updating data and committing ad hoc without any management and process controls... that is simply doing something very badly. Why? It may be interesting running into a brick wall the first time around. However, subsequent encounters with the wall should be avoided.
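A minimal sketch of that structure, with invented names for the log table and procedure (the UPDATE reuses the mapping subqueries from the original code):
CREATE TABLE update_run_log (
  day_updated  DATE PRIMARY KEY,
  started_at   DATE,
  completed_at DATE,
  rows_updated NUMBER
);

CREATE OR REPLACE PROCEDURE update_one_day (p_day IN DATE)
IS
  l_rows NUMBER;
BEGIN
  INSERT INTO update_run_log (day_updated, started_at)
  VALUES (TRUNC (p_day), SYSDATE);

  UPDATE tableX l
  SET code_x     = NVL ((SELECT tmp.new_code_x FROM new_mapping_code_x tmp
                         WHERE tmp.old_code_x = l.code_x), code_x),
      code_x_ori = NVL ((SELECT tmp.new_code_x FROM new_mapping_code_x tmp
                         WHERE tmp.old_code_x = l.code_x_ori), code_x_ori)
  WHERE l.created_date >= TRUNC (p_day)
  AND   l.created_date <  TRUNC (p_day) + 1;   -- native DATE range, index-friendly

  l_rows := SQL%ROWCOUNT;

  UPDATE update_run_log
  SET completed_at = SYSDATE, rows_updated = l_rows
  WHERE day_updated = TRUNC (p_day);

  COMMIT;   -- one day = one logical, complete transaction
END update_one_day;
/
Each call is then a discrete business process, so several days can be run side by side with DBMS_JOB, exactly as described above.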

  • Lookup from a large table?

    Hi, All
Very new to APEX. If I have a very large table of securities, and want the user to be able to look up a security a number of different ways (name, CUSIP or ticker), is there a way to do this in APEX? When they are faced with entering a "security id", they need a way to retrieve the correct one.
    Thanks

    Hi
    Yes, this is very simple.
    Essentially, you create a number of items where they can enter the details.
In your case, let's call them P1_SEC_ID, P1_NAME, P1_CUSIP and P1_TICKER.
    Next create a button called GO (or whatever you want) that branches back to the same page.
Now create a report region with a source something like this...
SELECT *
FROM my_big_table
WHERE INSTR(UPPER(sec_id), UPPER(:P1_SEC_ID)) > 0
OR    INSTR(UPPER(name),   UPPER(:P1_NAME))   > 0
OR    INSTR(UPPER(cusip),  UPPER(:P1_CUSIP))  > 0
OR    INSTR(UPPER(ticker), UPPER(:P1_TICKER)) > 0
Next, make the report region conditional on the request value being 'GO' and this should work for you.
    Cheers
    Ben
