Updating a large table

Hello,
We need to update 2 columns on a very large table (20,000,000 records). Every row in the table is to be updated, and the client wants to be able to update the records by year. Below is the procedure that has been developed:
DECLARE
l_year VARCHAR2 (4) := '2008';
CURSOR c_1 (l_year1 VARCHAR2)
IS
SELECT ROWID l_rowid, (SELECT tmp.new_code_x
FROM new_mapping_code_x tmp
WHERE tmp.old_code_x = l.code_x) code_x,
(SELECT tmp.new_code_x
FROM new_mapping_code_x tmp
WHERE tmp.old_code_x = l.code_x_ori) code_x_ori
FROM tableX l
WHERE TO_CHAR (created_date, 'YYYY') = l_year1;
TYPE typec1 IS TABLE OF c_1%ROWTYPE
INDEX BY PLS_INTEGER;
l_c1 typec1;
BEGIN
DBMS_OUTPUT.put_line ( 'Update start - '
|| TO_CHAR (SYSDATE, 'DD/MM/YYYY HH24:MI:SS'));
OPEN c_1 (l_year);
LOOP
FETCH c_1
BULK COLLECT INTO l_c1 LIMIT 100000;
EXIT WHEN l_c1.COUNT = 0;
FOR indx IN 1 .. l_c1.COUNT
LOOP
UPDATE tableX
SET code_x = NVL (l_c1 (indx).code_x, code_x),
code_x_ori =
NVL (l_c1 (indx).code_x_ori, code_x_ori)
WHERE ROWID = l_c1 (indx).l_rowid;
END LOOP;
COMMIT;
END LOOP;
CLOSE c_1;
DBMS_OUTPUT.put_line ( 'Update end - '
|| TO_CHAR (SYSDATE, 'DD/MM/YYYY HH24:MI:SS'));
END;
We do not want to do this as a single UPDATE statement per year, as we fear the update might fail with, for example, a rollback segment error.
It seems to me the approach developed above is not the most efficient one. Any comments on the above, or does anyone have a better solution?
Thanks

Everything is wrong with the sample code and the approach used. This is not how one uses Oracle. This is not how one designs performant and scalable code.
Transactions must be consistent and logical. A commit in the middle of "doing something" is wrong. Period. (And no, the reasons for committing often and frequently in something like SQL Server do not, and never have, applied to Oracle.)
Also, as I/O is the slowest and most expensive operation that one can perform in a database, it simply makes sense to reduce I/O as far as possible. This means not doing this:
WHERE TO_CHAR (created_date, 'YYYY') = l_year1;
Why? Because an index on created_date is now rendered utterly useless... and in this specific case will result in a full table scan.
It means using the columns in their native data types. If the column is a date then use it as a date! E.g.
where created_date between :startDate and :endDate
The proper approach to this problem is to determine what is the most effective logical transaction that can be done, given the available resources (redo/undo/etc).
This could very likely be daily - dealing and updating with a single day's data at a time. So then one will write a procedure that updates a single day as a single transaction.
One can also create a process log table - and have this procedure update this table with the day being updated, the time started, the time completed, and the number of rows updated.
One now has a discrete business process that can be run. This allows one to run 10 or 30 or more of these processes at the same time using DBMS_JOB - thus doing the updates for a month using parallel processing.
The process log table can be used to manage the entire update. It will also provide basic execution time details allowing one to estimate the average time for updating a day and the total time it will take for all the data in the large table to be updated.
This is a structured approach. An approach that ensures the integrity of the data (all rows for a single day are treated as a single transaction). One that also provides management data giving a clear picture of the state of the data in the large table.
I'm a firm believer that if something is worth doing, it is worth doing well. Using a hacked approach of blindly updating data and committing ad hoc, without any management and process controls... that is simply doing something very badly. Why? It may be interesting to run into a brick wall the first time around; subsequent encounters with the wall should be avoided.
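To make the idea concrete, here is a minimal sketch of such a per-day procedure plus a process log table, assuming only the tableX and new_mapping_code_x objects from the original post; the log table, procedure name and columns are invented for illustration, not the poster's actual design.
-- Hypothetical process log: one row per business day to be updated.
CREATE TABLE update_process_log (
  process_day   DATE PRIMARY KEY,
  started_at    DATE,
  completed_at  DATE,
  rows_updated  NUMBER
);

CREATE OR REPLACE PROCEDURE update_one_day (p_day IN DATE) AS
  l_rows PLS_INTEGER;
BEGIN
  UPDATE update_process_log
     SET started_at = SYSDATE
   WHERE process_day = p_day;

  -- One logical transaction: every row created on this day.
  UPDATE tableX t
     SET t.code_x     = NVL ((SELECT m.new_code_x
                                FROM new_mapping_code_x m
                               WHERE m.old_code_x = t.code_x), t.code_x),
         t.code_x_ori = NVL ((SELECT m.new_code_x
                                FROM new_mapping_code_x m
                               WHERE m.old_code_x = t.code_x_ori), t.code_x_ori)
   WHERE t.created_date >= TRUNC (p_day)
     AND t.created_date <  TRUNC (p_day) + 1;

  l_rows := SQL%ROWCOUNT;

  UPDATE update_process_log
     SET completed_at = SYSDATE,
         rows_updated = l_rows
   WHERE process_day = p_day;

  COMMIT;   -- the day is the transaction boundary, not an arbitrary row count
END update_one_day;
/
Each day can then be submitted as its own job (DBMS_JOB or DBMS_SCHEDULER), so ten or thirty days run in parallel while the log table tracks progress and provides the timing data described above.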

Similar Messages

  • How to Query and Update/Insert Large Tables ...

    I have the following 2 tables:
    Table 1: Pricing
    This table holds pricing details of all Items (roughly 150,000 items). One Item has three types of prices Standard, Promotion, and Discounted. Therefore the table contains roughly 150,000 * 3 records. Also the prices may get updated frequently every day.
    Item#---Type------------FromDate---ToDate-----Price
    ===================================
    AAA-----Standard------01/01/05---31/12/05---1.50
    AAA-----Promotion----16/12/05---20/12/05---1.40     
    AAA-----Discounted---10/12/05---15/12/05---1.20
    BBB-----Standard------01/01/05---31/12/05---1.60
    BBB-----Promotion----17/12/05---18/12/05---1.30
    BBB-----Discounted---10/12/05---15/12/05---1.50
    Table 2: BestPrice
    At a given date, this table contains the best price (lowest price) of each item for the following 21 days (including the current date). The POS system is accessing this table to get the daily best price for billing customers.
    Item#----Date--------BestPrice
    =====================
    AAA------17/12/05----1.40
    AAA------18/12/05----1.40     
    AAA------19/12/05----1.40
    AAA------20/12/05----1.40
    AAA------21/12/05----1.50
    BBB------17/12/05----1.30
    BBB------18/12/05----1.30
    BBB------19/12/05----1.60
    BBB------20/12/05----1.60
    BBB------21/12/05----1.60
    Problem Statement:
    Table 2 (BestPrice) needs to get updated from Table 1 (Pricing) at least once every day with the best price for each item for the next 21 days (including the current day). What’s the most efficient method to perform this job?

    I don't really know why your application needs to use the BestPrice table!
    It is not clear what it does for you, because querying it will not be better than querying the Pricing table. That will be very fast with an index on item#.
    On the other hand, why do you use three rows per item in the Pricing table? It could be one row per item.
    Anyway, to populate the table for the first time:
    insert into bestprice
    select item#
         , dte
         , min(price) bestprice
    from pricing, (select trunc(sysdate) + rownum - 1 dte from dual connect by rownum <= 21) t
    where  dte  between fromdate and todate
    group by dte, item#
    Now, you can have a row trigger (update) on the Pricing table which can change the BestPrice table according to a change in the Pricing table (a sketch of such a trigger appears at the end of this reply).
    Then you have to use a daily job that can be based on the following statement:
    update BestPrice B
    set dte = trunc(sysdate) + 20
      , BestPrice =
             (select min(price)
              from Pricing P
              where P.item# = B.item#
                and B.dte between P.FromDate and P.ToDate)
    where dte = trunc(sysdate) - 1
    Message was edited by:
    Michel SALAIS
    I forgot to say that if the periods in your Pricing table don't cover a desired date in the BestPrice table, then my insert statement will not treat it and my update will set it to NULL.
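    If the trigger route mentioned above is taken, a minimal statement-level sketch (assuming the table and column names used in this thread) might look like the one below. A statement-level trigger is used because a row-level trigger that queries PRICING from PRICING's own trigger would raise the mutating-table error ORA-04091; the trade-off is that every BestPrice row is recomputed on each change.
    CREATE OR REPLACE TRIGGER trg_pricing_sync_bestprice
    AFTER INSERT OR UPDATE OR DELETE ON pricing
    BEGIN
      -- Recompute the best (lowest) applicable price for every day in BestPrice.
      UPDATE bestprice b
         SET b.bestprice = (SELECT MIN (p.price)
                              FROM pricing p
                             WHERE p.item# = b.item#
                               AND b.dte BETWEEN p.fromdate AND p.todate);
    END;
    /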

  • Updating Large Tables

    Hi,
    I was asked the following during an interview
    You have a large table with millions of rows and want to add a column. What is the best way to do it without affecting the performance of the DB?
    Also, you have a large table with millions of rows; how do you organise the indexes?
    My answer was to coalesce the indexes.
    I was wondering what the best answers to these questions are.
    Thanks

    Adding a column to a table, even a really big one, is trivial and will have no impact on the performance of the database. Just do:
    ALTER TABLE t ADD (new_column DATATYPE);
    This is simply an update to the data dictionary. Aside from the few milliseconds during which Oracle will lock some dictionary tables (no different from the locks held if you update a column in a user table), there will be no impact on the performance of the database. Now, populating that column would be a different kettle of fish, and would depend on how the column needs to be populated (i.e. a single value for all rows, or calculated based on other columns).
    I would have asked for clarification on what they meant by "organise the indexes". If they meant what tablespaces the indexes should go in, I would say in the same tablespace as other objects of similar size (you are using locally managed tablespaces, aren't you?). If they meant what indexes I would create, I would say that I would create the indexes necessary to answer the queries that you run.
    HTH
    John
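    For illustration only (the table and column names below are invented), the two populating scenarios John mentions might look like this; note that from 11g onwards, adding a column with both a DEFAULT and NOT NULL is itself a metadata-only operation:
    -- Single value for all rows (11g+: metadata-only, no row is physically touched):
    ALTER TABLE big_table ADD (status_flag VARCHAR2 (1) DEFAULT 'N' NOT NULL);
    -- Value calculated from other columns: a real update of every row, so undo,
    -- redo and run time all scale with the size of the table.
    UPDATE big_table
       SET status_flag = CASE WHEN amount > 0 THEN 'Y' ELSE 'N' END;
    COMMIT;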

  • Large Table Data Updation

    Hi,
    I have a large table, about 8 GB. I need to add a new column to this table and update the new column's value for all the existing records.
    Please let me know any good methods for this. While doing so I also intend to partition the table.
    Thanks in advance.

    Check this
    SQL> CREATE TABLE tb1(c1 NUMBER);
    Table created.
    SQL> ALTER TABLE tb1 ADD CONSTRAINT tb1_pk PRIMARY KEY(c1);
    Table altered.
    SQL> INSERT INTO tb1 VALUES(1);
    1 row created.
    SQL> CREATE TABLE tb2(c1 NUMBER, c2 NUMBER);
    Table created.
    SQL> exec dbms_redefinition.can_redef_table(USER, 'TB1');
    PL/SQL procedure successfully completed.
    SQL>  exec dbms_redefinition.start_redef_table(USER, 'TB1', 'TB2', 'C1 C1, 34 C2');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_redefinition.finish_redef_table(USER, 'TB1', 'TB2')
    PL/SQL procedure successfully completed.
    SQL> SELECT * FROM tb1;
         C1        C2
          1        34
    SQL>
    Lukasz

  • Updating large table using the WITH CLAUSE or PLSQL

    I tried to perform an update on a table with over 15 million records using the MERGE statement below, but it's very slow.
    Can someone help me re-write this statement using the WITH clause or a PL/SQL statement that will make it run faster?
    My merge statement:
    MERGE INTO voter dst
    USING (
    SELECT voterid,
    pollingstation || CASE
    WHEN ROW_NUMBER () OVER ( PARTITION BY pollingstation
    ORDER BY surname, firstnames
    ) <= 1000
    THEN 'A'
    WHEN ROW_NUMBER () OVER ( PARTITION BY pollingstation
    ORDER BY surname, firstnames
    ) BETWEEN 1000 AND 2000
    THEN 'B'
    ELSE 'C'
    END AS new_pollingstation
    FROM voter
    ) src
    ON (src.voterid = dst.voterid)
    WHEN MATCHED THEN UPDATE
    SET dst.new_pollingstation = src.new_pollingstation
    The WITH clause approach: http://www.dba-oracle.com/t_with_clause.htm
    thanks.

    Well, here's your query formatted for people to read...
    MERGE INTO voter dst
    USING (SELECT voterid,
                  pollingstation || CASE WHEN ROW_NUMBER () OVER ( PARTITION BY pollingstation ORDER BY surname, firstnames) <= 1000
                                           THEN 'A'
                                         WHEN ROW_NUMBER () OVER ( PARTITION BY pollingstation ORDER BY surname, firstnames) BETWEEN 1000 AND 2000
                                           THEN 'B'
                                         ELSE 'C'
                                    END AS new_pollingstation
           FROM voter) src
    ON (src.voterid = dst.voterid)
    WHEN MATCHED THEN
    UPDATE SET dst.new_pollingstation = src.new_pollingstation
    ;
    In future, please read {message:id=9360002} and post relevant details.
    What do you mean when you say it's "slow"? How have you measured this? Have you examined the explain plan?
    Take a read of the threads linked to by the FAQ post: {message:id=9360003} for details of what you need to provide to get help with performance issues.

  • How to rotate a large table

    I'm building a long technical report with Pages'08.
    There are many tables in this report.
    This document is in portrait format.
    In the middle of this document I have a particularly large
    table which can't be read if I try to stay on a portrait
    presentation of this table: too many columns (15).
    Hence I'd like to find an easy way to either rotate
    the page or the table so as to be able to use larger
    columns.
    I discovered that Pages'08 doesn't permit putting a single
    page in landscape format. I also abandoned the idea of
    using 3 different documents (part 1 in portrait,
    part 2 in landscape, part 3 in portrait again).
    I have to chain the paragraph numbers.
    I have to make a table of contents at the end of
    this technical report.
    What is the most efficient way to manage to fill this large
    table?
    Word lets me do this, but unfortunately it also
    makes me spend too much time on other simple and basic functions.
    Is Pages'09 better on this basic and frequent need (at least for my job)?

    Peggy wrote:
    You can rotate a floating table, but it can be a problem if you need to edit the table. It will auto-rotate to portrait to edit it, but it can be difficult to see or get to the outside edges. I find it easiest to copy & paste the table into a landscape document, then copy it back after editing.
    Thank you for the nice hint.
    I finally chose to work on a temporary document in A3 format,
    and keep it open so as to be able to quickly copy my table into the
    main document every time I update it.
    During this copy operation I noticed an annoying problem:
    as the text column in my main document is slightly narrower than my table,
    Pages decides to shrink it every time, and I can't recover its
    original size (which I laboriously tuned in my A3 document).
    Hence all the cell contents are partially hidden.
    The button:
    Inspector > Metrics > Original Size
    is off.
    Do you know how to circumvent this bad habit Pages has of resizing my
    imported table?

  • How to instantiate multiple large tables while DB is online

    Hello Experts,
    I've set up GoldenGate on an 11.2.0.3 RHEL cluster, replicating to an HP-UX Itanium 10.2.0.4 database.
    I've got a sample table replicated successfully, and changes are moving over to the target system.
    However, now comes the real thing: I have about 100+ tables of varying sizes, one of 1 TB, then a few 30 GB tables, and finally a bunch of smaller tables.
    I've not been able to find a clean way to do this while the db is online.
    The afterscn option is good, but it works for the replicat, not at the table level...
    Any help is appreciated.
    Best Regards.

    Since I received no updates here, I put in an SR with Oracle Support.
    The suggestion was to add the additional large tables online using another, new replicat with the afterscn option.
    Once the changes are all synced up and the replication has been running for a while, stop the pump on the source, stop the new replicat, delete it, move the table configuration to an existing replicat, and start up the stopped pump; the online re-instantiation is then done.
    I am putting it to the test, but logically it sounds feasible.

  • Using workspaces with large tables

    Hello
    I've got a few large tables (6-10 GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch into a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what was the content of the tables at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would do with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    I've just spotted the workspace manager forum, I'll post there. :-)

  • Join on Large Table

    I have some queries that use an inner join between a table with a few hundred rows and a table that will eventually have many millions of rows. The join is on an integer value that is part of the primary key on the larger table. The primary key
    on that table consists of the integer and another field, a BigInt (representing date/time to the millisecond). The query also has a predicate (WHERE clause) with an exact match on the BigInt.
    The query takes about a second to execute at the moment, but I was wondering whether I should expect a large increase in execution time as the years go by.
    Is an inner join on the large table advisable?
    By the way, the first field in the primary key is the integer, followed by the BigInt, so any thought of selecting on the BigInt into a temp table before attempting the join probably won't help.
    R Campbell

    Just in case anyone wants to see the full picture (which I am not actually expecting) this is a script for all SQL objects involved.
    The numbers of rows in the tables are:
    Tags 5,000
    NumericSamples millions (over time)
    TagGroups 50
    GroupTags 500
    CREATE TABLE [dbo].[Tags](
    [ID] [int] NOT NULL,
    [TagName] [nvarchar](110) NOT NULL,
    [Address] [nvarchar](80) NULL,
    [DataTypeID] [smallint] NOT NULL,
    [DatasourceID] [smallint] NOT NULL,
    [Location] [nvarchar](4000) NULL,
    [Properties] [nvarchar](4000) NULL,
    [LastReadSampleTime] [bigint] NOT NULL,
    [Archived] [bit] NOT NULL,
    CONSTRAINT [Tags_ID_PK] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[Tags] WITH NOCHECK ADD CONSTRAINT [Tags_DatasourceID_Datasources_ID_FK] FOREIGN KEY([DatasourceID])
    REFERENCES [dbo].[Datasources] ([ID])
    GO
    ALTER TABLE [dbo].[Tags] CHECK CONSTRAINT [Tags_DatasourceID_Datasources_ID_FK]
    GO
    ALTER TABLE [dbo].[Tags] WITH NOCHECK ADD CONSTRAINT [Tags_DataTypeID_DataTypes_ID_FK] FOREIGN KEY([DataTypeID])
    REFERENCES [dbo].[DataTypes] ([ID])
    GO
    ALTER TABLE [dbo].[Tags] CHECK CONSTRAINT [Tags_DataTypeID_DataTypes_ID_FK]
    GO
    ALTER TABLE [dbo].[Tags] ADD CONSTRAINT [DF_Tags_LastReadSampleTime] DEFAULT ((552877956000000000.)) FOR [LastReadSampleTime]
    GO
    ALTER TABLE [dbo].[Tags] ADD DEFAULT ((0)) FOR [Archived]
    GO
    CREATE TABLE [dbo].[NumericSamples](
    [TagID] [int] NOT NULL,
    [SampleDateTime] [bigint] NOT NULL,
    [SampleValue] [float] NULL,
    [QualityID] [smallint] NOT NULL,
    CONSTRAINT [NumericSamples_TagIDSampleDateTime_PK] PRIMARY KEY CLUSTERED
    (
    [TagID] ASC,
    [SampleDateTime] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[NumericSamples] WITH NOCHECK ADD CONSTRAINT [NumericSamples_QualityID_Qualities_ID_FK] FOREIGN KEY([QualityID])
    REFERENCES [dbo].[Qualities] ([ID])
    GO
    ALTER TABLE [dbo].[NumericSamples] CHECK CONSTRAINT [NumericSamples_QualityID_Qualities_ID_FK]
    GO
    ALTER TABLE [dbo].[NumericSamples] WITH NOCHECK ADD CONSTRAINT [NumericSamples_TagID_Tags_ID_FK] FOREIGN KEY([TagID])
    REFERENCES [dbo].[Tags] ([ID])
    GO
    ALTER TABLE [dbo].[NumericSamples] CHECK CONSTRAINT [NumericSamples_TagID_Tags_ID_FK]
    GO
    CREATE TABLE [dbo].[TagGroups](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [TagGroup] [varchar](50) NULL,
    [Aggregates] [varchar](250) NULL,
    [NumericData] [bit] NULL,
    CONSTRAINT [PK_TagGroups] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[TagGroups] ADD CONSTRAINT [DF_Tag_Groups_Aggregates] DEFAULT ('First') FOR [Aggregates]
    GO
    ALTER TABLE [dbo].[TagGroups] ADD CONSTRAINT [DF_TagGroups_NumericData] DEFAULT ((1)) FOR [NumericData]
    GO
    CREATE TABLE [dbo].[GroupTags](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [TagGroupID] [int] NULL,
    [TagName] [varchar](150) NULL,
    [ColumnName] [varchar](50) NULL,
    [SortOrder] [int] NULL,
    [TotalFactor] [float] NULL,
    CONSTRAINT [PK_GroupTags] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[GroupTags] WITH CHECK ADD CONSTRAINT [FK_GroupTags_TagGroups] FOREIGN KEY([TagGroupID])
    REFERENCES [dbo].[TagGroups] ([ID])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO
    ALTER TABLE [dbo].[GroupTags] CHECK CONSTRAINT [FK_GroupTags_TagGroups]
    GO
    ALTER TABLE [dbo].[GroupTags] ADD CONSTRAINT [DF_GroupTags_TotalFactor] DEFAULT ((1)) FOR [TotalFactor]
    GO
    CREATE VIEW [dbo].[vw_GroupTags]
    AS
    SELECT TOP (10000) dbo.TagGroups.TagGroup AS TableName, dbo.TagGroups.Aggregates AS SortOrder, dbo.GroupTags.SortOrder AS TagIndex, dbo.GroupTags.TagName,
    dbo.Tags.ID AS TagId, dbo.TagGroups.NumericData, dbo.GroupTags.TotalFactor, dbo.GroupTags.ColumnName
    FROM dbo.TagGroups INNER JOIN
    dbo.GroupTags ON dbo.TagGroups.ID = dbo.GroupTags.TagGroupID INNER JOIN
    dbo.Tags ON dbo.GroupTags.TagName = dbo.Tags.TagName
    ORDER BY SortOrder, TagIndex
    CREATE procedure [dbo].[GetTagTableValues]
    @SampleDateTime bigint,
    @TableName varchar(50),
    @PadRows int = 0
    as
    BEGIN
    DECLARE @i int
    DECLARE @ResultSet table(TagName varchar(150), SampleValue float, ColumnName varchar(50), SortOrder int, TagIndex int)
    set @i = 0
    INSERT INTO @ResultSet
    SELECT vw_GroupTags.TagName, NumericSamples.SampleValue, vw_GroupTags.ColumnName, vw_GroupTags.SortOrder, vw_GroupTags.TagIndex
    FROM vw_GroupTags INNER JOIN NumericSamples ON vw_GroupTags.TagId = NumericSamples.TagID
    WHERE (vw_GroupTags.TableName = @TableName) AND (NumericSamples.SampleDateTime = @SampleDateTime)
    set @i = @@ROWCOUNT
    if @i < @PadRows
    BEGIN
    WHILE @i < @PadRows
    BEGIN
    INSERT @ResultSet (TagName, SampleValue, ColumnName, SortOrder, TagIndex) VALUES ('', NULL, '', 0, 0)
    set @i = @i + 1
    END
    END
    select TagName, SampleValue, ColumnName, SortOrder, TagIndex
    from @ResultSet
    END
    R Campbell

  • SELECTing from a large table vs small table

    I posted a question a few months back about the comparison between INSERTing into a large table vs a small table (with fewer rows), in terms of time taken.
    The general consensus seemed to be that it would be the same, except for the time taken to update the index (which will be negligible).
    1. But now, following the same logic, I'm confused about why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table.
    (SELECTing using an index)
    My understanding of how Oracle works internally is this:
    It will first locate the ROWID from the B-Tree that stores the index.
    (This operation is O(log N), based on the B-Tree.)
    The ROWID essentially contains the file pointer offset of the location of the data on disk.
    And Oracle simply reads the data from the location it deduced from the ROWID.
    But then the only variable I see is searching the B-Tree, which should take O(log N) time for comparison (N = number of rows).
    Am I correct above?
    2. Also, I read that tables are partitioned for performance reasons. I read about various partition mechanisms, but cannot figure out how they can result in a performance improvement.
    Can somebody please help?

    It's not going to be that simple. Before your first step (locating the ROWID from the index), it will first evaluate various access plans - potentially thousands of them - and choose the one that it thinks will be best. This evaluation will be based on the number of rows it anticipates having to retrieve, whether or not all of the requested data can be retrieved from the index alone (without even going to the data segment), etc. etc. etc. For each consideration it makes, you start with "all else being equal". Then figure there will be dozens, if not hundreds or thousands, of these "all else being equal". Then, once the plan is selected and the rubber meets the road, we have to contend with the fact that "all else is hardly ever equal".

  • How to do a mass UPDATE on a table that must be kept "online"

    Hello,
    I am using Oracle 10g and would like to know how best to update a very large table (20 million rows) globally (one column in all rows) in such a way as to make the UPDATE as fast as possible while avoiding contention on indices, row locks etc.
    The table has no foreign keys and no triggers. I have read about creating temporary tables, filling them, dropping the original and then renaming, but this does not seem reasonable considering that the table must be online at all times.
    I have also tried it with a parallel hint, but it was slow.
    Any insights greatly appreciated.

    Is this identical to the question you asked in "Strategy for a fast global UPDATE on a large online table?", or is there some difference between the two that I'm missing?
    Justin

  • Large tables without clustered indexes -- a bad idea, or ok?

    We have several large tables that don't have clustered indexes, but do have primary keys and assorted other indexes. We're seeing issues that look like they are related to statistics even though there is a nightly update statistics job.
    So my question is, does the existence of a clustered index impact other indexes or the update statistics process? Are our tables ok the way they are?
    Adding clustered indexes would be a big project for non-development-related reasons so I would need to have good reasons to advocate for this.

    RSingh,
    "By default" is a confusing word here. If you do it with WIZARD, I agree. But with T-SQL,there is no space for by default, it can be anything.
    I agree that I assumed here as I wont do much things with wizard. :)
    Try the below:
    --Here, by default, it's non-clustered
    create table Test_table(Col1 int , Col2 int not null )
    Create clustered index IX__Test_Table on Test_Table (Col1)
    Alter table test_Table
    Add Constraint PK_testTable
    Primary Key (Col2);
    Drop table test_table
    --Here, by default, it's clustered
    create table Test_table(Col1 int , Col2 int not null Primary key)
    Create index IX__Test_Table on Test_Table (Col1)
    Drop table test_table
    Neither is "by default" a confusing term, nor does this happen only in the WIZARD. Let me clarify below. I said "When a primary key constraint is created it creates a unique clustered index by default". Run the DDL below and check whether
    a clustered index is created or not, i.e.
    CREATE TABLE Persons
    (
    P_Id int NOT NULL,
    LastName varchar(255) NOT NULL,
    FirstName varchar(255),
    Address varchar(255),
    City varchar(255),
    CONSTRAINT pk_PersonID PRIMARY KEY (P_Id,LastName)
    )
    Regards, RSingh
    That did not really make sense in my context, as I am NOT disputing that it will create a clustered index; in fact, my code says the same. My point is only that "by default" is confusing.
    Let me explain what I mean by "by default"....
    Create index IX_Indexname on Tablename(Colname)
    The above will create a "non-clustered index" BY DEFAULT. In any situation, the above will create only a non-clustered index, irrespective of other indexes present in the table.
    Hope the above is clear. Maybe it's my way of looking at "BY DEFAULT".

  • XSU: Dealing with large tables / large XML files

    Hi,
    I'm trying to generate an XML file from a "large" table (about 7 million rows, 512 MB of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size to 1 GB (option -Xmx1024m on the java command line).
    For the moment, I'm involved in an evaluation process. But in the near future, our applications are likely to deal with large amounts of XML data (typically hundreds of MB of storage, which means possibly GB of XML code), both in updating/inserting data and in producing XML streams from existing data in the relational DB.
    Any ideas about memory issues regarding XSU? Should we consider using XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. our environment is Linux red hat 7.3 and Oracle 9.2.0.1 server

    Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in the book Building Oracle XML Applications by Steve Muench.
    The other alternative is to write your own SAX handler and send the chunks of XML for insert.

  • Jython error while updating a oracle table based on file count

    Hi,
    I have a Jython procedure for counting the records in a flat file.
    Here is the code (taken from odiexperts and modified); I am getting errors, so could somebody take a look and let me know what the SQL exception in this code is?
    COMMAND on target: Jython
    Command on source: Oracle -- and specified the logical schema
    Without connecting to the database using the JDBC connection I can see the output successfully, but I want to update the Oracle table with the count. Any help is greatly appreciated.
    ---------------------------------Error-----------------------------
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 45, in ?
    java.sql.SQLException: ORA-00936: missing expression
    ---------------------------------------Code--------------------------------------------------
    import java.sql.Connection
    import java.sql.Statement
    import java.sql.DriverManager
    import java.sql.ResultSet
    import java.sql.ResultSetMetaData
    import os
    import string
    import java.sql as sql
    import java.lang as lang
    import re
    filesrc = open('c:\mm\xyz.csv','r')
    first = filesrc.readline()
    lines = 0
    while first:
        # get the no of lines in the file
        lines += 1
        first = filesrc.readline()
    # print lines
    ## THE ABOVE PART OF THE PROGRAM IS TO COUNT THE NUMBER OF LINES
    ## AND STORE IT INTO THE VARIABLE `LINES`
    def intWithCommas(x):
        if type(x) not in [type(0), type(0L)]:
            raise TypeError("Parameter must be an integer.")
        if x < 0:
            return '-' + intWithCommas(-x)
        result = ''
        while x >= 1000:
            x, r = divmod(x, 1000)
            result = ",%03d%s" % (r, result)
        return "%d%s" % (x, result)
    ## THE ABOVE PROGRAM IS TO DISPLAY THE NUMBERS
    sourceConnection = odiRef.getJDBCConnection("SRC")
    sqlstring = sourceConnection.createStatement()
    sqlstmt="update tab1 set tot_coll_amt = to_number( "#lines ") where load_audit_key=418507"
    sqlstring.executeQuery(sqlstmt)
    sourceConnection.close()
    s0=' \n\nThe Number of Lines in the File are ->> '
    s1=str(intWithCommas(lines))
    s2=' \n\nand the First Line of the File is ->> '
    filesrc.seek(0)
    s3=str(filesrc.readline())
    final=s0 + s1 + s2 + s3
    filesrc.close()
    raise final

    I changed it as you advised, Ankit.
    I am now getting the following error:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 37, in ?
    java.sql.SQLException: ORA-00911: invalid character
    here is the modified code
    sourceConnection = odiRef.getJDBCConnection("SRC")
    sqlstring = sourceConnection.createStatement()
    sqlstmt="update tab1 set tot_coll_amt = to_number('#lines') where load_audit_key=418507;"
    result=sqlstring.executeUpdate(sqlstmt)
    sourceConnection.close()
    Any ideas?
    Edited by: Sunny on Dec 3, 2010 1:04 PM

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        (
         SELECT RID, rownum rnum
         FROM
            (
             SELECT rowid as RID
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate
            )
         WHERE rownum <= 100
        )
    WHERE rnum >= 1
      AND RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*        -- Select all data from members table
    FROM members,           -- members table added to FROM clause
        (
         SELECT RID, rownum rnum
         FROM
            (
             SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate
            )
         WHERE rownum <= 100
        )
    WHERE rnum >= 1
      AND RID = members.rowid         -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table; there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using join but does not affect the other query.
