Need suggestion on adding an index on a table
Hi,
There is a table called customer_locations in my database with more than 5,000 rows. When we run a select query against this table, it takes too long to return the data.
I need a suggestion on how to add an index to improve performance on this table, and also which type of index should be added.
The table's SQL script is below:
CREATE TABLE "CUSTOMER_LOCATIONS"
( "LOCATION_ID" NUMBER NOT NULL ENABLE,
"COMPANY_NAME" VARCHAR2(512),
"ADDRESS_LINE_1" VARCHAR2(512),
"ADDRESS_LINE_2" VARCHAR2(512),
"PHONE_NUMBER" VARCHAR2(255),
"FAX_NUMBER" VARCHAR2(255),
"CITY" VARCHAR2(512),
"STATE" VARCHAR2(512),
"ZIP" VARCHAR2(100),
"COUNTRY" VARCHAR2(255),
"CREATED_BY" VARCHAR2(512),
"CREATED_DATE" TIMESTAMP (6),
"MODIFIED_BY" VARCHAR2(512),
"MODIFIED_DATE" TIMESTAMP (6),
"DOMAIN_ID" NUMBER,
"LOCATION_TYPE" VARCHAR2(255),
"STATUS" VARCHAR2(50),
"IB_STATUS" VARCHAR2(100),
"OLD_LOCATION_ID" VARCHAR2(50),
CONSTRAINT "SS_CUSTOMER_LOCATIONS_PK" PRIMARY KEY ("LOCATION_ID") ENABLE
);
Please suggest
Thanks
Sudhir
Hi Sudhir,
Since your query has no predicates, a full table scan is unavoidable. But let me tell you that full table scans are not necessarily bad.
Just to help you, the code below illustrates the use of indexes:
drop table test_table;
create table test_Table as select * from all_objects where rownum < 5001;
select * from user_ind_columns where table_name = 'TEST_TABLE';
--No rows fetched.
explain plan for
select object_name || ' is a ' || object_type as OBJ_DESC, object_id
from test_table;
--5000 Rows fetched
select operation, options, object_name, object_alias, object_instance, object_type, optimizer, depth, position, cost, cardinality, cpu_cost, io_cost
from plan_table;
OPERATION OPTIONS OBJECT_NAME OBJECT_ALIAS OBJECT_INSTANCE OBJECT_TYPE OPTIMIZER DEPTH POSITION COST CARDINALITY CPU_COST IO_COST
SELECT STATEMENT (NULL) (NULL) (NULL) (NULL) (NULL) ALL_ROWS 0 19 19 5000 1698651 19
TABLE ACCESS FULL TEST_TABLE TEST_TABLE@SEL$1 1 TABLE (NULL) 1 1 19 5000 1698651 19
alter table test_Table add constraint pk_object_id PRIMARY KEY (object_id);
explain plan set statement_id = 'WITH_PK' for
select object_name || ' is a ' || object_type as OBJ_DESC, object_id
from test_table
where object_id = 26;
select operation, options, object_name, object_alias, object_instance, object_type, optimizer, depth, position, cost, cardinality, cpu_cost, io_cost
from plan_table
where statement_id = 'WITH_PK';
OPERATION OPTIONS OBJECT_NAME OBJECT_ALIAS OBJECT_INSTANCE OBJECT_TYPE OPTIMIZER DEPTH POSITION COST CARDINALITY CPU_COST IO_COST
SELECT STATEMENT (NULL) (NULL) (NULL) (NULL) (NULL) ALL_ROWS 0 2 2 1 15543 2
TABLE ACCESS BY INDEX ROWID TEST_TABLE TEST_TABLE@SEL$1 1 TABLE 1 1 2 1 15543 2
INDEX UNIQUE SCAN PK_OBJECT_ID TEST_TABLE@SEL$1 (NULL) INDEX (UNIQUE) ANALYZED 2 1 1 1 8171 1

Let me know if it helps or if you still have any concerns.
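Applied back to the original question: if (and this is only a guess, since the slow queries were not posted) the selects filter on columns such as DOMAIN_ID and STATUS, a plain B-tree index on those columns would be the usual starting point, followed by fresh statistics and a new explain plan to confirm it is used:

```sql
-- Hypothetical: adjust the column list to match the actual WHERE clauses.
create index customer_locations_ix1
  on customer_locations (domain_id, status);

-- Keep optimizer statistics current so the new index is considered.
begin
  dbms_stats.gather_table_stats(user, 'CUSTOMER_LOCATIONS', cascade => true);
end;
/
```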
Regards,
P
Edited by: PurveshK on May 29, 2012 12:40 PM
Similar Messages
-
Need Suggestion for Archival of a Table Data
Hi guys,
I want to archive one of my large tables; its structure is below.
Around 40,000 rows are inserted into the table daily.
Need a suggestion for the same. Will partitioning help, and on what basis?
CREATE TABLE IM_JMS_MESSAGES_CLOB_IN
( LOAN_NUMBER VARCHAR2(10 BYTE),
LOAN_XML CLOB,
LOAN_UPDATE_DT TIMESTAMP(6),
JMS_TIMESTAMP TIMESTAMP(6),
INSERT_DT TIMESTAMP(6)
)
TABLESPACE DATA
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
LOB (LOAN_XML) STORE AS
( TABLESPACE DATA
ENABLE STORAGE IN ROW
CHUNK 8192
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
)
NOCACHE
NOPARALLEL;
Please do the needful.
regards,
Sandeep

There will not be any updates/deletes on the table.
I have created a partitioned table with the same structure, and I am inserting the records from my original table into this partitioned table, where I will maintain data for 6 months.
After loading the data from the original table into the archive table, I will truncate the original table.
If my original table is partitioned, what about restoring the data? How will I restore last month's data? -
Need suggestion on deleting from five tables at a time
Hi,
I need a suggestion regarding a deletion; I have the following scenario:
tab1 contains 100 items.
For one item, tables tab2..tab6 contain 4,000 rows each. So the loop runs for each item, deletes 20,000 rows, and commits.
Currently, deleting 500,000 rows takes 1 hour. All the tables and indexes are analyzed.
DECLARE
CURSOR C_CHECK_DELETE_IND
IS
SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y';
TYPE p_item IS TABLE OF tab1.item%TYPE;
act_p_item p_item;
BEGIN
OPEN C_CHECK_DELETE_IND;
LOOP
FETCH C_CHECK_DELETE_IND BULK COLLECT INTO act_p_item LIMIT 5000;
FOR i IN 1..act_p_item.COUNT
LOOP
DELETE FROM tab2 WHERE item = act_p_item(i);
DELETE FROM tab3 WHERE item = act_p_item(i);
DELETE FROM tab4 WHERE item = act_p_item(i);
DELETE FROM tab5 WHERE item = act_p_item(i);
DELETE FROM tab6 WHERE item = act_p_item(i);
COMMIT;
END LOOP;
EXIT WHEN C_CHECK_DELETE_IND%NOTFOUND;
END LOOP;
CLOSE C_CHECK_DELETE_IND;
END;
Hope I have explained the scenario. Can you please suggest the right approach?
Thanks in advance.

Drop the loop:
DELETE FROM tab2 WHERE item in (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' );
DELETE FROM tab3 WHERE item in (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' );
DELETE FROM tab4 WHERE item in (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' );
DELETE FROM tab5 WHERE item in (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' );
DELETE FROM tab6 WHERE item in (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' );

If you can accomplish the task without looping, by all means do so!
You could also do a bulk delete:
FORALL j IN INDICES OF act_p_item SAVE EXCEPTIONS
DELETE FROM tab2
WHERE item = act_p_item(j);
-- etc. for tab3 to tab6
But unless the query SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' is horrible, I'd stick to the
DELETE FROM tab2 WHERE item in (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' ); path.
Edited by: tanging on Dec 21, 2009 10:19 AM -
Need documentation for standard, index, and hashed tables
Can anyone send me material on standard tables?
hi raja,
The following are the table types used in SAP :
I. Transparent tables (BKPF, VBAK, VBAP, KNA1, COEP)
Allows secondary indexes (SE11->Display Table->Indexes)
Can be buffered (SE11->Display Table->technical settings) Heavily updated tables should not be buffered.
II. Pool Tables (match codes, look up tables)
Should be accessed via primary key or
Should be buffered (SE11->Display Table->technical settings)
No secondary indexes
Select * is Ok because all columns retrieved anyway
III. Cluster Tables (BSEG,BSEC)
Should be accessed via primary key - very fast retrieval otherwise very slow
No secondary indexes
Select * is Ok because all columns retrieved anyway. Performing an operation on multiple rows is more efficient than single row operations. Therefore you still want to select into an internal table. If many rows are being selected into the internal table, you might still like to retrieve specific columns to cut down on the memory required.
Statistical SQL functions (SUM, AVG, MIN, MAX, etc) not supported
Can not be buffered
IV. Buffered Tables (includes both Transparent & Pool Tables)
While buffering database tables in program memory (SELECT into internal table) is generally a good idea for performance, it is not always necessary. Some tables are already buffered in memory. These are mostly configuration tables. If a table is already buffered, then a select statement against it is very fast. To determine if a table is buffered, choose the 'technical settings' soft button from the data dictionary display of a table (SE12). Pool tables should all be buffered.
More at this link.
http://help.sap.com/saphelp_erp2004/helpdata/en/81/415d363640933fe10000009b38f839/frameset.htm
regrds,
anver
If this helped, please mark points. -
I want to create some indexes on the following table in an Oracle 10g database.
Table --> RegionalOrders
Columns -->
TenorderID -- Primary Key
custid number(6) -- Foreign Key
Empid number(6) -- Foreign Key
Region char(4) -- Foreign Key
Tencon varchar(20),
Tensell varchar(20),
Tendate date
The following are some of the heavy activities on this table:
1) Often joined
2) Joins mostly based on the primary key and foreign keys
I have the following questions regarding indexes on this table:
a) Can you please suggest which columns I can index?
b) There is no need to create an index on TenorderID (as it is the primary key).
c) Can we create any indexes for table joins?

I guess creating indexes like that would not be a very good approach. If you have the queries that you are going to run against this table and other table(s), looking at their explain plans will tell more accurately which column(s) should be indexed. Remember, indexes based on a single query's performance are not of much use and will lead to more issues than benefits.
You are correct in saying that the PK column does not require an additional index. There is no such thing as a 'table join index'; you can create indexes on the underlying columns.
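Since the joins are on the foreign key columns, indexing those columns is the usual first step. A sketch, assuming the table and column names from the post:

```sql
-- One B-tree index per foreign key column used in joins.
-- Indexing FK columns also avoids full-table lock contention
-- when rows are deleted from the parent tables.
create index regionalorders_cust_ix on RegionalOrders (custid);
create index regionalorders_emp_ix  on RegionalOrders (Empid);
create index regionalorders_reg_ix  on RegionalOrders (Region);
```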
HTH
Aman.... -
Need suggestion on how to extract data from a table
Many years ago, I wrote many Perl scripts to increase my work productivity.
Now I am starting to learn Java, JavaScript, and PHP for some purposes. I just wrote a simple html2txt Java program.
Now I need experts' suggestions for one specific purpose: extracting data from an HTML table. One example is here: http://mops.tse.com.tw/nas/t06sa18/200902/A02_2454_200902.htm
I need to extract data from the table and do some analysis. (The text is in Chinese, sorry.)
What is the best way to code it? Java, PHP, or something else? If Java, any suggestion on which classes/modules and approach to use?
Thanks very much.

xcomme wrote:
> Many years ago, I wrote many Perl scripts to increase my work productivity.
> Now I am starting learning Java, Javascript and PHP for some purposes. Just wrote a simple html2txt java program.
> Now I need experts' suggestion for one specific purpose - extract data from a html table. One of the examples is here: http://mops.tse.com.tw/nas/t06sa18/200902/A02_2454_200902.htm
> I need to extract datum from the table and make some analysis. (Txt is in Chinese, sorry)
> What is the best way to code it?

Is this a one-shot (one-time-only) task? Then the best way is whatever way you are most comfortable with and which works.
If it is ongoing, then I doubt the language choice matters, but using an HTML parser rather than attempting to parse it yourself is going to help in any language.
And are you starting with HTML files or with an HTTP server? The two are very different. -
Why do we need varrays, index-by tables, PL/SQL tables, etc. when cursors are available?
hi,
Why do we need composite data types like index-by tables, varrays, etc. when we have cursors and can do all the same things with a cursor?
Thanks
Ram

> I would have to create a collection type for each column in the select statement.
No.
SQL> select count(*) from scott.emp ;
COUNT(*)
14
1 row selected.
SQL> DECLARE
2 TYPE my_Table IS TABLE OF scott.emp%ROWTYPE;
3 my_tbl my_Table;
4 BEGIN
5 SELECT * BULK COLLECT INTO my_tbl FROM scott.emp;
6 dbms_output.put_line('Bulk Collect rows:'||my_tbl.COUNT) ;
7 END;
8 /
Bulk Collect rows:14
PL/SQL procedure successfully completed.
SQL> disc
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.7.0 - Production
SQL>

Message was edited by:
Kamal Kishore -
Oracle evolution suggestion: NULL and indexes
Hello,
As you know, in most cases NULL values are not indexed and therefore not searchable using an index.
So when you write WHERE MyField IS NULL, you run the risk of a full table scan.
However, most people don't know that, and therefore don't consider this possible issue or the possible solutions (a bitmap index, or including a NOT NULL column).
SQL Server, MySQL, and probably some other databases don't have the same behavior, as they index NULLs.
I know this caveat can be used to get partial indexing by nulling out uninteresting values, so the behavior can't simply be removed.
So I would suggest enhancing the CREATE INDEX command to allow indexing NULLs too, with something like:
Create index MyIndex on MyTable(MyColumn including nulls)
While making this change, perhaps it would also be great to change the behavior documented below, which looks like an old heritage too, by adding keywords like "allow null duplicate" and "constraint on null duplicate":
Ascending unique indexes allow multiple NULL values. However, in descending unique indexes, multiple NULL values are treated as duplicate values and therefore are not permitted.
Laurent

Hello,
Thanks for the links; they cover the main solutions for indexing NULL values. There is also the option of a bitmap index.
None of them are very intuitive for a non-expert.
But the purpose of my message was mainly to highlight this complexity for quite a basic feature, as I think the default should be to index NULLs, with an option not to index them.
As I said, this is the behavior in SQL Server and MySQL. That is why I suggest enhancing index behavior to allow indexing NULLs easily, and not via strange tricks like indexing a blank space or a NOT NULL column.
These solutions are, from my viewpoint, workarounds; helpful workarounds, but still workarounds. The Oracle database team has the power to fix this root cause without breaking backward compatibility. That is the sense of my message; I'm just hoping they can hear me...
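For reference, the workaround being criticized above is usually written as a composite index with a trailing constant. Because the constant is never null, every row gets an index entry, NULLs in the indexed column included. A sketch on a hypothetical table:

```sql
-- The literal 1 is never null, so every row of MyTable has an entry
-- in this index, and WHERE MyColumn IS NULL can use an index scan.
create index mytable_col_ix on MyTable (MyColumn, 1);
```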
Laurent -
Need suggestion: few records to be updated among millions
Hi All,
I need your expert comments on the scenario below.
Scenario: I have a table with 2 columns, Person_Id and Country_Id. It has millions of records.
1) One person_id may have multiple country_ids, but at most 4.
2) This data gets updated, but only around 4,000 records are updated out of the millions present in the table.
The update will be done by a procedure which I will create. We will have information about the 4,000 records that need updating, and will update the table above.
Now, from a performance point of view, how should we design this table?
1) Should I partition this table? If yes, what type of partitioning?
2) Should there be any index?
Please suggest if there is any other better way.
Thanks a lot in advance.

00abfbfa-08b4-49e2-98ad-44c04ed2ac37 wrote:
Yes, records will be identified by person_id.
But since person_id is not unique, won't Oracle go for a full scan even if there is an index?
That's why we are thinking of partitioning.
Did you actually read what I wrote?
The important bit was
"Oracle will choose the best path to access the rows."
So, create the index, gather statistics and find out yourself! -
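The advice above (create the index, gather statistics, find out yourself) might look like this, with made-up table and index names:

```sql
create index person_country_ix on person_country (person_id);

begin
  dbms_stats.gather_table_stats(user, 'PERSON_COUNTRY', cascade => true);
end;
/

-- Then verify the access path the optimizer actually chooses:
explain plan for
  select country_id from person_country where person_id = :p;
select * from table(dbms_xplan.display);
```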
Hi, can you tell me:
1. What is an index-organized table?
2. What is fragmentation of a table?
3. What does cascading trigger mean?

Hi,
For these points, a good starting point is the Oracle manuals. For index-organized tables, check this link: [Overview of Index-Organized Tables|http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref1044].
Cascading triggers should be avoided; see this link: [Some Cautionary Notes about Triggers|http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/triggers.htm#sthref3187]
Regards,
Edited by: Walter Fernández on Jan 28, 2009 4:21 PM - Adding information about cascading triggers -
Need to generate an index XML file for a corresponding report PDF file
Need to generate an index XML file for the corresponding report PDF file.
Currently in Fusion we generate a PDF file using a given RTF template and data model source through ESS BIPJobType.xml.
This generates the PDF successfully.
As per a requirement from the Oracle GSI team, they need the index XML file of the corresponding generated PDF file for their own business scenario.
Please see the following attached sample file .
PDf file : https://kix.oraclecorp.com/KIX/uploads1/Jan-2013/354962/docs/BPA_Print_Trx-_output.pdf
Index file : https://kix.oraclecorp.com/KIX/uploads1/Jan-2013/354962/docs/o39861053.out.idx.txt
In R12,
we do this through a Java API call to FOProcessor to build the PDF. Here is a sample snapshot:
xmlStream = PrintInvoiceThread.generateXML(pCpContext, logFile, outFile, dbCon, list, aLog, debugFlag);
OADocumentProcessor docProc = new OADocumentProcessor(xmlStream, tmpDir);
docProc.process();
PrintInvoiceThread :
out.println("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>");
out.print("<xapi:requestset ");
out.println("<xapi:filesystem output=\"" + outFile.getFileName() + "\"/>");
out.println("<xapi:indexfile output=\"" + outFile.getFileName() + ".idx\">");
out.println(" <totalpages>${VAR_TOTAL_PAGES}</totalpages>");
out.println(" <totaldocuments>${VAR_TOTAL_DOCS}</totaldocuments>");
out.println("</xapi:indexfile>");
out.println("<xapi:document output-type=\"pdf\">");
out.println("<xapi:customcontents>");
XMLDocument idxDoc = new XMLDocument();
idxDoc.setEncoding("UTF-8");
((XMLElement)(generator.buildIndexItems(idxDoc, am, row)).getDocumentElement()).print(out);
idxDoc = null;
out.println("</xapi:customcontents>");
In R12 we have the privilege of using page number variables through oracle.apps.xdo.batch.ControlFile:
public static final String VAR_BEGIN_PAGE = "${VAR_BEGIN_PAGE}";
public static final String VAR_END_PAGE = "${VAR_END_PAGE}";
public static final String VAR_TOTAL_DOCS = "${VAR_TOTAL_DOCS}";
public static final String VAR_TOTAL_PAGES = "${VAR_TOTAL_PAGES}";
Is there any similar Java library which does the same thing in Fusion?
Note: I checked the BIP doc http://docs.oracle.com/cd/E21764_01/bi.1111/e18863/javaapis.htm#CIHHDDEH
Section 7.11.3.2, Invoking Processors with InputStream.
But this is not helping me much. Is there any other document/viewlet which covers these things?
Appreciate any help/suggestions.
-anjani prasad
I have attached these Java files in KIX: https://kix.oraclecorp.com/KIX/display.php?labelId=3755&articleId=354962
PrintInvoiceThread
InvoiceXmlBuilder
Control.java

You can find the steps here:
http://weblogic-wonders.com/weblogic/2009/11/29/plan-xml-usage-for-message-driven-bean/
http://weblogic-wonders.com/weblogic/2009/12/16/invalidation-interval-secs/ -
I need to copy data from a table in one database (db1) to another table in another database (db2)
Hi
I need to copy data from a table in one database (db1) to another table in another database (db2).
I am not sure if the table exists in db2; if it does not, it needs to be created, and the data also needs to be inserted.
How am I supposed to do this using SQL statements?
I shall be happy if it is explained in SQL also.
Thanking you in advance.

How many rows does the table contain? There are many ways you can achieve this:
1. Export and import.
2. Create a dblink between the two databases. Use CREATE TABLE AS SELECT if the structure does not exist in the other database; if the structure exists, use an INSERT INTO ... SELECT command.
Example:
Create a dblink in the db2 database pointing to the db1 database.
create table table1 as select * from table1@db1 -- when there is no structure present
-- you need to add constraints manually, if any exist.
insert into table1 select * from table1@db1 -- when the structure is present.
If the table contains a large volume of data, I would suggest you use export and import.
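A sketch of the dblink step above, with placeholder credentials and TNS alias:

```sql
-- Run in db2; the username, password, and connect string are placeholders.
create database link db1
  connect to scott identified by tiger
  using 'DB1_TNS_ALIAS';

create table table1 as select * from table1@db1;  -- structure absent in db2
insert into table1 select * from table1@db1;      -- structure already present
```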
Jaffar -
Creation of secondary indexes for table "RSBATCHCTRL_PAR" failed
Hi,
We have installed EHP1 on our BI 7.0 system successfully. Later, while trying to apply SPS01 for this EHP, we got the following error during the TBATG conversion:
2 EGT092 Conversion of table "RSBATCHCTRL_PAR" was restarted
2 EGT241 The conversion is continued at step "6"
2 EGT246 Type of conversion: "T" -> "T"
2 EGT240XBegin step "RSBATCHCTRL_PAR-STEP6":
4 EGT281 sql:
4 ED0314 CREATE
4 ED0314 INDEX [RSBATCHCTRL_PAR~DB] ON [RSBATCHCTRL_PAR]
4 ED0314 ( [JOBNAME] ,
4 ED0314 [JOBCOUNT] ,
4 ED0314 [SERVER] ,
4 ED0314 [HOST] ,
4 ED0314 [WP_NO] ,
4 ED0314 [WP_PID] ,
4 ED0314 [PROCESS_TYPE] )
4 ED0314 WITH ( ONLINE=OFF )
4 ED0314 ON [PRIMARY]
2 ED0314 Line 1: Incorrect syntax near '('.
3 EDA093 "DDL time(___1):" ".........6" milliseconds
2EEGT236 The SQL statement was not executed
2EEDI006 Index " " could not be created completely in the database
2EEGT221 Creation of secondary indexes for table "RSBATCHCTRL_PAR" failed
2EEGT239 Error in step "RSBATCHCTRL_PAR-STEP6"
2 EGT253XTotal time for table "RSBATCHCTRL_PAR": "000:00:00"
2EEGT094 Conversion could not be restarted
2 EGT067 Request for "RSBATCHCTRL_PAR" could not be executed
1 ED0327XProcess..................: "ferrari_12"
1 ED0302X=========================================================================
1 ED0314 DD: Execution of Database Operations
1 ED0302 =========================================================================
1 ED0327 Process..................: "ferrari_12"
1 ED0319 Return code..............: "0"
1 ED0314 Phase 001................: < 1 sec. (Preprocessing of TBATG)
1 ED0314 Phase 002................: < 1 sec. (Partitioning)
1 ED0309 Program runtime..........: "< 1 sec."
1 ED0305 Date, time...............: "03.06.2009", "12:47:21"
1 ED0318 Program end==============================================================
1 ETP166 CONVERSION OF DD OBJECTS (TBATG)
1 ETP110 end date and time : "20090603124721"
1 ETP111 exit code : "8"
1 ETP199 ######################################
System properties:
SAP - BI7.0 with EHP1
Database - MSSQL 2000
OS - Windows2003
Please suggest.
Thanks in advance,
Pavan.

> We have installed EHP1 on our BI7.0 system successfully, later we are trying to apply SPS01 for this EHP but we got the follwoing error during TBATG conversion.
> 2 ED0314 Line 1: Incorrect syntax near '('.
> 3 EDA093 "DDL time(___1):" ".........6" milliseconds
> 2EEGT236 The SQL statement was not executed
This is a known problem with SQL Server 2000, see
Note 1180553 - Syntax error 170 during index creation on SQL 2000
I highly suggest upgrading to SQL Server 2005 or 2008.
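For illustration only (the SAP note's correction is authoritative): SQL Server 2000 does not understand the WITH (ONLINE=OFF) clause, which is why the parser fails at the opening parenthesis. The statement would only parse in a form like:

```sql
CREATE INDEX [RSBATCHCTRL_PAR~DB] ON [RSBATCHCTRL_PAR]
  ( [JOBNAME], [JOBCOUNT], [SERVER], [HOST],
    [WP_NO], [WP_PID], [PROCESS_TYPE] )
  ON [PRIMARY]
```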
Markus -
What is a logical rowid in an IOT? Are they stored somewhere physically, just like physical rowids?
What are secondary indexes?
What is meant by leaf block splits? When and how do they happen?
And is it true that the primary key constraint for an index-organized table cannot be dropped, deferred, or disabled? If yes, then why?
How does overflow work? How are the two clauses PCTTHRESHOLD and INCLUDING implemented? How do they work?
Edited by: Juhi on Oct 22, 2008 1:09 PM

I'm sort of tempted to just point you in the direction of the official documentation (the Concepts Guide would be a start; see http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#sthref759)
But I would say one or two other things.
First, physical rowids are not stored physically. I don't know why you'd think they were. The ROWID data type can certainly be used to store a rowid if you choose to do so, but if you do something like 'select rowid from scott.emp', for example, you'll see rowids that are generated on-the-fly. ROWID is a pseudo-column, not physically stored anywhere, but computed whenever needed.
The difference between a physical rowid and a logical one used with IOTs comes down to a bit of relational database theory. It is a cast-iron rule of relational databases that a row, once inserted into a table, must never move. That is, the rowid it is assigned at the moment of its first insertion, must be the rowid it 'holds onto' for ever and ever. If you ever want to change the rowids assigned to rows in an ordinary table, you have to export them, truncate the table and then re-insert them: fresh insert, fresh rowid. (Oracle bends this rule for various maintenance and management purposes, whereby 'enable row movement' permits rows to move within a table, but the general case still applies mostly).
That rule is obviously hopeless for index structures. Were it true, an index entry for 'Bob' who gets updated to 'Robert' would find itself next to entries for 'Adam' and 'Charlie', even though it now has an 'R' value. Effectively, a 'b' "row" in an index must be allowed to "move" to an 'r' sort of block if that's the sort of update that takes place. (In practice, an update to an index entry consists of performing a delete followed by a re-insert, but the physicalities don't change the principle: "rows" in an index must be allowed to move if their value changes; rows in a table don't move, whatever happens to their values)
An IOT is, at the end of the day, simply an index with a lot more columns in it than a "normal" index would have -so it, too, has to allow its entries (its 'rows', if you like) to move. Therefore, an IOT cannot use a standard ROWID, which is assigned once and forever. Instead, it has to use something which takes account of the fact that its rows might wander. That is the logical rowid. It's no more "physical" than a physical rowid -neither are physically stored anywhere. But a 'physical' rowid is invariant; a logical one is not. The logical one is actually constructed in part from the primary key of the IOT -and that's the main reason why you cannot ever get rid of the primary key constraint on the IOT. Being allowed to do so would equate to allowing you to destroy the one organising principle for its contents that an IOT possesses.
(See the section entitled "The ROWID Pseudocolumn" and following on this page: http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1845
So IOTs have their data stored in them in primary key order. But they don't just contain the primary key, but every other column in the 'table definition' too. Therefore, just like with an ordinary table, you might want sometimes to search for data on columns which are NOT part of the primary key -and in that case, you might well want these non-primary key columns to be indexed. Therefore, you will create ordinary indexes on these columns -at this point, you're creating an index on an index, really, but that's a side issue, too! These extra indexes are called 'secondary indexes', simply because they are 'subsidiary indexes' to the main one, which is the "table" itself arranged in primary key order.
Finally, a leaf block split is simply what happens when you have to make room for new data in an index block which is already packed to the rafters with existing data. Imagine an index block can only contain four entries, for example. You fill it with entries for Adam, Bob, Charlie, David. You now insert a new record for 'Brian'. If this was a table, you could throw Brian into any new block you like: data in a table has no positional significance. But entries in an index MUST have positional significance: you can't just throw Brian in amongst the middle of a lot of Roberts, Susans and Tanyas. Brian HAS to go in between the existing entires for Bob and Charlie. Yet you can't just put him in the middle of those two, because then you'd have five entries in a block, not four, which we imagined for the moment to be the maximum allowed. So what to do? What you do is: obtain a new, empty block. Move Charlie and David's entries into the new block. Now you have two blocks: Adam-Bob and Charlie-David. Each only has two entries, so each has two 'spaces' to accept new entries. Now you have room to add in the entry for Brian... and so you end up with Adam-Bob-Brian and Charlie-David.
The process of moving some index entries out of one block into a new one so that there's room to allow new entries to be inserted in the middle of existing ones is called a block split. They happen for other reasons, too, so this is just a gloss treatment of them, but they give you the basic idea. It's because of block splits that indexes (and hence IOTs) see their "rows" move: Charlie and David started in one block and ended up in a completely different block because of a new (and completely unrelated to them) insert.
Very finally, overflow is simply a way of splitting off data into a separate table segment that wouldn't sensibly be stored in the main IOT segment itself. Suppose you create an IOT containing four columns: one, a numeric sequence number; two, a varchar2(10); three, a varchar2(15); and four, a blob. Column 1 is the primary key.
The first three columns are small and relatively compact. The fourth column is a blob data type -so it could be storing entire DVD movies, multi-gigabyte-sized monsters. Do you really want your index segment (for that is what an IOT really is) to balloon to huge sizes every time you add a new row? Probably not. You probably want columns 1 to 3 stored in the IOT, but column 4 can be bumped off over to some segment on its own (the overflow segment, in fact), and a link (actually, a physical rowid pointer) can link from the one to the other. Left to its own devices, an IOT will chop off every column after the primary key one when a record which threatens to consume more than 50% of a block gets inserted. However, to keep the main IOT small and compact and yet still contain non-primary key data, you can alter these default settings. INCLUDING, for example, allows you to specify the last non-primary key column at which a record is divided between 'keep in IOT' and 'move out to overflow segment'. You might say 'INCLUDING COL3' in the earlier example, so that COL1, COL2 and COL3 stay in the IOT and only COL4 overflows. And PCTTHRESHOLD can be set to, say, 5 or 10 so that you try to ensure an IOT block always contains 10 to 20 records -instead of the 2 you'd end up with if the default 50% kicked in. -
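The PCTTHRESHOLD/INCLUDING behaviour described in that example maps directly onto DDL; a sketch (the table and tablespace names are invented):

```sql
create table doc_iot (
  id      number,
  name    varchar2(10),
  descr   varchar2(15),
  payload blob,
  constraint doc_iot_pk primary key (id)
)
organization index
  pctthreshold 10             -- a row may use at most 10% of an index block
  including descr             -- id, name, descr stay in the IOT...
  overflow tablespace users;  -- ...payload spills to the overflow segment
```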
Need suggestion on getting data using JDBC
Hi all, I need a suggestion.
I have a VO corresponding to a database table.
When I try to get the records from that table,
how can I assign a particular column value to the
corresponding VO setter method?
Please do the needful.

Hello inform2csr,
Your question is not so clear.
Can you be more precise?
What is VO?
Maybe you are looking for
-