Index growth
hi
I don't understand how an index grows in a tree-like structure. I mean, Oracle says that up to 4 levels of branches are possible. If we have 10,000 key values, can anybody tell me how these values will be stored in the index? Just explain it here and I will build the picture in my mind. I know that the actual values are in the leaf blocks, but how are the addresses of those values stored, and in what order, in the upper-level blocks? And what is in the root block?
Regards
Actually, there have been discussions elsewhere that Oracle b-tree indexes can be made to have 22 or 23 levels (but you really, really have to work at it).
The Oracle Concepts Guide has a pretty good introduction to index storage http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm#13387
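As a quick way to see the shape for yourself, you can ask the data dictionary how tall an index actually is (a sketch; the index name is hypothetical):

```sql
-- BLEVEL is the number of branch levels between the root and the
-- leaf blocks, so the index height is BLEVEL + 1.
SELECT index_name, blevel, leaf_blocks, num_rows
FROM   user_indexes
WHERE  index_name = 'MY_INDEX';   -- hypothetical index name

-- VALIDATE STRUCTURE populates INDEX_STATS with the exact height
-- and the leaf/branch row counts for one index at a time.
ANALYZE INDEX my_index VALIDATE STRUCTURE;
SELECT height, lf_rows, br_rows FROM index_stats;
```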
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC
Similar Messages
-
A better SQL to avoid Index growth
Our developer passed this code along to us because we are seeing performance problems and odd behaviour in our 9.2.0.7 DB. When this process runs, our 4MB index grows to over 1.5GB and we are forced to re-org it to bring it back down.
BEGIN
PRETERR := 'NO ERROR';
PRETCODE := 0;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectalias';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectalias', V.PVR_ALIAS, NULL, NULL
FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
PVR_EFFECTIVE_END_DATE
AND PT.PTY_PARTY_STATUS <> 'D'
AND V.PVR_ALIAS IS NOT NULL
AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectlender';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectlender', V.PVR_BUSINESS_NAME, NULL, NULL
FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
PVR_EFFECTIVE_END_DATE
AND PT.PTY_PARTY_STATUS <> 'D'
AND V.PVR_BUSINESS_NAME IS NOT NULL
AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectlendername';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectlendername', PP.PRP_COLUMN_1, NULL, NULL
FROM T_POLICY, T_POLICY_PARTY, T_POLICY_PARTY_PROPERTY PP
WHERE POL_POLICY_ID = PPA_POL_POLICY_ID
AND PPA_EFFECTIVE_END_DATE = TO_DATE('9999-01-01', 'RRRR-MM-DD')
AND PPA_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
AND POL_POLICY_ID = PP.PRP_POL_POLICY_ID
AND PP.PRP_EFFECTIVE_END_DATE = TO_DATE('9999-01-01', 'RRRR-MM-DD')
AND PP.PRP_COLUMN_1 IS NOT NULL
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectinsinstcode';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectinsinstcode', V.PVR_INSTITUTION_CODE, NULL, NULL
FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
PVR_EFFECTIVE_END_DATE
AND PT.PTY_PARTY_STATUS <> 'D'
AND V.PVR_INSTITUTION_CODE IS NOT NULL
AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectinstransit';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectinstransit', V.PVR_TRANSIT_NUM, NULL, NULL
FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
PVR_EFFECTIVE_END_DATE
AND PT.PTY_PARTY_STATUS <> 'D'
AND V.PVR_TRANSIT_NUM IS NOT NULL
AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectSubInstCode';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectSubInstCode', V.PVR_INSTITUTION_CODE, NULL, NULL
FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
PVR_EFFECTIVE_END_DATE
AND PT.PTY_PARTY_STATUS <> 'D'
AND V.PVR_INSTITUTION_CODE IS NOT NULL
AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectSubTransNum';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectSubTransNum', V.PVR_TRANSIT_NUM, NULL, NULL
FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
PVR_EFFECTIVE_END_DATE
AND PT.PTY_PARTY_STATUS <> 'D'
AND V.PVR_TRANSIT_NUM IS NOT NULL
AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectFileOwner';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectFileOwner',
PVR_FIRST_NAME || ' ' ||
DECODE(PVR_MIDDLE_NAME,
NULL,
PVR_MIDDLE_NAME || ' ') || PVR_LAST_NAME,
NULL,
PTY_PARTY_CODE
FROM T_ROLE TR, T_USER_ROLE TUR, T_PARTY TP, T_PARTY_VERSION TPV
WHERE TP.PTY_PARTY_ID = TPV.PVR_PTY_PARTY_ID
AND SYSDATE BETWEEN TPV.PVR_EFFECTIVE_START_DATE AND
TPV.PVR_EFFECTIVE_END_DATE
AND TP.PTY_PARTY_ID = TUR.ULE_PFY_PTY_PARTY_ID
AND (SYSDATE BETWEEN TUR.ULE_START_DATE AND TUR.ULE_END_DATE OR
TUR.ULE_END_DATE IS NULL)
AND TR.RLE_ROLE_ID = TUR.ULE_RLE_ROLE_ID
AND PTY_PARTY_CODE NOT LIKE 'LTEST%'
AND TR.RLE_ROLE_NAME IN
('VPOPS', /*'VPRSK2', */
'OPSLEADER', 'TEAMLEADER', 'OPSLEVEL5', 'OPSLEVEL4', 'OPSLEVEL3',
'OPSLEVEL2', 'OPSLEVEL1', 'OPSTRNG')
AND PTY_PARTY_CODE IS NOT NULL
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectbrokername';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectbrokername', PP.PPA_NAME_BROKER_TEXT, NULL, NULL
FROM T_POLICY, T_POLICY_PARTY PP
WHERE POL_POLICY_ID = PPA_POL_POLICY_ID
AND PPA_EFFECTIVE_END_DATE = TO_DATE('9999-01-01', 'RRRR-MM-DD')
AND PPA_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
AND TRIM(PP.PPA_NAME_BROKER_TEXT) IS NOT NULL
ORDER BY 2;
DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectorg';
INSERT INTO T_TEAM_LIST_VAL
(LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
SELECT DISTINCT 'selectorg', PT.PTY_PARTY_CODE, NULL, NULL
FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
PVR_EFFECTIVE_END_DATE
AND PT.PTY_PARTY_STATUS <> 'D'
AND PT.PTY_PARTY_CODE IS NOT NULL
AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
ORDER BY 2;
COMMIT;
Question 1: with a delete and an insert one after another, will this cause it to grow rapidly? Would it be better to run all the deletes first, commit, and then the inserts and commit?
Question 2: at a high level, what method would be better? We thought about a truncate; however, the developer only deletes 40% of the data.
Jonathan Lewis wrote:
huh? wrote:
Our developer passed this code along to us because we are seeing performance problems and odd behaviour in our 9.2.0.7 DB. When this process runs, our 4MB index grows to over 1.5GB and we are forced to re-org it to bring it back down.
Your table definition shows the indexed column as CHAR(200) - which means fixed length; so 4MB equates to 20,000 rows. If this grows to 1.5GB and rebuilds to 4MB, then at some point your index runs to about 80KB per index entry. If, as you describe, you only delete about 40% of the data and re-insert a similar quantity just once, then you must have hit a bug.
You're on 9.2.0.7 (buggy) - and were you using ASSM (also buggy)? I came across a bug with ASSM on one occasion that resulted in a process using only 3 blocks from each index extent that it allocated during a PL/SQL loop modifying a couple of hundred rows. Different circumstances from yours, and an earlier version - but you may have hit something similar.
A few thoughts:
<ul>
If it's in ASSM, move the index to a freelist-managed tablespace to see what happens
If you need the index, could you make it unusable for the load?
Should you have the index at all?
Should this table be list-partitioned on LIST_DOMAIN? Then you could use partition exchange to load data like this
</ul>
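A sketch of the "make it unusable for the load" idea from the list above (the index name is hypothetical):

```sql
ALTER INDEX team_list_val_ix UNUSABLE;            -- hypothetical index name
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- DML no longer maintains it

-- ... run the delete / insert cycles here ...

ALTER INDEX team_list_val_ix REBUILD;             -- one compact rebuild at the end
```

The point of this pattern is that the index is never maintained row by row during the churn, so it cannot fragment; it is rebuilt once, compactly, at the end.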
Committing between delete and insert generally helps in cases like this.
Committing between each delete/insert pair should (in the absence of bugs) help
Regards
Jonathan Lewis.
Yes, we are using ASSM. Yes, we have been told the same - that this sounds like a common bug. One thing we will do is the two commits, as well as your suggestion of moving or altering the index, and see how that fares. Thanks for your help. -
Can anybody tell me how I can keep an index from growing too fast
when I insert presorted data into the corresponding table?
Is there any way to force rebalancing of the index (to get a fill rate of nearly 100%)?
merci, Charles -
Database growth following index key compression in Oracle 11g
Hi,
We have recently implemented index key compression in our sap R3 environments, but unexpectedly this has not resulted in any reduction of index growth rates.
What I mean by this is that while the indexes have compressed on average 3 fold (over the entire DB), we are not seeing this with the DB growth going forward.
i.e. we were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
Our trial with ACO compression seemed to yield a reduction in table growth rates corresponding to the compression ratio (i.e. table data growth rates dropped to a third after compression), but we haven't seen this with index compression.
Does anyone know whether, after a rebuild with index key compression, future records inserted into the tables will also be compressed (as I assumed), or does it only compress what's there already?
Cheers
Theo
Hello Theo,
Does anyone know if a rebuild with index key compression will it compress any future records inserted into the tables once compression is enabled (as I assumed) or does it only compress whats there already?
I wrote a blog about index key compression internals a long time ago ([Oracle] Index key compression), but now I noticed that one important statement is missing. Yes, future entries are compressed too - index key compression is a "live compression" feature.
We were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
Do you mean that your DB size still increases by ~15GB per month overall, or just the index segments? Depending on which segment types are growing, indexes may be only a small part of your system anyway.
If you have enabled compression and performed a reorg, you can run into one-time effects like 50/50 block splits due to fully packed blocks, etc. It also depends on the way the data is inserted/updated and which indexes are compressed.
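For reference, a minimal sketch of enabling key compression on an existing index and then checking what Oracle itself considers the optimal compression prefix (the index name is hypothetical):

```sql
-- Rebuild with the leading key column compressed; COMPRESS 1 deduplicates
-- repeated values of the first key column within each leaf block.
ALTER INDEX my_app_ix REBUILD COMPRESS 1;   -- hypothetical index name

-- VALIDATE STRUCTURE reports the prefix length Oracle considers optimal
-- (OPT_CMPR_COUNT) and the space that prefix would save (OPT_CMPR_PCTSAVE).
ANALYZE INDEX my_app_ix VALIDATE STRUCTURE;
SELECT opt_cmpr_count, opt_cmpr_pctsave FROM index_stats;
```

Comparing OPT_CMPR_PCTSAVE before and after the rebuild is a quick sanity check that the chosen prefix length actually matches the data.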
Regards
Stefan -
Hi,
We have recently gone live with SAP ECC for a retail scenario. Our database is growing by 3 GB per day, which includes both data and index growth.
Modules configured:
SD (Retail), MM, HR and FI/CO.
COPA is configured for reporting purpose to find article wise sales details per day and COPA summarization has not been done.
Total sales order created per day on an average: 4000
Total line items of sales order on an average per day: 25000
Total purchase order created per day on an average: 1000
Please suggest whether database growth of 3 GB per day is normal for our scenario or should we do something to restrict the database growth.
Fastest Growing tables are,
CE11000 Operating Concern fo
CE31000 Operating Concern fo
ACCTIT Compressed Data from FI/CO Document
BSIS Accounting: Secondary Index for G/L Accounts
GLPCA EC-PCA: Actual Line Items
FAGLFLEXA General Ledger: Actual Line Items
VBFA Sales Document Flow
RFBLG Cluster for accounting document
FAGL_SPLINFO Splittling Information of Open Items
S120 Sales as per receipts
MSEG Document Segment: Article
VBRP Billing Document: Item Data
ACCTCR Compressed Data from FI/CO Document - Currencies
CE41000_ACCT Operating Concern fo
S033 Statistics: Movements for Current Stock (Individual Records)
EDIDS Status Record (IDoc)
CKMI1 Index for Accounting Documents for Article
LIPS SD document: Delivery: Item data
VBOX SD Document: Billing Document: Rebate Index
VBPA Sales Document: Partner
BSAS Accounting: Secondary Index for G/L Accounts (Cleared Items)
BKPF Accounting Document Header
FAGL_SPLINFO_VAL Splitting Information of Open Item Values
VBAP Sales Document: Item Data
KOCLU Cluster for conditions in purchasing and sales
COEP CO Object: Line Items (by Period)
S003 SIS: SalesOrg/DistCh/Division/District/Customer/Product
S124 Customer / article
SRRELROLES Object Relationship Service: Roles
S001 SIS: Customer Statistics
Is there any way we can reduce the data growth without affecting the functionality configured?
Will COPA summarization configuration help reduce the growth of the FI/CO tables?
Regards,
Nalla.
user480060 wrote:
Dear all,
Oracle 9.2 on AIX 5.3
In one of our database, one table has a very fast growth rate.
How can I check if the table growth is normal or not?
Please advise.
The question is, what is a "very fast growth rate"?
What are the DDL of the table resp. the data types that the table uses?
One potential issue could be the way the table is populated: If you constantly insert into the table using a direct-path insert (APPEND hint) and subsequently delete rows then your table will grow faster than required because the deleted rows won't be reused by the direct-path insert because it always writes above the current high-water mark of your table.
Maybe you want to check your application for such a case if you think that the table grows faster than the actual amount of data it contains.
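That insert/delete pattern can be sketched like this (table names are hypothetical); direct-path inserts always load above the high-water mark, so the space freed by the deletes is never reused:

```sql
-- Direct-path insert: writes above the high-water mark (HWM)
INSERT /*+ APPEND */ INTO my_queue_table
SELECT * FROM staging_table;
COMMIT;

-- Later deletes free space *below* the HWM ...
DELETE FROM my_queue_table WHERE processed = 'Y';
COMMIT;

-- ... but the next APPEND insert ignores that free space and extends
-- the segment again, so the table keeps growing regardless of deletes.
INSERT /*+ APPEND */ INTO my_queue_table
SELECT * FROM staging_table;
COMMIT;
```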
You could use the ANALYZE command to get information about empty blocks and average free space in the blocks or use the procedures provided by DBMS_SPACE package to find out more about the current usage of your segment.
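A sketch of the DBMS_SPACE approach (the table name is hypothetical):

```sql
-- DBMS_SPACE.UNUSED_SPACE reports how much of the segment sits
-- above the data actually stored (blocks above the high-water mark).
DECLARE
  l_total_blocks       NUMBER;
  l_total_bytes        NUMBER;
  l_unused_blocks      NUMBER;
  l_unused_bytes       NUMBER;
  l_last_ext_file_id   NUMBER;
  l_last_ext_block_id  NUMBER;
  l_last_used_block    NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => USER,
    segment_name              => 'MY_FAST_GROWING_TABLE',  -- hypothetical
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_last_ext_file_id,
    last_used_extent_block_id => l_last_ext_block_id,
    last_used_block           => l_last_used_block);
  DBMS_OUTPUT.PUT_LINE('Total blocks: ' || l_total_blocks ||
                       ', unused above HWM: ' || l_unused_blocks);
END;
/
```

A large gap between total blocks and the blocks actually holding rows is the signature of the direct-path-plus-delete pattern described above.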
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Hi all,
DB:Oracle 9i
os :solaris 8
Can anyone please tell me how to check object (table, index) growth day by day?
thanks,
kk
Edited by: user603328 on Apr 20, 2011 3:08 PM
You must take a "photo" of dba_segments.
day 1
create table spaces as select * from dba_segments;
alter table spaces add (time_snap date);
create or replace trigger spaces_snap_trg
before insert on spaces
for each row
begin
  :new.time_snap := sysdate;
end;
/
day 2,3,4, ... n
insert into spaces select * from dba_segments;
In the spaces table you'll have the day-by-day growth of all objects. You can take the data to Excel and generate a growth graph.
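Once a few daily snapshots exist, a self-join of the spaces table gives the day-over-day growth per segment (a sketch; assumes exactly one snapshot per day):

```sql
-- Compare each segment's size with its size in the previous day's snapshot
SELECT s2.owner, s2.segment_name,
       s2.bytes - s1.bytes AS growth_bytes
FROM   spaces s1, spaces s2
WHERE  s1.owner            = s2.owner
AND    s1.segment_name     = s2.segment_name
AND    s1.segment_type     = s2.segment_type
AND    TRUNC(s2.time_snap) = TRUNC(s1.time_snap) + 1
ORDER  BY growth_bytes DESC;
```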
HTH
Antonio NAVARRO -
How to find the incremental growth of index for last few months
Hi,
How can I find the incremental growth of an index over the last few months?
Thanks,
Sathis.
Hi,
Check the below link, it may help you!
http://www.rampant-books.com/t_tracking_oracle_database_tables_growth.htm
Thanks,
Sankar -
I have a table whose growth is 1 million rows per month and may increase in future. I currently have an index on a column which is frequently used in the WHERE clause. There is another column which contains the month, so it may be possible to create 12 partitions on that. I want to know what is suitable. Is there any connection between the index and table partitioning?
Message was edited by:
user459835
I think the question is more of what type of queries are answered by this table?
Is it that most of the time the results returned span several months?
Is there any relation between the column you use in the where clause and the data belonging to a particular month (or a range thereof)? -
Index file increase with no corresponding increase in block numbers or Pag file size
Hi All,
Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
The cube is in its infancy, but already contains 24M blocks, with a PAG file size of 12GB. We expect this to grow fairly rapidly over the next 12 months or so.
After performing a simple Agg of aggregating sparse dimensions, the Index file sits at 1.6GB.
When I then perform a dense restructure, the index file reduces to 0.6GB. The PAG file remains around 12GB (a minor reduction of 0.4GB occurs). The number of blocks remains exactly the same.
If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
I have tried running the Aggs using parallel calcs, and also as in series (ie single thread) and get exactly the same results.
I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it. At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that it was fixed way before 11.1.2.1.
I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right? But I am getting more than 160% growth (1.6GB / 0.6GB).
And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (ie only 1 Scenario & Version)
The Index file growth in itself is not a problem. But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st. And with the expected growth of the model, this will likely get much worse.
Anyone have any explanation as to what is occurring, and how to prevent it...?
Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
Thanks for reading.
alan.d wrote:
The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
Thanks for reading.
I haven't tried Direct I/O for quite a while, but I never got it to work properly. It wasn't exactly the same issue that you have, but it would spawn tons of .pag files. You might try duplicating your cube, changing it to buffered I/O, and running the same processes to see if it does the same thing.
Sabrina -
Reclaim disk space after delete an index
Hi ,
I have deleted unused indexes from a history table to reclaim the disk space that was allocated for them, yet the size on disk is the same. Any steps I can follow to achieve this would be greatly appreciated.
indexes size is 70 GB
table size is 45 GB , # of rows is above 200 Million
Thanks in advancein order to reclaim space after your big delete. Don't shrink your data files unless you have no other choice.
The space will automatically go back to SQL server. You don't have to do anything.
The database size will be the same plus the growth of your log file due to logging the transactions. However, internally, SQL server knows that it still has that space to play with.
Otherwise you can update the statistics and rebuild the indexes, but I think that should be part of your normal weekend/nightly maintenance plan anyway.
Please click "Propose As Answer" if a post solves your problem, or "Vote As Helpful" if a post has been useful to you.
Rapid and huge growth of used space in the temporary tablespace
Hi,
I have a query (select) that runs quickly (no more than 10 seconds).
As soon as I insert the data into a temporary table, or even into a physical table, the used space of the temporary tablespace starts to grow very fast. The space is completely consumed and the query crashes when we reach the limit (65GB) - or even later if I add more data files to the temporary tablespace!
The problem also only happens if the period (dates) is the whole year (2013). If the period is the first quarter of 2013 (same amount of data), the problem does not happen!!
I also confirm that on another instance (a test one), even with fewer resources, this Oracle behaviour does not happen. I confirmed that the execution plans differ between the two instances.
What I really do not understand is this Oracle behaviour, with its huge and rapid growth!!!
Has anyone experienced a similar situation?
Thanks in advance,
Rui
Plan
INSERT STATEMENT ALL_ROWSCost: 15.776 Bytes: 269 Cardinality: 1
28 LOAD TABLE CONVENTIONAL MIDIALOG_OLAP.MED_INVCOMP_FACTTMP_BEFGROUPBY
27 FILTER
26 NESTED LOOPS
24 NESTED LOOPS Cost: 15.776 Bytes: 269 Cardinality: 1
22 NESTED LOOPS Cost: 15.775 Bytes: 255 Cardinality: 1
19 NESTED LOOPS Cost: 15.774 Bytes: 205 Cardinality: 1
17 NESTED LOOPS Cost: 15.773 Bytes: 197 Cardinality: 1
14 NESTED LOOPS Cost: 15.770 Bytes: 180 Cardinality: 1
11 NESTED LOOPS Cost: 15.767 Bytes: 108 Cardinality: 1
9 HASH JOIN Cost: 15.757 Bytes: 8.346.500 Cardinality: 83.465
7 HASH JOIN Cost: 13.407 Bytes: 6.345.012 Cardinality: 83.487
5 HASH JOIN Cost: 11.163 Bytes: 5.010.550 Cardinality: 100.211
3 HASH JOIN Cost: 5.642 Bytes: 801.288 Cardinality: 22.258
1 INDEX RANGE SCAN INDEX MIDIALOG.IX_INSCOMP_DTCEIDICIDLCPECIDOP Cost: 120 Bytes: 489.676 Cardinality: 22.258
2 INDEX FAST FULL SCAN INDEX (UNIQUE) MIDIALOG.IX_LINHACOMPRADA_IDLCIDOPSEQ Cost: 5.463 Bytes: 123.975.530 Cardinality: 8.855.395
4 INDEX FAST FULL SCAN INDEX (UNIQUE) MIDIALOG.IX_LINHACOMPRADA_IDLCIDOPSEQ Cost: 5.463 Bytes: 123.975.530 Cardinality: 8.855.395
6 TABLE ACCESS FULL TABLE MIDIALOG.ITEM_AV Cost: 1.569 Bytes: 6.963.736 Cardinality: 267.836
8 TABLE ACCESS FULL TABLE MIDIALOG.ITEM_AV Cost: 1.572 Bytes: 7.713.672 Cardinality: 321.403
10 INDEX UNIQUE SCAN INDEX (UNIQUE) MIDIALOG.IX_BOFINALBO_IDBOIDFINALBO Cost: 0 Bytes: 8 Cardinality: 1
13 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.INSERCAO_COMPRADA Cost: 3 Bytes: 72 Cardinality: 1
12 INDEX RANGE SCAN INDEX (UNIQUE) MIDIALOG.IX_INSCOMPRADA_IDLCDATAPECAINS Cost: 2 Cardinality: 1
16 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.INSERCAO_ITEMFACTURA Cost: 3 Bytes: 17 Cardinality: 1
15 INDEX RANGE SCAN INDEX MIDIALOG.IX_INSITFACT_INSCOMPRADA Cost: 2 Cardinality: 1
18 INDEX RANGE SCAN INDEX (UNIQUE) MIDIALOG.UQ_ITEMFACTURA_IDITF_IDFACT Cost: 1 Bytes: 8 Cardinality: 1
21 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.FATURA Cost: 1 Bytes: 50 Cardinality: 1
20 INDEX UNIQUE SCAN INDEX (UNIQUE) MIDIALOG.PK_FATURA Cost: 0 Cardinality: 1
23 INDEX UNIQUE SCAN INDEX (UNIQUE) MIDIALOG.PK_TIPO_ESTADO Cost: 0 Cardinality: 1
25 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.TIPO_ESTADO Cost: 1 Bytes: 14 Cardinality: 1
Edited by: rr**** on 19/Feb/2013 15:25
I ran the select successfully - no more than 1 minute for one year of data, and little temporary space used.
As soon as I plug in the insert (into a global temporary table; I also experimented with a physical table), the used space of the temporary tablespace starts to grow like crazy!!
insert into midialog_olap.med_invcomp_facttmp_befgroupby
select fac.numefatura,
fac.codpessoa,
fac.dtemiss,
tef.nome as estado_factura,
opsorig.demid,
opsorig.anoplano,
opsorig.numplano,
opsorig.numplanilha,
ops.nome as ordem_publicidade,
ops.external_number as numero_externo,
ops.estado,
lic.seq,
inc.data,
inc.peca,
fac.id_versao_plano,
fac.ano_proforma || '.' || fac.numrf as num_proforma,
iif.tipo_facturacao,
opsorig.codveiculo as id_veiculo,
opsorig.codfm as id_fornecedor_media,
icorig.chkestado as id_estado_checking,
0 as percentagem_comissao_agencia,
0 as valor_pbv,
0 as valor_stxtv,
0 as valor_ptv,
0 as valor_odbv,
0 as valor_pbbv,
0 as valor_dnv,
0 as valor_pbnv,
0 as valor_stxv,
0 as valor_pbtv,
0 as valor_dav,
0 as valor_plv,
0 as valor_odlv,
0 as valor_pllv,
0 as valor_ca,
0 as valor_trv,
0 as valor_txv,
0 as valor_base_facturacao,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_pb_compra * fac.percentagem_facturada / 100))
as valor_pbc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_stxt_compra * fac.percentagem_facturada / 100))
as valor_stxtc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_pt_compra * fac.percentagem_facturada / 100))
as valor_ptc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_odb_compra * fac.percentagem_facturada / 100))
as valor_odbc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_pbb_compra * fac.percentagem_facturada / 100))
as valor_pbbc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_dn_compra * fac.percentagem_facturada / 100))
as valor_dnc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_pbn_compra * fac.percentagem_facturada / 100))
as valor_pbnc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_stx_compra * fac.percentagem_facturada / 100))
as valor_stxc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_pbt_compra * fac.percentagem_facturada / 100))
as valor_pbtc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_da_compra * fac.percentagem_facturada / 100))
as valor_dac,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_pl_compra * fac.percentagem_facturada / 100))
as valor_plc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_odl_compra * fac.percentagem_facturada / 100))
as valor_odlc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_pll_compra * fac.percentagem_facturada / 100))
as valor_pllc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_transcricoes * fac.percentagem_facturada / 100))
as valor_trc,
decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
inc.total_tx_compra * fac.percentagem_facturada / 100))
as valor_txc,
--nvl((select cfm.total_comprado
-- from fin_custos_facturados_media cfm
-- where cfm.id_factura = fac.id_factura and
-- cfm.id_op = ops.id_op
-- ), 0) / opsorig.number_of_bought_insertions as custos_associados,
0 as custos_associados,
fac.iss as percentagem_iva,
fac.percentagem_facturada,
fac.currency_exchange as taxa_cambio,
iif.associated_code as insertions_associated_code
from fatura fac, item_fatura itf, insercao_itemfactura iif,
insercao_comprada icorig, linha_comprada lcorig, item_av opsorig,
med_bookingorder_finalbo opfin,
insercao_comprada inc,
linha_comprada lic, item_av ops,
--veiculo vei,
tipo_estado tef
where fac.id_factura = itf.id_factura and
itf.id_itemfactura = iif.id_itemfactura and
iif.id_ic = icorig.id_ic and
icorig.id_lc = lcorig.id_lc and
lcorig.id_op = opsorig.id_op and
opsorig.id_op = opfin.id_booking_order and
opsorig.number_of_bought_insertions > 0 and
opfin.id_final_booking_order = ops.id_op and
-- ops.id_op = (
-- select max(ops.id_op)
-- from item_av ops
-- start with ops.id_op = opsorig.id_op
-- connect by prior ops.id_opsubstituicao = ops.id_op) and
ops.id_op = lic.id_op and
lic.seq = lcorig.seq and
lic.id_op = inc.id_op and
lic.id_lc = inc.id_lc and
inc.data = icorig.data and
inc.peca = icorig.peca and
--opsorig.codveiculo = vei.codveiculo and
fac.estado = tef.estado and
fac.estado != 305 and
ops.estado != 223 and
iif.tipo_facturacao != 'SO_CA' and
icorig.data between :dtBeginDate and :dtEndDate and
(fac.codagenciafat = :iIdAgency or :iIdAgency is null); -
BW DB13 Tables and indexes missing
hi, everyone,
Our team found something strange in DB02 about tablespace growth: there is no growth at all, and this has lasted about two months. Then we found that in Tables and Indexes, all the BW tables like *ODSD, *FACTD and *DIMD are missing!
I only found that DB13C reports the error "REORGCHK for All Tables" at the same time.
Please tell me why I can't find the BW tables. Is there any relation between the missing tables and the job calendar?
Our release level is 701, and the database is DB6.
It was a foolish question - I found the reason, and it was easy ^^
-
System.arraycopy (2 dim array) and growth of 2 dim array
Hi everybody
I am working on a program which contains a module that can perform a Cartesian product on a number of sets.
The code I have developed so far is :
import java.lang.reflect.Array;
public class Cart5 {
static int pubnewlength;
public static void main(String[] args) throws Exception {
// declare and initialize solArray
int[][] solArray = new int[1][4];
// Use for method
for (int ii = 0; ii < 4; ii++)
solver(solArray, ii);
// Print the array ?
System.out.println("\n The array was changed ... ");
} // End main
public static void solver(int[][] solArray2, int abi) {
int[][] A = { {1,2,3,5},
{4,6,7},
{11,22,9,10},
{17,33} };
jointwoArrays(solArray2, A, abi);
// some other operations
} // End solver method
public static void jointwoArrays(int[][] solArray3, int[][] aArray, int indexA) {
int y, u;
int[][] tempArray;
// calculate growth of rows:
pubnewlength = solArray3.length * aArray[indexA].length;
// Fill tempArray
y = solArray3[0].length;
u = solArray3.length;
tempArray = new int[u][y];
// Use System.arraycopy to copy solArray3 into tempArray -- How ?
// Change the size of the array to the proper size -- How ?
solArray3 = (int[][]) arrayGrow(solArray3);
// Join operation - Still under construction
for (int i = 0, k = 0; i < tempArray.length; i++)
for (int j = 0; j < aArray[indexA].length; j++) {
int q;
for (q = 0; q <= 2; q++) {
solArray3[k][q] = tempArray[i][q];
}
solArray3[k][q] = aArray[indexA][j];
++k;
}
} // End jointwoArrays method
// This module is from http://www.java2s.com/ExampleCode/Language-Basics/Growarray.htm
static Object arrayGrow(Object a) {
Class cl = a.getClass();
if (!cl.isArray())
return null;
Class componentType = a.getClass().getComponentType();
int length = Array.getLength(a);
int newLength = pubnewlength;
Object newArray = Array.newInstance(componentType, newLength);
System.arraycopy(a, 0, newArray, 0, length);
return newArray;
}
} // End class
I deeply appreciate your help with these 3 questions:
1. How can I use system.arraycopy to copy my two dimensional array? I have searched but examples seem to be about one dim arrays.
2. How can I change the "static Object arrayGrow(Object a)" , to grow my two dimensional array ?
3. If you know any codes or articles or java code regarding cartesian products , please tell me.
Thank you
Denis
> 1. How can I use system.arraycopy to copy my two dimensional array? I have searched but examples seem to be about one dim arrays.
That's because you can't do it in one call. You need to create a loop which copies each "row".
> 2. How can I change the "static Object arrayGrow(Object a)" to grow my two dimensional array?
Why do you make it so complicated (generic)? Make it take an int[][] array instead, and see the answer above.
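A minimal sketch of the non-generic version suggested above (class and method names are made up for illustration): taking int[][] directly, a single System.arraycopy call grows the outer array because it only copies the row references.

```java
public class Grow2D {
    // Grow a 2D int array to newLength rows, keeping the existing rows.
    // arraycopy on the outer array copies row references, not row contents.
    static int[][] grow(int[][] a, int newLength) {
        int[][] bigger = new int[newLength][];
        System.arraycopy(a, 0, bigger, 0, a.length);
        return bigger;
    }

    public static void main(String[] args) {
        int[][] a = { {1, 2}, {3, 4} };
        int[][] b = grow(a, 5);
        System.out.println(b.length);  // prints "5"; rows 2..4 are null until assigned
    }
}
```

Note that the newly added rows are null until you assign arrays to them.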
> 3. If you know any code or articles or java code regarding cartesian products, please tell me.
There are probably lots of them if you google.
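A minimal sketch of the row-by-row copy described in the answer to question 1 (class and method names are made up for illustration): System.arraycopy copies one row's elements per call, and a fresh row array must be allocated for each row to get a true deep copy.

```java
public class Copy2D {
    // Deep-copy a 2D int array: one arraycopy call per row
    static int[][] copy2D(int[][] src) {
        int[][] dst = new int[src.length][];
        for (int i = 0; i < src.length; i++) {
            dst[i] = new int[src[i].length];              // allocate the new row
            System.arraycopy(src[i], 0, dst[i], 0, src[i].length);
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] a = { {1, 2, 3}, {4, 5} };
        int[][] b = copy2D(a);
        b[0][0] = 99;                                     // does not affect a
        System.out.println(a[0][0] + " " + b[0][0]);      // prints "1 99"
    }
}
```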
Kaj -
Why should we create index on the table after inserting data ?
Please tell me the reason why we should create an index on the table after inserting data, when we can also create the index on the table before inserting the data.
The choice depends on a number of factors, the main one being how many rows are going to be inserted into the table as a percentage of the existing rows, or the percentage growth.
Creating the index after a table has been populated works better when the tables are large or the inserts are large, for the following reasons:
1. The sort and creation of the index are more efficient when done in batch and written in bulk, so it works faster.
2. As the index is written, blocks get acquired as more data gets written. So when a large number of rows get inserted into a table that already has an index, the index data blocks start splitting / chaining. This increases the "depth" of the inverted b-tree and makes the index less efficient on I/O. Creating the index after the data has been inserted allows Oracle to create an optimal block distribution and reduce splitting / chaining.
3. If an index exists, then it too is routed through the undo / redo processes. That's an overhead which is avoided when you create the index after populating the table.
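As a sketch of the load-then-index pattern described above (table, column and index names are made up for illustration, not from the original thread):

```sql
-- Load the data first with a direct-path insert...
CREATE TABLE t_demo (id NUMBER, val VARCHAR2(30));

INSERT /*+ APPEND */ INTO t_demo
SELECT level, 'row ' || level
FROM dual
CONNECT BY level <= 100000;

COMMIT;

-- ...then build the index in one bulk sort: no block splits,
-- no per-row index maintenance through undo / redo
CREATE INDEX t_demo_idx ON t_demo (id);
```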
Regards -
Hi Friends,
How to create the bitmap index? Pls given the syntax
When should we create the Btree index?
Given one example btree index
regs
renga
Hi Vinas,
Generally a bitmap index is not good for data on which you are performing DML operations, because of locking issues and index size issues.
1) Locking issues: a bitmap index has a different structure than a b-tree one. An index entry contains a flag, a lock byte, the value, a start rowid, an end rowid and a bitmap. So when you change one bitmap, you have to lock all the rows in its rowid range. In a b-tree you lock just one row.
2) When you update the index, you mark one entry as deleted and create a new one with the new value. Looking at the structure described in point 1), you can see that an index entry can be quite big (generally bigger than an index entry for a b-tree index). This is how your bitmap index can grow.
Both these points are really dangerous from the point of view of performance, and that is why it is not a good idea to use a bitmap index on columns with DML activity.
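Since the original question also asked for the syntax, here is a minimal sketch (table, column and index names are made up for illustration): a bitmap index is created like a normal index with the BITMAP keyword, and suits low-cardinality, read-mostly columns.

```sql
-- Bitmap index: best on low-cardinality, rarely-updated columns
CREATE BITMAP INDEX emp_gender_bix ON employees (gender);

-- Regular b-tree index (the default): better for columns with DML activity
CREATE INDEX emp_name_ix ON employees (last_name);
```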
Jakub.