BWA Fact Table Index Size
Hi
Can anybody tell me how the BWA decides when a fact table index gets split into multiple parts? We have a number of very large cubes that are indexed; some have a fact table index consisting of one logical index made up of multiple physical indexes, while other cubes of similar size have just one very large physical index for the fact table.
With the single very large physical index we seem to get an overload problem, but when the index is split into multiple parts we don't.
Thanks
Martin
Hi Martin,
this depends on the reorg configuration and the attributes of the index. You can manually trigger a split of an index via the command 'ROUNDROBIN x', where x stands for the number of parts the index will be split into. To do this, go into the TREXAdmin standalone tool -> Landscape -> right-click on the index -> Split/Merge Index...
If you want an automatic split, you have to set up your reorg settings. Go to the TREXAdmin standalone tool -> tab Reorg -> Options -> here you can choose the type of algorithm. Have a look at notes 1313260 and 1163149.
Do you have a scheduled reorg job?
Regards,
Jens
PS: Every black box can be understood...
Similar Messages
-
BIA gurus..
Prior to our BIA implementation we had the drop and rebuild index process variants in our process chains.
Now after the BIA implementation we have the BIA index roll-up process variant included in the process chain.
Is it still required to have the drop and rebuild index process variants during data load?
Do the InfoCube fact table indexes ever get hit after the BIA implementation?
Thanks,
Ajay Pathak.
I think you still need the delete/create index variants, as they not only help query performance but also speed up the load to your cubes.
Documentation in the Performance tab:
"Indexes can be deleted before the load process and recreated after loading is finished. This accelerates data loading. However, simultaneous read processes on a cube are negatively influenced: they slow down dramatically. Therefore, this method should only be used if no read processes take place during data loading."
More details at:
[http://help.sap.com/saphelp_nw70/helpdata/EN/80/1a6473e07211d2acb80000e829fbfe/frameset.htm] -
Table index size in DB02 smaller after upgrade
SAP ERP 6.0, DB2 9.5, AIX 5.3. After we upgraded to SPS 15 / EHP4 / NetWeaver EHP1 SPS02 using the downtime-minimized method (shadow instance created), the index sizes for the tables show as reduced. Looking in DB02 under History -> "Tables and indexes", all the tables show a drop in index size. I have compared the indexes to a pre-upgrade copy of the system and all the indexes are still defined and active in the upgraded system. Can somebody please explain the size drop? Is this a reporting error, or what?
Hi Eddie,
DB2 V8.2 did not allow retrieving table/index size information from DB2 directly. Therefore the SAP DB2 database interface and the CCMS code tried to do a size estimation based on cardinality and table/index width. DB2 V9.1+ provides the table function ADMIN_GET_TAB_INFO to retrieve size information directly from DB2. Since this size information is much more accurate, the SAP DB2 database interface and the CCMS code have been changed to use this table function.
So the phantom "shrink" you observed is likely related to the switch from size estimation to the size retrieved from ADMIN_GET_TAB_INFO. This may have happened directly after the V9.5 upgrade (size retrieved differently in the SAP DB2 database interface) or after the SAP release upgrade (change in the CCMS ABAP code).
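For reference, the new size source can be queried directly. A sketch, assuming schema SAPR3 and table VBAP purely as examples (function and column names per the DB2 9 documentation; verify on your release):

```sql
-- Physical table and index sizes (KB) straight from DB2,
-- the same source the newer DB02 figures come from
SELECT tabname,
       data_object_p_size  AS table_kb,
       index_object_p_size AS index_kb
FROM TABLE(sysproc.admin_get_tab_info('SAPR3', 'VBAP')) AS t;
```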
Regards
Frank -
How do I find out the size of an existing table and an index ?
Thanks
Analyze the table or index, then query:
select segment_name, segment_type, bytes
from dba_segments
where segment_type in ('TABLE','INDEX');
(dba_extents shows the individual extents of a segment.)
-
SAP Table index size is greater than the size of the actual table
Hello Experts,
We are resolving an issue related to database performance. The current database size is 9 terabytes. Analysis of response times through ST03N shows that DB time is 50% of total response time. We are planning to reorganize the most heavily updated tables (found via transaction DB02OLD).
Here we see that the size of the index for a table is greater than the actual size of the table. Is this possible? If yes, how can we reorganize the index, since it does not let us reorganize it using the brspace command?
We hope to hear from you soon; any additional activities you can suggest to improve database performance will be appreciated.
Thank you
Hi Zaheer,
online redefinition may help you (for a little while), but also check WHY the index became fragmented.
Improper settings can make the index fragmented again, leaving you with recurring reorg needs.
E.g.:
check if PCT_INCREASE > 0 if you are using dictionary-managed tablespaces, or locally managed tablespaces that use a "User" allocation policy. Set it to 0 to generate uniform next extents in the online reorg.
select
  segment_name,
  segment_type,
  round((next_extent*blocks)/(extents*bytes))*(bytes/blocks),
  pct_increase
from
  dba_segments
where
  owner = 'SAPR3'
  and segment_type in ('INDEX', 'TABLE')
  and pct_increase > 0
  and segment_name in ('Yourtable', 'Yourindex')
In the following cases, it may be worthwhile to rebuild the index:
--> the percentage of the space used is bad - lower than 66%: PCT_USED
--> deleted leaf blocks represent more than 20% of total leaf blocks: DEL_LF_ROWS
--> the height of the tree is bigger than 3: HEIGHT or BLEVEL
select
name,
'----------------------------------------------------------' headsep,
'height '||to_char(height, '999,999,990') height,
'blocks '||to_char(blocks, '999,999,990') blocks,
'del_lf_rows '||to_char(del_lf_rows,'999,999,990') del_lf_rows,
'del_lf_rows_len '||to_char(del_lf_rows_len,'999,999,990') del_lf_rows_len,
'distinct_keys '||to_char(distinct_keys,'999,999,990') distinct_keys,
'most_repeated_key '||to_char(most_repeated_key,'999,999,990') most_repeated_key,
'btree_space '||to_char(btree_space,'999,999,990') btree_space,
'used_space '||to_char(used_space,'999,999,990') used_space,
'pct_used '||to_char(pct_used,'990') pct_used,
'rows_per_key '||to_char(rows_per_key,'999,999,990') rows_per_key,
'blks_gets_per_access '||to_char(blks_gets_per_access,'999,999,990') blks_gets_per_access,
'lf_rows '||to_char(lf_rows, '999,999,990')||' '||
'br_rows '||to_char(br_rows, '999,999,990') br_rows,
'lf_blks '||to_char(lf_blks, '999,999,990')||' '||
'br_blks '||to_char(br_blks, '999,999,990') br_blks,
'lf_rows_len '||to_char(lf_rows_len,'999,999,990')||' '||
'br_rows_len '||to_char(br_rows_len,'999,999,990') br_rows_len,
'lf_blk_len '||to_char(lf_blk_len, '999,999,990')||' '||
'br_blk_len '||to_char(br_blk_len, '999,999,990') br_blk_len
from
index_stats where name = 'YOURINDEX'
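One caveat to the query above: INDEX_STATS is populated only for the current session, and only after the index has been validated, so run this first ('YOURINDEX' stands for your actual index name, in uppercase):

```sql
-- Fills INDEX_STATS; without it the query above returns no rows
ANALYZE INDEX "YOURINDEX" VALIDATE STRUCTURE;
```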
bye
yk -
To find the size of the fact table and dimension table
Hi experts,
Can anyone please tell me: if I want to find the size of the fact table and of the dimension tables (to determine cardinality and line-item dimensions), do we first build statistics and then find the sizes via transaction DB02, or is there another method?
Thanks in advance
Hi,
Please go to transaction DB02 -> Space -> Tables and Indexes. Enter your table name or a pattern (like /BIC/F* for getting all the fact tables). This will give you the sizes of all the tables.
Also, if you want a list like the TOP 30 fact tables and dimension tables, please use transaction ST14; this will give the desired output with all the required details.
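If you prefer SQL over the DB02 screens, something like this works on Oracle (a sketch; it assumes access to the DBA views and the standard /BIC/F* fact table naming):

```sql
-- Largest fact tables by segment size, in MB
SELECT segment_name,
       ROUND(SUM(bytes)/1024/1024) AS size_mb
FROM   dba_segments
WHERE  segment_name LIKE '/BIC/F%'
AND    segment_type = 'TABLE'
GROUP  BY segment_name
ORDER  BY size_mb DESC;
```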
-Vikram -
Hi Experts
I am working on an implementation project. There are already some cubes here: a sales, a financials, and a material movements cube.
The sales and financials cubes are customer-defined, meaning they start with 'Z'.
My doubt is: how do I find the fact table and cube sizes, and how do I calculate the fact table and cube size for customer-defined and SAP-defined cubes in our SAP BI 7.0 system?
Regards
SKBABU
Hello,
Check this document:
[http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0427b8a-7fe5-2c10-f1be-a8be71fa2c06?QuickLink=index&overridelayout=true]
Regards,
Jorge Diogo -
Index size greater than table size
Hi all,
We are running BI7.0 in our environment.
One of the tables has an index size much greater than the table itself. The details are listed below:
Table Name: RSBERRORLOG
Total Table Size: 141,795,392 KB
Total Index Size: 299,300,576 KB
Index:
F5: Index Size / Allocated Size: 50%
Is there any reason that the indexes should grow larger than the table? If so, would reorganizing the indexes help, and can this be controlled?
Please let me know, as I am not very clear on database matters.
Thanks and Regards,
Raghavan
Hi Hari,
It's basically a degenerated index. You can follow the steps below:
1. Delete some entries from RSBERRORLOG.
BI database growing at 1 Gb per day while no data update on ECC
2. Reorganize this table with BRSPACE. The size of the table will then be much smaller. I do not remember whether this table has a LONG RAW field (in that case an export/import of this table would be required). ---Basis job
3. Delete and recreate Index on this table
You will gain a lot of space.
I assumed you are on Oracle.
More information on reorganization is in the thread "TABLE SPACE REORGANIZATION !! QUICK EXPERT INPUTS".
Regards
Anindya -
Urgent regarding E & F fact table
Hi all,
How and where do we find E and F partitioned fact tables having a size larger than 30?
It's very urgent.
Thanks & Regards,
Priya.
Hi,
You can find the table related to InfoCube by following the below mention naming convention.
/BI<C or digit>/<table code><InfoCube><dimension>
<C or digit>: C = customer-defined InfoCube
digit = SAP-defined InfoCube
<table code>: D = dimension table
E = compressed fact table
F = uncompressed fact table
<InfoCube>: the name of the InfoCube without leading digits (if any)
<dimension>: (only used for dimension tables)
P = package dimension
U = unit dimension
T = time dimension
0-9, A, B, C = user-defined dimension tables
And you can find info about the size of infocube:
Calculating size of CUBE & ODS
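If "larger than 30" refers to the number of partitions (an assumption on my part), the uncompressed F fact tables exceeding that can be listed on Oracle with a query like:

```sql
-- F fact tables with more than 30 partitions (needs access to the DBA views)
SELECT table_name, COUNT(*) AS partition_count
FROM   dba_tab_partitions
WHERE  table_name LIKE '/BIC/F%'
GROUP  BY table_name
HAVING COUNT(*) > 30;
```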
regards,
Pruthvi R -
Select count from large fact tables with bitmap indexes on them
Hi..
I have several large fact tables with bitmap indexes on them. When I do a select count(*) from these tables, I get a different result than when I select count grouped by one of the columns. I don't have any null values in these columns. Is there a patch or one-off fix that can rectify this?
Thx
You may have corruption in the index if the queries ...
Select /*+ full(t) */ count(*) from my_table t
... and ...
Select /*+ index_combine(t my_index) */ count(*) from my_table t;
... give different results.
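If the two counts do differ, the repair options mentioned below (drop-and-recreate, or mark unusable then rebuild) might look like this; my_table, my_index and column_one are the poster's placeholder names:

```sql
-- Option 1: take the suspect bitmap index offline, then rebuild it
ALTER INDEX my_index UNUSABLE;
ALTER INDEX my_index REBUILD;

-- Option 2: drop and recreate it from scratch
DROP INDEX my_index;
CREATE BITMAP INDEX my_index ON my_table (column_one);
```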
Look at metalink for patches, and in the meantime drop-and-recreate the indexes or make them unusable then rebuild them. -
Hi All,
We are trying to build a data warehouse. The data marts will be accessed by a Cognos reporting layer. In each data mart we have around 9 dimension tables and 1 fact table. For each month we will have around 21-25 million records in the fact table. Of the 9 dimensions, dim1 and dim2 have 21 million and 10 million records respectively. The remaining 7 dimensions are very small, with fewer than 10k records each.
In Cognos reports they join some dimension tables with the fact table to populate reports; these take around 5-6 minutes.
I have around 8 B-tree indexes on this fact table covering all likely column combinations. I believe this many indexes is not improving performance, so I decided to create an aggregate table with measures. But in Cognos there are some reports which show detailed information from the fact table, and those take around 8 minutes.
please advice as to what type indexes can be created on fact tables.
I read that we can create bitmap join indexes based on join conditions, but the documentation says they can include columns only from the dimension tables, not the fact table. Should the indexed columns be keys in the dimension tables?
I have observed that the fact table is around 1.5 GB, but each index is around 1.9-2 GB. I was surprised by that figure. Does it imply that an index scan plus table lookup would take more time than a full table scan, and hence the optimizer is not using the indexes?
Any help is greatly appreciated.
Thanks
Hari
What sort of queries are you running? Do you have an example (with a query plan)?
Are indexes even useful? Or are you accessing too much data to make indexes worthwhile?
Are you licensed to use partitioning? If so, are your fact tables partitioned? Are the queries doing partition pruning?
Are you using parallelism? If so, is parallel query actually being invoked?
If creating aggregate tables is a potentially useful strategy, you would want to use materialized views with query rewrite.
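A minimal sketch of the materialized-view-with-query-rewrite approach (fact_sales, dim_date and the column names are invented for illustration):

```sql
-- Pre-aggregate the fact table; ENABLE QUERY REWRITE lets the optimizer
-- transparently redirect matching detail queries to this summary
CREATE MATERIALIZED VIEW sales_summary_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT d.month_key,
       SUM(f.amount) AS total_amount,
       COUNT(*)      AS row_count
FROM   fact_sales f
JOIN   dim_date d ON d.date_key = f.date_key
GROUP  BY d.month_key;
-- Note: QUERY_REWRITE_ENABLED must be TRUE for rewrite to kick in
```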
Justin -
ROWNUM is indexed in the Fact table - How to optimize performace with this?
Hi,
I have a scenario where there is an index on the Rownum.
The main Fact table is partitioned based on the job number (Daily and monthly). As there can be multiple entries for a single jobID, the primary key is made up of the Job ID and the Rownum
This fact table in turn is joined with another fact table based on this job number and rownum. This second fact table is also partitioned on job ID.
I have few reference tables that are joined with the first fact table with btree index.
Though in a normal DW scenario we would use bitmap indexes, here we can't, as a lot of other applications access the data with DML queries, where bitmap indexes would be slow. So I am using the STAR_TRANSFORMATION hint to have the normal indexes used like bitmap indexes.
Up to this point it is fine. The problem is that when I simply do a count for a specific partition, joining a reference table and the fact table, it uses all the required indexes as bitmaps with very low cost, but it also uses the ROWNUM index, which has a very high cost.
I am relatively new to Oracle tuning and cannot work out exactly what it is doing. Could you please suggest how I can stop this ROWNUM index from slowing the query down? The index cannot be dropped. Is there a way, via a hint, to instruct the optimizer not to use this primary-key index?
Or, even using it, is there a way to make the performance faster?
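On the hint question: Oracle provides a NO_INDEX hint that tells the optimizer not to consider a specific index. A sketch (the index name follows the DDL below; whether the resulting plan is actually faster must be checked with the real execution plan, and the quoting of SAP-style names containing '~' should be tested):

```sql
-- Ask the optimizer to ignore the primary-key index for this query
SELECT /*+ NO_INDEX(f "FACT_TABLE~0") */ COUNT(*)
FROM   fact_table f
WHERE  f.jobid = '01110500';
```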
I will highly appreciate any help in this regard.
Regards
...Just sending the portion with the partitioning and primary-index info, as the entire script is too big.
CREATE TABLE FACT_TABLE
JOBID VARCHAR2(10 BYTE) DEFAULT '00000000' NOT NULL,
RECID VARCHAR2(18 BYTE) DEFAULT '000000000000000000' NOT NULL,
REP_DATE VARCHAR2(8 BYTE) DEFAULT '00000000' NOT NULL,
LOCATION VARCHAR2(4 BYTE) DEFAULT ' ' NOT NULL,
FUNCTION VARCHAR2(6 BYTE) DEFAULT ' ' NOT NULL,
AMT.....................................................................................
TABLESPACE PSAPPOD
PCTUSED 0
PCTFREE 10
INITRANS 11
MAXTRANS 255
STORAGE (
INITIAL 32248K
LOGGING
PARTITION BY RANGE (JOBID)
PARTITION FACT_TABLE_1110500 VALUES LESS THAN ('01110600')
LOGGING
NOCOMPRESS
TABLESPACE PSAPFACTTABLED
PCTFREE 10
INITRANS 11
MAXTRANS 255
STORAGE (
INITIAL 32248K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL DEFAULT
PARTITION FACT_TABLE_1191800 VALUES LESS THAN ('0119190000')
LOGGING
NOCOMPRESS
TABLESPACE PSAPFACTTABLED
PCTFREE 10
INITRANS 11
MAXTRANS 255
CREATE UNIQUE INDEX "FACT_TABLE~0" ON FACT_TABLE
(JOBID, RECID)
TABLESPACE PSAPFACT_TABLEI
INITRANS 2
MAXTRANS 255
LOCAL (
PARTITION FACT_TABLE_11105
LOGGING
NOCOMPRESS
TABLESPACE PSAPFACT_TABLEI
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL DEFAULT
...................................................... -
Report to find all table and index sizes
Hi all,
Good day..
Is there any report.sql or so to find out the sizes of all the tables and indexes in a database.
thanks,
baskar.l
1. To get table size:
What will be the table size if?
<or>
break on report
set line 200
COMPUTE SUM LABEL "Total Reclaimable Space" OF "KB Free Space" ON REPORT
column "Table Size" Format a20
column "Actual Data Size" Format a20
column "KB Free Space" Format "9,99,999.99"
select table_name,
round((blocks*8),2)||'kb' "Table size",
round((num_rows*avg_row_len/1024),2)||'kb' "Actual Data size",
pct_free,
round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) "KB Free Space"
from user_tables
where round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) > 0
order by round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) desc
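A simpler alternative sketch that reports totals straight from DBA_SEGMENTS (requires DBA privileges):

```sql
-- Total size per owner and segment type, largest first
SELECT owner, segment_type,
       ROUND(SUM(bytes)/1024/1024) AS size_mb
FROM   dba_segments
WHERE  segment_type IN ('TABLE', 'INDEX')
GROUP  BY owner, segment_type
ORDER  BY size_mb DESC;
```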
2. To get index size:
How to size the Index
Hth
Girish Sharma -
Index size keep growing while table size unchanged
Hi Guys,
I've got some simple, standard B-tree indexes that keep acquiring new extents (e.g. 4 MB per week) while the base table size has remained unchanged for years.
The base tables are working tables with DML operations and nearly the same number of records daily.
I've analysed the schema in the test environment.
Those indexes do not fulfil the criteria for rebuild, namely:
- deleted entries represent 20% or more of the current entries
- the index depth is more than 4 levels
May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
Grateful if someone can give me some advice.
Thanks a lot.
Best regards,
Timmy
Please read the documentation. COALESCE is available in 9.2.
Here is a demo for coalesce in 10G.
YAS@10G>truncate table t;
Table truncated.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 65536
TIND 65536
YAS@10G>insert into t select level from dual connect by level<=10000;
10000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 196608
TIND 196608
We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
YAS@10G>delete from t where mod(id,2)=0;
5000 rows deleted.
YAS@10G>commit;
Commit complete.
YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
5000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 196608
TIND 327680
Table size is the same but the index size got bigger.
YAS@10G>exec show_space('TIND',user,'INDEX');
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 6
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks ..................... 29
Total Blocks............................ 40
Total Bytes............................. 327,680
Total MBytes............................ 0
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 37,001
Last Used Block......................... 8
PL/SQL procedure successfully completed.
We have 29 full blocks. Let's coalesce.
YAS@10G>alter index tind coalesce;
Index altered.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 196608
TIND 327680
YAS@10G>exec show_space('TIND',user,'INDEX');
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 13
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks ..................... 22
Total Blocks............................ 40
Total Bytes............................. 327,680
Total MBytes............................ 0
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 37,001
Last Used Block......................... 8
PL/SQL procedure successfully completed.
The index size is still the same, but now we have 22 full and 13 empty blocks.
Insert another 5000 rows with higher key values.
YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
5000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 262144
TIND 327680
Now the index did not get bigger, because it could use the free blocks for the new rows. -
How can we know that the size of a dimension is more than that of the fact table?
How can we know that the size of a dimension is more than that of the fact table?
This was the question asked of me in an interview.
Hi Reddy,
This is a common way of estimating the size of a cube, dimension, or key figure:
Each key figure occupies 10 bytes of memory.
Each characteristic occupies 6 bytes of memory.
In an InfoCube the maximum number of fields is 256, of which 233 are key figures, 16 are dimensions and 6 are special characteristics.
So the maximum record size of a cube
= 233 (key figures) * 10 + 16 (dimensions) * 6 + 6 (special chars) * 6 = 2,462 bytes.
In general an InfoCube should not exceed 100 GB of data.
Hope this answers your question.
Regards,
Varun