Hierarchy "I" table keeps growing

We load four hierarchies. The problem is that I don't understand why, each time we reload the hierarchy, the table
/bio/iwbselmnt keeps growing by the number of records loaded.
We only have 303 rows in the hierarchy, but the table has 909 records. The only difference is the version: I see "L", "M", and "N".
What is the situation here, and how do I get rid of the other versions?
Thanks

Hi Richard,
If you go to the InfoObject maintenance for wbselmnt, you will find a check box on the "Hierarchy" tab indicating that the hierarchy is version dependent. Remove that and your problem should be resolved. But I think you might need to delete the hierarchy and its data first. Then create a new hierarchy and reload the data.
Bye
Dinesh
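In case it helps others hitting this thread: a hedged SQL sketch for seeing how the rows split across versions. The table name is taken from the post; the VERSION column name is an assumption based on the usual layout of BW hierarchy tables.

```sql
-- Hedged sketch: count rows per hierarchy version.
-- Table name from the post; the VERSION column is assumed from the
-- usual BW hierarchy-table layout.
SELECT version, COUNT(*) AS loaded_rows
FROM   "/bio/iwbselmnt"
GROUP BY version
ORDER BY version;
```

With three versions ("L", "M", "N") of 303 rows each, this would account for the 909 records.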

Similar Messages

  • Index size keep growing while table size unchanged

    Hi Guys,
    I've got some simple, standard b-tree indexes that keep acquiring new extents (e.g. 4MB per week) while the base table size has stayed unchanged for years.
    The base tables are working tables with DML operations and nearly the same number of records daily.
    I've analysed the schema in the test environment.
    Those indexes do not fulfil the usual criteria for a rebuild, i.e.:
    - deleted entries represent 20% or more of the current entries
    - the index depth is more than 4 levels
    May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
    Grateful if someone can give me some advice.
    Thanks a lot.
    Best regards,
    Timmy

    Please read the documentation. COALESCE is available in 9.2.
    Here is a demo for coalesce in 10G.
    YAS@10G>truncate table t;
    Table truncated.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                         65536
    TIND                      65536
    YAS@10G>insert into t select level from dual connect by level<=10000;
    10000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     196608
    We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
    YAS@10G>delete from t where mod(id,2)=0;
    5000 rows deleted.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    Table size is the same but the index size got bigger.
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................               6
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              29
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    We have 29 full blocks. Let's coalesce.
    YAS@10G>alter index tind coalesce;
    Index altered.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................              13
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              22
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    The index size is still the same but now we have 22 full and 13 empty blocks.
    Insert another 5000 rows with higher key values.
    YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        262144
    TIND                     327680
    Now the index did not get bigger because it could use the free blocks for the new rows.
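    As a side note, the two rebuild criteria quoted in the question can be measured directly. A hedged sketch, using the index name from the demo above (note that VALIDATE STRUCTURE locks the table while it runs, so use it with care on a busy system):

    ```sql
    -- Hedged sketch: measure index depth and the deleted-entry ratio.
    -- VALIDATE STRUCTURE locks the table for the duration of the analyze.
    ANALYZE INDEX tind VALIDATE STRUCTURE;

    SELECT name,
           height,                                    -- index depth in levels
           del_lf_rows,
           lf_rows,
           ROUND(del_lf_rows * 100 / NULLIF(lf_rows, 0), 1) AS pct_deleted
    FROM   index_stats;                               -- populated per session
    ```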

  • Can't create table that grows using the button object

    I'm a beginner, and this seems like a simple issue. I appreciate your help in advance. When I try to do this, the button fails to add a row.
    So, I printed out the Help section entitled "Create a table that grows using the button object" and started from scratch with a simple table as the only object on the form, then followed the instructions to the letter.
    When I go to preview and click the button, a row is not added.
    Can you suggest something?

    Try this:
    Select the body row of the table that you want to repeat from the hierarchy palette, then on the Object tab of the Properties palette select the Binding tab. You need to check "Repeat Row for Each Data Item".
    I hope this helps
    Murat
    www.muratkuru.com.tr

  • Oracle Text Context index keeps growing. Optimize seems not to be working

    Hi,
    In my application I needed to search through many varchar columns from different tables.
    So I created a materialized view in which I concatenate those columns; since they exceed 4000 characters, I merged them by concatenating the columns as TO_CLOB(column1) || TO_CLOB(column2) ... || TO_CLOB(columnN).
    The query is complex, so the refresh is a complete refresh, on demand. We refresh the view every 2 minutes.
    The CONTEXT index is created with the sync on commit parameter, so the index is also synchronized every two minutes.
    But when we run the optimize index it does not defragment the index, so it keeps growing.
    Any idea?
    Thanks, and sorry for my poor English.
    Edited by: detryo on 14-mar-2011 11:06

    What are you using to determine that the index is fragmented? Can you post a reproducible test case? Please see my test of what you described below, showing that the optimization does defragment the index.
    SCOTT@orcl_11gR2> -- table:
    SCOTT@orcl_11gR2> create table test_tab
      2    (col1  varchar2 (10),
      3       col2  varchar2 (10))
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- materialized view:
    SCOTT@orcl_11gR2> create materialized view test_mv3
      2  as
      3  select to_clob (col1) || to_clob (col2) clob_col
      4  from   test_tab
      5  /
    Materialized view created.
    SCOTT@orcl_11gR2> -- index with sync(on commit):
    SCOTT@orcl_11gR2> create index test_idx
      2  on test_mv3 (clob_col)
      3  indextype is ctxsys.context
      4  parameters ('sync (on commit)')
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- inserts, commits, refreshes:
    SCOTT@orcl_11gR2> insert into test_tab values ('a', 'b')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> commit
      2  /
    Commit complete.
    SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> insert into test_tab values ('c a', 'b d')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> commit
      2  /
    Commit complete.
    SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- query works:
    SCOTT@orcl_11gR2> select * from test_mv3
      2  where  contains (clob_col, 'ab') > 0
      3  /
    CLOB_COL
    ab
    c ab d
    2 rows selected.
    SCOTT@orcl_11gR2> -- fragmented index:
    SCOTT@orcl_11gR2> column token_text format a15
    SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
      2  from   dr$test_idx$i
      3  /
    TOKEN_TEXT      TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
    AB                        1          1           1
    AB                        2          3           2
    C                         3          3           1
    3 rows selected.
    SCOTT@orcl_11gR2> -- optimization:
    SCOTT@orcl_11gR2> exec ctx_ddl.optimize_index ('TEST_IDX', 'REBUILD')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- defragmented index after optimization:
    SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
      2  from   dr$test_idx$i
      3  /
    TOKEN_TEXT      TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
    AB                        2          3           2
    C                         3          3           1
    2 rows selected.
    SCOTT@orcl_11gR2>
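    On the reply's opening question (how the fragmentation is being measured): one way to quantify it before and after the optimize call is CTX_REPORT.INDEX_STATS. A hedged sketch, reusing the index name from the demo above (the report can be large, so it is written into a CLOB first):

    ```sql
    -- Hedged sketch: print Oracle Text index statistics, including the
    -- estimated fragmentation, for the demo index above.
    SET SERVEROUTPUT ON SIZE UNLIMITED
    DECLARE
      report CLOB := NULL;
    BEGIN
      ctx_report.index_stats('TEST_IDX', report);
      dbms_output.put_line(report);   -- assumes the report fits put_line's limit
      dbms_lob.freetemporary(report);
    END;
    /
    ```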

  • Time machine keeps growing in size or backing up a size far greater than it

    Apologies if this is covered elsewhere, but after hours of searching I still haven't found a resolution which actually fixes my problem.
    When my Time Machine tries to back up, the size of the backup just keeps growing and growing - a problem which has been reported several times, but none of the solutions fix it for me. I've completely reset Time Machine (disconnected the external hard drive, deleted the preferences from ~/Library/Preferences, then set it up again with my exclusions) and the problem just reappeared.
    I've gone through this routine several times and in most cases had no joy. On one occasion I thought the problem had been resolved, as it did the initial backup and then a couple of hourly backups, but then it suddenly returned a message saying it had insufficient space (trying to back up 470GB but the disk only has 300GB available), which was just ridiculous as the full backup size was under 50GB.
    I've verified the disk on several occasions and it always comes up clean. In a fit of desperation I've even tried reformatting the external drive, with no improvement.
    I'm at the point of deleting Time Machine from my laptop and going back to the old Backup program. Can anyone offer anything else I could try?

    Hi, and welcome to the forums.
    Have you Verified your internal HD (and Repaired any externals that are also being backed-up)?
    If so, it's probably something damaged or corrupted in your installation of OSX. I'd suggest downloading and installing the 10.6.4 "combo" update. That's the cleverly-named combination of all the updates to Snow Leopard since it was first released, so installing it should fix anything that's gone wrong since then, such as with one of the normal "point" updates. Info and download available at: http://support.apple.com/kb/DL1048 Be sure to do a +Repair Permissions+ via Disk Utility (in your Applications/Utilities folder) afterwards.
    If that doesn't help, reinstall OSX from your Snow Leopard Install disc (that won't affect anything else), then apply the "combo" again.

  • Sub Total value is empty in  parent child hierarchy pivot table

    Hi All,
    I am using obiee 11.1.1.6.2 in Test environment. Is it a known issue/bug for 11.1.1.6.2 to show empty/blank values for sub total when using parent child hierarchy pivot table. The sub total for parent value is showing but sub total for child value is coming blank. However, in 11.1.1.5.0, we do not have any issue with this.
    Is it a known bug in obiee 11.1.1.6.2?
    Thanks,
    Sushil

    Yes it is a known bug...
    Thanks.

  • Sparsebundle keeps growing rapidly

    I have 2 Macs with 500 GB drives backing up to a 2TB Time Capsule, with about 250 GB of used space on each.  The sparse bundle for one keeps growing until it hogs all the space.  One sparsebundle now occupies 1.5TB, will not allow further backups, and has deleted all but the most recent backup.  The other one stays at about 370 GB, but still contains several months of backups.  Why does one keep growing, and what can I do to correct this problem?
    Thanks.
    Ray R

    It sounds like more than the internal HD is getting backed-up.  Are there other drives in or connected to the "rogue" Mac?  
    See what it shows for Estimated size of full backup under the exclusions box in Time Machine Preferences > Options.  If that's larger than the 250 GB or so that's on the internal HD, exclude the other drive(s) you don't want backed-up.
    It's also possible that sparse bundle is damaged.   A clue may be lurking in your logs.  Use the widget in #A1 of Time Machine - Troubleshooting to display the backup messages from your logs.  Locate the most recent backup from that Mac, then copy and post all the messages here.
    Either way, your best bet might be to delete the sparse bundle for that Mac, per #Q5 in Using Time Machine with a Time Capsule (via Ethernet if at all possible, as that will take quite a while).
    The next backup of that Mac will, of course, be a full one (so do it via Ethernet, too).   Then see what the next one does.  If it's a full backup, see #D7 in Time Machine - Troubleshooting.

  • Oracle 11g flash recovery keeps growing

    The Oracle 11g flash recovery area (the E:\ partition on a Windows 2003 server) keeps growing. It has already used 200GB of disk space and only 46GB is left in that partition. Can we tell how often Oracle 11g deletes the old backups?
    Thanks.
    Andy

    andychow wrote:
    SQL> set linesize 1000
    SQL> select substr(name,1,30) name,
    2         space_limit,
    3         space_used,
    4         space_reclaimable,
    5         number_of_files
    6  from V$RECOVERY_FILE_DEST
    7  ;
    NAME                           SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
    E:\flash_recovery_area_11       1.0486E+11 1.6752E+10                 0               5
    SQL> select *
    2  from V$FLASH_RECOVERY_AREA_USAGE
    3  ;
    FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
    CONTROL FILE                          0                         0               0
    REDO LOG                              0                         0               0
    ARCHIVED LOG                        .09                         0               2
    BACKUP PIECE                      15.88                         0               3
    IMAGE COPY                            0                         0               0
    FLASHBACK LOG                         0                         0               0
    FOREIGN ARCHIVED LOG                  0                         0               0
    7 rows selected.
    SQL>
    It shows only 5 files in E:\flash_recovery_area_11. Can you verify?
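    On the underlying question (when does Oracle delete old backups): files in the flash recovery area only become reclaimable according to the RMAN retention policy, and SPACE_RECLAIMABLE = 0 in the output above suggests nothing is obsolete yet. A hedged RMAN sketch; the 7-day window is an example value, not a recommendation:

    ```sql
    -- Hedged sketch (run at the RMAN prompt, not SQL*Plus): keep what is
    -- needed to recover to any point in the last 7 days (example window).
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

    -- Report, then remove, backups the policy no longer needs.
    REPORT OBSOLETE;
    DELETE OBSOLETE;
    ```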

  • Which table keep the relationship of AO and Usage

    Hello Guru:
    Which table keeps the relationship between the Architecture view and the Usage view?
    For example, this architecture object was used to create a business entity.
    Please advise, and thanks in advance.

    Hi,
    it's table VIBDOBJREL that keeps the relationship.
    Regards, Franz

  • Mod_wl_ohs_0202.log keep growing

    Grid Control 11.1.0.1.0 installed on Redhat 5.2. The repository database is Oracle 11.2.0.2 on the same Linux box.
    File /u01/app/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1/mod_wl_ohs_0202.log keeps growing: 6.5 GB after 6 months. I renamed the file and created an empty mod_wl_ohs_0202.log, but the old file still gets written to. Not sure if I should remove the file.
    What is the best practice for managing this file so it does not grow too big?
    Thanks

    please check article-ID
    11G Grid Control Performance: Webtier log - mod_wl_ohs.log in the OMS Home is very Large in Size and not Rotated [ID 1271676.1]
    in MOS...
    HTH
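    A note on why the rename did not help: OHS keeps the log's file handle open, so writes keep following the old inode. Until rotation is set up per the MOS note, a hedged logrotate sketch (path from the post; the retention values are examples only):

    ```
    # Hedged sketch: logrotate stanza for the OHS log named in the post.
    # copytruncate truncates the file in place, because OHS holds the
    # handle open (which is why renaming it did not stop the writes).
    /u01/app/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1/mod_wl_ohs_0202.log {
        weekly
        rotate 8
        compress
        copytruncate
    }
    ```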

  • Mac os x lion installation give negative times and keep growing?

    Why does my Mac OS X Lion installation show negative times that keep growing?
    Please help; I've already tried to install more than 10 times and waited until -2 hours, but the problem still persists.

    On the bottom of your MBP should be a serial number.  Enter it here and post back the model/year information:
    https://selfsolve.apple.com/agreementWarrantyDynamic.do
    Do you already have Lion installed, or are you trying to install it to replace another version of OS X?  If you are replacing, which version is currently installed?
    Ciao.

  • I have a problem with my iPad, the "other" part of the capacity summary keeps growing like ****

    Dear all: I have a problem with my iPad, the "other" part of the capacity summary keeps growing like ****, every time I sync the iPad gets bigger and bigger. What is this "other" and what do I do to keep it from getting bigger? Thanks to all!

    Marcos,
    I did not try your trick; I didn't see your post until after I visited the Apple store.  The tech at the Genius Bar did not know why my "Other" memory had grown so huge.  It was over 24GB, which only left me with 1GB free on my 32GB iPad.  I had been loading and unloading videos yesterday through iTunes to try various compression settings.  I did not notice the "Other" category growing until I tried to load a 3GB movie file and it told me I was out of memory.
    I probably loaded and unloaded 30 different times, trying different compression settings.  In the process of doing this, it apparently loaded up the "Other" stuff.  To me it seems like the recycle bin in Windows, but no one knows how to free it up. 
    So here is what he did, and it worked.  He reset the iPad: press and hold both the Sleep/Wake button and the Home button for at least ten seconds, until the Apple logo appears.  Then he hooked it up to iTunes on his computer.  Fortunately I have everything backed up on my home computer.  When my iPad showed up in iTunes at the store, it showed the 24GB of "Other" momentarily, then it reduced to about .7GB, giving me 24.5GB free.  Resetting it seemed to do the trick, but he still couldn't tell me how this is happening or how to prevent it. 
    I think I'll try to call someone at AppleCare to get an explanation, hopefully.

  • SRRELROLES table entries growing

    Hi Experts,
    Can you please help me understand the exact use of table SRRELROLES.
    What I know is that this table is used to store the link information between systems connected to SAP CRM.
    I need help with the following:
    1. How does the table get its entries? Is it that whenever a message flows (to and fro) between SAP CRM and another system (say SAP ECC), a BDoc flows, and for each BDoc flow an entry is created in SRRELROLES?
    2. Also, can the old entries in this table be deleted? The table is growing like anything in our SAP CRM system.
    I have found some SAP Notes which suggest how to delete the entries, but the effect of the deletion is explained nowhere.
    I have checked many blogs but I am not able to find the exact reason for this.
    Please help me in this regard.
    Thanks in advance.
    regards,
    Vicky

    Hello Vicky,
    Please read note 505608 to find out what can be done to avoid the rapid growth of table SRRELROLES.
    The table SRRELROLES can be archived by running the reports RSRLDREL, RSRLDREL2 and RSRLDREL3.
    The links are not archived as a separate object; they are archived with the respective business object. You can use report RSRLDREL2 from note 853035 for mass deletion. For report RSRLDREL, please leave the business object empty and use the role as the selection criterion, which most of the entries in table SRRELROLES use for the document flow.
    thanks
    Willie

  • Proxy 4 - Cache size keeps growing

    I may have a wrong cache setting somewhere, but I can't find it. I am running Proxy 4.0.2 (for windows).
    Under Cache settings, I have "Cache Size" set to 800MB. Under "Cache Capacity" I have it set to 1GB (500 MB-2GB).
    The problem is my physical cache size on the hard drive keeps growing and growing and is starting to fill the partition on the hard drive. At last count, the "cache" directory on the hard drive which holds the cache files is now using 5.7GB of space and still growing.
    Am I misunderstanding something? I thought the maximum physical size would be a lot lower, and that it would stop at a given size. But the cache directory on the hard drive is now close to 6GB and still growing day by day. When is it going to stop growing, or how do I stop it and put a cap on the physical size it can grow to on the hard drive?
    Thanks

    Until 4.03 is out, you can use this script..
    Warning: experimental, run this on a copy of cache first to make sure that it works as you want it.
    The first argument is the size in MB's that you want to remove.
    I assume your cachedir is "./cache"; if it is not, change the variable $cachedir to the correct value.
    ==============cut-here==========
    #!/bin/perl
    use strict;
    use File::stat;
    my $cachedir = "./cache";
    my $gc_size; #bytes
    my $verbose = 0;
    sub gc_file {
        my $file = shift;
        my $sb = stat($file);
        $gc_size -= $sb->size;
        unlink $file;
        print "$gc_size more after $file\n" if $verbose;
        exit 0 if $gc_size < 0;
    }
    sub main {
        my $size = shift;
        $gc_size = $size * 1024 * 1024; # in MB's
        opendir(DIR, $cachedir) || die "can't opendir $cachedir: $!";
        my @sects = grep {/^s[0-9]\.[0-9]{2}$/} readdir(DIR);
        closedir DIR;
        foreach my $sect (@sects) {
            chomp $sect;
            opendir (CDIR, "$cachedir/$sect") || die "can't opendir $cachedir/$sect: $!";
            my @ssects = grep {/^[A-F0-9]{2}$/} readdir(CDIR);
            closedir CDIR;
            foreach my $ssect (@ssects) {
                chomp $ssect;
                opendir (SCDIR, "$cachedir/$sect/$ssect") || die "can't opendir $cachedir/$sect/$ssect: $!";
                my @files = grep {/^[A-Z0-9]{16}$/} readdir(SCDIR);
                closedir SCDIR;
                foreach my $file (@files) {
                    gc_file "$cachedir/$sect/$ssect/$file";
                }
            }
        }
    }
    main $ARGV[0] if $ARGV[0];
    =============cut-end==========
    On your second problem, the easiest way to recover a corrupted partition is to list out the sections in that partition, and delete those sections that seem like odd ones
    eg:
    $ls ./cache
    s4.00 s4.01 s4.02 s4.03 s4.04 s4.05 s4.06 s4.07 s4.08 s4.09 s4.10 s4.11 s4.12 s4.13 s4.14 s4.15 s0.00
    Here the s0.00 is the odd one out, so remove the s0.00 section. Also keep an eye on the relative sizes of the sections: if the section to be removed is larger than the rest of the sections combined, you might not want to remove it.
    WARNING: anything you do, do on a copy

  • Amount being backed up by Time Machine keeps growing during full backup

    After some problems with Time Machine backup, I removed a bunch of stuff from my internal hard drive, reformatted my external HD used for backups, and started a new full backup.  Time Machine has been running for about 24 hours and gives me a status of Backing up: xx GB of yy GB.  Both xx and yy keep growing (apace), so I don't believe xx will ever equal yy. 
    Too cryptic?  Here are real numbers.  My internal disk being backed up contains 469.39 GB.  The backup status now says Backing up: 685.78GB of 754.42 GB.  Both numbers just keep growing.  What's going on here?  I'm happy to let Time Machine run overnight (again) but, really, this can't be normal.

    Start up in Recovery mode, launch Disk Utility, select the startup volume ("Macintosh HD," unless you gave it a different name), and run Repair Disk (not Repair Permissions.) If any problems are found, repeat. Then restart as usual.
    If you don't already have a current backup, you must back up your data before you take the above step. You may be able to back up, even if the system isn't fully functional. Ask if you need guidance.
    Directory corruption in a MacOS journaled volume is always the result of a drive malfunction. It's not caused by power failures, system crashes, or anything else. You might choose to tolerate such a malfunction once in the life of a drive. If it's repeated, the drive must be replaced, or there is some other hardware fault that needs to be corrected. Ignoring repeated directory errors will result in data loss.
