Table growing space.

Hi Gurus,
We are having an issue in the Production system. Table "/BIC/B0000404000" is growing very fast.
In one month the table size grew to 15 GB; normally it should be less than 1 GB.
To reduce the size we have deleted the data. How can we see why the size of the data increased?
Let me know if you have any suggestions.
Regards,
Vikash

Hi Vikash,
Please check the frequency of the data uploads and the number of records; in the last month there must have been an increase in the number of records. If it is a full upload scheduled daily, the PSA table size increases very fast. For a daily scheduled upload, check whether the PSA data is still required after the load; if not, its deletion should be done weekly.
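A hedged sketch for checking this on the database side (it assumes the BW system runs on Oracle and that you query as, or prefix, the SAP schema owner; the table name is the one from your post):
-- current segment size and extent count of the PSA table
SELECT segment_name, bytes/1024/1024 AS size_mb, extents
FROM   dba_segments
WHERE  segment_name = '/BIC/B0000404000';
-- number of records currently stored in the PSA table
SELECT COUNT(*) FROM "/BIC/B0000404000";
Comparing the record count with the number of requests loaded per day should show whether daily full uploads are what is inflating the table.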
Hope it helps.
Have a nice day,
Anand mehrotra.

Similar Messages

  • Writing to BLOB gets slow as table grows

    Hi,
    We have a table with a BLOB column. When the table is empty it takes a few seconds to insert 3k rows. As the table grows to 500k rows, it takes almost 5 minutes to insert the same number of rows, and it keeps getting slower. At first I thought it was because of indexes and constraints, so I disabled all foreign key constraints (the primary keys and indexes have to stay), but it doesn't make much difference. Other tables with indexes, constraints, and even more rows, but without a BLOB column, don't show such significant performance degradation. The BLOB is stored out of line in a separate tablespace with an 8k block size and chunk size.
    Do you have any idea what may be causing the slow inserts?
    Thanks.

    If the tablespace is ASSM (segment space management AUTO), there are known issues with LOB performance. If you can, move the table and its LOB to an MSSM tablespace and re-test your inserts.
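    A minimal sketch of such a move (the table, column, and tablespace names here are placeholders, not taken from your post):

    -- move the table and relocate its out-of-line LOB segment to an MSSM tablespace;
    -- dependent indexes become UNUSABLE and must be rebuilt afterwards
    ALTER TABLE doc_store MOVE TABLESPACE data_mssm
      LOB (doc_blob) STORE AS (TABLESPACE lob_mssm);
    ALTER INDEX doc_store_pk REBUILD;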
    BTW, what version and patch level are you running?

  • How to handle a table name with a space in GoldenGate 11.1.1.1.2 on SQL Server

    Problem Description: How do I handle a table name that contains a space in SQL Server
    when I want to use it with Oracle GoldenGate 11g (v 11.1.1.1.2)?
    Thank you

    I use:
    Source: SQL Server 2000
    Target: Oracle
    Source config:
    table dbo."New Table";
    Target config:
    map dbo."New Table", target xxx."NEW_TABLE";
    When I check the Extract process after inserting data, I find "No active extraction maps".
    Thank you, N K

  • Table grows to 6 GB with only 6k records after DELETE (ORA-01653)

    Hello,
    I have a Table that i delete data from using
    DELETE FROM DJ_20255_OUTPUT a where trunc(a.LOADED_DATE) <trunc(sysdate -7);
    COMMIT;
    The issue I have is that when I want to repopulate the table I get the error ORA-01653: unable to extend table.
    The table grows to over 6 GB, but if I truncate the table instead (in both cases there is no data left after either action), it only grows to 0.8 MB once repopulated.
    So with TRUNCATE the table size is 0.8 MB, and if I use DELETE FROM ... the table grows to 6 GB.
    The repopulation of the table uses multiple insert statements, committing after each one.
    Is this a bug, or is there an action I should perform once I have deleted the data?
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit

    Is this an Index Organized table ? (select IOT_TYPE from user_tables where table_name = 'DJ_20255_OUTPUT' ;)
    Are you saying that you use this sequence :
    DELETE .... in one single call to delete all the rows
    INSERT ... in multiple calls with COMMIT after each row
    Hemant K Chitale
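    If the extra 6 GB is simply the high-water mark left behind by the DELETE, a hedged sketch for reclaiming it on 10.2, using the table name from the post (SHRINK SPACE needs an ASSM tablespace):

    -- shrink the segment in place; indexes are maintained automatically
    ALTER TABLE dj_20255_output ENABLE ROW MOVEMENT;
    ALTER TABLE dj_20255_output SHRINK SPACE;
    -- alternative: rebuild the table below a new high-water mark (indexes must then be rebuilt)
    ALTER TABLE dj_20255_output MOVE;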

  • MS Access Tables With Spaces

    Hi Everyone,
    I am trying to write an application which requires connection to an old MS Access database. Unfortunately some of the tables have spaces in the names. I can query all tables except those with spaces in the table names. I searched for how to quote tables with spaces in the FROM clause on the net and have tried the following:
    // Angled quotes
    String sql = "Select * From `Function Groups`";
    // double quotation marks
    String sql = "Select * From \"Function Groups\"";
    // single quotation marks
    String sql = "Select * from 'Function Groups'";
    // square brackets
    String sql = "Select * from [Function Groups]";
    I have set setEscapeProcessing to be both TRUE and FALSE in various trials as follows:
    stat.setEscapeProcessing(true);
    All of the above throw an exception from the Access driver stating I have an error in the FROM clause (yes, there is a table called Function[space]Groups in the database). I am using the free MS Access driver which comes with Vista to connect to the read-only database via JDBC.
    I cannot just exchange spaces for underscores for other reasons.
    Does anyone know the special character used by the Access driver to quote table names with spaces? Square brackets work with field names but not with table names :(
    All help appreciated,
    Tom

    Doh! I had missed out a keyword; it was not a quoting problem at all. For interest, angled quotes (`) do work around the spaces in the table names.

  • Which tables store disk space information?

    Hi,
    In DB02 you can see the disk space evolution of the BW database by day, week, and month.
    I want to create queries on the same data.
    For this I need to create a cube about disk space.
    What are the tables where the disk space values are stored?
    If I know the tables I will then create a DataSource on them.
    Thanks
    Sebastien

    Hi,
    Ask your database admin!
    => I think you can't create a DataSource for this directly; with native SQL, perhaps (see http://help.sap.com/saphelp_470/helpdata/en/fc/eb3b8b358411d1829f0000e829fbfe/content.htm).
    On the Internet I found, for example: http://database.ittoolbox.com/groups/technical-functional/db2-l/how-to-calculate-the-table-size-in-db2-1691545
    => You need the system tables syscat.tables, syscat.indexes and syscat.tablespaces.
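    A hedged sketch for DB2 (sizes are approximate and assume current RUNSTATS statistics):

    -- rough table sizes: pages used times the page size of the tablespace
    SELECT t.tabschema, t.tabname,
           (t.npages * ts.pagesize) / 1024 / 1024 AS approx_mb
    FROM   syscat.tables t
    JOIN   syscat.tablespaces ts ON ts.tbspace = t.tbspace
    WHERE  t.type = 'T'
    ORDER BY approx_mb DESC;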
    Sven

  • Disadvantages of letting FACTFINANCE table grow bigger day by day?

    Hi All,
    Please share your inputs on the disadvantages and advantages of letting the FACTFINANCE table grow bigger day by day.
    As far as I know there's no theoretical limit to the number of records.
    Is there a solution or process for keeping only actively used data records in FACTFINANCE, and a smart way of dealing with very rarely used data (historical data)?
    Please suggest/comment; it will greatly help us!
    Regards,
    Rajesh Muppala.

    The main disadvantage of a huge FACTFINANCE table will initially be the processing time when changing a dimension hierarchy. I have heard of processing taking 8 hours when a large fact table is involved. I am not sure of a theoretical limit; I suspect the limit will come more from the operating system than from SQL Server.
    Of course, performance will also slow down when the fact table gets large. So archiving off historic (and unused) data is a good idea.
    To create a system where only actively used records are kept would require a custom procedure where you'd need to analyse the queries coming from the client side. It seems a very complex method that could end up with errors and data corruption.
    I personally would rather make a business decision about it and archive a few years off into a separate application set. In an ideal world it would be on a separate server as well, so the production system is not affected in any way.
    Another option would be to create custom partitions in the finance cube. This would mean the same fact table is used but with different partitions, for example one per year.
    Tim

  • OAI_AGENT_ERROR table growing rapidly

    Hi
    I have a problem with my oai_agent_error table growing at a rate of 1000 records every few minutes. My DB adapter reports the following error before dropping the messages:
    oracle.oai.agent.server.transform.database.DBTransformationException: PLSQLTransformation.transform: Transformation failed
    at oracle.oai.agent.server.transform.database.PLSQLTransformation.transform(PLSQLTransformation.java:359)
    at oracle.oai.agent.server.transform.BuiltInTransformation.transform(BuiltInTransformation.java:293)
    at oracle.oai.agent.server.transform.MessageTransformer.processStackFrame(MessageTransformer.java:849)
    at oracle.oai.agent.server.transform.MessageTransformer.processTransformMetadatalet(MessageTransformer.java:489)
    at oracle.oai.agent.server.transform.MessageTransformer.transformMessage(MessageTransformer.java:276)
    at oracle.oai.agent.server.InMessageTransformer.processObject(InMessageTransformer.java:87)
    at oracle.oai.agent.common.QueueAgentComponent.run(QueueAgentComponent.java:110)
    at java.lang.Thread.run(Thread.java:534)
    We have a custom PL/SQL package that does the AV to CV transformations for us, and it is valid at the moment.
    I also see that it's not all messages that are getting dropped, as some messages have reached my spoke systems properly transformed.
    I would really like to know how to debug this further to stop these messages from being dropped.
    Any help is much appreciated!
    Thank you

    Hi,
    "oracle.oai.agent.server.transform.database.DBTransformationException: PLSQLTransformation.transform: Transformation failed" in the log file means this error occurred at the agent/application level, i.e. using the metadata, the interconnect is not able to convert the input data into the application view.
    The error is "transformation failed", so the reason can be:
    --> any unhandled exceptions
    --> a mismatch in the data types of the fields/parameters used
    --> a mismatch in the number of fields/elements in the input data
    "Transformation failed" indicates that the interconnect is not able to transform the input data into the format specified in the application view or common view.
    As you mentioned, this is happening only with a few records, so:
    --> check whether the failed records are negative test cases
    --> check whether the failed records have all the values expected as mandatory for the transformation
    --> check whether the failed records have data with the expected data types (e.g. field a is defined as a number but field a in the record has the value 'abc' instead of 123)
    Hope this information helps you.

  • Dropped a table but space not freed on drive

    Hi All,
    I had a table that was storing 244 GB of data; I truncated it and then dropped it in SSMS.
    However, this has not freed up the space on the drive.
    I did try
    dbcc shrinkfile(fileid)
    Please Help.
    Thanks
    Rebekah

    Hi, adding this:
    When you drop or rebuild large indexes, or drop or truncate large tables, the Database Engine defers the actual page deallocations, and their associated locks, until after the transaction commits. This implementation supports both autocommit and explicit transactions in a multiuser environment, and applies to large tables and indexes that use more than 128 extents.
    The Database Engine avoids the allocation locks that are required to drop large objects by splitting the process into two separate phases: logical and physical.
    In the logical phase, the existing allocation units used by the table or index are marked for deallocation and locked until the transaction commits. With a clustered index that is dropped, the data rows are copied and then moved to new allocation units created to store either a rebuilt clustered index or a heap. (In the case of an index rebuild, the data rows are also sorted.) When there is a rollback, only this logical phase needs to be rolled back.
    The physical phase occurs after the transaction commits. The allocation units marked for deallocation are physically dropped in batches. These drops are handled inside short transactions that occur in the background, and do not require lots of locks.
    Because the physical phase occurs after a transaction commits, the storage space of the table or index might still appear as unavailable. If this space is required for the database to grow before the physical phase is completed, the Database Engine tries to recover space from allocation units marked for deallocation. To find the space currently used by these allocation units, use the sys.allocation_units catalog view.
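    A hedged T-SQL sketch of such a check (it simply sums the pages tracked per allocation-unit type in the current database; units awaiting the deferred physical drop are still counted here):

    SELECT au.type_desc,
           SUM(au.total_pages) * 8 / 1024 AS total_mb,
           SUM(au.used_pages)  * 8 / 1024 AS used_mb
    FROM   sys.allocation_units AS au
    GROUP BY au.type_desc;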
    Deferred drop operations do not release allocated space immediately, and they introduce additional overhead costs in the Database Engine. Therefore, tables and indexes that use 128 or fewer extents are dropped, truncated, and rebuilt just as in SQL Server 2000, meaning both the logical and physical phases occur before the transaction commits.
    Link:
    http://technet.microsoft.com/en-us/library/ms177495%28v=sql.105%29.aspx
    You need to wait until these internal operations complete, and then shrink the file in small chunks.
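    A hedged example of chunked shrinking (the logical file name and target sizes are placeholders, not from your post):

    -- shrink the data file a few GB at a time instead of in one huge operation;
    -- repeat with progressively smaller targets until the desired size is reached
    DBCC SHRINKFILE (N'MyDatabase_Data', 240000);  -- target size in MB
    DBCC SHRINKFILE (N'MyDatabase_Data', 230000);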
    PS: It is a well-known fact that shrinking causes logical fragmentation, so you must rebuild indexes after the shrink is done.

  • DSVASRESULTSGEN/EWA table growth?

    Dear Expert,
    I noticed consistent growth of the segment DSVASRESULTSGEN in our Solution Manager system. It has been growing by more than 1.0 GB every month.
    As I understand it, the DSVAS* tables are related to service sessions, which might be entries generated by EWA sessions, so this is probably related to EWA reports. How can I do housekeeping on this to free up some of the database space?
    It grows a lot whenever a new EWA report is set up.
    If you have the same experience, please share and guide me.
    thank you
    kelly

    Hi Lay,
    SAP EarlyWatch Alerts and Service Level Reports can be stored on a separate file server, and the session data can then be deleted with the report RDSMOPREDUCEDATA.
    You might also want to check note #546685.
    Nesimi

  • The size of the target table grows abnormally

    Hi all,
    I am currently using OWB (version 9.2.0.4) to feed some tables.
    We have created a new 9.2.0.5 database for a new data warehouse.
    I have an issue that I really cannot explain about the increasing size of the target tables.
    Take the example of a parameter table that contains 4 fields and only 12 rows.
    CREATE TABLE SSD_DIM_ACT_INS (
      ID_ACT_INS  INTEGER,
      COD_ACT_INS VARCHAR2(10 BYTE),
      LIB_ACT_INS VARCHAR2(80 BYTE),
      CT_ACT_INS  VARCHAR2(10 BYTE)
    )
    TABLESPACE IOW_OIN_DAT
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 1M
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCACHE
    NOPARALLEL;
    This table is fed by a mapping, and I use the update/insert option, which generates a MERGE.
    First the table is empty; I run the mapping and it adds 14 rows.
    The size of the table is now 5 MB!
    Then I delete 2 rows by SQL with TOAD.
    I run the mapping again. It updates 12 rows and adds 2 rows.
    At this point the size of the table has increased by 2 MB (1 MB per row!).
    The size of the table is now 7 MB.
    I do the same again and I get a 9 MB table.
    When I delete 2 rows with a SQL statement and create them manually, the size of the table does not change.
    When I create a copy of the table with an INSERT ... SELECT statement, the size becomes 1 MB, which is normal.
    Could someone explain to me how this is possible?
    Is it a problem with the database? With the configuration of OWB?
    What should I check?
    Thank you for your help.

    Hi all,
    We have found the reason for the growth.
    Each mapping has a hint which defaults to PARALLEL APPEND. As I understand it, it is used by OWB to decide whether an insert allocates new space for the table when it runs the insert.
    We have changed each one to PARALLEL NOAPPEND and now it's correct.
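    For illustration, a hedged sketch of the difference in plain Oracle SQL (not the code OWB generates; the staging table name is made up). A direct-path (APPEND) insert writes above the high-water mark and never reuses space freed by earlier deletes, so each small load grows the segment; a conventional (NOAPPEND) insert reuses free space in existing blocks.

    -- direct-path load: allocates new blocks above the high-water mark
    INSERT /*+ APPEND */ INTO ssd_dim_act_ins
    SELECT * FROM stg_act_ins;   -- stg_act_ins is a hypothetical staging table
    COMMIT;

    -- conventional load: reuses free space inside existing blocks
    INSERT /*+ NOAPPEND */ INTO ssd_dim_act_ins
    SELECT * FROM stg_act_ins;
    COMMIT;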

  • Reducing table storage space in Oracle 8i

    Hi,
    I am trying to reduce the space usage of the historical tables I have.
    These tables contain data from previous months; they are the only tables (snapshots) we can retrieve the historical data from.
    Is there any table compression technique or script that can help me? I can't afford to reduce the PCTFREE parameter.
    Waiting for your valuable suggestions
    Regards
    Rajib

    Hi Rajib,
    Dropping unused extents will definitely help you reduce the space occupied, but before that you need to know how many extents are totally blank. If a high percentage of the extents is completely blank, this option will help you to a certain extent.
    You can make use of the dbms_rowid package and dba_extents to find out the number of blocks/extents that are partially or completely free.
    Otherwise, rebuilding the table is the best option, but again it depends on the size of the table: the bigger the table, the more time the rebuild will take. If you can afford this time, then well and fine.
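    A hedged sketch of both approaches on 8i (the table, tablespace, and index names are placeholders; 8i has no segment shrink, so a rebuild via ALTER TABLE ... MOVE is the usual route):

    -- release never-used space above the high-water mark
    ALTER TABLE hist_sales DEALLOCATE UNUSED;

    -- or rebuild the table into compact extents; its indexes become UNUSABLE and must be rebuilt
    ALTER TABLE hist_sales MOVE TABLESPACE hist_dat;
    ALTER INDEX hist_sales_pk REBUILD;

    -- check extent usage afterwards
    SELECT segment_name, COUNT(*) AS extents, SUM(bytes)/1024/1024 AS mb
    FROM   dba_extents
    WHERE  segment_name = 'HIST_SALES'
    GROUP BY segment_name;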
    -Firoz

  • [JS][CS2]How to distribute table column space evenly?

    Script gurus help.
    Is it possible to snap the table columns to the longest entry and then have the remaining space distributed evenly among the columns?

    Yes, it is. Both column snapping and distributing space have been dealt with here in the past on a number of occasions. Searching the forum should turn up several useful approaches.
    Peter

  • Is alter table shrink space a logged operation?

    Hello -
    I am running alter table xxxxxx shrink space. I have a few questions:
    1. Is this a logged operation? Oracle handles this internally as insert/delete operations, but I am not seeing more archive logs being generated.
    2. If I stop in the middle of the shrink process, can Oracle resume where it left off?
    3. How can I monitor the progress of the shrink operation? I am not seeing the sid, serial# in v$session_longops.
    Any help is much appreciated!
    Thanks,
    mike

    I'm not sure that you will be able to "monitor" it.
    You could test using dbms_space to see whether it shows any changes while the shrink is in progress.
    One other way would be to watch V$TRANSACTION.USED_UREC (which reflects counts for rows and index entries) and see USED_UREC approaching the expected number of table+index entries being rebuilt.
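    A hedged sketch (it assumes you can identify the session running the shrink, supplied here as the bind variable :shrink_sid):

    -- undo records generated so far by the shrinking session
    SELECT s.sid, t.used_ublk, t.used_urec
    FROM   v$transaction t
    JOIN   v$session s ON s.taddr = t.addr
    WHERE  s.sid = :shrink_sid;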
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Table free space

    Is there any way to find the amount of free space that will be created when we delete data from a table?
    Suppose we delete 10,000 rows from a table; how much free space will be created?

    I wish to mention the following (Oracle 10.2.0.1 on Windows XP Professional Service Pack 2):
    select ts.tablespace_name, to_char(sum(nvl(fs.bytes,0)), '999999999999999') as MB_FREE
      from dba_free_space fs, dba_tablespaces ts
      where fs.tablespace_name(+) = ts.tablespace_name
      group by ts.tablespace_name;
    TABLESPACE_NAME                MB_FREE
    USERS                                 280231936
    Then I created one table in the SCOTT schema (which has a quota on the USERS tablespace):
    create table t(name varchar2(20),age number(3));
    Now check space again :
    SYS@orcl> /
    TABLESPACE_NAME                MB_FREE
    USERS                                 280166400
    This means 280231936 - 280166400 = 65536 bytes were consumed by the CREATE TABLE command above. Why?
    SCOTT@orcl> select bytes,blocks,extents from user_segments where segment_name='T';
                  BYTES              BLOCKS             EXTENTS
               65536.00                8.00                1.00
    But why did it take 65536 bytes?
    It takes 65536 bytes because my db_block_size is 8192 bytes and by default Oracle creates 1 extent of 8 blocks. The table has not occupied all 65536 bytes; the space has merely been allocated for it, meaning that if I insert up to 65536 bytes of data there will still be only 1 extent (as long as we don't say ALTER TABLE t ALLOCATE EXTENT). Now suppose I do say ALTER TABLE t ALLOCATE EXTENT; then:
    SCOTT@orcl> select bytes,blocks,extents from user_segments where segment_name='T';
                  BYTES              BLOCKS             EXTENTS
              131072.00               16.00                2.00
    So at this moment the table size is 131072 bytes, and if we check the size of the USERS tablespace again:
    SYS@orcl> /
    TABLESPACE_NAME                MB_FREE
    USERS                                 280100864 (280166400 – 65536)
    Now table T has been allocated 131072 bytes, but how much actual data is it carrying (the data size of the table)?
    A simple method is the AVG_ROW_LEN column, after analyzing the table:
    select table_name,num_rows,avg_row_len, (num_rows*avg_row_len) "Data in Table" from user_tables where table_name='T';
    TABLE_NAME                       NUM_ROWS AVG_ROW_LEN Data in Table
    T                                       0           0             0
    Now I will insert a row in the table :
    insert into t values('Girish',40);
    TABLE_NAME                       NUM_ROWS AVG_ROW_LEN Data in Table
    T                                       0           0             0
    analyze table t compute statistics;
    TABLE_NAME                       NUM_ROWS AVG_ROW_LEN Data in Table
    T                                       1          13            13
    insert into t values('Alok',42);
    TABLE_NAME                       NUM_ROWS AVG_ROW_LEN Data in Table
    T                                       1          13            13
    analyze table t compute statistics;
    TABLE_NAME                       NUM_ROWS AVG_ROW_LEN Data in Table
    T                                       2          12            24
    Commit;
    OK. At this moment it is holding 24 bytes of data. Now I am going to delete 1 row:
    delete from t where name='Girish';
    TABLE_NAME                       NUM_ROWS AVG_ROW_LEN Data in Table
    T                                       2          12            24
    analyze table t compute statistics;
    TABLE_NAME                       NUM_ROWS AVG_ROW_LEN Data in Table
    T                                       1          11            11
    Now I wish to return 24 - 11 = 13 bytes to the tablespace.
    SCOTT@orcl> select bytes,blocks,extents from user_segments where segment_name='T';
         BYTES     BLOCKS    EXTENTS
        131072         16          2
    Alter table t move;
    SCOTT@orcl> select bytes,blocks,extents from user_segments where segment_name='T';
         BYTES     BLOCKS    EXTENTS
         65536          8          1
    This means it deallocated/released the extra extents/space and shrank back down to the minimum. I don't have any index associated with this table; otherwise I would also have had to rebuild it. HTH
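    A hedged addition for the original question ("how much free space will a DELETE create?"): on an ASSM tablespace, DBMS_SPACE.SPACE_USAGE reports how full the blocks below the high-water mark are, and can be compared before and after the delete. A minimal sketch for the demo table T, assuming it lives in an ASSM tablespace:

    set serveroutput on
    declare
      v_unf   number; v_unfb  number;
      v_fs1   number; v_fs1b  number;
      v_fs2   number; v_fs2b  number;
      v_fs3   number; v_fs3b  number;
      v_fs4   number; v_fs4b  number;
      v_full  number; v_fullb number;
    begin
      dbms_space.space_usage('SCOTT', 'T', 'TABLE',
        v_unf, v_unfb, v_fs1, v_fs1b, v_fs2, v_fs2b,
        v_fs3, v_fs3b, v_fs4, v_fs4b, v_full, v_fullb);
      dbms_output.put_line('Blocks 75-100% free: ' || v_fs4);
      dbms_output.put_line('Full blocks        : ' || v_full);
    end;
    /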
    Girish Sharma
