Fastest growing table....

I have a 10.2.0.4 database running Oracle E-Business Suite 11.5.10.2. I am trying to find a way to pick out the top 10 tables that are growing the fastest. By growth, I don't mean rows but rather space consumption. Any suggestions, please?
Thanks,

Is there no way of getting this info from Grid/Oracle Enterprise Manager or AWR, etc.? If/when the raw data is NOT stored within the DB, the flavor of client is not relevant.
You cannot produce any report when the required data does not exist.
When you start with the wrong question, no matter how good an answer you get, it won't matter very much.
I NEVER tracked disk space consumption at the table level; only at the tablespace level.
Realize you could track disk usage at the OS level, simply based upon file sizes.
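If AWR is available (it requires the Diagnostics Pack license), one option is to compare segment-level space statistics between two snapshots. A minimal sketch, assuming :begin_snap and :end_snap are placeholder snapshot IDs covering the period of interest:

-- Top 10 tables by allocated-space growth between two AWR snapshots
SELECT *
  FROM (SELECT o.owner,
               o.object_name,
               SUM(s.space_allocated_delta) / 1024 / 1024 AS growth_mb
          FROM dba_hist_seg_stat     s
          JOIN dba_hist_seg_stat_obj o
            ON o.obj#     = s.obj#
           AND o.dataobj# = s.dataobj#
         WHERE s.snap_id BETWEEN :begin_snap AND :end_snap
           AND o.object_type = 'TABLE'
         GROUP BY o.owner, o.object_name
         ORDER BY growth_mb DESC)
 WHERE ROWNUM <= 10;

Without AWR, the alternative is to snapshot DBA_SEGMENTS sizes into your own history table on a schedule and diff the results over time.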

Similar Messages

  • How to check the fastest growing tables in DB2 via a DB2 command

    Hi Experts,
    You might feel this is a very silly question for this forum, but because of some requirements I still need it. I'm new to DB2 and Basis, so please bear with my inexperience.
    Our DB size has been growing fast, at 400-500 MB per day for the last 15 days, whereas it should be no more than 100 MB per day. We want to check the fastest growing tables. So I checked the history in the DB02 transaction and selected the entry field by 'Growth', but for the given date it shows nothing. We had the same issue some 3 months back, and at that time it displayed results with the same selection criteria.
    So I want a DB2 command to execute to check the fastest growing tables at the database level. Please help, guys. An early reply would be appreciated.
    PFA screenshot. DB version is DB2 9.7 and OS is Linux.
    Thanks & Regards,
    Prasad Deshpande

    Hi Gaurav/Sriram,
    Thanks for the reply..
    I agree that DBACOCKPIT is the best way to go, but even though our data collector framework and everything else are configured properly, it is still not working. Nothing has changed in the last year, and for the same scenario 3 months ago it displayed the growth tables.
    Anyhow, I have raised this issue with SAP, so let SAP come back with a solution for this product error.
    In the meanwhile, experts, please reply if you know a DB-level command for finding the fastest growing tables.
    I'll post SAP's reply as soon as I get it, so that the community also gets the solution.
    Thanks & Regards,
    Prasad Deshpande
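    At the DB2 level there is no single built-in "growth" command; a common workaround is to snapshot table sizes yourself and diff the snapshots. A rough sketch, assuming the SYSIBMADM.ADMINTABINFO administrative view is available (DB2 9.7) and that dbadm.tab_size_hist is a hypothetical history table you create first:

    -- One-time setup: history table for daily size collections
    CREATE TABLE dbadm.tab_size_hist (
        collected  TIMESTAMP,
        tabschema  VARCHAR(128),
        tabname    VARCHAR(128),
        size_kb    BIGINT
    );

    -- Run daily (e.g. via cron) to record physical sizes in KB
    INSERT INTO dbadm.tab_size_hist
    SELECT CURRENT TIMESTAMP,
           tabschema,
           tabname,
           SUM(data_object_p_size + index_object_p_size + long_object_p_size
               + lob_object_p_size + xml_object_p_size)
      FROM sysibmadm.admintabinfo
     GROUP BY tabschema, tabname;

    -- Top 10 growers between the oldest and newest collections
    SELECT cur.tabschema,
           cur.tabname,
           (cur.size_kb - prev.size_kb) / 1024 AS growth_mb
      FROM dbadm.tab_size_hist cur
      JOIN dbadm.tab_size_hist prev
        ON prev.tabschema = cur.tabschema
       AND prev.tabname   = cur.tabname
     WHERE cur.collected  = (SELECT MAX(collected) FROM dbadm.tab_size_hist)
       AND prev.collected = (SELECT MIN(collected) FROM dbadm.tab_size_hist)
     ORDER BY growth_mb DESC
     FETCH FIRST 10 ROWS ONLY;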

  • Top n Growing tables in BW & R/3 with Oracle env

    Hi all,
    We are on BW 7.0 with Oracle 10.2.0.2. Please let me know how to get the top N growing tables and the top N largest tables.
    I remember collecting these stats from transaction DB02 when we had MS SQL Server as the DB. It was as easy as clicking a button, but with Oracle I have been unable to find these options in DB02 or DBACOCKPIT.
    Thanks,
    Nick

    Nick,
    Go to transaction DB02OLD > Detailed Analysis > Object Name *, Tablespace, Object Type tab. You will get a list of all tables; take the top 50 tables from this list.
    The EarlyWatch report also gives a list of the top 20 tables. Check your EarlyWatch report for this.
    You can also use the following SQL query:
    select * from
    (select owner, segment_name, segment_type, tablespace_name, sum (bytes/1024/1024) MB
    from dba_extents
    where segment_type = 'TABLE'
    group by owner, segment_name, segment_type, tablespace_name
    order by MB desc)
    where rownum <= N;
    Put in any value for N (e.g. 50) to find the top N largest tables by current size.
    If you are planning to go for a table reorg then refer to the link below.
    Re: is it possible to see decrease of table size & size of tablespace
    Hope this helps.
    Thanks,
    Sushil

  • How to identify the Selected row number or Index in the growing Table

    Hi,
    How do I find the selected row number or row index of a growing table using JavaScript or FormCalc in Adobe Interactive Forms?
    Thanks & Regards
    Srikanth

    After using the script below, it works fine:
    xfa.resolveNode("Formname.Table1.Row1[" + this.parent.index + "].fieldname").rawValue;

  • How to deal with the growing table?

    Tables grow in every application. In some applications, tables become larger and larger quickly. How do you deal with this problem?
    I am developing an application system now. Should I add a lot of delete commands in the code for each table?

    junez wrote:
    Tables grow in every application. In some applications, tables become larger and larger quickly. How do you deal with this problem?
    I am developing an application system now. Should I add a lot of delete commands in the code for each table?
    Uh, well, yes: if you continually add rows to a table, the table will grow, and sooner or later you will want to delete rows that are no longer needed. What did you expect? You have to decide what the business rules are to determine when a row can be deleted, and make sure your design allows for such rows to be identified. This is called ..... analysis and design.
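    In practice that usually ends up as a scheduled purge driven by a retention rule. A minimal sketch, assuming a hypothetical table app_events with a CREATED_DATE column and a 90-day retention period agreed with the business:

    -- Purge rows older than the agreed 90-day retention window
    DELETE FROM app_events
     WHERE created_date < SYSDATE - 90;
    COMMIT;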

  • Backup fast growing table

    I have a table for XML messages in which one of the columns has a size of 4000; the table grows very fast (10 GB+/week). I also need to keep these messages via backup or some other means, but they need to be easy to load back into the DB if they are required to be online again.
    The table can be written to at any time; there is no way to know when.
    Does anybody know the common practice for this kind of operation: where should I store the data, how do I load it back into the DB, and do I need special tools? I have only 60 GB on my DB server.
    Thanks in advance

    Robert Geier wrote:
    This does not indicate what is growing.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_4149.htm
    "DBA_TAB_MODIFICATIONS describes modifications to all tables in the database that have been modified since the last time statistics were gathered on the tables. Its columns are the same as those in "ALL_TAB_MODIFICATIONS".
    Note:
    This view is populated only for tables with the MONITORING attribute. It is intended for statistics collection over a long period of time. For performance reasons, the Oracle Database does not populate this view immediately when the actual modifications occur. Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DIMS_STATS PL/SQL package to populate this view with the latest information. The ANALYZE_ANY system privilege is required to run this procedure."-----------------
    Thank you.
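    Building on the documentation quoted above, a minimal sketch of how one might use DBA_TAB_MODIFICATIONS to see which monitored tables are receiving the most DML (requires the ANALYZE ANY privilege, per the note):

    -- Flush the in-memory monitoring info so the view is current
    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

    -- Tables with the most inserts since statistics were last gathered
    SELECT table_owner,
           table_name,
           inserts,
           updates,
           deletes
      FROM dba_tab_modifications
     ORDER BY inserts DESC;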

  • How to minimise performance degradation when querying a growing table during processing...?

    Hi everyone
    Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
    Read record from source table
    Check to see if matching header record exists in TableA (using indexed field)
    If match found then store TXH_ID (PK in TableA)
    If no match found then create new header record in TableA with new TXH_ID
    Create detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
    If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
    If so, is there any way to rectify this? Would updating the stats at certain points in the process be effective?
    Would it be any different if a MERGE was used to (conditionally) insert the header records into TableA? (i.e. would the stats still get stale?)
    DB is 11GR2 and OS is Windows Server 2008
    Thanks

    Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
    Read record from source table
    Check to see if matching header record exists in TableA (using indexed field)
    If match found then store TXH_ID (PK in TableA)
    If no match found then create new header record in TableA with new TXH_ID
    Create detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
    If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
    What do you mean by 'presumably the stats . .'?
    In item #3 you said that TXH_ID is the primary key. That means only ONE value will EVER be found in the index, so there should be NO degradation when looking up that primary key value.
    The plan you posted shows an index range scan. A range scan is NOT used to lookup primary key values since they must be unique (meaning there is NO RANGE).
    So there should be NO impact due to the header table 'getting big'.
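    For reference, a minimal sketch of the lookup/insert pattern being discussed, assuming hypothetical structures: TableA with primary key TXH_ID and an indexed business key TXH_KEY fed by a sequence tablea_seq, TableB with foreign key TXD_TXH_ID, and a source_table providing the input rows (none of these column names come from the original post):

    DECLARE
      v_txh_id  tablea.txh_id%TYPE;
    BEGIN
      FOR src IN (SELECT * FROM source_table) LOOP
        BEGIN
          -- Steps 2/3: look up the existing header via the indexed key
          SELECT txh_id
            INTO v_txh_id
            FROM tablea
           WHERE txh_key = src.txh_key;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            -- Step 4: no match found, create a new header row
            INSERT INTO tablea (txh_id, txh_key)
            VALUES (tablea_seq.NEXTVAL, src.txh_key)
            RETURNING txh_id INTO v_txh_id;
        END;

        -- Step 5: detail row linked to the header
        INSERT INTO tableb (txd_txh_id, txd_value)
        VALUES (v_txh_id, src.some_value);
      END LOOP;

      COMMIT;
    END;
    /

    Whether the lookup uses this pattern or a MERGE, the cost of step 2 is a near-constant unique index probe, which is why stale statistics on TableA should not make it progressively slower.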

  • How do I create a dynamic growing table which holds "copies" of the footer Row from several instances?

    Mission:        
    To create a summary table with Rows from several (yet to be determined) instances
    Coordinates of the enemy   (or Row in question):  
    ROOT.category[*].sub_category_total.sub_category_summary.Row1
    Background information:
    An order form with option for several categories (2 – 10) with subcategories (2 – 20).
    Since the finished form might be up to 20 pages long I would like to get to the point
    with a click of a button (so to speak)
    Therefore a summary list seems the only logic solution.
    Reward:
    Unending gratefulness and publishing of finished sample for others to learn from
    Additional Note:
    Should you choose to accept this mission I will never deny your existence and will
    always give credit to whom credit belongs.
    This message will NOT self-destruct.    

    Hi Steve,
    thanks for the example ... nice and close but not 100 % solution.
    The sample requires me to set up the summary table with fixed rows.
    But I don't know how many main categories & subcategories will be there - it can vary between 2 and 20.
    One solution would be to set up a table with 20+ rows and hide them if rawValue ... equals 0.
    Might work -  but is not very elegant ....
    I guess I was looking for a script that counts the instances and then automatically creates the necessary rows.
    I know it is a lot to ask in a forum ... but I guess it might be possible. Isn't it?
    Please let me know if I'm reaching for the impossible 
    Jixin

  • How to create a growing table with hierarchical addition of rows, e.g. for each row there must be a sub-row I can add (for row 1 I can add 1.1)

    I can increase the rows, but how do I make it hierarchical so that I can add rows under rows?

    You have the right link to get started on adding authentication to your Mobile service app. But for security reasons the MobileServiceUser class only exposes the authenticated UserId and MobileServiceAuthenticationToken properties. You can save these two values in your client application.
    Getting the user name and email address isn't available through this medium. You would have to call into the respective APIs for the service (Facebook or Twitter APIs).
    Abdulwahab Suleiman

  • CO table growth rate

    Hi,
    We have gone live with SAP ECC for a retail scenario recently. Our database is growing 3 GB per day, which includes both data and index growth.
    Modules configured:
    SD (Retail), MM, HR and FI/CO.
    COPA is configured for reporting purposes, to find article-wise sales details per day, and COPA summarization has not been done.
    Total sales orders created per day on average: 4000
    Total sales order line items per day on average: 25000
    Total purchase orders created per day on average: 1000
    Please suggest whether database growth of 3 GB per day is normal for our scenario, or whether we should do something to restrict the database growth.
    The fastest growing tables are:
    CE11000     Operating Concern fo
    CE31000     Operating Concern fo
    ACCTIT     Compressed Data from FI/CO Document
    BSIS     Accounting: Secondary Index for G/L Accounts
    GLPCA     EC-PCA: Actual Line Items
    FAGLFLEXA      General Ledger: Actual Line Items
    VBFA     Sales Document Flow
    RFBLG     Cluster for accounting document
    FAGL_SPLINFO     Splitting Information of Open Items
    S120     Sales as per receipts
    MSEG     Document Segment: Article
    VBRP     Billing Document: Item Data
    ACCTCR     Compressed Data from FI/CO Document - Currencies
    CE41000_ACCT     Operating Concern fo
    S033     Statistics: Movements for Current Stock (Individual Records)
    EDIDS     Status Record (IDoc)
    CKMI1     Index for Accounting Documents for Article
    LIPS     SD document: Delivery: Item data
    VBOX     SD Document: Billing Document: Rebate Index
    VBPA     Sales Document: Partner
    BSAS     Accounting: Secondary Index for G/L Accounts (Cleared Items)
    BKPF     Accounting Document Header
    FAGL_SPLINFO_VAL     Splitting Information of Open Item Values
    VBAP     Sales Document: Item Data
    KOCLU     Cluster for conditions in purchasing and sales
    COEP     CO Object: Line Items (by Period)
    S003     SIS: SalesOrg/DistCh/Division/District/Customer/Product
    S124     Customer / article
    SRRELROLES     Object Relationship Service: Roles
    S001     SIS: Customer Statistics
    Is there any way we can reduce the data growth without affecting the configured functionality?
    Will COPA summarization configuration help reduce the growth of the FI/CO tables?
    Regards,
    Nalla.

    user480060 wrote:
    Dear all,
    Oracle 9.2 on AIX 5.3
    In one of our databases, one table has a very fast growth rate.
    How can I check whether the table's growth is normal or not?
    Please advise.
    The question is, what is a "very fast growth rate"?
    What is the DDL of the table, and what data types does it use?
    One potential issue could be the way the table is populated: if you constantly insert into the table using a direct-path insert (APPEND hint) and subsequently delete rows, then your table will grow faster than required, because the deleted rows won't be reused by the direct-path insert, which always writes above the current high-water mark of your table.
    You may want to check your application for such a case if you think that the table grows faster than the actual amount of data it contains.
    You could use the ANALYZE command to get information about empty blocks and average free space in the blocks, or use the procedures provided by the DBMS_SPACE package to find out more about the current usage of your segment (see the sketch at the end of this post).
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
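    As a concrete illustration of the DBMS_SPACE suggestion above, a minimal sketch (owner and table name are placeholders; run with SERVEROUTPUT enabled) that compares allocated space with the unused space above the high-water mark:

    DECLARE
      v_total_blocks   NUMBER;
      v_total_bytes    NUMBER;
      v_unused_blocks  NUMBER;
      v_unused_bytes   NUMBER;
      v_last_file_id   NUMBER;
      v_last_block_id  NUMBER;
      v_last_block     NUMBER;
    BEGIN
      DBMS_SPACE.UNUSED_SPACE(
        segment_owner             => 'SCOTT',
        segment_name              => 'FAST_GROWING_TABLE',
        segment_type              => 'TABLE',
        total_blocks              => v_total_blocks,
        total_bytes               => v_total_bytes,
        unused_blocks             => v_unused_blocks,
        unused_bytes              => v_unused_bytes,
        last_used_extent_file_id  => v_last_file_id,
        last_used_extent_block_id => v_last_block_id,
        last_used_block           => v_last_block);

      DBMS_OUTPUT.PUT_LINE('Allocated MB: ' || ROUND(v_total_bytes  / 1024 / 1024));
      DBMS_OUTPUT.PUT_LINE('Unused MB   : ' || ROUND(v_unused_bytes / 1024 / 1024));
    END;
    /

    A large allocated size relative to the actual data volume would point to the high-water-mark effect described above.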

  • Table space and recollection of released space after client deletion

    Dear All,
    I have two questions:
    1. Can anyone tell me how to determine the fastest growing tables and how to reorganize a tablespace?
    2. Is there any way we can recollect the space released, say after a client deletion? I deleted one client two weeks back, and when I looked at the SAP/user drive, the free space available remained the same.
    I searched but found no relevant clue.
    regards,
    Ashutosh

    1.- You can see the largest-table report in ST04 (as far as I remember)...
    2.- You can reorganize your DB using BRTOOLS - brspace...
    Read,
    http://help.sap.com/saphelp_erp2004/helpdata/en/32/0d0c888839164ba4245b3ff7969c59/frameset.htm
    Regards
    Juan

  • How to use insert into...select * from...if source table is huge in size

    The source tables contain crores of rows (growing tables) - tables with 4 crore and 17 crore rows. We want these tables to be copied frequently to our local database. Previously this was done by export/import through a Windows scheduled task, but now we are planning to do it as database jobs. We are fetching the data with the query:
    insert into dest_table (select * from source_table@dblink)
    When we tried this it threw an exception that there was not enough tablespace. It was also found that frequent commits have to be used while populating data from big tables. So we tried a cursor, but it was very slow and again we got an exception like 'unable to extend segment by 16 in undo tablespace UNDOTBS1'.
    After that we tried with GROUP BY. In this case we got an exception about being unable to extend the table, and also the index, in the tablespace. For this the solution is to add a datafile, so again we had to increase the tablespace. Now the procedure is running very slowly (taking a lot of time; it might be because of the conditions used in the query).
    Is there any other option to copy the data from such big tables? Can we use the same sort of query?
    Friends, please help me sort it out.
    Thanks in Advance

    Hi,
    You have a lot of data - DON'T use a cursor. Did you try using the COPY command? (A sketch is at the end of this post.)
    How frequently will you be doing the copying of the data?
    If you have any constraints, you can disable them and re-enable them after all the records have been copied.
    Please look at this link this should help.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
    Thanks
    Message was edited by:
    hkandpal
    Link added
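    For reference, a minimal sketch of the SQL*Plus COPY approach mentioned above, assuming a placeholder connect string and that the destination table already exists; ARRAYSIZE and COPYCOMMIT control how many rows are fetched per batch and how often a commit is issued, which keeps undo usage bounded:

    -- Commit after every 10 batches of 5000 rows
    SET ARRAYSIZE 5000
    SET COPYCOMMIT 10

    -- Copy rows from the remote database into the local table
    -- (scott/tiger@remote_db is a placeholder connect string)
    COPY FROM scott/tiger@remote_db -
      INSERT dest_table -
      USING SELECT * FROM source_table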

  • XI DB size growing

    Hi,
    I am facing a problem with the XI DB (MaxDB): the size of the DB has been growing tremendously in recent days, despite the scheduled deletion and archiving running successfully.
    Can anyone suggest how to check the growing tables?
    Regards,
    Senpai

    I have found the table that is causing the trouble:
    it is the XI_AF_MSG table, which is consuming about 47-48 GB of space. So, any suggestion on how I can fix this problem?
    You can check for the possibility of performing archiving and deletion activity on this table:
    http://help.sap.com/saphelp_nw70/helpdata/EN/f5/d347ddec72274ca2abb0d7682c800b/frameset.htm
    From the above link:
    The following table represents the persistence layer of the Adapter Engine, which means that it stores incoming and outgoing messages.
    ●     XI_AF_MSG
    A full archiving solution is provided for this table. For more information, see Background Processing.
    For Background Processing:
    http://help.sap.com/saphelp_nw70/helpdata/EN/05/b1b740f83db533e10000000a155106/frameset.htm
    Regards,
    Abhishek.

  • Fast growing tablespaces

    Hi Experts,
    The following tablespaces are consuming the most space:
    PSAPBTABD: 77.5 GB (about 50% of the total space acquired)
    PSAPBTABI: 38.5 GB
    PSAPCLUD: 15 GB
    85% of the total space is consumed by these tablespaces.
    The tables with the highest growth are:
    BSIS, RFBLG, ACCTIT, ACCTCR, MSEG, RSEG   etc.
    Average increase of 2 GB of space per month.
    Kindly help me find a solution.
    Regards,
    Praveen Merugu

    Hi Praveen,
    Greetings!
    I am not sure whether you are a Basis or functional person, but if you are Basis then you can discuss with your functional team selecting the archiving objects in line with your project. Normally, functional consultants will know which archiving object will delete entries from which tables. You can also search help.sap.com to identify the archiving objects.
    Once you have identified the archiving objects, you need to discuss your archiving plan with your business heads and key users. This is to fix the data retention period in the production system and to fix the archiving cycle for every year.
    Once these have been fixed, you can sit with the functional people to create variants for the identified archiving objects. Use SARA and archive the objects concerned.
    Initiating an archiving project is a time-consuming task. It is better to start a separate mini-project to kick off the initial archiving plan. You can test the entire archiving phase in the QA system by copying the PRD client.
    The summary below will give you an idea of how to start the archiving project:
    1. Identify the tables which grow rapidly and their modules.
    2. Identify the relevant archiving object which will delete the entries in each rapidly growing table.
    3. Prepare an archive server to store the archived data (get a 3rd-party archiving solution if possible). Remember, the old data must be retrievable from the archive server whenever the business needs it.
    4. Finalise the archiving cycle in line with your business needs.
    5. Archive the objects using SARA.
    6. Reorganize the DB after archiving.
    Hope this gives you some idea about an archiving project.
    regards,
    VInodh.

  • ReportServerTempDB grows unexpectedly

    Hi SSRS Experts
    We have encountered a problem where ReportServerTempDB grows extremely large... the SessionData and SnapshotData tables are about 10 GB each at the moment and keep growing and growing...
    The CleanupCycleMinutes configuration is set to the default of 10. We are not taking snapshots of our reports.
    Shouldn't those tables be cleaned up every 10 minutes with this setting?
    What can we do to stop this database growing? Is truncating those tables on a weekly basis a solution?
    We are using SQL Server Reporting Services 2005 SP2 with Cumulative Hotfix Package 5 (Build 3215).
    Thanks for your answers...
    Kind Regards,
    Roger

    No, just the fast growing tables. Schedule a nightly job.
    A sample is below. Please check the date column to be sure it is the right column.
    Code Snippet
    SELECT ToBeDeletedCount = COUNT(*)
    FROM [ReportServerTempDB].[dbo].[SessionData]
    WHERE DATEDIFF(day, CreationDate, GETDATE()) > 5
    GO
    DELETE FROM [ReportServerTempDB].[dbo].[SessionData]
    WHERE DATEDIFF(day, CreationDate, GETDATE()) > 5
    GO
    Kalman Toth SQL SERVER 2012 & BI TRAINING
    New Book: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2012
