Backup fast growing table

I have a table for XML messages in which one of the columns has a size of 4000, and the table grows very fast (10 GB+/week). I also need to keep these messages via backup or some other means, but they must be easy to load back into the DB if they are required to be online again.
The table can be written to at any time; there is no way to know when that will happen.
Does anybody know the common practice for this kind of operation? Where should I store the data, how do I load it back into the DB, and do I need special tools? I have only 60 GB on my DB server.
Thanks in advance

Robert Geier wrote:
Your post does not indicate what is growing. See DBA_TAB_MODIFICATIONS:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_4149.htm
"DBA_TAB_MODIFICATIONS describes modifications to all tables in the database that have been modified since the last time statistics were gathered on the tables. Its columns are the same as those in "ALL_TAB_MODIFICATIONS".
Note:
This view is populated only for tables with the MONITORING attribute. It is intended for statistics collection over a long period of time. For performance reasons, the Oracle Database does not populate this view immediately when the actual modifications occur. Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DIMS_STATS PL/SQL package to populate this view with the latest information. The ANALYZE_ANY system privilege is required to run this procedure."-----------------
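For example, a minimal sketch of putting this to use (run as a user with the required privilege; ordering by inserts simply surfaces the most-written tables first):

exec dbms_stats.flush_database_monitoring_info;

select table_owner, table_name, inserts, updates, deletes
from dba_tab_modifications
order by inserts desc;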
Thank you.

Similar Messages

  • Fast growing Basis tables....

    Hi,
    I have been using Note 706478 to find measures against fast growing Basis tables. I use this for R/3. Is there a similar note for BW and CRM as well? I searched OSS but could not come up with any solid note.
    Does anybody know about this? Your help will be appreciated.
    Thanks in advance to everybody.

    There are several factors to consider. For example:
    - Is the table a TimesTen only table or is it a cached Oracle table using AWT to push the inserts down to Oracle?
    - How long does the data need to be retained? Forever, for 1 hour, for 1 minute...
    If the table is an AWT cached table from Oracle, then the inserted data ultimately ends up in Oracle. It is therefore likely that you can safely discard some data from TimesTen to prevent the datastore from filling up. You can do this using the automatic aging feature in TT 7.0, or you can implement it as a periodic UNLOAD CACHE GROUP statement executed from a script or from application code, as sketched below.
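    A minimal sketch of the periodic UNLOAD approach (the cache group name and the retention predicate are hypothetical; unloading removes rows from the TimesTen datastore without touching the data already propagated to Oracle):
    -- hypothetical cache group; drop TimesTen rows older than one hour
    UNLOAD CACHE GROUP msg_cache
    WHERE msg_time < SYSDATE - 1/24;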
    If the table is a TimesTen only table then in addition to the above you need to consider if you can just discard 'old' data or if you have to keep it somewhere. If you need to keep it then you will first need to copy it out of TT before it gets deleted. In this case aging is probably not a good solution and you should implement some application mechanism to copy the data somewhere else and then delete it. If you do not need to keep the data then aging may still be an option.
    In any event, you will want to give yourself as much headroom as possible by making the datastore as big as you can, subject to available memory etc. If you use aging, you will likely have to configure very aggressive aging parameters to keep the table size under control. It is possible that aging may not be up to the job if the insert rate is extremely high, in which case you may need to implement some application-based cleanup mechanism.
    Chris

  • Fast growing object in tablespace

    Hi Experts
    Can anyone tell me how to find the fastest growing objects in the database? The database is growing very fast and I want to know which objects they are.
    thanks in advance
    anu

    Can anyone tell me how to find the fastest growing objects in the database?
    I would change this query to report object-wise, as the OP is interested in the growing object size, not the segment size:
    select owner, segment_name, segment_type, sum(bytes)/1024/1024 "SIZE(MB)", sum(blocks) blocks
    from dba_segments
    where segment_type = 'TABLE' and owner = 'LOGS'
    group by owner, segment_name, segment_type;
    Regards.

  • How to check the fastest growing tables in DB2 via a DB2 command

    Hi Experts,
    You might find this a very silly question for this forum, but because of some requirements I still need it. I'm new to DB2 and Basis, so please bear with my inexperience.
    Our DB size has been growing fast, at 400-500 MB per day for the last 15 days, when it should be no more than 100 MB per day. We want to check the fastest growing tables, so I checked the history in the DB02 transaction and selected the entry field as per 'Growth', but for the given date it shows nothing. We had the same issue some 3 months back, and at that time it displayed results with the same selection criteria.
    So I want a DB2 command to execute and check the fastest growing tables at the database level. Please help, guys. Early replies are much appreciated.
    PFA screenshot. DB version is DB2 9.7 and OS is Linux
    Thanks & Regards,
    Prasad Deshpande

    Hi Gaurav/Sriram,
    Thanks for the reply..
    I agree with you that DBACOCKPIT is the best way to go, but even though our data collector framework and everything else are configured properly, it is still not working. Nothing has changed in the last year, and for the same scenario 3 months ago it displayed the growth tables.
    Anyhow, I have raised this issue with SAP, so let SAP come back with a solution to this product error.
    In the meanwhile, experts, please reply if you know the DB-level command for getting the fastest growing tables (one possible approach is sketched below).
    I'll post SAP's reply as soon as I get it, so that the community also gets the solution.
    Thanks & Regards,
    Prasad Deshpande
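    As a minimal sketch of a DB-level check, assuming the SYSIBMADM.ADMINTABINFO administrative view available in DB2 9.7 (it reports current size, so growth has to be derived by saving the output daily and comparing):
    -- top 20 tables by physical size (sizes are in KB)
    select tabschema, tabname,
           (data_object_p_size + index_object_p_size + lob_object_p_size) as size_kb
    from sysibmadm.admintabinfo
    order by size_kb desc
    fetch first 20 rows only;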

  • Top n Growing tables in BW & R/3 with Oracle env

    Hi all,
    We are on BW 7.0 with Oracle 10.2.0.2. Please let me know how to get the top N growing tables and the top N largest tables.
    I remember collecting these stats from transaction DB02 when we had MS SQL Server as the DB. It was as easy as clicking a button, but with Oracle I have been unable to find these options in DB02 or DBACOCKPIT.
    Thanks,
    Nick

    Nick,
    Go to transaction DB02OLD > Detailed Analysis > Object Name *, Tablespace *, Object Type *. You will get a list of all tables; take the top 50 tables from this list.
    The EarlyWatch report also gives a list of the top 20 tables. Check your EarlyWatch report for this.
    You can also use the following SQL query:
    select * from
    (select owner, segment_name, segment_type, tablespace_name, sum (bytes/1024/1024) MB
    from dba_extents
    where segment_type = 'TABLE'
    group by owner, segment_name, segment_type, tablespace_name
    order by MB desc)
    where rownum <= N;
    Put in a literal value for N (e.g. 50) to find the top N largest tables. Note that this shows current size; to see which tables are growing fastest, compare the output over time.
    If you are planning to go for a table reorg then refer below link.
    Re: is it possible to see decrease of table size & size of tablespace
    Hope this helps.
    Thanks,
    Sushil

  • How to identify the selected row number or index in a growing table

    Hi,
    How can I find the selected row number or row index of a growing table using JavaScript or FormCalc in Adobe Interactive Forms?
    Thanks & Regards
    Srikanth

    After using the script below, it works fine:
    xfa.resolveNode("Formname.Table1.Row1[" + this.parent.index + "].fieldname").rawValue;

  • Fast growing tablespaces

    Hi Experts,
    The following tablespaces are consuming the most space:
    PSAPBTABD: 77.5 GB (50% of the total space acquired)
    PSAPBTABI: 38.5 GB
    PSAPCLUD: 15 GB
    85% of the total space is consumed by these tablespaces.
    The tables with the highest growth are:
    BSIS, RFBLG, ACCTIT, ACCTCR, MSEG, RSEG, etc.
    They grow by an average of 2 GB per month.
    Kindly help me find a solution.
    Regards,
    Praveen Merugu

    Hi praveen,
    Greetings!
    I am not sure whether you are a Basis or functional person but, if you are Basis, you can discuss with your functional team selecting the archiving objects in line with your project. Normally, functional consultants will know which archiving object deletes entries from which tables. You can also search help.sap.com to identify the archiving objects.
    Once you have identified the archiving objects, you need to discuss your archiving plan with your business heads and key users. This is to fix the data retention period in the production system and to fix the archiving cycle for every year.
    Once these have been fixed, you can sit with the functional team to create variants for the identified archiving objects. Use SARA and archive the concerned objects.
    Initiating an archiving project is a time-consuming task. It is better to start a separate mini project to kick off the initial archiving plan. You can test the entire archiving phase in the QA system by copying the PRD client.
    The summary below will give you an idea of how to start the archiving project:
    1. Identify the tables which grow rapidly and the module they belong to.
    2. Identify the relevant archiving object which will delete the entries in each rapidly growing table.
    3. Prepare an archive server to store the archived data (get a 3rd-party archiving solution if possible). Remember, the old data must be retrievable from the archive server whenever the business needs it.
    4. Finalise the archiving cycle in line with your business needs.
    5. Archive the objects using SARA.
    6. Reorganize the DB after archiving.
    Hope this gives you some idea of an archiving project.
    regards,
    VInodh.

  • Very fast growing STDERR# File

    Hi experts,
    I have stderr# files on two app servers which are growing very fast.
    The problem is, I can't open the files via ST11 as they are too big.
    Is there a guide which explains what this file is and how I can manage it (reset, ...)?
    Might it be a locking log? I have a few entries in SM21 about failed locks,
    and I can also find entries about "call recv failed" and "comm error, cpic return code 020".
    Thx in advance

    Dear Christian,
    Stderr* files are used to record the syslog and logon checks. When the system is up, only one should be in use, and you can delete the others: for example, if stderr1 is being used, then you can delete stderr0, stderr2, stderr3, and so on. Otherwise, only shutting down the application server will allow deletion. Once deleted, the files will be created again, and they will only grow if the original issue causing the growth still exists; switching between the files is internal and not controlled by size.
    Some causes of 'stderr4' growth:
    In the case of repeated input/output errors of a TemSe object (in particular in the background), large portions of trace information are written to stderr. This information is not necessary and not useful in this quantity.
    Please review the following Notes carefully:
    48400: Reorganization of TemSe and Spool (use this to delete old TemSe objects)
    RSPO0041 (or RSPO1041) and RSBTCDEL: reports to delete old TemSe objects
    RSPO1043 and RSTS0020: for the consistency check
    1140307: STDERR1 or STDERR3 becomes unusually large
    Please also run a Consistency Check of DB Tables as follows:
    1. Run Transaction SM65
    2. Select Goto ... Additional tests
    3. Select "Consistency check DB Tables" and click execute.
    4. Once you get the results check to see if you have any inconsistencies
       in any of your tables.
    5. If any inconsistencies are reported, then run the "Background
       Processing Analysis" (SM65 > Goto > Additional Tests) again.
       This time check both the "Consistency check DB Tables" and the
       "Remove Inconsistencies" options.
    6. Run this a couple of times until all inconsistencies are removed from
       the tables.
    Make sure you run this SM65 check when the system is quiet and no other batch jobs are running, as it puts a lock on the TBTCO table until it finishes. This table may be needed by any other batch job that is running or scheduled to run while the SM65 checks run.
    Running these jobs daily should ensure that the stderr files do not increase at this rate in the future.
    If the system is running smoothly, these files should not grow very fast, because they mostly just record error information as it happens.
    For more information about stderr please refer to the following note:
       12715: Collective note: problems with SCSA
              (this Note explains what is in stderr and how it is created).
    Regards,
    Abhishek

  • How to deal with the growing table?

    Every table grows in any application, and in some applications tables become larger and larger quickly. How should I deal with this problem?
    I am developing an application system now. Should I add a lot of DELETE commands in the code for each table?

    junez wrote:
    Every table grows in any application... Should I add a lot of DELETE commands in the code for each table?
    Uh, well, yes: if you continually add rows to a table, the table will grow, and sooner or later you will want to delete rows that are no longer needed. What did you expect? You have to decide what the business rules are for determining when a row can be deleted, and make sure your design allows such rows to be identified. This is called..... analysis and design.
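    As a minimal sketch of such a retention rule, assuming an Oracle database and a hypothetical table with a 90-day retention policy (deleting in batches keeps undo and lock time bounded; rerun until no rows remain):
    delete from app_events            -- hypothetical table
    where created_at < sysdate - 90   -- hypothetical retention rule
    and rownum <= 10000;              -- delete in batches
    commit;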

  • PL/SQL forall operator speeds 30x faster for table inserts

    Hi!
    I found this quote on a website. According to the site, "Loading an Oracle table from a PL/SQL array involves expensive context switches, and the PL/SQL FORALL operator speed is amazing." But I knew the opposite - that a normal SQL insert is always faster. There may be some exceptions - but not always. Please go through the link:
    [url http://www.dba-oracle.com/oracle_news/news_plsql_forall_performance_insert.htm]PL/SQL FORALL operator speed is amazing In Oracle .
    If anyone has knowledge regarding this, please share your views or opinions here to clear my doubt. I'll be waiting for replies. Thanks in advance.
    Regards.
    Satyaki De.

    Hi Satyaki,
    FORALL bulk processing is always faster than row-by-row processing, as shown in this test.
    It is faster because you have far fewer executions of the insert statement and therefore fewer context switches.
    I must mention that bulk fetching 100,000 records at a time isn't always a great idea for production code, because you'll use a large amount of PGA memory.
    I guess you are confused by the "insert into products select * from products" variant. That pure SQL will beat the FORALL variant.
    From slow to fast:
    1. row by row dynamic SQL without using bind variables (execute immediate insert ...)
    2. row by row dynamic SQL with the use of bind variables (execute immediate insert ... using ...)
    3. row by row static SQL (for r in c loop insert ... values ... end loop;)
    4. bulk processing (forall ... insert ...)
    5. single SQL (insert ... select)
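    A minimal sketch of variant 4, with hypothetical table names (the LIMIT clause bounds the PGA usage mentioned above):
    declare
      cursor c is select * from source_messages;      -- hypothetical source table
      type t_rows is table of c%rowtype;
      l_rows t_rows;
    begin
      open c;
      loop
        fetch c bulk collect into l_rows limit 1000;  -- modest limit bounds PGA
        exit when l_rows.count = 0;
        forall i in 1 .. l_rows.count                 -- one context switch per batch
          insert into target_messages values l_rows(i);
      end loop;
      close c;
      commit;
    end;
    /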
    Hope this helps.
    Regards,
    Rob.
    Message was edited by:
    Rob van Wijk
    Way too late ...

  • Fastest growing table....

    I have a 10.2.0.4 database running Oracle E-Business Suite 11.5.10.2. I am trying to find a way to pick the top 10 tables that are growing the fastest. By growth, I don't mean rows but rather space consumption... any suggestions, please?
    Thanks,

    Is there no way of getting this info from Grid/Oracle Enterprise Manager or AWR, etc.?
    If/when the raw data is NOT stored within the DB, the flavor of client is not relevant.
    You cannot produce any report when the required data does not exist.
    When you start with the wrong question, no matter how good an answer you get, it won't matter very much.
    I have NEVER tracked disk space consumption at the table level, only at the tablespace level.
    Realize you could track disk usage at the OS level, simply based upon file sizes.
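    For completeness: when AWR segment statistics are retained (this requires the Diagnostics Pack license), per-segment growth can be derived from the snapshot deltas. A minimal sketch, with a placeholder snapshot range:
    select * from (
      select o.owner, o.object_name,
             sum(s.space_allocated_delta)/1024/1024 growth_mb
      from dba_hist_seg_stat s, dba_objects o
      where s.obj# = o.object_id
      and s.snap_id between 100 and 200   -- placeholder snapshot range
      group by o.owner, o.object_name
      order by growth_mb desc)
    where rownum <= 10;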

  • How to minimise performance degradation when querying a growing table during processing...?

    Hi everyone
    Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
    Read record from source table
    Check to see if matching header record exists in TableA (using indexed field)
    If match found then store TXH_ID (PK in TableA)
    If no match found then create new header record in TableA with new TXH_ID
    Create detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
    If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
    If so, is there any way to rectify this? Would updating the stats at certain points in the process be effective?
    Would it be any different if a MERGE was used to (conditionally) insert the header records into TableA? (i.e. would the stats still get stale?)
    The DB is 11gR2 and the OS is Windows Server 2008.
    Thanks

    If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
    What do you mean, 'presumably the stats will start to get stale'?
    In item #3 you said that TXH_ID is the primary key. A primary key is unique, so at most ONE value will EVER be found in the index, and there should be NO degradation when looking up that primary key value.
    The plan you posted shows an index range scan. A range scan is NOT used to look up primary key values, since they must be unique (meaning there is NO RANGE).
    So there should be NO impact from the header table 'getting big'.
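    On the MERGE question from the post, a minimal sketch of the conditional header insert (the lookup column, bind variable, and sequence are hypothetical). The lookup remains an index probe, so the argument above is unchanged whether you use SELECT-then-INSERT or MERGE:
    merge into tablea a
    using (select :msg_key as msg_key from dual) src  -- hypothetical lookup value
    on (a.msg_key = src.msg_key)                      -- the indexed field from step 2
    when not matched then
      insert (txh_id, msg_key)
      values (txh_seq.nextval, src.msg_key);          -- hypothetical sequence for TXH_ID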

  • LabVIEW/TestStand/PXI Engineering Architect Role in fast growing Semiconductor Services Company

    A reputed Semiconductor Services company is on the cusp of major growth due to recent Brand Recognition and happy customers. The company is looking for a capable, motivated senior engineer or developer who wants to take the next step toward technical/architecture leadership, team leadership and opportunities to make lasting impressions on customers in order to grow the business and themselves. Some questions to ask yourself before you apply:
    a) Do you have 2+ years of experience in LabVIEW/TestStand/PXI with a strong foundation in Electrical/Electronics/Communications/Computer Engineering?
    b) Do you feel that your technical skills in the LabVIEW/TestStand/PXI space have evolved to the point that you can punch above your weight for your years of experience? We are looking for go-getters who may have only 2-3 years of experience but make a lasting impression on customers and come across as having 4-5 years of experience because of innate smarts, command of engineering and architectural concepts, communication skills and a can-do attitude.
    c) Are you driven by a sense of integrity, respect for your colleagues and a strong team spirit?
    d) Do you believe that every meeting and deliverable to a customer is a vehicle for company and personal growth?
    e) Do you enter every project and opportunity with a view to ensuring customer delight and loyalty?
    f) Are you fearless about entering new allied technologies such as LabVIEW FPGA, Xilinx/Altera-based FPGA, microcontroller programming and system design?
    If the answer to these questions is yes, please email [email protected] with your most up-to-date resume and prepare to embark on a career that will fuel your job satisfaction and growth in the years to come. A strong technical background in the areas mentioned is essential and will be tested.
    Company Information:
    Soliton Technologies Inc. is a value-driven engineering services company with over 15 years of experience and steady services growth in the Semiconductor, Automotive, Biomedical, and Test and Measurement industries (www.solitontech.com). Soliton's services range from LabVIEW- and TestStand-based Validation Automation (often PXI based), GUI Frameworks, Embedded Development Services on NI embedded targets as well as microcontrollers, High Speed FPGA Design, Enterprise Software Development on multiple programming platforms (C, C++, C#, .NET, Python, Java, HTML5, etc.) and Vision-Based Test Systems. The company has had a strong semiconductor focus over the past decade, with multiple Fortune 500 customers, steady growth, and a track record of customer retention.
    Compensation: Not a constraint for the right candidate.

    Hi,
    Kindly arrange an interview process.
    I have attached my resume.
    Regards,
    Bharath Kumar

  • Why reports are faster when tables are in the local user's schema

    Dear All,
    I have one report referring to ws1, ws2, t1 and t2, which are in user "test",
    so in my report query I write:
    select a.code,b.edu,c.dept_name,d.desg_desc
    from test.ws1 a, test.ws2 b, test.t1 c, test.t2 d
    where a.code = b.code
    and a.dept = c.dept
    and a.desg = d.desg
    The report is run by "user123", who has been granted SELECT on all 4 of the above tables.
    The report runs but takes time.
    But if I make the above 4 tables available in "user123" and write the query (with the user name removed from the FROM clause) as
    select a.code,b.edu,c.dept_name,d.desg_desc
    from ws1 a, ws2 b, t1 c, t2 d
    where a.code = b.code
    and a.dept = c.dept
    and a.desg = d.desg
    and the report is then run by "user123",
    the report runs and takes considerably less time.
    My understanding is that when the tables are in the current user's schema, it takes less time because Oracle doesn't have to go to the other user and search for the tables.
    Is this correct, or are there other reasons for it?
    Kindly advise.
    Best Regards,
    Devendra

    If the tables are owned by a user in the same database, then there should be no difference in performance.
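    If the goal is simply to avoid schema-qualifying every table, synonyms give "user123" the same name resolution without copying any data (a minimal sketch using the names from the post, created in user123's schema):
    create synonym ws1 for test.ws1;
    create synonym ws2 for test.ws2;
    create synonym t1 for test.t1;
    create synonym t2 for test.t2;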

  • How do I create a dynamic growing table which holds "copies" of the footer Row from several instances?

    Mission:        
    To create a summary table with Rows from several (yet to be determined) instances
    Coordinates of the enemy   (or Row in question):  
    ROOT.category[*].sub_category_total.sub_category_summary.Row1
    Background information:
    An order form with options for several categories (2 - 10), each with subcategories (2 - 20).
    Since the finished form might be up to 20 pages long, I would like to get to the point
    with the click of a button (so to speak).
    Therefore a summary list seems the only logical solution.
    Reward:
    Unending gratefulness and publishing of finished sample for others to learn from
    Additional Note:
    Should you choose to accept this mission, I will never deny your existence and will
    always give credit where credit belongs.
    This message will NOT self-destruct.    

    Hi Steve,
    thanks for the example ... nice and close, but not a 100% solution.
    The sample requires me to set up the summary table with fixed rows.
    But I don't know how many main categories & subcategories there will be - it can vary between 2 and 20.
    One solution would be to set up a table with 20+ rows and hide them if the rawValue ... equals 0.
    That might work - but it is not very elegant ....
    I guess I was looking for a script that counts the instances and then automatically creates the necessary rows.
    I know it is a lot to ask in a forum ... but I guess it might be possible. Isn't it?
    Please let me know if I'm reaching for the impossible
    Jixin
