Collect segment size history in 10g

Hi,
In order to know how much each segment and tablespace grows each month,
I have a job that queries dba_segments and dba_data_files and saves the results in database tables.
Does 10g collect this information automatically, or should I continue collecting these statistics myself?
Thanks.

You can try to use the DBMS_SPACE package. For example:
SQL> column timepoint format a30
SQL> select * from
  2  table(dbms_space.OBJECT_GROWTH_TREND(
  3          'SYS', 'OBJ$', 'TABLE', null, to_timestamp('14-JUL-2008','DD-MON-YYYY'), null, numtodsinterval(30, 'DAY') ));
TIMEPOINT                      SPACE_USAGE SPACE_ALLOC QUALITY
14-JUL-08 12.00.00.000000 AM       1318466     2097152 INTERPOLATED
13-AUG-08 12.00.00.000000 AM       1318466     2097152 GOOD
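10g does in fact collect some of this automatically: the AWR samples segment-level statistics with every snapshot. Assuming your AWR retention covers the period you care about (and that you are licensed for the Diagnostics Pack), a sketch of a history query against DBA_HIST_SEG_STAT would be:

```sql
-- Sketch only: requires the Diagnostics Pack license; the join to
-- DBA_OBJECTS via OBJ# is a simplification that may need DATAOBJ#
-- for some object types.
SELECT sn.end_interval_time,
       o.owner,
       o.object_name,
       st.space_used_total,
       st.space_allocated_total
FROM   dba_hist_seg_stat st,
       dba_hist_snapshot sn,
       dba_objects       o
WHERE  st.snap_id    = sn.snap_id
AND    st.dbid       = sn.dbid
AND    st.obj#       = o.object_id
AND    o.object_name = 'OBJ$'
ORDER  BY sn.end_interval_time;
```

The *_TOTAL columns hold the value as of each snapshot; DBA_HIST_SEG_STAT also has *_DELTA columns if you prefer per-interval growth.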

Similar Messages

  • No size history for SQL Database in dbacockpit for new system

    Hello,
    we have installed a new ERP 6.0 system on Windows with SQL Server 2005 (3 months ago).
    Now we do not have any data in the database size history in the dbacockpit.
    The message is: No size history. Check the DB Collector status.
    We already checked the state of the collector. It runs every 20 minutes and collects performance data, but nothing for the growth of the database.
    We also deleted the job in the SQL Agent and rescheduled it with the report MSSPROCS.
    Does anyone have an idea what the problem is?
    Best regards
    Petra Wöritz

    Hello all together,
    we are still facing the same problem.
    I checked all the points you suggested:
    - the performance collector job runs in client 000
    - time zone is set to CET
    - the SAPOSCOL is running
    - no errors by executing the SQL statement EXECUTE sap_dbcoll
    - no errors in SQL Job SAP_sid_SID_MSSQL_COLLECTOR
    We don't have any data in the size history,
    but we found an error message in dev_w0:
    C  dbdsmss: DBSL26 SQL3621C  Violation of PRIMARY KEY constraint 'PK__#perfinfo_________0C5BC11B'. Cannot insert duplicate key in object 'dbo.#perfinfo'. The statement has been terminated.
    It occurs when we try to get the history data in DB02 (or dbacockpit).
    Note 1171828 doesn't apply.
    We have already SAP_BASIS     701     0007
    We've got a system with SAP_BASIS     702     0006
    There the error is displayed in dbacockpit directly:
    SQL error 2627: [Microsoft][SQL Native Client][SQL Server]Violation of PRIMARY KEY constraint 'PK__#perfinfo_________5A303401'. Cannot insert duplicate key in object
    Any ideas?
    Best regards
    Petra

  • Unable to collect Product Return History using legacy collection

    Hi,
    I am facing an issue collecting product return history using legacy collection: the File Upload (User File Upload) and Loader Worker requests error out as below. As far as I can observe, a space is being inserted before the .ctl, .dis and .bad file extensions in the paths.
    Can someone guide me on how to resolve the issue below?
    Loader Worker
    Argument 1 (CTRL_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849MSD_DEM_RETURN_HISTORY .ctl
    Argument 2 (DATA_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849PrdRetHist.dat
    Argument 3 (DISCARD_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849MSD_DEM_RETURN_HISTORY .dis
    Argument 4 (BAD_FILE) = /u02/oracle/xxxxx/inst/apps/rights_apps/logs/appl/conc/out/5913849MSD_DEM_RETURN_HISTORY .bad
    Argument 5 (LOG_FILE) =
    Argument 6 (NUM_OF_ERRORS) = 1000000
    ===================================================================
    plan_id:0 plan_type:0 planning_engine_type:1
    Creating dummy log file ...
    Parent Program Name: MSCLOADS
    This is NOT as part of a Plan run.
    NLS_LANG original American_America.AL32UTF8 alt American_America.UTF8
    LRM-00112: multiple values not allowed for parameter 'control'
    SQL*Loader: Release 10.1.0.5.0 - Production on Tue Mar 11 19:58:20 2014
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    SQL*Loader-100: Syntax error on command-line
    Program exited with status 1
    APP-FND-01630: Cannot open file /u02/oracle/xxxxx/inst/apps/rights_apps/appltmp/OFq98wrx.t for reading
    Cause: USDINS encountered an error when attempting to open file /u02/oracle/xxxxx/inst/apps/rights_apps/appltmp/OFq98wrx.t for reading.
    Action: Verify that the filename is correct and that the environment variables controlling that filename are correct.
    Action: If the file is opened in read mode, check that the file exists. Check that you have privileges to read the file in the file directory. Contact your system administrator to obtain read privileges.
    Action: If the file is opened in write or append mode, check that you have privileges to create and write files in the file directory. Contact your system administrator to obtain create and write privileges.
    ***** End Of Program - No title available *****
    File Upload (User File Upload)
    Tue Mar 11 19:57:52 RET 2014: Profile 'MRP_DEBUG' Value : N
    Tue Mar 11 19:57:52 RET 2014: ===============================================================
    Tue Mar 11 19:57:52 RET 2014: fileLoaderInit: paramName = pLOAD_ID; paramValue=41563
    Tue Mar 11 19:57:52 RET 2014: ===============================================================
    Tue Mar 11 19:57:52 RET 2014: The control file Path /u02/oracle/xxx/apps/apps_st/appl/msc/12.0.0/patch/115/import/MSD_DEM_RETURN_HISTORY .ctl does not exist. Please contact your System  Administrator
    Regards,
    ML

    Hi,
    Log in to the Unix server; I believe the control file is placed in a custom top, say $MSC_TOP, in your environment.
    Just rename the ctl file so there is no space before the extension (MSD_DEM_RETURN_HISTORY<space>.ctl),
    and try to upload the file once again.
    Hope this helps!
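    The renaming suggested above can be sketched like this; the /tmp/ctl_demo directory only reproduces the problem locally, and on a real system the file would live under the concurrent output directory shown in the log:

```shell
# Sketch only: /tmp/ctl_demo just reproduces the problem locally.
mkdir -p /tmp/ctl_demo && cd /tmp/ctl_demo
# Reproduce the bad name: a control file ending in "<space>.ctl"
touch 'MSD_DEM_RETURN_HISTORY .ctl'
# Rename it so the extension follows the name with no space
mv 'MSD_DEM_RETURN_HISTORY .ctl' 'MSD_DEM_RETURN_HISTORY.ctl'
ls
```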

  • Collection of stats prior to 10g upgrade for dictionary tables

    Is collecting stats for dictionary tables required prior to a 10g upgrade? I thought that if you upgrade from 9i to 10g, the 9i stats become stale in 10g.
    I read a document that says you need to collect stats prior to the upgrade and import them once the upgrade is done. Can someone guide me?

    I don't know where you got that. There are some optimizer improvements and new features in 10g, but collecting stats before the upgrade and importing them afterwards is unheard of.
    I suggest you check this article
    Choosing An Optimal Stats Gathering Strategy
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    and the white paper mentioned,
    White Paper entitled Upgrading from Oracle Database 9i to 10g: What to expect from the Optimizer.
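    For what it's worth, if you merely want a safety copy of the existing statistics before the upgrade (as a fallback, not as a replacement for re-gathering in 10g), DBMS_STATS can export them into a staging table. A sketch; the table name STATS_BKP and the SYSTEM owner are arbitrary choices:

```sql
-- Sketch only: STATS_BKP and the SYSTEM owner are arbitrary choices.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYSTEM', stattab => 'STATS_BKP');
  DBMS_STATS.EXPORT_DATABASE_STATS(stattab => 'STATS_BKP', statown => 'SYSTEM');
END;
/
```

    After the upgrade, DBMS_STATS.IMPORT_DATABASE_STATS could restore them from the same table if the new optimizer behaved badly.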

  • Java NIO - TCP segment size abnormally low

    Hi !
    After noticing weird behaviour on our Linux production server in code that works perfectly on my Windows dev box, I used tcpdump to sniff the packets that are actually sent to our clients.
    The code I use to write the data is as simple as :
    // using NIO - buffer is at most 135 bytes long
    channel.write(buffer);
    if (buffer.hasRemaining()) {
        // this never happens
    }
    When the buffer is 135 bytes long, this systematically results in two TCP segments being sent: one containing 127 bytes of data, the other containing 8 bytes of data.
    Our client is an embedded system which is poorly implemented and handles TCP packets as whole application messages, which means that the remaining 8 bytes end up being ignored and the operation fails.
    I googled it a bit, but couldn't find any info about the possible culprit (buffer sizes and default max TCP segment sizes are of course way larger than 127 bytes!)
    Any ideas ?
    Thanks !

    NB the fragmentation could also be happening in any intermediate router.
    All I can suggest is that you set the TCP send buffer size to some multiple of the desired segment size, or maybe just set it very large, like 64k-1, so that you can reduce its effect on segmentation.
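    One way to apply that suggestion in NIO code (a sketch: the 64k-1 figure is just the value mentioned above, and the OS is free to round whatever you request):

```java
import java.nio.channels.SocketChannel;

public class SendBufferDemo {
    // Request a large send buffer (64k-1 as suggested above) and return
    // the size the OS actually granted; the OS may round the request.
    static int requestLargeSendBuffer() throws Exception {
        SocketChannel channel = SocketChannel.open();
        channel.socket().setSendBufferSize(64 * 1024 - 1);
        int granted = channel.socket().getSendBufferSize();
        channel.close();
        return granted;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("granted send buffer: " + requestLargeSendBuffer());
    }
}
```

    Setting the option before connecting is safest, since some socket options only take full effect when applied pre-connection.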

  • What is the limit of database size in Oracle 10g Standard Edition / Standard Edition One

    Hi All,
    What is the limit of database size in Oracle 10g Standard Edition and Standard Edition One? An Oracle white paper I saw says the limit is 500 GB. Is this limitation correct? If it is, what happens once the limit is reached?
    Please help.
    Shiju

    What white paper would that be? I can't see any limit in the Oracle Database 10g Editions comparisons.
    C.

  • How can I increase SGA size in Oracle 10g

    Hello friends,
    how can I increase my SGA size in Oracle 10g?
    Regards
    Vicky
    Edited by: Vignesh Chinnasamy on 31-Jul-2012 02:28

    Hi,
    **SQL> Show parameter sga ;**
    NAME                                 TYPE        VALUE
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 2G
    sga_target                           big integer 2G
    **SQL> show parameter memory;**
    NAME                                 TYPE        VALUE
    hi_shared_memory_address             integer     0
    shared_memory_address                integer     0
    **[root@mte ~]# ulimit -a**
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 1024
    max locked memory       (kbytes, -l) 32
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 278528
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    *[root@mte ~]#*
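    To actually increase the SGA (assuming an spfile is in use and the OS shared-memory limits shown above allow it), the usual approach is along these lines; 3G is just an example value:

```sql
-- Sketch only: 3G is an example value; requires an spfile and a restart.
ALTER SYSTEM SET sga_max_size = 3G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 3G SCOPE = SPFILE;
-- then, from SQL*Plus:
SHUTDOWN IMMEDIATE
STARTUP
```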

  • Give me the SQL query which calculates the table size in Oracle 10g ECC 6.0

    Hi expert,
    Please give me the SQL query which calculates the table size in Oracle 10g (ECC 6.0).
    Regards

    Orkun Gedik wrote:
    select segment_name, sum(bytes)/(1024*1024) from dba_segments where segment_name = '<TABLE_NAME>' group by segment_name;
    Hi,
    this possibly delivers wrong data in MCOD installations.
    Depending on the Oracle version and patch level, dba_segments does not always have correct data at any given time, especially for indexes right after being rebuilt in parallel (even in DB02, because it uses USER_SEGMENTS).
    It takes a day for the data to get back in line (I never found out who does the correction at night; could it be RSCOLL00?).
    Use the above statement with "OWNER =" in the WHERE clause for MCOD, or connect as the schema owner and use USER_SEGMENTS.
    Use it with
    segment_name LIKE '<TABLE_NAME>%'
    if you would like to see the related indexes as well.
    For partitioned objects, a join from dba_tables / dba_indexes through dba_tab_partitions / dba_ind_partitions to dba_segments
    might be needed, especially for hash-partitioned tables, depending on how they were created (partition names SYS_xxxx).
    Volker
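    Putting Volker's suggestions together, a variant restricting the owner and including the related indexes might look like this ('SAPSR3' is just an example schema owner; <TABLE_NAME> remains a placeholder):

```sql
-- Sketch only: 'SAPSR3' is an example owner; <TABLE_NAME> is a placeholder.
SELECT owner,
       segment_name,
       segment_type,
       SUM(bytes) / (1024 * 1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'SAPSR3'
AND    segment_name LIKE '<TABLE_NAME>%'
GROUP  BY owner, segment_name, segment_type;
```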

  • How to change redo log size in oracle 10g

    Hi Experts,
    Can anybody confirm how to change redo log size in oracle 10g?
    Amit

    Dear Amit,
    You can enlarge the size of the existing online redo log files by adding new groups with files of a different size (origlog$/mirrlog$) and then carefully dropping the old groups together with their associated inactive files.
    Please refer to SAP Note 309526 - Enlarging redo log files to perform the activity.
    Steps to perform:
    STEP 1. Analyze the existing situation and prepare an action plan.
    A. You have to ensure that no more than one log switch per minute occurs during peak times.
    It may be necessary to increase the size of the online redo logs until they are large enough.
    Too many log switches lead to too many checkpoints, which in turn lead to a high write load on the I/O subsystem.
    Use ST04 -> Additional Functions -> Display GV$-Views.
    There you can select:
    GV$LOG_HISTORY ---> for determining your existing log switching frequency
    GV$LOG        ---> lists the status (INACTIVE/CURRENT/ACTIVE), size and sequence number of the existing online redo log files
    GV$LOGFILE    ---> lists the existing online redo log files with their storage paths
    Document the existing online redo log file configuration before enlarging the redo log files.
    It will be helpful if something goes wrong while performing the activities.
    B. Based on the above analysis, plan your new redo log groups and their members with a new, optimal size.
    e.g.
    Group No.    Redo Log File Locations "/oracle/<SID>/"       Size
                 /origlogA           /mirrlogA
    15           log_g15m1.dbf       log_g15m2.dbf              100 MB
    17           log_g17m1.dbf       log_g17m2.dbf              100 MB
                 /origlogB           /mirrlogB
    16           log_g16m1.dbf       log_g16m2.dbf              100 MB
    18           log_g18m1.dbf       log_g18m2.dbf              100 MB
    Continued in the next reply...
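    The plan above translates into SQL along these lines (group numbers, file paths and the 100M size follow the example; the group being dropped is hypothetical and must be INACTIVE first):

```sql
-- Sketch only: names and sizes follow the example plan above.
ALTER DATABASE ADD LOGFILE GROUP 15
  ('/oracle/<SID>/origlogA/log_g15m1.dbf',
   '/oracle/<SID>/mirrlogA/log_g15m2.dbf') SIZE 100M;
-- ...repeat for groups 16, 17 and 18...

-- Switch until an old group becomes INACTIVE, then drop it:
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;  -- assuming group 1 is an old, inactive group
```

    Remember to remove the dropped groups' files at the OS level afterwards; DROP LOGFILE GROUP does not delete them.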

  • Database size history on Java instance

    Hello,
    Does anybody know how to get the database size history (DB02) on a Java-only instance?
    The system is Portal 7 running on SQL Server 2005 and Windows 2003.
    Thanks

    Kruger,
      your best bet for getting this type of information is through SQL Server Enterprise Manager. J2EE engines typically do not grow much, as they rarely hold information but are pass-through systems. For this reason there is little detail in the Portal for this specific type of information.
      If you set up Solution Manager as a CEN monitor for the Java instance, you can define thresholds for the DB MTEs, but again, it won't hold information on the size history.
      Hope that helps - if this answers your question, please set this thread to answered.

  • Installed CS6 Master Collection from order history: "Serial number you provided is valid, but qualifying product can not be found on this computer"

    Hi, I just installed CS6 master collection from order history and when I try to enter Serial Number it tells me that: Serial number you provided is valid, but qualifying product can not be found on this computer. Then it gives me options under drop down menu but Master Collection CS6 is the only one not appearing in a drop down menu.

    Your CS6 must have been purchased as an upgrade.  What it is asking you to select/provide is the name/serial number of the previous version you purchased that qualifies you to install and activate the CS6 upgrade version... this would likely be one of CS3 through CS5.5.
    Error "This serial number is not for a qualifying product" | CS6, CS5.5, CS5
    http://helpx.adobe.com/creative-suite/kb/error-serial-number-qualifying-product.html

  • DB02 - Database size history report

    Hi,
    On our ECC 6.0 PRD system we need to extend the time span of the history report. Currently the report only goes back to 1/31/08; we need it to cover at least a year.
    To navigate to the DB size history report, use:
    Tx DB02 -> Space ->History ->Database size history report
    Regards,
    Sai R.

    The statistics are shown from the time the collector was started.
    If you can't see statistics before 1/31/08, I think there is no way to solve the problem, because there simply is no data before that (RSCOLL00 program).
    But someone else may be able to help you better or give you a solution.
    Antonio Voce.
    Edited by: Antonio Voce on Jun 9, 2008 4:45 PM

  • "Limit Capture/Export File Segment Size To" doesn't work

    I set the limit to 4000 MB before I started capturing HD video, but it didn't work. Several of my files are bigger than 4 GB. This is a problem since I use an online backup service that has a 5 GB maximum file size. Any suggestion to fix this problem is highly appreciated.

    I believe, although I am not 100% sure, that the "Limit Capture/Export File Segment Size To" setting does not apply to Log and Transfer, only to Log and Capture.
    Since Log and Capture works with tape sources, when the limit is hit the tape can be stopped, pre-rolled and started again to create another clip.
    In the case of Log and Transfer, existing files are ingested and transcoded; the clip lengths (and therefore sizes) are already determined by the source clips.
    If you are working with very lengthy clips, you may want to set in and out points to divide the clips into smaller segments.
    MtD

  • Segment size problem

    I use Apache HttpClient to upload XML files to a server over HTTPS. It works for some files, but for others I cannot even get a response. Using snoop to check the network, I found that in the bad cases there is one frame shown as an "unreassembled packet" with an INCORRECT checksum; the size of the frame is 1514, the total length in the Internet Protocol header is 1500, but at the beginning of the connection it shows MSS=1460.
    Can someone explain the relationship between these numbers, and where I can configure or control the segment size when uploading, or is that impossible?

    Yes, 1514 is right: 1514 bytes on the wire is the 1500-byte IP packet plus a 14-byte Ethernet header, and the MSS of 1460 is the 1500-byte MTU minus the 20-byte IP header and the 20-byte TCP header. The incorrect checksum is because I am running snoop directly on my server: the checksum has not yet been calculated by the NIC (checksum offloading), so it is always 0. I have now noticed that the problem happens when the segment reaches 1460 bytes, and that packet is then "unreassembled". So I am wondering where the problem is: SSL, system configuration, or something else. By the way, my server is Solaris 10; as for the peer, I am not sure, IBM, but I think they may use a proxy server.

  • Delete the Company Code assigned to Collection Segment

    Hi All
    I have accidentally assigned a company code to the wrong collection segment, and I need to delete the company code from the segment. The system throws the error: Deletion of company codes not possible in released segment.
    I searched the forum and could not find an exact solution. Could anyone please help me with a solution for this?
    Thanks in advance
    Chandu

    Hi Mark,
    Though I completely agree with your point, what I wanted to say is that once a company code has been wrongly assigned to a segment and the segment has been released, we can no longer delete that company code from the segment. In that case we have to create a new segment and assign the company code to it; in the collection profile, we then assign the new segment and remove the old one.
    Actually this is kind of perplexing to me. If we have been using a segment for some time and later add a company code to the wrong segment by mistake, there should be a provision to delete the company code, at least before any data is transferred from FI to Collections Management. I am not aware of any other solution to this problem. Your valuable comments, please.
    Regards,
    Ravi
