Database data file growing very fast

Hi
I have a database that runs on SQL Server 2000.
A few months back, the database was moved to a new server because the old server crashed.
There was no issue on the old server, which had been in use for more than 10 years.
I noticed that the data file has been growing very fast since the database was moved to the new server.
When I run "sp_spaceused", a lot of the space is unused. Below is the result:
database size = 50950.81 MB
unallocated space = 14.44 MB
reserved = 52048960 KB
data = 9502168 KB
index size = 85408 KB
unused = 42461384 KB
When I run "sp_spacedused" only for one big table, the result is:
reserved = 19115904 KB
data = 4241992 KB
index size = 104 KB
unused = 14873808 KB
I have shrunk the database, but the size didn't decrease.
How can I reduce the size? Thanks.

Hello Thu,
can you check whether you have active jobs in the Microsoft SQL Server Agent which may...
rebuild indexes?
run maintenance jobs of your application?
I'm quite confident that index maintenance is causing the "growth".
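As a quick check, here is a minimal sketch that lists Agent job steps whose commands look like index maintenance (the LIKE patterns are illustrative; the msdb tables used here exist on SQL Server 2000):
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s ON s.job_id = j.job_id
WHERE s.command LIKE '%DBREINDEX%'    -- DBCC DBREINDEX (SQL 2000 index rebuild)
   OR s.command LIKE '%INDEXDEFRAG%'  -- DBCC INDEXDEFRAG
   OR s.command LIKE '%REBUILD%';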
Shrinking the database is...
useless and
nonsense
if you have index maintenance tasks. Shrinking the database means moving data pages from the very end of the database to the first free spot in the database file(s). This causes index fragmentation.
When the nightly index maintenance job rebuilds the indexes, it allocates NEW space in the database for the rebuilt data pages!
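Note also that on SQL Server 2000 the figures reported by sp_spaceused can be stale. A minimal sketch to refresh the counts before re-checking (the table name is a placeholder):
DBCC UPDATEUSAGE (0)                      -- correct page/row counts for the current database
EXEC sp_spaceused @updateusage = 'TRUE'   -- database totals with refreshed counts
EXEC sp_spaceused 'dbo.YourBigTable'      -- then re-check the big table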
Read the blog post from Paul Randal about it here:
http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/
MCM - SQL Server 2008
MCSE - SQL Server 2012
db Berater GmbH
SQL Server Blog (german only)

Similar Messages

  • PSAPSR3 Tablespace is only growing very fast in PROD

    Dear All,
In our PROD server, the PSAPSR3 tablespace is growing very fast (note: within 5 days I have extended the PSAPSR3 tablespace twice).
Please let me know: is the only permanent solution to keep extending the tablespace, or is there an alternative way to control the growth of this specific tablespace?
Please check the DB02 tablespace details:
    PSAPSR3     219,640.00     10,010.81     95     YES     220,000.00     10,370.81     95     22     157,305     226,884     ONLINE     PERMANENT
    PSAPSR3700     71,120.00     3,506.75     95     YES     170,000.00     102,386.75     40     17     868     11,389     ONLINE     PERMANENT
    PSAPSR3USR     20.00     1.94     90     YES     10,000.00     9,981.94     0     1     38     108     ONLINE     PERMANENT
    PSAPTEMP     4,260.00     4,260.00     0     YES     10,000.00     10,000.00     0     1     0     0     ONLINE     TEMPORARY
    PSAPUNDO     10,000.00     8,391.44     16     NO     10,000.00     8,391.44     16     1     20     498     ONLINE     UNDO
    SYSAUX     480.00     22.88     95     YES     10,000.00     9,542.88     5     1     991     2,633     ONLINE     PERMANENT
    SYSTEM     880.00     5.44     99     YES     10,000.00     9,125.44     9     1     1,212     2,835     ONLINE     PERMANENT
    Kindly advise

Dear MHO/Sunil/Eric,
the PSAPSR3 tablespace still keeps on growing.
Please check the DB02 segment details.
    SAPSR3     BALDAT          TABLE     PSAPSR3     42,622.000     268.800     853     5,455,616
    SAPSR3     SYS_LOB0000072694C00007$$          LOBSEGMENT     PSAPSR3     5,914.000     191.533     277     756,992
    SAPSR3     CDCLS          TABLE     PSAPSR3     9,091.000     38.400     327     1,163,648
    SAPSR3     SYS_LOB0000082646C00006$$          LOBSEGMENT     PSAPSR3     1,664.000     37.067     209     212,992
    SAPSR3     BALDAT~0          INDEX     PSAPSR3     5,049.000     32.000     266     646,272
    SAPSR3     EDI40          TABLE     PSAPSR3     3,155.000     23.467     233     403,840
    SAPSR3     CDCLS~0          INDEX     PSAPSR3     1,965.000     19.200     214     251,520
    SAPSR3     BDCP2~001          INDEX     PSAPSR3     1,543.000     18.400     208     197,504
    SAPSR3     BDCPS~1          INDEX     PSAPSR3     4,039.000     17.067     247     516,992
    SAPSR3     APQD          TABLE     PSAPSR3     1,671.000     17.067     210     213,888
    SAPSR3     CDHDR~0          INDEX     PSAPSR3     2,183.000     12.800     218     279,424
    SAPSR3     CDHDR          TABLE     PSAPSR3     2,305.000     12.800     220     295,040
    SAPSR3     BDCP2~0          INDEX     PSAPSR3     1,000.000     12.533     196     128,000
    SAPSR3     ZBIPRICING~0          INDEX     PSAPSR3     320.000     10.600     111     40,960
    SAPSR3     WRPL          TABLE     PSAPSR3     288.000     8.700     107     36,864
    SAPSR3     FAGL_SPLINFO          TABLE     PSAPSR3     1,016.000     8.000     198     130,048
    SAPSR3     FAGL_SPLINFO_VAL~0          INDEX     PSAPSR3     736.000     8.000     163     94,208
    SAPSR3     ZBIPRICING          TABLE     PSAPSR3     208.000     6.931     97     26,624
    SAPSR3     MARC~Y          INDEX     PSAPSR3     176.000     5.533     93     22,528
    SYS     WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_2349179954_18942     TABLE PARTITION     SYSAUX     6.000     5.375     21     768
    SAPSR3     MARC~VBM          INDEX     PSAPSR3     152.000     4.867     90     19,456
    SAPSR3     MARC~D          INDEX     PSAPSR3     136.000     4.367     88     17,408
    SAPSR3     FAGLFLEXA          TABLE     PSAPSR3     2,052.000     4.267     216     262,656
    SAPSR3     RFBLG          TABLE     PSAPSR3     3,200.000     4.267     233     409,600
    SAPSR3     BDCPS          TABLE     PSAPSR3     1,280.000     4.267     203     163,840
    SAPSR3     BDCP~POS          INDEX     PSAPSR3     3,392.000     4.267     236     434,176
    SAPSR3     BALHDR          TABLE     PSAPSR3     864.000     4.000     179     110,592
    SAPSR3     FAGL_SPLINFO~0          INDEX     PSAPSR3     361.000     3.767     117     46,208
    SAPSR3     ACCTIT          TABLE     PSAPSR3     289.000     3.733     108     36,992
    SAPSR3     WRPT~0          INDEX     PSAPSR3     112.000     3.731     85     14,336
    SAPSR3     FAGL_SPLINFO_VAL          TABLE     PSAPSR3     448.000     3.467     127     57,344
    SAPSR3     COEJ          TABLE     PSAPSR3     1,089.000     3.200     201     139,392
    SAPSR3     ZBISALEDATA3          TABLE     PSAPSR3     176.000     3.200     93     22,528
    SAPSR3     COEP~1          INDEX     PSAPSR3     927.000     3.167     187     118,656
    SAPSR3     GLPCP          TABLE     PSAPSR3     891.000     2.933     183     114,048
    SAPSR3     ZBISALEDATA          TABLE     PSAPSR3     376.000     2.933     118     48,128
    SAPSR3     WBBP          TABLE     PSAPSR3     344.000     2.933     114     44,032
    SYS     WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_2349179954_18918     TABLE PARTITION     SYSAUX     6.000     2.594     21     768
    SAPSR3     FAGL_SPLINFO~1          INDEX     PSAPSR3     280.000     2.400     106     35,840
    SAPSR3     SE16N_CD_DATA          TABLE     PSAPSR3     72.000     2.333     80     9,216
    SAPSR3     KONH          TABLE     PSAPSR3     1,373.000     2.133     207     175,744
    SAPSR3     GLPCA          TABLE     PSAPSR3     2,437.000     2.133     222     311,936
    SAPSR3     BDCP~0          INDEX     PSAPSR3     1,863.000     2.133     213     238,464
    SAPSR3     SYS_LOB0000161775C00013$$          LOBSEGMENT     PSAPSR3700     5,210.000     2.133     266     666,880
    SAPSR3     BDCPS~0          INDEX     PSAPSR3     2,496.000     2.133     222     319,488
    SAPSR3     D010TAB          TABLE     PSAPSR3700     2,176.000     2.133     217     278,528
    SAPSR3     COEP          TABLE     PSAPSR3     2,117.000     2.133     217     270,976
    SAPSR3     FAGLFLEXA~0          INDEX     PSAPSR3     808.000     2.133     172     103,424
    SAPSR3     BSIS          TABLE     PSAPSR3     1,734.000     2.133     211     221,952
    SAPSR3     BSAS          TABLE     PSAPSR3     1,650.000     2.133     210     211,200
    SAPSR3     GLPCA~3          INDEX     PSAPSR3     382.000     1.867     119     48,896
    SAPSR3     BKPF          TABLE     PSAPSR3     1,012.000     1.867     198     129,536
    SAPSR3     FAGLFLEXA~3          INDEX     PSAPSR3     744.000     1.867     164     95,232
    SAPSR3     FAGLFLEXA~2          INDEX     PSAPSR3     661.000     1.867     154     84,608
    SAPSR3     WRPL~001          INDEX     PSAPSR3     112.000     1.867     85     14,336
    SAPSR3     WRPL~0          INDEX     PSAPSR3     112.000     1.667     85     14,336
    SAPSR3     PCL2          TABLE     PSAPSR3     1,000.000     1.600     196     128,000
    SAPSR3     GLPCA~2          INDEX     PSAPSR3     345.000     1.600     115     44,160
    SAPSR3     FAGL_SPLINFO~3          INDEX     PSAPSR3     136.000     1.600     88     17,408
    SAPSR3     MARC~WRK          INDEX     PSAPSR3     160.000     1.600     91     20,480
    SAPSR3     MSEG          TABLE     PSAPSR3     136.000     1.600     88     17,408
    SAPSR3     ZBISALEDATA~0          INDEX     PSAPSR3     208.000     1.600     97     26,624
    SAPSR3     ZBISALEDATA3~0          INDEX     PSAPSR3     195.000     1.500     96     24,960
    SYS     WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_2349179954_18894     TABLE PARTITION
    Kindly suggest
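The segment list above is dominated by application log tables such as BALDAT and CDCLS. To reproduce such a list directly from the database, a hedged sketch using the standard Oracle dictionary views (the 20-row cutoff is arbitrary):
SELECT * FROM (
    SELECT owner, segment_name, segment_type,
           ROUND(bytes / 1024 / 1024) AS size_mb
    FROM dba_segments
    WHERE tablespace_name = 'PSAPSR3'
    ORDER BY bytes DESC
)
WHERE ROWNUM <= 20;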

  • CAS Content lib growing very fast!! HELP.

    Hello guys!!
    The "SCCMContentLib" at CAS in my SCCM 2012 R2 was growing very fast! In 15 minutes increased 3GB!!
    Anyone help me?
    Thanks!!
    Atenciosamente Julio Araujo

Is SP0 your CAS? It looks like the package is being created there. You can read more about the Content Library here:
http://technet.microsoft.com/en-us/library/gg682083.aspx#BKMK_ContentLibrary and here:
http://technet.microsoft.com/en-us/library/gg682083.aspx I would also like to suggest:
    https://social.technet.microsoft.com/Forums/en-US/de323e04-7bff-4d28-b76e-b4ab4c52cf4b/sccmcontentlib-on-cas?forum=configmanagerdeployment
    Tim Nilimaa-Svärd | Blog: http://infoworks.tv | Twitter: @timnilimaa

  • Database Log File becomes very big, What's the best practice to handle it?

The transaction log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP, but familiar with SQL Server. Can anybody give me advice on the best practice for handling this issue?
Should I shrink the database?
I know a bigger hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is only cleared when you take a log backup. If this is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
Follow these steps to get the transaction log file back into normal shape (see the sketch after this list):
1.) Take a transaction log backup.
2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
      The above command shrinks the file to 10 GB (a recommended size for high-transaction systems).
Finke Xie wrote:
> Should I Shrink the Database?
"NEVER SHRINK DATA FILES"; shrink only the log file.
3.) Schedule log backups every 15 minutes.
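Putting those steps together, a minimal T-SQL sketch (the database name, logical log file name, and backup path are placeholders):
-- 1. Back up the transaction log so the inactive portion can be reused
BACKUP LOG YourDatabase TO DISK = 'E:\Backup\YourDatabase_log.trn'
-- 2. Shrink only the LOG file, down to 10 GB (10240 MB)
USE YourDatabase
DBCC SHRINKFILE ('YourDatabase_log', 10240)
-- 3. Then schedule the BACKUP LOG statement as an Agent job every 15 minutes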
    Thanks
    Mush

  • USB file copy very fast and not completed

    Hello
when I try to copy big files (about 700 MB, for example) to a USB device, the copy speed is very fast (about 55 MB/s) but the copy does not actually complete. Can anyone help me solve this problem?

    Linux usually doesn't copy the files all at once. You can force it to flush all
    remaining data by unmounting the USB device, which will take quite a long time
    if you do it immediately after issuing the copy command.
There is also a mount option (sync) that forces all data transfers to be done synchronously, in case you prefer that.

  • WWV_FLOW_DATA growing very fast

    Hi,
We have a public application and we see wwv_flow_data growing very, very fast (up to 5 GB now).
In a way, this is a good sign ;) as it means we have a lot of hits... but we are also starting to see some contention on that table.
It would be nice to be able to set one session purge policy for public (nobody) sessions and another for authenticated sessions.
We have some people who have to stay connected all day, so we cannot purge sessions that are younger than 10 hours.
Is there another way to limit the number of records in wwv_flow_data than using wwv_flow_cache.purge_sessions(p_purge_sess_older_then_hrs => 24)?
    Thanks
    Francis Mignault
    http://insum-apex.blogspot.com/
    http://www.insum.ca

> In the /f?p=4050:65 APEX report, I can see the sessions and users. Is there any way I could use that to delete the records?
No, it doesn't let you select by user name.
You can log in to the workspace, though, and navigate to:
Home > Administration > Manage Services > Manage Session State > Recent Sessions > Session Details
Here you can remove sessions one by one. But that's probably too tedious.
    Scott
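For an automated alternative, a hedged sketch that schedules the purge API mentioned in the question via DBMS_SCHEDULER (the job name and the 24-hour threshold are illustrative; run it from a schema with access to the APEX engine):
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'PURGE_APEX_SESSIONS',  -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN wwv_flow_cache.purge_sessions(p_purge_sess_older_then_hrs => 24); END;',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/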

  • Can not open database - data file missing

I connected to the database as SYSDBA and issued the STARTUP OPEN command, which failed with the errors below.
How can I restore the datafile?
    ORA-01157: cannot identify/lock data file 32 - see DBWR trace file
    ORA-01110: data file 32: 'F:\ORACLE\DATABASE\M23.DAT'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.

    Thank you for your advice.
To issue ALTER TABLESPACE M23 OFFLINE IMMEDIATE, the database must be in OPEN mode,
and I couldn't open the database.
What shall I do to put it in OPEN mode?
    Thank you in advance.
Originally posted by Randall Roberts ([email protected]):
> You have to STARTUP MOUNT and take the tablespace that M23.DAT belongs to offline... probably OFFLINE IMMEDIATE.
> Once you do that you may ALTER DATABASE OPEN. Then you can replace your data file with a backup and use the recover command to recover the tablespace IF you are in archive log mode. If you are not in archive log mode, you'll have to drop the tablespace, recreate it and hopefully you have an export file of its contents you can import.
> Best!
> Randall
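A hedged SQL*Plus sketch of a datafile-level variant of Randall's advice (file number 32 and the file path come from the error messages above; this assumes the database runs in archive log mode):
STARTUP MOUNT
ALTER DATABASE DATAFILE 32 OFFLINE;   -- in MOUNT mode the damaged file can be taken offline
ALTER DATABASE OPEN;                  -- the rest of the database is available again
-- restore F:\ORACLE\DATABASE\M23.DAT from a backup copy at the OS level, then:
RECOVER DATAFILE 32;
ALTER DATABASE DATAFILE 32 ONLINE;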

  • Database Data files on RAW Devices

    Hi,
I'm creating a database and wish to locate the tablespaces on raw devices.
How do I specify the datafiles for the tablespaces on the raw device?

    > Any reason for moving to raw device?
    I actually prefer raw devices and ASM over using cooked file systems.
    By default RAW devices eliminate the "interference" of the kernel. It does not manage a file system on the device. There are no other foreign processes that can use that raw device (as is often the case with cooked devices). There is no o/s file system cache for that device. A physical I/O on the Oracle side means a dinkum physical I/O and not a maybe-logical-I/O-from-the-file-system-o/s-cache.
    There is no need for me as DBA to manage that file system according to OFA standards (ASM does it for me). There is no need for me to go through the learning curve of using a 3rd party volume manager. I have features like automated load balancing while the instance is up and running.
    Yes, there is an argument to be made for just how much of a performance improvement in I/O one can get from a raw device versus a cooked file system. But I/O performance is not the only consideration.
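To answer the original question directly: you point the datafile at the raw device path, sized slightly smaller than the raw partition. A hedged sketch (the device path, tablespace name, and sizes are placeholders for your platform):
CREATE TABLESPACE app_data
    DATAFILE '/dev/raw/raw1' SIZE 1020M REUSE   -- ~1 GB raw partition; leave some slack
    EXTENT MANAGEMENT LOCAL;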

  • After upgrade ZCM 10.3.4, DB grow very fast

    HI All
I tested ZCM 10.3.4 in my lab, and there seemed to be no critical problem, so I upgraded my customer's production environment. When the ZCM upgrade completed... the DB and transaction log sizes grew from 70 GB/2 GB to 102 GB/165 GB in 2 days... a terrible rate.
Now the DBA has truncated the transaction log, and the size is down to 2 GB, but I still don't know why, or what happened to my ZCM server... Could anyone tell me which DB actions the 10.3.4 upgrade path runs (like clearing NC_COMPCHANGES or PatchScanAuditLog)?
    wyldkao

    wyld,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Very fast growing STDERR# File

    Hi experts,
I have stderr# files on two app servers which are growing very fast.
The problem is, I can't open the files via ST11 as they are too big.
Is there a guide that explains what this file is about and how I can manage it (reset, ...)?
Might it be a locking log?
I have a few entries in SM21 about failed locks.
I can also find entries about "call recv failed" and "comm error, cpic return code 020".
    Thx in advance

    Dear Christian,
The stderr* files are used to record the syslog and logon checks. When the system is up, only one of them should be in use, and you can delete the others: for example, if stderr1 is being used, you can delete stderr0,
stderr2, stderr3, and so on. Otherwise, only shutting down the application server will allow deletion. Once deleted, the files will be created
again, and they will only grow if the original issue causing the growth still exists; switching between the files is internal and not controlled by size.
    Some causes of 'stderr4' growth:
In the case of repeated input/output errors on a TemSe object (in particular in the background), large portions of trace information are written to stderr. This information is not necessary and not useful in this quantity.
    Please review carefully following Notes :
   48400: Reorganization of TemSe and Spool
  (use it to delete old TemSe objects)
RSPO0041 (or RSPO1041), RSBTCDEL: to delete old TemSe objects
    RSPO1043 and RSTS0020 for the consistency check.
    1140307 : STDERR1 or STDERR3 becomes unusually large
    Please also run a Consistency Check of DB Tables as follows:
    1. Run Transaction SM65
    2. Select Goto ... Additional tests
    3. Select "Consistency check DB Tables" and click execute.
    4. Once you get the results check to see if you have any inconsistencies
       in any of your tables.
5. If there are any inconsistencies reported, then run the "Background
   Processing Analyses" (SM65 ... Goto ... Additional Tests) again.
       This time check both the "Consistency check DB Tables" and the
       "Remove Inconsistencies" option.
    6. Run this a couple of times until all inconsistencies are removed from
       the tables.
Make sure you run this SM65 check when the system is quiet and no other batch jobs are running, as it puts a lock on the TBTCO table until it finishes. This table may be needed by any other batch job that is running or scheduled to run while the SM65 checks are running.
Running these jobs daily should ensure that the stderr files do not grow at this rate in the future.
If the system is running smoothly, these files should not grow very fast, because they mostly just record error information as it happens.
    For more information about stderr please refer to the following note:
       12715: Collective note: problems with SCSA
              (the Note contains the information about what is in the  stderr and how it created).
    Regards,
    Abhishek

  • WLS_DIAGNOSTICS0~.DAT files occupying more space

    Hi,
WLS_DIAGNOSTICS0~.DAT files are occupying more and more space, and because of this the server disk is filling quickly. Please let me know what the cause is.
This is WebLogic 9.1 with Oracle 9i.
    System = SunOS
    Release = 5.10
    KernelID = Generic_118833-36
    Machine = sun4v
    BusType = <unknown>
    Serial = <unknown>
    Users = <unknown>
    OEM# = 0
    Origin# = 1
    NumCPU = 32
    Thanks
    Hanuman
    Edited by: user9166997 on 11-Apr-2011 06:48
    Edited by: user9166997 on 11-Apr-2011 06:54

    Hi Hanuman,
Try tuning the elements below; it could be that your file is growing very fast within the check interval.
<store-size-check-period> sets the interval at which <preferred-store-size-limit> is checked to see if the size has been exceeded.
For more information have a look at the link below, in which René van Wijk has explained it.
    [WLS - 10.3] - Issue with Diagnostic Archive.
    Topic: Retiring Data from the Archives
    http://download.oracle.com/docs/cd/E11035_01/wls100/wldf_configuring/config_diag_archives.html#wp1069508
    Regards,
    Ravish Mody
    http://middlewaremagic.com/weblogic
    Come, Join Us and Experience The Magic…

  • BPM data increase very fast and want to get suggestion about BPM capacity

    Dear BPM Experts:
I have a problem with BPM capacity. My customer is using BPM 11g, and every day they
have 1000 new processes; every process has 20-30 tasks. They find the data increasing very fast, about 1 GB/day.
We have done a capacity test: I created a new simple process named simpleProcess,
which has only three input fields, and I used the API to initiate the task and submit it to the next
person.
We are using the dev_soainfra tablespace with the default audit level. After inserting 5,000 tasks, we found dev_soainfra had reached 362.375 MB,
so we assume 30,000 tasks will use 362 MB * 6 = about 2 GB of database space. In the next phase my customer wants
to push the BPM platform to more customers, which means more and more customers will be using this platform, so
I want to ask: is this rate of data growth reasonable? Do you have a capacity planning guide for BPM 11g? And if I want to
lower the data growth, what can we do?
We have tried turning the audit log off, but it seems useless; it only saved 8% of the space.
    Thanks for your help!
    Eric
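To see which SOA/BPM tables are actually consuming the space before deciding on a purge strategy, a hedged sketch (the schema name DEV_SOAINFRA is an assumption matching the tablespace naming above):
SELECT segment_name, segment_type,
       ROUND(SUM(bytes) / 1024 / 1024, 1) AS size_mb
FROM dba_segments
WHERE owner = 'DEV_SOAINFRA'
GROUP BY segment_name, segment_type
ORDER BY size_mb DESC;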

    It looks like you are writing your data to disk every so often.  For that reason, I recommend making it based on the number of samples you have instead of the time.  With that you can preallocate your arrays with constants going into the shift registers.  You then use Replace Array Subset to update your arrays.  When you write to the file, make sure you go back to overwriting the beginning of your array.  This will greatly reduce the amount of time you spend reallocating memory and will reduce your memory usage.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • How to Re org the SQL 2008 database to have equal size data files

    Hello everyone -
I have an I/O issue with our production system (SAP ECC 6.0 on Windows 2008 and MSSQL 2008).
Following are my I/O stats from DB02 (since DB start):
    ECPDATA1     E:     Data       25.779     28.053     4.689     1,056.68     36.297     23.326     2.515     45.301     14.43
    ECPDATA2     E:     Data       23.143     24.979     4.68     593.297     17.971     12.448     1.238     47.663     14.518
    ECPDATA3     G:     Data   9.17     9.807     3.477     1,018.69     36.144     21.938     2.457     46.434     14.712
    ECPDATA4     F:     Data  10.985     11.788     2.69     148.512     4.777     3.248     0.314     45.722     15.201
    ECPDATA5     F:     Data       14.746     16.164     2.676     162.39     6.693     3.679     0.432     44.139     15.491
    ECPLOG1      D:     Log        5.337     27.081     4.916     26.962     26.264     0.037     1.919     726.928     13.688
    ECPLOG2      F:     Log     0.755     17.582     0.487     35.472     35.161     0.042     2.637     845.998     13.334
The fourth column is ms/op, which is very high and also asymmetrical across the data files.
Also, the data files are not of equal size:
    ECPDATA1     106,173
    ECPDATA2     59,588
    ECPDATA3     105,036
    ECPDATA4     14,992
    ECPDATA5     16,491
    ECPLOG1       1,025
    ECPLOG2             3,199
So ECPDATA1 and ECPDATA3 are about 105 GB, while #4 and #5 are 14 GB and 16 GB. As per SQL Server best practices, all data files should be of equal size to get the best performance.
How do I make the data files equal in size?
    Your help is very much appreciated.
    Thank you
    -TSB

    Hi dudes!
The key here is to manually grow your data files before the autogrow mechanism comes into play; in fact, SAP recommends setting autogrow only to cover the hypothetical case, which should never arise, in which the DB administrator forgot about the database size and it ran out of space.
Otherwise, if you properly monitor and manage your DB, you should always ensure that at least (say) 30% of free space is allocated in your data files. If you do so, the SQL Server engine will do the rest, as it follows a proportional-fill strategy, as you can read in SAP Note 1238993.
If your database is still not proportional, my advice is to manually grow the data files so that they are all the same size; SQL Server will then do the rest. If you need to address it immediately, however, you will have to reorganize your database, which is not only more sensitive and complicated but also involves some I/O-intensive operations (check SAP Note 159316).
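Following that advice, a hedged T-SQL sketch that grows the smaller files up to the size of the largest one (the database name ECP and the 106,173 MB target come from the figures above; the logical file names may differ on your system):
ALTER DATABASE ECP MODIFY FILE (NAME = 'ECPDATA2', SIZE = 106173MB);
ALTER DATABASE ECP MODIFY FILE (NAME = 'ECPDATA4', SIZE = 106173MB);
ALTER DATABASE ECP MODIFY FILE (NAME = 'ECPDATA5', SIZE = 106173MB);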
    Cheers!!
    --Jesú

  • Data / Data files / Database GROWTH

    Dear experts,
I have a practical question on reading/determining the exact fluctuations in the size of
a database.
I am publishing this question in the Oracle section, as my database is Oracle, but if I am
not wrong, all of this should be valid for any database.
So, the question itself: I have a system where all the datafiles/tablespaces are set to
AUTOEXTEND, and the size of each growth step is 200 MB (meaning that when a file grows automatically,
the increment size is 200 MB). Now I would like to see whether the datafiles grew automatically,
let's say today, and if so, by how many increments.
Furthermore, I would like to ask: browsing ST04, in Space / Database / Overview on the
history tab, across all the daily/weekly/monthly changes, how can I evaluate whether a change was
only "internal" growth, where the database grew at the expense of the free space
in the DB itself, versus growth that also caused a data file to grow automatically via its
AUTOEXTEND option?

    Dear Deepak,
after quite a lot of googling, I found this:
SELECT TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY') days,
       ts.tsname,
       MAX(ROUND((tsu.tablespace_size * dt.block_size) / (1024 * 1024), 2)) cur_size_mb,
       MAX(ROUND((tsu.tablespace_usedsize * dt.block_size) / (1024 * 1024), 2)) usedsize_mb
FROM   dba_hist_tbspc_space_usage tsu,
       dba_hist_tablespace_stat ts,
       dba_hist_snapshot sp,
       dba_tablespaces dt
WHERE  tsu.tablespace_id = ts.ts#
  AND  tsu.snap_id = sp.snap_id
  AND  ts.tsname = dt.tablespace_name
  AND  ts.tsname NOT IN ('SYSAUX', 'SYSTEM')
GROUP BY TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY'), ts.tsname
ORDER BY ts.tsname, days;
It comes the closest to what I really need. What I actually need is the above query,
but per data file, not per tablespace.
    Thanks a lot!!
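AWR does not keep an obvious per-datafile size history, but the current size and autoextend settings of each datafile can be listed with a hedged sketch like the one below (standard dictionary view; snapshot it daily to build your own per-file history):
SELECT file_name,
       tablespace_name,
       ROUND(bytes / 1024 / 1024) AS size_mb,
       autoextensible,
       increment_by   -- autoextend increment, in database blocks
FROM   dba_data_files
ORDER  BY tablespace_name, file_name;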

  • How can i recover my database after losing system data file.

Hi everyone,
how can I recover my database in the following scenario?
1. An offline complete backup was taken 2 days ago; the database was in archivelog mode.
2. Today I lost my SYSTEM data file and also lost all my archived log files.
3. I started up the database, but the following error was generated:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 135338868 bytes
    Fixed Size 453492 bytes
    Variable Size 109051904 bytes
    Database Buffers 25165824 bytes
    Redo Buffers 667648 bytes
    Database mounted.
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\ORCL\SYSTEM01.DBF'
4. I copied the SYSTEM data file from the backup and issued the following statement to recover the database:
    SQL> recover datafile 1;
    ORA-00279: change 2234434 generated at 07/15/2009 10:52:10 needed for thread 1
    ORA-00289: suggestion : C:\B\ARC00051.001
    ORA-00280: change 2234434 for thread 1 is in sequence #51
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Now I don't have any archive files. Is there any chance to recover the database?
    R e g a r d s,
    Asif Iqbal
    Software Engineer,
    Lucky Tex, Karachi,
    Pakistan.

> now i don't have any archive file. is there any chance to recover the database?
If no archive log files are available, you can't recover the datafile. You need all the archives from the time the offline backup was taken until the moment the SYSTEM datafile was lost.
    Anand
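Since the archives are gone, the only remaining option is to fall back to the 2-day-old consistent offline backup and accept the lost work. A hedged sketch of that path (assuming the offline backup contains all datafiles and control files; opening with RESETLOGS is needed because the restored control file is a backup):
SHUTDOWN ABORT
-- restore ALL datafiles and control files from the offline backup at the OS level, then:
STARTUP MOUNT
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;   -- reply CANCEL immediately
ALTER DATABASE OPEN RESETLOGS;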
