Tablespace growing too large

Good morning gurus,
Sorry if I sound like a novice at some points.
I have a tablespace named VENDING, currently 188,598.6 MB, and it keeps growing. I have to give it extra space every week, and all of it is consumed. It is a permanent tablespace with local extent management and automatic segment space management. This tablespace is the backbone of the database, which is 250 GB in total. We are currently running Oracle 10.2.0.4 on Windows.
Please help
Regards
Deepika

Hi..
Please do mention the database version and the OS.
You need to know which objects and object types live in such a big tablespace: which schemas use it, what they do, and whether they do any kind of DIRECT-path loading into the database. Are all the tables and the indexes in the same tablespace? My feeling is that you have all the tables and the indexes in the same tablespace (the query sketched below can help you check). I would recommend 2 things:
1. Purge data. Talk to the application team, or whoever the concerned person is, agree on a data retention period for the database, and move the rest of the data to another database as history.
2. Keep different tablespaces for the tables and the indexes.
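To see where the space is actually going, query dba_segments for the biggest segments. A minimal sketch (the tablespace name VENDING is taken from your post; Oracle stores names in upper case by default, so adjust if yours differs):

-- Top 20 segments by size in the VENDING tablespace
SELECT *
  FROM (SELECT owner, segment_name, segment_type,
               ROUND(bytes / 1024 / 1024) AS size_mb
          FROM dba_segments
         WHERE tablespace_name = 'VENDING'
         ORDER BY bytes DESC)
 WHERE ROWNUM <= 20;

If a handful of tables or indexes dominate, that tells you where to focus the purging.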
HTH
Anand

Similar Messages

  • Temp tablespace grow too big???

    We have EBS R12.1 on Linux x86_64 with Oracle database version 11.1.0.7. Every week the temporary tablespace grows too big (> 32 GB) and runs out of extents.
    Based on our research, some SQL statement or report causes this issue. If we "analyze statistics", most of the time the problem is fixed, but sometimes we need to run "analyze statistics" several times in one day.
    Does anyone have a solution for this?
    Thanks.

    Please see if these docs help.
    Temporary Segments: What Happens When a Sort Occurs [ID 102339.1]
    Queries to monitor Temporary Tablespace usage [ID 289894.1]
    How Can Temporary Segment Usage Be Monitored Over Time? [ID 364417.1]
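    Alongside those notes, a rough way to see which sessions are holding temp space right now is a join of v$tempseg_usage to v$session (a sketch in the spirit of note 289894.1, not its exact query):

    -- Temp segment usage per session, largest first
    SELECT s.sid, s.serial#, s.username, u.tablespace, u.segtype,
           ROUND(u.blocks * t.block_size / 1024 / 1024) AS used_mb
      FROM v$tempseg_usage  u
      JOIN v$session        s ON s.saddr = u.session_addr
      JOIN dba_tablespaces  t ON t.tablespace_name = u.tablespace
     ORDER BY u.blocks DESC;

    Capturing this while temp is ballooning should point at the offending SQL (the sql_id column of v$tempseg_usage helps too).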
    Thanks,
    Hussein

  • TIme Machine  backup grows too large during backup process

    I have been using Time Machine without a problem for several months, backing up my iMac - 500 GB drive with 350 GB used. Recently TM failed because the backups had finally filled the external drive - a 500 GB USB disk. Since I did not need the older backups, I reformatted the external drive to start from scratch. Now TM tries to do an initial full backup, but the size keeps growing as it backs up, eventually becoming too large for the external drive, and TM fails. It will report, say, 200 GB to back up, then it reaches that point and the "Backing up XXX GB of XXX GB" counter just keeps getting larger. I have tried excluding more than 100 GB of files to get the backup set very small, but it still grows during the backup process. I have deleted plist and cache files as some discussions have suggested, but the same issue occurs each time. What is going on???

    Michael Birtel wrote:
    Here is the log for the last failure. As you see, it indicates there is enough room: 345 GB needed, 464 GB available, but then it fails. I can watch the backup progress; it reaches 345 GB and then keeps growing until it gives an out-of-disk-space error. I don't know what "Event store UUIDs don't match for volume: Macintosh HD" implies; maybe this is a clue?
    No. It's sort of a warning, indicating that TM isn't sure what's changed on your internal HD since the previous backup, usually as a result of an abnormal shutdown. But since you just erased your TM disk, it's perfectly normal.
    Starting standard backup
    Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
    Ownership is disabled on the backup destination volume. Enabling.
    2009-07-08 19:37:53.659 FindSystemFiles[254:713] Querying receipt database for system packages
    2009-07-08 19:37:55.582 FindSystemFiles[254:713] Using system path cache.
    Event store UUIDs don't match for volume: Macintosh HD
    Backup content size: 309.5 GB excluded items size: 22.3 GB for volume Macintosh HD
    No pre-backup thinning needed: 345.01 GB requested (including padding), 464.53 GB available
    This is a completely normal start to a backup. Just after that last message is when the actual copying begins. Apparently whatever's happening, no messages are being sent to the log, so this may not be an easy one to figure out.
    First, let's use Disk Utility to confirm that the disk really is set up properly.
    First, select the second line for your internal HD (usually named "Macintosh HD"). Towards the bottom, the Format should be Mac OS Extended (Journaled), although it might be Mac OS Extended (Case-sensitive, Journaled).
    Next, select the line for your TM partition (indented, with the name). Towards the bottom, the Format must be the same as your internal HD (above). If it isn't, you must erase the partition (not necessarily the whole drive) and reformat it with Disk Utility.
    Sometimes when TM formats a drive for you automatically, it sets it to Mac OS Extended (Case-sensitive, Journaled). Do not use this unless your internal HD is also case-sensitive. All drives being backed up, and your TM volume, should be the same. TM may do backups this way, but you could be in for major problems trying to restore to a mis-matched drive.
    Last, select the top line of the TM drive (with the make and size). Towards the bottom, the Partition Map Scheme should be GUID (preferred) or Apple Partition Map for an Intel Mac. It must be Apple Partition Map for a PPC Mac.
    If any of this is incorrect, that's likely the source of the problem. See item #5 of the Frequently Asked Questions post at the top of this forum for instructions, then try again.
    If it's all correct, perhaps there's something else in your logs.
    Use the Console app (in your Applications/Utilities folder).
    When it starts, click Show Log List in the toolbar, then navigate in the sidebar that opens up to your system.log and select it. Navigate to the "Starting standard backup" message that you noted above, then see what follows that might indicate some sort of error, failure, termination, exit, etc. (many of the messages there are info for developers, etc.). If in doubt, post (a reasonable amount of) the log here.

  • SharePoint TempDB.mdf growing too large? I have to restart SQL Server all the time. Please help

    Hi there,
    On our DEV SharePoint farm > SQL server
    The tempdb.mdf size grows too quickly and too much. I am tired of increasing the space and cannot do that anymore.
    All the time I have to reboot the SQL Server to get tempdb back to a normal size.
    The Live farm is okay (with similar data), so it must be something wrong with our DEV environment.
    Any idea how to fix this please?
    Thanks so much.

    How do you get the tempdb back to 'normal size'? How large is large, and how small is normal?
    Have you put the databases in simple recovery mode? It's normal for dev environments not to have the transaction log backups required to keep the ldf files in check. That won't affect tempdb, but if you've got bigger issues then that might be a symptom.
    Have you turned off autogrowth for the temp DB?
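    Before the next restart, it is also worth checking what is actually filling tempdb. A minimal T-SQL sketch (the DMV reports 8 KB pages):

    -- Breakdown of tempdb space by consumer type, in MB
    SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
           SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
           SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
           SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
      FROM tempdb.sys.dm_db_file_space_usage;

    A large internal-objects figure usually means big sorts or hash joins from queries; a large version store usually means a long-running transaction under snapshot isolation.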

  • EM Application Log and Web Access Log growing too large on Redwood Server

    Hi,
    We have a storage space issue on our Redwood SAP CPS Oracle servers and have found that the two log files above are the main culprits. These files are continually updated, and I need to know what they are and whether they can be purged or reduced in size.
    They have existed since the system was installed. I have tried to open them, but they are too large. I have also tried taking the cluster group offline to see if the files stop being updated, but they continue to be updated.
    Could anyone shed any light on this and what can be done to resolve it?
    Thanks in advance for any help.
    Jason

    Hi David,
    The file names are:
    em-application.log and web access.log
    The File path is:
    D:\oracle\product\10.2.0\db_1\oc4j\j2ee\OC4J_DBConsole_brsapprdbmp01.britvic.BSDDRINKS.NET_SAPCPSPR\log
    Redwood/CPS version is 6.0.2.7
    Thanks for your help.
    Kind Regards,
    Jason

  • SYSAUX tablespace grow too quick????

    We have EBS R12.1 on a Linux system. Recently I found that the SYSAUX tablespace of our development EBS database is growing very quickly. The SYSAUX tablespace has two data files of 6 GB each (12 GB total). In one month all 12 GB were consumed.
    My questions are:
    1. What objects, reports, or anything else take this much space?
    2. How do I free the unneeded space?
    3. What is a reasonable SYSAUX size?
    Thanks.

    I double-checked the SYSAUX space usage and found it only uses less than 100 MB. Why does SYSAUX show all 12 GB of space gone?
    SQL> l
      1  SELECT occupant_name, schema_name, move_procedure,
      2         space_usage_kbytes
      3    FROM v$sysaux_occupants
      4*  ORDER BY 1
    SQL> /

    OCCUPANT_NAME          SCHEMA_NAME         MOVE_PROCEDURE                    SPACE_USAGE_KBYTES
    AO                     SYS                 DBMS_AW.MOVE_AWMETA                            45888
    AUTO_TASK              SYS                                                                  320
    EM                     SYSMAN              emd_maintenance.move_em_tblspc                     0
    EM_MONITORING_USER     DBSNMP                                                                 0
    EXPRESSION_FILTER      EXFSYS                                                                 0
    JOB_SCHEDULER          SYS                                                                 1152
    LOGMNR                 SYSTEM              SYS.DBMS_LOGMNR_D.SET_TABLESPACE               13376
    LOGSTDBY               SYSTEM              SYS.DBMS_LOGSTDBY.SET_TABLESPACE                1600
    ORDIM                  ORDSYS                                                                 0
    ORDIM/PLUGINS          ORDPLUGINS                                                             0
    ORDIM/SQLMM            SI_INFORMTN_SCHEMA                                                     0
    PL/SCOPE               SYS                                                                  640
    SDO                    MDSYS               MDSYS.MOVE_SDO                                     0
    SM/ADVISOR             SYS                                                               198528
    SM/AWR                 SYS                                                              1006144
    SM/OPTSTAT             SYS                                                             10866560
    SM/OTHER               SYS                                                                 8192
    SMON_SCN_TIME          SYS                                                                 3328
    SQL_MANAGEMENT_BASE    SYS                                                                 1728
    STATSPACK              PERFSTAT                                                               0
    STREAMS                SYS                                                                 1216
    TEXT                   CTXSYS              DRI_MOVE_CTXSYS                                    0
    TSM                    TSMSYS                                                               256
    ULTRASEARCH            WKSYS               MOVE_WK                                            0
    ULTRASEARCH_DEMO_USER  WK_TEST             MOVE_WK                                            0
    WM                     WMSYS               DBMS_WM.move_proc                                  0
    XDB                    XDB                 XDB.DBMS_XDB.MOVEXDB_TABLESPACE                56192
    XSAMD                  OLAPSYS             DBMS_AMD.Move_OLAP_Catalog                         0
    XSOQHIST               SYS                 DBMS_XSOQ.OlapiMoveProc                        45888

    29 rows selected.
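    Actually, the listing above answers the question: the space is not "gone", it is held by SM/OPTSTAT (optimizer statistics history), which alone accounts for over 10 GB, plus about 1 GB in SM/AWR. The statistics history can be purged and its retention shortened with documented DBMS_STATS calls; a sketch, where the 7-day retention is only an example:

    -- Check the current retention (the default is 31 days)
    SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;

    -- Purge history older than 7 days, then shorten the retention going forward
    EXEC DBMS_STATS.PURGE_STATS(SYSDATE - 7);
    EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(7);

    Note that purging frees space inside SYSAUX for reuse; the data files themselves will not shrink automatically.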

  • EBS Database R12.1 temporary tablespace grow too quick??

    We have an EBS R12.1 database on a Red Hat Linux server. Recently the temporary tablespace of this database has grown by at least 1 GB every day; the temporary tablespace (with two temp files) has now grown to 45 GB.
    Does anyone know what is wrong?
    How do we fix this problem?

    I eventually figured out that this temporary tablespace growth is caused by OEM.
    The SQL statement is:
    /* OracleOEM */
    SELECT end_time, status, session_key, session_recid, session_stamp, command_id,
           start_time, time_taken_display, input_type, output_device_type,
           input_bytes_display, output_bytes_display, output_bytes_per_sec_display
      FROM (SELECT end_time, status, session_key, session_recid, session_stamp,
                   command_id, start_time, time_taken_display, input_type,
                   output_device_type, input_bytes_display, output_bytes_display,
                   output_bytes_per_sec_display
              FROM v$rman_backup_job_details
             ORDER BY end_time DESC)
     WHERE rownum = 1;
    Anyone know why this statement takes 30 GB of temporary space on EBS R12.1?
    Thanks.
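    The ORDER BY over v$rman_backup_job_details forces a sort, and that view is reportedly expensive to query on some 11g versions, so OEM polling it is a plausible temp consumer. Independently of the root cause, on 11g the temporary tablespace can be shrunk back without being recreated. A sketch, assuming the tablespace is named TEMP:

    -- 11g and later: release unused temp space, keeping 1 GB allocated
    ALTER TABLESPACE temp SHRINK SPACE KEEP 1G;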

  • Music library growing too large...

    I've been using Quod Libet as my music player for a while now, and it is pretty much exactly what I want in a music player.  However, as my music collection grows, it has been slowing down lately.  I have over 8000 songs now, around 40 GB, and Quod Libet will slow down, peg CPU usage, and crash quite often now.  What other options do I have?  I know Amarok can use a real database backend that should scale way beyond what I currently have, but I prefer GTK apps and the Quod Libet interface.  Can MPD handle a library this large?  Are there any MPD clients that are Quod Libet-like?  Any way to make Quod Libet scale better?
    Thanks

    luciferin wrote:
    dmz wrote:http://www.last.fm/user/betbot
    It takes a true audiophile to require The Spice Girls in lossless quality
    Here's me: http://www.last.fm/user/Arch
    That's right, I nabbed the nick Arch way back in 2004 on Audioscrobbler and Neowin.net   Arch Linux and I were meant to be together.
    And to derail this thread a little bit: does anybody know of a linux music player that doesn't use a database?  Just adds files from your directories ala Foobar?
    The Spice Girls are very underestimated. And Mel C is a hell of a girl. So beautiful.. I wish.. oh well. Maybe you want to take a look at mocp or cmus if you don't want to use mpd.

  • Automatic Deployment Rule for SCEP Definitions growing too large.

    The deployment package for SCEP definitions is now 256 MB and growing.  How can we make sure it stays small?  The ADR creating the package is leaving 26 definitions in there right now.

    The method that Kevin suggests above is what is implemented as part of a default deployment template included with SP1. This limits the number of definitions in the update group to the latest eight (I think).
    As a supplemental note here, whenever an ADR runs and is configured to use an existing update group, it first wipes that update group.
    Jason | http://blog.configmgrftw.com

  • RTP jitter buffer growing too large?

    Hi all I am experiencing a rather annoying problem when receiving RTP audio data and rendering it: It takes some time for the player to get created and realized, in the mean time RTP packets continued to arrive, causing them to be buffered. It appeared that the buffer grew until data is drained from it (by the player), so the longer it took the player to get created and realized the larger the buffer became, causing a massive delay which is annoying when a conversation is being carried out. I did set the buffer length (via the RTPManager's BufferControl) to 200ms but this does not seem to make any difference. I don't have direct proof that this is what actually happened under the hood but all evidence seemed to point to this unchecked growth of the jitter buffer. The faster the computer, the faster the player get realized and the smaller the delay.
    Does anyone else experience this phenomenon? Is there a fix?

    I don't know if your diagnosis is correct; for sure I have a lot of jitter between two PCs using the same Java app and playing an RTP broadcast audio stream.
    But I could not relate it to the speed of the computer; sometimes A plays before B, sometimes after. Probably it is the time to create objects that varies.
    Still looking for a solution....

  • Content Database Growing too large

    We seem to be experiencing some slowness on our SharePoint farm and noticed that one of our content databases (we have two) is now at 170 GB. Best practice seems to be to keep a database from going over 100 GB.
    We have hundreds of sites within one database and need to split these up to save space on our databases.
    So I would like to create some new databases and move some of the sites from the old database over to the new ones.
    Can anyone tell me if I am on the right track here and, if so, how to safely move these sites to another content database?
    dfrancis

    I would not recommend using RBS. Microsoft's RBS is really just meant to be able to exceed the 4 GB/10 GB MDF file size limit in SQL Express. RBS space counts against database size, and backup/restore becomes a more complex task.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
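    As a side note, the sizes of all content databases are easy to confirm from the SQL instance itself (a minimal T-SQL sketch; sizes are stored as 8 KB pages):

    -- Total on-disk size per database, in MB
    SELECT DB_NAME(database_id) AS database_name,
           SUM(size) * 8 / 1024 AS size_mb
      FROM sys.master_files
     GROUP BY database_id
     ORDER BY size_mb DESC;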

  • CSCtz15346 - /mnt/pss directory growing too large and having no free space

    Hello,
    The Nexus 3K switch is not allowing me to save the configuration, showing the message below.
              switch %$ VDC-1 %$ %SYSMGR-2-NON_VOLATILE_DB_FULL: System non-volatile storage usage is unexpectedly high at 99%
    switch# switch# copy  r s
    [########################################] 100%
    Configuration update aborted: request was aborted
    switch#
    How do I clean up /mnt/pss?
    Thanks

    Hi Naga,
    From the CLI, issue "show system internal flash" to see what directory is taking up the space.   Unfortunately, if it is /mnt/pss, then you really need to engage TAC to get on the switch and enable the internal access to the file system so it can be cleared up.
    Sincerely,
    David.

  • RSZWOBJ table growing too large

    Hello Experts:
    RSZWOBJ is the largest table at my client.  Does anyone have experience with archiving the RSZWOBJ table or handling its data growth?
    Thanks,
    Jane

    Hi,
    Can you carry out, say, a bookmark purge for content older than 6 months or so?
    Can you check whether that history can be deleted from the system, and consequently from the table?
    These related threads may also help:
    How to delete user defined Bookmarks?
    How to find the Infoprovider and Query name with help of WAD tech name
    Thanks and regards
    Kiran

  • Var/adm/utmpx: value too large for defined datatype

    Hi,
    On a Solaris 10 machine I cannot use the last command to view login history etc. It reports something like "/var/adm/utmpx: value too large for defined datatype".
    The size of /var/adm/utmpx is about 2 GB.
    I tried renaming the file to utmpx.0 and creating a new file using head utmpx.0 > utmpx, but after that the last command does not show any output. The new utmpx file seems to be updating with new info, though, as seen from the file's last-modified time.
    Is there a standard procedure to recreate the utmpx file once it grows too large? I couldn't find much in the man pages.
    Thanks in advance for any help

    The easiest way is to cat /dev/null to utmpx - this will clear out the file to 0 bytes but leave it intact.
    from the /var/adm/ directory:
    cat /dev/null > /var/adm/utmpx
    Some docs suggest going to single user mode to do this, or stopping the utmp service daemon first, but I'm not positive this is necessary. Perhaps someone has input on that aspect. I've always just sent /dev/null to utmpx and wtmpx without a problem.
    BTW - I believe "last" works with wtmpx, and "who" works with utmpx.

  • Tablespace temp grows too much

    Hi gurus,
    I'm using OBIEE 10.1.3.4. When I'm doing a select over a materialized view, the temp tablespace grows too much (about 15 GB).
    The columns of the materialized view are functions.
    The base tables of the materialized view are big, thousands of MB.
    Is it possible to configure the temp tablespace from OBIEE? And via SQL*Plus?
    Best Regards.
    Roberto.

    You can do it with Oracle Enterprise Manager, which is the much easier way. If you don't have access to OEM but have the privileges to alter tablespaces, the command is something like:
    ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/temp02.dbf' SIZE 4M AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
    (A temporary tablespace takes TEMPFILEs rather than DATAFILEs; the file path and the MAXSIZE cap here are examples only.)
