Trouble while decreasing Datafile Size?

Hi all,
I have a tablespace TEMP with a single datafile, TEMP_01.dbo. Its initial size is about 2 GB, but no more than 50 MB of it is actually used. I ran these statements to decrease its size:
ALTER TABLESPACE temp COALESCE;
ALTER DATABASE DATAFILE 'path\temp_01.dbo' RESIZE 500M;
but I get the error ORA-03297: file contains used data beyond requested RESIZE value.
I have tried values up to 1.5 GB, but the error appears every time.
Your co-operation will be appreciated.
Thanks in advance.

This error appears because Oracle chooses where it writes data within the file, and in your case it seems the data was written at the end of the datafile, so you cannot decrease its size below that point.
Dropping and recreating the tablespace will solve your problem.
Fred
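A minimal sketch of the drop/recreate route Fred describes, assuming a dictionary-managed TEMP tablespace (as the COALESCE suggests), no sessions mid-sort, and the placeholder path from the post:
SQL> DROP TABLESPACE temp INCLUDING CONTENTS;
SQL> CREATE TABLESPACE temp
       DATAFILE 'path\temp_01.dbo' SIZE 500M REUSE
       TEMPORARY;
If any users have TEMP as their temporary tablespace, point them at another one before the drop and back again afterwards.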

Similar Messages

  • Can we decrease the datafile size in 8.1.7.4

    Hello All-
    I have created a sample DB with a tablespace whose datafile is 2 GB, but I probably need only a few hundred MB. It is eating up space on the Unix server.
    Is there any way to decrease the size of the datafile attached to the tablespace in Oracle 8.1.7.4?
    Any help would be appreciated.
    Thanks
    Vidya R.

    Yes, you surely can:
    SQL> ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf'
    RESIZE 100M;
    Cheers !!
    Jagjit
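    One caveat: a datafile cannot be shrunk below its highest allocated extent, or ORA-03297 is raised. A quick sketch to find that floor (file_id 7 and the 8K block size are placeholder values for your own database):
    SQL> SELECT CEIL(MAX(block_id + blocks - 1) * 8192 / 1024 / 1024) AS min_mb
           FROM dba_extents
          WHERE file_id = 7;
    Any RESIZE at or above that figure should succeed.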

  • Can we decrease the size of existing datafile?

    Hi,
    Can we decrease the size of an existing datafile?
    Thanks,

    It is a very nice script.
    But it does not deal with something that a few contributors (OK, one, really) on this forum keep pushing as a 'fix' for certain performance problems: using multiple block sizes in one database. That is not surprising, though, because Tom Kyte has never particularly approved of using multiple block sizes that way.
    Specifically, the script on Ask Tom queries the setting of the db_block_size parameter and uses that to work out what a datafile can be shrunk to. If you have datafiles in tablespaces with non-standard block sizes, the script will not compute the correct figure for them.
    It is always potentially risky to rely on five-year-old scripts found on the internet, even somewhere as good as Ask Tom.
    It is also a good demonstration of why, for at least one reason, it is potentially dangerous to follow Mr. Burleson's advice on multiple block sizes.
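    To make the fix concrete, a sketch (not the Ask Tom script itself) that reads each tablespace's own block size from dba_tablespaces instead of assuming the db_block_size parameter:
    SQL> SELECT e.file_id,
                CEIL(MAX(e.block_id + e.blocks - 1) * t.block_size / 1024 / 1024) AS min_mb
           FROM dba_extents e, dba_tablespaces t
          WHERE e.tablespace_name = t.tablespace_name
          GROUP BY e.file_id, t.block_size;
    This yields the smallest size each datafile can be resized to, whatever block size its tablespace uses.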

  • Problem with decreased font size in exported PDF.

    We have a web site running on a Windows 2008R2 server.
    The site was developed with Visual Studio 2010 using .NET 4.0 and runs in 32-bit. The associated application pool uses a specific local user.
    We have some reports that include a barcode font. The font is installed on the server and the reports are rendered to PDF on the server side, to avoid having to install the barcode font on all clients. The PDF renders correctly, but we are facing the decreased font size problem.
    We followed the instructions included in this post:
    Unfortunately the part regarding VS 2010 does not seem to work.
    If we add the specified key in the HKLM section of the registry, it doesn't have any effect.
    If we add it in the HKCU section while logged in as the same user assigned to the application pool, everything seems to work fine. But as soon as that user logs out, the reports stop working.
    Has anybody figured out how to configure those keys correctly with this configuration?

    Try putting it under the HKU\.DEFAULT path. You can also use Process Monitor to see where the application is looking.

  • How to decrease the size of an image using a Java program

    Hi All,
    I want to reduce images from a few MB down to KB. I have images of around 1 MB to 3 MB each, and displaying them all in the browser takes a long time to download. So I want to write a Java program that makes duplicate copies of those images, smaller in height and width, and display those instead. Basically I need to display the list of images as a thumbnail or list view that loads faster.
    Thanks in advance..
    Upendra P.

    "I read the documentation. There is no corresponding Image class. The other classes are like BufferedImage and all."
    Yes, BufferedImage sounds like what you want.
    You could (1) read the image, (2) obtain its dimensions and create a blank scaled image, (3) draw a scaled version of the original onto it using the Graphics2D obtained from the smaller image, and (4) save the smaller image.
    The javax.imageio.ImageIO class will help with reading and writing the data, and Sun's Tutorial has a section on "2D Graphics".
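    A minimal sketch of those four steps (the file names and the fixed 1/4 scale are just example values):
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class Thumbnail {
        public static void main(String[] args) throws Exception {
            BufferedImage src = ImageIO.read(new File("photo.jpg"));   // (1) read the image
            int w = src.getWidth() / 4, h = src.getHeight() / 4;       // (2) scaled dimensions
            BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.drawImage(src, 0, 0, w, h, null);                        // (3) draw a scaled version
            g.dispose();
            ImageIO.write(dst, "jpg", new File("photo_thumb.jpg"));    // (4) save the thumbnail
        }
    }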

  • Problems with increasing/decreasing cache size when live

    Hello,
    I have configured multiple environments which I compact sequentially; to achieve this, I allocate a bigger cache to the env currently being compacted, as follows:
    Initialization:
    DB_ENV->set_cachesize(gbytes, bytes, 1); // Initial cache size.
    DB_ENV->set_cache_max(gbytes, bytes); // Maximum size.
    While live, the application decreases the cache of the current env when finished and then increases the cache of the next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat, I can see that everything is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    But the problem is that over time memory is leaked (as if the extra memory of each env was never freed), and I'm quite sure the problem comes from this code.
    I'm running Berkeley DB 4.7.25 on FreeBSD.
    Maybe some leak was fixed in a newer version and you could suggest a patch to me? Or am I not using the API correctly?
    Thanks!
    Edited by: 894962 on Jan 23, 2012 6:40 AM

    Hi,
    Thanks for providing the information.
    Unfortunately, I don't remember the exact test case I was doing, so I did a new one with 32 envs.
    I set the following for each env:
    - Initial cache=512MB/32
    - Max=1GB
    Before open, I do:
    DBEnvironment->set_cachesize((u_int32_t)0, (u_int32_t)512*1024*1024/32, 1);
    DBEnvironment->set_cache_max(1*1024*1024*1024, 0);
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=1 and obytes=0
    After open, I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: cache_size=18644992 cache_ncache=1
    So here, the values returned by memp_stat are normal but get_cache_max is strange. Then after increasing the cache to the strange value returned by get_cache_max (gbytes=0, obytes=9502720), I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: outlinks cache_size=27328512 cache_ncache=54
    with cache_size being: ((ui64)sp->st_gbytes * GIGA + sp->st_bytes);.
    So cache is actually increased...
    I tried to reproduce this case by opening 1 env as follows.
    //Before open
    DbEnv->set_cachesize(); 512MB, 1 cache
    DbEnv->set_cache_max(); 1GB
    //After open
    DbEnv->get_cachesize(); 512MB, 1 cache
    DbEnv->get_cache_max(); 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    //Decrease the cache size
    DbEnv->set_cachesize(); 9MB (9502720B), 1 cache
    DbEnv->get_cachesize(); 512MB, 1 cache
    DbEnv->get_cache_max(); 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    All the results are expected, since when the cache is resized after the DbEnv is open, the new size is rounded to the nearest multiple of the region size. Region size means the size of each region specified initially; please refer to the BDB doc: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Here the region size is 512MB/1 cache = 512MB, and I don't think you can resize the cache to be smaller than one region.
    Since you are opening 32 envs at the same time, each with a 512MB cache and a 1GB maximum, whether each env can allocate as much as was specified for its cache when it is opened depends on the system. I am guessing the number 9502720 returned by get_cache_max after opening the env reflects what the system could actually grant for the cache at open time, given the system state and the app's request.
    And for the case listed in the beginning of the post
    While live, the application decreases the cache of the current env when finished and then increases the cache of the next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everyting is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    When env1 is finishing soon, what numbers do you set in set_cachesize to decrease the cache, including the number of caches and the cache size?
    When decreasing the cache, I do:
    env->GetDbEnv()->set_cachesize((u_int32_t)0, (u_int32_t)20973592, 0);
    I mean, in all cases I simply set the cache size back to its original value (obtained after open through get_cachesize) when decreasing, and to its max value (obtained through get_cache_max; plus I do something like cacheMaxSize * 0.75 if < 500MB) when increasing.
    I can reproduce this case, and I think the result is expected. When DBEnv->set_cachesize() is used to resize the cache after the env is opened, the ncache parameter is ignored. Please refer to the BDB doc here: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Hence I don't think you can decrease the cache size by setting the number of caches to 0.
    Hope it helps.
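    To make the rounding behaviour concrete, a small C sketch (error handling omitted; the env path is a placeholder, and the sizes mirror the test above):
    #include <db.h>

    int main(void) {
        DB_ENV *env;
        db_env_create(&env, 0);
        env->set_cachesize(env, 0, 512u * 1024 * 1024, 1); /* 512MB in 1 region */
        env->set_cache_max(env, 1, 0);                     /* 1GB ceiling */
        env->open(env, "/tmp/envhome", DB_CREATE | DB_INIT_MPOOL, 0);
        /* After open, ncache is ignored and the requested size is rounded to a
           multiple of the original region size (512MB here), so this request
           for ~9MB leaves the cache at 512MB. */
        env->set_cachesize(env, 0, 9502720, 0);
        env->close(env, 0);
        return 0;
    }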

  • Can we reduce the datafile size for a tablespace?

    Hi,
    I have two tablespaces, each with an 8GB datafile.
    The used space in each datafile is about 750 MB.
    Can we reduce a datafile from 8GB to 2GB
    while it only stores around 750 MB of data?
    If so, will it impact my data?
    SSM

    If the used amount is 750MB then yes, you can resize to 2GB.
    Try issuing the following (example):
    SQL> alter database datafile 'c:\oracle\ora92\ORCL\userdata01.dbf' resize 2000m;
    If it is not possible, Oracle will complain with error ORA-03297.
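    To confirm the used figure per datafile before resizing, a quick sketch against the standard dictionary views:
    SQL> SELECT d.file_name,
                d.bytes/1024/1024                   AS size_mb,
                (d.bytes - NVL(f.free,0))/1024/1024 AS used_mb
           FROM dba_data_files d,
                (SELECT file_id, SUM(bytes) AS free
                   FROM dba_free_space
                  GROUP BY file_id) f
          WHERE d.file_id = f.file_id(+);
    Note the file can only be shrunk down to its highest allocated extent, so even with only 750MB used the 2GB target needs to sit above that point.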

  • Datafile size after index rebuild?

    Hi experts,
    I recently rebuilt indexes on some tables with fragmentation >80% and page count >40000, via a job that put the database in bulk_logged mode during the rebuild; most of them were over 90% fragmented. It was an offline index rebuild. Afterwards I noticed that the used space in my datafile decreased from 38 GB to 29 GB instead of increasing; see the tables below for the before and after status. Suggestions please.
    Before rebuild
    SIZE_MB  USED_MB  FREE_SPACE  %_FREE  AUTO_GROWTH  NAME
    42305    38952    3353        8       10%          ABC
    2200     18       2182        99      10%          ABC_LOG
    After rebuild
    SIZE_MB  USED_MB  FREE_SPACE  %_FREE  AUTO_GROWTH  NAME
    42305    29861    12444       29      10%          ABC
    2200     228      1972        89      10%          ABC_LOG
    thanks

    Hi,
    If you are thinking the de-fragmentation operation has deleted something, rest assured it has not; what you are seeing is normal behaviour with an offline index rebuild. The fill factor used during the rebuild packed the pages more densely, so the data needs fewer pages, hence the decrease in used space.
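    To verify those numbers yourself, a quick sketch against the catalog (run it in the database concerned; the size columns are in 8KB pages, hence the /128.0):
    SELECT name,
           size/128.0                            AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed')/128.0 AS used_mb
    FROM sys.database_files;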

  • Datafile size smaller than 4 GB: is this a bug?

    Hi all! I have a tablespace made up of one datafile (size = 4 GB) on Oracle 9.2.0.4.0 (on Win2k Server), and Oracle seems incapable of managing this: I receive ORA-04031: unable to allocate 8192 bytes of shared memory, even though Oracle's memory is configured correctly. After resizing this tablespace and adding a new datafile so that every datafile is 2 GB, I no longer receive the error. What do you think about this?
    Bye. Ste.

    Hello everybody;
    The Buffer Cache Advisory feature enables and disables statistics gathering for predicting behavior with different cache sizes. The information provided by these statistics can help you size the Database Buffer Cache optimally for a given workload. The Buffer Cache Advisory information is collected and displayed through the V$DB_CACHE_ADVICE view.
    The Buffer Cache Advisory is enabled via the DB_CACHE_ADVICE initialization parameter. It is a dynamic parameter, and can be altered using ALTER SYSTEM. Three values (OFF, ON, READY) are available.
    DB_CACHE_ADVICE Parameter Values
    OFF: The advisory is turned off and the memory for the advisory is not allocated.
    ON: The advisory is turned on, and both CPU and memory overhead are incurred.
    READY: The advisory is turned off but the memory for the advisory remains allocated.
    Attempting to switch the parameter straight from OFF to ON may raise ORA-04031 (unable to allocate from the shared pool), because the memory must be allocated at that moment. If the parameter is in the READY state, it can be set to ON without error because the memory is already allocated; allocating the memory up front by moving from OFF to READY avoids that risk, although the switch from OFF to READY can itself raise ORA-04031.
    Note too that RMAN backups use the large pool as well, so leave room for that.
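    For example, the safe sequence is to park the parameter at READY first and only then switch it ON:
    SQL> ALTER SYSTEM SET db_cache_advice = READY;
    SQL> ALTER SYSTEM SET db_cache_advice = ON;
    SQL> SELECT size_for_estimate, estd_physical_reads
           FROM v$db_cache_advice
          WHERE name = 'DEFAULT' AND advice_status = 'ON';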
    Regards to everybody
    Nando

  • ASM on SAN datafile size best practice for performance?

    Is there a 'Best Practice' for datafile size for performance?
    In our current production we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles. Is 25GB a kind of midpoint, so the data can be striped across multiple datafiles for better performance?

    We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM, though not the binaries. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN --> Veritas tape backup and RMAN --> disk backup. I just didn't know whether anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.

  • Decrease the size of a series marker on a bar chart

    I am trying to decrease the size of a square series marker on a bar chart in Crystal Reports.
    Usually this can be achieved through the 'Appearance' tab in the 'Chart Options' menu.
    However, as I have changed one of the series (bars) on the graph into a line, this no longer seems possible.
    Is there a way around this?
    Thanks

    Hi Natalie
    In order to decrease the size of a square series marker after changing the series into a line, you can use Format Series Marker -> Border -> Thickness.
    Hope this helps!
    Regards
    Poonam Thorat

  • Maximum datafile size for sqlloader

    Hi,
    I would like to load data from a 4GB xls file into an Oracle database using SQL*Loader. Could you please tell me the maximum datafile size that SQL*Loader can support?
    Thanks
    Venkat
    Edited by: user12052260 on Jul 2, 2011 6:11 PM

    "I would like to load data from a 4GB xls file into an Oracle database using SQL*Loader. Could you please tell me the maximum datafile size that SQL*Loader can support?"
    You can post this question in the SQL*Loader forum; close the thread here.
    Export/Import/SQL Loader & External Tables
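    For what it's worth, SQL*Loader reads flat text files, not binary .xls, so the spreadsheet would first have to be saved as CSV. A minimal control file sketch (the file, table, and column names are hypothetical):
    -- employees.ctl (hypothetical names throughout)
    LOAD DATA
    INFILE 'employees.csv'
    INTO TABLE employees
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (emp_id, emp_name, salary)
    On platforms with large-file support the input can exceed 4GB, but check your platform's own limits.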

  • Decreasing the size of PDF to archive.

    Hi
    Just wondering if anyone knows of a way to decrease the size of a PDF file that is created for archiving purposes. I have had a couple of thoughts on it:
    1) I read that changing the fonts to the Acrobat base fonts would reduce the size, but I don't really want to go changing all the fonts on the final output of the doc.
    2) Ensure no extra presentment targets are being compiled into the MDF.
    3) Reduce any images etc. inside the IFD.
    If anyone has any other suggestions I would really appreciate them.
    Thanks.

    Our users are not even aware that the form is being regenerated. They just click on the one they want and the underlying code knows to retrieve the original DAT file, change the job name to the one that results in a PDF being sent back to the user and places it on the server.
    As far as the job name goes, it appears that since the MDF name is currently being used, in reality no job name is being provided (I think that is the case). Either that, or the source of the DAT file is building the file that way. To get the customer name or account number, the source of the DAT files would have to be modified to provide it.
    Personally, it would definitely be unmanageable if we used the MDF name as the job name, because we not only use multiple MDFs per job but have hundreds of MDFs.
    I don't know how many customers are involved, but using that information as the job name could be cumbersome due to the potential number of job definitions that would be required.

  • I am trying to decrease the size of my Aperture library on my MB Pro HD. I used File > Relocate Master to move files to my external HD (EHD) and this worked. However, the size of my MBP library has not decreased.

    I am trying to decrease the size of my Aperture library on my MB Pro HD. I used File > Relocate Master to move files to my external HD (EHD) and this worked. However, the size of my MBP library has not decreased. I do not want to delete the files on my MB Pro until I am confident Aperture can see the files on the EHD. It looks like the Relocate Master function just copies the masters and data to another location.

    Relocate Master (or Relocate Original in newer versions) does move, not copy, the master file. Just to make sure, I ran a test and moved some images out of the library to an external drive, and they were definitely gone from the Aperture library.
    If you look at the images in the library that you moved, do they have the referenced-file badge? If you run a filter on your library looking for referenced files, do the images you relocated show up? If you select one of the images and do a Show in Finder, what does it show? Finally, if you select the images and do Locate Referenced Files, do they appear listed as being on the volume you put them on?
    You could go into the library and look for the masters you moved to see if they are still there, but if all the above show they were relocated, I don't think you will find them.
    How are you checking the library size? Have you quit Aperture and restarted it (better, logged out and back in) and then checked the size? It is possible that some of that data is cached and won't get updated until you restart.
    Finally, you could do a library repair in case something got 'stuck'.

  • How to decrease SGA size in oracle 9i

    hi,
    how to decrease the SGA size... the current SGA size is 400M and I want to decrease it to 150M
    Thanks,
    Mohammed

    Why do you want to do this? Is your system low on memory? Perhaps your DB is on a laptop with only a small amount of memory? Well, an enterprise database server's set of threads inside a complex service would need a high-powered laptop with large amounts of memory. Or perhaps your UAT server has 10 databases on it, all competing for resources?
    The culprit may not be Oracle. For example (you have not told us your operating system: Windows, Linux, Unix?), IBM's AIX comes configured to manage file caching in OS memory aggressively; as Oracle also caches the blocks it reads from disk, the filesystem caching is sometimes redundant and ends up using memory better allocated to Oracle. In that case you tune the Unix kernel to reduce the memory allocated to filesystem caching. The default settings allow AIX to use up to 80% of system memory for caching, which is bad if you have already allocated 40% to the SGA.
    The really cool thing about AIX is that you can do this on the fly, without a reboot, so when you go live you juggle until you reach an optimal setting and then monitor.
    In the end you may just have to add memory.
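    If you do decide to shrink the SGA on 9i, the usual route is to lower the component sizes and the static sga_max_size in the spfile and then bounce the instance; a sketch (the component values are only illustrative for a 150M target):
    SQL> ALTER SYSTEM SET db_cache_size = 64M SCOPE = SPFILE;
    SQL> ALTER SYSTEM SET shared_pool_size = 48M SCOPE = SPFILE;
    SQL> ALTER SYSTEM SET sga_max_size = 150M SCOPE = SPFILE;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP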
