Recommend datafile size on Linux

Hi
Can anyone suggest, in general, the maximum recommended datafile size for an Oracle database (8i, 9i, 10g) on Linux platforms?
The datafiles are located on the local server, and I prefer not to keep autoextend on for production databases.
I heard that 4G is the best maximum size, after which we keep adding datafiles as required.
Is that true?
Please suggest.
Thanks
Vk

I heard that 4G is the best max size, after which we keep on adding datafiles as required. Is that true?
Well, this is a myth; send it to the MythBusters to get it busted.
According to the Oracle documentation:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm#sthref4183
Database file size maximum: operating system dependent. Limited by the maximum operating system file size; typically 2^22 (about 4 million) blocks.
If your block size is 8K, that's 32G.
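As a rough sanity check (a sketch only; the 2^22-block figure is the usual smallfile datafile limit, and your OS file-size limit may bind first), you can work the number out from your own block size:

SQL> SELECT value AS block_size,
            POWER(2,22) * value / 1024 / 1024 / 1024 AS max_file_gb
       FROM v$parameter
      WHERE name = 'db_block_size';

With an 8192-byte block size that comes out to 32 (GB); with 16K blocks it would be 64 (GB).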

Similar Messages

  • ASM on SAN datafile size best practice for performance?

Is there a 'Best Practice' for datafile size for performance?
In our current production, we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles. Is 25GB a kind of mid-point so the data can be striped across multiple datafiles for better performance?

We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM...not the binaries though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN-->Veritas Tape backup and RMAN-->disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.

  • Differences between -Xss[size] in linux and windows

    Simple thread test program which runs 3000 threads:
    startup options on windows:
    -server -Xms16m -Xmx16m -Xss7k
    startup options on linux:
    -server -Xms16m -Xmx16m -Xss97k (Why 90k bigger stack size??)
Linux is using the newest kernel and NPTL threads. With 'regular' threads the Linux version overflows the stack unless -Xss2m is given...
I think Sun needs to come up with a clear specification of the threading models, libraries, etc. used in both environments. If you search these forums, one of the most common and baffling errors is the sig11 on Linux. I think the main cause behind it is library 'mismatches'.
    Answers to these questions are really needed:
    What is the recommended threading library on linux with each VM version?
    Against which library is the VM tested?
If anyone has experience with NPTL threads on the 1.4.2 VM, please contribute: how many threads did you manage to create, how much memory did that take, and what distribution, thread library, startup parameters, etc. did you use?
I'm going to do a bit more experimenting on a real-life application during the following weeks. With a simple test program we were able to create over 16000 threads, but if you do the math with the minimum stack size you can guess that the virtual memory usage was sky high!
    P.S.
    I'll post more exact platform specs when I'm back to work tomorrow..

    http://java.sun.com/docs/hotspot/VMOptions.html - see under -Xoss option.
BTW, I ran my test case again, and now I have finally succeeded in proving that 'rule' (see that 'other' thread): Xss*nThreads + Xmx < Xmx_MAX. The conclusions are (at least under 32-bit Linux):
1) Xss does limit native stack size
2) the 'rule' seems to be independent of whether you are actually using the allocated stack space or not
A test case is attached below. Under 32-bit Linux it crashes before it can create 30 threads with an
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start(Native Method)
        at testlab.StackTest.main(StackTest.java:46)
exception, if the startup parameters are
java -cp ../../classes -Djava.library.path=./ -Xms1870m -Xmx1870m -Xss1m testlab.StackTest 100 200 1000000
That matches the rule: 1870M of heap plus roughly 30 threads x 1M of stack is about 1900M, which is right around Xmx_MAX on this machine. The crash point can be moved or eliminated by modifying either the Xss parameter (the smaller the size, the later the crash happens) or Xmn/Xmx (the smaller..., the later...). The exact Xmx_MAX number might depend on which threads library you are running. On my hardware it is as follows:
    java -Xms1920M -Xmx1920M
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    java -Xms1910M -Xmx1910M
    Error occurred during initialization of VM
    Could not reserve enough space for card marking array
    java -Xms1900M -Xmx1900M
<RUNS OK>
See how the error messages differ. So, in the case above, Xmx_MAX is around 1.9G.
    Native code was compiled with gcc2.96, under 32-bit RH Linux, no optimizations.
Java code was compiled with the 1.4.2_02 compiler and run with the 1.4.2_02 JVM.
    Java file:
package testlab;

/*
 * Author: volenin
 * Date: Dec 4, 2003
 * Time: 10:40:20 AM
 * under GPL license
 */
public class StackTest extends Thread {
  static byte[] arr;

  static {
    System.loadLibrary("stacktest");
  }

  boolean isStarted = false;
  int allocSize;

  StackTest(int allocSize) {
    this.allocSize = allocSize;
    setDaemon(true);
//    arr2 = new byte[allocSize];
  }

  public void run() {
    isStarted = true;
    allocate(allocSize);           // consume native stack space via JNI
    synchronized (this) {
      try { wait(); }              // keep the thread alive
      catch (InterruptedException Ie) {}
    }
  }

  public native void allocate(int size);

  public static void main(String[] args) throws Exception {
    int nThreads = Integer.parseInt(args[0]);
    int allocSize = Integer.parseInt(args[1]);
    int initSize = Integer.parseInt(args[2]);
    arr = new byte[initSize];      // pre-allocate some heap
    for (int i = 0; i < nThreads; i++) {
      int n = i + 1;
      System.out.println("Creating thread #" + n);
      StackTest test = new StackTest(allocSize);
      System.out.println("Starting thread #" + n);
      test.start();
      System.out.println("Thread started #" + n);
      synchronized (StackTest.class) {
        if (!test.isStarted) StackTest.class.wait(100);
      }
      System.out.println("Thread running #" + n);
    }
  }
}
    Native .h file:
#include <jni.h>

#ifndef _Included_testlab_StackTest
#define _Included_testlab_StackTest
#ifdef __cplusplus
extern "C" {
#endif

JNIEXPORT void JNICALL Java_testlab_StackTest_allocate(JNIEnv *, jobject, jint);

#ifdef __cplusplus
}
#endif
#endif
    Native .cpp file:
    #include "testlab_StackTest.h"
    JNIEXPORT void JNICALL Java_testlab_StackTest_allocate(JNIEnv *env, jobject jobj, jint allocSize) {
      int arr[allocSize];
      int arrSize = sizeof(arr);
      printf("array allocated: %d, %d\n", allocSize, arrSize);
      getchar();

  • Actual tables size is different than tablespace,datafile size

    Hi,
I created 10 tables, each with a minimum extent of 256M, in the same tablespace, for a total of 2560M. After running for 3 months, none of the tables has grown beyond 256M, but the datafile size for that tablespace has grown sharply to 20G.
    I spent a lot of time on it and could not find anything wrong.
    Please help.
    Thanks,

    The Member Feedback forum is for suggestions and feedback for OTN Developer Services. This forum is not monitored by Oracle support or product teams and so Oracle product and technology related questions will not be answered. We recommend that you post this thread to the Oracle Technology Network (OTN) > Products > Database > Database - General forum.
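For anyone debugging the same symptom, a rough sketch (standard dictionary views; replace the tablespace name with your own) to compare what the segments actually use with what the datafiles occupy; a large gap can point at free space inside the files, undo/temporary segments, or LOB and index segments you were not counting:

SQL> SELECT SUM(bytes)/1024/1024 AS segment_mb
       FROM dba_segments
      WHERE tablespace_name = 'MY_TS';

SQL> SELECT SUM(bytes)/1024/1024 AS datafile_mb
       FROM dba_data_files
      WHERE tablespace_name = 'MY_TS';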

  • Datafile size minor than 4 Gb: is this a bug?

Hi all! I have a tablespace made up of 1 datafile (size = 4 GB). Oracle 9.2.0.4.0 (on a Win2k server) seems incapable of managing this; in fact, I receive ORA-04031: unable to allocate 8192 bytes of shared memory, although the memory of Oracle is configured correctly. After resizing this tablespace and adding a new datafile so that every datafile is 2 GB large, I no longer receive the error. What do you think about this?
    Bye. Ste.

    Hello everybody;
    The Buffer Cache Advisory feature enables and disables statistics gathering for predicting behavior with different cache sizes. The information provided by these statistics can help you size the Database Buffer Cache optimally for a given workload. The Buffer Cache Advisory information is collected and displayed through the V$DB_CACHE_ADVICE view.
    The Buffer Cache Advisory is enabled via the DB_CACHE_ADVICE initialization parameter. It is a dynamic parameter, and can be altered using ALTER SYSTEM. Three values (OFF, ON, READY) are available.
    DB_CACHE_ADVICE Parameter Values
    OFF: Advisory is turned off and the memory for the advisory is not allocated.
ON: Advisory is turned on, and both CPU and memory overhead are incurred.
Attempting to set the parameter to ON while it is OFF may raise ORA-04031 (unable to allocate from the shared pool), because the advisory memory has to be allocated at that moment. If the parameter is in the READY state, it can be set to ON without error because the memory is already allocated.
READY: Advisory is turned off, but the memory for the advisory remains allocated. Allocating the memory before the advisory is actually turned on avoids the risk of ORA-04031 at switch-on time; switching from OFF to READY can itself raise ORA-04031, because that is when the memory is allocated.
Also check whether you use RMAN for backups, because RMAN uses the large pool as well.
    Regards to everybody
    Nando
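Just to illustrate Nando's point (a sketch only; verify the syntax against your version's documentation), the parameter can be staged through READY before ON so the advisory memory is not allocated at the same moment you enable it:

SQL> ALTER SYSTEM SET db_cache_advice = READY;  -- memory for the advisory is allocated here
SQL> ALTER SYSTEM SET db_cache_advice = ON;     -- statistics gathering starts; no new allocation needed
SQL> SELECT size_for_estimate, estd_physical_reads
       FROM v$db_cache_advice
      WHERE name = 'DEFAULT';                   -- buffer pool name; filter on block_size too if you use multiple block sizes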

  • Maximum recommended file size for public distribution?

I'm producing a project with multiple PDFs that will be circulated to a group of seniors aged 70 and older. I anticipate that some may be using older computers.
    Most of my PDFs are small, but one at 7.4 MB is at the smallest size I can output the document as it stands. I'm wondering if that size may be too large. If necessary, I can break it into two documents, or maybe even three.
    Does anyone with experience producing PDFs for public distribution have a sense of a maximum recommended file size?
    I note that at http://www.irs.gov/pub/irs-pdf/ the Internal Revenue Service hosts 2,012 PDFs, of which only 50 are 4 MB or larger.
    Thanks!

First, open the PDF and use the Optimizer to examine it.
A lot of times when I create PDFs I end up with a half-dozen copies of the same font and font faces. If you remove all the duplicates, that will reduce the file size tremendously.
Another thing is to reduce the DPI of any graphics; even for printing they don't need to be any larger than 200 DPI.
If they are going to be viewed on a computer screen only, no more than 150 DPI tops, and if you can get by with 75 DPI that will be even better.
Once you set up the optimized file, save it under a different name and see what size it turns out. Those two things can sometimes reduce the file size by as much as two-thirds.

  • Is there a recommended (maximum) size for a catalog?

    I'm loading all my photos into PSE9 Organizer on my new MacBook Pro and was wondering if there is a recommended maximum size for a catalog? On my PC I separated my catalogs by year. My 2011 folder contains over 11,000 photos and is 71GB. Is that pushing the limits for efficiency or could I combine years?

One catalog for all of your photos makes sense. Alternatively, as Ken said, catalogs by relatively non-overlapping subject matter areas make sense. I still see no scenario where catalogs by year make sense, unless you can honestly say that you can remember the years of all of your 68000 photos.
Tags and albums let you organize in ways that are nearly impossible using folders. If your daughter is named Jennifer, for example, and you assign the Jennifer tag to all photos that contain her image, then later when you want to search for pictures of Jennifer, you simply click the tag and boom, there are the photos instantaneously. You don't need to know what folder the photos are in, nor do you need to know what date the photos were taken. If you want photos of Jennifer over the years at Christmas, this is a simple search once you tag the photos, and nearly impossible using folders. The possibilities are endless. You let PSE remember where the photos are, so you don't have to. You let PSE do the searching, instead of you searching the folders.

  • Maximum datafile size for sqlloader

    Hi,
I would like to load data from a 4GB XLS file into an Oracle database using SQL*Loader. Could you please tell me the maximum datafile size that SQL*Loader can support?
    Thanks
    Venkat
    Edited by: user12052260 on Jul 2, 2011 6:11 PM

I would like to load data from a 4GB XLS file into an Oracle database by using SQL*Loader. Could you please tell me the maximum datafile size that SQL*Loader can support?
You can post this question in the SQL*Loader forum; close the thread here:
    Export/Import/SQL Loader & External Tables

  • Can we decrease the datafile size in 8.1.7.4

    Hello All-
I have created a sample DB with a tablespace whose datafile size is 2 GB. I will probably need only a few hundred MB, and it is eating up space on the Unix server.
Is there any way I can decrease the size of the datafile attached to the tablespace in Oracle 8.1.7.4?
    Any help would be appreciated.
    Thanks
    Vidya R.

Yes, you surely can:
    SQL> ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf'
    RESIZE 100M;
    Cheers !!
    Jagjit
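One caveat worth adding: the file can only be shrunk down to the highest allocated block, otherwise you get ORA-03297. A rough sketch (the file_id is hypothetical, and 8192 assumes an 8K block size) to see approximately how low you can go:

SQL> SELECT MAX(block_id + blocks - 1) * 8192 / 1024 / 1024 AS approx_min_mb
       FROM dba_extents
      WHERE file_id = 4;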

  • Recommended Block Size For RAID 0

I am setting up a RAID configuration (striping, no parity, Mac G5, OS X) and was curious what the recommended block size should be. The content is primarily (but not limited to) images created with Adobe Photoshop CS2, ranging in size from 1.5MB to >20MB. The default for OS X is 32K chunks of data.
    Drives are External FW-400.
    Many thanks, and Happy Holidays to all!

    If it is just scratch, run some benchmarks with it set to 128k and 256k and see how it feels with each. The default is too small, though some find it acceptable for small images. For larger files you want larger - and for PS scratch you definitely want 128 or 256k.

  • What would be the maximum datafile size that can support sql*loader

    Hi,
I would like to load data from an XLS file of nearly 5 GB into an Oracle table using SQL*Loader. Could you please tell me the maximum datafile size we can load using SQL*Loader?
    Thanks
    VAMSHI

    Hello,
The size limit is mainly set by the OS, so you should check what the OS can support: SQL*Loader input files are unlimited on *64-bit* but limited to *2GB* on *32-bit* OSes:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10839/appg_db_lmts.htm#UNXAR382
Otherwise, you should be able to load the data into the table. You must also check that you have enough space inside the tablespace and/or on the disk (if the tablespace has to be extended); see the query sketched below.
Please find below a link about SQL*Loader; scroll down to Limits / Defaults:
http://www.ordba.net/Tutorials/OracleUtilities~SQLLoader.htm
Hope this helps.
    Best regards,
    Jean-Valentin
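As a rough way to do the free-space check mentioned above (standard dictionary view; the tablespace name is just an example):

SQL> SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
       FROM dba_free_space
      WHERE tablespace_name = 'USERS'
      GROUP BY tablespace_name;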

  • Max datafiles size

    hi there,
I want to know the maximum size of a datafile.
I'm using Oracle 8.1.7.4 on AIX 4.3.3
db_block_size=8192
I have a datafile of 2GB and I need to expand it.
I was wondering if the maximum datafile size is 2 GB, in which case I should not increase this file but create a new one.
    thanks

Without any reference at hand, the AIX (4.3.3, JFS) limits, as I recall, are:
    File size: 2GB
    File size if large files enabled: near 64GB
    File system size: 64GB with std fragment size.
    Also observe the ulimit of the user who is using the file system.

  • How to shrink the system tablespace datafile Size

I am using Oracle 9i R2 and I want to reduce my datafile size, but it shows an error when I try to resize it: ORA-03297.

    Hi,
We can directly resize datafiles:

TEST.SQL> SELECT FILE_NAME, BYTES FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SYSTEM';
FILE_NAME            BYTES
/.../dbsGNX.dbf      419430400

TEST.SQL> ALTER DATABASE DATAFILE '/.../dbsGNX.dbf' RESIZE 390M;
Database altered.

TEST.SQL> SELECT FILE_NAME, BYTES FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SYSTEM';
FILE_NAME            BYTES
/.../dbsGNX.dbf      408944640

But the minimum size you can resize to is set by the extent that sits furthest into the datafile (going below it is what raises ORA-03297):

TEST.SQL> SELECT FILE_ID, FILE_NAME FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SYSTEM';
   FILE_ID  FILE_NAME
         1  /.../dbsGNX.dbf

TEST.SQL> SELECT MAX(BLOCK_ID) MBID FROM DBA_EXTENTS WHERE FILE_ID=1;
      MBID
     25129

TEST.SQL> SELECT SEGMENT_NAME, OWNER, SEGMENT_TYPE FROM DBA_EXTENTS WHERE FILE_ID=1 AND BLOCK_ID=25129;
SEGMENT_NAME    OWNER    SEGMENT_TYPE
I_OBJAUTH2      SYS      INDEX

TEST.SQL> SHOW PARAMETER BLOCK_SIZE
NAME             TYPE       VALUE
db_block_size    integer    8192

TEST.SQL> SELECT 8192*25129 FROM DUAL;
8192*25129
 205856768

That is about 200M.
    Regards,
    Yoann.

  • Increase the Oracle datafile size or add another datafile

    Someone please explain,
    Is it better to increase the Oracle datafile size or add another datafile to increase the Oracle tablespace size?
    Thanks in advance

The decision must also include:
- the max size of a file in your OS and/or file system
- how you perform your backup and recovery (e.g., do you need to change the file list)
- how many disks are available and how they are presented to the OS (raw, LVM, striped, ASM, etc.)
- how many IO channels are available and whether you can balance IO across them
My personal default is to grow a file to the largest size permitted by the OS unless there is a compelling reason otherwise. That fits nicely with the concept of BIGFILE tablespaces (which have their own issues, especially in backup/recovery).
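To make the two options concrete (the paths and sizes below are made up; adapt them to your layout, and M suffixes are used here since G suffixes are only safe on newer releases):

SQL> -- option 1: grow the existing file
SQL> ALTER DATABASE DATAFILE '/u02/oradata/prod/users01.dbf' RESIZE 8192M;

SQL> -- option 2: add another file to the same tablespace
SQL> ALTER TABLESPACE users ADD DATAFILE '/u03/oradata/prod/users02.dbf' SIZE 4096M;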

  • Recommended pixel size image on Apple Cinema

    Hi,
    I want to buy some photography photos online. Companies like Dreamstime sell credit packages to buy their photos, and it costs more credits for larger image size photos. I am not buying photos to print them - just to view them.
I got a reply from Dreamstime which said:
    "I would recommend checking the owners information that comes with your Apple
    Cinema. They should provide a recommended pixel size of the image (not
    necessarily related to the monitor resolution) for your particular usage.
    Higher resolution images are typically for printed applications, so check
    Apple's owner manual info for recommended sizes if you are only on-screen
    viewing and not printing."
So my question is: what is the recommended pixel size for displaying images on the Apple Cinema display, since it is not related to the actual monitor resolution?
    I don't have an owner's manual for my cinema.
    Thank you

    Hello BSteely,
Many thanks again for the dialogue and for informing me that I'd be best off with 1680x1050 resolution. I've also been doing some web reading on this resolution issue.
I understand that when a photo is exported to iPhoto, it is scaled to the monitor's dimensions - as I understand it, by enlarging or shrinking the pixels - so that it fits on the screen.
Is it logical to assume, then, that a picture or photo with 2560 x 1600 resolution would be ideal for the Apple 30" Cinema Monitor because that is its maximum resolution, so picture and monitor would be a perfect fit? Or does it just not make that much of a difference in viewing clarity because iPhoto calibrates the resolution to fit the monitor?
And finally, does the quality/clarity of the picture suffer more when the pixel size is increased or decreased? The way I am picturing it, picture clarity would suffer more if a pixel is enlarged (stretched to fit the screen) than if a pixel is compacted from an image larger than the maximum resolution of the monitor.
If you wouldn't mind clarifying that for me, that should put an end to my inconveniencing you. You've been really helpful.
