Big datafile vs many datafiles

I'm currently contemplating how to design my datafile layout. I'm running ERP6 on AIX 6.1 with Oracle 10g. Total DB size is 1300 GB.
The default is to have 10 GB datafiles, which would lead to roughly 130 data files; I think that is too much of a hassle to keep track of. Is it OK to have datafiles of 100 GB in size? What are the pros and cons?

Hi Mohammad,
It basically depends on which Oracle block size you are using. If it is 8K, then the maximum (smallfile) datafile size is 32 GB.
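If you want to confirm which limit applies to your database, a quick check against the standard dictionary view looks roughly like this (just a sketch, nothing ERP-specific):

SELECT value AS block_size
FROM   v$parameter
WHERE  name = 'db_block_size';
-- A smallfile datafile can hold at most about 4 million (2^22) blocks,
-- so with an 8K block size that works out to roughly 32 GB per datafile.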
You should also think about backup/recovery and performance implications.
You also need to check your backup tool's limitations in this regard.
There can be OS limitations on larger datafiles as well.
A checkpoint may take less time with larger datafiles, as it needs to update the headers of fewer datafiles compared to a large number of small ones.
Restore time should be about the same in both cases, because restoring five 4 GB datafiles and restoring one 20 GB datafile take roughly the same amount of time.
It would also be worth discussing your doubts with a core DBA.
Regards.
Rajesh Narkhede

Similar Messages

  • Many little datafiles or one big datafile?

    Hi all
    I have a database which grows every day. I have 10 datafiles named users01.dbf, users02.dbf and so on.
    All of these datafiles are 1.5 GB in size. Each time a datafile reaches its 1.5 GB limit, I add a new datafile. Is this the correct strategy? Or should I just add extents to the last datafile and allow it to grow?
    Thanks...

    Hi,
    In my view, it is a good approach if striping is used, i.e. the data is spread across more than one device. Otherwise, if you have just one disk, I don't see significant advantages or disadvantages either way. In addition, take a look at the "More small datafiles or one big datafile for tablespace" thread: http://forums.oracle.com/forums/thread.jspa?messageID=1041074&#1041074
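    Either strategy is valid syntactically; as a rough sketch (the paths, file names and sizes below are only examples, adjust them to your own layout):

    -- Option 1: let the last datafile grow instead of adding new ones
    ALTER DATABASE DATAFILE '/u01/oradata/users10.dbf'
      AUTOEXTEND ON NEXT 100M MAXSIZE 4G;

    -- Option 2: keep adding fixed-size datafiles as each one fills up
    ALTER TABLESPACE users
      ADD DATAFILE '/u01/oradata/users11.dbf' SIZE 1536M;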
    Cheers
    Legatti

  • More small datafiles or one big datafile for tablespace

    Hello,
    I would like to create a new tablespace (about 4 GB). Could someone tell me whether it's better to create one big datafile or four datafiles of 1 GB each?
    Thank you.

    It depends. Most of the time, it's going to come down to personal preference.
    If you have multiple data files, will you be able to spread them over multiple physical devices? Or do you have a disk subsystem that virtualizes many physical devices and spreads the data files across physical drives (i.e. a SAN)?
    How big is the database going to get? You wouldn't want to have a 1 TB database with 1000 1 GB files, that would be a monster to manage. You probably wouldn't want a 250 GB database with a single data file either, because it would take forever to recover the data file from tape if there was a single block corruption.
    Is there a data files size that fits comfortably in whatever size mount points you have? If you get 10 GB chunks of SAN at a time, for example, you would probably want data files that were an integer factor of that (i.e. 1, 2, or 5 GB) so that you can add similarly sized data files without wasting space and so that you can move files to a new mountpoint without worrying about whether they'll all fit.
    Does your OS support files of an appropriate size? I know Windows had problems a while ago with files > 2 GB (at least when files extended beyond 2 GB).
    In the end though, this is one of those things that probably doesn't matter too much within reason.
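    To make the two alternatives concrete, here is a rough sketch (the tablespace name and file paths are invented for illustration):

    -- One 4 GB datafile
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/app_data01.dbf' SIZE 4G;

    -- Four 1 GB datafiles, optionally spread over several mount points
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/app_data01.dbf' SIZE 1G,
               '/u02/oradata/app_data02.dbf' SIZE 1G,
               '/u03/oradata/app_data03.dbf' SIZE 1G,
               '/u04/oradata/app_data04.dbf' SIZE 1G;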
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Big Datafile Creation

    Hi all,
    My OS: Windows Server 2003
    Oracle Version: 10.2.0.1.0
    Is there a possibility to add a big datafile of more than 30 GB?
    Regards,
    Vikas

    Vikas Kohli wrote:
    Thanks for your help,
    but if I already have a tablespace, every time it is about to fill up I need to add a 30 GB datafile. Is there any possibility that I can specify one big datafile, or do I need to create a new bigfile tablespace and move the tables from the old tablespace to the new one?

    You have to understand that a bigfile tablespace is a tablespace with a single, but very large, datafile.
    Have you read the link I posted before?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#i1010733
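    For reference, creating a bigfile tablespace looks roughly like this (a sketch; the tablespace name, file path and sizes are just placeholders):

    CREATE BIGFILE TABLESPACE big_data
      DATAFILE 'D:\ORACLE\ORADATA\ORCL\BIG_DATA01.DBF' SIZE 100G
      AUTOEXTEND ON NEXT 10G MAXSIZE UNLIMITED;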

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLITE support. I have a query about 'select' query performance when we have one huge table vs. multiple small tables.
    Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into
    multiple small tables will help?
    For test purposes we tried creating multiple tables, but the performance of the 'select' query was more or less the same. Would that be because all tables map to a single key/value database in the backend, so a lookup (select query) against a small table or a big table makes no difference?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • Adding No. of Datafiles to TableSpace

    Hi All,
    I have a doubt regarding adding a number of datafiles to a tablespace. I want to connect my application to an Oracle 10g database. As a DBA I am going to allocate a separate tablespace to this new application. As time passes, the data accumulated by the application will reach many GBs, so how should I manage the datafiles?
    1) Shall I add only one big datafile to the tablespace,
    OR
    2) Should I periodically keep adding datafiles as the data crosses a particular size limit?
    What policy should I follow?
    Regards,
    Darshan

    Hi,
    There is no issue in creating multiple datafiles when creating the database, or in adding them later. If space is not a constraint for you, then you can create multiple datafiles at database creation time.
    Also create them with the AUTOEXTEND clause; you don't have to turn that clause OFF if you create the file with the MAXSIZE parameter. It's always better to have MAXSIZE defined, because it will not allow the datafile to grow beyond the MAXSIZE value. Here is an example of adding a datafile to a tablespace with the MAXSIZE clause:
    ALTER TABLESPACE users
    ADD DATAFILE '/u02/oracle/rbdb1/users03.dbf' SIZE 10M
    AUTOEXTEND ON
    NEXT 512K
    MAXSIZE 250M;
    After defining the MAXSIZE, keep a close eye on the growth of data in the datafile, and when it is nearly full add a new datafile to the tablespace; keep this cycle going.
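    To keep that eye on growth, a simple monitoring query could look like this (a sketch against the standard dba_data_files view; filter on whichever tablespace you care about):

    SELECT file_name,
           ROUND(bytes    / 1024 / 1024) AS size_mb,  -- current file size
           ROUND(maxbytes / 1024 / 1024) AS max_mb,   -- autoextend ceiling
           autoextensible
    FROM   dba_data_files
    WHERE  tablespace_name = 'USERS';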
    Regarding your second question:
    You are right that data is written to the 2nd datafile when the first one is full. But if you try to validate this on a running database, you will sometimes find that the 2nd datafile is consuming more space than the 1st, even though the 1st datafile still has plenty of free space. This is due to frequent deletions in the tables that reside in that datafile.
    Hope that gives you some hints.

  • How to Resize a Datafile to Minimum Size in sysaux tablespace

    Hi Experts,
    I found that the initial datafile size is 32712M in the SYSAUX tablespace. Another DBA had added 2 big datafiles because of a spill issue in SYSAUX. After the data was cleared and the spill issue fixed, I wanted to reduce the datafile size with:
    ALTER DATABASE DATAFILE 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF' RESIZE 10000M;
    I got an error message that the used data extends beyond the requested resize size.
    ERROR at line 1:
    ORA-03297: file contains used data beyond requested RESIZE value
    However, in OEM I can see that only 176M of data is used in this datafile.
    How can I fix this issue?
    I use Oracle 10g (10.2.0.4) on 32-bit Windows Server 2003.
    JIM
    Edited by: user589812 on May 18, 2009 11:22 AM

    10.2.0.4
    If I run:
    SQL> alter database datafile 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF' offline drop;
    Database altered.
    the file is still in the database.
    Can I bring this file online again?
    Or could we delete it with SQL such as:
    alter tablespace sysaux drop datafile 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF';
    Thanks
    Jimmy
    Edited by: user589812 on May 18, 2009 12:16 PM
    Edited by: user589812 on May 18, 2009 12:33 PM
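    On the original ORA-03297: the resize floor is determined by the highest allocated block in the file, not by the total amount of used space, so it helps to see which segments still have extents near the end of the file. A rough sketch using the standard dba_extents and dba_data_files views:

    SELECT owner, segment_name, segment_type,
           MAX(block_id + blocks - 1) AS last_block_in_file
    FROM   dba_extents
    WHERE  file_id = (SELECT file_id
                      FROM   dba_data_files
                      WHERE  file_name = 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF')
    GROUP  BY owner, segment_name, segment_type
    ORDER  BY last_block_in_file DESC;
    -- The file cannot be resized below (highest last_block_in_file * block size).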

  • Big File vs Small file Tablespace

    Hi All,
    I have a doubt and just want to confirm which is better: a bigfile tablespace, or a tablespace made up of many small datafiles (or a few big ones). I think it is better to use a bigfile tablespace.
    Kindly help me out on whether I am right or wrong, and why.

    GirishSharma wrote:
    Aman.... wrote:
    Vikas Kohli wrote:
    With respect to performance i guess Big file tablespace is a better option
    Why ?
    If you allow me, I would like to paste the text below from the doc link in my first reply:
    "Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile."
    Regards
    Girish Sharma
    Girish,
    I find it interesting that I've never found any evidence to support the performance claims - although I can think of reasons why there might be some truth to them and could design a few tests to check. Even if there is some truth in the claims, how significant or relevant might they be in the context of a database that is so huge that it NEEDS bigfile tablespaces ?
    Database opening:  how often do we do this - does it matter if it takes a little longer - will it actually take noticeably longer if the database isn't subject to crash recovery ?  We can imagine that a database with 10,000 files would take longer to open than a database with 500 files if Oracle had to read the header blocks of every file as part of the database open process - but there's been a "delayed open" feature around for years, so maybe that wouldn't apply in most cases where the database is very large.
    Checkpoints: critical in the days when a full instance checkpoint took place on the log file switch - but (a) that hasn't been true for years, (b) incremental checkpointing made a big difference to the I/O peak when an instance checkpoint became necessary, and (c) we have had a checkpoint process for years (if not decades) which updates every file header when necessary rather than requiring DBWR to do it.
    DBWR processes: why would DBWn handle writes more quickly - the only idea I can come up with is that there could be some code path that has to associate a file id with an operating system file handle of some sort and that this code does more work if the list of files is very long: very disappointing if that's true.
    On the other hand I recall many years ago (8i time) crashing a session when creating roughly 21,000 tablespaces for a database, because some internal structure relating to file information reached the 64MB hard limit for a memory segment in the SGA. It would be interesting to hear if anyone has recently created a database near the 65K+ file limit - and whether it makes any difference whether that's 66 tablespaces with about 1,000 files, or 1,000 tablespaces with about 66 files.
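    For anyone wondering where their own database sits relative to those limits, a quick sketch against the standard views:

    SELECT (SELECT COUNT(*) FROM v$datafile)                       AS datafiles_now,
           (SELECT value FROM v$parameter WHERE name = 'db_files') AS db_files_limit
    FROM   dual;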
    Regards
    Jonathan Lewis

  • I have transactions on my bank account from iTunes that are not from me. How can I resolve this? I cannot find a link or help on the iTunes website for this. Many thanks

    Hi there
    Please can anyone advise? I have had a couple of transactions on my bank account, one for around £28, that were not made by me, and then there was a strange
    refund of £9.17! I have searched the iTunes website for help with this, but can't seem to find any help with this matter! I will phone the bank but I don't
    think they'll offer much help as I have an iTunes account set up! I know I can call a telephone number iTunes provides, but I don't want to pay big charges!
    Many thanks!

    First of all this is not the forum to be discussing this. Second, these are user-to-user forums and Apple does not have a support presence here. Since this involves a disputed payment you need to contact Apple directly for resolution.

  • Purchase order printing / Big Contract

    hello everybody:
    As we all know, a purchase order can be printed using a SAP Smart Form to get a formatted PO document. My question is: is there any function available in SAP to support printing a big contract as part of the PO? By "big contract" I mean that many formal contract conditions have to be included, not just the simple format SAP provides in its examples.
    The ideal solution would be some kind of information carrier where I can store all my contract conditions; whenever I compose a PO, I can select/copy those conditions into the PO and adjust them as I want, then print the PO out, say 30 pages in total. The benefit is that I would not need to enter all the conditions every time I compose a PO in the system.
    I had such a requirement from a customer before; the solution was an enhancement. The result was OK, but not very good.
    Can anyone give me any input on this? Any helpful information will certainly be rewarded with points.
    Andy

    Hi Rajesh:
    Thanks for the reply. I think it might solve my problem.
    But can you give me any detailed information on how to classify the text, and even how to adopt such text during PO processing.
    Thanks again.
    Andy

  • Oracle BPM / SOA Suite and Big and Complex Scenarios

    Hi people,
    I worked for a company that chose Oracle BPM (ALBPM at the time) in the past, and one of the big problems the company had was with big processes and complex scenarios.
    The company is in the e-commerce business and our processes can have many instances at the same time; for example, the process covering the whole "order flow" can have thousands, maybe millions, of instances at the same time.
    So in the past we chose to drop the BPMS option, and now we are talking about BPMS again, and one question is always asked by the company board: if we use BPM again, can the new versions support our data volume?
    To be honest, I don't have this answer, so I would like to know if anyone here has a paper or report about Oracle BPM 11g and big scenarios with many instances. If anybody has a case study too, that would be relevant.
    One final question: how does the Oracle BPM engine handle the case where the engine reboots while it still has many active instances? Will the instances be lost?

    11g ADF is not certified with 10g SOA Suite. What I mean by this is that your 11g ADF will need to run on WLS 10.3.1, while SOA Suite 10g will run on either OC4J or WLS 9.2.
    ADF is just JDeveloper, you deploy to WLS 11g (10.3.1).
    This is the most detailed 10g SOA Suite guide I know for 10g
    http://download.oracle.com/docs/cd/E10291_01/core.1013/e10294/toc.htm
    Note that it is for 10.1.3.3; you just need to substitute 10.1.3.5 (the latest release).
    cheers
    James

  • Is there another bug fix coming soon? iPad Air, iPod and iPhone are still having many, many issues even with the last update. iOS 8 has been a disaster!

    Many features are still having issues, including basic iPad, iPod and iPhone functions that used to be rock solid. When can we expect the next SW releases?

    The Real Big Mac wrote:
    Many features are still having issues, including basic iPad, iPod and iPhone functions that used to be rock solid. When can we expect the next SW releases?
    What troubleshooting steps have you taken?

  • Include many jars for a complex signed applet in html file??

    hello
    I'd like to know how it is possible to put a signed applet that needs many jar files into an HTML file.
    Let me explain: I know that to create a signed applet and put it in an HTML file, I need to create a jar file that contains the applet, create a private key with keytool, sign the jar and include it in my HTML file with the tag <applet code="....." archive="......jar".... />
    This works fine if my applet is a simple program that only uses the classes present by default in the JDK.
    In my case, I have a big project with many packages. In one of these packages I have my applet, which uses classes from the other packages, which in turn use classes from imported jars such as BouncyCastle and others...
    There is still no problem when I run the applet from the applet viewer.
    The problem appears when I put the JAR file with all these classes in the HTML file: the applet cannot find the classes imported from those jars. It's quite obvious, actually.
    My question is: how do I make the HTML file aware of these classes? Is there an HTML tag that allows us to include many jar files? Do I have to decompress all these jars, take all the directories, add them to the directories of my project and create one BIG jar (that's what I did, but it's really dirty, and heavy: 11 MB)?
    Does anyone have an idea about how I can do it?
    Thanks for your help
    Philippe

    11 MB is pretty big for an applet.
    Let's say your applet uses Java 3D; normally a client would download and
    install this separately, meaning the jars needed end up in the lib/ext directory where
    any applet can find them.
    Check which jars need to be installed (put in lib/ext) and which can be
    downloaded:
    <object .....>
      <param name="archive" value="myJar.jar, myOtherjar.jar" />
    </object>

  • Dynamic select too big for varchar2

    Greetings,
    I'm creating a dynamic sql query that in most cases will be larger than a VARCHAR2's size following the idea in this thread:
    Manual Tabular Form - APEX_ITEM
    I haven't found anything in the APEX documentation about using a CLOB as the return value for an SQL query (PL/SQL function body returning SQL query).
    Is it possible? Could there be any other solution for returning a query larger than a VARCHAR2?
    Thank you,
    Marc

    That example is more or less what I'm trying to do, but the query is on a big table with many more rows and columns.
    I'm trying to build a manual tabular form because I'll group rows that share the same value in one column, and I want the row selector checkbox (and other columns) to appear only on the first row of each set of rows, so that if that one is selected, deleting (for example) would delete the whole set with the same ID.
    I haven't found a way to do this with a wizard-created tabular form. A thread for this question has been opened here:
    Conditional rowselector in tabular form
    so if you have any suggestions on that, they'd be gladly taken into consideration :)
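    One possible starting point for the first-row-only checkbox is to drive it off an analytic row number inside the query itself. This is only a sketch; the table and column names, and the apex_item.checkbox call, are assumptions about the setup rather than a tested solution:

    SELECT CASE
             WHEN ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY id) = 1
             THEN APEX_ITEM.CHECKBOX(1, group_id)  -- selector only on the first row of each group
           END AS row_selector,
           id,
           group_id,
           some_column
    FROM   my_big_table
    ORDER  BY group_id, id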
    Regards,
    Marc

  • What Is A big file?

    Hi All. I'm making a smart folder for big files when the question hit me. When does a file become a big file? 100MB? 500MB? 1GB? I would love to hear anyone's opinions.

    It's all relative. It's fair to define the relative size of a file in terms of entropy and utility, both of which are measurable (entropy empirically, utility through observation).
    The entropy is a measure of the randomness of the data. If it's regular, it should be compressible. High entropy (1 per bit) means a good use of space; low entropy (0 per bit) means a waste.
    Utility is simple: what fraction of the bits in a file is ever used for anything. Even if a file has high entropy, if you never refer to part of it, its existence is useless.
    You can express the qualitative size of a file as the product of the entropy and the utility. Zero entropy means the file itself contains just about no information -- so, regardless of how big it is, it's larger than it needs to be. If you never access even one bit of a file, it's also too big regardless of how many bits are in the file. If you use 100% (1 bit per bit) of a file, but its entropy is 0.5 per bit, then the file is twice as big as it needs to be to represent the information contained in it.
    An uncompressed bitmap is large, whereas a run-length encoded bitmap that represents the exact same data is small.
    An MP3 file is based on the idea that you can throw away information in a sound sample to make it smaller, but still generate something that sounds very similar to the original. Two files can represent the same sound, but the MP3 is smaller because you can sacrifice some of the bits, lowering the precision with which you represent the original (taking advantage of the fact that human perception of the differences is limited).
    So, 100M would seem like a ridiculous size for a forum post, but it sounds small for a data file from a super-high resolution mass spectrometer.
