Big Datafile Creation

Hi all,
My OS: Windows Server 2003
Oracle Version: 10.2.0.1.0
Is there any possibility to add a big datafile, larger than 30 GB?
Regards,
Vikas

Vikas Kohli wrote:
Thanks for your help,
But if I already have a tablespace, every time it is about to fill I need to add another 30 GB datafile. Is there any way I can specify one big datafile, or do I need to create a new bigfile tablespace and move the tables from the old tablespace to the new one?

You have to understand that a bigfile tablespace is a tablespace with a single, but very large, datafile.
Have you read the link I posted before?
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#i1010733
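For what it's worth, here is a minimal sketch of what that looks like (the path and file name are made up for illustration); a bigfile tablespace holds exactly one datafile, so instead of adding more files you size or autoextend that single file:

CREATE BIGFILE TABLESPACE big_data
    DATAFILE 'c:\oracle\oradata\orcl\big_data01.dbf' SIZE 30G
    AUTOEXTEND ON NEXT 1G MAXSIZE 100G;

With an 8 KB block size a bigfile tablespace can grow to 32 TB, so the ~32 GB smallfile limit you are running into does not apply.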

Similar Messages

  • More small datafiles or one big datafile for tablespace

    Hello,
    I would like to create a new tablespace (about 4 GB). Could someone tell me if it's better to create one big datafile, or four datafiles of 1 GB each?
    Thank you.

    It depends. Most of the time, it's going to come down to personal preference.
    If you have multiple data files, will you be able to spread them over multiple physical devices? Or do you have a disk subsystem that virtualizes many physical devices and spreads the data files across physical drives (e.g. a SAN)?
    How big is the database going to get? You wouldn't want to have a 1 TB database with 1000 1 GB files, that would be a monster to manage. You probably wouldn't want a 250 GB database with a single data file either, because it would take forever to recover the data file from tape if there was a single block corruption.
    Is there a datafile size that fits comfortably in whatever size mount points you have? If you get 10 GB chunks of SAN at a time, for example, you would probably want data files that were an integer factor of that (e.g. 1, 2, or 5 GB) so that you can add similarly sized data files without wasting space and so that you can move files to a new mountpoint without worrying about whether they'll all fit.
    Does your OS support files of an appropriate size? I know Windows had problems a while ago with files > 2 GB (at least when files extended beyond 2 GB).
    In the end though, this is one of those things that probably doesn't matter too much within reason.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
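    As a concrete illustration of the two layouts being weighed above (names and paths are hypothetical), either form is valid DDL:

    -- Option 1: one 4 GB datafile
    CREATE TABLESPACE app_data
        DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 4G;

    -- Option 2: four 1 GB datafiles, which could sit on different devices
    CREATE TABLESPACE app_data
        DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 1G,
                 '/u02/oradata/ORCL/app_data02.dbf' SIZE 1G,
                 '/u03/oradata/ORCL/app_data03.dbf' SIZE 1G,
                 '/u04/oradata/ORCL/app_data04.dbf' SIZE 1G;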

  • Many little datafiles or one big datafile?

    Hi all
    I have a database which grows every day. I have 10 datafiles named users01.dbf, users02.dbf, and so on.
    All of these datafiles are 1.5 GB. Each time a datafile reaches its 1.5 GB size, I add a new one. Is this a correct strategy? Or should I just add extents to the last datafile and allow it to grow?
    Thanks...

    Hi,
    From my point of view, it is a good approach if "table striping" is used, i.e. the data is spread across more than one device. Otherwise, if you have just one disk, then I don't see advantages or disadvantages either way. In addition, take a look at the "More small datafiles or one big datafile for tablespace" thread: http://forums.oracle.com/forums/thread.jspa?messageID=1041074
    Cheers
    Legatti
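    For reference, the "let the last file grow" alternative the OP asks about is just a matter of enabling autoextension with a cap (the file name and sizes here are hypothetical):

    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users10.dbf'
        AUTOEXTEND ON NEXT 100M MAXSIZE 4G;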

  • RMAN Automatic Datafile Creation

    Hi,
    Could anyone please explain RMAN automatic datafile creation in Oracle 10g, with an example?
    Thanks in advance
    regards,
    Shaan

    Hi,
    Automatic Datafile Creation - RMAN will automatically create missing datafiles in two circumstances. First, when the backup controlfile contains a reference to a datafile, but no backup of the datafile is present. Second, when a backup of the datafile is present, but the backup controlfile contains no reference to it because the controlfile was not backed up again after the datafile was added.
    doc
    http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10734/wnbradv.htm
    Regards,
    Tom
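    A hedged sketch of how this surfaces in practice: there is no special command to invoke; during a normal restore/recover RMAN detects the situation described above and issues the datafile creation itself, then rebuilds the file's contents from the archived redo.

    RMAN> RESTORE DATABASE;   # a referenced datafile with no usable backup is handled here
    RMAN> RECOVER DATABASE;   # the missing file is re-created automatically and redo repopulates it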

  • Datafile creation in Oracle

    Hi All,
    I am going to create a datafile in an Oracle database using this syntax.
    ALTER DATABASE
    CREATE DATAFILE 'c:\oracle\oradata\orabase\uwdata03.dbf' SIZE 1G
    AS 'UWDATA';
    Will the creation of this datafile affect the default datafiles of Oracle?

    user11358816 wrote:
    Hi,
    I don't know much about Oracle, but I need to create a datafile in order to create a tablespace.

    No you don't; that's not how it works.

    So that is why I am asking about the syntax for it.
    Thanks

    Then the very first thing you'll want to learn is where to find the official documentation.
    It would be a good investment in your career to go to tahiti.oracle.com. Drill down to your product and version. There you will find the complete doc library.
    Notice the 'search' function at that site.
    You should spend a few minutes just getting familiar with what kind of documentation is available there by simply browsing the titles under the "Books" tab.
    Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what kind of information is available there. Learning where to look things up in the documentation is time well spent on your career.
    Do the same with the SQL Reference Manual.
    Then set yourself a plan to dig deeper.
    - Read the 2-Day DBA Manual
    - Read a chapter a day from the Concepts Manual.
    - Look in your alert log and find all the non-default initialization parms listed at instance startup. Then read up on each one of them in the Reference Manual. Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files, then check what you see there against the network administration manual.
    - Read the concepts manual again.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
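    To make the earlier "that's not how it works" concrete: for a new tablespace, the datafile comes into existence as part of the tablespace DDL itself; ALTER DATABASE CREATE DATAFILE is a recovery command for re-creating a lost file, not a way to provision storage. A minimal sketch, reusing the OP's path style (the file name is made up):

    CREATE TABLESPACE uwdata
        DATAFILE 'c:\oracle\oradata\orabase\uwdata01.dbf' SIZE 1G;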

  • Tablespace or datafile  creation during recovery

    Hello
    During recovery,
    If a new tablespace or datafile addition turns up in the archivelogs or redologs, I have to manually issue:
    alter database create datafile .. as ..
    Why doesn't Oracle automatically create the datafiles?

    The datafile doesn't exist in the control file. The control file maintains the physical structure of the database. During the RECOVERy phase, Oracle reads the ArchiveLogs to identify what updates are to be done -- these are mapped in terms of file, block and row. If the file doesn't exist in the controlfile, the rollforward cannot be applied.
    Therefore, the ALTER DATABASE CREATE DATAFILE ... AS ... command allows Oracle to "add" the file to the controlfile and then proceed with the rollforward.
    Oracle doesn't automatically create the datafile because it can't know what the target file name is.
    In your backup, your datafiles may have been spread across /u01/oradata/MYDB, /u02/oradata/MYDB, and /u03/oradata/MYDB, and this file may have been in /u03/oradata/MYDB. However, in your target (restored) location the files may be at only two, differently named, mountpoints: /oradata1/REPDB and /oradata/REPDB. Oracle can't decide for you where the new datafile (which was in /u03/oradata/MYDB) should be created -- should it be in /oradata1/REPDB or /oradata/REPDB, or you might have available an /oradata3/REPDB which the database instance isn't even aware of!
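    Putting that together with the example paths above (the file name is hypothetical), the manual step during recovery looks like this: you pick the target location yourself, then resume the rollforward.

    ALTER DATABASE CREATE DATAFILE '/u03/oradata/MYDB/users02.dbf'
        AS '/oradata1/REPDB/users02.dbf';
    RECOVER DATABASE;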

  • Looking for datafile creation date

    DB version: 11.2 / Solaris 10
    We use OMF for our datafiles stored in ASM.
    I was asked to create a 20gb tablespace. We don't create datafiles above 10g. So, I did this.
    CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO;
    ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off;

    Later it turned out that the schema will hold only 7 GB worth of data. So I wanted to reduce the size of the second file using the ALTER DATABASE DATAFILE ... RESIZE command. But I don't want to resize (reduce) the first datafile, the one created when I issued the CREATE TABLESPACE command. Since, in ASM, there is no real naming like
    +DATA/orcl/datafile/fmt_data_uat01.dbf
    +DATA/orcl/datafile/fmt_data_uat02.dbf
    it is difficult to find which file was created first.
    And there is no create_date column in DBA_DATA_FILES. There isn't a create_date column in v$datafile either.
    SQL > select file_name from dba_data_Files where tablespace_name = 'FMT_DATA_UAT';
    FILE_NAME
    +DATA/orcl/datafile/fmt_data_uat.1415.792422709
    +DATA/orcl/datafile/fmt_data_uat.636.792422811
    SQL > select name, CHECKPOINT_TIME, LAST_TIME, FIRST_NONLOGGED_TIME, FOREIGN_CREATION_TIME
         from v$datafile where name like '+DATA/orcl/datafile/fmt_data_uat%';
    NAME                                                    CHECKPOINT_TIME      LAST_TIME            FIRST_NONL FOREIGN_CREATION_TIM
    +DATA/orcl/datafile/fmt_data_uat.1415.792422709         27 Aug 2012 18:55:06
    +DATA/orcl/datafile/fmt_data_uat.636.792422811          27 Aug 2012 18:55:06
    SQL >
    The alert log doesn't show the file names either:
    CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO
    Mon Aug 27 13:25:37 2012
    Completed: CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO
    Mon Aug 27 13:26:51 2012
    ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off
    Mon Aug 27 13:27:10 2012
    Thread 1 advanced to log sequence 70745 (LGWR switch)
      Current log# 8 seq# 70745 mem# 0: +DATA/orcl/onlinelog/group_8.1410.787080847
      Current log# 8 seq# 70745 mem# 1: +FRA/orcl/onlinelog/group_8.821.787080871
    Mon Aug 27 13:27:13 2012
    Archived Log entry 123950 added for thread 1 sequence 70744 ID 0x769b5f42 dest 1:
    Mon Aug 27 13:27:21 2012
    Completed: ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off
    Mon Aug 27 13:28:16 2012

    There isn't a create_date column in v$datafile either.

    Did you check CREATION_TIME?
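    Indeed, V$DATAFILE.CREATION_TIME answers the question directly; a sketch using the file names from the post sorts the two files by age:

    SELECT name, creation_time
      FROM v$datafile
     WHERE name LIKE '+DATA/orcl/datafile/fmt_data_uat%'
     ORDER BY creation_time;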

  • Big datafile vs many datafile

    I'm currently contemplating how to design my datafile layout. I'm running ERP6 on AIX 6.1 with Oracle 10g. Total DB size is 1300 GB.
    The default is 10 GB datafiles, which leads to 130 data files; I think that is too much of a hassle to keep track of. Is it OK to have datafiles of size 100 GB? What are the pros and cons?

    Hi Mohammad,
    It basically depends on which Oracle block size you are using. If it is 8 KB, then the maximum datafile size is 32 GB.
    Again, you should think about backup/recovery and performance issues.
    You also need to check your backup tool's limitations on this.
    There can be OS limitations on larger datafiles as well.
    A checkpoint may take less time with larger datafiles, as it needs to update the headers of a few datafiles rather than a large number of small ones.
    Restore time would be about the same in either case: restoring five 4 GB datafiles and one 20 GB datafile takes roughly the same amount of time.
    It would be better if you consult a core DBA person about your doubts.
    Regards.
    Rajesh Narkhede
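    The 32 GB figure follows from the smallfile limit of 4,194,303 (2^22 - 1) blocks per datafile, so the cap scales with the block size; a quick sketch of the arithmetic as a query:

    SELECT value AS block_size_bytes,
           ROUND(4194303 * TO_NUMBER(value) / POWER(1024, 3)) AS max_smallfile_gb
      FROM v$parameter
     WHERE name = 'db_block_size';

    At 8 KB blocks that comes to about 32 GB; at 16 KB, about 64 GB.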

  • Datafile creation

    I am adding a new datafile to my database.
    We have 3 directories:
    /dwh/data1/dwh/ (all datafiles here, e.g. data101dwh.dbf)
    /dwh1/data1/dwh/ (datafiles here, e.g. data101dwh.dbf)
    /dwh2/data1/dwh/ (datafiles here, e.g. data101dwh.dbf)
    Can I keep the same datafile name, data101dwh.dbf, in all 3 directories, or is there any problem if I create it like this?

    What Sabdar said, times ten. What if you had to perform a tablespace point-in-time recovery? Let's say you map the paths to one place using DB_FILE_NAME_CONVERT. Now you have three files with the same name going into the same directory, and you have the added step of using SET NEWNAME to fix the file-name collision. Why complicate matters?
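    A hedged sketch of the simpler alternative (the tablespace and file names are illustrative): keep the file names unique even though the directories differ, so any future convert or restore maps cleanly.

    ALTER TABLESPACE dwh_data ADD DATAFILE '/dwh/data1/dwh/data102dwh.dbf' SIZE 10G;
    ALTER TABLESPACE dwh_data ADD DATAFILE '/dwh1/data1/dwh/data103dwh.dbf' SIZE 10G;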

  • Adding No. of Datafiles to TableSpace

    Hi All,
    I have a doubt regarding adding a number of datafiles to a tablespace. I want to connect my application to an Oracle 10g database. As the DBA, I am going to allocate a separate tablespace to this new application. As time passes, the data accumulated by the application will reach many GBs, so how should I manage the datafiles?
    1) Shall I add only one big datafile to the tablespace,
    or
    2) Should I periodically add a datafile whenever the data crosses a particular size limit?
    What policy should I follow?
    Regards,
    Darshan

    Hi,
    There is no issue in creating multiple datafiles while creating the database, or adding them later. If space is no constraint for you, then you can create multiple datafiles at database creation time.
    Also, create them with the AUTOEXTEND clause; you don't have to turn autoextension off if you create the file with the MAXSIZE parameter. It's always better to have MAXSIZE defined, because it will not allow your datafile to grow beyond that size. Here is an example of adding a datafile to a tablespace with the MAXSIZE clause:
    ALTER TABLESPACE users
    ADD DATAFILE '/u02/oracle/rbdb1/users03.dbf' SIZE 10M
    AUTOEXTEND ON
    NEXT 512K
    MAXSIZE 250M;
    After defining the MAXSIZE, keep a close eye on the growth of data in the datafile, and when it is nearly full, add a new datafile to the tablespace; keep this cycle going.
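    To keep that close eye on growth, a query along these lines (the tablespace name is just an example) shows size and free space per file:

    SELECT d.file_name,
           ROUND(d.bytes / 1024 / 1024)              AS size_mb,
           ROUND(NVL(SUM(f.bytes), 0) / 1024 / 1024) AS free_mb
      FROM dba_data_files d
      LEFT JOIN dba_free_space f ON f.file_id = d.file_id
     WHERE d.tablespace_name = 'USERS'
     GROUP BY d.file_name, d.bytes;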
    Regarding your second question:
    You are right that data is written to the 2nd datafile when the first one is full. But if you verify this on a running database, you will sometimes find that the 2nd datafile is consuming more space than the 1st, even while the 1st still has plenty of free space. This is due to frequent deletions in the tables that reside in the datafile.
    Hope that gives you some hints.

  • Problem in creating Datafile

    Hi all,
    I am doing some testing on my test machine and I ran into the following situation.
    I have a good control file and all the archives starting from the 1st SCN. I brought down the instance, physically deleted a datafile, and tried bringing up the DB. It couldn't open the DB because a file was missing, so I created the datafile, gave the RECOVER DATABASE command, and things opened up fine. Now I backed up the control file to trace and brought down the instance. Then I deleted my controlfile, recreated it successfully using the script I got from the trace, and tried the datafile drop-and-recreate trick as before, but this time I couldn't create the datafile. Can anybody explain this behaviour?

    KRIS wrote:
    Version is 11.2.0.1.0
    OS is Red Hat Enterprise Linux Server release 5.5
    As told earlier, I tried to create the deleted datafile and this is the error I got.
    13:20:44 SQL> alter database create datafile 4 as '/oracle/abhi/data/users.dbf';
    ERROR at line 1:
    ORA-01178: file 4 created before last CREATE CONTROLFILE, cannot recreate
    ORA-01110: data file 4: '/oracle/abhi/data/users.dbf'
    I can understand the error, but I want to understand what happens internally under such circumstances and why such a datafile recovery is not allowed in Oracle.

    You can only re-create a datafile without a backup in archivelog mode, and only if the datafile was created after the control file. If you re-create the control file, then you cannot use the "alter database create datafile" command for files that predate it; it works only for datafiles created after the control file creation time.
    check:-
    check:-
    SQL> select creation_time,name from v$datafile;
    CREATION_ NAME
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\SYSTEM01.DBF
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\UNDOTBS01.DBF
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\SYSAUX01.DBF
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\USERS01.DBF
    27-JUL-11 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\EXAMPLE01.DBF
    27-JUL-11 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\USER02.DBF
    6 rows selected.
    SQL> select controlfile_created from v$database;
    CONTROLFI
    27-JUL-11
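    Those two checks can be combined into one: the only files that ALTER DATABASE CREATE DATAFILE can rebuild are those created after the controlfile, i.e. the rows returned by a sketch like this:

    SELECT f.file#, f.name, f.creation_time
      FROM v$datafile f, v$database d
     WHERE f.creation_time > d.controlfile_created;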

  • How to Resize a Datafile to Minimum Size in sysaux tablespace

    Hi Experts,
    I found that the initial datafile size is 32712M in the SYSAUX tablespace. Another DBA had added 2 big datafiles to SYSAUX for a space overflow issue. After clearing the data and fixing the overflow, I want to reduce the datafile size:
    ALTER DATABASE DATAFILE 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF' RESIZE 10000M;
    I got an error message that the used data extends beyond the requested size.
    ERROR at line 1:
    ORA-03297: file contains used data beyond requested RESIZE value
    However, in OEM I checked that only 176 MB of data is used in this datafile.
    How do I fix this issue?
    I use Oracle 10g (10.2.0.4) on 32-bit Windows 2003 Server.
    JIM
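    For the record, ORA-03297 keys off the highest allocated block in the file, not the total used bytes, so a nearly empty file can still refuse to shrink if one extent sits near its end. A sketch of finding the true resize floor (this assumes an 8 KB block size; adjust the 8192 for yours):

    SELECT CEIL(MAX(block_id + blocks) * 8192 / 1024 / 1024) AS min_resize_mb
      FROM dba_extents
     WHERE file_id = (SELECT file_id
                        FROM dba_data_files
                       WHERE file_name = 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF');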

    10.2.0.4
    If I run
    SQL> alter database datafile 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF' offline drop;
    Database altered.
    the file is still in the database.
    Can I bring this file online again?
    Or could we delete it with SQL, as in:
    alter tablespace sysaux drop datafile 'D:\ORACLE\ORADATA\SALE\SYSAUX05.DBF';
    Thanks
    Jimmy

  • Big File vs Small file Tablespace

    Hi All,
    I have a doubt and just want to confirm which is better: a bigfile tablespace, or a tablespace with many small (or even big) datafiles. I think it is better to use a bigfile tablespace.
    Kindly help me out on whether I am right or wrong, and why.

    GirishSharma wrote:
    Aman.... wrote:
    Vikas Kohli wrote:
    With respect to performance i guess Big file tablespace is a better option
    Why ?
    If you allow me to post, I would like to paste the text below from the doc link in my first reply:
    "Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile."
    Regards
    Girish Sharma
    Girish,
    I find it interesting that I've never found any evidence to support the performance claims - although I can think of reasons why there might be some truth to them and could design a few tests to check. Even if there is some truth in the claims, how significant or relevant might they be in the context of a database that is so huge that it NEEDS bigfile tablespaces ?
    Database opening:  how often do we do this - does it matter if it takes a little longer - will it actually take noticeably longer if the database isn't subject to crash recovery ?  We can imagine that a database with 10,000 files would take longer to open than a database with 500 files if Oracle had to read the header blocks of every file as part of the database open process - but there's been a "delayed open" feature around for years, so maybe that wouldn't apply in most cases where the database is very large.
    Checkpoints: critical in the days when a full instance checkpoint took place on the log file switch - but (a) that hasn't been true for years, and (b) incremental checkpointing made a big difference to the I/O peak when an instance checkpoint became necessary, and (c) we have had a checkpoint process for years (if not decades) which updates every file header when necessary rather than requiring DBWR to do it
    DBWR processes: why would DBWn handle writes more quickly - the only idea I can come up with is that there could be some code path that has to associate a file id with an operating system file handle of some sort and that this code does more work if the list of files is very long: very disappointing if that's true.
    On the other hand I recall many years ago (8i time) crashing a session when creating roughly 21,000 tablespaces for a database, because some internal structure relating to file information reached the 64MB hard limit for a memory segment in the SGA. It would be interesting to hear if anyone has recently created a database at the 65K+ limit for files - and whether it makes any difference whether that's 66 tablespaces with about 1,000 files each, or 1,000 tablespaces with about 66 files each.
    Regards
    Jonathan Lewis

  • Recreate datafile(URGENT)

    Hello
    One datafile was removed by mistake and I don't have a backup. I want to recreate that datafile now. How can I do this?
    Please provide the step-by-step process to recreate the datafile.
    Errors in file /oracle/home92/admin/tst/bdump/cbosstst_j001_22481.trc:
    ORA-12012: error on auto execute of job 191459
    ORA-01116: error in opening database file 9
    ORA-01110: data file 9: '/ocs/tst/maintbl03.dbf'
    ORA-27041: unable to open file
    SVR4 Error: 2: No such file or directory
    Thankx...

    Hi,
    Aah... one datafile is removed by mistake.
    Without a backup you can't recover your datafile, unless your database is running in archivelog mode and you have kept all the archivelogs from the datafile's creation until now; in that case it is possible to create a new datafile and recover all its data.
    Or, if it is a testing database, then perform the steps below:
    alter database datafile 9 offline drop;
    alter database open;
    regards
    Taj
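    For the archivelog-mode case Taj mentions, the sequence is roughly this (using the path from the error stack; it only works if every archived log since the file was created is still available):

    ALTER DATABASE CREATE DATAFILE '/ocs/tst/maintbl03.dbf';
    RECOVER DATAFILE '/ocs/tst/maintbl03.dbf';
    ALTER DATABASE DATAFILE '/ocs/tst/maintbl03.dbf' ONLINE;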

  • Is Shared storage provided by VirtualBox better or as good as Openfiler ?

    Grid version : 11.2.0.3
    Guest OS           : Solaris 10 (64-bit )
    Host OS           : Windows 7 (64-bit )
    Hypervisor : Virtual Box 4.1.18
    In the past, I have created a 2-node RAC in a virtual environment (11.2.0.2) in which the shared storage was hosted on OpenFiler.
    Now that VirtualBox supports shared LUNs, I want to try it out. If VirtualBox's shared storage is as good as Openfiler, I would definitely go for VirtualBox, as Openfiler requires a third VM (Linux) to be created just for hosting storage.
    For pre-RAC testing, I created a VirtualBox VM and created a standalone DB in it. The test below was done on VirtualBox's LOCAL storage (I am yet to learn how to create shared LUNs in VirtualBox).
    I know that datafile creation is not a definitive test of I/O throughput, but I did a quick test by creating a 6 GB tablespace.
    Is a duration of 2 minutes and 42 seconds acceptable for a 6 GB datafile?
    SQL> set timing on
    SQL> create tablespace MHDATA datafile '/u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf' SIZE 6G AUTOEXTEND off ;
    Tablespace created.
    Elapsed: 00:02:42.47
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $
    $ du -sh /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    6.0G   /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    $ df -h /u01/app/hldat1/oradata/hcmbuat
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0t0d0s6       14G    12G   2.0G    86%    /u01

    Well, once I experimented with Openfiler and built a 2-node 11.2 RAC on Oracle Linux 5 using iSCSI storage (3 VirtualBox VMs in total, all 3 on a desktop PC: Intel i7 2600K, 16GB memory).
    CPU/memory wasn't a problem, but as all 3 VMs were on a single HDD, performance was awful.
    I didn't really run any benchmarks, but a compressed full database backup with RMAN for an empty database (<1 GB) took something like 15 minutes...
    2 VMs + a VirtualBox shared disk on the same single HDD provided much better performance; I'm still using this kind of setup for my sandbox RAC databases.
    edit: 6 GB in 2'42" is about 37 MB/sec
    with the above setup using Openfiler, it was nowhere near this
    edit2: made a little test
    host: Windows 7
    guest: 2 x Oracle Linux 6.3, 11.2.0.3
    hypervisor is VirtualBox 4.2
    PC is the same as above
    2 virtual cores + 4GB memory for each VM
    2 VMs + VirtualBox shared storage (single file) on a single HDD (Seagate Barracuda 3TB ST3000DM001)
    created a 4 GB datafile (not enough space for 6 GB):
    {code}SQL> create tablespace test datafile '+DATA' size 4G;
    Tablespace created.
    Elapsed: 00:00:31.88
    {code}
    {code}RMAN> backup as compressed backupset database format '+DATA';
    Starting backup at 02-OCT-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=22 instance=RDB1 device type=DISK
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=+DATA/rdb/datafile/system.262.790034147
    input datafile file number=00002 name=+DATA/rdb/datafile/sysaux.263.790034149
    input datafile file number=00003 name=+DATA/rdb/datafile/undotbs1.264.790034151
    input datafile file number=00004 name=+DATA/rdb/datafile/undotbs2.266.790034163
    input datafile file number=00005 name=+DATA/rdb/datafile/users.267.790034163
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/nnndf0_tag20121002t192133_0.389.795640895 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/ncsnf0_tag20121002t192133_0.388.795640919 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 02-OCT-12
    {code}
    Now, I don't know much about Openfiler, and maybe I messed something up, but I think this is quite good, so I wouldn't use a 3rd VM just for the storage.
