Bigfile tablespace.

Can someone tell me what the maximum size of a bigfile tablespace is in Oracle 10g?

The Oracle Reference has the database limits list:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm#i287915
For user625868: if you want good alignment of output, wrap your result in [pre]...[/pre] tags.
For example, unwrapped output of desc looks like this:
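As a quick check against your own database: the single file of a bigfile tablespace can hold up to 2^32 blocks, so the ceiling is the tablespace block size times 2^32 (for example, 128TB with a 32K block size). A sketch, assuming access to the DBA views:
select tablespace_name, bigfile, block_size,
       block_size * power(2, 32) / power(1024, 4) as max_size_tb
  from dba_tablespaces;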
desc dba_users
Name Null? Type
USERNAME NOT NULL VARCHAR2(30)
USER_ID NOT NULL NUMBER
PASSWORD VARCHAR2(30)
ACCOUNT_STATUS NOT NULL VARCHAR2(32)
LOCK_DATE DATE
EXPIRY_DATE DATE
DEFAULT_TABLESPACE NOT NULL VARCHAR2(30)
TEMPORARY_TABLESPACE NOT NULL VARCHAR2(30)
CREATED NOT NULL DATE
PROFILE NOT NULL VARCHAR2(30)
INITIAL_RSRC_CONSUMER_GROUP VARCHAR2(30)
EXTERNAL_NAME VARCHAR2(4000)
This one is better:
desc dba_users
Name                                      Null?    Type
USERNAME                                  NOT NULL VARCHAR2(30)
USER_ID                                   NOT NULL NUMBER
PASSWORD                                           VARCHAR2(30)
ACCOUNT_STATUS                            NOT NULL VARCHAR2(32)
LOCK_DATE                                          DATE
EXPIRY_DATE                                        DATE
DEFAULT_TABLESPACE                        NOT NULL VARCHAR2(30)
TEMPORARY_TABLESPACE                      NOT NULL VARCHAR2(30)
CREATED                                   NOT NULL DATE
PROFILE                                   NOT NULL VARCHAR2(30)
INITIAL_RSRC_CONSUMER_GROUP                        VARCHAR2(30)
EXTERNAL_NAME                                      VARCHAR2(4000)

Similar Messages

  • Restore Database to non-ASM Storage - Issue with Bigfile Tablespace

I have been testing a restore of my prod database, which uses ASM (and Oracle-managed files) for storage, to a different server with non-ASM storage. The Oracle version is 10g EE. My database has one bigfile tablespace, and its datafile is about 250GB. The restore fails, and it has something to do with the bigfile tablespace.
Here is the RMAN restore script:
run {
    set newname for datafile 1 to '/ora01/db/ehr/system01.dbf';
    set newname for datafile 2 to '/ora01/db/ehr/undotbs01.dbf';
    set newname for datafile 3 to '/ora01/db/ehr/sysaux01.dbf';
    set newname for datafile 4 to '/ora01/db/ehr/undotbs02.dbf';
    set newname for datafile 5 to '/ora01/db/ehr/users01.dbf';
    set newname for datafile 6 to '/ora01/db/ehr/apolloaud01.dbf';
    set newname for datafile 7 to '/ora01/db/ehr/apollohist01.dbf';
    set newname for datafile 8 to '/ora01/db/ehr/apolloidx01.dbf';
    set newname for datafile 9 to '/ora01/db/ehr/apollotab01.dbf';
    set newname for datafile 10 to '/ora01/db/ehr/apollotab02.dbf';
    set newname for datafile 11 to '/ora02/db/ehr/apollolob01.dbf';
    set newname for datafile 12 to '/ora01/db/ehr/apollofdb01.dbf';
    set newname for datafile 13 to '/ora01/db/ehr/apolloidx02.dbf';
    set newname for datafile 14 to '/ora01/db/ehr/apolloidx03.dbf';
    set newname for datafile 15 to '/ora01/db/ehr/apolloaud02.dbf';
    set newname for datafile 16 to '/ora01/db/ehr/apollotab03.dbf';
    set until sequence 60298 thread 2;
    restore database;
    switch datafile all;
recover database;
}
    Datafile 11 is the datafile in the bigfile tablespace. Here are the weird things about the restore:
    1. The restore output shows this:
    creating datafile fno=11 name=/ora02/db/ehr/apollolob01.dbf
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to /ora01/db/ehr/system01.dbf
    restoring datafile 00002 to /ora01/db/ehr/undotbs01.dbf
    restoring datafile 00003 to /ora01/db/ehr/sysaux01.dbf
    restoring datafile 00004 to /ora01/db/ehr/undotbs02.dbf
    restoring datafile 00005 to /ora01/db/ehr/users01.dbf
    restoring datafile 00006 to /ora01/db/ehr/apolloaud01.dbf
    restoring datafile 00007 to /ora01/db/ehr/apollohist01.dbf
    restoring datafile 00008 to /ora01/db/ehr/apolloidx01.dbf
    restoring datafile 00009 to /ora01/db/ehr/apollotab01.dbf
    restoring datafile 00010 to /ora01/db/ehr/apollotab02.dbf
    restoring datafile 00012 to /ora01/db/ehr/apollofdb01.dbf
    restoring datafile 00013 to /ora01/db/ehr/apolloidx02.dbf
    restoring datafile 00014 to /ora01/db/ehr/apolloidx03.dbf
    restoring datafile 00015 to /ora01/db/ehr/apolloaud02.dbf
    restoring datafile 00016 to /ora01/db/ehr/apollotab03.dbf
Why is it "creating" datafile 11 at the beginning? It then doesn't even say it is "restoring" that datafile; it only restores datafiles 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, and 16.
When it creates datafile 11 it is only 26GB, which is much smaller than it should be according to the v$datafile view on the source prod database. Also, even though it says it is creating datafile 11 as /ora02/db/ehr/apollolob01.dbf, it actually creates it as an Oracle-managed file at /ora02/db/ehr/EHR/datafile/o1_mf_apollolo_6crxyqs2_.dbf
    After the datafiles are restored the "switch datafile all" command fails:
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of switch command at 10/18/2010 13:58:37
    ORA-19625: error identifying file /ora02/db/ehr/apollolob01.dbf
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
So my question is: how do I get this database restored to non-ASM (and non-OMF) storage?
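One thing worth checking first (an assumption based on the log above, where an OMF name shows up despite SET NEWNAME): whether db_create_file_dest is set on the target instance, because RMAN honors it when it creates, rather than restores, a datafile. For example:
show parameter db_create_file_dest
-- temporarily clear it (in memory only) before the restore attempt:
alter system set db_create_file_dest = '' scope = memory;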

So I tried using a different SCN with "set until scn #####", and the restore then created 2 datafiles: the datafile for apollolob and apollotab02.dbf. So I think I have narrowed the problem down to not using the correct SCN, which would let RMAN successfully restore those datafiles instead of recreating them. How do I find the correct SCN to do a successful restore of the entire database? I have seen different methods on the web but can't figure it out. I've used "select archivelog_change#-1 from v$database;" and I also did "list backup of archivelog all" and used the latest sequence number. How can I find the correct SCN so the entire database will restore? (See the suggestion after the listing below.)
    Here is the output of "list backup":
    List of Backup Sets
    ===================
    BS Key Size Device Type Elapsed Time Completion Time
    19724 41.12M DISK 00:00:10 14-OCT-10
    BP Key: 65840 Status: AVAILABLE Compressed: YES Tag: TAG20101014T210022
    Piece Name: /mnt/migrate/rman/EHR_dbid3632734257_set113195_piece1_copy1_20101014
    List of Archived Logs in backup set 19724
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 50439 3230234843 14-OCT-10 3230268282 14-OCT-10
    1 50440 3230268282 14-OCT-10 3230286806 14-OCT-10
    2 60280 3230234852 14-OCT-10 3230251419 14-OCT-10
    2 60281 3230251419 14-OCT-10 3230268263 14-OCT-10
    2 60282 3230268263 14-OCT-10 3230286809 14-OCT-10
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    19725 Full 126.40G DISK 09:11:51 15-OCT-10
    List of Datafiles in backup set 19725
    File LV Type Ckp SCN Ckp Time Name
    1 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/system.625.609259453
    2 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/undotbs1.620.609259461
    3 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/sysaux.768.609259463
    4 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/undotbs2.632.609259467
    5 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/users.257.609259471
    6 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloaud.316.619537285
    7 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollohist.629.619538155
    8 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.312.619538169
    9 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.276.619538487
    10 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.576.619539331
    11 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollolob.570.619539593
    12 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollofdb.750.645974339
    13 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.429.651171265
    14 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.705.688680793
    15 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloaud.747.699632315
    16 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.330.715622123
    Backup Set Copy #1 of backup set 19725
    Device Type Elapsed Time Completion Time Compressed Tag
    DISK 09:11:51 20-OCT-10 YES TAG20101014T210039
    List of Backup Pieces for backup set 19725 Copy #1
    BP Key Pc# Status Piece Name
    65851 1 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece1_copy1_20101014
    65862 2 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece2_copy1_20101014
    65873 3 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece3_copy1_20101014
    65884 4 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece4_copy1_20101014
    65895 5 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece5_copy1_20101014
    65901 6 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece6_copy1_20101014
    65902 7 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece7_copy1_20101014
    65903 8 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece8_copy1_20101014
    65904 9 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece9_copy1_20101014
    65841 10 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece10_copy1_20101014
    65842 11 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece11_copy1_20101014
    65843 12 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece12_copy1_20101014
    65844 13 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece13_copy1_20101014
    65845 14 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece14_copy1_20101014
    65846 15 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece15_copy1_20101014
    65847 16 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece16_copy1_20101014
    65848 17 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece17_copy1_20101014
    65849 18 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece18_copy1_20101014
    65850 19 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece19_copy1_20101014
    65852 20 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece20_copy1_20101014
    65853 21 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece21_copy1_20101014
    65854 22 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece22_copy1_20101015
    65855 23 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece23_copy1_20101015
    65856 24 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece24_copy1_20101015
    65857 25 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece25_copy1_20101015
    65858 26 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece26_copy1_20101015
    65859 27 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece27_copy1_20101015
    65860 28 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece28_copy1_20101015
    65861 29 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece29_copy1_20101015
    65863 30 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece30_copy1_20101015
    65864 31 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece31_copy1_20101015
    65865 32 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece32_copy1_20101015
    65866 33 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece33_copy1_20101015
    65867 34 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece34_copy1_20101015
    65868 35 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece35_copy1_20101015
    65869 36 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece36_copy1_20101015
    65870 37 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece37_copy1_20101015
    65871 38 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece38_copy1_20101015
    65872 39 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece39_copy1_20101015
    65874 40 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece40_copy1_20101015
    65875 41 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece41_copy1_20101015
    65876 42 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece42_copy1_20101015
    65877 43 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece43_copy1_20101015
    65878 44 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece44_copy1_20101015
    65879 45 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece45_copy1_20101015
    65880 46 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece46_copy1_20101015
    65881 47 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece47_copy1_20101015
    65882 48 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece48_copy1_20101015
    65883 49 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece49_copy1_20101015
    65885 50 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece50_copy1_20101015
    65886 51 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece51_copy1_20101015
    65887 52 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece52_copy1_20101015
    65888 53 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece53_copy1_20101015
    65889 54 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece54_copy1_20101015
    65890 55 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece55_copy1_20101015
    65891 56 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece56_copy1_20101015
    65892 57 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece57_copy1_20101015
    65893 58 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece58_copy1_20101015
    65894 59 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece59_copy1_20101015
    65896 60 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece60_copy1_20101015
    65897 61 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece61_copy1_20101015
    65898 62 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece62_copy1_20101015
    65899 63 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece63_copy1_20101015
    65900 64 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece64_copy1_20101015
    BS Key Size Device Type Elapsed Time Completion Time
    19726 228.10M DISK 00:00:49 15-OCT-10
    BP Key: 65905 Status: AVAILABLE Compressed: YES Tag: TAG20101015T061242
    Piece Name: /mnt/migrate/rman/EHR_dbid3632734257_set113197_piece1_copy1_20101015
    List of Archived Logs in backup set 19726
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 50441 3230286806 14-OCT-10 3230331993 14-OCT-10
    1 50442 3230331993 14-OCT-10 3230401945 14-OCT-10
    1 50443 3230401945 14-OCT-10 3230469794 15-OCT-10
    1 50444 3230469794 15-OCT-10 3230555010 15-OCT-10
    1 50445 3230555010 15-OCT-10 3230618396 15-OCT-10
    1 50446 3230618396 15-OCT-10 3230695020 15-OCT-10
    2 60283 3230286809 14-OCT-10 3230304858 14-OCT-10
    2 60284 3230304858 14-OCT-10 3230330891 14-OCT-10
    2 60285 3230330891 14-OCT-10 3230354275 14-OCT-10
    2 60286 3230354275 14-OCT-10 3230366292 14-OCT-10
    2 60287 3230366292 14-OCT-10 3230399805 14-OCT-10
    2 60288 3230399805 14-OCT-10 3230423577 14-OCT-10
    2 60289 3230423577 14-OCT-10 3230446176 15-OCT-10
    2 60290 3230446176 15-OCT-10 3230469756 15-OCT-10
    2 60291 3230469756 15-OCT-10 3230496786 15-OCT-10
    2 60292 3230496786 15-OCT-10 3230524710 15-OCT-10
    2 60293 3230524710 15-OCT-10 3230554981 15-OCT-10
    2 60294 3230554981 15-OCT-10 3230583802 15-OCT-10
    2 60295 3230583802 15-OCT-10 3230610465 15-OCT-10
    2 60296 3230610465 15-OCT-10 3230617887 15-OCT-10
    2 60297 3230617887 15-OCT-10 3230673207 15-OCT-10
    2 60298 3230673207 15-OCT-10 3230695022 15-OCT-10
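One suggestion based on the listing above (a sketch, not a verified fix): take, for each thread, the Next SCN of its last backed-up archived log, and use the smaller of the two as the until point, since recovery needs redo from every thread up to the target. Here that would be min(3230695020, 3230695022) = 3230695020:
run {
  # lowest "Next SCN" across threads:
  # thread 1 seq 50446 -> 3230695020, thread 2 seq 60298 -> 3230695022
  set until scn 3230695020;
  # ... set newname commands as in the original script ...
  restore database;
  switch datafile all;
  recover database;
}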

  • Migrating to a bigfile tablespace

I have a table that is all set up and populated. Is there a way to create a table based on the properties of another table, but in a bigfile tablespace instead of a smallfile one? I have found that you can't just convert a smallfile tablespace to bigfile, but I have been unsuccessful in finding any documentation on how to migrate to a bigfile tablespace. Again, any help is appreciated; I am new but enjoying Oracle very much thanks to finding this forum.
    Thanks,
    Bruce
    Edit:
    I apologize I neglected to post the details
Windows 2003 Server
Oracle 10.2.0.4
    Edited by: Bruce_Bruce on May 27, 2009 12:44 PM

Bruce_Bruce wrote:
Is there a way to create a table based on the properties of another table but as bigfile instead of smallfile?
"bigfile" is a property of the tablespace, not of the table. Just create a 'bigfile' tablespace and move your table to it.
    You aren't thinking that there is a one-to-one relationship between tables and tablespaces are you? A table lives in one tablespace, but a tablespace can contain many tables. And any relationship between tablespaces and schemas/users is strictly by your own convention for ease of administration. There is nothing in Oracle that makes that linkage.
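For example, a minimal sketch of creating a bigfile tablespace and moving a table into it (big_ts, my_table, my_table_pk, and the path and sizes are illustrative placeholders, not from the thread):
create bigfile tablespace big_ts
  datafile 'C:\oradata\big_ts01.dbf' size 10g autoextend on;
alter table my_table move tablespace big_ts;
-- moving a table leaves its indexes UNUSABLE, so rebuild them:
alter index my_table_pk rebuild tablespace big_ts;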

  • Need clarification on Bigfile Tablespaces

    In the following Oracle Documentation Lirbary PDF,
        Oracle® Database
        Concepts
        10g Release 2 (10.2)
        B14220-02
        October 2005
    section
        Overview of Tablespaces
        Bigfile Tablespaces (page: 3-5)
    it says,
        Benefits of Bigfile Tablespaces
* Bigfile tablespaces can significantly increase the storage capacity of an Oracle database. Smallfile tablespaces can contain up to 1024 files, but bigfile tablespaces contain only one file that can be 1024 times larger than a smallfile tablespace. The total tablespace capacity is the same for smallfile tablespaces and bigfile tablespaces. However, because there is a limit of 64K datafiles for each database, a database can contain 1024 times more bigfile tablespaces than smallfile tablespaces, so bigfile tablespaces increase the total database capacity by 3 orders of magnitude. In other words, 8 exabytes is the maximum size of the Oracle database when bigfile tablespaces are used with the maximum block size (32K).
    I need clarification on how to arrive at 8 exabytes ?
    1024 x 32k x 64,000 ??
According to the excerpt above, there's no mention of a maximum number of operating system blocks per extent. Unless this was assumed knowledge... how do I get 8 exabytes?
And if "a database can contain 1024 times more bigfile tablespaces than smallfile tablespaces", then what's the upper limit on smallfile tablespaces? Was this sentence referring to the number of datafiles per smallfile tablespace? ...
    O_o
    Thanks !
    Message was edited by:
    mvanle

    Hi,
According to the Physical Database Limits page (http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm#i287915), a bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32TB for a tablespace with 8K blocks. In summary, a bigfile tablespace is a tablespace containing a single datafile that can be as large as 128 terabytes (TB), depending on the block size of the tablespace. In conjunction with setting the initialization parameter DB_FILES to the maximum value of 65,535, the total size of the database can be more than 8 exabytes (EB).
    >>how do I get 8 exabytes ?
    You can calculate the maximum amount of space (M) in a single Oracle database as the maximum number of datafiles (D) multiplied by the maximum number of blocks per datafile (F) multiplied by the tablespace block size (B):
    M = D * F * B. Therefore, the maximum database size, given the maximum block size and the maximum number of datafiles, is:
    65,535 datafiles * 4,294,967,296 blocks per datafile * 32,768 block size = 9,223,231,299,366,420,480 = 8EB.Cheers
    Legatti
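As a quick sanity check of that arithmetic in SQL*Plus:
select 65535 * power(2, 32) * 32768 as max_bytes,
       round(65535 * power(2, 32) * 32768 / power(1024, 6)) as max_eb
  from dual;
-- max_bytes = 9223231299366420480, max_eb = 8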

  • RMAN level 0 backup with bigfile tablespaces

We noticed that our RMAN backup process for the PROD database is not working properly.
  PROD is an 8 TB database with two bigfile tablespaces, each having a datafile of more than 2 TB.
  PROD is an 11gR1 (11.1.0.7) RAC database with ASM storage.
   We are taking a weekly incremental level 0 backup of the entire database and a level 1 backup every day.
  As per v$session_longops, the approximate completion time for the two tablespaces is 5 days.
We can use the "SECTION SIZE" parameter to take an RMAN backup of the bigfile tablespaces and exclude them from the level 0 backup.
However, in this case our level 0 backup won't include the two bigfile tablespaces, and the daily level 1 backup will not include them either.
Will you please advise us on how to take backups in this scenario?
We have only 4TB of LUN allocated for backup and have to keep 7 days of backups on disk.
    Thanks in advance !
    DR

    We can use "SECTION SIZE" parameter to take rman backup for bigfile tablespaces and exclude them from the level 0 backup.
    How do you think that the backup will be useful if you exclude some files from level 0 or base backup? You will find difficulties or you can't restore the database in case of disaster.
    Yes you can use section size parameter for paralleled big file tablespace backup.
    Check below for explanation and example.
    Backing Up the Database: Advanced Topics
    http://www.oracle-base.com/articles/11g/rman-enhancements-11gr1.php#multisection_backups
    You can use the below note for your case which will be of help.
    Reducing RMAN backup time for unevenly sized tablespaces « Oracle DBA – A lifelong learning experie…
    Thank you!!
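As an illustration, a multisection incremental level 0 backup of the whole database might look like this sketch (the channel count and 64G section size are illustrative and should be tuned to your storage and backup window):
run {
  allocate channel c1 device type disk;
  allocate channel c2 device type disk;
  # each bigfile datafile is split into 64G sections,
  # so both channels can work on it in parallel
  backup incremental level 0 section size 64g database;
}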

  • Query about Bigfile tablespace

What is the minimum size of the single datafile when creating a bigfile tablespace?

John Stegeman wrote:
Why do you need to know? As far as I recall, it's pretty small (100k or less).
"Pretty small" is correct, but I think it may be a little over a megabyte in the latest versions of Oracle; I think I noticed Oracle reserving the 1st MB for the file space map quite recently.
    Regards
    Jonathan Lewis

  • Smallfile or Bigfile tablespace?

    Hi!
    Smallfile or Bigfile tablespace?
I have a 5TB DB (2-node RAC) with both bigfile and smallfile tablespaces.
I want to create a tablespace, but I don't know how to choose the type of tablespace.
I found some tips:
1. Using BIGFILE tablespaces on platforms that do not support large file sizes is not recommended: it limits tablespace capacity.
2. The fact that only one datafile is attached to a tablespace makes it easier to manage: the tablespace becomes the unit of administration.
3. There is a maximum datafile size limit in an Oracle database.
4. Restore operations are quicker on smallfile than on bigfile tablespaces.
5. Bigfile tablespaces should be striped so that parallel operations are not adversely affected. Oracle expects bigfile tablespaces to be used with Automatic Storage Management (ASM) or other logical volume managers that support striping or RAID.
6. Avoid using bigfile tablespaces if there could possibly be no free space available on a disk group, and the only way to extend a tablespace is to add a new datafile on a different disk group.
7. Bigfile tablespaces should be used with Automatic Storage Management, or other logical volume managers that support dynamically extensible logical volumes, striping, and RAID.
Sorry if I repeat this global topic.
But do you have more arguments for using bigfile tablespaces?

Duplicate posts:
    Bigfile or smallfile tablespace?
    Question about BIGFILE vs SMALLFILE
    and:
    http://www.databasejournal.com/features/oracle/article.php/3646226/Bigfile-Type-Tablespaces-versus-Smallfile-Type.htm
    Edited by: Fran on 13-mar-2012 2:06

  • Converting Bigfile Tablespace to small file tablespace

Hi all!
I want to convert a bigfile tablespace to a smallfile tablespace. How can I achieve that?

I am worried about only one question:
if I follow the following procedure, what are the drawbacks? Would I miss something?
1) Take an export of the tablespace as the SYS user with all the default parameters.
2) Take the tablespace offline and drop it.
3) Create the same tablespace with the smallfile parameter (the default).
4) Import the tablespace (which was earlier exported). A sketch of these steps is shown below.
This will be done in restricted mode.
By doing so, will I still lose something (dependencies, triggers, synonyms, etc.)? How can I verify that everything went right?
Of course I can run a count(*) on the tables and indexes, but what about other objects, especially the dependencies?
Thanks for all the help so far and the patience to look into this thread!!
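A minimal sketch of steps 1-4 using Data Pump rather than classic export (the tablespace name MYDATA, the directory object, path, and sizes are all hypothetical):
-- 1) export the tablespace contents (OS shell, directory object pre-created):
--      expdp system directory=DPUMP_DIR dumpfile=ts.dmp tablespaces=MYDATA
-- 2) and 3) drop and recreate as smallfile (SQL*Plus):
drop tablespace mydata including contents and datafiles;
create tablespace mydata
  datafile '/u01/oradata/mydata01.dbf' size 30g autoextend on;
-- 4) import back (OS shell):
--      impdp system directory=DPUMP_DIR dumpfile=ts.dmp
-- afterwards, invalid objects and broken dependencies can be spotted with:
select owner, object_type, count(*)
  from dba_objects
 where status = 'INVALID'
 group by owner, object_type;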

• Range of time to create a 100G bigfile tablespace on decent server HW

I am creating a 100G bigfile tablespace with 2G autoextend and a max size of 200G. I first ran the script with smaller sizes (300MB initial, 20MB autoextend, 2G max size) to ensure it worked as intended. All went well. But when I put in the larger file sizes it takes a lot longer. I am aware that there is a large increase in the time needed to create a 100GB tablespace as opposed to a 300MB one. I was wondering if you have created similarly sized tablespaces and how long it took you. I have let this run for over 12 hours with nothing to show for it.
    Any contributions would be appreciated.
    Bruce

    Bruce_Bruce wrote:
I left it running overnight and still nothing....
    SQL> create bigfile tablespace bigtbs datafile '/data/oracle/mydb/bigtbs.dbf' size 100g;
    Tablespace created.
    Elapsed: 00:40:34.33
    SQL> drop tablespace bigtbs including contents and datafiles;
    Tablespace dropped.
Elapsed: 00:00:04.29
On AIX 5.3, RAID 10 on a SAN fibre channel, with the following lv and vg specs:
    oracle:/data/oracle# lslv lvoradata
    LOGICAL VOLUME:     lvoradata              VOLUME GROUP:   vgoradata
    LV IDENTIFIER:      xxx                    PERMISSION:     read/write
    VG STATE:           active/complete        LV STATE:       opened/syncd
    TYPE:               jfs2                   WRITE VERIFY:   off
    MAX LPs:            2048                   PP SIZE:        1024 megabyte(s)
    COPIES:             1                      SCHED POLICY:   parallel
    LPs:                1398                   PPs:            1398
    STALE PPs:          0                      BB POLICY:      relocatable
    INTER-POLICY:       minimum                RELOCATABLE:    yes
    INTRA-POLICY:       middle                 UPPER BOUND:    64
    MOUNT POINT:        /data/oracle           LABEL:          /data/oracle
    MIRROR WRITE CONSISTENCY: on/ACTIVE
    EACH LP COPY ON A SEPARATE PV ?: yes
    Serialize IO ?:     NO
    oracle:/data/oracle# lsvg vgoradata
    VOLUME GROUP:       vgoradata                VG IDENTIFIER:  yyy
    VG STATE:           active                   PP SIZE:        1024 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      1399 (1432576 megabytes)
    MAX LVs:            512                      FREE PPs:       0 (0 megabytes)
    LVs:                2                        USED PPs:       1399 (1432576 megabytes)
    OPEN LVs:           2                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     130048
    MAX PPs per PV:     2032                     MAX PVs:        64
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
You may have a wrong config.
    Nicolas.

• Bigfile tablespace requirements

Version: 10g/11g
I am just trying to clarify my concepts about VLDBs.
How should the disks be configured to support bigfile tablespaces, for example two tablespaces of 10TB each (one for data, one for indexes)?
How can I tell whether my disks support bigfile tablespaces (commands?)
What would be the best practice to implement this?
Any other suggestions/Metalink notes for further research would help....

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#sthref1288
    OR
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#i1007169

  • Bigfile Temporary Tablespace?

    Hi,
    --------------------------Oracle 11G Release 11.1.0.6.0, Windows XP 32------------------------------------
I've tried to find out but was not successful. Can I create a BIGFILE TEMPORARY TABLESPACE the same way I've created a normal BIGFILE TABLESPACE?
Secondly, if it is not possible, will creating multiple temporary files affect performance while doing the sorting?
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
          from gv$sort_segment;
       INST_ID TABLESPACE_NAME                 TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS
             1 TEMP                                12582144           0    12582144
I've created 3 temp files of 30G each and get the above error, i.e. unable to extend. Adding a temp file will solve this issue, but I was thinking of creating one bigfile temporary file instead of adding files, as that might otherwise decrease performance a lot.
    Thanks
    Regards

kam555 wrote:
Can I create a BIGFILE TEMPORARY TABLESPACE like I've created a normal BIGFILE TABLESPACE?
Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space management, with three exceptions: locally managed undo tablespaces, temporary tablespaces, and the SYSTEM tablespace.
There is no performance overhead from having multiple datafiles in a tablespace. That is the structure of Oracle: a tablespace may consist of multiple datafiles.
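So the answer to the first question is yes: temporary tablespaces are one of the listed exceptions. A minimal sketch (the name, path, and sizes are illustrative):
create bigfile temporary tablespace temp_big
  tempfile 'C:\oradata\temp_big01.dbf' size 90g
  autoextend on next 1g maxsize 120g;
-- optionally make it the database default:
alter database default temporary tablespace temp_big;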

  • Bigfile or smallfile tablespace?

    Hi community,
I'd like to set up an Oracle database (version 10.2.0.3, 64-bit) on Windows Server 2003 (64-bit).
    Server Hardware:
- Dual-Core AMD Opteron Processor 8224 SE, 3.21 GHz
    - 16 GB RAM
    Storage:
    - HP StorageWorks MSA 1000
    - 3 x 250 GB (RAID 1)
- At the moment the 250 GB drives are combined into one large 750 GB logical drive (RAID 0)
The data content for the new database is coming from DB2 (it will be copied to the Oracle DB) and the size is about 350 GB at the moment, with an upward trend.
    Data:
150 tables with fewer than 100,000 rows.
40 tables with more than 100,000 rows but fewer than 1,000,000 rows.
15 tables with more than 1,000,000 rows (the biggest table has 125,000,000 rows).
So here are my questions regarding how to size a tablespace for the new Oracle instance:
1. Should I create a bigfile tablespace with one large datafile (350-400 GB), or should I create a smallfile tablespace with several smaller datafiles?
2. At the moment the db_block_size is 8K. This means that I can create a datafile for a smallfile tablespace with a maximum of 32 GB. So for 350 GB I need at least 11 datafiles. I think this is not a good solution. So maybe I should change the db_block_size to 16K so I can create datafiles of up to 64 GB? What effects would this change of block size bring?
3. Is it better, for performance, to use the 750 GB RAID10 array, or should I use three 250 GB RAID1 arrays and allocate different datafiles on them?
4. For the beginning, should I size the tablespace as big as the data really is (350GB) or should I size it bigger (400GB)?
I would really appreciate it if somebody could help me with this issue...
    Thanks in advance,
    Tobias Schmidt

Use standard tablespaces. You do not need the larger files with their limitations. Read more at http://tahiti.oracle.com. If performance is an issue then you should get management to invest in decent storage. Three internal drives with mirroring are totally inadequate for any serious work. Look at Pillar Data Systems or NetApp for NAS and/or SAN. Here's why:
You have a 350GB database with RAID 0, so you have just used 100% of your available disk. You've no room left: no room for the operating system, and no room for the Oracle binaries, if one assumes a swap space. Then where are you going to put the SYSTEM, SYSAUX, TEMP, and UNDO tablespaces? They too need to be mirrored. How about your archived redo logs? Backups? Flashback logs?
I wouldn't consider putting a 350GB database on anything less than 6X that much space; more if, as you indicate, it is growing. And tell your storage admin that one LUN is not the right answer to just about any question.
    Also I am puzzled why, with what appears to be a serious database, you are using Windows rather than Linux.

  • Bigfile vs. smallfile tablespaces

    Oracle 11g R1/ASM Single-Instance
    RHEL 5
    ========
    Hi All,
I just wanted everyone's opinion about using bigfile tablespaces vs. the regular smallfile tablespaces. We have one tablespace that fills up pretty quickly: it is loaded with images, and we are trying to decide whether we should go the bigfile tablespace route or just keep adding datafiles with the smallfile tablespace method. If I am correct, the maximum a smallfile datafile can be is 65GB. And that lasts us about a couple of months. I guess my real question is how a bigfile tablespace would affect database/datafile recoverability. From common sense, I think it would take much longer to recover a 4TB datafile compared to a 65GB datafile. Or would it?
    I just wanted to know what you all think and what you have experienced.
    Thanks.

    Oviwan wrote:
As far as I know, bigfiles are designed for ASM. I used one only once, for a big temp tablespace during a migration. But I think if a block gets corrupted it's better to have several small datafiles, since you then only have to restore the one corrupt datafile instead of the big one.
The size limitation of a datafile is platform dependent; on Linux it is 32GB...
    have a look here: http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/limits002.htm#i287915
But you risk data block corruption either way you go, and if you are in archivelog mode, with RMAN you can fairly easily repair the bad block.
It sounds to me like the original poster of this thread is a fantastic candidate for bigfile tablespaces. I'm actually looking at trying to move MOST of my stuff to them to see how it does... I'd rather let Oracle manage it and free me up to work on other things. The nice thing with bigfile tablespaces is that the datafile and the tablespace are finally pretty much one and the same thing now.
I wouldn't throw everything in a database onto just one bigfile tablespace... but I would have a few bigfile tablespaces out there and separate things by usage/application. But really, if you have a big SAN, using ASM and all, why not just put more things together and make life simpler for yourself?
    Just my $0.02 here of late,
    cayenne

  • Bigfile Temporary Tablespaces

In most RAC environments you use multiple tempfiles in a temp tablespace to avoid contention, yet the Exadata OLTP best practices guide states that you should start out by using a bigfile tablespace for temporary tablespaces. This seemed odd to me, since bigfile tablespaces can only have one tempfile. Understanding that OLTP may not have much parallelism, so multiple tempfiles may not be needed, it still seemed odd to use bigfile instead of smallfile tablespaces for something like temp. Does anyone have any experience with using bigfile tablespaces on RAC for temporary tablespaces?

    I have used bigfile temporary tablespaces for some intensively parallelized operations and never encountered any contention problems. If I were you, since we are talking about temporary stuff, I would make the change and run some tests.

  • Maximum number of blocks for a bigfile datafile?

    Hello,
In 10g, when using bigfile tablespaces, the maximum size of a datafile can be between 8TB and 128TB depending on the block size (2K to 32K). The formula given to calculate the max size of a bigfile datafile is:
    Maximum datafile size = db_block_size * maximum number of blocks
So my question is: how do you calculate the max number of blocks? The Oracle documentation states that the max number of blocks is 4,294,967,296 (4 billion), i.e. 2^32. But where do they get this number from? I read, on a different site (not Oracle), that it is the addressing scheme used in the ROWID that changed, permitting the possibility of addressing 4 billion blocks instead of 4 million:
    This is due to a new addressing scheme Oracle uses internally. Oracle ROWID, addressing a database object stored in a traditional SMALLFILE tablespace, divides the 12 bytes thusly: 3 bytes for the Relative File#, 6 bytes for the Block# and 3 bytes for the object. The same rowid addressing an object stored in a new BIGFILE tablespace uses the 9 bytes to store the Block# within the unique file, as there is no reason to use the 3 bytes for the Relative File# since there is only one file in that tablespace. Thus the new addressing scheme permits up to 4Gblocks (4,294,967,296) in a single data file and the maximum file size can reach 8 TB for a blocksize of 2K and 128 TB for a blocksize of 32K.
    But, even with this explanation, 4 billion still makes no sense because using 9 bytes equates to 2^72 addressable blocks, not 2^32. Am I missing something?
    Thanks,
    Mark

Thanks for the answer.
I'm almost tempted to accept the explanation that the 4,294,967,296 addressable blocks is a 32-bit platform limitation... Do you or anyone else know of any official Oracle documentation explaining where they get this magic number from?
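For what it's worth, the numbers quoted in this thread suggest the answer: a smallfile tablespace allows about 1024 files (roughly 2^10) of about 4 million blocks each (2^22), so the combined file-plus-block address is 32 bits, and a bigfile tablespace appears to spend all 32 bits on the block number, giving 2^32 blocks. The per-file limits quoted earlier check out arithmetically:
select power(2, 32) * 2048  / power(1024, 4) as max_tb_2k_block,
       power(2, 32) * 32768 / power(1024, 4) as max_tb_32k_block
  from dual;
-- returns 8 (TB) and 128 (TB), matching the 8TB-128TB range above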
