Query about Bigfile tablespace

What will be the minimum size of the single data file when creating a bigfile tablespace?

John Stegeman wrote:
why do you need to know? As far as I recall, it's pretty small (100k or less)

"pretty small" is correct, but I think it may be a little over a megabyte in the latest versions of Oracle - I think I noticed Oracle reserving the 1st MB for the file space map quite recently.
Regards
Jonathan Lewis
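
If you want to check the floor empirically on your own version and block size, a quick test along these lines should work (the file path and the starting SIZE are assumptions, not values from this thread; step the SIZE down until Oracle rejects it with ORA-03214):

    -- Create a deliberately small bigfile tablespace and see what Oracle allocates.
    CREATE BIGFILE TABLESPACE ts_min
      DATAFILE '/u01/oradata/orcl/ts_min01.dbf' SIZE 2M;

    -- Check the size Oracle actually reserved for the single datafile.
    SELECT file_name, bytes/1024 AS kbytes, blocks
    FROM   dba_data_files
    WHERE  tablespace_name = 'TS_MIN';

    -- Clean up the test tablespace.
    DROP TABLESPACE ts_min INCLUDING CONTENTS AND DATAFILES;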

Similar Messages

  • Query about Temporary Tablespaces

    Why are Data Manipulation Language (DML) locks never acquired on the data of temporary tables? Any detailed explanations with justifications would be great. Thanks in advance.

    918868 wrote:
    Why are Data Manipulation Language (DML) locks never acquired on the data of temporary tables? Any detailed explanations with justifications would be great. Thanks in advance.

    Each process only manipulates its own segments, so there is never any cross-session contention.

  • About Temporary Tablespace

    Hi DBA Guru(s),
    I have a general query about temporary tablespaces: if we drop the temporary tablespace, will our database still run, and what will be the impact on the database?
    Please suggest.
    Regards,
    Rajeev Kumar

    970371 wrote:
    Hi DBA Guru(s),
    I have a general query about temporary tablespaces: if we drop the temporary tablespace, will our database still run, and what will be the impact on the database?
    Please suggest.
    Regards,
    Rajeev Kumar

    The database can start without a temporary tablespace, but it is not a good idea, because the SYSTEM tablespace would then be used for sort segments.
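
    If the real goal is to replace the temporary tablespace rather than simply drop it, a safer sequence is roughly the following sketch (the tablespace name, tempfile path, and size are assumptions):

    -- Find the current database default temporary tablespace.
    SELECT property_value
    FROM   database_properties
    WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';

    -- Create a replacement, switch the default, then drop the old one.
    CREATE TEMPORARY TABLESPACE temp2
      TEMPFILE '/u01/oradata/orcl/temp02.dbf' SIZE 1G AUTOEXTEND ON;

    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

    DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;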

  • Restore Database to non-ASM Storage - Issue with Bigfile Tablespace

    I have been testing a restore of my prod database, which uses ASM (and Oracle-managed files) for storage, to a different server with non-ASM storage. The Oracle version is 10g EE. My database has one bigfile tablespace and its datafile is about 250GB. The restore fails, and it has something to do with the bigfile tablespace.
    Here is the rman restore script:
    run {
    set newname for datafile 1 to '/ora01/db/ehr/system01.dbf';
    set newname for datafile 2 to '/ora01/db/ehr/undotbs01.dbf';
    set newname for datafile 3 to '/ora01/db/ehr/sysaux01.dbf';
    set newname for datafile 4 to '/ora01/db/ehr/undotbs02.dbf';
    set newname for datafile 5 to '/ora01/db/ehr/users01.dbf';
    set newname for datafile 6 to '/ora01/db/ehr/apolloaud01.dbf';
    set newname for datafile 7 to '/ora01/db/ehr/apollohist01.dbf';
    set newname for datafile 8 to '/ora01/db/ehr/apolloidx01.dbf';
    set newname for datafile 9 to '/ora01/db/ehr/apollotab01.dbf';
    set newname for datafile 10 to '/ora01/db/ehr/apollotab02.dbf';
    set newname for datafile 11 to '/ora02/db/ehr/apollolob01.dbf';
    set newname for datafile 12 to '/ora01/db/ehr/apollofdb01.dbf';
    set newname for datafile 13 to '/ora01/db/ehr/apolloidx02.dbf';
    set newname for datafile 14 to '/ora01/db/ehr/apolloidx03.dbf';
    set newname for datafile 15 to '/ora01/db/ehr/apolloaud02.dbf';
    set newname for datafile 16 to '/ora01/db/ehr/apollotab03.dbf';
    set until sequence 60298 thread 2;
    restore database;
    switch datafile all;
    recover database;
    }
    Datafile 11 is the datafile in the bigfile tablespace. Here are the weird things about the restore:
    1. The restore output shows this:
    creating datafile fno=11 name=/ora02/db/ehr/apollolob01.dbf
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to /ora01/db/ehr/system01.dbf
    restoring datafile 00002 to /ora01/db/ehr/undotbs01.dbf
    restoring datafile 00003 to /ora01/db/ehr/sysaux01.dbf
    restoring datafile 00004 to /ora01/db/ehr/undotbs02.dbf
    restoring datafile 00005 to /ora01/db/ehr/users01.dbf
    restoring datafile 00006 to /ora01/db/ehr/apolloaud01.dbf
    restoring datafile 00007 to /ora01/db/ehr/apollohist01.dbf
    restoring datafile 00008 to /ora01/db/ehr/apolloidx01.dbf
    restoring datafile 00009 to /ora01/db/ehr/apollotab01.dbf
    restoring datafile 00010 to /ora01/db/ehr/apollotab02.dbf
    restoring datafile 00012 to /ora01/db/ehr/apollofdb01.dbf
    restoring datafile 00013 to /ora01/db/ehr/apolloidx02.dbf
    restoring datafile 00014 to /ora01/db/ehr/apolloidx03.dbf
    restoring datafile 00015 to /ora01/db/ehr/apolloaud02.dbf
    restoring datafile 00016 to /ora01/db/ehr/apollotab03.dbf
    Why is it "creating" datafile 11 at the beginning? Then it doesn't even say it is "restoring" that datafile; it only restores datafiles 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, and 16.
    When it creates datafile 11 it is only 26 GB, which is much smaller than it should be according to the v$datafile view on the source prod database. Also, even though it says it is creating datafile 11 as /ora02/db/ehr/apollolob01.dbf, it actually creates it as an Oracle-managed file at /ora02/db/ehr/EHR/datafile/o1_mf_apollolo_6crxyqs2_.dbf
    After the datafiles are restored the "switch datafile all" command fails:
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of switch command at 10/18/2010 13:58:37
    ORA-19625: error identifying file /ora02/db/ehr/apollolob01.dbf
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    So my question is: how do I get this database restored to non-ASM (and non-OMF) storage?

    So I tried using a different SCN with "set until scn #####", and then the restore created 2 datafiles: the datafile for apollolob and apollotab02.dbf. So I think I have narrowed the problem down to not using the correct SCN, so RMAN cannot successfully restore those datafiles and recreates them instead. How do I find the correct SCN to use to do a successful restore of the entire database? I have seen different methods on the web, but can't figure it out. I've used "select archivelog_change#-1 from v$database;" and I also did "list backup of archivelog all" and used the latest sequence number. How can I find the correct SCN to use so the entire database will restore?
    Here is the output of "list backup":
    List of Backup Sets
    ===================
    BS Key Size Device Type Elapsed Time Completion Time
    19724 41.12M DISK 00:00:10 14-OCT-10
    BP Key: 65840 Status: AVAILABLE Compressed: YES Tag: TAG20101014T210022
    Piece Name: /mnt/migrate/rman/EHR_dbid3632734257_set113195_piece1_copy1_20101014
    List of Archived Logs in backup set 19724
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 50439 3230234843 14-OCT-10 3230268282 14-OCT-10
    1 50440 3230268282 14-OCT-10 3230286806 14-OCT-10
    2 60280 3230234852 14-OCT-10 3230251419 14-OCT-10
    2 60281 3230251419 14-OCT-10 3230268263 14-OCT-10
    2 60282 3230268263 14-OCT-10 3230286809 14-OCT-10
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    19725 Full 126.40G DISK 09:11:51 15-OCT-10
    List of Datafiles in backup set 19725
    File LV Type Ckp SCN Ckp Time Name
    1 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/system.625.609259453
    2 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/undotbs1.620.609259461
    3 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/sysaux.768.609259463
    4 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/undotbs2.632.609259467
    5 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/users.257.609259471
    6 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloaud.316.619537285
    7 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollohist.629.619538155
    8 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.312.619538169
    9 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.276.619538487
    10 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.576.619539331
    11 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollolob.570.619539593
    12 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollofdb.750.645974339
    13 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.429.651171265
    14 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.705.688680793
    15 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloaud.747.699632315
    16 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.330.715622123
    Backup Set Copy #1 of backup set 19725
    Device Type Elapsed Time Completion Time Compressed Tag
    DISK 09:11:51 20-OCT-10 YES TAG20101014T210039
    List of Backup Pieces for backup set 19725 Copy #1
    BP Key Pc# Status Piece Name
    65851 1 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece1_copy1_20101014
    65862 2 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece2_copy1_20101014
    65873 3 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece3_copy1_20101014
    65884 4 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece4_copy1_20101014
    65895 5 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece5_copy1_20101014
    65901 6 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece6_copy1_20101014
    65902 7 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece7_copy1_20101014
    65903 8 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece8_copy1_20101014
    65904 9 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece9_copy1_20101014
    65841 10 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece10_copy1_20101014
    65842 11 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece11_copy1_20101014
    65843 12 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece12_copy1_20101014
    65844 13 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece13_copy1_20101014
    65845 14 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece14_copy1_20101014
    65846 15 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece15_copy1_20101014
    65847 16 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece16_copy1_20101014
    65848 17 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece17_copy1_20101014
    65849 18 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece18_copy1_20101014
    65850 19 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece19_copy1_20101014
    65852 20 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece20_copy1_20101014
    65853 21 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece21_copy1_20101014
    65854 22 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece22_copy1_20101015
    65855 23 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece23_copy1_20101015
    65856 24 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece24_copy1_20101015
    65857 25 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece25_copy1_20101015
    65858 26 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece26_copy1_20101015
    65859 27 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece27_copy1_20101015
    65860 28 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece28_copy1_20101015
    65861 29 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece29_copy1_20101015
    65863 30 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece30_copy1_20101015
    65864 31 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece31_copy1_20101015
    65865 32 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece32_copy1_20101015
    65866 33 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece33_copy1_20101015
    65867 34 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece34_copy1_20101015
    65868 35 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece35_copy1_20101015
    65869 36 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece36_copy1_20101015
    65870 37 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece37_copy1_20101015
    65871 38 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece38_copy1_20101015
    65872 39 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece39_copy1_20101015
    65874 40 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece40_copy1_20101015
    65875 41 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece41_copy1_20101015
    65876 42 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece42_copy1_20101015
    65877 43 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece43_copy1_20101015
    65878 44 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece44_copy1_20101015
    65879 45 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece45_copy1_20101015
    65880 46 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece46_copy1_20101015
    65881 47 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece47_copy1_20101015
    65882 48 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece48_copy1_20101015
    65883 49 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece49_copy1_20101015
    65885 50 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece50_copy1_20101015
    65886 51 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece51_copy1_20101015
    65887 52 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece52_copy1_20101015
    65888 53 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece53_copy1_20101015
    65889 54 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece54_copy1_20101015
    65890 55 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece55_copy1_20101015
    65891 56 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece56_copy1_20101015
    65892 57 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece57_copy1_20101015
    65893 58 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece58_copy1_20101015
    65894 59 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece59_copy1_20101015
    65896 60 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece60_copy1_20101015
    65897 61 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece61_copy1_20101015
    65898 62 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece62_copy1_20101015
    65899 63 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece63_copy1_20101015
    65900 64 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece64_copy1_20101015
    BS Key Size Device Type Elapsed Time Completion Time
    19726 228.10M DISK 00:00:49 15-OCT-10
    BP Key: 65905 Status: AVAILABLE Compressed: YES Tag: TAG20101015T061242
    Piece Name: /mnt/migrate/rman/EHR_dbid3632734257_set113197_piece1_copy1_20101015
    List of Archived Logs in backup set 19726
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 50441 3230286806 14-OCT-10 3230331993 14-OCT-10
    1 50442 3230331993 14-OCT-10 3230401945 14-OCT-10
    1 50443 3230401945 14-OCT-10 3230469794 15-OCT-10
    1 50444 3230469794 15-OCT-10 3230555010 15-OCT-10
    1 50445 3230555010 15-OCT-10 3230618396 15-OCT-10
    1 50446 3230618396 15-OCT-10 3230695020 15-OCT-10
    2 60283 3230286809 14-OCT-10 3230304858 14-OCT-10
    2 60284 3230304858 14-OCT-10 3230330891 14-OCT-10
    2 60285 3230330891 14-OCT-10 3230354275 14-OCT-10
    2 60286 3230354275 14-OCT-10 3230366292 14-OCT-10
    2 60287 3230366292 14-OCT-10 3230399805 14-OCT-10
    2 60288 3230399805 14-OCT-10 3230423577 14-OCT-10
    2 60289 3230423577 14-OCT-10 3230446176 15-OCT-10
    2 60290 3230446176 15-OCT-10 3230469756 15-OCT-10
    2 60291 3230469756 15-OCT-10 3230496786 15-OCT-10
    2 60292 3230496786 15-OCT-10 3230524710 15-OCT-10
    2 60293 3230524710 15-OCT-10 3230554981 15-OCT-10
    2 60294 3230554981 15-OCT-10 3230583802 15-OCT-10
    2 60295 3230583802 15-OCT-10 3230610465 15-OCT-10
    2 60296 3230610465 15-OCT-10 3230617887 15-OCT-10
    2 60297 3230617887 15-OCT-10 3230673207 15-OCT-10
    2 60298 3230673207 15-OCT-10 3230695022 15-OCT-10
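
    For what it's worth, one way to pick a consistent restore point from the listing above is to use the highest SCN covered by the backed-up archived logs of both threads: thread 1 goes up to Next SCN 3230695020 and thread 2 up to 3230695022, so 3230695020 is the highest SCN recoverable on both. A minimal sketch, reusing the same SET NEWNAME commands as in the original script:

    run {
      set until scn 3230695020;
      # ... the same SET NEWNAME commands for datafiles 1-16 as in the script above ...
      restore database;
      switch datafile all;
      recover database;
    }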

  • Migrating to a bigfile tablespace

    I have a table that is all set up and populated. Is there a way to create a table based on the properties of another table but as bigfile instead of smallfile? I have found that you can't just convert the smallfile to bigfile but have been unsuccessful in finding any documentation on how to go about migrating to a bigfile tablespace. Again any help is appreciated, I am new but enjoying oracle very much thanks to finding this forum.
    Thanks,
    Bruce
    Edit:
    I apologize I neglected to post the details
    windows 2003 server
    oracle 10.2.4.0
    Edited by: Bruce_Bruce on May 27, 2009 12:44 PM

    Bruce_Bruce wrote:
    I have a table that is all set up and populated. Is there a way to create a table based on the properties of another table but as bigfile instead of smallfile? I have found that you can't just convert the smallfile to bigfile but have been unsuccessful in finding any documentation on how to go about migrating to a bigfile tablespace. Again any help is appreciated, I am new but enjoying oracle very much thanks to finding this forum.
    Thanks,
    Bruce
    Edit:
    I apologize I neglected to post the details
    windows 2003 server
    oracle 10.2.4.0
    Edited by: Bruce_Bruce on May 27, 2009 12:44 PM

    "bigfile" is a property of the tablespace, not of the table. Just create a 'bigfile' tablespace and move your table to it.
    You aren't thinking that there is a one-to-one relationship between tables and tablespaces, are you? A table lives in one tablespace, but a tablespace can contain many tables. And any relationship between tablespaces and schemas/users is strictly by your own convention for ease of administration. There is nothing in Oracle that makes that linkage.
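
    A minimal sketch of such a move, assuming a table BRUCE.BIG_TABLE and an index BRUCE.BIG_TABLE_PK (names, path, and sizes are illustrative only):

    -- Create the bigfile tablespace that will receive the table.
    CREATE BIGFILE TABLESPACE big_data
      DATAFILE 'D:\ORADATA\ORCL\BIG_DATA01.DBF' SIZE 10G AUTOEXTEND ON;

    -- Move the table; its row data is rewritten into the new tablespace.
    ALTER TABLE bruce.big_table MOVE TABLESPACE big_data;

    -- A MOVE changes rowids, so indexes on the table must be rebuilt afterwards.
    ALTER INDEX bruce.big_table_pk REBUILD;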

  • Smallfile or Bigfile tablespace?

    Hi!
    Smallfile or Bigfile tablespace?
    I have 5TB DB(RAC 2 node) with bigfile + smallfile tablespace's.
    I want to create a tablespace, but I don't know how to choose the type of tablespace.
    I have found some tips:
    1.Using BIGFILE tablespaces on platforms that do not support large file sizes is not recommended : it limits tablespace capacity.
    2.The fact that only one data file is attached to a tablespace makes it easier to manage: the tablespace becomes the unit of administration.
    3.Maximum Datafile Size Limit In an Oracle Database.
    4.Restore operations are quicker on smallfile than on bigfile tablespaces.
    5.Bigfile tablespaces should be striped so that parallel operations are not adversely affected. Oracle expects bigfile tablespace to be used with Automatic Storage Management (ASM) or other logical volume managers that support striping or RAID.
    6.Avoid using bigfile tablespaces if there could possibly be no free space available on a disk group, and the only way to extend a tablespace is to add a new datafile on a different disk group.
    7.Bigfile tablespaces should be used with automatic storage management, or other logical volume managers that support dynamically extensible logical volumes, striping and RAID.
    Sorry if I am repeating this common topic.
    But do you have more arguments for using bigfile tablespaces?

    duplicated post :
    Bigfile or smallfile tablespace?
    Question about BIGFILE vs SMALLFILE
    and:
    http://www.databasejournal.com/features/oracle/article.php/3646226/Bigfile-Type-Tablespaces-versus-Smallfile-Type.htm
    Edited by: Fran on 13-mar-2012 2:06
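
    To see which type your existing tablespaces are, and for the basic creation syntax of each type, a short sketch (the disk group name and sizes are assumptions):

    -- Which existing tablespaces are bigfile vs. smallfile?
    SELECT tablespace_name, bigfile
    FROM   dba_tablespaces
    ORDER  BY tablespace_name;

    -- Illustrative creation of each type.
    CREATE BIGFILE TABLESPACE ts_big
      DATAFILE '+DATA' SIZE 100G AUTOEXTEND ON;

    CREATE SMALLFILE TABLESPACE ts_small
      DATAFILE '+DATA' SIZE 10G AUTOEXTEND ON MAXSIZE 32767M;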

  • Converting Bigfile Tablespace to small file tablespace

    Hi all!
    I want to change a bigfile tablespace to a smallfile tablespace. How can I achieve that?

    I am worried about only one question:
    if I follow the following procedure, what are the drawbacks? Would I miss something?
    1) take an export of the tablespace as SYS user with all the default parameters.
    2) take the tablespace offline and drop it
    3) create the same tablespace with smallfile parameter(default).
    4) import the tablespace (WHICH WAS EARLIER EXPORTED).
    This will be done in restricted mode.
    By doing so, is there still a chance I will lose something (dependencies, triggers, synonyms, etc.)? How can I verify that everything went right?
    Of course I can run a count(*) on the tables and indexes, but what about other objects, especially the dependencies?
    Thanks for all the help so far and the patience to look into this thread!
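
    One way to sanity-check the export/import is to snapshot the object inventory of the affected schema before and after and compare, for example with a query along these lines (the schema name is an assumption):

    -- Run before the export and again after the import, then compare the results.
    SELECT owner, object_type, status, COUNT(*) AS cnt
    FROM   dba_objects
    WHERE  owner = 'APP_OWNER'
    GROUP  BY owner, object_type, status
    ORDER  BY owner, object_type, status;

    -- Invalid objects after the import can usually be recompiled, e.g.:
    -- EXEC UTL_RECOMP.RECOMP_SERIAL('APP_OWNER');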

  • Bigfile tablespaces requirment

    version 10g/11g
    I am just trying to clear my concepts about VLDBs,
    How should the disks be configured to support bigfile tablespaces, for example 2 tablespaces of 10 TB each (one for data, one for indexes)?
    How can I tell whether my disks support bigfile tablespaces (commands?)
    What would be the best practice to implement this?
    Any other suggestions/Metalink notes for further research would help....

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#sthref1288
    OR
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#i1007169
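
    As a rough sketch of what a 10 TB bigfile tablespace could look like on ASM (the disk group name and sizes are assumptions; with an 8K block size a single bigfile datafile can grow to 32 TB, so 10 TB fits in one file):

    CREATE BIGFILE TABLESPACE app_data
      DATAFILE '+DATA' SIZE 100G AUTOEXTEND ON NEXT 10G MAXSIZE 10T;

    CREATE BIGFILE TABLESPACE app_idx
      DATAFILE '+DATA' SIZE 100G AUTOEXTEND ON NEXT 10G MAXSIZE 10T;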

  • Query about local storage

    Hi,
     I had a query about local storage.
     I have a machine that hosts WebLogic and Tangosol. I have an EJB that accesses a distributed cache, i.e. NamedCache cache = CacheFactory.get("MyCache").
     I modified tangosol-coherence.xml, set local-storage to false (for the distributed cache), and replaced the file in coherence.jar.
     I'm using an overflow scheme, and the back map uses a disk scheme.
     I also start a separate standalone instance of Tangosol, and I set the system property for local storage to true for the standalone instance.
     I start the standalone instance first and then WebLogic.
     The idea is to ensure that the Tangosol instance in WebLogic (the WebLogic JVM) does not participate in storing data (hence local storage false).
     Only the JVM for the standalone instance should store data (hence local storage true, via the system property).
     I wanted to know whether the property "local-storage" pertains to a member (machine) or to a JVM.
     The reason for this doubt: as I'm using a disk scheme, Tangosol creates a file for the overflow (e.g. lh014402~.tp). I can see two such files when ideally I would have wanted only one, for the standalone Tangosol instance.
         -rw-r--r-- 1 zephyr users 8364032 2005-06-23 17:02 lh014402~.tp
         -rw-r--r-- 1 zephyr users 8364032 2005-06-23 17:02 lh014403~.tp.
     Can you please let me know whether we can configure Tangosol in such a way that we have two separate instances running, with local storage false for one and true for the other?
         Awaiting your reply
         Thanks
         Vinay

    I would suggest leaving the default 'local-storage' value set to 'true' in the tangosol-coherence.xml and just use the JVM argument to control the local storage of each individual node. Then start the stand alone instance normally (I assume you are using the com.tangosol.net.DefaultCacheServer) and start the WebLogic instance with the following:
         java [...] -Dtangosol.coherence.distributed.localstorage=false [...]
         Hope this helps.
         Later,
         Rob Misek
         Tangosol, Inc.

  • Big Troubles on designing Query about special customers' counting

    Hello buddies:
    I have run into a problem designing a query for a special customer count. Let me describe the requirement first. I want to create a query with BEx, and there is a key figure with very special logic.
    That is: to list the count of customers that have more than one sales record in a given time period, based on the sales data.
    For example :
    When the user executes the query, he or she must input a time period (e.g. 2007.01~2007.03),
    and then the query output should be as follows:
    District          Cust-count
    North-Zone       100
    South-Zone      120
    The main troubles are:
    1. There are no document numbers in the detailed sales data records, so I cannot count the number of sales by document number.
    2. The time period is not a fixed value; it depends on the user's input, so I cannot define the counting logic in the update rule or in the query with a fixed time period.
    Anybody who has met a similar requirement, please lend a hand and share your solutions. Thanks very much.
    Jason

    Hi,
    Your solution sounds like a good way to count the distinct customers, but in my case one sales line item must not be recognized as one sales record; instead, all of a customer's sales line items occurring on one day must be recognized as one sales record (or, we might say, one sales behavior).
    for example:
    customer     product    quantity   date
    cust001       prod001        10       2007.06.06
    cust001       prod002        20       2007.06.06
    The two line items above mean one sales record for the customer "cust001".
    so I could not simply use the CKF : (( Counter ) *FV2 ) > 1 .
    Best Regards,
    Jason

  • Query about licensing Jdeveloper

    Dear Friends,
    I have a query about licensing of the JDeveloper development tool. I understand that JDeveloper is a free tool, that is, we do not require a license to use JDeveloper for development or for production.
    Recently I heard that JDeveloper is free only if we purchase Oracle Application Server. Is that correct? Does one need to purchase a JDeveloper license if the application is deployed on any other app server, e.g. JBoss?
    Can anyone throw light on the same ?
    Many thanks,
    Vaij

    Hi,
    JDeveloper is free! Oracle ADF - the binding layer, ADF BC, and ADF Faces - needs an OracleAS license.
    Frank

  • Need clarification on Bigfile Tablespaces

    In the following Oracle Documentation Lirbary PDF,
        Oracle® Database
        Concepts
        10g Release 2 (10.2)
        B14220-02
        October 2005
    section
        Overview of Tablespaces
        Bigfile Tablespaces (page: 3-5)
    it says,
        Benefits of Bigfile Tablespaces
        * Bigfile tablespaces can significantly increase the storage capacity of an Oracle database. Smallfile tablespaces can contain up to 1024 files, but bigfile tablespaces contain only one file that can be 1024 times larger than a smallfile tablespace. The total tablespace capacity is the same for smallfile tablespaces and bigfile tablespaces. However, because there is a limit of 64K datafiles for each database, a database can contain 1024 times more bigfile tablespaces than smallfile tablespaces, so bigfile tablespaces increase the total database capacity by 3 orders of magnitude. In other words, 8 exabytes is the maximum size of the Oracle database when bigfile tablespaces are used with the maximum block size (32k).
    I need clarification on how to arrive at 8 exabytes ?
    1024 x 32k x 64,000 ??
    According to the excerpt above, there's no mention of the maximum number of operating system blocks per extent. Unless this was assumed knowledge ... how do I get 8 exabytes?
    And if "a database can contain 1024 times more bigfile tablespaces than smallfile tablespaces", then what's the upper limit on smallfile tablespaces ? -- was this sentence referring to the number of datafiles per smallfile tablespace ? ...
    O_o
    Thanks !
    Message was edited by:
    mvanle

    Hi,
    According to [url http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm#i287915]Physical Database Limits page, a bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks. In summary, a bigfile tablespace is a tablespace containing a single datafile that can be as large as 128 terabytes (TB), depending on the block size of the tablespace. In conjunction with setting the initialization parameter DB_FILES to the maximum value of 65,535, the total size of the database can be more than 8 exabytes (EB).
    >>how do I get 8 exabytes ?
    You can calculate the maximum amount of space (M) in a single Oracle database as the maximum number of datafiles (D) multiplied by the maximum number of blocks per datafile (F) multiplied by the tablespace block size (B):
    M = D * F * B. Therefore, the maximum database size, given the maximum block size and the maximum number of datafiles, is:
    65,535 datafiles * 4,294,967,296 blocks per datafile * 32,768 bytes per block = 9,223,231,299,366,420,480 bytes = 8 EB.
    Cheers
    Legatti
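
    If it helps to tie the arithmetic to an actual instance, here is a small sketch that derives the approximate per-file limits from the current block size (the 2^22 and 2^32 block counts are the documented smallfile and bigfile per-datafile limits):

    SELECT TO_NUMBER(value)                                  AS block_size_bytes,
           TO_NUMBER(value) * POWER(2, 22) / POWER(1024, 3)  AS approx_max_smallfile_gb,
           TO_NUMBER(value) * POWER(2, 32) / POWER(1024, 4)  AS approx_max_bigfile_tb
    FROM   v$parameter
    WHERE  name = 'db_block_size';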

  • New issue in R 12.1.3 in AP when querying an invoice recorded in AP

    I recorded a new invoice in AP on release R 12.1.3, and when I query it on the invoice form I get this Forms error:
    FRM-40735: POST-QUERY trigger raised unhandled exception ORA-04063
    Can someone help us?

    Hi,
    Please see these docs.
    R12:Getting FRM-40735 Post-Query Trigger On Quering Invoice [ID 1209736.1]
    After Applying Patch APXINWKB.fmb Is Not Working [ID 1159124.1]
    R12.1.1 APXINWKB Invoice Workbench Form Comes Up With 'ORA-01403' [ID 949942.1]
    12.1.1: FRM-40735: Post-Query Trigger Raised Unhandled Exception ORA-04063 [ID 1077613.1]
    Query on Invoices, Getting "FRM-40735: POST-QUERY trigger raised unhandled exception ORA-4063" [ID 1076609.1]
    Thanks,
    Hussein

  • RMAN level 0 backup with bigfile tablespaces

    We noticed that our RMAN backup process for the PROD database is not working properly.
    PROD is an 8 TB database with two bigfile tablespaces, each with a datafile of more than 2 TB.
    PROD is an 11gR1 (11.1.0.7) RAC database with ASM storage.
    We are taking a weekly incremental level 0 backup of the entire database and a level 1 backup every day.
    As per v$session_longops, the approximate completion time for the two tablespaces is 5 days.
    We can use the "SECTION SIZE" parameter to take an RMAN backup of the bigfile tablespaces and exclude them from the level 0 backup.
    However, in that case our level 0 backup won't include the two bigfile tablespaces, and the daily level 1 backup will not include them either.
    Will you please advise us on how to take backups in this scenario?
    We have only 4 TB of LUN space allocated for backups and have to keep 7 days of backups on disk.
    Thanks in advance !
    DR

    We can use "SECTION SIZE" parameter to take rman backup for bigfile tablespaces and exclude them from the level 0 backup.
    How do you think that the backup will be useful if you exclude some files from level 0 or base backup? You will find difficulties or you can't restore the database in case of disaster.
    Yes you can use section size parameter for paralleled big file tablespace backup.
    Check below for explanation and example.
    Backing Up the Database: Advanced Topics
    http://www.oracle-base.com/articles/11g/rman-enhancements-11gr1.php#multisection_backups
    You can use the below note for your case which will be of help.
    Reducing RMAN backup time for unevenly sized tablespaces « Oracle DBA – A lifelong learning experie…
    Thank you!!
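
    A minimal sketch of a multisection level 0 backup, so the two bigfile tablespaces stay in the base backup but are split across channels (the section size, channel count, and tag are assumptions):

    run {
      allocate channel c1 device type disk;
      allocate channel c2 device type disk;
      # 64 GB sections let both channels work on the same bigfile datafile at once.
      backup incremental level 0
        section size 64g
        database
        tag 'LVL0_MULTISECTION';
    }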

  • Bigfile tablespace.

    Can someone tell me what the maximum size of a bigfile tablespace is in Oracle 10g?

    The Oracle Reference has the list of database limits:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm#i287915
    For user625868, if you want good alignment of output, wrap your result in [pre] and [/pre] tags.
    For example, the unwrapped output of desc looks like this:
    desc dba_users
    Name Null? Type
    USERNAME NOT NULL VARCHAR2(30)
    USER_ID NOT NULL NUMBER
    PASSWORD VARCHAR2(30)
    ACCOUNT_STATUS NOT NULL VARCHAR2(32)
    LOCK_DATE DATE
    EXPIRY_DATE DATE
    DEFAULT_TABLESPACE NOT NULL VARCHAR2(30)
    TEMPORARY_TABLESPACE NOT NULL VARCHAR2(30)
    CREATED NOT NULL DATE
    PROFILE NOT NULL VARCHAR2(30)
    INITIAL_RSRC_CONSUMER_GROUP VARCHAR2(30)
    EXTERNAL_NAME VARCHAR2(4000)
    This one is better
    desc dba_users
    Name                                      Null?    Type
    USERNAME                                  NOT NULL VARCHAR2(30)
    USER_ID                                   NOT NULL NUMBER
    PASSWORD                                           VARCHAR2(30)
    ACCOUNT_STATUS                            NOT NULL VARCHAR2(32)
    LOCK_DATE                                          DATE
    EXPIRY_DATE                                        DATE
    DEFAULT_TABLESPACE                        NOT NULL VARCHAR2(30)
    TEMPORARY_TABLESPACE                      NOT NULL VARCHAR2(30)
    CREATED                                   NOT NULL DATE
    PROFILE                                   NOT NULL VARCHAR2(30)
    INITIAL_RSRC_CONSUMER_GROUP                        VARCHAR2(30)
    EXTERNAL_NAME                                      VARCHAR2(4000)
