Oracle Datafiles.

Hi,
I have installed Oracle 9i on Windows 2000.
When I create a new database using the Database Configuration Assistant, it creates a total of 9 .dbf files,
such as SYSTEM01.DBF, TOOLS01.DBF, UNDOTBS01.DBF, USERS01.DBF, TEMP01.DBF, etc.
What I want to know is: if I log in with the default user "scott"/"tiger" and create some tables,
where is this information stored?
If I need to create a new application and assign a new tablespace and users, do I need to create a new datafile for this?

When I log in with these new users, will all data be stored in the assigned datafile by default?
Yes, but a user is associated with tablespaces rather than datafiles; a tablespace is a logical unit that maps to the datafiles that belong to it.
In that case, if I want to move this data to another server, do I just need to transfer the datafile?
You never transfer datafiles just to transport one user's data. You transfer datafiles when you want to back up the whole database, or part of it.
Joel Pérez
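
As a rough sketch of the relationship described above (the tablespace name, datafile path, username and password below are made up for illustration), creating a dedicated tablespace and a user whose objects default into it could look like this:

-- Hypothetical names and paths, for illustration only
CREATE TABLESPACE app_data
    DATAFILE 'C:\oracle\oradata\mydb\app_data01.dbf' SIZE 100M;

CREATE USER app_owner IDENTIFIED BY app_pwd
    DEFAULT TABLESPACE app_data
    TEMPORARY TABLESPACE temp
    QUOTA UNLIMITED ON app_data;

GRANT CREATE SESSION, CREATE TABLE TO app_owner;

-- Tables created by app_owner are now stored, by default,
-- in extents allocated inside app_data01.dbf.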

Similar Messages

  • Change the default oracle datafile permission from 640 to 644 internally.

    Is there any method available to change the default datafile permission from 640 (rw-r-----) to 644 (rw-r--r--)? Please see the example below of what I need.
    Existing:
    -rw-r----- 1 orasd dba 104865792 Mar 15 01:17 users01.dbf
    Required:
    -rw-r--r-- 1 orasd dba 104865792 Mar 15 01:17 users01.dbf
    Setting UMASK to 022 gives the expected result only for OS-level files, not for Oracle datafiles.
    So can anyone tell me which method or parameter we need to set so that datafiles are created with 644 permission internally (by the CREATE TABLESPACE command)?
    I am well aware that the Oracle software creates datafiles, control files and log files with 640 permission in order to maintain discretionary access to data (for security purposes).

    Is there a reason for posting duplicates? Duplicate thread: Want to change the default datafile permission (640) to (644)

  • Increase the Oracle datafile size or add another datafile

    Someone please explain,
    Is it better to increase the Oracle datafile size or add another datafile to increase the Oracle tablespace size?
    Thanks in advance

    The decision must also include:
    - the max size of a file in your OS and/or file system
    - how you perform your backup and recovery (e.g. do you need to change the file list)
    - how many disks are available and how they are presented to the OS (raw, LVM, striped, ASM, etc.)
    - how many IO channels are available and whether you can balance IO across them
    My personal default is to grow a file to the largest size permitted by the OS unless there is a compelling reason otherwise. That fits nicely with the concept of BIGFILE tablespaces (which have their own issues, especially in backup/recovery).
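
    For reference, the two approaches being compared look roughly like this in SQL (the paths, sizes and tablespace name are illustrative, not from this thread):

    -- Option 1: grow the existing datafile (one-off resize, or let it autoextend)
    ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf' RESIZE 8192M;
    ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf'
        AUTOEXTEND ON NEXT 100M MAXSIZE 16384M;

    -- Option 2: add a second datafile to the tablespace
    ALTER TABLESPACE users
        ADD DATAFILE '/u01/oradata/mydb/users02.dbf' SIZE 4096M
        AUTOEXTEND ON NEXT 100M MAXSIZE 16384M;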

  • Best Oracle datafile size for a SAP ECC6 database.

    Hello,
    We plan to migrate our ECC6 SAP database to a new Windows x64 platform running:
    - Windows 2008 R2
    - Oracle 10.2.0.5
    The current Oracle database size is more than 2 TB.
    Could you please help me find the best Oracle datafile size for this new platform?
    Today we create Oracle tablespaces for SAP with 8 GB datafiles, but we end up with too many datafiles to manage.
    Have you experimented with 16 GB, 32 GB or 64 GB datafiles for Oracle tablespaces on this kind of SAP platform?
    Thank you very much for your reply.
    Best regards.
    Jean-Pascal.

    Hello Jean-Pascal,
    well, the answer is, as usual: it depends :)
    We already discussed that topic some time ago - please check this thread:
    big file or small file
    Your storage subsystem and striping matter too, of course.
    Regards
    Stefan

  • Why not to place Oracle Datafiles on local disks

    Hi, I want to ask a basic question.
    In almost every Oracle installation I have seen, the datafiles were placed on dedicated mount points, disks, etc.
    Is there a reason for this? Why does no one place Oracle datafiles on local disks? I couldn't find the answer in the installation guides.
    I am asking because I am going to install Oracle (10g) on VMware and am trying to decide whether or not to put the datafiles on a vmdk (the local disk of the virtual machine)...
    Is this recommended? Or otherwise, why is it recommended to put datafiles on a mount point/external disk, etc.?
    Thanks in advance

    Hi,
    Oracle is flexible about where you place datafiles. To reduce I/O contention, it is advisable to place the datafiles of a particular database in a dedicated location.
    For instance, while you create a database using DBCA, you are asked to choose the datafile location.
    Regards
    KSG
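
    As a small, hedged illustration of choosing a dedicated datafile location (the directory, tablespace name and size are just examples), you can point Oracle at a target directory for new datafiles and then verify where existing files live:

    -- Example directory only; pick a location on the storage you want to use
    ALTER SYSTEM SET db_create_file_dest = 'D:\oradata\mydb';
    -- New tablespaces created without an explicit file name are placed there
    CREATE TABLESPACE reports_data DATAFILE SIZE 500M;
    -- Check where the datafiles of the database currently reside
    SELECT tablespace_name, file_name FROM dba_data_files ORDER BY tablespace_name;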

  • Renaming oracle datafiles at the unix level

    I have noticed that some of the datafiles under /oracle/DV1/sapdata2/ have strange characters in their names.
    Grrr - DBAs that don't know brtools.....
    for example:
    drwxr-xr-x   2 oradv1   dba             256 Nov 02 23:22 sr3\177_2
    and it shows as sr3_2 unless you use ls -lb which shows the above:
    Now can I simply stop the DB and cp -rp sr3\177_2 to sr3_2?
    I can actually cp -rp  sr3\177_2 to sr3_X and then copy sr3_X to sr3_2 or an mv.
    So can I do this without screwing up Oracle and SAP?
    What should I do as the \177 characters should not be there?
    Thanks Mikie

    Hello Mike,
    first of all... the naming of the folder is no problem for running SAP ... it is only a "cosmetic" issue.
    But to your question.....
    1) Shutdown SAP and Oracle (shutdown immediate)
    2) Copy (and remove) or move your data files
    3) Publish the new file names to oracle the following way:
    > startup mount;
    > alter database rename file '<OLD_FILENAME>' to '<NEW_FILENAME>';
    > shutdown immediate;
    > startup ;
    After this the database should start "normally" with the new data file names / directory structure. After these steps I would make a backup of the database.
    I have done this many times (not only in SAP environment).
    For more information of the rename command:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_1004.htm#i2079942
    Regards
    Stefan

  • Oracle Datafile block corruption

    Hi all,
    I am facing datafile block corruption in the following datafile.
    I don't have any backup.
    How can I recover these blocks?
    Regards
    Vivek Rawat

    Hi Vivek,
    Please refer to the SAP notes below to analyze the affected objects that need to be recovered:
    365481 - Block corruptions
    1559652 - How to deal with block corruptions on Oracle
    923919 - Advanced Oracle block checking features
    http://www.dba-oracle.com/t_repair_corrupt_blocks.htm
    Hope this helps.
    Regards,
    Deepak Kori
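
    Before deciding on a repair approach, it usually helps to identify which segment owns a corrupt block. A commonly used query for this (substitute the absolute file number and block number reported for the corruption, e.g. in the alert log or by DBVERIFY) is:

    -- &file_id = absolute file number, &block_id = corrupt block number
    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = &file_id
       AND &block_id BETWEEN block_id AND block_id + blocks - 1;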

  • Resizing an Oracle Datafile

    Oracle Version: 9.2.0.4.0
    O.S. Windows 2000 SP4
    1Gb Ram
    I've truncated a table and now the datafile is 3,000,000 in size but only 703,000 is used.
    How can I resize the datafile?
    I understood that I need to create another tablespace and datafile, and after that export and import into it. Is that correct?
    Thanks for the help !
    Luciano

    Hi,
    Or resize the datafile directly.
    Read this post: Re: How to shrink the system tablespace datafile Size
    Nicolas.
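
    A minimal sketch of the direct resize suggested above (the path and target size are placeholders): the resize only succeeds if no extent lies above the new size, otherwise ORA-03297 is raised.

    -- Shrink the datafile to the target size (placeholder path and size)
    ALTER DATABASE DATAFILE 'E:\ORADATA\MYDB\USERS01.DBF' RESIZE 800M;

    -- Find the highest allocated block in the file to see how far it can shrink
    SELECT MAX(block_id + blocks - 1) AS highest_used_block
      FROM dba_extents
     WHERE file_id = (SELECT file_id FROM dba_data_files
                       WHERE file_name = 'E:\ORADATA\MYDB\USERS01.DBF');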

  • Putting Oracle Datafiles on an encrypted filesystem

    Hi,
    Has anyone had any luck in keeping datafiles on an encrypted filesystem?
    We have a possible requirement to host a copy of a database that is provided by a third party in a location that isn't manned 24/7. Some people have raised concerns and said that the data must be encrypted - I personally think it's a bit of a waste of time but I'd like to know if it is at least technically feasible.
    We are running Oracle 10.1.2, at the moment on Windows server 2003, but at some point we will move it onto AIX (It currently gets loaded by import/export, but we are going to start using data guard)
    There isn't a lot we can do with the application as it's third party, if we were on 11g we might be able to use the transparent tablespace encryption stuff, if we forked out for it but the application hasn't been tested at all on v11.
    Any ideas?
    Cheers
    Carl

    I'm not sure if this would help,
    http://www.oracle.com/technology/oramag/oracle/05-jan/o15security.html
    http://www.oracle.com/technology/oramag/oracle/05-sep/o55security.html
    sbs

  • Oracle datafile moving soft links

    hi,
    I need to move my datafiles from one location to another. In the database, the datafiles are configured via soft links,
    so how can I move a datafile (its physical location) from one place to another?
    I am thinking:
    1. Take the tablespace offline.
    2. Move the datafile from the old location to the new one.
    My doubt is how to alter the link so that it points to the new location.
    Should I create a new link pointing to the new location and then
    3. ALTER TABLESPACE ... RENAME DATAFILE '<old link>' TO '<new link>';
    and put the tablespace online?
    Is there any way to just re-point the link to the new location so that I can avoid step 3?
    thanks
    aditya

    You need to do 1, 2 but not 3.
    Instead you need to drop the old link and create a new link to point to the new location of the datafile.
    The name is unchanged as far as the DB is concerned, but you need to make sure the tablespace is offline while making the change.
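
    A hedged sketch of that approach (the tablespace name and paths are illustrative; the OS-level work is shown only as comments so the database side is the focus):

    -- 1. Take the tablespace offline
    ALTER TABLESPACE app_data OFFLINE NORMAL;
    -- 2. At the OS level: move the datafile to its new physical location,
    --    then drop the old soft link and recreate a link with the same name
    --    pointing at the new location.
    -- 3. No RENAME DATAFILE is needed, because the file name the database
    --    knows (the link path) has not changed.
    ALTER TABLESPACE app_data ONLINE;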

  • Unix file system & Oracle datafiles--urgent plz

    How can I check which Unix file system my Oracle DB files are on, in an HP-UX environment?

    select * from dba_data_files
    AUTOEXTENSIBLE column gives you whether autoextend is on or not.
    Join with dba_free_space to get free space for each file.
    You can check the following link
    http://www.oracle.com/technology/oramag/code/tips2003/083103.html
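
    A hedged example of the join mentioned above (column list trimmed; sizes converted to MB for readability):

    SELECT d.file_name,
           d.tablespace_name,
           d.autoextensible,
           ROUND(d.bytes / 1024 / 1024) AS size_mb,
           ROUND(NVL(SUM(f.bytes), 0) / 1024 / 1024) AS free_mb
      FROM dba_data_files d
      LEFT JOIN dba_free_space f
        ON f.file_id = d.file_id
     GROUP BY d.file_name, d.tablespace_name, d.autoextensible, d.bytes
     ORDER BY d.file_name;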

  • Install sap through copy /datafile and oracle binaries

    Dear Experts,
    We have a requirement to build a test system without installing the SAP software, only by copying the Oracle datafiles and Oracle binaries,
    on Windows 2003.
    We have just the Windows 2008 OS on C:\ with 100 GB and
    D:\ with 250 GB of empty space,
    so please let me know the copy procedure for C:\ and D:\ and which links need to be copied.
    Thanks & Regards                                                      

    http://scn.sap.com/people/harsha.bs/blog/2013/04/16/system-copy--backuprestore-method
    Hi Rajendra,
    Now I got your point .
    Please check the above  link and let me know if you are facing issues.
    Thanks,
    Pavan

  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

    Product: ORACLE SERVER
    Date written: 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
    It also
    looks at some other file related limits and issues.
    The article has a Unix bias as this is where most of the 2Gb issues
    arise but there is information relevant to other (non-unix)
    platforms.
    Articles giving port specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Many CPU's and system call interfaces (API's) in use today use a word
    size of 32 bits. This word size imposes limits on many operations.
    In many cases the standard API's for file operations use a 32-bit signed
    word to represent both file size and current position within a file (byte
    displacement). A 'signed' 32bit word uses the top most bit as a sign
    indicator leaving only 31 bits to represent the actual value (positive or
    negative). In hexadecimal the largest positive number that can be
    represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
    This is ONE less than 2Gb.
    Files of 2Gb or more are generally known as 'large files'. As one might
    expect problems can start to surface once you try to use the number
    2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
    At the time of writing there are some tools within Oracle which have not
    been updated to use the new API's, most notably tools like EXPORT and
    SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
    Advantages of files larger than 2Gb:
    On most platforms Oracle7 supports up to 1022 datafiles.
    With files < 2Gb this limits the database size to less than 2044Gb.
    This is not an issue with Oracle8 which supports many more files.
    In reality the maximum database size would be less than 2044Gb due
    to maintaining separate data in separate tablespaces. Some of these
    may be much less than 2Gb in size.
    Fewer files to manage for smaller databases.
    Fewer file handle resources required.
    Disadvantages of files larger than 2Gb:
    The unit of recovery is larger. A 2Gb file may take between 15 minutes
    and 1 hour to backup / restore depending on the backup media and
    disk speeds. An 8Gb file may take 4 times as long.
    Parallelism of backup / recovery operations may be impacted.
    There may be platform specific limitations - Eg: Asynchronous IO
    operations may be serialised above the 2Gb mark.
    As handling of files above 2Gb may need patches, special configuration
    etc.. there is an increased risk involved as opposed to smaller files.
    Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
    Important points if using files >= 2Gb
    Check with the OS Vendor to determine if large files are supported
    and how to configure for them.
    Check with the OS Vendor what the maximum file size actually is.
    Check with Oracle support if any patches or limitations apply
    on your platform , OS version and Oracle version.
    Remember to check again if you are considering upgrading either
    Oracle or the OS in case any patches are required in the release
    you are moving to.
    Make sure any operating system limits are set correctly to allow
    access to large files for all users.
    Make sure any backup scripts can also cope with large files.
    Note that there is still a limit to the maximum file size you
    can use for datafiles above 2Gb in size. The exact limit depends
    on the DB_BLOCK_SIZE of the database and the platform. On most
    platforms (Unix, NT, VMS) the limit on file size is around
    4194302*DB_BLOCK_SIZE.
    Important notes generally
    Be careful when allowing files to automatically resize. It is
    sensible to always limit the MAXSIZE for AUTOEXTEND files to less
    than 2Gb if not using 'large files', and to a sensible limit
    otherwise. Note that due to <Bug:568232> it is possible to specify
    a value of MAXSIZE larger than Oracle can cope with, which may
    result in internal errors after the resize occurs. (Errors
    typically include ORA-600 [3292])
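
    For instance (the file name and sizes below are purely illustrative), capping autoextension safely below 2Gb looks like:

    ALTER DATABASE DATAFILE '/u02/oradata/mydb/tools01.dbf'
        AUTOEXTEND ON NEXT 10M MAXSIZE 2000M;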
    On many platforms Oracle datafiles have an additional header
    block at the start of the file so creating a file of 2Gb actually
    requires slightly more than 2Gb of disk space. On Unix platforms
    the additional header for datafiles is usually DB_BLOCK_SIZE bytes
    but may be larger when creating datafiles on raw devices.
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
    - By exporting to a named pipe (on Unix) one can compress, zip or
    split up the output.
    See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
    This issue can be worked around by creating the tablespace prior to
    importing by specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
    Export to Tape
    ~~~~~~~~~~~~~~
    The VOLSIZE parameter for export is limited to values less than 4Gb.
    On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
    The examples in <Note:30528.1> can be modified for use with SQL*Loader
    with large input data files.
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    This section lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
    - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that <Bug:749600> covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
    - The UTL_FILE package uses the 'core' functions mentioned above and so is
    limited by 2Gb restrictions in Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
    Platform See
    ~~~~~~~~ ~~~
    AIX (RS6000 / SP) <Note:60888.1>
    HP <Note:62407.1>
    Digital Unix <Note:62426.1>
    Sequent PTX <Note:62415.1>
    Sun Solaris <Note:62409.1>
    Windows NT Maximum 4Gb files on FAT
    Theoretical 16Tb on NTFS
    ** See <Note:67421.1> before using large files
    on NT with Oracle8
    *2 There is a problem with DBVERIFY on 8.1.6
    See <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1
    Write a simple Java program like the one listed:
    import java.io.File;

    public class fileCheckUtl {
        public static int fileExists(String FileName) {
            File x = new File(FileName);
            if (x.exists())
                return 1;
            else
                return 0;
        }

        public static void main(String args[]) {
            fileCheckUtl f = new fileCheckUtl();
            int i;
            i = f.fileExists(args[0]);
            System.out.println(i);
        }
    }
    Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
    Step 4 Test it:
    SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
    2 /
    FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
    1

  • Planning to move my datafile from windows server 2003 to windows XP

    planning to move my datafiles from windows server 2003 to windows XP
    database 10g
    Is it possible to follow the steps below for the above migration?
    =====================================
    These are my steps (migrated from Windows XP to Windows XP):
    Moving Oracle datafiles from server A to server B (instance not running) (cold backup)
    On server A (cold backup of datafiles):
    SQL> alter database backup controlfile to trace;
    Go to udump, check the trace file and copy these lines:
    CREATE CONTROLFILE REUSE DATABASE "O10G1" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 454
    LOGFILE
    GROUP 1 'E:\ORACLE2\PRODUCT\10.1.0\ORADATA\O10G1\REDO01.LOG' SIZE 10M,
    GROUP 2 'E:\ORACLE2\PRODUCT\10.1.0\ORADATA\O10G1\REDO02.LOG' SIZE 10M,
    GROUP 3 'E:\ORACLE2\PRODUCT\10.1.0\ORADATA\O10G1\REDO03.LOG' SIZE 10M
    -- STANDBY LOGFILE
    DATAFILE
    'E:\ORACLE2\PRODUCT\10.1.0\ORADATA\O10G1\SYSTEM01.DBF',
    'E:\ORACLE2\PRODUCT\10.1.0\ORADATA\O10G1\UNDOTBS01.DBF',
    'E:\ORACLE2\PRODUCT\10.1.0\ORADATA\O10G1\SYSAUX01.DBF',
    'E:\ORACLE2\PRODUCT\10.1.0\ORADATA\O10G1\USERS01.DBF'
    CHARACTER SET WE8MSWIN1252
    In the lines above, change the database name to "jee", replace REUSE with SET, and replace NORESETLOGS with RESETLOGS.
    Save as C1.sql
    Stop the oracle instance service in source ( server A)
    Copy all the datafiles and redo log files to server B (any folder)
    Copy the init.ora file from source and edit
    Change the db_name and location of the controlfile
    SERVER B
    Create the oracle instance using ORADIM
    Start the service
    C:\ set oracle_sid=instance name
    C:\>set oracle_sid=jeeno1
    C:\>sqlplus /nolog
    SQL*Plus: Release 10.1.0.2.0 - Production on Tue Apr
    11 06:44:28 2006
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    SQL> connect / as sysdba;
    Connected to an idle instance.
    SQL> startup nomount
    pfile='C:\oracle\product\10.1.0\admin\jeeno\pfile\jeenoinit.ora'
    ORACLE instance started.
    Total System Global Area 171966464 bytes
    Fixed Size 787988 bytes
    Variable Size 145750508 bytes
    Database Buffers 25165824 bytes
    Redo Buffers 262144 bytes
    SQL> CREATE CONTROLFILE set DATABASE "jeeno1"
    RESETLOGS NOARCHIVELOG
    2 MAXLOGFILES 16
    3 MAXLOGMEMBERS 3
    4 MAXDATAFILES 100
    5 MAXINSTANCES 8
    6 MAXLOGHISTORY 454
    7 LOGFILE
    8 GROUP 1
    'C:\ORACLE\PRODUCT\10.1.0\ORADATA\jeeno\REDO01.LOG'
    SIZE 10M,
    9 GROUP 2
    'C:\ORACLE\PRODUCT\10.1.0\ORADATA\jeeno\REDO02.LOG'
    SIZE 10M,
    10 GROUP 3
    'C:\ORACLE\PRODUCT\10.1.0\ORADATA\jeeno\REDO03.LOG'
    SIZE 10M
    11 -- STANDBY LOGFILE
    12 DATAFILE
    13
    'C:\ORACLE\PRODUCT\10.1.0\ORADATA\jeeno\SYSTEM01.DBF',
    14
    'C:\ORACLE\PRODUCT\10.1.0\ORADATA\jeeno\UNDOTBS01.DBF',
    15
    'C:\ORACLE\PRODUCT\10.1.0\ORADATA\jeeno\SYSAUX01.DBF',
    16
    'C:\ORACLE\PRODUCT\10.1.0\ORADATA\jeeno\USERS01.DBF'
    17 CHARACTER SET WE8MSWIN1252
    18 ;
    Control file created.
    SQL> alter database open resetlogs;
    Database altered

    One more thing: you can also rename your database using the NID utility, so you don't need to recreate the controlfile just to change the database name. Just restore all files from the primary to the new server, create a new service with ORADIM, start the instance, and then use the NID utility.
    Thanks
    Kuljeet

  • Oracle 10g on Oracle Linux 5.6: dbca does.. nothing?

    Dear all,
    I successfully installed Oracle Enterprise Linux 5.6 64bit on a test server. I installed a bunch of prerequisite packages then I successfully installed Oracle 10G (10.2 for linux x86-64). During the installation I selected "no default database (only server) and no enterprise manager".
    By "successfully", I mean apparently!
    After that I launched DBCA and went through all the steps to create my first database:
    - general purpose
    - no enterprise manager (disabled, anyway)
    - on file system
    - use database file locations from template (ORACLE_BASE is correctly set)
    - no flash recovery area
    - default size, charset and so on
    Then I reach the last page (12). Create Database is already selected, and I select also "Generate Database Creation Scripts". I press Finish. The Confirmation page appears, I review all the stuff then I press OK.
    At this point I expect the window with the progress bar of database creation (I did this tens of times before on 10g, 11g on Windows and Linux) but I simply return to the Step 12 of dbca.
    In /u01/app/oracle/oradata I see an empty ORCL folder
    In /u01/app/admin/ORCL I see the empty folders: adump, bdump, cdump and so on, plus scripts, that contains:
    - ORCL.sh:
    #!/bin/sh
    mkdir -p /u01/app/oracle/admin/ORCL/adump
    mkdir -p /u01/app/oracle/admin/ORCL/bdump
    mkdir -p /u01/app/oracle/admin/ORCL/cdump
    mkdir -p /u01/app/oracle/admin/ORCL/dpdump
    mkdir -p /u01/app/oracle/admin/ORCL/pfile
    mkdir -p /u01/app/oracle/admin/ORCL/udump
    mkdir -p /u01/app/oracle/datafile
    mkdir -p /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/dbca/ORCL
    mkdir -p /u01/app/oracle/product/10.2.0/db_1/dbs
    ORACLE_SID=ORCL; export ORACLE_SID
    - ORCL.sql:
    PROMPT specify a password for sys as parameter 1;
    DEFINE sysPassword = &1
    PROMPT specify a password for system as parameter 2;
    DEFINE systemPassword = &2
    I can't find any log and I don't understand what's wrong.
    Mario.

    You'll never believe it!
    I was suspicious of an apparently trivial warning that appeared whenever I launched dbca from the command line (Xming server over ssh):
    Warning: Cannot convert string "-b&h-lucida-medium-r-normal-sans-*-140-*-*-p-*-iso8859-1" to type FontStruct
    I installed xming fonts and now the progress window appears and database creation is done, everything works fine! Incredible!
    This led me to see a new error during the db creation:
    ora-27125: unable to create shared memory segment.
    I solved this by issuing this command
    #echo "<dba_group_gid>" > /proc/sys/vm/hugetlb_shm_group
    where dba_group_gid is the gid of oracle group (i.e. oinstall).
