Datafile creation

I am adding a new datafile to my database.
We have 3 directories, like:
/dwh/data1/dwh/ (all datafiles here, like data101dwh.dbf)
/dwh1/data1/dwh/ (datafiles here, like data101dwh.dbf)
/dwh2/data1/dwh/ (datafiles here, like data101dwh.dbf)
Can I keep the datafile name data101dwh.dbf in all 3 directories, or is there any problem if I create it like this...?

What Sabdar said, times ten. What if you had to perform a tablespace point-in-time recovery? Let's say you map the paths to one place using DB_FILE_NAME_CONVERT. Now you have three files with the same name going into the same directory, and you have the added step of using SET NEWNAME to fix the file-name collision. Why complicate matters?

Similar Messages

  • RMAN Automatic Datafile Creation

    Hi,
Could anyone please explain RMAN Automatic Datafile Creation in Oracle 10g, with an example?
    Thanks in advance
    regards,
    Shaan

    Hi,
Automatic Datafile Creation - RMAN will automatically create missing datafiles in two circumstances: first, when the backup controlfile contains a reference to a datafile but no backup of that datafile is present; second, when a backup of the datafile is present but there is no reference to it in the controlfile, because the controlfile was not backed up after the datafile was added.
    doc
    http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10734/wnbradv.htm
    Regards,
    Tom
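As a sketch of how this behaves in practice (a hypothetical restore, assuming the datafile was added after the last backup): nothing special is needed, because a plain restore/recover re-creates the missing file as an empty file and then populates it from the archived redo.

{code}
RMAN> restore database;
RMAN> recover database;
{code}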

  • Datafile creation in Oracle

    Hi All,
    I am going to create a datafile in Oracle database by using this syntax.
    ALTER DATABASE
    CREATE DATAFILE 'c:\oracle\oradata\orabase\uwdata03.dbf' SIZE 1G
    AS 'UWDATA';
Will creating this datafile affect the default datafiles of Oracle?

    user11358816 wrote:
    Hi,
I don't know much about Oracle, but I need to create a datafile for creating a tablespace.
So for that I need to create a datafile.
No you don't; that's not how it works.
So for that I am asking about the syntax of it.
Thanks
Then the very first thing you'll want to learn is where to find the official documentation.
    It would be a good investment in your career to go to tahiti.oracle.com. Drill down to your product and version. There you will find the complete doc library.
    Notice the 'search' function at that site.
    You should spend a few minutes just getting familiar with what kind of documentation is available there by simply browsing the titles under the "Books" tab.
    Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what kind of information is available there. Learning where to look things up in the documentation is time well spent on your career.
    Do the same with the SQL Reference Manual.
    Then set yourself a plan to dig deeper.
    - Read the 2-Day DBA Manual
    - Read a chapter a day from the Concepts Manual.
    - Look in your alert log and find all the non-default initialization parms listed at instance startup. Then read up on each one of them in the Reference Manual. Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files, then bounce what you see there in the network administrators manual.
    - Read the concepts manual again.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
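For the record, a tablespace and its first datafile are normally created in a single statement; ALTER DATABASE CREATE DATAFILE is meant for recovery scenarios. A minimal sketch, reusing the path and the UWDATA name from the post:

{code}
CREATE TABLESPACE uwdata
DATAFILE 'c:\oracle\oradata\orabase\uwdata03.dbf' SIZE 1G;
{code}

This does not touch the existing default datafiles; it simply adds a new file.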

  • Tablespace or datafile  creation during recovery

    Hello
    During recovery,
if there is a new tablespace or datafile referenced in the archivelogs or redologs, I have to manually issue:
alter database create datafile .. as ..
Why doesn't Oracle automatically create the datafiles?

The datafile doesn't exist in the controlfile. The controlfile maintains the physical structure of the database. During the RECOVER phase, Oracle reads the archivelogs to identify which updates are to be applied; these are mapped in terms of file, block and row. If the file doesn't exist in the controlfile, the rollforward cannot be applied.
Therefore, ALTER DATABASE CREATE DATAFILE ... AS ... allows Oracle to "add" the file to the controlfile and then proceed with the rollforward.
Oracle doesn't automatically create the datafile because it can't know what the target file name should be.
In your backup, your datafiles may have been spread across /u01/oradata/MYDB, /u02/oradata/MYDB and /u03/oradata/MYDB, and this file may have been in /u03/oradata/MYDB. However, in your target (restored) location the files may be at only two, differently named, mountpoints: /oradata1/REPDB and /oradata/REPDB. Oracle can't decide for you where the new datafile (which was in /u03/oradata/MYDB) should be created: should it be in /oradata1/REPDB or /oradata/REPDB? Or you might have /oradata3/REPDB available, which the database instance isn't even aware of!
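A sketch of the command in the scenario described above (the mountpoints are the hypothetical ones from the reply; the file name users02.dbf is made up):

{code}
ALTER DATABASE CREATE DATAFILE '/u03/oradata/MYDB/users02.dbf'
AS '/oradata1/REPDB/users02.dbf';
RECOVER DATABASE;
{code}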

  • Looking for datafile creation date

    DB version: 11.2 / Solaris 10
    We use OMF for our datafiles stored in ASM.
I was asked to create a 20 GB tablespace. We don't create datafiles above 10 GB, so I did this:
CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO;
ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off;
Later it turned out that the schema will hold only 7 GB of data, so I wanted to reduce the size of the second file using the ALTER DATABASE DATAFILE ... RESIZE command, but I don't want to resize the first datafile created when I issued the CREATE TABLESPACE command. Since, in ASM, there is no real naming like
    +DATA/orcl/datafile/fmt_data_uat01.dbf
    +DATA/orcl/datafile/fmt_data_uat02.dbf
it is difficult to find which was the first file created.
And there is no create_date column in DBA_DATA_FILES. There isn't a create_date column in v$datafile either.
    SQL > select file_name from dba_data_Files where tablespace_name = 'FMT_DATA_UAT';
    FILE_NAME
    +DATA/orcl/datafile/fmt_data_uat.1415.792422709
    +DATA/orcl/datafile/fmt_data_uat.636.792422811
    SQL > select name, CHECKPOINT_TIME, LAST_TIME, FIRST_NONLOGGED_TIME, FOREIGN_CREATION_TIME
         from v$datafile where name like '+DATA/orcl/datafile/fmt_data_uat%';
    NAME                                                    CHECKPOINT_TIME      LAST_TIME            FIRST_NONL FOREIGN_CREATION_TIM
    +DATA/orcl/datafile/fmt_data_uat.1415.792422709         27 Aug 2012 18:55:06
    +DATA/orcl/datafile/fmt_data_uat.636.792422811          27 Aug 2012 18:55:06
SQL >
The alert log doesn't show the file names either.
    CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO
    Mon Aug 27 13:25:37 2012
    Completed: CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO
    Mon Aug 27 13:26:51 2012
    ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off
    Mon Aug 27 13:27:10 2012
    Thread 1 advanced to log sequence 70745 (LGWR switch)
      Current log# 8 seq# 70745 mem# 0: +DATA/orcl/onlinelog/group_8.1410.787080847
      Current log# 8 seq# 70745 mem# 1: +FRA/orcl/onlinelog/group_8.821.787080871
    Mon Aug 27 13:27:13 2012
    Archived Log entry 123950 added for thread 1 sequence 70744 ID 0x769b5f42 dest 1:
    Mon Aug 27 13:27:21 2012
    Completed: ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off
    Mon Aug 27 13:28:16 2012

There isn't a create_date column in v$datafile either.
Did you check CREATION_TIME?
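CREATION_TIME is indeed a column of V$DATAFILE; a quick sketch against the files from the post:

{code}
select name, creation_time
from v$datafile
where name like '+DATA/orcl/datafile/fmt_data_uat%'
order by creation_time;
{code}

The file with the earlier CREATION_TIME is the one created by the CREATE TABLESPACE statement.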

  • Big Datafile Creation

    Hi all,
    My OS: Windows Server 2003
    Oracle Version: 10.2.0.1.0
Is there a possibility to add a big datafile of more than 30 GB?
    Regards,
    Vikas

Vikas Kohli wrote:
Thanks for your help.
But if I already have a tablespace, every time it is about to fill up I need to add another 30 GB datafile. Is there any way I can specify one big datafile, or do I need to create a new bigfile tablespace and move the tables from the old tablespace to the new one?
You have to understand that a bigfile tablespace is a tablespace with a single, but very large, datafile.
Have you read the link I posted before?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#i1010733
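A minimal sketch of a bigfile tablespace (the name, path and size are hypothetical):

{code}
CREATE BIGFILE TABLESPACE big_data
DATAFILE '/u01/oradata/orcl/big_data01.dbf' SIZE 100G;
{code}

A bigfile tablespace has exactly one datafile, so it grows by resizing or autoextending that file rather than by adding more 30 GB files.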

  • Problem in creating Datafile

    Hi all,
    I am doing some kind of testing in my test maching and I faced the following situation.
    I have a good control file and all the archives starting from the 1st scn. Now I brought down the instance and physically deleted a datafile and tried bringing up the db. This time it couldnt open the db as a file is missing, so I created the datafile and gave the recover database command and things opened up fine. No I backed up the ctrl file to trace and brought down the instance. Now I deleted my controlfile and recreated it successfully by using the script that I got from trace and tried the datafile drop and recreate trick as before, but in this case I couldn't create the datafile. Can anybody explain me why is this behaviour?

    KRIS wrote:
    Version is 11.2.0.1.0
    OS is Red Hat Enterprise Linux Server release 5.5
    As told earlier I tried to create the deleted datafile and this is the error I got.
13:20:44 SQL> alter database create datafile 4 as '/oracle/abhi/data/users.dbf';
alter database create datafile 4 as '/oracle/abhi/data/users.dbf'
    ERROR at line 1:
    ORA-01178: file 4 created before last CREATE CONTROLFILE, cannot recreate
    ORA-01110: data file 4: '/oracle/abhi/data/users.dbf'
I can understand the error, but I want to understand what happens internally under such circumstances and why such a datafile recovery is not allowed in Oracle.
You can re-create a datafile without a backup (in archivelog mode) only when the datafile's creation time is later than the controlfile's creation time. If you re-create the controlfile, you cannot use the command "alter database create datafile" for the older files;
you can use it only for datafiles created after the controlfile creation time.
    check:-
    SQL> select creation_time,name from v$datafile;
    CREATION_ NAME
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\SYSTEM01.DBF
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\UNDOTBS01.DBF
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\SYSAUX01.DBF
    17-APR-07 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\USERS01.DBF
    27-JUL-11 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\EXAMPLE01.DBF
    27-JUL-11 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DEMODB\USER02.DBF
    6 rows selected.
    SQL> select controlfile_created from v$database;
    CONTROLFI
    27-JUL-11

  • Recreate datafile(URGENT)

    Hello
One datafile was removed by mistake and I don't have a backup. I want to recreate that datafile now. How can I do this?
Please provide the step-by-step process to recreate the datafile.
    Errors in file /oracle/home92/admin/tst/bdump/cbosstst_j001_22481.trc:
    ORA-12012: error on auto execute of job 191459
    ORA-01116: error in opening database file 9
    ORA-01110: data file 9: '/ocs/tst/maintbl03.dbf'
    ORA-27041: unable to open file
    SVR4 Error: 2: No such file or directory
    Thankx...

    Hi,
aah.. one datafile is removed by mistake
Without a backup you can't recover your datafile, unless your database is running in archivelog mode and you have stored all archivelogs from the datafile's creation until now; in that case it is possible to create a new datafile and recover all the data.
If it is a testing database, then perform the steps below:
alter database datafile 9 offline drop;
alter database open;
regards
    Taj
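A sketch of the archivelog-mode path described above (the file number and path are taken from the error stack in the question):

{code}
alter database create datafile 9 as '/ocs/tst/maintbl03.dbf';
recover datafile 9;
alter database datafile 9 online;
{code}

This only works if every archivelog generated since the datafile was created is still available.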

  • Is Shared storage provided by VirtualBox better or as good as Openfiler ?

    Grid version : 11.2.0.3
    Guest OS           : Solaris 10 (64-bit )
    Host OS           : Windows 7 (64-bit )
    Hypervisor : Virtual Box 4.1.18
    In the past , I have created 2-node RAC in virtual environment (11.2.0.2) in which the shared storage was hosted in OpenFiler.
Now that VirtualBox supports shared LUNs, I want to try it out. If VirtualBox's shared storage is as good as Openfiler, I would definitely go for VirtualBox, as Openfiler requires a third VM (Linux) to be created just for hosting the storage.
For pre-RAC testing, I created a VirtualBox VM and created a standalone DB in it. The test below was done on VirtualBox's LOCAL storage (I am yet to learn how to create shared LUNs in VirtualBox).
I know that datafile creation is not a definitive test of I/O throughput, but I did a quick test by creating a 6 GB tablespace.
Is a duration of 2 minutes and 42 seconds acceptable for a 6 GB datafile?
    SQL> set timing on
    SQL> create tablespace MHDATA datafile '/u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf' SIZE 6G AUTOEXTEND off ;
    Tablespace created.
    Elapsed: 00:02:42.47
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $
    $ du -sh /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    6.0G   /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    $ df -h /u01/app/hldat1/oradata/hcmbuat
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0t0d0s6       14G    12G   2.0G    86%    /u01

Well, once I experimented with Openfiler and built a 2-node 11.2 RAC on Oracle Linux 5 using iSCSI storage (3 VirtualBox VMs in total, all 3 on a desktop PC: Intel i7 2600K, 16 GB memory).
CPU/memory wasn't a problem, but as all 3 VMs were on a single HDD, performance was awful.
I didn't really run any benchmarks, but a compressed full database backup with RMAN for an empty database (<1 GB) took about 15 minutes...
2 VMs + a VirtualBox shared disk on the same single HDD provided much better performance; I'm still using this kind of setup for my sandbox RAC databases.
edit: 6 GB in 2'42" is about 37 MB/sec;
with the above setup using Openfiler, it was nowhere near this.
edit2: I made a little test:
host: Windows 7
guest: 2 x Oracle Linux 6.3, 11.2.0.3
hypervisor: VirtualBox 4.2
PC is the same as above
2 virtual cores + 4 GB memory for each VM
2 VMs + VirtualBox shared storage (single file) on a single HDD (Seagate Barracuda 3TB ST3000DM001)
created a 4 GB datafile (not enough space for 6 GB):
    {code}SQL> create tablespace test datafile '+DATA' size 4G;
    Tablespace created.
    Elapsed: 00:00:31.88
    {code}
    {code}RMAN> backup as compressed backupset database format '+DATA';
    Starting backup at 02-OCT-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=22 instance=RDB1 device type=DISK
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=+DATA/rdb/datafile/system.262.790034147
    input datafile file number=00002 name=+DATA/rdb/datafile/sysaux.263.790034149
    input datafile file number=00003 name=+DATA/rdb/datafile/undotbs1.264.790034151
    input datafile file number=00004 name=+DATA/rdb/datafile/undotbs2.266.790034163
    input datafile file number=00005 name=+DATA/rdb/datafile/users.267.790034163
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/nnndf0_tag20121002t192133_0.389.795640895 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/ncsnf0_tag20121002t192133_0.388.795640919 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 02-OCT-12
    {code}
Now I don't know much about Openfiler, and maybe I messed something up, but I think this setup is quite good, so I wouldn't use a third VM just for the storage.

  • ORA-01653: unable to extend table DISPATCH.T_EVENT_DATA by 4096 in tablespa

    Hello everybody,
I will try to explain the problem I had, because I still don't understand the real causes.
Everything started when I got this error:
ORA-01653: unable to extend table DISPATCH.T_EVENT_DATA by 4096 in tablespace USERS
I'm using ASM.
This was the situation of the tablespace USERS:
    FILE NAME                                                 TB NAME   SIZE (gb)                   STATUS               
    DATA/evodb/datafile/users.261.662113927     USERS     63,999969482421875     AVAILABLE
    and this was the situation of the DATAS diskgroup:
    GR # NAME        FREE_MB    USABLE     STATE      SECTOR SIZE   BLOCKSIZE
    2     DATA     60000     60000     MOUNTED     512     4096
    That diskgroup is composed by 5 files:
    PATH       DISK#       GR NAME           FREE MB    OS MB       TOTAL MB NAME                FAILGROUP
    /dev/asm2     0     DATA          12000     48127     48127     DATA_0000     DATA_0000
    /dev/asm3     1      DATA          12000     48127     48127     DATA_0001     DATA_0001
    /dev/asm4     2     DATA          12000     48127     48127     DATA_0002     DATA_0002
    /dev/asm5     3     DATA          12000     48127     48127     DATA_0003     DATA_0003
    /dev/asm6     4     DATA          12000     48127     48127     DATA_0004     DATA_0004
    This are the information about the table got from the dba_tables table:
    OWNER     DISPATCH
    TABLE_NAME     T_EVENT_DATA
    TABLESPACE_NAME USERS
    CLUSTER_NAME     
    IOT_NAME     
    STATUS     VALID
    PCT_FREE     10
    PCT_USED     
    INI_TRANS     1
    MAX_TRANS     255
    INITIAL_EXTENT     4294967296
    NEXT_EXTENT     
    MIN_EXTENTS     1
    MAX_EXTENTS     2147483645
    PCT_INCREASE     
    FREELISTS     
    FREELIST_GROUPS     
    LOGGING     YES
    BACKED_UP      N
    NUM_ROWS     532239723
    BLOCKS     1370957
    EMPTY_BLOCKS     0
    AVG_SPACE      0
    CHAIN_CNT 0
    AVG_ROW_LEN     32
    AVG_SPACE_FREELIST_BLOCKS     0
    NUM_FREELIST_BLOCKS     0
    DEGREE     1
    INSTANCES     1
    CACHE     N
    TABLE_LOCK     ENABLED
    SAMPLE_SIZE     532239723
    LAST_ANALYZED 21/09/2009 22.45
    PARTITIONED     NO
    IOT_TYPE     
    TEMPORARY     N
    SECONDARY      N
    NESTED     NO
    BUFFER_POOL     DEFAULT
    ROW_MOVEMENT DISABLED
    GLOBAL_STATS     YES
    USER_STATS     NO
    DURATION     
    SKIP_CORRUPT     DISABLED
    MONITORING     YES
    CLUSTER_OWNER     
    DEPENDENCIES     DISABLED
    COMPRESSION     DISABLED
    COMPRESS_FOR     
    DROPPED      NO
    READ_ONLY     NO
So, my question is:
Why did it happen?
Why was the table unable to allocate the space? From what I can see, the space was there.
I also tried an ALTER TABLESPACE USERS COALESCE, but with no luck.
To solve the problem, I had to create another tablespace and move the T_EVENT_DATA table there.
    Looking forward to read some answer,
    thanks in advance!

There can be two reasons:
1.) The datafile is unable to extend because AUTOEXTEND is set to NO.
2.) The datafile has reached the MAXSIZE given at datafile creation.
Query the dba_data_files view to confirm this.
    Regards.
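A sketch of that check (the tablespace name is taken from the error message):

{code}
select file_name, autoextensible,
       bytes/1024/1024/1024    as size_gb,
       maxbytes/1024/1024/1024 as max_gb
from dba_data_files
where tablespace_name = 'USERS';
{code}

Note that the reported file size of ~63.99997 GB sits right at the 4-million-block limit of a smallfile datafile if the database uses a 16 KB block size, which would explain why the file could not extend even though the diskgroup had free space.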

  • DB Recovery through RMAN usin Redo Archivelog

    Hi,
I want to know whether I would be able to recover the database using RMAN if I was only taking backups of the redo archivelogs.
    Regards,
    Raza

The minimum set of datafiles required would be SYSTEM and UNDO; in fact, all the datafiles that were created before the database was set to ARCHIVELOG mode.
If a database is created either with CREATE DATABASE or with dbca extracting it from a template, the initial set of datafiles is not created in ARCHIVELOG mode; Oracle switches to ARCHIVELOG mode only later.
Therefore, it wouldn't be possible to recover until and unless that minimum set of datafiles -- i.e. the first database backup -- is available.

  • ASM migration

    I have a LINUX server with an ASM instance up and running and a 10.2 database instance up and running that is presently not using the ASM, each in different ORACLE_HOME.
    I also have a 10.2 database on a different (Windows) server that I want to migrate from the Windows server to the LINUX server and create the tablespaces (datafiles) from the Windows server within the ASM diskgroup on the LINUX server .
    I am thinking all I need to do is change the init.ora parameters on the LINUX ORCL database to reference the ASM diskgroup (+DATA) for datafile creation, then run a script on ORCL to create the tablespaces in ASM on the LINUX server, then take an export from the Windows server and import it into the LINUX server.
Any comments? Will this work?

and create the tablespaces (datafiles) from the Windows server within the ASM diskgroup on the LINUX server.
You mean you want to migrate the instance on Windows COMPLETELY to Linux, right? If yes, that should be fine.
I am thinking all I need to do is change the init.ora parameters on the LINUX ORCL database to reference the ASM diskgroup (+DATA) for datafile creation, then run a script on ORCL to create the tablespaces in ASM on the LINUX server, then take an export from the Windows server and import it into the LINUX server.
Are you referring to the db_create_file_dest* init parameters? If yes, that should be fine. These are optional: you can create tablespaces on the database (where the ASM instance is running) without them by simply specifying the datafile as '+DATA/....'. BTW, with this you will have a mix of ASM and non-ASM datafiles, since you are planning on creating ASM datafiles in a database which already has non-ASM files. You can convert those datafiles into ASM using RMAN....
    The overall/high-level plan looks fine...Good Luck.
    Chandra
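A sketch of the init-parameter approach mentioned above (the diskgroup name +DATA is from the post; the tablespace name app_data is hypothetical):

{code}
alter system set db_create_file_dest = '+DATA';
create tablespace app_data datafile size 10g;
{code}

With db_create_file_dest set, the datafile is created as an OMF file inside the diskgroup without naming it explicitly.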

  • Unexpected result in efficieny comparson of Hashtable and ArrarList

I have been told that Hashtable is much more efficient than ArrayList. To convince myself I did the following:
a) Created a file of 100000 random 3-letter strings from the ASCII range 65 to 122. Following is the code I used:
import java.util.Random;
import java.io.*;
class RandomStrToFile {
     public static void main(String[] args) {
          Random rnd = new Random();
          long i, imax = 100000;
          String sFileName = "myrandom_strings.txt";
          String sBuffer = "";
          try {
               FileWriter fw = new FileWriter(sFileName);
               BufferedWriter bw = new BufferedWriter(fw);
               for (i = 0; i < imax; i++) {
                    // build one 3-character random string
                    sBuffer = "" + (char) (65 + (int) ((rnd.nextDouble()) * 57)) + (char) (65 + (int) ((rnd.nextDouble()) * 57)) + (char) (65 + (int) ((rnd.nextDouble()) * 57));
                    bw.write(sBuffer);
                    bw.newLine();
               }
               bw.close();
               fw.close();
          } catch (IOException e) {}
     }
}
b) Timed how long it takes to build a unique value list from the above file, using Hashtable and ArrayList respectively. Following is my code:
import java.io.*;
import java.util.*;
public class test2 {
     public static void main(String[] args) {
          String sLine;
          int iVal;
          long startTime, endTime;
          ArrayList<String> al = new ArrayList<String>();
          Hashtable<Integer, String> ht = new Hashtable<Integer, String>(4999); // use a prime number
          try {
               FileReader fr = new FileReader("C:\\Home\\MyJava\\eclipse_proj\\MyFirst\\bin\\myrandom_strings.txt");
               BufferedReader br = new BufferedReader(fr);
               startTime = System.currentTimeMillis();
               iVal = 0;
               while ((sLine = br.readLine()) != null) {
                    iVal = iVal + 1;
                    if (!ht.contains(sLine)) { // contains() searches the values, not the keys
                         ht.put(iVal, sLine);
                    }
               }
               endTime = System.currentTimeMillis();
               System.out.println("The Hashtable Elapsed Time is: " + (endTime - startTime));
               br.close();
               fr.close();
               fr = new FileReader("C:\\Home\\MyJava\\eclipse_proj\\MyFirst\\bin\\myrandom_strings.txt");
               br = new BufferedReader(fr);
               startTime = System.currentTimeMillis();
               iVal = 0;
               while ((sLine = br.readLine()) != null) {
                    iVal = iVal + 1;
                    if (!al.contains(sLine)) {
                         al.add(sLine);
                    }
               }
               endTime = System.currentTimeMillis();
               System.out.println("The ArrayList Elapsed Time is: " + (endTime - startTime));
               br.close();
               fr.close();
          } catch (IOException e) {}
     }
}
To my surprise, the Hashtable approach was much slower than the ArrayList approach. Am I missing something here?

OK, here we go. I cannot get rid of the file, as that is where I put the strings. See whether you can find the issue.

  • ORA-19645: datafile 17: incremental-start SCN is prior to creation SCN 8180

Hi experts, I received "ORA-19645: datafile 17: incremental-start SCN is prior to creation SCN 8180101895458" during an RMAN backup.
Can anyone explain what may be the cause of this?
My database version is 10g (10.2.0.4), O/S AIX 6.1.
Please help me out.

    19645, 00000, "datafile %s: incremental-start SCN is prior to creation SCN %s"
    // *Cause:  The incremental-start SCN which was specified when starting an
    //          incremental datafile backup is less than the datafile's
    //          creation SCN.
    // *Action: Specify a larger incremental-start SCN.

  • PB creation datafile greater than 2Gb

Why is it not possible to create a file greater than 2 GB on
Oracle 7.3.4.0
and Solaris 2.7?
I have created the same architecture on SCO OpenServer 5 and HP-UX, and it is
working.
Could you help me please?

In "earlier days", filesystems were limited to a maximum file size of 2 GB. Today that limitation has almost gone away.
What filesystem do you use for those datafiles? Both VxFS and UFS can handle bigger files. It seems to me that you upgraded the database from an earlier version (might have been 7.x or 8.x) and for compatibility reasons the upgrade kept that limitation in your init configuration. If your filesystem can handle files larger than 2 GB, it's safe to change that parameter and add a bigger-sized file.
Markus
