Problem with DATAFILE

Hi Guru!
In my RAC database I added a datafile. The new datafile was created on a local drive, unlike the other datafiles of the existing database. After some transactions I started getting error ORA-01157. I then took the new datafile offline, but got an error saying the datafile is offline and should be brought online. Then I dropped the datafile, and now its status shows as RECOVER. I can bring the database up to MOUNT mode but cannot open/start it.
Please help me
Thanks in Advance
Mokarem

Datafiles need to be on shared storage with RAC. There is no exception: even tablespaces which are used exclusively by one instance (undo) need to have their datafiles shared.
If you take the tablespace to which you added the local datafile offline, you can bring up the database. As long as there is no data in the datafile you added, you can drop it with the command: "alter database datafile 'xxx' offline drop"
(replace 'xxx' with '/path/to/datafile')
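A minimal sketch of that sequence, assuming the database is currently in MOUNT mode and the local file is '/local/disk/users_local02.dbf' (a hypothetical path):
SQL> startup mount;
SQL> -- take the locally created datafile offline so the open no longer requires it
SQL> alter database datafile '/local/disk/users_local02.dbf' offline drop;
SQL> alter database open;
SQL> -- confirm nothing else still needs recovery
SQL> select file#, error from v$recover_file;
Once the database is open, recreate the tablespace (or re-add the datafile) on shared storage.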

Similar Messages

  • [RMAN] problem with registering database

Hello, we have a RAC environment on two nodes. We created an RMAN catalog database and then registered our database with the REGISTER DATABASE command. After that we issued SHOW ALL and encountered an error:
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 21 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    could not read file header for datafile 74 error reason 4
    could not read file header for datafile 74 error reason 4
    could not read file header for datafile 74 error reason 4
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/10.2.0/db_1/dbs/snapcf_easy1.f'; # default
This is a new catalog database, which is to replace the old one.
How should we deal with this problem?

    So you are following up on datafile 74 in this other thread :
    problem with datafile recovery
    Hemant K Chitale
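Since the warning appears to come from the target database while RMAN reads its controlfile, a quick sketch of a check on the target side (the file number is taken from the message above):
SQL> select file#, status, error, recover from v$datafile_header where file# = 74;
SQL> select file_id, file_name, tablespace_name, status from dba_data_files where file_id = 74;
If the header really cannot be read, the SHOW ALL output is only the messenger; the datafile itself needs attention first.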

  • Tp ended with error code 0247 - addtobuffer has problems with data- and/or

    Hello Experts,
    If you give some idea, it will be greatly appreciated.
This transport issue started after a power outage, when the SAP system went through a hard shutdown. We then brought the system back up. Before that, we did not have this transport issue.
Our TMS landscape is DEV -> QA -> PRD (systems SED -> SEQ -> SEP).
DEV (SED) hosts the TMS domain controller.
FYI: at OS level, when we run the scp command as the root user, it works fine for any TR.
In STMS, while adding a TR in SEQ (the QA system), we get the following error.
    Error:
    Transport control program tp ended with error code 0247
         Message no. XT200
    Diagnosis
         An error occurred when executing a tp command.
           Command:        ADDTOBUFFER SEDK906339 SEQ client010 pf=/us
           Return code:    0247
           Error text:     addtobuffer has problems with data- and/or
           Request:        SEDK906339
    System Response
         The function terminates.
    Procedure
         Correct the error and execute the command again if necessary.
    This is tp version 372.04.71 (release 700, unicode enabled)
    Addtobuffer failed for SEDK906339.
      Neither datafile nor cofile exist (cofile may also be corrupted).
    standard output from tp and from tools called by tp:
    tp returncode summary:
    TOOLS: Highest return code of single steps was: 0
    ERRORS: Highest tp internal error was: 0247

When we run the scp via SM69,
SEDADM@DEVSYS:/usr/sap/trans/cofiles/K906339.SED SEQADM@QASYS:/usr/sap/trans/cofiles/.
it throws the error below:
    Host key verification failed.
External program terminated with exit code 1
    Thanks
    Praba

  • Problem with SQL,udfs & procedures

I have a couple of problems with my database. Please suggest solutions.
We are basically a web product with a quite large database.
1. I am using both user-defined and built-in functions in SQL statements. I want to optimize these queries. Why does the use of a function in a SQL statement suppress the use of indexes internally, and how can I force the index to be used even though a function is applied?
2. Whenever the client makes a request to the database server with a SQL query, what steps can we take at the client side to enhance the performance of the query (i.e. the data request)? How do we optimize the usage of CPU at the client site?
3. What is the increase in performance from having separate tablespaces for user data, system data and indexes?
4. Why are procedures getting invalidated after some time? Once a procedure is invalidated, it is not getting executed at the front end; however, even with an invalid status, the same procedure is still getting executed at the back end.
Can anybody help me? A reply ASAP would be appreciated.
Regards
Koshal

    1. In Oracle 8i, one can create function-based indexes, where instead of indexing a column, one can index upper() of that column.
2. Optimizing client performance is trickier. One can tune the queries being submitted by the client, but if getting the first row back quickly is the goal -- which is how response time is generally perceived -- set the OPTIMIZER_MODE parameter in init.ora to FIRST_ROWS.
    3. There is minimal benefit to having data, index, rollback, and temp tablespaces all separated unless all the datafiles for each tablespace reside on different disks (data on disk 1, index on disk 2, etc). It's recommended regardless, but unless the files are on separate volumes, there won't be a great performance benefit.
    4. A procedure is invalidated whenever DDL is issued against any object that that procedure depends upon. For example, if you add a column to a table, any procedures which reference that table will be invalidated. Any procedures which reference views which reference that table will be invalidated, because the view will be invalidated. It's best to run a compile script which looks for and attempts to recompile any invalid objects on a daily basis.
    Adam
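As a brief sketch of points 1 and 4 above (the table, column, and procedure names here are made up for illustration):
-- point 1: a function-based index lets a predicate such as WHERE UPPER(last_name) = :x use an index
CREATE INDEX emp_upper_name_idx ON employees (UPPER(last_name));
-- point 4: list invalid objects and recompile them, e.g. from a nightly job
SELECT object_name, object_type FROM user_objects WHERE status = 'INVALID';
ALTER PROCEDURE my_proc COMPILE;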

  • Problem with full database backup.

This is what I got after executing the backup full database statement:
    "RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/03/2009 14:15:15
    ORA-19502: write error on file "/u01/app/oracle/backup/ORCL/ora_df680537660_s2_s1", blockno 25985 (blocksize=8192)
    ORA-27072: File I/O error
    Linux Error: 2: No such file or directory
    Additional information: 4
    Additional information: 25985
    Additional information: 483328"
I have much the same problem with CREATE TABLESPACE: I can't create a 512M tablespace, but I can create the same tablespace at only 100M.
I think it must be something to do with storage?

    Starting backup at 03-MAR-09
    using channel ORA_DISK_1
    input datafile fno=00015 name=/u01/app/oracle/oradata/o2_mf_system_48fprop3_.dbf
    input datafile fno=00003 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_48fpropg_.dbf
    input datafile fno=00014 name=/u01/app/oracle/oradata/o2_mf_sysaux_48fpropg_.dbf
    input datafile fno=00005 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_example_48fpw04c_.dbf
    input datafile fno=00017 name=/u01/app/oracle/oradata/ORCL/datafile/rcvcat01.dbf
    input datafile fno=00002 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_48fprovo_.dbf
    input datafile fno=00016 name=/u01/app/oracle/oradata/inventory03.dbf
    input datafile fno=00011 name=/u01/app/oracle/oradata/ORCL/datafile/inventory01.dbf
    input datafile fno=00012 name=/u01/app/oracle/oradata/ORCL/datafile/inventory02.dbf
    input datafile fno=00006 name=/u01/app/oracle/oradata/ORCL/datafile/ts01.dbf
    input datafile fno=00008 name=/u01/app/oracle/oradata/ORCL/datafile/ts02.dbf
    input datafile fno=00001 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_48fprop3_.dbf
    channel ORA_DISK_1: specifying datafile(s) in backupset
    channel ORA_DISK_1: starting full datafile backupset
    input datafile fno=00009 name=/u01/app/oracle/oradata/ORCL/datafile/undo01.dbf
    channel ORA_DISK_1: starting piece 1 at 03-MAR-09
    input datafile fno=00013 name=/u01/app/oracle/oradata/ORCL/datafile/val01.dbf
    input datafile fno=00007 name=/u01/app/oracle/oradata/ORCL/datafile/test_reorg0
    input datafile fno=00004 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_48fprowb_.dbf
    input datafile fno=00010 name=/u01/app/oracle/oradata/ORCL/datafile/ts01b.dbf
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/03/2009 14:15:15
    ORA-19502: write error on file "/u01/app/oracle/backup/ORCL/ora_df680537660_s2_s1", blockno 25985 (blocksize=8192)
    ORA-27072: File I/O error
    Linux Error: 2: No such file or directory
    Additional information: 4
    Additional information: 483328
    Additional information: 25985
The directory exists, because I can back up a single tablespace into the same directory; I just can't back up the whole database.
    Edited by: val75 on Mar 3, 2009 1:57 PM
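One thing worth ruling out here, since a single-tablespace backup works but the full backup and a 512M tablespace both fail, is a size limit on the destination (free space, quota, or a per-process file size limit such as ulimit -f). As a sketch, capping the backup piece size keeps any single output file small:
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G;
RMAN> BACKUP DATABASE FORMAT '/u01/app/oracle/backup/ORCL/%U';
The 2G value is only an example; pick something below whatever limit the filesystem or shell imposes.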

  • Problem with restoring database from backupset

    Hello,
I'm a newbie at working with RMAN and I have a problem with restoring a database from a backup set in my test case.
I've restored the controlfile, but I couldn't restore the database - it fails with:
    RMAN-00571: =============================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS
    RMAN-00571: =============================================
    RMAN-03002: failure of restore command at 08/31/2006 12:06:47
    ORA-01180: can not create datafile 1
    ORA-01110: data file 1: 'C:\ORACLE\ORADATA\LOCA10G2\SYSTEM01.DBF'
    List of backupsets from restored controlfile
    (I restored controlfile by command: restore controlfile from 'C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCSNF_TAG20060829T113622_2H82OW77_.BKP'; -- I have disabled controlfile autobackup, therefore I couldn't restore from autobackup);
    List of Backup Sets
    ===================
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    36 Full 6.98M DISK 00:00:02 29-AUG-06
    BP Key: 36 Status: AVAILABLE Compressed: NO Tag: 01_CTL
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCNNF_01_CTL_2H82NDQT_.BKP
    Control File Included: Ckp SCN: 578469 Ckp time: 29-AUG-06
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    37 Full 322.96M DISK 00:00:27 29-AUG-06
    BP Key: 37 Status: AVAILABLE Compressed: NO Tag: TAG20060829T113622
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NNNDF_TAG20060829T113622_2H82NQ58_.BKP
    List of Datafiles in backup set 37
    File LV Type Ckp SCN Ckp Time Name
    1 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\SYSTEM01.DBF
    2 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\UNDOTBS01.DBF
    3 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\SYSAUX01.DBF
    4 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\USERS01.DBF
    BS Key Size Device Type Elapsed Time Completion Time
    38 650.50K DISK 00:00:00 29-AUG-06
    BP Key: 38 Status: AVAILABLE Compressed: NO Tag: TAG20060829T113804
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_ANNNN_TAG20060829T113804_2H82QYOV_.BKP
    List of Archived Logs in backup set 38
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 32 577277 29-AUG-06 578529 29-AUG-06
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    39 Full 7.02M DISK 00:00:00 29-AUG-06
    BP Key: 39 Status: AVAILABLE Compressed: NO Tag: TAG20060829T113622
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCSNF_TAG20060829T113622_2H82OW77_.BKP
    Control File Included: Ckp SCN: 578493 Ckp time: 29-AUG-06
    SPFILE Included: Modification time: 28-AUG-06
I can successfully crosscheck the backup sets (with the command crosscheck backup), but I can't restore the database.
The path C:\ORACLE\ORADATA\LOCA10G2\ exists and I have the right privileges.
    Output of command restore database validate:
    RMAN> restore database validate;
         Starting restore at 31-AUG-06
         allocated channel: ORA_DISK_1
         channel ORA_DISK_1: sid=102 devtype=DISK
         data file 1 will be created automatically during restore operation
         data file 2 will be created automatically during restore operation
         data file 3 will be created automatically during restore operation
         data file 4 will be created automatically during restore operation
         restore not done; all files readonly, offline, or already restored
         Finished restore at 31-AUG-06
    What's wrong?
    Thanks
         Paul

    Hi,
I think that everything seems to be OK.
These are the commands I use to validate that the backup exists:
    RMAN> crosscheck backup;
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=102 devtype=DISK
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCNNF_01_CTL_2H82NDQT_.BKP recid=36 stamp=599744172
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NNNDF_TAG20060829T113622_2H82NQ58_.BKP recid=37 stamp=599744183
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_ANNNN_TAG20060829T113804_2H82QYOV_.BKP recid=38 stamp=599918805
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCSNF_TAG20060829T113622_2H82OW77_.BKP recid=39 stamp=599918805
    Crosschecked 4 objects
    RMAN> list backup summary;
    List of Backups
    ===============
    Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
    36 B F A DISK 29-AUG-06 1 1 NO 01_CTL
    37 B F A DISK 29-AUG-06 1 1 NO TAG20060829T113622
    38 B A A DISK 29-AUG-06 1 1 NO TAG20060829T113804
    39 B F A DISK 29-AUG-06 1 1 NO TAG20060829T113622
    RMAN> list backup;
    List of Backup Sets
    ===================
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    36 Full 6.98M DISK 00:00:02 29-AUG-06
    BP Key: 36 Status: AVAILABLE Compressed: NO Tag: 01_CTL
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCNNF_01_CTL_2H82NDQT_.BKP
    Control File Included: Ckp SCN: 578469 Ckp time: 29-AUG-06
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    37 Full 322.96M DISK 00:00:27 29-AUG-06
    BP Key: 37 Status: AVAILABLE Compressed: NO Tag: TAG20060829T113622
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NNNDF_TAG20060829T113622_2H82NQ58_.BKP
    List of Datafiles in backup set 37
    File LV Type Ckp SCN Ckp Time Name
    1 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\SYSTEM01.DBF
    2 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\UNDOTBS01.DBF
    3 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\SYSAUX01.DBF
    4 Full 578481 29-AUG-06 C:\ORACLE\ORADATA\LOCA10G2\USERS01.DBF
    BS Key Size Device Type Elapsed Time Completion Time
    38 650.50K DISK 00:00:00 29-AUG-06
    BP Key: 38 Status: AVAILABLE Compressed: NO Tag: TAG20060829T113804
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_ANNNN_TAG20060829T113804_2H82QYOV_.BKP
    List of Archived Logs in backup set 38
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 32 577277 29-AUG-06 578529 29-AUG-06
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    39 Full 7.02M DISK 00:00:00 29-AUG-06
    BP Key: 39 Status: AVAILABLE Compressed: NO Tag: TAG20060829T113622
    Piece Name: C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCSNF_TAG20060829T113622_2H82OW77_.BKP
    Control File Included: Ckp SCN: 578493 Ckp time: 29-AUG-06
    SPFILE Included: Modification time: 28-AUG-06
    RMAN> restore database validate;
    Starting restore at 04-SEP-06
    using channel ORA_DISK_1
    data file 1 will be created automatically during restore operation
    data file 2 will be created automatically during restore operation
    data file 3 will be created automatically during restore operation
    data file 4 will be created automatically during restore operation
    restore not done; all files readonly, offline, or already restored
    Finished restore at 04-SEP-06
Is something wrong? After the crosscheck, the backup set with the SYSTEM datafile is AVAILABLE.
If I test the existence of the backup pieces on disk and the permissions, everything is OK too:
    C:\>dir C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\
    Volume in drive C has no label.
    Volume Serial Number is E003-9FC6
    Directory of C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29
    29.08.2006 11:38 <DIR> .
    29.08.2006 11:38 <DIR> ..
    29.08.2006 11:38 666 624 O1_MF_ANNNN_TAG20060829T113804_2H82QYOV_.BKP
    29.08.2006 11:36 7 340 032 O1_MF_NCNNF_01_CTL_2H82NDQT_.BKP
    29.08.2006 11:37 7 372 800 O1_MF_NCSNF_TAG20060829T113622_2H82OW77_.BKP
    29.08.2006 11:36 338 657 280 O1_MF_NNNDF_TAG20060829T113622_2H82NQ58_.BKP
    4 File(s) 354 036 736 bytes
    2 Dir(s) 56 865 202 176 bytes free
    C:\>copy C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\ C:\ORACLE\ORADATA\LOCA10G2\
    C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_ANNNN_TAG20060829T113804_2H82QYOV_.BKP
    C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCNNF_01_CTL_2H82NDQT_.BKP
    C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NCSNF_TAG20060829T113622_2H82OW77_.BKP
    C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NNNDF_TAG20060829T113622_2H82NQ58_.BKP
    4 file(s) copied.
    Thanks
    Pavel
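For what it's worth, "restore not done; all files readonly, offline, or already restored" usually means RMAN considers the datafiles already on disk current with respect to the restored controlfile, while ORA-01180 suggests it found no usable backup of datafile 1 in that controlfile. A sketch of one thing to try (not a guaranteed fix) is to make sure the datafile backup piece is cataloged and then force the restore so existing files are overwritten:
RMAN> CATALOG BACKUPPIECE 'C:\ORACLE\FLASH_RECOVERY_AREA\LOCA10G2\BACKUPSET\2006_08_29\O1_MF_NNNDF_TAG20060829T113622_2H82NQ58_.BKP';
RMAN> RESTORE DATABASE FORCE;
RMAN> RECOVER DATABASE;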

  • Problem with the cache hit ratio

    Hello,
I am having a problem with the cache hit ratio I am getting. I am sure, 100% sure, that something has to be wrong with the cache hit ratio I am fetching!
    1) I will post the code that I am using to retrieve the cache hit ratio. I've seen about a thousand different equations, all equivalent in the end.
In Oracle the cache hit ratio seems to be:
cache hits / cache lookups,
where cache hits <=> logical I/O - physical reads
and cache lookups <=> logical I/O.
Now some people use the 'session logical reads' stat from the view v$sysstat; others use db block gets + consistent gets; whatever. At the end of the day it's all the same, and this is what I use:
    SELECT (P1.value + P2.value - P3.value) AS CACHE_HITS, (P1.value + P2.value) AS CACHE_LOOKUPS, P4.value AS MAX_BUFFS_SIZEB
    FROM v$sysstat P1, v$sysstat P2, v$sysstat P3, V$PARAMETER P4
    WHERE
    P1.name = 'db block gets' AND
    P2.name = 'consistent gets' AND
    P3.name = 'physical reads' AND
    P4.name = 'sga_max_size'
    2) The problem:
The cache hit ratio I am retrieving cannot be correct. In this case I was benchmarking a hugely inefficient query, consisting of the union of 5 projections over the same source table, and Oracle is configured with a relatively small SGA of 300 MB. The query plan is awful; the database will read the source table 5 times.
And I can see in the physical data statistics of the source tablespace that total bytes read is approximately 5 times the size of the text file that I used to bulk load data into the database.
    Some of the relevant stats, wait events:
    db file scattered read     1129,93 seconds
    Elapsed time: 1311,9 seconds
    CPU time: 179,84
    SGA max Size: 314572800 Bytes
And total bytes read: 77771964416 B (approximately 72 GB).
The source text file loaded into the database was approximately 16 GB.
The number of bytes read was about 4.5 times the size of the source datafile.
I would say this: given the difference between CPU time and elapsed time, it is clear that the query spent almost all of its time doing db file scattered reads. How is it possible that I get the following cache hit ratio?
    Cache hit Ratio: 0,92
    Cache hits: 109680186
    Cache lookups: 119173819
I mean, only 8% of that logical I/O corresponded to physical I/O? It is just not possible.
    3) Procedure of taking stats:
    Now to retrieve these stats I snapshot the system 2 times. One before the query, one after the query.
But this is not done in a single session. In total 3 sessions are created: one session to retrieve the stats before the query, one session to run the query, and a last session to snapshot after the query.
    Could the problem, assuming there is one, be related to this:
    "The V$SESSTAT view contains statistics on a per-session basis and is only valid for the session currently connected. When a session disconnects all statistics for the session are updated in V$SYSSTAT. The values for the statistics are cleared until the next session uses them."
What does this paragraph mean? Does it mean that v$sysstat only shows you the stats of the last session that closed? Or does it mean that v$sysstat is incremented with the statistics of each v$sesstat once a session terminates? If so, then my procedure for gathering those stats should be correct.
    Can anyone help me sort out the origin of such a high cache hit ratio, with so much I/O being done?

    sono99 wrote:
Hi,
first off, let me start by saying that there were many things in your post that I could not understand. 1. Because I am not an Oracle expert; I use whatever RDBMS I need to. 2. Because another problem has come up and, right now, I cannot inform myself enough to comprehend it all.
Well, could it be that you need to understand the database you are working on in order to comprehend it? That is why we strongly advise you to read the Concepts manual first: you need to understand the architecture that Oracle uses, as well as the basic concepts of how Oracle does locking and maintains read consistency. It does these differently than other database engines, and some things become nonsense if looked at from the viewpoint of a single user.
    >
    quote:
It would be useful to see the execution plan just in case you have simplified the problem so much that a critical detail is missing.
    First, the query code:
CREATE TABLE FAVFRIEND
NOLOGGING TABLESPACE TARGET
AS
SELECT ID as USRID, FAVF1 as FAVF FROM PROFILE
UNION ALL
SELECT ID as USRID, FAVF2 AS FAVF FROM PROFILE
UNION ALL
SELECT ID as USRID, FAVF3 AS FAVF FROM PROFILE
UNION ALL
SELECT ID as USRID, FAVF4 AS FAVF FROM PROFILE
UNION ALL
SELECT ID as USRID, FAVF5 AS FAVF FROM PROFILE;
Now, although it is clear from the query that the statement is executed with NOLOGGING, I have also disabled logging entirely for the tablespace.
There are certain rules about nologging that may not be obvious. Again, this derives from the basic Oracle architecture, and if you use the wrong definitions of things like logging, you will be led down the primrose path to confusion.
    >
Furthermore, yes, the RDBMS is a test RDBMS... I have dropped the database a few times... And I am constantly deleting and re-inserting data into the source database table named PROFILE.
I also make sure to check all the datafile statistics, and for this query the amount of redo log, undo "log" and temp used is negligible, practically zero.
Create table is DDL, which has implied commits before and afterwards. There is a lot going on, some of it dependent on the volume of data returned. The Oracle database writer writes things out when it feels like it; there are situations where it might just leave it in memory for a while. With nologging, Oracle may not care that you can't perform recovery if it is interrupted. So you might want to look into Statspack or EM to tell you what is going on; the datafile statistics may not be all that informative for this case.
    >
Most of the I/O is reads; a little of the I/O is writes.
My idea is not to optimize this query, it is to understand how it performs.
Well, have you read the Concepts manual?
I have other implementations to test; namely, I am having trouble with one of them.
Furthermore, I doubt the query plan Oracle is using actually involves table scans (as I'd like it to do), because in the wait events most of the wait time for this query is spent doing "db file scattered read", and I think this is different from a table scan.
Please look up the definition of [db file scattered read|http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref703].
    >
Do you really have to use sessions external to the query session? Can you query v$mystat joined to v$statname from the session itself?
No, I don't want to do that!
I avoid as much as possible having the code I execute be implemented in Java.
Why do you think Java has anything to do with this? In your session, describe v$mystat and v$statname; these are views you can look at.
When I can avoid it, I don't query the database directly through JDBC; I use the RDBMS command line client, which is supposed to be very robust.
Er, is that sqlplus?
So yes, I only connect to the database with JDBC... in the very last session.
Of course, I could have put both the gather-stats-before-query and gather-stats-after-query into a single script: the script that also runs the query. But that would cause me a number of problems; namely, some of the SQL I build has to be generated dynamically, and I don't want to replicate the snapshotting code into every query script I make. This way I have one SQL script with the snapshotting code and multiple scripts for running each query. I avoid code replication in this manner.
Instrumentation is a large subject; dynamic SQL generation is something to be avoided if possible. Remember, Oracle is written with the idea that many people are going to be sharing code and the database, so it is optimized in that way. For SQL parsing in particular, if every SQL is different, you get a performance problem called "hard parsing". You can (and generally should, and sometimes can't avoid) use bind variables so that Oracle doesn't need to hard parse every SQL. In fact, this is one of those things that applies to other engines besides Oracle. I would recommend you read Tom Kyte's books; he explains what is going on in detail, including in some places the non-Oracle viewpoint.
    >
Furthermore, since the database is not a production database, it is there so I can do my tests. I don't have to be concerned with what other sessions may be doing to my system. There are only the sessions I control.
No, there are sessions Oracle controls. If you are on Unix, you can easily see this, but there are ways to see it on Windows, too. In some cases, your own sessions can affect themselves.
    >
Then what is the array fetch size? If the array fetch size is large enough, the number of block visits would be similar to the number of physical block reads.
I don't know what the arraysize you mention is. I have not touched that parameter, so whatever it is, it's the default.
You should find out! You can go to http://tahiti.oracle.com and type array fetch size into the search box. You can also go to http://asktom.oracle.com and do the same thing, with some more interesting detail.
    >
By the way, I don't get the query results back into my client; the query results are dumped into a target output table. So, if the arraysize has something to do with the number of rows that Oracle returns to the client in each step... I think it doesn't matter.
You may hear this phrase a lot:
"It depends."
>
As for the query plan, if I am not mistaken you can't get query plans for queries that are CREATE TABLE AS SELECT.
What?
    JG@TTST> explain plan for create table jjj as select * from product_master;
    Explained.
    JG@TTST> select count(*) from plan_table;
      COUNT(*)
             3
I can, however, omit the create table part and just ask for evaluation of the SELECT part of the query; I believe it should be the same.
    "Optimizer"     "Cost"     "Cardinality"     "Bytes"     "Partition Start"     "Partition Stop"     "Partition Id"     "ACCESS PREDICATES"     "FILTER PREDICATES"
    "SELECT STATEMENT"     "ALL_ROWS"     "2563"     "586110"     "15238860"     ""     ""     ""     ""     ""
    "UNION-ALL"     ""     ""     ""     ""     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "512"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    This query plan was taken from sql developer, exported to txt, and the PROFILE table here has only 100k tuples.
Right now I am more concerned with testing the MODEL query, which Oracle doesn't seem to be able to run any more... but that is a matter for another thread.
Regarding this plan: the UNION ALL seems to be more than just a binary operator... it seems to be n-ary.
The UNION ALL in that execution plan seems to take five SONO99.PROFILE tables as leaf tables and do a table scan on all of them. So I'd say that the RDBMS should only scan each database block once, and not 5 times.
But it doesn't seem to be so. It seems like what Oracle is doing is scanning each table completely and then moving on to the next SELECT statement in the UNION ALL, because the amount of the source table that was read is 5 times greater than the size of the source table; Oracle didn't reuse blocks it had already read.
But this is just my feeling.
Your feeling is uninteresting. Telling us what you really hope to accomplish might be more interesting.
Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing? Five, one for each table block?
It depends.
    >
    My best regards,
    Nuno (99sono xp).
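As a small sketch of the earlier suggestion to read the statistics from the query's own session (instead of snapshotting v$sysstat from separate sessions), run this in that session right before and right after the query and diff the two results:
SELECT sn.name, ms.value
  FROM v$mystat ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name IN ('db block gets', 'consistent gets', 'physical reads');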

  • Problem with RMAN incomplete recovery

    Oracle Version: 9i
    Operating System: Windows 2000
I have a problem with RMAN incomplete recovery until a log sequence.
My current redo log groups are:
SQL> SELECT GROUP#, SEQUENCE#, THREAD# FROM V$LOG;
    GROUP#  SEQUENCE#    THREAD#
         1         14          1
         2         13          1
         3         12          1
I took a full database backup using RMAN and I am trying to recover like this:
RUN
{
  ALLOCATE CHANNEL C1 TYPE DISK;
  SET UNTIL SEQUENCE 7 THREAD 1;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}
But I am getting a message saying there is no backup of datafiles 1, 2, ... 10 to restore, although my database is in archivelog mode and I took a backup.
    Yachendra

Please consult v$backup_datafile.
It will tell you when each file was backed up.
RMAN will always search for a datafile backup taken prior to the log sequence (or whatever else is limiting the incomplete recovery).
Likely, when you made your backup, Oracle was already past sequence 7.
    Sybrand Bakker
    Senior Oracle DBA
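A sketch of that check: compare the checkpoint SCN of each datafile backup with the SCN range of log sequence 7, since the restore needs backups taken before the UNTIL point:
SQL> SELECT file#, checkpoint_change#, completion_time FROM v$backup_datafile ORDER BY file#, completion_time;
SQL> SELECT sequence#, first_change#, next_change# FROM v$log_history WHERE thread# = 1 AND sequence# = 7;
If every backup's checkpoint_change# is at or above the first_change# of sequence 7, the backup is too recent for SET UNTIL SEQUENCE 7 and an older backup (or a higher UNTIL sequence) is needed.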

  • RMAN problem with block corruption

    Hi
I have a problem with block corruption in one of the databases. Here is the error message:
ORA-01578: ORACLE data block corrupted (file # 10, block # 55309)
ORA-01110: data file 10: '/db/gist1/data/gist1_gis_nologging_01.dbf'
ORA-26040: data block was loaded using the NOLOGGING option
gisq SQL> select * from v$database_block_corruption;
     FILE#     BLOCK#     BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
        10         11        126          3754364971  LOGICAL
    RMAN> blockrecover datafile 10 block 11;
    Starting blockrecover at 14/DEC/2012 16:25:48
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of blockrecover command at 12/14/2012 16:25:48
    RMAN-05009: Block Media Recovery requires Enterprise Edition
Could someone help me with a solution for this? We have Standard Edition only.
Thanks in advance...

It appears that there was a NOLOGGING operation on an object that resides in '/db/gist1/data/gist1_gis_nologging_01.dbf'.
NOLOGGING operations, as the name suggests, generate only minimal redo, which makes the objects affected by them unrecoverable from the redo stream.
RMAN blockrecover, as far as I understand, uses the full and archivelog backups to perform the block recovery. Since the archivelog backups do not contain the changes made by the NOLOGGING operation, blockrecover would not be able to help you even if you were licensed for it.
    You can try to restore the object as of the most recent full backup…
    Iordan Iotzov
    http://iiotzov.wordpress.com/
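Before rebuilding anything, it can help to see which segment owns the corrupt blocks; a sketch using the file and block numbers from the error above:
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = 10
   AND 55309 BETWEEN block_id AND block_id + blocks - 1;
If the segment is an index, or a table that can be reloaded, dropping and recreating it is usually simpler than any restore attempt.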

  • Problem with xy-graph program

    HI!
I have a problem with my program. With this program I want to load a datafile and display the data from the file on the XY graph. I want to display 1, 2, 3 or 4 plots on the XY graph. The problem is, when I want to display only two plots and I push the "... laden" button, a popup opens and asks me for the 3rd or 4th plot, even though I only want to display two.
The program should display the data only when a path has been entered in the path element (the white, red, blue or green one).
The second problem is, when I start the program, push the "...laden" button and no path has been added, a popup opens and asks me for files, but the popup should not open!
    Could somebody help me???
    TNKS
    best regards,
    peter
    Labview 7.1 on Windows 2000
    Attachments:
    program.zip ‏30 KB

    Hi,
    The function you're using to open the file is automatically asking for a file name when you call it with an empty path connected to the file path input.
    So, if you don't want to get the data when the path is empty, you should not call that function.
    I've changed your vi to show you.
    Hope it helps,
    Paulo
    Attachments:
    Vergleich_XY.vi ‏93 KB

  • Problem with exp/imp transport_tablespace=y & CLOBs

    All,
    I am having a problem with values in a CLOB column when exporting/importing a database using "transport_tablespace" command line option.
    Some background:
    Origin: 9i Windows
    Destination: 10g Windows
    Export/import work without warnings. But this SQL command:
    SELECT DUMP(CAST(TEXT_VALUE AS VARCHAR(4000)), 1017)
    FROM SE_PSOI.SEPROPTEXT
    WHERE TEXT_ID = 316161
    Gives me from the 9i database:
    Typ=1 Len=18 CharacterSet=UTF8: U,n,d,e,r, ,C,o,n,s,t,r,u,c,t,i,o,n
    But the following from the 10g database:
    Typ=1 Len=54 CharacterSet=UTF8: e5,94,80,e6,b8,80,e6,90,80,e6,94,80,e7,88,80,e2,80,80,e4,8c,80,e6,bc,80,e6,b8,80,e7,8c,80,e7,90,80,e7,88,80,e7,94,80,e6,8c,80,e7,90,80,e6,a4,80,e6,bc,80,e6,b8,80
    It appears as if there is some sort of charset issue to me. I've tried working with the environment NLS_LANG value when I export and import (and other text values come through OK - just the CLOB is the problem). After Googling and searching the forums here I am unable to find an answer.
    Can someone help me or point me in the right direction?
    Regards,
    Matthew Lesko

I got an error:
IMP-00003: ORACLE error 1565 encountered
ORA-01565: error in identifying file '/u01/app/oracle/oradata/test201.dbf'
My question: can we import with transport_tablespace=y using two tablespaces and four datafiles?
First, check what the status of the datafiles is:
    SQL> select file_name,online_status from dba_data_files where file_name like '%test201%';
    also
    $ ls -ltr /u01/app/oracle/oradata/test201.dbf

  • Recover database with datafile and logfile

Hi Experts,
we have one MaxDB database. For some reason, we have no backup and lost the program binaries. Is it possible to recover the database from the datafiles and logfiles, and how?
    thanks a lot.
    Rongfeng

    Hello Rongfeng,
1. Please see the document "HowTo - Creating a clone of a SAP MaxDB database" at
http://wiki.sdn.sap.com/wiki/display/MaxDB/SAPMaxDBHowTo
and review the section "Creating a clone manually via reusing volumes and parameters."
2. You wrote that you "lost all dba passwords."
Please review SAP note 25591. This note also has a brief description of the database user types.
As you are an SAP customer, I recommend you create an SAP message on component "BC-DB-SDB" to clarify more details about the problem and find a solution for you.
    Thank you and best regards, Natalia Khlopina

  • Problem with Outlook (client) 2010/2013 profiles on Exchange 2013

    We have recently started migrating to Exchange 2013 and have come across a problem with some of our accounts. This problem exists in both migrated mailboxes and natively created 2013 mailboxes.
Basically we have a lot of "shared" accounts. We create a user object in AD, give it a mailbox, then give a security group "Full Access" (and often "Send As") permissions on the account. We then disable the account in AD (to prevent the password being used directly) and then add/remove group members to manage access. Everyone uses their own credentials to get to the account.
    For the most part this works fine. Users can access OWA and get to these shared mailboxes, they can also 'add' the mailbox to their Outlook client by going into Account Settings->Properties of the Exchange user->More Settings... ->Advanced Tab->"Open
    these additional mailboxes:" and adding the mailbox.
    However what has happened is that some users were setting up a separate profile instead of adding a mailbox. (Pretty much these
    instructions: )
When this is done, an error occurs:
    Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost).
Connecting with a profile set up as yourself first does not clear the error; whenever the shared mailbox folder is selected you get this. The exception is if you go into the account properties and turn off Cached mode; the error then becomes:
Cannot open your default e-mail folders. The file c:\users\[user]\appdata\local\microsoft\outlook\[datafile].ost is not an Outlook data file (.ost).
Which is weird, since migrated accounts had working OST files that were functional prior to migration, and Outlook creates the new OST for new accounts. This message appears to be a red herring; the problem is still sync access, but it throws this error instead.
In the end, is creating a new profile for a mailbox you have Full Access to no longer supported in 2013, or is there something we can do to fix this problem?

    Hi AD-Tester,
According to your description I did a test; however, I cannot reproduce the problem.
I want to confirm some points; please help by collecting answers to the following questions:
1. Check whether all accounts or only some specific users face this problem.
2. Try this command to check that Autodiscover, Exchange Web Services, the Availability service and the Offline Address Book service work well:
Get-ClientAccessServer | Test-OutlookWebServices -Identity 'e-mail address'
If it works in OWA, the problem may point to the Outlook client. Please try to re-create an Outlook profile for testing.
Note: please make sure Outlook Anywhere is configured properly.
Additionally, we can recreate an account and enable the mailbox, then disable it in AD and try to log in to the Outlook account again.
    Best Regards,
    Allen Wang

  • Problems with Mail 3.5 reply to address

    I have a Gmail account and about two years ago changed the settings on Gmail to have the Gmail reply-to address be my work email. I was using Apple Mail for my Gmail account with no problems. Then, about a year ago, I decided to keep my accounts separate so I went into Gmail settings and changed the settings so that the reply-to address would be my Gmail address.
    Apple Mail continued to keep my work email as the reply-to address. I tried seemingly everything - deleting the account and reinstating it, making sure that preferences in Apple Mail didn't have my work email as the reply-to address, etc. Finally I gave up and started using Thunderbird.
    The problem is that I really like Apple Mail. I think the entire program is just superior to anything else. But it still has that problem where I send an email using Gmail, and the reply-to address is my work email. I checked Gmail settings and the reply-to address is set as my Gmail address.
    I am using IMAP on GMail, Apple Mail 3.5, and OS X 10.5.6
    Can anyone help me with this? It is really aggravating. I have a feeling that it is a problem with the GMail server-Apple Mail communication.
    I hope I don't have to create a brand new GMail account.
    Thanks for any help.
    Sunil

    ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
    ORA-01110: data file 5: 'GRGLIBRARY'
    SVRMGR> ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE OPEN RESETLOGS
    ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
Your DBWR trace file is in folder RDBMS80/orcldbwr.trc:
ORA-01157: cannot identify data file 5 - file not found
ORA-01110: data file 5: 'LOG3:ORACLE\ORANW803\DATABASE\DES2DATA.ORA'
ORA-09202: sfifi: error identifying file
OSD-02063: REUSE was specified, but file does not exist
(OS 1)
It cannot find this file 5. You can use:
ALTER SYSTEM CHECK DATAFILES;
and copy the file with the OS to the correct place for Oracle.
For ORA-01110: data file 5: 'GRGLIBRARY', you need the full path and file name for it to be OK.
    SVRMGR> spool off;
    after this message I have to restore the db and start over.
    Please help urgently
    regards

  • Problems with second standby database

    Hello All,
I have Oracle 10gR1: 1 primary DB and 2 physical standbys; one is on the same network as the primary and the other is on a remote network. The primary and the standby on the same network are working perfectly well. My problem is with the remote standby: some logs are not being applied. See the output below.
SEQUENCE#  ARC  APP
     1755  YES  NO
     1761  YES  NO
     1772  YES  NO
     1774  YES  NO
     1782  YES  NO
     1789  YES  NO
    I get this error RFS[10]: "Possible network disconnect with primary database". I think it has something to do with the network connectivity of my second standby and the master db. What parameters should I tweak to avoid this?
Another thing is that archived logs from sequence 1728 onwards were not being applied. I tried manually recovering the archived log from 1728 but it says there is an error in the log. I tried getting the specified archived log from the master DB and recovering it manually, but still the same. How would I solve this problem? Would rebuilding the second standby do the trick? Help needed badly.
    Thanks in advanced.

    Hi,
About sequence 1728: if the archived log has been lost or altered, you'll have to rebuild the standby. At the least, take a backup of the first standby and restore the datafiles at the second standby.
About network connectivity, are you using Data Guard? If so, did you set up the FAL (Fetch Archive Log) server/client parameters on your standby? You can even set your fal_server to be both the primary and the first standby.
You should investigate the network connectivity problem with your network administrator. Check the alert log and trace files, plus the sqlnet logs, to see when/where/why the communication with the distant site sometimes fails.
    HTH,
    Yoann.
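A minimal sketch of the FAL setup mentioned above, run on the remote standby (the TNS aliases PRIM, STBY1 and STBY2 are hypothetical and must resolve in its tnsnames.ora):
SQL> ALTER SYSTEM SET fal_server='PRIM','STBY1' SCOPE=BOTH;
SQL> ALTER SYSTEM SET fal_client='STBY2' SCOPE=BOTH;
With these set, gaps like the missing sequences listed above are requested automatically from either the primary or the first standby.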
