Oracle Data Guard - Log apply issue
Hi,
Due to an issue, our archive logs are not being applied from production to the standby. Our standby is in London. I don't see any error messages and am not sure what is happening; I have a case open with Oracle Support.
We had to shut down all of our production and DR systems last Sunday. Once everything was back up, the primary stopped sending logs. Our production server is a three-node RAC.
I just need to know how to get archive logs from production applied on the standby in London.
Appreciate your quick response.
Thanks
Hello,
Please post the results of:
SELECT MAX(SEQUENCE#), MAX(COMPLETION_TIME), APPLIED FROM V$ARCHIVED_LOG GROUP BY APPLIED;
and
SELECT * FROM V$ARCHIVE_GAP;
You can also copy the missing archives from the primary to the standby and register them one by one:
ALTER DATABASE REGISTER LOGFILE 'LOG_FILE_NAME';
If the gap is very large, say 20 or 30 days of logs, I would recreate the standby by cloning the primary with RMAN.
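The reply above can be sketched end to end on the standby (a hedged sketch; the archive path and sequence in the example are placeholders, not values from this thread):

```sql
-- On the standby, after copying the missing archives over from the primary:
-- stop managed recovery first
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- register each copied archive so the standby control file knows about it
-- (the file name below is a placeholder)
ALTER DATABASE REGISTER LOGFILE '/arch/standby/1_12345_987654321.arc';

-- restart managed recovery; it will pick up and apply the registered logs
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

Re-running the V$ARCHIVE_GAP query afterwards should confirm the gap has closed.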
Kind regards
Mohamed
Similar Messages
-
Hello, I have just set up a physical standby for the first time on 11g R2 and completed it successfully, but logs are not being shipped to the standby. tnsnames.ora and listener.ora have all the required entries and are working properly, and no errors appear in the alert log of either the primary or the standby. I would appreciate any clue to help debug this issue.
Thanks,
Pankaj
Post the results from the primary:
show parameter LOG_ARCHIVE_DEST_STATE_2
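If that parameter comes back ENABLE yet nothing ships, the destination's error column is usually the next thing to look at (a sketch; dest_id = 2 is an assumption about which destination points at the standby):

```sql
-- On the primary: status and last transport error for the standby destination
SELECT dest_id, status, error
FROM   v$archive_dest
WHERE  dest_id = 2;
```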
Best Regards
mseberg -
Hi
Could you please help me on this...
We started an Oracle Data Guard setup for a production database where the primary is located in the Europe region and the standby in Latin America.
We configured the Data Guard parameters on the primary and performed a cold backup of the database, which is nearly one terabyte in size. At the same time we kept the standby server ready.
But it took 15 days for the cold backup to reach the standby location and be restored. During those 15 days we manually shipped to the standby all the archive logs generated on the primary.
After the restore on the standby server, today I created a standby control file on the primary, transferred it to the standby using scp, and copied it to the standby control file locations. I performed all the steps of the standby database setup procedure.
I am able to mount the database in standby mode. After that I issued "RECOVER STANDBY DATABASE;" to apply all the logs shipped manually over those 15 days, and I am getting the error given below:
Physical Standby Database mounted.
Completed: alter database mount standby database
Mon Jun 28 07:53:33 2010
Starting Data Guard Broker (DMON)
INSV started with pid=22, OS id=1246
Mon Jun 28 07:54:35 2010
ALTER DATABASE RECOVER standby database
Mon Jun 28 07:54:35 2010
Media Recovery Start
Managed Standby Recovery not using Real Time Apply
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 322 - see DBWR trace file
ORA-01110: data file 322: '/oracle/P19/sapdata1/sr3_289/sr3.data289'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 323 - see DBWR trace file
ORA-01110: data file 323: '/oracle/P19/sapdata1/sr3_290/sr3.data290'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 324 - see DBWR trace file
ORA-01110: data file 324: '/oracle/P19/sapdata2/sr3_291/sr3.data291'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 325 - see DBWR trace file
ORA-01110: data file 325: '/oracle/P19/sapdata3/sr3_292/sr3.data292'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
The above data files were added on the primary after the cold backup, so I am getting this error because of the newer standby control file used on the standby. I therefore ran the commands below on the standby database:
alter database datafile '/oracle/P19/sapdata1/sr3_289/sr3.data289' offline drop;
alter database datafile '/oracle/P19/sapdata1/sr3_290/sr3.data290' offline drop;
alter database datafile '/oracle/P19/sapdata2/sr3_291/sr3.data291' offline drop;
alter database datafile '/oracle/P19/sapdata3/sr3_292/sr3.data292' offline drop;
and then recovery proceeded, applying the logs. Please find the details from the alert log file below:
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401780_624244340.dbf
Mon Jun 28 08:37:22 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:37:22 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
Mon Jun 28 08:37:22 2010
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401781_624244340.dbf
Mon Jun 28 08:38:02 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:38:02 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
Mon Jun 28 08:38:02 2010
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401782_624244340.dbf
Mon Jun 28 08:38:32 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:38:32 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
Mon Jun 28 08:38:32 2010
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401783_624244340.dbf
Mon Jun 28 08:39:05 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:39:05 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
After the manually shipped logs finished applying, I started the Data Guard setup; logs are now shipping and applying perfectly.
Media Recovery Waiting for thread 1 sequence 407421
Fetching gap sequence in thread 1, gap sequence 407421-407506
Thu Jul 1 00:26:41 2010
RFS[2]: Archived Log: '/oracle/P19/oraarch/P19arch1_407529_624244340.dbf'
Thu Jul 1 00:26:49 2010
RFS[3]: Archived Log: '/oracle/P19/oraarch/P19arch1_407530_624244340.dbf'
Thu Jul 1 00:27:17 2010
RFS[1]: Archived Log: '/oracle/P19/oraarch/P19arch1_407531_624244340.dbf'
Thu Jul 1 00:28:41 2010
RFS[2]: Archived Log: '/oracle/P19/oraarch/P19arch1_407532_624244340.dbf'
Thu Jul 1 00:29:14 2010
RFS[3]: Archived Log: '/oracle/P19/oraarch/P19arch1_407421_624244340.dbf'
Thu Jul 1 00:29:19 2010
Media Recovery Log /oracle/P19/oraarch/P19arch1_407421_624244340.dbf
Thu Jul 1 00:29:24 2010
RFS[1]: Archived Log: '/oracle/P19/oraarch/P19arch1_407422_624244340.dbf'
Thu Jul 1 00:29:51 2010
Media Recovery Log /oracle/P19/oraarch/P19arch1_407422_624244340.dbf
But the files above show a status of RECOVER. Could you please tell me how to proceed?
NAME                                        STATUS
/oracle/P19/sapdata1/sr3_289/sr3.data289    RECOVER
/oracle/P19/sapdata1/sr3_290/sr3.data290    RECOVER
/oracle/P19/sapdata2/sr3_291/sr3.data291    RECOVER
/oracle/P19/sapdata3/sr3_292/sr3.data292    RECOVER
Can I recover these files while the standby is mounted? Is there any other solution? All archive logs are applied, and log shipping and apply are ongoing.
Thank you.
Try this out:
1. On the primary server, issue this command:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
2. Go to your udump directory and look for the trace file generated by this command.
3. This file will contain the CREATE CONTROLFILE statement. It has two versions, one with RESETLOGS and one without; use the RESETLOGS version. Copy and paste that statement into a file, say c.sql.
4. Open c.sql in a text editor and change the database name (for example, from ica to prod), as shown in the example below:
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
'/u01/oracle/ica/redo01_02.log'),
GROUP 2 ('/u01/oracle/ica/redo02_01.log',
'/u01/oracle/ica/redo02_02.log'),
GROUP 3 ('/u01/oracle/ica/redo03_01.log',
'/u01/oracle/ica/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
'/u01/oracle/ica/rbs01.dbs' SIZE 5M,
'/u01/oracle/ica/users01.dbs' SIZE 5M,
'/u01/oracle/ica/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
5. Start the database in NOMOUNT state:
SQL> STARTUP NOMOUNT;
6. Now execute the script to create a new control file:
SQL> @/u01/oracle/c.sql
7. Now open the database:
SQL> ALTER DATABASE OPEN RESETLOGS;
IMPORTANT NOTE: Before implementing this suggested solution, try it out on your laptop or PC if possible.
Edited by: Suhail Faraaz on Jun 30, 2010 3:00 PM
Edited by: Suhail Faraaz on Jun 30, 2010 3:03 PM -
How to know the delay in redo log apply on Active Data Guard 11g
Hello All,
How can I know the delay in redo log apply on Active Data Guard 11g?
Do we need to wait until a log switch occurs?
Or is it recommended to schedule a log switch every 15 minutes, whether or not data is updated/inserted on the primary?
Please suggest...
Oracle : oracle 11g Release 2
OS : RHEL 5.4
Thanks
Edited by: user1687821 on Feb 23, 2012 12:02 AM
Hello CKPT,
Thank you for the valuable information...
We have not configured the Data Guard broker.
Output of the query:
SELECT * FROM (
SELECT sequence#, archived, applied,
TO_CHAR(completion_time, 'RRRR/MM/DD HH24:MI') AS completed
FROM sys.v$archived_log
ORDER BY sequence# DESC)
WHERE ROWNUM <= 10
Primary...
SEQUENCE# ARCHIVED APPLIED COMPLETED
29680 YES YES 2012/02/23 01:11
29680 YES NO 2012/02/23 01:11
29679 YES NO 2012/02/22 23:11
29679 YES YES 2012/02/22 23:11
29678 YES YES 2012/02/22 23:11
29678 YES NO 2012/02/22 23:11
29677 YES YES 2012/02/22 22:32
29677 YES NO 2012/02/22 22:32
29676 YES YES 2012/02/22 22:02
29676 YES NO 2012/02/22 22:02
Standby...
SEQUENCE# ARC APP COMPLETED
29680 YES YES 2012/02/23 01:11
29679 YES YES 2012/02/22 23:11
29678 YES YES 2012/02/22 23:11
29677 YES YES 2012/02/22 22:32
29676 YES YES 2012/02/22 22:02
29675 YES YES 2012/02/22 21:24
29674 YES YES 2012/02/22 19:24
29673 YES YES 2012/02/22 18:59
29672 YES YES 2012/02/22 17:42
29671 YES YES 2012/02/22 17:41
The primary shows YES as well as NO for the same sequence...
Next,
From primary:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
1 29680
From standby:-
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
THREAD# MAX(SEQUENCE#)
1 29680
What is the redo transport service you are using? Is it LGWR or ARCH?
The output of the query select * from v$parameter where name like 'log_archive_dest_2' shows the value below:
SERVICE=b_stdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=b_stdb
So is LGWR already configured? If yes, how do I see the delay on both servers?
Yes, the network is good, as both reside on the same LAN within the same rack.
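On 11g the standby can report the delay directly, without inferring it from v$archived_log: v$dataguard_stats exposes the transport and apply lag (a sketch; query it on the standby):

```sql
-- On the standby: how far redo transport and apply are behind the primary
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```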
Thanks... -
Applying an Oracle patchset in an Oracle Data Guard environment
Hello There,
I wish to apply an Oracle patchset in an Oracle Data Guard environment.
Can you please let me know the steps to be taken on the primary and standby databases?
Is there any specific document for this?
DB - oracle 10.2.0.4
OS - Linxu x86_64
Best Regards
Sachin Bhatt
Hi Sachin,
To patch the primary site, you can find detailed information in the readme.html included in the patch zip file. Additionally, at the standby site, in order to apply the patch:
1) Stop the log shipment
2) Stop the Oracle-related services
3) Patch the Oracle home
4) STARTUP UPGRADE (STARTUP MIGRATE on releases before 10g)
5) Execute the SQL scripts
6) Start the log shipment
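The six steps above might look like this in SQL (a hedged sketch; it assumes log_archive_dest_2 is the destination that ships to the standby, and that the binary patch itself is applied with opatch between the shutdown and the restart, with the exact post-install script named in the patchset readme):

```sql
-- Step 1, on the primary: defer shipping to the standby
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

-- Step 2: stop apply on the standby, then shut the instances down
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SHUTDOWN IMMEDIATE

-- Step 3: apply the binary patch with opatch (outside SQL*Plus)

-- Steps 4 and 5, on the primary: start in upgrade mode and run the
-- post-install script named in the readme (catupgrd.sql for 10.2 patchsets)
STARTUP UPGRADE
@?/rdbms/admin/catupgrd.sql

-- Step 6, on the primary: resume shipping to the standby
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
```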
Check the link below;
http://dbaforums.org/oracle/index.php?showtopic=17398
Best regards,
Orkun Gedik -
Oracle 10g on SLES Linux, Physical Standby, Log Apply Stops
Hi, I am facing the following problem:
Log apply on the standby works perfectly for N days.
(Managed recovery mode, not real time.)
Then it just stops, without any (visible) reason.
Cancelling recovery does not work. There is nothing in the log/trace/alert files. The v$ views just show "applying log N (let's say 877)" for hours.
I kill the DB Writer process on the standby, do STARTUP MOUNT and "alter database recover managed standby database disconnect from session". ...
It then works perfectly again for N days.
I have not found anything about this.
Any ideas?
Your help is highly appreciated; thank you in advance.
The error message (which you should have, by the way, looked up beforehand in the online error documentation, and apparently you didn't even do this minimal work) is self-explanatory.
Oracle can't reach the second server. You need to establish whether you can ping it on O/S level and whether you can TNSPING it.
Problems will arise if the second server uses NAT and sends its own IP address back, instead of the IP address it is known by in DNS.
The Net administrator's manual contains a troubleshooting chapter; as far as I remember, this error is discussed there in depth.
Sybrand Bakker
Senior Oracle DBA -
Standby DB real time redo log apply problem
Hi all,
I am using Oracle 10g to create a physical standby db. On the standby, normal archived log apply works without problems, but when I try to use real-time redo apply and issue the command
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
it shows:
ERROR at line 1:
ORA-38500: USING CURRENT LOGFILE option not available without standby redo logs
What is the problem??
Thanks a lot !
Steven
Note 3633226.8 from Metalink states:
Setting a standby's RealTimeApply property to ON when there are no standby
redo logs on the standby or the standby is not in SYNC transport, will
seemingly succeed. However, the apply engine will not start. The DRC log
will report an error like ORA-38500. In this case, add standby redo logs
and set the log transport mode for the standby to be SYNC and set the
standby state to ONLINE.
Workaround:
Add Standby Redo Logs on the standby and set the following broker properties
on the standby:
LogXptMode to SYNC and reset RealTimeApply to ON.
Then set the standby state to ONLINE.
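The workaround can be sketched in SQL on the standby; the group numbers, paths, and 50M size here are placeholders, and the standby redo logs must be the same size as the online redo logs (one extra group per thread is the usual recommendation):

```sql
-- On the standby: add standby redo logs, sized to match the online redo logs
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oracle/stdby/srl04.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/oracle/stdby/srl05.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/u01/oracle/stdby/srl06.log') SIZE 50M;

-- then restart apply in real-time mode
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```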
HTH -
Standby database archive log apply in production server.
Dear All,
How do I get the production server's archive logs applied on the standby database?
Please help me.
Thanks,
Manas
"How can I use the standby database as primary for those 48 hours?"
Perform a switchover (role transition).
First check if the standby is in sync with the primary database.
Primary database:
sql> select max(sequence#) from v$archived_log;  ---> Value A
Standby database:
sql> select max(sequence#) from v$archived_log where applied='YES';  ---> Value B
Check if Value B is the same as Value A.
If the standby is in sync with the primary database, then perform the switchover operation (refer to the links below).
http://www.articles.freemegazone.com/oracle-switchover-physical-standby-database.php
http://docs.oracle.com/cd/B19306_01/server.102/b14230/sofo.htm
http://www.oracle-base.com/articles/9i/DataGuard.php#DatabaseSwitchover
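For reference, the core SQL of a manual physical-standby switchover looks roughly like this (a sketch; check SWITCHOVER_STATUS in v$database before each step, and see the links above for the full procedure):

```sql
-- On the current primary:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
-- shut down and mount the old primary as a standby, then on the old standby:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE OPEN;
```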
manas
Mark your questions as answered if you feel that you have got better answers, rather than building up a heap of unanswered questions.
Archive log transport from primary hangs and archive logs are not applied
Hi
I am building Data Guard from a 3-node primary cluster to a 3-node standby cluster.
Oracle version: 10.2.0.4
Operating system: Linux 64-bit
After I restored the standby database, I configured the Data Guard broker with a wrong unique_name parameter on the standby cluster using Grid Control.
After I corrected the mistake, I disabled the Data Guard broker parameters, deleted the Data Guard broker files, and rebooted the standby cluster, but did not reboot the primary cluster because it is a production environment.
I have a problem with the following symptoms:
- Archive log transport hangs while the standby database is recovering, so an archive log gap is produced.
- I copied and registered all the gap archive logs on the standby, but the archive logs are not applied.
- The archives applied manually on the standby are not registered as applied in v$archived_log on the primary.
- The RMAN command "backup as COMPRESSED BACKUPSET tag 'Backup Full Disk' archivelog all not backed up delete all input;" does not delete applied archive logs on the primary, because of the message "archive log is necessary".
I think it may be necessary to reboot the primary cluster.
Please help me.
Post the results of the queries below; without them it is difficult to understand.
post from primary
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

select ds.dest_id id
     , ad.status
     , ds.database_mode db_mode
     , ad.archiver type
     , ds.recovery_mode
     , ds.protection_mode
     , ds.standby_logfile_count "SRLs"
     , ds.standby_logfile_active active
     , ds.archived_seq#
  from v$archive_dest_status ds
     , v$archive_dest ad
 where ds.dest_id = ad.dest_id
   and ad.status != 'INACTIVE'
 order by ds.dest_id;
Post from standby.
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
select * from v$managed_standby; -
Logical Data Guard SQL apply fails during import on primary database
I created a logical standby using Grid Control; initially everything worked fine.
At one point we had to import new data into the primary database; that is when the problem started.
Log apply is lagging badly, and I got this error:
Status: Redo apply services stopped due to failed transaction (1,33,8478)
Reason: ORA-16227: DDL skipped due to missing object
Failed SQL: DROP TABLE "USA"."SYS_IMPORT_SCHEMA_01" PURGE
This table exists on the logical standby...
How do we deal with an import on a logical standby, since import generates a lot of redo logs?
Hello;
These Oracle notes might help :
Slow Performance In Logical Standby Database Due To Lots Of Activity On Sys.Aud$ [ID 862173.1]
Oracle10g Data Guard SQL Apply Troubleshooting [ID 312434.1]
Developer and DBA Tips to Optimize SQL Apply [ID 603361.1]
Best Regards
mseberg -
Post-switchover issue in Oracle Data Guard 11g
Dear Guru,
The switchover from the primary database to the standby database completed successfully.
The new primary database is open and accessible, but v$database shows its status as below:
database_role = primary
switchover_status = not allowed
db_unique_name = dg1_stdby
The old primary database, which is now the standby, shows its status in v$database as below:
database_role = physical standby
switchover_status = session active
db_unique_name = dg1_primy
When I check the status in the Data Guard broker, both databases (dg1_primy and dg1_stdby) show the error ORA-16810: multiple errors or warnings detected for the database.
The Data Guard log file on the new primary server shows:
ORA-16816: incorrect database role
ORA-16700: the standby database has diverged from the primary database
Please guide me on how to resolve this issue.
Thanks & Regards,
Nikunj Thaker
Hi Nikunj,
You can find the scenario in "Problem: Data Guard Broker Switchover fails With ORA-16665 using Active Data Guard" on metalink.oracle.com.
First of all, manually complete the switchover, i.e. restart the databases in their new roles. Note that the final role change has not been recognized by the broker, so you have to rebuild the Data Guard broker configuration once the databases have been restarted:
DGMGRL> remove configuration;
DGMGRL> create configuration ...
DGMGRL> add database ...
DGMGRL> enable configuration;
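Filled in with the db_unique_names from this thread (after the switchover, dg1_stdby is the primary), the rebuild might look like the following; the configuration name and connect identifiers are assumptions for illustration:

```
DGMGRL> remove configuration;
DGMGRL> create configuration 'dgconf' as primary database is 'dg1_stdby' connect identifier is dg1_stdby;
DGMGRL> add database 'dg1_primy' as connect identifier is dg1_primy maintained as physical;
DGMGRL> enable configuration;
```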
Best regards,
Orkun Gedik -
Oracle application adapters installation issue
Hi
I am installing the Oracle adapters on Linux version: Linux 2.6.9-5.ELsmp #1 SMP Wed Jan 5 19:30:39 EST 2005 i686 i686 i386 GNU/Linux.
But when I execute runInstaller it gives the following error:
Checking installer requirements...
Checking operating system version: must be redhat-Red Hat Enterprise Linux AS release 2.1, redhat-Red Hat Enterprise Linux AS release 3, redhat-Red Hat Enterprise Linux AS release 4, redhat-Red Hat Enterprise Linux ES release 3, SuSE-9 or UnitedLinux-1.0
Failed <<<<
Exiting Oracle Universal Installer, log for this session can be found at /u01/app/oracle/oraInventory/logs/installActions2009-06-22_04-15-45AM.log
What I understand is that this Linux version should already be compatible, so it should not throw such an error. This is a show-stopper issue and I would like some help to resolve it.
I am installing as the oracle user.
Regards
Prabal
Use
./runInstaller -ignoreSysPrereqs
Marc -
How to read the alert log file to tell whether a log was applied or not?
Hi all,
I need to know from the alert log file:
1. whether an archive log was applied to the standby DB or not;
2. how to rectify the errors below.
Current log# 2 seq# 6360 mem# 0: /ovsd/dbs/redo02.log
Sun Jan 29 12:59:01 2012
ARC1: Beginning to archive log 4 thread 1 sequence 6359
Creating archive destination LOG_ARCHIVE_DEST_2: 'OVSDSTBY'
ARC1: Error 3113 Creating archive log file to 'OVSDSTBY'
Sun Jan 29 12:59:01 2012
Errors in file /oracle/admin/ovsd/bdump/ovsd_arc1_3834.trc:
ORA-03113: end-of-file on communication channel
Creating archive destination LOG_ARCHIVE_DEST_1: '/archive/ovsd/archive-ovsd_1_6359.arc'
ARC1: Completed archiving log 4 thread 1 sequence 6359
Sun Jan 29 13:04:49 2012
ARC0: Begin FAL archive (thread 1 sequence 6359 destination OVSDSTBY)
Creating archive destination LOG_ARCHIVE_DEST_2: 'OVSDSTBY'
Sun Jan 29 13:10:27 2012
ARC0: Complete FAL archive (thread 1 sequence 6359 destination OVSDSTBY)
Sun Jan 29 23:34:08 2012
Thread 1 advanced to log sequence 6361
Sun Jan 29 23:34:08 2012
Current log# 1 seq# 6361 mem# 0: /ovsd/dbs/redo01.log
Sun Jan 29 23:34:08 2012
ARC0: Evaluating archive log 2 thread 1 sequence 6360
Sun Jan 29 23:34:08 2012
ARC0: Beginning to archive log 2 thread 1 sequence 6360
Creating archive destination LOG_ARCHIVE_DEST_2: 'OVSDSTBY'
Creating archive destination LOG_ARCHIVE_DEST_1: '/archive/ovsd/archive-ovsd_1_6360.arc'
Sun Jan 29 23:40:17 2012
ARC0: Completed archiving log 2 thread 1 sequence 6360
Mon Jan 30 03:00:05 2012
Errors in file /oracle/admin/ovsd/udump/ovsd_ora_3354.trc:
ORA-00600: internal error code, arguments: [xsoptloc2], [4], [4], [0], [], [], [], []
Mon Jan 30 09:50:06 2012
Thread 1 advanced to log sequence 6362
Mon Jan 30 09:50:06 2012
ARC1: Evaluating archive log 1 thread 1 sequence 6361
Mon Jan 30 09:50:06 2012
Current log# 3 seq# 6362 mem# 0: /ovsd/dbs/redo03.log
Mon Jan 30 09:50:06 2012
ARC1: Beginning to archive log 1 thread 1 sequence 6361
Creating archive destination LOG_ARCHIVE_DEST_2: 'OVSDSTBY'
ARC1: Error 3113 Creating archive log file to 'OVSDSTBY'
Mon Jan 30 09:50:06 2012
Errors in file /oracle/admin/ovsd/bdump/ovsd_arc1_3834.trc:
ORA-03113: end-of-file on communication channel
Creating archive destination LOG_ARCHIVE_DEST_1: '/archive/ovsd/archive-ovsd_1_6361.arc'
ARC1: Completed archiving log 1 thread 1 sequence 6361
Mon Jan 30 09:55:21 2012
ARC0: Begin FAL archive (thread 1 sequence 6361 destination OVSDSTBY)
Creating archive destination LOG_ARCHIVE_DEST_2: 'OVSDSTBY'
Mon Jan 30 10:03:42 2012
ARC0: Complete FAL archive (thread 1 sequence 6361 destination OVSDSTBY)
You can't tell from the primary's alert log file whether archive logs were applied on the standby database.
You can of course see it in the standby's alert log file as "media recovery log thread <thread#> <sequence#>".
You also have internal errors in the alert log file. Use the error lookup tool for the internal errors to check for any listed bugs. If none are listed, please submit the alert and trace files to Support via an SR.
ORA-600/ORA-7445 Lookup tool. Note 1082674.1
HTH. -
Oracle 11g - External Table Issue (SQL - PL/SQL)?
=====================
I hope this is the right forum for this issue; if not, let me know where to go.
We are using Oracle 11g (11.2.0.1.0) on (Platform : solaris[tm] oe (64-bit)), Sql Developer 3.0.04
We are trying to use an Oracle external table to load text files in .csv format. Here is what our data looks like:
======================
Date1,date2,Political party,Name, ROLE
20-Jan-66,22-Nov-69,Democratic,"John ", MMM
22-Nov-70,20-Jan-71,Democratic,"John Jr.",MMM
20-Jan-68,9-Aug-70,Republican,"Rick Ford Sr.", MMM
9-Aug-72,20-Jan-75,Republican,Henry,MMM
ALL NULL -- record
20-Jan-80,20-Jan-89,Democratic,"Donald Smith",MMM
======================
Our external table structure is as follows:
CREATE TABLE P_LOAD
(
  DATE1    VARCHAR2(10),
  DATE2    VARCHAR2(10),
  POL_PRTY VARCHAR2(30),
  P_NAME   VARCHAR2(30),
  P_ROLE   VARCHAR2(5)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY P_EXT_TAB_D
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    MISSING FIELD VALUES ARE NULL
    (
      DATE1    CHAR (10) TERMINATED BY ",",
      DATE2    CHAR (10) TERMINATED BY ",",
      POL_PRTY CHAR (30) TERMINATED BY ",",
      P_NAME   CHAR (30) TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      P_ROLE   CHAR (5)  TERMINATED BY ","
    )
  )
  LOCATION ('Input.dat')
)
REJECT LIMIT UNLIMITED;
It created successfully using SQL Developer
Here is the issue:
It is not loading the records where fields are enclosed in '"' (records 2, 3, 4, 7).
It is loading the all-NULL record (record 6).
*** If we remove the '"' from the input data, it loads all records, including the all-NULL record.
Log file has
KUP-04021: field formatting error for field P_NAME
KUP-04036: second enclosing delimiter not found
KUP-04101: record 2 rejected in file ....
Our questions:
Why is "REJECT ROWS WITH ALL NULL FIELDS" not working?
Why is Terminated by "," OPTIONALLY ENCLOSED BY '"' not working?
Any ideas?
Thanks for helping.
Edited by: qwe16235 on Jun 11, 2011 11:31 AM
I'm not sure, but maybe you should get rid of the redundancy that you have in your CREATE TABLE statement.
This line covers all fields:
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
So I would change the field list to:
DATE1 CHAR (10),
DATE2 CHAR (10),
POL_PRTY CHAR (30),
P_NAME CHAR (30),
P_ROLE CHAR (5)
It worked on my installation. -
Oracle 11g - External Table Issue?
=====================
I hope this is the right forum for this issue; if not, let me know where to go.
We are using Oracle 11g (11.2.0.1.0) on (Platform : solaris[tm] oe (64-bit)), Sql Developer 3.0.04
We are trying to use an Oracle external table to load text files in .csv format. Here is what our data looks like:
======================
Date1,date2,Political party,Name, ROLE
20-Jan-66,22-Nov-69,Democratic,"John ", MMM
22-Nov-70,20-Jan-71,Democratic,"John Jr.",MMM
20-Jan-68,9-Aug-70,Republican,"Rick Ford Sr.", MMM
9-Aug-72,20-Jan-75,Republican,Henry,MMM
------ ALL NULL -- record
20-Jan-80,20-Jan-89,Democratic,"Donald Smith",MMM
======================
Our external table structure is as follows:
CREATE TABLE P_LOAD
(
  DATE1    VARCHAR2(10),
  DATE2    VARCHAR2(10),
  POL_PRTY VARCHAR2(30),
  P_NAME   VARCHAR2(30),
  P_ROLE   VARCHAR2(5)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY P_EXT_TAB_D
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    MISSING FIELD VALUES ARE NULL
    (
      DATE1    CHAR (10) TERMINATED BY ",",
      DATE2    CHAR (10) TERMINATED BY ",",
      POL_PRTY CHAR (30) TERMINATED BY ",",
      P_NAME   CHAR (30) TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      P_ROLE   CHAR (5)  TERMINATED BY ","
    )
  )
  LOCATION ('Input.dat')
)
REJECT LIMIT UNLIMITED;
It created successfully using SQL Developer
Here is the issue:
It is not loading the records where fields are enclosed in '"' (records 2, 3, 4, 7).
It is loading the all-NULL record (record 6).
*** If we remove the '"' from the input data, it loads all records, including the all-NULL record.
Log file has
KUP-04021: field formatting error for field P_NAME
KUP-04036: second enclosing delimiter not found
KUP-04101: record 2 rejected in file ....
Our questions:
Why is "REJECT ROWS WITH ALL NULL FIELDS" not working?
Why is Terminated by "," OPTIONALLY ENCLOSED BY '"' not working?
Any ideas?
Thanks for helping.
Edited by: qwe16235 on Jun 10, 2011 2:16 PM
The following worked for me:
drop table p_load;
CREATE TABLE P_LOAD
(
  DATE1    VARCHAR2(10),
  DATE2    VARCHAR2(10),
  POL_PRTY VARCHAR2(30),
  P_NAME   VARCHAR2(30),
  P_ROLE   VARCHAR2(5)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY scott_def_dir1
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    badfile scott_def_dir2:'p_load_%a_%p.bad'
    logfile scott_def_dir2:'p_load_%a_%p.log'
    SKIP 1
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    (
      DATE1    CHAR (10) TERMINATED BY ",",
      DATE2    CHAR (10) TERMINATED BY ",",
      POL_PRTY CHAR (30) TERMINATED BY ",",
      P_NAME   CHAR (30) TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      P_ROLE   CHAR (5)  TERMINATED BY ","
    )
  )
  LOCATION ('Input.dat')
)
REJECT LIMIT UNLIMITED;
Note that I had to interchange the two lines:
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
just to get the access parameters to parse correctly.
I added two empty lines, one in the middle and one at the end; both were rejected.
In the log file, you will see the rejections:
$ cat p_load_000_9219.log
LOG file opened at 07/08/11 19:47:23
Field Definitions for table P_LOAD
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Reject rows with all null fields
Fields in Data Source:
DATE1 CHAR (10)
Terminated by ","
Enclosed by """ and """
Trim whitespace same as SQL Loader
DATE2 CHAR (10)
Terminated by ","
Enclosed by """ and """
Trim whitespace same as SQL Loader
POL_PRTY CHAR (30)
Terminated by ","
Enclosed by """ and """
Trim whitespace same as SQL Loader
P_NAME CHAR (30)
Terminated by ","
Enclosed by """ and """
Trim whitespace same as SQL Loader
P_ROLE CHAR (5)
Terminated by ","
Enclosed by """ and """
Trim whitespace same as SQL Loader
KUP-04073: record ignored because all referenced fields are null for a record
KUP-04073: record ignored because all referenced fields are null for a record
Input Data:
Date1,date2,Political party,Name, ROLE
20-Jan-66,22-Nov-69,Democratic,"John ", MMM
22-Nov-70,20-Jan-71,Democratic,"John Jr.",MMM
20-Jan-68,9-Aug-70,Republican,"Rick Ford Sr.", MMM
9-Aug-72,20-Jan-75,Republican,Henry,MMM
4-Aug-70,20-Jan-75,Independent
Result:
SQL> select * from p_load;
DATE1 DATE2 POL_PRTY P_NAME P_ROL
20-Jan-66 22-Nov-69 Democratic John MMM
22-Nov-70 20-Jan-71 Democratic John Jr. MMM
20-Jan-68 9-Aug-70 Republican Rick Ford Sr. MMM
9-Aug-72 20-Jan-75 Republican Henry MMM
Regards,
- Allen