Validation failed for archived log
Hi,
Oracle Database version: 11.2.0.4
OS: CentOS 6.5
I recently set up an RMAN backup script on a production database. We use Dbvisit for the standby database, and we have a cron job that runs every 10 minutes to generate archive logs and copy them to the standby side.
Sometimes the backup failed because an expected archive log was not present at the location, so I added "crosscheck archivelog all" to the script, and the backup now runs fine. But when I analyzed the backup log I found
"validation failed for archived log" messages. The timestamps of the failed validations are from today and yesterday, even though the archives are present at the location and the retention policy is CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
I am worried; I hope this is not a big issue for me.
Please suggest what is wrong.
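For context, a minimal sketch of the kind of RMAN script in question (the script structure and options here are assumptions, not the poster's actual script):

```rman
# Hypothetical sketch: crosscheck first, so controlfile records for
# archive logs that were already shipped or removed are marked EXPIRED
# instead of failing the backup; then back up what is actually on disk.
RUN {
  CROSSCHECK ARCHIVELOG ALL;
  DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
  BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
}
```

Note that "validation failed for archived log" is the crosscheck itself reporting records it could not validate; if the files really are present and readable, comparing the exact path RMAN reports against the path on disk would be a reasonable next step.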
This forum is for Berkeley DB high availability. We do not have the expertise to help you with your Oracle database 11.2.0.4 issue. You'll need to submit this question to one of the Oracle database forums to get the help you are looking for.
Paula Bingham
Similar Messages
-
Order Import Failed in OM : Log Validation failed for the field - Ship To
Problem in Order Management
When I tried to do Order Import in the source org, the log file shows the message "Validation failed for the field - Ship To". Please let me know if anyone has a solution for this.
Thanks in advance.
Regards,
Rajesh Verma
Senior Consultant- Oracle Apps.
COLT
Perhaps you might want to try populating customer_id instead of the name, to make sure there is no typo in the customer name.
A primary ship-to and bill-to must exist under this customer. During order import, if the ship-to is not specified, the import will fetch the customer's primary ship-to.
This is what we use to populate the interface tables, with a minimum of data:
INSERT INTO oe_headers_iface_all
(orig_sys_document_ref,order_source_id,org_id
,order_type_id,payment_term_id, shipping_method_code, freight_terms_code
,customer_po_number,salesrep_id
,sold_to_org_id, ship_to_org_id,invoice_to_org_id,sold_to_contact_id
,booked_flag
,created_by, creation_date, last_updated_by, last_update_date,last_update_login
,operation_code, order_category
,attribute5,tp_attribute4,xml_message_id,xml_transaction_type_code
,request_id, error_flag)
INSERT INTO oe_lines_iface_all
(order_source_id, orig_sys_document_ref, orig_sys_line_ref,orig_sys_shipment_ref
,inventory_item,item_type_code,line_type_id
,top_model_line_ref,link_to_line_ref,component_sequence_id,component_code,option_flag
,ordered_quantity
,order_quantity_uom,salesrep_id
,created_by, creation_date, last_updated_by, last_update_date,last_update_login
,operation_code,cust_model_serial_number,line_category_code
,context,attribute6
,reference_type, reference_line_id, reference_header_id
,return_context, return_attribute1, return_attribute2
,return_reason_code
,tp_attribute1,tp_attribute2,tp_attribute3,tp_attribute4,tp_attribute5
,request_id,error_flag) -
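The two column lists above are truncated in the original post (no VALUES or SELECT clause survives). Purely as an illustration, a header insert of this shape would be completed roughly like this; every literal value below is a hypothetical placeholder, not data from the post:

```sql
-- Hypothetical example only: a minimal header row for Order Import.
-- All ids, refs, and the org value are placeholders.
INSERT INTO oe_headers_iface_all
       (orig_sys_document_ref, order_source_id, org_id,
        order_type_id, sold_to_org_id, booked_flag,
        created_by, creation_date, last_updated_by, last_update_date,
        operation_code)
VALUES ('DEMO-0001', 1045, 204,
        1430, 1290, 'Y',
        0, SYSDATE, 0, SYSDATE,
        'INSERT');
```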
Secondary destination for Archived logs
Version: 10.2, 11.1, 11.2
We occasionally get 'archiver error' on our production DBs due to our LOG_ARCHIVE_DEST_1 being full. How can I have a secondary location for archive logs in case my 'primary' location (LOG_ARCHIVE_DEST_1) becomes full?
I gather that LOG_ARCHIVE_DEST_2 is reserved for shipping archive logs to the Data Guard standby DB, in which you specify the TNS entry of the standby using the SERVICE parameter.
Can I specify LOG_ARCHIVE_DEST_3 as my secondary location in case LOG_ARCHIVE_DEST_1 becomes full? Is that what LOG_ARCHIVE_DEST_n is meant for? The documentation says you can have up to 10 locations, but I am confused about whether they are meant to store multiplexed copies of the archive logs; that is not what I am looking for.
Hi again Tom,
I have one more question:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_4 = 'LOCATION=/disk4/arch';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_3 = 'LOCATION=/disk3/arch
ALTERNATE=LOG_ARCHIVE_DEST_4';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ALTERNATE;
SQL> SELECT dest_name, status, destination FROM v$archive_dest;
DEST_NAME STATUS DESTINATION
LOG_ARCHIVE_DEST_1 VALID /disk1/arch -------------> Dest1
LOG_ARCHIVE_DEST_2 VALID +RECOVERY -------------> Dest2
LOG_ARCHIVE_DEST_3 VALID /disk3/arch -------------> Dest3
LOG_ARCHIVE_DEST_4 ALTERNATE /disk4/arch
My understanding is (and I'm not terribly sure at the minute - don't have a test system to hand. I haven't
set up a backup/recovery strategy in a while - I just restore backups from time to time (normally every 4 weeks)
to ensure that the database recovers as it should) - my understanding is that under the scheme above
DEST_3 will be a copy of what's in DEST_1. DEST_4 on the other hand will "step in" should DEST_1
or DEST_3 fill up/fail.
As to DEST_2, I'm not sure - maybe something to do with Fast Recovery Area? I've Googled but can't
find anything - the trouble is that all the pages about this contain the word "recovery" and the "+"
sign doesn't appear to affect the search - does "+" mean something special to Google?
I don't have a system at the moment - if you do, why don't you test and see? On a test system, fill
up the file system for DEST_1 with rubbish and check to see what happens?
All of the above is to be taken with a pinch of salt - I don't have a system to hand and am not certain,
so CAVEAT EMPTOR
HTH,
Paul...
Edited by: Paulie on 21-Jul-2012 17:20 -
ARC4: Failed to archive log# 3 seq# 2091911 thrd# 1
Hi gurus,
I am running my database in ARCHIVELOG mode and I receive the following errors in the archiver trace files. Kindly give me a solution; the trace file is attached below.
ARC4: Failed to archive log# 3 seq# 2091911 thrd# 1
*** 2006.08.08.02.52.14.656
ARC4: Begin archiving log# 4 seq# 2091912 thrd# 1
ARC4: VALIDATE
ARC4: PREPARE
ARC4: INITIALIZE
ARC4: SPOOL
ARC4: Creating archive destination 1 : 'E:\ORACLE\MMP\ARCHIVE\ARC91912.001'
ARC4: Archiving block 1 count 128 to : 'E:\ORACLE\MMP\ARCHIVE\ARC91912.001'
[... identical "ARC4: Archiving block N count 128" lines repeated for blocks 129 through 4481 ...]
ARC4: Archiving block 4609 count 128 to : 'E:\ORACLE\MMP\ARCHIVE\ARC91912.001'
ARC4: Archive destination 1 made inactive: File I/O error
ARC4: Closing archive destination 1 : E:\ORACLE\MMP\ARCHIVE\ARC91912.001
ARC4: Archive destination 1 made inactive: File close error
ARC4: FAILED, see alert log for details
ARC4: Archival failure destination 1 : 'E:\ORACLE\MMP\ARCHIVE\ARC91912.001'
ARC4: INCOMPLETE, see alert log for details
ARC4: NOTARCHIVED, see alert log for details
Hi,
I am running 8.1.5, where I cannot query v$recovery_file_dest (it does not exist in that release). I have checked alert_<sid>.log:
Mon Sep 04 14:35:08 2006
Current log# 2 seq# 2101962 mem# 0: C:\ORACLE\MMP\LOG\REDO02.LOG
Current log# 2 seq# 2101962 mem# 1: E:\ORACLE\MMP\LOG\REDO02.LOG
Mon Sep 04 14:35:09 2006
ARC3: received prod
Mon Sep 04 14:35:09 2006
ARC3: Beginning to archive log# 5 seq# 2101961
kcrrga: Warning. Log sequence in archive filename wrapped
to fix length as indicated by %S in LOG_ARCHIVE_FORMAT.
Old log archive with same name might be overwritten.
ARC3: Completed archiving log# 5 seq# 2101961
ARC3: re-scanning for new log files
ARC3: prodding the archiver
Mon Sep 04 14:35:11 2006
ARC4: received prod
Mon Sep 04 14:45:57 2006
LGWR: prodding the archiver
Thread 1 advanced to log sequence 2101963
Mon Sep 04 14:45:57 2006
Current log# 3 seq# 2101963 mem# 0: C:\ORACLE\MMP\LOG\REDO03.LOG
Current log# 3 seq# 2101963 mem# 1: E:\ORACLE\MMP\LOG\REDO03.LOG
Mon Sep 04 14:45:58 2006
ARC0: received prod
Mon Sep 04 14:45:59 2006
ARC0: Beginning to archive log# 2 seq# 2101962
kcrrga: Warning. Log sequence in archive filename wrapped
to fix length as indicated by %S in LOG_ARCHIVE_FORMAT.
Old log archive with same name might be overwritten.
ARC0: Completed archiving log# 2 seq# 2101962
ARC0: re-scanning for new log files -
ARC0: Failed to archive log# 1 seq# 1361
Hi.
Sat Apr 26 04:00:01 2003
Current log# 2 seq# 1362 mem# 0: /redolog02/TRADEDB1/log/log02_01.ora
Current log# 2 seq# 1362 mem# 1: /redolog01/TRADEDB1/log/log02_02.ora
Sat Apr 26 04:00:01 2003
ARCH: Beginning to archive log# 1 seq# 1361
Sat Apr 26 04:00:01 2003
ARC0: Beginning to archive log# 1 seq# 1361
ARC0: Failed to archive log# 1 seq# 1361
Sat Apr 26 04:00:03 2003
ARCH: Completed archiving log# 1 seq# 1361
These are the last few lines of my DB alert log file. I have scheduled a shell script to back up all the archive logs, but whenever the script is running I get this error.
Can anyone explain to me what this means? It looks like archiving completes without any problem.
I am using "ALTER SYSTEM ARCHIVE LOG CURRENT" in the script.
regards
Joe
Although this is not a serious problem, it does cause your database to stop briefly. It indicates that you do not have enough redo log groups.
Essentially, Oracle writes redo information to log group 1 until it is full, then switches to log group 2. While lgwr is writing to group 2, arc writes group 1 to the archive log destination. If arc is not finished archiving group 1 before group 2 fills, then it has nowhere to switch to (i.e. group 1 is not available for lgwr to write to yet). In normal operation, when Oracle is managing log switches, you would see an error about unable to switch logs from lgwr. However, by issuing the archivelog current, arc does all the work. Since it is unable to do the log switch required to archive the current log, it generates the failed to archive message, and waits for group 2 to finish archiving.
If this is only a problem when doing the backup, you can probably ignore it. However, if you see log-switch errors in normal operation, you should add another logfile group, so there will be more time between switches into any single group.
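As a sketch of adding a group (the group number, member paths, and size below are hypothetical; the paths only mirror the layout shown in the alert log above):

```sql
-- Hypothetical example: add another redo log group with two multiplexed
-- members, giving the archiver more time between log switches.
ALTER DATABASE ADD LOGFILE GROUP 4
      ('C:\ORACLE\MMP\LOG\REDO04A.LOG',
       'E:\ORACLE\MMP\LOG\REDO04B.LOG') SIZE 100M;
```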
TTFN
John -
Hi,
I have a database A, in archive log mode, with a standby database. After each log switch, I get an error like this:
Mon Aug 9 19:55:56 2004
ARCH: Beginning to archive log# 4 seq# 19877
Mon Aug 9 19:55:56 2004
ARC2: Beginning to archive log# 4 seq# 19877
ARC2: Failed to archive log# 4 seq# 19877
Mon Aug 9 19:55:57 2004
ARCH: Completed archiving log# 4 seq# 19877
I don't understand, because everything works fine on my standby database.
Thanks for your lights,
Nicolas.
No need to worry; these errors can be ignored. You have two archiver processes running: the first ARCH process had already begun archiving the log, and ARC2 then tried to. Not a big deal.
--Paul -
SnapshotDB failed for archive backup
Hi!
One server I manage had a power failure. Now when I try to start the calendar service I get this error:
20070528154729 - 1 Fatal Error: SnapshotDB failed for archive backup at /var/opt/SUNWics5/csdb/archive/archive_20070528
20070528154729 - csstoredExit: Exiting with [-1].
The whole log:
20070528154726 - caldb.berkeleydb.homedir.path = /var/opt/SUNWics5/csdb
20070528154726 - caldb.berkeleydb.archive.path = /var/opt/SUNWics5/csdb/archive
20070528154726 - caldb.berkeleydb.hotbackup.path = /var/opt/SUNWics5/csdb/hotbackup
20070528154726 - caldb.berkeleydb.archive.enable = 1
20070528154726 - caldb.berkeleydb.hotbackup.enable = 1
20070528154726 - caldb.berkeleydb.hotbackup.mindays = 3
20070528154726 - caldb.berkeleydb.hotbackup.maxdays = 6
20070528154726 - caldb.berkeleydb.hotbackup.threshold = 70
20070528154726 - caldb.berkeleydb.archive.mindays = 3
20070528154726 - caldb.berkeleydb.archive.maxdays = 6
20070528154726 - caldb.berkeleydb.archive.threshold = 70
20070528154726 - caldb.berkeleydb.circularlogging = no
20070528154726 - caldb.berkeleydb.archive.interval = 120
20070528154726 - alarm.msgalarmnoticercpt = [email protected]
20070528154726 - service.admin.sleeptime = 2
20070528154726 - local.serveruid = icsuser
20070528154726 - local.hostname = jespre.dominio.local
20070528154726 - service.http.calendarhostname = jespre.dominio.local
20070528154726 - Reading configuration file - Done
csstored is started
Calendar service(s) were started
20070528154728 - Notice: Store Archiving is Enabled
20070528154728 - Notice: Hot Backup is Enabled
20070528154728 - WARNING: Removing directory [archive_20061102] from [/var/opt/SUNWics5/csdb/archive] according to system settings.
20070528154728 - In backup directory [/var/opt/SUNWics5/csdb/archive] we had [6] days worth of backup
20070528154728 - According to the system settings, we must keep between [3] and [6] days of backup
20070528154728 - We now have [5] days of backup in [/var/opt/SUNWics5/csdb/archive]
20070528154728 - Creating directory [/var/opt/SUNWics5/csdb/archive/archive_20070528] 20070528154728 - ... success
20070528154729 - Checking condition for [/var/opt/SUNWics5/csdb/archive], threshold [70], DB [257031]KB
20070528154729 - Hotbackup on [/var/opt/SUNWics5/csdb/hotbackup] mounted on [/dev/dsk/c0d0s6]
20070528154729 - Archivebackup on [/var/opt/SUNWics5/csdb/archive] mounted on [/dev/dsk/c0d0s6]
20070528154729 - Checking condition for [/var/opt/SUNWics5/csdb/archive], threshold [70], DB [257031]KB
20070528154729 - Hotbackup on [/var/opt/SUNWics5/csdb/hotbackup] mounted on [/dev/dsk/c0d0s6]
20070528154729 - Archivebackup on [/var/opt/SUNWics5/csdb/archive] mounted on [/dev/dsk/c0d0s6]
20070528154729 - SnapshotDB: Creating archive copy at /var/opt/SUNWics5/csdb/archive/archive_20070528
20070528154729 - Run CheckpointDB prior to backing up the database files
20070528154729 - Running CheckpointDB: /opt/SUNWics5/cal/lib/../tools/unsupported/bin/db_checkpoint -1 -h /var/opt/SUNWics5/csdb 2> /tmp/csstored.checkpoint.out
20070528154729 - Running CheckpointDB: /opt/SUNWics5/cal/lib/../tools/unsupported/bin/db_checkpoint -1 -h /var/opt/SUNWics5/csdb 2> /tmp/csstored.checkpoint.out
20070528154729 - Copying database files to /var/opt/SUNWics5/csdb/archive/archive_20070528
20070528154729 - Copying database file ics50alarms.db to /var/opt/SUNWics5/csdb/archive/archive_20070528
20070528154729 - SnapshotDB - Copy failed to /var/opt/SUNWics5/csdb/archive/archive_20070528 for ics50alarms.db
20070528154729 - 1 Fatal Error: SnapshotDB failed for archive backup at /var/opt/SUNWics5/csdb/archive/archive_20070528
20070528154729 - csstoredExit: Exiting with [-1].
Any hint?
Long time ago, in a galaxy far away...
I discovered that csstored.pl doesn't play very well with the Spanish locale (not sure about other locales).
If I launch /opt/SUNWics5/cal/sbin/start-cal with this locale:
LANG=es_ES.UTF-8
LC_CTYPE=es_ES.UTF-8
LC_NUMERIC=es_ES.UTF-8
LC_TIME="es_ES.UTF-8"
LC_COLLATE=es_ES.UTF-8
LC_MONETARY=es_ES.UTF-8
LC_MESSAGES=es.UTF-8
LC_ALL=
I get the error described before. Then I just reset LANG and LC_ALL to C and all goes well.
Thanks to truss for the inspiration.
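The workaround above, sketched as a snippet you might put at the top of a wrapper around start-cal (the start-cal path is from this thread; the wrapper itself is hypothetical):

```shell
# Reset the locale to C so csstored.pl parses dates and numbers
# predictably, then launch the calendar services.
LANG=C
LC_ALL=C
export LANG LC_ALL
# Sanity check: with LC_ALL=C exported, every locale category reports C.
locale | grep '^LC_TIME=' | tr -d '"'
# /opt/SUNWics5/cal/sbin/start-cal   # actual launch, commented out here
```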
ORA-19563: header validation failed for file
Hi all,
I faced a problem when restoring the database from a tape backup.
My current database is 11.1.0.7.0, on AIX.
I'm running RMAN to restore, with SET NEWNAME to repoint the datafiles to a new LUN,
but at the end of the restore it shows this:
channel c05: restore complete, elapsed time: 01:00:34
channel c06: piece handle=PRFN_DB_bk_31518_1_831778821 tag=HOT_DB_BK_LEVEL0
channel c06: restored backup piece 1
channel c06: restore complete, elapsed time: 01:01:39
Finished restore at 22-NOV-13
released channel: c01
released channel: c02
released channel: c03
released channel: c04
released channel: c05
released channel: c06
released channel: c07
released channel: c08
released channel: c09
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch command at 11/22/2013 15:13:04
ORA-19563: header validation failed for file
I opened alert.log and saw the errors:
Errors in file /oracle/PROD/db/tech_st/11.1.0/admin/PROD_smjkt-prfn01/diag/rdbms/prod/PROD/trace/PROD_m000_5243492.trc:
ORA-51106: check failed to complete due to an error. See error below
ORA-48251: Failed to open relation due to following error
ORA-48122: error with opening the ADR block file [/oracle/PROD/db/tech_st/11.1.0/admin/PROD_smjkt-prfn01/diag/rdbms/prod/PROD/metadata/HM_FINDING.ams] [0]
ORA-27041: unable to open file
IBM AIX RISC System/6000 Error: 22: Invalid argument
Additional information: 2
ORA-01122: database file 30 failed verification check
ORA-01110: data file 30: '/oradata51/PROD/data/ctxd01.dbf'
ORA-01565: error in identifying file '/oradata51/PROD/data/ctxd01.dbf'
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
Fri Nov 22 15:11:58 2013
Errors in file /oracle/PROD/db/tech_st/11.1.0/admin/PROD_smjkt-prfn01/diag/rdbms/prod/PROD/trace/PROD_m000_5243494.trc:
ORA-51106: check failed to complete due to an error. See error below
ORA-48251: Failed to open relation due to following error
ORA-48122: error with opening the ADR block file [/oracle/PROD/db/tech_st/11.1.0/admin/PROD_smjkt-prfn01/diag/rdbms/prod/PROD/metadata/HM_INFO.ams] [0]
ORA-27041: unable to open file
IBM AIX RISC System/6000 Error: 22: Invalid argument
Additional information: 2
ORA-01122: database file 221 failed verification check
ORA-01110: data file 221: '/oradata51/PROD/data/a_txn_data86.dbf'
ORA-01565: error in identifying file '/oradata51/PROD/data/a_txn_data86.dbf'
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
Info:
'/oradata51/PROD/data/a_txn_data86.dbf' --> original path from production
SET NEWNAME FOR DATAFILE 221 TO '/oracle/PROD/db/apps_st/data/a_txn_data86.dbf' --> path on the test system doing the full restore
The file a_txn_data86.dbf already exists.
Can someone help me?
Imron
Hi Imron,
Ensure the file is available at the OS level. If it is, follow the links below and see if they help:
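A quick sketch of the OS-level check (the path is the SET NEWNAME target from the post; run it as the oracle OS user):

```shell
# Verify the restore target datafile exists and is readable by this user;
# ORA-27037 "unable to obtain file status" usually means it is not.
f=/oracle/PROD/db/apps_st/data/a_txn_data86.dbf
if [ -r "$f" ]; then
    ls -l "$f"
else
    echo "missing or unreadable: $f"
fi
```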
https://forums.oracle.com/thread/2544292
https://forums.oracle.com/message/1237966
https://forums.oracle.com/message/
Thanks &
Best Regards, -
.Mac account credentials validation failed for account "myaccount".
I made some changes on my web site, but when I tried publishing I got this back as a "Publish Error": .Mac account credentials validation failed for account “dannpurvis”.
How do I fix this?
Thanks,
Dann
I had to do this four or five times - but it eventually worked!
Thanks!
One thing that I did was I went into my System Prefs
and under the .Mac, I just deleted my account name
and password in the fields, pressed enter which just
refreshed the fields to make me enter my account &
pass again. It went through verification process and
accepted my password, logging me in again.
Went back to iWeb and everything uploaded as normal.
Hope this helps.
Intel
Macbook 2 GHz Intel Core 2 Duo Mac OS X
(10.4.9) 1 GB 667 MHz DDR2 SDRAM -
Process_order api - Validation Failed for Field Bill To
I am relatively new to the EBS world and I'm having some issues with calling the OE_Order_Pub.Process_order API. When I call this API I am getting a return error of "Validation failed for field - Bill To".
For background: I have an APEX application where users can choose parts from a small part-master list to add to an existing order. The existing order will not have a status of Closed or Cancelled, and there will be at least one line in the order before the new parts are added. The parts are added as new lines to the order, with some of the new line data defaulting to the same information as the first line. I am using EBS version 12.1.3 with a multi-org setup. Orders under one org (id=3) work fine, but under another org (id=569) they never work and keep returning the error. I am setting the context to the org of the order and initializing the apps user information with the responsibility "Order Management Super User".
Do you have any idea what could be wrong or how I can debug the error to get a little more detail?
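One common way to get more detail is to walk the Order Management message stack after the call returns. A sketch (the loop and variable names are illustrative; OE_MSG_PUB.COUNT_MSG and OE_MSG_PUB.GET are the standard APIs, but verify the signatures on your release):

```sql
-- After OE_Order_Pub.Process_Order returns with an error status,
-- print every message left on the OM message stack.
DECLARE
  l_msg VARCHAR2(2000);
BEGIN
  FOR i IN 1 .. OE_MSG_PUB.COUNT_MSG LOOP
    l_msg := OE_MSG_PUB.GET(p_msg_index => i, p_encoded => FND_API.G_FALSE);
    DBMS_OUTPUT.PUT_LINE('Message ' || i || ': ' || l_msg);
  END LOOP;
END;
/
```

The stack usually names the exact attribute that failed validation, which narrows down org-specific setup differences such as the Bill To site.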
Here is the procedure I have.
PROCEDURE TEK_ORD_PROCESS_ORDER(p_order_id IN NUMBER, p_return_code OUT NOCOPY VARCHAR2, p_status OUT NOCOPY VARCHAR2) IS
CURSOR c_order_parts IS
SELECT *
FROM TEK_APEX.TEK_ORD_ORDER_PARTS
WHERE ORDER_ID = p_order_id;
TYPE t_parts IS TABLE OF TEK_APEX.TEK_ORD_ORDER_PARTS%ROWTYPE;
v_order_parts t_parts;
--Setup variables
H_Op_Code VARCHAR2(25) DEFAULT OE_GLOBALS.G_OPR_UPDATE;
L_Op_Code VARCHAR2(25) DEFAULT OE_GLOBALS.G_OPR_CREATE;
v_install_type VARCHAR2(25) := 'PTO';
v_source_id NUMBER;
v_user_id NUMBER;
v_resp_id NUMBER;
v_app_id NUMBER;
v_debug VARCHAR2(32767);
v_oracle_order OE_ORDER_HEADERS_ALL%ROWTYPE;
v_apex_order TEK_APEX.TEK_ORD_SALES_ORDERS%ROWTYPE;
p_header_rec OE_Order_Pub.Header_Rec_Type;
p_header_val_rec OE_Order_Pub.Header_Val_Rec_Type;
p_Header_Adj_tab OE_Order_Pub.Header_Adj_Tbl_Type;
p_Header_Adj_val_tab OE_Order_Pub.Header_Adj_Val_Tbl_Type;
p_Header_price_Att_tab OE_Order_Pub.Header_Price_Att_Tbl_Type;
p_Header_Adj_Att_tab OE_Order_Pub.Header_Adj_Att_Tbl_Type;
p_Header_Adj_Assoc_tab OE_Order_Pub.Header_Adj_Assoc_Tbl_Type;
p_Header_Scredit_tab OE_Order_Pub.Header_Scredit_Tbl_Type;
p_Header_Scredit_val_tab OE_Order_Pub.Header_Scredit_Val_Tbl_Type;
p_line_tab OE_Order_Pub.Line_Tbl_Type;
p_line_val_tab OE_Order_Pub.Line_Val_Tbl_Type;
p_Line_Adj_tab OE_Order_Pub.Line_Adj_Tbl_Type;
p_Line_Adj_val_tab OE_Order_Pub.Line_Adj_Val_Tbl_Type;
p_Line_price_Att_tab OE_Order_Pub.Line_Price_Att_Tbl_Type;
p_Line_Adj_Att_tab OE_Order_Pub.Line_Adj_Att_Tbl_Type;
p_Line_Adj_Assoc_tab OE_Order_Pub.Line_Adj_Assoc_Tbl_Type;
p_Line_Scredit_tab OE_Order_Pub.Line_Scredit_Tbl_Type;
p_Line_Scredit_val_tab OE_Order_Pub.Line_Scredit_Val_Tbl_Type;
p_Lot_Serial_tab OE_Order_Pub.Lot_Serial_Tbl_Type;
p_Lot_Serial_val_tab OE_Order_Pub.Lot_Serial_Val_Tbl_Type;
p_action_request_tab OE_Order_pub.Request_Tbl_Type;
l_header_rec OE_Order_Pub.Header_Rec_Type;
l_header_val_rec OE_Order_Pub.Header_Val_Rec_Type;
l_Header_Adj_tab OE_Order_Pub.Header_Adj_Tbl_Type;
l_Header_Adj_val_tab OE_Order_Pub.Header_Adj_Val_Tbl_Type;
l_Header_price_Att_tab OE_Order_Pub.Header_Price_Att_Tbl_Type;
l_Header_Adj_Att_tab OE_Order_Pub.Header_Adj_Att_Tbl_Type;
l_Header_Adj_Assoc_tab OE_Order_Pub.Header_Adj_Assoc_Tbl_Type;
l_Header_Scredit_tab OE_Order_Pub.Header_Scredit_Tbl_Type;
l_Header_Scredit_val_tab OE_Order_Pub.Header_Scredit_Val_Tbl_Type;
l_line_tab OE_Order_Pub.Line_Tbl_Type;
l_line_val_tab OE_Order_Pub.Line_Val_Tbl_Type;
l_Line_Adj_tab OE_Order_Pub.Line_Adj_Tbl_Type;
l_Line_Adj_val_tab OE_Order_Pub.Line_Adj_Val_Tbl_Type;
l_Line_price_Att_tab OE_Order_Pub.Line_Price_Att_Tbl_Type;
l_Line_Adj_Att_tab OE_Order_Pub.Line_Adj_Att_Tbl_Type;
l_Line_Adj_Assoc_tab OE_Order_Pub.Line_Adj_Assoc_Tbl_Type;
l_Line_Scredit_tab OE_Order_Pub.Line_Scredit_Tbl_Type;
l_Line_Scredit_val_tab OE_Order_Pub.Line_Scredit_Val_Tbl_Type;
l_Lot_Serial_tab OE_Order_Pub.Lot_Serial_Tbl_Type;
l_Lot_Serial_val_tab OE_Order_Pub.Lot_Serial_Val_Tbl_Type;
l_ret_status VARCHAR2(200);
l_msg_count NUMBER;
l_msg_data VARCHAR2(200);
--Email information
v_email_address varchar2(100);
v_msg_text varchar(1000);
v_subject_text varchar(1000);
--Default line information
v_item_id NUMBER;
v_contact_id NUMBER;
v_invoice_to_org_id oe_order_lines_all.INVOICE_TO_ORG_ID%TYPE;
v_ship_to_org_id oe_order_lines_all.SHIP_TO_ORG_ID%TYPE;
v_sold_to_org_id oe_order_lines_all.SOLD_TO_ORG_ID%TYPE;
v_flow_status_code oe_order_lines_all.FLOW_STATUS_CODE%TYPE;
FUNCTION GET_ORACLE_ORDER(p_order_number IN OE_ORDER_HEADERS_ALL.ORDER_NUMBER%TYPE)
RETURN OE_ORDER_HEADERS_ALL%ROWTYPE IS
v_order OE_ORDER_HEADERS_ALL%ROWTYPE;
BEGIN
SELECT *
INTO v_order
FROM APPS.OE_ORDER_HEADERS_ALL
WHERE ORDER_NUMBER = p_order_number;
RETURN v_order;
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END GET_ORACLE_ORDER;
FUNCTION GET_APEX_ORDER(p_order_id IN TEK_APEX.TEK_ORD_SALES_ORDERS.ORDER_ID%TYPE)
RETURN TEK_APEX.TEK_ORD_SALES_ORDERS%ROWTYPE IS
v_order TEK_APEX.TEK_ORD_SALES_ORDERS%ROWTYPE;
BEGIN
SELECT *
INTO v_order
FROM TEK_APEX.TEK_ORD_SALES_ORDERS
WHERE ORDER_ID = p_order_id;
RETURN v_order;
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END GET_APEX_ORDER;
FUNCTION GET_SOURCE_ID(p_source_name IN VARCHAR2)
RETURN OE_ORDER_SOURCES.ORDER_SOURCE_ID%TYPE IS
v_source_id OE_ORDER_SOURCES.ORDER_SOURCE_ID%TYPE;
BEGIN
SELECT ORDER_SOURCE_ID
INTO v_source_id
FROM APPS.OE_ORDER_SOURCES
WHERE NAME = p_source_name;
RETURN v_source_id;
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END GET_SOURCE_ID;
FUNCTION GET_ITEM_ID(p_part_number IN VARCHAR2, p_org_id IN NUMBER)
RETURN MTL_SYSTEM_ITEMS.INVENTORY_ITEM_ID%TYPE IS
v_item_id MTL_SYSTEM_ITEMS.INVENTORY_ITEM_ID%TYPE;
BEGIN
SELECT INVENTORY_ITEM_ID
INTO v_item_id
FROM APPS.MTL_SYSTEM_ITEMS
WHERE SEGMENT1 = p_part_number
AND ORGANIZATION_ID = p_org_id;
RETURN v_item_id;
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END GET_ITEM_ID;
BEGIN
apps.mo_global.set_policy_context('S',3);
apps.mo_global.init('XXTEK');
BEGIN
SELECT USER_ID
INTO v_user_id
FROM APPS.FND_USER
WHERE USER_NAME = 'SYSADMIN';
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20001, 'Error selecting user');
END;
BEGIN
SELECT RESPONSIBILITY_ID, APPLICATION_ID
INTO v_resp_id, v_app_id
FROM TEK_APEX.TEK_RR_ACTIVE_RESP_VW
WHERE UPPER(RESPONSIBILITY_NAME) = 'ORDER MANAGEMENT SUPER USER';
--Set current user information
fnd_global.apps_initialize (user_id => v_user_id
,resp_id => v_resp_id
,resp_appl_id => v_app_id);
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20001, 'Error selecting responsibility');
END;
--Get the order information from Oracle and APEX
v_apex_order := GET_APEX_ORDER(p_order_id);
IF v_apex_order.ORDER_ID IS NULL THEN
RAISE_APPLICATION_ERROR(-20001, 'APEX Order ID is invalid: ' || p_order_id);
END IF;
v_oracle_order := GET_ORACLE_ORDER(TO_NUMBER(v_apex_order.ORDER_NUMBER));
IF v_oracle_order.ORDER_NUMBER IS NULL THEN
RAISE_APPLICATION_ERROR(-20001, 'Order Number not found in Oracle: ' || v_apex_order.ORDER_NUMBER);
END IF;
apps.mo_global.set_policy_context('S', v_oracle_order.ORG_ID);
v_source_id := GET_SOURCE_ID('IMPORT');
IF v_source_id IS NULL THEN
RAISE_APPLICATION_ERROR(-20001, 'Source ID not found for IMPORT');
END IF;
/* ********** Gather Order Header********** */
/* ********** Info. ********** */
OE_Order_Pub.Get_Order(p_api_version_number => 1.0,
p_init_msg_list => FND_API.G_TRUE,
p_return_values => FND_API.G_TRUE,
x_return_status => l_ret_status,
x_msg_count => l_msg_count,
x_msg_data => l_msg_data,
p_header_id => v_oracle_order.HEADER_ID,
p_header => NULL,
x_header_rec => l_header_rec,
x_header_val_rec => l_header_val_rec,
x_Header_Adj_tbl => l_Header_Adj_tab,
x_Header_Adj_val_tbl => l_Header_Adj_val_tab,
x_Header_price_Att_tbl => l_Header_price_Att_tab,
x_Header_Adj_Att_tbl => l_Header_Adj_Att_tab,
x_Header_Adj_Assoc_tbl => l_Header_Adj_Assoc_tab,
x_Header_Scredit_tbl => l_Header_Scredit_tab,
x_Header_Scredit_val_tbl=> l_Header_Scredit_val_tab,
x_line_tbl => l_line_tab,
x_line_val_tbl => l_line_val_tab,
x_Line_Adj_tbl => l_Line_Adj_tab,
x_Line_Adj_val_tbl => l_Line_Adj_val_tab,
x_Line_price_Att_tbl => l_Line_price_Att_tab,
x_Line_Adj_Att_tbl => l_Line_Adj_Att_tab,
x_Line_Adj_Assoc_tbl => l_Line_Adj_Assoc_tab,
x_Line_Scredit_tbl => l_Line_Scredit_tab,
x_Line_Scredit_val_tbl => l_Line_Scredit_val_tab,
x_Lot_Serial_tbl => l_Lot_Serial_tab,
x_Lot_Serial_val_tbl => l_Lot_Serial_val_tab);
--Save defaults from first line
IF l_line_tab.EXISTS(1) THEN
v_contact_id := l_line_tab(1).SHIP_TO_CONTACT_ID;
v_invoice_to_org_id := l_line_tab(1).INVOICE_TO_ORG_ID;
v_ship_to_org_id := l_line_tab(1).SHIP_TO_ORG_ID;
v_sold_to_org_id := l_line_tab(1).SOLD_TO_ORG_ID;
v_flow_status_code := l_line_tab(1).FLOW_STATUS_CODE;
END IF;
--Clear out the line array before adding any new parts
FOR i IN l_line_tab.FIRST..l_line_tab.LAST LOOP
l_line_tab.DELETE(i);
l_line_val_tab.DELETE(i);
l_line_adj_tab.DELETE(i);
l_line_adj_val_tab.DELETE(i);
l_line_price_att_tab.DELETE(i);
l_line_adj_att_tab.DELETE(i);
l_line_adj_assoc_tab.DELETE(i);
l_line_scredit_tab.DELETE(i);
l_line_scredit_val_tab.DELETE(i);
l_lot_serial_tab.DELETE(i);
l_lot_serial_val_tab.DELETE(i);
END LOOP;
/* ********** Gather Order Lines ********** */
OPEN c_order_parts;
FETCH c_order_parts BULK COLLECT INTO v_order_parts;
CLOSE c_order_parts;
FOR i IN v_order_parts.FIRST..v_order_parts.LAST LOOP
v_item_id := GET_ITEM_ID(v_order_parts(i).PART_NUMBER, v_oracle_order.SHIP_FROM_ORG_ID);
IF v_item_id IS NULL THEN
RAISE_APPLICATION_ERROR(-20001, 'Error selecting part number ' || v_order_parts(i).PART_NUMBER);
END IF;
--Clear line first
l_line_tab(i) := OE_Order_Pub.G_Miss_Line_Rec;
l_line_val_tab(i) := OE_ORDER_PUB.G_MISS_LINE_VAL_REC;
l_line_adj_tab(i) := OE_ORDER_PUB.G_MISS_LINE_ADJ_REC;
l_line_adj_val_tab(i) := OE_ORDER_PUB.G_MISS_LINE_ADJ_VAL_REC;
l_line_price_att_tab(i) := OE_ORDER_PUB.G_MISS_LINE_PRICE_ATT_REC ;
l_line_adj_att_tab(i) := OE_ORDER_PUB.G_MISS_LINE_ADJ_ATT_REC;
l_line_adj_assoc_tab(i) := OE_ORDER_PUB.G_MISS_LINE_ADJ_ASSOC_REC;
l_line_scredit_tab(i) := OE_ORDER_PUB.G_MISS_LINE_SCREDIT_REC;
l_line_scredit_val_tab(i) := OE_ORDER_PUB.G_MISS_LINE_SCREDIT_VAL_REC;
l_lot_serial_tab(i) := OE_ORDER_PUB.G_MISS_LOT_SERIAL_REC;
l_lot_serial_val_tab(i) := OE_ORDER_PUB.G_MISS_LOT_SERIAL_VAL_REC;
--Set line information
l_line_tab(i).PRICE_LIST_ID := v_oracle_order.PRICE_LIST_ID;
l_line_tab(i).header_id := v_oracle_order.header_id;
l_line_tab(i).inventory_item_id := v_item_id;
l_line_tab(i).ordered_quantity := v_order_parts(i).QUANTITY;
l_line_tab(i).operation := l_op_code;
l_line_tab(i).unit_list_price := 0;
l_line_tab(i).ship_from_org_id := v_oracle_order.ship_from_org_id;
l_line_tab(i).program_id := fnd_global.conc_program_id ;
l_line_tab(i).program_application_id := fnd_global.PROG_APPL_ID;
l_line_tab(i).order_source_id := v_source_id;
l_line_tab(i).calculate_price_flag := 'N' ;
l_line_tab(i).unit_selling_price := 0.00 ;
l_line_tab(i).request_date := v_apex_order.onsite_date;
l_line_tab(i).Schedule_ship_date := v_apex_order.onsite_date;
l_line_tab(i).promise_date := null;
l_line_tab(i).invoice_to_org_id := v_invoice_to_org_id;
l_line_tab(i).ship_to_org_id := v_ship_to_org_id;
l_line_tab(i).sold_to_org_id := v_sold_to_org_id;
l_line_tab(i).ship_to_contact_id := v_contact_id;
END LOOP;
--OE_DEBUG_PUB.DEBUG_ON;
--OE_DEBUG_PUB.Initialize;
--OE_DEBUG_PUB.SetDebugLevel(5);
--Add lines to order
OE_Order_Pub.Process_order(p_api_version_number => 1.0,
p_init_msg_list => FND_API.G_TRUE,
p_return_values => FND_API.G_TRUE,
p_action_commit => FND_API.G_FALSE,
x_return_status => l_ret_status,
x_msg_count => l_msg_count,
x_msg_data => l_msg_data,
p_header_rec => l_header_rec,
p_old_header_rec => l_header_rec,
p_header_val_rec => l_header_val_rec,
p_old_header_val_rec => l_header_val_rec,
p_Header_Adj_tbl => l_Header_Adj_tab,
p_old_Header_Adj_tbl => l_Header_Adj_tab,
p_Header_Adj_val_tbl => l_Header_Adj_val_tab,
p_old_Header_Adj_val_tbl => l_Header_Adj_val_tab,
p_Header_price_Att_tbl => l_Header_price_Att_tab,
p_old_Header_Price_Att_tbl => l_Header_price_Att_tab,
p_Header_Adj_Att_tbl => l_Header_Adj_Att_tab,
p_old_Header_Adj_Att_tbl => l_Header_Adj_Att_tab,
p_Header_Adj_Assoc_tbl => l_Header_Adj_Assoc_tab,
p_old_Header_Adj_Assoc_tbl => l_Header_Adj_Assoc_tab,
p_Header_Scredit_tbl => l_Header_Scredit_tab,
p_old_Header_Scredit_tbl => l_Header_Scredit_tab,
p_Header_Scredit_val_tbl => l_Header_Scredit_val_tab,
p_old_Header_Scredit_val_tbl => l_Header_Scredit_val_tab,
p_line_tbl => l_line_tab,
p_line_val_tbl => l_line_val_tab,
p_Line_Adj_tbl => l_line_adj_tab,
p_Line_Adj_val_tbl => l_line_adj_val_tab,
p_Line_price_Att_tbl => l_line_price_att_tab,
p_Line_Adj_Att_tbl => l_line_adj_att_tab,
p_Line_Adj_Assoc_tbl => l_line_adj_assoc_tab,
p_Line_Scredit_tbl => l_line_scredit_tab,
p_Line_Scredit_val_tbl => l_line_scredit_val_tab,
p_Lot_Serial_tbl => l_lot_serial_tab,
p_Lot_Serial_val_tbl => l_lot_serial_val_tab,
p_action_request_tbl => OE_ORDER_PUB.G_MISS_REQUEST_TBL,
x_header_rec => p_header_rec,
x_header_val_rec => p_header_val_rec,
x_Header_Adj_tbl => p_Header_Adj_tab,
x_Header_Adj_val_tbl => p_Header_Adj_val_tab,
x_Header_price_Att_tbl => p_Header_price_Att_tab,
x_Header_Adj_Att_tbl => p_Header_Adj_Att_tab,
x_Header_Adj_Assoc_tbl => p_Header_Adj_Assoc_tab,
x_Header_Scredit_tbl => p_Header_Scredit_tab,
x_Header_Scredit_val_tbl => p_Header_Scredit_val_tab,
x_line_tbl => p_line_tab,
x_line_val_tbl => p_line_val_tab,
x_line_adj_tbl => p_line_adj_tab,
x_line_adj_val_tbl => p_line_adj_val_tab,
x_line_price_att_tbl => p_line_price_att_tab,
x_line_adj_att_tbl => p_line_adj_att_tab,
x_line_adj_assoc_tbl => p_line_adj_assoc_tab,
x_line_scredit_tbl => p_line_scredit_tab,
x_line_scredit_val_tbl => p_line_scredit_val_tab,
x_lot_serial_tbl => p_lot_serial_tab,
x_lot_serial_val_tbl => p_lot_serial_val_tab,
x_action_request_tbl => p_action_request_tab);
-- OE_DEBUG_PUB.DEBUG_OFF;
p_return_code := l_ret_status;
IF l_ret_status != 'S' THEN
DBMS_OUTPUT.PUT_LINE(l_msg_data);
IF l_msg_count = 1 THEN
p_status := OE_Msg_Pub.Get(1,'F');
ELSE
FOR i IN 1..l_msg_count LOOP
--Append each message; the original assignment overwrote p_status on every iteration
p_status := p_status || OE_Msg_Pub.Get(i,'F') || '<br />';
END LOOP;
END IF;
p_status := 'Error loading lines<br />' || p_status;
ELSE
p_status := 'Order Processed Successfully<br>' || v_order_parts.COUNT || ' Line(s) Loaded';
END IF;
-- WHILE OE_DEBUG_PUB.G_DEBUG_INDEX < OE_DEBUG_PUB.CountDebug LOOP
-- OE_DEBUG_PUB.GetNext(v_debug);
-- DBMS_OUTPUT.PUT_LINE(v_debug);
-- END LOOP;
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
RAISE;
END TEK_ORD_PROCESS_ORDER;
Thanks!
Jonathan Hart
Hello,
Try the following
1) Check if any processing constraints are applied to customer/location
2) Try to create a new customer and associate internal location.
Create a new order to reproduce the issue
Thanks
-Arif -
Validation Failed for C:\winnt\system32\vsinit.dll
I am running VPN Client 4.6.00.0045 on Windows 2000, and when I reboot I get an error that reads "Validation Failed for C:\winnt\system32\vsinit.dll"
It seems the VPN Service is not starting. A reinstall will not fix it. I do not have ZoneAlarm or any other personal firewall installed. Any help would be appreciated.
I'm not sure if the steps described at this URL will help your issue or not. There is quite a bit of information in the post that may be of some help.
http://forum.zonelabs.org/zonelabs/board/message?board.id=Off-Topic&message.id=5155
On my system, I do not have ZoneAlarm installed at all, yet I have the vsinit.dll file on my system. I suspect that it is installed with the Cisco VPN Client. There is a reference to this file in bug CSCec41278 that leads me to this conclusion. The version of the VPN client that I have installed is 4.6.00.0049.
HTH
Steve -
Order Import - Validation failed for the field - End Customer Location
I am trying to import an order using standard import program and it is failing with below error:
- Validation failed for the field - End Customer Location
I have checked Metalink to debug this and found nothing; the values I have used to populate the interface table are valid.
Can someone please post your views on this, on where the problem might be.
Thanks!
Hello,
Try the following
1) Check if any processing constraints are applied to customer/location
2) Try to create a new customer and associate internal location.
Create a new order to reproduce the issue
Thanks
-Arif -
Save validation failed for message
Hi !
I am trying to save a workflow to the database however I am getting an error saying 'Save validation failed for message 'WFTEST/COMPLAINT_REJECTED'
Under this I got:
401: Could not find token 'NOTE' among message attributes
401: Could not find token 'FORWARD_TO_USERNAME' among message attributes'
What must I do to stop getting this error message?
Hi,
I've answered your duplicate post on the [WorkflowFAQ forum|http://smforum.workflowfaq.com/index.php?topic=998.0].
HTH,
Matt
WorkflowFAQ.com - the ONLY independent resource for Oracle Workflow development
Alpha review chapters from my book "Developing With Oracle Workflow" are available via my website http://www.workflowfaq.com
Have you read the blog at http://www.workflowfaq.com/blog ?
WorkflowFAQ support forum: http://forum.workflowfaq.com -
"recover database until cancel" asks for archive log file that do not exist
Hello,
Oracle Release : Oracle 10.2.0.2.0
Last week we performed a restore and then an Oracle recovery using the recover database until cancel command (we didn't use a backup controlfile). It worked fine and we were able to restart the SAP instances. However, I still have questions about Oracle's behaviour when using this command.
First we restored, an online backup.
We tried to restart the database, but got ORA-01113 and ORA-01110 errors:
sr3usr.data1 needed media recovery.
Then we performed the recovery :
According to the Oracle documentation, "recover database until cancel" proceeds by prompting you with the suggested filenames of archived redo log files.
The problem is that it prompts for archive log files that do not exist.
As you can see below, it asked for SMAarch1_10420_610186861.dbf, which was never created. Therefore, I cancelled the recovery manually and restarted the database. We never got the message "media recovery complete".
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10417_61018686
Fri Sep 7 14:09:45 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf'
Fri Sep 7 14:09:45 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_61018686
Fri Sep 7 14:10:03 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf'
Fri Sep 7 14:10:03 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_61018686
Fri Sep 7 14:10:13 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf'
Fri Sep 7 14:10:13 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
Errors with log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
ORA-308 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_61018686
Fri Sep 7 14:15:19 2007
ALTER DATABASE RECOVER CANCEL
Fri Sep 7 14:15:20 2007
ORA-1013 signalled during: ALTER DATABASE RECOVER CANCEL ...
Fri Sep 7 14:15:40 2007
Shutting down instance: further logons disabled
When restarting the database we could see that a recovery of the online redo logs was performed automatically. Is this the normal behaviour of a recovery using the "recover database until cancel" command?
Started redo application at
Thread 1: logseq 10416, block 482
Fri Sep 7 14:24:55 2007
Recovery of Online Redo Log: Thread 1 Group 4 Seq 10416 Reading mem 0
Mem# 0 errs 0: /oracle/SMA/origlogB/log_g14m1.dbf
Mem# 1 errs 0: /oracle/SMA/mirrlogB/log_g14m2.dbf
Fri Sep 7 14:24:55 2007
Completed redo application
Fri Sep 7 14:24:55 2007
Completed crash recovery at
Thread 1: logseq 10416, block 525, scn 105140074
0 data blocks read, 0 data blocks written, 43 redo blocks read
Thank you very much for your help.
Frod.
Hi,
Let me answer your query.
=======================
Your question: While performing the recovery, is it possible to locate which online redolog is needed, and then to apply the changes in these logs?
1. When you have current controlfile and need complete data (no data loss),
then do not go for until cancel recovery.
2. Oracle will apply all the redologs (including current redolog) while recovery
process is on.
3. During the recovery you need to have all the redologs which are listed in the view V$RECOVERY_LOG and all the unarchived and current redolog. By querying V$RECOVERY_LOG you can find out about the redologs required.
4. If the required sequence is not there in the archive destination, and if recovery process asks for that sequence you can query V$LOG to see whether requested sequence is part of the online redologs. If yes you can mention the path of the online redolog to complete the recovery.
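For example (not from the original reply, just a sketch using the standard V$ views; the sequence number 10420 is taken from the alert log excerpt in the question, so adjust it and the thread number for your own case):

```sql
-- Archived log sequences that recovery still needs
SELECT thread#, sequence#, archive_name
FROM   v$recovery_log;

-- Is a requested sequence still in an online redo log?
-- If so, you can give the listed MEMBER path at the recovery prompt.
SELECT l.group#, l.thread#, l.sequence#, l.status, f.member
FROM   v$log l
JOIN   v$logfile f ON f.group# = l.group#
WHERE  l.sequence# = 10420;
```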
Hope this information helps.
Regards,
Madhukar -
How can I set destination for archived logs?
I would like to know:
how to set destination for archived logs?
how to identify the init.ora that is used for my database?
With RMAN using compressed backupsets by default, and making
backup database;
what does it back up exactly?
Another thing I am wondering: when I make a backup with RMAN (backup database), it saves the backups in the autobackup directory of the flash_recovery_area, but it seems that it only saves the data files and the control files. Isn't there a way to save archived log files, control files, and data files in a single backup?
In fact I would like to make a full backup of everything using RMAN on Sunday, and an incremental backup on the other days of the week. How can I accomplish this with a retention of 7 days?
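This thread has no reply in the archive, but a minimal sketch of the kind of setup being asked about looks like the following (the archive destination path is an assumption; verify everything against your own environment before use):

```
-- In SQL*Plus: identify the spfile/init.ora in use, and set the archive destination
SHOW PARAMETER spfile;
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/u01/app/oracle/arch' SCOPE=BOTH;

-- In RMAN: 7-day retention, full backup Sunday, incrementals the rest of the week.
-- "PLUS ARCHIVELOG" includes the archived logs in the same backup job.
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;   -- Sunday
BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;   -- Monday-Saturday
DELETE OBSOLETE;                                       -- enforce the retention policy
```

With CONTROLFILE AUTOBACKUP configured ON, the control file and spfile are also captured with each backup, which covers the "single backup" part of the question.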