Huge number of archive logs generated, EBS R12 with 11gR2 DB
Hi all,
Yesterday a huge number of archive logs was generated (above normal) at night, when there is no load on the server, causing my disk to become full. How can I determine which concurrent program caused this huge number of archive logs?
Regards,
Mohanad.
Hi Mohanad,
Please check these threads:
https://forums.oracle.com/message/10834762
https://forums.oracle.com/thread/2417003
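In the meantime, as a hedged starting point (standard dynamic performance views only; the assumption here is that in EBS the MODULE column of V$SESSION carries the concurrent program's short name, which depends on your setup), you can rank currently connected sessions by redo generated:

```sql
-- Per-session redo for sessions still connected; MODULE/PROGRAM help
-- map a session back to an EBS concurrent program.
SELECT s.sid, s.serial#, s.username, s.module, s.program,
       st.value AS redo_bytes
FROM   v$session  s,
       v$sesstat  st,
       v$statname sn
WHERE  st.sid        = s.sid
AND    st.statistic# = sn.statistic#
AND    sn.name       = 'redo size'
AND    st.value      > 0
ORDER  BY st.value DESC;
```

Note this only covers sessions that are still connected; for last night's activity you would need AWR/Statspack history or LogMiner.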
Thanks &
Best Regards,
Similar Messages
-
HTML output for archive logs generated
Hi All,
Greetings of the day,
I have a SQL script scheduled in cron which gives the number of archive logs generated each hour. I have to modify the shell script to include HTML commands to get the output in HTML format.
Any ideas on how I can do this?
Thanks ,
baskar.l
Please take the time to read the documentation. There is a link to "Generating HTML Reports in SQL*Plus" which also has examples.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14357/ch7.htm#CHDCECJG
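As a minimal sketch of that approach (file names and the per-hour query are placeholders for your own script), SET MARKUP HTML makes SQL*Plus emit the result set as an HTML table:

```sql
-- archive_per_hour.sql : run from cron, e.g.
--   sqlplus -S "/ as sysdba" @archive_per_hour.sql
SET MARKUP HTML ON SPOOL ON HEAD '<title>Archive logs per hour</title>'
SPOOL /tmp/archive_per_hour.html

SELECT TO_CHAR(completion_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                                    AS logs_generated
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1
GROUP  BY TO_CHAR(completion_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;

SPOOL OFF
SET MARKUP HTML OFF
```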
Edited by: Hemant K Chitale on May 21, 2009 5:08 PM -
Reduce amount of archived log generated.
RDBMS version : 9.2.0.8
SQL> SELECT tablespace_name, force_logging FROM dba_tablespaces;
TABLESPACE_NAME FORCE_LOGGING
SYSTEM NO
Above is the status of the database, but when I do the maintenance work of rebuilding the index tablespace I get a day or two's worth of archived log files,
and I don't think ALTER DATABASE NO FORCE LOGGING will reduce the amount of redo generated.
Is there any other method available?
thanks
Hi,
if you force logging for a tablespace or for the database, then this means only that any NOLOGGING clause that comes with statements related to segments in that tablespace/database is ignored. NO FORCE LOGGING is the default.
In order to reduce the amount of redo generated, you may consider using NOLOGGING for the rebuild of your indexes:
create index <indexname> on <table(column)> nologging;
Or you put the tablespace in which the indexes are created into NOLOGGING:
alter tablespace <indextablespace> nologging;
Or (perhaps even better) simply leave the indexes as they are. Most indexes do not need a rebuild anyway.
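Since the maintenance in question is a rebuild of existing indexes rather than a fresh CREATE INDEX, the equivalent hedged sketch (schema and index names are placeholders) would be:

```sql
-- Rebuild with minimal redo. Caveat: NOLOGGING changes are not protected
-- by the archived logs, so back up the affected tablespace afterwards.
ALTER INDEX my_schema.my_index REBUILD NOLOGGING;
```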
Kind regards
Uwe
http://uhesse.wordpress.com -
Our database is running in archive log mode.
I want to know which sessions generated/are generating (sysdate and sysdate - 2) more archive log.
Can I get it from a database view???
855516 wrote:
Our database is running in archive log mode.
I want to know which sessions generated/are generating (sysdate and sysdate - 2) more archive log.
Can I get it from a database view???
Use v$archived_log / v$log_history:
sys@ORCL> select name,completion_time from v$archived_log where completion_time > sysdate-2;
NAME COMPLETIO
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000042_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000043_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000044_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000045_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000046_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000048_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000049_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000050_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000051_0776788597.0001 27-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000052_0776788597.0001 28-MAR-12
C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000053_0776788597.0001 28-MAR-12
11 rows selected. -
ORA-3136 error in alert log file(EBS R12)
I have found an ORA-3136 error in the alert log of an EBS R12 instance. Can anyone help with this error? The alert log continuously reports it. Below is the error I found.
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.7.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
Time: 07-NOV-2013 18:33:49
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12606
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=172.17.100.108)(PORT=60495))
WARNING: inbound connection timed out (ORA-3136)
Does this affect the functionality of the application?
Do you get any errors when opening forms or self-service pages?
Please see:
ORA-01403 and FRM-40735 error occur while opening any forms (Doc ID 1358117.1)
Troubleshooting ORA-3135/ORA-3136 Connection Timeouts Errors - Database Diagnostics (Doc ID 730066.1)
ORA-3136/TNS-12535 or Hang Reported for Connections Across Firewall (Doc ID 974783.1)
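ORA-3136 is tied to the inbound connect timeout (60 seconds by default from 10.2 onwards). If the cause is merely slow clients or network latency rather than a fault, one common mitigation, sketched here with example values only, is to raise the timeouts on the database tier:

```
# $TNS_ADMIN/sqlnet.ora : seconds a client may take to complete authentication
SQLNET.INBOUND_CONNECT_TIMEOUT = 120

# $TNS_ADMIN/listener.ora : seconds a client may take to complete the connect
# request; the suffix must match your listener name (here assumed LISTENER)
INBOUND_CONNECT_TIMEOUT_LISTENER = 110
```

Raising the values hides rather than fixes a genuine network problem, so the MOS notes above are still the place to diagnose the root cause.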
Thanks,
Hussein -
Urgent: Huge diff in total redo log size and archive log size
Dear DBAs
I have a concern regarding size of redo log and archive log generated.
Is the equation below correct?
total size of redo generated by all sessions = total size of archive log files generated
I am experiencing a situation where, when I look at the total size of redo generated by all the sessions and the size of archive logs generated, there is a huge difference.
My total all-session redo size is 780 MB, while my archive log directory has consumed 23 GB.
Before I started measuring, I cleared the archive directory and began monitoring from a specific time.
Environment: Oracle 9i Release 2
How I tracked the sizing information is below:
Log on as the SYS user and run the following statements:
DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
CREATE TABLE REDOSTAT (
AUDSID NUMBER,
SID NUMBER,
SERIAL# NUMBER,
SESSION_ID CHAR(27 BYTE),
STATUS VARCHAR2(8 BYTE),
DB_USERNAME VARCHAR2(30 BYTE),
SCHEMANAME VARCHAR2(30 BYTE),
OSUSER VARCHAR2(30 BYTE),
PROCESS VARCHAR2(12 BYTE),
MACHINE VARCHAR2(64 BYTE),
TERMINAL VARCHAR2(16 BYTE),
PROGRAM VARCHAR2(64 BYTE),
DBCONN_TYPE VARCHAR2(10 BYTE),
LOGON_TIME DATE,
LOGOUT_TIME DATE,
REDO_SIZE NUMBER
)
TABLESPACE SYSTEM
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
GRANT SELECT ON REDOSTAT TO PUBLIC;
CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
BEFORE LOGOFF
ON DATABASE
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO SYS.REDOSTAT
(AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
WHERE
A.SID = B.SID
AND
B.STATISTIC# = C.STATISTIC#
AND
C.NAME = 'redo size'
AND
A.AUDSID = sys_context ('USERENV', 'SESSIONID');
COMMIT;
END TR_SESS_LOGOFF;
/
Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size. This is at a time when no other user is logged in except myself.
Is there anything wrong with the query for collecting redo information, or are there some hidden processes which don't provide redo information on a session basis?
I have seen a similar implementation to the above at many sites.
Kindly provide a mechanism by which I can trace which user generated how much redo (or archive log) on a session basis. I want to track which users/processes are causing high redo generation.
If I don't find a solution I will raise an SR with Oracle.
Thanks
[V]
You can query v$sess_io, column block_changes, to find out which session is generating how much redo.
The following query gives you the session redo statistics:
select a.sid,b.name,sum(a.value) from v$sesstat a,v$statname b
where a.statistic# = b.statistic#
and b.name like '%redo%'
and a.value > 0
group by a.sid,b.name
If you want, you can only look for redo size for all the current sessions.
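One hedged cross-check worth adding: compare the instance-wide redo counter with the per-session sum. Redo generated by background processes and recursive operations shows up in V$SYSSTAT but never fires a logoff trigger, which can explain much of the gap:

```sql
-- Instance-wide redo since startup; if this far exceeds the sum captured
-- by the logoff trigger, the difference comes from background/recursive
-- work and from sessions that have not logged off yet.
SELECT name, value AS redo_bytes
FROM   v$sysstat
WHERE  name = 'redo size';
```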
Jaffar -
Generating lots of archive logs
Hi Friends,
We have an EBS 11i on AIX 5L which has just been set up and is ready for UAT, but the AppsDBA/Functional Consultant who did it is no longer around to answer questions. I just noticed that there are archive logs generated every day, like 30 logs, when in fact the application is not being used. Are there concurrent programs
that have been set up to update data in a background process, like recursive updating, which is not really necessary? How do I check if there are updates being done?
Thanks a lot
Do not stop this concurrent program, as it is used to synchronize the Workflow local tables with the user and role information stored in the product application tables until each affected product performs the synchronization automatically.
More details can be found in the following note:
Note: 171703.1 - 11.5.x: Implementing Oracle Workflow Directory Service Synchronization
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=171703.1
Did you check the total size of the log files? I believe you should not be worried until the system is delivered to the users; you can then monitor the number of log files generated daily and, based on that, start your investigation. -
Hi Guys,
Can anyone advise on the syntax to perform an RMAN backup of the archive logs generated in the last 2 days?
Should it be 1 or 2?
thanks!
1. BACKUP ARCHIVELOG UNTIL TIME 'SYSDATE-2';
2. BACKUP ARCHIVELOG FROM TIME 'SYSDATE-2';
What prevents you from trying both?
I'm not trying to be difficult here but why take the time to ask people in a forum, not even supplying a version number, and not just find out?
It took me less than 60 seconds to cut-and-paste both of your command lines into RMAN and look at the output.
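For the record, hedged on standard RMAN semantics rather than the (unstated) version in use, the two clauses select opposite windows, and option 2 is the one that matches "archive logs generated in the last 2 days":

```sql
-- Backs up archived logs completed BEFORE SYSDATE-2 (older than 2 days):
BACKUP ARCHIVELOG UNTIL TIME 'SYSDATE-2';

-- Backs up archived logs generated SINCE SYSDATE-2 (the last 2 days):
BACKUP ARCHIVELOG FROM TIME 'SYSDATE-2';
```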
Edited by: damorgan on Jan 19, 2013 4:11 PM -
Question about only new archive logs backed up in backup
Hi,
We are taking two online backups daily. We are running the database in ARCHIVELOG mode, configured in PRIMARY and PHYSICAL STANDBY roles. Until now, we were including all archive logs in the backup, but it was causing a problem of heavy disk space utilization.
So, based on a search in this forum, I am planning to back up only the new archive logs generated since the last backup, using the following command:
BACKUP ARCHIVELOG all not backed up 1 times format '$dir/archivelogs_%s_%t' FORCE;
I am not sure how this impacts restore and recovery when we take only the new archive logs in the backup.
We restore the database and then always perform incomplete recovery up to the latest SCN captured in the backup, using the following commands:
RESTORE DATABASE;
RECOVER DATABASE UNTIL SCN $BACKUP_LAST_SCN;
Do you see any problem/risk of implementing this solution going ahead?
Please provide your thoughts/inputs on this.
Thanks.
Shardul
Hi,
We are not deleting archive logs from the actual location after backup. We keep the latest 6 days of archive logs at the actual location. But here we are planning to put only new archive logs, which were not yet backed up, in the backup image, due to the disk size problem.
For your reference, below are our database backup RMAN commands. We are taking a full database backup.
run {
ALLOCATE CHANNEL C1 TYPE DISK;
delete noprompt archivelog all completed before 'sysdate-5';
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP INCREMENTAL LEVEL=0 CUMULATIVE format '$dir/level0_%u' DATABASE include current controlfile
for standby force;
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP ARCHIVELOG all not backed up 1 times format '$dir/archivelogs_%s_%t' FORCE;
BACKUP CURRENT CONTROLFILE format '$dir/control_primary' FORCE;
Then with this policy, do you see any problem when we restore the database as PRIMARY or PHYSICAL STANDBY on a server? We are using Oracle 10.2.0.3. -
RMAN Alert Log Message: ALTER SYSTEM ARCHIVE LOG
Created a new Database on Oracle 10.2.0.4 and now seeing "ALTER SYSTEM ARCHIVE LOG" in the Alert Log only when the online RMAN backup runs:
Wed Aug 26 21:52:03 2009
ALTER SYSTEM ARCHIVE LOG
Wed Aug 26 21:52:03 2009
Thread 1 advanced to log sequence 35 (LGWR switch)
Current log# 2 seq# 35 mem# 0: /u01/app/oracle/oradata/aatest/redo02.log
Current log# 2 seq# 35 mem# 1: /u03/oradata/aatest/redo02a.log
Wed Aug 26 21:53:37 2009
ALTER SYSTEM ARCHIVE LOG
Wed Aug 26 21:53:37 2009
Thread 1 advanced to log sequence 36 (LGWR switch)
Current log# 3 seq# 36 mem# 0: /u01/app/oracle/oradata/aatest/redo03.log
Current log# 3 seq# 36 mem# 1: /u03/oradata/aatest/redo03a.log
Wed Aug 26 21:53:40 2009
Starting control autobackup
Control autobackup written to DISK device
handle '/u03/exports/backups/aatest/c-2538018370-20090826-00'
I am not issuing a log switch command. The RMAN commands I am running are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/exports/backups/aatest/%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u03/exports/backups/aatest/%d_%U';
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-2';
I do not see this message on any other 10.2.0.4 instances. Has anyone seen this, and if so, why is it showing in the log?
Thank you,
Curt Swartzlander
There's no problem with the log switch. Please refer to the documentation for more information on the syntax "PLUS ARCHIVELOG":
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup003.htm#sthref377
Adding BACKUP ... PLUS ARCHIVELOG causes RMAN to do the following:
*1. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.*
*2. Runs BACKUP ARCHIVELOG ALL. Note that if backup optimization is enabled, then RMAN skips logs that it has already backed up to the specified device.*
*3. Backs up the rest of the files specified in BACKUP command.*
*4. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.*
*5. Backs up any remaining archived logs generated during the backup.*
This guarantees that datafile backups taken during the command are recoverable to a consistent state. -
Archive Log vs Full Backup Concept
Hi,
I just need some clarification on how backups and archive logs work. Let's say that starting at 1 PM I have archive logs 1, 2, 3, 4, 5, and then I perform a full backup at 6 PM.
Then I resume generating archive logs at 6 PM to get logs 6, 7, 8, 9, 10, and stop at 11 PM.
If my understanding is correct, the archive logs should allow me to restore Oracle to a point in time anywhere between 1 PM and 11 PM. But if I only have the full backup then I can only restore to a single point, which is 6 PM. Is my understanding correct?
Do the archive logs only get applied to the datafiles when the backup occurs, or only when a restore occurs? It doesn't seem like the archive logs get applied on the fly.
Thanks in advance.
thelok wrote:
Thanks for the great explanation! So I can do a point-in-time restore from any time since the datafiles were last written (or from when I have the last set of backed-up datafiles plus the archive logs). From what you are saying, I can force the datafiles to be written from the redo logs (by doing a checkpoint with "ALTER SYSTEM ARCHIVE LOG CURRENT" or "BACKUP DATABASE PLUS ARCHIVELOG"), and then I can delete all the archive logs that have an SCN less than the checkpoint SCN on the datafiles. Is this true? This would be for the purposes of preserving disk space.
Hi,
See this example. I hope this explain your doubt.
# My current date is 06-11-2011 17:15
# I not have backup of this database
# My retention policy is to have 1 backup
# I start listing archive logs.
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies
Key Thrd Seq S Low Time Name
29 1 8 A 29-10-2011 12:01:58 +HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837
30 1 9 A 31-10-2011 23:00:30 +HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025
31 1 10 A 03-11-2011 23:00:23 +HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105
32 1 11 A 04-11-2011 23:28:23 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065
33 1 12 A 05-11-2011 23:28:49 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349
## See, I have archive logs from time "29-10-2011 12:01:58" until "05-11-2011 23:28:49" but I don't have any backup of the database.
# So I perform a backup of the database including archive logs.
RMAN> backup database plus archivelog delete input;
Starting backup at 06-11-2011 17:15:21
## Note above that RMAN forces archiving of the current log; this archive log would be usable only for a previous backup.
## That is not my case... I don't have a backup of the database.
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=159 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=8 recid=29 stamp=766018840
input archive log thread=1 sequence=9 recid=30 stamp=766278027
input archive log thread=1 sequence=10 recid=31 stamp=766366111
input archive log thread=1 sequence=11 recid=32 stamp=766516067
input archive log thread=1 sequence=12 recid=33 stamp=766516350
input archive log thread=1 sequence=13 recid=34 stamp=766516521
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:23
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:15:38
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 tag=TAG20111106T171521 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837 recid=29 stamp=766018840
archive log filename=+HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025 recid=30 stamp=766278027
archive log filename=+HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105 recid=31 stamp=766366111
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065 recid=32 stamp=766516067
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349 recid=33 stamp=766516350
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_13.414.766516521 recid=34 stamp=766516521
Finished backup at 06-11-2011 17:15:38
## RMAN finishes the backup of the archive logs and starts the backup of the database.
## My backup starts at "06-11-2011 17:15:38"
Starting backup at 06-11-2011 17:15:38
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=+HR/dbhr/datafile/system.386.765556627
input datafile fno=00003 name=+HR/dbhr/datafile/sysaux.396.765556627
input datafile fno=00002 name=+HR/dbhr/datafile/undotbs1.393.765556627
input datafile fno=00004 name=+HR/dbhr/datafile/users.397.765557979
input datafile fno=00005 name=+BFILES/dbhr/datafile/bfiles.257.765542997
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:39
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:03
piece handle=+FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539 tag=TAG20111106T171539 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:24
Finished backup at 06-11-2011 17:16:03
## And it finished at "06-11-2011 17:16:03", so I can recover my database from this time.
## I will need the archive logs (transactions) which were generated during the backup of the database.
## Note that during the backup some blocks are copied before others, so the datafile SCNs are in an inconsistent state.
## To make them consistent I need to apply the archive logs, which have all the transactions recorded.
## Starting another backup of archived log generated during backup.
Starting backup at 06-11-2011 17:16:04
## So RMAN automatically forces another "checkpoint" after the backup finishes,
## archiving the current log, because this archive log has all the transactions needed to bring the database to a consistent state.
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=35 stamp=766516564
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:16:05
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:06
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565 tag=TAG20111106T171604 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_14.414.766516565 recid=35 stamp=766516564
Finished backup at 06-11-2011 17:16:06
## Note: I can recover my database from time "06-11-2011 17:16:03" (end of the full backup)
## until "06-11-2011 17:16:04" (last archive log generated); that is my recovery window in this scenario.
## Listing Backup I have:
## Archive Logs in backupset before backup full start - *BP Key: 40*
## Backup Full database in backupset - *BP Key: 41*
## Archive Logs in backupset after backup full stop - *BP Key: 42*
RMAN> list backup;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
40 196.73M DISK 00:00:15 06-11-2011 17:15:37
*BP Key: 40* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171521
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
List of Archived Logs in backup set 40
Thrd Seq Low SCN Low Time Next SCN Next Time
1 8 766216 29-10-2011 12:01:58 855033 31-10-2011 23:00:30
1 9 855033 31-10-2011 23:00:30 896458 03-11-2011 23:00:23
1 10 896458 03-11-2011 23:00:23 937172 04-11-2011 23:28:23
1 11 937172 04-11-2011 23:28:23 976938 05-11-2011 23:28:49
1 12 976938 05-11-2011 23:28:49 1023057 06-11-2011 17:12:28
1 13 1023057 06-11-2011 17:12:28 1023411 06-11-2011 17:15:21
BS Key Type LV Size Device Type Elapsed Time Completion Time
41 Full 565.66M DISK 00:00:18 06-11-2011 17:15:57
*BP Key: 41* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171539
Piece Name: +FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539
List of Datafiles in backup set 41
File LV Type Ckp SCN Ckp Time Name
1 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/system.386.765556627
2 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/undotbs1.393.765556627
3 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/sysaux.396.765556627
4 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/users.397.765557979
5 Full 1023422 06-11-2011 17:15:39 +BFILES/dbhr/datafile/bfiles.257.765542997
BS Key Size Device Type Elapsed Time Completion Time
42 3.00K DISK 00:00:02 06-11-2011 17:16:06
*BP Key: 42* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171604
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565
List of Archived Logs in backup set 42
Thrd Seq Low SCN Low Time Next SCN Next Time
1 14 1023411 06-11-2011 17:15:21 1023433 06-11-2011 17:16:04
## Here, what I am trying to explain makes sense.
## As I don't have a backup of the database older than my last backup, all archive logs generated before my full backup are useless.
## Deleting what is obsolete in my environment, RMAN chooses backupset 40 (i.e. all archive logs generated before my full backup).
RMAN> delete obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type Key Completion Time Filename/Handle
*Backup Set 40* 06-11-2011 17:15:37
Backup Piece 40 06-11-2011 17:15:37 +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
Do you really want to delete the above objects (enter YES or NO)? yes
deleted backup piece
backup piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 recid=40 stamp=766516523
Deleted 1 objects
In the above example, I could have run "delete archivelog all" before starting the backup, because those logs would not be needed, but to show the example I followed this unnecessary way (backup the archive logs and delete after).
Regards,
Levi Pereira
Edited by: Levi Pereira on Nov 7, 2011 1:02 AM -
Cloning ebs r12 on 11gR2 database
Hi all,
our EBS R12 is 12.1.3 and the DB is 11.2.0.2, OS: oul5x64
With EBS R12 12.1.3 and DB 11.1.0.7 I did not have any problems with cloning, but after upgrading the database to 11.2.0.2 the template file was different and the clone failed because it could not find the controlfile.
in this note:Troubleshooting RAC RapidClone issues with Oracle Applications R12 [ID 1303962.1]
UNDER
Create Database Image
+-- Moderator edit - deleted MOS Doc content - pl do NOT post contents of MOS Docs +*
But in the 11gR2 (11.2.0.2) database this template looks like this:
adcldbstage.rman
run {
sql "alter system switch logfile";
backup as copy datafile 1 format '/u05/DB/data/stage/%U_system.328.769213623';
backup as copy datafile 2 format '/u05/DB/data/stage/system.353.769211817';
backup as copy datafile 3 format '/u05/DB/data/stage/system.324.769214385';
backup as copy datafile 407 format '/u05/DB/data/stage/apps_ts_seed.295.769216925';
sql "alter system switch logfile";
So when doing the clone, it always threw an error that it could not find the controlfile... and it did not back up the archive log files, so even after renaming the controlfile to "backup_controlfile.ctl" it could not open because of the missing archive files.
Please help.
Thanks in advance.
Regards,
Thanks Hussein,
it failed at stage # 3
because the template for the image copy did not include any archive log backup, so the copy only contained the database files; that was why it always failed at this stage.
Thanks very much for your help.
Regards,
restore-rac3.log
Recovery Manager: Release 11.2.0.2.0 - Production on Wed Dec 21 14:48:11 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: ASO (DBID=3710756178, not open)
using target database control file instead of recovery catalog
RMAN> host 'echo "In the above output, if RMAN-03002 and RMAN-06054 errors are present, they can be safely ignored." >> /u01/app/oracle/product/11.2.0/dbhome_1/appsutil/out/ENC1_xdba01-db1/restore-rac2.log';
2> alter database open resetlogs;
3> shutdown normal
4> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initENC1.ora.tmp
5> host 'sed "s:db_name=.*:db_name=ENC:g" /u01/app/oracle/product/11.2.0/dbhome_1/dbs/initENC1.ora.tmp > /u01/app/oracle/product/11.2.0/dbhome_1/dbs/initENC1.ora.tmp.delete';
6> host 'cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/initENC1.ora.tmp.delete /u01/app/oracle/product/11.2.0/dbhome_1/dbs/initENC1.ora.tmp';
7> exit
host command complete
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 12/21/2011 14:48:13
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: '+EBSDISK/enc/datafile/system.367.770472175'
Recovery Manager complete. -
Autoconfig fail EBS R12 Apps-Node with EBS 11gR2 RAC
Platform: HPUX IA 64-11.31
DB: 11.2.0.3
Nodes: 2
We were following Metalink note 823587.1 and have successfully converted the single-instance database of EBS R12 to a 2-node RAC. When we try to do the steps of Section 3.8 of the note, autoconfig fails with an error. See the below error message from the "adconfig.log" file:
cat /etc/hosts
=========
127.0.0.1 localhost.localdomain localhost
#Public IP
172.16.101.23 ts1db1.bukhatir.ae ts1db1
172.16.101.24 ts1db2.bukhatir.ae ts1db2
#Vip
172.16.101.44 ts1_vip1.bukhatir.ae ts1_vip1
172.16.101.45 ts1_vip2.bukhatir.ae ts1_vip2
#inerconnect
10.0.0.1 ts1_prv1.bukhatir.ae ts1_prv1
10.0.0.2 ts1_prv2.bukhatir.ae ts1_prv2
172.16.101.20 ts1apps1.bukhatir.ae ts1apps1
172.16.101.21 ts1apps2.bukhatir.ae ts1apps2
=========
Generate Tns Names
Logfile: /locapps1/apps/apps/TEST_ts1apps1/admin/log/04101202/NetServiceHandler.log
Classpath : /ts1apps/apps/apps_st/comn/java/lib/appsborg2.zip:/ts1apps/apps/apps_st/comn/java/classes
Updating s_tnsmode to 'generateTNS'
UpdateContext exited with status: 0
AC-50480: Internal error occurred: java.lang.Exception: Error while generating listener.ora.
Error generating tnsnames.ora from the database, temporary tnsnames.ora will be generated using templates
Instantiating Tools tnsnames.ora
Tools tnsnames.ora instantiated
Web tnsnames.ora instantiated
adgentns.pl exiting with status 2
ERRORCODE = 2 ERRORCODE_END
Executing script in InstantiateFile:
/locapps1/apps/apps/TEST_ts1apps1/admin/install/adgendbc.sh
script returned:
adgendbc.sh started at Tue Apr 10 12:03:56 UAE 2012
SQL*Plus: Release 10.1.0.5.0 - Production on Tue Apr 10 12:03:57 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter value for 1: Enter value for 2: Enter value for 3: Connected.
[ APPS_DATABASE_ID ]
Application Id : 0
Profile Value : TEST
Level Name: SITE
INFO : Updated/created profile option value.
PL/SQL procedure successfully completed.
Commit complete.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
==============================
* * * * DBC PARAMETERS * * * *
==============================
fnd_jdbc_buffer_min=1
fnd_jdbc_buffer_max=5
fnd_jdbc_buffer_decay_interval=300
fnd_jdbc_buffer_decay_size=5
fnd_jdbc_usable_check=false
fnd_jdbc_context_check=true
fnd_jdbc_plsql_reset=false
====================================
* * * * NO CUSTOM PARAMETERS * * * *
====================================
Unique constraint error (00001) is OK if key already exists
Creating the DBC file...
java.sql.SQLException: The Network Adapter could not establish the connection
Database connection to jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip2.bukhatir.ae)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=TEST))) failed
ADD call failed with exit code 1
Updating Server Security Authentication
java.sql.SQLException: Invalid number format for port number
Database connection to jdbc:oracle:thin:@host_name:port_number:database failed
Updating Server Security Authentication failed with exit code 1
Restoring DBC file from backed up location /locapps1/apps/apps/TEST_ts1apps1/appltmp/TXK/TEST_Tue_Apr_10_12_03_2012.dbc
adgendbc.sh ended at Tue Apr 10 12:04:01 UAE 2012
adgendbc.sh exiting with status 1
ERRORCODE = 1 ERRORCODE_END
See the network configuration files from the environment:
===========
Dbhome
===========
/orahome/oradb/app/product/11.2.0.3/network/admin/TEST1_ts1db1/listener.ora
LISTENER_TEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ts1_vip1.bukhatir.ae)(PORT = 1521)(IP = FIRST)))
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ts1db1)(PORT = 1521)(IP = FIRST)))
SID_LIST_LISTENER_TEST =
(SID_LIST =
(SID_DESC = (ORACLE_HOME = /orahome/oradb/app/product/11.2.0.3)(SID_NAME = TEST1))
STARTUP_WAIT_TIME_LISTENER_TEST = 0
CONNECT_TIMEOUT_LISTENER_TEST = 10
TRACE_LEVEL_LISTENER_TEST = OFF
LOG_DIRECTORY_LISTENER_TEST = /orahome/oradb/app/product/11.2.0.3/network/admin
LOG_FILE_LISTENER_TEST = TEST1
TRACE_DIRECTORY_LISTENER_TEST = /orahome/oradb/app/product/11.2.0.3/network/admin
TRACE_FILE_LISTENER_TEST = TEST1
ADMIN_RESTRICTIONS_LISTENER_TEST = ON
SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER_TEST = OFF
IFILE=/orahome/oradb/app/product/11.2.0.3/network/admin/TEST1_ts1db1/listener_ifile.ora
/orahome/oradb/app/product/11.2.0.3/network/admin/TEST1_ts1db1/tnsnames.ora
TEST=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=TEST)
(INSTANCE_NAME=TEST1)
TEST1=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=TEST)
(INSTANCE_NAME=TEST1)
TEST1_FO=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=TEST)
(INSTANCE_NAME=TEST1)
TEST_FO=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=TEST)
(INSTANCE_NAME=TEST1)
TEST1_LOCAL=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
TEST_BALANCE=
(DESCRIPTION=
(ADDRESS_LIST=
(LOAD_BALANCE=YES)
(FAILOVER=YES)
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=TEST)
TEST_REMOTE=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
#TEST_REMOTE=
# (DESCRIPTION=
# (ADDRESS_LIST=
# (ADDRESS=(PROTOCOL=tcp)(HOST=tsscan.bukhatir.ae)(PORT=1521))
TEST1_local=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1521))
extproc_connection_data =
(DESCRIPTION=
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROCTEST1))
(CONNECT_DATA=
(SID=PLSExtProc)
(PRESENTATION = RO)
IFILE=/orahome/oradb/app/product/11.2.0.3/network/admin/TEST1_ts1db1/TEST1_ts1db1_ifile.ora
===========
Gridhome
===========
/gridhome/oragrid/11.2.0/grid/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))) # line added by Agent
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))) # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent
/gridhome/oragrid/11.2.0/grid/network/admin/endpoints_listener.ora
LISTENER_TS1DB1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ts1_vip1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.101.23)(PORT=1521)(IP=FIRST)))) # line added by Agent
===========
Listener EBShome
===========
cd $TNS_ADMIN
/locapps1/apps/apps/TEST_ts1apps1/ora/10.1.2/network/admin/listener.ora
APPS_TEST =
(ADDRESS_LIST =
(ADDRESS= (PROTOCOL= TCP)(Host= ts1apps1)(Port= 1629))
SID_LIST_APPS_TEST =
(SID_LIST =
( SID_DESC = ( SID_NAME = FNDSM )
( ORACLE_HOME = /ts1apps/apps/tech_st/10.1.2 )
( PROGRAM = /ts1apps/apps/apps_st/appl/fnd/12.0.0/bin/FNDSM )
( envs='MYAPPSORA=/ts1apps/apps/apps_st/appl/APPSTEST_ts1apps1.env,PATH=/usr/bin:/usr/ccs/bin:/bin,FNDSM_SCRIPT=/locapps1/apps/apps/TEST_ts1apps1/admin/scripts/gsmstart.sh' )
( SID_DESC = ( SID_NAME = FNDFS )
( ORACLE_HOME = /ts1apps/apps/tech_st/10.1.2 )
( PROGRAM = /ts1apps/apps/apps_st/appl/fnd/12.0.0/bin/FNDFS )
( envs='EPC_DISABLED=TRUE,NLS_LANG=American_America.AL32UTF8,LD_LIBRARY_PATH=/ts1apps/apps/tech_st/10.1.2/lib32:/ts1apps/apps/tech_st/10.1.2/lib:/ts1apps/apps/tech_st/10.1.2/jdk/jre/lib/IA64N:/ts1apps/apps/tech_st/10.1.2/jdk/jre/lib/IA64N/server:/ts1apps/apps/apps_st/appl/sht/12.0.0/lib,SHLIB_PATH=/ts1apps/apps/tech_st/10.1.2/lib32:/ts1apps/apps/tech_st/10.1.2/lib:/ts1apps/apps/tech_st/10.1.2/jdk/jre/lib/IA64N:/ts1apps/apps/tech_st/10.1.2/jdk/jre/lib/IA64N/server:/ts1apps/apps/apps_st/appl/sht/12.0.0/lib,LIBPATH=/ts1apps/apps/tech_st/10.1.2/lib32:/ts1apps/apps/tech_st/10.1.2/lib:/ts1apps/apps/tech_st/10.1.2/jdk/jre/lib/IA64N:/ts1apps/apps/tech_st/10.1.2/jdk/jre/lib/IA64N/server:/ts1apps/apps/apps_st/appl/sht/12.0.0/lib,APPLFSTT=TEST_BALANCE;TEST;TEST_FO,APPLFSWD=/locapps1/apps/apps/TEST_ts1apps1/appl/admin;/locapps1/apps/apps/TEST_ts1apps1/appltmp;/ts1apps/apps/apps_st/comn/webapps/oacore/html/oam/nonUix/launchMode/restricted' )
STARTUP_WAIT_TIME_APPS_TEST = 0
CONNECT_TIMEOUT_APPS_TEST = 10
TRACE_LEVEL_APPS_TEST = OFF
LOG_DIRECTORY_APPS_TEST = /locapps1/apps/apps/TEST_ts1apps1/logs/ora/10.1.2/network
LOG_FILE_APPS_TEST = APPS_TEST
TRACE_DIRECTORY_APPS_TEST = /locapps1/apps/apps/TEST_ts1apps1/logs/ora/10.1.2/network
TRACE_FILE_APPS_TEST = APPS_TEST
ADMIN_RESTRICTIONS_APPS_TEST = ON
IFILE = /locapps1/apps/apps/TEST_ts1apps1/ora/10.1.2/network/admin/TEST_ts1apps1_listener_ifile.ora
SUBSCRIBE_FOR_NODE_DOWN_EVENT_APPS_TEST = OFF
/locapps1/apps/apps/TEST_ts1apps1/ora/10.1.2/network/admin/tnsnames.ora
TEST = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1db1)(PORT=1521))
(CONNECT_DATA=(SID=TEST1))
# Net8 definitions for FNDFS and FNDSM on the HTTP server node - ts1apps1
FNDFS_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
# For when the profile FS_SVC_PREFIX is set these entries will be used
FNDFS_TEST1_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_TEST1_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDSM_ts1apps1_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
FNDSM_ts1apps1.bukhatir.ae_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
# Net8 definitions for FNDFS and FNDSM on the forms server node - ts1apps1
FNDFS_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
# For when the profile FS_SVC_PREFIX is set these entries will be used
FNDFS_TEST1_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_TEST1_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDSM_ts1apps1_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
FNDSM_ts1apps1.bukhatir.ae_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
# Net8 definitions for FNDFS and FNDSM on the administration server node - ts1apps1
FNDFS_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
# For when the profile FS_SVC_PREFIX is set these entries will be used
FNDFS_TEST1_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_TEST1_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDSM_ts1apps1_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
FNDSM_ts1apps1.bukhatir.ae_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
# Net8 definitions for FNDFS and FNDSM on the concurrent processing server node - ts1apps1
FNDFS_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
# For when the profile FS_SVC_PREFIX is set these entries will be used
FNDFS_TEST1_ts1apps1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDFS_TEST1_ts1apps1.bukhatir.ae = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDFS))
FNDSM_ts1apps1_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
FNDSM_ts1apps1.bukhatir.ae_TEST1 = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=ts1apps1)(PORT=1629))
(CONNECT_DATA=(SID=FNDSM))
IFILE=/locapps1/apps/apps/TEST_ts1apps1/ora/10.1.2/network/admin/TEST_ts1apps1_ifile.ora
Yes, the following message is also reported in adconfig.log:
Unique constraint error (00001) is OK if key already exists
Creating the DBC file...
java.sql.SQLException: The Network Adapter could not establish the connection
Database connection to jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip2.bukhatir.ae)(PORT=1523))(ADDRESS=(PROTOCOL=tcp)(HOST=ts1_vip1.bukhatir.ae)(PORT=1523)))(CONNECT_DATA=(SERVICE_NAME=TEST))) failed
ADD call failed with exit code 1
Updating Server Security Authentication
java.sql.SQLException: Invalid number format for port number
Database connection to jdbc:oracle:thin:@host_name:port_number:database failed
Updating Server Security Authentication failed with exit code 1
Restoring DBC file from backed up location /locapps1/apps/apps/TEST_ts1apps1/appltmp/TXK/TEST_Mon_Apr_16_13_13_2012.dbc
adgendbc.sh ended at Mon Apr 16 13:13:46 UAE 2012
adgendbc.sh exiting with status 1
ERRORCODE = 1 ERRORCODE_END
.end std out.
.end err out.
Did you see the note "Autoconfig has failed on Apps tier on adgendbc.cmd with error: ADD call failed with exit code 1, UPDATE call failed with exit code 1" [ID 359739.1]?
But we have the file version:
$Header: AdminAppServer.java 120.11.12010000.6 2010/04/13 15:24:03 fskinner ship $
-
Too many archive logs getting generated on 11.1.0.7
I can see heavy archive log generation on the PROD instance although there is not much activity on PROD.
This is a fresh instance that went live a month ago, and archive logging was enabled a few weeks ago, but
I can see about 20 to 23 GB of archive logs generated daily, although the database size is only around 90 GB.
I raised an SR; they told me the database is unable to purge statistics from the SYSAUX tablespace.
They asked me to run some queries and then run this:
exec dbms_stats.purge_stats(sysdate - 50);
which was running for long hours and failing because of insufficient space,
although the retention policy is 31 days:
SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
31
History is available for more than 90 days:
SQL> select dbms_stats.get_stats_history_availability from dual;
GET_STATS_HISTORY_AVAILABILITY
01-APR-13 11.00.07.250483000 AM +05:30
I was asked to apply patch 12683802, which I applied on the DEV instance,
but I can still see many archive logs being generated although there is no activity on DEV.
Now when I run this script little by little:
exec dbms_stats.purge_stats(sysdate - 50);
it purges, but it takes ages and SYSAUX keeps filling up; the current DEV SYSAUX has 3 datafiles of around 4000 MB each.
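One way to make the purge tractable (a sketch, not from the SR — the loop structure is an assumption): walk the history forward a day at a time, so each call to dbms_stats.purge_stats deletes only a bounded slice of the WRI$_OPTSTAT_* rows and generates a bounded amount of undo and redo per run:

```sql
-- Hedged sketch: purge statistics history one day at a time instead of
-- 50 days in a single call. purge_stats drops everything older than the
-- timestamp passed to it, so advancing the cutoff daily bounds each run.
DECLARE
  l_oldest DATE;
BEGIN
  -- oldest history currently kept (01-APR-13 in the output above)
  SELECT TRUNC(CAST(dbms_stats.get_stats_history_availability AS DATE))
    INTO l_oldest
    FROM dual;
  -- advance one day per iteration, up to the 31-day retention boundary
  WHILE l_oldest < TRUNC(SYSDATE) - 31 LOOP
    dbms_stats.purge_stats(l_oldest + 1);
    l_oldest := l_oldest + 1;
  END LOOP;
END;
/
```

Each purge_stats call commits internally, so progress is preserved even if a later iteration runs out of space.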
-- before applying patch
SQL> select trunc(first_time) on_date,
2 thread# thread,
3 min(sequence#) min_sequence,
4 max(sequence#) max_sequence,
5 max(sequence#) - min(sequence#) nos_archives,
6 (max(sequence#) - min(sequence#)) * log_avg_mb req_space_mb
7 from v$log_history,
8 (select avg(bytes/1024/1024) log_avg_mb
9 from v$log)
10 group by trunc(first_time), thread#, log_avg_mb
11 order by on_date
12 /
ON_DATE THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
24-JUN-13 1 1 3 2 2000
25-JUN-13 1 4 17 13 13000
26-JUN-13 1 18 30 12 12000
27-JUN-13 1 31 43 12 12000
28-JUN-13 1 44 51 7 7000
29-JUN-13 1 52 64 12 12000
30-JUN-13 1 65 77 12 12000
01-JUL-13 1 78 88 10 10000
-- after applying patch
ON_DATE THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
21-JUN-13 1 1 5 4 4000
22-JUN-13 1 6 20 14 14000
23-JUN-13 1 21 35 14 14000
24-JUN-13 1 36 85 49 49000
25-JUN-13 1 86 111 25 25000
26-JUN-13 1 112 127 15 15000
27-JUN-13 1 128 134 6 6000
28-JUN-13 1 135 143 8 8000
29-JUN-13 1 144 151 7 7000
30-JUN-13 1 152 158 6 6000
01-JUL-13 1 159 163 4 4000
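As a cross-check on the figures above (a sketch; not part of the original post): the query used here estimates space as sequence deltas times the average online log size, so partially filled, force-switched logs are overcounted. v$archived_log records the actual size of each generated archive:

```sql
-- Hedged sketch: actual archived-log volume per day from v$archived_log.
-- blocks * block_size is the real size of each archive, so this does not
-- overcount logs that were switched before they filled.
SELECT TRUNC(completion_time)                          AS on_date,
       thread#,
       COUNT(*)                                        AS nos_archives,
       ROUND(SUM(blocks * block_size) / 1024 / 1024)   AS actual_mb
FROM   v$archived_log
GROUP  BY TRUNC(completion_time), thread#
ORDER  BY on_date;
```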
The above results, before and after, are taken from TEST and DEV, which are cloned from the PROD instance; the patch is applied only on DEV.
Here are the environment details:
EBS: 12.1.3
Database: 11.1.0.7
OS: RHEL 5.6
I am still not satisfied and want to know if any one of you has a solution for this.
Please help.
Zavi
Hi Amogh,
As support also said to run LogMiner, I have run it; here is the output.
I followed Note ID 1504755.1.
------------------------------log miner output for archs of 02.07.13 on PROD------------------------
-- following logs
Jul 2 10:59 archive_PROD_1_1446_807549584.arc
Jul 2 11:05 archive_PROD_1_1447_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1446_807549584.arc', -
OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1447_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
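For completeness (a sketch of the step the post omits; the option used is an assumption based on the usual LogMiner flow when mining a database's own logs): between ADD_LOGFILE and querying v$logmnr_contents the session must be started, and it should be ended afterwards:

```sql
-- Hedged sketch: start the LogMiner session before querying v$logmnr_contents.
-- DICT_FROM_ONLINE_CATALOG resolves object names from the current data
-- dictionary, which works when the logs come from this same database.
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- ... query v$logmnr_contents while the session is open ...

-- End the session to release the memory LogMiner holds:
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```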
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION SEG_OWNER SEG_NAME COUNT(*)
INSERT APPLSYS FND_LOGINS 47
UPDATE PO RCV_TRANSACTIONS_INTERFACE 2
INSERT PO RCV_TRANSACTIONS 3
UNSUPPORTED INV MTL_SUPPLY 6
DELETE PO RCV_SUPPLY 3
UPDATE CSI CSI_ITEM_INSTANCES 3
UPDATE APPLSYS FND_CONC_RELEASE_CLASSES 17
INSERT JA JAI_RTP_POPULATE_T 3
INSERT GL GL_CODE_COMBINATIONS 1
UNSUPPORTED JA JAI_AP_TDS_INV_TAXES 7
UNSUPPORTED AP AP_INVOICE_LINES_ALL 8
DELETE ZX ZX_TRX_HEADERS_GT 4
INSERT INV MTL_ITEM_CATEGORIES 3
INSERT XLA XLA_AE_HEADERS_GT 3
UPDATE XLA XLA_AE_HEADERS_GT 3
UPDATE ENI DR$ENI_DEN_HRCHY_PAR_IM1$R 4
UPDATE APPLSYS FND_USER_DESKTOP_OBJECTS 10
INSERT APPLSYS FND_APPL_SESSIONS 3
UPDATE XLA XLA_TRANSFER_LOGS 2
INSERT GL GL_JE_HEADERS 2
DELETE GL GL_INTERFACE_CONTROL 1
DELETE GL GL_INTERFACE 3
INSERT CE CE_SECURITY_PROFILES_GT 1
INSERT PA PA_PROJECTS_FOR_ACCUM 8
INSERT PA PA_PJM_REQ_COMMITMENTS_TMP 1162
DELETE PA PA_PROJECT_ACCUM_COMMITMENTS 162
UPDATE PA PA_TXN_ACCUM 1749
UPDATE PA PA_RESOURCE_LIST_ASSIGNMENTS 13
INSERT ENI ENI_OLTP_ITEM_STAR 1
START 561
COMMIT 934
INSERT ICX ICX_SESSION_ATTRIBUTES 45
INSERT INV MTL_SUPPLY 8
UPDATE INV MTL_MATERIAL_TRANSACTIONS_TEMP 11
INSERT CSI CSI_TRANSACTIONS 3
INSERT CSI CSI_I_VERSION_LABELS 1
INSERT CSI CSI_I_VERSION_LABELS_H 1
INSERT JA JAI_RTP_TRANS_T 3
INSERT JA JAI_AP_INVOICE_LINES 1
UNSUPPORTED AP AP_INVOICE_DISTRIBUTIONS_ALL 9
DELETE BOM BOM_RESOURCE_CHANGES 2
ROLLBACK 3
INSERT XLA XLA_TRANSACTION_ENTITIES,AP 2
INSERT ENI MLOG$_ENI_OLTP_ITEM_STAR 7
UNSUPPORTED XLA XLA_TRANSACTION_ENTITIES,AP 4
INSERT XLA XLA_DISTRIBUTION_LINKS,AP 6
UPDATE QA QA_CHARS 1
UPDATE MRP MRP_SCHEDULE_DATES 7
UPDATE ENI ENI_DENORM_HIERARCHIES 4
UNSUPPORTED SYS SEG$ 1
INSERT INV MTL_TRANSACTION_ACCOUNTS 4
UPDATE APPLSYS FND_USER_PREFERENCES 5
DELETE SYS WRI$_OPTSTAT_HISTHEAD_HISTORY 235644
INSERT BNE BNE_DOC_USER_PARAMS 1
INSERT GL GL_INTERFACE 6
INSERT GL GL_JE_LINES 4
INSERT GL GL_JE_SEGMENT_VALUES 2
INSERT GL GL_IMPORT_REFERENCES 4
UPDATE PA PA_MAPPABLE_TXNS_TMP 3
UPDATE PA PA_PROJECT_ACCUM_COMMITMENTS 95
INSERT INV MLOG$_MTL_SYSTEM_ITEMS_B 1
INSERT INV MTL_SYSTEM_ITEMS_TL 1
UPDATE APPLSYS FND_CONFLICTS_DOMAIN 6040
INSERT APPLSYS MO_GLOB_ORG_ACCESS_TMP 97
UPDATE APPLSYS FND_CONCURRENT_QUEUES 117
UNSUPPORTED JA JAI_RCV_LINES 3
INSERT INV MLOG$_MTL_MATERIAL_TRANSAC 21
INSERT CSI CSI_ITEM_INSTANCES 1
INSERT JA JAI_AP_TDS_INV_TAXES 2
INSERT BOM BOM_RES_INSTANCE_CHANGES 2
INSERT PA PA_TXN_INTERFACE_AUDIT_ALL 4
INSERT PA PA_EXPENDITURE_COMMENTS 2
DELETE PA PA_TRANSACTION_XFACE_CTRL_ALL 1
INSERT AP AP_LINE_TEMP_GT 3
UPDATE ENI ENI_OLTP_ITEM_STAR 3
INSERT XLA XLA_EVENTS_GT 3
UPDATE XLA XLA_EVENTS_GT 3
UPDATE XLA XLA_AE_HEADERS,AP 8
DELETE XLA XLA_VALIDATION_LINES_GT 2
INSERT INV MTL_TXN_COST_DET_INTERFACE 2
UPDATE INV MTL_CST_TXN_COST_DETAILS 2
DELETE INV MTL_TXN_COST_DET_INTERFACE 2
UNSUPPORTED SYS HISTGRM$ 16
UPDATE INV MTL_MATERIAL_TRANSACTIONS 6
INSERT XLA XLA_EVENTS,CST 3
DELETE BNE BNE_DOC_ACTIONS 1
INSERT GL GL_INTERFACE_CONTROL 2
INSERT XLA XLA_TB_WORK_UNITS 1
UPDATE ZX ZX_TRX_HEADERS_GT 67
DDL SYS SYS_TEMP_0FD9D6611_EC264F91 1
DELETE PA PA_TXN_ACCUM_DETAILS 1327
UNSUPPORTED PA PA_TXN_ACCUM 523
INSERT PA PA_MAPPABLE_TXNS_TMP 2
DELETE PA PA_RESOURCE_LIST_PARENTS_TMP 2
DELETE PA PA_PROJECTS_FOR_ACCUM 17
UPDATE SYS SEQ$ 224
DELETE APPLSYS WF_DEFERRED 5
UPDATE APPLSYS FND_CONC_PROG_ONSITE_INFO 51
INSERT APPLSYS FND_CONCURRENT_REQUESTS 46
INSERT PO PO_SESSION_GT 7
INSERT INV MTL_MATERIAL_TRANSACTIONS_TEMP 5
INSERT INV MTL_ONHAND_QUANTITIES_DETAIL 3
UPDATE SYS SYS_FBA_BARRIERSCN 2
UPDATE MRP MRP_MESSAGES_TMP 29
DELETE MRP MRP_MESSAGES_TMP 29
UPDATE AP AP_INVOICES_ALL 15
UPDATE JA JAI_AP_TDS_INV_TAXES 43
INSERT BOM MLOG$_BOM_RESOURCE_CHANGES 4
UNSUPPORTED PA PA_TRANSACTION_INTERFACE_ALL 2
INSERT PA PA_EXPENDITURE_GROUPS_ALL 1
INSERT PA PA_EXPENDITURE_ITEMS_ALL 2
INSERT AP AP_INVOICE_LINES_ALL 3
INSERT APPLSYS FND_LOG_MESSAGES 29
INSERT ZX ZX_ITM_DISTRIBUTIONS_GT 71
UNSUPPORTED ZX ZX_TRX_HEADERS_GT 5
INSERT XLA XLA_EVENTS,AP 3
UPDATE AP AP_PREPAY_HISTORY_ALL 3
UPDATE AP AP_PREPAY_APP_DISTS 1
INSERT INV MLOG$_MTL_ITEM_CATEGORIES 3
INSERT XLA XLA_AE_LINES,AP 6
DELETE XLA XLA_AE_HEADERS_GT 1
UPDATE SYS HIST_HEAD$ 76
INSERT SYS WRI$_OPTSTAT_IND_HISTORY 10
UNSUPPORTED ENI DR$ENI_DEN_HRCHY_PAR_IM1$R 3
UPDATE INV MTL_CST_ACTUAL_COST_DETAILS 3
INSERT XLA XLA_TRANSACTION_ENTITIES,CST 3
INSERT ICX ICX_TRANSACTIONS 1
UPDATE GL GL_INTERFACE 4
UPDATE PO PO_REQ_DISTRIBUTIONS_ALL 66
UNSUPPORTED PO PO_REQUISITION_HEADERS_ALL 1
UNSUPPORTED PO PO_REQUISITION_LINES_ALL 66
DELETE PA PA_COMMITMENT_TXNS 1461
UNSUPPORTED PA PA_MAPPABLE_TXNS_TMP 3
INSERT PA PA_PROJECT_ACCUM_COMMITMENTS 197
INSERT BOM CST_ITEM_COSTS 1
INSERT JA JAI_RCV_TRANSACTIONS 3
UPDATE PO RCV_SUPPLY 3
UPDATE INV MTL_SUPPLY 11
DELETE INV MTL_SUPPLY 11
UPDATE PO PO_DISTRIBUTIONS_ALL 3
UPDATE PO PO_LINE_LOCATIONS_ALL 9
INSERT INV MLOG$_MTL_ONHAND_QUANTITIE 3
DELETE PO RCV_TRANSACTIONS_INTERFACE 3
INSERT MRP MRP_MESSAGES_TMP 22
INSERT AP AP_INVOICE_DISTRIBUTIONS_ALL 3
UPDATE AP AP_INVOICE_DISTRIBUTIONS_ALL 30
INSERT PA PA_EXPENDITURES_ALL 1
INSERT ZX ZX_TRX_HEADERS_GT 8
UNSUPPORTED ZX ZX_LINES_DET_FACTORS 566
DELETE AP AP_LINE_TEMP_GT 9
UPDATE AP AP_PAYMENT_SCHEDULES_ALL 5
UPDATE ICX ICX_SESSIONS 12
UNSUPPORTED AP AP_INVOICES_ALL 3
INSERT JA JAI_RCV_JOURNAL_ENTRIES 4
UNSUPPORTED INV MTL_TRANSACTIONS_INTERFACE 24
UPDATE MRP MRP_RECOMMENDATIONS 14
INSERT ENI MLOG$_ENI_DENORM_HIERARCHI 16
INSERT SYS WRI$_OPTSTAT_HISTGRM_HISTORY 8
INSERT APPLSYS WF_CONTROL 1
UPDATE BNE BNE_DOC_USER_PARAMS 1
DELETE APPLSYS WF_CONTROL 1
UPDATE GL GL_JE_BATCHES 2
INSERT GL GL_POSTING_INTERIM 1
UNSUPPORTED JA JAI_PO_OSP_LINES 1
INSERT AP AP_INVOICES_ALL 2
INSERT PA PA_COMMITMENT_TXNS_TMP 160
UPDATE PA PA_COMMITMENT_TXNS 1391
DELETE PA PA_MAPPABLE_TXNS_TMP 3
INTERNAL 4906910
UPDATE APPLSYS FND_CONCURRENT_REQUESTS 153
UPDATE JA JAI_RCV_LINES 3
INSERT INV MLOG$_MTL_SUPPLY 30
INSERT CSI CSI_I_PARTIES_H 1
UNSUPPORTED JA JAI_RCV_TRANSACTIONS 7
UPDATE JA JAI_RCV_TRANSACTIONS 24
INSERT MRP MRP_RECOMMENDATIONS 8
UNSUPPORTED AP AP_PAYMENT_SCHEDULES_ALL 4
UPDATE PA PA_TRANSACTION_INTERFACE_ALL 4
INSERT XLA XLA_ACCT_PROG_EVENTS_GT 4
INSERT XLA XLA_AE_LINES_GT 12
UNSUPPORTED XLA XLA_AE_LINES_GT 30
INSERT XLA XLA_AE_HEADERS,AP 3
INSERT XLA XLA_VALIDATION_LINES_GT 3
DELETE XLA XLA_EVENTS_GT 1
UNSUPPORTED XLA XLA_BAL_CONCURRENCY_CONTROL 2
DELETE XLA XLA_BAL_CONCURRENCY_CONTROL 2
INSERT APPLSYS FND_CONC_REQUEST_ARGUMENTS 2
UPDATE QA QA_RESULTS 2
INSERT MRP MRP_SCHEDULE_CONSUMPTIONS 21
INSERT ENI ENI_DENORM_HRCHY_PARENTS 4
UNSUPPORTED ENI ENI_DENORM_HRCHY_PARENTS 4
UPDATE APPLSYS FND_USER 6
INSERT XLA XLA_TRANSFER_LOGS 2
UPDATE GL GL_JE_LINES 4
UPDATE APPLSYS FND_NODES 3
UNSUPPORTED PO PO_REQ_DISTRIBUTIONS_ALL 66
DELETE ZX ZX_ITM_DISTRIBUTIONS_GT 66
INSERT AP AP_PAYMENT_SCHEDULES_ALL 2
INSERT IBY IBY_DOCS_PAYABLE_GT 2
INSERT AP AP_DOC_SEQUENCE_AUDIT 1
INSERT PA PA_COMMITMENT_TXNS 1380
INSERT PA PA_RESOURCE_ACCUM_DETAILS 3
INSERT ENI DR$ENI_DEN_HRCHY_PAR_IM1$I 19
UPDATE PA PA_PROJECT_ACCUM_ACTUALS 12
INSERT EGO EGO_ITEM_TEXT_TL 1
INSERT INV MTL_ITEM_REVISIONS_TL 1
UNSUPPORTED 1720
INSERT APPLSYS FND_CONC_PP_ACTIONS 47
UNSUPPORTED APPLSYS FND_CONCURRENT_PROCESSES 97
DELETE APPLSYS MO_GLOB_ORG_ACCESS_TMP 11
INSERT MRP MRP_RELIEF_INTERFACE 16
UPDATE PO PO_REQUISITION_LINES_ALL 1
INSERT INV MTL_MATERIAL_TRANSACTIONS 5
INSERT CSI CSI_I_PARTIES 1
UNSUPPORTED SYS DBMS_LOCK_ALLOCATED 15
UPDATE PO PO_SESSION_GT 4
INSERT MRP MRP_SCHEDULE_DATES 4
INSERT MRP MLOG$_MRP_SCHEDULE_DATES 15
DELETE MRP MRP_SCHEDULE_DATES 4
DELETE BOM BOM_RES_INSTANCE_CHANGES 2
INSERT PA PA_COST_DISTRIBUTION_LINES_ALL 2
INSERT ZX ZX_TRANSACTION_LINES_GT 138
UNSUPPORTED CSI CSI_ITEM_INSTANCES 2
UNSUPPORTED XLA XLA_EVENTS,AP 6
INSERT XLA XLA_AE_SEGMENT_VALUES 13
UNSUPPORTED SYS TAB$ 15
INSERT SYS WRI$_OPTSTAT_HISTHEAD_HISTORY 81562
DDL SYS SYS_TEMP_0FD9D6610_EC264F91 1
UNSUPPORTED SYS IND$ 10
DELETE APPLSYS FND_CONC_PP_ACTIONS 7
INSERT BNE BNE_DOC_ACTIONS 1
INSERT GL GL_JE_BATCHES 1
INSERT PA PA_TXN_ACCUM_DETAILS 1386
INSERT PA PA_TXN_ACCUM 2
INSERT PA PA_RESOURCE_LIST_PARENTS_TMP 2
UPDATE CTXSYS DR$INDEX 1
INSERT INV MTL_PENDING_ITEM_STATUS 1
DELETE SYS WRI$_OPTSTAT_IND_HISTORY 2130287
UNSUPPORTED APPLSYS FND_CONCURRENT_REQUESTS 49
INSERT PO RCV_TRANSACTIONS_INTERFACE 3
UPDATE APPLSYS FND_CONCURRENT_PROCESSES 40
UPDATE PO RCV_TRANSACTIONS 3
UNSUPPORTED PO PO_HEADERS_ALL 3
INSERT INV MTL_CST_TXN_COST_DETAILS 5
INSERT CSI CSI_ITEM_INSTANCES_H 3
DELETE INV MTL_MATERIAL_TRANSACTIONS_TEMP 5
UPDATE SYS DBMS_LOCK_ALLOCATED 15
DELETE MRP MRP_RECOMMENDATIONS 8
UPDATE AP AP_INVOICE_LINES_ALL 11
INSERT BOM MLOG$_BOM_RES_INSTANCE_CHA 4
INSERT BOM BOM_RESOURCE_CHANGES 2
UPDATE PA PA_TRANSACTION_XFACE_CTRL_ALL 1
INSERT AP AP_PREPAY_HISTORY_ALL 1
INSERT AP AP_PREPAY_APP_DISTS 1
INSERT XLA XLA_EVT_CLASS_ORDERS_GT 4
UPDATE XLA XLA_EVENTS,AP 6
INSERT ENI ENI_DENORM_HIERARCHIES 6
INSERT SYS WRI$_OPTSTAT_TAB_HISTORY 5
UNSUPPORTED XLA XLA_AE_HEADERS_GT 3
UPDATE BNE BNE_DOC_ACTIONS 1
UPDATE GL GL_JE_HEADERS 2
DELETE XLA XLA_TRANSFER_LOGS 1
UNSUPPORTED ZX ZX_TRANSACTION_LINES_GT 264
INSERT PA PA_PJM_PO_COMMITMENTS_TMP 378
UNSUPPORTED INV MTL_SYSTEM_ITEMS_B 2
INSERT INV MTL_ITEM_REVISIONS_B 1
266 rows selected.
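The DELETE counts on SYS.WRI$_OPTSTAT_IND_HISTORY (2,130,287 rows) and WRI$_OPTSTAT_HISTHEAD_HISTORY (235,644 rows) dominate this output, which supports the SYSAUX statistics-history theory. A quick way to confirm how much of SYSAUX the history occupies (a sketch, not from the thread):

```sql
-- Hedged sketch: size the SYSAUX occupants; SM/OPTSTAT is the optimizer
-- statistics history that dbms_stats.purge_stats shrinks.
SELECT occupant_name,
       schema_name,
       ROUND(space_usage_kbytes / 1024) AS mb
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;
```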
------------------------------log miner output for archs of 03.07.13 on PROD------------------------
-- the following log files were mined:
Jul 3 10:56 archive_PROD_1_1469_807549584.arc
Jul 3 10:58 archive_PROD_1_1470_807549584.arc
Jul 3 11:01 archive_PROD_1_1471_807549584.arc
Jul 3 11:03 archive_PROD_1_1472_807549584.arc
Jul 3 11:04 archive_PROD_1_1473_807549584.arc
Jul 3 11:05 archive_PROD_1_1474_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1469_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1470_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1471_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1472_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1473_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1474_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION SEG_OWNER SEG_NAME COUNT(*)
DELETE SYS CCOL$ 6
DELETE SYS SEG$ 3
INSERT APPLSYS FND_LOGINS 108
UPDATE APPLSYS FND_CONC_RELEASE_CLASSES 33
UPDATE PO RCV_TRANSACTIONS_INTERFACE 4
INSERT APPLSYS FND_LOG_TRANSACTION_CONTEXT 1
INSERT PO RCV_TRANSACTIONS 2
UNSUPPORTED INV MTL_SUPPLY 4
INSERT PO RCV_RECEIVING_SUB_LEDGER 4
INSERT GL GL_JE_HEADERS 12
DELETE GL GL_INTERFACE 49
DELETE GL GL_INTERFACE_CONTROL 8
INSERT JA JAI_RTP_POPULATE_T 1
INSERT JA JAI_RGM_TRM_SCHEDULES_T 4
UPDATE JA JAI_RCV_CENVAT_CLAIMS 2
INSERT APPLSYS FND_APPL_SESSIONS 9
UPDATE APPLSYS WF_NOTIFICATIONS 4
UPDATE PO PO_HEADERS_INTERFACE 10
INSERT PO PO_DISTRIBUTIONS_INTERFACE 6
INSERT JA JAI_PO_LINE_LOCATIONS 18
UNSUPPORTED PO PO_LINE_LOCATIONS_ALL 2
UNSUPPORTED PO PO_LINES_ALL 8
DELETE ZX ZX_TRX_HEADERS_GT 14
INSERT PA PA_STRUCTURES_TASKS_TMP 440
INSERT SYS SEQ$ 2
INSERT BNE BNE_DOC_CREATION_PARAMS 9
UPDATE ENI DR$ENI_DEN_HRCHY_PAR_IM1$R 6
INSERT SYS MON_MODS$ 9
UPDATE PA PA_PROJECTS_ALL 4
UPDATE WIP WIP_MOVE_TXN_INTERFACE 1
INSERT CE CE_SECURITY_PROFILES_GT 8
UPDATE WIP WIP_PERIOD_BALANCES 1
INSERT INV MTL_MWB_GTMP 14
UPDATE SYS WRI$_SCH_CONTROL 1
DELETE SYS OBJ$ 51
DDL SYS WRH$_ROWCACHE_SUMMARY 1
DDL SYS WRH$_ACTIVE_SESSION_HISTORY 1
DDL SYS WRH$_SYS_TIME_MODEL 1
INSERT IBY IBY_DOCS_PAYABLE_ALL 6
UPDATE IBY IBY_DOCS_PAYABLE_ALL 36
INSERT GL GL_CODE_COMBINATIONS 3
INSERT XLA XLA_AE_HEADERS_GT 12
UPDATE
-
Create procedure is generating too many archive logs
Hi
The following procedure was run on one of our databases and it hung, since too many archive logs were being generated.
What would be the answer? The DB must remain in ARCHIVELOG mode.
I understand the NOLOGGING concept, but as far as I know it applies to creating tables, views, indexes, and tablespaces. This script creates a procedure.
CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
,P_GRE NUMBER
,P_SDATE VARCHAR2
,P_EDATE VARCHAR2
,P_ssn VARCHAR2
) IS
CURSOR MainCsr IS
SELECT DISTINCT
PPF.NATIONAL_IDENTIFIER SSN
,ppf.full_name FULL_NAME
,ppa.effective_date Pay_date
,ppa.DATE_EARNED period_end
,pet.ELEMENT_NAME
,SUM(TO_NUMBER(prv.result_value)) VALOR
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAf.ASSIGNMENT_ID ASSG_ID
,paf.ORGANIZATION_ID
FROM
pay_element_classifications pec
, pay_element_types_f pet
, pay_input_values_f piv
, pay_run_result_values prv
, pay_run_results prr
, pay_assignment_actions paa
, pay_payroll_actions ppa
, APPS.pay_all_payrolls_f pap
,Per_Assignments_f paf
,per_people_f ppf
WHERE
ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND ppa.payroll_id = pap.payroll_id
AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
AND ppa.payroll_action_id = paa.payroll_action_id
AND paa.action_status = 'C'
AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
AND ppa.action_status = 'C'
--AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
AND paa.assignment_action_id = prr.assignment_action_id
AND prr.run_result_id = prv.run_result_id
AND prv.input_value_id = piv.input_value_id
AND piv.name = 'Pay Value'
AND piv.element_type_id = pet.element_type_id
AND pet.element_type_id = prr.element_type_id
AND pet.classification_id = pec.classification_id
AND pec.non_payments_flag = 'N'
AND prv.result_value <> '0'
--AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
-- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
------------------------------------------------------------------TO get emp.
AND ppf.person_id = paf.person_id
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
------------------------------------------------------------------TO get emp. ASSIGNMENT
--AND paf.assignment_status_type_id NOT IN (7,3)
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
GROUP BY PPF.NATIONAL_IDENTIFIER
,ppf.full_name
,ppa.effective_date
,ppa.DATE_EARNED
,pet.ELEMENT_NAME
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAF.ASSIGNMENT_ID
,paf.ORGANIZATION_ID
BEGIN
DELETE cust.DFC_PAYROLL_DW
WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND tax_unit_id = NVL(p_GRE, tax_unit_id)
AND ssn = NVL(p_ssn, ssn);
COMMIT;
FOR V_REC IN MainCsr LOOP
INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
COMMIT;
END LOOP;
END ;
So, how could I assist our developer with this, so that she can run it again without it generating a ton of logs?
Thanks
Oracle 9.2.0.5
AIX 5.2The amount of redo generated is a direct function of how much data is changing. If you insert 'x' number of rows, you are going to generate 'y' mbytes of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
I would question the <i>performance</i> of the procedure shown ... using a cursor loop with a commit after every row is going to be a drag on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo.
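To illustrate that point (a sketch, under the assumption that the target table and the cursor query stay exactly as posted): the row-by-row loop with a commit per row can be collapsed into one set-based statement. The redo volume is roughly unchanged, since the same rows are inserted, but the per-row commits and SQL/PLSQL context switches disappear:

```sql
-- Hedged sketch: replace the OPEN/FETCH/INSERT/COMMIT loop with one
-- INSERT ... SELECT using the same query that defines MainCsr.
INSERT INTO cust.DFC_PAYROLL_DW
  (SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME,
   ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1,
   VALOR, TAX_UNIT_ID, ASSG_ID, ELEMENT_TYPE_ID, ORGANIZATION_ID)
SELECT ...  -- the SELECT DISTINCT ... GROUP BY ... from MainCsr, verbatim
;
COMMIT;    -- one commit at the end, not one per row
```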