Database generating large number of archive logs

Oracle 11g
Windows Server 2008 R2
My database was working fine, but since last week I have noticed that it is generating a large number of archive logs.
The database size is 30 GB.
Only one tablespace is 16 GB; the other tablespaces are not more than 2 GB each.
I cannot figure out why it is generating so many archive logs. Can anyone help me figure this out?
Last week the only changes I made were:
Drop index
Create index
Create new table from existing table
Nothing else.

Hi,
As you say, the workload has increased. Check when the number of log switches goes high, and take an AWR or Statspack report for that window. Check the DML operations. Use the query below to check the log switches:
spool c:\log_hist.txt
SET PAGESIZE 90
SET LINESIZE 150
set heading on
column "00:00" format 9999
column "01:00" format 9999
column "02:00" format 9999
column "03:00" format 9999
column "04:00" format 9999
column "05:00" format 9999
column "06:00" format 9999
column "07:00" format 9999
column "08:00" format 9999
column "09:00" format 9999
column "10:00" format 9999
column "11:00" format 9999
column "12:00" format 9999
column "13:00" format 9999
column "14:00" format 9999
column "15:00" format 9999
column "16:00" format 9999
column "17:00" format 9999
column "18:00" format 9999
column "19:00" format 9999
column "20:00" format 9999
column "21:00" format 9999
column "22:00" format 9999
column "23:00" format 9999
SELECT * FROM (
SELECT * FROM (
SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0), '99')) "00:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0), '99')) "01:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0), '99')) "02:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0), '99')) "03:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0), '99')) "04:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0), '99')) "05:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0), '99')) "06:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0), '99')) "07:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0), '99')) "08:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0), '99')) "09:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0), '99')) "10:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0), '99')) "11:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0), '99')) "12:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0), '99')) "13:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0), '99')) "14:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0), '99')) "15:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0), '99')) "16:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0), '99')) "17:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0), '99')) "18:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0), '99')) "19:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0), '99')) "20:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0), '99')) "21:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0), '99')) "22:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0), '99')) "23:00"
  FROM V$LOG_HISTORY
  WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
  GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
  ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
  ) WHERE ROWNUM <8;
spool off
One common mistake is leaving debugging enabled. Check whether any debug logging is enabled in the application code (for example, inserting a row for every record processed, for logging or support purposes).
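To narrow down which sessions are producing the redo, a query along these lines can help (a sketch against the standard v$sesstat, v$statname and v$session views; adjust the columns to taste):
SELECT s.sid, s.username, s.program, st.value AS redo_bytes
  FROM v$sesstat st, v$statname sn, v$session s
 WHERE sn.statistic# = st.statistic#
   AND s.sid = st.sid
   AND sn.name = 'redo size'
 ORDER BY st.value DESC;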
Regards
Anand.

Similar Messages

  • When creating a tablespace why should we enable LOGGING when a database is already on ARCHIVE LOG mode

    Question :
    When creating a tablespace why should we enable LOGGING when a database is already on ARCHIVE LOG mode ?
    Example:
    Create Tablespace
    CREATE SMALLFILE TABLESPACE "TEST_DATA"
    LOGGING
    DATAFILE '+DG_TEST_DATA_01(DATAFILE)' SIZE 10G
    AUTOEXTEND ON NEXT  500K MAXSIZE 31000M
    EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO;
LOGGING: generates redo for the creation of tables, indexes and partitions, and for subsequent inserts, so the objects are recoverable.
If we do not enable LOGGING, are those operations not logged and not recoverable? And what does ARCHIVELOG mode do?

What does ARCHIVELOG mode do?
Whenever your database is in archive log mode, Oracle copies each filled redo log file to an archived log, so that the database can be recovered to a consistent state in case of any failure.
Archive logging is essential for production databases, where the loss of a transaction might be fatal.
Why LOGGING?
LOGGING is the safest way to ensure that all the changes made in the tablespace are captured in the redo logs and available for recovery.
It is just a matter of the level at which the logging is defined:
FORCE LOGGING at the database level
LOGGING at the tablespace level
LOGGING at the schema object level
Before the existence of FORCE LOGGING, Oracle provided only the LOGGING and NOLOGGING options. These two options have higher precedence at the schema object level than at the tablespace level; therefore, it was possible to override the LOGGING setting at the tablespace level with a NOLOGGING setting at the schema object level.
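As a concrete illustration of that precedence (a sketch; the index, table and tablespace names are made up for the example):
-- Database-level FORCE LOGGING overrides any NOLOGGING below it
ALTER DATABASE FORCE LOGGING;
-- Without FORCE LOGGING, an object-level NOLOGGING wins over tablespace-level
-- LOGGING for direct-path operations such as index builds
CREATE INDEX test_idx ON test_tab (col1) TABLESPACE test_data NOLOGGING;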

  • OVM 3.0 Database Creating Lots of Archive Logs

    Greetings - ever since we initially installed OVM 3.0 earlier this fall (~October), the OVM database has generated archive logs at a very rapid rate. It continually threatens to fill up our 16 GB filesystem dedicated to archive logs, even after daily backup and purging.
    Our OVM database itself is about 4-6 GB large, and we would need to increase the archive log filesystem to about 20-25 GB in size, which we see as unreasonable for such a small database.
    What is causing OVM to generate so many redo logs? Our best guess is that OVM is continuously gathering guest VM CPU usage on each physical server.
    Is there a way to configure the OVM application in order to reduce the amount of redo/archive logs being created?
    We are currently running 3.0.3, having upgraded each time a 3.0.* patch was released. OVMM running on OEL 6.1, database running on latest HP-UX.

majedian21 wrote:
Greetings - ever since we initially installed OVM 3.0 earlier this fall (~October), the OVM database has generated archive logs at a very rapid rate. It continually threatens to fill up our 16 GB filesystem dedicated to archive logs, even after daily backup and purging.

I would log an SR with Oracle Support for this, so that Development can look at it. It sounds like your environment has lots of VMs running and, yes, usage stats are collected for all of those environments. However, there may be some old data from the previous versions causing more stats to be collected than necessary.

  • 9i Database - When to Backup Archive Logs?

    hi experts,
    I have a 9i db that runs in archivelog mode, NOT using a recovery catalog.
    A full database backup is made by RMAN at 7am while the database is OPEN.
    Incremental backups are made every 3 hours.
    For RMAN to be able to perform a complete recovery, when should I back up the archive logs?
    Thanks, John

    Thanks Khurran and Anvar.
    As I'm new to Oracle (my background is SQL Server), I admit I'm a bit confused about the use of archive logs.
    I had a situation last week where I needed to restore (to another server) from a backup made at month-end. I had the whole database backup, but it was made while the database was open. Therefore I also needed to restore the archive log(s), but did not have them. In the end, I had to restore the database and force it open. It was messy and not at all easy.
    My recovery window is 2 days.
    If I do the following, will I be able to restore to a point in time within the past 2 days?
    - controlfile is set for autobackup
    - run RMAN> backup database plus archivelog; once per day, then run my incremental backup every 3 hours
    Thanks for your opinions. John
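    For reference, the schedule John proposes maps onto RMAN commands roughly like this (a sketch, not a full script):
    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;                       -- once per day
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;   -- every 3 hours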

  • Large number of event Log entries: connection open...

    Hi,
    I am seeing a large number of entries in the event log of the type:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    Is this anything I should be concerned about? I have tried a couple of forum and Google searches, but I don't quite know where to start beyond pasting the first bit of the message. I haven't found anything obvious from those searches.
    DHCP table lists 192.168.1.78 as the desktop PC on which I'm writing this.
    Please could you point me in the direction of any resources that will help me to work out if I should be worried about this?
    A slightly longer extract is shown below:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:11, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] TIME_WAIT/CLOSED ppp0 NAPT)
    21:49:03, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [178.190.63.75:55535] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:00, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [2.96.4.85:23939] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:59, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.144.143.222:21617] CLOSED/TIME_WAIT ppp0 NAPT)
    21:48:58, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28188] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28288] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:18048] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:54199] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.144.91.49:60704] ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:50875] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:45, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:57656] ppp0 NAPT)
    21:48:39, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:56975] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:29, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [79.99.145.46:8368] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:27, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [90.192.249.173:45250] ppp0 NAPT)
    21:48:16, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [212.17.96.246:62447] ppp0 NAPT)
    21:48:10, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [82.16.198.117:49942] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:08, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:04, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [89.153.251.9:53729] TIME_WAIT/CLOSED ppp0 NAPT)
    21:47:54, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:37150] ppp0 NAPT)

    Hi,
    Thank you for the response. I think, though I can't remember for sure, that UPnP was already switched off when I captured that log. Anyway, even if it wasn't, it is now, so I will see what gets captured in my logs.
    I've just had to restart my Home Hub because of other connection issues and I notice that the first few entries are also odd:
    19:35:16, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:04, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:46, 12 Mar.
    IN: BLOCK [12] Spoofing protection (IGMP 86.164.178.188->224.0.0.22 on ppp0)
    19:33:45, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:39, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:33, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:33:29, 12 Mar.
    IN: BLOCK [15] Default policy (UDP 111.252.36.217:26328->86.164.178.188:12708 on ppp0)
    19:33:16, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 193.113.4.153:80->86.164.178.188:49572 on ppp0)
    19:33:14, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:14, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:44266 on ppp0)
    19:33:14, 12 Mar.
    ( 164.240000) CWMP: session completed successfully
    19:33:13, 12 Mar.
    ( 163.700000) CWMP: HTTP authentication success from https://pbthdm.bt.mo
    19:33:05, 12 Mar.
    BLOCKED 106 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 213.1.72.209:80->86.164.178.188:49547 on ppp0)
    19:33:05, 12 Mar.
    BLOCKED 94 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 199.59.148.87:443->86.164.178.188:49531 on ppp0)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:04, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:04, 12 Mar.
    ( 155.110000) CWMP: Server URL: https://pbthdm.bt.mo; Connecting as user: ACS username
    19:33:04, 12 Mar.
    ( 155.090000) CWMP: Session start now. Event code(s): '1 BOOT,4 VALUE CHANGE'
    19:32:59, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:54, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:32:53, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:52, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:51, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:48, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:47, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:46, 12 Mar.
    BLOCKED 4 more packets (because of First packet is Invalid)
    19:32:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49461->199.59.149.232:443 on ppp0)
    19:32:44, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:44, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:43, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49398->193.113.4.153:80 on ppp0)
    19:32:42, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:42, 12 Mar.
    BLOCKED 3 more packets (because of First packet is Invalid)
    19:32:42, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49277->119.254.30.32:443 on ppp0)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:41, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:38, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:36, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:34, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:30, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:47022 on ppp0)
    19:32:30, 12 Mar.
    ( 120.790000) CWMP: session closed due to error: WGET TLS error
    19:32:30, 12 Mar.
    ( 120.140000) NTP synchronization success!
    19:32:30, 12 Mar.
    BLOCKED 1 more packets (because of Default policy)
    19:32:29, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49458->217.41.223.234:80 on ppp0)
    19:32:28, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:26, 12 Mar.
    ( 116.030000) NTP synchronization start
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49442->74.125.141.91:443 on ppp0)
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (TCP 192.168.1.78:49310->204.154.94.81:443 on ppp0)
    19:32:25, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 88.221.94.116:80->86.164.178.188:49863 on ppp0)

  • Recovery of database with loss of archived logs

    Hi,
    I tried restoring the database using standard practice; however, I am encountering errors. It looks like we are missing archive log files, but I am not sure. Could anyone provide more insight into this problem and how to recover the database?
    Here are the steps we did.
    SVRMGR> connect internal
    Connected.
    SVRMGR> startup
    ORACLE instance started.
    Total System Global Area 1117552144 bytes
    Fixed Size 48656 bytes
    Variable Size 291995648 bytes
    Database Buffers 819200000 bytes
    Redo Buffers 6307840 bytes
    Database mounted.
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
    SVRMGR> recover database using backup controlfile until cancel;
    ORA-00279: change 663652622 generated at 02/13/02 06:20:02 needed for thread 1
    ORA-00289: suggestion : /oraclesw8_redolog/ARCHIVE_CP/infr/infr_1_13598.dbf
    ORA-00280: change 663652622 for thread 1 is in sequence #13598
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    Log applied.
    ORA-00279: change 663654936 generated at 02/13/02 07:04:14 needed for thread 1
    ORA-00289: suggestion : /oraclesw8_redolog/ARCHIVE_CP/infr/infr_1_13599.dbf
    ORA-00280: change 663654936 for thread 1 is in sequence #13599
    ORA-00278: log file '/oraclesw8_redolog/ARCHIVE_CP/infr/infr_1_13598.dbf' no longer needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    AUTO
    ORA-00308: cannot open archived log '/oraclesw8_redolog/ARCHIVE_CP/infr/infr_1_13599.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01195: online backup of file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oraclesw8_data00/oradata/infr/infr_system_01.dbf'

    Hi
    It looks like you may have to manually apply the last redo log.
    SVRMGR> SELECT member FROM v$log l, v$logfile f WHERE l.group# = f.group# AND l.status = 'CURRENT';
    Note the path of this log file.
    SVRMGR> recover database using backup controlfile until cancel;
    When you get the prompt you mentioned, supply the log member from the SELECT statement above, giving the full path exactly as it appears in the SELECT output.
    SVRMGR> alter database open resetlogs;
    The system should respond with "Statement processed".
    Good Luck,
    Bob

  • Database backup with lost archive logs

    How can I get a database backup when all the archive logs have been lost?

    here's the full text of the backup:
    Recovery Manager: Release 9.2.0.7.0 - 64bit Production
    Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
    RMAN>
    connected to target database: ESKHDEV1 (DBID=4075873486)
    using target database controlfile instead of recovery catalog
    RMAN>
    echo set on
    RMAN> run {
    2> allocate channel oem_backup_disk1 type disk format '/sbdbarc/backups/eskhdev1/%U' maxpiecesize 2000 M;
    3> backup filesperset = 3 tag 'BACKUP_ESKHDEV1_00_101008022533' database;
    4> backup filesperset = 3 tag 'BACKUP_ESKHDEV1_00_101008022533' archivelog all not backed up;
    5> release channel oem_backup_disk1;
    6> }
    allocated channel: oem_backup_disk1
    channel oem_backup_disk1: sid=18 devtype=DISK
    Starting backup at 10-OCT-08
    channel oem_backup_disk1: starting full datafile backupset
    channel oem_backup_disk1: specifying datafile(s) in backupset
    input datafile fno=00013 name=/spte/oracle/oradata/ESKHDEV1/leafdat02.dbf
    input datafile fno=00011 name=/spte/oracle/oradata/ESKHDEV1KLXTEMP.dbf
    input datafile fno=00001 name=/spte/oracle/oradata/ESKHDEV1/system01.dbf
    channel oem_backup_disk1: starting piece 1 at 10-OCT-08
    channel oem_backup_disk1: finished piece 1 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2jjsq87q_1_1 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:00:45
    channel oem_backup_disk1: starting full datafile backupset
    channel oem_backup_disk1: specifying datafile(s) in backupset
    input datafile fno=00002 name=/spte/oracle/oradata/ESKHDEV1/undotbs01.dbf
    input datafile fno=00008 name=/spte/oracle/oradata/ESKHDEV1/tools01.dbf
    input datafile fno=00005 name=/spte/oracle/oradata/ESKHDEV1/example01.dbf
    channel oem_backup_disk1: starting piece 1 at 10-OCT-08
    channel oem_backup_disk1: finished piece 1 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2kjsq897_1_1 comment=NONE
    channel oem_backup_disk1: starting piece 2 at 10-OCT-08
    channel oem_backup_disk1: finished piece 2 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2kjsq897_2_1 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:01:20
    channel oem_backup_disk1: starting full datafile backupset
    channel oem_backup_disk1: specifying datafile(s) in backupset
    input datafile fno=00017 name=/spte/oracle/oradata/ESKHDEV1/parentidx01.dbf
    input datafile fno=00015 name=/spte/oracle/oradata/ESKHDEV1/lral3201.dbf
    input datafile fno=00010 name=/spte/oracle/oradata/ESKHDEV1/xdb01.dbf
    channel oem_backup_disk1: starting piece 1 at 10-OCT-08
    channel oem_backup_disk1: finished piece 1 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2ljsq8bn_1_1 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:01:05
    channel oem_backup_disk1: starting full datafile backupset
    channel oem_backup_disk1: specifying datafile(s) in backupset
    input datafile fno=00012 name=/spte/oracle/oradata/ESKHDEV1/leafdat01.dbf
    input datafile fno=00006 name=/spte/oracle/oradata/ESKHDEV1/indx01.dbf
    input datafile fno=00003 name=/spte/oracle/oradata/ESKHDEV1/cwmlite01.dbf
    channel oem_backup_disk1: starting piece 1 at 10-OCT-08
    channel oem_backup_disk1: finished piece 1 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2mjsq8dp_1_1 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:01:15
    channel oem_backup_disk1: starting full datafile backupset
    channel oem_backup_disk1: specifying datafile(s) in backupset
    input datafile fno=00014 name=/spte/oracle/oradata/ESKHDEV1/leafidx.dbf
    input datafile fno=00009 name=/spte/oracle/oradata/ESKHDEV1/users01.dbf
    input datafile fno=00004 name=/spte/oracle/oradata/ESKHDEV1/drsys01.dbf
    channel oem_backup_disk1: starting piece 1 at 10-OCT-08
    channel oem_backup_disk1: finished piece 1 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2njsq8g4_1_1 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:01:15
    channel oem_backup_disk1: starting full datafile backupset
    channel oem_backup_disk1: specifying datafile(s) in backupset
    input datafile fno=00016 name=/spte/oracle/oradata/ESKHDEV1/parentdat01.dbf
    input datafile fno=00007 name=/spte/oracle/oradata/ESKHDEV1/odm01.dbf
    channel oem_backup_disk1: starting piece 1 at 10-OCT-08
    channel oem_backup_disk1: finished piece 1 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2ojsq8if_1_1 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:01:35
    Finished backup at 10-OCT-08
    Starting backup at 10-OCT-08
    current log archived
    channel oem_backup_disk1: starting archive log backupset
    channel oem_backup_disk1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=1260 recid=1236 stamp=667755183
    channel oem_backup_disk1: starting piece 1 at 10-OCT-08
    channel oem_backup_disk1: finished piece 1 at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/2pjsq8lf_1_1 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:00:02
    Finished backup at 10-OCT-08
    Starting Control File and SPFILE Autobackup at 10-OCT-08
    piece handle=/sbdbarc/backups/eskhdev1/c-4075873486-20081010-02 comment=NONE
    Finished Control File and SPFILE Autobackup at 10-OCT-08
    released channel: oem_backup_disk1
    RMAN> delete noprompt obsolete device type disk;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 1
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=18 devtype=DISK
    Deleting the following obsolete backups and copies:
    Type Key Completion Time Filename/Handle
    Backup Set 2 17-MAR-08
    Backup Piece 2 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_10_1_649591917.rbp
    Backup Set 6 17-MAR-08
    Backup Piece 6 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_14_1_649594217.rbp
    Backup Set 7 17-MAR-08
    Backup Piece 7 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_16_1_649594217.rbp
    Backup Set 8 17-MAR-08
    Backup Piece 8 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_17_1_649594353.rbp
    Backup Set 9 17-MAR-08
    Backup Piece 9 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_19_1_649595236.rbp
    Backup Set 10 17-MAR-08
    Backup Piece 10 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_18_1_649595236.rbp
    Backup Set 11 17-MAR-08
    Backup Piece 11 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_20_1_649595236.rbp
    Backup Set 12 17-MAR-08
    Backup Piece 12 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_21_1_649595381.rbp
    Backup Set 13 17-MAR-08
    Backup Piece 13 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_24_1_649595445.rbp
    Backup Set 14 17-MAR-08
    Backup Piece 14 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_22_1_649595445.rbp
    Backup Set 15 17-MAR-08
    Backup Piece 15 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_23_1_649595445.rbp
    Backup Set 16 17-MAR-08
    Backup Piece 16 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_25_1_649595471.rbp
    Backup Set 17 17-MAR-08
    Backup Piece 17 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_27_1_649604714.rbp
    Backup Set 18 17-MAR-08
    Backup Piece 18 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_26_1_649604714.rbp
    Backup Set 19 17-MAR-08
    Backup Piece 19 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_28_1_649604714.rbp
    Backup Set 20 17-MAR-08
    Backup Piece 20 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_29_1_649604850.rbp
    Backup Set 21 17-MAR-08
    Backup Piece 21 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_31_1_649604923.rbp
    Backup Set 22 17-MAR-08
    Backup Piece 22 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_30_1_649604923.rbp
    Backup Set 23 17-MAR-08
    Backup Piece 23 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_33_1_649606774.rbp
    Backup Set 24 17-MAR-08
    Backup Piece 24 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_32_1_649606774.rbp
    Backup Set 25 17-MAR-08
    Backup Piece 25 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_34_1_649606774.rbp
    Backup Set 26 17-MAR-08
    Backup Piece 26 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_35_1_649606900.rbp
    Backup Set 27 17-MAR-08
    Backup Piece 27 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_37_1_649606970.rbp
    Backup Set 28 17-MAR-08
    Backup Piece 28 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_36_1_649606970.rbp
    Backup Set 29 17-MAR-08
    Backup Piece 29 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_39_1_649607591.rbp
    Backup Set 30 17-MAR-08
    Backup Piece 30 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_38_1_649607591.rbp
    Backup Set 31 17-MAR-08
    Backup Piece 31 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_40_1_649607591.rbp
    Backup Set 32 17-MAR-08
    Backup Piece 32 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_41_1_649607707.rbp
    Backup Set 33 17-MAR-08
    Backup Piece 33 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_43_1_649607785.rbp
    Backup Set 34 17-MAR-08
    Backup Piece 34 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_42_1_649607785.rbp
    Backup Set 35 17-MAR-08
    Backup Piece 35 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_45_1_649609650.rbp
    Backup Set 36 17-MAR-08
    Backup Piece 36 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_44_1_649609650.rbp
    Backup Set 37 17-MAR-08
    Backup Piece 37 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_46_1_649609650.rbp
    Backup Set 38 17-MAR-08
    Backup Piece 38 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_47_1_649609777.rbp
    Backup Set 39 17-MAR-08
    Backup Piece 39 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_49_1_649609840.rbp
    Backup Set 40 17-MAR-08
    Backup Piece 40 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_48_1_649609840.rbp
    Backup Set 41 17-MAR-08
    Backup Piece 41 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_51_1_649611229.rbp
    Backup Set 42 17-MAR-08
    Backup Piece 42 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_50_1_649611229.rbp
    Backup Set 43 17-MAR-08
    Backup Piece 43 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_52_1_649611229.rbp
    Backup Set 44 17-MAR-08
    Backup Piece 44 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_53_1_649611355.rbp
    Backup Set 45 17-MAR-08
    Backup Piece 45 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_55_1_649611419.rbp
    Backup Set 46 17-MAR-08
    Backup Piece 46 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_54_1_649611419.rbp
    Backup Set 47 17-MAR-08
    Backup Piece 47 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_57_1_649612920.rbp
    Backup Set 48 17-MAR-08
    Backup Piece 48 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_56_1_649612920.rbp
    Backup Set 49 17-MAR-08
    Backup Piece 49 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_58_1_649612920.rbp
    Backup Set 50 17-MAR-08
    Backup Piece 50 17-MAR-08 /spte/oracle/backups/eskhdev1/df_ESKHDEV1_59_1_649613075.rbp
    Backup Set 51 17-MAR-08
    Backup Piece 51 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_61_1_649613158.rbp
    Backup Set 52 17-MAR-08
    Backup Piece 52 17-MAR-08 /spte/oracle/backups/eskhdev1/al_ESKHDEV1_60_1_649613158.rbp
    Backup Set 66 10-OCT-08
    Backup Piece 68 10-OCT-08 /sbdbarc/backups/eskhdev1/2bjsppct_1_1
    Backup Set 67 10-OCT-08
    Backup Piece 69 10-OCT-08 /sbdbarc/backups/eskhdev1/2cjsppea_1_1
    Backup Piece 70 10-OCT-08 /sbdbarc/backups/eskhdev1/2cjsppea_2_1
    Backup Set 68 10-OCT-08
    Backup Piece 71 10-OCT-08 /sbdbarc/backups/eskhdev1/2djsppgq_1_1
    Backup Set 69 10-OCT-08
    Backup Piece 72 10-OCT-08 /sbdbarc/backups/eskhdev1/2ejsppir_1_1
    Backup Set 70 10-OCT-08
    Backup Piece 73 10-OCT-08 /sbdbarc/backups/eskhdev1/2fjsppl6_1_1
    Backup Set 71 10-OCT-08
    Backup Piece 74 10-OCT-08 /sbdbarc/backups/eskhdev1/2gjsppnh_1_1
    Backup Set 72 10-OCT-08
    Backup Piece 75 10-OCT-08 /sbdbarc/backups/eskhdev1/c-4075873486-20081010-00
    Backup Set 73 10-OCT-08
    Backup Piece 76 10-OCT-08 /sbdbarc/backups/eskhdev1/c-4075873486-20081010-01
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/2bjsppct_1_1 recid=68 stamp=667739549
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/2cjsppea_1_1 recid=69 stamp=667739594
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/2cjsppea_2_1 recid=70 stamp=667739639
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/2djsppgq_1_1 recid=71 stamp=667739674
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/2ejsppir_1_1 recid=72 stamp=667739739
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/2fjsppl6_1_1 recid=73 stamp=667739814
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/2gjsppnh_1_1 recid=74 stamp=667739889
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/c-4075873486-20081010-00 recid=75 stamp=667739965
    deleted backup piece
    backup piece handle=/sbdbarc/backups/eskhdev1/c-4075873486-20081010-01 recid=76 stamp=667742372
    Deleted 9 objects
    RMAN-06207: WARNING: 48 objects could not be deleted for DISK channel(s) due
    RMAN-06208: to mismatched status. Use CROSSCHECK command to fix status
    List of Mismatched objects
    ==========================
    Object Type Filename/Handle
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_10_1_649591917.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_14_1_649594217.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_16_1_649594217.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_17_1_649594353.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_19_1_649595236.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_18_1_649595236.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_20_1_649595236.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_21_1_649595381.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_24_1_649595445.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_22_1_649595445.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_23_1_649595445.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_25_1_649595471.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_27_1_649604714.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_26_1_649604714.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_28_1_649604714.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_29_1_649604850.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_31_1_649604923.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_30_1_649604923.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_33_1_649606774.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_32_1_649606774.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_34_1_649606774.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_35_1_649606900.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_37_1_649606970.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_36_1_649606970.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_39_1_649607591.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_38_1_649607591.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_40_1_649607591.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_41_1_649607707.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_43_1_649607785.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_42_1_649607785.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_45_1_649609650.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_44_1_649609650.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_46_1_649609650.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_47_1_649609777.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_49_1_649609840.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_48_1_649609840.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_51_1_649611229.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_50_1_649611229.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_52_1_649611229.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_53_1_649611355.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_55_1_649611419.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_54_1_649611419.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_57_1_649612920.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_56_1_649612920.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_58_1_649612920.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/df_ESKHDEV1_59_1_649613075.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_61_1_649613158.rbp
    Backup Piece /spte/oracle/backups/eskhdev1/al_ESKHDEV1_60_1_649613158.rbp
    RMAN> release channel;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of release command at 10/10/2008 15:33:09
    RMAN-06474: maintenance channels are not allocated
    RMAN> exit;
    Recovery Manager complete.
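    The RMAN-06207/RMAN-06208 warnings at the end indicate backup records whose pieces no longer exist on disk; as the message itself suggests, a crosscheck fixes their status so they can be deleted (a sketch):
    RMAN> CROSSCHECK BACKUP;
    RMAN> DELETE NOPROMPT EXPIRED BACKUP;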

  • FSDB database running in no archived log mode - is this OK?

    I'm new to Exadata and a contractor at a client site, and I noticed that the FSDB database is running in noarchivelog mode and isn't being backed up.
    I understand that the FSDB database is linked to the dbfs file system, so my question is - what should be backed up? The database, the file system, or both?
    TIA.

    No, it would back up the whole DBFS database. Since the DBFS database is easily recreated, its metadata is generally not considered significant. The data on the DBFS filesystem may be more critical, but since DBFS is typically used for staging, there is often another copy of the files somewhere on the source systems, so losing and recreating the DBFS database would be no big deal. If, on the other hand, it is a big deal, then yes, the backup would be slightly larger than the filesystem. Given the issues we had with DBFS, I am curious: since your DBFS filesystem is so large, is it being used as a permanent file store?
    That really isn't the best use case for DBFS. It is generally best used as a temporary staging area.

  • Large number of objects - LogMiner scalability?

    We have been consolidating several departmental databases into one big RAC database. Moreover, in test databases we are cloning test cells (for example, an application schema is cloned hundreds of times so that our users may test independently of each other).
    So, our acceptance test database now has about 500,000 objects in it. We have production databases with over 2 million objects.
    We are using Streams. At this time we're using local capture, but our architecture aims to use downstream capture soon... We are concerned about the resources required for the LogMiner data dictionary build.
    We are currently not using DBMS_LOGMNR_D.build directly, but rather indirectly through the DBMS_STREAMS_ADM.add_table_rule. We only want to replicate about 30 tables.
    We are surprised to find that LogMiner always builds a complete data dictionary covering every object in the database (tables, partitions, columns, users, and so on).
    Apparently there is no way to create a partial data dictionary, even by using DBMS_LOGMNR_D.BUILD directly...
    Lately, it took more than 2 hours just to build the log miner data dictionary on a busy system! And we ended up with an ORA-01280 error. So we started all over again...
    We just increased our redo log size recently. I haven't had a chance to test after the change. Our redo logs were only 4 MB; we increased them to 64 MB to reduce checkpoint activity. This will probably help...
    Has anybody encountered a slow LogMiner dictionary build?
    Any advice?
    Thank you in advance.
    Jocelyn

    Hello Jocelyn,
    In a Streams environment, the LogMiner dictionary build is done using the DBMS_CAPTURE_ADM.BUILD procedure. You should not be using DBMS_LOGMNR_D.BUILD for this.
    In a Streams environment, DBMS_STREAMS_ADM.ADD_TABLE_RULE dumps the dictionary only the first time you call it, since the capture process does not yet exist; it is created on that first call, along with a dictionary dump. The LogMiner dictionary holds information about all objects (tables, partitions, columns, users, etc.), so the time the dump takes depends on the number of objects in the database: if that number is very high, the data dictionary itself will be big.
    Your redo log size of 64 MB is too small for a production system; you should consider a redo log size of at least 200 MB.
    You can run a complete LogMiner dictionary build using DBMS_CAPTURE_ADM.BUILD and then create a capture process using the FIRST_SCN returned by the BUILD procedure.
    Let me know if you have more doubts.
    Thanks,
    Rijesh
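    A minimal sketch of the sequence Rijesh describes (the variable name is illustrative; BUILD returns the FIRST_SCN to hand to the capture process):
    SET SERVEROUTPUT ON
    DECLARE
      l_first_scn NUMBER;
    BEGIN
      DBMS_CAPTURE_ADM.BUILD(first_scn => l_first_scn);
      DBMS_OUTPUT.PUT_LINE('Dictionary build FIRST_SCN: ' || l_first_scn);
      -- pass l_first_scn as the first_scn parameter when creating the
      -- capture process, e.g. via DBMS_CAPTURE_ADM.CREATE_CAPTURE
    END;
    /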

  • Delete large partition generates archive log?

    Hi!
    I have a table with 3 partitions and 100 million rows in each partition. If I drop a partition, does it generate redo in the archive logs for all those rows, or just a trivial amount?
    /Andreas

    Deletes perform normal DML: they take locks on rows and generate redo (lots of it), and if a mistake is made, a rollback can be issued to restore the records prior to a commit. A delete does not relinquish segment space, so a table in which all records have been deleted retains all of its original blocks.
    Truncate is DDL: it moves the High Water Mark of the table back to zero, no row-level locks are taken, and no per-row redo or rollback is generated. All extents except the initial one are deallocated from the table. Because truncate is DDL, you can't roll it back.
    In your case, if you need to permanently delete all data in one partition, use truncate or drop partition. It generates only minimal redo for the dictionary changes, not per-row redo.
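    A sketch of the two DDL options (table and partition names are illustrative):
    ALTER TABLE sales TRUNCATE PARTITION p_2011_q1;   -- keeps the partition, empties it
    ALTER TABLE sales DROP PARTITION p_2011_q1;       -- removes the partition and its segment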

  • Standby Database (Archive Log Mode)

    I'm going to be setting up a standby database.
    I understand that the primary database must be in archive log mode.
    Is there any reason for the standby database to be in archivelog mode?

    Since your primary DB is in archive log mode, so will be your standby when it is made primary. But from 9i onwards you can use STANDBY REDO LOGS, where these standby redo logs store the redo received from the primary database.
    As per metalink:-
    >
    Standby Redo Logs are only supported for Physical Standby Databases in Oracle 9i, and also for Logical Standby Databases in 10g. Standby Redo Logs are only used if you have LGWR activated for archival to the remote standby database. If you have Standby Redo Logs, the RFS process will write into the Standby Redo Log as mentioned above, and when a log switch occurs, the archiver process of the standby database will archive this Standby Redo Log to an archived redo log, while the MRP process applies the information to the standby database. In a failover situation, you will also have access to the information already written in the Standby Redo Logs, so the information will not be lost.
    >
    Check metalink Doc ID: Note:219344.1
    Regards,
    Anand
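    For completeness, standby redo logs are added on the standby with something like this (group number, path and size are illustrative; size them to match the primary's online redo logs):
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/stby/srl04.log') SIZE 50M;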

  • Standby database Archive log destination confusion

    Hi All,
    I need your help here..
    This is the first time this situation has arisen. We had sync issues in the Oracle 10g standby database prior to this archive log destination confusion, so we rebuilt the standby to overcome the sync issue. But ever since then, the archive logs in the standby database have been going to two different locations.
    The spfile entries are provided below:
    *.log_archive_dest_1='LOCATION=/m99/oradata/MARDB/archive/'
    *.standby_archive_dest='/m99/oradata/MARDB/standby'
    Before the standby was rebuilt, the archive logs went to /m99/oradata/MARDB/archive/, which is the correct location. But now they go to both /m99/oradata/MARDB/archive/ and /m99/oradata/MARDB/standby, with the majority going to /m99/oradata/MARDB/standby. This is pretty unusual.
    The archives on the production side are still going to /m99/oradata/MARDB/archive/ itself.
    Could you kindly help me overcome this issue.
    Regards,
    Dan

    Hi Anurag,
    Thank you for the update.
    Prior to rebuilding the standby database, standby_archive_dest was set as it is now. No modifications were made to the archive destination locations.
    The primary and standby databases are on different servers, and Data Guard is used to transfer the files.
    I wanted to highlight one more point: the archive locations are configured the same way as on our other standby databases, yet on those the archive logs go only to the /archive location and not to the /standby location.
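    One hedged suggestion: on a 10g standby, archived logs received by RFS land in standby_archive_dest, so pointing it at the same directory as log_archive_dest_1 should at least stop the split (paths taken from the spfile entries above):
    SQL> show parameter standby_archive_dest
    SQL> alter system set standby_archive_dest='/m99/oradata/MARDB/archive/' scope=both;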

  • SGEN & database archive log mode

    Hi Experts,
    To apply ABAP support packs, I disabled archive log mode on the database and successfully applied the support packs.
    As post-processing, I kicked off SGEN.
    Is it required that the database be in "no archive log mode" while SGEN is running, or can I re-enable it?
    Thanks
    Putla

    Not sure what database it is, but if it is Oracle:
    $sqlplus / as sysdba
    SQL> shutdown immediate;
    SQL> startup mount;
    SQL> Alter database noarchivelog;
    SQL> alter database open;
    After the completion of SGEN.....
    $sqlplus / as sysdba
    SQL> shutdown immediate;
    SQL> startup mount;
    SQL> alter database archivelog;
    SQL> alter database open;
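    SGEN itself does not require noarchivelog mode; archiving is usually disabled only to avoid the large volume of redo SGEN generates, so you can re-enable it whenever you like. To confirm the current mode afterwards:
    SQL> archive log list
    SQL> select log_mode from v$database;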

  • Database refresh using dump and archive logs

    Hi all,
    I have a full Data Pump dump taken from an Oracle 11g R2 database (PROD) running on an HP-UX server. The dump was taken on March 1, 2012.
    I also have all the archive logs up to today (March 13, 2012).
    I want to clone it to a new database (TEST) on a Windows machine with this dump, and I want to roll the TEST database forward using these archive logs to bring it in sync with PROD up to today.
    I need your suggestions.

    raoofdba wrote:
    Hi all,
    I have a full Data Pump dump taken from an Oracle 11g R2 database (PROD) running on an HP-UX server. The dump was taken on March 1, 2012.
    I also have all the archive logs up to today (March 13, 2012).
    I want to clone it to a new database (TEST) on a Windows machine with this dump, and I want to roll the TEST database forward using these archive logs to bring it in sync with PROD up to today.
    I need your suggestions.

    Archive logs cannot be applied on top of a Data Pump import, since the import builds a logically new database. For a cross-platform move like this, I suggest you use the transportable tablespace method. Below is the link:
    http://neeraj-dba.blogspot.in/2012/01/cross-platform-transportable.html
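    In outline, the transportable tablespace route looks like this (a sketch; the tablespace name and paths are illustrative, and the exact platform name should be taken from V$TRANSPORTABLE_PLATFORM):
    -- On PROD (HP-UX): make the tablespace read only and export its metadata
    ALTER TABLESPACE my_ts READ ONLY;
    -- expdp system/... directory=DATA_PUMP_DIR transport_tablespaces=MY_TS dumpfile=tts.dmp
    RMAN> CONVERT TABLESPACE my_ts TO PLATFORM 'Microsoft Windows x86 64-bit' FORMAT '/stage/%U';
    -- Copy the converted datafiles and tts.dmp to TEST (Windows), then:
    -- impdp system/... directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_datafiles='C:\oradata\my_ts01.dbf'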

  • Clone database with no archive log mode and downtime with new file structure

    We need to clone a database with the following business requirements:
    No downtime at all can be afforded
    The databases are in noarchivelog mode
    Datafile locations need to be changed to new mount points during cloning, for space management reasons
    Please suggest the best possible methods.

    Can you post your version of Oracle (all 4 digits) and OS, for better understanding?
    I don't think you can move a noarchivelog database to a new location without shutting it down.
    You just want to move the datafiles to the new mount points, is that right?
    1. How big are your datafiles?
    2. How do you back up this DB?
    If you move the datafiles while the database is open, they become inconsistent, and you need the redo logs (and probably archive logs) to make them consistent again; otherwise the database will prompt for recovery. The chances are your redo logs will have been overwritten, and hence you won't be able to recover.
    You need downtime.
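    For the datafile relocation itself, the classic offline sequence is as follows (paths are illustrative; this is exactly where the downtime comes from):
    SQL> shutdown immediate
    -- copy the datafiles to the new mount points at the OS level, then:
    SQL> startup mount
    SQL> alter database rename file '/old_mount/users01.dbf' to '/new_mount/users01.dbf';
    SQL> alter database open;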
