Please help, urgent: the log files are growing larger than 1 GB under the following directory:

oc4j_DBConsole_mbcvizpilot3.mbc.uae_vizrtdb\log

Do as follows (a minimal command sketch follows the list):
- stop EM (the DB Console)
- rename the log directory to log.old
- create a new log directory
- restart EM
- check that new log files have been created in the new log directory
- after checking that everything works fine, you can remove the old directory.
You shouldn't have any problems, at least I didn't have any...
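For a 10g/11g Database Control installation the steps above map onto roughly the following commands. This is only a sketch assuming a Windows host (which the backslash in the path suggests) and the usual OC4J location under %ORACLE_HOME%\oc4j\j2ee; verify the directory name on your own server:

emctl stop dbconsole
cd %ORACLE_HOME%\oc4j\j2ee\oc4j_DBConsole_mbcvizpilot3.mbc.uae_vizrtdb
rename log log.old
mkdir log
emctl start dbconsole
rem once new log files appear under the new log directory, delete log.old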

Similar Messages

  • Nohup.log files are not getting generated correctly

    Hi,
    I am a WebLogic administrator and here is my problem. My partner has complained that the nohup.log files are not being generated properly: a new nohup.log file is created roughly every 5 minutes, four or five of them are of size 0, and the sixth one is very large (around MB). The managed servers are in FAILED state. When I checked the nohup logs I could see "E297: Write Error In Swap File" and also the error below, but the disk is only 30% full. Please suggest something that can help; why are the nohup.logs behaving like this? Has anyone faced anything like this?
    After the recycle everything is fine, but I want to know what went wrong and why it recovered after the recycle.
    <Feb 11, 2010 7:43:59 AM CST> <Error> <HTTP> <BEA-101246> <Error occurred while flushing HTTP log file for the Web server: wl38_managed1
    java.io.IOException: Disk quota exceeded.
    java.io.IOException: Disk quota exceeded
    at java.io.FileOutputStream.writeBytes([BII)V(FileOutputStream.java:???)
    at java.io.FileOutputStream.write(FileOutputStream.java:260)
    at com.wily.introscope.agent.probe.io.ManagedFileOutputStream.write(ManagedFileOutputStream.java:423)
    at weblogic.utils.io.DoubleBufferedOutputStream.flushBuffer(DoubleBufferedOutputStream.java:58)
    at weblogic.utils.io.DoubleBufferedOutputStream.flush(DoubleBufferedOutputStream.java:157)
    at weblogic.servlet.logging.LogManagerHttp$FlushLogStreamTrigger.trigger(LogManagerHttp.java:522)
    at weblogic.time.common.internal.ScheduledTrigger.run(ScheduledTrigger.java:243)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
    at weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigger.java:229)
    at weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java:223)
    at weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java:50)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)

    According to the error message ("Disk quota exceeded"), the user or partition from which you are running the nohup command has hit its disk quota, even though the filesystem itself is only 30% full. Check the free space with df -kh and compare it with the user's quota.
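    To tell a full filesystem apart from an exhausted per-user quota, compare the two directly on the host that runs the managed server. A quick sketch (run as the OS user that owns the WebLogic process; the path is a placeholder):
    df -kh /path/to/weblogic/domain    # free space on the filesystem holding the logs
    quota -v                           # this user's quota usage, if quotas are enabled
    du -sh nohup.log*                  # how much the nohup logs themselves consume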

  • I have one problem with Data Guard. My archive log files are not applied.

    I have a problem with Data Guard: my archive log files are not applied, even though all of them have been received by my physical standby database.
    I created a Physical Standby database on Oracle 10gR2 (Windows XP Professional); the primary database is on another computer.
    In Enterprise Manager the primary database looks fine and reports "Data Guard status Normal", but, as I wrote above, the archive log files are not applied.
    After creating the Physical Standby database, I also did the following:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    10) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    11)on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    12) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    13) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are now being applied. I think in my case it had to do with tnsnames.ora: at the moment I have both databases in both tnsnames.ora files using the SID rather than the SERVICE_NAME.
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and appears to hang; the log, however, says that it succeeded.
    In another session, 'show configuration' returns the following, which suggests that the enable went through:
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
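    One thing the output above points at is the repeated "RFS[n]: No standby redo logfiles created" message: the standby has no standby redo log (SRL) groups, and with the LGWR ASYNC destination configured in the init.ora, redo transport expects SRLs on the standby (real-time apply requires them). A hedged sketch of adding them on luda and restarting apply; the group numbers, file names and the 50M size are placeholders and must match the primary's online redo log size:
    -- on the standby (luda), sized the same as the primary's online redo logs
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('C:\oracle\product\10.2.0\oradata\luda\srl01.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('C:\oracle\product\10.2.0\oradata\luda\srl02.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('C:\oracle\product\10.2.0\oradata\luda\srl03.log') SIZE 50M;
    -- stop and restart managed recovery so MRP0 picks them up
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;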

  • Empty files are getting created at receiver FTP server

    Hi Experts,
    I have an IDoc-to-File scenario where I am sending an XML file to a receiver FTP server.
    The scenario is working fine, but sometimes an empty file is generated on the receiver FTP server.
    I have already selected "ignore empty file" on the receiver channel, so the issue is not within the PI system configuration.
    When I check the message log I can see that almost all the files are created successfully without any issues, but
    for some files/messages there are the error logs below:
    "Transmitting the message to endpoint <local> using connection IDoc_AAE_http://sap.com/xi/XI/System failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Could not get FTP connection from connection pool (1 connections) within 5,000 milliseconds; increase the number of available connections"
    "Exception caught by adapter framework: Could not get FTP connection from connection pool (1 connections) within 5,000 milliseconds; increase the number of available connections."
    After that there is again a success entry in the same message log, and the file is created successfully at that timestamp,
    but the third party is sometimes receiving an empty file which I cannot find in any trace or log (my file name is SD_timestamp.xml).
    Can you please let me know the solution and what adjustments the FTP server needs to make in order to resolve this issue?
    Thanks in advance.
    Regards,
    Rahul Kulkarni

    The error you are getting that says "Could not get FTP connection from connection pool (1 connections) within 5,000 milliseconds; increase the number of available connections" has probably nothing to do with the empty files problem.
    I second Hareesh Gampa that you should first try "temporary file creation". You might also need to tell the FTP owner that he should only pick up files that are written completely and that comply with a negotiated file name schema; the temporary files should of course use a different schema. He should not just pick up every file that is written. See here for details:
    http://help.sap.de/saphelp_nw74/helpdata/en/44/6830e67f2a6d12e10000000a1553f6/content.htm
    HTH
    Cheers
    Jens

  • Log Files are not shipping to standby.

    Hi,
    I am getting the error below: my log files are not being shipped from the primary to the standby. Below is the error message from the alert log file. Help needed.
    Thu Jan 10 17:27:17 2013
    Error 1031 received logging on to the standby
    Errors in file d:\app\sesa241915\diag\rdbms\orcl\orcl\trace\orcl_arc2_2944.trc:
    ORA-01031: insufficient privileges
    PING[ARC2]: Heartbeat failed to connect to standby 'orcl'. Error is 1031.
    Thanks in advance.

    Please find the content of trace file.
    *** 2013-01-11 10:16:41.389
    OCISessionBegin failed -1
    .. Detailed OCI error val is 1031 and errmsg is 'ORA-01031: insufficient privileges
    *** 2013-01-11 10:16:41.404 4132 krsh.c
    Error 1031 received logging on to the standby
    *** 2013-01-11 10:16:41.404 869 krsu.c
    Error 1031 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    Error 1031 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    ORA-01031: insufficient privileges
    *** 2013-01-11 10:16:41.420 4132 krsh.c
    PING[ARC2]: Heartbeat failed to connect to standby 'orcl'. Error is 1031.
    *** 2013-01-11 10:16:41.420 2747 krsi.c
    krsi_dst_fail: dest:2 err:1031 force:0 blast:1
    *** 2013-01-11 10:17:41.482
    Redo shipping client performing standby login
    OCISessionBegin failed. Error -1
    .. Detailed OCI error val is 1017 and errmsg is 'ORA-01017: invalid username/password; logon denied
    OCISessionBegin failed. Error -1
    .. Detailed OCI error val is 1031 and errmsg is 'ORA-01031: insufficient privileges
    OCISessionBegin failed -1
    .. Detailed OCI error val is 1031 and errmsg is 'ORA-01031: insufficient privileges
    *** 2013-01-11 10:17:41.795 4132 krsh.c
    Error 1031 received logging on to the standby
    *** 2013-01-11 10:17:41.795 869 krsu.c
    Error 1031 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    Error 1031 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    ORA-01031: insufficient privileges
    *** 2013-01-11 10:17:41.795 4132 krsh.c
    PING[ARC2]: Heartbeat failed to connect to standby 'orcl'. Error is 1031.
    *** 2013-01-11 10:17:41.795 2747 krsi.c
    krsi_dst_fail: dest:2 err:1031 force:0 blast:1
    [The same sequence of ORA-01017 / ORA-01031 login failures and "PING[ARC2]: Heartbeat failed to connect to standby 'orcl'" entries repeats roughly once a minute from 10:18:41 through 10:32:48.]
    *** 2013-01-11 10:33:48.135
    Redo shipping client performing standby login
    *** 2013-01-11 10:34:09.261
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12170 and errmsg is 'ORA-12170: TNS:Connect timeout occurred
    *** 2013-01-11 10:34:45.873
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12170 and errmsg is 'ORA-12170: TNS:Connect timeout occurred
    *** 2013-01-11 10:35:06.984
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12170 and errmsg is 'ORA-12170: TNS:Connect timeout occurred
    *** 2013-01-11 10:35:06.984 4132 krsh.c
    Error 12170 received logging on to the standby
    *** 2013-01-11 10:35:06.984 869 krsu.c
    Error 12170 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    Error 12170 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    ORA-12170: TNS:Connect timeout occurred
    *** 2013-01-11 10:35:06.999 4132 krsh.c
    PING[ARC2]: Heartbeat failed to connect to standby 'orcl'. Error is 12170.
    *** 2013-01-11 10:35:06.999 2747 krsi.c
    krsi_dst_fail: dest:2 err:12170 force:0 blast:1
    *** 2013-01-11 10:36:07.063
    Redo shipping client performing standby login
    OCISessionBegin failed. Error -1
    .. Detailed OCI error val is 1017 and errmsg is 'ORA-01017: invalid username/password; logon denied
    OCISessionBegin failed. Error -1
    .. Detailed OCI error val is 1031 and errmsg is 'ORA-01031: insufficient privileges
    OCISessionBegin failed -1
    .. Detailed OCI error val is 1031 and errmsg is 'ORA-01031: insufficient privileges
    *** 2013-01-11 10:36:07.267 4132 krsh.c
    Error 1031 received logging on to the standby
    *** 2013-01-11 10:36:07.267 869 krsu.c
    Error 1031 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    Error 1031 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'orcl'
    ORA-01031: insufficient privileges
    *** 2013-01-11 10:36:07.267 4132 krsh.c
    PING[ARC2]: Heartbeat failed to connect to standby 'orcl'. Error is 1031.
    *** 2013-01-11 10:36:07.267 2747 krsi.c
    krsi_dst_fail: dest:2 err:1031 force:0 blast:1
    2. Please find the query results.
    From Primary:
    SQL> set lines 200
    SQL> set numwidth 15
    SQL> column ID format 99
    SQL> column "SRLs" format 99
    SQL> column active format 99
    SQL> col type format a4
    SQL> select ds.dest_id id
    2 , ad.status
    3 , ds.database_mode db_mode
    4 , ad.archiver type
    5 , ds.recovery_mode
    6 , ds.protection_mode
    7 , ds.standby_logfile_count "SRLs"
    8 , ds.standby_logfile_active active
    9 , ds.archived_seq#
    10 from v$archive_dest_status ds
    11 , v$archive_dest ad
    12 where ds.dest_id = ad.dest_id
    13 and ad.status != 'INACTIVE'
    14 order by
    15 ds.dest_id
    16 /
    ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
    1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 72
    2 ERROR UNKNOWN LGWR IDLE MAXIMUM PERFORMANCE 0 0 0
    From Standby:
    SQL> set lines 200
    SQL> set numwidth 15
    SQL> column ID format 99
    SQL> column "SRLs" format 99
    SQL> column active format 99
    SQL> col type format a4
    SQL> select ds.dest_id id
    2 , ad.status
    3 , ds.database_mode db_mode
    4 , ad.archiver type
    5 , ds.recovery_mode
    6 , ds.protection_mode
    7 , ds.standby_logfile_count "SRLs"
    8 , ds.standby_logfile_active active
    9 , ds.archived_seq#
    10 from v$archive_dest_status ds
    11 , v$archive_dest ad
    12 where ds.dest_id = ad.dest_id
    13 and ad.status != 'INACTIVE'
    14 order by
    15 ds.dest_id
    16 /
    ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
    1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 72
    2 ERROR UNKNOWN LGWR IDLE MAXIMUM PERFORMANCE 0 0 0
    Regards
    Srinivasan R
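    The ORA-01031 / ORA-01017 pattern on the ARC heartbeat usually means the standby cannot authenticate the redo-transport (SYS) login, i.e. the password files on primary and standby do not match, or REMOTE_LOGIN_PASSWORDFILE is not set to EXCLUSIVE (on 11g a SYS password with different case can also trigger it). A minimal sketch of the usual fix, assuming a Windows host and SID orcl as in the trace; paths and the entries count are placeholders:
    rem on the primary host: recreate the password file with the current SYS password
    orapwd file=%ORACLE_HOME%\database\PWDorcl.ora password=<sys_password> entries=10 force=y
    rem copy that file to %ORACLE_HOME%\database on the standby host (same file name),
    rem then restart the transport and re-check:
    rem   SQL> select status, error from v$archive_dest where dest_id = 2;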

  • Why multiple  log files are created while using transaction in berkeley db

    We are using the Berkeley DB Java Edition DB Base API. We have already read/written a CDR file of 9 lakh (900,000) rows, both with transactions and without transactions, implementing the secondary database concept. The issues we are seeing are as follows:
    with transaction: the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
    without transaction: the size of the database environment is 588 MB, and only one log file of 10 MB is created. We want to understand the concrete reason for this difference:
    how are log files created, what does it mean to use or not use transactions in a DB environment, and what are these db files db.001, db.002, _db.003, _db.004, __db.005 and log files like log.0000000001? Please reply soon.

    we are using berkeleydb java edition db base api
    If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
    with transaction ... without transaction ...
    First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
    without transaction: size of database environment 588 MB and only one log file is created, which is of 10 MB.
    There should be no logs created when transactions are not used. That single log file has most likely remained there from a previous transactional run.
    how are log files created, what is meant by using or not using transactions in a db environment, and what are these db files db.001, db.002, _db.003, _db.004, __db.005 and log files like log.0000000001?
    Have you reviewed the basic documentation references for Berkeley DB Core?
    - Berkeley DB Programmer's Reference Guide
    in particular the sections: The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, Chapter 17. The Logging Subsystem.
    - Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
    If so, you would have had the answers to these questions: the __db.NNN files are the environment shared region files needed by the environment's subsystems (transaction, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM files are the log files needed for recoverability, created when running with transactions.
    --Andrei
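    A side note on keeping a transactional environment's size under control: transaction log files (log.NNNNNNNNNN) that are no longer needed for recovery can be listed and removed with the standard db_archive utility that ships with Berkeley DB Core (the environment home below is a placeholder):
    db_archive -h /path/to/env         # print log files that are no longer in use
    db_archive -h /path/to/env -d      # delete those no-longer-needed log files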
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    

  • Redo log files are not applied in DR of primary

    Hi All,
    I have a DR database of the primary on a QA server. The redo log files are not being properly applied in the DR database.
    The Oracle version is 11.2.0.1. Some of the files get shipped and applied to the DR database automatically, but not all.
    SQL> select status, error from v$archive_dest where dest_id=2; gives the following message:
    ERROR     ORA-16086: Redo data cannot be written to the standby redo log
    Please suggest.
    Regards,
    Shashi
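    As a quick first check on the standby side (a minimal sketch), it is worth confirming whether any standby redo logs exist at all and whether their size matches the primary's online redo logs:
    SQL> SELECT GROUP#, THREAD#, BYTES, STATUS FROM V$STANDBY_LOG;
    No rows, or a BYTES value different from the primary's online log size, would explain redo not being applied in SYNC mode.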

    Hi,
    Sorry for the delay in response. Here I am attaching the errors captured in the standby database.
    Please advise
    alert_abc.log
    RFS[1780]: Identified database type as 'physical standby': Client is LGWR SYNC pid 21855
    Primary database is in MAXIMUM AVAILABILITY mode
    Standby controlfile consistent with primary
    Standby controlfile consistent with primary
    RFS[1780]: No standby redo logfiles of file size 94371840 AND block size 512 exist
    Clearing online log 16 of thread 0 sequence number 0
    Errors in file /oracle/diag/rdbms/abc_location11/abc/trace/abc_rfs_27994.trc:
    ORA-00367: checksum error in log file header
    ORA-00315: log 16 of thread 0, wrong thread # 1 in header
    ORA-00312: online log 16 thread 0: '/oracle/abc/origlogB/log_g116m1.dbf'
    Mon Nov 14 00:49:16 2011
    Clearing online log 9 of thread 0 sequence number 0
    Errors in file /oracle/diag/rdbms/abc_location11/abc/trace/abc_arc0_15653.trc:
    /oracle/diag/rdbms/abc_location11/abc/trace/abc_rfs_27994.trc
    2011-11-14 00:49:19.385
    DDE rules only execution for: ORA 312
    START Event Driven Actions Dump -
    END Event Driven Actions Dump -
    START DDE Actions Dump -
    Executing SYNC actions
    START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -
    DDE Action 'DB_STRUCTURE_INTEGRITY_CHECK' was flood controlled
    END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (FLOOD CONTROLLED, 1 csec) -
    Executing ASYNC actions
    END DDE Actions Dump (total 0 csec) -
    ORA-00367: checksum error in log file header
    ORA-00315: log 16 of thread 0, wrong thread # 1 in header
    ORA-00312: online log 16 thread 0: '/oracle/abc/origlogB/log_g116m1.dbf'
    DDE rules only execution for: ORA 312
    START Event Driven Actions Dump -
    END Event Driven Actions Dump -
    START DDE Actions Dump -
    Executing SYNC actions
    START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -
    DDE Action 'DB_STRUCTURE_INTEGRITY_CHECK' was flood controlled
    END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (FLOOD CONTROLLED, -641 csec) -
    Executing ASYNC actions
    END DDE Actions Dump (total 0 csec) -
    ORA-19527: physical standby redo log must be renamed
    ORA-00312: online log 16 thread 0: '/oracle/abc/origlogB/log_g116m1.dbf'
    Error 19527 clearing SRL 16
    /oracle/diag/rdbms/abc_location11/abc/trace/abc_arc0_15653.trc
    ORA-19527: physical standby redo log must be renamed
    ORA-00312: online log 9 thread 0: '/oracle/abc/origlogA/log_g19m1.dbf'
    Error 19527 clearing SRL 9
    DDE rules only execution for: ORA 312
    START Event Driven Actions Dump -
    END Event Driven Actions Dump -
    START DDE Actions Dump -
    Executing SYNC actions
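    The RFS message above ('RFS[1780]: No standby redo logfiles of file size 94371840 AND block size 512 exist') usually means the standby has no standby redo logs matching the size of the primary's online redo logs, which is enough to produce ORA-16086 in MAXIMUM AVAILABILITY mode. A minimal sketch of the usual remedy, run on the standby with redo apply stopped (group number and file path are hypothetical; create one such group per online log group, plus one):
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 20
      2  ('/oracle/abc/origlogA/srl_g20m1.dbf') SIZE 94371840;
    The ORA-19527 'physical standby redo log must be renamed' errors are commonly cleared by setting LOG_FILE_NAME_CONVERT on the standby (a static parameter, so SCOPE=SPFILE plus a restart), even if it only maps a path to itself:
    SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='/oracle/abc/','/oracle/abc/' SCOPE=SPFILE;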

  • Log files are not purged

    Hi All,
    I have a TT data store with the setting LogPurge=1. There are lots of transactions manipulating the data store. If I'm correct, the log files that are older than the oldest checkpoint file should be deleted automatically by TT, provided no operations are holding them. In my case the log files are not being deleted, so an ls -ltr command prints:
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 10:49 appdbtt.res1
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 10:49 appdbtt.res0
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 10:49 appdbtt.res2
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:02 appdbtt.log0
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:03 appdbtt.log1
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:03 appdbtt.log2
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:04 appdbtt.log3
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:04 appdbtt.log4
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:04 appdbtt.log5
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:05 appdbtt.log6
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:05 appdbtt.log7
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:06 appdbtt.log8
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:06 appdbtt.log9
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:07 appdbtt.log10
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:07 appdbtt.log11
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:08 appdbtt.log12
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:08 appdbtt.log13
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:09 appdbtt.log14
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:09 appdbtt.log15
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:09 appdbtt.log16
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:10 appdbtt.log17
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:10 appdbtt.log18
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:11 appdbtt.log19
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:11 appdbtt.log20
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:12 appdbtt.log21
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:12 appdbtt.log22
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:13 appdbtt.log23
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:13 appdbtt.log24
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:14 appdbtt.log25
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:14 appdbtt.log26
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:15 appdbtt.log27
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:15 appdbtt.log28
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:16 appdbtt.log29
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:16 appdbtt.log30
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:16 appdbtt.log31
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:17 appdbtt.log32
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:17 appdbtt.log33
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:18 appdbtt.log34
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:18 appdbtt.log35
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:19 appdbtt.log36
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:19 appdbtt.log37
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:20 appdbtt.log38
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:20 appdbtt.log39
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:21 appdbtt.log40
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:21 appdbtt.log41
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:22 appdbtt.log42
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:22 appdbtt.log43
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:22 appdbtt.log44
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:23 appdbtt.log45
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:23 appdbtt.log46
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:24 appdbtt.log47
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:25 appdbtt.log48
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:25 appdbtt.log49
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:25 appdbtt.log50
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:26 appdbtt.log51
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:26 appdbtt.log52
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:27 appdbtt.log53
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:27 appdbtt.log54
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:28 appdbtt.log55
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:28 appdbtt.log56
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:29 appdbtt.log57
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:29 appdbtt.log58
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:30 appdbtt.log59
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:30 appdbtt.log60
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:31 appdbtt.log61
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:31 appdbtt.log62
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:32 appdbtt.log63
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:32 appdbtt.log64
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:33 appdbtt.log65
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:33 appdbtt.log66
    -rw-rw-rw- 1 timesten timesten 487444480 Dec 07 11:33 appdbtt.ds0
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:34 appdbtt.log67
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:34 appdbtt.log68
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:35 appdbtt.log69
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:35 appdbtt.log70
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:35 appdbtt.log71
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:36 appdbtt.log72
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:36 appdbtt.log73
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:37 appdbtt.log74
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:37 appdbtt.log75
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:38 appdbtt.log76
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:38 appdbtt.log77
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:39 appdbtt.log78
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:39 appdbtt.log79
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:40 appdbtt.log80
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:40 appdbtt.log81
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:41 appdbtt.log82
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:41 appdbtt.log83
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:42 appdbtt.log84
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:42 appdbtt.log85
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:43 appdbtt.log86
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:43 appdbtt.log87
    -rw-rw-rw- 1 timesten timesten 632098816 Dec 07 11:43 appdbtt.ds1
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:44 appdbtt.log88
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:45 appdbtt.log89
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:45 appdbtt.log90
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:46 appdbtt.log91
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:46 appdbtt.log92
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:46 appdbtt.log93
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:47 appdbtt.log94
    -rw-rw-rw- 1 timesten timesten 67108864 Dec 07 11:47 appdbtt.log95
    -rw-rw-rw- 1 timesten timesten 4767744 Dec 07 11:47 appdbtt.log96
    As you can see, I have 67 log files older than the oldest checkpoint file. Now if I connect to the data store in ttIsql and call ttLogHolds, I get:
    Command> call ttLogHolds();
    < 0, 38034792, Replication , APPDBTT:_ORACLE >
    < 67, 44319520, Checkpoint , appdbtt.ds0 >
    < 88, 45855168, Checkpoint , appdbtt.ds1 >
    3 rows found.
    What can be the problem?
    Thanks in advance:
    Dave

    This bookmark
    < 0, 38034792, Replication , APPDBTT:_ORACLE >
    indicates that the AWT bookmark hasn't moved from log file 0. Since AWT is performed by the replication agent, it maintains a bookmark to track where it has reached in reading through the transaction log files looking for transactions against any AWT cache groups. This looks as though a transaction against an AWT cache group has not been committed, meaning that it cannot be sent to Oracle and acknowledged, and the bookmark cannot move on. Once the bookmark moves into a new transaction log file, all older log files can then be purged.
    You might be able to identify any uncommitted transaction by using ttXactAdmin and checking for locks held against the AWT cache groups, as in the sketch below.
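    A minimal sketch of that check, assuming the DSN is appdbtt (the transaction ID is whatever the listing reports):
    $ ttXactAdmin appdbtt
    # review the output for an old open transaction holding locks on the AWT cache group tables,
    # then have the owning application commit or roll it back; only as a last resort:
    $ ttXactAdmin -xactIdRollback <xactId> appdbtt
    Once that transaction is resolved and the bookmark advances, ttLogHolds should stop reporting log file 0, and with LogPurge=1 the old appdbtt.log* files should be purged at the next checkpoint.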

  • Coded UI - Test Results/Out - The binary files are getting copied to the TestResults/Out folder

    Hi,
    I am using Visual Studio 2013 for developing the Coded UI Automation Scripts.
    Whenever I execute a Coded UI Automation Script, the binary files are getting copied to the TestResults/Out folder.
    I do not want these binary files to be copied to the Out folder. I have tried disabling the Deployment option under Test Settings > Deployment, but the behavior is still the same.
    Could you please advise how to disable copying binary files into the Out directory?
    Thanks in Advance.
    Regards,
    Karthick K

    Hi Karthick K,
    From your description, as far as I know, when we enable Tracing and the HtmlLogger for a Coded UI test and then run it, the TestResults/Out folder is generated automatically.
    By default the Out folder includes files such as codeduitestproject6.dll, CodedUITestProject6.pdb, etc., and we cannot disable copying binary files into the Out directory.
    However, a workaround is to disable Tracing and the HtmlLogger for the Coded UI test. After you disable them, the Out folder will not be generated,
    and consequently the binary files will not be copied.
    http://blogs.msdn.com/b/visualstudioalm/archive/2012/11/08/enabling-htmllogger-in-coded-ui-test.aspx
    If you still want this feature, I suggest you submit a feature request at
    http://visualstudio.uservoice.com/forums/121579-visual-studio. The Visual Studio product team is listening to UserVoice there; you can send your idea and people can vote on it.
    If you submit the suggestion, please post the link here and I will help vote for it.
    Thanks for your understanding.
    Best Regards,

  • Appended log files are not well formed XML?

    I'm working on a retrofit of our home-grown logging class to use the new java.util.logging classes. It works beautifully with one exception. If I need to instantiate the logger on the same day, appending to an existing log file, I get a second (or third, or fourth) <?xml version="1.0" encoding="UTF-8" standalone="no"?><!DOCTYPE log SYSTEM "logger.dtd"> tag inserted at the beginning of the new log entries. This causes the log file not to be well formed, so it cannot be parsed.
    I've looked and looked, but can't find a method to suppress these extra tags. Anybody know a way? We want to use this on a development server that will be restarted several times a day. If I don't append, it wipes out my previous logs for the day.

    You'll have to do something to prevent your logger from appending to old logs that have been closed. Sorry if that is not very helpful, but I'm not familiar with the logging features in SDK 1.4. For your interest, though, Log4J (which you can get from Apache) also features XML logging, and it solved that rather obvious problem thus:
    "The output of the XMLLayout consists of a series of log4j:event elements as defined in the log4j.dtd. It does not output a complete well-formed XML file. The output is designed to be included as an external entity in a separate file to form a correct XML file."
    In other words, you would have to wrap the output in an XML header and a root node to be able to use it, which is not difficult to do.

  • SHADOW_IMPORT_UPG1 is very very slow, no log files are created

    Hi all
    We are now doing our production upgrade. During the SHADOW_IMPORT_UPG1 phase the system is very slow,
    no log files are created in the /usr/sap/put/log directory,
    and only three files are growing in the /usr/sap/tmp directory:
    orar3p> ls -lrt
    total 219176
    -rw-rw-rw-   1 r3padm     sapsys        2693 Aug 15 18:42 UCMIG_DE.ECO
    -rw-rw-rw-   1 r3padm     sapsys        2374 Aug 15 18:42 R3trans.out
    -rw-rw-rw-   1 r3padm     sapsys        2685 Aug 15 18:46 ADDON_TR.ECO
    -rw-rw-rw-   1 r3padm     sapsys         726 Aug 15 20:04 crshdusr.log
    -rw-rw-rw-   1 r3padm     sapsys        3915 Aug 15 21:53 EU_IMTSK.ECO
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLPTN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36433272 Aug 15 23:44 SAPKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36807577 Aug 15 23:44 SAPKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     35372350 Aug 15 23:44 SAPKLPTN18.R3P
    orar3p> date
    Fri Aug 15 23:44:54 PDT 2008
    Can anyone advise what to do?
    Thanks
    Senthil

    Hello,
    Did you discover what caused this phase to run so slowly? And how long did it take to complete in the end?
    We are currently running an upgrade of our Development system and have hit the same issue.
    I killed the upgrade after the phase had been running for 4 hours and restarted it, but it looks like it is still going to run for a long time.
    Regards....John

  • Filled redo log files are available to LGWR for reuse

    Hi,
    Oracle version:
    Oracle Database 10g Release 10.2.0.4.0 -
    OS:Windows XP
    In the Oracle documentation it is mentioned that:
    "Filled redo log files are available to LGWR for reuse depending on whether archiving is enabled.
    If archiving is disabled (the database is in NOARCHIVELOG mode), a filled redo log file is available after the changes recorded in it have been written to the datafiles."
    The link to the documentation is:
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/onlineredo001.htm
    My question is:
    Are redo records also written to the datafiles?

    user12141893 wrote:
    Does it mean:
    Suppose:
    Online redo log files contain some redo entries, and the database buffer cache contains some data blocks to which those redo entries belong.
    Now this redo log file can't be reused until those (related) blocks in the database buffer cache are written to the datafiles.
    This is more or less correct. When the log file fills up, LGWR switches over to the next log group and initiates a log switch, which in turn triggers DBWR to checkpoint the buffers protected by that log group to the datafiles. While this operation is in progress, the log group's members have the status ACTIVE; once it completes, the status changes to INACTIVE, which means the log group and its members can be reused by LGWR.
    HTH
    Aman....
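    A quick way to watch this status cycle for yourself (a minimal sketch; group numbers and sequences will vary):
    SQL> SELECT GROUP#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> SELECT GROUP#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;
    The group that was CURRENT before the switch shows ACTIVE while DBWR (and, in ARCHIVELOG mode, the archiver) still need it, and flips to INACTIVE once it can be reused.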

  • FMS  - change directory where the log files are located?

    I want to change the log files directory from:
    C:\Program Files (x86)\Adobe\Flash Media Server 3.5/logs
    to:
    D:\fmsLogs
    Please help me to understand...
    In the Adobe documentation, under:
    Home / Flash Media Server  3.5 Configuration and Administration Guide / XML configuration files reference
    it says:
    in Logger.xml in Directory
    Specifies the directory where the log files are located.
    By default, the log files are located in the logs directory in the server installation directory.
    Example:
    <Directory>${LOGGER.LOGDIR}</Directory>
    What does this mean: ${LOGGER.LOGDIR}?
    In order to change the log files directory from:
    C:\Program Files (x86)\Adobe\Flash Media Server 3.5/logs
    to:
    D:\fmsLogs
    do I need to write this:
    <Directory>D:\fmsLogs</Directory>
    or what do I need to write?
    It is totally unclear from this example...
    Big thanks for any help
    cheinan

    You can change LOGGER.LOGDIR in fms.ini to your preferred location, i.e. LOGGER.LOGDIR = D:\fmsLogs, and restart FMS.
    If you want to change the location for individual logs, you can change it in Logger.xml; by default Logger.xml uses the value from fms.ini via ${LOGGER.LOGDIR}.

  • The words in the .docx (MSWord 2010) file are getting merged when the file is transferred to another computer for printing. What could be the reason?

    The words in my .docx file are getting merged when the file is transferred to another computer for printing. For example, the sentence "rate of success ......" is displayed as "rateof success" on the other computer. What could be the reason for this? How can I solve this issue?

    Have you checked that the document is using the exact same font on both machines? If the second machine doesn't have the font installed that's used in the original document on the first machine, Word will pick the closest matching font, and that may display slightly differently.
