Securefile Logging Confusion

I'm trying to understand the recoverability implications of setting the LOGGING clause for Securefiles, but the documentation is not helping. The exact documentation from Oracle® Database SecureFiles and Large Objects Developer's Guide is below.
"LOGGING/NOLOGGING/FILESYSTEM_LIKE_LOGGING
Specify LOGGING if you want the creation of a database object, as well as subsequent inserts into the object, to be logged in the redo log file. LOGGING is the default.
Specify NOLOGGING if you do not want these operations to be logged.
For a non-partitioned object, the value specified for this clause is the actual physical attribute of the segment associated with the object. For partitioned objects, the value specified for this clause is the default physical attribute of the segments associated with all partitions specified in the CREATE statement (and in subsequent ALTER ... ADD PARTITION statements), unless you specify the logging attribute in the PARTITION description.
FILESYSTEM_LIKE_LOGGING means that SecureFiles only log the metadata. This option is invalid for BasicFiles. This setting is similar to the metadata journaling of file systems, which reduces mean time to recovery from failures. The LOGGING setting for SecureFile LOBs is similar to the data journaling of file systems. Both the LOGGING and FILESYSTEM_LIKE_LOGGING settings provide a complete transactional file system by way of SecureFiles.
For SecureFile LOBs, the NOLOGGING setting is converted internally to FILESYSTEM_LIKE_LOGGING.
FILESYSTEM_LIKE_LOGGING ensures that data is completely recoverable after a server failure.
Note:
For LOB segments, with the NOLOGGING and FILESYSTEM_LIKE_LOGGING settings it is possible for data to be changed on disk during a backup operation, resulting in read inconsistency. To avoid this situation, ensure that changes to LOB segments are saved in the redo log file by setting LOGGING for LOB storage."
Is FILESYSTEM_LIKE_LOGGING a recoverable operation? The 4th paragraph says it reduces mean time to recovery, but the closing Note implies that SecureFiles are essentially NOLOGGING (which the 5th paragraph also implies), yet the 6th paragraph says the data is completely recoverable.

I've been thinking about this a bit too. Here's what I think happens.
If you have LOGGING set, everything goes to redo, everything is fully recoverable. No problem.
If you have NOLOGGING set, then in the event of recovery there's no data in the redo, so those blocks will get ORA-26040 "data block was loaded using the NOLOGGING option". The only way to fix this type of corruption is to truncate the segment and reload.
That much, I think I'm clear on. This new FILESYSTEM_LIKE_LOGGING option is where I'm a bit foggy. I'm speculating that FILESYSTEM_LIKE_LOGGING means the metadata is written to redo after the LOB data is written to the SecureFiles area. So the metadata is logged, i.e. "the LOB data is over here in this SecureFiles area", but the LOB itself is not logged. So, if there's a failure and you do recovery, clean-up is a bit simpler: you can identify LOBs that are "missing" and selectively delete them, rather than having to resort to truncation of the entire segment.
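For reference, this is roughly how the option is declared in the LOB storage clause. The table and column names here are hypothetical, and SecureFiles also require an ASSM-managed tablespace; this is a sketch, not a tested example:

```sql
-- Hypothetical table; FILESYSTEM_LIKE_LOGGING is set per-LOB
-- in the SECUREFILE storage clause (11g and later).
CREATE TABLE docs (
  id   NUMBER PRIMARY KEY,
  body BLOB
)
LOB (body) STORE AS SECUREFILE (
  FILESYSTEM_LIKE_LOGGING
);
```

Per the quoted documentation, specifying NOLOGGING on a SecureFile LOB would be converted internally to this same setting.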
This seems to be supported by the 11gR1 Concepts manual:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/intro.htm#CNCPT1918
which has this to say:
"File System-like Logging: Modern file systems have the ability to keep a running log of the file system metadata. Putting this metadata into a running log (called a journal) that is flushed in a lazy fashion increases performance and removes the need for file system checking operations like fsck. SecureFiles' file system-like logging provides this same high performance journaling. File system-like logging also allows for +soft corruptions+, so that if an error is found on a block, SecureFiles returns a block with the LOB fill character. This allows the application to detect the error by seeing known invalid data and to recover either through deletion of the LOB (something that is not possible with the original implementation of LOBs) or by other means."
Finally, the 11gR2 SecureFiles and Large Objects Developer's Guide:
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10645/adlob_tables.htm#ADLOB45293
seems to suggest that doing bulk loads with FILESYSTEM_LIKE_LOGGING is a good idea.
That brings me to another thought, though. Clearly, NOLOGGING is for direct load only. What about FILESYSTEM_LIKE_LOGGING? Is that only going to work for direct load? Or any type of DML?
So, I think I understand it, sort of... :-) Clearly, this feature needs more documentation and explanation.
I need to play around a bit more, maybe set up a test case or two, before I'll be convinced I understand how it works....
Hope that helps,
-Mark

Similar Messages

  • Server Log Confusion (message meanings)

    I realize that there are several pieces to this whole LiveCycle thing, but I and my colleagues find the verbosity of the server log files somewhat overwhelming and extremely daunting. I am not even sure what question to ask or how to ask it. Basically, I would like a list of, or a link to, the descriptions and troubleshooting guides for the various modules/statements/messages that can appear in the server log. Some are rather easy to interpret, understand, and figure out. BUT many of them are worse than reading Latin, especially those that reference a STACKTRACE. Even a high-level breakdown of some of the more common things would help; just about any help would aid us. Attempting to search Adobe or even the web for specific entries with the specifics stripped out produces few if any sound results.
    I apologize if this seems like I am venting. Some of it may be just that, but I cannot believe that I/we are the only ones confused by the server log.

    I found that one and it is useful to an extent, but only if the log entry has one of those codes and the code is in that file. It seems that 90% of the log entries I am dealing with do not have any of those codes. This may mean that they are not specific to LiveCycle but come from one of the other tools used by LiveCycle (e.g., JBoss). I do not know, and that is part of my frustration.

  • Data Guard : Standby Redo Log CONFUSION

    Trying to set up test Standby db on 10.2.0
    I am rather confused about step 3.1.3 below: how is the normal redo linked with the standby redo? Should the standby logs not be members of the original redo groups?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i1225703
    Original redo logs:
    SQL>  select * from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
             1          1         28   52428800          1 YES INACTIVE                375136 22-NOV-07
             2          1         29   52428800          1 YES INACTIVE                375138 22-NOV-07
         3          1         30   52428800          1 NO  CURRENT                 375143 22-NOV-07
    I added the following from my notes:
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
    ('/u01/oracle/oradata/db01/redo01_stb.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11
    ('/u02/oracle/oradata/db01/redo02_stb.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 12
    ('/u03/oracle/oradata/db01/redo03_stb.log') SIZE 50M;
    After a few alter system switch logfile commands, I still have:
        GROUP#    THREAD#  SEQUENCE# ARC STATUS
            10          0          0 YES UNASSIGNED
            11          0          0 YES UNASSIGNED
        12          0          0 YES UNASSIGNED
    All are UNASSIGNED; should one standby group not be ACTIVE, like the above link shows?
    Many thanks for any help

    First things first:
    From the Docs.:
    "Minimally, the configuration should have one more standby redo log file group than the number of online redo log file groups on the primary database. However, the recommended number of standby redo log file groups is dependent on the number of threads on the primary database. Use the following equation to determine an appropriate number of standby redo log file groups:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    Using this equation reduces the likelihood that the primary instance's log writer (LGWR) process will be blocked because a standby redo log file cannot be allocated on the standby database. For example, if the primary database has 2 log files for each thread and 2 threads, then 6 standby redo log file groups are needed on the standby database."
    You are 1 short!
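    As a hedged sketch of the fix: with 3 online log groups in 1 thread on the primary, the formula gives (3 + 1) * 1 = 4 standby redo log groups, so one more group is needed. The group number and path below are made up for illustration:

```sql
-- (3 online groups per thread + 1) * 1 thread = 4 standby groups needed.
-- Group number 13 and the file path are illustrative only.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 13
  ('/u04/oracle/oradata/db01/redo04_stb.log') SIZE 50M;
```

    Note also that standby groups stay UNASSIGNED until redo is actually being received from a primary; log switches on a standalone database will not activate them.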

  • RAC redo logs (Confusion)

    I was reading RAC documents that is written by Steve Karam.
    Cache Fusion for RAC
    RAC provides us a multiple instance, single database system. In a RAC environment, there is one shared set of datafiles. Each instance in the “cluster” will have its own SGA (RAM areas) and binary processes. They will also have their own control files and redo log files, though these must be viewable by all nodes, or systems, in the cluster.
    http://www.dba-oracle.com/t_implementation_decision_rac_clusters.htm
    I'm confused about whether each instance has its own redo log files or they are centrally stored. The above document says that each instance has its own redo log files.
    Rakesh Soni.

    I'm confused whether each instance has its own redo log file or they are centrally stored. The above document says that each instance has its own redo log files.
    No, it is not...
    We do have the log_archive_format parameter; here we will be using _%t along with the format.
    t = thread, which tells Oracle which instance the log is coming from...
    I am not a RAC expert, but it cannot be like that.

  • Help me. Text log confusion.

    I've recently had problems with my phone. Lately, it's been confusing two of the people in my text messages. On the lock screen, it shows who a message is actually from, but in my text logs it shows that these two are the same person. In my contacts it shows that they have different numbers, but when I look at the texts they are one person. I've tried restarting my phone and I've tried starting a new message to the person I actually want to talk to, but it doesn't work. It does the same thing. Please help me: is this a carrier problem or an actual device problem? Also, I recently updated my phone.

    No, one person does not even have an Apple product, but the other person does.

  • Activate BI content

    Hey guys,
    I tried to activate the BI content using the activity in SPRO, but not even the simulation worked fine.
    The job crashed with an ABAP DUMP and the message
    Name: OBJECTS_OBJREF_NOT_ASSIGNED_NO
    Exception: CX_SY_REF_IS_INITIAL
    method: CL_RSO_TLOGO_COLLECTION
    Then I ran the job in the background, and it crashed again...
    The job log confuses me:
    about 30 lines like this (just different objects):
    DS 0TCTBWOSTYP_TEXT to QS D07110 does not exist;
    InfoPackage 0PAK_4D6IR4VL7P5NFFOX9KRJYZ4NS not activated
    So, some things seem to be missing; or do I have to install those objects manually?
    I thought, if I got that right, that the activity in SPRO - Activate Business Content would be enough and everything should work fine?
    I hope you can help me out to solve that problem
    regards
    Matthias

    Name: OBJECTS_OBJREF_NOT_ASSIGNED_NO
    Exception: CX_SY_REF_IS_INITIAL
    method: CL_RSO_TLOGO_COLLECTION
    Check SAP Note 1019055 for the above

  • Crashing iSight

    Hi all,
    I have the iSight model and it is crashing quite a lot. Today has been worse than others.
    It has been upgraded to 1.5 GB of RAM, which was bought through Crucial. I have now thrown all the packaging away but hope they will refund it if it turns out to be the memory.
    It passed the hardware test off the CD and also passed a 3rd-party memory test (memtest).
    Usually it just locks up and the fans start going. I have now removed the extra RAM and will see how I get on. It sometimes fails with a kernel error and sometimes with a box in the middle saying power down.
    The panic log confuses me :-0 but here is one from earlier:
    Sun Dec 4 17:04:44 2005
    Unresolved kernel trap(cpu 0): 0x300 - Data access DAR=0x000000000000004C PC=0x00000000000A50F0
    Latest crash info for cpu 0:
    Exception state (sv=0x3CB82A00)
    PC=0x000A50F0; MSR=0x00009030; DAR=0x0000004C; DSISR=0x42000000; LR=0x002A9B6C; R1=0x21E63EC0; XCP=0x0000000C (0x300 - Data access)
    Backtrace:
    0x00000000 0x000ABE30 0x6E672049
    backtrace terminated - frame not mapped or invalid: 0xBFFEEC20
    Proceeding back via exception chain:
    Exception state (sv=0x3CB82A00)
    previously dumped as "Latest" state. skipping...
    Exception state (sv=0x3BF4A780)
    PC=0x90001B48; MSR=0x0200F030; DAR=0xE40FB000; DSISR=0x42000000; LR=0x900C46F0; R1=0xBFFEEC20; XCP=0x00000030 (0xC00 - System call)
    Kernel version:
    Darwin Kernel Version 8.3.0: Mon Oct 3 20:04:04 PDT 2005; root:xnu-792.6.22.obj~2/RELEASE_PPC
    panic(cpu 0 caller 0xFFFF0003): 0x300 - Data access
    Latest stack backtrace for cpu 0:
    Backtrace:
    0x00095698 0x00095BB0 0x0002683C 0x000A8304 0x000ABC80
    Proceeding back via exception chain:
    Exception state (sv=0x3CB82A00)
    PC=0x000A50F0; MSR=0x00009030; DAR=0x0000004C; DSISR=0x42000000; LR=0x002A9B6C; R1=0x21E63EC0; XCP=0x0000000C (0x300 - Data access)
    Backtrace:
    0x00000000 0x000ABE30 0x6E672049
    backtrace terminated - frame not mapped or invalid: 0xBFFEEC20
    Exception state (sv=0x3BF4A780)
    PC=0x90001B48; MSR=0x0200F030; DAR=0xE40FB000; DSISR=0x42000000; LR=0x900C46F0; R1=0xBFFEEC20; XCP=0x00000030 (0xC00 - System call)
    Kernel version:
    Darwin Kernel Version 8.3.0: Mon Oct 3 20:04:04 PDT 2005; root:xnu-792.6.22.obj~2/RELEASE_PPC
    For the last few crashes I have taken a photo of the kernel errors (Corrupt Stack, Unaligned Stack, and Invalid PMAP), and they are here.
    http://static.flickr.com/35/701783370671d3a92co.jpg
    http://static.flickr.com/24/70178270ca7ed8b555o.jpg
    http://static.flickr.com/24/70178294fb4bba647fo.jpg
    Usual programs running are Firefox, EyeTV, iCal, Mail.
    I really hope you can help. I have searched around the web for an answer.
    James
    G5 iMac 20" 2.1     Powerbook 1Ghz, Lacie D2 External drive, iPod Photo,

    The very first thing to do is to pull the extra ram and see how the computer runs without it. Crucial is pretty good about exchanging defective RAM, so I doubt you'd have any trouble if that turns out to be the problem. FWIW, defective ram often passes the hardware test, so you can't rule it out because of that.

  • Hi, I am a little confused: I logged into Creative Cloud and bought the InDesign plan for a year, but I can't seem to download it. There is a window that pops up saying it's downloading, but it's taking forever. Any advice?


    Hi Dima,
    Please refer to the help documents below:
    Troubleshoot Creative Cloud download and install issues
    Error downloading, installing, or updating Creative Cloud applications
    Regards,
    Sheena

  • Confusing error message in job log of infocube loading job

    Hello,
    I executed a job which runs a process chain. The process chain has two important steps: it extracts transaction data from a planning area into an InfoCube, and it loads APO-relevant master data (materials, plants) from the R/3 system into the same InfoCube.
    Almost every step of this process seems to be executed correctly. The loading process ended successfully. But the spool of the job is a bit confusing:
    Loaded CVCs are all listed and appear with a green traffic light. Master data appears with a red traffic light and the comment: Product xxx does not exist. The same goes for the location.
    Does anybody know why the signals are red in this case?
    In APO and R3 all listed master data are available - in the planning area as well as in the infocube after the data loading.
    So, I am very confused about this message text in the spool because everything seems to be ok....
    Thanks in advance
    Best regards
    H.Becker

    Hello Heinz,
    First of all, we need to be sure which step the particular log corresponds to.
    You mentioned that your process chain has 2 steps.
    Do you know how to look at the logs for a particular step of a process chain?
    Go to RSPC, double-click on your process chain, and then click the log button at the top.
    You would now be in the log view. Then double-click on step no. 1 and see the information contained in the central tab named "backg". Does it contain the errors corresponding to your reds (that location or product does not exist)?
    If not, repeat the same for step 2.
    Once you are sure which step you are getting your error against, please share some more info regarding that step.
    Then we could look further into the possible cause.
    PS: Your first message itself was very confusing. You mentioned that you get transaction data from the planning area into the cube, and that you also get CVCs (locations/products) from the R/3 source system into the cube.
    Thanks - Pawan

  • Confused about standby redo log groups

    hi masters,
    I am a little bit confused about creating redo log groups for a standby database. As per the documentation, the number of standby redo log groups depends on the following equation:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    But I don't know where to find the threads; actually, I would like to understand threads in depth.
    How do I find the current thread?
    thanks and regards
    VD

    is it really possible that we can install standby and primary on same host??
    Yes, it's possible, and I have done it many times on the same machine.
    As for your confusion about the spfile: I agree the documentation recommends using an spfile, but that is for DG broker handling, and only matters if you go with DG broker in the future.
    Using an spfile is not an integral step for primary and standby database implementation; you can go with a pfile, but it is good practice to use an spfile. Anyhow, always keep the pfile on whose basis you created the spfile. I said make the entries in the pfile and then mount your standby database with this pfile, or create the spfile from this pfile after adding the parameters; I said this because you might otherwise be adding the parameters from the SQL prompt.
    1. Logs are not getting transferred (even though I configured the listener using Net Manager).
    2. Logs are not getting archived at the standby directory.
    3. ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION never completes its recovery.
    4. When I tried to open the database, it always said the system datafile is not from a sufficiently old backup.
    5. I tried alter database recover managed standby database cancel as well.
    Read your alert log file and paste the latest entries here.
    Khurram
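    To answer the thread question from the original post: threads correspond to instances (a non-RAC database has a single thread, thread 1), and both v$thread and v$log expose them. A sketch of queries that feed the sizing formula:

```sql
-- List the threads (instances) known to the database
SELECT thread#, status, instance FROM v$thread;

-- Count online redo log groups per thread,
-- the input to (groups per thread + 1) * threads
SELECT thread#, COUNT(*) AS log_groups
  FROM v$log
 GROUP BY thread#;
```

    On a single-instance database both queries return thread 1 only.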

  • Standby database Archive log destination confusion

    Hi All,
    I need your help here..
    This is the first time that this situation is arising. We had sync issues in the Oracle 10g standby database prior to this archive log destination confusion, so we rebuilt the standby to overcome the sync issue. But ever since then, the archive logs in the standby database have been moving to two different locations.
    The spfile entries are provided below:
    *.log_archive_dest_1='LOCATION=/m99/oradata/MARDB/archive/'
    *.standby_archive_dest='/m99/oradata/MARDB/standby'
    Prior to rebuilding the standby database, the archive logs were moving to the /m99/oradata/MARDB/archive/ location, which is the correct location. But now the archive logs are moving to both /m99/oradata/MARDB/archive/ and /m99/oradata/MARDB/standby, with the majority of them moving to the /m99/oradata/MARDB/standby location. This is pretty unusual.
    The archives in the production are moving to /m99/oradata/MARDB/archive/ location itself.
    Could you kindly help me overcome this issue.
    Regards,
    Dan

    Hi Anurag,
    Thank you for update.
    Prior to rebuilding the standby database, standby_archive_dest was set as it is now. No modifications were made to the archive destination locations.
    The primary and standby databases are on different servers, and Data Guard is used to transfer the files.
    I wanted to highlight one more point here: the archive locations are similar to the ones I mentioned for the other standby databases, but there the archive logs are moving only to the /archive location and not to the /standby location.
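    One hedged way to diagnose this is to query v$archive_dest on the standby; note that on 10g, STANDBY_ARCHIVE_DEST (if set) generally takes precedence for redo received from the primary, which would explain logs landing in /m99/oradata/MARDB/standby:

```sql
-- Run on the standby: show where each archive destination
-- points and its current state
SELECT dest_id, dest_name, destination, status
  FROM v$archive_dest
 WHERE destination IS NOT NULL;
```

    If the /standby path is unwanted, unsetting standby_archive_dest (or pointing it at the same location as log_archive_dest_1) may be worth testing.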

  • Log File Creation Confusion

    SQL*Plus: Release 10.2.0.3.0 - Production on Mon Mar 11 11:42:45 2013
    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.
    There are some initialization parameters that decide the location of the online redo log files in general. These initialization parameters are:
    - DB_CREATE_ONLINE_LOG_DEST_n
    - DB_RECOVERY_FILE_DEST
    - DB_CREATE_FILE_DEST
    I could not understand the level of precedence among these parameters when each of them is set for creating an online log file. If I set all of these parameters, then creating an online log file always uses the path defined in DB_CREATE_ONLINE_LOG_DEST_n and ignores the other parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST).
    If I set just the last two parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) and do not set DB_CREATE_ONLINE_LOG_DEST_n, the log file is created in both locations (DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST) with a mirroring mechanism.
    SQL> select name,value
      2    from v$parameter
      3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
      4  /
    NAME                                                                             VALUE
    db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
    db_create_online_log_dest_1
    db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                              
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
    SQL> alter database add logfile
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                                      
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                            
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                            
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                            
             4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\DBFILE\ORCL\ONLINELOG\O1_MF_4_8MTHLWTJ_.LOG                   
         4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\FLASH_RECOVERY_AREA\ORCL\ONLINELOG\O1_MF_4_8MTHLZB8_.LOG
    As you can see from the above result, creating a log file adheres to the defined parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST). When I define the parameter DB_CREATE_ONLINE_LOG_DEST_1, log file creation goes only to the location defined by DB_CREATE_ONLINE_LOG_DEST_1, no matter what is defined for DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST. Here you go.
    SQL> alter database drop logfile group 4
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                      
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                            
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                            
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                            
    SQL> alter system set db_create_online_log_dest_1='D:\oracle' scope=both
      2  /
    System altered.
    SQL> select name,value
      2    from v$parameter
      3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
      4  /
    NAME                                                                             VALUE
    db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
    db_create_online_log_dest_1                                                      D:\oracle
    db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
    SQL> alter database add logfile
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                              
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
         4         ONLINE  D:\ORACLE\ORCL\ONLINELOG\O1_MF_4_8MTJ10B8_.LOG
    My confusion is this: why is the mechanism of (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) the same, while the behavior of both of them becomes different when you define DB_CREATE_ONLINE_LOG_DEST_n?

    DB_CREATE_FILE_DEST is used if DB_CREATE_ONLINE_LOG_DEST_n is not defined.
    DB_RECOVERY_FILE_DEST is used for multiplexed log files.
    Thus, if Oracle uses DB_CREATE_FILE_DEST (because DB_CREATE_ONLINE_LOG_DEST_n is not defined), it multiplexes the log file to DB_RECOVERY_FILE_DEST if DB_RECOVERY_FILE_DEST is also defined.
    If, however, DB_CREATE_ONLINE_LOG_DEST_1 is used, Oracle expects you to define DB_CREATE_ONLINE_LOG_DEST_2 as well for multiplexing the log file; else it assumes that you do not want the log file multiplexed. The fact that the parameter ends with an n means that Oracle uses n=2 for the multiplexed location if defined.
    Hemant K Chitale
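    These precedence rules can be sketched as follows; the paths are illustrative only:

```sql
-- With both _1 and _2 set, ALTER DATABASE ADD LOGFILE creates one member
-- in each location; with only _1 set, the group is not multiplexed and
-- DB_RECOVERY_FILE_DEST is ignored for online logs.
ALTER SYSTEM SET db_create_online_log_dest_1 = 'D:\oracle\log1' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_2 = 'D:\oracle\log2' SCOPE=BOTH;
ALTER DATABASE ADD LOGFILE;
```

    This matches the transcript above: once db_create_online_log_dest_1 was set, the new group 4 got a single member under D:\oracle only.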

  • I'm a bit confused about standby log files

    Hi all,
    I'm a bit confused about something and wondering if someone can explain.
    I have a Primary database that ships logs to a Logical Standby database.
    Everything appears to be working properly. If I check the v$archived_log table in the Primary and compare it to the dba_logstdby_log view in the Logical Standby, I'm seeing that logs are being applied.
    On the logical standby, I have the following configured for log_archive_dest_n parameters:
    *.log_archive_dest_1='LOCATION=/u01/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_2='LOCATION=/u02/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_3='LOCATION=/u03/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_4='SERVICE=PNX8A_WDC ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PNX8A_WDC'
    *.log_archive_dest_5='LOCATION=/u01/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_6='LOCATION=/u02/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_7='LOCATION=/u03/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    Here is my confusion now. Before converting from a Physical standby database to a Logical Standby database, I was under the impression that I needed the standby logs (i.e. log_archive_dest_5, 6 and 7 above) because a Physical Standby database would receive the redo from the primary and write it into the standby logs before applying the redo in the standby logs to the Physical standby database.
    I've now converted to a Logical Standby database. What's happening is that the standby logs are accumulating in the directory pointed to by log_archive_dest_6 above (/u02/oracle/standbylogs/ORADB1). They do not appear to be getting cleaned up by the database.
    In the Logical Standby database I do have STANDBY_FILE_MANAGEMENT parameter set to AUTO. Can anyone explain to me why standby log files would continue to accumulate and how I can get the Logical Standby database to remove them after they are no longer needed on the LSB db?
    Thanks in advance.
    John S
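    One possibility, offered as a sketch rather than a confirmed fix: on a logical standby, SQL Apply can be told to delete archived logs it has consumed via DBMS_LOGSTDBY (SQL Apply must be stopped while changing the setting):

```sql
-- On the logical standby: stop SQL Apply, enable automatic
-- deletion of consumed foreign archived logs, restart apply
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'TRUE');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```

    STANDBY_FILE_MANAGEMENT governs datafile creation, not archived log cleanup, which may explain why setting it to AUTO did not help here.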

    JSebastian wrote:
    I assume you mean in your question: why am I using three standby log locations on the standby database (i.e. log_archive_dest_5, 6, and 7)?
    If that is your question, my answer is that I just figured more than one location would be safer, but I could be wrong about this. Can you tell me if only one location should be sufficient for the standby logs? The more I think about this, that is probably correct, because I assume that Log Transport Services will re-request the log from the Primary database if there is some kind of error at the standby location with the standby log. Is this correct?
    Configure it as simply as below. Why have multiple destinations for standby?
    check notes Step by Step Guide on How to Create Logical Standby [ID 738643.1]
    >
    LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
    LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
    LOG_ARCHIVE_DEST_3='LOCATION=/arch2/boston/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=boston'
    The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.
    LOG_ARCHIVE_DEST_1 - Primary role: directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/. Logical standby role: directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/.
    LOG_ARCHIVE_DEST_2 - Primary role: directs transmission of redo data to the remote logical standby database chicago. Logical standby role: ignored; LOG_ARCHIVE_DEST_2 is valid only when boston is running in the primary role.
    LOG_ARCHIVE_DEST_3 - Primary role: ignored; LOG_ARCHIVE_DEST_3 is valid only when boston is running in the standby role. Logical standby role: directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/.
    >
    Source:-
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ls.htm

  • My confusion with logs in iAS 6.5

    Hi everybody,
    I am very much confused by the log files in iAS 6.5. This is a bit lengthy, but kindly go through it slowly and help me solve it.
    (1) In the Settings -> Control Panel -> Services window, I have enabled "Allow Service to Interact with Desktop".
    In the admin tool, I have checked the "Enable Server Event Log" check box. I have also enabled the option "Log to console" and selected the message type to be "All Messages".
    When i did this, i was able to see the logs as console(command prompt windows). When ever i run a jsp/servlet/EJB containing System.out.println() statements, i was able to see the std out messages in the KJS.exe console. What should i do these std out messages in a file rather than a console?
    (2) In the settings -> Control Panel -> Services window, i have enabled "Allow Service to Interact with Desktop".
    In the admin tool, i have checked "Enable Server Event Log" check box. I have also enabled the option "Log to File", gave the file name as "logs\ias" and selected the message type to be "All Messages".
    When i did this, i saw the following files created under the installdir\ias6\ias\logs.
    ias.10817
    ias.10818
    ias.10819
    ias.10820
    ias.10821
    (10817 - kas port, 10818 - kxs port, 10819 - kjs port, 10820 - kcs port, 10821 - cxs port)
    Since, these log files contains the port number of the above mentioned processes, i expected them to be the log files of the processes with corresponding port numbers. For example, ias.10819 to be the log of KJS process, since its port is 10819. Is it correct?
    If it is correct, i am not able to see the System.out.println()
    messages in this file, though it appears in the console(as mentioned in point 1 ). Why is it?
    I want to see all the Std Out messages (which appears in KJS console) in a log file. What should i do for that?
    (3) In the iPlanet Application Server Administration guide (Page 90), i found a way to log to a file on Windows platform. As per that document, i changed the IAS_KASLOGFILE from 0 to 1 in the System Variable and restarted the system. It created the file by the name, KAS.log in the installdir/ias6/ias/logs.
    Now i was able to see the KAS process's console messages in this log file. But there is nothing written to the KAS console. Why is it?
    (4) What is the difference between the log file ias.10817 (created as per in point 2) and KAS.log (created as per in point 3)?
    thanks and regards,
    desigan
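    One workaround for points (1) and (2) that does not depend on iAS at all is to redirect System.out to a file of your own from application code, so println output lands in a file regardless of what the KJS console or the ias.* event logs do. A minimal sketch in plain Java; the file name kjs-stdout.log is arbitrary, not an iAS convention:

    ```java
    import java.io.FileOutputStream;
    import java.io.PrintStream;

    public class StdoutToFile {
        public static void main(String[] args) throws Exception {
            // Open the target log file in append mode with auto-flush,
            // then make it the process-wide standard output stream.
            PrintStream fileOut = new PrintStream(
                    new FileOutputStream("kjs-stdout.log", true), true);
            System.setOut(fileOut);

            // From here on, println output goes to kjs-stdout.log
            // instead of the console window.
            System.out.println("servlet debug message");
        }
    }
    ```

    In a servlet you would do the redirection once (for example in init()), since System.setOut affects the whole JVM, i.e. the whole KJS process.
    
    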

    Hi,
    Could you please let me know the resource where you got this information?
    Thanks & Regards
    Ganesh .R
    Developer Technical Support
    Sun Microsystems
    http://www.sun.com/developers/support

  • I keep getting an error trying to open labview 7.1, and I am really confused. screen shot and log included

    Hello,
    For some reason, I have recently been having problems running LabVIEW. It all started when I upgraded from LabVIEW 6 to LabVIEW 7.1. I even went as far as to re-image a brand-new machine with XP and start over again. I'm still getting issues. What do you think? Can a virus cause problems like this?
    Thanks for your help, I really appreciate it
    Attachments:
    error_labview_7_1.JPG ‏13 KB
    4f9d_appcompat.txt ‏27 KB

    Hello!
    Sorry to hear you are experiencing problems with LabVIEW. This error dialog is the result of some error causing Windows to shut LabVIEW down. This is a little different from an internal LabVIEW error, which usually results in an error message with a .cpp file/line number. I don't really know how to decode the MS error log file, but from what I see it looks like you are doing quite a bit of complicated programming. I see references to quite a lot of the more advanced LabVIEW programming items (such as DLLs, MathScript, images, and storage). I strongly suspect that the crashes are occurring somewhere in external code, but without more information on your program it is hard for me to diagnose. Could you provide us with a little more information about your application? Also, it would be great for troubleshooting if you could reduce the crashing VI as much as possible so that we can see exactly _where_ the crash is occurring. This could shed a lot of light on the problem.
    Also, if you are doing some MathScript work, don't forget to check out http://digital.ni.com/public.nsf/websearch/4475BC3CEB062C9586256D750058F14B?OpenDocument in case any of it pertains to your system.
    Thanks for posting to the NI Discussion Forums! Please let me know how it all goes.
    Travis M
    LabVIEW R&D
    National Instruments
