Log File Creation Confusion

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Mar 11 11:42:45 2013
Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

There are some initialization parameters that decide the location of the online redo log files in general. These initialization parameters are:
- DB_CREATE_ONLINE_LOG_DEST_n
- DB_RECOVERY_FILE_DEST
- DB_CREATE_FILE_DEST
I could not understand the order of precedence among these parameters when each of them is set for online log file creation. If I set all of these parameters, the online log file is always created in the path defined by the parameter DB_CREATE_ONLINE_LOG_DEST_n, and the other parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) are ignored.
If I set only the last two parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) and do not set DB_CREATE_ONLINE_LOG_DEST_n, the log file is created in both locations, i.e. the members are mirrored (multiplexed).
SQL> select name,value
  2    from v$parameter
  3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
  4  /
NAME                                                                             VALUE
db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
db_create_online_log_dest_1
db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
SQL> select * from v$logfile
  2  /
    GROUP# STATUS  TYPE    MEMBER                                                                              
         3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
         2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
         1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
SQL> alter database add logfile
  2  /
Database altered.
SQL> select * from v$logfile
  2  /
    GROUP# STATUS  TYPE    MEMBER                                                                                      
         3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                            
         2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                            
         1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                            
         4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\DBFILE\ORCL\ONLINELOG\O1_MF_4_8MTHLWTJ_.LOG                   
         4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\FLASH_RECOVERY_AREA\ORCL\ONLINELOG\O1_MF_4_8MTHLZB8_.LOG

As you can see from the result above, the new log file group follows the parameters DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST, with one member created in each location. But when I define the parameter DB_CREATE_ONLINE_LOG_DEST_1, log file creation goes only to the location defined by DB_CREATE_ONLINE_LOG_DEST_1, no matter what is defined for DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST. Here you go:
SQL> alter database drop logfile group 4
  2  /
Database altered.
SQL> select * from v$logfile
  2  /
    GROUP# STATUS  TYPE    MEMBER                                                                      
         3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                            
         2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                            
         1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                            
SQL> alter system set db_create_online_log_dest_1='D:\oracle' scope=both
  2  /
System altered.
SQL> select name,value
  2    from v$parameter
  3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
  4  /
NAME                                                                             VALUE
db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
db_create_online_log_dest_1                                                      D:\oracle
db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
SQL> alter database add logfile
  2  /
Database altered.
SQL> select * from v$logfile
  2  /
    GROUP# STATUS  TYPE    MEMBER                                                                              
         3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
         2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
         1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
         4         ONLINE  D:\ORACLE\ORCL\ONLINELOG\O1_MF_4_8MTJ10B8_.LOG

My confusion is this: why do DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST behave the same way (each receives a mirrored member), while the behaviour changes as soon as DB_CREATE_ONLINE_LOG_DEST_n is defined?

DB_CREATE_FILE_DEST is used if DB_CREATE_ONLINE_LOG_DEST_n is not defined.
DB_RECOVERY_FILE_DEST is used for multiplexed log files.
Thus, if Oracle uses DB_CREATE_FILE_DEST (because DB_CREATE_ONLINE_LOG_DEST_n is not defined), it multiplexes the log file to DB_RECOVERY_FILE_DEST if DB_RECOVERY_FILE_DEST is also defined.
If, however, DB_CREATE_ONLINE_LOG_DEST_1 is used, Oracle expects you to define DB_CREATE_ONLINE_LOG_DEST_2 as well for multiplexing the log file; else it assumes that you do not want the log file multiplexed. The fact that the parameter ends with an n means that Oracle uses n=2 for the multiplexed location, if defined.
Hemant K Chitale
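
To illustrate the point above (a minimal sketch, not from the original session; the directory paths are hypothetical and must already exist), multiplexing through the OMF parameters would look roughly like this:

SQL> alter system set db_create_online_log_dest_1='D:\oracle\redo_a' scope=both;
SQL> alter system set db_create_online_log_dest_2='D:\oracle\redo_b' scope=both;
SQL> alter database add logfile;
-- Expected result: the new group gets two members, one under D:\oracle\redo_a and
-- one under D:\oracle\redo_b; DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST are not consulted.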

Similar Messages

  • Sqlnet.log file creation permissions

    One of my sites has a sqlnet.log file that has been symlinked to /dev/null and we are trying to determine if this has caused the issue with /dev/null's permissions being reset to 660 instead of 666.
    It looks like the other sqlnet.log files that are not symlinked to /dev/null are 640 so we are not sure this is the issue.
    Does anyone know if Oracle changes permissions on the sqlnet.log file at creation or access time? And how might it change those permissions?
    This is on Oracle 10g running on RHEL4
    edit: I typed sqlnet.ora instead of sqlnet.log ... oops.
    Edited by: user12198769 on Nov 10, 2009 7:35 AM

    looks to be just the default in that file:
    NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
    There are several sqlnet.log files in other places on the server, yet there is only one Oracle home here, so is there another location that has sqlnet.ora info?

  • Disable Log file creation for a Berkeley DB database

    Hi,
    I'm using Berkeley DB 6 with Oracle Mobile Server 11.3. When I sync a lot of data, a lot of log files are created, and I think this is really slowing down my sync process. Since I never need to recover those client databases, I would like to know if it is possible to disable log creation on a Berkeley database?
    Thank you

    The version of BDB that is used for DMS is TDS (Transaction Data Store). In that environment, logging is needed to ensure recoverability. There isn't a way to disable logging. If you never need to do recovery, then you can use the BDB utilities and occasionally do checkpoints, which will flush the cache, or you can shut down the client application. After that is done you can remove the log files, since you state that you will not need them for recovery.
    thanks
    mike

  • Exchange Server 2010 - Message Tracking Logs - Log file creation

    Hi,
    I would like to understand the behavior of Exchange Server with regard to how it writes the message tracking logs.
    Currently the parameters used are:
    MessageTrackingLogMaxDirectorySize - 10GB
    MessageTrackingLogMaxAge - 30 days
    I would like to check: when the maximum directory size has been exceeded, does Exchange Server immediately delete the oldest log file to make space for the new logs? And in the event that the oldest file is open or locked, will Exchange Server delete the next oldest file, or will it reattempt to delete the "locked" file for a period of time? Lastly, when these "oldest" files cannot be deleted, will Exchange Server stop logging new tracking events?
    Thanks!

    Hi Zack,
    Thank you for your question.
    Since you have configured the "MessageTrackingLogMaxDirectorySize" and "MessageTrackingLogMaxAge" parameters, circular logging is in effect: Exchange deletes the oldest message tracking log files to make room for new log files when either of the following conditions is true:
    The message tracking log directory reaches its specified maximum size.
    A message tracking log file reaches its specified maximum age.
    In other words, the directory should not exceed the value indicated.
    If there are any questions regarding this issue, please feel free to let me know.
    Best Regards,
    Jim
    Jim Xu
    TechNet Community Support

  • Log file creation using km api

    Hi,
    How can I create a log file using the KM API? Please provide sample code if any is available.
    Thanks and Regards,
    Nari.

    Thanks for your quick reply, but I have one more requirement: I am able to create a text file in KM and add content to it, but the new content is appended on the same line. I want each update to go on a new line. Please see the code below and correct it.
     // filepath, filename, and the text to append (inputData) are assumed to be defined elsewhere.
     Date dt = new Date(Calendar.getInstance().getTimeInMillis());
     com.sapportals.portal.security.usermanagement.IUser iuser =
             WPUMFactory.getServiceUserFactory().getServiceUser("cmadmin_service");
     IResourceContext irCtx = new ResourceContext(iuser);
     RID docsResource = RID.getRID(filepath);
     IContent initCont = new Content(new ByteArrayInputStream("".getBytes()), "text/plain", -1, null);
     // Create the file only if it does not exist yet (braces were missing in the original).
     if (ResourceFactory.getInstance().getResource(RID.getRID(filepath + "/" + filename), irCtx) == null) {
         ICollection docsColl = (ICollection) com.sapportals.wcm.repository.ResourceFactory
                 .getInstance().getResource(docsResource, irCtx);
         docsColl.createResource(filename, null, initCont);
     }
     RID sugg_html = RID.getRID(filepath + "/" + filename);
     IResource resource = com.sapportals.wcm.repository.ResourceFactory.getInstance().getResource(sugg_html, irCtx);
     IContent cont = resource.getContent();
     // Read the complete existing content, not just the first line; readLine() alone was why
     // everything ended up on one line. Then append the new entry on its own line.
     BufferedReader buf_in = new BufferedReader(new InputStreamReader(cont.getInputStream()));
     StringBuilder existingComments = new StringBuilder();
     String line;
     while ((line = buf_in.readLine()) != null) {
         existingComments.append(line).append("\n");
     }
     existingComments.append(dt).append("   ").append(inputData).append("\n");
     ByteArrayInputStream inputStream = new ByteArrayInputStream(existingComments.toString().getBytes());
     cont = new Content(inputStream, "text/plain", -1, null);
     resource.updateContent(cont);
     cont.close();

  • Log file creation -- BDC

    Hi friends. This is Sudhir. I have a scenario of loading mass vendor data. If any error occurs while loading, the error record should be written to a log file. Do we have any function module to create the log file?
    Hope your answers will be helpful to proceed further.
    With regards ,
    Sudhir S

    Are you looking for a way to generate logs like the ones you can see in SLG0?
    You can also store them into a spool.
    To store them into a file, you can simply read the spool output (or you can maybe use a SUBMIT ... EXPORTING LIST TO MEMORY to avoid the spool).
    In that case, please refer to [sap library|http://help.sap.com/saphelp_nw2004s/helpdata/en/d3/1fa03940fab918e10000000a114084/frameset.htm]
    and SBAL* demo programs (use of BAL_* function modules)
    Edited by: Sandra Rossi on Jul 20, 2010 10:39 PM

  • Log file creation

    Hi,
    Can anybody tell me how to create a log file to know which master data has been replicated?

  • Log file creation ( Trace)

    Hi,
    I am trying to enable the global trace for the application and have set the following in the mm.cfg file:
    ErrorReportingEnable = 1
    TraceOutputFileEnable =1
    MaxWarnings = 100
    TraceOutputFileName = "C:\Documents and
    Settings\user\myProject.log"
    The file is not created when I am using a trace("Test") in
    the button click event on the page.
    Is there anything important that I am missing?
    Thanks in advance.

    Hi,
    Post your question in the appropriate General Database Discussions.
    Thanks,
    Hussein

  • Jar file: log file is not created!

    Hi, I create a log file with these instructions:
    //logger
              FileHandler fh = new FileHandler(LOG_FILE,true); //append mode
              fh.setFormatter(new SimpleFormatter());
              logger = Logger.getLogger(this.getClass().getName());
          logger.addHandler(fh);
    It works well as long as I execute it normally, but when I create a .jar file and put this logger class inside it, the log file is not created. How come?

    > Hi, I create a log file with these instructions:
    The code extract creates a Logger object, sure, but does that mean it creates a log file? The log file creation could be deferred until the first usage of the logger...
    > It works well as long I execute it normally,
    Let's clarify: do you mean, when executing from your IDE, or when executing via a command line?
    > but when I create a .jar file and put this logger class inside logger file is not created..how come?
    Let's clarify: do you mean, when you execute the same code packaged in a jar file?
    The most likely issue is configuration: apparently the various ways in which you run your program use different logging configurations.
    J.

  • I'm a bit confused about standby log files

    Hi all,
    I'm a bit confused about something and wondering if someone can explain.
    I have a Primary database that ships logs to a Logical Standby database.
    Everything appears to be working properly. If I check the v$archived_log table in the Primary and compare it to the dba_logstdby_log view in the Logical Standby, I'm seeing that logs are being applied.
    On the logical standby, I have the following configured for log_archive_dest_n parameters:
    *.log_archive_dest_1='LOCATION=/u01/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_2='LOCATION=/u02/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_3='LOCATION=/u03/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_4='SERVICE=PNX8A_WDC ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PNX8A_WDC'
    *.log_archive_dest_5='LOCATION=/u01/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_6='LOCATION=/u02/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_7='LOCATION=/u03/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    Here is my confusion now. Before converting from a Physical standby database to a Logical Standby database, I was under the impression that I needed the standby logs (i.e. log_archive_dest_5, 6 and 7 above) because a Physical Standby database would receive the redo from the primary and write it into the standby logs before applying the redo in the standby logs to the Physical standby database.
    I've now converted to a Logical Standby database. What's happening is that the standby logs are accumulating in the directory pointed to by log_archive_dest_6 above (/u02/oracle/standbylogs/ORADB1). They do not appear to be getting cleaned up by the database.
    In the Logical Standby database I do have STANDBY_FILE_MANAGEMENT parameter set to AUTO. Can anyone explain to me why standby log files would continue to accumulate and how I can get the Logical Standby database to remove them after they are no longer needed on the LSB db?
    Thanks in advance.
    John S

    JSebastian wrote:
    I assume you mean in your question, why on the standby database I am using three standby log locations (i.e. log_archive_dest_5, 6, and 7)?
    If that is your question, my answer is that I just figured more than one location would be safer, but I could be wrong about this. Can you tell me if only one location should be sufficient for the standby logs? The more I think about this, that is probably correct, because I assume that log transport services will re-request the log from the primary database if there is some kind of error at the standby location with the standby log. Is this correct?
    Configure it as simply as below. Why use multiple destinations for the standby?
    Check the note Step by Step Guide on How to Create Logical Standby [ID 738643.1]
    >
    LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
    LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
    LOG_ARCHIVE_DEST_3='LOCATION=/arch2/boston/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=boston'
    The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.
    LOG_ARCHIVE_DEST_1 - When the Boston database is running in the primary role: directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/. When it is running in the logical standby role: directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/.
    LOG_ARCHIVE_DEST_2 - When the Boston database is running in the primary role: directs transmission of redo data to the remote logical standby database chicago. When it is running in the logical standby role: is ignored; LOG_ARCHIVE_DEST_2 is valid only when boston is running in the primary role.
    LOG_ARCHIVE_DEST_3 - When the Boston database is running in the primary role: is ignored; LOG_ARCHIVE_DEST_3 is valid only when boston is running in the standby role. When it is running in the logical standby role: directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/.
    >
    Source:-
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ls.htm
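
    Not part of the quoted reply, but on the specific question of getting the logical standby to remove foreign archived logs it no longer needs: a minimal sketch, assuming your release supports the LOG_AUTO_DELETE apply parameter of DBMS_LOGSTDBY (verify against the PL/SQL Packages reference for your version before relying on it):

    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'TRUE');
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    -- SQL Apply should then delete remote archived logs once it has applied them.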

  • XML log: Error during temp file creation for LOB Objects

    Hi All,
    I got this exception in the concurrent log file:
    [122010_100220171][][EXCEPTION] !!Error during temp file creation for LOB Objects
    [122010_100220172][][EXCEPTION] java.io.FileNotFoundException: null/xdo-dt-lob-1292864540169.tmp (No such file or directory (errno:2))
         at java.io.RandomAccessFile.open(Native Method)
         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:98)
         at oracle.apps.xdo.dataengine.LOBList.initLOB(LOBList.java:39)
         at oracle.apps.xdo.dataengine.LOBList.<init>(LOBList.java:30)
         at oracle.apps.xdo.dataengine.XMLPGEN.updateMetaData(XMLPGEN.java:1051)
         at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:511)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1121)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1144)
         at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:558)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(XMLPGEN.java:308)
         at oracle.apps.xdo.dataengine.XMLPGEN.processData(XMLPGEN.java:273)
         at oracle.apps.xdo.dataengine.XMLPGEN.processXML(XMLPGEN.java:215)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(XMLPGEN.java:254)
         at oracle.apps.xdo.dataengine.DataProcessor.processDataStructre(DataProcessor.java:390)
         at oracle.apps.xdo.dataengine.DataProcessor.processData(DataProcessor.java:355)
         at oracle.apps.xdo.oa.util.DataTemplate.processData(DataTemplate.java:348)
         at oracle.apps.xdo.oa.cp.JCP4XDODataEngine.runProgram(JCP4XDODataEngine.java:293)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
    I have this query defined in my data template:
    <![CDATA[
    SELECT lt.long_text inv_comment
    FROM apps.fnd_attached_docs_form_vl ad,
    apps.fnd_documents_long_text lt
    WHERE ad.media_id = lt.media_id
    AND ad.category_description = 'Draft Invoice Comments'
    AND ad.pk1_value = :project_id
    AND ad.pk2_value = :draft_invoice_num
    ]]>
    Issue: The inv_comment is not printing on the PDF output.
    I had the temp directory defined under the Admin tab.
    I'm guessing that the LONG datatype of the long_text field is what's causing the issue.
    Does anybody know how this can be fixed? Any help or advice is appreciated.
    Thanks.
    SW
    Edited by: user12152845 on Dec 20, 2010 11:48 AM


  • Creation of materialized view with view log file for fast refresh in 10.1db

    Hi, I have a select statement that includes data from almost 20 tables and takes a long time to complete. I am planning to create a materialized view on this; would you please suggest the best way of doing this?
    We would like to have a materialized view and materialized view logs to refresh changes from the underlying tables to the MV. Please provide help on this. Thanks in advance.

    It will be possible to create a Materialised view with up to 20 tables, but you have to understand the restrictions on complex Materialised views with regards to fast refresh.
    To help your understanding, refer to Materialized View Concepts and Architecture in the Oracle Database FAQs.
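
    As a rough sketch of the basic pattern (the table and column names below are made up for illustration, and a real 20-table join will often not meet the fast-refresh restrictions, so verify with DBMS_MVIEW.EXPLAIN_MVIEW):

    -- One materialized view log per base table referenced by the query, logged WITH ROWID
    CREATE MATERIALIZED VIEW LOG ON orders WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON customers WITH ROWID;

    -- A fast-refreshable join MV must select the ROWID of every base table
    CREATE MATERIALIZED VIEW mv_order_customer
      REFRESH FAST ON DEMAND
      AS SELECT o.rowid o_rid, c.rowid c_rid, o.order_id, c.customer_name
           FROM orders o, customers c
          WHERE o.customer_id = c.customer_id;

    -- Refresh on demand with:
    EXEC DBMS_MVIEW.REFRESH('MV_ORDER_CUSTOMER', 'F');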

  • Avoid creation of log file for external table

    Hi
    This script is creating a log file in the ext directory. How can I avoid that? Can you give the syntax?
    Thanks a lot.
    Bhaskar
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE)
    LOCATION ('datfiles_list.txt')
    );

    Example
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE NOLOGFILE)
    LOCATION ('datfiles_list.txt')
    );
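
    A hedged aside (not from the original reply): if you also want to suppress the bad and discard files that ORACLE_LOADER can create, the same clause accepts NOBADFILE and NODISCARDFILE alongside NOLOGFILE, for example:

    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE NOBADFILE NODISCARDFILE NOLOGFILE)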

  • Adobe 9.5 deletes log files - need to turn off creation of log files

    We have a bunch of files from SAS outputs that include a .sas, .rtf, .log extensions.
    When we try to convert a batch of the .rtf files to PDF by right clicking on them, the .log files are deleted. The originals are not Adobe .log files, but required files from the SAS output.
    I have unchecked "Delete Log Files for Successful jobs" in both Distiller and the Adobe printer preferences.
    It only deletes the .log with the same name as file types associated with Word. If I create .txt or xls files and .log files with the same name (i.e. test.txt and test.log) it does not delete the .log file, but it is overwritten by the Adobe log.
    This happens whether the file is local or on a mapped network drive.
    If I save to a different location, the .log is not deleted or overwritten, but that is really just a workaround. It's doable if it's the only option.
    They can also copy only the .rtf files to another folder and copy the .pdf files back afterwards, but this is a lot of extra work for high volumes.
    This is an ongoing need involving lots of files, so moving or renaming is not an option, even with batch programs.
    What I really need to do is stop Adobe from creating AND deleting log files or force it to create the .log files in a different location than the original. Unless the problem is Word, but I cannot find any information on this problem.
    Thanks
    Mike

    Hi Shay,
    You are right, it would make perfect sense, however as you can see from the below forum, I was not able to solve this compilation issue..
    Oracle 10g Email Portlet - HELP PLEASE!!!
    (First post is the issue).
    If you have any ideas on how I could solve it, it would be great.
    Thanks
    Sam

  • Confused about the log files

    I have written an application that has a Primary and a Secondary database. The application creates tens of thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the Secondary, as that element does not change) or it will create new records.
    The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
    The input data I am testing with is originally 2Mb as a CSV file, and with a fresh database it creates almost 20Mb of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old ones in the earlier logs would become redundant, and the cleaner thread would clean them up. I am explicitly cleaning as per the examples. The issue is that on each run the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%) where I would expect most to be empty, or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
    Generally, the processing I am doing on the primary is: look up the key; if it is there, update the entry; if not, create one. I have been using a cursor to do this, using the putCurrent() method for existing updates and put() for new records. I have even tried using Database.delete() and the full put() in place of putCurrent() - but no difference (except it is slower).
    Please help - it is driving me nuts!

    Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
    a. is the application doing anything that prohibits log cleaning? (in your case, no)
    b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
    c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
    > 1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data, and leaves 20mb in place, 3 log files near 100% used. After the second run it should update the records (which it does from the application's point of view) but I now have 40mb across 5 log files all near 100% usage.
    I think that it's accurate to say that both of us are not surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall to closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
    Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
    run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
    run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
    run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
    So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
    I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (though not in key order; without -r it is slower, but dumps in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
    I think I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
    java -jar je.jar DbPrintLog -h <envhome> -S
    and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
    The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
    A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
    In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
    So in summary, let's try these steps
    - use DbDump and DbPrintLog to double check the amount and size of your application data
    - make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
    - run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
    If it all points to JE, we'll probably take it offline, and ask for your test case.
    Regards,
    Linda
