Disable Log file creation for a Berkeley DB database

Hi,
I'm using Berkeley DB 6 with Oracle Mobile Server 11.3. When I sync a lot of data, a lot of log files are created, and I think this is really slowing down my sync process. Since I never need to recover those client databases, I would like to know if it is possible to disable log creation for a Berkeley DB database.
Thank you

The version of BDB that is used for DMS is TDS (Transactional Data Store). In that environment, logging is needed to ensure recoverability, so there isn't a way to disable it. If you never need to do recovery, you can use the BDB utilities and occasionally run checkpoints, which flush the cache, or shut down the client application. After that is done you can remove the log files, since you are saying you will not need them for recovery.
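If you go that route, the command-line sequence is typically a forced checkpoint followed by removal of the no-longer-needed logs (db_checkpoint -1 -h <env_dir>, then db_archive -d -h <env_dir>). The same can be done from the application; below is a minimal sketch using the BDB Java API (com.sleepycat.db), where the environment path is hypothetical and setLogAutoRemove asks BDB to delete log files on its own once they are no longer needed for recovery:

    import com.sleepycat.db.CheckpointConfig;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import java.io.File;

    public class TrimLogs {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setTransactional(true);
            config.setInitializeCache(true);
            config.setInitializeLogging(true);
            config.setInitializeLocking(true);
            // DB_LOG_AUTO_REMOVE: let BDB delete log files as soon as
            // they are no longer needed for recovery.
            config.setLogAutoRemove(true);

            // Hypothetical environment path; use the client database's env directory.
            Environment env = new Environment(new File("/path/to/env"), config);

            // Force a checkpoint so the cache is flushed and older log
            // files become removable (equivalent to "db_checkpoint -1").
            CheckpointConfig force = new CheckpointConfig();
            force.setForce(true);
            env.checkpoint(force);

            // Remove log files no longer needed for recovery
            // (equivalent to "db_archive -d").
            env.removeOldLogFiles();

            env.close();
        }
    }

Keep in mind this trades recoverability for space and speed: once the logs are gone, catastrophic recovery of that client database is no longer possible.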
thanks
mike

Similar Messages

  • XML log: Error during temp file creation for LOB Objects

    Hi All,
    I got this exception in the concurrent log file:
    [122010_100220171][][EXCEPTION] !!Error during temp file creation for LOB Objects
    [122010_100220172][][EXCEPTION] java.io.FileNotFoundException: null/xdo-dt-lob-1292864540169.tmp (No such file or directory (errno:2))
         at java.io.RandomAccessFile.open(Native Method)
         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:98)
         at oracle.apps.xdo.dataengine.LOBList.initLOB(LOBList.java:39)
         at oracle.apps.xdo.dataengine.LOBList.<init>(LOBList.java:30)
         at oracle.apps.xdo.dataengine.XMLPGEN.updateMetaData(XMLPGEN.java:1051)
         at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:511)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1121)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(XMLPGEN.java:1144)
         at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(XMLPGEN.java:558)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:445)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(XMLPGEN.java:308)
         at oracle.apps.xdo.dataengine.XMLPGEN.processData(XMLPGEN.java:273)
         at oracle.apps.xdo.dataengine.XMLPGEN.processXML(XMLPGEN.java:215)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(XMLPGEN.java:254)
         at oracle.apps.xdo.dataengine.DataProcessor.processDataStructre(DataProcessor.java:390)
         at oracle.apps.xdo.dataengine.DataProcessor.processData(DataProcessor.java:355)
         at oracle.apps.xdo.oa.util.DataTemplate.processData(DataTemplate.java:348)
         at oracle.apps.xdo.oa.cp.JCP4XDODataEngine.runProgram(JCP4XDODataEngine.java:293)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
    I have this query defined in my data template:
    <![CDATA[
    SELECT lt.long_text inv_comment
    FROM apps.fnd_attached_docs_form_vl ad,
    apps.fnd_documents_long_text lt
    WHERE ad.media_id = lt.media_id
    AND ad.category_description = 'Draft Invoice Comments'
    AND ad.pk1_value = :project_id
    AND ad.pk2_value = :draft_invoice_num
    ]]>
    Issue: The inv_comment is not printing on the PDF output.
    I have the temp directory defined under the Admin tab.
    I suspect it is the LONG datatype of the long_text field that is causing the issue.
    Does anybody know how this can be fixed? Any help or advice is appreciated.
    Thanks.
    SW
    Edited by: user12152845 on Dec 20, 2010 11:48 AM


  • Log file location for FC 3510

    Does somebody know what the default log file location is for an FC 3510 connected to a Solaris 8 machine?
    thanks.


  • Log File Creation Confusion

    SQL*Plus: Release 10.2.0.3.0 - Production on Mon Mar 11 11:42:45 2013
    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.
    There are some initialization parameters that decide the location of the online redo log files in general. These initialization parameters are:
    - DB_CREATE_ONLINE_LOG_DEST_n
    - DB_RECOVERY_FILE_DEST
    - DB_CREATE_FILE_DEST
    I could not understand the order of precedence among these parameters when each of them is set for online log file creation. If I set all of these parameters, the online log file is always created in the path defined by DB_CREATE_ONLINE_LOG_DEST_n, and the other parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) are ignored.
    If I set only the last two parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) and do not set DB_CREATE_ONLINE_LOG_DEST_n, the log file is created in both locations, mirrored.
    SQL> select name,value
      2    from v$parameter
      3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
      4  /
    NAME                                                                             VALUE
    db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
    db_create_online_log_dest_1
    db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                              
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
    SQL> alter database add logfile
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                                      
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                            
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                            
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                            
             4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\DBFILE\ORCL\ONLINELOG\O1_MF_4_8MTHLWTJ_.LOG                   
         4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\FLASH_RECOVERY_AREA\ORCL\ONLINELOG\O1_MF_4_8MTHLZB8_.LOG
    As you can see from the result above, log file creation honors the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters. When I define DB_CREATE_ONLINE_LOG_DEST_1, log file creation goes only to the location defined by DB_CREATE_ONLINE_LOG_DEST_1, no matter what is defined for DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST. Here you go.
    SQL> alter database drop logfile group 4
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                      
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                            
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                            
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                            
    SQL> alter system set db_create_online_log_dest_1='D:\oracle' scope=both
      2  /
    System altered.
    SQL> select name,value
      2    from v$parameter
      3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
      4  /
    NAME                                                                             VALUE
    db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
    db_create_online_log_dest_1                                                      D:\oracle
    db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
    SQL> alter database add logfile
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                              
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
         4         ONLINE  D:\ORACLE\ORCL\ONLINELOG\O1_MF_4_8MTJ10B8_.LOG
    My confusion is this: why do DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST behave the same way (mirroring the log file between them), while the behavior changes once you define DB_CREATE_ONLINE_LOG_DEST_n?

    DB_CREATE_FILE_DEST is used if DB_CREATE_ONLINE_LOG_DEST_n is not defined.
    DB_RECOVERY_FILE_DEST is used for multiplexed log files.
    Thus, if Oracle uses DB_CREATE_FILE_DEST (because DB_CREATE_ONLINE_LOG_DEST_n is not defined), it multiplexes the log file to DB_RECOVERY_FILE_DEST if DB_RECOVERY_FILE_DEST is also defined.
    If, however, DB_CREATE_ONLINE_LOG_DEST_1 is used, Oracle expects you to define DB_CREATE_ONLINE_LOG_DEST_2 as well for multiplexing the log file; else it assumes that you do not want the log file multiplexed. The fact that the parameter ends with an n means that Oracle uses n=2 for the multiplexed location if defined.
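    For example (a sketch; the two paths are hypothetical), once a second destination is defined, a newly added group gets one member in each destination and the recovery area is no longer used for it:
    SQL> alter system set db_create_online_log_dest_1='D:\oracle\log1' scope=both;
    SQL> alter system set db_create_online_log_dest_2='D:\oracle\log2' scope=both;
    SQL> alter database add logfile;
    SQL> select group#, member from v$logfile order by group#;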
    Hemant K Chitale

  • Exchange Server 2010 - Message Tracking Logs - Log file creation

    Hi,
    I would like to understand how the Exchange server behaves when it writes message tracking logs.
    Currently the parameters used are:
    MessageTrackingLogMaxDirectorySize - 10GB
    MessageTrackingLogMaxAge - 30 days
    I would like to check: when the maximum directory size has been exceeded, does Exchange Server immediately delete the oldest log file to make space for new logs? And in the event that the oldest file is open or locked, will Exchange Server delete the next oldest file, or will it reattempt to delete the "locked" file for a period of time? Lastly, if these oldest files cannot be deleted, will Exchange Server stop logging new tracking events? Thanks!

    Hi Zack,
    Thank you for your question.
    If you have configured the “MessageTrackingLogMaxDirectorySize” and “MessageTrackingLogMaxAge” parameters, circular logging is enabled. It deletes the oldest message tracking log files to make room for new ones when either of the following conditions is true:
    The message tracking log directory reaches its specified maximum size.
    A message tracking log file reaches its specified maximum age.
    In addition, the directory never exceeds the value indicated.
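    For reference, these settings can be inspected and changed from the Exchange Management Shell (a sketch; the server name EX01 is hypothetical):
    Get-TransportServer EX01 | Format-List MessageTrackingLog*
    Set-TransportServer EX01 -MessageTrackingLogMaxDirectorySize 10GB -MessageTrackingLogMaxAge 30.00:00:00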
    If there are any questions regarding this issue, please feel free to let me know.
    Best Regards,
    Jim
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
    Jim Xu
    TechNet Community Support

  • Side effect of SQL Server upgrade from 2008 R2 to 2012: logical name of log file changed for one database

    I came to know that the name had changed when I tried to shrink the file. Here is the error message I got:
    Shrink failed for LogFile 'Tfs_TESTTFS_Log'. (Microsoft.SqlServer.Smo)
    Additional information:
    An exception occurred while executing a Transact-SQL statement or batch.
    (Microsoft.SqlServer.ConnectionInfo)
    Could not locate file 'Tfs_TESTTFS_Log' for database 'Tfs_TESTTFS' in sys.database_files. The file
    either does not exist, or was dropped. (Microsoft SQL Server, Error: 8995)
    This is a test environment upgrade, and I checked the production environment, which is still on SQL 2008 R2; shrink works fine there.
    Please help.

    I did an in-place upgrade.
    Before Upgrade
    Logical Names
    Database Name: Tfs_TESTTFS
    Database Log: Tfs_TESTTFS_Log
    After Upgrade
    Logical Names
    Database Name: Tfs_TESTTFS
    Database Log: TfsVersionControl_Log
    Thx
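    If only the logical name changed, one common fix is to rename it back and then shrink; a sketch using the names from the post (verify the current logical name against sys.database_files first):
    -- Check the current logical names first
    SELECT name, type_desc FROM Tfs_TESTTFS.sys.database_files;
    -- Rename the logical log file back to the expected name
    ALTER DATABASE Tfs_TESTTFS
        MODIFY FILE (NAME = TfsVersionControl_Log, NEWNAME = Tfs_TESTTFS_Log);
    -- Then the shrink works against the restored name
    USE Tfs_TESTTFS;
    DBCC SHRINKFILE (Tfs_TESTTFS_Log, 1024);  -- target size in MB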

  • Best log file format for multivariable non-continuous time series

    Databases or TDM(S) files are great, but what if you cannot use a database (due to the type of target) and TDMS files seem unsuitable because the data does not come in blocks of continuous time series? What is the best file option for data logging?
    Scenario:
    The number of variables you are going to log to a file can change during run-time
    The data is not sampled at fixed intervals (it has been deadband-filtered, for example)
    The files must be compact and fast to search through (i.e. binary files with known positions of the time stamps, channel descriptions etc.)
    Must be supported on compact fieldpoint and RIO controllers
    Right now we use our own custom format for this, but it does not support item no. 1 in the list above (at least not within the same file) and it would be much nicer to have an open format that other software can read as well.
    Any suggestions?
    MTO

    I did some tests of the performance. For a month's worth of data (2,592,000 rows) with 4 channels, I got the following results when reading all of the data:
    1. TDMS file written as blocks of 60 values (1-minute buffers): 1,5 seconds.
    2. As test 1, but with a defrag run on the final file: 0,9 seconds.
    3. As tests 1 & 2, but with all the data written in one operation: 0,51 seconds.
    4. The same data stored in a binary file (1 header + 2D array): 0,17 seconds.
    So even if I could write everything in one go (which I cannot), reading a month of data is 3 times faster with a binary file. The application I have might get a lot of read requests and will need to read much more than 1 month of data, so the difference is significant (reading a year of data stored as monthly files would take me 12-18 seconds with TDMS files, but just 2 seconds with a binary file).
    Because I'll be writing different groups of data at different rates, using the advanced API to just get one (set of) header(s) is not an option.
    TDMS files are very versatile: it is great to be able to dump a new group/channel into the file at any time, and to have a file format that is supported by other applications as well. However, if the number of writes is large and the size of each write is (has to be) small, performance takes a serious hit. In this particular case performance trumps ease of use, so I'll probably need to rewrite our custom binary format to preallocate chunks for each group (feature request for TDMS? :-) ).
    MTO

  • Log file creation using the KM API

    Hi,
    How do I create a log file using the KM API? Please provide sample code if any is available.
    Thanks and Regards,
    Nari.

    Thanks for your quick reply, but there is one more requirement: I am able to create a text file in KM and add content to it, but everything ends up on the same line. I want each new entry to be written on a new line. Please see the code below and correct it.
    Date dt = new Date(Calendar.getInstance().getTimeInMillis());
    com.sapportals.portal.security.usermanagement.IUser iuser =
        WPUMFactory.getServiceUserFactory().getServiceUser("cmadmin_service");
    IResourceContext irCtx = new ResourceContext(iuser);
    RID docsResource = RID.getRID(filepath);
    IContent initCont = new Content(new ByteArrayInputStream("".getBytes()), "text/plain", -1, null);
    // Create the file only if it does not already exist (the original
    // was missing the braces around this block).
    if (ResourceFactory.getInstance().getResource(RID.getRID(filepath + "/" + filename), irCtx) == null) {
        ICollection docsColl = (ICollection) ResourceFactory.getInstance().getResource(docsResource, irCtx);
        docsColl.createResource(filename, null, initCont);
    }
    String inputData = exceptionText; // text to append (placeholder for the original "Exception" variable)
    RID sugg_html = RID.getRID(filepath + "/" + filename);
    IResource resource = ResourceFactory.getInstance().getResource(sugg_html, irCtx);
    IContent cont = resource.getContent();
    // Read ALL existing lines (readLine() called once kept only the first
    // line), then append the new entry on its own line.
    BufferedReader buf_in = new BufferedReader(new InputStreamReader(cont.getInputStream()));
    StringBuilder existingComments = new StringBuilder();
    String line;
    while ((line = buf_in.readLine()) != null) {
        existingComments.append(line).append("\n");
    }
    existingComments.append(dt).append(" ").append(inputData).append("\n");
    ByteArrayInputStream inputStream = new ByteArrayInputStream(existingComments.toString().getBytes());
    cont = new Content(inputStream, "text/plain", -1, null);
    resource.updateContent(cont);
    cont.close();

  • Logical file creation for physical inventory

    Hi,
    How do I use the MI31/MI34/MI37 transaction codes for physical inventory?
    What is meant by a logical file?
    Regards,
    Prabu

    Hi
    I haven't worked with MI34, but SAP Help gives the reference below. Check with an ABAPer.
    Short text
    Batch input: Enter count with reference to document
    Description
    This report generates a batch input session (BI session) which, when processed, enters the inventory count results with reference to a physical inventory document.
    Before you start the report, make sure that the file entered on the selection screen is stored in the specified directory at operating system level.
    The data for the batch input session is imported from an external dataset as a sequential file. The structure of the sequential file is preallocated by the table BISEG. You can display and print this structure via the information system in the Data Dictionary. Note that the object class Fields is chosen on the request screen of the information system.
    Requirements
    Entering counting results via a batch input session requires that the corresponding physical inventory document has been created in the system. If you try to enter a counting result for which no physical inventory document exists, an error will occur during the BI session, which will interrupt processing.
    Any errors are recorded in the log file for the corresponding BI session.
    Before the sequential file is read, you should perform a test run. For this purpose, you have to maintain the corresponding test data in table T159I. For the sequential file data to be read and for the test data in table T159I, please note the following:
    The name of the report has to be specified only in table T159I. The name to be specified is RM07II34.
    The name of the BI session is optional.
    The name of the transaction Enter inventory count to be specified is MI04.
    The following data must be entered into the system:
    Count date
    Fiscal year
    Number of physical inventory document
    Item in physical inventory document
    Quantity of stock counted
    Output
    The system generates a batch input session which you can process via options System -> Services -> Batch input -> Sessions.
    When processing the batch input session for testing purposes, you should set the Display errors only indicator.
    When importing the actual productive data, you should always use the Background indicator.
    If data expected by the system (required-entry fields) does not exist, an error will occur during the batch input session which will interrupt processing.
    An error will also occur during the batch input session if you try to maintain data not expected by the system (fields not ready for input). However, processing will not be interrupted. The system writes a comment in the session log.
    Thanks

  • DB context file creation for RAC to single instance cloning

    Doc ID 559518.1, Section 6 (RAC to Single Instance Cloning) mentions that the context file creation should be done as in the case of single instance cloning.
    What would be the command syntax?

    Thanks Hussein. However, Section 6 of Doc 559518.1 mentions that step 5.1.3, when cloning from RAC to a single node, should be done as in the case of single instance cloning.
    the syntax for rac to rac cloning (which is in 5.1.3) is
    perl adclonectx.pl \
    contextfile=[PATH to OLD Source RAC contextfile.xml] \
    template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
    pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
    initialnode
    So what is the syntax for RAC to single instance? I reckon I will still use adclonectx.pl, but what would be the complete syntax for single instance cloning?

  • OIM11g - Log file location for Request Wizard

    Hi,
    May I know which log file captures all the logs related to request flow operations in OIM 11g? Thanks.

    You need to enable specific logger for the same:
    http://identityandaccessmanager.blogspot.com/2011/02/setting-up-log-level-in-oim-11g.html
    Loggers:
    oracle.iam.request
    oracle.iam.requestactions
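    In OIM 11g these loggers are defined in the managed server's logging.xml; a sketch (the file path and handler name are assumptions and vary by install):
    <!-- $DOMAIN_HOME/config/fmwconfig/servers/oim_server1/logging.xml -->
    <logger name="oracle.iam.request" level="TRACE:32" useParentHandlers="false">
      <handler name="oim-handler"/>
    </logger>
    <logger name="oracle.iam.requestactions" level="TRACE:32" useParentHandlers="false">
      <handler name="oim-handler"/>
    </logger>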

  • Sqlnet.log file creation permissions

    One of my sites has a sqlnet.log file that has been symlinked to /dev/null and we are trying to determine if this has caused the issue with /dev/null's permissions being reset to 660 instead of 666.
    It looks like the other sqlnet.log files that are not symlinked to /dev/null are 640 so we are not sure this is the issue.
    Does anyone know if Oracle changes permissions on the sqlnet.log file at creation or access time? And how might it change those permissions?
    This is on Oracle 10g running on RHEL4
    edit: I typed sqlnet.ora instead of sqlnet.log ... oops.
    Edited by: user12198769 on Nov 10, 2009 7:35 AM

    It looks to be just the default in that file:
    NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
    There are several sqlnet.log files in other places on the server, yet there is only one Oracle home here, so is there another location that has sqlnet.ora info?
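    If the goal is to control where sqlnet.log ends up (rather than symlinking it away), the location and name can be set in sqlnet.ora; a sketch with hypothetical paths:
    # sqlnet.ora
    LOG_DIRECTORY_CLIENT = /var/log/oracle
    LOG_FILE_CLIENT = sqlnet_client.log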

  • Log file creation -- BDC

    Hi friends, this is Sudhir. I have a scenario involving a mass vendor data load. If any error occurs while loading, the error record should be written to a log file. Do we have any function module to create the log file?
    Hope your answers will be helpful to proceed further.
    With regards ,
    Sudhir S

    Are you looking for a way to generate logs like the ones you can see in SLG0?
    You can also store them into a spool.
    To store them into a file, you can simply read the spool output (or you can use SUBMIT ... EXPORTING LIST TO MEMORY to avoid the spool).
    In that case, please refer to [sap library|http://help.sap.com/saphelp_nw2004s/helpdata/en/d3/1fa03940fab918e10000000a114084/frameset.htm]
    and SBAL* demo programs (use of BAL_* function modules)
    Edited by: Sandra Rossi on Jul 20, 2010 10:39 PM

  • Where are log files created for TimesTen?

    Hi!
    My question is: where are the log files created? My TimesTen is installed on Linux with the following locations:
    Installation Directories
    Times Ten Registry     /etc/TimesTen
    Default Installation Directory     /d01/app/oracle/tt70/TimesTen/tt70
    Default temporary directory     /tmp
    Instance name     tt70
    Instance Home Directory     /d01/app/oracle/tt70
    Daemon Home Directory     /d01/app/oracle/tt70/TimesTen/tt70/info
    DemoDataStore Directory     /d01/app/oracle/tt70/TimesTen/tt70/info
    Documentation Directory     /d01/app/oracle/tt70/TimesTen/tt70/doc
    LD_LIBRARY_PATH     /d01/app/oracle/tt70/TimesTen/tt70/lib
    Startup Script     /d01/app/oracle/tt70/TimesTen/tt70/startup/tt_tt70
    regards
    Muh.Usman

    The TimesTen daemon message logs are located in the daemon home directory, which by default is <tt_install_dir>/info but can be changed to another location at install time.
    A data store's transaction log files are located by default in the same directory as the checkpoint files (specified by the DSN attribute DataStore) but can be placed elsewhere using the DSN attribute LogDir.
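    For example, a DSN entry in sys.odbc.ini that puts the transaction logs on a separate disk (a sketch; the DSN name and paths are hypothetical):
    [my_ds]
    DataStore=/d01/data/my_ds/my_ds
    LogDir=/d01/logs/my_ds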
    Again, this is all covered in the documentation. With all due respect, this forum is not intended to be a substitute for reading the comprehensive and well written documentation.
    Chris

  • Help filling in parameters for Microsoft ODBC file creation for Oracle

    When I get the modal window for ODBC file creation, I must fill in these fields:
    user name: ?
    server: ?
    How do I fill in these parameters if I want to connect to a remote server with these credentials?
    ip: 50.80.1.245
    port: 1521
    server name: orcl
    schema: sh
    pwd: sh
    I need to know the correct syntax to connect to this remote server.
    Kind regards,
    deniscuba

    What dialog is this? What application?
    First, do you have Oracle Client or Oracle Instant Client with the ODBC drivers installed?
    Second, do you have a TNS entry for this server?
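    If the client is installed but the TNS entry is missing, a minimal tnsnames.ora entry built from the details in the post would look like this (the alias name ORCL is an assumption); the ODBC dialog's "server" field then takes the alias and "user name" takes the schema (sh):
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 50.80.1.245)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )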
