Writing to servlet log files?

I am using the log(...) method of the servlet API. I have no problem finding the log file with the embedded server in JDev, but I cannot find the log file in standalone OC4J. It seems to me there is an XML attribute to turn on to enable logging, but I do not remember it.
Please send me info on how to enable logging.
THANKS - Ken Cooper

>> The string written to the standalone log file does not exist anywhere in the JDev installation folder!
Do you mean your OC4J standalone installation is inside your JDev installation? I am confused.
As far as I checked, servlet log(...) output should go to the log file for the J2EE application, which by default is application.log in the application-deployments/yourApp directory. It did in my simple test.
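For reference, a minimal sketch of the call in question; the class and messages are illustrative, and log() here is the standard GenericServlet method that delegates to ServletContext.log():
<code>
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoggingServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // GenericServlet.log(...) delegates to ServletContext.log(...);
        // per the reply above, standalone OC4J writes this to the
        // application's application.log.
        log("doGet called for " + req.getRequestURI());
        resp.setContentType("text/plain");
        resp.getWriter().println("logged");
    }
}
</code>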

Similar Messages

  • Reading and Writing to a Log file

    Hello,
    I'm writing a class that will write a user id to a log file every time they click on a particular button. However, I don't want a new line every time a user clicks on the button, I'd like to be able to find their id in the log, increment a counter and write that back to the log in place of the previous entry.
    For example, User A clicks on the button for the first time so I write in the log:
    User A 1
    Next time they come along, I want to see if User A exists in the log - which they do - add 1 to the counter and replace User A 1 with User A 2. So the log will now say:
    User A 2
    I was thinking of writing to the log, reading the log back in to a HashMap and then writing the HashMap out every time. Seems like a rather inefficient solution. Is there anything else I can do?
    Thanks!

    Hi,
    counters are a standard topic. Many solutions are to be found in
    textbooks. Here is one of them (Hunter & Crawford Java Servlet
    Programming):
    String activeUser = ... ;
    // requires java.io.* and java.util.StringTokenizer
    try {
      BufferedReader bufferedReader = new BufferedReader(new FileReader("mylog"));
      BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter("mylog.new"));
      String line;
      while ((line = bufferedReader.readLine()) != null) {
        StringTokenizer tokenizer = new StringTokenizer(line);
        if (tokenizer.countTokens() != 2) continue; // bogus line
        String user = tokenizer.nextToken();
        if (activeUser.equals(user)) {
          String count = tokenizer.nextToken();
          bufferedWriter.write(user + " " + (Integer.parseInt(count) + 1) + "\n");
        } else {
          bufferedWriter.write(line + "\n"); // readLine() strips the newline, so add it back
        }
      }
      bufferedWriter.close(); // also closes the underlying FileWriter
      bufferedReader.close(); // also closes the underlying FileReader
      File oldLog = new File("mylog");
      File newLog = new File("mylog.new");
      oldLog.delete(); // renameTo can fail on some platforms if the target exists
      newLog.renameTo(oldLog);
    } catch (Exception e) {
      // do whatever is appropriate
    }
    Have fun,
    Klaus

  • SQLScript, writing messages to log file

    Hi,
    We're developing SQLScript procedures and executing them directly on Unix via the HDBSQL command-line tool. We use the -o option of HDBSQL to write messages such as comments and custom errors into a log file. The only way we know to force custom messages into the log file is SELECT "Text" FROM DUMMY.
    Now our SQL consists of multiple procedures that may be called in a nested fashion, i.e. Proc 1 calls Proc 2, which in turn calls Proc 3. We noticed that the SELECT FROM DUMMY way of writing comments into the log only works if the procedure is the first procedure in the line of nested calls (Proc 1). A SELECT FROM DUMMY from a sub/nested procedure does not make it to the log file.
    Is there any other way we can achieve this? Other DBs have options like DBMS.output which work well in such scenarios.
    I would like to thank this group for being ever so helpful in all the issues we have faced so far.
    Regards,
    Nehal

  • Writing messages to log file from database procedures

    Folks,
    Is there a way by which I can write messages from a database procedure to a log file? I would like to know what a procedure is doing and whether it has failed or succeeded, just as we do in a Unix shell script, where we can direct messages to a file.
    e.g. echo 'step 34 completed' >> $X_LOG
    Is there a log file in Oracle where we can check whether a procedure has failed and what the error was? I am using Oracle 9i.
    Thanks.

    Hello,
    I do not agree. Have you taken a look at the Oracle9i Supplied PL/SQL Packages and Types Reference?
    You can use UTL_FILE and DBMS_OUTPUT to achieve exactly what you want.
    Rgds
    Fidel

  • Writing to a log file.

    I have written the following code. Also, going through the archives, many have written it in a similar way and were successful. The problem I have is that the log file is empty. What am I missing?
    Thanks
    Sriram
    <code>
    import java.util.Calendar;
    import java.io.*;

    public class debugTest {
        public static void main(String[] args) throws Exception {
            String logFileDirString = "C:\\projects\\javaapp\\log\\tradematch\\";
            String logFileName = getLogFileName(logFileDirString);
            System.out.println("The log file name: " + logFileName);
            File logFileDir = new File(logFileDirString);
            if (!logFileDir.exists()) { logFileDir.mkdirs(); }
            File logFile = new File(logFileName);
            if (!logFile.exists()) { logFile.createNewFile(); }
            FileOutputStream fops = new FileOutputStream(logFile.getName(), true);
            PrintStream ps = new PrintStream(fops);
            System.setOut(ps);
            System.out.println("Line 1");
            System.out.println("Line 2");
            ps.close();
        }

        public static String getLogFileName(String logFileDirString) {
            Calendar rightNow = Calendar.getInstance();
            // Month is 0-based; add 1 to make it 1-based. Subtract Calendar.JANUARY
            // so the arithmetic still holds if the constant ever becomes 1-based.
            Integer theMonth = new Integer(rightNow.get(Calendar.MONTH) + 1 - Calendar.JANUARY);
            String theMonthString = theMonth.toString();
            if (theMonthString.length() < 2) theMonthString = "0" + theMonthString;
            Integer theDayOfMonth = new Integer(rightNow.get(Calendar.DAY_OF_MONTH));
            String theDayOfMonthString = theDayOfMonth.toString();
            if (theDayOfMonthString.length() < 2) theDayOfMonthString = "0" + theDayOfMonthString;
            Integer theYear = new Integer(rightNow.get(Calendar.YEAR));
            String theYearString = theYear.toString();
            return logFileDirString.concat(theMonthString).concat(theDayOfMonthString)
                    .concat(theYearString).concat(".txt");
        }
    }
    </code>

    Use this - the original passes logFile.getName(), which is just the bare file name, so the stream is opened relative to the current working directory instead of your log directory:
    <pre>
    FileOutputStream fops = new FileOutputStream(logFile.getAbsolutePath(), true);
    System.out.println(logFile.getAbsolutePath());
    </pre>

  • Is there any downside to writing plain old log files?

    Hi all;
    We have our web & worker roles using log4net to write log files to disk. It works great: we remote desktop in, go to the folder where they are, and we've got exactly what we want, a daily log file for each worker.
    Is there any downside to this approach?
    Also, is there an easy way to have it delete all log files over a month old?
    thanks - dave
    What we did for the last 6 months -
    Made the world's coolest reporting & docgen system even more amazing

    hi dave,
    If you store the log files on the instance disk, you can get them via RDP. But if you redeploy or auto-scale the cloud service, the log files may be removed, so I don't recommend storing them on the Azure instance disk.
    If you store the log files in Azure Blob/Table storage (https://github.com/crossvertise/log4net.Appender.AzureBlobStorage/tree/master/TransactionLogger), you can view them with tools such as Server Explorer or Azure Storage Explorer, and Azure Storage keeps your data backed up.
    >>Also, is there an easy way to have it delete all log files over a month old?
    If you store them in Azure storage, one approach is to remove them manually. Another is to create a job with the Scheduler service (http://msdn.microsoft.com/library/azure/dn479785.aspx). If you don't want to use the Scheduler service, you could write a method that deletes old log data or files and run it every day; a sketch of the idea follows.
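    For illustration, the delete-by-age idea as a minimal sketch, in Java rather than the .NET of this thread (the logic carries over directly; the folder name and 30-day cutoff are placeholders):
    <code>
    import java.io.File;
    import java.util.concurrent.TimeUnit;

    public class LogCleaner {
        // Delete plain files in dir whose last-modified time is older than maxAgeDays.
        public static void deleteOldLogs(File dir, int maxAgeDays) {
            long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(maxAgeDays);
            File[] files = dir.listFiles();
            if (files == null) return; // not a directory, or an I/O error
            for (File f : files) {
                if (f.isFile() && f.lastModified() < cutoff) {
                    f.delete(); // returns false on failure; a real job should log that
                }
            }
        }

        public static void main(String[] args) {
            deleteOldLogs(new File("logs"), 30); // placeholder folder and age
        }
    }
    </code>
    In C# the same check would compare FileInfo.LastWriteTimeUtc against DateTime.UtcNow.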
    Regards,
    Will

  • Writing data to log file

    Is it possible to write data to a log (text) file from within Crystal Reports, possibly in a UFL?  Is there a way that's already built in to CR?
    Thanks.
    Ron

    Ido - thanks for the link.  This will help.
    Don,
    Not exporting a report to a text file.  More like logging report events. One of my report developers was thinking of maybe tracking the start/stop times of each publication per person as well as the full run time.
    I hope I explained that correctly. 
    Thanks.
    Ron

  • Reader 10.1 update fails, creates huge log files

    Last night I saw the little icon in the system tray saying an update to Adobe Reader was ready to be installed.
    I clicked it to allow the install.
    Things seemed to go OK (on my Windows XP Pro system), although very slowly, and it finally got to copying files.
    It seemed to still be doing something and was showing that it was copying file icudt40.dll.  It still displayed the same thing ten minutes later.
    I went to bed, and this morning it still showed that it was copying icudt40.dll.
    There is no "Cancel" button, so this morning I had to stop the install through Task Manager.
    Now, in my "Local Settings\TEMP" directory, I have a file called AdobeARM.log that is 2,350,686 KB in size and a file MSI38934.LOG that is 4,194,304 KB in size.
    They are so big I can't even look at them to see what's in them.  (Too big for Notepad.  When I tried to open the smaller log file, AdobeARM.log, with Wordpad it was taking forever and showing only 1% loaded, so after five minutes, I terminated the Wordpad process so I could actually do something useful with my computer.)
    You would think the installer would be smart enough to stop at some point when the log files begin to get enormous.
    There doesn't seem to be much point to creating log files that are too big to be read.
    The update did manage to remove the Adobe Reader X that was working on my machine, so now I can no longer read PDF files.
    Maybe I should go back to Adobe Reader 9.
    Reader X never worked very well.
    Sometimes the menu bar showed up, sometimes it didn't.
    PDF files at the physics e-print archive always loaded with page 2 displayed first.  And if you forgot to disable the look-ahead capability, you could get banned from the e-print archive site altogether.
    And I liked the user interface for the search function a lot better in version 9 anyway.  Who wants to have to pop up a little box for your search phrase when you want to search?  Searching is about the most important and routine activity one does, other than going from page to page and setting the zoom.

    Hi Ankit,
    Thank you for your e-mail.
    Yesterday afternoon I deleted the > 2 GB AdobeARM.log file and the > 4.194 GB
    MSI38934.LOG file.
    So I can't upload them.  I expect I would have had a hard time doing so
    anyway.
    It would be nice if the install program checked the size of the log files
    before writing to them and gave up if the size was, say, three times larger
    than some maximum expected size.
    The install program must have some section that permits infinite retries or
    some other way of getting into an endless loop.  So another solution would be
    to count the number of retries and terminate after some reasonable number of
    attempts.
    Something had clearly gone wrong and there was no way to stop it, except by
    going into the Task Manager and terminating the process.
    If the install program can't terminate when the log files get too big, or if
    it can't get out of a loop some other way, there might at least be a "Cancel"
    button so the poor user has an obvious way of stopping the process.
    As it was, the install program kept on writing to the log files all night
    long.
    Immediately after deleting the two huge log files, I downloaded and installed
    Adobe Reader 10.1 manually.
    I was going to turn off Norton 360 during the install and expected there
    would be some user input requested between the download and the install, but
    there wasn't.
    The window showed that the process was going automatically from download to
    install. 
    When I noticed that it was installing, I did temporarily disable Norton 360
    while the install continued.
    The manual install went OK.
    I don't know if temporarily disabling Norton 360 was what made the difference
    or not.
    I was happy to see that Reader 10.1 had kept my previous preference settings.
    By the way, one of the default settings in "Web Browser Options" can be a
    problem.
    I think it is the "Allow speculative downloading in the background" setting.
    When I upgraded from Reader 9 to Reader 10.0.x in April, I ran into a
    problem. 
    I routinely read the physics e-prints at arXiv.org (maintained by the Cornell
    University Library) and I got banned from the site because "speculative
    downloading in the background" was on.
    [One gets an "Access denied" HTTP response after being banned.]
    I think the default value for "speculative downloading" should be unchecked
    and users should be warned that one can lose the ability to access some sites
    by turning it on.
    I had to figure out why I was automatically banned from arXiv.org, change my
    preference setting in Adobe Reader X, go to another machine and find out who
    to contact at arXiv.org [I couldn't find out from my machine, since I was
    banned], and then exchange e-mails with the site administrator to regain
    access to the physics e-print archive.
    The arXiv.org site has followed the standard for robot exclusion since 1994
    (http://arxiv.org/help/robots), and I certainly didn't intend to violate the
    rule against "rapid-fire requests," so it would be nice if the default
    settings for Adobe Reader didn't result in an unintentional violation.
    Richard Thomas

  • Wait Events "log file parallel write" / "log file sync" during CREATE INDEX

    Hello guys,
    at my current project I am performing some performance tests for Oracle Data Guard. The question is: how does a LGWR SYNC transfer influence system performance?
    To get some performance values that I can compare, I first built up a normal Oracle database.
    Now I am performing different tests, like creating "large" indexes and massive parallel inserts/commits, to get the benchmark.
    My database is Oracle 10.2.0.4 with multiplexed redo log files on AIX.
    I am creating an index on a "normal" table. I execute dbms_workload_repository.create_snapshot() before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
    After the index is built (roughly 9 GB), I run awrrpt.sql to get the AWR report.
    And now take a look at these values from the AWR report:
    Event                      Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
    log file parallel write   10,019          .0                  132             13       33.5
    log file sync                293          .7                    4             15        1.0
    How can this be possible?
    According to the documentation:
    -> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
    Wait Time: The wait time includes the writing of the log buffer and the post.
    -> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
    Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
    This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
    I could accept it if the values were close to each other (maybe within a second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
    Is the behavior of log file sync/write different when performing DDL like CREATE INDEX (maybe async, as can be influenced with the initialization parameter COMMIT_WRITE)?
    Do you have any idea how these values come about?
    Any thoughts/ideas are welcome.
    Thanks and Regards

    Surachart Opun (HunterX) wrote:
    Thank you for the nice idea.
    In this case, how can we reduce the "log file parallel write" and "log file sync" wait times? CREATE INDEX with NOLOGGING can help, can't it?
    Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
    Two points on nologging, though:
    <ul>
    it's "only" an index, so you could always rebuild it in the event of media corruption, but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations had completed.
    If the database, or that tablespace, is in +"force logging"+ mode, the nologging will not work.
    </ul>
    Don't get too alarmed by the waits, though. My guess is that the +"log file sync"+ waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The +"log file parallel write"+ waits are caused by your create index, but they are happening to lgwr in the background, which is running concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
    The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the logwriter includes their (little) writes with your next (large) write.
    There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • DataGuard Windows 9201 - log file transfer interrupted with a big redo log

    OS: Windows
    Oracle 9.2.0.1
    Primary: service_name orcl1, db_name orcl1
    Standby: service_name orcl2, db_name orcl1
    Same directory structure, distributed across different VMware machines but connected through a real physical fiber network; the two nodes are more than 20 km apart.
    LOG FILE - 100M
    MAXIMUM PERFORMANCE MODE
    We get a successful result when we issue 'alter system switch logfile' manually; the log is usually smaller than 20 MB.
    But when we try to switch a full redo log, the error occurs and the log can't be transferred to the standby site.
    It seems the transfer is interrupted for some reason we cannot name.
    We checked the network ping, the lsnrctl service status, the Data Guard configuration, and the Windows TCP/IP configuration, but reached no conclusion.
    We are going crazy!! Help!
    The log trace produced with log_archive_trace=128 on the primary site shows:
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    - Created archivelog as 'C:\ORACLE\ORAARCH\ARC00095.001'
    *** 2010-09-02 15:30:39.000
    Fail to ping standby 'orcl2', error = 12571
    Error 12571 when pinging standby orcl2.
    *** 2010-09-02 15:30:39.000
    kcrrfail: dest:2 err:12571 force:0
    *** 2010-09-02 15:31:40.000
    Fail to ping standby 'orcl2', error = 1010
    Error 1010 when pinging standby orcl2.
    *** 2010-09-02 15:31:41.000
    kcrrfail: dest:2 err:1010 force:0
    *** 2010-09-02 15:32:32.000
    Setting trace level: 31 (1f)
    *** 2010-09-02 15:32:32.000
    ARC0: Evaluating archive log 3 thread 1 sequence 97
    VALIDATE
    PREPARE
    *** 2010-09-02 15:32:32.000
    Acquiring global enqueue on thread 1 sequence 97
    *** 2010-09-02 15:32:32.000
    Acquired global enqueue on thread 1 sequence 97
    INITIALIZE
    SPOOL
    *** 2010-09-02 15:32:32.000
    ARC0: Beginning to archive log 3 thread 1 sequence 97
    *** 2010-09-02 15:32:32.000
    Creating archive destination LOG_ARCHIVE_DEST_2: 'orcl2'
    Network re-configuration required
    Detaching RFS server from standby instance at 'orcl2'
    RFS message number 151
    Error 1010 detaching RFS from standby instance at host 'orcl2'
    Disconnecting from destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
    Ignoring kcrrvnc() detach error 1010
    Primary database is in CLUSTER CONSISTENT mode
    Primary database is in MAXIMUM PERFORMANCE mode
    Connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
    Attaching RFS server to standby instance at 'orcl2'
    RFS message number 152
    Dest LOG_ARCHIVE_DEST_2 standby mount ID: '42590f20'
    Standby database restarted; old mount ID 0x4258a5ae now 0x42590f20
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    Issuing standby Create archive destination at 'orcl2'
    RFS message number 153
    *** 2010-09-02 15:32:32.000
    Creating archive destination LOG_ARCHIVE_DEST_1: 'C:\ORACLE\ORAARCH\ARC00097.001'
    - Created archivelog as 'C:\ORACLE\ORAARCH\ARC00097.001'
    Dest LOG_ARCHIVE_DEST_1 primary mount ID: '0x42586021'
    Archiving block 1 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 1 count 2048 to 'orcl2'
    RFS message number 154
    Archiving block 1 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 2049 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 2049 count 2048 to 'orcl2'
    RFS message number 155
    Archiving block 2049 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 4097 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 4097 count 2048 to 'orcl2'
    RFS message number 156
    Archiving block 4097 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 6145 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 6145 count 2048 to 'orcl2'
    RFS message number 157
    Archiving block 6145 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 8193 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 8193 count 2048 to 'orcl2'
    RFS message number 158
    Archiving block 8193 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 10241 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 10241 count 2048 to 'orcl2'
    RFS message number 159
    Archiving block 10241 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 12289 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 12289 count 2048 to 'orcl2'
    RFS message number 160
    Archiving block 12289 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 14337 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 14337 count 2048 to 'orcl2'
    RFS message number 161
    Archiving block 14337 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 16385 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 16385 count 2048 to 'orcl2'
    RFS message number 162
    Archiving block 16385 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 18433 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 18433 count 2048 to 'orcl2'
    RFS message number 163
    Archiving block 18433 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 20481 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 20481 count 2048 to 'orcl2'
    RFS message number 164
    Archiving block 20481 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 22529 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 22529 count 2048 to 'orcl2'
    RFS message number 165
    Archiving block 22529 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 24577 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 24577 count 2048 to 'orcl2'
    RFS message number 166
    Archiving block 24577 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 26625 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 26625 count 2048 to 'orcl2'
    RFS message number 167
    Archiving block 26625 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 28673 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 28673 count 2048 to 'orcl2'
    RFS message number 168
    Archiving block 28673 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 30721 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 30721 count 2048 to 'orcl2'
    RFS message number 169
    Archiving block 30721 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 32769 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 32769 count 2048 to 'orcl2'
    RFS message number 170
    Archiving block 32769 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 34817 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 34817 count 2048 to 'orcl2'
    RFS message number 171
    Archiving block 34817 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 36865 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 36865 count 2048 to 'orcl2'
    RFS message number 172
    Archiving block 36865 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 38913 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 38913 count 2048 to 'orcl2'
    RFS message number 173
    Archiving block 38913 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 40961 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 40961 count 2048 to 'orcl2'
    RFS message number 174
    Archiving block 40961 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 43009 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 43009 count 2048 to 'orcl2'
    RFS message number 175
    Archiving block 43009 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 45057 count 2048 block(s) to 'orcl2'
    Issuing standby archive of block 45057 count 2048 to 'orcl2'
    RFS message number 176
    *** 2010-09-02 15:33:22.000
    RFS network connection lost at host 'orcl2'
    Error 3114 writing standby archive log file at host 'orcl2'
    *** 2010-09-02 15:33:22.000
    ARC0: I/O error 3114 archiving log 3 to 'orcl2'
    *** 2010-09-02 15:33:22.000
    kcrrfail: dest:2 err:3114 force:0
    Local destination LOG_ARCHIVE_DEST_1 is still active
    ORA-03114: not connected to ORACLE
    Archiving block 45057 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 47105 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 49153 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 51201 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 53249 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 55297 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 57345 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 59393 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 61441 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 63489 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 65537 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 67585 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 69633 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 71681 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 73729 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 75777 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 77825 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 79873 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 81921 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 83969 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 86017 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 88065 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 90113 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 92161 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 94209 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 96257 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 98305 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 100353 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 102401 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 104449 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 106497 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 108545 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 110593 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 112641 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 114689 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 116737 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 118785 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 120833 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 122881 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 124929 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 126977 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 129025 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 131073 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 133121 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 135169 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 137217 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 139265 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 141313 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 143361 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 145409 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 147457 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 149505 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 151553 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 153601 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 155649 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 157697 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 159745 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 161793 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 163841 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 165889 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 167937 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 169985 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 172033 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 174081 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 176129 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 178177 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 180225 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 182273 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 184321 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 186369 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 188417 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 190465 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 192513 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 194561 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 196609 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 198657 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 200705 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Archiving block 202753 count 2024 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
    Closing archive destination LOG_ARCHIVE_DEST_1: C:\ORACLE\ORAARCH\ARC00097.001
    FINISH
    Archival failure destination LOG_ARCHIVE_DEST_2: 'orcl2'
    Archival success destination LOG_ARCHIVE_DEST_1: 'C:\ORACLE\ORAARCH\ARC00097.001'
    COMPLETE, min-succeed count met
    *** 2010-09-02 15:33:27.000
    ArchivedLog entry added for thread 1 sequence 97 ID 0x42585a2b: C:\ORACLE\ORAARCH\ARC00097.001
    Marking [1] log 3 thread 1 sequence 97 spooled
    Updating thread 1 sequence 97 archive SCN 0:4503061
    Scanning 'to be archived' list': kcrrdal
    log 2 thread 1 sequence 98
    Completed 'to be archived' list
    *** 2010-09-02 15:33:27.000
    Releasing global enqueue
    ARCHIVED
    *** 2010-09-02 15:33:27.000
    ARC0: Completed archiving log 3 thread 1 sequence 97
    Scanning 'to be archived' list': kcrrwk
    log 2 thread 1 sequence 98
    Completed 'to be archived' list
    Scanning 'to be archived' list': kcrrwk
    log 2 thread 1 sequence 98
    Completed 'to be archived' list
    *** 2010-09-02 15:34:29.000
    ARC0: Heartbeat ticks... (thread 1)
    Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
    Primary database is in CLUSTER CONSISTENT mode
    Primary database is in MAXIMUM PERFORMANCE mode
    Connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
    Attaching RFS server to standby instance at 'orcl2'
    RFS message number 177
    Dest LOG_ARCHIVE_DEST_2 standby mount ID: '42590f20'
    Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
    RFS message number 178
    Not in RAC mode
    *** 2010-09-02 15:35:30.000
    ARC0: Heartbeat ticks... (thread 1)
    Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
    Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
    RFS message number 179
    Not in RAC mode
    *** 2010-09-02 15:36:22.000
    ARC0: Heartbeat ticks... (thread 1)
    Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
    Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
    RFS message number 180
    Not in RAC mode
    *** 2010-09-02 15:36:39.000
    Setting trace level: 128 (80)
    Setting trace level: 128 (80)
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    - Created archivelog as 'C:\ORACLE\ORAARCH\ARC00099.001'
    Setting trace level: 128 (80)
    *** 2010-09-02 15:37:32.000
    Setting trace level: 128 (80)

    Something is going on in your network:
    RFS network connection lost at host 'orcl2'
    Error 3114 writing standby archive log file at host 'orcl2'
    Your network administrators may be able to help.

  • PC Suite - Sync log file

    Vista Enterprise - "Non-Administrator" Account.
    PC Suite v7.0.9.2
    Cable Connection
    N78
    I currently have a "small" issue with Sync. Sync works fine (i.e. my device and Outlook synchronise), but the log file doesn't seem to be generated or viewable. (The Sync screen says something like "7 records updated", but if you click View Log, the log is empty.)
    I've done a search on my PC and there are no .lml files anywhere, so I assume this is probably a permissions issue with writing out the log file.
    Does anyone know where the log file should be stored when syncing?

    Just downloaded the new version but still have the same problem.
    (There goes another 30 MB of mobile data charges ;-) )
    I'm fairly sure it's permissions - can anyone tell me where the files are meant to be written?

  • Dynamically defined output log file in log4j

    I am trying to generate an output log file dynamically using log4j. I have a standard log4j XML configuration file with an appender configured. In my Java code I am trying to fetch the appender from the logger and then change its output file:
    <code>
    Logger logger = Logger.getLogger(ClassFSTest.class);
    FileAppender rfa = (FileAppender) logger.getAppender("rolling");
    rfa.setFile(myFileName);
    rfa.activateOptions();
    </code>
    However, the getAppender method returns null, even though I have an appender named "rolling" in my configuration. And if I use the standard output mechanism, such as logger.info(), the message is written into the file defined there.
    thanks in advance.

    Hope the following link addresses your problem:
    http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html
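    For what it's worth, a likely cause: in log4j 1.x, getAppender() only searches appenders attached directly to that logger, and appenders declared in the configuration are typically attached to the root logger, so the lookup on a class logger returns null. A minimal sketch of building and attaching the appender in code instead (the pattern and file name are placeholders):
    <code>
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import org.apache.log4j.RollingFileAppender;

    public class DynamicLogExample {
        public static void main(String[] args) throws java.io.IOException {
            Logger logger = Logger.getLogger(DynamicLogExample.class);
            // Build the appender programmatically rather than looking it up.
            RollingFileAppender rfa = new RollingFileAppender(
                    new PatternLayout("%d %-5p %c - %m%n"), // placeholder pattern
                    "myDynamicFile.log",                    // placeholder file name
                    true);                                  // append
            rfa.setName("rolling");
            logger.addAppender(rfa);
            logger.info("written to the dynamically created log file");
        }
    }
    </code>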

  • How can I access a log file in a servlet?

    How can I access a log file in a servlet and then display it in the browser?
    Can I have a piece of code?
    Thanks in advance!

    This is not something that can be answered easily. There is quite a lot of code involved...
    To get started I suggest you read up on the built-in package 'DBMS_LOB' from Oracle. Most of what you need you should find there.
    regards Dave.
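    If the log is simply a plain text file on disk, a minimal servlet sketch that streams it to the browser (the path /var/log/app.log is a placeholder for wherever the log actually lives):
    <code>
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LogViewerServlet extends HttpServlet {
        private static final String LOG_PATH = "/var/log/app.log"; // placeholder

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/plain");
            PrintWriter out = resp.getWriter();
            BufferedReader in = new BufferedReader(new FileReader(LOG_PATH));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println(line); // stream the log line by line
                }
            } finally {
                in.close();
            }
        }
    }
    </code>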

  • Servlet error: Security sensitive exception occured - where are the log files?

    thank you for reading my post.
    I get this message; where should I look for the log files?
    Servlet error: Security sensitive exception occured. Please consult application log for details.

    I ran into this myself just last week.
    We used to show the exception type and a stack trace in the browser as the default error message.
    In 10.1.3.1 we've modified it so that the details of the problem (i.e. the stack trace and exception) are no longer output by default.
    Avi indicated the correct log file to look at.
    If you do want to revert to the old behaviour for development, you can do so by setting the attribute development="true" in the orion-web.xml file of the deployed application:
    <orion-web-app .... development="true"/>
    cheers
    -steve-

  • Delay in event-driven log file data writing? Please help!

    System Information
    Operating System: XP
    Labview: 8.2
    Force sensor data acquisition via DAQPad-6070E
    Actuator: Actuator via MCS-3D controller
    Programming Information
    Number of events: 13
    Position read: reads the position and the force sensor data every second.
    Move I & Forward: moves the actuator forward with a defined step size.
    Two actuators are made to travel a certain distance. A force sensor is attached to the system. The aim here is to acquire continuous data at the defined wait time (1 sec). The data is logged in a text file, which gives the position travelled by the actuator and the force sensor data with a time stamp.
    The issue I am encountering is in writing the file.
    For example: when an event is activated (move actuator at the defined step size), the event is logged into the log file, but the positions are updated into the log file only when the next event is activated. So the positions and the force values are updated into the log file only after the consecutive event is executed. In the example log file in the attachment, the red block shows the event executed, but the position is updated on the next line (event). This file is just an example.
    Please help; where am I going wrong?
    Thanks in advance
    Attachments:
    EventMoveex.PNG ‏582 KB
    Logfile.PNG ‏64 KB

    Dear Method M,
    I found out what was going on. As you mentioned, I was writing the values before the actuators reached their final position, so I introduced a delay between the execution of the two SubVIs. It isn't a clean method, but it works.
    Thank you very much!
    Regards
    Itz
