Creation of .log files

When uploading new JSPs to the jsp-bin directory, the resulting files all have the form name.jsp.log.
The contents of one of these files are:
java.lang.ClassCastException
     at oracle.ifs.beans.parsers.ClassSelectionParser.createDocument(ClassSelectionParser.java, Compiled Code)
     at oracle.ifs.beans.parsers.ClassSelectionParser.putPublicObjectWithVersioning(ClassSelectionParser.java, Compiled Code)
     at oracle.ifs.beans.parsers.ClassSelectionParser.parse(ClassSelectionParser.java, Compiled Code)
     at oracle.ifs.utils.common.ParserHelper.parseExistingDocument(ParserHelper.java, Compiled Code)
     at oracle.ifs.protocols.ntfs.server.FileProxy.parseFile(FileProxy.java, Compiled Code)
     at oracle.ifs.protocols.ntfs.server.FileProxy.cleanupFile(FileProxy.java, Compiled Code)
     at oracle.ifs.protocols.ntfs.server.FileProxy.runFileProxy(Native Method)
     at oracle.ifs.protocols.ntfs.server.FileProxy.run(FileProxy.java, Compiled Code)
Any thoughts would be welcome.

Similar Messages

  • Avoid creation of log file for external table

    Hi
    This script is creating a log file in the ext directory. How do I avoid that? Can you give the syntax?
    Thanks a lot.
    Bhaskar
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE)
    LOCATION ('datfiles_list.txt')
    );

    Example
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE NOLOGFILE)
    LOCATION ('datfiles_list.txt')
    );
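    If the goal is for the load to leave no files at all in ext_dir, the bad and discard files can be suppressed the same way; NOBADFILE and NODISCARDFILE are standard ORACLE_LOADER access parameters (an untested sketch):
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE NOBADFILE NOLOGFILE NODISCARDFILE)
    LOCATION ('datfiles_list.txt')
    );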

  • Adobe 9.5 deletes log files - need to turn off creation of log files

    We have a bunch of files from SAS outputs that include a .sas, .rtf, .log extensions.
    When we try to convert a batch of the .rtf files to PDF by right clicking on them, the .log files are deleted. The originals are not Adobe .log files, but required files from the SAS output.
    I have unchecked "Delete Log Files for Successful jobs" in both Distiller and the Adobe printer preferences.
    It only deletes .log files that share a name with file types associated with Word. If I create .txt or .xls files and .log files with the same name (i.e. test.txt and test.log), it does not delete the .log file, but the file is overwritten by the Adobe log.
    This happens whether the file is local or on a mapped network drive.
    If I save to a different location, the .log is not deleted or overwritten, but that is really just a workaround. It's doable if it's the only option.
    They can also copy only the .rtf files to another folder and copy the .pdf files back afterwards, but this is a lot of extra work at high volumes.
    This is an ongoing need involving lots of files, so moving or renaming is not an option, even with batch programs.
    What I really need to do is stop Adobe from creating AND deleting log files or force it to create the .log files in a different location than the original. Unless the problem is Word, but I cannot find any information on this problem.
    Thanks
    Mike

    Hi Shay,
    You are right, it would make perfect sense; however, as you can see from the forum thread below, I was not able to solve this compilation issue:
    Oracle 10g Email Portlet - HELP PLEASE!!!
    (First post is the issue).
    If you have any ideas on how I could solve it, it would be great.
    Thanks
    Sam

  • Creation of log files in PL/SQL

    Hi,
    Here is a piece of code where I am trying to create a log file.
    create or replace procedure verify as
    ACTIVITY_FILE UTL_FILE.FILE_TYPE;
    log varchar2(600);
    begin
    ACTIVITY_FILE := UTL_FILE.fopen('/dacscan/Mani',log,'W');
    end;
    I get the error while executing this procedure.
    ERROR at line 1:
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "SYS.UTL_FILE", line 145
    ORA-06512: at "DACSCAN.VERIFY", line 7
    ORA-06512: at line 1
    Thanks in advance

    Hi,
    First of all, put in an exception block, see what exact exception it is throwing, and then post that exception. You also have to check whether you have created the directory and whether it has sufficient privileges. Note, too, that the log variable is never assigned, so fopen is being passed a NULL file name.
    create or replace procedure verify as
    ACTIVITY_FILE UTL_FILE.FILE_TYPE;
    log varchar2(600);
    begin
    ACTIVITY_FILE := UTL_FILE.fopen('/dacscan/Mani',log,'W');
    EXCEPTION
    WHEN others THEN
    DBMS_OUTPUT.PUT_LINE(SQLCODE||SQLERRM);
    end;
    /
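    For what it's worth, the usual working shape on 9i and later is a directory object plus a non-NULL file name. A sketch (the directory object name, grantee, and file name are invented for illustration):
    -- as a DBA:
    CREATE OR REPLACE DIRECTORY dacscan_dir AS '/dacscan/Mani';
    GRANT READ, WRITE ON DIRECTORY dacscan_dir TO dacscan;
    -- then:
    create or replace procedure verify as
    ACTIVITY_FILE UTL_FILE.FILE_TYPE;
    begin
    -- fopen takes the directory object name, a real file name, and an open mode
    ACTIVITY_FILE := UTL_FILE.fopen('DACSCAN_DIR', 'activity.log', 'w');
    UTL_FILE.put_line(ACTIVITY_FILE, 'verify ran at ' || to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'));
    UTL_FILE.fclose(ACTIVITY_FILE);
    EXCEPTION
    WHEN others THEN
    DBMS_OUTPUT.PUT_LINE(SQLCODE || ' ' || SQLERRM);
    RAISE;
    end;
    /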

  • Log File Creation Confusion

    SQL*Plus: Release 10.2.0.3.0 - Production on Mon Mar 11 11:42:45 2013
    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.
    There are some initialization parameters that decide the location of the online redo log files. These initialization parameters are:
    - DB_CREATE_ONLINE_LOG_DEST_n
    - DB_RECOVERY_FILE_DEST
    - DB_CREATE_FILE_DEST
    I could not understand the order of precedence among these parameters when more than one of them is set for creating online log files. If I set all of them, the online log file is always created in the path defined by DB_CREATE_ONLINE_LOG_DEST_n, and the other parameters (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) are ignored.
    If I set only the last two (DB_RECOVERY_FILE_DEST, DB_CREATE_FILE_DEST) and do not set DB_CREATE_ONLINE_LOG_DEST_n, the log file is created in both locations (DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST) with a mirroring mechanism.
    SQL> select name,value
      2    from v$parameter
      3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
      4  /
    NAME                                                                             VALUE
    db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
    db_create_online_log_dest_1
    db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                              
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
    SQL> alter database add logfile
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                                      
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                            
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                            
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                            
             4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\DBFILE\ORCL\ONLINELOG\O1_MF_4_8MTHLWTJ_.LOG                   
             4         ONLINE  D:\ORACLE\PRODUCT\10.2.0\DB_1\FLASH_RECOVERY_AREA\ORCL\ONLINELOG\O1_MF_4_8MTHLZB8_.LOG
    As you can see from the result above, log file creation honours the parameters DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST. When I define the parameter DB_CREATE_ONLINE_LOG_DEST_1, log file creation goes only to the location defined by DB_CREATE_ONLINE_LOG_DEST_1, no matter what is defined for DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST. Here you go.
    SQL> alter database drop logfile group 4
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                      
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                            
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                            
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                            
    SQL> alter system set db_create_online_log_dest_1='D:\oracle' scope=both
      2  /
    System altered.
    SQL> select name,value
      2    from v$parameter
      3   where upper(name) in ('DB_CREATE_ONLINE_LOG_DEST_1','DB_RECOVERY_FILE_DEST','DB_CREATE_FILE_DEST')
      4  /
    NAME                                                                             VALUE
    db_create_file_dest                                                              D:\ORACLE\PRODUCT\10.2.0\DB_1\dbfile
    db_create_online_log_dest_1                                                      D:\oracle
    db_recovery_file_dest                                                            D:\oracle\product\10.2.0\db_1\flash_recovery_area
    SQL> alter database add logfile
      2  /
    Database altered.
    SQL> select * from v$logfile
      2  /
        GROUP# STATUS  TYPE    MEMBER                                                                              
             3         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG                                    
             2         ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG                                    
             1 STALE   ONLINE  D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG                                    
             4         ONLINE  D:\ORACLE\ORCL\ONLINELOG\O1_MF_4_8MTJ10B8_.LOG
    My confusion is this: why do DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST behave the same way (mirroring the log file) when set together, while the behaviour changes as soon as DB_CREATE_ONLINE_LOG_DEST_n is defined?

    DB_CREATE_FILE_DEST is used if DB_CREATE_ONLINE_LOG_DEST_n is not defined.
    DB_RECOVERY_FILE_DEST is used for multiplexed log files.
    Thus, if Oracle uses DB_CREATE_FILE_DEST (because DB_CREATE_ONLINE_LOG_DEST_n is not defined), it multiplexes the log file to DB_RECOVERY_FILE_DEST if DB_RECOVERY_FILE_DEST is also defined.
    If, however, DB_CREATE_ONLINE_LOG_DEST_1 is used, Oracle expects you to define DB_CREATE_ONLINE_LOG_DEST_2 as well for multiplexing the log file; else it assumes that you do not want the log file multiplexed. The fact that the parameter ends with an n means that Oracle uses n=2 for the multiplexed location if defined.
    Hemant K Chitale
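    To illustrate the multiplexed variant Hemant describes, defining both destinations makes ALTER DATABASE ADD LOGFILE create one member in each (the two paths below are illustrative, not from the thread):
    SQL> alter system set db_create_online_log_dest_1='D:\oracle\redo1' scope=both;
    SQL> alter system set db_create_online_log_dest_2='D:\oracle\redo2' scope=both;
    SQL> alter database add logfile;
    SQL> select group#, member from v$logfile order by group#;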

  • Sqlnet.log file creation permissions

    One of my sites has a sqlnet.log file that has been symlinked to /dev/null and we are trying to determine if this has caused the issue with /dev/null's permissions being reset to 660 instead of 666.
    It looks like the other sqlnet.log files that are not symlinked to /dev/null are 640 so we are not sure this is the issue.
    Does anyone know if oracle changes permissions on the sqlnet.log file at creation or access time? And how it might change those permissions?
    This is on Oracle 10g running on RHEL4
    edit: I typed sqlnet.ora instead of sqlnet.log ... oops.
    Edited by: user12198769 on Nov 10, 2009 7:35 AM

    looks to be just the default in that file:
    NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
    There are several sqlnet.log files in other places on the server, yet there is only one Oracle home here; is there another location that holds sqlnet.ora info?
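    For what it's worth, if the goal is to control where the client-side sqlnet.log lands in the first place, sqlnet.ora has parameters for that. A sketch (the directory value is illustrative):
    LOG_DIRECTORY_CLIENT = /u01/app/oracle/network/log
    LOG_FILE_CLIENT = sqlnet.log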

  • Disable Log file creation for a Berkeley DB database

    Hi,
    I'm using Berkeley DB 6 with Oracle Mobile Server 11.3. When I sync a lot of data, a lot of log files are created, and I think this is really slowing down my sync process. Since I never need to recover those client databases, I would like to know if it is possible to disable log creation on a Berkeley database.
    Thank you

    The version of BDB that is used for DMS is TDS (Transaction Data Store). In that environment, logging is needed to ensure recoverability; there isn't a way to disable it. If you never need to do recovery, you can use the BDB utilities to take occasional checkpoints, which flush the cache, or you can shut down the client application. After this is done you can remove the log files, since you are saying that you will not need them for recovery.
    thanks
    mike
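    As a sketch of automating that cleanup: BDB can also remove no-longer-needed log files itself via its log auto-remove flag, which can be set in the environment's DB_CONFIG file (verify the flag against your BDB 6 documentation before relying on it):
    # DB_CONFIG in the client environment's home directory
    log_set_config DB_LOG_AUTO_REMOVE on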

  • Creation of materialized view with view log file for fast refresh in 10.1db

    Hi, I have a select statement that pulls data from almost 20 tables and takes a long time to complete. I am planning to create a materialized view on it; would you please suggest the best way of doing this?
    We would like to have a materialized view plus materialized view logs so that changes from the underlying tables are refreshed into the MV. Please provide help on this. Thanks in advance.

    It is possible to create a materialized view over 20 tables, but you have to understand the restrictions on complex materialized views with regard to fast refresh.
    To help your understanding, refer to Materialized View Concepts and Architecture and the Oracle Database FAQs.
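    As a minimal sketch of the pattern (the table and column names are invented; a fast-refreshable join MV needs the ROWID of every base table in its select list and a WITH ROWID materialized view log on each base table):
    CREATE MATERIALIZED VIEW LOG ON orders WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON customers WITH ROWID;
    CREATE MATERIALIZED VIEW mv_orders
    REFRESH FAST ON DEMAND
    AS
    SELECT o.ROWID o_rid, c.ROWID c_rid,
           o.order_id, o.order_date, c.customer_name
    FROM   orders o, customers c
    WHERE  o.customer_id = c.customer_id;
    With 20 tables the same rules apply to each base table, and DBMS_MVIEW.EXPLAIN_MVIEW will report exactly which fast-refresh restrictions your query violates.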

  • Exchange Server 2010 - Message Tracking Logs - Log file creation

    Hi,
    I would like to find out on the behavior of the exchange server in the way that it logs the message tracking.
    Currently the parameters used are:
    MessageTrackingLogMaxDirectorySize - 10GB
    MessageTrackingLogMaxAge - 30 days
    I would like to check: when the maximum directory size has been exceeded, does Exchange immediately delete the oldest log file to make space for the new logs? And in the event that the oldest file is open or locked, will Exchange delete the next oldest file, or will it reattempt to delete the "locked" file for a period of time? Lastly, when these oldest files cannot be deleted, will Exchange stop logging new tracking events?
    Thanks!

    Hi Zack,
    Thank you for your question.
    If you have configured the parameters "MessageTrackingLogMaxDirectorySize" and "MessageTrackingLogMaxAge", then you have enabled circular logging: Exchange deletes the oldest message tracking log files to make room for new ones when either of the following conditions is true:
    The message tracking log directory reaches its specified maximum size.
    A message tracking log file reaches its specified maximum age.
    In addition, the directory will not exceed the configured maximum size.
    If there are any questions regarding this issue, please be free to let me know. 
    Best Regards,
    Jim Xu
    TechNet Community Support
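    For reference, those two limits are per-transport-server settings; a sketch in the Exchange Management Shell (the server name is a placeholder):
    Set-TransportServer "EXCH01" -MessageTrackingLogMaxDirectorySize 10GB -MessageTrackingLogMaxAge 30.00:00:00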

  • Log file creation using km api

    Hi,
    How do I create a log file using the KM API? Please point me to any sample code that is available.
    Thanks and Regards,
    Nari.

    Thanks for your quick reply, but there is one more requirement: I am able to create a text file in KM and add content to it, but everything ends up on the same line. I want each new entry to start on a new line. Please see the code below and correct it.
    Date dt = new Date(Calendar.getInstance().getTimeInMillis());
    // filepath, filename and exceptionText are assumed to be defined elsewhere in the class
    com.sapportals.portal.security.usermanagement.IUser iuser =
        WPUMFactory.getServiceUserFactory().getServiceUser("cmadmin_service");
    IResourceContext irCtx = new ResourceContext(iuser);
    RID docsResource = RID.getRID(filepath);
    IContent initCont = new Content(new ByteArrayInputStream("".getBytes()), "text/plain", -1, null);
    // create the file only if it does not already exist (the original if was missing its braces)
    if (ResourceFactory.getInstance().getResource(RID.getRID(filepath + "/" + filename), irCtx) == null) {
        ICollection docsColl =
            (ICollection) com.sapportals.wcm.repository.ResourceFactory.getInstance().getResource(docsResource, irCtx);
        docsColl.createResource(filename, null, initCont);
    }
    String inputData = exceptionText; // text to log; replaces the invalid "String InputData = Exception;"
    RID sugg_html = RID.getRID(filepath + "/" + filename);
    IResource resource = com.sapportals.wcm.repository.ResourceFactory.getInstance().getResource(sugg_html, irCtx);
    IContent cont = resource.getContent();
    // read ALL existing lines (a single readLine() kept only the first line, losing older entries)
    BufferedReader buf_in = new BufferedReader(new InputStreamReader(cont.getInputStream()));
    StringBuilder existingComments = new StringBuilder();
    String line;
    while ((line = buf_in.readLine()) != null) {
        existingComments.append(line).append("\n");
    }
    buf_in.close();
    // append the new entry on its own line
    existingComments.append(dt).append(" ").append(inputData).append("\n");
    ByteArrayInputStream inputStream = new ByteArrayInputStream(existingComments.toString().getBytes());
    cont = new Content(inputStream, "text/plain", -1, null);
    resource.updateContent(cont);
    cont.close();

  • Log file creation -- BDC

    Hi friends, this is Sudhir. I have a scenario of loading mass vendor data. If any error occurs while loading, the error records should be written to a log file. Do we have any function module to create the log file?
    Hope your answers will be helpful to proceed further.
    With regards ,
    Sudhir S

    Are you looking for a way to generate logs like the ones you can see in SLG0?
    You can also store them into a spool.
    To store them into a file, you can simply read the spool output (or you can maybe use a SUBMIT ... EXPORTING LIST TO MEMORY to avoid the spool).
    In that case, please refer to [sap library|http://help.sap.com/saphelp_nw2004s/helpdata/en/d3/1fa03940fab918e10000000a114084/frameset.htm]
    and SBAL* demo programs (use of BAL_* function modules)
    Edited by: Sandra Rossi on Jul 20, 2010 10:39 PM
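    A minimal sketch of the SBAL pattern that reply points to (the object/subobject values are placeholders you would define in SLG0; logs saved this way are displayed with transaction SLG1):
    DATA: ls_log    TYPE bal_s_log,
          ls_msg    TYPE bal_s_msg,
          lv_handle TYPE balloghndl.
    ls_log-object    = 'ZVENDOR'.  " application log object defined in SLG0
    ls_log-subobject = 'ZLOAD'.
    CALL FUNCTION 'BAL_LOG_CREATE'
      EXPORTING
        i_s_log      = ls_log
      IMPORTING
        e_log_handle = lv_handle.
    ls_msg-msgty = 'E'.            " one entry per rejected vendor record
    ls_msg-msgid = '00'.
    ls_msg-msgno = '398'.          " generic '& & & &' message
    ls_msg-msgv1 = 'Vendor record rejected'.
    CALL FUNCTION 'BAL_LOG_MSG_ADD'
      EXPORTING
        i_log_handle = lv_handle
        i_s_msg      = ls_msg.
    CALL FUNCTION 'BAL_DB_SAVE'
      EXPORTING
        i_save_all = 'X'.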

  • Log file creation

    Hi,
    Can anybody tell me how to create a log file showing which master data has been replicated?

  • Resisting the creation of new log files when SQL SERVER is restarted

    Hi,
    I know that when SQL Server is restarted, new log files are created. But is it possible to prevent the creation of new log files and instead insert log data into the existing log files that were in use before the restart?

    Hello,
    I guess Raghvendra answered your question. As for your previous post, it is not clear what you want to ask, and you did not revert. Again, if your issue is solved, please mark the answer and vote the posts helpful.
    "Can I continue to log in the same file?"
    What does this line mean exactly? Yes, SQL Server will continue to use the same transaction log file (LDF file) for writing information as it was using before the shutdown. If you are talking about the errorlog file, a new errorlog file is created on each restart, which you can read using sp_readerrorlog.
    Even if you stopped the SQL Server service by mistake, it is not as if the server is gone. When you stopped the server, all in-flight transactions were rolled back. When SQL Server comes back online it undergoes crash recovery and brings all the databases online by reading the transaction log file and performing redo and undo of information: all committed transactions are rolled forward and uncommitted ones are rolled back.
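    On the errorlog side, the archived logs mentioned above can be read by number (0 is the current log, 1 is the log in use before the most recent restart):
    EXEC sp_readerrorlog 0;   -- current errorlog
    EXEC sp_readerrorlog 1;   -- errorlog from before the last restart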

  • How to Properly Protect a Virtualized Exchange Server - Log File Discontinuity When Performing Child Partition Snapshot

    I'm having problems backing up a Hyper-V virtualized Exchange 2007 server with DPM 2012. The guest has one VHD for the OS, and two pass-through volumes, one for logs and one for the databases. I have three protection groups:
    System State - protects only the system state of the mail server, runs at 4AM every morning
    Exchange Databases - protects the Exchange stores, 15 minute syncs with an express full at 6:30PM every day
    VM - Protecting the server hosting the Exchange VM. Does a child partition snapshot backup of the Exchange server guest with an express full at 9:30PM every day
    The problem I'm experiencing is that every time the VM express full completes I start receiving errors on the Exchange Database synchronizations stating that a log file discontinuity was detected. I did some poking around in the logs on the Exchange server
    and sure enough, it looks like the child partition snapshot backup is causing Exchange to truncate the log files even though the logs and databases are on pass-through disks and aren't covered by the child partition snapshot.
    What is the correct way to back up an entire virtualized Exchange server, system state, databases, OS drive and all?

    I just created a new protection group. I added "Backup Using Child Partition Snapshot\MailServer" with short-term protection using disk, and the replica created automatically over the network immediately. This new protection group contains only the child partition snapshot backup; no Exchange backups of any kind.
    The replica creation begins. Soon after, the following events show up in the Application log:
    =================================
    Log Name:      Application
    Source:        MSExchangeIS
    Date:          10/23/2012 10:41:53 AM
    Event ID:      9818
    Task Category: Exchange VSS Writer
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      PLYMAIL.mcquay.com
    Description:
    Exchange VSS Writer (instance 7d26282d-5dec-4a73-bf1c-f55d5c1d1ac7) has been called for "CVssIExchWriter::OnPrepareSnapshot".
    =================================
    Log Name:      Application
    Source:        ESE
    Date:          10/23/2012 10:41:53 AM
    Event ID:      2005
    Task Category: ShadowCopy
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      PLYMAIL.mcquay.com
    Description:
    Information Store (3572) Shadow copy instance 2051 starting. This will be a Full shadow copy.
    =================================
    The events continue on, basically snapshotting all of Exchange. From the DPM side, the total amount of data transferred tells me that even though Exchange is truncating its logs, nothing is actually being sent to the DPM server, so this snapshot operation seems to be superfluous. ~30 minutes later, when my regularly scheduled Exchange job runs, it fails because of a log file discontinuity.
    So, in this case at least, a Hyper-V snapshot backup is definitely causing Exchange to truncate the log files. What can I look at to figure out why this is happening?

  • DATE fields and LOG files  in context with external tables

    I am facing two problems when dealing with the external tables feature in Oracle 9i.
    I created an external table with some fields of the DATE data type. There were no issues during the creation part, but when I query the table, the DATE fields are not properly selected even though the data is there in the files. Any ideas on how to deal with this?
    My next question is regarding the log files: the contents of the log file keep growing as the external tables are queried. Is there a way to control this behaviour?
    Suggestions / Advices on the above two issues are welcome.
    Thanks
    Lakshminarayanan

    Hi
    If you have date datatypes then:
    select
    greatest(TABCASER1.CASERRECIEVEDDATE, EVCASERS.FINALEVDATES, EVCASERS.PUBLICATIONDATE, EVCASERS.PUBLICATIONDATE, TABCASER.COMPAREACCEPDATE)
    from TABCASER, TABCASER1, EVCASERS
    where ...-- join and other conditions
    1. greatest is good enough.
    2. to_date creates a date datatype from a string, using the format given in the format string (e.g. 'mm/dd/yyyy').
    3. decode(a, b, c, d) is a function: if a = b then it returns c, else d. NULL means there is no data in that cell of the table.
    6. To format a date for display, use the to_char function with a format model, as in the to_date function.
    Ott Karesz
    http://www.trendo-kft.hu
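    For instance (values invented for illustration):
    select to_date('03/15/2013', 'mm/dd/yyyy') as parsed_date,
           to_char(sysdate, 'mm/dd/yyyy')      as today_formatted
    from dual;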
