Creation of log files in PL/SQL

Hi,
Here is a piece of code where I am trying to create a log file.
create or replace procedure verify as
declare
ACTIVITY_FILE UTL_FILE.FILE_TYPE;
log varchar2(600);
begin
ACTIVITY_FILE := UTL_FILE.fopen('/dacscan/Mani',log,'W');
end;
I get the error while executing this procedure.
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.UTL_FILE", line 145
ORA-06512: at "DACSCAN.VERIFY", line 7
ORA-06512: at line 1
Thanks in advance

Hi,
First of all, put an exception block around the call, see which exact exception it is throwing, and then post that exception. You also have to check whether you have created the directory and whether it has sufficient privileges. Note also that a stored procedure does not use the DECLARE keyword (declarations go straight after AS), and that your log variable is never assigned, so FOPEN is being passed a NULL file name; that raises one of UTL_FILE's own exceptions, which is consistent with the unhandled user-defined exception you are seeing.
create or replace procedure verify as
ACTIVITY_FILE UTL_FILE.FILE_TYPE;
log varchar2(600) := 'activity.log'; -- FOPEN needs a non-NULL file name; this name is just an example
begin
ACTIVITY_FILE := UTL_FILE.fopen('/dacscan/Mani',log,'W');
EXCEPTION
WHEN others THEN
DBMS_OUTPUT.PUT_LINE(SQLCODE||' '||SQLERRM);
end;
/
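For reference, a cleaner approach on recent Oracle versions is a directory object instead of a raw path. A minimal sketch; the object name LOG_DIR and file name activity.log are illustrative, and the grants need a privileged user:
-- as a privileged user (illustrative names)
CREATE OR REPLACE DIRECTORY log_dir AS '/dacscan/Mani';
GRANT READ, WRITE ON DIRECTORY log_dir TO dacscan;
-- then open the file via the directory object
create or replace procedure verify as
ACTIVITY_FILE UTL_FILE.FILE_TYPE;
begin
ACTIVITY_FILE := UTL_FILE.fopen('LOG_DIR', 'activity.log', 'W');
UTL_FILE.put_line(ACTIVITY_FILE, 'log file created');
UTL_FILE.fclose(ACTIVITY_FILE);
end;
/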

Similar Messages

  • Display data in log file using PL/SQL procedure

    Just as srw.message is used in Oracle RDF Reports to display data in the log file in Oracle Apps, how is it possible to display data in a log file from a PL/SQL procedure?
    Please also mention the syntax.

    Pl post details of OS, database and EBS versions.
    You will need to invoke the seeded FND_LOG procedure - see previous discussions on this topic
    Enable debug for pl/sql
    https://forums.oracle.com/forums/search.jspa?threadID=&q=FND_LOG&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
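    For illustration, the usual FND_LOG call looks roughly like this (a sketch, assuming the EBS debug profile options are enabled; the module name is illustrative):
    BEGIN
    -- guard the call so nothing is logged unless debug logging is switched on
    IF fnd_log.level_statement >= fnd_log.g_current_runtime_level THEN
    fnd_log.string(fnd_log.level_statement, 'xx.custom.my_package.my_proc', 'processed record');
    END IF;
    END;
    /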
    HTH
    Srini

  • Steps to move Data and Log file for clustered SQL Server

    Hi guys 
    We have an active/passive SQL 2008 R2 cluster environment.
    I am looking for the steps to move the data and log files of the user databases and system databases for a clustered SQL Server instance.
    Currently the data and log files reside on the same drive for both the user and system databases.
    Thanks
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Try the below link
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/468de435-3432-45c2-a50b-23519cd2686e/moving-the-system-databases-in-a-sql-cluster?forum=sqldisasterrecovery
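    As a rough sketch of the usual approach for a user database (the names and paths are illustrative; system databases need the extra steps in the linked article, and the target drive must be a clustered disk in the same resource group):
    -- repoint the file, take the database offline, move the physical file, bring it online
    ALTER DATABASE UserDB MODIFY FILE (NAME = N'UserDB_log', FILENAME = N'L:\SQLLogs\UserDB_log.ldf');
    ALTER DATABASE UserDB SET OFFLINE;
    -- move the .ldf to L:\SQLLogs\ at the OS level, then
    ALTER DATABASE UserDB SET ONLINE;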
    -Prashanth

  • Log File Issue In SQL server 2005 standard Edition

    We have a database of size 375 GB. The data file has 80 GB of free space within it. When we tried to rebuild the index we had 450 GB of free space on the disk where the log file resides. The rebuild index activity failed due to a space issue; we added more space and got the job done successfully.
    The log file grew to 611 GB to complete the index rebuild.
    Version: SQL Server 2005 Standard Edition. Is there a way to estimate the space required for an index rebuild in this version?
    I am aware we normally allocate 1.5 times the data file size, but in this case that was totally wrong.
    Any suggestion with examples would be appreciated.
    Raghu

    OK, there's a few things here.
    Can you outline for everybody the recovery model you are using, the frequency with which you take full, differential and transaction log backups.
    Are you selectively rebuilding your indexes or are you rebuilding everything?
    How often are you doing this? Do you need to?
    There are some great resources on automated index maintenance, check out
    this post by Kendra Little.
    Depending on your recovery point objectives I would expect a production database to be in the full recovery mode and as part of this you need to be taking regular log backups otherwise your log file will just continue to grow. By taking a log backup it will
    clear out information from inactive VLFs and therefore allow SQL Server to write back to those VLFs rather than having to grow the log file. This is a simplified version of events; there are caveats.
    A VLF will be marked as active if it still has an open transaction in it, or if there is an HA option that still requires that data to be available because it has not yet been copied to another node.
    Most customers that I see take transaction log backups every 15 - 30 minutes, but this really does depend upon how much data your company can afford to lose. That's another discussion for another day.
    Make sure that you take a transaction log backup prior to your job that does your index rebuilds (hopefully a smart job not a sledge hammer job).
    As mentioned previously swapping to bulk logged can help to reduce the size of the amount of information logged during index rebuilds. If you do this make sure to swap back into the full recovery model straight after and perform a full backup. There are
    problems with the ability to do point in time restores whilst in the bulk logged recovery model, so you need to reduce the amount of time you use it.
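    For example, a hedged sketch of that swap (the database, table, and backup path are illustrative):
    -- take a log backup first, as noted above
    ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
    ALTER INDEX ALL ON dbo.BigTable REBUILD;
    ALTER DATABASE MyDB SET RECOVERY FULL;
    BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak';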
    Really you also need to look at how your indexes are created does the design of them lead to them being fragmented on a regular basis? Are they being used? Are there better indexes out there that can help performance?
    Hopefully that should put you on the right track.
    If you find this helpful, please mark the post as helpful,
    If you think this solves the problem, please propose or mark it as an answer.
    Please provide details on your SQL Server environment such as version and edition, also DDL statements for tables when posting T-SQL issues
    Richard Douglas
    My Blog: Http://SQL.RichardDouglas.co.uk
    Twitter: @SQLRich

  • Avoid creation of log file for external table

    Hi
    This script is creating a log file in the ext directory. How do I avoid that? Can you give the syntax?
    Thanks alot.
    Bhaskar
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE)
    LOCATION ('datfiles_list.txt')
    );

    Example
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE NOLOGFILE)
    LOCATION ('datfiles_list.txt')
    );
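    If you also want to suppress the bad and discard files that ORACLE_LOADER can write to the same directory, the record clause accepts NOBADFILE and NODISCARDFILE as well:
    CREATE TABLE datfiles_list
    (file_name varchar2(255))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE NOBADFILE NODISCARDFILE NOLOGFILE)
    LOCATION ('datfiles_list.txt')
    );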

  • Adobe 9.5 deletes log files - need to turn off creation of log files

    We have a bunch of files from SAS outputs that include .sas, .rtf, and .log extensions.
    When we try to convert a batch of the .rtf files to PDF by right clicking on them, the .log files are deleted. The originals are not Adobe .log files, but required files from the SAS output.
    I have unchecked "Delete Log Files for Successful jobs" in both Distiller and the Adobe printer preferences.
    It only deletes the .log with the same name as file types associated with Word. If I create .txt or xls files and .log files with the same name (i.e. test.txt and test.log) it does not delete the .log file, but it is overwritten by the Adobe log.
    This happens whether the file is local or on a mapped network drive.
    If I save to a different location, the .log is not deleted or overwritten, but that is really just a workaround. It's doable if it's the only option.
    They can also copy only the .rtf files to another folder and copy the .pdf files back afterwards, but this is a lot of extra work for high volumes.
    This is an ongoing need involving lots of files, so moving or renaming is not an option, even with batch programs.
    What I really need to do is stop Adobe from creating AND deleting log files or force it to create the .log files in a different location than the original. Unless the problem is Word, but I cannot find any information on this problem.
    Thanks
    Mike

  • Shrink Log File on MS sql 2005

    Hi all,
    My DB has a huge log file of more than 100 GB.
    The idea is to shrink it, but in the right way.
    I was trying this way:
    use P01
    go
    backup log P01 TO DISK = 'D:\P01LOG1\P01LOG1.bak'
    go
    dbcc shrinkfile (P01LOG1,250) with no_infomsgs
    go
    The problem is that the backup file is getting bigger and bigger with each backup.
    So my question is: how do I shrink the log file correctly, with a backup, where the backup file does not keep growing but stays at the same size, overwriting the previous backups?
    I have a full daily backup with Data Protector from HP, but it doesn't clear the log, and it isn't possible to shrink it.

    What you want to do with the log backups depends on how you are going to recover the database in case of system/database loss, and on your backup schedule.
    1. If you are not going to do point in time recovery then there is no point in taking a tran log backup to a backup file. You can change the recovery model of the database to "simple". If your recovery model is "simple" you don't have to take transaction log backups at all. The inactive transactions are flushed from the log automatically. You should still be taking full and differential backups so that you can at least recover your database to the last full backup and apply the latest differential backup.
    2. If this is a production system then you should definitely be on "full" recovery mode and should be taking regular transaction log backups and storing them in a safe place so that you can use them to recover your system to any point in time. Storing the transaction log backups on the same server kind of defeats the purpose, because if you lose the server and disks you will not have the backups either.
    3. If you are in full recovery mode, and let's assume that you run your transaction log backups every 30 mins, then you need your log file to be of the size that can handle the transactions that happen in any given 30 to 60 mins.
    There shouldn't be a need to constantly shrink log files if you configure things right.
    Edited by: Neeraj Nagpal on Aug 20, 2010 2:48 AM
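    One more point on the backup file growing with every run: BACKUP LOG appends to an existing backup file by default, while WITH INIT overwrites it. A sketch using the poster's names (be aware that overwriting log backups gives up point-in-time recovery to anything before the latest one):
    backup log P01 TO DISK = 'D:\P01LOG1\P01LOG1.bak' WITH INIT
    go
    dbcc shrinkfile (P01LOG1,250) with no_infomsgs
    go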

  • Read the c2 log file of the sql server using java

    Hi All,
    I want to read the C2 audit log file using core Java. How is this possible? If anybody knows about this, please give me sample code to help me.
    I have also been searching the net but am not finding anything about this, so please help me with this task.
    awaited person

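    The C2 audit log is a binary SQL Trace (.trc) file, so rather than parsing it byte by byte in Java, one approach is to let SQL Server decode it and read the resulting rows over JDBC. A sketch of the query (the path is illustrative):
    SELECT *
    FROM sys.fn_trace_gettable(N'C:\MSSQL\Data\audittrace.trc', DEFAULT);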

  • Log file shrinking in SQL server

    Hi,
    I have a log file with an initial size of 80 GB on the C drive.
    Now we are having a space issue on the C drive, so I tried to shrink the log file, but it is not reducing below the initial size.
    Is this the default behavior, or did I miss something while shrinking?
    I used the DBCC SHRINKFILE option to shrink it.
    How can I change the initial size of the log file?
    Is it because the initial size has been set to 80 GB that I am not able to free space on the C drive?
    Thanks,
    Vinodh Selvaraj

    Hello,
    Please check first the log reuse wait state of the databases; you may have to run an additional log backup before you can shrink the log file:
    select name, log_reuse_wait_desc
    from sys.databases
    order by name
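    If log_reuse_wait_desc reports LOG_BACKUP, a log backup followed by DBCC SHRINKFILE should release the space. DBCC SHRINKFILE can also shrink a file below its initial size and resets that minimum, so the 80 GB initial size is not by itself the blocker. A sketch with illustrative names and a target size in MB:
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';
    USE MyDB;
    DBCC SHRINKFILE (N'MyDB_log', 10240);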
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • Creation of .log files

    When uploading new JSPs to the jsp-bin directory, the resulting files are all in the format name.jsp.log.
    The contents of one of these files is:
    java.lang.ClassCastException
         at oracle.ifs.beans.parsers.ClassSelectionParser.createDocument(ClassSelectionParser.java, Compiled Code)
         at oracle.ifs.beans.parsers.ClassSelectionParser.putPublicObjectWithVersioning(ClassSelectionParser.java, Compiled Code)
         at oracle.ifs.beans.parsers.ClassSelectionParser.parse(ClassSelectionParser.java, Compiled Code)
         at oracle.ifs.utils.common.ParserHelper.parseExistingDocument(ParserHelper.java, Compiled Code)
         at oracle.ifs.protocols.ntfs.server.FileProxy.parseFile(FileProxy.java, Compiled Code)
         at oracle.ifs.protocols.ntfs.server.FileProxy.cleanupFile(FileProxy.java, Compiled Code)
         at oracle.ifs.protocols.ntfs.server.FileProxy.runFileProxy(Native Method)
         at oracle.ifs.protocols.ntfs.server.FileProxy.run(FileProxy.java, Compiled Code)
    Any thoughts would be welcome.

  • How to append timestamp to log file in SQL*Plus ?

    Version: 11.2.0.3
    Platform : RHEL 5.8 (But I am looking for platform independant solution)
    I want to append the timestamp to spooled log file name in SQL*Plus.
    The spooled log filename should look like
    WMS_APP_23-March-2013.log
    I tried the following three methods found on Google, but none of them worked!
    I tried this
    col sysdt noprint new_value sysdt_var
    SELECT TO_CHAR(SYSDATE, 'yyyymmdd_hh24miss') sysdt FROM DUAL;
    spool run_filename_&sysdt_var.Log
    as suggested in
    http://power2build.wordpress.com/2011/03/11/sqlplus-spool-name-with-embedded-timestamp/
    and this
    spool filename with timestamp
    col sysdt noprint new_value sysdt
    SELECT TO_CHAR(SYSDATE, 'yyyymmdd_hh24miss') sysdt FROM DUAL;
    spool run_filename_&sysdt..Log
    as suggested in
    http://powerbuildev.wordpress.com/2011/03/11/sqlplus-spool-name-with-embedded-timestamp/
    and this
    column tm new_value file_time noprint
    select to_char(sysdate, 'YYYYMMDD') tm from dual ;
    prompt &file_time
    spool logfile_id&file_time..log
    as suggested in
    Creating a spool file with date/time appended to file name
    None of the above worked in RHEL or MS DOS. Any workaround ?

    I have tested your suggestions, but I still couldn't append the date to the log file in RHEL or MS-DOS SQL*Plus.
    Here are the attempts I've made. I am posting what the log file name looked like after every test.
    #Attempt1 with two dots (&sysdate..log )
    set echo on
    set feedback on
    set define off
    set pages 999
    column dcol new_value SYSDATE noprint
    select to_char(sysdate,'YYYYMMDD') dcol from dual;
    spool testlog.&sysdate..log
    select 'hello' from dual;
    spool off;
    Log File Name --> testlog.&sysdate..log
    #Attempt2 with single dot (&sysdate.log)
    set echo on
    set feedback on
    set define off
    set pages 999
    column dcol new_value SYSDATE noprint
    select to_char(sysdate,'YYYYMMDD') dcol from dual;
    spool testlog.&sysdate.log
    select 'hello' from dual;
    spool off;
    Log File Name --> testlog.&sysdate.log
    #Attempt3. Replacing first dot with Hyphen (testlog- ) to check if the first dot was causing the issue
    set echo on
    set feedback on
    set define off
    set pages 999
    column dcol new_value SYSDATE noprint
    select to_char(sysdate,'YYYYMMDD') dcol from dual;
    spool testlog-&sysdate.log
    select 'hello' from dual;
    spool off;
    Log Filename: testlog-&sysdate.log
    #Attempt4: replacing SYSDATE with SDATE
    set echo on
    set feedback on
    set define off
    set pages 999
    column dcol new_value SDATE noprint
    select to_char(sysdate,'YYYYMMDD') dcol from dual;
    spool testlog1.&SDATE..log
    select 'hello' from dual;
    spool off;
    Log File Name --> testlog1.&SDATE..log
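    Note that all four attempts run set define off, which tells SQL*Plus not to expand &variables at all, which is why the literal text ends up in the spool file name. A sketch that should work once substitution is back on (the variable name is arbitrary, and SYSDATE is best avoided as a name since it shadows the built-in function; in &sdate..log the first dot terminates the variable name and the second is the literal dot before the extension, and fm in the format mask suppresses the blank padding Month otherwise gets):
    set define on
    column dcol new_value sdate noprint
    select to_char(sysdate, 'DD-fmMonth-YYYY') dcol from dual;
    spool WMS_APP_&sdate..log
    select 'hello' from dual;
    spool off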

  • DATA Log file path change in sql

    Dear experts,
    due to non-availability of disk space I have moved my SAP log file to another drive. SAP is now working fine, but when I try to change the path in SQL 2005 it does not give me permission to do so.
    Please let me know how to change the log file path in SQL 2005.
    Also, if in future I move my SAP data file, how do we configure the new path for the database in SQL 2005?
    Regards,
    jitendra.

    Hi
    Kindly go through the web links below. Before you start the process, stop the SAP instance.
    http://support.microsoft.com/kb/224071
    Set the log DB size - http://help.sap.com/saphelp_nwce72/helpdata/en/f2/31ad9b810c11d288ec0000e8200722/content.htm
    and ref the SAP Note 363018
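    For reference, the usual T-SQL for repointing a log file (run as sysadmin with the SAP instance stopped; the database name, logical file name, and path are illustrative):
    ALTER DATABASE P01 MODIFY FILE (NAME = N'P01LOG1', FILENAME = N'E:\SAPLOG\P01LOG1.ldf');
    ALTER DATABASE P01 SET OFFLINE;
    -- move the physical .ldf file to E:\SAPLOG\ at the OS level, then
    ALTER DATABASE P01 SET ONLINE;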
    Regards
    Sriram

  • SQL - How to attach FileStream enabled db without log file

    I'm trying to attach a FileStream enabled database without a log file. My SQL looks something like this:
    USE master
    CREATE DATABASE MyDB
    ON PRIMARY(NAME = N'MyDB', FILENAME = 'C:\myDB.MDF' ),
    FILEGROUP myFileGroup CONTAINS FILESTREAM ( NAME = myData, FILENAME = 'C:\myFileGroup')      
    For Attach
    Here is the error I'm receiving:
    Msg 5173, Level 16, State 3, Line 2
    One or more files do not match the primary file of the database.
    If you are attempting to attach a database, retry the operation with the correct files.  
    If this is an existing database, the file may be corrupted and should be restored from a backup.
    Does anyone know if it's possible to attach a FileStream enabled database without the original log file?  Thanks!

    Hi cgregory,
    The error might occur if the database was not shut down cleanly. In that case the log file is required, or you will have some data loss. Please see this thread addressing this type of issue:
    attaching DB without .ldf file ???
    For attach a database with FILESTREAM enabled, please refer to this article:
    How to Detach and Attach a SQL Server FILESTREAM Enabled Database.
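    For what it is worth, if the database was shut down cleanly, FOR ATTACH_REBUILD_LOG instead of FOR ATTACH lets SQL Server rebuild the missing log; a sketch against the original statement:
    USE master
    CREATE DATABASE MyDB
    ON PRIMARY(NAME = N'MyDB', FILENAME = 'C:\myDB.MDF' ),
    FILEGROUP myFileGroup CONTAINS FILESTREAM ( NAME = myData, FILENAME = 'C:\myFileGroup')
    FOR ATTACH_REBUILD_LOG;
    If it was not shut down cleanly, the log file is required, as noted above.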
    Stephanie Lv
    TechNet Community Support

  • Calling the Log file (Operating System)  using PL/SQL

    Hi to everybody
    I am loading legacy data into an Oracle Apps table through SQL*Loader,
    and now I want to know how many data records are in my legacy file.
    We can get this through the SQL*Loader log file;
    now my question is how to read that log file from a PL/SQL script.
    Please solve my question.

    You can define an external table on it and read it with SQL commands.
    See External Table Example
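    A minimal sketch, assuming a directory object ext_dir that points at the directory holding the SQL*Loader log (the table, object, and file names are illustrative):
    CREATE TABLE sqlldr_log_ext (line VARCHAR2(4000))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
    NOBADFILE NOLOGFILE
    FIELDS MISSING FIELD VALUES ARE NULL
    (line POSITION(1:4000) CHAR(4000)))
    LOCATION ('legacy_load.log')
    )
    REJECT LIMIT UNLIMITED;
    The record counts then appear on lines such as 'Total logical records read:', which an ordinary query can filter:
    SELECT line FROM sqlldr_log_ext WHERE line LIKE 'Total logical records%';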

  • SCOM 2012 - SQL 2012 DB Log File Discovery issue

    Dear Experts,
    I have some SQL 2012 servers that have a few log files (.LDF) stored on a specific drive. SCOM discovers these files but has the wrong value in 'Display Name'.
    For example: the log file name on the server is PROD and the file path is c:\SQL\PROD.LDF, but in the console it shows the name as UAT and the file path as c:\SQL\PROD.LDF (the file path is correct, as expected). It always stays in a critical state, stating that the log file is out of space, while that is not the case.
    We have even tried wiping the agent off the server and reinstalling it, but that did not fix the issue. When I remove the agent, the 'Operations Manager' event log disappears, but when I reinstall the agent after a few days, I see the log created again, but with older events too, dated well prior to the uninstallation.
    And the other thing is, we had this issue while using 2007 R2, and even after switching over to 2012 it continues.
    SCOM 2012 was a fresh setup and was not an upgrade.
    Hope someone could help me out with this.
    Regards,
    Saravanan

    Hi Niki,
    Sorry for the delay in reply.
    I hope the image can explain better. I have that log file on a SQL server which is being discovered with a wrong file name but with the correct path. The actual file name that I see on the server is exactly the same as what the console shows in the file path.
    But the 'File Name' in the console is completely irrelevant. Also, this log file is in a critical state for 'log file full', which is simply not the case. There are a few other log files on this server for which we do not have this issue.
    Please let me know if any other information would be required
    Regards,
    Saravanan
