Huge log file generation

Hi,
I have a report server. When I start the report server, the log file located at $ORACLE_HOME/opmn/logs/OC4J~Webfile2~default~island~1 grows to more than 2 GB in 24 hours.
Please tell me what the root cause of this may be and what a possible solution would be.
Please, it's urgent.

Hi Jaap,
First of all, thanks.
How do I turn debugging off on the container?
Some of the repeated lines in the log are as follows:
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
95178969 [AJPRequestHandler-ApplicationServerThread-90] ERROR com.allianz.weo.struts.StoredProcAction - SQLException while calling proc CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS: ORA-01013: user requested cancel of current operation
ORA-06512: at "CUSTOMER.AZBJ_WEO_SECURITY", line 107
ORA-06512: at line 1
07/07/12 12:18:32 DriverManagerConnectionPoolConnection not closed, check your code!
07/07/12 12:18:32 (Use -Djdbc.connection.debug=true to find out where the leaked connection was created)
Regards,
Sushama.
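The last two lines of the excerpt point at a second problem besides the debug-level logging: the application borrows pooled JDBC connections and never closes them. A minimal sketch of the usual fix, assuming the application gets its pool through a standard javax.sql.DataSource (the class and method names here are illustrative; the procedure name is taken from the log):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class StoredProcCaller {
    private final DataSource ds; // the container-managed connection pool

    public StoredProcCaller(DataSource ds) {
        this.ds = ds;
    }

    public void loadModuleMenus() throws SQLException {
        // try-with-resources closes the statement and returns the
        // connection to the pool even if the call fails with ORA-01013
        try (Connection con = ds.getConnection();
             CallableStatement cs = con.prepareCall(
                     "{call CUSTOMER.AZBJ_WEO_SECURITY.LOAD_MODULE_MENUS}")) {
            cs.execute();
        }
    }
}

On the older JVMs that OC4J-era containers run, the same cleanup goes in a finally block instead.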

Similar Messages

  • Reader 10.1 update fails, creates huge log files

    Last night I saw the little icon in the system tray saying an update to Adobe Reader was ready to be installed.
    I clicked it to allow the install.
    Things seemed to go OK (on my Windows XP Pro system), although very slowly, and it finally got to copying files.
    It seemed to still be doing something and was showing that it was copying file icudt40.dll.  It still displayed the same thing ten minutes later.
    I went to bed, and this morning it still showed that it was copying icudt40.dll.
    There is no "Cancel" button, so this morning I had to stop the install through Task Manager.
    Now, in my "Local Settings\TEMP" directory, I have a file called AdobeARM.log that is 2,350,686 KB in size and a file MSI38934.LOG that is 4,194,304 KB in size.
    They are so big I can't even look at them to see what's in them.  (Too big for Notepad.  When I tried to open the smaller log file, AdobeARM.log, with Wordpad it was taking forever and showing only 1% loaded, so after five minutes, I terminated the Wordpad process so I could actually do something useful with my computer.)
    You would think the installer would be smart enough to stop at some point when the log files begin to get enormous.
    There doesn't seem to be much point to creating log files that are too big to be read.
    The update did manage to remove the Adobe Reader X that was working on my machine, so now I can no longer read PDF files.
    Maybe I should go back to Adobe Reader 9.
    Reader X never worked very well.
    Sometimes the menu bar showed up, sometimes it didn't.
    PDF files at the physics e-print archive always loaded with page 2 displayed first.  And if you forgot to disable the look-ahead capability, you could get banned from the e-print archive site altogether.
    And I liked the user interface for the search function a lot better in version 9 anyway.  Who wants to have to pop up a little box for your search phrase when you want to search?  Searching is about the most important and routine activity one does, other than going from page to page and setting the zoom.

    Hi Ankit,
    Thank you for your e-mail.
    Yesterday afternoon I deleted the > 2 GB AdobeARM.log file and the > 4.194 GB
    MSI38934.LOG file.
    So I can't upload them.  I expect I would have had a hard time doing so
    anyway.
    It would be nice if the install program checked the size of the log files
    before writing to them and gave up if the size was, say, three times larger
    than some maximum expected size.
    The install program must have some section that permits infinite retries or
    some other way of getting into an endless loop.  So another solution would be
    to count the number of retries and terminate after some reasonable number of
    attempts.
    Something had clearly gone wrong and there was no way to stop it, except by
    going into the Task Manager and terminating the process.
    If the install program can't terminate when the log files get too big, or if
    it can't get out of a loop some other way, there might at least be a "Cancel"
    button so the poor user has an obvious way of stopping the process.
    As it was, the install program kept on writing to the log files all night
    long.
    Immediately after deleting the two huge log files, I downloaded and installed
    Adobe Reader 10.1 manually.
    I was going to turn off Norton 360 during the install and expected there
    would be some user input requested between the download and the install, but
    there wasn't.
    The window showed that the process was going automatically from download to
    install. 
    When I noticed that it was installing, I did temporarily disable Norton 360
    while the install continued.
    The manual install went OK.
    I don't know if temporarily disabling Norton 360 was what made the difference
    or not.
    I was happy to see that Reader 10.1 had kept my previous preference settings.
    By the way, one of the default settings in "Web Browser Options" can be a
    problem.
    I think it is the "Allow speculative downloading in the background" setting.
    When I upgraded from Reader 9 to Reader 10.0.x in April, I ran into a
    problem. 
    I routinely read the physics e-prints at arXiv.org (maintained by the Cornell
    University Library) and I got banned from the site because "speculative
    downloading in the background" was on.
    [One gets an "Access denied" HTTP response after being banned.]
    I think the default value for "speculative downloading" should be unchecked
    and users should be warned that one can lose the ability to access some sites
    by turning it on.
    I had to figure out why I was automatically banned from arXiv.org, change my
    preference setting in Adobe Reader X, go to another machine and find out who
    to contact at arXiv.org [I couldn't find out from my machine, since I was
    banned], and then exchange e-mails with the site administrator to regain
    access to the physics e-print archive.
    The arXiv.org site has followed the standard for robot exclusion since 1994
    (http://arxiv.org/help/robots), and I certainly didn't intend to violate the
    rule against "rapid-fire requests," so it would be nice if the default
    settings for Adobe Reader didn't result in an unintentional violation.
    Richard Thomas

  • Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation

    Hey All,
    I am trying to figure out why one of our databases is generating 30K log files a day! The other one is generating 20K log files a day. The database does not grow in size as the log files are generated; the problem is purely log file generation.
    I've tried running through some of the various solutions out there, reviewed message tracking logs, rpc client access logs, IIS Logs - all of which show important info, but none of which actually provide the answers.
    I stopped the following services to see if that would affect the log file generation in any way, and it has not:
    MS Exchange Transport
    Mail Submission
    IIS (Site Stopped in IIS)
    Mailbox Assistants
    Content Indexing Service
    With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked the mailbox size reports (top 10) and found increases for several users: an item count increase of about 300 for one user, and a size increase of about 150 MB for another (over the whole day).
    I am not sure what else to check here? Any ideas?
    Thanks,
    Robert

    Hmm - this sounds like a device is chewing up the logs.
    If you use Log Parser Studio, are there any standout devices in terms of the number of hits?
    And for ExMon, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog: http://blogs.technet.com/rmilne
    Rhoderick,
    Thanks for the response. When checking the logs, the highest number of hits were from the (source) load balancers, port 25 VIP. The problems I was experiencing were the following:
    1) I kept expecting the log file generation to drop to an acceptable rate of 10~20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the hated Nagios monitoring environment.
    2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
    3) I needed to look closer at the SMTP transport database counters, logs, and log files, and focus less on the database log generation. I did some of that, but not enough.
    4) My troubleshooting kept getting thrown off because the monitoring notifications seemed to be sent out in batches (or something similar); stopping the transport service for 10~15 minutes several times seemed to finally "stop the transaction logs from growing at a psychotic rate".
    5) I am re-running my data captures now that I have told the "Nagios Team" to quit killing the Exchange servers with their notifications (sometimes 100+ copies of the same notification for the same servers and issues). So far, at a quick glance, the log file generation seems to have dropped by about 30%.
    Question: What would be the best counters to review in order to "put it all together"? Also note: our server roles are split, MBX and CAS/HT.
    Robert
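    On the counters question, a rough sketch of the kind of sampling that ties transaction log generation to SMTP load; the two counter paths below are taken from the standard ESE and Transport counter sets, but verify the exact names in Performance Monitor on your build before relying on them:

    # Sample log write volume and SMTP submission once a minute for an hour
    Get-Counter -Counter @(
        '\MSExchange Database ==> Instances(*)\Log Bytes Write/sec',
        '\MSExchangeTransport Queues(_total)\Messages Submitted Per Second'
    ) -SampleInterval 60 -MaxSamples 60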

  • Exchange 2010 personal archive database massive log file generation

    Exchange Server 2010 SP3 + Update Rollup 4
    Windows Server 2008 R2, all updates
    VMware ESXi 5.5
    Server config: 2 x Xeon Quad Core 2.20GHz, 16GB RAM
    We recently started using personal archives. I created a database for this purpose ("Archive Mailboxes") on the same datastore as our live mailbox database ("Live Mailboxes"). It works great except that the mailbox maintenance generates
    massive amounts of log files, over 220GB per day on average. I need to know why. The Live Mailbox database generates around 70GB of log files every day. The database sizes are: Live = 159.9GB, Archive = 196.8GB. Everything appears to be working fine, there
    are no Error events related to archiving. MSExchangeMailboxAssistant warning events with event ID 10025 are logged every day. I have moved those mailboxes back and forth to temp databases (both Live and Archive mailboxes) and the 10025 events have not stopped, so I'm
    reasonably certain there is no corruption. Even if there were it still doesn't make sense to me that over 100 log files are generated every single minute of the day for the Archive store. And it's not that the database isn't being fully backed up; it is, every
    day.
    Do I need to disable the 24x7 option for mailbox maintenance to stop this massive log file generation? Should I disable mailbox maintenance altogether for the Archive store? Should I enable circular logging for the Archive store (would prefer to NOT do this,
    though I am 100% certain we have great backups)? It appears to me that mailbox maintenance on the Live store takes around 12 hours to run so I'm not sure it needs the 24x7 option.
    This is perplexing. Need to find a solution. Backup storage space is being rapidly consumed.

    I'm sure it will be fine for maintenance to run only on weekends so I'll do that.
    We use Veeam B&R Enterprise 7.0.0.833. We do not run incremental backups during the day but probably could if necessary. All this is fine and dandy but it still doesn't explain why this process generates so many logs. There are a lot of posts around
    the internet from people with the same issue so it would be nice to hear something from Microsoft, even if this is expected behavior.
    Thank you for the suggestions!
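    For reference, the changes discussed above look roughly like this in the Exchange 2010 management shell (the database name matches the post; the weekend schedule is only an example, and turning off background maintenance is the judgment call debated here):

    # Run scheduled mailbox maintenance on the archive database only on weekends
    Set-MailboxDatabase "Archive Mailboxes" -MaintenanceSchedule "Sat.1:00 AM-Sun.11:00 PM"
    # Optionally clear the 24x7 option (ESE background database maintenance)
    Set-MailboxDatabase "Archive Mailboxes" -BackgroundDatabaseMaintenance $false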

  • FTP log file generation failed in shell script

    Hi ALL,
    I am doing FTP file transfers in a shell script and am able to FTP the files into the corresponding directory. But when I try to check the FTP status through the log files, it gives a problem. Please check the code below.
    for file in $FILENAME1
    do
    echo "FTP File......$file"
    echo 'FTP the file to AR1 down stream system'
    ret_val=`ftp -n> $file.log <<E
    #ret_val=`ftp -n << !
    open $ar1_server
    user $ar1_uname $ar1_pwd
    hash
    verbose
    cd /var/tmp
    put $file
    bye
    E`
    if [ -f $DATA_OUT/$file.log ]
    then
    grep -i "Transfer complete." $DATA_OUT/$file.log
    if [ $? -eq 0 ]; then
    #mv ${file.log} ${DATA_OUT}/../archive/$file.log.log_`date +"%m%d%y%H%M%S"`
    echo 'Log file archived to archive directory'
    #mv $file ${DATA_OUT}/../archive/$FILENAME1.log_`date +"%m%d%y%H%M%S"`
    echo 'Data file archived to archived directory'
    else
    echo 'FTP process is not successful'
    fi
    else
    echo 'log file generation failed'
    fi
    It's giving "syntax error: end of file" without giving the exact line number. Please help me with this.
    Regards
    Deb

    Thanks for your reply.
    Actually, I made a mistake in the code. I wrote the following two lines:
    ret_val=`ftp -n> $file.log <<E
    #ret_val=`ftp -n << !
    The backtick in the commented-out second line was closing the command substitution opened by the backtick on the first line, so it was giving the error. I removed the second line and now it's working fine.
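    For reference, a cleaned-up sketch of the same loop, assuming the same $FILENAME1, $ar1_server, $ar1_uname, $ar1_pwd, and $DATA_OUT variables. It writes the transcript straight into $DATA_OUT, avoids the backtick command substitution entirely, and closes the loop with done:

    for file in $FILENAME1
    do
    echo "FTP File......$file"
    # -n suppresses auto-login; the whole session transcript goes to the log
    ftp -n > "$DATA_OUT/$file.log" <<EOF
    open $ar1_server
    user $ar1_uname $ar1_pwd
    hash
    verbose
    cd /var/tmp
    put $file
    bye
    EOF
    # ftp exits 0 even when a transfer fails, so check the transcript instead
    if grep -i "Transfer complete." "$DATA_OUT/$file.log" > /dev/null
    then
    echo "FTP of $file successful"
    else
    echo "FTP process is not successful for $file"
    fi
    done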

  • How to configure Log file generation

    Hi,
    I am in a migration project. Currently the OS is Unix; after migration it is going to be Windows.
    So we want the log files currently created on Unix to be created on Windows instead.
    Can anyone suggest any settings in SAP for the log files?
    Regards,
    Gijoy

    Hi Gijoy,
    Can you please reformulate your question for better understanding?
    The log location and tracing severity setup mechanism is platform independent.
    After migration there are no necessary steps to be taken; the logs will be created in the same way on Windows as on Unix, under your current SAP installation folder (e.g. defaultTrace is on Unix under /usr/sap/.../j2ee/cluster/server<n>/log; on Windows this will be <DRIVE:>\usr\sap\...\j2ee\cluster\server<n>\log).
    I hope this answers your question.
    Best Regards,
    Ervin

  • JDBC driver log file generation on v8i

    I want to know if there is a means to generate log files for JDBC driver transactions, similar to the sqlnet.log file that gets created when an OCI connection is used between the client and server.
    Where should this be done - on the client or server side? Is there a means to enable it through a server- or client-side setting, without touching any of the native Java code?
    thanks

    You should ask your question in the JDBC forum found at:
    http://forums.oracle.com/forums/forum.jsp?forum=99

  • Huge log file in leopard

    After a day of using Leopard, I noticed in Console that one log message kept returning every 5 seconds:
    Oct 27 19:57:14 stephen-straubs-macbook com.apple.nibindd[633]: dyld: Symbol not found: xdr_nibind_cloneargs
    Oct 27 19:57:14 stephen-straubs-macbook com.apple.nibindd[633]: Referenced from: /usr/sbin/nibindd
    Oct 27 19:57:14 stephen-straubs-macbook com.apple.nibindd[633]: Expected in: /usr/lib/libSystem.B.dylib
    Oct 27 19:57:15 stephen-straubs-macbook ReportCrash[634]: Formulating crash report for process nibindd[633]
    Oct 27 19:57:15 stephen-straubs-macbook com.apple.launchd[1] (com.apple.nibindd[633]): Exited abnormally: Trace/BPT trap
    Oct 27 19:57:15 stephen-straubs-macbook com.apple.launchd[1] (com.apple.nibindd): Throttling respawn: Will start in 10 seconds
    Oct 27 19:57:15 stephen-straubs-macbook ReportCrash[634]: Saved crashreport to /Library/Logs/CrashReporter/nibindd2007-10-27-195714stephen-straubs-macbook.crash using uid: 0 gid: 0, euid: 0 egid: 0
    The log file is starting to take up space, and all I can do is constantly run a periodic script, but that is temporary. So is there any solution to this?

    Try running this command:
    sudo launchctl unload -w com.apple.nibindd.plist
    If that doesn't work, please run this:
    sudo find / -name "com.apple.nibindd*"
    The output should be a file called com.apple.nibindd.plist.
    Then re-enter the command with the full path, like so:
    sudo launchctl unload -w /<path to>/com.apple.nibindd.plist
    I suspect you may have a file left over from the Tiger NetInfo Manager.

  • Oracle bat scripts on windows log file generation

    I want to generate a log file/recording of this .bat file. The msglog file is not generating anything. Any idea, or any other suggestion for how I can generate a log file? Like we can do with a crontab script: script >> script.log etc.
    set ORACLE_HOME=c:\oracle\10
    set ORACLE_SID=clin
    sqlplus ops/***** @ C:\u06\users\db\oracle\scripts\ops\datastore\scripts\main\main2xwk.sql msglog=opsshell3.log

    DBA2011 wrote:
    I am hoping to log all the actions main2xwk.sql does, in one single logfile.
    And exactly how do you anticipate that passing sqlplus the string 'msglog=opsshell3.log' will accomplish that? What do you expect sqlplus to do with that?
    Let's deconstruct your .bat file.
    set ORACLE_HOME=c:\oracle\10
    set ORACLE_SID=clin
    The above two lines simply set a couple of environment variables, to be used by some process running in the same environment in whatever manner said process chooses. Since your next step executes sqlplus, we know it may choose to use them, and in fact it does.
    sqlplus ops/***** @ C:\u06\users\db\oracle\scripts\ops\datastore\scripts\main\main2xwk.sql msglog=opsshell3.log
    The above line tells the OS to locate an executable file named 'sqlplus' and make available to it the character string 'ops/***** @ C:\u06\users\db\oracle\scripts\ops\datastore\scripts\main\main2xwk.sql msglog=opsshell3.log'
    to use as it (sqlplus) sees fit. We know from the SQL*Plus docs that sqlplus will parse that out as follows, with a space as the delimiter.
    "ops/*****"
    will be broken down into username 'ops' and password '*****' and used to issue a connection request to a local database instance identified by the value of the environment variable ORACLE_SID.
    "@ C:\u06\users\db\oracle\scripts\ops\datastore\scripts\main\main2xwk.sql "
    Well, if you had closed the space between '@' and 'C:', sqlplus would have attempted to locate the file 'C:\u06\users\db\oracle\scripts\ops\datastore\scripts\main\main2xwk.sql' and process it. However, since you seem to have a space there (all I did was copy and paste the code you posted) it probably just gave up and did nothing because it found nothing appended directly to the '@' and the strings that follow the space after '@' have no special meaning at all to sqlplus.
    "msglog=opsshell3.log"
    If you had closed the space after the @, and sqlplus was able to locate and process 'main2xwk.sql', it would simply have used the string 'msglog=opsshell3.log' as the first command-line substitution parameter to be used by main2xwk.sql, however it was written to use it. Was main2xwk.sql written to accept a command-line parm? See http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch5.htm#sthref1080
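    A minimal corrected sketch of the .bat file, assuming the goal is simply to capture everything SQL*Plus prints into one log file: drop the stray msglog= argument, close the space after the @, and use ordinary redirection (the log name opsshell3.log is kept from the original):

    set ORACLE_HOME=c:\oracle\10
    set ORACLE_SID=clin
    rem Redirect both stdout and stderr from sqlplus into the log file
    sqlplus ops/***** @C:\u06\users\db\oracle\scripts\ops\datastore\scripts\main\main2xwk.sql > opsshell3.log 2>&1

    Alternatively, a SPOOL opsshell3.log ... SPOOL OFF pair inside main2xwk.sql would capture the output from within SQL*Plus itself.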

  • Need to shrink huge log file

    Hi, 
    Have a database which is published using transactional replication. The replication was broken yesterday due to a restore. In order to try and fix this I issued the "EXEC sp_replrestart" command and left it running; unfortunately it has now filled up the disk the log sits on, creating a 250 GB file.
    Getting this error: 
    Msg 9002, Level 17, State 6, Procedure sp_replincrementlsn_internal, Line 1
    The transaction log for database 'RKHIS_Live' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
    I really need to free up space on this disk and shrink the log, however I can't backup the database. 
    I've not tried shrinking the files yet as I can't do a full backup.
    Any ideas? 
    I don't care about replication at this point and will happily ditch it if it gets me out of this situation. 
    Thanks 

    I disabled replication at the subscriber and then the publisher and disabled all the agent jobs. 
    Then I shrank the database and files to 1GB.  Phew, database is functioning just fine.  
    Need a solution to this. The problem is that the published database (managed by a 3rd party) is backed up, worked on, and then restored as part of their software upgrade procedure. In this case there is bound to be a discrepancy between the transactions in the log.
    Learned a valuable lesson regarding sp_replrestart today though :-( 
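    For reference, the check suggested by the 9002 error and the cleanup described above look roughly like this in T-SQL; the logical log file name RKHIS_Live_log is a guess for illustration, so confirm it in sys.database_files first:

    -- Why can't log space be reused?
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'RKHIS_Live';
    -- Strip replication from the database if it is being ditched
    EXEC sp_removedbreplication 'RKHIS_Live';
    -- Then shrink the log file (target size in MB)
    USE RKHIS_Live;
    DBCC SHRINKFILE (RKHIS_Live_log, 1024);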

  • Log file in Sql*Loader

    Hi,
    SQL*Loader always creates the log file with the same name as the .ctl file,
    but I want the log file to be created only for an unsuccessful run,
    not for a successful run.
    Is this possible?
    Regards,
    SS

    I don't think it is possible to suppress log file generation on a successful run. The log file contains a detailed summary of the load and helps you to identify a successful/failed load.
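    As a workaround, a wrapper script can delete the log after a clean run. A minimal sketch relying on SQL*Loader's documented exit code of 0 for success (the credentials, control file, and log name are made-up examples):

    sqlldr userid=scott/tiger control=load.ctl log=load.log
    # Exit code 0 means the load succeeded, so the log is not needed
    if [ $? -eq 0 ]
    then
    rm -f load.log
    fi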

  • Oracle 10g R2 Database Redo Log Files

    I had 3 redo log files, each of size 50 MB. I added 3 more redo log files, each of size 250 MB.
    The database is running in archive mode, and the archived files are being generated with different sizes, like 44 MB and 240 MB. I need to know whether this is harmful for the database or not.
    What should I do to make all archived redo log files the same size?
    Please guide

    Waheed,
    When the redo log switch happens, Oracle asks the archiver to write that log into the archive file. So if you have any parameter set to make the switch happen at a certain time, then depending on the activity of the database, the archive file size may vary. There is no harm in the different sizes of the files; what matters is the transaction information contained in them, not their size.
    What should I do to make all archived redo log files the same size?
    As mentioned by Syed, you can make the switch happen at a defined interval, which will not guarantee equal sizes but is still a step toward making the archive files the same size. But I would say you should be more concerned with making sure that the files are available than with their size.
    Aman....
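    The defined-interval switch mentioned above is usually configured with the ARCHIVE_LAG_TARGET initialization parameter, which forces a log switch after at most the given number of seconds. A sketch, assuming an spfile (1800 seconds is just an example):

    -- Force a redo log switch at least every 30 minutes
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE=BOTH;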

  • Slow sync and huge synservices.log file

    If you get slow syncing and a huge ~/Library/Logs/Sync/syncservices.log file, check whether Sync Services is set to log everything it can.
    In terminal, type :
    defaults read -g SyncServicesLogEverything
    If it returns YES, then here is your problem; just turn it off by deleting this default:
    defaults delete -g SyncServicesLogEverything

    Can you specify the versions of the iPhone OS, Windows XP, and iTunes?

  • Log.nmbd is a huge, unremovable file

    Hi All,
    Noticed that people have been reporting problems with Windows sharing and the creation of HUGE log files. I think I'm reporting the largest file to date: a whopping 85 GB file.
    I've stopped Windows file sharing, checked/repaired permissions, fixed disk etc., but the problem is that I can not delete the file. I always get a kernel panic if I try to rm it, srm it, cp /dev/null log.nmbd, etc.
    How can I recover what amounts to about HALF of my hard disk?
    Eric

    Hi BDAqua, I tried ALL of the tricks short of buying DiskWarrior or something like that.
    I tried single user mode, and then tried to delete the massive file. It locked up the computer again.
    Eventually, I just had to reinstall my hard drive. Hah, this time I do not have Windows file sharing enabled -- I just bought my girlfriend a MacBook so that we don't have any more Windows machines.
    Notwithstanding, there is something very very wrong with the deletion of these massive files. What is so bad about it is that I would imagine a number of situations where I would want a file this big -- digital video, HDF5 files, etc.
    In short, if you can create it, you should be able to delete it!
    Eric

  • How to Monitor the size of log file (Log4j) During its Generation.

    I have made a program using log4j. Now I want to know whether there is any method by which I can restrict the size of the log file. For example, if the size of the generated log file exceeds the restricted size, it should be moved to a backup, a new file should be automatically generated, and the remaining contents should be written to that file.
    Is there any method that can monitor the size of the file while the log file is being generated?
    Waiting for your urgent response

    I have written this configuration:
    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="appender" class="org.apache.log4j.FileAppender">
    <param name="File" value="c:\\abc.txt"/>
    <param name="MaxFileSize" value="100B"/>
    <param name="MaxBackupIndex" value="3"/>
    <param name="Append" value="false"/>
    <layout class="org.apache.log4j.SimpleLayout"></layout>
    </appender>
    <root>
    <priority value ="debug"/>
    <appender-ref ref="appender"/>
    </root>
    </log4j:configuration>
    When I run it, it gives me these error messages:
    log4j:WARN No such property [maxFileSize] in org.apache.log4j.FileAppender.
    log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.FileAppender.
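    The warnings mean the size-related properties were set on the wrong appender class: MaxFileSize and MaxBackupIndex belong to org.apache.log4j.RollingFileAppender, not to the plain FileAppender. A minimal corrected sketch of the same configuration (100KB is an example size; log4j expects a KB/MB/GB suffix here):

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <!-- RollingFileAppender rolls abc.txt over to abc.txt.1, .2, .3 -->
    <appender name="appender" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="c:\\abc.txt"/>
    <param name="MaxFileSize" value="100KB"/>
    <param name="MaxBackupIndex" value="3"/>
    <param name="Append" value="false"/>
    <layout class="org.apache.log4j.SimpleLayout"></layout>
    </appender>
    <root>
    <priority value="debug"/>
    <appender-ref ref="appender"/>
    </root>
    </log4j:configuration>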
