Best Practices: opmn/logs filling up disk

Has anyone else experienced (I assume we all have) the issue of the logs in $HTTP_SERVER_HOME/opmn/logs getting HUGE? I have APEX 3.0 installed on a UNIX box, and I do not have a full install of Oracle AS, only the DB HTTP server. What happens is that the logs fill up, the disk hits 100%, and the HTTP server can't be started. We bring the DB and HTTP server down every evening and back up every morning (this is done with a scheduling tool that calls a couple of scripts), but it looks like it only takes about half a day to fill up the logs (depending on the amount of activity in the different APEX apps). Maybe this isn't the right place for this; if not, I'll move it over to the other forums.
Any thoughts/suggestions?
Thanks,
David

Hi David,
That growth comes from an error message being logged repeatedly, indicating that you're trying to run it on the same port as something else. Take a look at the following thread -
ons.log growing enormously in 10g on RHEL3
which shows you how to resolve it.

Similar Messages

  • What's best practice for logging messages in pageflow?

    Workshop complains when I try to use a Log4J logger by saying it's not serializable. Is there a context similar to JWSContext that you can get a logger from?
    There seems to be a big hole in the documentation on debug logging in workflows and JSP pages.
    thanks,
    Rodger...

    Make the configuration change in setDomainEnv.cmd. Find where the following variable is set:
    LOG4J_CONFIG_FILE
    and change it to your desired path.
    In your Global.app class, instantiate a static Logger like this:
    transient static Logger logger = Logger.getLogger(Global.class);
    You should be logging now as long as you have the categories and appenders configured properly in your log4j.xml file.
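    For illustration, here is a minimal sketch of that pattern (the onInit method and the log message are made up for the example; the Workshop-specific scaffolding of Global.app is omitted):
    import org.apache.log4j.Logger;
    public class Global {
        // transient, so pageflow serialization skips the non-serializable logger
        transient static Logger logger = Logger.getLogger(Global.class);
        public void onInit() {
            // output and filtering come from the categories/appenders in log4j.xml
            logger.info("Global pageflow initialized");
        }
    }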

  • Best practice for logging

    Hi All,
    I would like to know if there is any best practice document for Firewall logging. This would include
    1. What level of logging is ideal
    2. If a log is stored in a logging server, how long is it best to store the logs and retain the logs by a backup tape etc.
    This can include for various industries like IT, Banking etc.
    Any document pertaining to these would be helpful. Thanks in advance.
    Regards,
    Manoj

    Manoj,
    Check out the links below for logging best practices and prerequisites on Cisco devices.
    http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080120f48.shtml#logbest
    http://www.ciscopartner.biz/en/US/docs/security/asa/asa82/configuration/guide/monitor_syslog.html#wp1110908
    Hope this helps!
    Ganesh.H
    Remember to rate the helpful post

  • Best practices for logging results from Looped steps

    Hi all
    I would like to start a discussion to document best practices for logging results (to reports and databases) from looped steps.
    As an application example - let's say you are developing a test for one of NI's analog input or output cards and need to measure a voltage across multiple inputs or outputs.
    One way to do that would be to create a sequence that switches the appropriate signals and performs a "Voltage Measurement" test in a loop.    
    What are your techniques for keeping track of the individual measurements so that they can be traced to the individual signal paths that are being measured?
    I have used a variety of techniques, such as:
    i) Creating a custom step type that generates unique identifiers for each iteration of the loop. This required some customization of the results processing, and the sequence developer had to include code to ensure that a unique identifier was generated for each iteration.
    ii) Adding an input parameter to the test function/VI, passing the loop iteration to it, and adding it to the step's Additional Results parameters to log.

    I have attached a simple example (LV 2012 and TS 2012) that includes steps inside a loop structure as well as a looped test.
    If you enable both database and report generation, you will see the following:
    1) The numeric limit test in the for loop always generates the same name in the report and database, which makes it difficult to determine the result of a particular iteration.
    2) The Max Voltage test report includes the parameter as an additional result, but the database does not include any differentiating information.
    3) The Looped Limit test generates both unique reports and database entries - you can easily see what the result for each iteration is.
    As mentioned, I am seeking to start a discussion on how others handle results for steps inside loops. The only way I have been able to accomplish a result similar to that of the looped step (unique results and database entries for each iteration of the loop) is to modify the process model's results processing.
    Attachments:
    test.vi ‏27 KB
    Sequence File 2.seq ‏9 KB

  • Best practice for partitioning 300 GB disk

    Hi,
    I would like some advice on how to partition a 300 GB disk on Solaris 8.x - what would be the optimal size for each partition?
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice", regardless of what others might say. It depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some who assumed that /var was only going to be used for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So, when they attempted to install the R&S cluster, they couldn't, because they couldn't put the patch info into /var.
    I've also run into problems where a temp/export file system was mounted on a root-level directory. The assumption was: "Oh, well, it's root. It can be tiny, since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files created in that directory went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own several GB partition to account for patches, packages, and logs.
    * You should have another partition as big as your system RAM and assign that partition as the dump device for system crashes.
    * /usr or whatever file system it's on must be big enough to assume that it will be loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.

  • BEST PRACTICE TO PARTITION THE HARD DISK

    Can someone please guide me on THE BEST PRACTICE TO PARTITION THE HARD DISK FOR 10G R2 on operating system HP-UX 11?
    Thanks,
    Amol

    I/O speed is a basic function of number of disk controllers available to read and write, physical speed of the disks, size of the I/O pipe(s) between SAN and server, and the size of the SAN cache, and so on.
    Oracle recommends SAME - Stripe And Mirror Everything. This comes in RAID10 and RAID01 flavours. Ideally you want multiple fibre channels between the server and the SAN. Ideally you want these LUNs from the SAN to be seen as raw devices by the server and use these raw devices as ASM devices - running ASM as the volume manager. Etc.
    Performance is not achieved by just partitioning. Or just more memory. Or just a faster CPU. Performance planning and scalability encompass the complete system. All parts. Not just a single aspect like partitioning.
    Especially not partitioning, as a partition is simply a "logical reference" to a "piece" of the disk. I/O performance has very little to do with how many pieces you split a single disk into - that is the management part. It is far more important how you stripe, and whether you use RAID5 instead of a RAID1 flavour, etc.
    So I'm not sure why you are all uppercase about partitioning....

  • What is the best practice for logging runtime errors or unchecked exceptions?

    hello
    my main problem is logging a NullPointerException to a file
    suppose I have this method:
    public void myMethod() {
        // ... some declarations here
        User user = obj.findUser("userx"); // this may return null
        System.out.println("user name is " + user.getName()); // possible NullPointerException here
    }
    So what is the best practice to catch the exception?
    Can I log the exception without catching it?
    thank you

    > A terrible way of logging exceptions.
    Not that I'm not agreeing with you, but why? (I haven't actually used this.)
    Because it's always on, for one thing. It's not really configurable either, unless you go to some trouble to make it so. You'd have to provide an InputStream. How? Either at compile time, which is undesirable, or by providing configuration, which a logging framework has already done, better. Then there's the fact that you can't log anything other than the stack trace, which is not particularly helpful a lot of the time. In short, it's a buggy and incomplete solution to something that's already been solved much, much better by, for example, Log4J.
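    For what it's worth, here is a minimal sketch of the Log4J alternative being recommended, including one way to log an exception without catching it at every call site (the class name, messages, and the null value standing in for obj.findUser("userx") are all invented for the example):
    import org.apache.log4j.Logger;
    public class NpeLoggingExample {
        private static final Logger log = Logger.getLogger(NpeLoggingExample.class);
        public static void main(String[] args) {
            // log anything that escapes a thread, without a try/catch at each call site
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(Thread t, Throwable e) {
                    log.error("uncaught exception in thread " + t.getName(), e);
                }
            });
            String user = null;                // stands in for a lookup that returned null
            System.out.println(user.length()); // NullPointerException, logged by the handler
        }
    }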

  • Oim11g - Best Practice for Logging

    Hello all,
    I want to know the best practice or common usage for error logging in OIM 11g. As we know, OIM runs a sequence of processes that end up in Java. For best practice, where and when should I create the log? Should I create a log inside each function called by an OIM adapter? If I print, at the beginning of the function, the parameter names and values, then e.getMessage() inside catch(Exception e), and a message at the end of the function, is that okay? Or is there a better implementation? Using sysout or log4j, commons-logging, etc.?
    in my idea (in each function):
    <function_name>::BEGIN
    - Time: <dd-MMM-yyyy>
    <function_name>::PARAMETER
    - <param_name1>: <param1_value>
    - <param_name2>: <param2_value>
    - <param_name3>: <param3_value>
    if an error occurs, inside the catch block:
    <function_name>::ERROR
    - Message: <e.getMessage()>
    <function_name>::END
    is it good?

    This is what I've been doing with 11g logging. In every custom code class I run, I use this to declare my logger:
    private final static Logger LOGGER = Logger.getLogger(<Class_Name>.class.getName().toUpperCase()); // Replace <Class_Name> with the actual class name
    This lets me go to the Enterprise Manager and change the logging level once the class has been used.
    You can then use the following code:
    LOGGER.log(Level.INFO, "Insert Information Message Here");
    LOGGER.log(Level.CONFIG, "Insert More Detailed Debugger Information Message Here");
    LOGGER.log(Level.WARNING, "Insert Error Message Information Here", e); //e is your exception that is caught
    Personally, I like to put a start and an end output in my logging, and for any details in the middle I use the CONFIG level. This lets me know the pieces are running successfully, and I only need to see the details during testing or when something goes wrong. When deployed to production, I set the logger to the WARNING level so I only hear about problems.
    By using these, you can set your logger appropriately in the Enterprise Manager to output more detail when needed.
    -Kevin
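    To make that concrete, here is a minimal sketch of the BEGIN/parameter/END pattern described above, using java.util.logging (the class name, method, and messages are illustrative, not actual OIM API):
    import java.util.logging.Level;
    import java.util.logging.Logger;
    public class ProvisionUser {
        private final static Logger LOGGER =
            Logger.getLogger(ProvisionUser.class.getName().toUpperCase());
        public void execute(String userLogin) {
            LOGGER.log(Level.INFO, "execute::BEGIN");
            LOGGER.log(Level.CONFIG, "parameter userLogin=" + userLogin); // details only when the level allows
            try {
                // ... adapter logic would go here ...
            } catch (Exception e) {
                LOGGER.log(Level.WARNING, "execute failed for " + userLogin, e);
            }
            LOGGER.log(Level.INFO, "execute::END");
        }
    }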

  • Best Practices for Logging Interactions in WLST

    I am wondering what the best method is to log the commands executed during a WLST run, in both interactive and offline modes.
    Is it best just to import Python's logging module, or is there a better facility provided by WLST?


  • SQL Logs filling up disk space

    Hi there,
    On my DEV SQL Server, the logs have suddenly been filling up disk space very quickly - this morning I added 10 GB and it is full again after just a few hours.
    What action should I take?
    - Is it okay to switch Recovery Model from Full to Simple? (It is DEV server)
    - Anything else?
    Thanks.

    Hi frob,
    For development databases, if you don't mind losing recent data changes, you can change the recovery model from full to simple and then shrink the transaction log file to a reasonable size. Below is an example; please note that you can't shrink the file below its original size.
    USE AdventureWorks2012;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE AdventureWorks2012
    SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 1024 MB.
    DBCC SHRINKFILE (AdventureWorks2012_Log, 1024);
    GO
    Additionally, there are other options for resolving a SQL Server log file that grows out of control:
    • Backing up transaction logs frequently.
    • Adding a log file on a different disk.
    • Completing or killing a long-running transaction.
    Reference:
    Troubleshoot a Full Transaction Log
    SQL Server Runaway Transaction Logs
    Managing the SQL Server Transaction Log: Dealing with Explosive Log Growth
    Thanks,
    Lydia Zhang

  • Mail logs filling up disk space: why? ok to delete?

    Computer specs: 2GHz Mac Mini with 4GB of RAM and 250GB of flash storage. I am running OS X 10.9.5 and Mail 7.3.
    Problem: My storage has been mysteriously filling up, with about 200 GB shown as "Other" in About This Mac. I would delete files and then it would fill up again. Using WhatSize, I determined that Mail logs took up 107 GB, with two files accounting for most of this (details below).
    Questions: Is it safe to delete Mail logs? Why are some so big? Why are the dates so random? Can I prevent large mail logs in the future?
    In Library/Containers/com.apple.mail/Data/Library/Logs, here are the dates and sizes of the largest files, named in the format [long stream of numbers and letters]=imap.gmail.com.txt:
    July 15: 55.8 GB
    Sept 3: 47.17 GB
    Aug 6: 5.43 GB
    Aug 6 3.64 GB
    Then there are six files between 312 MB and 867.2 MB, many others named in the above format, and multiple files named in the format [date]_GmailDelete.log dating back to May 21.
    Thank you for your help.

    Hi, I'm having very similar trouble, but I do not use Gmail. I have several mail accounts, essentially from two mail server providers (hosting and registrar companies), most on IMAP but some still on POP.
    I'm on a MacBook Pro (Retina, 15-inch, Mid 2012), 2.7 GHz Intel Core i7, with a 750 GB SSD, running OS X Yosemite 10.10.2.
    For the past week I have experienced the same problem: disk space filling up without apparent reason. In Data/Library/Logs/Mail I have 541 files (!!), some of them huge. I have deleted a 75 GB file and another of 13 GB, but I still have files of more than 9 GB (ten files over 1 GB).
    I have changed the passwords on my accounts and even for access to my hosting providers, but no change.
    Deleting the huge files in the Logs doesn't seem to affect Mail, except that I have to re-enter all my POP/IMAP and SMTP passwords :-(
    In the logs, the largest files are "2015-03-18_IMAPMailboxSyncEngine.log", "2mail.mydomainname-d44f367b-6650-4ddf-9cc6-4fe08822ab6c.txt", "2015-03-21_SocketStreamEvents.log", "137778C8-7690-48F3-877E-48D400322117-mail.mydomainename.txt", and I have plenty like that.
    The question is: where does the problem come from?
    Deleting the log files doesn't solve the problem, because new logs keep filling the disk up again.
    Thank you for your assistance and help.

  • DEV_ICM_SEC logs filling up disk

    I'm getting the message below over and over, and I can't figure out why or where it is coming from. I know this is the ICM security log file, but that's about all I know at this point. The log file grows to 500KB, then a new one is created. These files are taking up over 5 GB of disk space. Aside from constantly deleting them, I want to find the cause. Any idea where to start looking?
                                 SECURITY WARNING                         ******
    Fri Mar 25 13:30:18 2011
    Error: Protocol error (-21), Error in HTTP Request: 9 [http_plg.c 4808]
    [Thr 5320] CONNECTION (id=0/1286714):
        used: 1, type: 1, role: 1, stateful: 0
        NI_HDL: 61, protocol: HTTP(1)
        local host:  1.1.1.1:8014 ()
        remote host: 2.2.2.2:60755 ()
        status: READ_REQUEST
        connect time: 25.03.2011 13:30:18
        MPI request:        <246a8f>   MPI response:        <246a90>
        request_buf_size:   65464    response_buf_size:   0    
        request_buf_used:   93       response_buf_used:   0    
        request_buf_offset: 0        response_buf_offset: 0

    Here is what I see in the dev_icm. This message repeats itself over and over as well.
    [Thr 2132] Mon Mar 28 13:52:28 2011
    [Thr 2132] *** ERROR => Error in HTTP Request: 9 [http_plg.c 4808]
    [Thr 2132] Address    Offset  REQUEST:
    [Thr 2132] -
    [Thr 2132] 0000000006E773C8  000000  47455420 2f736170 2f62632f 6775692f |GET /sap/bc/gui/|
    [Thr 2132] 0000000006E773D8  000016  7361702f 6974732f 69736970 695f736d |sap/its/isipi_sm|
    [Thr 2132] 0000000006E773E8  000032  0d0a4854 54502f31 2e300d0a 41757468 |..HTTP/1.0..Auth|
    [Thr 2132] 0000000006E773F8  000048  6f72697a 6174696f 6e3a2042 61736963 |orization: Basic|
    [Thr 2132] 0000000006E77408  000064  20553163 784d4445 314d4445 364d6d51 | U1cxMDE1MDE6MmQ|
    [Thr 2132] 0000000006E77418  000080  7857544d 35574659 3d0d0a0d 0a       |xWTM5WFY=....   |
    [Thr 2132] -
    [Thr 2132] *** ERROR => PlugInHandleNetData: HttpParseRequestHeader failed (rc=701) [http_plg.c   2220]

  • Best practice for exporting the same disk to multiple guests?

    I've got a T4-4 with two service domains. Each service domain has access to the same pool of SAN disks.
    primary service domain has vds primary-vds0
    secondary service domain has vds secondary-vds0
    So far I've successfully built a single backend service for each SAN disk on each service domain and used the mpgroup option to enable multipathing, i.e.
    ldm add-vds primary-vds0_a primary
    ldm add-vds secondary-vds0 secondary
    ldm add-vdsdev mpgroup=target1 /dev/dsk/c0t6target1 target1@primary-vds0
    ldm add-vdsdev mpgroup=target1 /dev/dsk/c0t6target1 target1@secondary-vds0
    ldm add-vdisk id=0 bootdisk target1@primary-vds0 guest_a
    Now, I have database datafile LUNS that I need to present to two guests at once since they're using Veritas clustering. As I understand it, what I have to present to each guest is a unique volume (volume@vds) export, and I can do it one of two ways:
    1) Build multiple exports of the same disk on the same vds:
    ldm add-vds primary-vds0_a primary
    ldm add-vds secondary-vds0 secondary
    ldm add-vdsdev mpgroup=db1a /dev/dsk/c0c6target5 db1_server_a@primary-vds0
    ldm add-vdsdev mpgroup=db1a /dev/dsk/c0c6target5 db1_server_a@secondary-vds0
    ldm add-vdsdev mpgroup=db1b /dev/dsk/c0c6target5 db1_server_b@primary-vds0
    ldm add-vdsdev mpgroup=db1b /dev/dsk/c0c6target5 db1_server_b@secondary-vds0
    ldm add-vdisk id=6 database_disk db1_server_a@primary-vds0 guest_a
    ldm add-vdisk id=6 database_disk db1_server_b@primary-vds0 guest_b
    2) Build a separate vds for each guest domain, and then add the vdsdevs to each vds:
    ldm add-vds primary-vds-server_a primary
    ldm add-vds primary-vds-server_b primary
    ldm add-vds secondary-vds-server_a secondary
    ldm add-vds secondary-vds-server_b secondary
    ldm add-vdsdev mpgroup=db1a /dev/dsk/c0c6target5 db_disk@primary-vds-server_a
    ldm add-vdsdev mpgroup=db1a /dev/dsk/c0c6target5 db_disk@secondary-vds-server_a
    ldm add-vdsdev mpgroup=db1b /dev/dsk/c0c6target5 db_disk@primary-vds-server_b
    ldm add-vdsdev mpgroup=db1b /dev/dsk/c0c6target5 db_disk@secondary-vds-server_b
    ldm add-vdisk id=6 database_disk db_disk@primary-vds-server_a guest_a
    ldm add-vdisk id=6 database_disk db_disk@primary-vds-server_b guest_b
    The end result is the same, but is there any advantage (performance, configuration management, whatever) to doing it one way versus the other?
    Thanks!
    Tim Metzinger


  • Logging Best Practices in J2EE

    Hi,
    I've been struggling with Apache Commons Logging and Log4J Class Loading problems between module deployments in the Sun App Server. I've also had the same problems with other App servers.
    What is the best practice for logging in J2EE?
    I think it may be java.util.logging, but what is the best practice for providing different logging config (i.e. levels for classes and outputs) for each deployed module, and how would you structure that in the EAR?
    Thanks in advance.
    Graham

    I find that java.util.logging works fine. For configuration of the log levels I use a LifeCycle module that sets up all my levels and handlers. That way I can set up server.policy to allow only the LifeCycle module jar to configure logging (with a codebase grant), while no other normal module can.
    The LifeCycle module gets its properties as event data with the INIT event and configures the logging on the STARTUP event.
    Hope this helps.
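    For illustration, a rough sketch of the kind of programmatic java.util.logging setup such a LifeCycle module might run on its STARTUP event (the logger category, file pattern, and levels are invented; the LifeCycle event plumbing is omitted):
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    public class ModuleLoggingSetup {
        // would be invoked from the LifeCycle module's STARTUP event handler
        public static void configure() throws Exception {
            // a per-module category lets each deployed module carry its own level
            Logger moduleLogger = Logger.getLogger("com.example.mymodule");
            moduleLogger.setLevel(Level.FINE);
            // rotate this module's records into its own files instead of the server log
            FileHandler handler = new FileHandler("mymodule%g.log", 5000000, 3, true);
            handler.setFormatter(new SimpleFormatter());
            handler.setLevel(Level.FINE);
            moduleLogger.addHandler(handler);
        }
    }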

  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice to handle this issue?
    Should I Shrink the Database?
    I know a bigger hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared when you take a log backup. If this is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get transactional file back in normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for high-transaction systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    NEVER SHRINK DATA FILES; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush

Maybe you are looking for

  • Change Button Name in JOptionPane.showMessageDialog

    Hello, I've written the following code that invokes a JOptionPane message dialog when the delete button is hit in the following frame and the selectedItem in the JComboBox is Default. I want to change the JOptionPane button name from OK to Close. Is

  • New iPad 4 freezes

    Why does my 4-month-old iPad running the latest OS freeze (it has happened 2 times now)? I had to restart it to make it work normally. Is my iPad defective or is it just some upgrade problem?

  • Alternate Hierarchy in same version

    Hi, I have a requirement to create an alternate hierarchy in the same version. In the current version, we have a hierarchy named HN_Managerial. Our requirement is to have an alternate hierarchy named HN_Managerial_Mgmt which should be the exac

  • Validation & Substitutions

    Hello Gurus, does anybody have documentation and configuration steps for validation & substitution rules? Thanks in advance, Laxmi

  • LCM: Foundation Calc Manager vs Planning Rules Files

    I need to LCM rules files and see the rules in 2 different places, Foundation -> Calc Manager and Planning -> Rules Files. Which one is the correct file from which to LCM, and what is the difference between them? Also, files that were brought in via