ONS.Log is filling up my machine

Can anyone help me with this problem? I have installed the 9iAS infrastructure and mid-tier (Portal and Forms only) on a Windows 2000 server.
Everything seems to run fine for a few hours, and then I am out of disk space - almost 30 GB is used up by the ons.log files.
How can I stop this - turn off the log - or correct the underlying error?
I can't figure out what is causing the log to grow so large and so quickly. Please help.

Hi,
You could check the logging level configured for ons.log. This value is specified in the following location:
$ORACLE_HOME/opmn/conf/opmn.xml
The <log-file> tag for the ons.log file holds the log level. The permitted values range from 1 (fewest messages logged) to 9 (most messages logged).
You could select a lower logging level to prevent your log file from growing so large, as in the sketch below.
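For example, in a 9iAS/10g-era opmn.xml the <log-file> element looks along these lines (a minimal sketch - the enclosing notification-server element and the rotation-size attribute are assumptions that may vary between releases; level is the 1-9 value described above):
<notification-server>
   <log-file path="$ORACLE_HOME/opmn/logs/ons.log" level="3" rotation-size="1500000"/>
   ...
</notification-server>
After lowering the level, restart OPMN (e.g. opmnctl stopall followed by opmnctl startall) for the change to take effect.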
Thanks,
Rashmi.

Similar Messages

  • Error in ONS logs while implementing FCF on Oracle RAC from a Java program

    I have a Java program on a client machine that uses properties from a property file. While making the connection to the ONS port on the Oracle RAC server to implement FCF, the program throws the error below:
    java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
    and when I checked the ONS logs for that node, the logs are as follows:
    Connection 5,199.xxx.xxxxxx,8200 header RCV failed (Connection reset by peer) coFlags=1002a
    These logs are generated only when the Java program tries to connect; otherwise the daemon starts without any errors.
    But sometimes it connects and gives the desired output.
    Please advise, and do let me know in case you need more information.
    The Java program on the client machine is as follows:
    /* Oracle Support Services */
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Enumeration;
    import java.util.Properties;
    import java.util.ResourceBundle;
    import oracle.jdbc.pool.OracleConnectionCacheManager;
    import oracle.jdbc.pool.OracleDataSource;
    public class FCFConnectionCacheExample {
        private OracleDataSource ods = null;
        private OracleConnectionCacheManager occm = null;
        private Properties cacheProperties = null;
        public FCFConnectionCacheExample() throws SQLException {
            // create a cache manager
            occm = OracleConnectionCacheManager.getConnectionCacheManagerInstance();
            Properties props = loadProperties("fcfcache");
            cacheProperties = new java.util.Properties();
            cacheProperties.setProperty("InitialLimit", (String) props.get("InitialLimit"));
            cacheProperties.setProperty("MinLimit", (String) props.get("MinLimit"));
            cacheProperties.setProperty("MaxLimit", (String) props.get("MaxLimit"));
            ods = new OracleDataSource();
            ods.setUser((String) props.get("username"));
            ods.setPassword((String) props.get("password"));
            ods.setConnectionCachingEnabled(true);
            ods.setFastConnectionFailoverEnabled(true);
            ods.setConnectionCacheName("MyCache");
            ods.setONSConfiguration((String) props.get("onsconfig"));
            ods.setURL((String) props.get("url"));
            occm.createCache("MyCache", ods, cacheProperties);
        }
        private Properties loadProperties(String file) {
            Properties prop = new Properties();
            ResourceBundle bundle = ResourceBundle.getBundle(file);
            Enumeration enumlist = bundle.getKeys();
            String key = null;
            while (enumlist.hasMoreElements()) {
                key = (String) enumlist.nextElement();
                prop.put(key, bundle.getObject(key));
            }
            return prop;
        }
        public void run() throws Exception {
            Connection conn = null;
            Statement stmt = null;
            ResultSet rset = null;
            String sQuery =
                "select sys_context('userenv', 'instance_name'), " +
                "sys_context('userenv', 'server_host'), " +
                "sys_context('userenv', 'service_name') " +
                "from dual";
            try {
                conn = ods.getConnection();
                stmt = conn.createStatement();
                rset = stmt.executeQuery(sQuery);
                rset.next();
                System.out.println("-----------");
                System.out.println("Instance -> " + rset.getString(1));
                System.out.println("Host -> " + rset.getString(2));
                System.out.println("Service -> " + rset.getString(3));
                System.out.println("NumberOfAvailableConnections: " +
                    occm.getNumberOfAvailableConnections("MyCache"));
                System.out.println("NumberOfActiveConnections: " +
                    occm.getNumberOfActiveConnections("MyCache"));
                System.out.println("-----------");
            } catch (SQLException sqle) {
                // walk both the cause chain and the chained SQLExceptions
                while (sqle != null) {
                    System.out.println("SQL State: " + sqle.getSQLState());
                    System.out.println("Vendor Specific code: " + sqle.getErrorCode());
                    Throwable te = sqle.getCause();
                    while (te != null) {
                        System.out.println("Throwable: " + te);
                        te = te.getCause();
                    }
                    sqle.printStackTrace();
                    sqle = sqle.getNextException();
                }
            } finally {
                try {
                    // guard against NPE when the connection never opened
                    if (rset != null) rset.close();
                    if (stmt != null) stmt.close();
                    if (conn != null) conn.close();
                } catch (SQLException sqle2) {
                    System.out.println("Error during close");
                }
            }
        }
        public static void main(String[] args) {
            System.out.println(">> PROGRAM using JDBC thin driver no oracle client required");
            System.out.println(">> ojdbc14.jar and ons.jar must be in the CLASSPATH");
            System.out.println(">> Press CTRL-C to exit running program\n");
            try {
                FCFConnectionCacheExample test = new FCFConnectionCacheExample();
                while (true) {
                    test.run();
                    Thread.sleep(10000); // re-run every 10 seconds
                }
            } catch (InterruptedException e) {
                System.out.println("PROGRAM Ended by user");
            } catch (Exception ex) {
                System.out.println("Error Occurred in MAIN");
                ex.printStackTrace();
            }
        }
    }
    I have intentionally deleted some of the information, as it is confidential.
    The property file is as follows:
    # properties required for test
    username=test
    password=test
    InitialLimit=10
    MinLimit=10
    MaxLimit=20
    onsconfig=nodes=RAC-node1:port,RAC-node2:port
    url=jdbc:oracle:thin:@(DESCRIPTION= \
    (LOAD_BALANCE=yes) \
    (ADDRESS=(PROTOCOL=TCP)(HOST=RAC-node1)(PORT=1521)) \
    (ADDRESS=(PROTOCOL=TCP)(HOST=RAC-node1)(PORT=1521)) \
    (CONNECT_DATA=(service_name=RAC_SERVICE)))
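    Since the SQLException points at basic ONS connectivity rather than FCF itself, a quick sanity check on each RAC node may help before debugging the Java side (a sketch only; the paths assume a standard 10.2 CRS home):
    # verify the ONS daemon is alive on this node
    $ORA_CRS_HOME/opmn/bin/onsctl ping
    # the remoteport listed here is the port the client-side
    # onsconfig string (nodes=host:port,...) must reference
    cat $ORA_CRS_HOME/opmn/conf/ons.config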

    Hi,
    Please check the notes below:
    Link Errors While Installing CRS & RAC Database software [ID 438747.1]
    Codeword File $TIMEBOMB_CWD,/opt/aCC/newconfig/aCC.cwd Missing Or Empty [ID 552893.1]
    Regards,
    Helios

  • Log is filled with AFP_VFS afpfs_vnop_getxattr:  bad dataLength offset 2 replySize 2

    Hi all,
    I'm connecting to a LaCie Big NAS, wired, via AFP. Lately network transfer is very slow and I see my log is filled with:
    AFP_VFS afpfs_vnop_getxattr:  bad dataLength offset 2 replySize 2 (as soon as I browse the NAS disk or start a transfer, a series of these error messages is added).
    I've seen some reports of this error; apparently it has something to do with OS X Mavericks and the netatalk version. But I can't find a solution...
    I tried to switch to SMB, but that throws a lot of error -36 (unexpected end of file) when saving files (for example AI) directly to the disk.
    Any suggestions? Would be much appreciated.
    Viktor

    Resetting the NetworkSpace 2 back to its shipping configuration and freshly updating to 2.2.8.5 didn't help, so don't bother going to the (considerable) trouble if LaCie tech support suggests it will fix the problem.
    Their current advice is to abandon AFP and connect to the NS2 as an SMB volume or via FTP. Not very useful if you purchased the device intending to use it (as advertised) for Time Machine backups.

  • Multiple OD users simultaneously logged in to one client machine?

    I'd like to be able to have multiple OD network home folder users logged into a single client machine at a time. They would switch between themselves using Fast User Switching. I can't figure out how to make this work. Is this simply not possible or am I missing a configuration setting somewhere that allows this to happen?

    The first thing that comes to mind is that the share point hosting the user's network home directory is already mounted as another user, which would cause it to fail. When I try this, the Login Window says the user is unable to log in - makes sense.
    Next I tried creating another automount share point on the server (/Shared Items/More Users) and assigning the 2nd user to use that share point (so the homes are on different share points), and that appears to work. I'm not sure exactly how 'supported' this configuration is, but it appears to work (in other words, your mileage may vary). Here are the mount command results from the client:
    mount
    /dev/disk0s3 on / (hfs, local, journaled)
    devfs on /dev (devfs, local, nobrowse)
    /dev/disk0s2 on /Volumes/Loki (hfs, local, journaled)
    map -hosts on /net (autofs, nosuid, automounted, nobrowse)
    map auto_home on /home (autofs, automounted, nobrowse)
    map -fstab on /Network/Servers (autofs, automounted, nobrowse)
    trigger on /Network/Servers/server.domain.com/Users (autofs, automounted, nobrowse)
    trigger on /Network/Servers/server.domain.com/Shared Items (autofs, automounted, nobrowse)
    trigger on /Network/Servers/server.domain.com/Shared Items/More Users (autofs, automounted, nobrowse)
    afp_3a2gxv44sbgc0lNAhO1lX1fO-1.2d000007 on /Network/Servers/server.domain.com/Shared Items/More Users (afpfs, nodev, nosuid, automounted, nobrowse, mounted by jupeman)
    afp_3a2gxv44sbgc0lNAhO1lX1fO-1.2d000008 on /Volumes/Public (afpfs, nodev, nosuid, nobrowse, mounted by jupeman)
    afp_3a2gxv44sbgc0lNAhO1lX1fO-1.2d000009 on /Network/Servers/server.domain.com/Users (afpfs, nodev, nosuid, automounted, nobrowse, mounted by rick)
    afp_3a2gxv44sbgc0lNAhO1lX1fO-1.2d00000a on /Volumes/Public-1 (afpfs, nodev, nosuid, nobrowse, mounted by rick)
    As you can see, jupeman has mounted /Network/Servers/server.domain.com/Shared Items/More Users and rick has mounted /Network/Servers/server.domain.com/Users
    Best of luck!

  • Logs are filling up disk space

    Dear all,
    I am facing a problem in my database.
    The database is in archive log mode, and the continuous archiving of logs is filling up the disk space.
    Please let me know what I should do so that at the end of the activity there are no complaints about logs filling up.
    I will ensure there is enough disk space.
    Please let me know what kind of administration I have to do for this issue...

    Hello Sagar,
    When you use RMAN for backups you can set a RETENTION POLICY, either as a recovery window of days or as a redundancy count, for example:
    RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    or
    RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
    and when you back up, use "BACKUP ... PLUS ARCHIVELOG", for example:
    BACKUP DEVICE TYPE sbt
    DATABASE PLUS ARCHIVELOG;
    This causes RMAN, when it backs up your database, to back up your archived logs too.
    About those two CONFIGURE commands:
    the first command causes backup files and archived log files backed up more than 3 days ago to be marked obsolete in the V$BACKUP_FILES view
    (you can query the V$BACKUP_FILES view and check the OBSOLETE column);
    the second command causes backup files and archived log files that have already been backed up 3 times to be marked obsolete in V$BACKUP_FILES.
    If you then use the "DELETE OBSOLETE ..." command in RMAN, it deletes the obsolete files,
    so your archived logs that have been backed up and are obsolete will be deleted.
    You can also use DBMS_SCHEDULER to automate this routine, as in the sketch below.
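    For instance, a minimal DBMS_SCHEDULER sketch (the job name and the wrapper script are hypothetical; the script would run RMAN with DELETE NOPROMPT OBSOLETE):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'PURGE_OBSOLETE_BACKUPS',                        -- hypothetical name
        job_type        => 'EXECUTABLE',
        job_action      => '/home/oracle/scripts/rman_delete_obsolete.sh',  -- hypothetical wrapper script
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',                           -- run nightly at 02:00
        enabled         => TRUE);
    END;
    /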
    khosravi

  • Ons.log errors

    I have a 10g Grid Control console installed, configured, and working.
    However, thirty-four error lines a second are written to the ons.log file. The error is:
    05/05/02 12:59:42 [4] Local connection 0,127.0.0.1, 6100 missing form factor.
    The above line is repeated thirty-four times a second.
    What is this error and how do I get rid of it?

    Could you please tell me the detail of Metalink ID 284602.1?
    Thank you very much!

  • Ons.log files getting too numerous and too large

    Hi,
    Just recently, when I tried to connect to our staging database, I found that I couldn't connect and it gave me the "disk is full" error. It was quite surprising, because the archive logs were not large, nor were there any backups on it. And it was a 200 GB disk partition. After doing some research, we found out that the ONS logs were the problem. There were probably 40 or so log files, and each was a gigabyte and a half in size. My question is: what causes these logs to be created, and how can we manage them? We have a development environment that was set up exactly the same way as staging, but we never had a problem like that on it (dev). How can we turn the logging off?
    Thank you.

    Take a look at this Metalink note:
    Doc ID: 284602.1
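    Until the root cause from that note is addressed, an interim cleanup along these lines frees the space (a sketch only - it assumes the old ons.log contents are safe to discard; archive them elsewhere first if in doubt):
    opmnctl stopall                       # stop OPMN and its managed processes
    rm $ORACLE_HOME/opmn/logs/ons.log*    # or move the files off the partition instead
    opmnctl startall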

  • Build 9879 - Need to restart Win 10 if remotely logged on from another Windows machine

    Hi,
    Installed Win 10 32-bit Build 9879. I remotely log on to the Win 10 machine without any problem from my laptop. The problem comes when I try to log in from the Win 10 machine itself (i.e. the one which has Win 10 installed). No password screen... only a black screen with a mouse.
    The workaround is to remotely log on to Win 10 and restart the system.
    Has anyone faced a similar problem? How can it be resolved?
    Thanks,
    Mukul

    Hi Mukul,
    It sounds like you have a log-on issue, right?
    Based on my understanding, when Windows 10 build 9879 starts, you can log in remotely from other PCs, but you have a local login issue.
    If I have misunderstood your meaning, please feel free to let me know.
    If there is no password screen, I would suspect this is a corrupted installation. You can remotely launch the system recovery to perform a Refresh.
    Alex Zhao
    TechNet Community Support

  • ONS.log, invalid connect server IP format

    Dear all,
    We have a 10g RAC system (10.2.0.3) with two database nodes on two dedicated IBM Power servers with AIX 5.3 as the OS.
    The following message occurs frequently in ons.log; note also that we have detected that the ons process is consuming very high memory:
    13/03/18 09:34:33 [2] Passive connection 0,<IP of server 1>,6200 invalid connect server IP format
    3232237319,6200,6113,0
    ONSinfo: !!3232237319!0!6200!0!6113
    hostName: <hostname of server2>
    clusterId: databaseClusterId
    clusterName: databaseClusterName
    instanceId: databaseInstanceId
    instanceName: databaseInstanceName
    Unfortunately we can't find any documentation for this.
    Regards.

    Is the ONS process consuming very high memory because the log size is increasing drastically, or is it some other issue? We are not hitting this issue ourselves.
    Or is ONS consuming high CPU? If that is your issue, then
    according to my understanding, the hostname or IP value configured for OPMN in the opmn.xml file does not match the corresponding entry in the ons.conf file.
    OPMN then reconnects to itself over and over, thereby increasing CPU usage; in other words, I suspect the ONS topology is misconfigured.
    The ons.conf file content must be the same as on all the other instances in the cluster.
    For more information, examine the log ORACLE_HOME/opmn/logs/ipm.log, and if possible post it. A quick cross-check of the decimal address printed in the log is sketched below.
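    The first field in the log line (3232237319) appears to be a 32-bit IPv4 address printed in decimal; decoding it shows which address ONS actually saw, and it should match one of the entries in ons.conf. A quick decode in SQL:
    -- convert the decimal value from ons.log into dotted-quad notation;
    -- the result should correspond to one of the cluster nodes' addresses
    select trunc(3232237319/16777216)        || '.' ||
           mod(trunc(3232237319/65536), 256) || '.' ||
           mod(trunc(3232237319/256), 256)   || '.' ||
           mod(3232237319, 256) as ip_address
    from dual;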
    thanks,
    DBC,
    Sr DBA.

  • OAS 10gR2 can't be started: ons.log shows local listener terminated

    Hello all. I have the following problem.
    When I try to start the OAS infrastructure (we are using 10g Release 2), I get the following error:
    /oas/product/10.1.2/opmn/logs> opmnctl startall
    opmnctl: starting opmn and all managed processes...
    opmnctl: opmn start failed
    Reviewing the OAS log files, $OAS_HOME/opmn/logs/ons.log shows the following errors:
    09/03/25 17:23:04 [1] Local listener terminated
    09/03/26 09:45:17 [4] ONS server initiated
    09/03/26 09:45:17 [2] BIND (Can't assign requested address)
    09/03/26 09:45:17 [2] 127.0.0.0:399835136 - listener BIND failed
    09/03/26 09:45:17 [4] Listener thread 1543: 127.0.0.0:399835136 (0x442) terminating
    09/03/26 09:45:17 [1] Local listener terminated
    This is similar to what others have reported about listener binding problems, but this particular case cannot be found in Metalink, because here the BIND fails with "can't assign requested address".
    I suspect this is a problem with the hostname or something similar, because ONS is trying to bind to 127.0.0.0... strange...
    The /etc/hosts shows this configuration:
    # 10.2.0.2 x25sample # x.25 name/address
    127.0.0.1 localhost.av-c.com loopback localhost
    172.19.1.26 avc1.av-c.com avc1
    Please help, because this installation has a lot of applications and we cannot start it.
    Thanks!

    Do you have a database there?
    Which process did not start?
    What does the log of that process show?
    Regards.

  • Cisco IronPort S170 Access Logs are filling up the HDD

    We have a Cisco IronPort S170.
    The access logs have filled the HDD to 91%.
    The device is taking a serious performance hit.
    It now takes 5 minutes per click if I'm lucky.
    I have accessed the device via FTP and am about to copy off all of our AccessLogs.
    Once this is completed is there a way to wipe only the accesslogs from the device?
    Via FTP the transactions seemed to be read-only.
    I was looking through the CLI, but wasn't sure which command to use.
    Thanks,
    Brian

    When you FTP to the device and cd to the appropriate directory path, are you not able to mdel the files? Are you accessing the appliance via FTP as an admin-level user? If so, the housekeeping is roughly as sketched below.
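    A sketch of such an FTP session (the hostname and the log directory name are assumptions; the directory is typically named after the log subscription):
    ftp wsa.example.com        # log in as an admin-level user
    ftp> cd accesslogs         # directory for the access log subscription
    ftp> prompt                # turn off per-file confirmation
    ftp> mdelete aclog*        # remove the rolled-over access log files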
    -Robert

  • oraagent_oracle.log has filled the filesystem

    Hi,
    In my Grid Infrastructure 11.2, oraagent_oracle.log has filled the filesystem.
    It rotates every 10 MB and writes information like the following every second:
    2010-07-28 12:11:22.309: [    AGFW][1164872000] CHECK initiated by timer for: ora.orcl.db 1 1
    2010-07-28 12:11:22.310: [    AGFW][1143892288] Executing command: check for resource: ora.orcls.db 1 1
    2010-07-28 12:11:22.311: [ora.orcl.db][1143892288] [check] Gimh::check condition (GIMH_NEXT_NUM) 9 exists
    2010-07-28 12:11:22.311: [    AGFW][1143892288] check for resource: ora.orcl.db 1 1 completed with status: ONLINE
    Is there a way to reduce the number of files generated?
    Thank you

    Hi... it's not the agent (Enterprise Manager) log.
    It's the oraagent CRS (Grid Infrastructure) log.
    It can be found under:
    $CRS_HOME/log/<hostname>/agent/crsd
    There are 2 directories there:
    oraagent_oracle
    orarootagent_root
    In oraagent_oracle:
    -rw-r--r-- 1 oracles oinstall 10584334 Jul 27 00:49 oraagent_oracle.l07
    -rw-r--r-- 1 oracle oinstall 10584320 Jul 28 12:56 oraagent_oracle.l06
    -rw-r--r-- 1 oracle oinstall 10584435 Jul 28 17:51 oraagent_oracle.l05
    -rw-r--r-- 1 oracle oinstall 10584433 Jul 28 22:47 oraagent_oracle.l04
    -rw-r--r-- 1 oracle oinstall 10584400 Jul 29 03:42 oraagent_oracle.l03
    -rw-r--r-- 1 oracle oinstall 10584351 Jul 29 08:37 oraagent_oracle.l02
    -rw-r--r-- 1 oracle oinstall 10584399 Jul 29 13:33 oraagent_oracle.l01
    drwxr-xr-t 2 oracle oinstall 4096 Jul 29 13:33 .
    -rw-r--r-- 1 oracle oinstall 3936515 Jul 29 15:23 oraagent_oracle.log
    Who performs the log rotation? Where can I find the file properties?

  • Ons Log Problem

    Dear Sir/Madam,
    Regarding the OPMN log file stored at ../10gccd/opmn/logs, there is a warning/error as below:
    Local connection 0,127.0.0.1,6100 missing form factor.
    I don't know what this means; it repeats non-stop and keeps growing. Each log file is 1.4 GB. It will fill up my disk shortly.
    Can anyone help me solve the problem?
    best regards
    boris

    Boris,
    You've landed in the wrong forum. This one is devoted to Oracle HTML DB.
    Scott

  • Log files filling up rapidly

    I have noticed that each time I mount an smbfs share, the wireless network fails after a couple of minutes and has to be reset. The log files seem to be filling up rapidly.
    The daemon and sys logs contain similar lines.
    Jan 2 21:51:34 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:51:40 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:53:48 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:53:54 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:56:03 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:56:09 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_view_get_selection: assertion `GTK_IS_TREE_VIEW (tree_view)' failed
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_selection_unselect_all: assertion `GTK_IS_TREE_SELECTION (selection)' failed
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_selection_select_iter: assertion `GTK_IS_TREE_SELECTION (selection)' failed
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_view_scroll_to_cell: assertion `GTK_IS_TREE_VIEW (tree_view)' failed
    Jan 2 21:58:17 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:58:23 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:58:41 TOSHIBA-User NetworkManager: <info> Updating allowed wireless network lists.
    Jan 2 22:00:31 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 22:00:37 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 22:02:45 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    This might have some bearing on another issue already logged.
    Any ideas?
    malcolli

    Further reading and checking of the log files suggests that this may be two separate faults:
    the NetworkManager and the GNOME Display Manager.
    Now to find out more.
    malcolli

  • Archived logs quickly filling up

    A DBA from the client site where one of our products is running is complaining that one of the processes from our application is generating too much redo and filling up the archived redo log files quickly. How can I find the SQL which is creating the most redo?

    Hi,
    Enable STATSPACK/AWR on the database and monitor what is happening at the peak time.
    If you would like to see which process/statement generated the most redo, then you need to mine the archived logs (e.g. with LogMiner).
    To monitor which session is currently generating redo, use the query below (note the sid column, which identifies the session):
    select a.sid, b.name stat_name, sum(a.value)
    from v$sesstat a, v$statname b
    where a.statistic# = b.statistic#
    and b.name like '%redo%'
    group by a.sid, b.name;
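    To tie those per-session figures back to a specific application process, a follow-up along these lines joins in v$session (standard dictionary views; a sketch only):
    -- rank sessions by total redo generated, showing the owning program
    select s.sid, s.serial#, s.username, s.program, t.value as redo_size
    from   v$session s, v$sesstat t, v$statname n
    where  t.sid = s.sid
    and    t.statistic# = n.statistic#
    and    n.name = 'redo size'
    order  by t.value desc;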
