WARNING:Oracle process running out of OS kernel I/O resources

Hi!
I am getting the warning below in the DBWR trace files, almost daily and at different times:
WARNING:Oracle process running out of OS kernel I/O resources
We are using:
SuSE Linux Enterprise Server 10 SP2
OS kernel: 2.6.16.60-0.39.3-smp
Oracle Database 10.2.0.2 (64-bit), with only the fix for bug 5380055 applied
Storage: IBM DS8300
fs.aio-max-nr = 65536
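For reference, the kernel exposes both the current asynchronous I/O allocation and the ceiling, so the headroom can be checked directly. This is only a sketch, assuming the standard /proc paths on a 2.6 kernel; the 1048576 value is an example, not a recommendation for this system:

```shell
# Compare in-flight kernel AIO requests against the system-wide cap.
cat /proc/sys/fs/aio-nr       # AIO requests currently allocated
cat /proc/sys/fs/aio-max-nr   # system-wide ceiling

# If aio-nr approaches aio-max-nr, the cap can be raised at runtime
# (requires root):
#   sysctl -w fs.aio-max-nr=1048576
# and persisted across reboots by adding to /etc/sysctl.conf:
#   fs.aio-max-nr = 1048576
```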
Please help in resolving this issue.
Regards,
Raju

Hi Mark,
Thank you for your quick reply.
Doc 396057.1 (SuSE 10.2 OS issue): this note says "The archiver is getting stuck with ORA-19502 and ORA-27061", whereas in my case I am not getting any ORA- errors in the alert log or anywhere else; I only see these warnings, and only intermittently, in the DBWR trace files.
I have consulted my system team about disk I/O; they said we are using very high-end storage and there are no errors or warnings in the OS system logs.
Note 415872.1: this note says "HUNG DATABASE INSTANCE IF LINUX KERNEL MISS AIO REQUEST". This is not the situation in our case.
Note 6687381.8: says the versions confirmed as being affected are 10.2.0.3 and 10.2.0.4.
I am unable to find anything that matches my situation. Please help.
regards,
raju

Similar Messages

  • WARNING:Oracle process running out of OS kernel I/O resources (1)

    Hi,
    on my server (IBM POWER6, Oracle 10.2.0.4) the DBWR trace reports warnings like these:
    *** 2010-08-31 06:26:46.574
    Warning: lio_listio returned EAGAIN
    Performance degradation may be seen.
    WARNING:Oracle process running out of OS kernel I/O resources (1)
    *** 2010-09-01 07:11:38.691
    Warning: lio_listio returned EAGAIN
    Performance degradation may be seen.
    WARNING:Oracle process running out of OS kernel I/O resources (1)
    The AWR report for this period shows:
    Top 5 Timed Events                                          Avg %Total
    ~~~~~~~~~~~~~~~~~~                                         wait   Call
    Event                            Waits  Time (s)   (ms)   Time Wait Class
    db file sequential read        509,435     2,610      5   42.3 User I/O
    CPU time                                    1,714         27.8
    log file sync                   55,309     1,146     21   18.6 Commit
    log file parallel write         60,498       937     15   15.2 System I/O
    db file parallel write          27,166       295     11    4.8 System I/O
    The workload was a sqlldr load with rows=100000.
    The warning in the trace file worries me:
    Warning: lio_listio returned EAGAIN
    It points to aioserver (maxservers) being low: it is 10 instead of 50, since we have 40 disks and 8 CPUs (40 * 10 / 8).
    Can anyone help me?

    In note 443368.1:
    " If you are using JFS/JFS2, then set the initial value to *10 times the number of logical disks divided by the number of CPUs*."
    pstat -a | grep -c aios
    161
    lsattr -E -l aio0
    autoconfig available STATE to be configured at system restart True
    fastpath   enable    State of fast path                        True
    kprocprio  39        Server PRIORITY                           True
    maxreqs    4096      Maximum number of REQUESTS                True
    maxservers 10        MAXIMUM number of servers per cpu         True
    minservers 1         MINIMUM number of servers                 True
    Edited by: Davy on 1-Sep-2010 3:04
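    The formula from note 443368.1 can be worked out directly. A sketch, with the disk and CPU counts taken from the post above; the aio0 device name is the AIX default and the chdev step requires root:

    ```shell
    # Note 443368.1 sizing for JFS/JFS2:
    # maxservers = 10 * (number of logical disks) / (number of CPUs)
    DISKS=40
    CPUS=8
    MAXSERVERS=$(( 10 * DISKS / CPUS ))
    echo "suggested maxservers: $MAXSERVERS"   # 50 for 40 disks and 8 CPUs

    # On AIX the value is applied with chdev (root only):
    #   chdev -l aio0 -a maxservers=$MAXSERVERS
    #   lsattr -E -l aio0    # verify the new setting
    ```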

  • Oracle process running out of OS kernel I/O resources

    Hi All,
    Here is our platform:
    Oracle Version: 10.2.0.4.0
    O/S: Linux x86_64
    We are getting the error message below:
    ERROR:
    ===================================================================
    WARNING:io_submit failed due to kernel limitations MAXAIO for process=128 pending aio=123
    WARNING:asynch I/O kernel limits is set at AIO-MAX-NR=1048576 AIO-NR=64384
    WARNING:Oracle process running out of OS kernel I/O resources
    ===================================================================
    To avoid this we recently upgraded Oracle to 10.2.0.4.0 and also applied the interim patches below, but we are still getting the same error:
    p6051177_10204_Linux-x8664.zip
    p6024730_10204_Linux-x8664.zip
    p5935935_10204_Linux-x8664.zip
    p5923486_10204_Linux-x8664.zip
    p5895190_10204_Linux-x8664.zip
    p5880921_10204_Linux-x8664.zip
    p5756769_10204_Linux-x8664.zip
    p5747462_10204_Linux-x8664.zip
    p5561212_10204_Linux-x8664.zip
    p6944036_10204_Linux-x8664.zip
    p6826661_10204_Linux-x8664.zip
    p6775231_10204_Linux-x8664.zip
    p6768114_10204_Linux-x8664.zip
    p6679303_10204_Linux-x8664.zip
    p6645719_10204_Linux-x8664.zip
    p6452766_10204_Linux-x8664.zip
    p6379441_10204_Linux-x8664.zip
    p6324944_10204_Linux-x8664.zip
    p6313035_10204_Linux-x8664.zip
    p6151380_10204_Linux-x8664.zip
    p6084232_10204_Linux-x8664.zip
    p6082832_10204_Linux-x8664.zip
    p7573151_10204_Linux-x8664.zip
    p7522909_10204_Linux-x8664.zip
    p7513673_10204_Linux-x8664.zip
    p7300608_10204_Linux-x8664.zip
    p7287289_10204_Linux-x8664.zip
    p7149863_10204_Linux-x8664.zip
    p7027551_10204_Linux-x8664.zip
    p6972843_10204_Linux-x8664.zip
    p6954829_10204_Linux-x8664.zip
    p7592168_10204_Linux-x8664.zip
    p8201796_10204_Linux-x8664.zip
    p7592346_10204_Linux-x8664.zip
    p6705635_10204_Generic.zip
    p7252962_10204_Generic.zip
    Do we need to set any Oracle parameters?
    Thanks And Regards:
    Giridhar N

    Hi,
    in limits.conf I am seeing the values below:
    # Added for SAP on 2008-07-17 15:25:47 UTC
    @sapsys          soft    nofile          48000
    @sapsys          hard    nofile          48000
    @sdba            soft    nofile          32800
    @sdba            hard    nofile          32800
    @dba             soft    nofile          32800
    @dba             hard    nofile          32800
    I am still getting the same error:
    WARNING:io_submit failed due to kernel limitations MAXAIO for process=128 pending aio=127
    WARNING:asynch I/O kernel limits is set at AIO-MAX-NR=65536 AIO-NR=65536
    WARNING:Oracle process running out of OS kernel I/O resources (1)
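    Worth noting: in the trace line above, AIO-NR has already reached AIO-MAX-NR=65536. The nofile entries in limits.conf govern file descriptors, not kernel AIO contexts, so the relevant knob here is fs.aio-max-nr. A small sketch of checking whether the cap is the bottleneck (Linux /proc paths assumed; the 10% threshold and doubling suggestion are illustrative, not Oracle guidance):

    ```shell
    #!/bin/sh
    # Compare kernel-wide AIO usage with the cap and warn when headroom is low.
    used=$(cat /proc/sys/fs/aio-nr)
    max=$(cat /proc/sys/fs/aio-max-nr)
    free=$(( max - used ))
    echo "aio-nr=$used aio-max-nr=$max headroom=$free"

    # Flag anything under 10% headroom as a candidate for raising the cap.
    if [ "$free" -lt $(( max / 10 )) ]; then
        echo "headroom below 10% -- consider: sysctl -w fs.aio-max-nr=$(( max * 2 ))"
    fi
    ```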

  • Oracle process running out of OS kernel I/O resources (1)

    Hello All,
    I am getting the "Oracle process running out of OS kernel I/O resources (1)" warning in the DBWxx.trc files on the AIX 5.3 platform. Can anybody help me?
    Regards,
    Ajay

    What Oracle version? What, if any, error messages follow that message in the trace files? Anything in the alert.log?
    Did you try an MOS search? How about a Google search?
    This seems to be a pretty well known error.
    -Mark

  • No Oracle processes running Oracles Shared segments not cleaned up

    I've been running into situations where a database's SGA isn't cleaned up from memory even though no Oracle processes are running. I always thought that if the Oracle processes die, the instance does too, but that doesn't appear to be the case. There's a script that pulls down databases to refresh DB copies. My initial thought was that in certain situations the DB isn't shut down before the files are overwritten, but I have not been able to reproduce it. There's nothing in the alert.log: no errors whatsoever. In what sort of circumstances would you see a database's SGA still in memory with no Oracle processes running? I can clean up the shared memory segments; I'm more concerned about understanding how this is occurring.
    Any insight would be appreciated.
    Thanks

    Please reread your post with great care.
    Do you see a platform and operating system?
    Do you see a product name and full version number?
    Do you see a description of which commands were executed?
    Good. Because I don't either.
    Post a full and complete description of the environment so we can try to replicate it.

  • WARNING:Oracle instance running on a system with low open file

    Hi Guys
    please can you advise how to resolve this warning: WARNING:Oracle instance running on a system with low open file

    Answer a couple of questions for us first:
    1. What version of Oracle (ex. 10.2.0.3) are you running?
    2. What operating system is Oracle installed on?
    3. Where did this error show up (alert log, SQL*Plus session, etc.)?
    Tom

  • EXT3-fs warning: checktime reached, running e2fsck is recommended - kernel

    We are seeing the following messages:
    2012 Jan 17 03:25:20 MDSBDC09 Jan 17 03:25:20 %KERN-4-SYSTEM_MSG: EXT3-fs warning: checktime reached, running e2fsck is recommended - kernel
    2012 Jan 17 03:30:28 MDSBDC09 Jan 17 03:30:28 %KERN-4-SYSTEM_MSG: EXT3-fs warning: checktime reached, running e2fsck is recommended - kernel
    I can't find any info on same.
    Please advise. Any actions required?
    thanks!

    We see that the switch ran e2fsck successfully and fixed those errors.
    e2fsck 1.27 (8-Mar-2002)
    /dev/hd-cfg0: recovering journal
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Free blocks count wrong (34334, counted=34279).
    Fix? yes
    Free inodes count wrong (9920, counted=9918).
    Fix? yes
    /dev/hd-cfg0: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/hd-cfg0: 42/9960 files (0.0% non-contiguous), 5505/39784 blocks
    e2fsck 1.27 (8-Mar-2002)
    /dev/hd-cfg1: recovering journal
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Free blocks count wrong (33356, counted=33301).
    Fix? yes
    Free inodes count wrong (9680, counted=9678).
    Fix? yes
    /dev/hd-cfg1: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/hd-cfg1: 42/9720 files (0.0% non-contiguous), 5475/38776 blocks
    e2fsck 1.27 (8-Mar-2002)
    /dev/hd-pss: recovering journal
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Free blocks count wrong (33756, counted=33512).
    Fix? yes
    Free inodes count wrong (9875, counted=9870).
    Fix? yes
    /dev/hd-pss: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/hd-pss: 90/9960 files (5.6% non-contiguous), 6272/39784 blocks
    e2fsck 1.27 (8-Mar-2002)
    /dev/hd-obfl: recovering journal
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Free blocks count wrong (7246, counted=7232).
    Fix? yes
    Free inodes count wrong (2132, counted=2130).
    Fix? yes
    /dev/hd-obfl: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/hd-obfl: 14/2144 files (7.1% non-contiguous), 1336/8568 blocks
    e2fsck 1.27 (8-Mar-2002)
    /dev/hd-bootflash: recovering journal
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Free blocks count wrong (189131, counted=161555).
    Fix? yes
    Free inodes count wrong (112210, counted=112208).
    Fix? yes
    We will monitor the switch to see how often this recurs.
    We have looked at the flash, and I do not see any directory that is 100% full.
    thanks

  • Oracle 9i running out of memory

    Folks !
    I have a simple 3-table schema with a few thousand entries in each table. After dedicating gigabytes of hard disk space and 50% of my 1+ GB of memory, I run a few simple Oracle Text "contains" searches (see below) on these tables, and Oracle seems to grow by some 25 MB after each query (which typically returns fewer than a dozen rows) until it eventually runs out of memory and I have to reboot the system (Sun Solaris).
    This is on Solaris 9/SPARC with Oracle 9.2. My query is a simple right outer join. I think the memory growth is related to Oracle Text indexing/caching, since memory utilization seems pretty stable with simple like '%xx%' queries.
    "top" shows a dozen or so processes, each with about 400MB RSS/SIZE. It has been a while since I did Oracle DBA work, but I am doing nothing special here. The database has all the default settings you get when you create an Oracle database.
    I have played with SGA sizes, and no matter how large or small the SGA/PGA, Oracle runs out of memory and crashes the system. Pretty poor for an enterprise database to die like that.
    Any clue on how to arrest the fatal growth of memory for Oracle 9i r2?
    thanks a lot.
    -Sanjay
    PS: The query is:
    SELECT substr(sdn_name,1,32) as name, substr(alt_name,1,32) as alt_name, sdn.ent_num, alt_num, score(1), score(2)
    FROM sdn, alt
    where sdn.ent_num = alt.ent_num(+)
    and (contains(sdn_name,'$BIN, $LADEN',1) > 0 or
    contains(alt_name,'$BIN, $LADEN',2) > 0)
    order by ent_num, score(1), score(2) desc;
    There are following two indexes on the two tables:
    create index sdn_name on sdn(sdn_name) indextype is ctxsys.context;
    create index alt_name on alt(alt_name) indextype is ctxsys.context;

    I am already using MTS.
    Attached is the init.ora file below.
    Maybe I should repost this with the subject "memory leak in Oracle" to catch developer attention. I posted this a few weeks back in the Oracle Text group and got no response there either.
    Thanks for your help.
    -Sanjay
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_cache_size=33554432
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=ofac
    # Diagnostics and Statistics
    background_dump_dest=/space/oracle/admin/ofac/bdump
    core_dump_dest=/space/oracle/admin/ofac/cdump
    timed_statistics=TRUE
    user_dump_dest=/space/oracle/admin/ofac/udump
    # File Configuration
    control_files=("/space/oracle/oradata/ofac/control01.ctl", "/space/oracle/oradata/ofac/control02.ctl", "/space/oracle/oradata/ofac/control03.ctl")
    # Instance Identification
    instance_name=ofac
    # Job Queues
    job_queue_processes=10
    # MTS
    dispatchers="(PROTOCOL=TCP) (SERVICE=ofacXDB)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.2.0.0.0
    # Optimizer
    hash_join_enabled=TRUE
    query_rewrite_enabled=FALSE
    star_transformation_enabled=FALSE
    # Pools
    java_pool_size=117440512
    large_pool_size=16777216
    shared_pool_size=117440512
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=25165824
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_retention=10800
    undo_tablespace=UNDOTBS1

  • System Process Running out of Control - Overheating!

    The other day I arrived home, went to check my email, and found my computer running at full tilt, with the fans going all out and the case quite hot. I thought maybe I had left Dashboard open all day, as I have done that before with similar results, but this was not the case.
    I quit all programs and restarted, and the computer started to run at close to full processor load.
    I ran Software Update and installed the RAW and system updates that were available, then restarted; the problem was still there. I ran Disk Utility to check/repair permissions until they came back clean.
    I ran the OnyX cleaning routines that clear caches and histories.
    Activity Monitor shows the processor running at close to 3/4 load, yet the programs open in the background show little to no usage, as expected, with Activity Monitor itself at the top of the list using some processor. But the color chart below, which separates usage into programs and system, shows the system processes running quite high.
    Lastly, I noted that when I restart and then let the computer sit for a few minutes at the login screen, the system must be running hard again, because the fans come on. I do not know what is causing this or what next step I should take to remediate it. Please help!
    Jerome

    Hi, C Jerome. If your Powerbook were actually overheating, it would put itself to sleep, so you needn't worry that it's in danger of damaging itself. I can understand your interest in calming and quieting it, though.
    First, how full is your hard drive? At least 10% of its capacity or 5GB, whichever is larger, should be free at all times. If you get down to much less than that, the processor and hard drive must work very hard cycling data in and out of virtual memory, and you run a substantial risk of system crashes, hard drive directory corruption and consequent loss of data.
    Do you have disk indexing turned on? Indexing your drive is only useful if you use Find By Content searches (I'm not sure that's what Spotlight calls them, because I don't use Tiger). Indexing can consume an enormous amount of processor cycles and generate a lot of heat, in addition to taking up a significant amount of hard drive space. If you don't need it, turn it off.

  • Oracle server running out of space

    We have a Linux (Debian) server with Oracle 10g on it, and it is running out of space. We decided to add more disk space (a new hard drive in the same server). But if we want to add more data files, how can I tell the DB to use the added disk? All the existing data files are in /home/oracle/oradata/orcl and their tablespace (USERS) is on the old disk; is it possible to tell the DB to use the added disk?
    Please let me know if that can be done and how.
    Thanks in advance.
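    One common approach, sketched here under the assumption that the new drive is mounted at /disk2 and the USERS tablespace should grow onto it (the file name and size are illustrative), is simply to add a datafile on the new mount:

    ```shell
    # Run as the oracle OS user; adds a datafile on the new disk to USERS.
    sqlplus / as sysdba <<'EOF'
    ALTER TABLESPACE users
      ADD DATAFILE '/disk2/oradata/orcl/users02.dbf' SIZE 2G AUTOEXTEND ON;
    EXIT
    EOF
    ```

    Tablespaces can span disks freely; Oracle only cares about the datafile paths, not which physical drive holds them.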

    Drop the tablespace. If you are sure you don't need it.
    ALTER TABLESPACE tools offline;
    DROP TABLESPACE tools;
    1* create tablespace noned datafile '/u02/app/oracle/oradata/DWDEV01/noned.dbf' size 10M extent management local
    SQL> /
    Tablespace created.
    shutdown immediate;
    oracle@debian:~/oradata/DWDEV01$ mv noned.dbf _noned.dbf
    oracle@debian:~/oradata/DWDEV01$ sqlplus sys/p as sysdba
    SQL*Plus: Release 10.2.0.1.0 - Production on Sat Nov 11 18:44:13 2006
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup;
    ORACLE instance started.
    Total System Global Area 285212672 bytes
    Fixed Size 1218992 bytes
    Variable Size 75499088 bytes
    Database Buffers 205520896 bytes
    Redo Buffers 2973696 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
    ORA-01110: data file 5: '/u02/app/oracle/oradata/DWDEV01/noned.dbf'
    1* alter database datafile '/u02/app/oracle/oradata/DWDEV01/noned.dbf' offline drop
    SQL> /
    Database altered.
    SQL> drop tablespace noned;
    drop tablespace noned
    ERROR at line 1:
    ORA-01109: database not open
    SQL> alter database open;
    Database altered.
    SQL> drop tablespace noned;
    Tablespace dropped.
    Message was edited by:
    gopalora

  • Oracle Home running out of space

    Hi,
    I have installed oracle application server 10g and oracle 10g on an Linux server.
    Both products have been installed on the /oracle partition which is about 11GB
    The database control and data files have been setup to another partition /oradata.
    After a few months i seem to be running out of space on the /oracle partition.
    I think it might be due to some log files. Can anyone please give me ideas on which files I can safely purge on the application server home and database home to create some space on the /oracle parition
    Thanks

    Hi Prativ,
    for Oracle Application Server, the log files in the three locations below can be purged. If you want, you can back up the older log files to another mount point before deleting them.
    1st: $ORACLE_HOME/webcache/logs
    2nd: $ORACLE_HOME/j2ee/OC4J_BI_Forms/log/OC4J_BI_Forms_default_island_1/ (purge default-web-access.log and server.log)
    3rd: $ORACLE_HOME/Apache/Apache/logs
    Regards
    Fabian

  • Oracle processes run during installation

    Hi All,
    I want to know which processes run during the installation of an Oracle database.
    BR
    Sphinx

    $phinx19 wrote:
    "Hi John, I understand your point, but I cannot say the same thing to the interviewer. Can I?"
    Why can't you? There is no point in giving an answer just for the sake of giving it when the question doesn't make any sense! If you are asked a question that you think doesn't make sense or needs more explanation, you should (must) ask the person about it. There is nothing wrong with doing so. If I were the one asking such a question, my motive would be to check what the person comes up with: an answer just for the sake of it, or a more open discussion.
    Aman....

  • What Are Those Oracle Processes Doing?

    Hi,
    I have a system using oracle as the database server.
    Sometimes the performance of my system is very bad.
    When I check CPU usage with the 'top' command,
    it seems that Oracle processes are taking up a lot of the resources.
    Is there any way for me to know what Oracle is processing at that moment?
    Thanks in advance.
    Bye.

    Is it a generic oracle<instance> process that is running? Or does it have another name, like ora_pmon_<instance> or ora_snpX_<instance>?
    To find out, you can take the Unix PID and figure out which session spawned that process:
    select username, sid, serial# from v$session where paddr in (select addr from v$process where spid = 'UNIX PID HERE');
    You can get more info from V$SESSION than that if you like.

  • Oracle instance running on a system with low open file descriptor

    Hello.
    We have 10.1.0.4 on SuSE 9 on x86 64bit Sun servers.
    We have databases that, if started manually, come up without the warning, but if started via a shell script scheduled through crontab, they start up with this warning: "Oracle instance running on a system with low open file descriptor".
    My understanding is that this has to do with the OS ulimit: it appears that a non-interactive shell (crontab) does not set nofile to 65536.
    All our systems are set up exactly the same way. The problem, though, is that some systems do not report the warning even when started non-interactively.
    My question is this: assuming my nofile ulimit is in fact too low on all systems, why would some systems report the warning and others not? Is there anything database-specific the instance looks at when it starts, such as the number of datafiles in the database, the instance memory size, etc., that would make the instance warn in some cases but not others?
    Thank You
    Boris

    Thank You Satish.
    This is a good reference and we may end up looking into the patch bundle associated with the bug.
    But does anyone have any idea why systems that are set up the exact same way would warn in some cases and not in others?
    Also, the Metalink note talks about init.crsd; I have not established a connection between this inconsistency and RAC.
    What I do see is that if we start our database non-interactively (where ulimit -n resolves to 1024 instead of 65536), the warning is generated, but only in some cases.
    Perhaps 1024 is too low. But then my question is why Oracle would think it is too low only on some servers and not all.
    Boris
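    Since cron does not go through the PAM login path that applies /etc/security/limits.conf, one workaround is to raise the limit explicitly inside the startup script itself. A sketch; the 65536 value mirrors the interactive setting mentioned above, and the dbstart invocation is an assumed example:

    ```shell
    #!/bin/sh
    # Invoked from cron: the shell inherits cron's default nofile (often 1024),
    # so raise it before starting the instance. This can only raise the soft
    # limit up to the hard limit configured for the invoking user.
    ulimit -n 65536 2>/dev/null || echo "WARN: could not raise nofile to 65536" >&2
    echo "nofile for this shell: $(ulimit -n)"

    # Then start the database as usual, e.g.:
    #   su - oracle -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
    ```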

  • Finale running out of memory

    Mac Pro 3.5GHz 6-core Intel Xeon E5
    Memory:  16GB 1867 MHz DDR3
    1TB of flash storage
    I am running OSX 10.9.5
    The problem occurs when I use Finale 2014 3.4736 (the latest version), a music notation application, with Garritan for Finale.
    Shortly after starting work on a small vocal score with piano accompaniment, I am faced with the warning:
    "Finale has run out of memory. Please save your work and quit".
    No other applications are running.
    How can this be?
    Help is needed
    Thanks Sion

    Hi ..
    If you can, don't run other memory intensive apps while using the Finale app.
    Open the Activity Monitor located in HD > Applications > Utilities then select the Memory tab.
    From there you can see which apps are using the most RAM. (memory)
