Oracle consumes 100% memory on Solaris 10

Hi,
Our database (Oracle 10g R2) is running on Solaris. When I use the Unix command prstat -a, it shows 100% memory utilized. Below are the details.
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
12934 oracle 2573M 2559M sleep 59 0 0:00:00 0.1% oracle/1
5914 sirsi 4912K 4664K sleep 59 0 0:00:29 0.0% prstat/1
12937 oracle 4896K 4592K cpu3 49 0 0:00:00 0.0% prstat/1
833 oracle 2572M 2558M sleep 59 0 0:01:05 0.0% oracle/1
114 root 7464K 6632K sleep 59 0 0:01:20 0.0% picld/12
829 oracle 2573M 2559M sleep 59 0 0:01:04 0.0% oracle/1
823 oracle 2574M 2560M sleep 59 0 0:00:46 0.0% oracle/11
811 oracle 2573M 2559M sleep 59 0 0:00:43 0.0% oracle/1
146 root 2288K 1312K sleep 59 0 0:00:22 0.0% in.mpathd/1
831 oracle 2576M 2562M sleep 59 0 0:00:24 0.0% oracle/1
639 root 3664K 2392K sleep 59 0 0:00:00 0.0% snmpXdmid/2
700 nobody 7520K 3752K sleep 59 0 0:00:00 0.0% httpd/1
701 nobody 7520K 3752K sleep 59 0 0:00:00 0.0% httpd/1
637 root 3080K 2048K sleep 59 0 0:00:00 0.0% dmispd/1
472 root 5232K 2320K sleep 59 0 0:00:00 0.0% dtlogin/1
720 root 2912K 2400K sleep 59 0 0:00:01 0.0% vold/5
629 root 2376K 1664K sleep 59 0 0:00:00 0.0% snmpdx/1
702 nobody 7520K 3736K sleep 59 0 0:00:00 0.0% httpd/1
378 root 3928K 1784K sleep 59 0 0:00:00 0.0% sshd/1
699 nobody 7520K 3704K sleep 59 0 0:00:00 0.0% httpd/1
697 root 9384K 6520K sleep 59 0 0:00:01 0.0% snmpd/1
695 root 7360K 5376K sleep 59 0 0:00:04 0.0% httpd/1
375 root 12M 8088K sleep 59 0 0:00:01 0.0% fmd/15
354 root 3728K 2040K sleep 59 0 0:00:00 0.0% syslogd/13
415 root 2016K 1440K sleep 59 0 0:00:00 0.0% smcboot/1
416 root 2008K 1016K sleep 59 0 0:00:00 0.0% smcboot/1
338 root 4736K 1296K sleep 59 0 0:00:00 0.0% automountd/2
340 root 5080K 2384K sleep 59 0 0:00:00 0.0% automountd/3
263 daemon 2384K 1760K sleep 60 -20 0:00:00 0.0% lockd/2
256 root 1280K 936K sleep 59 0 0:00:00 0.0% utmpd/1
395 root 7592K 2560K sleep 59 0 0:00:02 0.0% sendmail/1
273 root 2232K 1496K sleep 59 0 0:00:00 0.0% ttymon/1
254 root 2072K 1224K sleep 59 0 0:00:00 0.0% sf880drd/1
417 root 2008K 1016K sleep 59 0 0:00:00 0.0% smcboot/1
272 root 5152K 4016K sleep 59 0 0:00:02 0.0% inetd/4
206 root 1232K 536K sleep 59 0 0:00:00 0.0% efdaemon/1
394 smmsp 7568K 1904K sleep 59 0 0:00:00 0.0% sendmail/1
128 root 2904K 2056K sleep 59 0 0:00:00 0.0% devfsadm/6
241 daemon 2640K 1528K sleep 59 0 0:00:00 0.0% rpcbind/1
245 daemon 2672K 1992K sleep 59 0 0:00:00 0.0% statd/1
251 root 2000K 1248K sleep 59 0 0:00:00 0.0% sac/1
123 root 3992K 3008K sleep 59 0 0:00:07 0.0% nscd/26
NPROC USERNAME SIZE RSS MEMORY TIME CPU
24 oracle 48G 48G 100% 0:04:48 0.1%
10 sirsi 1101M 35M 0.1% 0:00:32 0.0%
37 root 148M 97M 0.2% 0:02:18 0.0%
10 nobody 73M 36M 0.1% 0:00:00 0.0%
1 smmsp 7568K 1904K 0.0% 0:00:00 0.0%
4 daemon 12M 7920K 0.0% 0:00:00 0.0%
Total: 86 processes, 260 lwps, load averages: 0.02, 0.02, 0.02
Can anyone suggest why Oracle consumes 100% memory, and how do we resolve this?
Regards,
Sabdar Syed.

Many Unix tools add the SGA size to each dedicated server process's reported memory because, under Unix, each dedicated server attaches the SGA shared memory segment to its own address space; these tools are therefore not reliable for measuring Oracle memory.
To check Oracle memory usage, it is generally recommended to use the V$ views such as V$SGASTAT and V$PGASTAT.
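For example (a sketch using the standard dynamic performance views; exact row names vary slightly by release), the SGA and PGA figures can be checked with:

```sql
-- SGA usage by pool, in MB
SELECT pool, ROUND(SUM(bytes)/1024/1024) AS mb
  FROM v$sgastat
 GROUP BY pool;

-- PGA summary statistics
SELECT name, value, unit
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'maximum PGA allocated');
```

Together these show how much memory the instance itself believes it holds, which can then be compared against the prstat totals above.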

Similar Messages

  • Oracle.exe consuming 100% CPU on Windows and database hang

    Hi all,
    Every time the application runs, my Oracle database hangs: oracle.exe consumes 100% CPU (but not memory), the server hangs, and the database becomes inaccessible. We have to restart the Oracle instance service or the whole server to bring the database back to normal, but that is not a permanent fix because the problem recurs as soon as the application starts again.
    Checking the log file, I found the error below every time.
    My database version is 9.2.0.7.0
    OS: Windows 2003 Server Standard Edition Service Pack 2
    RAM: 3.5 GB
    CPU: Intel Xeon 3.20 GHz
    ORA-00600: internal error code, arguments: [kghuclientasp_03], [0xBFEADCE0], [0], [0], [0], [], [], []
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-29400: data cartridge error
    KUP-04050: error while attempting to allocate 163500 bytes of memory
    ORA-06512: at "SYS.ORACLE_LOADER", line 14
    ORA-06512: at line 1
    Fri Mar 05 05:35:15 2010
    Errors in file e:\oracle\admin\optprod\udump\optprod_ora_5876.trc:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-04030: out of process memory when trying to allocate 8389132 bytes (pga heap,redo read buffer)
    ORA-04030: out of process memory when trying to allocate 8389132 bytes (pga heap,redo read buffer)
    ORA-04030: out of process memory when trying to allocate 8180 bytes (callheap,kcbtmal allocation)
    Thank you
    Lucienot.

    Is this a new application on this database?
    Has it run well in the past?
    I have had this happen before on a 32-bit Windows server. Our problem was a poorly written procedure that kept pegging the CPU at 100%. You should be able to figure out what SQL is causing this problem; it will most likely be the top working SQL.
    I also had this problem on a logical standby server that was trying to apply SQL to the SYS.AUD$ table. As soon as SQL Apply was started, the CPU went to 100%; once I truncated that table, CPU usage went back to normal. I'm not sure what you are using to monitor your database, but if you can, try to find out what SQL is running when your CPU goes to 100%.
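    As a sketch (this join works on 9.2 and later because it uses address/hash value rather than the 10g-only sql_id column), the SQL belonging to currently active sessions can be listed with:

    ```sql
    -- Show the statements active sessions are executing right now
    SELECT s.sid, s.serial#, s.username, q.sql_text
      FROM v$session s, v$sqlarea q
     WHERE q.address    = s.sql_address
       AND q.hash_value = s.sql_hash_value
       AND s.status     = 'ACTIVE'
       AND s.username IS NOT NULL;
    ```

    Running this while the CPU is pegged usually points straight at the offending statement.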

  • Oracle Consuming too much memory.

    Hi All,
    We are having Oracle with only single instance running on the machine which has 8GB RAM. And we just have only Oracle running on this machine.
    OS - RHEL4
    Oracle : 10.2.0.1
    Out of this 8GB RAM, 7.5GB has been utilized. Can somebody please help me find out which processes are using this much memory?
    PGA_AGGREGATE_TARGET = 512MB
    SGA_TARGET = 2G
    SGA_MAX_SIZE = 2G
    Since we have allocated only 2GB as our SGA, why is Oracle still utilizing up to 7.5GB of memory?
    Any help would be great.
    Thanks

    Here's my top output, sorted by memory usage:
    top - 03:00:31 up 18:25, 2 users, load average: 0.06, 0.03, 0.00
    Tasks: 246 total, 1 running, 245 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 8306864k total, 8028476k used, 278388k free, 103548k buffers
    Swap: 10241396k total, 51416k used, 10189980k free, 7397472k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    3061 oracle 15 0 2145m 434m 431m S 0 5.4 0:40.53 oracle
    2987 oracle 15 0 2145m 144m 140m S 0 1.8 0:01.82 oracle
    2993 oracle 15 0 2142m 111m 108m S 0 1.4 0:05.68 oracle
    2999 oracle 15 0 2143m 97m 93m S 0 1.2 0:03.02 oracle
    15198 oracle 15 0 2142m 82m 79m S 0 1.0 0:00.46 oracle
    2997 oracle 15 0 2142m 78m 76m S 0 1.0 0:00.47 oracle
    13237 oracle 15 0 2150m 68m 65m S 0 0.8 0:01.35 oracle
    2989 oracle 15 0 2156m 45m 42m S 0 0.6 0:08.21 oracle
    6800 oracle 15 0 2141m 40m 38m S 0 0.5 0:00.22 oracle
    3009 oracle 15 0 2167m 40m 21m S 0 0.5 0:00.36 oracle
    3015 oracle 15 0 2167m 39m 21m S 0 0.5 0:00.39 oracle
    3011 oracle 18 0 2167m 39m 18m S 0 0.5 0:00.40 oracle
    3013 oracle 15 0 2167m 39m 18m S 0 0.5 0:00.80 oracle
    3108 oracle 18 0 2141m 36m 34m S 0 0.4 0:00.37 oracle
    2991 oracle 16 0 2143m 35m 33m S 0 0.4 0:00.14 oracle
    20921 oracle 16 0 2141m 33m 31m S 0 0.4 0:00.06 oracle
    19872 oracle 15 0 2142m 32m 29m S 0 0.4 0:00.09 oracle
    2985 oracle 15 0 2141m 31m 30m S 0 0.4 0:00.13 oracle
    This just displays the processes that consume the most memory.
    Can we find the PGA usage for each process from the database side?
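    Two things to note. First, the top header above reports roughly 7.2 GB as "cached": on Linux, "used" memory includes the reclaimable page cache, so the processes themselves hold far less than 7.5 GB. Second, per-process PGA can indeed be read from the database side; a sketch:

    ```sql
    -- PGA per server process, as tracked by the instance itself
    SELECT p.spid                            AS os_pid,
           ROUND(p.pga_used_mem /1024/1024) AS pga_used_mb,
           ROUND(p.pga_alloc_mem/1024/1024) AS pga_alloc_mb,
           ROUND(p.pga_max_mem  /1024/1024) AS pga_max_mb
      FROM v$process p
     ORDER BY p.pga_alloc_mem DESC;
    ```

    The spid column lets you match each row back to the PID column of the top output.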

  • Oracle 11g R2 consuming 100% CPU on execution plans

    Hi Friends
    I just installed oracle 11.2.0.1.0 on Windows 2008 Server R2.
    I have 4GB of RAM: 2536MB for the SGA and 600MB for the PGA.
    I'm running a process that consumes 100% of the CPU, and the execution plan of every statement shows this utilization.
    I have tried setting the disk_asynch_io parameter to FALSE, with no improvement in performance.
    After reading a doc relating to a bug, I set dbwr_io_slaves to 4, but nothing changed.
    Now I've been stuck on the "asynch descriptor resize" wait event for more than 4 hours.
    Any ideas, tips, or experience with this issue?
    Tks a lot

    Well, thanks for your help guys.
    The problem was almost completely solved by disabling MEMORY_TARGET and SGA_TARGET and setting all the memory parameters manually.
    This eliminated the "asynch descriptor resize" event and the overall performance of the DB increased, so I was able to complete my operation.
    I believe it's a Windows bug, or a poor design or configuration of mine. But at this time we are able to run the system well and fast enough.
    Tks a lot

  • Some Oracle processes are consuming 100% CPU

    Hi,
    Some of my Oracle processes are continuously consuming a large amount of CPU. What are the general steps to sort out this problem? I think the query itself is tuned.

    Hi,
    What is your version of Oracle? What is the OS? Which Oracle process is consuming 100% CPU, and how did you determine that it is?
    You also mentioned that the query is tuned; do you mean your query is taking a long time to execute?
    There is no clarity in your question...
    Regards,
    Vijayaraghavan K

  • Oracle.exe is consuming 100% CPU on Windows

    Hi,
    We have 6 Oracle 10g databases on a Windows box, and in Task Manager we can see oracle.exe consuming 100% CPU.
    I need to know which database this oracle.exe belongs to, and whether I can get the SQL for this process.
    Thanks

    On my Vista PC, I find that the easiest way to identify which instance an oracle.exe is, is the Task Manager. On the Processes tab, right-click the oracle.exe and then click Go To Service(s); that takes me to OracleServiceORCL or whichever instance it is.
    I'm sure there is a better way that a Windows person could tell us about. Having identified the instance, it is just normal tuning from there.
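    On Windows, sessions run as threads inside a single oracle.exe per instance, and v$process.spid holds the thread ID. A sketch (assuming you get the busy thread's ID from a tool such as Process Explorer that shows per-thread CPU):

    ```sql
    -- Map a CPU-hungry Windows thread ID to its database session
    SELECT s.sid, s.serial#, s.username, s.sql_hash_value
      FROM v$session s, v$process p
     WHERE p.addr = s.paddr
       AND p.spid = :thread_id;  -- the busy thread's ID
    ```

    From sql_hash_value you can then pull the statement text out of v$sqlarea.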

  • Compatibility of Oracle 7.2.3 on Solaris 8

    I'm trying to install Oracle 7.2.3 on Solaris 8. I know Oracle 7.2.3 is compatible with Sun Solaris 2.4 and 2.5. Is 7.2.3 also compatible with Solaris 8?

    Yes, I did set the semaphores (and restarted the machine afterwards), but maybe I did something wrong there, as they do not show up with the ipcs command.
    bash-2.03$ tail /etc/system
    *     Set an integer variable in the kernel or a module to a new value.
    *     This facility should be used with caution. See system(4).
    *     Examples:
    *     To set variables in 'unix':
    *          set nautopush=32
    *          set maxusers=40
    *     To set a variable named 'debug' in the module named 'test_module'
    *          set test_module:debug = 0x13
    set semmni=100
    set semmns=1024
    set semmsl=256
    set shmmax=4294967295
    set shmmin=1
    set shmmni=100
    set shmseg=10
    bash-2.03$ ipcs
    IPC status from <running system> as of Mon Feb 2 10:00:28 MET 2004
    T ID KEY MODE OWNER GROUP
    Message Queues:
    q 0 0x2e781d5 rw-rr-- root root
    q 2 0x1fc4 -Rrw-r--r-- root root
    Shared Memory:
    m 0 0x500005fa rw-rr-- root root
    Semaphores:
    bash-2.03$
    The question is, what did I do wrong there?
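    One likely explanation, with some checks (a sketch; the command names are the usual Solaris ones): ipcs lists only IPC objects that some process has actually created, so an idle machine shows no semaphores even when the /etc/system limits are correct. They appear only once the Oracle instance starts and allocates its semaphore sets.

    ```shell
    # Confirm the kernel picked up the /etc/system limits after the reboot
    sysdef | grep -i sem

    # Semaphore sets (and SGA shared memory segments) appear here only
    # after the instance has started and created them
    ipcs -sb
    ```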

  • DBSNMP consuming 100% of CPU

    I'm currently in the process of implementing Grid in our environment. I've created the repository database, installed the OMS, and deployed agents. Everything appears to be functioning. However, I began to notice that DBSNMP would connect to the targets, issue a metrics-gathering query which would consume 100% of CPU, and retain that consumption until I killed it (in some cases over an entire weekend). I'm running the OMS and repository database on Solaris 10 while the agents are running on Solaris 9. The Oracle versions are 10.2.0.3.
    Has anybody encountered this problem and if so, what did you do to solve it?
    Much appreciated.

    I haven't seen this issue; it looks like a big problem if it consumes 100% of CPU.
    I'm just installing Grid Control on Solaris 10. I'm almost done with the 10.2.0.1 installation, and I plan to upgrade GC to 10.2.0.3 next. You've had no problems upgrading, have you?
    Did you install the patches they recommend?

  • WebLogic cluster consumes 100% CPU and brings the server to its knees

    I have BEA WebLogic 10.3.1 clustering as part of FMW 11g clustering. This is a two-node cluster on Oracle VM running OEL 4.0 with 4GB memory and 4GB swap. The install went through with only minor difficulties (100% CPU for short periods), but during configuration a Java thread always consumes 100%.
    Now I am at the point where the shared drive is created (chapter 5.20 of the EDG) for the HA FileAdapter and persistence store, and I'm hitting the same 100% CPU problem. It looks like a thread-locking situation.
    ^-- Holding lock: java.io.InputStreamReader@6215c5d[thin lock]
    ^-- Holding lock: java.io.InputStreamReader@6215c5d[thin lock]
    I also noticed that the swap space is never used even though there is 4GB of it. This brings the VM to its knees. Thanks in advance for any response.
    Here is the error stack
    ========================
    <Feb 4, 2010 8:23:56 AM EST> <Error> <WebLogicServer> <BEA-000337>
    <[STUCK] ExecuteThread: '1' for queue:
    'weblogic.kernel.Default (self-tuning)' has been busy for "346" seconds working on the request
    "weblogic.kernel.WorkManagerWrapper$1@62171ee", which is more than the configured time (StuckThreadMaxTime) of "300" seconds.
    Stack trace: Thread-22 "[STUCK] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'
    " <alive, in native, suspended, priority=1, DAEMON>
    jrockit.net.SocketNativeIO.readBytesPinned(SocketNativeIO.java:???)
    jrockit.net.SocketNativeIO.socketRead(SocketNativeIO.java:31)
    java.net.SocketInputStream.socketRead0(SocketInputStream.java:???)
    java.net.SocketInputStream.read(SocketInputStream.java:107)
    weblogic.utils.io.ChunkedInputStream.read(ChunkedInputStream.java:149)
    java.io.InputStream.read(InputStream.java:85)
    com.certicom.tls.record.ReadHandler.readFragment(Unknown Source)
    com.certicom.tls.record.ReadHandler.readRecord(Unknown Source)
    com.certicom.tls.record.ReadHandler.read(Unknown Source)
    ^-- Holding lock: com.certicom.tls.record.ReadHandler@6254fe1[thin lock]
    com.certicom.io.InputSSLIOStreamWrapper.read(Unknown Source)
    sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:250)
    sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:289)
    sun.nio.cs.StreamDecoder.read(StreamDecoder.java:125)
    ^-- Holding lock: java.io.InputStreamReader@6215c5d[thin lock]
    java.io.InputStreamReader.read(InputStreamReader.java:167)
    java.io.BufferedReader.fill(BufferedReader.java:105)
    java.io.BufferedReader.readLine(BufferedReader.java:288)
    ^-- Holding lock: java.io.InputStreamReader@6215c5d[thin lock]
    java.io.BufferedReader.readLine(BufferedReader.java:362)
    weblogic.nodemanager.client.NMServerClient.checkResponse(NMServerClient.java:286)
    weblogic.nodemanager.client.NMServerClient.checkResponse(NMServerClient.java:311)
    weblogic.nodemanager.client.NMServerClient.start(NMServerClient.java:90)
    ^-- Holding lock: weblogic.nodemanager.client.SSLClient@62164a2[thin lock]
    weblogic.nodemanager.mbean.StartRequest.start(StartRequest.java:75)
    weblogic.nodemanager.mbean.StartRequest.execute(StartRequest.java:45)
    weblogic.kernel.WorkManagerWrapper$1.run(WorkManagerWrapper.java:63)
    weblogic.work.ExecuteThread.execute(ExecuteThread.java:198)
    weblogic.work.ExecuteThread.run(ExecuteThread.java:165)
    }

    A few things you can try first off:
    1. Turn off native IO.
    2. Use this flag:
    -DUseSunHttpHandler=true
    3. The stack seems to suggest communication with the Node Manager:
    weblogic.nodemanager.client.NMServerClient.checkResponse(NMServerClient.java:286)
    weblogic.nodemanager.client.NMServerClient.checkResponse(NMServerClient.java:311)
    Can you stop your Node Manager and see if it helps?
    If these don't help, contact Oracle Support.

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application (Oracle 12c), we are indexing a big XML field (stored as XMLType with secure file storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper, I cannot get decent performance, and especially not predictable performance: it looks like performance can fall sharply if the blocks from the TOKEN_INFO column are not in memory.
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$..$I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. I'm not sure whether this is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a secure file. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it is loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT also, but putting it in the keep cache is not as important as the token_info column of the DR$ I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when running ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
    XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a secure file and the size stays relatively small. Then you can load this column into the cache using a procedure similar to the following:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it is loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100); -- dbms_lob.read on a BLOB returns RAW, not VARCHAR2
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on this issue recently and can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter, and all those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you have with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is continuously done.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the in-memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
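    A hypothetical sketch of that 12.1.0.2 alternative (it requires the Database In-Memory option and a non-zero inmemory_size; the DR$ table name here is illustrative, not from this thread):

    ```sql
    -- Keep the $I table in the In-Memory column store instead of the keep pool
    ALTER TABLE dr$myindex$i INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY CRITICAL;
    ```

    PRIORITY CRITICAL asks the instance to populate the table at startup rather than on first scan.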
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    The track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because the trick from R. Ford's paper no longer works.
    What worked:
    First set the keep_pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16K.
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 avoids the direct path reads issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is almost exactly what metalink note 1645634.1 describes, but for a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying bug; maybe there is a workaround, but I did not find it on metalink.
    Other points of attention with text index creation (things that surprised me at first!):
    - if you use the dbms_pclxutil package, the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - this, in combination with the fact that on a RAC you may see no activity on the local box, can be very frightening: Oracle can choose to start the workers on the other node.
    I now understand much better how the text indexing works, and I think it is a great technology that can scale via partitioning. But as always the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • Calculating kernel parameters for Oracle 11g R2 DB on Solaris 10u9

    Hi Everyone,
    I have a query regarding calculating the kernel parameters for deploying an Oracle 11g R2 DB on a Solaris 10 (5.10 update 9) machine; we have a RAM size of 64GB.
    My question is how to calculate shared memory, shared memory identifiers, semaphores, and semaphore identifiers when creating the resource control for the project (user.oracle).
    And how do I find out the semaphore values currently allocated in the system?
    Thanks in Advance.
    Edited by: 898979 on Dec 15, 2011 10:24 PM

    Hi;
    Those settings are mentioned in the installation guide, which was already shared in a previous post.
    I suggest also seeing:
    Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2) [ID 169706.1]
    Regard
    Helios
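    As a sketch of the Solaris 10 resource-control approach the installation guide describes (the project name user.oracle comes from the question; the limit values are placeholders to be sized against your SGA and instance count):

    ```shell
    # Set IPC limits on the user.oracle project instead of /etc/system
    projmod -sK "project.max-shm-memory=(privileged,32G,deny)" user.oracle
    projmod -sK "project.max-sem-ids=(privileged,256,deny)"    user.oracle
    projmod -sK "project.max-shm-ids=(privileged,256,deny)"    user.oracle

    # Show the values currently in effect for the project
    prctl -n project.max-shm-memory -i project user.oracle
    ```

    prctl answers the second part of the question: it reports the resource-control values (including semaphore limits) currently in force for a running project or process.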

  • After starting the listener the server consumes 100% CPU

    Hi,
    When I start up the database tier (EBS 12.1.3 and DB 11.2.0.2) it starts well and the CPU usage is normal. But when I start the listener, the server (RHEL 5.8) starts consuming 100% CPU.
    alert.log details
    in file /ebs/UAT/bin/db/tech_st/11.2.0/admin/UAT_dcvlodbdev/diag/rdbms/uat/UAT/trace/UAT_j000_7601.trc:
    ORA-00600: internal error code, arguments: [13011], [262414], [1644193998], [33], [1644194002], [0], [], [], [], [], [], []
    ORA-06512: at "APPS.WF_BES_CLEANUP", line 488
    ORA-06512: at line 1
    Wed Dec 12 10:11:33 2012
    Sweep [inc][480592]: completed
    Wed Dec 12 10:13:17 2012
    Errors in file /ebs/UAT/bin/db/tech_st/11.2.0/admin/UAT_dcvlodbdev/diag/rdbms/uat/UAT/trace/UAT_j001_7624.trc (incident=480612):
    ORA-00600: internal error code, arguments: [13011], [262414], [1644193998], [33], [1644194002], [0], [], [], [], [], [], []
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Errors in file /ebs/UAT/bin/db/tech_st/11.2.0/admin/UAT_dcvlodbdev/diag/rdbms/uat/UAT/trace/UAT_j001_7624.trc:
    ORA-00600: internal error code, arguments: [13011], [262414], [1644193998], [33], [1644194002], [0], [], [], [], [], [], []
    ORA-06512: at "APPS.WF_BES_CLEANUP", line 488
    ORA-06512: at line 1
    Wed Dec 12 10:13:18 2012
    Sweep [inc][480612]: completed
    Wed Dec 12 10:14:18 2012
    Errors in file /ebs/UAT/bin/db/tech_st/11.2.0/admin/UAT_dcvlodbdev/diag/rdbms/uat/UAT/trace/UAT_ora_7814.trc (incident=481092):
    ORA-00600: internal error code, arguments: [kdsgrp1], [], [], [], [], [], [], [], [], [], [], []
    Incident details in: /ebs/UAT/bin/db/tech_st/11.2.0/admin/UAT_dcvlodbdev/diag/rdbms/uat/UAT/incident/incdir_481092/UAT_ora_7814_i481092.trc
    Wed Dec 12 10:14:23 2012
    Dumping diagnostic data in directory=[cdmp_20121212101423], requested by (instance=1, osid=7814), summary=[incident=481092].
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Wed Dec 12 10:14:24 2012
    Sweep [inc][481092]: completed
    Sweep [inc2][481092]: completed
    Wed Dec 12 10:16:50 2012
    Errors in file /ebs/UAT/bin/db/tech_st/11.2.0/admin/UAT_dcvlodbdev/diag/rdbms/uat/UAT/trace/UAT_j001_8058.trc (incident=481492):
    ORA-00600: internal error code, arguments: [13011], [262414], [1644193998], [33], [1644194002], [0], [], [], [], [], [], []
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Errors in file /ebs/UAT/bin/db/tech_st/11.2.0/admin/UAT_dcvlodbdev/diag/rdbms/uat/UAT/trace/UAT_j001_8058.trc:
    ORA-00600: internal error code, arguments: [13011], [262414], [1644193998], [33], [1644194002], [0], [], [], [], [], [], []
    ORA-06512: at "APPS.WF_BES_CLEANUP", line 488
    ORA-06512: at line 1
    Wed Dec 12 10:19:04 2012
    Incremental checkpoint up to RBA [0xf.ac257.0], current log tail at RBA [0xf.c3691.0]
    Wed Dec 12 10:20:56 2012
    Active Session History (ASH) performed an emergency flush. This may mean that ASH is undersized. If emergency flushes are a recurring issue, you may consider increasing ASH size by setting the value of ASHSIZE to a sufficiently large value. Currently, ASH size is 2097152 bytes. Both ASH size and the total number of emergency flushes since instance startup can be monitored by running the following query:
    select total_size,awr_flush_emergency_count from v$ash_info;
    Wed Dec 12 10:21:26 2012
    Sweep [inc][481492]: completed

    Please see these docs.
    ORA-600/ORA-7445/ORA-700 Error Look-up Tool [ID 153788.1]
    ORA-600 [13011] "Problem occurred when trying to delete a row" [ID 28184.1]
    Workflow Control Queue Clean Up Program Fails With ORA-25226 [ID 334237.1]
    Thanks,
    Hussein
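For the packaging step the alert log keeps pointing at, here is a minimal ADRCI sketch; the diag home and incident number 480612 are taken from the log excerpt above, and the command is only printed (a dry run), not executed:

```shell
# Build the ADRCI one-liner that would package incident 480612 for upload
# to Oracle Support. Diag home and incident number come from the alert log
# excerpt above; printed as a dry run rather than executed here.
INCIDENT=480612
DIAG_HOME="diag/rdbms/uat/UAT"
CMD="adrci exec=\"set home ${DIAG_HOME}; ips pack incident ${INCIDENT}\""
echo "$CMD"
```

Running the printed command on the database node produces a zip under the ADR home that Support can ingest, per Note 411.1.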

  • Is there a way to free up memory in Solaris

    Hi,
I am wondering if there is a way to free up memory in Solaris manually, the way we can do it in Linux, for example:
echo `/bin/date` "************* Memory Info Before *************"
free -m
sync
echo 1 > /proc/sys/vm/drop_caches
echo 2 > /proc/sys/vm/drop_caches
echo 3 > /proc/sys/vm/drop_caches
echo `/bin/date` "************* Memory Info After *************"
free -m
    Thanks.
    Regards
    Terry

    Hi,
I have two Solaris 11 VMs with Oracle Clusterware running on VirtualBox, both of them with 4 GiB RAM. The problem is they get slower and slower, and then all of a sudden one of them crashes. It happens so frequently that I just cannot work, and increasing RAM is out of the question. Previously I was running the same setup on Oracle Linux 5, where I was able to tweak the memory using the above script and never got a crash.
I need to find a way to tune the memory in Solaris so the VMs can stay up and I can do my work.
Here are the memory stats from one of the servers; this will give you an idea of what is happening, and you may be able to suggest a way to tune them.
Just to let you know, I am very new to Solaris, so every bit of information will be very helpful in getting this problem resolved.
echo ::memstat | mdb -k
Page Summary                Pages                MB  %Tot
Kernel                     202104               789   19%
ZFS File Data                9039                35    1%
Anon                       750232              2930   72%
Exec and libs               34059               133    3%
Page cache                  36889               144    4%
Free (cachelist)             6119                23    1%
Free (freelist)             10021                39    1%
Total                     1048463              4095
Please have a look at the above stats and suggest what I should do.
    Thank you very much for your assistance !
    Regards
    Terry
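Reading that ::memstat output back: the MB column is just pages times the page size, and the dominant line is Anon at 72%, i.e. process heap/SGA rather than file cache. A small sketch of the arithmetic, assuming the usual 4 KiB page size (`pagesize` prints the real value on your box):

```shell
# Convert the Anon line of ::memstat from pages to MB.
ANON_PAGES=750232
PAGE_SIZE=4096            # assumption; run `pagesize` to confirm
echo "Anon MB: $((ANON_PAGES * PAGE_SIZE / 1024 / 1024))"
# There is no drop_caches equivalent on Solaris. The one cache commonly
# capped is the ZFS ARC, via /etc/system, e.g.:
#   set zfs:zfs_arc_max=0x20000000
# But ZFS File Data is only 1% here, so the pressure comes from the Oracle
# processes themselves; shrinking SGA/PGA targets is the realistic fix.
```

It prints `Anon MB: 2930`, matching the 2930 MB column the kernel reported, which is why an ARC cap alone is unlikely to stop the crashes.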

  • Opening VI and consuming 100% CPU power

I am trying to debug my VI. The first few times the VI ran well, but suddenly the VI hung up. I had no choice but to end the LabVIEW process using the Task Manager. When I try to restart this VI, it fails to open, and the Task Manager indicates that the LabVIEW program is now consuming 100% of the CPU time while still failing to open the VI. It looks like it is entering an infinite loop. I cannot fix anything because I cannot get into my VI to change anything. LabVIEW just consumes all the CPU power and does nothing. What went wrong?
    Regards,
    Larry

    Okay.  That is very strange.  I am seeing the same behavior.  (I have LV 8.2)
When I first downloaded the VI, it opened just fine.  If I did a save-as, closed the VI, and reopened the copy, I got 99% CPU usage.  I reopened the one from the message: no problem.  I left that one open and tried opening the one I had saved.  It said I already had one in memory by that name, did I want to view the one in memory or replace it?  I said replace.  When it opened to the block diagram, everything was okay.  But when I clicked on the window to view the front panel, that is when the CPU usage jumped up again.
So somehow the problem is associated with the FP.  Nothing looks particularly wrong with your code.  The question is: how did you manage to get a copy of the file that isn't causing problems when you open it?
I think you will want to recreate the VI from scratch.  You are able to view a copy to see what you have, which is good.  And it doesn't look too complicated.
What is interesting is that if I delete everything from the VI's block diagram (leaving nothing apparent on the FP), save, close, and reopen, I still get the extreme CPU usage.  It seems confined to LV, as I am still able to switch over and work within the browser to answer this message.
    Message Edited by Ravens Fan on 04-16-2007 02:30 PM

  • Listener - consuming 100% CPU - problem

    Hi Gurus!
I have a problem with my listener in Oracle 10g. When I'm trying to start/stop it I see:
[oracle@ZB ~]$ lsnrctl stop
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 02-JAN-2010 16:18:47
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
but the listener is not starting. Does anybody know where the problem could be? It cannot start, yet the listener process is consuming 100% CPU.
This is my listener.ora:
# listener.ora Network Configuration File: /opt/oracle/network/admin/listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /opt/oracle)
      (PROGRAM = extproc)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.129)(PORT = 1521))
    )
  )
and my tnsnames.ora:
# tnsnames.ora Network Configuration File: /opt/oracle/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
ZKB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ZKB)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ZKB)
    )
  )
EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )
Where can I find the logs for the listener? /oracle/network/logs/listener.log ?
    help...
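On the log question: in 10g the listener writes to $ORACLE_HOME/network/log/<listener_name>.log by default. A small sketch, assuming ORACLE_HOME=/opt/oracle from the listener.ora above and the default listener name:

```shell
# Default 10g listener log location. ORACLE_HOME is taken from the
# listener.ora above and "listener" is the default listener name
# (both assumptions for this sketch).
ORACLE_HOME=/opt/oracle
LISTENER_NAME=listener
echo "${ORACLE_HOME}/network/log/${LISTENER_NAME}.log"
# A running listener reports the authoritative path as "Listener Log File"
# in the output of:  lsnrctl status
```

It prints `/opt/oracle/network/log/listener.log`; if the listener is too hung even for `lsnrctl status`, check that path directly.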

I have found the problem... VMware...
This is my last Oracle on a virtual machine...
A simple reboot of the virtual machine and it is working again... ehh...
