Oracle 11G consuming more memory

Dear experts,
I have upgraded Oracle 10g to 11g as part of one of my projects.
After the upgrade, 11g is consuming more OS-level resources: almost 15 GB of RAM without any load.
I would like to know the minimal OS-level memory footprint of 11g.
thanks in advance.
Regards,
shiva P

For an old BW system on Oracle 11g, I am not sure how heavily your system is loaded.
However, you can always try to optimize the resources available. If memory is tight, add swap space and monitor its usage.
Check the BW-specific parameters in these notes:
1431798 - Oracle 11.2.0: Database Parameter Settings
180605 - Oracle database parameter settings for BW
1013912 - FAQ: Oracle BW performance
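As a minimal sketch (assuming SYSDBA access), you can also compare what the instance is configured to use with what it has actually allocated, as opposed to what the OS reports:
show parameter memory_target
show parameter sga_target
show parameter pga_aggregate_target
-- memory components the instance has actually sized, in MB
select component,
       round(current_size/1024/1024) as current_mb,
       round(max_size/1024/1024)     as max_mb
  from v$memory_dynamic_components
 where current_size > 0
 order by current_size desc;
Keep in mind that the 15 GB reported at OS level may also include file system cache and non-Oracle processes.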
Divyanshu

Similar Messages

  • Exchange 2013 - The Microsoft Exchange Transport service is rejecting message submissions because the service continues to consume more memory than the configured threshold

I noticed at about noon that no emails had been received all day. I began to investigate and found that the MS Exchange Transport service had been set to deny email submission because it was using too much memory on the server (91%).
The error message makes me think that we may be getting abused by malware or something similar: “The Microsoft Exchange Transport service is rejecting message submissions because the service continues to consume more memory than the configured threshold.”
There are also several warning messages that list particular IP addresses and say that a connection from that IP was denied because the maximum number of connections (20) had already been reached.
    From what I can tell, all of the IP addresses are from Taiwan. 
    The time period for which some emails may be missing is from close of business yesterday ( 4/3/2014) through about 12:45 today (4/4/2014). 
    From the time I spent reading and trying to figure out the error, I think we may need to readjust our throttling policies to prevent this from happening. 
    The exchange server is currently running at 90%+ CPU and 50%+ memory usage the majority of the time, and I’m not sure how to fix it.
Also, I cannot get into EMS; I get an access-denied message from the destination computer (the Exchange server). I want to get in there to change the throttling policy back to default, since we disabled it.
    The Error reads:
    The WinRM client cannot process the request. The WinRM client tried to use Kerberos authentication mechanism, but the destination computer <Exchange> returned an 'access denied' error. Change the configuration to allow Kerberos authentication
    mechanism to be used or specify one of the authentication mechanism supported by the server. (How do I do this?) To use Kerberos, specify the local computer name as the remote destination. (I'm trying to use EMS while logged into the local Exchange server)
    Also verify that the client computer and the destination computer are joined to a domain. (Exchange is on our domain, and the computer trying to connect is the same computer) To use basic, specify the local computer name as the remote destination, specify
Basic authentication and provide a user name and password. Possible authentication mechanisms reported by server.
    At line:1 char:1
    + New-PSSession -ConnectionURI "$connectionUri" -ConfigurationName Microsoft.Excha ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingTransportException + FullyQualifiedErrorId : AccessDenied,PSSessionOpenFailed
    I assumed control of this exchange system already in place and I do not have much experience with exchange 2013 or server 2012. I do know 2008, but that doesn't help very much in this situation.
    Recent changes to the system:
About three days ago we switched our sessions policy to allow many more connections, and I believe this caused the issue. This is what I changed:
I created the registry DWORD (32-bit) "Maximum Allowed Sessions Per User" under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem and set the value to 1000.
I have just changed it from 1000 back to 10 and am hoping this solves it. So far it has not.
Also, I am not the best with the shell or command-line interfaces. Any help would be wonderful!

    Hi,
Yes, it could be a hardware performance issue. Try recycling the Transport service and see whether the issue persists.
    Thanks,
    Simon Wu
    TechNet Community Support

  • How can I find which program consumes more memory on the server

    Dear Experts,
Please do the needful on the issue below.
One of my servers is very slow. I have checked all areas but did not find any issues.
Now I have one doubt: is there any problem in the ABAP coding?
I need to check how much memory the SQL statements take, but I do not know how to do this.
Please tell me how I can check which SQL statements are consuming more memory on my server.
    Regards

    Hi,
"Please do the needful on the issue below" - for the same, I suppose?
Use transaction ST03, choose the period (day, week or month), select "Memory Use Statistics" and sort by "Average Total Memory Usage" or "Maximum Extended Memory Usage".
Check especially whether some programs use Private Memory.
    Regards,
    Olivier

  • Edge server consuming more memory

    Hi
We have been using Flash Media Server 3.0.1 for online streaming for more than a year. Occasionally (once every 2-3 months) the edge server consumes more memory, utilizing more than 80% of swap.
I am new to Flash and unable to figure out why this is happening.
FMS is running on a Unix box.
Can anybody help me understand why this is happening?
    Thanks in advance,
    Bhaskar

    Hello,
8 GB is the strict minimum for Solution Manager itself, but your OS also needs at least 2 GB, and if your DB is running on the same host it will consume a few GB as well.
Thus I would say the minimum RAM should be 12 GB (8 for Solution Manager, 2 for the DB, 2 for the OS).
    Regards
    Extract from SAP Solution Manager 7.1 Sizing Guide

  • Firefox 7.0.1 consumes more memory than just FF 7.0!

You have got to be kidding me, Mozilla! You claimed that Firefox 7.0 would consume less memory than its predecessors. Well, FF 7.0 does consume less memory, but when I upgraded to Firefox 7.0.1 and used the browser, it now consumes MORE memory than FF 7.0 did before: nearly 300,000 KB to nearly 500,000 KB, and that was on my first use! On the second and later uses, Firefox 7.0.1 consumes less memory than it did the first time. Mozilla, when will you wise up and fix the memory leaks again?

That is nothing; my Firefox 7.0.1 consumes 2.47 GB of RAM (2,473,805 KB to be exact) with just 2 open tabs...
My system has 8 GB of RAM and I'm running out of memory...
Most of the time when I browse, pages load slowly and Firefox freezes during scrolling (among an assortment of other problems).

  • Oracle 11g AMM (Automatic Memory Management)

    Hi All,
I have a very powerful server for my production database: 24 processors with 6 cores each and 74 GB of RAM. The server will host only one production database. I want to use AMM for this database and allocate the maximum memory to Oracle by setting memory_target. By default /dev/shm is set to 37 GB, but I want to increase it to at least 55 GB. I know I can get this changed by my system admin, but I want to know how much memory I should leave for the OS.
    Please help me on sizing this.
    Thanks,
    Arun Singh

    From MOS ID 169706.1
    Automatic Memory Management
    Starting with Oracle Database 11g, the Automatic Memory Management feature requires more shared memory (/dev/shm) and file descriptors. The shared memory should be sized to be at least the greater of MEMORY_MAX_TARGET and MEMORY_TARGET for each Oracle instance on the computer. To determine the amount of shared memory available, enter the following command: # df -k /dev/shm/
    Note: MEMORY_MAX_TARGET and MEMORY_TARGET cannot be used when LOCK_SGA is enabled or with huge pages on Linux
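A rough sketch of the corresponding database-side settings (the 50G below is purely illustrative, not a recommendation): MEMORY_MAX_TARGET must fit inside the /dev/shm size reported by df, otherwise the instance fails to start with ORA-00845.
show parameter memory_max_target
show parameter memory_target
-- example only: keep the targets below the size shown by "df -k /dev/shm"
alter system set memory_max_target = 50G scope = spfile;
alter system set memory_target     = 50G scope = spfile;
The instance must be restarted for the SCOPE=SPFILE changes to take effect.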

  • FBL3N consumes more memory

    Hi all,
When I run FBL3N, it consumes more and more memory (~6 GB) on the server. I think there is a problem. I applied OSS note 194842, but the problem was not solved. How can I resolve this problem?
    Best regards,
    Munur

    Hi Munur,
I just did a search and came across your message; did you ever find a solution to your problem?
We are experiencing a similar problem in our system running ECC 6.0. I have seen users consuming in excess of 6.5 GB of memory with transaction FBL3N in a single session! Surely this is not normal?
We are currently on ERP 6.0 SPS10, soon to be updated to SPS13, which I'm hoping may "fix" the issue.
    Anybody have any ideas ?
    Regards,
    Nelis

  • SQL consuming more memory without any process

    Hi experts,
SQL Server is consuming 95% of physical memory without any active process. How can we overcome this problem?
Details:
SQL Server 2008 64-bit on Windows Server 2008 SP2.
    Thanks
    Selva

"SQL Server is consuming 95% of physical memory without any active process. How can we overcome this problem?"
This is not a problem unless you are actually seeing out-of-memory errors.
Use the following counters to set a correct value for max server memory; refer to the perfmon counters to get an estimate of the memory utilized by SQL Server and then set an accurate value:
SQLServer:Buffer Manager--Buffer cache hit ratio (BCHR)
SQLServer:Buffer Manager--Page life expectancy (PLE)
SQLServer:Buffer Manager--Checkpoint pages/sec
SQLServer:Memory Manager--Memory Grants Pending
SQLServer:Memory Manager--Target Server Memory
SQLServer:Memory Manager--Total Server Memory
Please read the article below for details about these counters:
    http://social.technet.microsoft.com/wiki/contents/articles/22316.sql-server-memory-and-troubleshooting.aspx#Does_my_system_have_low_memory
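Once you have an estimate from those counters, 'max server memory' itself is set with sp_configure; the 4096 MB below is only a placeholder, not a recommendation:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- placeholder value; derive the real number from the perfmon counters above
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;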

  • Which is better to install Oracle 11g database based on ASM or Filesystem

We will install two sets of Oracle 11.2.0.3 on Red Hat Linux 5.6 and then configure Data Guard for them: one will be the primary DB server, the other a physical standby DB server. The Oracle DB storage is a 6 TB SAN disk array. There are two options for managing the DB datafiles:
1. Install Oracle ASM
2. Create a traditional OS filesystem
Which is better? In the past, our 10g Data Guard environment was not based on Oracle ASM.
Some think that if we adopt Oracle ASM, the shortcomings are:
1. There is one more instance, which will consume more memory and resources.
2. The ASM storage cannot be seen directly at the OS level (e.g. with the "df" command), so the disk utilization monitoring job will be more difficult; at least it cannot be supervised at the OS level.
3. The DB needs a daily incremental backup (Mon-Sat) to a local backup drive, and the backup job must be done by RMAN rather than a user-managed script.
Can anyone provide some advice? Thanks very much in advance.

user5969983 wrote:
We will install two sets of Oracle 11.2.0.3 on Red Hat Linux 5.6 and configure Data Guard for them: one will be the primary DB server, the other a physical standby DB server. The Oracle DB storage is a 6 TB SAN disk array. There are two options for managing the DB datafiles: install Oracle ASM, or create a traditional OS filesystem. Which is better? In the past, our 10g Data Guard environment was not based on Oracle ASM.
ASM provides a host of new features in terms of data management and performance - to the extent that you can rip out the entire existing storage system and replace it with a brand new one without a single second of database downtime.
Some think that if we adopt Oracle ASM, the shortcomings are:
1. There is one more instance that will consume more memory and resources.
Not really relevant on 64-bit h/w architecture, which removes limitations such as 4 GB of addressable memory. On the CPU side... heck, my game PC at home has an 8-core 64-bit CPU. Single-die and dual-core CPUs belong to the distant past.
Arguing that an ASM instance has overheads would be silly, and it totally ignores the wide range of real and tangible benefits that ASM provides.
2. The ASM storage cannot be seen at the OS level with commands such as "df", so monitoring disk utilization will be more difficult; at least it cannot be supervised at the OS level.
That is A Very Good Thing (tm). Managing database storage from the o/s level is flawed in many ways.
3. The DB needs a daily incremental backup (Mon-Sat) to a local backup drive, and the backup job must be done by RMAN rather than a user-managed script.
RMAN supports ASM fully.
I have stopped using cooked file systems for Oracle - I prefer ASM first and foremost. The only exceptions are tiny servers with a single root disk that needs to be used for the kernel, database s/w, and database datafiles (currently these are mostly Oracle XE systems in my case, configured that way because XE does not support ASM, and used as a pure cost decision).
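On the monitoring concern (point 2): you do not need df for ASM; disk group usage is visible from the instance (and from the command line via asmcmd lsdg). A minimal sketch:
-- rough ASM equivalent of "df": space per disk group
select name, total_mb, free_mb,
       round(100*free_mb/total_mb, 1) as pct_free
  from v$asm_diskgroup;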

  • Oracle 11g R2 - AWR Section UnOptimized Read Reqs / Optimized Read Reqs

    Hello guys,
Using Oracle 11g R2 more and more, I have been checking out the new AWR report and its sections.
I found a section like this:
SQL ordered by Physical Reads (UnOptimized)   DB/Inst: SID/sid  Snaps: 20296-202
    -> UnOptimized Read Reqs = Physical Read Reqts - Optimized Read Reqs
    -> %Opt   - Optimized Reads as percentage of SQL Read Requests
    -> %Total - UnOptimized Read Reqs as a percentage of Total UnOptimized Read Reqs
    -> Total Physical Read Requests:         151,508
    -> Captured SQL account for   25.3% of Total
    -> Total UnOptimized Read Requests:         151,508
    -> Captured SQL account for   25.3% of Total
    -> Total Optimized Read Requests:               1
-> Captured SQL account for    0.0% of Total
What the heck are "Optimized Read Reqs" and "UnOptimized Read Reqs"? These terms now appear all over the Oracle 11g R2 AWR report.
Does anyone know what these terms mean and how they are defined? I can't find any information on the web or in the documentation.
    Thanks guys!

    Hello,
"If my guess is close, then 'Buffer Hit' (Instance Efficiency Percentages) could be as low as 0% from this report."
No it isn't... check it here:
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.96       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.50    In-memory Sort %:  100.00
                Library Hit   %:   99.09        Soft Parse %:   89.55
             Execute to Parse %:   95.93         Latch Hit %:   98.93
Parse CPU to Parse Elapsd %:   68.68     % Non-Parse CPU:   98.81
Sometimes I wonder why Oracle introduces such new terms/measurements without documenting them.
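For what it is worth, the underlying counters are visible in v$sysstat, and as far as I can tell from the 11.2 documentation an "optimized" read request is one satisfied from Exadata cell flash cache or the Database Smart Flash Cache rather than from disk. A small sketch:
select name, value
  from v$sysstat
 where name in ('physical read total IO requests',
                'physical read requests optimized');
-- per-statement counts are in v$sql (optimized_phy_read_requests)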
    Regards

  • /dev/shm on Oracle Linux 6.x to run Oracle 11g R2 - manual configuration?

    Hello
    We are building a server to run Oracle 11g R2 database (11.2.0.3 x64) on Oracle Linux 6.2 with UEK R2.
Our preference is to use AMM to have Oracle 11g R2 manage memory. We may impose some minimum SGA and PGA memory allocations, but we basically aim to use MEMORY_TARGET to manage overall memory.
    By default Linux makes the size of /dev/shm ~50% of server physical RAM, as far as I can tell.
    Here is the /etc/fstab entry created by the installation:
    tmpfs /dev/shm tmpfs defaults 0 0
Given that this Linux server will only run the Oracle 11g R2 database and some monitoring software, almost no application code will run on the server. The application code is Java based and will run on a separate application server.
Can I change the /etc/fstab entry for /dev/shm to manually increase its size to ~80-90% of the server's physical RAM? Is that a good idea?
The server is 64-bit with 64 GB of RAM, so I am thinking of manually making /dev/shm ~55 GB, leaving ~8 GB for other purposes.
Right now it is about 32 GB (50%?) if I leave the /dev/shm 'defaults' in place.
    many thanks

    thanks,
    I have read the doc (what little there is on this topic).
I have asked on the database forum...
Just FYI, below is the proof:
    SQL> show parameter mem
    NAME                    TYPE     VALUE
    hi_shared_memory_address     integer     0
    memory_max_target          big integer 4G
    memory_target          big integer 0
    shared_memory_address     integer     0
    SQL> show parameter ga
    NAME                    TYPE     VALUE
    lock_sga               boolean     FALSE
    pga_aggregate_target          big integer 1600M
    pre_page_sga          boolean     FALSE
    sga_max_size          big integer 3G
    sga_target               big integer 1600M
It still does not work.
And I can't set memory_max_target to 0, because then I get an error on startup:
    SQL> alter system set memory_max_target=0 scope=spfile;
    System altered.
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup;
    ORA-01078: failure in processing system parameters
    ORA-00843: Parameter not taking MEMORY_MAX_TARGET into account
    ORA-00849: SGA_TARGET 3221225472 cannot be set to more than MEMORY_MAX_TARGET 0.
BUT if memory_max_target is > 0, the alert log says hugepages cannot be used.
It feels like a catch-22...
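If the aim is hugepages, the usual way out of this catch-22 (as far as I know) is to remove the MEMORY_* parameters from the spfile entirely rather than setting them to 0, and size the SGA/PGA explicitly; a sketch using the parameters already shown above:
alter system reset memory_target     scope=spfile sid='*';
alter system reset memory_max_target scope=spfile sid='*';
-- sga_target / sga_max_size / pga_aggregate_target above stay as they are
-- then restart the instance; the SGA can now be backed by hugepages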
    thanks
    Edited by: yurib on Jun 1, 2012 4:53 PM

  • How to know the amount of Oracle 11g paged-out memory (SGA and PGA)?

How can I find the amount of Oracle 11g memory (SGA and PGA) being paged out on Sun Solaris 10 and on Linux?
I need to know how much Oracle memory is being paged out, both in total and for a single Oracle server process.
    thanks

    You can monitor the paging with vmstat or sar commands.
    http://download.oracle.com/docs/cd/B28359_01/server.111/b32009/tuning.htm#sthref500
    You can also get the paging information on OEM home page if configured for your database.
But I don't know of a method to find out how much memory per session or server process is getting paged out.
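A sketch for at least the Oracle side of it: v$process shows how much PGA each server process has allocated (by OS pid), which you can then cross-check against OS paging tools such as vmstat or pmap:
select spid,
       round(pga_used_mem /1024/1024, 1) as pga_used_mb,
       round(pga_alloc_mem/1024/1024, 1) as pga_alloc_mb,
       round(pga_max_mem  /1024/1024, 1) as pga_max_mb
  from v$process
 order by pga_alloc_mem desc;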

  • The danger of memory target in Oracle 11g - request for discussion.

    Hello, everyone.
This is not a question, but rather a request for discussion.
    I believe that many of you heard something about automatic memory management in Oracle 11g.
    The concept is that Oracle manages the target size of SGA and PGA. Yes, believe it or not, all we have to do is just to tell Oracle how much memory it can use.
But I have a big concern about this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
    So what would happen when Oracle dynamically changes the target size of PGA? Following is a simple demonstration of my concern.
    UKJA@ukja116> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    -- Configuration
    *.memory_target=350m
    *.memory_max_target=350m
    create table t1(c1 int, c2 char(100));
    create table t2(c1 int, c2 char(100));
    insert into t1 select level, level from dual connect by level <= 10000;
    insert into t2 select level, level from dual connect by level <= 10000;
    -- First 10053 trace
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2;
    alter session set events '10053 trace name context off';
    -- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
      vc       sys_refcursor;
      vs        varchar2(1000);
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 10000000 loop
        execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
              into va;
        if mod(idx, 1000) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it!');
            exit;
          end if;
        end if;
      end loop;
end;
/
    -- As to alert log file,
    25000th execution
    26000th execution
    27000th execution
    28000th execution
    29000th execution
    30000th execution
    yep, I got it! <-- the pga target changed with 30000th hard parse
    -- Second 10053 trace for same query
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2;
alter session set events '10053 trace name context off';
With the above test case, I found that:
    1. Oracle invalidates the query when internal pga aggregate size changes, which is quite natural.
    2. With changed pga aggregate size, Oracle recalculates the cost. These are excerpts from the both of the 10053 trace files.
    -- First 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 11468 KB
    _smm_px_max_size                    = 28672 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    -- Second 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 13107 KB
    _smm_px_max_size                    = 32768 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
Bug Fix Control Environment
The 10053 trace file clearly says that Oracle recalculates the cost of the query when the internal PGA aggregate target size changes. So there is a real danger of an unexpected plan change while Oracle dynamically controls the memory segments.
I believe this is designed behavior, but the negative side effect is not negligible.
I would just like to hear your opinions on this behavior.
    Do you think that this is acceptable? Or is this another great feature that nobody wants to use like automatic tuning advisor?
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

I made a slight modification to my test case to get a mixed workload of hard parses and logical reads.
    *.memory_target=200m
    *.memory_max_target=200m
    create table t3(c1 int, c2 char(1000));
    insert into t3 select level, level from dual connect by level <= 50000;
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 1000000 loop
        -- try many patterns here!
        execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
        if mod(idx, 100) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          for p in (select ksppinm, ksppstvl
              from sys.xm$ksppi i, sys.xm$ksppcv v
              where i.indx = v.indx
              and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
              sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
          end loop;
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
            exit;
          end if;
        end if;
      end loop;
    end;
/
This test case showed an expected and reasonable result, like the following:
    100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    300th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    400th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    500th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 37748736
    __pga_aggregate_target = 58720256
yep, I got it! pat1=83886080, pat2=58720256
Oracle kept bouncing memory between the shared pool and the buffer cache, and around the 1200th execution it suddenly stole some memory from the PGA target area to increase the db cache size.
(I'm still in the dark about 11g's automatic memory target management. More research is needed!)
I think this is very clear and natural behavior. I just want to point out that it could lead to unwanted catastrophe in special cases, especially where there are logic holes and bugs.
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

  • Oracle 11g - Memory used for sorting

    Hi everyone,
    I would like to know how I could analyze memory used for sorting in Oracle 11g. When I run the below query, it returns 1531381.
select value from v$sysstat where name like 'sorts (memory)';
But when I check the sort_area_size parameter in v$parameter, it returns 65536. Does this mean my database is using more memory for sorting than sort_area_size, or is the way I interpret the v$sysstat view and sort_area_size wrong? What is the best way to monitor memory usage for sorting? Thanks in advance.
    Regards,
    K.H
    Edited by: K Hein on Apr 5, 2012 8:16 PM

Check the value of pga_aggregate_target:
    http://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams157.htm
    Note:
    Oracle does not recommend using the SORT_AREA_SIZE parameter unless the instance is configured with the shared server option. Oracle recommends that you enable automatic sizing of SQL working areas by setting PGA_AGGREGATE_TARGET instead. SORT_AREA_SIZE is retained for backward compatibility.
As for the best way to monitor memory usage for sorting: try v$sort_usage
or v$tempseg_usage:
col size for a22
col sid_serial for a22
    SELECT b.tablespace,
            ROUND(((b.blocks*p.value)/1024/1024),2)||' MB' "SIZE",
            a.sid||','||a.serial# SID_SERIAL,
            a.username,a.osuser,
            a.program
       FROM sys.v_$session a,
            sys.v_$sort_usage b,
            sys.v_$parameter p
      WHERE p.name  = 'db_block_size'
        AND a.saddr = b.session_addr
    ORDER BY b.blocks;
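To see the in-memory side as well, a sketch against v$pgastat and v$sql_workarea_histogram shows overall PGA use and whether work areas (sorts, hash joins) run optimally in memory or spill to temp (one-pass / multi-pass):
select name, value
  from v$pgastat
 where name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');
select low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
  from v$sql_workarea_histogram;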

  • Oracle taking more memory ?

    hi
I have Oracle running on Solaris with the SGA set to 3 GB, but the command below shows multiple processes for PRDLIVE
taking more than 3 GB of memory in total. Occasionally I see Oracle shutting down due to an out-of-memory issue...
    ps -eo pid,pmem,vsz,rss,comm | sort -rnk2 | head
    18688 11.4 3381200 1859272 oraclePRDLIVE
    18649 11.4 3377664 1847864 oraclePRDLIVE
    18557 9.6 3377392 1553744 ora_w000_PRDLIVE
    18555 9.6 3377272 1550384 ora_smco_PRDLIVE
    18703 9.2 2058304 1489584 oracleTEST
    14420 9.2 2065448 1494536 oracleTEST
    14414 9.2 2061368 1485776 oracleTEST
    18690 9.1 2052264 1483248 oracleTEST
    18584 9.1 2050200 1480608 ora_w000_TEST
    18515 8.1 3387888 1310160 oraclePRDLIVE
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Solaris: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    cat /etc/release
    Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
    Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
    Assembled 23 August 2011
    PMON (ospid: 19876): terminating the instance due to error 490
    ORA-04030: out of process memory when trying to allocate 4088 bytes (PLS CGA hp,pdz2M87_Allocate_Permanent)

    user9182826 wrote:
PMON (ospid: 19876): terminating the instance due to error 490
ORA-04030: out of process memory when trying to allocate 4088 bytes (PLS CGA hp,pdz2M87_Allocate_Permanent)
04030, 00000, "out of process memory when trying to allocate %s bytes (%s,%s)"
    // *Cause:  Operating system process private memory was exhausted.
Oracle is the victim, not the culprit.
The problem is at the OS level and the fix must be made at the OS level.
