Increase MDB pool size

Hi.
Is there any way to increase the pool size for a message-driven bean (EJB 3) in NetWeaver CE 7.1?
Can this be done through the NWA application?
Regards,
Andrew

You can do this at design time by specifying the properties InitialSize, MaxSize and ResizeStep in the ejb-j2ee-engine.xml of your application:
<enterprise-bean>
   <bean-props>
      <property>
         <property-name>MaxSize</property-name>
         <property-value>1000</property-value>
      </property>
      <property>…</property>
   </bean-props>
</enterprise-bean>
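For completeness, a fuller sketch of the same descriptor with all three properties mentioned above could look like this (the InitialSize and ResizeStep values are illustrative, not from the original post):
<enterprise-bean>
   <bean-props>
      <property>
         <property-name>InitialSize</property-name>
         <property-value>10</property-value>
      </property>
      <property>
         <property-name>MaxSize</property-name>
         <property-value>1000</property-value>
      </property>
      <property>
         <property-name>ResizeStep</property-name>
         <property-value>10</property-value>
      </property>
   </bean-props>
</enterprise-bean>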
Check also the reference about the properties: http://help.sap.com/saphelp_nwce10/helpdata/en/44/ee019ab68d27dfe10000000a1553f6/content.htm
BR, Sergei

Similar Messages

  • Increased Shared Pool size longer hot backup time?

    Hello,
    I have a hot backup that usually takes 2 hours to complete. Then we had to increase the Shared Pool size from 280 MB to 380 MB due to a performance issue.
    One week after the increase, the hot backup is taking 4 hours to complete. Since there were no other changes to the system, I suspect the increase of the Shared Pool
    from 280 MB to 380 MB might be the reason for such a drastic increase in backup time.
    I would like to get your comments on my suspicion.
    thank you

    Paul.S wrote:
    One week after the increase, the hot backup is taking 4 hours to complete. Since there were no other changes to the system, I suspect the increase of the Shared Pool from 280 MB to 380 MB might be the reason for such a drastic increase in backup time.
    Hmm... at first glance it does not seem that this is the problem, but perhaps it contributes to it.
    When a SQL hits the SQL engine, the first thing the SQL engine does is determine whether there is an existing parsed and ready-to-use copy of that SQL. If there is, that existing SQL is re-used - thus the name SQL Shared Pool. The resulting soft parse of the SQL is a lot faster (or should be) than a hard parse. A hard parse is where there is no re-usable copy, and the SQL needs to be parsed, validated, its execution plan determined, and so on.
    Kind of like re-using an existing compiled program (with different input data) versus compiling that program before using it (and that is what source SQL is - a program that needs to be compiled).
    Okay, now what happens if there are hundreds of thousands of SQLs in your shared pool? The scan that the SQL engine does to determine whether there is a re-usable cursor becomes pretty slow. A soft parse quickly becomes very expensive.
    The problem is typically clients that create SQLs that are not sharable (where the input data is hardcoded into the program, instead of input variables to the program). In other words, SQLs that do not use bind variables.
    Each and every hardcoded SQL that hits the SQL engine is a brand new SQL, and requires storage in the SQL Shared Pool. The Shared Pool footprint grows, you start getting errors like insufficient shared pool memory, and you increase the shared pool - which means even more space to store even more unique non-sharable SQLs, making soft and hard parses even more expensive.
    Like moving the wall a few metres further away and then running into it even faster.
    Now if the backup fires off a load of SQLs against the SQL engine, the previously fast soft/hard parses can be a lot slower, thanks to a larger shared pool that now caters for more junk than before.
    Unsure if this is the problem that you are experiencing, but assuming that your suspicion is correct, it offers an explanation as to why there can be a degradation in performance.
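    To make the bind-variable point above concrete, here is a small SQL*Plus sketch (the table and column names are invented for illustration): each distinct literal produces a textually unique statement that must be hard parsed and stored separately, while the bind form is parsed once and soft parsed on re-use.
    -- Hard-coded literals: every distinct value creates a brand new statement,
    -- so each one is hard parsed and takes its own space in the shared pool
    SELECT order_total FROM orders WHERE customer_id = 1042;
    SELECT order_total FROM orders WHERE customer_id = 1043;
    -- Bind variable: one shared cursor, re-used (soft parsed) on every execution
    VARIABLE cust NUMBER
    EXEC :cust := 1042
    SELECT order_total FROM orders WHERE customer_id = :cust;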

  • DNS not resolving to new machines on network after increasing DHCP pool size

    Hello,
    I am having a very strange issue with connecting new machines to reach the internet.
    We have an ASA 5505 on which the previous tech configured the DHCP pool as 192.168.1.60 - 192.168.1.110.
    We ended up reaching our limit, so I changed it to 192.168.1.60 - 192.168.187.
    The next day when I arrived at work, our DC was hung on Windows updates. Once we got everything back up, every computer currently on the network could reach the internet/VPN tunnels etc. So (continuing with my day) I created a new server in a VM (Hyper-V).
    I can ping everything internally (even the router, 192.168.1.1), but I cannot resolve DNS. I have configured a static IP and tried a dynamic IP.
    I have looked for any ACL that would block addresses outside the range of the old DHCP pool, but no luck.
    On my local machine I can ping the DNS addresses, but not on the new server.
    Can anyone point me in the right direction to where to look for this issue?

    I ended up figuring out what the issue was.
    Since it was in a Hyper-V VM, the hosting server had to be updated to SP1.
    Once that completed and the host rebooted, the VM in question got an IP address.

  • Shared pool size

    Hi All,
    DB:oracle9iR2
    os:solaris
    How do I get the shared pool usage, free and total size, and hit ratio in Oracle 9iR2? Can anyone help me? I am looking for something like:
    POOL BYTES MB
    shared pool used :
    shared pool free :
    shared pool (Total):
    =================
    Shared_pool hit ratio:
    thanks.

    Hi All,
    thank you for all the responses..
    Db:oracle 9iR2
    os :solaris
    Actually I am facing the problem below.
    prob: ORA-00604: error occurred at recursive SQL level 2
    ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
    che")
    Wed Feb 8 19:33:43 2012
    Errors in file /ora/admin/cddp/bdump/cddp_cjq0_2601.trc:
    ORA-00604: error occurred at recursive SQL level 2
    ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
    che")
    Wed Feb 8 19:33:43 2012
    Errors in file /ora/admin/cddp/bdump/cddp_cjq0_2601.trc:
    ORA-00604: error occurred at recursive SQL level 2
    ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
    che")
    Wed Feb 8 19:33:48 2012
    Errors in file /ora/admin/cddp/bdump/cddp_cjq0_2601.trc:
    ORA-00604: error occurred at recursive SQL level 2
    ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
    che")
    ========================================
    I was running with a 200 MB shared pool. A couple of days back I unexpectedly got the above error; as a temporary solution I flushed the shared pool.
    The next day the same error was repeated, so I increased the shared pool size to 420 MB.
    While monitoring the DB, it is using shared pool memory up to 400 MB with an average shared pool hit ratio of 94.5% (the database was started recently).
    Earlier shared_pool size:200MB
    Now:420 MB
    Avg usage:up to 400MB
    My questions are:
    1) If we have many different SQL statements in the shared pool, won't Oracle flush the shared pool (i.e. age entries out based on LRU or some algorithm) if any program needs memory in the shared pool?
    2) Will fragmentation cause the above error?
    3) Can anyone explain how to check what is going on in the shared pool internally, and why it is using 400 MB compared to the earlier average usage of 170 MB? Any idea how to find the root cause?
    4) Will the plan table cause any issue?
    Can anyone explain this to me?
    thanks..
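    For the used/free/total and hit-ratio template in the original question, a minimal sketch against the standard 9i views (v$sgastat for the sizes, v$librarycache for the hit ratio) could look like this:
    -- Total and free shared pool memory, in MB
    SELECT SUM(bytes)/1024/1024 AS shared_pool_total_mb
    FROM   v$sgastat
    WHERE  pool = 'shared pool';
    SELECT bytes/1024/1024 AS shared_pool_free_mb
    FROM   v$sgastat
    WHERE  pool = 'shared pool' AND name = 'free memory';
    -- used = total - free
    -- Library cache hit ratio (the usual "shared pool hit ratio" figure)
    SELECT ROUND(SUM(pinhits)/SUM(pins)*100, 2) AS library_cache_hit_ratio_pct
    FROM   v$librarycache;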

  • Mdb ejb won't load using steady-pool-size

    I have a message driven bean I'm deploying in an .ear file. The sun-ejb-jar.xml file has the following:
    <sun-ejb-jar>
       <enterprise-beans>
          <unique-id>1</unique-id>
          <ejb>
             <ejb-name>LoggerEJB</ejb-name>
             <jndi-name>com.ecc.utils.LoggerTopic</jndi-name>
             <mdb-connection-factory>
                <jndi-name>com.ecc.utils.JMSTopicConnectionFactory</jndi-name>
             </mdb-connection-factory>
             <bean-pool>
                <steady-pool-size>2</steady-pool-size>
                <resize-quantity>1</resize-quantity>
                <max-pool-size>5</max-pool-size>
                <pool-idle-timeout-in-seconds>600</pool-idle-timeout-in-seconds>
             </bean-pool>
          </ejb>
       </enterprise-beans>
    </sun-ejb-jar>
    Yet I see no evidence it is pre-loading my MDB EJBs. I have output statements and logging that would indicate an instance has actually loaded, and I am seeing none of it. So, what magic trick does it take to get the MDB to actually use its <steady-pool-size> setting in the descriptor file?
    Also, how can you tell what ejbs and how many instances are loaded? I see nothing in the Console for this. Also, how can you monitor a topic/queue to see if anything is getting sent?
    Tony F

    Does the ext directory have php_oci8.dll? In the original steps the PHP dir is renamed. In the given php.ini the extension_dir looks like it has been updated correctly. Since PHP distributes php_oci8.dll by default, I reckon there is a very good chance that the problem was somewhere else. Since this is an old thread, I don't think we'll get much value from speculation.
    -- cj

  • Setting max bean pool size in MDB

    Hi,
    I need to set the max bean pool size for my MDB to 1. This MDB is a part of my application and is packaged in an ear.
    I tried to set it with the following annotation -
    import javax.ejb.*;
    @MessageDriven(
        mappedName = "MyQueue",
        name = "MyMDB",
        activationConfig = {
            @ActivationConfigProperty(propertyName = "maxBeansInFreePool", propertyValue = "1"),
            @ActivationConfigProperty(propertyName = "initialBeansInFreePool", propertyValue = "1"),
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
        })
    However, this does not seem to work since I see the Current pool count on the WLS console as 3 after processing is done.
    After looking at various posts in this forum, I also tried it with weblogic ejbgen as follows-
    import weblogic.ejbgen.*;
    @MessageDriven(
        ejbName = "MyMDB",
        destinationType = "javax.jms.Queue",
        initialBeansInFreePool = "1",
        maxBeansInFreePool = "1",
        destinationJndiName = "MyQueue")
    However, with this the MDB did not get deployed in WLS.
    I am using Weblogic 10.3 / EJB 3.0.
    Any help on this is greatly appreciated.
    Thanks
    Meera

    As far as I know, it currently isn't possible to set max-beans-in-free-pool via annotations. You can use a deployment plan (configurable from the console) and/or follow the link supplied by atheek1.
    I think you can also generate descriptors automatically from javadoc-style tags via EJBGen, though I'm not quite sure whether that tooling works in conjunction with EJB 3.0 annotations. See http://download.oracle.com/docs/cd/E12840_01/wls/docs103/ejb/EJBGen_reference.html
    Tom
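    For reference, the descriptor route Tom mentions usually means packaging a weblogic-ejb-jar.xml next to ejb-jar.xml in the EJB module. A minimal sketch for this MDB (element names as I recall them from the WebLogic 10.3 schema; treat it as a starting point rather than a verified descriptor):
    <weblogic-ejb-jar>
       <weblogic-enterprise-bean>
          <ejb-name>MyMDB</ejb-name>
          <message-driven-descriptor>
             <pool>
                <max-beans-in-free-pool>1</max-beans-in-free-pool>
                <initial-beans-in-free-pool>1</initial-beans-in-free-pool>
             </pool>
             <destination-jndi-name>MyQueue</destination-jndi-name>
          </message-driven-descriptor>
       </weblogic-enterprise-bean>
    </weblogic-ejb-jar>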

  • Need to Increase the Font Size of a screen field in a Dialog Screen ?

    Hi ppl,
    I have a requirement wherein I need to increase the font size of one field on a dialog screen.
    I was going through the threads and found this one, which has a beautiful example given by Vijay:
    Increase text size in Screen
    But I'm facing a problem implementing it, since the value of the field (whose font size needs to be increased) depends on another field filled by the user only at runtime.
    Can anyone give me some suggestions or an alternative way to solve my problem?
    Thanks
    Sachin Soni

    Hi Sachin,
    There is no way to change the font style directly.
    But you can add your own text through icons or graphics:
    create your own graphics or icons and include them in your module pool.
    Or:
    go to the Layout Editor of your screen.
    Double-click the text field; at the bottom right you will see an arrow icon. Click on it and a popup will open; in that popup set Area Title to TRUE.
    Hope this solves your problem.
    Thanks & Regards,
    Tarun Gambhir

  • Issue with shared pool size while upgrading from 10.2.0.4 to 11.2.0.2

    Hi Team,
    Below is my issue
    I am running DBUA as part of the upgrade from 10.2.0.4 to 11.2.0.2. It is at about 59% and nothing is happening on the instance. The logs directory contains the following:
    -bash-3.00$ ls -ltr
    total 280
    -rw-r----- 1 ora8014 ems8014 3662 Oct 16 08:58 upgrade.xml
    -rw-r----- 1 ora8014 ems8014 1127 Oct 16 09:00 Upgrade_Directive.log
    -rw-r----- 1 ora8014 ems8014 7452 Oct 16 09:00 mapfile.txt
    -rw-r----- 1 ora8014 ems8014 297 Oct 16 09:04 SpaceUsage.txt
    -rw-r----- 1 ora8014 ems8014 9803 Oct 16 09:04 PreUpgradeResults.html
    -rw-r----- 1 ora8014 ems8014 1572 Oct 16 09:06 PreUpgrade.log
    -rw-r----- 1 ora8014 ems8014 2850 Oct 16 09:06 Oracle_Text.log
    -rw-r----- 1 ora8014 ems8014 157816 Oct 16 09:09 trace.log
    -rw-r----- 1 ora8014 ems8014 71368 Oct 16 09:09 sqls.log
    -rw-r----- 1 ora8014 ems8014 339 Oct 16 09:09 Oracle_Server.log
    -bash-3.00$ date
    Sat Oct 16 22:54:27 PDT 2010
    -bash-3.00$ pwd
    /slot/ems8014/oracle/app/ora8014/cfgtoollogs/dbua/ebs11i10/upgrade1
    It seems more than 12 hours have passed and nothing has happened. When I check Oracle_Server.log it has this error:
    -bash-3.00$ tail Oracle_Server.log
    ERROR at line 1:
    ORA-01034: ORACLE not available
    Process ID: 0
    select count(*) from v$instance
    ERROR at line 1:
    ORA-01034: ORACLE not available
    Process ID: 0
    ORA-00371: not enough shared pool memory, should be atleast 424463564 bytes
    Hence I started the DB from the 10g Oracle home to check, and below are the details.
    SQL> select * from V$SGAINFO;
    NAME BYTES RES
    Fixed SGA Size 1267908 No
    Redo Buffers 11313152 No
    Buffer Cache Size 614400000 Yes
    Shared Pool Size 301989888 Yes
    Large Pool Size 8388608 Yes
    Java Pool Size 67108864 Yes
    Streams Pool Size 50331648 Yes
    Granule Size 4194304 No
    Maximum SGA Size 1056964608 No
    Startup overhead in Shared Pool 188743680 No
    Free SGA Memory Available 0
    11 rows selected.
    I tried to increase the shared pool as below:
    SQL> ALTER SYSTEM SET shared_pool_size ='301M' SCOPE=MEMORY SID='ebs11i10';
    ALTER SYSTEM SET shared_pool_size ='301M' SCOPE=MEMORY SID='ebs11i10'
    ERROR at line 1:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-04033: Insufficient memory to grow pool
    I am stuck and cannot proceed further. Could you please help me with this issue so that I can overcome it and proceed?
    Thanks
    Shyam.

    These are the logs. Could you please let me know which one you want.
    -bash-3.00$ pwd
    /slot/ems8014/oracle/app/ora8014/cfgtoollogs/dbua/ebs11i10/upgrade1
    -bash-3.00$ ls -ltr
    total 280
    -rw-r----- 1 ora8014 ems8014 3662 Oct 16 08:58 upgrade.xml
    -rw-r----- 1 ora8014 ems8014 1127 Oct 16 09:00 Upgrade_Directive.log
    -rw-r----- 1 ora8014 ems8014 7452 Oct 16 09:00 mapfile.txt
    -rw-r----- 1 ora8014 ems8014 297 Oct 16 09:04 SpaceUsage.txt
    -rw-r----- 1 ora8014 ems8014 9803 Oct 16 09:04 PreUpgradeResults.html
    -rw-r----- 1 ora8014 ems8014 1572 Oct 16 09:06 PreUpgrade.log
    -rw-r----- 1 ora8014 ems8014 2850 Oct 16 09:06 Oracle_Text.log
    -rw-r----- 1 ora8014 ems8014 157816 Oct 16 09:09 trace.log
    -rw-r----- 1 ora8014 ems8014 71368 Oct 16 09:09 sqls.log
    -rw-r----- 1 ora8014 ems8014 339 Oct 16 09:09 Oracle_Server.log
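    A hedged sketch of the kind of change usually suggested for this ORA-00371/ORA-04033 combination before re-running DBUA (it assumes an spfile is in use; the sizes are illustrative, derived from the "should be atleast 424463564 bytes" message above):
    -- Growing the pool in memory failed (ORA-04033) because the SGA had no free
    -- space, so change the stored setting and restart instead:
    ALTER SYSTEM SET shared_pool_size = 450M SCOPE=SPFILE SID='*';
    -- If Maximum SGA Size (~1 GB above) cannot absorb the extra ~150 MB, either
    -- raise sga_max_size/sga_target or shrink the buffer cache accordingly, e.g.:
    -- ALTER SYSTEM SET db_cache_size = 450M SCOPE=SPFILE SID='*';
    SHUTDOWN IMMEDIATE
    STARTUP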

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications has been performing badly since last week.
    After some analysis, it was found this was due to data growth in a table that was stored in the KEEP pool.
    After the data growth, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases the KEEP pool would still be used, with the remaining data brought in from the table as needed.
    But I ran some tests and found that is not the case: if the table size exceeds db_keep_cache_size, the KEEP pool does not seem to be used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Setup:
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.
    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               1  rows processed
    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               1  rows processed
    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               1  rows processed
    Only with 20M db_keep_cache_size I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992
    I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end of the table flushing out the start of it.
    Rgds,
    Gokul
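    A quick way to put a number on this: the whole table only fits in the KEEP pool once DB_KEEP_CACHE_SIZE exceeds roughly blocks * block size. A small sketch, assuming the 8 KB default db_block_size:
    SELECT blocks,
           ROUND(blocks * 8192 / 1024 / 1024, 1) AS approx_mb_needed
    FROM   dba_tables
    WHERE  owner = 'HR' AND table_name = 'T1';
    -- 1977 blocks * 8 KB is roughly 15.4 MB, consistent with the tests above:
    -- only the 20M setting eliminated physical reads on the repeated run.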

  • The reporting service web service connection pool reached the max pool size

    I got a problem: it throws the exception "The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
    The situation is that our service uses 15 threads to render reports, but sometimes we hit the exception listed above. I didn't change any configuration in rsreportserver.config, and it seems the connections to the report server database from the Reporting Services web service were not being disposed.
    Is there any configuration I can modify to fix this issue?

    Hi Dexter,
    In your case, we can try increasing the size of the connection pool to resolve the issue. By default, the Max Pool Size is 100. You can refer to the similar issue below:
    http://social.msdn.microsoft.com/Forums/en-US/c57c0432-c27b-45ab-81ca-b2df76c911ef/timeout-expired-the-timeout-period-elapsed-prior-to-obtaining-a-connection-from-the-pool?forum=adodotnetdataproviders
    Since the issue is related with ADO.NET. I suggestion you post the question in the following forum:
    http://social.msdn.microsoft.com/Forums/en-US/home?forum=adodotnetdataproviders
    It is appropriate and more experts will assist you.
    Regards,
    Alisa Tang
    TechNet Community Support

  • Shared pool size in

    Hi experts,
    How can I check the current shared pool size in my 8i instance? I want to increase the size of this memory; what would be the command?
    Regards,
    SKP

    Sumit,
    How could I check the current shared pool size in my 8i
    If you're just interested in the size, a "show parameter shared_p" in SQL*Plus would fit your needs. A look at init.ora is also sufficient, as you can neither grow nor shrink the shared pool in 8i, so "shared_pool_size" will show your current setting.
    Regards,
    --==/ Uwe \==--
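    A minimal sketch of the check Uwe describes, plus the v$parameter equivalent; in 8i the value can only be changed by editing shared_pool_size in init.ora and restarting the instance:
    SHOW PARAMETER shared_pool_size
    -- or
    SELECT name, value FROM v$parameter WHERE name = 'shared_pool_size';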

  • Increase Shared Pool for erorr # ORA-04031

    hi,
    What do I need to look at before I increase the shared pool of our database?
    There is just the one database instance on the machine.
    I am concerned about the repercussions on the server.
    I hope the information below is of help.
    db version: 10.2.0.1.0
    os: Red Hat Linux 3
    SQL> select name, value from v$parameter where name like '%pool%';
    name value
    shared_pool_size 150994944
    large_pool_size 33554432
    java_pool_size 50331648
    streams_pool_size 0
    shared_pool_reserved_size 10066329
    buffer_pool_keep
    buffer_pool_recycle
    global_context_pool_size
    olap_page_pool_size 0
    thanks,
    santosh sewlal

    Hi Santosh,
    This is what I faced two days back! Now I am monitoring the issue; if you find any solution, please let me know how to avoid this.
    The ORA-04031 error can be due to either inadequate sizing of the shared pool or heavy
    fragmentation, which leaves the database unable to find large enough chunks of memory.
    You can monitor this with the two events...
    alter system set events '4031 trace name errorstack level 3';
    alter system set events '4031 trace name heapdump level 3';
    Fragmentation is one of the causes of ORA-04031.
    Please refer these.
    1.Article-ID: Note 146599.1
    Title: Diagnosing and Resolving Error ORA-04031
    2.Article-ID: Note 62143.1
    Title: Understanding and Tuning the Shared Pool
    3.Article-ID: Note 61623.1
    This is particular to Oracle 9i Release 2; hopefully the same applies to Oracle 10g.
    Regards
    Ravi
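    Before simply growing the pool, it can help to check how much of it is actually free and whether the reserved area is already failing large allocations; a sketch against the standard 10.2 views:
    SELECT pool, name, ROUND(bytes/1024/1024, 1) AS mb
    FROM   v$sgastat
    WHERE  pool = 'shared pool' AND name = 'free memory';
    SELECT free_space, request_misses, request_failures, last_failure_size
    FROM   v$shared_pool_reserved;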

  • ORA-04031 on 10g - should I just adjust my SGA POOL SIZE?

    Has anyone gotten this message frequently:
    ORA-04031: unable to allocate 37536 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","session parame")
    We are a business intelligence application that issues lots of large queries. We just migrated to 10g and we are seeing this every 2-3 days on our testing machine.
    In particular, I am not sure about "sga heap"... I would just set my Shared Pool Size higher - currently 144 MB but will this help here? Thoughts?

    In Oracle 10g a new feature called Automatic Shared Memory Management allows the DBA to reserve a pool of shared memory that is used to allocate the shared pool, the buffer cache, the java pool and the large pool.
    In general, when the database needs to allocate a large object into the shared pool and cannot find contiguous space available, it will automatically increase the shared pool size using free space from other SGA structures.
    Since the space allocation is automatically managed by Oracle, the probability of getting ORA-04031 errors may be greatly reduced. This feature is enabled when the parameter SGA_TARGET is greater than zero, and the current settings can be obtained by querying the v$sga_dynamic_components view.
    Please refer to the 10g Administration Manual for further reference
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14231/toc.htm
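    A small sketch of the approach described above (the 800M figure is illustrative and assumes sga_max_size is at least that large):
    -- Let Oracle manage the individual pools inside one SGA target
    ALTER SYSTEM SET sga_target = 800M SCOPE=BOTH;
    -- Watch how the components are currently sized
    SELECT component, ROUND(current_size/1024/1024) AS size_mb
    FROM   v$sga_dynamic_components;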

  • Calculate Shared Pool Size

    HI,
    EBS:12.0.6
    DB:10.2.0.3
    This is the formula to calculate the shared pool size:
    minimum shared_pool_size = mp(sp_idx).minvalue + (Num_of_CPU * 2MB) + (Num_of_sessions * 17408) + (10% of the old shared_pool_size for overhead)
    From where can I get the values for:
    1. mp(sp_idx).minvalue
    2. Num_of_CPU - is this the number of CPUs shown in the OS cpuinfo.txt?
    3. Num_of_sessions - how do I get it?
    Thanks

    Google ... Please see the docs referenced in your other thread (increase SGA Size); those docs are the official ones to use for setting the SHARED_POOL_SIZE parameter.
    Thanks,
    Hussein
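    For the CPU and session counts in that formula, the instance parameters are usually the simplest source; a sketch (mp(sp_idx).minvalue is an internal minimum from the formula's source and, as far as I know, has no public view):
    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('cpu_count', 'sessions', 'shared_pool_size');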

  • Connection Pooling Size

    Currently I am using Oracle Application Server for my application; a minimum of 800 to 1000 users are going to access it. For this kind of user load, what connection pool size is required?

    "This isn't really practical advice. Moreover, it makes no sense. Sizing the connection and thread pools to match the number of simultaneous users means 800-1000 threads each. Not very logical."
    I think it's practical. I'm basing it on experience I've had deploying production apps on WebLogic app server running on a Sun box.
    I'm saying that threads are a scarce resource. They each have to have a slice of CPU and memory in order to run, so the number isn't unlimited. And there's context switching. If you increase the number of threads to the point where they're being swapped out before they can do useful work, then you aren't doing any good. It's called "thrashing", I believe.
    So just giving a number isn't realistic without knowing more about the system, what the threads are doing, how much memory and CPU they're consuming, etc.
    This might give you an idea of what I'm thinking:
    http://www.developerweb.net/forum/showthread.php?t=3030
    "look into sizing them dynamically. at some point you can (hopefully) set a good max size that won't be exceeded.
    minimum 800-1000 users simultaneously? i'm betting that they won't be simultaneous. what gives you that idea? i agree it's a lot of users, but not out of the realm of possibilities."
    The key is the word "simultaneous". I'm not doubting that 1000 users per hour is reasonable, but I am questioning that 1000 requests will come into a single server at once.
    "I'm way confused. Are you saying that you believe there will be 50-100 users simultaneously and the OP is wrong about the 800-1000 figure?" Yes.
    "Or are you saying you now believe there will be 800-1000 and 50-100 threads should be able to service them?" No. See above.
