Log Buffer size

Could someone please explain why I see different values for log_buffer in v$spparameter, v$parameter and v$sgastat?
SYS> select value from v$spparameter where name ='log_buffer' ;
VALUE
2703937
SYS> select value from v$parameter where name ='log_buffer' ;
VALUE
2703360
SYS> select * from v$sgastat where name = 'log_buffer';
POOL          NAME          BYTES
              log_buffer    5259264
Thank you in advance, and sorry for the newbie question :)

In 10g R2, Oracle combines the fixed SGA area and the redo buffer (log buffer) together. If there is free space left after Oracle places the combined buffers into a granule, that space is added to the redo buffer, so the redo buffer shows more space than expected. This is expected behavior.
In 10.2 the log buffer is rounded up to use the rest of the granule.
The Log_buffer Default Size Cannot Be Reduced In 10g R2 [ID 351857.1]
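A quick way to see this for yourself is to compare the requested parameter with the granule size and the actual allocation; a minimal sketch (v$sgainfo reports the granule size in 10gR2 and later, and the exact rounding depends on platform and SGA size):

select bytes as granule_size       from v$sgainfo   where name = 'Granule Size';
select value as log_buffer_param   from v$parameter where name = 'log_buffer';
select bytes as log_buffer_alloc   from v$sgastat   where name = 'log_buffer';

The v$sgastat figure is normally the parameter value rounded up within the granule, which explains the differences shown above.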

Similar Messages

  • Redo log and buffer size

    Hi,
    I'm trying to size the redo logs and the log buffer in the best way.
    I have already adjusted the size of the redo logs so that they switch 1-2 times per hour.
    The next step is to adjust the redo buffer to avoid waits.
    Currently this query gives me 896 as the result.
    SELECT NAME, VALUE
    FROM V$SYSSTAT
    WHERE NAME = 'redo buffer allocation retries';
    I suppose this should be close to 0.
    Log_buffer is set to 1M.
    I have also read that "sizing the log buffer larger than 1M does not provide any performance benefit", so what can I do to reduce that wait time?
    Any ideas or suggestions?
    Thanks
    Acr

    ACR80,
    Every time you create a redo entry, you have to allocate space to copy it into the redo buffer. You've had 588 allocation retries in 46M entries. That's "close to zero".
    redo entries                       46,901,591
    redo buffer allocation retries            588
    The 1MB limit was based around the idea that a large buffer could allow a lot of log to accumulate between writes, with the result that a process could execute a small transaction and commit - and then have to wait a "long" time for the log writer to complete a big write.
    If in doubt, check the two wait events:
    "log file sync"
    "log buffer space".
    As a guideline, you may reduce waits for "log buffer space" by increasing the size of the log buffer - but this introduces the risk of longer waits for "log file sync". Conversely, reducing the log buffer size may reduce the impact of "log file sync" waits but runs the risk of increasing "log buffer space" waits.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
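    To check the two wait events mentioned above, a minimal sketch against v$system_event (cumulative since instance startup; the event names are as they appear in recent versions):

    select event, total_waits, time_waited, average_wait
    from   v$system_event
    where  event in ('log file sync', 'log buffer space');

    Comparing the two time_waited figures over an interval gives a rough idea of which side of the trade-off described above you are on.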

  • Log buffer overflow

    I have been receiving the flex log buffer overflow error for a long time. I don't believe it is causing any problem but I'm not sure.
    I have Iplanet Web Server 4.1 on Solaris 2.6.
    I have changed the LogFlushInterval from the default 30 seconds to 5 seconds.
    I am logging a great deal of information.
    My questions are...
    should I be concerned ?
    when I get that error is the buffer being immediately dumped to the log file ?
    am I losing any log information ?
    can I increase the buffer size ?
    should I reduce the LogFlushInterval any more ?
    Thanks

    The error message indicates that an access log entry exceeded the maximum of 4096 bytes and was truncated. You should check the access log file for suspicious entries.
    Adjusting LogFlushInterval won't affect this problem, and unfortunately there's no way to increase flex log buffer size.

  • Redo Log Buffer sizing problem

    My PC has 512MB RAM and I was trying to increase the redo log buffer size. Initially the log_buffer size was 2899456 bytes, so I tried to increase it to 3099456 by issuing the command:
    ALTER SYSTEM SET LOG_BUFFER=3099456 SCOPE=SPFILE;
    Then I issued SHUTDOWN IMMEDIATE. Upon restarting my database, when I ran SHOW PARAMETERS LOG_BUFFER, the value had been changed to 7029248 bytes, not the 3099456 I wanted. How did this happen?

    1.) We are all volunteers.
    2.) It was only 5 hours between posts and you're complaining that there are no answers?
    3.) You didn't bother to mention platform or Oracle version, even after being specifically asked for it? Which part of "What is your Oracle version?" do you not understand? And yes, the platform may be useful too....
    From memory, there could be a couple of things going on. First, starting in 9i, Oracle allocates memory in granules, so allocating a chunk smaller than the granule size can result in it being rounded up to the granule size. Second, on some platforms, Oracle protects the redo buffer with "guard pages", i.e., extra memory that serves simply to try to prevent accidental memory overflows from corrupting the redo buffer.
    If you want a specific answer, or at least a shot at one, post:
    1.) Oracle version (specific version: 8.1.7.4, 9.2.0.8, 10.2.0.3, etc).
    2.) Platform
    3.) O/S and version
    4.) Current SGA size
    Reposting the same question, or threatening to do so, will get you nowhere.
    -Mark
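    As a hedged illustration of the rounding and padding described above, the redo buffer actually allocated can be compared against the requested parameter (the 'Redo Buffers' row in v$sga is the real allocation, including any guard pages and fixed-SGA co-location):

    show parameter log_buffer
    select * from v$sga;
    select bytes from v$sgastat where name = 'log_buffer';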

  • Redo Log Buffer 32.8M, Seems Too Big?

    I just took over a database (mainly used for OLTP, on 11gR1) and I am looking at the log_buffer parameter; it is set to 34412032 (32.8M). I am not sure why it is so high.
    select
        NAME,
        VALUE
    from
        SYS.V_$SYSSTAT
    where
        NAME in ('redo buffer allocation retries', 'redo log space wait time');
    redo buffer allocation retries     185
    redo log space wait time          5180
    (The database has been up for 7.5 days.)
    Any opinions on this? I normally try to stay below 3M and have not really seen it above 10M.

    Sky13 wrote:
    I just took over a database (mainly used for OLTP, on 11gR1) and I am looking at the log_buffer parameter; it is set to 34412032 (32.8M). I am not sure why it is so high.
    In 11g you shouldn't set the log_buffer parameter - let Oracle set the default.
    The value is derived from the setting for the CPU count and the transactions parameter, which may be derived from sessions, which may be derived from processes. Moreover, Oracle is going to allocate at least a granule (which may be 4MB, 8MB, 16MB, 64MB or 256MB depending on the size of the SGA), so you are unlikely to save memory by reducing the log buffer size.
    Here's a link to a discussion which shows you how to find out what's really behind that figure.
    Re: Archived redo log size more less than online redo logs
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
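    A minimal sketch for checking the inputs the default is derived from (the exact formula is version dependent, so treat this as a starting point rather than a specification):

    select name, value
    from   v$parameter
    where  name in ('cpu_count', 'processes', 'sessions', 'transactions', 'log_buffer');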

  • What does the redo log buffer hold, the changed value or the data block?

    Hello Everyone,
    I am new to the database side and have a question. I know the redo log buffer contains change information; my doubt is whether it stores only the changed values or the changed data blocks, because the data buffer cache is larger, as it holds data blocks, while the redo log buffer is much smaller.

    The Redo Log buffer contains OpCodes that represent the SQL commands, the "address" (file,block,row) where the change is to be made and the nature of the change.
    It does NOT contain the data block.
    (the one exception is when you run a User Managed Backup with ALTER DATABASE BEGIN BACKUP or ALTER TABLESPACE BEGIN BACKUP : The first time a block is modified when in BEGIN BACKUP mode, the whole block is written to the redo stream).
    The log buffer can be, and deliberately is, much smaller than the block buffer cache. Entries in the redo log buffer are quickly written to disk (at commits, when it is 1/3rd or 1MB full, every 3 seconds, and before DBWR writes a modified data block).
    Hemant K Chitale
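    To get a feel for how much change information the buffer actually carries, a minimal sketch using the cumulative statistics in v$sysstat (statistic names as they appear in recent versions):

    select name, value
    from   v$sysstat
    where  name in ('redo entries', 'redo size', 'redo writes');

    Dividing 'redo size' by 'redo entries' gives a rough average redo record size, which is far smaller than a data block.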

  • About log buffer writing..

    Hi,
    Wondering how Log Buffer behaves...
    1. If TimesTen is configured as a single node (no replication or cache option), when is log buffer data written to the log file?
    From some tests I ran, it seems it is written only during a checkpoint.
    Is there any other way to write log buffer data besides a checkpoint?
    2. Which process performs the write from the log buffer to the log file?
    Is it the same process that is used with the replication and cache options?
    3. In TimesTen Manual ...
    asynchronous replication "TimesTen Data Manager writes the transaction update records to the transaction log buffer."
    Return Twosafe "The master replication agent writes the transaction records to the log and inserts a special precommit log record before the commit record."
    Does this mean that the log buffer write process differs according to replication type?
    4. I am assuming Log_Buffer_Wait in the 'monitor;' output is the time spent waiting for the log buffer to be written to the log file...
    If that is correct, does the likelihood of Log_Buffer_Wait occurrences increase when the log buffer is large and no replication option is used?
    I would appreciate any answers to the above.
    Thank you,

    TimesTen generates log records for the purposes of redo and undo. Log records are generated for pretty much any change to persistent, recoverable data within TimesTen. Log records are first written into the in-memory log buffer and are then written to disk by a dedicated flusher thread that runs within the sub-daemon process assigned to the datastore. The log flusher thread runs continuously whenever there is data to be flushed, and when there is any significant write workload on the system, log data will reach disk very shortly after it has been placed in the buffer. Under a very light write workload it may take a little longer for the data to reach disk.
    There is a single logical log buffer (size determined by LogBufMB) which, in TimesTen 11g, is divided into multiple physical buffers (strands) for increased concurrency of logging operations (the number of strands is determined by LogBufParallelism).
    Several of your observations are not correct; I would like to understand what tests you performed to arrive at these conclusions:
    1. Yes, the log buffer is flushed during a checkpoint operation but in fact it is also being flushed continuously at all times by the log flusher thread.
    2. You can force the buffer to be flushed at any time simply by executing a durable commit within the datastore. A durable commit flushes all log strands synchronously to disk and does not return until the writes have completed successfully and been acknowledged by the storage hardware.
    3. The text that you quote from the replication guide is ambiguous and could be better phrased. When it talks about 'writing to the log' it means placing records in the in-memory log buffer. The presence or absence of replication does not fundamentally change the way logging works, though the replication agent, when active, typically performs a durable commit every 100 ms. Also, in some replication modes, additional durable commits may be executed by the replication agent before sending a block of replicated transactions.
    4. The LOG_BUFFER_WAITS field in SYS.MONITOR counts the number of times that application transactions have been blocked due to there being no free space in the log buffer to receive their log records. This is due to some form of logging bottleneck. By far the most common reason is that the log buffer is undersized. The default size is only 64 MB, and this is far too small for any kind of write-intensive workload. For write-intensive workloads a significantly larger log buffer size is needed (the maximum allowed is 1 GB).
    5. The field LOG_FS_WRITES in SYS.MONITOR counts the number of physical writes that the log flusher thread has performed to the logs on disk. The flusher will typically write a lot of data in a single write (when under heavy load). Flusher writes are filesystem block aligned.
    Hope that helps clarify things.
    Chris
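    To check for the logging bottleneck Chris describes, a minimal sketch that can be run in ttIsql (LOG_BUFFER_WAITS and LOG_FS_WRITES are the SYS.MONITOR columns mentioned above; LogBufMB is the connection attribute that sizes the buffer):

    select log_buffer_waits, log_fs_writes from sys.monitor;
    -- if log_buffer_waits keeps growing under load, a larger LogBufMB in the DSN
    -- definition (up to the 1 GB maximum mentioned above) is the usual remedy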

  • Log buffer advice

    I want to get log buffer advice so I can tune it. My database is 10g.

    Hi,
    Yes, that's good, but the OP is already on 10g, so the log buffer size will be rounded up to the granule size and will be rather high, I suppose, or extremely high in some cases.
    Furthermore, you can't tune the log buffer any more, and it makes no sense (in most cases) to tune it when it is already sized 10-15 times larger than 1M.
    That means any redo-log-related events seen in v$sesstat that indicate possible problems should be treated as checkpointing and/or archiving problems.
    Without this explanation it might appear that anyone who sees high values of the mentioned statistics should increase the log buffer without taking other statistics and the database version into consideration.
    That's my point of view. I certainly may be wrong on this, and if you know something from your experience or other sources, please share it with us.
    Best Regards,
    Alex
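    As a hedged cross-check before touching the log buffer at all, the redo-related statistics referenced in this thread can be pulled from v$sysstat:

    select name, value
    from   v$sysstat
    where  name in ('redo buffer allocation retries',
                    'redo log space requests',
                    'redo log space wait time');

    High values for the last two usually point at checkpointing, archiving or redo log I/O rather than at the buffer size itself.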

  • Getting recv buffer size error even after tuning

    I am on AIX 5.3, IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3...), Coherence 3.1.1/341
    I've set the following parameters as root:
    no -o sb_max=4194304
    no -o udp_recvspace=4194304
    no -o udp_sendspace=65536
    I still get the following error:
    UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 44 packets (65536 bytes)....
    The following commands/responses confirm that the settings are in place:
    $ no -o sb_max
    sb_max = 4194304
    $ no -o udp_recvspace
    udp_recvspace = 4194304
    $ no -o udp_sendspace
    udp_sendspace = 65536
    Why am I still getting the error? Do I need to bounce the machine or is there a different tunable I need to touch?
    Thanks
    Ghanshyam

    Can you try running the attached utility and send us the output? It will simply try to allocate a variety of socket buffer sizes and report which succeed and which fail. Based on the Coherence log message I expect this program will also fail to allocate a buffer larger than 65536, but it will allow you to verify the issue externally from Coherence.
    There was an issue with IBM's 1.4 AIX JVM which would not allow allocation of buffers larger than 1MB. This program should allow you to identify whether 1.5 has a similar issue. If so, you may wish to contact IBM support about obtaining a patch.
    thanks,
    Mark
    Attachment: so.java (*To use this attachment you will need to rename 399.bin to so.java after the download is complete.)
    Attachment: so.class (*To use this attachment you will need to rename 400.bin to so.class after the download is complete.)

  • How to set the buffer size in SQL*NET V2

    Product : SQL*NET
    Date written : 1996-04-15
    When using SQL*NET V2 with the ODBC driver in programs such as Power Builder, Visual Basic,
    SQL Windows, Object View, Excel or Access, if sessions are dropped or similar problems occur
    while processing large volumes of data, the problem can be solved by reducing the
    SQL*NET V2 buffer size.
    Apply the SDU setting in the tnsnames.ora file on the PC as shown below.
    If the problem is not resolved even after applying SDU, connect using the
    DEDICATED method instead.
    Location of the tnsnames.ora file
    16BIT SQL*NET : c:\orawin\network\admin
    32BIT SQL*NET : c:\orawin95\network\admin
    =======================
    SDU (Session Data Unit)
    =======================
    Syntax : SDU=n (Bytes)
    Range for 'n': 512 <= n <= 2048
    Default value = 2048 Bytes.
    =========
    Before the change
    =========
    TORA =
    (DESCRIPTION=
    (ADDRESS=
    (PROTOCOL=TCP)
    (PORT=1521)
    (HOST=krhp2)
    (CONNECT_DATA=(SID=RC))
    =======
    After the change
    =======
    TORA =
    (DESCRIPTION=
    (ADDRESS=
    (PROTOCOL=TCP)
    (PORT=1521)
    (HOST=krhp2)
    (CONNECT_DATA=(SID=RC))
    (SDU=1024)
    Note that if the server is ORACLE V7.3, SDU can also be set on the server side,
    in the server's listener.ora file:
    ========
    Before the change
    ========
    SID_LIST_LIST73 =
    (SID_LIST =
    (SID_DESC =
    (SID_NAME = ORA73)
    (ORACLE_HOME=/oracle2/ora73/app/oracle/product/7.3.2)
    =======
    After the change
    =======
    SID_LIST_LIST73 =
    (SID_LIST =
    (SID_DESC =
    (SDU=1024)(SID_NAME = ORA73)
    (ORACLE_HOME=/oracle2/ora73/app/oracle/product/7.3.2)
    If SDU is set on both the server and the client, the smaller of the two values is used.

    I have 4 redo log groups with one member each; the size of each redo log file is 128 MB. (By doing some research on the internet I found the suggestion to increase the redo log file size, which I tried up to 400MB each, but I am still getting the same error. If there is any other way to check the optimal size of the redo files without changing FAST_START_MTTR_TARGET, please share it with me.) Use the note below to check the optimum redo log size. Also, as per the note mentioned by Justin, you can ignore the alert as it is not going to harm your database.
    274264.1 (10g New Feature - REDO LOGS SIZING ADVISORY)
    Mark your Post as Answered or Helpful if Your question is answered.
    Thanks & Regards,
    SID
    (StepIntoOracleDBA)
    Email : [email protected]
    http://stepintooracledba.blogspot.in/
    http://www.stepintooracledba.com/
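    The redo log sizing advisory from note 274264.1 surfaces through v$instance_recovery; a minimal sketch (OPTIMAL_LOGFILE_SIZE is reported in MB and is typically only populated when FAST_START_MTTR_TARGET is in effect):

    select optimal_logfile_size, target_mttr, estimated_mttr
    from   v$instance_recovery;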

  • Tuning : log buffer space in 11gr2

    Hi,
    version : 11202 on hpux
    awr Top 5 events shows :
    Top 5 Timed Foreground Events :
    Event             Waits   Time(s)   Avg wait (ms)   % DB time   Wait Class
    log buffer space  12,401  29,885            2410        55.83   Configuration
    My log_buffer size is :
    SQL> show parameter log_buffer
    NAME                                 TYPE        VALUE
    log_buffer                           integer     104857600
    And the sga values are :
    SQL> show parameter sga
    NAME                                 TYPE        VALUE
    sga_max_size                         big integer 15G
    sga_target                           big integer 15G
    I wanted to know if there are guidelines for tuning log buffer space.
    Can I just double it from 100M to 200M?
    Thanks

    Yoav wrote:
    Top 5 Timed Foreground Events :
    Event             Waits   Time(s)   Avg wait (ms)   % DB time   Wait Class
    log buffer space  12,401  29,885            2410        55.83   Configuration
    My log_buffer size is :
    SQL> show parameter log_buffer
    NAME                                 TYPE        VALUE
    log_buffer                           integer     104857600
    I wanted to know if there are guidelines for tuning log buffer space.
    Can I just double it from 100M to 200M?
    You're the second person this week to come up with this issue.
    The ONLY sensible guideline for the log buffer is to let it default until you have good reason to change it. Reasons for change (even to the point of modifying a hidden parameter) are really application dependent.
    The last couple of times something like this has come up the issue has revolved around mixing very large uncommitted changes with large numbers of small transactions - resulting in many small transactions waiting for the log writer to complete a large write on behalf of the large transaction. Does this pattern describe your application environment ?
    For reference - how many public and how many private redo strands do you have, and how many have been active. (See http://jonathanlewis.wordpress.com/2012/09/17/private-redo-2/ for a query that shows the difference).
    Regards
    Jonathan Lewis
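    For the strand question, a hedged sketch based on x$kcrfstrand (an internal structure that requires SYS access and varies by version; as a heuristic, private strands show a non-zero SPACE_KCRF_PVT_STRAND while public strands do not):

    select case when space_kcrf_pvt_strand = 0 then 'public' else 'private' end as strand_type,
           count(*)               as strands,
           sum(strand_size_kcrfa) as total_bytes
    from   x$kcrfstrand
    group  by case when space_kcrf_pvt_strand = 0 then 'public' else 'private' end;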

  • Pros and cons between the large log buffer and small log buffer?

    What are the pros and cons of a large log buffer versus a small log buffer?
    Many people suggest that a small log buffer (1-3MB) is better because we can avoid wait events for users. But I think we can also gain an advantage with a bigger one, because we can reduce redo log file I/O...
    What is the optimal size of the log buffer? Should I consider OLTP vs DSS as well?

    Hi,
    It's interesting to note that some very large shops find that a >10M log buffer provides better throughput. Also, check out this new world-record benchmark, with a 64M log_buffer. The TPC notes that they chose it based on the cpu_count:
    log_buffer = 67108864   # 1048576 x cpu
    http://www.dba-oracle.com/t_tpc_ibm_oracle_benchmark_terabyte.htm

  • IMP-00020: long column too large for column buffer size 22

    IMP error: long column too large for column buffer size
    IMP-00020: long column too large for column buffer size <22>
    imp hr/hr file=/home/oracle/hr.dmp fromuser=hr touser=hr buffer=10000 (I also tried 100000000)
    and I still get the same error. Can anybody please help me with the details?

    Providing more information/background is probably the wise thing to do.
    Versions (databases, exp, imp), commands and parameters used - copy&paste, relevant part of logs - copy&paste, describe table, etc.
    Some background, like what's the purpose, did this work before, what has changed, etc.
    Also you might check the suggested action for the error code per documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14219/impus.htm#sthref10620

  • Log Buffer

    I note that log_buffer, even if set, is going to be different (larger) post 10.2.x. However, if it (the Oracle internal algorithm) sets a very high value, can we see side effects such as long "log file sync" waits? In one of our instances we have an SGA granule size of 64MB, and when we set LOG_BUFFER to 16MB I see redo buffers of more than 78MB. Since we have a mixed workload (short transactions + batch jobs), the short transactions are seeing some waits on log file sync. Earlier log_buffer was set to 150MB and Oracle took nearly 210MB; the waits seen on the short UI housekeeping transactions were quite long, and after reducing it we are now within a "thin acceptable limit". Is there any other way to reduce the redo buffers? Anything less than 32MB would not only be enough, it would bring down my log file sync even further (which is very much needed for the UI housekeeping transactions). Even if I unset it, I think it is still going to allocate 64MB. BTW, we are on 11.2.0.3.
    - Abhay

    Hi Jonathan,
    My suspect was CPU starvation as well, possibly caused by a high COMMIT rate, but the average CPU load on the system was not even 20%. That still doesn't tell us what was happening per CPU core, or whether my UI session was scheduled on a core along with batch processes - I need to check run-time CPU stats on the system when we hit this issue (typically when a lot of jobs are running). Going by the average CPU utilization, it doesn't seem there was any CPU starvation. PS: below are the OS stats from the AWR report for the bad period (before the log buffer change).
    Operating System Statistics
    Statistic          Value          End Value
    AVG_BUSY_TIME          53,872     
    AVG_IDLE_TIME          308,864     
    AVG_SYS_TIME          3,019     
    AVG_USER_TIME          50,683     
    BUSY_TIME          3,459,745     
    IDLE_TIME          19,779,080     
    SYS_TIME          204,469     
    USER_TIME          3,255,276     
    LOAD               25          8
    OS_CPU_WAIT_TIME     19,500
    OS CPU wait time is not so significant, hence I didn't bother too much to look at per-CPU stats at run time. Also, below is the % utilization from AWR.
    Operating System Statistics - Detail
    Snap Time Load %busy %user %sys %idle %iowait
    07-Mar 10:45:11 25.05          
    07-Mar 11:00:20 7.10 13.27 12.17 1.10 86.73 0.00
    07-Mar 11:15:27 11.00 12.15 11.43 0.72 87.85 0.00
    07-Mar 11:30:40 10.00 18.27 17.32 0.95 81.73 0.00
    07-Mar 11:45:42 8.20 15.80 15.06 0.74 84.20 0.00
    After the log buffer change things have improved, as seen from the AWRs pasted earlier, and we know from AWR that log file parallel writes are not taking a significant amount of time - so are we hitting a possible serialization issue as described in [Tony Hasler - Log File Sync|http://tonyhasler.wordpress.com/2011/07/24/log-file-sync-and-log-file-parallel-write-part-2/], i.e. a lot of time spent copying blocks from the private redo strands? PS: we have a lot of private strands; below is the snippet you asked for.
    Or is Oracle not registering log file parallel writes properly? From the 10298 event enabled for the LGWR process, I see that the issue starts when the write rate drops to 1-3MBPS; when it is doing 6-9MBPS, things look quite reasonable. Waiting less on log file sync after the log buffer reduction hints at a possible I/O issue not registered by Oracle, especially after comparing bad periods vs good periods: consistently, in the bad periods, 10298 reported a lower I/O write speed.
        INDX TOTAL_BUFS_KCRFA STRAND_SIZE_KCRFA INDEX_KCRF_PVT_STRAND SPACE_KCRF_PVT_STRAND
           0             8192           4194304                     0                     0
           1             8192           4194304                     0                     0
           2             8192           4194304                     0                     0
           3             8192           4194304                     0                     0
           4              249            132096                     4                126464
           5              249            132096                     5                126464
           6              249            132096                     6                126464
           7              249            132096                     7                126464
           8              249            132096                     8                126464
           9              249            132096                     9                126464
          10              249            132096            3735928559                126464
          11              249            132096                    11                126464
         554              249            132096            3735928559                126464
         555              249            132096            3735928559                126464
         556              249            132096            3735928559                126464
         557              249            132096            3735928559                126464
         558              249            132096            3735928559                126464
         559              249            132096            3735928559                126464
    560 rows selected.
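    For reference, the listing above looks like the output of a query along these lines against x$kcrfstrand (a hedged reconstruction; the exact query was not posted):

    select indx, total_bufs_kcrfa, strand_size_kcrfa,
           index_kcrf_pvt_strand, space_kcrf_pvt_strand
    from   x$kcrfstrand;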

  • JMS Server Message Buffer Size & Thresholds and Quotas settings

    On WLS10MP1,
    For persistent messages:
    1. Does the "JMS Server Message Buffer" setting serve the same purpose as "Bytes Threshold High" under Thresholds?
    2. If not, can someone please explain the difference?
    Many thanks,

    Message Buffer Size relates to the amount of message data the JMS server keeps in memory. Its value determines when the server should start paging messages out of memory to a persistent store. So it is directly related to memory/storage and the size of messages.
    Bytes Threshold High relates to the performance of the JMS server. When this limit is reached, the JMS server starts logging messages and may even instruct the producer to slow down message input.
    So if you get Bytes Threshold High messages, that means you should check your consumer (the MDB that is picking up messages from the queue) and try to increase its performance.
    However, if your Message Buffer Size is crossing its limits, then you should think about increasing the memory so that more messages can be kept in memory and disk I/O can be reduced.
    Does anyone want to add anything more to this?
