Buffer cache
Hi,
How can we find out which tables should be kept in the buffer cache?
Regards,
W@s..
Hello,
What do you mean by a candidate for keeping in the buffer cache?
Do you intend to create a KEEP buffer pool for a few specific tables?
If so, this article by Vikash Varma may guide you:
http://oradbhome.itpub.net/post/14580/301182
Hope this helps.
Best regards,
Jean-Valentin
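If the intent is indeed a KEEP pool, a minimal sketch of the steps follows. The schema/table name and the 64M size are placeholders, and db_keep_cache_size must fit inside the configured SGA:

```sql
-- 1) Size a KEEP pool (dynamic when using manual SGA management)
ALTER SYSTEM SET db_keep_cache_size = 64M SCOPE = BOTH;

-- 2) Assign a small, frequently re-read table to the KEEP pool
ALTER TABLE app_owner.lookup_codes STORAGE (BUFFER_POOL KEEP);

-- 3) Verify the assignment
SELECT table_name, buffer_pool
FROM   dba_tables
WHERE  table_name = 'LOOKUP_CODES';
```

Good candidates are small segments that are re-read constantly; V$SEGMENT_STATISTICS can help identify them.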
Similar Messages
-
10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE
Product: ORACLE SERVER
Date written: 2004-05-25
10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE
===============================================
PURPOSE
This note describes the Oracle 10g new feature that makes it possible
to flush the buffer cache manually.
Explanation
Introduced as a new feature in Oracle 10g, a single command can clear
all data in the buffer cache within the SGA.
This operation requires the "alter system" privilege.
The command to flush the buffer cache is as follows.
Caution: this operation can affect database performance, so use it with care.
SQL> alter system flush buffer_cache;
Example
Query x$bh to check the information present in the buffer cache.
The x$bh view exposes the buffer cache header information.
First, create a test table, run some inserts, and then query the
dbarfil column (relative file number of the block) and file# from x$bh.
1) Create the test table
SQL> Create table Test_buffer (a number)
2 tablespace USERS;
Table created.
2) Insert into the test table
SQL> begin
2 for i in 1..1000
3 loop
4 insert into test_buffer values (i);
5 end loop;
6 commit;
7 end;
8 /
PL/SQL procedure successfully completed.
3) Check the object_id
SQL> select OBJECT_id from dba_objects
2 where object_name='TEST_BUFFER';
OBJECT_ID
42817
4) Query x$bh for the DBARFIL (relative file number of the block) values currently in the buffer cache.
SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
2 from x$bh where obj= 42817;
TS#  FILE#  DBARFIL  DBABLK  CLASS  STATE  MODE_HELD    OBJ
  9     23       23    1297      8      1          0  42817
  9     23       23    1298      9      1          0  42817
  9     23       23    1299      4      1          0  42817
  9     23       23    1300      1      1          0  42817
  9     23       23    1301      1      1          0  42817
  9     23       23    1302      1      1          0  42817
  9     23       23    1303      1      1          0  42817
  9     23       23    1304      1      1          0  42817
8 rows selected.
5) Flush the buffer cache as follows and re-run the query above.
SQL> alter system flush buffer_cache;
SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
2 from x$bh where obj= 42817;
6) Check that the state column in x$bh is now 0.
0 means a free buffer. Confirming that state is 0 after the flush verifies
that the flush was indeed performed manually by the command.
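A compact way to run that check is to count buffers per state for the object; a sketch (x$bh is visible only from a SYS connection, and 42817 is the object_id from the example above):

```sql
-- Count the buffers per state for the test object.
-- After a successful flush, every remaining buffer should report state 0 (free).
SELECT state, COUNT(*) AS buffers
FROM   x$bh
WHERE  obj = 42817
GROUP  BY state;
```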
Reference Documents
<NOTE. 251326.1>
-
I am also having the same issue. Can this be addressed, or does BEA provide 'almost' working code for the bargain price of $80k/cpu?
"Prashanth " <[email protected]> wrote:
>
Hi ALL,
I am using the wl:cache tag for caching purposes. My requirement is that I have to flush the cache based on user activity.
I have tried all the combinations, but could not achieve the desired result.
Can somebody guide me on how we can flush the cache?
TIA, Prashanth Bhat. -
What else is stored in the database buffer cache?
What else is stored in the database buffer cache except the data blocks read from datafiles?
That is a good idea.
SQL> desc v$BH;
Name Null? Type
FILE# NUMBER
BLOCK# NUMBER
CLASS# NUMBER
STATUS VARCHAR2(10)
XNC NUMBER
FORCED_READS NUMBER
FORCED_WRITES NUMBER
LOCK_ELEMENT_ADDR RAW(4)
LOCK_ELEMENT_NAME NUMBER
LOCK_ELEMENT_CLASS NUMBER
DIRTY VARCHAR2(1)
TEMP VARCHAR2(1)
PING VARCHAR2(1)
STALE VARCHAR2(1)
DIRECT VARCHAR2(1)
NEW CHAR(1)
OBJD NUMBER
TS# NUMBER
TEMP VARCHAR2(1) Y - temporary block
PING VARCHAR2(1) Y - block pinged
STALE VARCHAR2(1) Y - block is stale
DIRECT VARCHAR2(1) Y - direct block
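Those flags can be queried directly; a sketch that summarizes how many cached buffers are temporary or direct-path (requires SELECT on V$BH):

```sql
-- Distribution of buffers by status and the TEMP/DIRECT flags
SELECT status, temp, direct, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY status, temp, direct
ORDER  BY buffers DESC;
```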
My question is: what are temporary blocks and direct blocks?
Is it true that some blocks from the temp tablespace are stored in the data buffer? -
Hello -
We have 3 x EX2010 SP3 RU5 nodes in a cross-site DAG.
Multi-role servers with 18 GB RAM [increased from 16 GB in an attempt to clear this warning without success].
We run nightly backups on both nodes at the Primary Site.
Node 1 backup covers all mailbox databases [active & passive].
Node 2 backup covers the Public Folders database.
The backups for each database are timed so they do not overlap.
During each backup we get several of these event log warnings:
Log Name: Application
Source: ESE
Date: 23/04/2014 00:47:22
Event ID: 906
Task Category: Performance
Level: Warning
Keywords: Classic
User: N/A
Computer: EX1.xxx.com
Description:
Information Store (5012) A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation.
See help link for complete details of possible causes.
Resident cache has fallen by 42523 buffers (or 27%) in the last 903 seconds.
Current Total Percent Resident: 26% (110122 of 421303 buffers)
We've rescheduled the backups, and the warning message occurrences just move with the backup schedules.
We're not aware of any perceived end-user performance degradation; overnight backups in this time zone coincide with the business day for mailbox users in SEA.
I raised a call with the Microsoft Enterprise Support folks; they had a look at the BPA output and the output from their diagnostics tool. We have enough RAM and no major issues were detected.
They suggested McAfee AV could be the root of our problems, but we have v8.8 with EX2010 exceptions configured.
Backup software is Asigra V12.2 with latest hotfixes.
We're trying to clear up these warnings as they're throwing SCOM alerts and making a mess of availability reporting.
Any suggestions please?
Thanks in advance
Having said all that, a colleague has suggested we just limit the amount of RAM available for the EX2010 DB cache.
Then it won't have to start releasing RAM when the backup runs, and won't throw SCOM alerts.
This attribute should do it...
msExchESEParamCacheSizeMax
http://technet.microsoft.com/en-us/library/ee832793.aspx
Give me a shout if this is a bad idea
Thanks -
Find available space in buffer cache
Hi.
I want to find the available space in the buffer cache. My first thought was to make it 8i-9i compatible, using v$bh to calculate the total memory and the available space.
I have the following pl/sql block to calculate the values:
declare
num_free_blck integer;
num_all_blck integer;
num_used_blck integer;
overal_cache number := 0;
used_cache number := 0;
free_cache number := 0;
blck_size integer;
pct_free number := 0;
begin
select count(1) into num_free_blck from v$bh where status='free';
select count(1) into num_all_blck from v$bh;
select count(1) into num_used_blck from v$bh where status <> 'free';
select value into blck_size from v$parameter where name ='db_block_size';
used_cache := (blck_size * num_used_blck)/(1024*1024);
free_cache := (blck_size * num_free_blck)/(1024*1024);
overal_cache := (blck_size * num_all_blck)/(1024*1024);
pct_free := ((free_cache/overal_cache)*100);
dbms_output.put_line('There are '||num_free_blck||' free blocks in buffer cache');
dbms_output.put_line('There are '||num_used_blck||' used block in buffer cache');
dbms_output.put_line('There are totally '||num_all_blck||' blocks in buffer cache');
dbms_output.put_line('Overall cache size is '||to_char(overal_cache,'999.9')|| 'mb');
dbms_output.put_line('Used cache is '||to_char(used_cache,'999.9')||' mb');
dbms_output.put_line('Free cache is '||to_char(free_cache,'999.9')||' mb');
dbms_output.put_line('Percent free db_cache is '||to_char(pct_free,'99.9')||' %');
end;
The result of the execution is:
SQL> @c:\temp\bh
There are 3819 free blocks in buffer cache
There are 4189 used block in buffer cache
There are totally 8008 blocks in buffer cache
Overall cache size is 62.6mb
Used cache is 32.7 mb
Free cache is 29.8 mb
Percent free db_cache is 47.7 %
PL/SQL procedure successfully completed.
SQL>
This is not correct according to the actual size of the buffer cache:
SQL> select name,value from v$parameter where name='db_cache_size';
NAME
VALUE
db_cache_size
67108864
SQL>
Anyone have an idea about this?
Thanks
Kjell Ove
Mark D Powell wrote:
select decode(state,0,'Free',
1,'Read and Modified',
2,'Read and Not Modified',
3,'Currently being Modified',
'Other'
) buffer_state,
count(*) buffer_count
from sys.xx_bh
group by decode(state,0,'Free',
1,'Read and Modified',
2,'Read and Not Modified',
3,'Currently being Modified',
'Other')
Provided the OP figures out that xx_bh is probably a view defined by SYS on top of x$bh, this will get him the number of free buffers - which may be what he wants - but apart from that your query is at least 10 years short of complete, and the decode() of state 3 is definitely wrong.
The decode of x$bh.state for 10g is:
decode(state,
0,'free',
1,'xcur',
2,'scur',
3,'cr',
4,'read',
5,'mrec',
6,'irec',
7,'write',
8,'pi',
9,'memory',
10,'mwrite',
11,'donated'
), and for 11g it is:
decode(state,
0, 'free',
1, 'xcur',
2, 'scur',
3, 'cr',
4, 'read',
5, 'mrec',
6, 'irec',
7, 'write',
8, 'pi',
9, 'memory',
10, 'mwrite',
11, 'donated',
12, 'protected',
13, 'securefile',
14, 'siop',
15, 'recckpt',
16, 'flashfree',
17, 'flashcur',
18, 'flashna'
). (At least, that was the last time I looked - they may have changed again in 10.2.0.5 and 11.2.0.2.)
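Putting the 10g decode to work, a sketch of a buffer-state summary (x$bh is visible only from a SYS connection, and the state list is version-dependent as noted above):

```sql
-- Summarize buffer cache contents by decoded state (10g state values)
SELECT buffer_state, COUNT(*) AS buffer_count
FROM  (SELECT decode(state,
                     0,'free',  1,'xcur',   2,'scur',    3,'cr',
                     4,'read',  5,'mrec',   6,'irec',    7,'write',
                     8,'pi',    9,'memory', 10,'mwrite', 11,'donated',
                     'other') AS buffer_state
       FROM   x$bh)
GROUP  BY buffer_state
ORDER  BY buffer_count DESC;
```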
Regards
Jonathan Lewis -
Many "Flushing buffer cache" in 11.1.0.7
Hello,
I am getting "ALTER SYSTEM: Flushing buffer cache" in our alert log continuously. I have not flushed the buffer pool, but the message keeps appearing. Does anyone know whether any scheduled Oracle job does this, or can it happen only through a manual command?
Any thoughts will be highly appreciated.
Thu Jul 11 03:46:27 2013
Archived Log entry 151129 added for thread 1 sequence 92387 ID 0xc7afa6e dest 1:
Thu Jul 11 03:48:07 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:50:28 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:51:29 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:52:25 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:53:00 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:53:29 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:57:27 2013
Thanks
Aju
This is not normal. It can be issued manually, by scheduled jobs, or by 3rd-party software. Are you running PeopleSoft?
As advised already, check AUDIT_TRAIL (check that auditing is enabled first), and issue one flush manually to be sure that this action is logged.
Bug 12530225 : ALTER SYSTEM: FLUSHING BUFFER CACHE MESSAGES IN ALERT.LOG
Regards
Ed -
" unable to allocate space from the buffer cache" Message
Hi
I am trying to delete a large volume of records from a BTREE database. I have used DB_SET_RANGE with a cursor to locate the desired records, and after that Dbc::get() with DB_NEXT is called. After deleting a considerable number of records, I receive a message in the error callback function: "unable to allocate space from the buffer cache".
What might be the reason for such a message?
Regards
Nisam
Nisam,
This means that the cache is full and there are no pages that BDB can evict to make space. Are you running with the default cache size? You can increase the cache size by calling: dbenv->set_cachesize or db->set_cachesize.
Related docs:
Selecting a cache size: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
Bogdan Coman -
Should I increase my Buffer Cache?
Oracle 9i
Shared Pool 2112 Mb
Buffer Cache 1728 Mb
Large Pool 32Mb
Java Pool 32 Mb
Total 3907.358 Mb
SGA Max Size 17011.494 Mb
PGA
Aggregate PGA Target 2450 Mb
Current PGA Allocated 3286059 KB
Maximum PGA Allocated (since Startup) 3462747 KB
Cache Hit Percentage 98.71%
The Buffer Cache Size Advisor is telling me that if I increase the Buffer Cache to 1930 Mb I will get an 8.83% decrease in physical reads (and it gets better the more I increase it).
The question is: can I safely increase it (in light of my current memory allocations)? Is it worth it?
Two things stand out:
Your sga max size is 17Gb, but you are only using about 4Gb of it - so you seem to have 13Gb that you are not making best use of.
Your pga aggregate target is 2.4Gb, but you've already hit a peak of 3.4Gb - which means your target may be too small - so it's lucky you had all that spare memory which hadn't gone into the SGA. Despite the availability of memory, some of your queries may have been rationed at run-time to try to minimise the excess demand.
Is this OLTP or DSS - where do you really need the memory ? (Have a look in v$process to see the pga usage on a process by process level).
How many processes are allowed to connect to the database? (You ought to allow about 2Mb - 4Mb per process for the pga_aggregate_target in OLTP, and at least 1Mb per process for the buffer cache.)
Where do you see time lost ? time on disk I/O, or time on CPU ? What type of disk I/O, what's the nature of the CPU usage. These figures alone do not tell us what you should do with the spare memory you seem to have.
A simple response to your original question would be that you probably need to increase the pga_aggregate_target, and you might as well increase the buffer size since you seem to have the memory for both.
On the downside, changing the pga_aggregate_target could cause some execution plans to change; and changing the buffer size does change the limit size on a 'short' table, which can cause an increase in I/O as an unlucky side effect if you're a little heavy on "long" tablescans.
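The advisor figures being discussed come from v$db_cache_advice; a sketch of the underlying query, assuming an 8K block size and the DEFAULT pool:

```sql
-- Estimated physical reads at each candidate cache size
SELECT size_for_estimate AS cache_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name          = 'DEFAULT'
AND    block_size    = 8192
AND    advice_status = 'ON'
ORDER  BY size_for_estimate;
```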
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
Data Buffer Cache Error Message
I'm using a load rule that builds a dimension on the fly and getting the following error: "Not enough memory to allocate the Data Buffer Cache [adDatInitCacheParamsAborted]". I've got 4 other databases which are set up the same as this one and I'm not getting this error. I've checked all the settings and I think they're all the same. Anyone have any idea what this error could mean? I can be reached at [email protected]
Hi,
Same issue, running Vista too. This problem is recent. It may be due to the last itunes update. itunes 11.2.23 -
ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2
[oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
DWH12.__large_pool_size=16777216
DWH11.__large_pool_size=16777216
DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DWH12.__pga_aggregate_target=2902458368
DWH11.__pga_aggregate_target=2902458368
DWH12.__sga_target=4328521728
DWH11.__sga_target=4328521728
DWH12.__shared_io_pool_size=0
DWH11.__shared_io_pool_size=0
DWH12.__shared_pool_size=956301312
DWH11.__shared_pool_size=956301312
DWH12.__streams_pool_size=0
DWH11.__streams_pool_size=134217728
#*._realfree_heap_pagesize_hint=262144
#*._use_realfree_heap=TRUE
*.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='DWH'
*.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
*.db_recovery_file_dest_size=7373586432
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
DWH12.instance_number=2
DWH11.instance_number=1
DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
*.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
*.log_archive_format='DWH_%t_%s_%r.arc'
#*.memory_max_target=7226785792
*.memory_target=7226785792
*.open_cursors=1000
*.processes=500
*.remote_listener='LISTENERS_SCAN'
*.remote_login_passwordfile='exclusive'
*.sessions=555
DWH12.thread=2
DWH11.thread=1
DWH12.undo_tablespace='UNDOTBS2'
DWH11.undo_tablespace='UNDOTBS1'
SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
[oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maximum size of a message queue
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
#kernel.shmall = 4294967296
kernel.shmall = 8250344
# Oracle kernel parameters
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
kernel.shmmax = 536870912
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
Please can I know how to resolve this error?
CAUSE: The user specified one or more of { db_cache_size, db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2, 4, 8, 16, 32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
ACTION: Very Large Memory can only be enabled with the old (pre-Oracle 8.2) parameters. -
DB buffer cache vs. SQL query & PL/SQL function result cache
Hi all,
Started preparing for OCA cert. just myself using McGraw Hill's exam guide. Have a question about memory structures.
As I understand it, the DB buffer cache holds copies of the data blocks read by, e.g., SELECT queries, so that they can be reused by another session (server process).
There is also an additional option - the SQL query & PL/SQL function result cache (from 11g), where the results of such queries are also stored.
Do they do the same thing, or is there some difference in purpose?
thanks in advance...
The result cache is located in the shared pool, so it is one component of the shared pool. When a server process executes a query (and you have configured the result cache), the result is stored in the shared pool. On the next execution, the run-time mechanism detects this and considers using the result cache instead of executing the query (provided the data has not changed, which is checked at that time).
The buffer cache and the result cache are different things with different purposes; the result cache was introduced in 11g to improve query response time (a similar mechanism was also implemented in 10g for subquery execution in complex queries). The buffer cache holds data blocks, not query results.
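The difference is easy to see with the 11g result cache hint; a sketch using a hypothetical table t:

```sql
-- The first run executes the query and stores the *result* in the shared pool;
-- later runs can return that result without revisiting the data blocks.
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM t;

-- Cached result sets are visible here:
SELECT name, status, row_count
FROM   v$result_cache_objects
WHERE  type = 'Result';
```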
Edited by: Chinar on Nov 4, 2011 4:40 AM
(Removing lots of "But" word from sentences :-) ) -
Hi,
We seem to get this error through SCOM every couple of weeks. It doesn't correlate with the AV updates, so I'm not sure what's eating up the memory. The server has been patched to the latest rollup and service pack. The mailbox servers have been provisioned sufficiently with more than enough memory. Currently they just slow down until the databases activate on another mailbox server.
A significant portion of the database buffer cache has been written out to the system paging file.
Any ideas?
I've seen this with properly sized servers with very little Exchange load running. It could be a number of different things. Here are some items to check:
Confirm that the server hardware has the latest BIOS, drivers, firmware, etc
Confirm that the Windows OS is running the recommended hotfixes. Here is an older post that might still apply to you
http://blogs.technet.com/b/dblanch/archive/2012/02/27/a-few-hotfixes-to-consider.aspx
http://support.microsoft.com/kb/2699780/en-us
Setup a perfmon to capture data from the server. Look for disk performance, excessive paging, CPU/Processor spikes, and more. Use the PAL tool to collect and analyze the perf data -
http://pal.codeplex.com/
Include looking for other applications or processes that might be consuming system resources (AV, Backup, security, etc)
Be sure that the disk are properly aligned -
http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
Check that the network is properly configured for Exchange Server. You might be surprised how the network config can cause perf & SCOM alerts.
Make sure that you did not (improperly) statically set msExchESEParamCacheSizeMax and msExchESEParamCacheSizeMin attributes in Active Directory -
http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
Be sure that hyperthreading is NOT enabled -
http://technet.microsoft.com/en-us/library/dd346699(v=exchg.141).aspx#Hyper
Check that there are no hardware issues on the server (RAM, CPU, etc). You might need to run some vendor specific utilities/tools to validate.
Proper paging file configuration should be considered for Exchange servers. You can use the perfmon to see just how much paging is occurring.
These will usually lead you in the right direction. Good Luck! -
This was discussed here, with no resolution
http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
I have the same issue. This is a single-purpose physical mailbox server with 320 users and 72GB of RAM. That should be plenty. I've checked and there are no manual settings for the database cache. There are no other problems with the server, nothing reported in the logs, except for the aforementioned error (see below).
The server is sluggish. A reboot will clear up the problem temporarily. The only processes using any significant amount of memory are store.exe (using 53GB), regsvc (using 5) and W3 and Monitoringhost.exe using 1 GB each. Does anyone have any ideas on this?
Warning ESE Event ID 906.
Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)
Brian,
We had this event log entry as well which SCOM picked up on, and 10 seconds before it the Forefront Protection 2010 for Exchange updated all of its engines.
We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
for the sole purpose of serving as our public folder servers.
So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some - thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the
cache flush to paging file, we got the following alert:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:14 AM
Event ID: 17012
Task Category: Storage
Level: Error
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
Followed by:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:15 AM
Event ID: 17106
Task Category: Storage
Level: Information
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:13:50 AM
Event ID: 17102
Task Category: Storage
Level: Warning
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action. This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent actions.
Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
Thanks! -
Will Oracle look into the database buffer cache in this scenario?
hi guys,
say I have a table with a million rows, there are no indexes on it, and I did a
select * from t where t.id = 522000;
About 5 minutes later (while that particular (call it blockA) block is still in the database buffer cache) I do a
select * from t where t.id > 400000 and t.id < 600000;
Would Oracle still pick blockA up from the database buffer cache? if so, how? How would it know that that block is part of our query?
thanks
Without an index, Oracle would have done a FullTableScan for the first query. The blocks would be aged out of the buffer cache very quickly, as they were retrieved by an FTS on a large table. It is unlikely that block 'A' would still be in the buffer cache after 5 minutes.
However, assuming that block 'A' is still in the buffer_cache, how does Oracle know that records for the second query are in block 'A' ? It doesn't. Oracle will attempt another FullTableScan for the second query -- even if, as in the first query -- the resultset returned is only 1 row.
Now, if the table were indexed and rows were being retrieved via the Index, Oracle would use the ROWID to get the "DBA" (DataBlockAddress) and get the hash value of that DBA to identify the 'cache buffers chain' where the block is likely to be found. Oracle will make a read request if the block is not present in the expected location.
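Hemant's ROWID-to-block mapping can be demonstrated with DBMS_ROWID; a sketch against the hypothetical table t from this thread (the SQL*Plus substitution variables carry the values into the second query):

```sql
-- Find the datafile and block that hold the row of interest
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
FROM   t
WHERE  id = 522000;

-- Then check whether that block is currently in the buffer cache
SELECT status
FROM   v$bh
WHERE  file#  = &file_no
AND    block# = &block_no;
```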
Hemant K Chitale
http://hemantoracledba.blogspot.com -
What are all information brought into database buffer cache ?
Hi,
What information is brought into the database buffer cache when a user performs any of the operations "insert", "update", "delete", or "select"?
Is only the data block to be modified brought into the cache, or are all of a table's data blocks brought into the cache during the operations I mentioned above?
What is the purpose of the SQL Area? What information is brought into the SQL Area?
Please explain me the logic behind the questions i asked above.
thanks in advance,
nvseenu
Documentation is your friend. Why not start by reading the Memory Architecture chapter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm
Message was edited by:
orafad
Hi orafad,
I have read the Memory Architecture chapter.
In that documentation, the following explanation is given:
The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles.
But I would like to know whether all or only a few data blocks are brought into the cache.
thanks in advance,
nvseenu -
Is dictionary cache double buffered (shared pool, buffer cache)
Hi,
I'm trying to get idea about how dictionary cache is buffered .
Let us say we're talking about dc_objects .
It is related to the dba_tables view, so the blocks of all the underlying tables (sys.obj$, sys.user$, ...) are cached in the buffer cache.
So why do we cache them additionally in the dictionary cache area of the shared pool?
It looks like double buffering and a waste of SGA.
Please explain.
Regards
GregG
Hi,
The dictionary cache does not cache table data; rather, it caches the structural information of the table (in your case).
If I do "select ename from emp", then during statement compilation Oracle needs to check whether "ename" is a real column, and for this it needs to query the data dictionary (via a physical read of the system datafile, or from the dictionary cache if the information is already there). It also needs to check whether I (the logged-in user) have the rights to access this table/column, and all of this information comes from the data dictionary.
This is a simple example; the dictionary cache also needs to store a lot of other information (but purely the information present in the data dictionary).
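The dictionary (row) cache described above can be observed through v$rowcache; a sketch:

```sql
-- Hit ratio per dictionary cache area; dc_objects is the one discussed above
SELECT parameter, gets, getmisses,
       ROUND(100 * (gets - getmisses) / NULLIF(gets, 0), 2) AS hit_pct
FROM   v$rowcache
WHERE  parameter LIKE 'dc\_%' ESCAPE '\'
ORDER  BY gets DESC;
```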
Salman