Async I/O & DBWR
My box runs Solaris 5.8 with Veritas FS and ODM, Oracle 9.2.0.6, 56 CPUs, 120 GB RAM; it is a DSS-type application but with heavy data loading.
1. db_writer_processes = 7
2. disk_asynch_io = async
3. filesystemio_options = async
4. parallel_automatic_tuning = true
5. parallel_min_servers = 16
6. parallel_max_servers = 160
These are the settings. If #2 is async, what should the values of #1 and filesystemio_options be? My understanding is that multiple writer processes help in a non-async environment. Is there any negative impact if #1 is greater than 1? I have PIOT with buffer busy waits and db file sequential read waits. To reduce the buffer busy waits and the rest, should we checkpoint more frequently or increase the number of DB writer processes?
Also, I am getting ORA-01555 even after increasing undo from 65 to 230 GB and retention from 6 to 25 hours. Most of the queries are parallel queries/subqueries. What would be the proper settings?
thanks
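For the ORA-01555 question above, one quick check is whether the configured undo retention actually covers the longest-running queries, regardless of how large the undo tablespace is — a sketch against v$undostat (available from 9i):

```sql
-- Longest query (seconds) and peak undo consumption seen in any
-- 10-minute v$undostat bucket since instance startup.
SELECT MAX(maxquerylen) AS longest_query_sec,
       MAX(undoblks)    AS peak_undo_blocks
FROM   v$undostat;

-- If longest_query_sec exceeds undo_retention (in seconds),
-- ORA-01555 remains possible no matter how big the undo tablespace is.
SHOW PARAMETER undo_retention
```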
If you are really using async I/O then you should only need one DBWR process.
HTH -- Mark D Powell
Not quite true:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref909
10.3.9.3.3 Choosing Between Multiple DBWR Processes and I/O Slaves
Configuring multiple DBWR processes benefits performance when a single DBWR process is unable to keep up with the required workload. However, before configuring multiple DBWR processes, check whether asynchronous I/O is available and configured on the system. If the system supports asynchronous I/O but it is not currently used, then enable asynchronous I/O to see if this alleviates the problem. If the system does not support asynchronous I/O, or if asynchronous I/O is already configured and there is still a DBWR bottleneck, then configure multiple DBWR processes.
Note:
If asynchronous I/O is not available on your platform, then asynchronous I/O can be disabled by setting the DISK_ASYNCH_IO initialization parameter to FALSE.
Using multiple DBWRs parallelizes the gathering and writing of buffers. Therefore, multiple DBWn processes should deliver more throughput than one DBWR process with the same number of I/O slaves. For this reason, the use of I/O slaves has been deprecated in favor of multiple DBWR processes. I/O slaves should only be used if multiple DBWR processes cannot be configured.
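The documentation's decision sequence above can be sketched as a set of checks (illustrative only — the filesystemio_options value and writer count shown are assumptions, not recommendations for the original poster's system):

```sql
-- 1. Is async I/O enabled at the instance level?
SHOW PARAMETER disk_asynch_io
SHOW PARAMETER filesystemio_options

-- 2. If the OS supports async I/O but it is off, enable it first:
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;

-- 3. Only if a DBWR bottleneck remains after that, add writers
--    (static parameter: requires an instance restart):
ALTER SYSTEM SET db_writer_processes = 4 SCOPE=SPFILE;
```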
Similar Messages
-
Last night one of our standby databases crashed because our sysadmin was doing some maintenance on NetApp volumes and for a few seconds all the mounts vanished; following that, the database crashed with the following message in the logs.
Errors in file /app/oracle/admin/xxxx/bdump/xxx_ckpt_23056.trc:
ORA-00206: error in writing (block 3, # blocks 1) of control file
ORA-00202: control file: '/XXXX/XXXX_ARCH/XXXX_C02.ctl'
ORA-27061: waiting for async I/Os failed
Linux-x86_64 Error: 13: Permission denied
Additional information: -1
Additional information: 16384
CKPT: terminating instance due to error 221
when I restarted the instance, it came back fine and MRP started applying the logs without any problem.
So my questions are the following; they were already asked on AskTom but were not answered.
1> In view of a disk failure, what is the timeout for DBWR waiting for a confirmation from the OS?
2> Is this timeout OS dependent, or is it something that can be tweaked in Oracle?
3> Would DBWR crash the instance if it doesn't receive confirmation from the OS that the block has been written, or does it retry?
If it retries the write, how many retries are there before it eventually fails, crashing the instance?
Thanks,
Ramki
Thanks for the reply.
What full version of Oracle? >> Oracle 10.2.0.4
What OS? >> RHEL 5 Advanced server ( Fully supports ASYNC I/O)
NetApp filesystem.
There are several problem/bug reports on My Oracle Support related to the ORA-27061 error. A couple of them point to a lack of available OS resources. The reason for this error is fairly well known: someone effectively pulled the plug on the NetApp filer.
My three questions are about the timeouts for DBWR waiting to hear back from the OS about previously submitted async I/O operations: how long does it wait before it gives out distress calls and the database is crashed?
-Ramki -
ORA-01157: cannot identify/lock data file 13 - see DBWR trace file
Hi all,
I have an Oracle Database 11g Release 11.1.0.6.0 - 64bit Production with the Real Application Clusters option.
I'm using ASM.
Yesterday I added new disks and then changed the /etc/udev/rules.d/98-oracle.rules file.
Now it looks like this (the four SATA entries at the end are the new ones):
# Oracle Configuration Registry
KERNEL=="emcpowerd1", OWNER="root", GROUP="oinstall", MODE="640", NAME="ocr"
# Voting Disks
KERNEL=="emcpowerr1", OWNER="oracle", GROUP="oinstall", MODE="640", NAME="voting"
# Spfile ASM+
KERNEL=="emcpowers1", OWNER="oracle", GROUP="dba", MODE="660", NAME="spfileASM"
# ASM Devices
KERNEL=="emcpowerj1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm0" #onlineredo asm disk
KERNEL=="emcpowern1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm1" #data asm disk
KERNEL=="emcpowerh1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm2" #data asm disk
KERNEL=="emcpowerq1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm3" #data asm disk
KERNEL=="emcpowere1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm4" #data asm disk
KERNEL=="emcpowerg1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm5" #data asm disk
KERNEL=="emcpowerl1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm6" #data asm disk
KERNEL=="emcpowero1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm7" #data asm disk
KERNEL=="emcpowerf1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm8" #data asm disk
KERNEL=="emcpowerm1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm9" #data asm disk
KERNEL=="emcpoweri1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm10" #data asm disk
KERNEL=="emcpowerp1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm11" #data asm disk
KERNEL=="emcpowerk1", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm12" #data asm disk
KERNEL=="emcpowert", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm210" #data asm disk SATA
KERNEL=="emcpowerc", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm211" #data asm disk SATA
KERNEL=="emcpowerb", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm212" #data asm disk SATA
KERNEL=="emcpowera", OWNER="oracle", GROUP="dba", MODE="660", NAME="asm213" #data asm disk SATA
It's the same on both RAC nodes.
The operating system, Red Hat 5.4, sees the new devices (both nodes):
ls -ltr /dev/asm*
brw-rw---- 1 oracle dba 120, 16 May 18 10:03 /dev/asm212
brw-rw---- 1 oracle dba 120, 304 May 18 10:03 /dev/asm210
brw-rw---- 1 oracle dba 120, 32 May 18 10:03 /dev/asm211
brw-rw---- 1 oracle dba 120, 0 May 18 10:03 /dev/asm213
brw-rw---- 1 oracle dba 120, 209 May 18 10:05 /dev/asm1
brw-rw---- 1 oracle dba 120, 81 May 18 13:40 /dev/asm8
brw-rw---- 1 oracle dba 120, 97 May 18 13:40 /dev/asm5
brw-rw---- 1 oracle dba 120, 193 May 18 13:40 /dev/asm9
brw-rw---- 1 oracle dba 120, 161 May 18 13:40 /dev/asm12
brw-rw---- 1 oracle dba 120, 241 May 18 13:40 /dev/asm11
brw-rw---- 1 oracle dba 120, 177 May 18 13:40 /dev/asm6
brw-rw---- 1 oracle dba 120, 225 May 18 13:40 /dev/asm7
brw-rw---- 1 oracle dba 120, 65 May 18 13:40 /dev/asm4
brw-rw---- 1 oracle dba 120, 129 May 18 13:40 /dev/asm10
brw-rw---- 1 oracle dba 120, 257 May 18 13:40 /dev/asm3
brw-rw---- 1 oracle dba 120, 113 May 18 13:40 /dev/asm2
brw-rw---- 1 oracle dba 120, 145 May 18 13:40 /dev/asm0
Both ASM instances see the new devices:
From ASM1
SQL*Plus: Release 11.1.0.6.0 - Production on Tue May 18 13:43:10 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.1.0.6.0 - 64bit Production
With the Real Application Clusters option
SQL> select instance_name from v$instance;
INSTANCE_NAME
+ASM1
SQL> select path from v$asm_disk;
PATH
/dev/asm212
/dev/asm211
/dev/asm213
/dev/asm210
/dev/asm1
/dev/asm4
/dev/asm5
/dev/asm0
/dev/asm12
/dev/asm9
/dev/asm2
/dev/asm10
/dev/asm7
/dev/asm11
/dev/asm3
/dev/asm8
/dev/asm6
17 rows selected.
SQL>
From ASM2
SQL*Plus: Release 11.1.0.6.0 - Production on Tue May 18 13:42:39 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.1.0.6.0 - 64bit Production
With the Real Application Clusters option
SQL> select instance_name from v$instance;
INSTANCE_NAME
+ASM2
SQL> select path from v$asm_disk;
PATH
/dev/asm213
/dev/asm211
/dev/asm210
/dev/asm212
/dev/asm8
/dev/asm7
/dev/asm6
/dev/asm11
/dev/asm4
/dev/asm12
/dev/asm5
/dev/asm9
/dev/asm1
/dev/asm3
/dev/asm10
/dev/asm2
/dev/asm0
17 rows selected.
SQL>
Then I created a disk group:
CREATE DISKGROUP STORE EXTERNAL REDUNDANCY DISK '/dev/asm210';
Then I created a new tablespace:
CREATE TABLESPACE store DATAFILE '+STORE';
I did all this operations from NODE1.
What is happening now is that every time I try to read something from the new disk group FROM NODE2 I get ORA-01157:
ORA-01157: cannot identify/lock data file 13 - see DBWR trace file
ORA-01110: data file 13: '+STORE/evodb/datafile/store.256.719232707'
No problem to read from NODE1.
The simple query on dba_data_files works from NODE1 and fails from NODE2 with ORA-01157.
I found this on the alert log:
<msg time='2010-05-18T10:06:41.084+00:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
module='' pid='11014'>
<txt>Errors in file /u01/app/oracle/diag/rdbms/evodb/EVODB2/trace/EVODB2_smon_11014.trc:
ORA-01157: cannot identify/lock data file 13 - see DBWR trace file
ORA-01110: data file 13: '+STORE/evodb/datafile/store.256.719232707'
</txt>
</msg>
And this from the trace:
Trace file /u01/app/oracle/diag/rdbms/evodb/EVODB2/trace/EVODB2_smon_11014.trc
Oracle Database 11g Release 11.1.0.6.0 - 64bit Production
With the Real Application Clusters option
ORACLE_HOME = /u01/app/oracle/product/11.1.0/db1
System name: Linux
Node name: node02
Release: 2.6.18-128.7.1.el5
Version: #1 SMP Wed Aug 19 04:00:49 EDT 2009
Machine: x86_64
Instance name: EVODB2
Redo thread mounted by this instance: 2
Oracle process number: 19
Unix process pid: 11014, image: oracle@node02 (SMON)
*** 2010-05-18 10:06:41.084
*** SESSION ID:(151.1) 2010-05-18 10:06:41.084
*** CLIENT ID:() 2010-05-18 10:06:41.084
*** SERVICE NAME:(SYS$BACKGROUND) 2010-05-18 10:06:41.084
*** MODULE NAME:() 2010-05-18 10:06:41.084
*** ACTION NAME:() 2010-05-18 10:06:41.084
DDE rules only execution for: ORA 1110
----- START Event Driven Actions Dump ----
---- END Event Driven Actions Dump ----
----- START DDE Actions Dump -----
----- DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
Successfully dispatched
----- (Action duration in csec: 0) -----
----- END DDE Actions Dump -----
*** 2010-05-18 10:06:41.084
SMON: following errors trapped and ignored:
ORA-01157: cannot identify/lock data file 13 - see DBWR trace file
ORA-01110: data file 13: '+STORE/evodb/datafile/store.256.719232707'
Any suggestion about how to solve the problem?
Thanks in advance!
Samuel
I didn't understand what you mean by thread...
But I think you found the problem.
The initialization files of both ASM instances are: SPFILE='/dev/spfileASM'
That SPFILE is (common to both):
+ASM2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
+ASM2.asm_diskgroups='ONLINELOG','ARCHIVELOG','DATA'
+ASM1.asm_diskgroups='ONLINELOG','ARCHIVELOG','DATA','STORE'#Manual Mount
*.asm_diskstring='/dev/asm*'
*.cluster_database=true
*.diagnostic_dest='/u01/app/oracle'
+ASM1.instance_number=1
+ASM2.instance_number=2
*.instance_type='asm'
*.large_pool_size=12M
+ASM1.local_listener='LISTENER_ASM'
+AC
Then I executed another query:
From ASM1
SQL> select instance_name from v$instance;
INSTANCE_NAME
+ASM1
SQL> select name, state from v$asm_diskgroup;
NAME STATE
ARCHIVELOG MOUNTED
DATA MOUNTED
ONLINELOG MOUNTED
STORE MOUNTED
SQL>
From ASM2
SQL> select instance_name from v$instance;
INSTANCE_NAME
+ASM2
SQL> select name, state from v$asm_diskgroup;
NAME STATE
ARCHIVELOG MOUNTED
DATA MOUNTED
ONLINELOG MOUNTED
STORE DISMOUNTED
SQL>
Then the question is:
how can I mount the disk group STORE also on ASM2?
I think the problem is in these spfile lines:
+ASM2.asm_diskgroups='ONLINELOG','ARCHIVELOG','DATA'
+ASM1.asm_diskgroups='ONLINELOG','ARCHIVELOG','DATA','STORE'
How can I change the +ASM2.asm_diskgroups value?
Thanks -
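For reference, the usual way to resolve this is to mount the disk group on the second ASM instance and then persist the setting for that SID only — a sketch using the names from this thread:

```sql
-- On the +ASM2 instance: mount the group immediately ...
ALTER DISKGROUP STORE MOUNT;

-- ... then make it automatic at the next startup of +ASM2 only:
ALTER SYSTEM SET asm_diskgroups = 'ONLINELOG','ARCHIVELOG','DATA','STORE'
  SID = '+ASM2' SCOPE = SPFILE;
```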
DBWR in Oracle 8 (DBWR_IO_SLAVES and DB_WRITER_PROCESSES)
Product: ORACLE SERVER
Date written: 2002-08-12
DBWR in Oracle 8 (dbwr_io_slaves and db_writer_processes)
In Oracle 7, db_writers can be seen as a way of simulating async I/O through master-slave processing. In Oracle 8 there are two ways of using multiple database writers to improve DBWR write performance:
1. DBWR I/O slaves (dbwr_io_slaves)
The multiple DBWR processes of Oracle 7 were simple slave processes and could not issue async I/O calls. Starting with Oracle 8.0.3, the slave database writer code was moved into the kernel, making async I/O from the slave processes possible. This is enabled through the dbwr_io_slaves parameter in the init.ora file. It is very similar to Oracle 7, except that the I/O slaves are capable of asynchronous I/O, so a slave does not block after an I/O call, which gives better performance. Because the slave processes are started at database open time rather than when the instance is created, their Oracle process IDs are assigned from 9 onward, and the process names seen at the OS level have the form ora_i10n_SID.
With dbwr_io_slaves=3, the Oracle background processes below are started; ora_i101_V804, ora_i102_V804 and ora_i103_V804 are the DBWR slave processes.
tcsol2% ps -ef | grep V804
usupport 5419 1 0 06:23:53 ? 0:00 ora_pmon_V804
usupport 5429 1 1 06:23:53 ? 0:00 ora_smon_V804
usupport 5421 1 0 06:23:53 ? 0:00 ora_dbw0_V804
usupport 5433 1 0 06:23:56 ? 0:00 ora_i101_V804
usupport 5423 1 0 06:23:53 ? 0:00 ora_arch_V804
usupport 5431 1 0 06:23:53 ? 0:00 ora_reco_V804
usupport 5435 1 0 06:23:56 ? 0:00 ora_i102_V804
usupport 5437 1 0 06:23:56 ? 0:00 ora_i103_V804
usupport 5425 1 0 06:23:53 ? 0:00 ora_lgwr_V804
usupport 5427 1 0 06:23:53 ? 0:00 ora_ckpt_V804
2. Multiple DBWR (db_writer_processes)
Multiple database writers are configured with the db_writer_processes parameter in the init.ora file, available from Oracle 8.0.4. These are true multiple database writers rather than a master-slave arrangement, and the database writer processes are started after PMON starts.
The process names have the form ora_dbwn_SID. Below is an example of the Oracle background processes started with db_block_lru_latches=2 and db_writer_processes=2; here ora_dbw0_V804 and ora_dbw1_V804 are the DBWR processes. If db_writer_processes is not specified it defaults to 1, and even then the process is named ora_dbw0_SID, not ora_dbwr_SID as in Oracle 7.
usupport 5522 1 0 06:31:39 ? 0:00 ora_dbw1_V804
usupport 5524 1 0 06:31:39 ? 0:00 ora_arch_V804
usupport 5532 1 0 06:31:39 ? 0:00 ora_reco_V804
usupport 5528 1 0 06:31:39 ? 0:00 ora_ckpt_V804
usupport 5530 1 0 06:31:39 ? 0:00 ora_smon_V804
usupport 5526 1 0 06:31:39 ? 0:00 ora_lgwr_V804
usupport 5520 1 0 06:31:39 ? 0:00 ora_dbw0_V804
usupport 5518 1 0 06:31:38 ? 0:00 ora_pmon_V804
Each writer process specified by db_writer_processes is assigned to one latch set. It is therefore advisable to set db_writer_processes to the same value as the number of LRU latches (db_block_lru_latches), but it should not exceed the number of CPUs.
[Note] The number of DBWR processes started is limited by the db_block_lru_latches parameter in the init.ora file: even if db_writer_processes is set higher than db_block_lru_latches, only as many DBWR processes as db_block_lru_latches are started.
A strength of the way Oracle 8 provides DBWR I/O slaves and multiple DBWRs is that the mechanism is implemented inside the kernel, so unlike the earlier OSD-layer implementation it is generic rather than port-specific.
3. Considerations when choosing between the two methods
Although both forms of DBWR can help, which one to use generally depends on whether the OS provides asynchronous I/O and on the number of CPUs. If the system has multiple CPUs, db_writer_processes is preferable; if async I/O is available, either can be effective. Note, however, that dbwr_io_slaves carries some overhead: enabling slave I/O processes requires additional shared memory for the I/O buffers and the request queue.
Multiple writer processes and I/O slaves are suited to very heavily loaded OLTP environments and should be used only when a certain level of performance is required. For example, when async I/O is available, a single DBWR in async I/O mode with no I/O slaves can be sufficient and preferable. Examine current performance and confirm that DBWR really is the bottleneck before using them.
[Note] If both parameters are set together, only dbwr_io_slaves takes effect. This is because dbwr_io_slaves is designed to have a single master DBWR process regardless of db_writer_processes.
http://www.fors.com/velpuri2/PERFORMANCE/ASYNC
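The note above can be summarized as an init.ora sketch (the parameter values are examples only; the two mechanisms should not be combined):

```
# True multiple writers (Oracle 8.0.4+); the effective count is capped
# by db_block_lru_latches.
db_block_lru_latches = 2
db_writer_processes = 2

# Alternative for platforms without async I/O support. Do not set both:
# if both are set, only dbwr_io_slaves takes effect.
# dbwr_io_slaves = 3
```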
hare krishna
Alok -
Hi,
just looking for people's thoughts on how many DBWR processes a system should have.
We have a database instance doing OLTP 24 * 7, with a single dbwr but linux ASYNC I/O enabled.
The server has 8 logical cpus and the thought is that there should be a dbwr process for each cpu.
My understanding however was that with ASYNC I/O enabled at OS level that the database required just a single DBWR process.
regards
Alan
Pavan Kumar wrote:
Hi,
I read this with respect to Oracle 8i; please check it with respect to 9i.
Will the result of cpu_count parameter be different from the fixed table you mentioned?
Are you sure on this.. ??
Refer : http://www.pvmehta.com/new/db_writer_process%20or%20dbwr_io_slaves.htm (History ;-))
Let me know your views... waiting for your comments.
- Pavan Kumar N
Pavan,
AFAIK, db_writer_processes will be equal to the number of LRU latches (db_block_lru_latches), since each latch set will be handled by one DB writer process. But from 9i the db_block_lru_latches parameter is hidden, and db_writer_processes must be less than the number of CPUs.
Either you read and understood it wrong, or I am missing something? Where is it mentioned that LRU latches control the DBWR processes?
Here is the link to 8i,
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76961/ch131.htm#50255
DB_BLOCK_LRU_LATCHES specifies the maximum number of LRU latch sets. The buffers of a buffer pool are equally divided among the working LRU latch sets of the buffer pool so that each buffer is protected by one LRU latch. Normally, the more latches you specify, the less contention exists for those latches. However, too many latches may result in small LRU lists, potentially reducing the cache life of a database block. The maximum of (CPU_COUNT x 2 x 3) ensures that the number of latches does not exceed twice the product of the number of CPUs and the number of buffer pools.
An LRU latch controls buffer movement in the LRU list within one working set. There are 2*cpu_count working sets (I need to check whether the same is true for 10g), so there is a 1:1 relationship between an LRU latch and one working set. Now, there "may be" a possibility that one DBWR is not sufficient for the entire cache, so there may be a need for multiple of them. But that still doesn't signify anywhere that LRU latches and DBWRs are supposed to be in a 1:1 ratio, or that the former controls or determines the other. It may be that one has to go for more DBWRs, but that still doesn't mean there must be a DBWR per working set in correlation with an LRU latch.
I couldn't find any reference in 9i or beyond, as the parameter was deprecated in 9i. Can you point me to the 9i docs where it is mentioned?
I didn't understand the link you have given. It seems never to have been updated beyond Oracle 8, and a lot has changed since; I don't see that it's of much use.
Lastly, I asked about the X$ table that you mentioned for the number of CPUs,
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> show parameter count
NAME TYPE VALUE
active_instance_count integer
cpu_count integer 2
db_file_multiblock_read_count integer 16
plsql_native_library_subdir_count integer 0
SQL> select KVIIVAL from X$KVII where KVIITAG='ksbcpu'
2 /
no rows selected
SQL> select KVIIVAL from X$KVII where KVIITAG='KSBCPU';
no rows selected
SQL>
I got no result from your query, while the parameter simply gives the number of CPUs.
Cheers
Aman.... -
Hi,
We are currently experiencing a change in behavior on our performance testing environment with regards to the size of the writes issued by the database writer.
In our case we have a table partition which resides on its own LUN and is 100% inserts (no indexes, constraints, etc.). Previously the writes to this LUN were approximately 100K, as observed in iostat and the dba_hist_filestatxs table. The I/O profile from the Sun Storage Performance Analyser (SWAT) also reflects this and indicates that the majority of I/O on this LUN is sequential.
We have always had 8K writes to another LUN which does more random I/O.
We are now seeing that the write sizes have dropped to approximately 16K, indicating that the dbwr is not coalescing the writes in the same way, and the I/O profile appears more random.
Does anybody have any feel for what may influence this behaviour?
The one change that we are seeing at the I/O level is that our I/O service time has increased, so I am wondering if the DBWR process would change its behaviour if it detected longer response times for db file parallel write events.
Any theories would be appreciated.
Environment
Database Version : 10.2.0.2
OS Version: Solaris 10 SPARC64
CPU Cores : 8
Database Writers: 2
Thanks and Regards
Adrian
Hi,
"We are now seeing that the write sizes have dropped to approximately 16K, indicating that the dbwr is not coalescing the writes in the same way and the I/O profile appears more random."
Do you have problems with that?
Why are you looking at the DBWR performance rather than end-user's?
What is your db_cache_size?
Are you using async IO?
Are you aware of any changes made to the environment?
and, of course, why not 10.2.0.4? -
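The checks the responder asks for above can be gathered quickly in SQL*Plus — a sketch with 10g parameter names:

```sql
-- Settings relevant to the write-size question above.
SHOW PARAMETER db_cache_size
SHOW PARAMETER disk_asynch_io        -- async I/O at the device level?
SHOW PARAMETER filesystemio_options  -- async/direct I/O through the filesystem?
SHOW PARAMETER db_writer_processes
```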
Product: ORACLE SERVER
Date written: 2003-01-15
Basic questions about ASYNC I/O
===============================
Purpose
Asynchronous I/O
1. What is async I/O?
2. How is async I/O configured?
3. How should DBWRs and async I/O be configured?
4. On which systems can async I/O be run?
Explanation
Answers
1. Asynchronous I/O is an input/output mechanism that lets processes continue with their next task immediately after issuing a write, instead of waiting for it to complete. Asynchronous I/O improves system performance by minimizing unnecessarily wasted idle time; DBWR is not held up by each individual I/O.
2. Asynchronous I/O can be enabled by setting the ASYNC_WRITE or USE_ASYNC_IO parameter to TRUE in the INIT.ORA file. Asynchronous I/O is available in many Oracle ports on Unix platforms; in some cases it requires raw disk devices or special kernel settings. See the Installation Guide for more information.
3. If asynchronous I/O cannot be used on the system, use multiple DBWRs. Oracle recommends at least one DBWR per disk. If asynchronous I/O is in use, use a single DBWR; when I/O is done in parallel there is no reason to use multiple DBWRs.
4. Asynchronous I/O is operating-system dependent. It is best to discuss this question with your operating system vendor.
Reference Document
Note.1034289.6 -
MochiKit.Async.XMLHttpRequestError Error while invoking a hyperlink
Hi,
I have a report which has a sub-report embedded in it. In the sub-report there is a field that has a hyperlink. The hyperlink is this:
"https://myserver.com/myportal/generateDocument?reportLink=true&filePath=" + CurrentFieldValue
and when I hover the mouse over the field, the console shows up
javascript.parent.bobj.event.publish('hyperlinked','MyReport','urs=https%3A%2F%2Fmyserver.com%2Fportal%2FgenerateDocument%3FreportLink%3Dtrue%26filePath%3DNTkxNA%3D%3D&target=_self')
When I click on the link, the error message pops up
Unable to process your request
message: Request failed
number: 0
name: MochiKit.Async.XMLHttpRequestError
But the document opens. It is a PDF document, which is supposed to be opened, and it opens up. But I get the error message.
Any clues how to suppress the error message pop-up window ?
Thanks
Try this,
Do not launch the sub-report from the main report; rather, open the sub-report by itself from InfoView and try to perform the action. I assume something may be happening in the OpenDoc; if you can narrow it down to OpenDoc, then you can work on how you need to build the OpenDoc URL.
Just need to do the natural troubleshooting.
Try to open the sub-report in the Webi Rich Client and publish it back; see if that fixes anything. I googled and found something about post-back issues, and that seems to be a problem. There are a few links on Google when searching for "MochiKit.Async.XMLHttpRequestError"; they should help you, or at least give you a direction.
In BPM sync/async Step , can you have different message schema?
Scenario:
File to XI to BPM to SOAP to RFC
In order to trigger the web service, I use a dummy file which is polled at a certain frequency (once every 30 minutes in test mode); this calls the web service. The answer of the web service is sent to the RFC asynchronously.
so these are the Repository objects I created :
1. File Outbound Async Message Interface- Output Message (File_Request_MT)
2.File Abstract Message Interface-Output Message (File_Request_MT)
3.Async/ sync bridge Abstract interface to call the Webservice
4 Webservice Inbound Sync Message interface- with both input and output message
5.RFC Inbound Async message interface- input message (RFC_Request)
6 RFC Abstract Async Message interface-input message (RFC_Request)
BPM
a) Receive step which uses the the object 2.
b) Sync send which uses object 3
c) Send step which uses Object 5
I am using a BPM to Receive the file data/ "request " by using the File abstract message interface
Then use the the object 3 to call the webservice in BPM .
My question is on this step
Can I have for the abstract Interface(Object 3) the Request message as File_Request_MT and
response message as (RFC_Request) ?
I use a message mapping to map Input and Ouput message of Object 3 with Object 4.
Thanks for your insight
Raj,
Thanks for the feedback , but I have a question regarding your reply
Interface Objects
Object 1: File_Request_Abs
Object 2: Soap_Response_Abs
Object 3:Soap_Abs_Synch
Output Message:File_Request_Abs
Input Message:Soap_Response_Abs
Object 4:Rfc_Request_Abs
The question is about
a)SOAP_RESPONSE_ABS : could you please tell me why do we need an abstract interface -SOAP_RESPONSE_ABS?
I created a message type MT_SOAP_RESPONSE and used that in the BPM sync Send step as the Input message.
b) Soap_Abs_Synch - I am using the message types MT_File_request and MT_SOAP_Response. From your response, it looks like you are suggesting to use abstract interfaces as the output message and input message, am I correct? Could you please tell me whether this has advantages over using the message types MT_file_request and MT_soap_reponse? I haven't used abstract interfaces as input and output messages before; in fact, I wasn't even aware that it could be done. Please confirm that it is possible. Thank you for increasing my knowledge!!!
These are the Objects I created
Message Type :
a) MT_Filerequest
b) MT_SOAPresponse
Message Interface
a)MI_Filerequest_out_async -
Output message
Mess. type MT_filerequst
b)MI_filerequest_async_abs-
Mess. type MT_filerequst
Used
i) used for BPM receiver step- container definition
ii) receiver determination
c)MI_webservice_sync_in - This is created from External definition
d)MI_webservice_sync_abs-
Input message - MT_soapresponse
Output message- MT_fierequest
Used:
i) used for BPM sync send step ,
ii)Interface mapping between MI_Webservice_sync_in and MI_webservice_sync_abs
iii)Container element-SOAP_response
e) MI_RFC_async_out
Input message
RFC_Request(This is imported from RFC definition)
g) MI_RFC_async_abs
Input message
RFC_Request(This is imported from RFC definition)
Mapping
Message mapping
i)Filerequest_TO_SOAPrequest
Source: MT_Filerequest
Target: SOAPrequest(Got from External definition)
ii)SOAPresponse_TO_BPM_response
Source : SOAPresponse(got from External definition)
Target : MT_SOAPresponse
Interface Mapping
i)IM_BPM_TO_SOAP
Source Interface : MI_webservice_sync_abs
Target :MI_webservice_sync_in
uses following message mapping
Request : filerequest_TO_SOAPrequest
Response:SOAPresponse_TO_BPM_response
BPM container element
i)Receiver_container TYPE MI_Filerequest_out_async
ii)SOAP Responsecontainer TYPE MI_webservice_sync_abs
iii)RFC_Request_container TYPE MI_RFC_async_abs
BPM flow
Receive---->Send Synch-->Transformation----->Send Asynch
Receive -
> receiver_container
Send Synch -
> receiver_container(Request Message), Soap_response_container(Response Message)
Transformation -
> Source(Soap_response_container), Target(RFC_request_container)
Send Asynch -
> RFC_request_container
Thanks for your help!!! -
Unable to capture in Async Hot Log
Hi,
I am unable to get changes captured in the async HotLog mode of operation. I have verified that the DB is in archivelog mode, and I am able to create change tables, but changes in the source table are not getting propagated. Is it because the logs are not being picked up from the proper place?
My change_sources view has empty values in all fields except source_type, source_description and created.
I would appreciate any pointers on what may be missing, or how to go about identifying the problem.
Thanks,
RK
Thank you, Patrick. You have helped me before, so thanks again.
I do not have anything else connected to the FireWire ports. I have an external hard drive, but I have tried to capture with and without the hard drive attached. I did recently back up to another hard drive using Carbon Copy Cloner, if anyone thinks this may be the issue.
I can't recall the exact error message in FCP (I have deleted the program trying to fix the issue, and focused on the capture problem in QuickTime, thinking that if QuickTime and iMovie both fail to capture, the problem is not with FCP but either with the OS or the back end of QT). I can re-install FCP to give you the exact FCP error message if you would like, i.e., if the exact error message is necessary.
Thanks again for your response. -
Hi ,
We are planning to upgrade our 10g R2 CRS to 11g R2 CRS on an HP-UX server. While running the cluvfy tool we get the error below. Please help me fix this issue.
Checking settings of device file "/dev/async"
Node Name Available Comment
erpdev04 yes failed (incorrect setting for minor number.)
erpdev03 yes failed (incorrect setting for minor number.)
Result: Check for settings of device file "/dev/async" failed.
Pre-check for cluster services setup was unsuccessful on all the nodes.
oracle.oracle.erpdev03.fwprod_app1> (/app/oracle/oracle_source/patches/bin)# /sbin/mknod /dev/async c 101 0x104
-
Hello. I would like to write an async TCP client and server. I wrote this code, but I have a problem: when I call the disconnect method on the client or the stop method on the server, I can't identify that the client or the server is no longer connected.
I thought I would get an exception if the client or the server is not available, but this is not happening.
private async void Process()
{
    try {
        while (true) {
            var data = await this.Receive();
            this.NewMessage.SafeInvoke(Encoding.ASCII.GetString(data));
        }
    } catch (Exception exception) {
    }
}
How can I determine that the client or the server is no longer available?
Server
public class Server
{
    private readonly Dictionary<IPEndPoint, TcpClient> clients = new Dictionary<IPEndPoint, TcpClient>();
    private readonly List<CancellationTokenSource> cancellationTokens = new List<CancellationTokenSource>();
    private TcpListener tcpListener;
    private bool isStarted;
    public event Action<string> NewMessage;

    public async Task Start(int port)
    {
        this.tcpListener = TcpListener.Create(port);
        this.tcpListener.Start();
        this.isStarted = true;
        while (this.isStarted) {
            var tcpClient = await this.tcpListener.AcceptTcpClientAsync();
            var cts = new CancellationTokenSource();
            this.cancellationTokens.Add(cts);
            await Task.Factory.StartNew(() => this.Process(cts.Token, tcpClient), cts.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
        }
    }

    public void Stop()
    {
        this.isStarted = false;
        foreach (var cancellationTokenSource in this.cancellationTokens)
            cancellationTokenSource.Cancel();
        foreach (var tcpClient in this.clients.Values) {
            tcpClient.GetStream().Close();
            tcpClient.Close();
        }
        this.clients.Clear();
    }

    public async Task SendMessage(string message, IPEndPoint endPoint)
    {
        try {
            var tcpClient = this.clients[endPoint];
            await this.Send(tcpClient.GetStream(), Encoding.ASCII.GetBytes(message));
        } catch (Exception exception) {
        }
    }

    private async Task Process(CancellationToken cancellationToken, TcpClient tcpClient)
    {
        try {
            var stream = tcpClient.GetStream();
            this.clients.Add((IPEndPoint)tcpClient.Client.RemoteEndPoint, tcpClient);
            while (!cancellationToken.IsCancellationRequested) {
                var data = await this.Receive(stream);
                this.NewMessage.SafeInvoke(Encoding.ASCII.GetString(data));
            }
        } catch (Exception exception) {
            // An IOException/ObjectDisposedException here usually means the client dropped.
        }
    }

    private async Task Send(NetworkStream stream, byte[] buf)
    {
        await stream.WriteAsync(BitConverter.GetBytes(buf.Length), 0, 4);
        await stream.WriteAsync(buf, 0, buf.Length);
    }

    private async Task<byte[]> Receive(NetworkStream stream)
    {
        var lengthBytes = new byte[4];
        // A zero-byte read means the peer closed the connection gracefully;
        // without this check ReadAsync just returns 0 and nothing is thrown.
        if (await stream.ReadAsync(lengthBytes, 0, 4) == 0)
            throw new IOException("Connection closed by remote host."); // needs using System.IO;
        var length = BitConverter.ToInt32(lengthBytes, 0);
        var buf = new byte[length];
        await stream.ReadAsync(buf, 0, buf.Length);
        return buf;
    }
}
Client
public class Client
{
    private TcpClient tcpClient;
    private NetworkStream stream;

    public event Action<string> NewMessage;

    public async void Connect(string host, int port)
    {
        try
        {
            this.tcpClient = new TcpClient();
            await this.tcpClient.ConnectAsync(host, port);
            this.stream = this.tcpClient.GetStream();
            this.Process();
        }
        catch (Exception exception)
        {
        }
    }

    public void Disconnect()
    {
        try
        {
            this.stream.Close();
            this.tcpClient.Close();
        }
        catch (Exception exception)
        {
        }
    }

    public async void SendMessage(string message)
    {
        try
        {
            await this.Send(Encoding.ASCII.GetBytes(message));
        }
        catch (Exception exception)
        {
        }
    }

    private async void Process()
    {
        try
        {
            while (true)
            {
                var data = await this.Receive();
                this.NewMessage.SafeInvoke(Encoding.ASCII.GetString(data));
            }
        }
        catch (Exception exception)
        {
        }
    }

    private async Task Send(byte[] buf)
    {
        await this.stream.WriteAsync(BitConverter.GetBytes(buf.Length), 0, 4);
        await this.stream.WriteAsync(buf, 0, buf.Length);
    }

    private async Task<byte[]> Receive()
    {
        var lengthBytes = new byte[4];
        await this.stream.ReadAsync(lengthBytes, 0, 4);
        var length = BitConverter.ToInt32(lengthBytes, 0);
        var buf = new byte[length];
        await this.stream.ReadAsync(buf, 0, buf.Length);
        return buf;
    }
}

Hi,
Have you debugged these two applications? Does execution reach the catch block when you close the client or the server?
In my test, an exception is thrown when the client or the server is closed. Just log the exception message in the catch block and you'll see it:
private async void Process()
{
    try
    {
        while (true)
        {
            var data = await this.Receive();
            this.NewMessage.Invoke(Encoding.ASCII.GetString(data));
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.Message);
    }
}
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
By the way, I don't know what the SafeInvoke method is; it may be an extension method, right? I used Invoke instead in my test.
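Beyond catching the exception, there is a way to detect a graceful close without relying on exceptions: ReadAsync returns 0 when the remote side has shut down the connection, and it may also return fewer bytes than requested, which the Receive methods above do not handle. A minimal sketch of a safer receive loop, assuming the same 4-byte length prefix (the Framing class and ReadExactAsync helper are illustrative names, not part of the original code; it takes a Stream so it works with NetworkStream or anything else):

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public static class Framing
{
    // Reads exactly count bytes into buf.
    // Returns false if the remote side closed the connection gracefully.
    public static async Task<bool> ReadExactAsync(Stream stream, byte[] buf, int count)
    {
        var offset = 0;
        while (offset < count)
        {
            // ReadAsync may return fewer bytes than requested;
            // 0 means the peer has closed the connection.
            var read = await stream.ReadAsync(buf, offset, count - offset);
            if (read == 0)
                return false;
            offset += read;
        }
        return true;
    }

    // Returns the next length-prefixed message, or null on graceful close.
    public static async Task<byte[]> ReceiveAsync(Stream stream)
    {
        var lengthBytes = new byte[4];
        if (!await ReadExactAsync(stream, lengthBytes, 4))
            return null;
        var buf = new byte[BitConverter.ToInt32(lengthBytes, 0)];
        if (!await ReadExactAsync(stream, buf, buf.Length))
            return null;
        return buf;
    }
}
```

With this, the Process loop can exit cleanly when ReceiveAsync returns null (graceful close) and fall into the catch block only for abrupt closes, which surface as IOException or SocketException.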
Interface Problem Sync/Async (Open Bridge)
Hello everyone,
we migrated from PI 7.0 to PI 7.01 EHP1 SP7.
After this migration, the synchronous interface with BPM (open bridge) began to give trouble.
The messages sit in SMQ2 for a long time, and after a while a timeout error occurs.
In transaction SXMS_SAMON, I noticed that some processes are in timeout.
It seems the problem is that the messages enter SMQ2 and, for some reason, PI cannot process them.
I also changed the value of the CHECK_FOR_MAX_SYNC_CALLS parameter, but it did not help.
Does anyone have any idea how we can solve this problem ?
Thank you all.
Marlon

The interface mapping is between SOAP Request <--> your XML message Request/Response.
Only one interface mapping is required.
Message interfaces required:
1. SOAP Synchronous
2. XML Request Async
3. XML Response Async
4. XML Message Request/Response Sync
Gaurav Jain
Reward Points if answer is helpful -
Questions on async and sync's unit tests
Hi All
I wrote an API project that has both sync and async methods.
I also wrote unit tests for them.
What I want to ask is: what is the better way to write them?
public abstract class ApiProxyBase
{
    protected async Task<T> GetDataAsync<T>(IRestRequest restRequest, bool ignoreSslCertificateValidation = true)
        where T : ApiResponse, new();

    protected T GetData<T>(IRestRequest restRequest, bool ignoreSslCertificateValidation = true)
        where T : ApiResponse, new();
}

public class GameServerApiProxy : ApiProxyBase, IGameServerApiProxy
{
    async Task<GetGameRedirectResponse> GetGameRedirectAsync(GetGameRedirectRequest request);

    public GetGameRedirectResponse GetGameRedirect(GetGameRedirectRequest request);
}

public class GameServerApiProxyTest : TestsBase
{
    [Test]
    public async void Can_Get_Game_RedirectUrl_Async()
    {
        await LogIn();
        var response = await _gameServerApiProxy.GetGameRedirectAsync(new GetGameRedirectRequest());
        response.Url.Should().NotBeNull();
        LogOut();
    }

    [Test]
    public void Can_Get_Game_RedirectUrl()
    {
        LogInSync();
        var response = _gameServerApiProxy.GetGameRedirect(new GetGameRedirectRequest());
        response.Url.Should().NotBeNull();
        LogOut();
    }
}

I have an
article on asynchronous unit testing. I'd say your async unit test method should be async Task and not async void. Other than that, it looks good. In particular, since your interface has both synchronous and asynchronous APIs, you should have tests for
both of them.
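To illustrate why async Task matters here, a failure inside an async void method never reaches the caller, while an awaited async Task propagates it. A small self-contained sketch, without NUnit and with made-up method names for the demo:

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncVoidDemo
{
    // With async Task, the caller (or a test runner) can await the
    // operation and observe both completion and any exception.
    public static async Task FailingOperationAsync()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    public static async Task RunAsync()
    {
        try
        {
            await FailingOperationAsync();
        }
        catch (InvalidOperationException ex)
        {
            // The exception is caught here because the Task was awaited.
            Console.WriteLine("Caught: " + ex.Message);
        }
        // Had FailingOperationAsync been declared "async void", there would
        // be no Task to await: the exception would be re-thrown on the
        // SynchronizationContext and a test runner could not observe it,
        // nor could it know when the method actually finished.
    }
}
```

This is the reason test frameworks like NUnit and xUnit only support awaiting async Task test methods: they need a Task handle to wait on and to harvest failures from.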
Home page: http://stephencleary.com/ Programming blog: http://blog.stephencleary.com/ -
How can I monitor async requests in OSB
Hello everybody,
I have to set up an async-with-response call to my legacy application. But since OSB does not yet support async calls with response, I'll use the approach described here: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jpdtransport/transport.html#wp1102152, which tells me to set up one proxy that makes the request to the legacy application and another proxy to handle the callback (my application will call the latter when it finishes processing).
But doing this, I'll have two distinct transactions, two distinct proxies, and no natural correlation between them. I need some correlation that allows me to create an SLA alert when the request takes more than X minutes. I already thought of sending the transaction id ($messageID) to my legacy application and having it returned to the response proxy, but that would still give me no link to the information (start time, etc.) of the request proxy.
In short, I need some way of measuring the execution time (for SLA alerts) of my service. How can this be done?
Thanks a lot guys.

What would be the transport of communication with the legacy system (JMS)?
1) OSB is efficient at stateless point-to-point invocation and message enrichment, but this problem is an orchestration use case, which is better addressed by BPEL.
2) SLA alerts in OSB are designed to work at the level of a single proxy. Using the OSB SLA alert framework for your use case is not possible.
3) OSB also does not expose metrics (http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html) that would allow you to do this type of correlation from an external Java program.
My two cents
Manoj