DB Polling one process
Hello Everybody,
I have a database adapter exposed as a service to my BPEL process, and the BPEL process keeps polling it. My polling options are configured as follows:
Polling frequency: 60 seconds
Number of database rows per XML document: 1
Database rows per transaction: 1
Now, when there are n records in the database, n BPEL instances get created in the EM console. Is there any way to tell the DB adapter that, when there are n records in the database, the second BPEL instance should only be created after the first record has been processed completely (column JOB:Yes --> JOB:No), then the third, and so on, one after another?
Please help me in this regard.
Thanks a lot!
Do synchronous polling.
Please follow this link:
DB Polling Options
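For reference, this kind of behavior is typically controlled by a handful of activation-spec properties on the DbAdapter. The sketch below shows roughly what they look like; the property names and values are written from memory as an illustration, not taken from the original post, so verify them against the DB adapter documentation for your release:

```xml
<!-- Hypothetical DbAdapter activation-spec sketch; property names are
     assumptions to be checked against the documentation. -->
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
  <!-- Logical-delete polling: flip the JOB column once a row is processed
       (JOB:Yes to JOB:NO, as described in the question) -->
  <property name="PollingStrategyName" value="LogicalDeletePollingStrategy"/>
  <property name="MarkReadColumn"      value="JOB"/>
  <property name="MarkReadValue"       value="NO"/>
  <property name="MarkUnreadValue"     value="YES"/>
  <property name="PollingInterval"     value="60"/>  <!-- seconds -->
  <property name="MaxRaiseSize"        value="1"/>   <!-- rows per XML document -->
  <property name="MaxTransactionSize"  value="1"/>   <!-- rows per transaction -->
</activation-spec>
```

With synchronous (distributed) polling, the adapter marks each row within the same transaction that processes it, which is what prevents all n instances from being raised at once; the linked "DB Polling Options" page covers the trade-offs.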
Thanks,
N
Similar Messages
-
Multiple BPEL processes polling one inbound directory?
Hi All-
Somewhere it mentioned that :
"Multiple BPEL processes or multiple file adapters polling one inbound directory are not supported. Ensure that all are polling their own unique directory."
Is this issue still present in 10.1.3.3.0?
Please advise.
Regards,
Sreejit
Hi,
I have one directory say c:/files and I have two BPEL process accessing this folder say BPELA and BPELB.
BPELA looks for files matching the pattern file_A*.xml.
BPELB looks for files matching the pattern file_B*.xml.
Is the statement *"Multiple file adapters polling one inbound directory are not supported. Ensure that each is polling its own unique directory."* still a problem in this case?
Please advise.
Regards,
Sreejit -
Performance of one process is slow (statspack report is attached)
Hi,
My version is 9.2.0.7 (HP-UX Itanium)
We have recently migrated the DB from Windows 2003 to Unix (HP-UX Itanium 11.23).
We have one process that used to take 15 minutes before the migration; now it takes 25 minutes to complete. I did not change anything at the DB level: same init.ora parameters, and table and index statistics are up to date.
Please guide me on what might be wrong at the instance level. I am omitting the SQL portion of the statspack report for security reasons.
This statspack report covers the interval from before the process started to after it completed.
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
UAT 496948094 UAT 1 9.2.0.7.0 NO dbt
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 2 15-Jul-09 10:59:05 11 2.7
End Snap: 3 15-Jul-09 12:42:18 17 4.4
Elapsed: 103.22 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 400M Std Block Size: 8K
Shared Pool Size: 160M Log Buffer: 512K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 44,830.27 435,162.76
Logical reads: 15,223.37 147,771.73
Block changes: 198.12 1,923.15
Physical reads: 47.02 456.37
Physical writes: 7.05 68.45
User calls: 50.01 485.42
Parses: 25.99 252.26
Hard parses: 0.24 2.38
Sorts: 3.40 33.00
Logons: 0.02 0.16
Executes: 34.64 336.27
Transactions: 0.10
% Blocks changed per Read: 1.30 Recursive Call %: 27.05
Rollback per transaction %: 33.70 Rows per Sort: 1532.57
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.69 In-memory Sort %: 100.00
Library Hit %: 99.38 Soft Parse %: 99.06
Execute to Parse %: 24.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 48.39 % Non-Parse CPU: 99.53
Shared Pool Statistics Begin End
Memory Usage %: 94.56 94.19
% SQL with executions>1: 74.01 62.51
% Memory for SQL w/exec>1: 52.89 54.29
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 895 48.10
db file sequential read 195,597 443 23.83
log file parallel write 1,706 260 13.97
log buffer space 415 122 6.54
control file parallel write 2,074 66 3.53
Wait Events for DB: UAT Instance: UAT Snaps: 2 -3
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 195,597 0 443 2 306.6
log file parallel write 1,706 0 260 152 2.7
log buffer space 415 0 122 293 0.7
control file parallel write 2,074 0 66 32 3.3
log file sync 678 4 51 75 1.1
db file scattered read 6,608 0 21 3 10.4
log file switch completion 9 0 2 208 0.0
SQL*Net more data to client 24,072 0 1 0 37.7
log file single write 18 0 0 19 0.0
db file parallel read 9 0 0 13 0.0
control file sequential read 928 0 0 0 1.5
SQL*Net break/reset to clien 292 0 0 0 0.5
latch free 25 2 0 3 0.0
log file sequential read 18 0 0 2 0.0
LGWR wait for redo copy 37 0 0 0 0.1
direct path read 45 0 0 0 0.1
direct path write 45 0 0 0 0.1
SQL*Net message from client 308,861 0 30,960 100 484.1
SQL*Net more data from clien 26,217 0 3 0 41.1
SQL*Net message to client 308,867 0 0 0 484.1
Background Wait Events for DB: UAT Instance: UAT Snaps: 2 -3
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 1,706 0 260 152 2.7
control file parallel write 2,074 0 66 32 3.3
log buffer space 10 0 1 149 0.0
db file scattered read 90 0 1 7 0.1
db file sequential read 104 0 1 5 0.2
log file single write 18 0 0 19 0.0
control file sequential read 876 0 0 0 1.4
log file sequential read 18 0 0 2 0.0
latch free 4 2 0 9 0.0
LGWR wait for redo copy 37 0 0 0 0.1
direct path read 45 0 0 0 0.1
direct path write 45 0 0 0 0.1
rdbms ipc message 7,222 5,888 21,416 2965 11.3
pmon timer 2,079 2,079 6,044 2907 3.3
smon timer 21 21 6,002 ###### 0.0
Instance Activity Stats for DB: UAT Instance: UAT Snaps: 2 -3
Statistic Total per Second per Trans
CPU used by this session 89,478 14.5 140.3
CPU used when call started 89,478 14.5 140.3
CR blocks created 148 0.0 0.2
DBWR buffers scanned 158,122 25.5 247.8
DBWR checkpoint buffers written 11,909 1.9 18.7
DBWR checkpoints 3 0.0 0.0
DBWR free buffers found 136,228 22.0 213.5
DBWR lru scans 53 0.0 0.1
DBWR make free requests 53 0.0 0.1
DBWR summed scan depth 158,122 25.5 247.8
DBWR transaction table writes 43 0.0 0.1
DBWR undo block writes 19,283 3.1 30.2
SQL*Net roundtrips to/from client 308,602 49.8 483.7
active txn count during cleanout 6,812 1.1 10.7
background checkpoints completed 3 0.0 0.0
background checkpoints started 3 0.0 0.0
background timeouts 7,204 1.2 11.3
branch node splits 4 0.0 0.0
buffer is not pinned count 35,587,689 5,746.4 55,780.1
buffer is pinned count 202,539,737 32,704.6 317,460.4
bytes received via SQL*Net from c 106,536,068 17,202.7 166,984.4
bytes sent via SQL*Net to client 98,286,059 15,870.5 154,053.4
calls to get snapshot scn: kcmgss 346,517 56.0 543.1
calls to kcmgas 42,563 6.9 66.7
calls to kcmgcs 7,735 1.3 12.1
change write time 12,666 2.1 19.9
cleanout - number of ktugct calls 9,698 1.6 15.2
cleanouts and rollbacks - consist 0 0.0 0.0
cleanouts only - consistent read 1,161 0.2 1.8
cluster key scan block gets 15,789 2.6 24.8
cluster key scans 6,534 1.1 10.2
commit cleanout failures: block l 199 0.0 0.3
commit cleanout failures: buffer 69 0.0 0.1
commit cleanout failures: callbac 0 0.0 0.0
commit cleanouts 40,688 6.6 63.8
commit cleanouts successfully com 40,420 6.5 63.4
commit txn count during cleanout 4,652 0.8 7.3
consistent changes 150 0.0 0.2
consistent gets 93,071,913 15,028.6 145,880.7
consistent gets - examination 1,487,526 240.2 2,331.6
cursor authentications 322 0.1 0.5
data blocks consistent reads - un 51 0.0 0.1
db block changes 1,226,967 198.1 1,923.2
db block gets 1,206,448 194.8 1,891.0
deferred (CURRENT) block cleanout 13,478 2.2 21.1
dirty buffers inspected 9,876 1.6 15.5
enqueue conversions 41 0.0 0.1
enqueue releases 12,783 2.1 20.0
enqueue requests 12,785 2.1 20.0
enqueue waits 0 0.0 0.0
execute count 214,538 34.6 336.3
free buffer inspected 9,879 1.6 15.5
free buffer requested 349,615 56.5 548.0
hot buffers moved to head of LRU 141,298 22.8 221.5
immediate (CR) block cleanout app 1,161 0.2 1.8
immediate (CURRENT) block cleanou 23,894 3.9 37.5
Instance Activity Stats for DB: UAT Instance: UAT Snaps: 2 -3
Statistic Total per Second per Trans
index fast full scans (full) 19 0.0 0.0
index fetch by key 671,512 108.4 1,052.5
index scans kdiixs1 56,328,309 9,095.5 88,288.9
leaf node 90-10 splits 16 0.0 0.0
leaf node splits 2,187 0.4 3.4
logons cumulative 105 0.0 0.2
messages received 1,653 0.3 2.6
messages sent 1,653 0.3 2.6
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 35,118,594 5,670.7 55,044.8
opened cursors cumulative 4,036 0.7 6.3
parse count (failures) 43 0.0 0.1
parse count (hard) 1,516 0.2 2.4
parse count (total) 160,939 26.0 252.3
parse time cpu 421 0.1 0.7
parse time elapsed 870 0.1 1.4
physical reads 291,165 47.0 456.4
physical reads direct 45 0.0 0.1
physical writes 43,672 7.1 68.5
physical writes direct 45 0.0 0.1
physical writes non checkpoint 41,379 6.7 64.9
pinned buffers inspected 3 0.0 0.0
prefetched blocks 88,896 14.4 139.3
prefetched blocks aged out before 22 0.0 0.0
process last non-idle time 75,777 12.2 118.8
recursive calls 114,829 18.5 180.0
recursive cpu usage 11,704 1.9 18.3
redo blocks written 275,521 44.5 431.9
redo buffer allocation retries 419 0.1 0.7
redo entries 623,735 100.7 977.6
redo log space requests 10 0.0 0.0
redo log space wait time 192 0.0 0.3
redo ordering marks 3 0.0 0.0
redo size 277,633,840 44,830.3 435,162.8
redo synch time 5,185 0.8 8.1
redo synch writes 675 0.1 1.1
redo wastage 818,952 132.2 1,283.6
redo write time 26,562 4.3 41.6
redo writes 1,705 0.3 2.7
rollback changes - undo records a 395 0.1 0.6
rollbacks only - consistent read 49 0.0 0.1
rows fetched via callback 553,910 89.4 868.2
session connect time 74,797 12.1 117.2
session logical reads 94,278,361 15,223.4 147,771.7
session pga memory 2,243,808 362.3 3,516.9
session pga memory max 1,790,880 289.2 2,807.0
session uga memory 2,096,104 338.5 3,285.4
session uga memory max 32,637,856 5,270.1 51,156.5
shared hash latch upgrades - no w 56,430,882 9,112.0 88,449.7
sorts (memory) 21,055 3.4 33.0
sorts (rows) 32,268,330 5,210.5 50,577.3
summed dirty queue length 53,238 8.6 83.5
switch current to new buffer 37,071 6.0 58.1
table fetch by rowid 90,385,043 14,594.7 141,669.4
table fetch continued row 104,336 16.9 163.5
table scan blocks gotten 376,181 60.7 589.6
Instance Activity Stats for DB: UAT Instance: UAT Snaps: 2 -3
Statistic Total per Second per Trans
table scan rows gotten 5,103,693 824.1 7,999.5
table scans (long tables) 97 0.0 0.2
table scans (short tables) 53,485 8.6 83.8
transaction rollbacks 247 0.0 0.4
user calls 309,698 50.0 485.4
user commits 423 0.1 0.7
user rollbacks 215 0.0 0.3
workarea executions - opt 37,753 6.1 59.2
write clones created in foregroun 718 0.1 1.1
Tablespace IO Stats for DB: UAT Instance: UAT Snaps: 2 -3
->ordered by IOs (Reads + Writes) desc
Tablespace
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
USERS
200,144 32 2.3 1.4 22,576 4 0 0.0
UNDOTBS1
38 0 9.5 1.0 19,348 3 0 0.0
SYSTEM
2,016 0 4.7 1.5 505 0 0 0.0
TOOLS
14 0 9.3 1.3 1,237 0 0 0.0
IMAGES
3 0 6.7 1.0 3 0 0 0.0
INDX
3 0 6.7 1.0 3 0 0 0.0
Buffer Pool Statistics for DB: UAT Instance: UAT Snaps: 2 -3
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
D 49,625 99.7 94,278,286 291,074 43,627 0 0 0
Instance Recovery Stats for DB: UAT Instance: UAT Snaps: 2 -3
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
B 38 9 2311 13283 13021 92160 13021
E 38 7 899 4041 3767 92160 3767
Buffer Pool Advisory for DB: UAT Instance: UAT End Snap: 3
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate (default block size first)
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
D 32 .1 3,970 2.94 2,922,389
D 64 .2 7,940 2.54 2,524,222
D 96 .2 11,910 2.38 2,365,570
D 128 .3 15,880 2.27 2,262,338
D 160 .4 19,850 2.19 2,183,287
D 192 .5 23,820 1.97 1,962,758
D 224 .6 27,790 1.30 1,293,415
D 256 .6 31,760 1.21 1,203,737
D 288 .7 35,730 1.10 1,096,115
D 320 .8 39,700 1.06 1,056,077
D 352 .9 43,670 1.04 1,036,708
D 384 1.0 47,640 1.02 1,012,912
D 400 1.0 49,625 1.00 995,426
D 416 1.0 51,610 0.99 982,641
D 448 1.1 55,580 0.97 966,874
D 480 1.2 59,550 0.89 890,749
D 512 1.3 63,520 0.88 879,062
D 544 1.4 67,490 0.87 864,539
D 576 1.4 71,460 0.80 800,284
D 608 1.5 75,430 0.76 756,222
D 640 1.6 79,400 0.75 749,473
PGA Aggr Target Stats for DB: UAT Instance: UAT Snaps: 2 -3
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
100.0 851 0
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
B 320 282 12.6 0.0 .0 .0 .0 16,384
E 320 281 15.3 0.0 .0 .0 .0 16,384
PGA Aggr Target Histogram for DB: UAT Instance: UAT Snaps: 2 -3
-> Opt Executions are purely in-memory operations
Low High
Opt Opt Total Execs Opt Execs 1-Pass Execs M-Pass Execs
8K 16K 37,010 37,010 0 0
16K 32K 70 70 0 0
32K 64K 11 11 0 0
64K 128K 34 34 0 0
128K 256K 9 9 0 0
256K 512K 54 54 0 0
512K 1024K 536 536 0 0
1M 2M 7 7 0 0
2M 4M 24 24 0 0
PGA Memory Advisory for DB: UAT Instance: UAT End Snap: 3
-> When using Auto Memory Mgmt, mainly choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
40 0.1 3,269.7 98.2 97.0 0
80 0.3 3,269.7 9.6 100.0 0
160 0.5 3,269.7 9.6 100.0 0
240 0.8 3,269.7 0.0 100.0 0
320 1.0 3,269.7 0.0 100.0 0
384 1.2 3,269.7 0.0 100.0 0
448 1.4 3,269.7 0.0 100.0 0
512 1.6 3,269.7 0.0 100.0 0
576 1.8 3,269.7 0.0 100.0 0
640 2.0 3,269.7 0.0 100.0 0
960 3.0 3,269.7 0.0 100.0 0
1,280 4.0 3,269.7 0.0 100.0 0
1,920 6.0 3,269.7 0.0 100.0 0
2,560 8.0 3,269.7 0.0 100.0 0
-------------------------------------------------------------
Rollback Segment Stats for DB: UAT Instance: UAT Snaps: 2 -3
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
management, as RBS may be dynamically created and dropped as needed
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
0 22.0 0.00 0 0 0 0
1 650.0 0.00 1,868,300 0 0 0
2 1,987.0 0.00 4,613,768 9 0 7
3 6,070.0 0.00 24,237,494 37 0 36
4 223.0 0.00 418,942 3 0 1
5 621.0 0.00 1,749,086 11 0 11
6 8,313.0 0.00 48,389,590 54 0 52
7 7,248.0 0.00 14,477,004 19 0 17
8 1,883.0 0.00 12,332,646 14 0 12
9 2,729.0 0.00 17,820,450 19 0 19
10 1,009.0 0.00 2,857,150 5 0 3
Rollback Segment Storage for DB: UAT Instance: UAT Snaps: 2 -3
->Opt Size should be larger than Avg Active
RBS No Segment Size Avg Active Opt Size Maximum Size
0 450,560 0 450,560
1 8,511,488 6,553 8,511,488
2 8,511,488 4,592,363 18,997,248
3 29,351,936 14,755,792 29,483,008
4 2,220,032 105,188 2,220,032
5 3,137,536 3,416,104 54,648,832
6 55,697,408 21,595,184 55,697,408
7 26,337,280 9,221,107 26,337,280
8 13,754,368 5,142,374 13,754,368
9 22,011,904 10,220,526 22,011,904
10 4,317,184 3,810,892 13,754,368
Undo Segment Summary for DB: UAT Instance: UAT Snaps: 2 -3
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Undo Num Max Qry Max Tx Snapshot Out of uS/uR/uU/
TS# Blocks Trans Len (s) Concurcy Too Old Space eS/eR/eU
1 19,305 109,683 648 3 0 0 0/0/0/0/0/0
Undo Segment Stats for DB: UAT Instance: UAT Snaps: 2 -3
-> ordered by Time desc
Undo Num Max Qry Max Tx Snap Out of uS/uR/uU/
End Time Blocks Trans Len (s) Concy Too Old Space eS/eR/eU
15-Jul 12:32 10 13,451 3 2 0 0 0/0/0/0/0/0
15-Jul 12:22 87 13,384 6 1 0 0 0/0/0/0/0/0
15-Jul 12:12 3,746 13,229 91 1 0 0 0/0/0/0/0/0
15-Jul 12:02 8,949 13,127 648 1 0 0 0/0/0/0/0/0
15-Jul 11:52 1,496 10,476 24 1 0 0 0/0/0/0/0/0
15-Jul 11:42 3,895 10,441 6 1 0 0 0/0/0/0/0/0
15-Jul 11:32 531 9,155 1 3 0 0 0/0/0/0/0/0
15-Jul 11:22 0 8,837 3 0 0 0 0/0/0/0/0/0
15-Jul 11:12 4 8,817 3 1 0 0 0/0/0/0/0/0
15-Jul 11:02 587 8,766 2 2 0 0 0/0/0/0/0/0
Latch Activity for DB: UAT Instance: UAT Snaps: 2 -3
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
Consistent RBA 1,708 0.0 0 0
FIB s.o chain latch 40 0.0 0 0
FOB s.o list latch 467 0.0 0 0
SQL memory manager latch 1 0.0 0 2,038 0.0
SQL memory manager worka 174,015 0.0 0 0
active checkpoint queue 2,081 0.0 0 0
archive control 1 0.0 0 0
cache buffer handles 162,618 0.0 0 0
cache buffers chains 190,111,507 0.0 0.2 0 426,778 0.0
cache buffers lru chain 425,142 0.0 0.2 0 65,895 0.0
channel handle pool latc 202 0.0 0 0
channel operations paren 4,405 0.0 0 0
checkpoint queue latch 228,932 0.0 0.0 0 41,321 0.0
child cursor hash table 18,320 0.0 0 0
commit callback allocati 4 0.0 0 0
dml lock allocation 2,482 0.0 0 0
dummy allocation 204 0.0 0 0
enqueue hash chains 25,615 0.0 0 0
enqueues 15,416 0.0 0 0
event group latch 104 0.0 0 0
hash table column usage 410 0.0 0 191,319 0.0
internal temp table obje 1,048 0.0 0 0
job_queue_processes para 103 0.0 0 0
ktm global data 21 0.0 0 0
lgwr LWN SCN 3,215 0.0 0.0 0 0
library cache 1,657,451 0.0 0.0 0 1,479 0.1
library cache load lock 1,126 0.0 0 0
library cache pin 1,112,420 0.0 0.0 0 0
library cache pin alloca 670,952 0.0 0.0 0 0
list of block allocation 2,748 0.0 0 0
loader state object free 36 0.0 0 0
longop free list parent 1 0.0 0 1 0.0
messages 19,427 0.0 0 0
mostly latch-free SCN 3,229 0.3 0.0 0 0
multiblock read objects 15,022 0.0 0 0
ncodef allocation latch 99 0.0 0 0
object stats modificatio 28 0.0 0 0
post/wait queue 1,810 0.0 0 1,102 0.0
process allocation 202 0.0 0 104 0.0
process group creation 202 0.0 0 0
redo allocation 629,175 0.0 0.0 0 0
redo copy 0 0 623,865 0.0
redo writing 11,487 0.0 0 0
row cache enqueue latch 197,626 0.0 0 0
row cache objects 201,089 0.0 0 642 0.0
sequence cache 348 0.0 0 0
session allocation 3,634 0.1 0.0 0 0
session idle bit 621,031 0.0 0 0
session switching 99 0.0 0 0
session timer 2,079 0.0 0 0
Latch Activity for DB: UAT Instance: UAT Snaps: 2 -3
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
shared pool 786,331 0.0 0.1 0 0
sim partition latch 0 0 193 0.0
simulator hash latch 5,885,552 0.0 0 0
simulator lru latch 12,981 0.0 0 66,129 0.0
sort extent pool 120 0.0 0 0
transaction allocation 249 0.0 0 0
transaction branch alloc 99 0.0 0 0
undo global data 27,867 0.0 0 0
user lock 396 0.0 0 0
Latch Sleep breakdown for DB: UAT Instance: UAT Snaps: 2 -3
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
cache buffers lru chain 425,142 82 15 67/15/0/0/0
library cache 1,657,451 76 3 73/3/0/0/0
shared pool 786,331 37 2 35/2/0/0/0
redo allocation 629,175 31 1 30/1/0/0/0
cache buffers chains 190,111,507 21 4 19/0/2/0/0
Latch Miss Sources for DB: UAT Instance: UAT Snaps: 2 -3
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
cache buffers chains kcbget: pin buffer 0 2 0
cache buffers chains kcbgtcr: fast path 0 2 0
cache buffers lru chain kcbbiop: lru scan 2 12 0
cache buffers lru chain kcbbwlru 0 2 0
cache buffers lru chain kcbbxsv: move to being wri 0 1 0
library cache kgllkdl: child: cleanup 0 1 0
library cache kglpin: child: heap proces 0 1 0
library cache kglpndl: child: before pro 0 1 0
redo allocation kcrfwi: more space 0 1 0
shared pool kghalo 0 2 0
------------------------------------------------------------- -
How to save report output in a PDF file and also show preview in one processing cycle
Hi every body,
We are re-developing an application from COBOL to Oracle, using Reports 6i (6.0.8.11.3).
The requirement is that whenever a user processes a report, it should be saved to disk, say in PDF format, and then shown in the Reports Runtime Previewer.
So far I have found that only one of these can be done in one processing cycle: if we set DESTYPE to FILE, the report is only saved to disk; if we set DESTYPE to PREVIEW, it can be previewed and optionally printed, but not saved.
I want both saving to disk and previewing to be possible.
I have explored the .DST file option, but it only provides for FILE, PRINTER, and MAIL destinations.
The printing facility should be optional; otherwise we would have used FILE and PRINTER destinations in the .DST file.
Once the file is saved on disk, it can later be printed if the user requires more copies.
I would prefer a solution other than a user exit, as I don't have training in developing user exits.
Any help, please.
Tariq
I had a similar requirement and decided to run the report from Forms. When a button is pressed, the report is run to create the PDF file, followed by a separate command to invoke Acrobat Reader or Internet Explorer to display the file, rather than using the Reports previewer.
The file can be displayed from Forms by a host command in client/server, or by web.show_document on the web.
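As a sketch of that approach, assuming a Forms 6i button trigger (the report name, parameter list, and file path below are hypothetical placeholders, not from the original thread):

```sql
-- WHEN-BUTTON-PRESSED trigger sketch (hypothetical names and paths)
DECLARE
  pl_id  PARAMLIST;
  v_pdf  VARCHAR2(200) := 'c:\reports\output.pdf';
BEGIN
  -- 1. Run the report to disk as PDF (DESTYPE=FILE)
  pl_id := CREATE_PARAMETER_LIST('rep_params');
  ADD_PARAMETER(pl_id, 'DESTYPE',   TEXT_PARAMETER, 'FILE');
  ADD_PARAMETER(pl_id, 'DESFORMAT', TEXT_PARAMETER, 'PDF');
  ADD_PARAMETER(pl_id, 'DESNAME',   TEXT_PARAMETER, v_pdf);
  RUN_PRODUCT(REPORTS, 'myreport.rdf', SYNCHRONOUS, RUNTIME,
              FILESYSTEM, pl_id, NULL);
  DESTROY_PARAMETER_LIST(pl_id);
  -- 2. Display the saved file instead of the Reports previewer
  HOST('cmd /c start "" "' || v_pdf || '"');  -- client/server
  -- WEB.SHOW_DOCUMENT('http://server/docs/output.pdf', '_blank');  -- on the web
END;
```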
How can I pass parameters from one process flow to another process flow?
How can I pass parameters from one process flow to another process flow (a sub-process) in Warehouse Builder? Please let me know the steps I have to follow in Warehouse Builder.
Thanks in advance,
Kishan
Hi Kishan,
Please post this question to the Warehouse Builder forum:
Warehouse Builder
Thanks, Mark -
Should we use only one process order for each CO-product?
Dear All,
We have process orders; one process order is opened per day.
Now we need to use co-products and calculate the actual cost of goods manufactured for them.
Suppose we have 30 process orders. We have specified an apportionment structure in the material master.
Then, when we create a process order, we define the actual apportionment structure in the settlement rule as well.
After we have settled all the process orders, will the Material Ledger / actual costing run consider the apportionment structures from all 30 process orders?
I heard an opinion from an SAP consultant that it will not work.
Has anyone heard anything?
Best regards,
Kamila.
Hi Evgeniy,
Thanks for the response.
But we use process orders, not CO production orders.
We open a process order each day because management wants information about the KPIs of each shift.
We have defined the apportionment structure in the material master, and when we create a process order we define the apportionment structure in the settlement rule as well.
When we settle process orders, our cost is distributed to the finished good and the co-product according to this apportionment structure.
But when we run CKMLCP, ML/actual costing doesn't distribute the raw materials' price differences and the semi-finished goods' price differences according to this apportionment structure.
What can we do here?
Best regards,
Kamila. -
Add at least one process to the chain before saving !!!
Hello SDNers,
How are you all?
I am building a process chain that loads around 35 transactional data InfoPackages. The requirement is to delete the indexes collectively before the 35 data loads, and to regenerate the indexes after the loads.
Here I dragged and dropped Data Target Administration -> Delete Index, and with it a Create Index (generated from DROPINDEX) process came along automatically. I removed the link between these two, and when I tried to attach the first InfoPackage with Execute InfoPackage, it gave me the message:
Display More Chains:
Process Z_DELODS (type DROPCUBE) has already been used in other chains
Do you want these chains to be displayed in the maintenance screen too
Yes / No / Cancel
Then, when I tried to save or activate the process chain, it threw the message "Add at least one process to the chain before saving".
What could be the problem here?
Best Regards....
Sankar Kumar
+91 98403 47141
Hi,
Please select <b>No</b> when it displays the message
Display More Chains:
Process Z_DELODS (type DROPCUBE) has already been used in other chains
Do you want these chains to be displayed in the maintenance screen too
Yes / No / Cancel
Include at least one process after the start process before saving.
Hope this helps you... -
My AMF channel seems to be a polling one
Hello everyone.
When I try to use a simple AMF channel for my messaging service, it looks like some kind of polling is happening instead. I'm testing this with a simple chat application, and when I send, for example, 4 messages in a row, I have to wait a few seconds to see them appear in the right place.
Besides that I'm not doing anything: once I have subscribed to the destination, I can see in the BlazeDS log that the same messaging action repeats over and over. Here is a sample of the log:
[BlazeDS] Channel endpoint my-lol received request.
[BlazeDS] Deserializing AMF/HTTP request
Version: 3
(Message #0 targetURI=null, responseURI=/99)
(Array #0)
[0] = (Externalizable Object #0 'flex.messaging.messages.CommandMessageExt')
(Object #1)
(Object #2)
DSId = "5669DA6B-360B-0DB5-61E4-2B7EF19B15B5"
(Byte Array #3, Length 16)
2
[BlazeDS] Before handling general client poll request.
incomingMessage: Flex Message (flex.messaging.messages.CommandMessageExt)
operation = poll
clientId = null
correlationId =
destination =
messageId = AFDBF526-C039-0761-A2CD-CF71C19A7061
timestamp = 1223251050921
timeToLive = 0
body = {}
hdr(DSEndpoint) = my-lol
hdr(DSId) = 5669DA6B-360B-0DB5-61E4-2B7EF19B15B5
[BlazeDS] After handling general client poll request.
reply: Flex Message (flex.messaging.messages.AcknowledgeMessage)
clientId = null
correlationId = null
destination = null
messageId = 5674B9CD-9919-6D3D-2E72-A81BC9E2866B
timestamp = 1223251050921
timeToLive = 0
body = null
[BlazeDS] Serializing AMF/HTTP response
Version: 3
(Message #0 targetURI=/99/onResult, responseURI=)
(Externalizable Object #0 'DSK')
1.223251050921E12
(Byte Array #1, Length 16)
And so on! (Only the IDs change.)
Again, I'm doing simple things here: I have a Producer and a Consumer with the same destination, "chat". This destination uses a channel named "my-lol", which is a copy of the default "my-amf" one.
Can you explain to me what is happening?
Thank you
As you know, to use a Consumer you need a channel capable of push (or at least one that can simulate it) in order to receive updates. The AMFChannel sends and receives AMF messages over HTTP, so if you want to receive pushed messages you'll either need a long-lived connection or some method of polling at a reasonable interval to simulate one.
While LCDS has more channels than BlazeDS, I suggest reading Seth's documentation on the pros and cons of each kind of channel so that you can pick the approach you want based on your needs and application environment.
http://www.dcooper.org/Blog/client/index.cfm?mode=entry&entry=8E1439AD-4E22-1671-58710DD528E9C2E7
Pete
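If the goal is near-real-time delivery rather than the default short-interval polling, a long-polling AMF channel is one option. The fragment below is a sketch of what such a channel definition in services-config.xml can look like; the ids, endpoint URL, and exact property set are assumptions to verify against the BlazeDS channel documentation:

```xml
<!-- Hypothetical long-polling channel sketch for services-config.xml -->
<channel-definition id="my-longpolling-amf"
                    class="mx.messaging.channels.AMFChannel">
  <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amflongpolling"
            class="flex.messaging.endpoints.AMFEndpoint"/>
  <properties>
    <polling-enabled>true</polling-enabled>
    <polling-interval-seconds>5</polling-interval-seconds>
    <!-- a wait interval of -1 parks each poll request on the server
         until a message arrives, approximating push over HTTP -->
    <wait-interval-millis>-1</wait-interval-millis>
    <max-waiting-poll-requests>100</max-waiting-poll-requests>
  </properties>
</channel-definition>
```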
-
How can I deny write access to datalog files for all but one process in LV8?
In LabVIEW 7.1, wiring the deny mode terminal of Open File.vi with a Deny Write Only enum constant was an effective means for ensuring that only one process could write to a datalog file at a time. In LabVIEW 8.0, Open File.vi is no longer available and the new Open/Create/Replace Datalog vi does not provide a deny mode terminal. Also, the new Deny Access vi does not support datalog files. Furthermore, the Set Permissions vi is an unsatisfactory solution because under the Windows operating system, it simply sets the Read Only file attribute. This is inadequate because I have demonstrated that it is still possible for two processes to open a datalog file with read/write access before either one has had a chance to set the Read Only file attribute in order to lock out the file. If a process sets the Read Only file attribute first, then it can't open the file with read/write access for itself.
Does anyone understand the file mechanism by which deny mode used to work with the old Open File.vi? I wish to restore the functionality I had in LV 7.1 in my LV 8 programs.
Thanks!
Larry
Larry Stanos wrote:
I appreciate the empathy from Rolf, but I'm hoping that someone may have written one or more VIs containing CINs that call Windows 2000/XP file access control library routines. At least I'm assuming that is how the deny mode input to Open File.vi used to work in LV 7.1. The Microsoft Developers Network online documentation on Access Control http://msdn.microsoft.com/library/default.asp?url=/library/en-us/secauthz/security/access_control.as... is daunting, to put it mildly. But even if a set of CINs has not already been coded, perhaps someone could point me to the specific set of calls I need to make to absolutely guarantee that no two clients can simultaneously open the same file with write privileges. Unfortunately, the elimination of deny mode functionality for datalog files in LV8 has sabotaged my commitment to a March 1 release date, because it would also be impractical to convert everything back to LV 7.1 at this point. Sincere thanks to anyone who can help me out here!
Unfortunately, the functionality you mention does not work the way the deny mode in the LabVIEW nodes works. Basically, that deny mode is converted to a corresponding FILE_SHARE_READ/FILE_SHARE_WRITE value and passed to the Win32 API CreateFile function. This is more or less the only place where you can define global share (or deny) access to a file. That is also why the Deny Access node's online help says that the file is reopened.
But I just retried what you had attempted and, lo and behold, it works when wiring a datalog refnum to Deny Access. What is important here, however, is that you do need to wire a datatype to the record type input of the Open/Create/Replace Datalog node. Otherwise you can't connect the resulting datalog refnum to any other file function, since it is an incomplete datatype.
Rolf Kalbermatter
CIT Engineering Netherlands
a division of Test & Measurement Solutions -
Run one Process without Button Click
Hi Guys,
I created one report using a simple Select.
select ID_FACTURE,
ID_NUM
from FACTURE
In Report Attributes I edited ID_FACTURE and in Column Link I put this:
Link Text: <img src="#APP_IMAGES#delete.png"; border="0">
Target: Page in this Application - 58
item1: P58_ID_FACTURE Value: #ID_FACTURE#
So, my report is displaying the way I want. When I click the delete image, the item P58_ID_FACTURE gets the right value.
I created one process (type: *PL/SQL anonymous block*) to delete the row that I want.
But it is not working, because it is supposed to be triggered by a button.
Does anyone know how I can cheat Apex into doing it?
Thanks

If anyone wants to add a pop-up, just add this code:
javascript:{if (confirm('Voulez-vous effectuer cette action de suppression?')) redirect('f?p=&APP_ID.:58:&SESSION.::&DEBUG.::P58_ID_FACTURE:#ID_FACTURE#');}
:D -
Connecting One process Server to Multiple SAP Systems
Hi Experts,
We have a licensed version of SAP CPS with the Process Server Limit parameter set to 4, which means we are limited to 4 process servers, as I understand it. But when we create an SAP system, the process server and the respective queue are automatically created. So that means we can create a maximum of 4 SAP systems with 4 process servers!
Or is there a way we can connect one process server to multiple SAP systems?
Thanks,
Eric.

Yes, the Process Server Limit parameter set to 4 means your CPS environment can start jobs in 4 SAP systems only.
Maybe "SAP Business Automation Enabler (BAE) Connectors" could be useful to you in your situation.
- Check the CPS admin guide for more information on BAE connector.
Regards,
David -
How can one process "ask" if another process is running?
Hi there,
This is a very general question: how can one process "ask" if another process is running?
In other words, what do the two processes share?
Thanks for any help.

This code will print all active threads; look at the API docs for further information.
ThreadGroup tg = Thread.currentThread().getThreadGroup();
// Walk up to the root thread group.
while (tg.getParent() != null) {
    tg = tg.getParent();
}
// Print every thread group and thread in this JVM to standard output.
tg.list();
Tcode to view fico related postings for one process order
Hi Experts,
Is there any transaction code that can be used to view all related postings into FICO for one process order?
Thanks in advance.
Narayanan
Moderator: Please, search SDN

Hi,
Check the KOB1 t-code; give the order number and a date range.
Thanks
Goutam -
How to encode a stream and publish it to YouTube and Livestream in one process?
Dears ,
We start one encode to YouTube and another process to livestream.com, and that consumes much of the CPU.
I want to configure it so one process encodes to both YouTube and livestream.com, which would cut the CPU cost of encoding by 50%.
Kindly advise how to configure it?
Thanks.

There isn't any good way around this. If you want to send a stream to two different platforms, you have to have two different encoders. For FMLE to take the output of a single compressed stream and deliver it to two different streaming servers (Livestream and YouTube), you have to be running a streaming server.
A streaming server will take one single "input" stream and deliver it to multiple endpoints. Those endpoints could be viewers who want to watch, or they could be other streaming servers. In any event, what you're asking for is handled by a streaming server, and FMLE is an encoder, not a streaming server.
What you're asking to do can be done, but it's going to be very expensive, and I'm assuming you want to keep this low cost. Your cheapest solution is to use a computer that has the CPU horsepower to run two copies of FMLE, each encoding and sending a stream to one of the two endpoints (Livestream and YouTube)... or simply purchase a second computer and have each computer handle one stream encoding individually.
More than one process bind() to a multicast port.
Hi,
I found one strange behavior when more than one process bind()s to the same multicast port. The problem is as follows:
Suppose a process binds to a port, say 2000, and joins a multicast group, and this process does not set SO_REUSEADDR before the bind() and join. Later on, another program on the same host joins the same multicast group with the same port number (with SO_REUSEADDR). The second process will not receive any data from the multicast group (on that port), even though it can join the group successfully.
However, if both processes call setsockopt() with SO_REUSEADDR, both can receive multicast data from the group. The simplest fix is to have both set SO_REUSEADDR. But the problem is, the first application is off-the-shelf software and we do not have its source code.
I wonder if this is a problem with Solaris. Any thoughts?
Thanks,
Shao

Hi,
I found the answer to this question. Basically, according to the source code (Solaris 8 Foundation Source), ip6.c, line #2510, multicast data is forwarded to all "listeners" only if the first "binder" has SO_REUSEADDR set. What that means is, if the first binder does not set SO_REUSEADDR before bind(), any subsequent bind() with SO_REUSEADDR is meaningless (in practical terms).
My question is: is this behavior correct? I think it would be much better to allow subsequent listeners (which successfully bind and join the multicast group) to receive data. The fact that the first binder did not set SO_REUSEADDR before binding the port and joining the multicast group should not deny any "latecomers" the chance to join in and participate in the group. "Fixing the code" is not always practical, especially when one of the applications is a third-party product. Alternatively, bind() could be rejected outright when the first binder does not allow SO_REUSEADDR.
Any comment?
Regards,
Shao