Concurrent database insertion
Hi everyone,
Short history:
I have an application which contains 10 classes: 5 are for temporarily storing data and the rest are for manipulation, reading data from files, storing it in those storage classes, and inserting it into the database. The application works fine except for one problem: it takes about 3 to 4 days to insert 4 million rows.
Problem:
I created a multithreaded application, but I experience many problems: the data is no longer intact, and it updates some objects before using them. Sometimes it tries to insert into a child table before inserting into the parent table, so it fails with the error "parent key not found". I use the synchronized keyword on each method, but the problem is still there.
Request:
Please help me with how to insert many rows at the same time using multithreaded programming, keeping in mind that this application contains many classes which in turn are used to store temporary data before sending it to the database.
The fastest way to get data into a database is to use the import tools that come with the database. You create a file with a specific format and the tool processes the file.
I use an Oracle database, but not only do I not know these tools, I also don't know how to use them. Can you give me some sample code?
Oracle has a command-line tool, sqlldr. It takes an input data file and a control file. The control file tells sqlldr the database connection properties and describes the data file (column names, fixed-width vs. delimited, etc.). You can have your Java application generate the data file and then use Runtime.exec() to call the sqlldr command line. Look at the sqlldr documentation from Oracle for more info on the control and data file formats, and for command-line goodness. Handling errors is a pain: if there is an error, sqlldr terminates and writes an error file. Your Java will have to look for this error file and possibly parse it to deal with the errors. It really sucks, but it is possible. sqlldr will insert records very, very fast, much faster than you can through JDBC.
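To make the sqlldr mechanics concrete, here is a minimal sketch of a control file for a comma-delimited data file. The table name, column names, date mask, and paths are all hypothetical placeholders, not something from the original post:

```
-- load_orders.ctl (hypothetical table, columns, and paths)
LOAD DATA
INFILE '/tmp/orders.dat'
APPEND
INTO TABLE orders
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(order_id, customer_id, amount, created_date DATE "YYYY-MM-DD")
```

You would then invoke it along the lines of `sqlldr userid=user/pass@db control=load_orders.ctl log=load_orders.log`, and check the log and .bad files for rejected rows.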
Before you go to that pain, though, make sure it's the insert that is really taking the time. If the records you are generating are complex and require a lot of computation time on the Java side, then that could be at least part of the slowdown. A profiler is best for figuring this out, but you could also use System.currentTimeMillis to track time spent in different methods, or even comment out the "insert" and see how long it takes just to generate the records. Don't get me wrong, the inserts will take a lot of time, but it may be that you can get the rest of the code fast enough that the total runtime is acceptable without going the sqlldr route.
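A crude way to do that timing split without a profiler (generateRecord and insertRecord here are hypothetical stand-ins for your own methods, not anything from the original code):

```java
// Accumulate wall-clock time spent generating records vs. "inserting" them.
// The stand-in insertRecord() does nothing; in real code it would be a JDBC call.
public class TimingSketch {
    static long genMillis = 0;
    static long insertMillis = 0;

    static String generateRecord(int i) { return "row-" + i; }
    static void insertRecord(String r)  { /* JDBC insert would go here */ }

    public static void main(String[] args) {
        for (int i = 0; i < 100_000; i++) {
            long t0 = System.currentTimeMillis();
            String rec = generateRecord(i);
            genMillis += System.currentTimeMillis() - t0;

            long t1 = System.currentTimeMillis();
            insertRecord(rec);
            insertMillis += System.currentTimeMillis() - t1;
        }
        System.out.println("generation ms: " + genMillis);
        System.out.println("insert ms: " + insertMillis);
    }
}
```

If the generation column dominates, sqlldr alone won't save you; if the insert column dominates, batching or sqlldr is where to spend your effort.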
If you spend a lot of time generating data then multithreading will help, since you can generate data and insert at the same time. In that case I'd recommend two threads: one generates the data and adds it to an insert queue, and the other pulls data from the insert queue and inserts it. Make sure the generating thread only puts completed records in the queue, and you'll have no problem with inserting half-baked records. Also look at PreparedStatement.addBatch() to batch the inserts. Oh, and if you're not already using PreparedStatement then change that immediately; that alone will speed things up somewhat. Dynamic SQL with a plain Statement is pure evil for big inserts.
You'll probably want to use something like ArrayBlockingQueue for the queue, since it will prevent things like the generating thread getting too far ahead, putting a million records into the queue, and running out of memory.
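The two-thread approach above can be sketched as follows. This is a minimal illustration: the "insert" is an in-memory list so the example is self-contained; real code would replace it with PreparedStatement.addBatch()/executeBatch() as described above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer builds complete records and queues them; consumer drains the queue
// and "inserts" them. The bounded queue (capacity 1000) blocks the producer
// when full, so it can never run far ahead and exhaust memory.
public class InsertPipeline {
    static final String POISON = "__DONE__";          // end-of-stream marker
    static final List<String> inserted = new ArrayList<>();

    public static void run(int records) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < records; i++) {
                    queue.put("record-" + i);         // blocks when queue is full
                }
                queue.put(POISON);                    // tell consumer we're done
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String rec = queue.take();        // blocks when queue is empty
                    if (rec.equals(POISON)) break;
                    // Real code: ps.setString(...); ps.addBatch();
                    // and executeBatch() every N records.
                    inserted.add(rec);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }

    public static void main(String[] args) throws Exception {
        run(5000);
        System.out.println("inserted " + inserted.size() + " records");
    }
}
```

Because only whole records ever enter the queue, the consumer can never see a half-built object, which is exactly the "parent key not found" class of problem the original poster hit.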
- Jemiah
Similar Messages
-
Concurrency when inserting in the same table
Hi,
I have a report used by several users, and in this report there is a statement that inserts records into a Z table. My question is: what happens when two or more users insert records into that table at the same time? There is a concurrency problem, isn't there?
How does SAP manage this issue? Or, what must I do in this case?
Thanks in advance.
Regards.
Hi David,
As SAP applications are accessed by many end users at the same time, SAP has a concept called LOCKING for the tables involved in a transaction.
You can achieve your requirement as below:
Go to t-code SE11 and create a lock object for your Z table.
The system then creates 2 function modules: 1 for locking (ENQUEUE_*) and the other for unlocking (DEQUEUE_*) (search for the function modules in SE37).
Before saving data to the Z table, call the ENQUEUE_* function module to lock the table for writing. If locking fails, it means somebody is already working on it, so you can display an error message.
After the Save action, you can unlock the table using the DEQUEUE_* FM.
Note: you can also lock and unlock even during DISPLAY/CHANGE mode if your requirement needs it.
Hope this helps you.
Regards,
Rama -
Gc current request waits for concurrent DL inserts
I see 8 different concurrent sessions running the following insert statement (a direct-load insert from an ETL tool) on 2 different nodes of a 4-node RAC cluster (10.2.0.4). Table T1_TEMP is in an ASSM tablespace.
INSERT /*+ SYS_DL_CURSOR */ INTO T1_TEMP
("C1","C2","C3","C4","C5","C6"
) VALUES
(NULL,NULL,NULL,NULL,NULL,NULL)
and they are all observing 'gc current request' or 'gc buffer busy' waits for several files, always on block# 2. I'm wondering why it's having so many waits on block# 2 (I guess it's the file header block). Since it's a direct-load insert (yes, I have confirmed it's indeed a DL insert), all it should acquire is TEMP segment space above the HWM before inserting directly. So I'm just wondering why it's experiencing gc-related waits...
I tried to find the object name by querying dba_extents, but it does not return anything for file# 717, block# 2, or for any of the file#/block# 2 combinations mentioned below.
See below:
SID Wait State EVENT P1 P2 P3 SEQ# % TotTime TEventTime(ms) DistEvnts Avgtime(ms)/Evnt
1758 WAITING gc current request file#= 717 block#= 2 id#= 33554445 5.99 86.227 2 43.114
1758 WAITING gc buffer busy file#= 1058 block#= 2 id#= 65549 5.33 76.694 2 38.347
1758 WAITING gc current request file#= 738 block#= 2 id#= 33554445 5.29 76.147 2 38.074
1758 WAITING gc current request file#= 710 block#= 2 id#= 33554445 5.18 74.650 2 37.325
1758 WAITING gc current request file#= 719 block#= 2 id#= 33554445 5.13 73.901 2 36.950
1758 WAITING gc current request file#= 716 block#= 2 id#= 33554445 4.82 69.408 2 34.704
1758 WAITING gc current request file#= 13 block#= 2 id#= 33554445 4.80 69.091 2 34.546
1758 WAITING gc current request file#= 707 block#= 2 id#= 33554445 4.73 68.141 2 34.070
1758 WAITING gc current request file#= 726 block#= 2 id#= 33554445 4.45 64.138 2 32.069
1758 WAITING gc current request file#= 702 block#= 2 id#= 33554445 4.43 63.734 2 31.867
1758 WAITING gc current request file#= 1550 block#= 2 id#= 33554445 4.34 62.496 2 31.248
1758 WAITING gc current request file#= 739 block#= 2 id#= 33554445 3.49 50.198 2 25.099
1758 WAITING gc current request file#= 1 block#= 73575 id#= 33554433 3.25 46.829 1 46.829
1758 WAITING gc current request file#= 900 block#= 2 id#= 33554445 3.01 43.315 1 43.315
1758 WAITING gc buffer busy file#= 750 block#= 2 id#= 65549 2.88 41.501 1 41.501
1758 WAITING gc buffer busy file#= 901 block#= 2 id#= 65549 2.58 37.094 1 37.094
1758 WAITING gc buffer busy file#= 744 block#= 2 id#= 65549 2.41 34.762 1 34.762
1758 WAITING gc buffer busy file#= 745 block#= 2 id#= 65549 2.41 34.733 1 34.733
1758 WAITING gc buffer busy file#= 742 block#= 2 id#= 65549 2.39 34.474 1 34.474
1758 WAITING gc current request file#= 744 block#= 2 id#= 33554445 2.37 34.070 1 34.070
1758 WAITING gc current request file#= 749 block#= 2 id#= 33554445 2.36 34.042 1 34.042
1758 WAITING gc current request file#= 898 block#= 2 id#= 33554445 2.36 34.013 1 34.013
1758 WAITING gc current request file#= 742 block#= 2 id#= 33554445 2.31 33.235 1 33.235
1758 WAITING gc buffer busy file#= 749 block#= 2 id#= 65549 2.30 33.149 1 33.149
1758 WAITING gc current request file#= 750 block#= 2 id#= 33554445 2.29 32.918 1 32.918
1758 WAITING gc current request file#= 745 block#= 2 id#= 33554445 2.25 32.400 1 32.400
1758 WAITING gc current request file#= 901 block#= 2 id#= 33554445 2.18 31.392 1 31.392
1758 WAITING gc buffer busy file#= 739 block#= 2 id#= 65549 1.74 24.998 1 24.998
1758 WAITING SQL*Net message from client driver id= 1413697536 #bytes= 1 = 0 1.58 22.694 5 4.539
1758 WORKING On CPU / runqueue .73 10.512 64 .164
1758 WAITING gc cr request file#= 4 block#= 825 class#= 177 .08 1.210 1 1.210
1758 WAITING gc cr request file#= 1 block#= 73575 class#= 1 .08 1.152 1 1.152
1758 WAITING gc buffer busy file#= 1055 block#= 2 id#= 65549 .06 .922 2 .461
1758 WAITING SQL*Net more data from client driver id= 1413697536 #bytes= 40149 = 0 .06 .806 1 .806
Can anyone explain this??
Hi Tanel,
From the OP's Inputs
Buffer busy wait is with respect to: id#= 65549 (P3) in block#= 2
gc current request is with respect to: id#= 33554445 in block#= 2
As you stated from first post,
Block #2 is where the file level space usage bitmap header lives in locally managed tablespaces. This manages the file level space, not segment level.
If you're doing heavy parallel insertions with a small fixed LMT extent size, then when segments are extending frequently you can end up with contention on that block. First of all, your way of troubleshooting this problem is pretty good!
My doubt: what is the reason it would "end up with contention", given that the user is performing parallel insertions with a small fixed LMT extent size? In this case, wouldn't extension happen anyway?
From the second post
The ID# is a partially a bitfield and partially the class# of block involved in the wait.
The lower two bytes indicate the block class, which in your case is mostly 13 (if you look only at the lower 2 bytes of the values, e.g. 65549 - power(2,16) = 13).
How about 33554445? 33554445 - 33554432 (2^25) = 13.
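The arithmetic being traded here boils down to masking off the low 16 bits of the id# value. A small sketch (the interpretation of the upper bits as flag data follows the description in the posts above):

```java
// Extract the block class from a 'gc' wait event id# value:
// the low two bytes carry the block class, the upper bits are bitfield flags.
public class BlockClass {
    static int blockClass(long id) {
        return (int) (id & 0xFFFF);   // keep only the low 16 bits
    }

    public static void main(String[] args) {
        System.out.println(blockClass(65549L));     // 65536 + 13  -> 13
        System.out.println(blockClass(33554445L));  // 2^25 + 13   -> 13
    }
}
```

Both example values from the wait listing decode to class 13 this way, matching the "lower 2 bytes" reading in the thread.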
I would be thankful if you could add some inputs.
- Pavan Kumar N -
High Buffer Busy Wait due to Concurrent INSERTS
Hi All,
One of my OLTP databases is running 11.1.0.7 (11.1.0.7.0 - 64bit Production) on RHEL 5.4.
On a frequent basis I am observing 'BUFFER BUSY WAITS', and last time I tried to capture some dictionary information to dig into the waits.
1. Session Waits:
Oracle Sec Hash
Sid,Serial User OS User Svr-Pgm Wait Event State-Seq Wt Module Cmnd Value P1 P2 P3
633,40830 OLTP_USE fateadm 21646-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863905 1
647, 1761 OLTP_USE fateadm 22715-orac buffer busy wai Wtng-3837 0 ORDERS ISRT 3932487748 384 1863905 1
872, 5001 OLTP_USE fateadm 21836-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863905 1
702, 1353 OLTP_USE fateadm 21984-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863905 1
337,10307 OLTP_USE fateadm 21173-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863905 1
751,43016 OLTP_USE fateadm 21619-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863905 1
820,17959 OLTP_USE fateadm 21648-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863905 1
287,63359 OLTP_USE fateadm 27053-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863905 1
629, 1653 OLTP_USE fateadm 22468-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863905 1
788,14160 OLTP_USE fateadm 22421-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863905 1
615, 4580 OLTP_USE fateadm 21185-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863905 1
525,46068 OLTP_USE fateadm 27043-orac buffer busy wai Wtng-9034 1 ORDERS ISRT 3932487748 384 1863905 1
919,23243 OLTP_USE fateadm 21428-orac buffer busy wai Wtng-6340 1 ORDERS ISRT 3932487748 384 1863906 1
610,34557 OLTP_USE fateadm 21679-orac buffer busy wai Wtng-6422 1 ORDERS ISRT 3932487748 384 1863906 1
803, 1583 OLTP_USE fateadm 21580-orac buffer busy wai Wtng-6656 1 ORDERS ISRT 3932487748 384 1863906 1
781, 1523 OLTP_USE fateadm 21781-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863906 1
369,11005 OLTP_USE fateadm 21718-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863906 1
823,35800 OLTP_USE fateadm 21148-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863906 1
817, 1537 OLTP_USE fateadm 22505-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863906 1
579,54959 OLTP_USE fateadm 22517-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863906 1
591,33597 OLTP_USE fateadm 27027-orac buffer busy wai Wtng-9999 1 ORDERS ISRT 3932487748 384 1863906 1
481, 3031 OLTP_USE fateadm 21191-orac buffer busy wai Wtng-3502 1 ORDERS ISRT 3932487748 384 1863906 1
473,24985 OLTP_USE fateadm 22629-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863906 1
868, 3984 OLTP_USE fateadm 27191-orac buffer busy wai Wtng-9999 0 ORDERS ISRT 3932487748 384 1863906 1
select owner,segment_name,segment_type from dba_extents where file_id = 384 and 1863905 between block_id and block_id + blocks -1;
OWNER SEGMENT_NAME SEGMENT_TYPE
ORDER ORDER_DETAILS TABLE
select TABLE_NAME,PARTITIONED,ini_trans ,degree,compression,FREELISTS from dba_TABLES WHERE TABLE_NAME='ORDER_DETAILS';
TABLE_NAME PAR INI_TRANS DEGREE COMPRESS FREELISTS
ORDER_DETAILS NO 1 1 ENABLED 1
Tablespace is not ASSM managed !
select
object_name,
statistic_name,
value
from
V$SEGMENT_STATISTICS
where
object_name = 'ORDER_DETAILS';
OBJECT_NAME STATISTIC_NAME VALUE
ORDER_DETAILS logical reads 487741104
ORDER_DETAILS buffer busy waits 4715174
ORDER_DETAILS db block changes 200858896
ORDER_DETAILS physical reads 143642724
ORDER_DETAILS physical writes 20581330
ORDER_DETAILS physical reads direct 55239903
ORDER_DETAILS physical writes direct 19500551
ORDER_DETAILS space allocated 1.6603E+11
ORDER_DETAILS segment scans 9727
The ORDER_DETAILS table is a ~153 GB non-partitioned table.
It seems it's not a READ BY OTHER SESSION wait, but BUFFER BUSY due to write-write contention inside the same block. I have never observed cache buffers chains waits, ITL waits, or high wait times on db file sequential/scattered reads. The table contains one PK (a composite index on 3 columns) which seems to be highly fragmented. This non-partitioned global index has 3182037735 rows and its blevel is 4.
BHAVIK_DBA.FATE1NA>select index_name,status,num_rows,blevel,pct_free,ini_trans,clustering_factor from dba_indexes where index_name='IDX_ORDERS';
INDEX_NAME STATUS NUM_ROWS BLEVEL PCT_FREE INI_TRANS CLUSTERING_FACTOR
IDX_ORDERS VALID 3182037735 4 2 2 2529462377
1 row selected.
One of the index column value is being populated by sequence. (Monotonically increasing value)
SEGMENT_NAME MB
IDX_ORDERS 170590.438
Index size is greater than table size!
The tuning goal here is to reduce buffer busy waits and thus commit latencies.
I think I need to increase FREELISTS and PCT_FREE to address this issue, but I'm not confident whether it is going to solve the issue or not.
Can I ask for any help here?
Hi Johnathan,
Many thanks for your detailed write-up. I was expecting you!
Your post here gave a lot of information and wisdom that made me think for the last couple of hours; that is the reason for the delay in replying.
I did visit your index-explosion posts a couple of times, and that scenario gave me the insight that concurrent DML (INSERT) is the cause of index fragmentation in my case.
Let me also pick the opportunity to ask you to shed more light on some of the information you have highlighted.
"if you can work out the number of concurrent inserts that are really likely to happen at any one instant, then a value of freelists in the range of concurrency/4 to concurrency/2 is probably appropriate."
May I ask how you derived this formula? I don't want to miss a learning opportunity here!
"Note - with multiple freelists, you may find that you now get buffer busy waits on the segment header block."
I did not quite get this point. Can you shed more light, please? What piece of the segment header block is going to cause contention (BBW on SEGMENT HEADER) across all concurrent inserts?
"The solution to this is to increase the number of freelist groups (making sure that freelists and freelist groups have no common factors)."
My prod DB is a non-RAC environment. Can I use FREELIST GROUPS here? And my little knowledge didn't catch it: what "common factors" are you referring to here?
"The reads could be related to leaf block splits, but there are several possible scenarios that could lead to that pattern of activity - so the next step is to find out which blocks are being read. Capture a sample of the waits, then query dba_extents for the extent_id, file_id, and block_id (don't run that awful query with the "block_id + blocks" predicate) and cross-check the list of blocks to see if they are typically the first couple of blocks of an extent or randomly scattered throughout extents. If the former, the problem is probably related to ASSM; if the latter, it may be related to failed probes on index leaf block reuse (i.e. after large-scale deletes)."
I have a 10046 trace file with me (a sample is below) that can give some information. However, since the issue was critical, I killed the insert process and rebuilt both indexes. Since the index was rebuilt, I am not able to find any information in dba_extents.
select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=42 and block_id=1109331;
no rows selected
select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=42 and block_id=1109395 ;
no rows selected
select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=42 and block_id=1109459;
no rows selected
select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=10 and block_id=1107475;
no rows selected
select SEGMENT_NAME,SEGMENT_TYPE,EXTENT_ID from dba_extents where file_id=10 and block_id=1107539;
no rows selected
select object_name,object_Type from dba_objects where object_id=17599;
no rows selected
WAIT #4: nam='db file sequential read' ela= 49 file#=42 block#=1109331 blocks=1 obj#=17599 tim=1245687162307379
WAIT #4: nam='db file sequential read' ela= 59 file#=42 block#=1109395 blocks=1 obj#=17599 tim=1245687162307462
WAIT #4: nam='db file sequential read' ela= 51 file#=42 block#=1109459 blocks=1 obj#=17599 tim=1245687162307538
WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107475 blocks=1 obj#=17599 tim=1245687162307612
WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107539 blocks=1 obj#=17599 tim=1245687162307684
WAIT #4: nam='db file sequential read' ela= 198 file#=10 block#=1107603 blocks=1 obj#=17599 tim=1245687162307905
WAIT #4: nam='db file sequential read' ela= 88 file#=10 block#=1107667 blocks=1 obj#=17599 tim=1245687162308016
WAIT #4: nam='db file sequential read' ela= 51 file#=10 block#=1107731 blocks=1 obj#=17599 tim=1245687162308092
WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107795 blocks=1 obj#=17599 tim=1245687162308166
WAIT #4: nam='db file sequential read' ela= 49 file#=10 block#=1107859 blocks=1 obj#=17599 tim=1245687162308240
WAIT #4: nam='db file sequential read' ela= 52 file#=10 block#=1107923 blocks=1 obj#=17599 tim=1245687162308314
WAIT #4: nam='db file sequential read' ela= 57 file#=42 block#=1109012 blocks=1 obj#=17599 tim=1245687162308395
WAIT #4: nam='db file sequential read' ela= 52 file#=42 block#=1109076 blocks=1 obj#=17599 tim=1245687162308470
WAIT #4: nam='db file sequential read' ela= 98 file#=42 block#=1109140 blocks=1 obj#=17599 tim=1245687162308594
WAIT #4: nam='db file sequential read' ela= 67 file#=42 block#=1109204 blocks=1 obj#=17599 tim=1245687162308686
WAIT #4: nam='db file sequential read' ela= 53 file#=42 block#=1109268 blocks=1 obj#=17599 tim=1245687162308762
WAIT #4: nam='db file sequential read' ela= 54 file#=42 block#=1109332 blocks=1 obj#=17599 tim=1245687162308841
WAIT #4: nam='db file sequential read' ela= 55 file#=42 block#=1109396 blocks=1 obj#=17599 tim=1245687162308920
WAIT #4: nam='db file sequential read' ela= 54 file#=42 block#=1109460 blocks=1 obj#=17599 tim=1245687162308999
WAIT #4: nam='db file sequential read' ela= 52 file#=10 block#=1107476 blocks=1 obj#=17599 tim=1245687162309074
WAIT #4: nam='db file sequential read' ela= 89 file#=10 block#=1107540 blocks=1 obj#=17599 tim=1245687162309187
WAIT #4: nam='db file sequential read' ela= 407 file#=10 block#=1107604 blocks=1 obj#=17599 tim=1245687162309618
TKPROF for the above trace:
INSERT into
order_rev
(aggregated_revenue_id,
legal_entity_id,
gl_product_group,
revenue_category,
warehouse_id,
tax_region,
gl_product_subgroup,
total_shipments,
total_units_shipped,
aggregated_revenue_amount,
aggregated_tax_amount,
base_currency_code,
exchange_rate,
accounting_date,
inventory_owner_type_id,
fin_commission_structure_id,
seller_of_record_vendor_id,
organizational_unit_id,
merchant_id,
last_updated_date,
revenue_owner_type_id,
sales_channel,
location)
VALUES
(order_rev.nextval,:p1,:p2,:p3,:p4,:p5,:p6,:p7,:p8,:p9,:p10,:p11,:p12,to_date(:p13, 'dd-MON-yyyy'),:p14,:p15,:p16,:p17,:p18,sysdate,:p19,:p20,:p21)
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 613 5.50 40.32 96672 247585 306916 613
Fetch 0 0.00 0.00 0 0 0 0
total 613 5.50 40.32 96672 247585 306916 613
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 446
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 164224 0.04 62.33
SQL*Net message to client 613 0.00 0.00
SQL*Net message from client 613 0.03 0.90
latch: cache buffers chains 8 0.00 0.00
latch: object queue header operation 2 0.00 0.00
Is there any other way to find out the culprit amongst the two you have listed (ASSM / failed probes on index leaf block reuse)? -
Wait in concurrent insert in a table
I have a serious problem in my database which significantly increases contention.
It leads to long waits and forces me to kill some sessions.
I have a table (KP...); at a specific time of day there are up to 4 concurrent heavy inserts into this table.
What are the possibilities for improving the performance of this table?
How can I reduce this kind of wait event?
Thank you in advance.
This is the script of the tablespace and that specific table:
CREATE TABLESPACE ADMIN_DATA DATAFILE
'/opt/oradata/datafile/admin01.dbf' SIZE 250M AUTOEXTEND ON NEXT 1M MAXSIZE 4000M
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLE KPSTRNT
SRL NUMBER(12),
NUM VARCHAR2(13 BYTE),
DAY NUMBER(2),
MON NUMBER(2),
YER NUMBER(4),
ST1 NUMBER(2),
VER NUMBER(4),
U_DATE_TIME VARCHAR2(30 BYTE),
UAUSER_SRL NUMBER(6),
PRN_STS VARCHAR2(1 BYTE),
OGCOST_SRL NUMBER(6),
ISSINF_SRL NUMBER(6),
SRSTCK_SRL VARCHAR2(10 BYTE),
SRSTCK_SRL_TWO VARCHAR2(10 BYTE),
KPSTRN_SRL NUMBER(12),
KPSTRN_SRL_TWO NUMBER(12),
KPTDES_KCODE VARCHAR2(6 BYTE),
KPBDOC_SRL VARCHAR2(4 BYTE),
KPDCTP_SRL VARCHAR2(2 BYTE),
KPDCTP_SRL_REF VARCHAR2(2 BYTE),
CMMISR_SRL NUMBER(6),
U_DATE_TIME0 VARCHAR2(30 BYTE),
EFC_DTE VARCHAR2(20 BYTE),
ISSINF_SRL_OWN NUMBER(7),
CMSTAT_SRL NUMBER(6),
OLD_NUM VARCHAR2(30 BYTE),
NAME VARCHAR2(50 BYTE),
CAR_NUM VARCHAR2(20 BYTE),
CAR_TYPE VARCHAR2(20 BYTE),
PRPBLC_SRL NUMBER(6)
TABLESPACE ADMIN_DATA
PCTUSED 0
PCTFREE 20
INITRANS 20
MAXTRANS 255
STORAGE (
INITIAL 120M
NEXT 20M
MINEXTENTS 1
MAXEXTENTS 5
BUFFER_POOL KEEP
LOGGING
PARTITION BY RANGE (KPDCTP_SRL)
PARTITION KPSTRN_P10 VALUES LESS THAN ('11')
LOGGING
TABLESPACE KP_DATAP10
PCTFREE 20
INITRANS 20
MAXTRANS 255
STORAGE (
INITIAL 3120K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL KEEP
PARTITION KPSTRN_P11 VALUES LESS THAN ('12')
LOGGING
TABLESPACE KP_DATAP11
PCTFREE 20
INITRANS 20
MAXTRANS 255
STORAGE (
INITIAL 4160K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL KEEP
PARTITION KPSTRN_P20 VALUES LESS THAN ('21')
LOGGING
TABLESPACE KP_DATAP20
PCTFREE 20
INITRANS 20
MAXTRANS 255
STORAGE (
INITIAL 22680K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL KEEP
PARTITION KPSTRN_P21 VALUES LESS THAN ('22')
LOGGING
TABLESPACE KP_DATAP21
PCTFREE 20
INITRANS 20
MAXTRANS 255
STORAGE (
INITIAL 23720K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL KEEP
PARTITION KPSTRN_P30 VALUES LESS THAN ('31')
LOGGING
TABLESPACE KP_DATAP30
PCTFREE 20
INITRANS 20
MAXTRANS 255
STORAGE (
INITIAL 3120K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL KEEP
PARTITION KPSTRN_P31 VALUES LESS THAN ('32')
LOGGING
TABLESPACE KP_DATAP31
PCTFREE 20
INITRANS 20
MAXTRANS 255
STORAGE (
INITIAL 3120K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL KEEP
CACHE
NOPARALLEL
MONITORING;
And this is the script for my partitioned tablespace:
CREATE TABLESPACE KP_DATAP20 DATAFILE
'/opt/oradata/datafile/kp_datap20_reorg0.dbf' SIZE 100M AUTOEXTEND OFF
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
what do you suggest to improve the performance?
Edited by: Hesam on Dec 27, 2008 8:58 AM -
I wrote a concurrent process that updates a few tables. The concurrent process is submitted through Oracle Financials Version 11i and runs in an Oracle 10g database. I tried the following command to retrieve the name of the user that submitted the concurrent process:
INSERT INTO nyc.zzsco_material_transactions
(TRANSACTION_DATE
, INVENTORY_ITEM_ID
, trx_source_line_id
, creation_date
, user_name)
SELECT TRUNC(mmt.TRANSACTION_DATE)
, mmt.INVENTORY_ITEM_ID
, mmt.trx_source_line_id
, SYSDATE
, SYS_CONTEXT('USERENV','SESSION_USER')
FROM apps.mtl_material_transactions mmt
WHERE mmt.TRANSACTION_DATE BETWEEN vFrom_Date AND vTo_Date
AND mmt.transaction_type_id = 53;
The function SYS_CONTEXT('USERENV','SESSION_USER') returns the user 'APPS', but I want to know the name of the user that submitted the process, like 'JOE' or 'SUSAN', or a user ID. What function do I use for this in my environment?
Now I got it working with the select statement:
SELECT sys_context('USERENV', 'OS_USER') INTO vCurrentUser FROM dual;
But the SELECT returns the value 'appltecdev'.
This is the name of the database instance, but I need the name of the user, which for this test is me. Or a user ID would be okay. I just need something to identify the person who submitted the concurrent process.
Thanks, Ned -
Re: XMII - is there a way to dynamically create a local or transaction variable
Dear all,
Is there a way to create local or transaction variables dynamically inside BLS transactions?
I have a scenario where I am reading an XML document and, based on the number of node (sub-node) items, I need to temporarily save those values into transaction variables. I want to use these saved transaction variable values later to insert into the database.
After reading your second post, I wonder why you want to temporarily store the separate values from the XML nodes in a local property.
If you already have all the values available in an XML, then it would be easier to process the XML directly in MII, as that is what MII does best. MII can directly access the node values and use them as input for your database inserts. Using a repeater action, you can process 0 to n nodes. Use XPath to transfer the node values to the query parameters.
Is this the option you are looking for, or do I miss something?
Michael -
Implementing Workflow in Oracle Financials
Hi,
I would like to implement Workflow in 11i. Can anyone please guide me or point me to a link with detailed steps?
Regards
Muthu
Hi Hussein,
Yes, I followed the instructions in the doc to purge the email notifications.
Once everything was done, I activated my Workflow Mailer service and Workflow Agent Listener.
But no luck: again, nearly 300 concurrent requests were inserted into the FND_Concurrent_requests table with the description "Prepayment Matching Program" in 2-3 minutes, so I deactivated the Mailer service again.
Can you please guide me on how to check whether any workflow is enabled for the Prepayment Matching program, and how to disable it?
Regards
Muthu -
Hi Experts ,
Please suggest a SendGovernor document; please tell me where it explains how to make any modifications.
Like :
SAP BPC Send Governor Configuration Modifications Background
Some background on what the Send Governor (SG) accomplishes, so that you can understand
better how this works. The SG is designed to manage the Microsoft
Analysis Services locks.
This ensures consistent performance for the user and avoids deadlocks.
Configuring the Send Governor
I would like the actual system documentation for the SendGovernor and its parameters. Does this documentation exist?
Regards
Yaswanth
Hi,
All the parameters for Send Governor can be configured from web admin page - Appset parameters.
You can find documentation for SG in the Best Practices document from the HTG
Business Planning and Consolidation 5.x Performance Tuning Guide, available from
_http://wiki.sdn.sap.com/wiki/display/BPX/EnterprisePerformanceManagement%28EPM%29How-to+Guides
The Send Governor settings and the way it works have not changed from version 5.x through 7.5.
The only new thing in 7.5 is the parameter SEND_SGTABLE_COUNT, added under Application Parameters:
"Use this parameter to specify the count of sgData[Application] tables used by the data sending process. It makes sending data scalable, processing sent data in parallel across that number of tables.
Use this parameter so that the system can split the sgData[Application] table when it sends large amounts of data.
Valid values are positive integers larger than 0.
The default value is 2.
After you add or modify this parameter, modify the application in the Administration console."
In this way you will have complete information about SG.
In cases of high concurrency, the recommendations are:
THREAD_MAXNUM_SG - Maximum number of additional concurrent threads inserting into the write-back table. Recommended value: 1
INTERVAL_CHECK_SEND - Frequency with which the system sends batches of data from the staging table to the write-back table (milliseconds). Recommended value: 4000
MAXCELLS_THREAD_SG - The number of cells that will be processed by a thread. Recommended value: 1000000
UNITPER_SP - Maximum number of cells inserted into the write-back table per send. Recommended value: 1000000 -
Building indexes on top of BDB
Hi,
I have a need to index multiple entries based on the same key. Potentially, for the same key I can have millions of duplicate values. One of the problems I have seen is that once this scales, concurrent inserts searching for a non-existent entry can perform poorly.
What advice does the community have for solving problems in this space?
Hi,
Do you mean creating multiple indexes and querying with a join cursor? You could set DB_DUPSORT on the secondary index to achieve better query performance. For more details, refer to DB->set_flags().
Emily -
Concurrent program does not insert data into interface tables!
Hi there:
I am facing a problem with my concurrent program: whenever I execute my stored procedure in SQL*Plus it works fine and the data loads into AP_INTERFACE/LINES.
Whenever I try to load data through my concurrent program, it doesn't load into AP_INTERFACE/LINES; the concurrent request completes successfully but doesn't load the data.
This is the code; take a look.
CREATE OR REPLACE PROCEDURE CINDNOTE(errbuff OUT VARCHAR2,
retcode OUT NUMBER,
p_org IN VARCHAR2,
p_from_date IN VARCHAR2,
p_to_date IN VARCHAR2)
--p_org_id IN NUMBER,
/*
*Module Name AP DEBIT NOTE INTERFACE
*Description This Package contains PL/SQL to support the
* the DEBIT NOTE Inward Interface to AP
*Author Zeeshan Hussain Siddiqui
*Date 15 May, 2007
*Modification Log
*Developer Version Date Description
*Zeeshan Hussain Siddiqui 1.0.0.1 15-MAY-2007 This interface integrated to AP
*/
AS
ap_sequence NUMBER;
reject_debit CHAR:='D';
--v_invoice_lookup_code VARCHAR2(25):='Debit Memo';
--v_negative_amt1 CHAR:='(';
--v_negative_amt2 CHAR:=')';
v_code VARCHAR2(250):='01.01.000.10450.00.00000';
v_description VARCHAR2(250);
V_rma_no VARCHAR2(10):='RMA#';
from_date DATE;
to_date DATE;
CURSOR rejected_cur
IS
SELECT HR.full_name,ORG.organization_code InvOrg,
ROUND(NVL((CR.unit_price*quantity_rejected*-1)+NVL(CR.gst_amount*-1,0),0),2)
Invoice_Amt,ROUND(NVL(CR.unit_price*quantity_rejected*-1,0),2) AMT,ROUND(NVL(CR.gst_amount*-1,0),2) GST_AMT,
POS.vendor_site_code,CR.date_of_disposition disposition_date,POS.vendor_site_id,CR.organization_id,
(CASE WHEN CR.organization_id =305 THEN '01' WHEN CR.organization_id =304 THEN '01'
WHEN CR.organization_id =450 THEN '07' WHEN CR.organization_id =303 THEN '02' ELSE '00' END)||'.'||
(CASE WHEN CR.organization_id=305 THEN '02' ELSE '01' END)||'.000.'||(CASE WHEN CR.disposition=4
THEN '10430' WHEN CR.disposition=6 THEN '10433' WHEN CR.disposition=3 THEN '10430'
ELSE '00000' END)||'.'||'00.00000' Distribution_Code,
PO.vendor_id,CR.reject_number,CR.disposition,CR.po_number,CR.unit_price,CR.rework_po_no,
CR.shipping_memo, PO.vendor_name,
CR.debit_note_number Invoice_Number,CR.account_number,CR.currency_code,
CR.shipped_via,CR.vendor_rma,POC.first_name||' '||POC.last_name Contact,POS.phone,
SUBSTR(POS.Fax_Area_Code,1,10)||'-'||SUBSTR(POS.Fax,1,20) Fax_Number,
SUBSTR(POS.Address_Line1,1,100) Address,
SUBSTR(POS.City,1,25)||' '||SUBSTR(POS.State,1,20)||' '||SUBSTR(POS.Province,1,20)"City/State/Prov"
FROM apps.hr_employees hr,apps.mtl_system_items mtl,
apps.org_organization_definitions ORG,
apps.cin_rejects CR,apps.po_headers_all POH,
apps.po_vendors PO,apps.po_vendor_contacts POC,apps.po_vendor_sites_all POS
--WHERE TRUNC(CR.date_of_disposition) BETWEEN from_date AND to_date
WHERE To_char(CR.date_of_disposition,'j') BETWEEN to_char(from_date,'j') AND to_char(to_date,'j')
AND CR.organization_id =ORG.organization_id
AND ORG.organization_code =p_org
AND POH.segment1 =CR.po_number
AND HR.employee_id =MTL.buyer_id
and CR.organization_id =MTL.organization_id
AND CR.INVENTORY_ITEM_ID =MTL.INVENTORY_ITEM_ID
AND PO.vendor_id =POH.vendor_id
AND POH.vendor_contact_id =POC.vendor_contact_id
AND POH.vendor_site_id =POS.vendor_site_id
AND POS.invoice_currency_code =CR.currency_code
AND CR.disposition IN(3,4,6);
BEGIN
from_date:=FND_CONC_DATE.STRING_TO_DATE(p_from_date);
to_date:=FND_CONC_DATE.STRING_TO_DATE(p_to_date);
FOR rejected_rec IN rejected_cur
LOOP
-- Test the both-NULL case first; placed after the single-NULL branches it can never be reached.
IF rejected_rec.vendor_rma IS NULL AND rejected_rec.shipping_memo IS NULL THEN
v_description:=rejected_rec.full_name;
ELSIF rejected_rec.vendor_rma IS NULL THEN
v_description:=rejected_rec.shipping_memo||' '||rejected_rec.full_name;
ELSIF rejected_rec.shipping_memo IS NULL THEN
v_description:=v_rma_no||rejected_rec.vendor_rma||' '||rejected_rec.full_name;
ELSE
v_description:=v_rma_no||rejected_rec.vendor_rma||' '||rejected_rec.shipping_memo||' '||rejected_rec.full_name;
END IF;
SELECT AP_INVOICES_INTERFACE_S.NEXTVAL
INTO ap_sequence
FROM DUAL;
INSERT INTO AP_INVOICES_INTERFACE
(INVOICE_ID
,VENDOR_ID
,INVOICE_CURRENCY_CODE
,DESCRIPTION
,INVOICE_NUM
,VENDOR_NAME
,VENDOR_SITE_ID
,VENDOR_SITE_CODE
,INVOICE_DATE
,SOURCE
,INVOICE_AMOUNT
,INVOICE_TYPE_LOOKUP_CODE)
VALUES
(ap_sequence
,rejected_rec.vendor_id
,rejected_rec.currency_code
,v_description
,reject_debit||rejected_rec.reject_number
,rejected_rec.vendor_name
,rejected_rec.vendor_site_id
,rejected_rec.vendor_site_code
,rejected_rec.disposition_date
,'REJECTS'
,rejected_rec.Invoice_Amt
,'CREDIT');
IF rejected_rec.GST_AMT <0 THEN
INSERT INTO AP_INVOICE_LINES_INTERFACE
(INVOICE_ID
,LINE_TYPE_LOOKUP_CODE
,DIST_CODE_CONCATENATED
,ITEM_DESCRIPTION
,AMOUNT)
VALUES
(ap_sequence
,'TAX'
,v_code
,v_description
,rejected_rec.GST_AMT);
END IF;
INSERT INTO AP_INVOICE_LINES_INTERFACE
(INVOICE_ID
,LINE_TYPE_LOOKUP_CODE
,DIST_CODE_CONCATENATED
,ITEM_DESCRIPTION
,AMOUNT)
VALUES
(ap_sequence
,'ITEM'
,rejected_rec.Distribution_Code
,v_description
,rejected_rec.AMT);
COMMIT;
END LOOP;
END;
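The ordering in this procedure, header row first, then line rows reusing the same sequence value, is exactly the constraint behind "parent key not found" errors when loads are parallelized. A hedged pure-Java sketch of that rule, with hypothetical classes standing in for the interface tables (no real JDBC or Oracle API involved):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;

/** Toy model of the header/lines pattern: a line must reference an existing header id. */
public class InterfaceLoader {
    private final AtomicLong sequence = new AtomicLong(0); // stands in for AP_INVOICES_INTERFACE_S.NEXTVAL
    private final Set<Long> headers = new HashSet<>();     // stands in for AP_INVOICES_INTERFACE
    private final List<long[]> lines = new ArrayList<>();  // stands in for AP_INVOICE_LINES_INTERFACE

    /** Insert the parent first and return its id for the children to reuse. */
    public long insertHeader() {
        long id = sequence.incrementAndGet();
        headers.add(id);
        return id;
    }

    /** Child insert refuses ids with no parent -- the "parent key not found" case. */
    public void insertLine(long headerId, long amount) {
        if (!headers.contains(headerId)) {
            throw new IllegalStateException("parent key not found: " + headerId);
        }
        lines.add(new long[] {headerId, amount});
    }

    public int lineCount() {
        return lines.size();
    }

    public static void main(String[] args) {
        InterfaceLoader loader = new InterfaceLoader();
        long id = loader.insertHeader();  // parent first
        loader.insertLine(id, 100);       // then children, reusing the same id
        System.out.println("lines loaded: " + loader.lineCount()); // lines loaded: 1
    }
}
```

If multiple threads share such a loader, each thread must keep the header insert and its line inserts in one unit of work; interleaving them across threads is what produces child-before-parent failures.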
Please reply ASAP.
Thanks,
Zeeshan
Hi All,
I have created a package with a procedure that inserts records into custom tables I created. When I run the script directly in the back end, it completes in reasonable time and gives the desired output.
But, as per the requirement, I have to run this package procedure via a concurrent program. When I submit the request, it takes hours to complete. I am using BULK COLLECT and FORALL (since there are more than 3 lakh, i.e. about 300,000, records) and I commit after every FORALL. But when I query the table an hour or so later, no rows are returned.
Please help and reply asap.
Thanks in Advance....!! -
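On the BULK COLLECT/FORALL point above: the speedup comes from flushing rows in fixed-size batches instead of one statement per row, the same shape as JDBC addBatch()/executeBatch(). A pure-Java sketch of that accumulate-and-flush pattern (class and method names are illustrative, and the flusher callback stands in for one FORALL or executeBatch() call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Accumulates rows and flushes them in fixed-size batches, mirroring
 *  PL/SQL BULK COLLECT ... FORALL or JDBC addBatch()/executeBatch(). */
public class BatchWriter<T> {
    private final int batchSize;
    private final Consumer<List<T>> flusher; // one round trip per batch
    private final List<T> buffer = new ArrayList<>();
    private int flushes = 0;

    public BatchWriter(int batchSize, Consumer<List<T>> flusher) {
        this.batchSize = batchSize;
        this.flusher = flusher;
    }

    public void add(T row) {
        buffer.add(row);
        if (buffer.size() >= batchSize) flush();
    }

    /** Flush pending rows; call once more after the last add() for the partial batch. */
    public void flush() {
        if (buffer.isEmpty()) return;
        flusher.accept(new ArrayList<>(buffer));
        buffer.clear();
        flushes++;
    }

    public int flushCount() { return flushes; }

    public static void main(String[] args) {
        BatchWriter<Integer> w = new BatchWriter<>(100, batch ->
            System.out.println("flushed " + batch.size() + " rows"));
        for (int i = 0; i < 250; i++) w.add(i);
        w.flush(); // final partial batch of 50
        System.out.println("total flushes: " + w.flushCount()); // total flushes: 3
    }
}
```

Note this only models the batching shape; it does not explain the "no rows returned" symptom, which is more likely an uncommitted transaction or a wrong-environment/org-context issue in the concurrent program.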
Why an insert journal concurrent works for one app user and not for other?
We have two application users, User A and User B. They have the same responsibilities and grants over the same functions and menus. When they run a custom insert-journal concurrent program, only User A can execute it successfully.
User B uses the same parameters and data as User A, but when User B runs the concurrent, it always fails.
What is the difference between these users?
How can we check this difference?
Hi,
did you check user-specific profile option settings as well?
In addition, it would be helpful to have some more input
regarding the error that is raised when user B runs the
concurrent (the concurrent log file output).
Regards -
Concurrency wait on an insert statement
Hello All,
I am running Oracle RAC 2 nodes 11g R2 on AIX 7.1
I have a table with unique index, and the application is doing inserts/updates into this table.
Suddenly, and for about half a minute, I faced high concurrency waits on all the processes running these inserts on one node. I saw this high concurrency wait in the Top Activity screen of OEM on only one of the nodes, even though the processes doing these inserts run on both nodes.
All I have is that during this half minute I see a high concurrency wait in the OEM Top Activity screen related to this insert statement, and when I clicked on the insert I found high "enq: TX - index contention" waits. Again, this was only on one node.
After this half minute everything went back to normal.
What could be the reason and how can I investigate it ?
Regards,
Neo-b wrote:
I bet that the INDEX contains a SEQUENCE. -
Facing error when running a Java concurrent program to insert data into a table
Hi All,
This is the first time I am working on Java concurrent programs. I created a Java class with the code below:
// The package must match the class path registered for the concurrent program
// executable (the stack trace below shows the loader expecting
// oracle.apps.iby.scheduler.test.XXIBE_KeyInsert).
package oracle.apps.iby.scheduler.test;

import java.sql.Connection;
import java.sql.PreparedStatement;
import oracle.apps.fnd.cp.request.CpContext;
import oracle.apps.fnd.cp.request.JavaConcurrentProgram;
import oracle.apps.fnd.cp.request.ReqCompletion;
import oracle.apps.iby.database.DBWrapper;
import oracle.apps.iby.ecapp.OraPmt;
import oracle.apps.iby.exception.Log;
import oracle.apps.iby.scheduler.SchedUtils;
import oracle.apps.iby.security.SecurityUtil;

public class XXIBE_KeyInsert implements JavaConcurrentProgram {

    public XXIBE_KeyInsert() {
    }

    public void runProgram(CpContext cpcontext) {
        ReqCompletion reqcompletion = cpcontext.getReqCompletion();
        Connection connection = null;
        try {
            OraPmt.init(cpcontext);
            Log.debug("Inserting Credit Card key", 1, "XXIBE_KeyInsert.java");
            byte[] abyte0 = SecurityUtil.getSystemKey();
            connection = cpcontext.getJDBCConnection();
            // Bind the key bytes; concatenating a byte[] into the SQL string
            // would insert the array's toString() value, not the key.
            PreparedStatement st =
                connection.prepareStatement("INSERT INTO xxibe_scodes VALUES (?)");
            st.setBytes(1, abyte0);
            st.executeUpdate();
            connection.commit();
            st.close();
            Log.debug("done", 1, "XXIBE_KeyInsert.java");
            reqcompletion.setCompletion(ReqCompletion.NORMAL, "Request Completed Normal");
            OraPmt.end();
            SchedUtils.setSuccess(reqcompletion);
        } catch (Exception e) {
            e.printStackTrace();
            reqcompletion.setCompletion(ReqCompletion.ERROR, e.toString());
        } finally {
            DBWrapper.closeDBConnection(connection);
        }
    }
}
I compiled the program under java_top/oracle/apps/../.. (in the package given), so the class and Java files are now in that location. I created an executable of type "Java Concurrent Program" whose execution file name matches the package plus class name.
I then created a program and assigned it to the responsibility. Having done this, when I run the concurrent program it ends with an error message. The log file shows the exception "java.lang.ClassNotFoundException".
We are doing this in R12.0.6 on a Unix server. Please help me find where I am going wrong; it has turned into an urgent requirement on my end.
Thanks,
Abhishek.
The exception stack is:
java.lang.ClassNotFoundException: oracle.apps.iby.scheduler.test.XXIBE_KeyInsert
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:268)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:164)
at oracle.apps.fnd.cp.request.Run.main(Run.java:152).
Please advise me on how to proceed. -
Data inserted twice in a table through concurrent program
Hi All,
I have a concurrent program that is run from two different responsibilities, CANADA and US respectively.
It sometimes happens that, because it is run from two different responsibilities, it inserts data twice into a table.
I want to know the best way to restrict this. Would setting an incompatibility be the right option, so that I make the concurrent program incompatible with itself, or is there a better way?
If setting the incompatibility is better, please let me know how to do it.
Regards,
SK
The process to make a program incompatible with itself is described in MOS Doc 142944.1 (How to Make a Concurrent Program Incompatible with Itself) - you will have to bounce the concurrent managers for the setting to take effect.
HTH
Srini