Bulk Insert/Update in Toplink 10.1.3.
Hi experts,
I want to update a column in the database for all rows of a particular table, and I want the change to be reflected in the TopLink cache on all nodes of the WebLogic cluster. The caches on the various nodes are synchronized using a JMS topic. For performance reasons I want to avoid registering all these objects in the UnitOfWork. The changes do not seem to propagate when I use other bulk update methods. Is there a standard way of doing this?
Thanks,
Kamal
You can update a set of rows using an Update All JPQL query in JPA, or using the native UpdateAllQuery class.
An Update All query will invalidate the local cache, but is not currently broadcast across cache coordination.
The Cache/IdentityMapAccessor invalidateObject() API allows an object to be invalidated across the cluster, but not a class or query set.
Please log a bug for this on EclipseLink, and contact Oracle technical support if you need a patch for this.
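In the meantime, one workaround is to broadcast your own class-level invalidation message on the coordination topic after running the Update All query, and have each node invalidate its local cache for that class. A minimal Python sketch of the pattern (the class names and message format are illustrative, not the TopLink API):

```python
# Sketch: each node drops its cached rows for a class when an
# "invalidate_class" message arrives on the shared topic.
class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # class_name -> {pk: object}

    def on_message(self, msg):
        if msg["op"] == "invalidate_class":
            self.cache.pop(msg["class"], None)  # drop all cached rows of that class

class Topic:
    """Stand-in for the JMS topic used for cache coordination."""
    def __init__(self):
        self.subscribers = []

    def publish(self, msg):
        for node in self.subscribers:
            node.on_message(msg)

topic = Topic()
nodes = [Node("n1"), Node("n2")]
topic.subscribers.extend(nodes)

# Both nodes hold stale Employee rows after the bulk UPDATE ran in the DB.
nodes[0].cache["Employee"] = {1: "stale"}
nodes[1].cache["Employee"] = {1: "stale"}

# After executing the UpdateAllQuery, broadcast the class invalidation.
topic.publish({"op": "invalidate_class", "class": "Employee"})
print(all("Employee" not in n.cache for n in nodes))  # True
```

The next read on any node then misses the cache and refreshes from the database, which is the same end state the built-in local invalidation gives you on the node that ran the query.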
James : http://www.eclipselink.org
Similar Messages
-
Extremely slow inserts/updates reported after HW upgrade...
Hi guys,
I'll try to be as descriptive as I can. It's this project in which we have to move circa 6 mission-critical (24x7) and mostly OLTP databases from MS Windows 2003 (DB on local disks) to HP-UX IA (CA Metrocluster, HP XP 12000 disk array) - all ORA10gR2 10.2.0.4. And everything was perfect until we moved this XYZ database...
Almost immediately users reported "considerable" performance degradation. According to the 3rd-party application log they get almost 40 secs. instead of the previously recorded 10.
We, I mean the Oracle and HP specialists, haven't noticed/recorded any significant peaks/bottlenecks (RAM, CPU, disk I/O).
Feel free to check 3 AWR reports and the init.ora at [http://www.mediafire.com/?sharekey=0269c9bc606747b47f7ec40ada4772a6e04e75f6e8ebb871]
1_awrrpt_standard.txt - standard workload during 8 hours (peak hours are from 8-12 AM)
2_awrrpt_2hrs_ca.txt - standard workload during 2 peak hours (8-10)
3_awrrpt_2hrs_noca.txt - standard workload during 2 peak hours (10-12) with CA disk mirroring disabled
Of course, I've checked the ADDM reports - and first, I'd like to ask why ADDM keeps on reporting the following (on all database instances on this
cluster node):
FINDING 1: 100% impact (310 seconds)
Significant virtual memory paging was detected on the host operating system.
RECOMMENDATION 1: Host Configuration, 100% benefit (310 seconds)
Is it just some kind of false alarm (like we used to get on MS Windows)? Both nodes are running with 32 gigs of RAM and roughly more than 10 gigs constantly free.
Second, as ADDM reported:
FINDING 2: 44% impact (135 seconds)
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
we've tried to split the CA disk mirroring, using RAID 10 for the redo log file disks etc. etc. No substantial performance gain was reported by users (though I've noticed some in the AWR reports).
Despite the confusing app users' feedback I'm nearly sure that our bottleneck is the redo log file disks. Why? Previously (old HW) we had 1-3 ms avg wait on log file sync and log file parallel write, and now (new HW, RAID 5/RAID 10, we've tested both) it's 8 ms or even more. We were able to get 2 ms only with CA switched off (HP Metrocluster disk array mirroring).
And that brings up two new questions:
1. Does redo log group mirroring (2 on 2 separate disks vs. 1 on 1 disk) have any
significant impact on abovementioned wait events? I mean what performance gain
could I expect when I drop all "secondary" redo log members?
2. Why do we get almost identical response times when we run bulk insert/update tests (say
1000000 rows) against old and new DB/HW?
Thanks in advance,
tOPsEEK
Edited by: smutny on 1.11.2009 17:39
Edited by: smutny on 1.11.2009 17:46
Hi again,
so here's the actual AWR report (1 minute window while the most problematic operation took place). I think it's becoming clear we have to deal with slow redo log writes...
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
... 294736156 ... 1 10.2.0.4.0 NO ...
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1254 02-Nov-09 10:07:45 91 46.4
End Snap: 1255 02-Nov-09 10:08:47 91 46.4
Elapsed: 1.04 (mins)
DB Time: 0.51 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 576M 576M Std Block Size: 8K
Shared Pool Size: 912M 912M Log Buffer: 14,372K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 163,781.94 4,575.45
Logical reads: 1,677.15 46.85
Block changes: 1,276.32 35.66
Physical reads: 1.99 0.06
Physical writes: 1.16 0.03
User calls: 426.41 11.91
Parses: 20.74 0.58
Hard parses: 0.19 0.01
Sorts: 2.38 0.07
Logons: 0.00 0.00
Executes: 386.76 10.80
Transactions: 35.80
% Blocks changed per Read: 76.10 Recursive Call %: 31.51
Rollback per transaction %: 0.00 Rows per Sort: 369.98
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.88 In-memory Sort %: 100.00
Library Hit %: 99.90 Soft Parse %: 99.07
Execute to Parse %: 94.64 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 112.50 % Non-Parse CPU: 99.11
Shared Pool Statistics Begin End
Memory Usage %: 88.90 88.87
% SQL with executions>1: 98.74 99.39
% Memory for SQL w/exec>1: 95.35 97.75
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
log file parallel write 2,228 21 10 69.4 System I/O
log file sync 2,220 21 10 69.2 Commit
CPU time 20 65.5
SQL*Net break/reset to client 2,106 1 1 3.4 Applicatio
db file sequential read 131 0 4 1.5 User I/O
Time Model Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> Total time in database user-calls (DB Time): 30.9s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
DB CPU 20.2 65.5
sql execute elapsed time 9.1 29.4
PL/SQL execution elapsed time 0.3 1.0
parse time elapsed 0.2 .5
hard parse elapsed time 0.1 .5
repeated bind elapsed time 0.0 .0
DB time 30.9 N/A
background elapsed time 22.2 N/A
background cpu time 0.4 N/A
Wait Class DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
System I/O 3,213 .0 22 7 1.4
Commit 2,220 .0 21 10 1.0
Application 2,106 .0 1 1 0.9
User I/O 134 .0 0 4 0.1
Network 29,919 .0 0 0 13.4
Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
log file sync 2,220 .0 21 10 1.0
SQL*Net break/reset to clien 2,106 .0 1 1 0.9
db file sequential read 131 .0 0 4 0.1
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
SQL*Net more data to client 3,397 .0 0 0 1.5
SQL*Net message to client 26,509 .0 0 0 11.9
control file sequential read 897 .0 0 0 0.4
SQL*Net more data from clien 13 .0 0 0 0.0
direct path write 3 .0 0 0 0.0
SQL*Net message from client 26,510 .0 1,493 56 11.9
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
PL/SQL lock timer 10 100.0 49 4897 0.0
Background Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
control file sequential read 71 .0 0 0 0.0
rdbms ipc message 2,412 7.7 525 218 1.1
pmon timer 20 100.0 59 2929 0.0
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
Operating System Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total
AVG_BUSY_TIME 847
AVG_IDLE_TIME 5,362
AVG_IOWAIT_TIME 2,692
AVG_SYS_TIME 295
AVG_USER_TIME 549
BUSY_TIME 3,396
IDLE_TIME 21,457
IOWAIT_TIME 10,776
SYS_TIME 1,190
USER_TIME 2,206
LOAD 0
OS_CPU_WAIT_TIME 192,401,000
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 40,960
VM_OUT_BYTES 335,872
PHYSICAL_MEMORY_BYTES 34,328,276,992
NUM_CPUS 4
NUM_CPU_SOCKETS 4
Instance Activity Stats DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total per Second per Trans
CPU used by this session 984 15.8 0.4
CPU used when call started 344 5.5 0.2
CR blocks created 4 0.1 0.0
Cached Commit SCN referenced 208 3.3 0.1
Commit SCN cached 0 0.0 0.0
DB time 2,589 41.6 1.2
DBWR checkpoint buffers written 69 1.1 0.0
DBWR checkpoints 0 0.0 0.0
DBWR object drop buffers written 0 0.0 0.0
DBWR tablespace checkpoint buffe 0 0.0 0.0
DBWR thread checkpoint buffers w 0 0.0 0.0
DBWR transaction table writes 8 0.1 0.0
DBWR undo block writes 15 0.2 0.0
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 1,156 18.6 0.5
IMU Redo allocation size 996,048 16,017.2 447.5
IMU commits 2,100 33.8 0.9
IMU contention 0 0.0 0.0
IMU ktichg flush 0 0.0 0.0
IMU pool not allocated 0 0.0 0.0
IMU recursive-transaction flush 0 0.0 0.0
IMU undo allocation size 22,402,560 360,250.9 10,064.0
IMU- failed to get a private str 0 0.0 0.0
Misses for writing mapping 0 0.0 0.0
SQL*Net roundtrips to/from clien 26,480 425.8 11.9
active txn count during cleanout 34 0.6 0.0
application wait time 106 1.7 0.1
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 199 3.2 0.1
branch node splits 0 0.0 0.0
buffer is not pinned count 13,919 223.8 6.3
buffer is pinned count 19,483 313.3 8.8
bytes received via SQL*Net from 884,016 14,215.7 397.1
bytes sent via SQL*Net to client 9,602,642 154,418.1 4,313.9
calls to get snapshot scn: kcmgs 13,641 219.4 6.1
calls to kcmgas 3,029 48.7 1.4
calls to kcmgcs 56 0.9 0.0
change write time 8 0.1 0.0
cleanout - number of ktugct call 42 0.7 0.0
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 0 0.0 0.0
cluster key scan block gets 1,100 17.7 0.5
cluster key scans 1,077 17.3 0.5
commit batch/immediate performed 1 0.0 0.0
commit batch/immediate requested 1 0.0 0.0
commit cleanout failures: block 0 0.0 0.0
commit cleanout failures: buffer 0 0.0 0.0
commit cleanout failures: callba 4 0.1 0.0
commit cleanout failures: cannot 0 0.0 0.0
commit cleanouts 9,539 153.4 4.3
commit cleanouts successfully co 9,535 153.3 4.3
commit immediate performed 1 0.0 0.0
commit immediate requested 1 0.0 0.0
commit txn count during cleanout 26 0.4 0.0
concurrency wait time 0 0.0 0.0
consistent changes 264 4.3 0.1
consistent gets 48,659 782.5 21.9
consistent gets - examination 26,952 433.4 12.1
consistent gets direct 2 0.0 0.0
consistent gets from cache 48,657 782.4 21.9
cursor authentications 2 0.0 0.0
data blocks consistent reads - u 4 0.1 0.0
db block changes 79,369 1,276.3 35.7
db block gets 55,636 894.7 25.0
db block gets direct 3 0.1 0.0
db block gets from cache 55,633 894.6 25.0
deferred (CURRENT) block cleanou 4,768 76.7 2.1
dirty buffers inspected 0 0.0 0.0
enqueue conversions 15 0.2 0.0
enqueue releases 9,967 160.3 4.5
enqueue requests 9,967 160.3 4.5
enqueue timeouts 0 0.0 0.0
enqueue waits 0 0.0 0.0
execute count 24,051 386.8 10.8
failed probes on index block rec 0 0.0 0.0
frame signature mismatch 0 0.0 0.0
free buffer inspected 680 10.9 0.3
free buffer requested 1,297 20.9 0.6
heap block compress 11 0.2 0.0
hot buffers moved to head of LRU 1,797 28.9 0.8
immediate (CR) block cleanout ap 0 0.0 0.0
immediate (CURRENT) block cleano 2,274 36.6 1.0
index crx upgrade (positioned) 47 0.8 0.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 10,326 166.1 4.6
index scans kdiixs1 6,071 97.6 2.7
leaf node 90-10 splits 14 0.2 0.0
leaf node splits 18 0.3 0.0
lob reads 0 0.0 0.0
lob writes 198 3.2 0.1
lob writes unaligned 176 2.8 0.1
logons cumulative 0 0.0 0.0
messages received 2,272 36.5 1.0
messages sent 2,272 36.5 1.0
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 21,083 339.0 9.5
opened cursors cumulative 1,290 20.7 0.6
parse count (failures) 0 0.0 0.0
parse count (hard) 12 0.2 0.0
parse count (total) 1,290 20.7 0.6
parse time cpu 18 0.3 0.0
parse time elapsed 16 0.3 0.0
physical read IO requests 124 2.0 0.1
physical read bytes 1,015,808 16,335.0 456.3
physical read total IO requests 1,030 16.6 0.5
physical read total bytes 15,785,984 253,851.1 7,091.6
physical read total multi block 0 0.0 0.0
physical reads 124 2.0 0.1
physical reads cache 122 2.0 0.1
physical reads cache prefetch 0 0.0 0.0
physical reads direct 2 0.0 0.0
physical reads direct (lob) 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 0 0.0 0.0
physical write IO requests 47 0.8 0.0
physical write bytes 589,824 9,484.8 265.0
physical write total IO requests 4,591 73.8 2.1
physical write total bytes 25,374,720 408,045.5 11,399.3
physical write total multi block 4,461 71.7 2.0
physical writes 72 1.2 0.0
physical writes direct 3 0.1 0.0
physical writes direct (lob) 3 0.1 0.0
physical writes direct temporary 0 0.0 0.0
physical writes from cache 69 1.1 0.0
physical writes non checkpoint 18 0.3 0.0
pinned buffers inspected 1 0.0 0.0
prefetch warmup blocks aged out 0 0.0 0.0
prefetched blocks aged out befor 0 0.0 0.0
process last non-idle time 0 0.0 0.0
recursive calls 12,197 196.1 5.5
recursive cpu usage 747 12.0 0.3
redo blocks written 11,398 183.3 5.1
redo buffer allocation retries 0 0.0 0.0
redo entries 6,920 111.3 3.1
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo ordering marks 96 1.5 0.0
redo size 10,184,944 163,781.9 4,575.5
redo subscn max counts 811 13.0 0.4
redo synch time 2,190 35.2 1.0
redo synch writes 2,220 35.7 1.0
redo wastage 1,377,920 22,158.0 619.0
redo write time 2,192 35.3 1.0
redo writer latching time 0 0.0 0.0
redo writes 2,228 35.8 1.0
rollback changes - undo records 24 0.4 0.0
rollbacks only - consistent read 4 0.1 0.0
rows fetched via callback 1,648 26.5 0.7
session connect time 0 0.0 0.0
session cursor cache hits 1,242 20.0 0.6
session logical reads 104,295 1,677.2 46.9
session pga memory 2,555,904 41,101.0 1,148.2
session pga memory max 0 0.0 0.0
session uga memory 123,488 1,985.8 55.5
session uga memory max 0 0.0 0.0
shared hash latch upgrades - no 66 1.1 0.0
sorts (disk) 0 0.0 0.0
sorts (memory) 148 2.4 0.1
sorts (rows) 54,757 880.5 24.6
sql area evicted 86 1.4 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 596 9.6 0.3
table fetch by rowid 9,173 147.5 4.1
table fetch continued row 0 0.0 0.0
table scan blocks gotten 982 15.8 0.4
table scan rows gotten 154,079 2,477.7 69.2
table scans (cache partitions) 0 0.0 0.0
table scans (long tables) 0 0.0 0.0
table scans (short tables) 59 1.0 0.0
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 1 0.0 0.0
undo change vector size 3,990,136 64,164.5 1,792.5
user I/O wait time 49 0.8 0.0
user calls 26,517 426.4 11.9
user commits 2,226 35.8 1.0
user rollbacks 0 0.0 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 204 3.3 0.1
write clones created in backgrou 0 0.0 0.0
write clones created in foregrou 0 0.0 0.0
-------------------------------------------------------------
... and what's even more interesting is the report that I've got using Tanel Poder's great session snapper script. Take a look at these numbers (excerpt):
SID USERNAME TYPE STATISTIC DELTA HDELTA/SEC %TIME
668 ROVE_EDA WAIT log file sync 2253426 450.69ms 45.1%
668 ROVE_EDA WAIT log file sync 2140618 428.12ms 42.8%
668 ROVE_EDA WAIT log file sync 2088327 417.67ms 41.8%
668 ROVE_EDA WAIT log file sync 2184408 364.07ms 36.4%
668 ROVE_EDA WAIT log file sync 2117470 352.91ms 35.3%
668 ROVE_EDA WAIT log file sync 2051280 341.88ms 34.2%
668 ROVE_EDA WAIT log file sync 1595019 265.84ms 26.6%
668 ROVE_EDA WAIT log file sync 612034 122.41ms 12.2%
668 ROVE_EDA WAIT log file sync 2162980 432.6ms 43.3%
668 ROVE_EDA WAIT log file sync 2071811 345.3ms 34.5%
668 ROVE_EDA WAIT log file sync 2004571 334.1ms 33.4%
668 ROVE_EDA WAIT db file sequential read 28401 5.68ms .6%
668 ROVE_EDA WAIT db file sequential read 29028 4.84ms .5%
668 ROVE_EDA WAIT db file sequential read 24846 4.14ms .4%
668 ROVE_EDA WAIT db file sequential read 24323 4.05ms .4%
668 ROVE_EDA WAIT db file sequential read 17026 3.41ms .3%
668 ROVE_EDA WAIT db file sequential read 6736 1.35ms .1%
668 ROVE_EDA WAIT db file sequential read 33028 5.5ms .6%
764 (LGWR) WAIT log file parallel write 2236748 447.35ms 44.7%
764 (LGWR) WAIT log file parallel write 2150825 430.17ms 43.0%
764 (LGWR) WAIT log file parallel write 2139532 427.91ms 42.8%
764 (LGWR) WAIT log file parallel write 2119086 423.82ms 42.4%
764 (LGWR) WAIT log file parallel write 2134938 355.82ms 35.6%
764 (LGWR) WAIT log file parallel write 2083649 347.27ms 34.7%
764 (LGWR) WAIT log file parallel write 2034998 339.17ms 33.9%
764 (LGWR) WAIT log file parallel write 1996050 332.68ms 33.3%
764 (LGWR) WAIT log file parallel write 1797057 299.51ms 30.0%
764 (LGWR) WAIT log file parallel write 555403 111.08ms 11.1%
764 (LGWR) WAIT log file parallel write 277875 46.31ms 4.6%
764 (LGWR) WAIT log file parallel write 2067591 344.6ms 34.5%
Where SID=668 is the session we've been looking for... OK, we've got to get back to monitoring the disk array and the corresponding network components.
tOPsEEK -
SQL SERVER BULK FETCH AND INSERT/UPDATE?
Hi All,
I am currently working with C and SQL Server 2012. My requirement is to bulk fetch records and insert/update them in another table with some business logic.
How do I do this?
Thanks in Advance.
Regards
Yogesh.B
> Is there a possibility that I can do a bulk fetch and place it in an array, even inside a stored procedure?
You can use temporary tables or table variables, and have them indexed as well.
> After I have processed my records, tell me a way that I will NOT go RECORD by RECORD, even inside a stored procedure?
As I said earlier, you can perform UPDATEs on these temporary tables or table variables and finally INSERT/UPDATE your base table.
> Arrays are used just to minimize the traffic between the server and the program area. They are used for efficient processing.
In your case you would first have to populate the array (using some of your queries from the server), which means you will first load the array, do some updates, and then send them back to the server - hence more network engagement.
So I just gave you some thoughts I feel could be useful for your implementation. As we say, there are many ways; pick the one that works well for you in the long run with good scalability.
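The set-based point above is worth spelling out: stage the whole batch, apply the business logic in one pass, and make one call, instead of one server round trip per record. A rough Python sketch of the cost difference (purely illustrative, not SQL Server code):

```python
# Sketch: record-by-record round trips vs. one set-based statement.
rows = [{"id": i, "amount": 100 + i} for i in range(5)]
calls = {"rbar": 0, "set_based": 0}

# Record-by-record ("row by agonizing row"): one server call per row.
for r in rows:
    calls["rbar"] += 1  # each row costs its own round trip

# Set-based: transform the staged set in one pass, then issue a single
# INSERT...SELECT-style statement over it.
staged = [{**r, "amount": r["amount"] * 1.1} for r in rows]
calls["set_based"] += 1  # one statement covers the whole staged set

print(calls)  # {'rbar': 5, 'set_based': 1}
```

With 5 rows the difference is trivial; with millions, the per-row round trips dominate, which is why the temp-table-then-single-statement approach wins.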
Good Luck! Please Mark This As Answer if it solved your issue. Please Vote This As Helpful if it helps to solve your issue -
Query on Correct Toplink Usage for insert/update
Hi,
I have a JSF-based web application wherein my backing bean has the TopLink-generated entities as managed properties, i.e. the TopLink entity is populated by the JSF value binding.
Now when I want to insert/update this entity, what approach should be followed (keeping in mind that the entities are detached entities)?
I am using TopLink ORM v10.1.3. I was thinking that I would have to call mergeClone for the update and registerObject for the insert. But can I use mergeClone for both insert and update?
Also please mention the performance implications of the suggested option.
Regards,
Ani -
Hi,
I am trying to figure out how to fix my problem
Error: Could not be opened. Operating system error code 5 (Access is denied.)
Process Description:
Target Database Server Reside on different Server in the Network
SSIS Package runs from a Remote Server
SSIS Package use a ForEachLoop Container to loop into a directory to do Bulk Insert
SSIS Package use variables to specified the share location of the files using UNC like this
\\server\files
The database service account under which the database is running has full permission on the shared drive where the files reside.
The Execution Results tab shows the prepared SQL statement for the BULK INSERT, and I can run the exact same bulk insert in SSMS without errors, both from the database server and from the server where the SSIS package is executed.
I am at a dead end and I don't want to rewrite the SSIS package to use a Data Flow Task, because it is not flexible to update when the metadata of the table changes.
Below post it has almost the same situation:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/8de13e74-709a-43a5-8be2-034b764ca44f/problem-with-bulk-insert-task-in-foreach-loop?forum=sqlintegrationservices
Interesting how I fixed the issue: adding the Application Name into the SQL OLAP connection string fixed it. I am not sure why SQL Server wasn't able to open the file remotely without this.
-
Error while running bulk insert in SSIS package
Hi:
I have an error when I am running bulk insert in SSIS package.
I have implemented an SSIS package to update master data directly from R/3. R/3 gives the file in a specified format; I take this, insert all the records into a temporary table, and then update the mbr table and process the dimension.
This works perfectly well in our development system, where both our app server and SQL server are on the same box. But in QAS the 2 servers are separate, and when I try to run the SSIS package I get the below error.
We have tested all connections and are able to access the path and file from both the app server and the SQL server using the shared folder. Our Basis team says that it is a problem with the Bulk Insert task and nothing to do with any authorization.
Has anyone experienced this sort of problem in a multi-server environment? Is there another way to load all data from a file into a bespoke table without using bulk insert?
Thanks,
Subramania
Error----
SSIS package "Package.dtsx" starting.
Error: 0xC002F304 at Insert Data Into Staging Table (Account), Bulk Insert Task: An error occurred with the following error message: "Cannot bulk load because the file "\\msapbpcapq01\dim\entity.csv" could not be opened. Operating system error code 5 (Access is denied.).".
Task failed: Insert Data Into Staging Table (Account)
SSIS package "Package.dtsx" finished: Success.
The program '[2496] Package.dtsx: DTS' has exited with code 0 (0x0).
Hi Subramania
From your error:
Error: 0xC002F304 at Insert Data Into Staging Table (Account), Bulk Insert Task: An error occurred with the following error message: "Cannot bulk load because the file "\\msapbpcapq01\dim\entity.csv" could not be opened. Operating system error code 5 (Access is denied.).".
Let's say server A is where the file entity.csv is located.
Please check the Event Viewer -> Security log of server A at the time the SSIS package ran; there should be an entry with a logon failure, showing which user was used to access the shared path.
If your two servers are not in a domain, create the user on server A with the same name and password and grant it read access to the shared folder.
The other workaround is to grant read access to Everyone on the shared folder.
Halomoan
Edited by: Halomoan Zhou on Oct 6, 2008 4:23 AM -
Bulk inserts on Solaris slow as compared to windows
Hi Experts,
Looking for tips on troubleshooting bulk inserts on Solaris. I have observed that the same bulk inserts are quite fast on Windows compared to Solaris. Are there known issues on Solaris?
This is the statement:
I have a 'merge...insert...' query which has been executing for a long time, more than 12 hours now:
merge into A DEST using (select * from B SRC) SRC on (SRC.some_ID = DEST.some_ID) when matched then update ... when not matched then insert (...) values (...)
Table A has 600K rows with a unique identifier column some_ID; table B has 500K rows with the same some_ID column. The 'merge...insert' checks whether the some_ID exists: if yes, the update fires; when not matched, the insert fires. In either case it takes a long time to execute.
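For clarity, the MERGE above is an upsert: rows from B that match A on some_ID update the existing row, and the rest are inserted. The equivalent logic, sketched in Python with the destination keyed by some_ID:

```python
# Upsert sketch: DEST keyed by some_ID, SRC rows merged in.
dest = {1: {"some_ID": 1, "val": "old"}, 2: {"some_ID": 2, "val": "keep"}}
src = [{"some_ID": 1, "val": "new"}, {"some_ID": 3, "val": "fresh"}]

for row in src:
    key = row["some_ID"]
    if key in dest:            # WHEN MATCHED THEN UPDATE
        dest[key].update(row)
    else:                      # WHEN NOT MATCHED THEN INSERT
        dest[key] = dict(row)

print(sorted(dest))        # [1, 2, 3]
print(dest[1]["val"])      # new
```

The dictionary lookup here is what the index on some_ID should be doing in the database; if the join on some_ID has no usable index on either side, each of the 500K source rows can trigger a scan, which alone would explain a 12-hour runtime.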
Environment:
The version of the database is 10g Standard 10.2.0.3.0 - 64bit Production
OS: Solaris 10, SPARC-Enterprise-T5120
These are the parameters relevant to the optimizer:
SQL>
SQL> show parameter sga_target
NAME TYPE VALUE
sga_target big integer 4G
SQL>
SQL> show parameter sga_target
NAME TYPE VALUE
sga_target big integer 4G
SQL>
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL>
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL>
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL>
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL>
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 07-12-2005 07:13
SYSSTATS_INFO DSTOP 07-12-2005 07:13
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 452.727273
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Following is the error messages being pushed into oracle alert log file:
Thu Dec 10 01:41:13 2009
Thread 1 advanced to log sequence 1991
Current log# 1 seq# 1991 mem# 0: /oracle/oradata/orainstance/redo01.log
Thu Dec 10 04:51:01 2009
Thread 1 advanced to log sequence 1992
Current log# 2 seq# 1992 mem# 0: /oracle/oradata/orainstance/redo02.log
Please provide some tips to troubleshoot the actual issue. Any pointers on db_block_size, SGA, or PGA settings that could be behind this?
Regards,
neuron
SID, SEQ#, EVENT, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_TIME, SECONDS_IN_WAIT, STATE
125 24235 'db file sequential read' 1740759767 8 -1 58608 'WAITED SHORT TIME'
Regarding the disk, I am not sure what needs to be checked; however, from the output of iostat it does not seem busy. Check the last three rows: the %b column is negligible:
tty cpu
tin tout us sy wt id
0 320 3 0 0 97
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 ramdisk1
0.0 2.5 0.0 18.0 0.0 0.0 0.0 8.3 0 1 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0 -
BULK INSERT into View w/ Instead Of Trigger - DML ERROR LOGGING Issue
Oracle 10.2.0.4
I cannot figure out why I cannot get bulk insert errors to aggregate and allow the insert to continue when bulk inserting into a view with an INSTEAD OF trigger. Whether I use the LOG ERRORS clause or SQL%BULK_EXCEPTIONS, the insert works until it hits the first exception and then exits.
Here's what I'm doing:
1. I'm bulk inserting into a view with an INSTEAD OF trigger on it that performs the actual updating on the underlying table. This table is a child table with a foreign key constraint to a reference table containing the primary key. When the INSTEAD OF trigger attempts to insert a record into the child table, I get the following exception: 5:37:55 ORA-02291: integrity constraint (FK_TEST_TABLE) violated - parent key not found, which is expected; but the error should be logged in the error table and the rest of the inserts should complete. Instead the bulk insert exits.
2. If I change this to bulk insert into the underlying table directly, it works, all errors get put into the error logging table and the insert completes all non-exception records.
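That is the behavior LOG ERRORS is supposed to give: failed rows go to the error table while the remaining rows continue. A small Python sketch of the collect-and-continue pattern, with a ValueError standing in for ORA-02291 (all names here are made up for illustration):

```python
# Sketch: "LOG ERRORS" style processing -- record failures, keep going.
def insert_all(rows, insert_fn):
    errors = []
    for r in rows:
        try:
            insert_fn(r)
        except ValueError as e:            # stand-in for ORA-02291
            errors.append((r, str(e)))     # row goes to the error table
    return errors

parent_keys = {1, 2}                       # rows in the reference table

def insert_child(row):
    # Simulated foreign-key check on the child table.
    if row["fk"] not in parent_keys:
        raise ValueError("parent key not found")

rows = [{"fk": 1}, {"fk": 99}, {"fk": 2}]
errs = insert_all(rows, insert_child)
print(len(errs))  # 1 -- the other two rows still went through
```

The fail-fast behavior you are seeing through the view is the opposite: the first exception aborts the whole FORALL, which matches the known restriction that DML error logging does not apply once an INSTEAD OF trigger performs the real insert.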
Here's the "test" procedure I created to test my scenario:
View: V_TEST_TABLE
Underlying Table: TEST_TABLE
PROCEDURE BulkTest
IS
  TYPE remDataType IS TABLE OF v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
  varRemData remDataType;
BEGIN
  SELECT /*+ DRIVING_SITE(r) */ *
    BULK COLLECT INTO varRemData
    FROM TEST_TABLE@REMOTE_LINK
   WHERE effectiveday < to_date('06/16/2012 04', 'mm/dd/yyyy hh24')
     AND terminationday > to_date('06/14/2012 04', 'mm/dd/yyyy hh24');
  BEGIN
    FORALL idx IN varRemData.FIRST .. varRemData.LAST
      INSERT INTO v_TEST_TABLE VALUES varRemData(idx)
        LOG ERRORS INTO dbcompare.ERR$_TEST_TABLE ('INSERT') REJECT LIMIT UNLIMITED;
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.put_line('ErrorCode: ' || SQLCODE);
  END;
  COMMIT;
END;
I've reviewed Oracle's documentation on both DML logging tools and neither has any restrictions (at least that I can see) that would prevent this from working correctly.
Any help would be appreciated....
Thanks,
Steve
Thanks - obviously this is my first post; I'm desperate to figure out why this won't work...
The code I sent is only a test proc to try and troubleshoot the issue; the exception handler with the debug statement is only there to capture the insert failing instead of aggregating the errors, and won't be in the real proc.
Thanks,
Steve -
SQL Server 2008 - RS - Loop of multiple Bulk Inserts
Hi,
I want to import multiple flat files into a table on SQL Server 2008 R2. However, I don't have access to Integration Services to use a foreach loop, so I'm doing the process using T-SQL. At the moment I'm manually coding which file to load the data from. My code is like this:
CREATE TABLE #temporaryTable
(
[column1] [varchar](100) NOT NULL,
[column2] [varchar](100) NOT NULL
)
GO
BULK INSERT #temporaryTable
FROM 'C:\Teste\testeFile01.txt'
WITH
(
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n',
FIRSTROW = 1
)
GO
BULK INSERT #temporaryTable
FROM 'C:\Teste\testeFile02.txt'
WITH
(
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n',
FIRSTROW = 1
)
GO
-------------------------------------------------
INSERT INTO dbo.TESTE (Col_1, Col_2)
SELECT RTRIM(LTRIM([column1])), RTRIM(LTRIM([column2])) FROM #temporaryTable
IF EXISTS(SELECT * FROM #temporaryTable) DROP TABLE #temporaryTable
The problem is that I have 20 flat files to insert... Is there a loop solution in T-SQL to insert all the flat files into the same table?
Thanks!
Here is a working sample of a PowerShell script I adapted from the internet (I don't have the source handy now).
Import-Module -Name 'SQLPS' -DisableNameChecking
$workdir = "C:\temp\test\"
$svrname = "MC\MySQL2014"
Try
{
    # Change default timeout from 600 seconds to unlimited
    $svr = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $svrname
    $svr.ConnectionContext.StatementTimeout = 0
    $table = "test1.dbo.myRegions"
    # Remove the filename column from the target table if it exists
    $q1 = @"
Use test1;
IF COL_LENGTH('dbo.myRegions','filename') IS NOT NULL
BEGIN
    ALTER TABLE test1.dbo.myRegions DROP COLUMN filename;
END
"@
    Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -Query $q1
    $dt = (Get-Date).ToString("yyyyMMdd")
    $formatfilename = "$($table)_$($dt).xml"
    $destination_formatfilename = "$($workdir)$($formatfilename)"
    # Generate a bcp format file describing the target table
    $cmdformatfile = "bcp $table format nul -c -x -f $($destination_formatfilename) -T -t\t -S $($svrname)"
    Invoke-Expression $cmdformatfile
    # Delay 1 second
    Start-Sleep -s 1
    # Add the filename column back to the target table
    $q2 = @"
Alter table test1.dbo.myRegions Add filename varchar(500) Null;
"@
    Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -Query $q2
    $files = Get-ChildItem $workdir
    $items = $files | Where-Object {$_.Extension -eq ".txt"}
    for ($i = 0; $i -lt $items.Count; $i++) {
        $strFileName = $items[$i].Name
        $strFileNameNoExtension = $items[$i].BaseName
        $query = @"
BULK INSERT test1.dbo.myRegions FROM '$($workdir)$($strFileName)' WITH (FIELDTERMINATOR = '\t', FIRSTROW = 2, FORMATFILE = '$($destination_formatfilename)');
"@
        Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -Query $query -QueryTimeout 65534
        # Delay 10 seconds
        Start-Sleep -s 10
        # Stamp the rows just loaded with their source filename
        Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -QueryTimeout 65534 -Query "UPDATE test1.dbo.myRegions SET filename = '$($strFileName)' WHERE filename IS NULL;"
        # Move the uploaded file to the archive folder
        If ((Test-Path "$($workdir)$($strFileName)") -eq $True) { Move-Item -Path "$($workdir)$($strFileName)" -Destination "$($workdir)Processed\$($strFileNameNoExtension)_$($dt).txt" }
    }
}
Catch [Exception]
{
    Write-Host "--$strFileName " $_.Exception.Message
} -
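For readers without SQL Server handy, the same loop-over-files idea can be sketched in Python with the standard-library sqlite3 and csv modules. The table name, columns, and sample files below are all invented for illustration, not taken from the poster's schema:

```python
import csv
import sqlite3
import tempfile
from pathlib import Path

def load_flat_files(conn, table, workdir, delimiter=";"):
    """Loop over every .txt file in workdir and bulk-insert its rows,
    stamping each row with its source filename (as the script above
    does with the extra 'filename' column)."""
    cur = conn.cursor()
    for path in sorted(Path(workdir).glob("*.txt")):
        with open(path, newline="") as f:
            rows = [(r[0], r[1], path.name)
                    for r in csv.reader(f, delimiter=delimiter)]
        # executemany sends the whole file's rows as one batch
        cur.executemany(
            f"INSERT INTO {table} (col1, col2, filename) VALUES (?, ?, ?)",
            rows)
    conn.commit()

# Usage: two small ';'-delimited sample files, loaded in one call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE teste (col1 TEXT, col2 TEXT, filename TEXT)")
tmp = tempfile.mkdtemp()
Path(tmp, "testeFile01.txt").write_text("a;b\nc;d\n")
Path(tmp, "testeFile02.txt").write_text("e;f\n")
load_flat_files(conn, "teste", tmp)
print(conn.execute("SELECT COUNT(*) FROM teste").fetchone()[0])  # 3
```

The filename stamp makes each loaded row traceable to its source file, the same design choice the PowerShell answer makes with its extra column.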
First Row Record is not inserted from CSV file while bulk insert in sql server
Hi Everyone,
I have a csv file that needs to be inserted into SQL Server. The format of the csv file will be like below.
1,Mr,"x,y",4
2,Mr,"a,b",5
3,Ms,"v,b",6
During the bulk insert it considers the 2nd column as two values (split at the comma) and makes two entries, so I used FieldTerminator.xml.
Now the fields are entered into the columns correctly. But the problem is that the first row of the csv file is not read into SQL Server. When I removed the terminator, I got all the records, but I must use the above format file, and when I use it I do not get the first row record.
Please suggest me a solution.
Thanks,
Selvam
Hi,
I have a csv file (comma-delimited) like this which is to be inserted into SQL Server. The format of the file, when opened in Notepad, is like below:
Id,FirstName,LastName,FullName,Gender
1,xx,yy,"xx,yy",M
2,zz,cc,"zz,cc",F
3,aa,vv,"aa,vv",F
Below is the bulk insert query used to insert the above records:
EXEC('BULK INSERT EmployeeData FROM ''' + @FilePath + ''' WITH
(FORMATFILE = ''d:\FieldTerminator.xml'',
ROWTERMINATOR = ''\n'',
FIRSTROW = 2)')
Here I have used a format file for the "FullName" field, which has a comma within it. The format file is:
The problem is that it skips the first record (1,xx,yy,"xx,yy",M) when I use the format file. When I remove the format file from the query, it takes all the records, but the FullName field breaks because of the comma within the field, so I must use the format file to handle this. Please suggest why the first record is always skipped when I use the above format file.
If I set FIRSTROW=1 in the bulk insert, it shows the error "String or binary data would be truncated. The statement has been terminated." I have checked the datatype lengths.
Please update me with the solution.
Regards,
Selvam. M -
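As a side note, the quoted-comma problem that forces the format file here is exactly what a real CSV parser handles natively. A small Python sketch (the table name is invented) shows the same sample rows loading with the header skipped explicitly, so no data row is lost:

```python
import csv
import io
import sqlite3

data = """Id,FirstName,LastName,FullName,Gender
1,xx,yy,"xx,yy",M
2,zz,cc,"zz,cc",F
3,aa,vv,"aa,vv",F
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmployeeData "
             "(Id INTEGER, FirstName TEXT, LastName TEXT, "
             "FullName TEXT, Gender TEXT)")

reader = csv.reader(io.StringIO(data))
next(reader)  # skip the header row explicitly; the csv module counts
              # parsed records, so no data row is silently dropped
rows = list(reader)
conn.executemany("INSERT INTO EmployeeData VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM EmployeeData").fetchone()[0])  # 3
print(conn.execute(
    "SELECT FullName FROM EmployeeData WHERE Id = 1").fetchone()[0])    # xx,yy
```

The quoted "xx,yy" arrives as a single field, so no terminator tricks are needed.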
ROWCOUNT in BULK Insert Statement
Hi,
I'm using a bulk INSERT statement in a PL/SQL procedure, and after execution of the SQL statement I need to capture the row count.
The same applies to UPDATE.
Example code:
INSERT INTO TBL1
(SELECT VAL1, VAL2 FROM TBL2);
The number of rows inserted needs to be retrieved after execution of this SQL.
Please let me know if there is a way to do it.
Thanks.
SQL> create table emp as select * from scott.emp where 1 = 0 ;
Table created.
SQL> set serveroutput on
SQL> begin
2 insert into emp select * from scott.emp ;
3 dbms_output.put_line('Count='||SQL%RowCount) ;
4 end ;
5 /
Count=14
PL/SQL procedure successfully completed.
SQL> -
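The SQL%ROWCOUNT idiom shown above has a close analog in most database APIs: the cursor exposes the affected-row count immediately after the statement. A minimal Python sqlite3 sketch, with table names invented to mirror the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scott_emp (empno INTEGER, ename TEXT)")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
# 14 rows, like scott.emp in the SQL*Plus session above
conn.executemany("INSERT INTO scott_emp VALUES (?, ?)",
                 [(i, f"E{i}") for i in range(1, 15)])

cur = conn.cursor()
cur.execute("INSERT INTO emp SELECT * FROM scott_emp")
# cursor.rowcount plays the role of SQL%ROWCOUNT, read right after
# the INSERT ... SELECT executes
print(f"Count={cur.rowcount}")
```

As with SQL%ROWCOUNT, the value is only meaningful immediately after the DML statement; a later query on the same cursor overwrites it.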
Performance: Bulk Insert Oracle 10g
Following Situation: We have a VISUAL BASIC 6 Application (I know ... VB6 ... ), an XML-File with data and an Oracle 10g database. The XML-File has to be imported in the database.
Up to now the application (via ADO) analyses the XML file, creates INSERT and UPDATE statements, and sends them to the DB. The statements are handled within one transaction, and the application sends each INSERT and UPDATE separately to the database.
But this is a performance disaster, as expected :-) The import takes several hours.
Now my task is to increase the performance, but how?
I tried several things, but without real success, e.g.:
I performed some tests with SQL*Loader. The insert is really fast, but I can't do an update, so I had to delete the existing data first; and I can't run the two steps in one transaction because of SQL*Loader.
I tried to write a stored procedure which accepts an ADO.Recordset as an input parameter and then creates the INSERT and UPDATE statements within the DB to reduce network traffic, but I didn't find a way to pass an ADO.Recordset as an input parameter to a stored procedure.
Does anyone have an idea how I can import the XML file quickly into the existing DB (ideally replacing existing records during the import) within one transaction, without changing the structure of the DB? (Oracle packages? A C++ interface integrated into VB6?) Is there a way to import the XML file directly into the DB?
Thanks in advance for any idea :-))
I tried to write a stored procedure which accepts an ADO.Recordset as input param ... but I didn't find a way to handle an ADO.Recordset as an input parameter for a stored procedure.
Use SYS_REFCURSOR as the parameter type. Bulk collect it into a PL/SQL collection. Use FORALL to soup up the INSERT and UPDATE statements.
Cheers, APC
blog: http://radiofreetooting.blogspot.com -
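APC's advice boils down to: parse the XML once, then push the rows to the database in bulk inside a single transaction, replacing existing records as you go. A hedged sketch of that shape in Python; the XML layout, table, and upsert choice are illustrative assumptions, not the poster's schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = """<rows>
  <row id="1" name="alpha"/>
  <row id="2" name="beta"/>
</rows>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'old-alpha')")  # pre-existing row

# Parse once, build all rows up front
rows = [(int(r.get("id")), r.get("name")) for r in ET.fromstring(xml_doc)]

with conn:  # one transaction for the whole import
    # INSERT OR REPLACE is the 'replace existing records' step, avoiding
    # the separate delete pass the SQL*Loader route would need
    conn.executemany(
        "INSERT OR REPLACE INTO items (id, name) VALUES (?, ?)", rows)

print(conn.execute("SELECT name FROM items WHERE id = 1").fetchone()[0])  # alpha
print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])           # 2
```

Batching the statements and committing once removes the per-row round trip that made the VB6 version so slow.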
EF Inserts/Updates: Ridiculously Slow
I cracked open the latest version of EF the other day. I had a project where I wanted to insert a lot of data in one hit. The number of records I wanted to insert was about 2,000,000. There are about 8 decimal columns in the table. So, I wrote the code in
EF. I wrote the code with an "Add" and then a "SaveChanges". What was taking an hour and a half to write to XML suddenly jumped to an estimated time of 48 hours. (Note: most of the 1 1/2 hours is downloading data from the internet.)
So I put a time trace in to see where each action was spending its time. The "Add" (putting the record of the type I want to insert into the collection) was taking about 300 milliseconds and the "SaveChanges" was taking about 500 milliseconds.
This is simply absurd.
When I changed the code to avoid using EF - i.e. executing a straight INSERT statement against the SQL server database, the time went back down to about 1 1/2 hours. I found that the insert call was only taking about 2-3 milliseconds.
I thought perhaps EF was doing too much at the database level. So, I traced SQL Server. To my surprise, I found that EF was only executing insert statements in the same way I was executing the insert statement. So, why so slow? What's wrong with EF and when
will it be fixed? Is it even possible to use EF?
Several things are important here. First of all: what do you want to do, using which tool,
and how do you implement it.
Concerning the what and which-tool:
several people have already stated that an ORM is not the tool of choice when a bulk insert is what you actually want to do. EF has tons of great features but it’s simply not a solution for every problem.
Then there’s the question of
how you use EF. Two important things that come to mind here are the fact that by default EF tracks all entities that are attached to it, and that a call to SaveChanges will result in the execution of one insert/update statement for each of the changed
entities.
Let's assume that EF was applied in this scenario in the most simple (and naive) way imaginable. We create a context, we add 2M entities, we call SaveChanges.
Then we can take a very long break… Now let’s take a look at what happened here. 2.000.000 entities were attached to the context and it will be tracking all of them for changes – that’s a lot to handle! Once all entities are attached, the SaveChanges method
is called. At this point all modified/new entities are persisted to the database. But EF will actually do this using 2M single insert statements. Ouch.
Now there are two issues here. One is the fact that EF executes only one update/insert statement at a time. You can deal with this using EF extensions that
actually implement the bulk insert you’re looking for. The second issue is the number of entities attached to the context. You can significantly increase performance by lowering the number of attached entities. You could go about this by batching: for the
first set of 500 entities create a new context, add and save the entities; then create a new context for the next 500, and so on. -
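The batching advice in the last paragraph can be sketched generically: commit in chunks of 500 so no single unit of work ever tracks the full 2M rows. A Python illustration of the pattern, with sqlite3 standing in for the real store and a smaller row count for brevity:

```python
import sqlite3

def chunks(seq, size):
    """Yield consecutive slices of seq, each at most `size` long."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (v REAL)")

data = [(float(i),) for i in range(2000)]
# One commit (one 'SaveChanges') per batch of 500, so the unit of work
# never tracks more than 500 rows at a time
for batch in chunks(data, 500):
    with conn:
        conn.executemany("INSERT INTO measurements (v) VALUES (?)", batch)

print(conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0])  # 2000
```

In EF the equivalent move is disposing the context (or at least clearing its change tracker) between batches, which is what keeps Add and SaveChanges from degrading as the attached set grows.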
Bulk Insert, Domain Based Attributes
Hi
I have a product model similar to the ProductSample model that ships with MDS, i.e. Product Category and Product Sub Category entities. Both of these entities have been created to automatically generate a code, as a code does not exist in the business, only a description.
The product entity has both the category and sub category attributes as domain-based attributes which results in storing the code physically and also displaying the description.
In a bulk insert scenario where the business has a number of new products to add (due to a new range), I was suggesting that they use the Excel Add-in. This would allow them to cut and paste the data into the Product entity. However, as the category and sub category are domain-based attributes, they will need to know the ID; they will only know the name.
From an MDS functionality point of view, is it possible to get it to look up the code from the name?
My current understanding is that achieving this would mean having both code and name attributes on the Product entity for Category and Sub Category. The name attributes would be free text and the code attribute would be populated by a business rule.
The downside to this approach is that you lose the nice dropdown pick list, and the user no longer knows what a valid entry is because they can no longer select one.
How is this usually implemented or handled in MDS?
Cheers
Kevin
Hi Kevin,
Another approach (if the one above does NOT suit you).
Once again, as Reza suggested and I agree with him totally, this is a job for SSIS.
For what it's worth...
Whilst I understand what you are trying to do, may I suggest another approach. What concerns me is users adding data willy-nilly and that YOU have no control as to what is going on. Secondly, what is more disconcerting is the thought of losing relational
integrity, especially if you are using derived hierarchies within MDS.
I would handle this in a different manner, using SSIS and a filesystem watcher.
Let the users submit their spreadsheets (with their updates) to a common directory on the server. Implement a .NET FileSystemWatcher (this takes 10 minutes for a newbie). This file system watcher will launch a DOS batch file on the arrival of a spreadsheet in the given directory. The DOS batch file runs dtexec to start an SSIS package. This package, together with SQL Server procedures, will ensure that the correct codes are obtained and the attribute data is correctly inserted into the correct entities (with the correct relationships).
While this sounds vague, I do it all the time. I am more than prepared to help you get going, should you wish any assistance. I KNOW that this is not the answer you are looking for, HOWEVER it is perhaps the most effective.
sincerest regards
Steve Simon SQL Server MVP
[email protected] -
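The file-system-watcher idea Steve describes can be prototyped in a few lines. The sketch below is a simple polling stand-in, not the .NET FileSystemWatcher itself, and the directory layout and handler are hypothetical:

```python
import tempfile
import time
from pathlib import Path

def watch_for_spreadsheets(workdir, handler, polls=3, interval=0.1):
    """Minimal polling stand-in for a FileSystemWatcher: call
    handler(path) once for every new .xlsx file that appears in
    workdir during the watch window."""
    seen = set()
    for _ in range(polls):
        for path in Path(workdir).glob("*.xlsx"):
            if path not in seen:
                seen.add(path)
                handler(path)  # e.g. launch the SSIS/loader job here
        time.sleep(interval)

# Usage: drop a file into a temp dir and confirm the handler fires once.
tmp = tempfile.mkdtemp()
processed = []
Path(tmp, "updates.xlsx").write_text("fake spreadsheet")
watch_for_spreadsheets(tmp, processed.append)
print(len(processed))  # 1
```

A production watcher would use OS file-change notifications rather than polling, but the shape (detect arrival, hand the path to a loader job) is the same.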
Batch Updates with Toplink/JPA
Hi All,
I am new to JPA/TopLink, and I wonder whether there is a way to do batch updates/insertions using JPA. If so, please tell me how.
My requirement is this: I need to fetch n records from the database, compare them with the records from a file, and insert/update/delete as required.
I don't want to do it record by record but as a batch. Is there a way to do batch updates/inserts/deletes?
Any suggestion would be appreciated.
Thank you.
Hello,
I'm not sure how you are going to do the comparison as a batch rather than record by record. A JPA query will give you objects back which you can use to compare and make changes to as necessary. When you commit, the JPA provider will commit those changes to the database for you. TopLink/EclipseLink has options to commit those changes efficiently, even combining them into batches where possible. See http://wiki.eclipse.org/Optimizing_the_EclipseLink_Application_(ELUG)#How_to_Use_Batch_Writing_for_Optimization for details on using batch writing in EclipseLink.
Best Regards,
Chris
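Independently of the ORM, the fetch-compare-apply requirement from the question can be expressed as three batched statement sets in one transaction. A Python sqlite3 sketch, with table and sample data invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

# Desired state, as read from the file: 2 changed, 3 gone, 4 new
file_rows = {1: "a", 2: "B", 4: "d"}

# Fetch current state once, then diff in memory
db_rows = dict(conn.execute("SELECT id, val FROM t"))
to_insert = [(k, v) for k, v in file_rows.items() if k not in db_rows]
to_update = [(v, k) for k, v in file_rows.items()
             if k in db_rows and db_rows[k] != v]
to_delete = [(k,) for k in db_rows if k not in file_rows]

with conn:  # apply all three batches in one transaction
    conn.executemany("INSERT INTO t VALUES (?, ?)", to_insert)
    conn.executemany("UPDATE t SET val = ? WHERE id = ?", to_update)
    conn.executemany("DELETE FROM t WHERE id = ?", to_delete)

print(sorted(conn.execute("SELECT id, val FROM t")))
# [(1, 'a'), (2, 'B'), (4, 'd')]
```

This mirrors what EclipseLink's batch writing does under the covers: the per-row statements still exist, but they travel to the database grouped rather than one round trip each.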