EF Inserts/Updates: Ridiculously Slow
I cracked open the latest version of EF the other day. I had a project where I wanted to insert a lot of data in one hit: about 2,000,000 records into a table with about 8 decimal columns. So I wrote the code in EF, with an "Add" and then a "SaveChanges". What had been taking an hour and a half writing to XML suddenly jumped to an estimated 48 hours. (Note: most of the 1 1/2 hours is spent downloading data from the internet.)
So I put a time trace in to see where each action was spending its time. The "Add" (putting the record I want to insert into the collection) was taking about 300 milliseconds, and the "SaveChanges" was taking about 500 milliseconds.
This is simply absurd.
When I changed the code to avoid EF, i.e. executing a straight INSERT statement against the SQL Server database, the time went back down to about 1 1/2 hours. I found that each insert call was only taking about 2-3 milliseconds.
I thought perhaps EF was doing too much at the database level, so I traced SQL Server. To my surprise, I found that EF was executing insert statements in the same way I was. So why so slow? What's wrong with EF, and when will it be fixed? Is it even possible to use EF?
Several things are important here: what you want to do, which tool you use, and how you implement it.
Concerning the what and the which-tool: several people have already stated that an ORM is not the tool of choice when a bulk insert is what you actually want to do. EF has tons of great features, but it’s simply not a solution for every problem.
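As a hedged illustration of that non-ORM route (all table, column, and connection-string values below are made-up placeholders, not from the original post), SQL Server's SqlBulkCopy streams rows to the server instead of issuing one INSERT per record:

```csharp
using System.Data;
using Microsoft.Data.SqlClient; // System.Data.SqlClient on older stacks

// Build an in-memory table whose columns match the target schema.
// "dbo.Quotes", its columns, and the connection string are hypothetical.
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Price", typeof(decimal));
for (int i = 0; i < 2_000_000; i++)
    table.Rows.Add(i, 0m);

using var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true;");
conn.Open();
using var bulk = new SqlBulkCopy(conn)
{
    DestinationTableName = "dbo.Quotes",
    BatchSize = 10_000 // commit in chunks rather than one giant batch
};
bulk.WriteToServer(table);
```

On typical hardware a bulk path like this (or a tool such as bcp/SSIS) loads millions of narrow rows in minutes, which is why it is usually preferred over an ORM for one-off mass loads.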
Then there’s the question of how you use EF. Two important things come to mind here: by default EF tracks all entities that are attached to it, and a call to SaveChanges will result in the execution of one insert/update statement for each of the changed entities.
Let’s assume that EF was applied in this scenario in the simplest and most naive way imaginable: we create a context, we add 2,000,000 entities, we call SaveChanges.
Then we can take a very long break… Now let’s take a look at what happened here. 2,000,000 entities were attached to the context, and it will be tracking all of them for changes – that’s a lot to handle! Once all entities are attached, the SaveChanges method is called. At this point all modified/new entities are persisted to the database. But EF will actually do this using 2,000,000 individual insert statements. Ouch.
Now there are two issues here. One is the fact that EF executes only one insert/update statement at a time. You can deal with this using EF extensions that actually implement the bulk insert you’re looking for. The second issue is the number of entities attached to the context. You can significantly improve performance by lowering the number of attached entities. You could go about this by batching: for the first set of 500 entities create a new context, add and save the entities; then create a new context for the next 500, and so on.
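A minimal sketch of that batching pattern (MyContext and MyEntity are hypothetical stand-ins for your own model; turning off automatic change detection also helps, since by default each Add rescans every tracked entity):

```csharp
using System.Collections.Generic;
using System.Data.Entity; // EF6; in EF Core use Microsoft.EntityFrameworkCore
using System.Linq;

// MyContext and MyEntity are hypothetical placeholders for your model.
static void BatchedInsert(IEnumerable<MyEntity> entities, int batchSize = 500)
{
    foreach (var batch in entities.Chunk(batchSize)) // .NET 6+; page manually on older runtimes
    {
        // A fresh, short-lived context per batch keeps the change tracker small.
        using var context = new MyContext();
        context.Configuration.AutoDetectChangesEnabled = false; // EF6 name; ChangeTracker.AutoDetectChangesEnabled in EF Core
        context.Set<MyEntity>().AddRange(batch);
        context.SaveChanges();
    } // disposing the context releases all tracked entities
}
```

The exact batch size worth using varies; 100-1000 per context is a common starting point, and each SaveChanges still issues per-row statements unless a bulk-insert extension is added on top.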
Similar Messages
-
After installing Yosemite on my MacBook Retina, (including the 10.10.1 update), my wifi connection has basically disappeared. When it does work, it is ridiculously slow. Any tips?
Wi-FI connection drops
Wi-Fi Problems in OS X Yosemite
Wi-Fi Problems in OS X Yosemite (2)
Wireless Diagnostics
Also try turning off Bluetooth.
Troubleshooting Wi-Fi issues in OS X
Wireless Connection Problems - Fix
Wireless Connection Problems - Fix (2)
Wireless Connection Problems - Fix (3)
Wireless Connection Problems - Fix (4) -
Extremely slow inserts/updates reported after HW upgrade...
Hi guys,
I'll try to be as descriptive as I can. It's this project in which we have to move circa 6 mission critical (24x7) and mostly OLTP databases from MS Windows 2003 (DB on local disks) to HP-UX IA (CA metrocluster, HP XP 12000 disk array) - all ORA10gR2 10.2.0.4. And everything was perfect until we moved this XYZ database...
Almost immediately, users reported "considerable" performance degradation. According to the 3rd party application log they get almost 40 secs instead of the previously recorded 10.
We, I mean the Oracle and HP specialists, haven't noticed/recorded any significant peaks/bottlenecks (RAM, CPU, Disk I/O).
Feel free to check 3 AWR reports and the init.ora at [http://www.mediafire.com/?sharekey=0269c9bc606747b47f7ec40ada4772a6e04e75f6e8ebb871]
1_awrrpt_standard.txt - standard workload during 8 hours (peak hours are from 8-12AM)
2_awrrpt_2hrs_ca.txt - standard workload during 2 peak hours (8-10)
3_awrrpt_2hrs_noca.txt - standard workload during 2 peak hours (10-12) with CA disk mirroring disabled
Of course, I've checked the ADDM reports - and first, I'd like to ask why ADDM keeps on reporting the following (on all database instances on this
cluster node):
FINDING 1: 100% impact (310 seconds)
Significant virtual memory paging was detected on the host operating system.
RECOMMENDATION 1: Host Configuration, 100% benefit (310 seconds)
Is it just some kind of false alarm (like we used to get on MS Windows)? Both nodes are running with 32 GB of RAM,
with roughly more than 10 GB constantly free.
Second, as ADDM reported:
FINDING 2: 44% impact (135 seconds)
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
we've tried to split CA disk mirroring, using RAID 10 for redo log file disks etc. etc. No substantial performance gain was reported from users (though I've noticed some in AWR reports).
Despite confusing feedback from the app users, I'm nearly sure that our bottleneck is the redo log file disks. Why? Previously (old HW) we had 1-3 ms avg wait on log file sync and log file parallel write, and now (new HW, RAID5/RAID10, we've tested both) it's 8 ms or even more. We were able to get 2 ms only with CA (HP Metrocluster disk array mirroring) switched off.
And that brings up two new questions:
1. Does redo log group mirroring (2 on 2 separate disks vs. 1 on 1 disk) have any
significant impact on abovementioned wait events? I mean what performance gain
could I expect when I drop all "secondary" redo log members?
2. Why do we get almost identical response times when we run bulk insert/update tests (say
1000000 rows) against old and new DB/HW?
Thanks in advance,
tOPsEEK
Edited by: smutny on 1.11.2009 17:39
Edited by: smutny on 1.11.2009 17:46
Hi again,
so here's the actual AWR report (1 minute window while the most problematic operation took place). I think it's becoming clear we have to deal with slow redo log writes...
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
... 294736156 ... 1 10.2.0.4.0 NO ...
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1254 02-Nov-09 10:07:45 91 46.4
End Snap: 1255 02-Nov-09 10:08:47 91 46.4
Elapsed: 1.04 (mins)
DB Time: 0.51 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 576M 576M Std Block Size: 8K
Shared Pool Size: 912M 912M Log Buffer: 14,372K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 163,781.94 4,575.45
Logical reads: 1,677.15 46.85
Block changes: 1,276.32 35.66
Physical reads: 1.99 0.06
Physical writes: 1.16 0.03
User calls: 426.41 11.91
Parses: 20.74 0.58
Hard parses: 0.19 0.01
Sorts: 2.38 0.07
Logons: 0.00 0.00
Executes: 386.76 10.80
Transactions: 35.80
% Blocks changed per Read: 76.10 Recursive Call %: 31.51
Rollback per transaction %: 0.00 Rows per Sort: 369.98
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.88 In-memory Sort %: 100.00
Library Hit %: 99.90 Soft Parse %: 99.07
Execute to Parse %: 94.64 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 112.50 % Non-Parse CPU: 99.11
Shared Pool Statistics Begin End
Memory Usage %: 88.90 88.87
% SQL with executions>1: 98.74 99.39
% Memory for SQL w/exec>1: 95.35 97.75
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
log file parallel write 2,228 21 10 69.4 System I/O
log file sync 2,220 21 10 69.2 Commit
CPU time 20 65.5
SQL*Net break/reset to client 2,106 1 1 3.4 Applicatio
db file sequential read 131 0 4 1.5 User I/O
Time Model Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> Total time in database user-calls (DB Time): 30.9s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
DB CPU 20.2 65.5
sql execute elapsed time 9.1 29.4
PL/SQL execution elapsed time 0.3 1.0
parse time elapsed 0.2 .5
hard parse elapsed time 0.1 .5
repeated bind elapsed time 0.0 .0
DB time 30.9 N/A
background elapsed time 22.2 N/A
background cpu time 0.4 N/A
Wait Class DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
System I/O 3,213 .0 22 7 1.4
Commit 2,220 .0 21 10 1.0
Application 2,106 .0 1 1 0.9
User I/O 134 .0 0 4 0.1
Network 29,919 .0 0 0 13.4
Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
log file sync 2,220 .0 21 10 1.0
SQL*Net break/reset to clien 2,106 .0 1 1 0.9
db file sequential read 131 .0 0 4 0.1
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
SQL*Net more data to client 3,397 .0 0 0 1.5
SQL*Net message to client 26,509 .0 0 0 11.9
control file sequential read 897 .0 0 0 0.4
SQL*Net more data from clien 13 .0 0 0 0.0
direct path write 3 .0 0 0 0.0
SQL*Net message from client 26,510 .0 1,493 56 11.9
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
PL/SQL lock timer 10 100.0 49 4897 0.0
Background Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
control file sequential read 71 .0 0 0 0.0
rdbms ipc message 2,412 7.7 525 218 1.1
pmon timer 20 100.0 59 2929 0.0
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
Operating System Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total
AVG_BUSY_TIME 847
AVG_IDLE_TIME 5,362
AVG_IOWAIT_TIME 2,692
AVG_SYS_TIME 295
AVG_USER_TIME 549
BUSY_TIME 3,396
IDLE_TIME 21,457
IOWAIT_TIME 10,776
SYS_TIME 1,190
USER_TIME 2,206
LOAD 0
OS_CPU_WAIT_TIME 192,401,000
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 40,960
VM_OUT_BYTES 335,872
PHYSICAL_MEMORY_BYTES 34,328,276,992
NUM_CPUS 4
NUM_CPU_SOCKETS 4
Instance Activity Stats DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total per Second per Trans
CPU used by this session 984 15.8 0.4
CPU used when call started 344 5.5 0.2
CR blocks created 4 0.1 0.0
Cached Commit SCN referenced 208 3.3 0.1
Commit SCN cached 0 0.0 0.0
DB time 2,589 41.6 1.2
DBWR checkpoint buffers written 69 1.1 0.0
DBWR checkpoints 0 0.0 0.0
DBWR object drop buffers written 0 0.0 0.0
DBWR tablespace checkpoint buffe 0 0.0 0.0
DBWR thread checkpoint buffers w 0 0.0 0.0
DBWR transaction table writes 8 0.1 0.0
DBWR undo block writes 15 0.2 0.0
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 1,156 18.6 0.5
IMU Redo allocation size 996,048 16,017.2 447.5
IMU commits 2,100 33.8 0.9
IMU contention 0 0.0 0.0
IMU ktichg flush 0 0.0 0.0
IMU pool not allocated 0 0.0 0.0
IMU recursive-transaction flush 0 0.0 0.0
IMU undo allocation size 22,402,560 360,250.9 10,064.0
IMU- failed to get a private str 0 0.0 0.0
Misses for writing mapping 0 0.0 0.0
SQL*Net roundtrips to/from clien 26,480 425.8 11.9
active txn count during cleanout 34 0.6 0.0
application wait time 106 1.7 0.1
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 199 3.2 0.1
branch node splits 0 0.0 0.0
buffer is not pinned count 13,919 223.8 6.3
buffer is pinned count 19,483 313.3 8.8
bytes received via SQL*Net from 884,016 14,215.7 397.1
bytes sent via SQL*Net to client 9,602,642 154,418.1 4,313.9
calls to get snapshot scn: kcmgs 13,641 219.4 6.1
calls to kcmgas 3,029 48.7 1.4
calls to kcmgcs 56 0.9 0.0
change write time 8 0.1 0.0
cleanout - number of ktugct call 42 0.7 0.0
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 0 0.0 0.0
cluster key scan block gets 1,100 17.7 0.5
cluster key scans 1,077 17.3 0.5
commit batch/immediate performed 1 0.0 0.0
commit batch/immediate requested 1 0.0 0.0
commit cleanout failures: block 0 0.0 0.0
commit cleanout failures: buffer 0 0.0 0.0
commit cleanout failures: callba 4 0.1 0.0
commit cleanout failures: cannot 0 0.0 0.0
commit cleanouts 9,539 153.4 4.3
commit cleanouts successfully co 9,535 153.3 4.3
commit immediate performed 1 0.0 0.0
commit immediate requested 1 0.0 0.0
commit txn count during cleanout 26 0.4 0.0
concurrency wait time 0 0.0 0.0
consistent changes 264 4.3 0.1
consistent gets 48,659 782.5 21.9
consistent gets - examination 26,952 433.4 12.1
consistent gets direct 2 0.0 0.0
consistent gets from cache 48,657 782.4 21.9
cursor authentications 2 0.0 0.0
data blocks consistent reads - u 4 0.1 0.0
db block changes 79,369 1,276.3 35.7
db block gets 55,636 894.7 25.0
db block gets direct 3 0.1 0.0
db block gets from cache 55,633 894.6 25.0
deferred (CURRENT) block cleanou 4,768 76.7 2.1
dirty buffers inspected 0 0.0 0.0
enqueue conversions 15 0.2 0.0
enqueue releases 9,967 160.3 4.5
enqueue requests 9,967 160.3 4.5
enqueue timeouts 0 0.0 0.0
enqueue waits 0 0.0 0.0
execute count 24,051 386.8 10.8
failed probes on index block rec 0 0.0 0.0
frame signature mismatch 0 0.0 0.0
free buffer inspected 680 10.9 0.3
free buffer requested 1,297 20.9 0.6
heap block compress 11 0.2 0.0
hot buffers moved to head of LRU 1,797 28.9 0.8
immediate (CR) block cleanout ap 0 0.0 0.0
immediate (CURRENT) block cleano 2,274 36.6 1.0
index crx upgrade (positioned) 47 0.8 0.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 10,326 166.1 4.6
index scans kdiixs1 6,071 97.6 2.7
leaf node 90-10 splits 14 0.2 0.0
leaf node splits 18 0.3 0.0
lob reads 0 0.0 0.0
lob writes 198 3.2 0.1
lob writes unaligned 176 2.8 0.1
logons cumulative 0 0.0 0.0
messages received 2,272 36.5 1.0
messages sent 2,272 36.5 1.0
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 21,083 339.0 9.5
opened cursors cumulative 1,290 20.7 0.6
parse count (failures) 0 0.0 0.0
parse count (hard) 12 0.2 0.0
parse count (total) 1,290 20.7 0.6
parse time cpu 18 0.3 0.0
parse time elapsed 16 0.3 0.0
physical read IO requests 124 2.0 0.1
physical read bytes 1,015,808 16,335.0 456.3
physical read total IO requests 1,030 16.6 0.5
physical read total bytes 15,785,984 253,851.1 7,091.6
physical read total multi block 0 0.0 0.0
physical reads 124 2.0 0.1
physical reads cache 122 2.0 0.1
physical reads cache prefetch 0 0.0 0.0
physical reads direct 2 0.0 0.0
physical reads direct (lob) 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 0 0.0 0.0
physical write IO requests 47 0.8 0.0
physical write bytes 589,824 9,484.8 265.0
physical write total IO requests 4,591 73.8 2.1
physical write total bytes 25,374,720 408,045.5 11,399.3
physical write total multi block 4,461 71.7 2.0
physical writes 72 1.2 0.0
physical writes direct 3 0.1 0.0
physical writes direct (lob) 3 0.1 0.0
physical writes direct temporary 0 0.0 0.0
physical writes from cache 69 1.1 0.0
physical writes non checkpoint 18 0.3 0.0
pinned buffers inspected 1 0.0 0.0
prefetch warmup blocks aged out 0 0.0 0.0
prefetched blocks aged out befor 0 0.0 0.0
process last non-idle time 0 0.0 0.0
recursive calls 12,197 196.1 5.5
recursive cpu usage 747 12.0 0.3
redo blocks written 11,398 183.3 5.1
redo buffer allocation retries 0 0.0 0.0
redo entries 6,920 111.3 3.1
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo ordering marks 96 1.5 0.0
redo size 10,184,944 163,781.9 4,575.5
redo subscn max counts 811 13.0 0.4
redo synch time 2,190 35.2 1.0
redo synch writes 2,220 35.7 1.0
redo wastage 1,377,920 22,158.0 619.0
redo write time 2,192 35.3 1.0
redo writer latching time 0 0.0 0.0
redo writes 2,228 35.8 1.0
rollback changes - undo records 24 0.4 0.0
rollbacks only - consistent read 4 0.1 0.0
rows fetched via callback 1,648 26.5 0.7
session connect time 0 0.0 0.0
session cursor cache hits 1,242 20.0 0.6
session logical reads 104,295 1,677.2 46.9
session pga memory 2,555,904 41,101.0 1,148.2
session pga memory max 0 0.0 0.0
session uga memory 123,488 1,985.8 55.5
session uga memory max 0 0.0 0.0
shared hash latch upgrades - no 66 1.1 0.0
sorts (disk) 0 0.0 0.0
sorts (memory) 148 2.4 0.1
sorts (rows) 54,757 880.5 24.6
sql area evicted 86 1.4 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 596 9.6 0.3
table fetch by rowid 9,173 147.5 4.1
table fetch continued row 0 0.0 0.0
table scan blocks gotten 982 15.8 0.4
table scan rows gotten 154,079 2,477.7 69.2
table scans (cache partitions) 0 0.0 0.0
table scans (long tables) 0 0.0 0.0
table scans (short tables) 59 1.0 0.0
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 1 0.0 0.0
undo change vector size 3,990,136 64,164.5 1,792.5
user I/O wait time 49 0.8 0.0
user calls 26,517 426.4 11.9
user commits 2,226 35.8 1.0
user rollbacks 0 0.0 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 204 3.3 0.1
write clones created in backgrou 0 0.0 0.0
write clones created in foregrou 0 0.0 0.0
-------------------------------------------------------------
... and what's even more interesting is the report that I've got using Tanel Poder's great session snapper script. Take a look at these numbers (excerpt):
SID USERNAME TYPE STATISTIC DELTA HDELTA/SEC %TIME
668 ROVE_EDA WAIT log file sync 2253426 450.69ms 45.1%
668 ROVE_EDA WAIT log file sync 2140618 428.12ms 42.8%
668 ROVE_EDA WAIT log file sync 2088327 417.67ms 41.8%
668 ROVE_EDA WAIT log file sync 2184408 364.07ms 36.4%
668 ROVE_EDA WAIT log file sync 2117470 352.91ms 35.3%
668 ROVE_EDA WAIT log file sync 2051280 341.88ms 34.2%
668 ROVE_EDA WAIT log file sync 1595019 265.84ms 26.6%
668 ROVE_EDA WAIT log file sync 612034 122.41ms 12.2%
668 ROVE_EDA WAIT log file sync 2162980 432.6ms 43.3%
668 ROVE_EDA WAIT log file sync 2071811 345.3ms 34.5%
668 ROVE_EDA WAIT log file sync 2004571 334.1ms 33.4%
668 ROVE_EDA WAIT db file sequential read 28401 5.68ms .6%
668 ROVE_EDA WAIT db file sequential read 29028 4.84ms .5%
668 ROVE_EDA WAIT db file sequential read 24846 4.14ms .4%
668 ROVE_EDA WAIT db file sequential read 24323 4.05ms .4%
668 ROVE_EDA WAIT db file sequential read 17026 3.41ms .3%
668 ROVE_EDA WAIT db file sequential read 6736 1.35ms .1%
668 ROVE_EDA WAIT db file sequential read 33028 5.5ms .6%
764 (LGWR) WAIT log file parallel write 2236748 447.35ms 44.7%
764 (LGWR) WAIT log file parallel write 2150825 430.17ms 43.0%
764 (LGWR) WAIT log file parallel write 2139532 427.91ms 42.8%
764 (LGWR) WAIT log file parallel write 2119086 423.82ms 42.4%
764 (LGWR) WAIT log file parallel write 2134938 355.82ms 35.6%
764 (LGWR) WAIT log file parallel write 2083649 347.27ms 34.7%
764 (LGWR) WAIT log file parallel write 2034998 339.17ms 33.9%
764 (LGWR) WAIT log file parallel write 1996050 332.68ms 33.3%
764 (LGWR) WAIT log file parallel write 1797057 299.51ms 30.0%
764 (LGWR) WAIT log file parallel write 555403 111.08ms 11.1%
764 (LGWR) WAIT log file parallel write 277875 46.31ms 4.6%
764 (LGWR) WAIT log file parallel write 2067591 344.6ms 34.5%
Where SID=668 is the session we've been looking for... OK, we've got to get back to monitoring the disk array and the corresponding network components.
tOPsEEK -
How do I get a refund on a ridiculously slow-downloading rented movie for my Apple TV, despite a >5 Mb/sec download speed? My first experience with Apple TV movie rental. No good. Rented at 4:30pm; still waiting for a standard-def movie to finish at 8:10pm. Not good enough.
Thanks for the pointers folks. Yeah, 1.5GB is pretty poor isn't it? There are other suppliers, but you have to get their phone line installed to access the internet, so it's an £85 setup fee plus a day off work for a guy to come round and drill a hole in your wall before you even get to the monthly costs.
Don't get me wrong though - I'm certainly not an Apple basher, but am equally frustrated with both them and the ISP.
Virgin Media spend all kinds of money advertising their broadband as the fastest in the UK, but don't tell you (up front anyway) that you'll be cut off if you actually try and use it.
Apple on the other hand carry on with their 'it just works' mantra, but with Apple TV this only applies if you meet certain criteria (again, not mentioned up front).
I guess it ain't a biggie, as I was gifted my Apple TV and love all the other stuff it does, but from the number of posts on this forum I can see that there are a lot of angry customers out there who want explanations as to why they're having similar problems to me.
Will be very interesting to see if Apple release any kind of software update that turns the flash drive from a buffer to permanent storage, to at least ease some of the pain...
Message was edited by: McGinty -
Scenario: Insert update delete into a 3 billion target from a 2 billion source
Both the target (3 billion rows) and the source (2 billion rows) are SQL Server tables partitioned by "Year-month". Now I want to insert/update/delete from source to target. This is just a scenario I was thinking about, to work out the best approach for the load.
With my little knowledge I would write a stored procedure with Merge statement but given the large amount of data will that be the best solution?
Please advise. Thanks in advance for your help.
svk
You need to find out how much data from what dates you need to operate on.
Is it a data sync endeavor?
A few tips: forget about the Lookup (too slow), and the Cache Transform will be a burden.
I would go with the T-SQL MERGE, but may want to do it in chunks, e.g. limit to date ranges.
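The chunking idea above can be sketched as a driver loop that runs one MERGE per date range, so each transaction and its log usage stay bounded. This is only an illustrative sketch: the table names, columns, date span, and connection string are all assumptions, not the poster's schema.

```csharp
using System;
using Microsoft.Data.SqlClient;

// One MERGE per month-partition; dbo.Source/dbo.Target, LoadDate, Id and
// Amount are hypothetical names standing in for the real schema.
const string mergeSql = @"
MERGE dbo.Target AS t
USING (SELECT * FROM dbo.Source WHERE LoadDate >= @from AND LoadDate < @to) AS s
    ON t.Id = s.Id
WHEN MATCHED THEN UPDATE SET t.Amount = s.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Amount, LoadDate) VALUES (s.Id, s.Amount, s.LoadDate);";

using var conn = new SqlConnection("Server=.;Database=Dw;Integrated Security=true;"); // assumed
conn.Open();
for (var from = new DateTime(2010, 1, 1); from < new DateTime(2015, 1, 1); from = from.AddMonths(1))
{
    using var cmd = new SqlCommand(mergeSql, conn) { CommandTimeout = 0 };
    cmd.Parameters.AddWithValue("@from", from);
    cmd.Parameters.AddWithValue("@to", from.AddMonths(1));
    cmd.ExecuteNonQuery(); // each chunk commits independently
}
```

The delete side (WHEN NOT MATCHED BY SOURCE) is deliberately omitted here: with a range-filtered source it would also have to restrict the target to the same date range, or it would wipe every row outside the current chunk.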
Arthur My Blog -
Can not insert/update data from table which is created from view
Hi all
I'm using Oracle database 11g
I've created table from view as the following command:
Create table table_new as select * from View_Old
I can insert/update data into table_new by command line.
But I can not insert/update data of table_new by the SI Object Browser tool or the Oracle SQL Developer tool (read only).
Can anybody tell me what happened, and the cause?
Thankyou
thiensu
Edited by: user8248216 on May 5, 2011 8:54 PM
Edited by: user8248216 on May 5, 2011 8:55 PM
I can insert/update data into table_new by command line.
But I can not insert/update data of table_new by the SI Object Browser tool or the Oracle SQL Developer tool (read only).
So what is wrong with the GUI tools, and why post to the DATABASE forum when that works OK? -
Insert / update data to a table through DBLINK (oracle)
I try to insert / update a table from one instance of oracle database to another one through oracle dblink, get following error:
java.sql.SQLException: ORA-01008: not all variables bound
ORA-02063: preceding line from MYLINK
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:582)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1986)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1144)
at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2152)
at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:2035)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2876)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:609)
The same code to insert/update the exact same table in a local instance works fine, with no binding problem. So I am pretty sure all ? marks in the SQL are set with some value before sending to Oracle.
Can someone please advise what the possible problem is? Is the DB link not set up correctly, or can we not update a remote table over a dblink?
By the way i can do insert / update from TOAD to the remote table through DBLINK. Problem happens only in Java code.
thanks!
Gary
A dblink links from one database instance to another.
So it is certainly a source of possible problems when it works on one database and not another.
You should start by looking at the dblink and possibly testing it in the database, not via Java.
Note as well that the error suggests it is coming from the Oracle database. I believe that if you had a bind parameter problem in your Java code the error would come from the driver, but that is a guess on my part.
Hello,
I am using ORACLE DATABASE 11g (EE) and RHEL 5.
I want to insert/update japanese language data in a column which has the datatype as varchar2(256).
I tried to change the NLS_LANGUAGE and NLS_TERRITORY parameters with an 'ALTER SESSION set ...' command, but with no effect.
I tried to bounce (shutdown and startup) the DB, but still no effect.
I tried to insert NLS_LANGUAGE and NLS_TERRITORY in the init.ora file, but still no use.
If anybody knows the detailed steps beyond what I have mentioned above, let me know. It might be that my method is wrong.
Can you please guide me on how to change the language of the DB for a particular session to Japanese?
Thanks in advance...
Edited by: VJ4 on May 9, 2011 6:21 PM
VJ4 wrote:
Thanks for the info.
Yes i tried with UNISTR function and was able to insert the data successfully.
but the point is that we can't remember the Unicode for each letter. Is there any method to directly insert Japanese characters using an insert?
As you said :-
Note that changing database character set is something complicated that requires many steps.
Can you please provide me some links or some material about the detailed steps of changing the database character set?
I have gone through the Oracle online documentation; if you can pinpoint any good link in it, please do, or else provide some other material.
Thanks.
You will need to convert your database character set to AL32UTF8. This is not a trivial exercise if your database already has data in it. See these MOS Docs:
Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) (Doc ID 260192.1)
AL32UTF8 / UTF8 (Unicode) Database Character Set Implications (Doc ID 788156.1)
http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch11charsetmig.htm#g1011430
HTH
Srini -
Can you create nested condition in merge(e.g. insert/update) using owb
Hi,
Does OWB9iR2 allow you to build a nested condition into a merge? Such as:
If no match on col1 and col2 then
if col3 match then no new sequence insert <---
else insert new sequence;
else (there is match on col1 and col2)
update col3 and sequence.
I have an incremental load for a lookup table, where insert/update is used. There are two match columns and a surrogate key. When there is no match on the two match columns, it shall not insert a new sequence if there is a match on the third column. I cannot use the 3rd column in the original match because it shall be updated when there is a match on the two match columns.
I am trying to avoid using a transformation because of the performance impact. Thanks
Hi, I think the misleading thing is that in PL/SQL you can use booleans, which is not possible in SQL. So in a PL/SQL transformation, this is OK:
a:= case when not orgid_lkup( INGRP1.ORG_ID )
then get_supid(..)
else ...
but, the following SQL does not work:
select case when not orgid_lkup( INGRP1.ORG_ID )
then get_supid(..)
else ...
into a
from dual;
I ended up using only 0/1 as boolean return values for these reasons;
so I can have:
select
case when orgid_lkup( INGRP1.ORG_ID ) = 0 then ...
though true booleans are better if you don't have to embed them in SQL.
Antonio -
Pre Database Insert / Update Validation
Hi,
I have a maintenance view that allow user to insert / update.
Let's say, z_my_table is my table
key1
key2
field1
field2
field3
I want to validate whether field1, field2 and field3 already exist in my table before insert/update.
However, due to some restriction, I cannot make it as part of the key.
My question is, how can I capture OR validate it before the data get inserted into the table ?
I checked some online help and found that the suggestion is to write a function module or an enhancement point.
I just wonder what the common practice is and how I should do it. Thanks.
I don't agree with the design, but if you want to do it anyway, it can be done using Events in the Maintenance View.
After you've generated the Maintenance View of your Z table, within SE11 pull out menubar Utilities->Table Maintenance Generator, then menubar Environment->Modification->Events. Create New Entries; I think for your requirement you would need 2 entries, one at event 05 and another at event 18. Input the name of your routine and write the code by clicking its corresponding button.
The following is an example of a scenario where the text of an object is retrieved and filled into a display-only column of the view according to user input of its related object id; for your requirement you would instead validate the user input against the values in your existing table:
form fill_text_05.
data begin of w_total.
include structure zfv_fbt_acct_rel.
data: action,
mark,
end of w_total.
data: w_extract type zfv_fbt_acct_rel.
clear: zfv_fbt_acct_rel-fbt_rel_acc_txt,
zfv_fbt_acct_rel-fbt_cost_grp_txt.
* populate GL a/c text
select txt50 up to 1 rows
from skat
into zfv_fbt_acct_rel-fbt_rel_acc_txt
where spras = sy-langu
and saknr = zfv_fbt_acct_rel-fbt_rel_acc.
endselect.
* populate Cost Group text
select single fbt_cost_grp_txt
from zft_fbt_cost_grp
into zfv_fbt_acct_rel-fbt_cost_grp_txt
where company_code = zfv_fbt_acct_rel-company_code
and fbt_cost_grp = zfv_fbt_acct_rel-fbt_cost_grp.
if sy-subrc <> 0.
message e074(zf_enhancements) with zfv_fbt_acct_rel-company_code.
endif.
endform. "fill_text_05
Hope this helps.
Cheers,
Sougata. -
Inserting/updating data in control block based on view
Hi!
I've created a block based on a view to display data.
I want this block to be insertable and updateable; that is, I will use on-insert/on-update triggers to call insert/update procedures located in the database.
When trying to change/insert a value in the block, the error message "Error: Can not insert into or update data in a view" pops up. I've tried to get rid of this error, without success.
How can I make a data block based on a view insertable and updateable?
My guess is that this has something to do with locking the records (there is no rowid in the view)... but I'm not sure.
Please advise!!
Morten
As well as on-update, on-insert, on-delete triggers you also need an on-lock,
(even though it might just contain null;) otherwise the form will try to lock the view and fail.
Actually your terminology is wrong, the block being based on a table or view is not a control block. A control block is not based on anything and has no default functionality for communicating with the database. If it was a control block, the on- triggers would not fire. -
Insert/Update/Delete Non-PO Invoice Line Item via FM/BAPI?
Does anyone know of a way to insert/update/delete an Invoice Line item (Non-PO Accounting Invoice - Transaction FB60 or FV60) using a BAPI or Function Module (or set of function modules) using ABAP? I have been trying to find some code to accomplish this and am stuck on a couple of issues.
I have found PRELIMINARY_POSTING_FB01 and PP_CHANGE_DOCUMENT_ENJ but both seem to submit the details to background processes. This is an issue because it gives the user a success message after execution but later delivers the error to Workflow. This is for an interfacing program so the results should be as real time as possible.
Has anyone accomplished this via FM or BAPI and if so would you mind sharing your experiences?
Thank you very much,
AndySG
Thank you for the reply.
I have been playing with BAPI_INCOMINGINVOICE_PARK and I'm not sure it does exactly what we want, but it is something I have considered in the past. I plan on looking into BAPI_ACC_INVOICE_RECEIPT_POST this morning; hopefully that will provide some more for us.
If possible I'd like to avoid BDC sessions, because this program could hypothetically interface with multiple SAP systems with different configurations.
I will check into those FMs. Thank you very much. -
How to find out who made inserts/updates/deletes made to a SQL Table
I want to know who makes inserts/updates/deletes to a particular SQL table. Basically, I want to audit any edits made to a SQL 2008 table. I need info such as who made the update (i.e. the user's first/last name), when the update was made, which row was updated, etc. How can I do that with SQL 2008?
One way to achieve that would be to use triggers to detect when a change is made to the table, and then insert a record into another table/database detailing what changed and by whom.
You'd need three triggers, one each for insert, update and delete, and in each of those you use the "inserted" and "deleted" tables (pseudo-tables maintained by SQL Server) to retrieve what was done. To retrieve who made the change you can use a function such as SUSER_SNAME(); IDENT_CURRENT, which returns the last identity value for a specific table, can help identify the affected row.
See :
Triggers -
http://msdn.microsoft.com/en-gb/library/ms189799(v=sql.100).aspx
Inserted & deleted tables -
http://technet.microsoft.com/en-us/library/ms191300(v=sql.100).aspx
IDENT_CURRENT -
http://technet.microsoft.com/en-us/library/ms175098(v=sql.100).aspx
There may be better or more up-to-date ways to do this, but I've used this method successfully in the past, albeit a long time ago (on a SQL 2000 box, I think!). -
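The three-trigger approach described above can be sketched as follows. This is a minimal example under assumed names: the dbo.Orders source table and dbo.OrdersAudit log table are hypothetical, not from the thread; SUSER_SNAME() records the login that made the change.

```sql
-- Hypothetical audit table (all names are illustrative).
CREATE TABLE dbo.OrdersAudit (
    AuditId   INT IDENTITY PRIMARY KEY,
    OrderId   INT,
    Action    CHAR(1),                        -- 'I' = insert, 'U' = update, 'D' = delete
    ChangedBy SYSNAME  DEFAULT SUSER_SNAME(), -- login that made the change
    ChangedAt DATETIME DEFAULT GETDATE()
);
GO

-- One of the three triggers: the UPDATE case.
-- "inserted" holds the new row images, "deleted" the old ones.
CREATE TRIGGER trg_Orders_Update ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    INSERT INTO dbo.OrdersAudit (OrderId, Action)
    SELECT i.OrderId, 'U'
    FROM inserted AS i;
END;
```

The INSERT and DELETE triggers are analogous: the insert trigger reads only from "inserted", the delete trigger only from "deleted".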
Search criteria for insert/update bdoc
Hi All,
In our setup we have ECC-to-CRM replication of BP (business partner) data. If one checks the extension data of a stuck BDoc, it has an object task of Insert or Update.
Can someone help me with a search criterion so that I can pull out BDocs which have Insert as the object task?
We have search criteria for errored/intermediate-state BDocs, for inbound vs. outbound, BDoc type, etc.
I need one based on the Insert/Update task, so that any new data replication that is stuck on its very first BDoc (Insert type) can be queried immediately.
Regards
Abhinav
Hello Abhinav,
I do not think that we have such a search criterion to search for BDocs based on the task type, which comes under the data part of the BDoc.
One alternative is to find out in which table this data gets stored and write a program to fetch the relevant BDocs.
Hope this helps!
Best Regards,
Shanthala Kudva -
Insert,update and delete data in a table using webdynpro for abap
Dear All,
I have a requirement to create a table allowing the user to add rows to it, update a row, and delete a row from that table. To do this I guess I have to make use of ALV, but using ALV I am not able to enter data into the table, whereas I can make a column editable, delete a row, etc. Please guide me on how to perform these operations (insert, update and delete) on the table.
Thanks,
Reddy.
Hi Sridhar,
Using ALV you can do all of these things (insert, delete, etc.), and you can make the ALV editable so that data can be entered directly.
Check this...
http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3133474a-0801-0010-d692-81827814a5a1
Editing alv in web dynpro
editing rows in alv reports
Re: editing rows and columns in alv reports in webdynpro abap
Cheers,
Kris.