Will insert (ignore duplicates) have better performance than merge (insert if not duplicate)?
OK, here is exactly what is happening:
We had a table with no unique index on it. We used an INSERT ALL statement to insert records.
Later, when we found duplicates in there, we started removing them manually.
To resolve the issue, we added a unique index and added exception handling to ignore the DUP_VAL_ON_INDEX exception.
But with this, all records being inserted by the INSERT ALL statement get ignored even if only one record is a duplicate.
Hence we finally replaced INSERT ALL with a MERGE statement, which inserts only if a corresponding record is not found in the table (matched on the column in the unique index).
But I am wondering how much the performance will be impacted.
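For reference, the insert-only MERGE we ended up with looks roughly like this (all table and column names below are made up for illustration; ours differ):

```sql
-- Sketch only: target_tab, staging_tab and the columns are hypothetical.
MERGE INTO target_tab t
USING (SELECT unique_key, some_val FROM staging_tab) s
ON (t.unique_key = s.unique_key)   -- match on the column behind the unique index
WHEN NOT MATCHED THEN
  INSERT (t.unique_key, t.some_val)
  VALUES (s.unique_key, s.some_val);
```

If you are on 11g, two alternatives may also be worth benchmarking against the MERGE: DML error logging (LOG ERRORS INTO ... REJECT LIMIT UNLIMITED), which lets the insert continue past duplicate-key rows, and the IGNORE_ROW_ON_DUPKEY_INDEX hint, which silently skips them.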
Similar Messages
-
Insert - Ignore Duplicate Entry Error
Hello
I would like to use an INSERT statement that contains a SELECT statement, so that the set of records the SELECT returns gets inserted into a particular table.
But when I tried to run the SQL, it threw an error because some of the records are duplicate entries. Is it possible to modify the SQL to ignore any duplicate entries and insert only the new ones?
Illustration:
TABLE_A
STUDENT_ID GROUP_ID
101 200
101 201
103 200
107 201
and so on.
Here, when I try to insert a set of records like
101 200
108 200
101 300
103 200
103 201
I would like the exact duplicate entries like (101,200), (103,200) to be ignored without producing any error, but insert the other records into the TABLE_A table.
The SQL used is similar to this:
Insert into TABLE_A (student_id, group_id)
select distinct person.studentid, groups.groupid from person, groups where
Does anyone know how to handle this?
Thanks,
Shiv
If I include NOT EXISTS after the INSERT and before the SELECT statement, it throws the error:
missing VALUES keyword
The NOT EXISTS goes in the WHERE clause.
Insert into Table_A (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, groups g
where NOT EXISTS (select 1
                  from Table_A a
                  where a.student_id = p.studentid
                  and a.group_id = g.groupid)
or you could do
Insert into Table_A (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, groups g
MINUS
select student_id, group_id
from Table_A -
Why does the SBlive $33 soundcard have a better EQ than the $100 audigy2
When will the upgraded EQ be available for the ZS?
The ZS EQ is 7-band, while the SBLive I just installed for a friend was 0 maybe? :angry:
We're all just your fellow users here; none of us has any insight into Apple's product decisions. You can send your comment through the iTunes feedback page.
-
For my game's better performance, should I use Starling?
I heard that using Starling gives better performance than just using Flash Pro natively (GPU mode?) when playing Flash games on smartphones.
However, according to this article, there is not much difference between GPU mode and Starling, although it was recorded in late 2012.
http://esdot.ca/site/2012/runnermark-scores-july-18-2012
My game is a tile-matching game that uses vectors and many different tile pictures; also, up to 64 tiles can be present at the same time.
I don't know how much more performance Starling would provide, but even if it would give more, I don't know if it's worth the time and effort to learn Starling and change my current code to use it.
This is a test between multiple frameworks that all use Stage3D, which is basically the means to get any hardware benefit from the GPU.
These frameworks do nothing other than help streamline your game development while doing some optimizing (object pooling etc.) under the hood.
The basic concept is to have sprite sheets (for 2D), also called "texture atlases", instead of the "old" method of having separate MovieClips/Sprites.
If you don't use this method in your game, then you will indeed see no benefit from Starling or any other Stage3D framework.
So if your game is coded "like in the old days", you would have to rewrite some parts of it and convert all MovieClips to sprite sheets to benefit from the GPU.
The real performance comparison reads like this:
CopyPixels (the pre-Stage3D method) had a performance gain of 500%, and sprite sheets (Stage3D) 4000%, compared to the "old way".
It all depends on whether you're unhappy with your game's current performance on current mobile devices or not. -
Which method has better performance?
Hello!
I'm using Entity Framework, and I have several cases where I should run a query that returns some parent items, and afterwards display those parents and their related children in one report.
I want to know which of these methods has the better performance (or is there any other, better method?).
Method 1 (the child collections are loaded later, using lazy loading):
Dim lista As List(Of MyObj) = (From t In context.MyObjs Where (..condition..) Select t).ToList()
Method 2:
Dim lista As List(Of MyObj) = (From t In context.MyObjs Where (..condition..) Select t) _
    .Include(Function(t2) t2.Childs1) _
    .Include(Function(t2) t2.Childs2) _
    .Include(Function(t2) t2.Childs2.Child22) _
    .Include(Function(t2) t2.Childs1.Childs11) _
    .Include(Function(t2) t2.Childs1.Childs12) _
    .ToList()
Method 3:
Dim lst = (From t2 In context.MyObjs Where (..condition..)
           Select New With {
               .Parent = t2,
               .ch1 = t2.Childs1,
               .ch2 = t2.Childs2,
               .ch21 = t2.Childs2.Child21,
               .ch11 = t2.Childs1.Childs11,
               .ch12 = t2.Childs1.Childs12
           }).ToList()
Dim lista = lst.Select(Function(t2) t2.Parent)
I noticed that the first method causes the report to open very slowly. Also, I read somewhere that Include() causes the parent items to be repeated?
But anyway, I would like a professional opinion in general on the three methods.
Thank you!
Hello,
As far as I know, Entity Framework offers two ways to load related data after the fact. The first is called lazy loading and, with the appropriate settings, it happens automatically. In your case, your first method uses lazy loading, while the second and third are actually the same: both of them are eager loading. (In VB, if you log the generated SQL with "DbContext.Database.Log = Sub(val) Diagnostics.Trace.WriteLine(val)", you can see that the second and third queries both generate a JOIN.) Since you mention that the lazy loading approach performs poorly, you could use either the second or the third one.
>>Also I read somewhere that Include() cause repeat of parent items?
For this, I am not sure if you worry that it would first use lazy loading and then eager loading; however, in my test I do not see this behavior. Entity Framework seems to be smart enough to use one mode to load the data at a time. Or you could disable lazy loading when using eager loading:
context.ContextOptions.LazyLoadingEnabled = False
Regards.
-
Any general tips on getting better performance out of a multi-table insert?
I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command, which ends up inserting into 5 separate tables, conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables as follows:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. Everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy; maybe 3 or 4 of them are on multiple columns, 2-3 columns max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried: with APPEND, without APPEND, and PARALLEL (though adding PARALLEL seemed to make the query run in serial, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute on table1. That means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal, or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I could gather to see if maybe the database is poorly configured for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours, so I lost all the progress it made and have to start over. Plus, I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
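On the P.S.: one hedged sketch of committing in slices, assuming the source table has some column that splits it into chunks (batch_no and the column lists below are made up; a date or key range would do the same job):

```sql
-- Sketch only: batch_no, c1, c3, a, b are hypothetical placeholders.
BEGIN
  FOR r IN (SELECT DISTINCT batch_no FROM source_table ORDER BY batch_no) LOOP
    INSERT ALL
      WHEN 1=1 THEN INTO table1 (c1) VALUES (c1)
      WHEN a=b THEN INTO table3 (c3) VALUES (c3)
    SELECT c1, c3, a, b
    FROM source_table
    WHERE batch_no = r.batch_no;
    COMMIT;  -- each slice survives on its own if the session dies mid-run
  END LOOP;
END;
/
```

The trade-off is that a failure now leaves the job partly done, so you need some way to restart from the last committed slice rather than from scratch.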
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter it to the SIDs of my current sessions and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in the equivalent of other forums' code tags?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
Do we have a shortcut to insert a duplicate record except the PK?
Thanks.
Do you want to insert another row with all of the columns the same but with a different PK? If so then:
INSERT INTO t (pk, col1, col2, col3)
SELECT new_value_for_pk, col1, col2, col3
FROM t
WHERE pk = <value>
What you use for new_value_for_pk will depend on how the PK is generated.
John -
Burner not working, 8.98 GB free. Can this 2003/2004 iMac model be cleaned up for better use, or will I continue to have problems? Do I need to just buy a new one? I remember with a PC being able to defrag.
You could buy an external dvd/cd drive.
Suggest other world computing.
You need to delete some files. Many posters to these forums state that you need much more free space: 5 GB to 10 GB, or 10 percent of your HD size.
(0)
Be careful when deleting files. A lot of people have trashed their system when deleting things. Place things in trash. Reboot & run your applications. Empty trash.
Go after large files that you have created and know what they are. Do not delete small files that are in a folder if you do not know what the folder is for. Anything that is less than a megabyte is a small file these days.
(1)
Run
OmniDiskSweeper
"The simple, fast way to save disk space"
OmniDiskSweeper is now free!
http://www.omnigroup.com/applications/omnidisksweeper/download/
This will give you a list of files and folders sorted by size. Go after things you know that are big.
(2)
This will save you a gig of space.
Monolingual is a program for removing unnecessary language resources from Mac OS X, in order to reclaim several hundred megabytes of disk space. It requires at least Mac OS X 10.3.9 (Panther) and also works on Mac OS X 10.4 (Tiger). It worked for me on 10.4.
http://monolingual.sourceforge.net/
A detailed write-up on how to use Monolingual:
http://www.jklstudios.com/misc/monolingual.html
(3)
These pages have some hints on freeing up space:
http://thexlab.com/faqs/freeingspace.html
http://www.macmaps.com/diskfull.html
(4)
Buy an external FireWire hard drive.
(5)
Buy a flash card. -
Unexpected URL parameters have been detected and will be ignored.
Hi...
My JDeveloper raised a "Raise Application: FND, Message Name: FND-INVALID APPLICATION." error when running the test page before, so I downloaded another version of JDeveloper. It runs test_fwktutorial.jsp well, but when I click a link in test_fwktutorial.jsp, it raises an error and the Oracle EBS logon page shows at the same time (the error "Unexpected URL parameters have been detected and will be ignored." is shown in the Oracle EBS logon page).
please help me, thank you very much.
Jack
Jack,
You need not do hit-and-trial on various versions of JDev. The JDev version should match your instance's patchset version. Go through document no. 416708.1 on Metalink to find out which version of JDev you need to download, as per your instance patch set level.
--Mukul -
Just purchased a Blu-ray movie from the store with a redemption code, but it will not download without inserting the disc. I have a MacBook Air.
I am experiencing the same thing. I have many books that I used to read on my iPad.
Now on iBooks in Mavericks all my books downloaded correctly apart from two which I really want to work (The Lean Startup and Communicating The User Experience).
When I download both using the cloud icon just when it reaches 100% I get this message:
[Insert Book Name] failed to download
To try again, select Check for Available Downloads from the Store menu.
When I check for available downloads it says I have already downloaded all my books.
Does anyone know how to fix this?
I do realise that this is only v1 of iBooks so I hope Apple sorts it out soon.
Thanks!!! -
Which one will get better performance when traversing an ArrayList, iterators, or index (get(i))?
hi, everyone,
Which one will get better performance when traversing an ArrayList, iterators, or index(get(i))?
Any reply would be valuable.
Thank you in advance.
Use the iterator, or a for-each loop, which is just syntactic sugar over an iterator. The cases where there is a noticeable difference will be extremely rare. You would only use get() if you actually measured a bottleneck, changed it, re-tested, and found a significant improvement. An iterator will give O(n) time for iterating over any collection. Using get() only works on Lists, and for a LinkedList it gives O(n^2).
-
Which clause will yield better performance?
I've always wondered, from a performance standpoint, whether it is better to use a WITH clause within a cursor or a subquery. So if, say, I could either place my subquery in the FROM clause of my cursor (for this example that would be the case) or use a WITH clause, which would yield better performance? (I'm using Oracle 11g)
Check this link.
http://jonathanlewis.wordpress.com/2010/09/13/subquery-factoring-4/
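In case the link goes stale, the two forms being compared look roughly like this (emp/dept are just the stock demo tables, not your schema):

```sql
-- Inline view in the FROM clause
SELECT e.ename, d.dname
FROM (SELECT deptno, dname FROM dept WHERE loc = 'DALLAS') d,
     emp e
WHERE e.deptno = d.deptno;

-- The same query with subquery factoring (WITH clause)
WITH d AS (SELECT deptno, dname FROM dept WHERE loc = 'DALLAS')
SELECT e.ename, d.dname
FROM d, emp e
WHERE e.deptno = d.deptno;
```

In most cases the optimizer transforms both into the same plan, so performance is usually identical; differences appear when the WITH block is materialized (the MATERIALIZE and INLINE hints let you steer that), which is what the linked post explores.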
Regards
Raj -
How to setup airport time capsule to get better performance?
I need to set up my wireless system with my new AirPort Time Capsule 3TB as the primary base station to get better performance. I have a cable modem as the primary device to get the signal (5 MB) from the ISP, and my network has one MacBook Pro, a MacBook Air, a Mac mini, 2 iPads and 2 iPhones, but none of them is connected all the time.
What is the best way to do that?
Which wifi channel do I need to choose?
What is the best way to do that?
Use ethernet.. performance of wireless is never as good as ethernet.
Which wifi channel do I need to choose?
There is no such thing as the best channel..
Leave everything auto.. and see if it gives you full download speed.
Use 5 GHz, and keep everything close to the TC for the best wireless speed.
If you are far away it will drop back to 2.4 GHz, which is slower.
Once you reach the internet speed, nothing is going to help it go faster, so you are worrying about nothing. -
My application was designed based on the MVC architecture, but I made some changes to it based on my requirements. The servlet invokes helper classes, and the helper classes use EJBs to communicate with the database. JSPs also use EJBs to retrieve the results.
I have two EJBs (stateless), one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller, and all database transactions are done through EJBs only. The helper classes contain the business logic. Based on the request, the relevant helper class is invoked by the servlet, and all database transactions are done through EJBs. Session scope is 'Page' only.
Now I am planning to use EJBs (for the business logic) instead of helper classes. But before doing that, I need some clarification regarding network traffic and better usage of container resources.
Please suggest which method (helper classes or EJBs) is preferable
1) to get better performance, and
2) for less network traffic
3) for better container resource utilization
I thought that if I use EJBs, the network traffic will increase, because every call makes a remote call to the EJBs.
Please give a detailed explanation.
thank you,
sudheer
>>Please suggest me which method (Helper classes or EJBs) is preferable:
>>1) to get better performance
EJBs have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB, often considerably. If you plan on making your 70 helper classes EJBs, you should expect to see a dramatic decrease in maximum throughput.
>>2) for less network traffic
There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJBs and JSPs are co-located, there won't be any other additional overhead there either. (You are co-locating your JSPs and EJBs, aren't you?)
>>3) for better container resource utilization
Again, the EJB version will consume a lot more container resources. -
I do a lot of video editing for work. I am currently using the Creative Cloud, and the programs I use most frequently are Premiere Pro CS6, Photoshop CS6, and Encore. My issue is that when I am rendering video in Premiere Pro and, most importantly, transcoding in Encore for Blu-ray discs, I am getting severe lag from my computer. It basically uses the majority of my computer's resources and doesn't allow me to do much else. This means that I can't do other work while stuff is rendering or transcoding. I had this computer built specifically for video editing and need to know which direction to go in for an upgrade to get better performance and allow me to do other work.
For the record, I do have MPE GPU acceleration turned ON, and I have 12 GB of RAM allotted to Adobe in Premiere Pro's settings, with 4 GB left for "other".
Here is my computer:
- Dell Precision T7600
- Windows 7 Professional, 64-bit
- DUAL Intel Xeon E5-2620 2.0 GHz 6-core processors
- 16GBs of RAM
- 256GB SSD as my primary drive. This is where the majority of my work is performed.
- Three 2TB secondary drives in a RAID5 configuration. This is solely for backing up data after I have worked on it. I don't really use this to work off of.
- nVidia Quadro 4000 2GB video card
When I am rendering or transcoding, my processor(s) performance fluctuates between 50%-70%, with all 12 cores active and being used. My physical memory is basically ALL used up while this is happening.
Here is where I am at on the issue. I put in a request for more RAM (32 GB); this way I can allot around 25 GB of RAM to the Adobe suite, leaving more than enough to do other things. I was told that this was not the right direction to go in. I was told that since my CPUs are working at around 50-70%, it means my video card isn't pulling enough weight. I was told that the first step in upgrading this machine is to replace my 2 GB video card with a 4 GB video card, and that this will fix the performance issues I am having, not RAM.
This is the first machine that has been built over here for this purpose, so it is a learning process for us. I was hoping someone here could give a little insight to my situation.
Thanks for any help.You have a couple of issues with this system:
Slow E5-2620's. You would be much better off with E5-2687W's
Limited memory. 32 GB is around bare minimum for a dual processor system.
Outdated Quadro 4000 card, which is very slow in comparison to newer cards and is generally not used when transcoding.
Far insufficient disk setup. You need way more disks.
A software raid5 carries a lot of overhead.
The crippled BIOS of Dell does not allow overclocking.
The SSD may suffer from severe 'steady state' performance degradation, reducing performance even more.
You would not be the first to leave a Dell in the configuration it came in. If that is the case, you need a lot of tuning to get it to run decently.
Second thing to consider is what source material are you transcoding to what destination format? If you start with AVCHD material and your destination is MPEG2-DVD, the internal workings of PR may look like this:
Convert AVCHD material to an internal intermediate, which is solely CPU bound. No GPU involvement.
Rescale the internal intermediate to DVD dimensions, which is MPE accelerated, so heavy GPU involvement.
Adjust the frame rate from 29.97 to 23.976, which again is MPE accelerated, so GPU bound.
Recode the rescaled and frame-blended internal intermediate to MPEG2-DVD codec, which is again solely CPU bound.
Apply effects to the MPEG2-DVD encoded material, which can be CPU bound for non-accelerated effects and GPU bound for accelerated effects.
Write the end result to disk, which is disk performance related.
If you export AVCHD to H.264-BR the GPU is out of the game altogether, since all transcoding is purely CPU based, assuming there is no frame blending going on. Then all the limitations of the Dell show up, as you noticed.