Which structure of itab has better performance
Hi experts,
I have written an ABAP report - the performance was good...
Then I had to implement "some" more functions which were not requested at the beginning,
and now the performance is very bad!
In this case I think a new report is a better way than refactoring the source code...
Now I am not sure how to organize my data in itabs.
I need a dynamic internal table because the number of key columns is only known at runtime...
What do you think is the better and faster way:
Alternative A:
Structure of itab
- Assuming 5 key columns ( Key01, Key02, Key03,... )
- 4 further columns
estimated lines: 6.000.000 - much smaller linesize than Alternative B
Alternative B:
Structure of itab
- Assuming 5 key columns ( Key01, Key02, Key03,... )
- 100 further columns
estimated lines: 80.000 - much bigger linesize than Alternative A
I think Alternative B would be faster, but I want to know your opinion...
Regards,
Oliver
First of all you should check the total size of your internal table:
number of lines * bytes per row = ?
And of course you should use either a sorted table or a hashed table - standard tables should definitely be avoided here!
Hashed tables are only useful if you always read with the complete unique table key - no other access!
+ If a hashed table can be used, then A could actually be better than B!
Use LOOP ... ASSIGNING instead of copying rows into a work area!
kind regards Siegfried
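Siegfried's advice about table kinds boils down to lookup cost. A hypothetical Python sketch (not ABAP - a dict stands in for a HASHED table, a sorted key list with binary search for a SORTED table; the key values are made up):

```python
import bisect

# Rows keyed by a composite key, as in Alternative A.
rows = {("K1", i): {"amount": i * 10} for i in range(100_000)}

# HASHED-table analogue: O(1) average, but only with the complete unique key.
assert rows[("K1", 42)]["amount"] == 420

# SORTED-table analogue: keep keys sorted, read via binary search, O(log n).
sorted_keys = sorted(rows)

def read_sorted(key):
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return rows[sorted_keys[i]]
    return None

assert read_sorted(("K1", 42))["amount"] == 420
assert read_sorted(("K1", 100_000)) is None  # miss: still no linear scan
```

Either way, the per-read cost is independent of row width, which is why the 6-million-row Alternative A is only viable with a hashed or sorted key.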
Similar Messages
-
Which method has better performance?
Hello !
I'm using Entity Framework, and I have several cases where I run a query that returns some parent items, and afterwards I display these parents and their related children in one report.
I want to know which of these methods has the better performance (or is there any other better method?)
Method 1: (the child collections are loaded later, using lazy loading)
Dim lista As List(Of MyObj) = (From t In context.MyObjs Where (..condition..) Select t).ToList()
Method 2:
Dim lista As List(Of MyObj) = context.MyObjs.Where(Function(t) ..condition..) _
    .Include("Childs1") _
    .Include("Childs2") _
    .Include("Childs2.Child22") _
    .Include("Childs1.Childs11") _
    .Include("Childs1.Childs12") _
    .ToList()
Method 3:
Dim lista As IEnumerable(Of MyObj)
Dim lst = (From t2 In context.MyObjs
           Where (..condition..)
           Select New With {
               .Parent = t2,
               .ch1 = t2.Childs1,
               .ch2 = t2.Childs2,
               .ch21 = t2.Childs2.Child21,
               .ch11 = t2.Childs1.Childs11,
               .ch12 = t2.Childs1.Childs12
           }).ToList()
lista = lst.Select(Function(t2) t2.Parent)
I noticed that the first method causes the report to open very slowly. Also, I read somewhere that Include() causes repetition of the parent items?
But anyway I want a professional opinion in general for the three methods.
Thank you!
Hello,
As far as I know, Entity Framework offers two ways to load related data after the fact. The first is called lazy loading and, with the appropriate settings, it happens automatically. In your case, your first method uses lazy loading, while the second and third are actually the same: both of them are eager loading. (In VB, you could use code like "DbContext.Database.Log = Sub(val) Diagnostics.Trace.WriteLine(val)" to see the actual generated SQL statement; you would see that the second and third queries generate a JOIN.) Since you mention that the lazy loading way performs poorly, you could use either the second or the third one.
>>Also I read somewhere that Include() cause repeat of parent items?
For this, I am not sure if you worry that it would first use lazy loading and then eager loading; however, in my test I do not see this behavior - Entity Framework seems to be smart enough to use only one mode to load data at a time. Or you could disable lazy loading when using eager loading:
context.ContextOptions.LazyLoadingEnabled = False
Regards.
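The lazy-vs-eager tradeoff can be sketched outside EF. A minimal sqlite3 illustration (hypothetical tables, not the poster's model): lazy loading issues one query per parent (the classic N+1 pattern), while eager loading - what Include() generates - issues a single JOIN:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE parent(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE child(id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO parent VALUES (1, 'p1'), (2, 'p2');
    INSERT INTO child VALUES (1, 1, 'c1'), (2, 1, 'c2'), (3, 2, 'c3');
""")

# Lazy-loading analogue: 1 query for parents + 1 per parent = N+1 round trips.
parents = con.execute("SELECT id, name FROM parent").fetchall()
lazy_queries = 1
lazy = {}
for pid, _ in parents:
    lazy[pid] = con.execute(
        "SELECT name FROM child WHERE parent_id = ?", (pid,)).fetchall()
    lazy_queries += 1

# Eager-loading analogue: one JOIN, one round trip.
eager = {}
for pid, cname in con.execute(
        "SELECT p.id, c.name FROM parent p JOIN child c ON c.parent_id = p.id"):
    eager.setdefault(pid, []).append((cname,))
eager_queries = 1

assert lazy == eager                          # same data either way
assert lazy_queries == 3 and eager_queries == 1
```

In a report that touches every parent's children anyway, the per-parent round trips are what make the lazy version feel slow.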
-
Which X Series Model Has the Best Performance?
I am looking for the best performing computer in the X series.
Which model should I buy, if that is my criteria?
Thanks!
Richard
Agreed: X201 w/ SSD. 8GB would be great *if* you like to have a *lot* of things running simultaneously. The fact is, with 4GB and an SSD you have less latency on the pagefile. (Also it saves you a few hundred bucks!)
-
Which gives better performance in webi using display attr or nav attr -
Hello all,
We are using a BEx query as the datasource for our universes, and the end users use Web Intelligence as the reporting tool (Rich Client and InfoView). We have employee as one of the InfoObjects in the cube.
Now, employee has a lot of attributes which the users want to use for reporting (the delivered employee InfoObject has quite a few attributes). We are making some of them navigational attributes, like Org Unit, since they are time-dependent and the end user will need to enter a key date to bring in the employees from the right org unit.
We have enhanced the employee attributes to hold all the address information of the employee (Z fields), and we have made those navigational attributes in RSD1.
So my question is: should we make the address Z fields navigational attributes in the cube as well and use those objects in WebI, or can we use the objects in WebI which fall under employee as details (green icons) rather than separate objects?
Please let me know what will give better performance and what the best practice is.
Thank you in advance; I appreciate everyone's help.
Edited by: Cathy on Jun 16, 2011 7:35 AM
Hi,
BEx Query Design Recommendations:
"Reduce Usage of Navigational Attributes as much as possible Also, if simply displaying a Characteristicu2019s Attribute, DO NOT use the Navigational Attribute u2013 rather utilize the Characteristic Attribute for display in the report This avoids unneeded JOINS, and also reduces total number of rows transferred to WebI"
Source : SAP Document
Thanks,
Amit -
Difference between Temp table and Variable table and which one is better performance wise?
Hello,
Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
Which one is recommended for better performance?
Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried using both a # temp table and a table variable and found the table variable faster.
Does the table variable use memory or disk space?
Thanks, Shiven :) If this answer is helpful, please vote.
Check the following link to see the differences between temp tables & table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
Temp tables & table variables both use memory & tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records then table variables are well suited.
On table variables explicit indexes are not allowed; if you define a PK column, then a clustered index will be created automatically.
But it also depends upon the specific scenarios you are dealing with - can you share them?
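Table variables are SQL Server-specific, but the "explicit index on a temp table" point can be sketched with sqlite3 (illustrative only - the table and index names are made up, and SQL Server's tempdb behavior differs in the details):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A temp table allows explicit indexes, which is what makes it the better
# fit for millions of rows; a table variable only gets the implicit PK index.
con.execute("CREATE TEMP TABLE staging(emp_id INTEGER, score REAL)")
con.executemany("INSERT INTO staging VALUES (?, ?)",
                [(i, i * 0.5) for i in range(10_000)])
con.execute("CREATE INDEX temp.ix_staging_emp ON staging(emp_id)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT score FROM staging WHERE emp_id = 4242"
).fetchall()
# The optimizer now uses the explicit index rather than a full scan.
assert any("ix_staging_emp" in row[-1] for row in plan)
```

With millions of rows, that indexed lookup versus a full scan per probe is exactly the gap the temp-table recommendation is about.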
~manoj | email: http://scr.im/m22g
http://sqlwithmanoj.wordpress.com
MCCA 2011 | My FB Page -
Which approach is having better performance in terms of time
For a large amount of data from more than two different tables which have relations,
which of the following two approaches in Oracle has better performance in terms of time (i.e., which takes less time)?
1. A single complex query
2. A bunch of simple queries
Because there is a relationship between each of the tables in the simple queries, if you adopt this approach you will have to JOIN in some way, probably via a FOR LOOP in PL/SQL.
In my experience, a single complex SQL statement is the best way to go: join in the database and return the set of data required.
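A hypothetical sqlite3 sketch of that point (not Oracle, but the shape carries over): both approaches return the same rows, yet the looped version pays one round trip per outer row while the join runs entirely inside the database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dept(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp(id INTEGER PRIMARY KEY, dept_id INTEGER, name TEXT);
    INSERT INTO dept VALUES (1, 'HR'), (2, 'ENG');
    INSERT INTO emp VALUES (1, 1, 'ann'), (2, 2, 'bob'), (3, 2, 'cat');
""")

# "Bunch of simple queries": the join is performed client-side in a loop.
looped = []
for dept_id, dept_name in con.execute("SELECT id, name FROM dept ORDER BY id"):
    for (emp_name,) in con.execute(
            "SELECT name FROM emp WHERE dept_id = ? ORDER BY id", (dept_id,)):
        looped.append((dept_name, emp_name))

# "Single complex query": the join is performed inside the database, once.
joined = con.execute("""
    SELECT d.name, e.name FROM dept d JOIN emp e ON e.dept_id = d.id
    ORDER BY d.id, e.id
""").fetchall()

assert looped == joined  # same rows; the join needs a single statement
```

The database's optimizer can also pick a hash or merge join for the single statement, which a hand-rolled loop never gets.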
SQL rules! -
Which provides better performance?
I have a ATI HD3200 video built-in to my motherboard. I also have an old Nvidia 6600 GT 128 MB videocard. Which will give me better performance?
I have the desktop effects enabled in KDE 4.2 and I also like to play the occasional 3D game.
For desktop use with compositing I'd agree with Draje. For games, you can easily enough benchmark each card and see which is best. If you go this route I strongly recommend using a lighter-weight WM just when you play games. I know my own comp went up 40fps or so when playing Nexuiz under Fluxbox as opposed to KDE 3.5.x.
-
In the below queries which gives better performance
Hi All,
Of the two queries below, which gives better performance?
The requirement is: if all 3 score columns are null then I need to return a -ve value (-9999), else some +ve value (11).
1)
select case when count(CUST_score1) + count(CUST_score2) + count(CUST_score3) = 0 then '-9999'
else '11' end
from customer
where subscriber_id = 1050
and cust_system_code = '1882484'
2)
select case when CUST_score1 is null and CUST_score2 is null and CUST_score3 is null then '-9999'
else '11' end
from customer
where subscriber_id = 1050
and cust_system_code = '1882484'
Please help; we have a lot of data in the customer table, so I need to confirm which is better.
Regards,
Chanda
user546757 wrote:
Hi All,
Of the two queries below, which gives better performance?
The requirement is: if all 3 score columns are null then I need to return a -ve value (-9999), else some +ve value (11).
1)
select case when count(CUST_score1) + count(CUST_score2) + count(CUST_score3) = 0 then '-9999'
else '11' end
from customer
where subscriber_id = 1050
and cust_system_code = '1882484'
2)
select case when CUST_score1 is null and CUST_score2 is null and CUST_score3 is null then '-9999'
else '11' end
from customer
where subscriber_id = 1050
and cust_system_code = '1882484'
Please help; we have a lot of data in the customer table, so I need to confirm which is better.
Regards,
Chanda
The two statements aren't equivalent. If you know that your WHERE condition restricts the result to a single row, then there is no point in doing a COUNT, as that introduces an additional aggregate function that isn't required for a single row. If you are dealing with multiple rows from the WHERE condition, then the second query will return multiple rows whereas the first query returns one row, so they don't do the same thing anyway. -
Which clause will yield better performance?
I've always wondered, from a performance standpoint, whether it is better to use a WITH clause within a cursor or a subquery. So if, say, I could either place my subquery in the FROM clause of my cursor (for this example that would be the case) or use a WITH clause, which would yield better performance? (I'm using Oracle 11g)
Check this link.
http://jonathanlewis.wordpress.com/2010/09/13/subquery-factoring-4/
Regards
Raj -
Which one will get better performance when traversing an ArrayList: iterators or index (get(i))?
hi, everyone,
Which one will get better performance when traversing an ArrayList: iterators, or index (get(i))?
Any reply would be valuable.
Thank you in advance.
Use the iterator, or a foreach loop, which is just syntactic sugar over an iterator. The cases where there is a noticeable difference will be extremely rare. You would only use get() if you actually measured a bottleneck, changed it, re-tested, and found a significant improvement. An iterator gives O(n) time for iterating over any collection. Using get() only works on Lists, and for a LinkedList it gives O(n^2).
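The O(n) vs O(n^2) claim is easy to reproduce outside Java. A Python sketch using collections.deque as a linked-list stand-in (illustrative only, not the JDK classes):

```python
from collections import deque
import timeit

linked = deque(range(5_000))  # deque indexing is O(n), like LinkedList.get(i)

by_iter = [x for x in linked]                       # O(n) total
by_index = [linked[i] for i in range(len(linked))]  # O(n^2) total
assert by_iter == by_index                          # same elements either way

iter_t = timeit.timeit(lambda: [x for x in linked], number=3)
index_t = timeit.timeit(
    lambda: [linked[i] for i in range(len(linked))], number=3)
assert index_t > iter_t  # the quadratic walk loses badly as n grows
```

For an array-backed list (ArrayList, Python's list) both are O(n), which is why the difference there is rarely noticeable.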
-
How to setup airport time capsule to get better performance?
I need to set up my wireless system with my new AirPort Time Capsule 3TB as the primary base station to get better performance. I have a cable modem as the primary device getting the signal (5MB) from the ISP. My network has one MacBook Pro, a MacBook Air, a Mac mini, 2 iPads and 2 iPhones, but none of them is connected all the time.
What is the best way to do that?
Which wifi channel should I choose?
What is the best way to do that?
Use Ethernet. The performance of wireless is never as good as Ethernet.
Which wifi channel should I choose?
There is no such thing as the best channel.
Leave everything on auto and see if it gives you full download speed.
Use 5GHz and keep everything close to the TC for the best wireless speed.
If you are far away it will drop back to 2.4GHz, which is slower.
Once you reach the Internet speed nothing is going to help it go faster, so you are worrying about nothing. -
Any general tips on getting better performance out of multi table insert?
I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it is quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
it's an INSERT ALL command, which ends up inserting into 5 separate tables conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables are as follows:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy; maybe 3 or 4 of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query run serially, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute into table1. That means it should also have about that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal, or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I can gather to see if maybe the database is poorly configured for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
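On the P.S.: a single INSERT ALL is one transaction, so a mid-run failure rolls everything back. If restartability matters more than atomicity, one common workaround is to drive the load in batches and commit each batch, keying each batch off what has already landed in the target. A hypothetical Python/sqlite3 sketch of the pattern (not Oracle, and the table names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE source(id INTEGER PRIMARY KEY, val TEXT)")
con.execute("CREATE TABLE target(id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO source VALUES (?, ?)",
                [(i, f"row{i}") for i in range(10_000)])
con.commit()

BATCH = 2_500
loaded = 0
while True:
    # Move the next batch of not-yet-loaded rows, then commit that batch,
    # so a failure only rolls back the batch in flight.
    cur = con.execute("""
        INSERT INTO target
        SELECT s.id, s.val FROM source s
        WHERE s.id NOT IN (SELECT id FROM target)
        ORDER BY s.id LIMIT ?
    """, (BATCH,))
    con.commit()
    if cur.rowcount == 0:
        break
    loaded += cur.rowcount

assert loaded == 10_000
```

The tradeoff: a crash now leaves the targets partially loaded, so the load must be written (as here) to be safely re-runnable.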
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter to the SIDs of my current sessions and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
I do a lot of video editing for work. I am currently using the Creative Cloud, and the programs I use most frequently are Premiere Pro CS6, Photoshop CS6, and Encore. My issue is that when I am rendering video in Premiere Pro, and most importantly, transcoding in Encore for BluRay discs, I am getting severe lag from my computer. It basically uses the majority of my computer's resources and doesn't allow me to do much else. This means, that I can't do other work while stuff is rendering or transcoding. I had this computer built specifically for video editing and need to know which direction to go in for an upgrade to get some better performance, and allow me to do other work.
For the record, I do have MPE GPU Acceleration turned ON, and I have 12 GB of RAM allotted for Adobe in Premiere Pro's settings, with 4 GB left for "other".
Here is my computer:
- Dell Precision T7600
- Windows 7 Professional, 64-bit
- Dual Intel Xeon E5-2620 2.0GHz 6-core processors
- 16GBs of RAM
- 256GB SSD as my primary drive. This is where the majority of my work is performed.
- Three 2TB secondary drives in a RAID5 configuration. This is solely for backing up data after I have worked on it. I don't really use this to work off of.
- nVidia Quadro 4000 2GB video card
When I am rendering or transcoding, my processor(s) performance fluctuates between 50%-70%, with all 12 cores active and being used. My physical memory is basically ALL used up while this is happening.
Here is where I am at on the issue. I put in a request for more RAM (32GBs), this way I can allot around 25GBs of RAM to the Adobe suite, leaving more than enough to do other things. I was told that this was not the right direction to go in. I was told that since my CPUs are working around 50-70%, it means that my video card isn't pulling enough weight. I was told that the first step in upgrading this machine that we should take, is to replace my 2GB video card with a 4GB video card, and that will fix these performance issues that I am having, not RAM.
This is the first machine that has been built over here for this purpose, so it is a learning process for us. I was hoping someone here could give a little insight to my situation.
Thanks for any help.
You have a couple of issues with this system:
Slow E5-2620's. You would be much better off with E5-2687W's
Limited memory. 32 GB is around bare minimum for a dual processor system.
Outdated Quadro 4000 card, which is very slow in comparison to newer cards and is generally not used when transcoding.
Far insufficient disk setup. You need way more disks.
A software raid5 carries a lot of overhead.
The crippled BIOS of Dell does not allow overclocking.
The SSD may suffer from severe 'stable state' performance degradation, reducing performance even more.
You would not be the first to leave a Dell in the configuration it came in. If that is the case, you need a lot of tuning to get it to run decently.
Second thing to consider is what source material are you transcoding to what destination format? If you start with AVCHD material and your destination is MPEG2-DVD, the internal workings of PR may look like this:
Convert AVCHD material to an internal intermediate, which is solely CPU bound. No GPU involvement.
Rescale the internal intermediate to DVD dimensions, which is MPE accelerated, so heavy GPU involvement.
Adjust the frame rate from 29.97 to 23.976, which again is MPE accelerated, so GPU bound.
Recode the rescaled and frame-blended internal intermediate to MPEG2-DVD codec, which is again solely CPU bound.
Apply effects to the MPEG2-DVD encoded material, which can be CPU bound for non-accelerated effects and GPU bound for accelerated effects.
Write the end result to disk, which is disk performance related.
If you export AVCHD to H.264-BR the GPU is out of the game altogether, since all transcoding is purely CPU based, assuming there is no frame blending going on. Then all the limitations of the Dell show up, as you noticed. -
I have the new lower-end 15" MBP. I have never once experienced a time where the discrete GPU outperformed the "HD Graphics 3000". Just the opposite, in fact. On a particular iTunes visualizer, I get around 55fps with the integrated GFX, and around 32 with discrete. When playing games from The Orange Box, I get 90-140fps on integrated GFX and 30-50 on discrete. These are the only specific benchmarks I have, but I have noticed the same trend elsewhere too. Is there something wrong with my computer, or is this the norm?
wyager wrote:
@ds_store I thought the HD 3000 didn't really have any RAM, but it could borrow 384 megs from the system RAM?
Yes, the HD 3000 uses main memory, where programs and working files are stored; the 384 MB is assigned when the machine boots up. If one places more RAM in the machine, there is the possibility that more memory could be used by the HD 3000.
With about 11 on the OpenGL test for the dual core with HD 3000, the quad core with HD 3000 might be around 22 or so, which might be just about as fast as the discrete Radeon 6490M (which is a poor card, BTW).
If certain programs were optimized for the HD 3000 by default, then sure, on this machine certain programs may be faster on the HD 3000 than on the discrete card.
wyager wrote:
I understand if the HD 3000 gfx performs better at encoding/decoding video, it has a whole new instruction set for that, but it seems like if it performs this well all around, apple shouldn't have wasted the space for the discrete card...
It's looking like, with the low-end 15", one has the same performance on either; it all depends on whether the software is written for integrated or discrete graphics.
Some 3D games won't run unless there is a discrete video card, so what's the use.
You can see here, though, that discrete graphics is way ahead of integrated graphics:
http://www.cbscores.com/index.php?sort=ogl&order=desc -
Please help to modify this query for better performance
Please help to rewrite this query for better performance. It is taking a long time to execute.
Table t_t_bil_bil_cycle_change contains 1,200,000 rows and table t_acctnumberTab contains 200,000 rows.
I have created an index on ACCOUNT_ID.
The query is shown below:
update rbabu.t_t_bil_bil_cycle_change a
set account_number =
( select distinct b.account_number
from rbabu.t_acctnumberTab b
where a.account_id = b.account_id )
Table structure is shown below
SQL> DESC t_acctnumberTab;
Name Type Nullable Default Comments
ACCOUNT_ID NUMBER(10)
ACCOUNT_NUMBER VARCHAR2(24)
SQL> DESC t_t_bil_bil_cycle_change;
Name Type Nullable Default Comments
ACCOUNT_ID NUMBER(10)
ACCOUNT_NUMBER VARCHAR2(24) Y
Ishan's solution is good. I would avoid updating rows which already have the right value - it's a waste of time.
You should have a UNIQUE or PRIMARY KEY constraint on t_acctnumberTab.account_id
merge into rbabu.t_t_bil_bil_cycle_change a
using
( select distinct account_number, account_id
  from rbabu.t_acctnumberTab
) b
on ( a.account_id = b.account_id
     and decode(a.account_number, b.account_number, 0, 1) = 1 )
when matched then
update set a.account_number = b.account_number
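The same skip-unchanged-rows idea can be sketched in sqlite3 (illustrative only - the MERGE above is the Oracle answer; the table names here are shortened stand-ins): the WHERE clause restricts the correlated update to rows whose value is actually missing or different:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cycle_change(account_id INTEGER, account_number TEXT);
    CREATE TABLE acctnum(account_id INTEGER PRIMARY KEY, account_number TEXT);
    INSERT INTO cycle_change VALUES (1, 'A-001'), (2, 'WRONG'), (3, NULL);
    INSERT INTO acctnum VALUES (1, 'A-001'), (2, 'A-002'), (3, 'A-003');
""")

# Correlated update that only touches rows that are missing or different;
# rows already holding the right value are left alone (IS NOT is NULL-safe).
cur = con.execute("""
    UPDATE cycle_change
    SET account_number = (SELECT a.account_number FROM acctnum a
                          WHERE a.account_id = cycle_change.account_id)
    WHERE account_number IS NOT (SELECT a.account_number FROM acctnum a
                                 WHERE a.account_id = cycle_change.account_id)
""")
assert cur.rowcount == 2   # rows 2 and 3 updated; row 1 untouched
rows = con.execute(
    "SELECT account_number FROM cycle_change ORDER BY account_id").fetchall()
assert rows == [('A-001',), ('A-002',), ('A-003',)]
```

With 1.2M rows, skipping the already-correct ones can cut the write volume (and the undo generated) substantially.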