Better Performance: Design2002 or Design2003
A remark came up today in another thread that Thorsten Blechinger was still using Design2002 because it was faster (i.e. page rendering) than Design2003.
I have seen the exact opposite, and I thought this might be worth further discussion. I will post some more details here shortly on my experiences with the two designs.
In the interest of understanding the difference myself, as well as seeing whether there is a significant difference in performance, I created a small page.
<b>Page Attribute</b>
itab TYPE ZUSR02
<i>* In order to do this quickly I made a ZUSR02 Table Type and gave it the Line Type of USR02</i>
<b>Page Layout</b>
<%@page language="abap" %>
<%@extension name="htmlb" prefix="htmlb" %>
<htmlb:content design="design2003" >
<htmlb:page title=" " >
<htmlb:form>
<htmlb:tableView id = "myTable"
width = "100%"
headerVisible = "true"
headerText = "User List"
design = "alternating"
sort = "server"
footerVisible = "true"
visibleRowCount = "20"
selectionMode = "SINGLESELECT"
onRowSelection = "MyTableSelect"
fillUpEmptyRows = "X"
filter = "server"
table = "<%= itab %>" />
</htmlb:form>
</htmlb:page>
</htmlb:content>
<b>OnCreate</b>
select * from USR02 into corresponding fields of table itab.
I made the page as small and as compact as possible using strictly HTMLB tags and nothing more.
There are 45 users in the table and approximately 50% of all data is filled in.
<b>Page Size</b>
The resulting compiled page was 176.39 KB
I then copied the page "table_2003.htm" to "table_2002.htm" and changed the design to "design2002"; the resulting compiled page was 87.12 KB.
That in itself is about a 49% difference.
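As a sanity check on that arithmetic, the two compiled-page sizes reported above can be compared directly (a trivial sketch; the KB figures are the ones measured above):

```python
# Compiled-page sizes reported above, in KB.
kb_design2003 = 176.39
kb_design2002 = 87.12

# The Design2002 page is roughly 49% of the Design2003 page's size.
ratio = kb_design2002 / kb_design2003
print(f"{ratio:.1%}")  # 49.4%
```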
<b>Load times</b>
As we have seen from Artem and Brian, this kind of testing is difficult and not at all exact. We have also learned from them that there should be a difference between 2003 and 2002, and that 2003 should render more slowly due to its "complex rendering aspects". With this greatly reduced version of my initial test page I'm beginning to notice that as well.
I cleared all my cache, cookies and everything. I also set the page in SICF not to require a login, to keep things as streamlined as possible.
Load 2002 2003
1 584ms 387ms (Initial Start)
2 547ms 225ms (Refresh)
3 181ms 575ms (Refresh)
4 011ms 133ms (Refresh)
5 262ms 576ms (Refresh)
6 517ms 335ms (Refresh)
7 112ms 205ms (Refresh)
8 698ms 425ms (Refresh)
9 162ms 595ms (Refresh)
10 396ms 114ms (Refresh)
Load 2002 2003
1 112ms 342ms (Page 2)
2 098ms 287ms (Page 3)
3 056ms 512ms (First Page)
4 078ms 506ms (Last Page)
5 384ms (Page 1, typed in)
6 298ms (Page 2, typed in)
7 172ms (Page 3, typed in)
What does all that tell me? For me it says there are slight differences in the rendering that match what Brian and Artem have said, and that any performance problems lie on the server side.
This seems to agree with Artem: <i>b. This time consists of request processing, data preparation and HTML rendering. Request processing is the same (see a.), but the other points are really different, and I must say that Design2003 rendering and data preparation in general is much more complicated than Design2002. This means on this point 2003 is definitely slower than 2002.</i>
Therefore I would also definitely agree with his statement: <i>So the conclusion is that from the server point of view, Design2003 is definitely slower than 2002. However it will be unnoticeable for the user and, I guess, the user will choose Design2003 because of the look and convenience, not the server processing time :)</i>
<b>"perceived" performance</b>
Now, with Design2003 there seems to be a slight pause before the entire page is displayed, whereas with Design2002 you don't seem to have this delay. To gauge what "seems" faster, I asked 20 people to take a look at the pages and tell me which loaded faster. Each of these people has a different PC with different things running at the moment, but all have at least Win2k, 512MB or more, and IE 6.0 SP1. Of them, 13 said "table_2002.htm" was faster, 10 said "table_2003.htm" was faster, and 2 said they saw no difference at all.
On another note, 21 said they would prefer to use "table_2003.htm" because it looks better, 2 of them mentioning they liked being able to type a page number into the footer and jump to that page, and 4 said the look didn't matter, they just wanted it to work.
The numbers I posted here are averages: I ran the same set of tests 30 times and took the average time for round 1 across the 30 runs (and so on for each round).
Target was "Page Rendering".
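For what it's worth, the averaging described above (the mean of each round's time across all 30 runs) can be sketched like this; the numbers below are made up for illustration, not my measurements:

```python
# Each inner list is one test run: load times (ms) for rounds 1..4.
# Sample values are invented; only the averaging method matches the post.
runs = [
    [584, 547, 181, 401],   # run 1
    [560, 530, 200, 380],   # run 2
    [600, 520, 190, 420],   # run 3
]

num_rounds = len(runs[0])
# For each round, average that round's time across all runs.
averages = [
    sum(run[r] for run in runs) / len(runs)
    for r in range(num_rounds)
]
print(averages)
```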
I have already spent 4 hours on these tests, and what I really want is to try similar tests with the various comparable components in HTMLB, PHTMLB and XHTMLB; however, at some point today I need to actually do my job.
I'm looking forward to seeing the results of others' tests.
Similar Messages
-
Difference between Temp table and Variable table and which one is better performance wise?
Hello,
Could anyone explain the difference between temp tables (#, ##) and table variables (DECLARE @V TABLE (EMP_ID INT))?
Which one is recommended for better performance?
Also, is it possible to create a CLUSTERED or NONCLUSTERED index on a table variable?
In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried using both a # table and a table variable, and found the table variable faster.
Does a table variable use memory or disk space?
Thanks, Shiven :) If the answer is helpful, please vote.
Check the following link to see the differences between temp tables & table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
Temp tables & table variables both use memory & tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, table variables are well suited.
On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
But it also depends on the specific scenario you are dealing with; can you share it?
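To make Manoj's point about explicit indexes concrete, here is a small sketch using Python's stdlib sqlite3 in place of SQL Server. The semantics differ, but like a #temp table, a sqlite TEMP table accepts explicit secondary indexes; all table and column names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A temporary table, scoped to this connection, with an explicit
# secondary index -- the main argument for #temp tables over table
# variables at high row counts.
conn.execute("CREATE TEMP TABLE emp_stage (emp_id INTEGER, dept TEXT)")
conn.execute("CREATE INDEX idx_stage_dept ON emp_stage (dept)")

conn.executemany(
    "INSERT INTO emp_stage VALUES (?, ?)",
    [(1, "HR"), (2, "IT"), (3, "IT")],
)

# The filtered count can now use the index on dept.
rows = conn.execute(
    "SELECT COUNT(*) FROM emp_stage WHERE dept = 'IT'"
).fetchone()[0]
print(rows)  # 2
```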
~manoj | email: http://scr.im/m22g
http://sqlwithmanoj.wordpress.com
MCCA 2011 | My FB Page -
How to setup airport time capsule to get better performance?
I need to set up my wireless network with my new AirPort Time Capsule 3TB as the primary base station to get better performance. I have a cable modem as the primary device getting the signal (5MB) from the ISP. My network has one MacBook Pro, a MacBook Air, a Mac mini, 2 iPads and 2 iPhones, though none of them are connected all the time.
What is the best way to do that?
Use Ethernet; wireless performance is never as good as Ethernet.
What wifi channel need choose to?
There is no such thing as the best channel.
Leave everything on auto and see if it gives you full download speed.
Use 5GHz and keep everything close to the TC for the best wireless speed.
If you are far away it will drop back to 2.4GHz, which is slower.
Once you reach your internet speed nothing is going to make it faster, so you are worrying about nothing. -
My application was designed based on MVC architecture, but I made some changes to it based on my requirements. The servlet invokes helper classes, and the helper classes use EJBs to communicate with the database. The JSPs also use EJBs to get results back.
I have two EJBs (stateless), one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller, and all database transactions are done through the EJBs only. The helper classes hold the business logic. Based on the request, the relevant helper class is invoked by the servlet. Session scope is 'page' only.
Now I am planning to use EJBs (for the business logic) instead of the helper classes. But before doing that I need some clarification regarding network traffic and better usage of container resources.
Please suggest which method (helper classes or EJBs) is preferable
1) to get better performance and.
2) for less network traffic
3) for better container resource utilization
I thought if I use EJBs, then the network traffic will increase. Because every time it make a remote call to EJBs.
Please give detailed explanation.
thank you,
sudheer
<i>Please suggest which method (helper classes or EJBs) is preferable:
1) to get better performance</i>
EJB's have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB. Often considerably. If you plan on making your 70 helper classes EJB's you should expect to see a dramatic decrease in maximum throughput.
<i>2) for less network traffic</i>
There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJB's and JSP's are co-located there won't be any other additional overhead there either. (You are co-locating your JSP's and EJB's, aren't you?)
<i>3) for better container resource utilization</i>
Again, the EJB version will consume a lot more container resources. -
Any general tips on getting better performance out of multi table insert?
I have been struggling to code a multi table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command which ends up inserting into 5 separate tables, conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I have done myself. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried: with APPEND, without APPEND, and PARALLEL (though adding PARALLEL seemed to make the query run serially, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute into table1. That means it should also have about that many in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal, or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If my performance does seem slow, what else do you think I should try? Is there any information I can gather to see whether the database is poorly configured for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter to the SIDs of my current sessions and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
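One point from the original post that the reply above does not address is the P.S. about committing every x rows. In Oracle this is typically done with a PL/SQL loop using BULK COLLECT ... LIMIT and a periodic COMMIT (note that committing mid-load changes failure semantics and defeats a single /*+ append */ direct-path load). The generic batched-commit pattern, sketched here with Python's stdlib sqlite3 rather than Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (n INTEGER)")

BATCH = 1000      # commit every BATCH rows
commits = 0
batch = []
for n in range(3500):          # stand-in for rows streamed from the source
    batch.append((n,))
    if len(batch) == BATCH:    # flush and commit a full batch
        conn.executemany("INSERT INTO target VALUES (?)", batch)
        conn.commit()
        commits += 1
        batch = []
if batch:                      # flush the final partial batch
    conn.executemany("INSERT INTO target VALUES (?)", batch)
    conn.commit()
    commits += 1

total = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(total, commits)  # 3500 4
```

The upside is that a mid-run failure only rolls back the current batch; the downside is that a partially loaded target is no longer atomic, so you need to make the load restartable.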
I do a lot of video editing for work. I am currently using the Creative Cloud, and the programs I use most frequently are Premiere Pro CS6, Photoshop CS6, and Encore. My issue is that when I am rendering video in Premiere Pro, and most importantly, transcoding in Encore for BluRay discs, I am getting severe lag from my computer. It basically uses the majority of my computer's resources and doesn't allow me to do much else. This means, that I can't do other work while stuff is rendering or transcoding. I had this computer built specifically for video editing and need to know which direction to go in for an upgrade to get some better performance, and allow me to do other work.
For the record, I do have MPE GPU acceleration turned ON, and I have 12GB of RAM allotted to Adobe in Premiere Pro's settings, with 4GB left for "other".
Here is my computer:
- Dell Precision T7600
- Windows 7 Professional, 64-bit
- DUAL Intel Xeon CPU E-2620 - 2.0GHz 6-core Processors
- 16GBs of RAM
- 256GB SSD as my primary drive. This is where the majority of my work is performed.
- Three 2TB secondary drives in a RAID5 configuration. This is solely for backing up data after I have worked on it. I don't really use this to work off of.
- nVidia Quadro 4000 2GB video card
When I am rendering or transcoding, my processor(s) performance fluctuates between 50%-70%, with all 12 cores active and being used. My physical memory is basically ALL used up while this is happening.
Here is where I am at on the issue. I put in a request for more RAM (32GBs), this way I can allot around 25GBs of RAM to the Adobe suite, leaving more than enough to do other things. I was told that this was not the right direction to go in. I was told that since my CPUs are working around 50-70%, it means that my video card isn't pulling enough weight. I was told that the first step in upgrading this machine that we should take, is to replace my 2GB video card with a 4GB video card, and that will fix these performance issues that I am having, not RAM.
This is the first machine that has been built over here for this purpose, so it is a learning process for us. I was hoping someone here could give a little insight to my situation.
Thanks for any help.
You have a couple of issues with this system:
Slow E5-2620's. You would be much better off with E5-2687W's
Limited memory. 32 GB is around bare minimum for a dual processor system.
Outdated Quadro 4000 card, which is very slow in comparison to newer cards and is generally not used when transcoding.
Far insufficient disk setup. You need way more disks.
A software raid5 carries a lot of overhead.
The crippled BIOS of Dell does not allow overclocking.
The SSD may suffer from severe 'stable state' performance degradation, reducing performance even more.
You would not be the first to leave a Dell in the configuration it came in. If that is the case, you need a lot of tuning to get it to run decently.
Second thing to consider is what source material are you transcoding to what destination format? If you start with AVCHD material and your destination is MPEG2-DVD, the internal workings of PR may look like this:
Convert AVCHD material to an internal intermediate, which is solely CPU bound. No GPU involvement.
Rescale the internal intermediate to DVD dimensions, which is MPE accelerated, so heavy GPU involvement.
Adjust the frame rate from 29.97 to 23.976, which again is MPE accelerated, so GPU bound.
Recode the rescaled and frame-blended internal intermediate to MPEG2-DVD codec, which is again solely CPU bound.
Apply effects to the MPEG2-DVD encoded material, which can be CPU bound for non-accelerated effects and GPU bound for accelerated effects.
Write the end result to disk, which is disk performance related.
If you export AVCHD to H.264-BR the GPU is out of the game altogether, since all transcoding is purely CPU based, assuming there is no frame blending going on. Then all the limitations of the Dell show up, as you noticed. -
In PI 7.1 better performance is reached using RFC or Proxy?
Hello Experts,
As with PI 7.1 which one would be better option to have better performance?
1) Proxy, which goes through the Integration Engine, omitting the Advanced Adapter Engine
2) RFC, which goes through the AAE, omitting the Integration Engine
As we know, there are a lot of advantages of proxies over RFC:
1. Proxy communication always bypasses the Adapter Engine and interacts directly with the application system and Integration Engine, so it gives us better performance.
2. Proxies communicate with the XI server by means of native SOAP calls over HTTP.
3. Easy to handle messages with ABAP programming if it is ABAP Proxy .
4. Proxy is good for large volumes of data. We can catch and persist the errors (both system & application faults) generated by the proxy setting.
Thanks in Advance
Rajeev
Hey,
More than the performance, it's a question of requirements.
There are several restrictions you need to consider before using the AAE. To name a few:
The IDoc and HTTP adapters won't be available
No support for ABAP mapping
No support for BPM
No support for proxy
No support for multi-mapping or content-based routing (in the first release)
So if you want to use any of the above, you can't use the AAE in the first place. But performance is significantly improved, up to 4 times better than the plain AE-IE path.
/people/william.li/blog/2008/01/10/advanced-adapter-engine-configuration-in-pi-71
check the above blog and the article mentioned in it.
Now coming to proxies: they support all of the above, and the performance is not that bad either.
So it all boils down to what your requirements are :)
Thanks
Aamir -
How to get better performance here
hi there,
The code below is used within a loop. How can I modify it to get better performance?
SELECT knumv kposn kwert FROM konv
INTO CORRESPONDING FIELDS OF lt_konv
WHERE knumv EQ lt_output-knumv
AND kposn EQ lt_output-posnr
AND kschl EQ 'VPRS'.
COLLECT lt_konv.
ENDSELECT.
thx in adv.
The better solution for the SELECT statement would be to use the aggregate function SUM on the field kwert:
SELECT knumv kposn SUM( kwert )
FROM konv
INTO CORRESPONDING FIELDS OF TABLE lt_konv
WHERE knumv EQ lt_output-knumv
AND kposn EQ lt_output-posnr
AND kschl EQ 'VPRS'
GROUP BY knumv kposn.
The SELECT is inside the loop over lt_output.
Aggregate functions and FOR ALL ENTRIES cannot be combined; FOR ALL ENTRIES is a SELECT DISTINCT!
So you must keep the loop around the SELECT and you can't use FOR ALL ENTRIES, but that is OK.
Siegfried -
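Siegfried's advice, pushing the aggregation into the database rather than collecting row by row, applies beyond ABAP. A minimal sketch with Python's stdlib sqlite3 (the table is a made-up stand-in for KONV):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE konv_demo (knumv TEXT, kposn INTEGER, kschl TEXT, kwert REAL)"
)
conn.executemany(
    "INSERT INTO konv_demo VALUES (?, ?, ?, ?)",
    [
        ("A1", 10, "VPRS", 5.0),
        ("A1", 10, "VPRS", 7.5),   # same key: should be summed
        ("A1", 10, "ZZZZ", 99.0),  # wrong condition type: filtered out
    ],
)

# One aggregated SELECT replaces the per-row SELECT ... COLLECT loop.
row = conn.execute(
    "SELECT knumv, kposn, SUM(kwert) FROM konv_demo "
    "WHERE kschl = 'VPRS' GROUP BY knumv, kposn"
).fetchone()
print(row)  # ('A1', 10, 12.5)
```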
Which approach gives better performance in terms of time
For a large amount of data from more than two related tables, which of the following two approaches performs better in Oracle in terms of time (i.e. takes less time)?
1. A single complex query
2. A bunch of simple queries
Because there is a relationship between the tables, with the simple-query approach you will have to join in some way yourself, probably via a FOR LOOP in PL/SQL.
In my experience, a single complex SQL statement is the best way to go, join in the database and return the set of data required.
SQL rules! -
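The trade-off can be made concrete with a small sketch using Python's stdlib sqlite3 (made-up dept/emp tables): the per-parent loop issues one query per row, while the single join returns the same result set in one round trip:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (id INTEGER PRIMARY KEY, dept_id INTEGER, name TEXT);
    INSERT INTO dept VALUES (1, 'HR'), (2, 'IT');
    INSERT INTO emp  VALUES (10, 1, 'Ann'), (11, 2, 'Bob'), (12, 2, 'Cid');
""")

# Approach 2: a bunch of simple queries, joined client-side in a loop
# (one extra query per parent row).
many = []
for dept_id, dept_name in conn.execute(
    "SELECT id, name FROM dept ORDER BY id"
).fetchall():
    for (emp_name,) in conn.execute(
        "SELECT name FROM emp WHERE dept_id = ? ORDER BY id", (dept_id,)
    ):
        many.append((dept_name, emp_name))

# Approach 1: a single query, joined inside the database.
one = conn.execute(
    "SELECT d.name, e.name FROM dept d JOIN emp e ON e.dept_id = d.id "
    "ORDER BY d.id, e.id"
).fetchall()

print(one == many)  # True: same rows, but one query instead of N+1
```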
Which gives better performance in WebI: display attributes or navigational attributes? -
Hello all,
We are using a BEx query as the data source for our universes, and the end users use Web Intelligence as the reporting tool (Rich Client and InfoView). We have employee as one of the InfoObjects in the cube.
Now, employee has a lot of attributes the users want to use for reporting (the delivered employee InfoObject has quite a few), and we are making some of them navigational attributes, like org unit, since they are time-dependent and the end user will need to enter a key date to get the employees from the right org unit.
We have enhanced the employee attributes to hold all the address information of the employee (Z fields), and we have made those navigational attributes in RSD1.
So my question is: should we make the address Z fields navigational attributes in the cube as well and use those objects in WebI, or can we use the objects in WebI which fall under employee as details (green icons) rather than separate objects?
Please let me know which gives better performance and what the best practice is.
Thank you in advance; I appreciate everyone's help.
Edited by: Cathy on Jun 16, 2011 7:35 AM
Hi,
BEx Query Design Recommendations:
"Reduce Usage of Navigational Attributes as much as possible Also, if simply displaying a Characteristicu2019s Attribute, DO NOT use the Navigational Attribute u2013 rather utilize the Characteristic Attribute for display in the report This avoids unneeded JOINS, and also reduces total number of rows transferred to WebI"
Source : SAP Document
Thanks,
Amit -
Scale out SSAS server for better performance
Hi,
I have a SharePoint farm running PerformancePoint Services on a server where Analysis Server and Reporting Server are installed, and we have the Analysis Server DBs and cubes, plus a WFE server where the Secure Store Service is running.
We have:
1) an application server + domain controller
2) two WFEs
1) a SQL Server for SharePoint
1) an SSAS server (Analysis Server DBs + Reporting Server)
How do I scale out my SSAS server here for better performance?
adil
Just trying to get a definitive answer to the question: can we use a shared VHDX in a SOFS cluster which will be used to store VHDX files?
We have a 2012 R2 RDS solution and store the User Profile Disks (UPDs) on a SOFS cluster that uses "traditional" storage from a SAN. We are planning on creating a new SOFS cluster and wondered if we can use a shared VHDX instead of CSV as the storage that will then be used to store the UPDs (one VHDX file per user).
Cheers for now
Russell
Sure you can do it. See:
Deploy a Guest Cluster Using a Shared Virtual Hard Disk
http://technet.microsoft.com/en-us/library/dn265980.aspx
Scenario 2: Hyper-V failover cluster using file-based storage in a separate Scale-Out File Server
This scenario uses Server Message Block (SMB) file-based storage as the location of the shared .vhdx files. You must deploy a Scale-Out File Server and create an SMB file share as the storage location. You also need a separate Hyper-V failover cluster.
The following table describes the physical host prerequisites.
Cluster Type
Requirements
Scale-Out File Server
At least two servers that are running Windows Server 2012 R2.
The servers must be members of the same Active Directory domain.
The servers must meet the requirements for failover clustering.
For more information, see Failover Clustering Hardware Requirements and Storage Options and Validate
Hardware for a Failover Cluster.
The servers must have access to block-level storage, which you can add as shared storage to the physical cluster. This storage can be iSCSI, Fibre Channel, SAS, or clustered storage spaces that use a set of shared SAS JBOD enclosures.
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Will insert (ignore duplicates) have a better performance than merge?
Will insert (ignore duplicates) have a better performance than merge (insert if not duplicate)?
Ok. Here is exactly what is happenning -
We had a table with no unique index on it. We used an 'INSERT ALL' statement to insert records.
But later, when we found duplicates in there, we started removing them manually.
Now, to resolve the issue, we added a unique index and added exception handling to ignore the DUP_VAL_ON_INDEX exception.
But with this, all records being inserted by the 'INSERT ALL' statement get ignored even if only one record is a duplicate.
Hence we have finally replaced 'INSERT ALL' with a MERGE statement, which inserts only if a corresponding record is not found in the table (matching on the unique index column).
But I am wondering how much the performance will be impacted.
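The per-row behavior the poster wanted can be sketched with Python's stdlib sqlite3, whose INSERT OR IGNORE skips only the offending rows rather than failing the whole batch (Oracle's rough equivalents being MERGE, as chosen above, or the 11g IGNORE_ROW_ON_DUPKEY_INDEX hint; the table here is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'old')")  # pre-existing row

batch = [(1, "dup"), (2, "new"), (3, "new")]

# Per-row ignore: only the duplicate key is skipped; the rest of the
# batch still lands, and the existing row is left untouched.
conn.executemany("INSERT OR IGNORE INTO t VALUES (?, ?)", batch)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
val1 = conn.execute("SELECT val FROM t WHERE id = 1").fetchone()[0]
print(count, val1)  # 3 old
```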
Which method has a better performance ?
Hello !
I'm using entity framework , and I have several cases where I should run a query than return some parent items , and after I display these parents and the related children in one report.
I want to know which of these methods have the better performance : ( or is there any other better method ??? )
Method1: (the child collections are loaded later, using lazy loading)
Dim lista as IQueryable(Of MyObj) = (From t In context.MyObjs Where(..condition..) select t).Tolist
Method2:
Dim lista as IQueryable(Of MyObj) = (From t In context.MyObjs Where(..condition..) _
.Include(Function(t2) t2.Childs1) _
.Include(Function(t2) t2.Childs2) _
.Include(Function(t2) t2.Childs2.Child22) _
.Include(Function(t2) t2.Childs1.Childs11) _
.Include(Function(t2) t2.Childs1.Childs12) _
Select t).ToList
Method3:
Dim lista as IQueryable(Of MyObj)
Dim lst= (From t2 In context.MyObjs Where(..condition..) Select New with _
{ .Parent=t2
.ch1=t2.Childs1 _
.ch2=t2.Childs2 _
.ch21=t2.Childs2.Child21) _
.ch11=t2.Childs1.Childs11) _
.ch12= t2.Childs1.Childs12 _
).ToList
lista=lst.Select(Function(t2) t2.parent)
I noticed that the first method causes the report to open very slowly. Also, I read somewhere that Include() causes repetition of parent items?
But anyway, I want a professional opinion in general on the three methods.
Thank you!
Hello,
As far as I know, Entity Framework offers two ways to load related data after the fact. The first is called lazy loading and, with the appropriate settings, it happens automatically. In your case, your first method uses lazy loading, while the second and third are actually the same: both of them are eager loading. (In VB, if you log the generated SQL with code such as "DbContext.Database.Log = Sub(val) Diagnostics.Trace.WriteLine(val)", you will see that the second and third queries generate a join.) Since you mention the lazy loading way performs poorly, you could use either the second or the third one.
>>Also I read somewhere that Include() cause repeat of parent items?
For this, I am not sure if you worry it would first use lazy loading and then eager loading; however, in my test I do not see this behavior. Entity Framework seems smart enough to use one mode to load data at a time. Or you could disable lazy loading when using eager loading:
context.ContextOptions.LazyLoadingEnabled = false
Regards.
Better performance setting...
Hi everyone!
I keep my PowerBook on pretty much all the time because I need it for email. It's now 3 years old, and although it runs well, it seems a bit faster with the Better Performance setting than with Normal. Is there any harm in leaving it on this setting all the time?
Thanks,
Reg
Thanks for the reply!
Ok that makes me feel better...yes sometimes if I need better battery life I will lower it to Normal when unplugged...but most of the time I use it plugged anyways...
Now I can feel good to always use Better Performance...I don't know if others can notice the difference but I can with all the latest upgrades in software...
Thanks again,
Reg -
Get a better performance with audio interface?
Hi all.
I got a silly question. I run Logic 7 on a PowerBook G4 12" with 1.33 GHz and (only) 768 MB of RAM and a (hard-to-build-in) 160 GB hard drive.
After often getting the "Disk is too slow..." message, I was thinking about buying an external audio interface (PreSonus FireBox).
My simple question is: does an external audio interface give Logic (or Core Audio) more free DSP power, so that maybe I'd get better performance? Or do I need to get something like a TC PowerCore FireWire?
An answer would be great... otherwise I'll have to sell my nice and small 12-incher and buy an Intel book...
Thanks. Frank.
Powerbook G4 12" 1.33 Ghz Mac OS X (10.4.8)A new audio interface won't help.
You should start by getting an external 7200 RPM Firewire drive to put your audio on. That right there will help, especially with higher track counts.
Secondly, get more RAM. 768 MB is borderline for Logic. I always recommend at least 1 GB. Go ahead and max out that PowerBook to 2 GB of RAM if you can.
All that to say, an Intel mac would be a huge improvement over the Powerbook. And if you do go that route, still get an external FW drive for audio, and as much RAM as you can afford.