Query performance impact
Hi Team,
We have a few queries that were running fine until last week, but for the past 3 days these queries have been facing severe performance issues and timeout dumps in the back end.
For some selections a query runs long, for some it executes quickly, and for some it times out.
We did a complete data rebuild (from the source) for the data targets connected to these queries 3 days ago; the query performance issues started after that.
No changes were made to the queries or objects for the last 2 months.
Data flow: Query -> MultiProvider -> InfoSet -> InfoCube -> DSO -> DataSource (DB Connect).
Note:
In the query we use nested aggregation to handle the result rows, but again, no changes have been made to it in the past 2 months.
We have loaded data in one single request at the InfoCube level.
I mean, about 2 million records for different plants in one single request - does that have a performance impact when reading the data?
Can anyone please shed light on the possible cause of the performance issue?
Thanks
Regards
San
Hi San,
Since you said that the performance issues started only after the complete data load, can you please tell us whether you are using a BIA (BI Accelerator) for reporting?
If you are not using a BIA, please delete the DB statistics for those InfoCubes and then recreate them.
Also, since you completely rebuilt the data (i.e. drop and reload), your PSA tables or your temporary tablespace might have filled up completely. Ask your Basis team to check the space used by those tables.
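If the InfoCubes sit on an Oracle database, deleting and regathering the fact-table statistics could look roughly like this from SQL*Plus (a sketch only; the fact-table name '/BIC/FSALESCUBE' is a placeholder, and in a BW system the standard statistics jobs or the performance tab of cube maintenance would normally be used instead):
execute dbms_stats.delete_table_stats(ownname => user, tabname => '/BIC/FSALESCUBE')
execute dbms_stats.gather_table_stats(ownname => user, tabname => '/BIC/FSALESCUBE', cascade => true)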
Regards,
Rajesh
Similar Messages
-
Performance impact related to workbook
Hi All,
Please let me know the performance impact on a workbook if I insert 13 worksheets.
How much of a performance impact will there be?
If I instead insert all the reports into 2 worksheets by splitting them,
how much of a performance impact will there be?
Please let me know which is the better way.
Urgent!
Thanks,
Sarau
Hello Sarau,
Performance is driven primarily by the query selection, since that decides the number of records to be fetched.
The more you filter the query, the better the performance will be.
Since you have many queries in one workbook, make sure they all share a common variable screen.
Thanks
Chandran -
Performance Impact with OR concatenation / Inlist Iterator
Hello guys,
Is there any performance impact from using OR concatenations versus IN-lists?
The function of both is the "same":
1) Concatenation (OR processing)
SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
- Similar to a rewrite of the query into 2 separate queries,
- which are then 'concatenated'.
2) In-list iterator
SELECT * FROM dept WHERE d# IN (10,20,30);
- Iteration over an enumerated value list,
- every value executed separately.
- Same as a concatenation of 3 "OR-ed" values.
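For illustration, the OR-expansion described above would conceptually rewrite the IN-list query into something like the following (a sketch of the idea, not the optimizer's literal output):
SELECT * FROM dept WHERE d# = 10
UNION ALL
SELECT * FROM dept WHERE d# = 20
UNION ALL
SELECT * FROM dept WHERE d# = 30;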
So I want to know whether there is any performance impact from using IN-lists instead of OR concatenations.
Thanks and Regards
Stefan
The note is very misleading and far from complete, but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose?)
The note, in effect, is just about a slightly more subtle version of "why isn't Oracle using my index". For "shorter" lists you might get an indexed iteration, for "longer" lists you might get a tablescan.
Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
Quick example to demonstrate the difference between concatenation and iteration:
drop table t1;
create table t1 as
select
rownum id,
rownum n1,
rpad('x',100) padding
from
all_objects
where
rownum <= 10000;
create index t1_i1 on t1(id);
execute dbms_stats.gather_table_stats(user,'t1')
set autotrace traceonly explain
select
/*+ use_concat(t1) */
n1
from
t1
where
id in (10,20,30,40,50,60,70,80,90,100);
set autotrace off
The execution plan I got from 8.1.7.4 was as follows - showing the transformation to a UNION ALL - this is concatenation and required 10 query block optimisations (which were all done three times):
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
1 0 CONCATENATION
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
3 2 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
5 4 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
6 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
7 6 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
8 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
9 8 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
10 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
11 10 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
12 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
13 12 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
14 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
15 14 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
16 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
17 16 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
18 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
19 18 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
20 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
21 20 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
1 0 INLIST ITERATOR
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
3 2 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
Performance Impact of Unique Constraint on a Date Column
In a table I have a compound unique constraint which extends over 3 columns. As a part of functionality I need to add another DATE column to this unique constraint.
I would like to know the performance implications of adding a DATE column to the unique constraint. Would the DATE column behave like another VARCHAR2 or NUMBER column, or would it degrade the performance significantly?
Thanks
Message was edited by:
user627808
What performance are you concerned about degrading? Inserts? Or queries? If you're talking about queries, what sort of access path are you concerned about?
Are you concerned that merely changing the definition of the unique constraint would impact performance? Or are you worried that whatever functional change you are making would impact performance (i.e. if you are now retaining historical data in the table rather than just updating it)?
Regardless of the performance impact, unique indexes (and unique constraints) need to be correct. If you need to allow duplicates on the 3 current columns with different dates, then you would need to change the unique constraint definition regardless of the performance impact. Fast and wrong generally isn't going to be preferable to slow and right.
Generally, though, there probably is no reason to be terribly concerned about performance here. Indexing a date is no different than indexing any other primitive data type.
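For illustration only (table, constraint, and column names are invented), redefining the unique constraint to include the date column could look like this:
alter table order_lines drop constraint order_lines_uk;
alter table order_lines add constraint order_lines_uk
  unique (region_id, product_id, customer_id, effective_date);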
Justin -
Performance impact of Multi Data Provider in WAD !!
Hello Experts,
I am working on WAD reports with multiple Data Providers,
I.e. Web Template with DP1, DP2, DP3, DP4 --- Query 1
DP5, DP6, DP7, DP8 --- Query 2
Purpose: I have used a Tab Strip item with eight tabs; each tab belongs to its respective DP.
- Each tab represents a different view of the report (using commands).
So, do multiple data providers with multiple queries have an impact on the performance of the web report? And if they do, what are possible ways to improve performance? Any advice/experience on this?
Many thanks in advance.
Regards,
Sunil Patel
Hi,
You could work with the settings for your queries in RSRT.
There you can define what data is read when the query is run and what is read when users navigate within the query.
By changing these settings you might get somewhat better performance.
Working with aggregates, though, probably has more effect on performance.
regards
Cornelia -
Table has 80 million records - Performance impact if we stop archiving
HI All,
I have a table (Oracle 11g) with around 80 million records. Until now we have done weekly archiving to maintain its size, but one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a bit of performance tuning.
I was just wondering whether that is true and, moreover, what kind of effect there would be on querying and insertion if the table holds 80 million rows and grows every day.
Any comments welcomed.
What is true is that the Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes or 5 KB.
About performance impact, it depends on the queries that access this table: the more data queries need to process and/or to return as result set, the more this can have an impact on performance for these queries.
You don't give enough input to give a good answer. Ideally you should give DDL statements to create this table and its indexes and SQL queries that are using these tables.
In some cases table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition and additional licensing); a sketch of what that could look like is below.
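As an illustration only, with invented table and column names, a monthly interval-partitioned table in 11g could be declared like this:
create table big_events (
  event_id   number        not null,
  event_date date          not null,
  payload    varchar2(200)
)
partition by range (event_date)
interval (numtoyminterval(1, 'MONTH'))
(
  partition p_initial values less than (date '2012-01-01')
);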
Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 . -
Performance impact of using Web Services?
As BEA and other vendors continue to add Web Services support
to their enterprise software, what is your plan for
quantifying the performance impact and the functional
correctness of using web services before going live with the
final application?
Empirix is hosting a free one hour web event discussion on
web services testing and automated web services testing
solutions on Thursday, January 17, 2-3pm Eastern time.
To sign-up for this web event or learn about other web
events being offered by Empirix this month, go to:
http://webevents.empirix.com
For your convenience, here is the complete abstract:
The advent of web services has brought the promises of
integrating multiple software applications from
heterogeneous networks and for exchanging information
from vendor-to-vendor or vendor-to-consumer in a
standardized way.
As web service technologies are deployed within and across
organizations over the next several years, it will be
critical that web services undergo performance testing.
As with any enterprise software project, the adoption of
proper test methodologies and use of testing tools will
play a key part in the overall success or failure of
projects utilizing web services. In a compressed
software project schedule, an organization must
quickly determine if its web services will operate
successfully under a variety of load conditions. Like other
web-based technologies, successful web services will need
to respond quickly and correctly when implemented.
During our presentation, we will discuss the testing
challenges created by this emerging technology, along with
the variety of testing solutions available. Automated
web service testing will be discussed and demonstrated
using FirstACT, the first web services performance testing solution available
on the market. Using a sample web
service, automatic test case creation, scalability testing,
and results analysis will be explored.
If you wish to download FirstACT prior to the web event, you can do so at:
http://www.empirix.com/downloads/FirstACT -
Performance impact of Web Services
As WebLogic adds support for Web Services to its platform, what is
your plan for quantifying the performance impact and the functional
correctness of using web services before going live with the final
application.
Empirix is hosting a free one hour web event discussion on web
services testing and automated web services testing solutions on
Thursday, January 17, 2-3pm Eastern time.
To register for this web event or learn about other web events being
offered by Empirix this month, go to:
http://webevents.empirix.com
The complete abstract is below:
The advent of web services has brought the promises of integrating
multiple software applications from heterogeneous networks and for
exchanging information from vendor-to-vendor or vendor-to-consumer in
a standardized way.
As web service technologies are deployed within and across
organizations over the next several years, it will be critical that
web services undergo performance testing. As with any enterprise
software project, the adoption of proper test methodologies and use of
testing tools will play a key part in the overall success or failure
of projects utilizing web services. In a compressed software project
schedule, an organization must quickly determine if its web services
will operate successfully under a variety of load conditions. Like
other web-based technologies, successful web services will need to
respond quickly and correctly when implemented.
During our presentation, we will discuss the testing challenges
created by this emerging technology, along with the variety of testing
solutions available. Automated web service testing will be discussed
and demonstrated using FirstACT, the first web services performance
testing solution available on the market. Using a sample web service,
automatic test case creation, scalability testing, and results
analysis will be explored.Hi,
We test several frameworks and find out that usually JAXB 2.0 performs better than XMLBeans, but that is not a strict rule.
Regards,
LG -
Index creation online - performance impact on database
hi,
I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
I have a huge table with more than 255 columns, about 400 GB in size, which is also highly fragmented because of constant DML activity.
Questions:
1. For now I am trying to create an index online while the business applications are running.
Will there be any performance impact on the database from creating an index online on a single column of table 'TBL' while applications are active against the same table? Basically, does index creation on an object during DML operations against the same object have a performance impact on the database? And is there a major difference in impact between creating the index online and not online?
2. I tried to build an index on a column that contains NULL values on this same table 'TBL', which has more than 255 columns, is about 400 GB in size, is highly fragmented, and has about 140 million rows.
I requested that the applications be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
We have a pre-prod database that holds an exported and re-imported copy of the prod data, so pre-prod is a highly defragmented copy of prod.
When I created the same index on the same column there, it only took 15 minutes to complete.
I'm not sure why it took more than 6 hours on the highly fragmented copy of prod compared to only 15 minutes on the highly defragmented copy of pre-prod.
Any thoughts would be helpful.
Thanks.
Phil.
How are you measuring the "fragmentation" of the table?
Is the pre-prod database running as a single instance or as RAC?
Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index?
Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments (for example, via the workarea statistics query sketched below)?
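One way to check that, assuming access to the dynamic performance views, is to compare these counters before and after the build (a sketch, not a complete diagnostic):
select name, value
from   v$sysstat
where  name in ('workarea executions - optimal',
                'workarea executions - onepass',
                'workarea executions - multipass');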
The commonest explanation for this type of difference is two-fold:
a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
-- UPDATED: but you did say that you had stopped the application so this bit wouldn't have been relevant.
On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
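For reference, the kind of online, parallel build being discussed would look roughly like this (the table, column, and index names are placeholders based loosely on the question):
create index tbl_col1_idx on tbl (col1)
  online
  parallel 4
  nologging;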
Regards
Jonathan Lewis -
Performance impact using nested tables and object
Hi,
I am using Oracle 11g.
While creating a package, I am using a lot of nested tables based on object types, which are passed between multiple functions in the package.
Will this have any performance impact, since all the data is stored in memory?
How can I measure the performance impact as the data grows?
Regards,
Oracle User
Edited by: user9080289 on Jun 30, 2011 6:07 AM
Edited by: user9080289 on Jun 30, 2011 6:42 AM
user9080289 wrote:
> While creating a package, I am using a lot of nested tables based on object types, which are passed between multiple functions in the package.
Not the best of ideas in general in PL/SQL. This is not client code that can lay sole claim to most of the memory. It is server code, and one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? ;-)
> Will it have any performance impact since all the data is stored in memory?
Interestingly, yes. Usually crunching data in memory is better; in this case it may not be so. The memory used is the most expensive memory Oracle can use - the PGA, private process memory. This means each process running that code will need lots of memory.
If you're not passing the data structures by reference, it means even bigger demands on memory as the data structure needs to be copied into the call stack and duplicated.
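A minimal sketch of passing a collection by reference with the NOCOPY hint (the package, type, and procedure names are invented for illustration):
create or replace package demo_pkg as
  type num_tab is table of number;   -- nested table type
  procedure process_rows(p_rows in out nocopy num_tab);
end demo_pkg;
/
create or replace package body demo_pkg as
  procedure process_rows(p_rows in out nocopy num_tab) is
  begin
    -- NOCOPY asks the compiler to pass the collection by reference,
    -- avoiding a full copy of the data for the call.
    for i in 1 .. p_rows.count loop
      p_rows(i) := p_rows(i) * 2;
    end loop;
  end process_rows;
end demo_pkg;
/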
The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on keeping it in physical memory, that it thrashes memory management: the swap daemons are unable to keep up with the demand for swapping virtual memory pages into and out of memory, and most CPU time is spent by the swap daemons.
I have seen servers crash due to this. I have seen a single PL/SQL process causing this.
> How can I measure the performance impact when the data grows?
Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is the factor - just how much private process memory your code needs in order to execute. -
Performance impact on the size of the CHM file
Is there any impact on performance depending on the size of a
CHM file?
The main issues people have with help file performance
(regardless of whether it is a CHM file) are related to the number
of images, DHTML hotspots, bookmarks and links they have in a
topic. The number of topics in a CHM should not be an issue. What
exactly are you trying to assess the performance impact of? -
Regarding performance impact if I do DB access coding in the component controller
Hi ,
My project requirement is that I have to use a COM component which in turn fetches data from the database. I am using a Java-COM bridge tool to do this. The tool generates Java proxy classes for the VB COM component.
I am using these Java proxy classes (they use JNI to connect to the VB COM component and fetch the data from the DB) in my Web Dynpro component controller.
The architecture is as below:
WEBDYNPRO >> JAVA Classes object( generated by the JAVA- COM bridge tool ) >> JAVA-COM bridge tool >> VB COM+ Component >> SQL server.
The issue:
Performance: the first call is OK, but on consecutive calls the application slows down very visibly, and after 4 iterations it hangs. When I look at the log I get this:
Message : Exception occured during processing of Web Dynpro application com/oreqsrch/com.oreqsrchapp.OReqSrchApp.
The causing exception is nested.
[EXCEPTION]
com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread[impl:3]_36 failed to acquire exclusive lock on client session ClientSession(id=(J2EE9536400)ID1120562150DB11245826542790956137End_1159630423). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Thread[impl:3]_36, exclusive client session lock:
ClientSessionLock(SAPEngine_Application_Thread[impl:3]_9), shared client session locks: ClientSessionSharedLockManager([]), app session locks: ApplicationSessionLockManager([]), current request: com/oreqsrch/com.oreqsrchapp.OReqSrchApp).
Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
Is this issue because I have written the data access code in the component controller rather than in some beans?
My question:
What would the performance impact be of writing the DB access code in the Web Dynpro component controller rather than in a bean or an EJB? (I know that ideally DB access code should be written in a bean or EJB.)
Please address this from a performance point of view.
thanks
pkiran
Hi both,
Thanks for the reply.
Yes, they are closed and set to null.
Connection max and min properties are controlled at the COM+ component in VB.
Since I am using the COM-Java bridge, I am just invoking the methods defined in the VB code through the bridge tool. All the objects that retrieve the data are closed and nullified.
My question is
If I write DB access code in the component controller instead of in an EJB or Java bean, will there be any performance issue?
regards
pkiran -
EBS performance impact using it as a Data Source
I have a quick question on EBS performance. If I set up the EBS database as a data source for SSRS (SQL Server Reporting Services), would there be a performance impact on EBS due to SSRS accessing EBS data for report generation? I know there will always be a hit depending on the volume of data being accessed, but my question is: will it be significantly higher using an external reporting tool over an ODBC connection rather than native XML Publisher?
Hi,
Tough to answer without looking at data; my suggestion would be to set up a test EBS environment, get permission from the vendors to run performance tests without buying a license, compare AWRs from both scenarios, and then decide.
Generally speaking, native XML Publisher (BI Publisher) has less of a database performance hit than external reporting tools using ODBC.
Hope this helps.
Regards, -
Performance Impact for the Application when using ADFLogger
Hi All,
I am very new to ADFLogger and I am going to implement it in my application. I went through Duncan Mills' articles and got a basic understanding.
I have some questions to be clear.
Is there any performance impact from using ADFLogger that could slow down the application?
Are there any best practices to follow with ADFLogger to minimize the negative impact (if any)?
Thanks
Dk
Well, a call to a logger is a method call, so if you add a log message for every line of code, you'll see an impact.
You can implement it so that you only write the log messages if the log level is set to a level which your logger writes (or lower). In this case the impact is like having an if statement, plus a method call if the if statement returns true.
After this theory, here is my personal finding, as I use ADFLogger quite a lot. In production systems you turn the log level to WARNING or higher, so you will not see many log messages in the log. Only when a problem is reported do you set the log level to a lower value to get more output.
I normally use the 'check log level before logging a message' and 'just print the message' approaches combined. When I know that a message is printed very often, I first check the level. If I assume or know that a message is logged only seldom, I just log it.
I personally have not seen a negative impact this way.
Timo -
Performance Impact When Using SNC Communication
Hello,
Does anybody know if and how much performance impact there is if we use SNC for communication between the SAP Server and SAPGUI?
I think there are two areas that may be impacted; Network and server CPU.
For network load, I did find a part in "Front-End Network Requirements for SAP Business Solutions" document saying "overhead of roughly 350 bytes per user interaction step" but it does not specify the type of encryption. I wonder if there is any other info on this?
For CPU impact, how much overhead should I consider for sapgui access?
I see no field for this in the quicksizer and I can't seem to find any white papers on this subject.
Thank you in advance.
> Peter Adams wrote:
> Ken,
>
> if you plan to use SAPcryptlib for SNC between SAP servers, then you should use a SAPcryptolib-compatible solution for the SNC communication between SAPGUI and SAP server, and there is only one vendor who can provide this. Let me know, if you need help finding it. My contact information is in my SDN business card.
Just so Kan is clear - It is not legal to use the SAP cryptolib provided by SAP for SNC between SAP GUI and SAP servers, so if x.509 is the desired mechanism you need to purchase additional software from the company which Peter works for to provide SAP GUI SNC-based SSO. I think instead, Kan might be using the free SAP supplied SNC Kerberos library, which is why I asked him to confirm this in my last post. I doubt he is interested to buy any third party software.
> As to the performance discussion: first of all, yes, there will be a small performance impact if SNC is used (no matter which type or implementation), but from our experience with many actual SNC implementations, I can state that this is practically not relevant. It is not noticeable by users. There were never any performance discussions with customers. See also SAP Note 1043694.
I agree with this - the performance impact is not noticed by users, but the system managers who look after the servers where SAP is installed, and the team responsible for the network need to be aware of any differences (if any) when SNC is turned on and when SNC is turned off. I think this is why Kan is asking these questions, not because he is concerned about users noticing any difference when they logon to SAP.
> Just a first quick comment on certain statements above: Tim's arguments for proving his overall statement are not conclusive from my perspective. Nor do I think his overall statement itself is correct.
The facts I mentioned are well-known facts, e.g. symmetric crypto is far better from a performance point of view than asymmetric. I know the examples I showed, which I found in a quick Google search, were not conclusive, but they were given as initial examples, not necessarily the best ones. This is why I specifically mentioned that if you search Google yourself you will see many more references comparing Kerberos (symmetric) with PKI (asymmetric).
> First of all, he only selects one aspect of performance - CPU impact of encryption algorithms.
No, I didn't. Some of the examples I referred to also discuss other differences. I also mentioned other differences such as memory and which protection level is used when configuring SNC.
> But for a true comparison, you'd have to look at all relevant aspects (latency, network overhead, ...).
Yes, I agree. No doubts here.
>Network performance overhead is usually worse with Kerberos than with PKI.
This is not true. When SAP is using SNC, the GSS-API standard is used and so the only network communication involves SAP software sending a standard GSS token from the workstation to the SAP server, and this GSS token is often about the same size, regardless of which mechanism is used, so any network performance differences are not related to the mechanism, but more related to the complexity of the cryptography used on each end (mostly on the server side).
>Second, you need to look at the specific usage scenario. For example, the first report referenced by Tim is an analysis of different Token Profile mechanisms for WS-Security, for one specific implementation. This does not allow any conclusion to be drawn for the SNC use case in general, and certainly not for a specific implementation. It does not take the overhead of encrypting the message content into account. Third, Tim associates PKI exclusively with asymmetric encryption. Yes, it is well known that asymmetric algorithms are slower than symmetric ones, but it is also well known that the encryption of the message content (by far the majority of the data) happens with symmetric encryption algorithms in the PKI scenario. With PKI-based SNC, you can even select a symmetric algorithm and use a more performant one than the ones that Kerberos prescribes.
Kerberos works with many different symmetric algorithms as well, so mentioning that the alg is selectable is not relavent to any comparison.
> To summarize, I will try and collect facts that will support the opposite point of view. From our practical experience, the performance overhead is not relevant, and criteria like consistency with SAPcryptolib, strength of security, ease of administration, choice of authentication and encryption mechanism, etc. are much more important.
>
> Peter