VS2005/CR - Certificate check, performance problem
Post Author: Lars-Inge
CA Forum: General
Hello,

I have a problem with the Crystal Reports (with CR Service Pack 1 installed) that ships with the Visual Studio 2005 Professional IDE. It looks like the reports perform a certificate check that goes out to the internet(?). The symptom is extremely bad performance the first time a report is instantiated. E.g. say I have this C# code:

CrystalReport1 rep = new CrystalReport1();

The first time this line is called it takes over a minute (1 minute 40 seconds) to complete. After it completes the first time, subsequent calls to this line take almost no time (less than 1 second). I want it to be fast the first time as well (less than 5-6 seconds is OK).

If I disable the certificate check in the Windows registry, it takes less than 1 second the first time as well. E.g. in "regedit":

My Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trusted Providers\Software Publishing

Change the "State" value from "0x23c00" to "0x23e00" to switch it off. But I cannot do this on a customer's computer.

Does anyone have any good suggestions, please?

Best regards,
Lars-Inge
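As an aside, the two State values quoted above differ in exactly one flag bit; a quick sketch of the arithmetic (the claim that this particular bit controls revocation checking comes from the post itself, not from any WinTrust documentation):

```python
# WinTrust "State" values quoted in the post above.
state_check_on = 0x23C00   # default: revocation check enabled (per the post)
state_check_off = 0x23E00  # modified: revocation check disabled (per the post)

# XOR isolates the bits that differ between the two values.
changed = state_check_on ^ state_check_off
print(hex(changed))             # 0x200 -- a single flag bit
print(bin(changed).count("1"))  # 1
```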
Hi,
both execution plans have very similar cost values (278461 for the bad plan, 270476 for the good plan) and the necessary I/O is also similar. But the bad plan creates an intermediate result of almost 500M rows before the SORT UNIQUE reduces this result to just 505 rows - while the good plan sorts the input data before doing the join and reduces the 627859 rows before doing a second sort operation (HASH UNIQUE). In addition the Nested Loops join to create the result for the inline-view trt_cells in the first plan seems to be more reasonable than the HASH join in the bad plan - but the treatment of ch_cells seems to be the bigger problem.
With such similar cost values, a small change in the arithmetic can result in a changed plan. But both plans do a bad job of estimating the size of the result sets of the hash joins: the bad plan expects to get 2.3M rows (instead of 500M) before the sort takes place, and the good plan shows 1900 rows for the join (but gets 627859 rows).
I would try to check why the cardinalities for the join of ACRM_LEAD_TO_CHANNEL_DAILY and UA_CONTACTHISTORY with the join condition (ch.treatmentcode = daily_leads.treatmentcode) are not accurate. Perhaps the column statistics for the join columns are misleading (number of distinct values, low_value, high_value in user_tab_cols). You could try plugging the values into the standard formulas to calculate join selectivity and cardinality (here in a simplified version from Randolf Geist's blog: Oracle related stuff: Table Functions And Join Cardinality Estimates):
Join Selectivity = 1 / greater(num_distinct(t1.c1), num_distinct(t2.c2))
Join Cardinality = Join Selectivity * cardinality(t1) * cardinality(t2)
Perhaps the results will show where the CBO needs a little bit help (or better statistics).
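A quick sketch of those two formulas in Python, with made-up numbers (the row counts and distinct-value counts below are placeholders; substitute the real values from user_tab_cols and user_tables):

```python
# Standard CBO join estimate (simplified: no nulls, no histograms):
#   selectivity = 1 / max(num_distinct(t1.c1), num_distinct(t2.c2))
#   cardinality = selectivity * rows(t1) * rows(t2)

def join_selectivity(num_distinct_t1: int, num_distinct_t2: int) -> float:
    return 1.0 / max(num_distinct_t1, num_distinct_t2)

def join_cardinality(rows_t1: int, rows_t2: int,
                     num_distinct_t1: int, num_distinct_t2: int) -> float:
    sel = join_selectivity(num_distinct_t1, num_distinct_t2)
    return sel * rows_t1 * rows_t2

# Placeholder statistics -- replace with the real dictionary values:
estimate = join_cardinality(rows_t1=255_308, rows_t2=1_000_000,
                            num_distinct_t1=500, num_distinct_t2=400)
print(estimate)  # roughly 510.6M estimated join rows
```

Comparing the number this produces (using the real statistics) against the actual row counts in the plan shows whether the column statistics themselves are the problem.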
Regards
Martin
Similar Messages
-
Use Performance Tool to check memory problem
Hi, I am new to mac programming.
How can I use the performance tool to check memory problems such as "uninitialized memory read", "memory access out of array boundary" etc.?
Thanks,
fm

I don't know about a source code analyzer, but dbx, the Sun Studio debugger, is capable of detecting memory leaks at run time. You can use the bcheck script on your application, or start dbx, issue "check -leaks", and then run your app under dbx.
-
Performance Problems with Excel Client
Hello all,
We installed BPC 7.5 SP7 NW on our customer's system and experience massive performance problems on our local clients in combination with Excel and MS Office in general.
- When opening "Maintain Dimension Members" it takes some time (approx. 2 min) to open the Excel sheet with the elements. "Save to Server" as well as "Process Dimension" also take time to execute. When executing "Process Dimension" the hourglass appears and it takes approx. 2 min until the Process popup window appears. The process itself has normal performance.
- We also experienced slow performance with the opening and saving dynamic templates dialogues in the Excel client. It takes some time until the dialogue appears.
In general, we experience slow performance every time BPC is opening a pop up window! We also experience slow performance when opening a (non BPC) MS Office document.
In order to resolve this issue we looked at the following notes and forum entries:
Notes: 1541181, 1531059, 1507575, 1479436, 1448502, 1288810
Forum: BPC NW slow Performance (Expand, Refresh, Saving) - How to enhance it?, [BPC for Excel] Dimension clicked, then Excel hangs
Can anybody give me tipps on how to solve this issue?
Thanks in advance

Hi,
It will work if, in IE under Internet Options > Advanced > Security, you uncheck "Check for publisher's certificate revocation".
=======================
http://CSC3-2004-crl.verisign.com/CSC3-2004.crl
http://ocsp.verisign.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBQ%2FxkCfyHfJr7GQ6M658NRZ4SHo%2FAQUCPVR6Pv%2BPT1kNnxoz1t4qN%2B5xTcCEH6eR%2B%2FPmw5g6qZVJ2HNXK4%3D
http://crl.verisign.com/pca3.crl
=========================
Regards to all,
Laurentiu Bogdan -
PL/SQL Performance problem
I am facing a performance problem with my current application (PL/SQL packaged procedure)
My application takes data from 4 temporary tables, does a lot of validation and
puts them into permanent tables.(updates if present else inserts)
One of the temporary tables is parent table and can have 0 or more rows in
the other tables.
I have analyzed all my tables and indexes and checked all my SQLs
They all seem to be using the indexes correctly.
There are 1.6 million records combined in all 4 tables.
I am using Oracle 8i.
How do I determine what is causing the problem and which part is taking time?
Please help.
The skeleton of the code which we have written looks like this
MAIN LOOP ( 255308 records)-- Parent temporary table
-----lots of validation-----
update permanent_table1
if sql%rowcount = 0 then
insert into permanent_table1
Loop2 (0-5 records)-- child temporary table1
-----lots of validation-----
update permanent_table2
if sql%rowcount = 0 then
insert into permanent_table2
end loop2
Loop3 (0-5 records)-- child temporary table2
-----lots of validation-----
update permanent_table3
if sql%rowcount = 0 then
insert into permanent_table3
end loop3
Loop4 (0-5 records)-- child temporary table3
-----lots of validation-----
update permanent_table4
if sql%rowcount = 0 then
insert into permanent_table4
end loop4
-- COMMIT after every 3000 records
END MAIN LOOP
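The update-else-insert dance in the skeleton above can often be collapsed into a single set-based statement. A sketch in Python/sqlite3 (table and column names are invented for illustration; Oracle's native equivalent of this upsert would be a MERGE statement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE permanent_table1 (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO permanent_table1 VALUES (1, 'old')")

# Staged rows: id 1 already exists (should be updated), id 2 is new.
staged = [(1, "updated"), (2, "inserted")]

# One upsert per row replaces the update / check sql%rowcount / insert pattern;
# Oracle would express the whole staging table in a single MERGE statement.
conn.executemany(
    """INSERT INTO permanent_table1 (id, val) VALUES (?, ?)
       ON CONFLICT(id) DO UPDATE SET val = excluded.val""",
    staged,
)

print(conn.execute("SELECT id, val FROM permanent_table1 ORDER BY id").fetchall())
```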
Thanks
Ashwin N.

Do this instead of ditching the PL/SQL:
DECLARE
TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
pnums NumTab;
pnames NameTab;
t1 NUMBER;
t2 NUMBER;
t3 NUMBER;
BEGIN
FOR j IN 1..5000 LOOP -- load index-by tables
pnums(j) := j;
pnames(j) := 'Part No. ' || TO_CHAR(j);
END LOOP;
t1 := dbms_utility.get_time;
FOR i IN 1..5000 LOOP -- use FOR loop
INSERT INTO parts VALUES (pnums(i), pnames(i));
END LOOP;
t2 := dbms_utility.get_time;
FORALL i IN 1..5000 -- use FORALL statement
INSERT INTO parts VALUES (pnums(i), pnames(i));
t3 := dbms_utility.get_time;
dbms_output.put_line('Execution Time (hundredths of a second)');
dbms_output.put_line('---------------------------------------');
dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
dbms_output.put_line('FORALL:   ' || TO_CHAR(t3 - t2));
END;
Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723 -
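The FORALL idea above — one bulk call instead of a per-row round trip — carries over to any database client. A small Python/sqlite3 sketch of the same comparison (the parts table mirrors the PL/SQL example; timings will vary by machine):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (pnum INTEGER, pname TEXT)")

rows = [(j, f"Part No. {j}") for j in range(1, 5001)]

# Row-by-row inserts (analogous to the plain FOR loop)
t1 = time.perf_counter()
for pnum, pname in rows:
    conn.execute("INSERT INTO parts VALUES (?, ?)", (pnum, pname))
t2 = time.perf_counter()

# One bulk call (analogous to FORALL)
conn.execute("DELETE FROM parts")
conn.executemany("INSERT INTO parts VALUES (?, ?)", rows)
t3 = time.perf_counter()

print(f"per-row loop: {t2 - t1:.4f}s, executemany: {t3 - t2:.4f}s")
```

The bulk form avoids 5000 separate statement executions; in Oracle the saving is larger still, because each loop iteration is a context switch between PL/SQL and SQL.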
Report painter performance problem...
I have a client who runs a report group consisting of 14 reports... When we run this program, it takes about 20 minutes to get results... I was assigned to optimize this report...
This is what I've done so far
(this is a SAP generated program)...
1. I've checked the tables that the program is using... (a customized table with more than 20,000 entries and many others)
2. I've created secondary indexes on the main customized table (20,000 entries) - it improves the performance a bit (results in about 18 minutes)...
3. I split the report group into smaller groups of 3 reports each... It greatly improves the performance... (but this is not what the client wants)...
4. I've read an article about report group performance that it is a bug.
(sap support recognized the fact that we are dealing with a bug in the sap standard functionality)
http://it.toolbox.com/blogs/sap-on-db2/sap-report-painter-performance-problem-26000
Anyone have the same problem as mine?
Edited by: christopher mancuyas on Sep 8, 2008 9:32 AM
Edited by: christopher mancuyas on Sep 9, 2008 5:39 AM

Report Painter/Writer always creates a performance issue. I never preferred them, since I have the option of a Z-report.
Now you can do only one thing: put more checks on the selection screen for filtering the data. I think that's the only way.
Amit. -
Interactive report performance problem over database link - Oracle Gateway
Hello all;
This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
The issue that I am facing is when I use Oracle function like (apex_item.check_box) the query slow down by 45 seconds.
query like this: (due to sensitivity issue, I can not disclose real table name)
SELECT apex_item.checkbox(1,b.col3)
, a.col1
, a.col2
FROM table_one a
, table_two b
WHERE a.col3 = 12345
AND a.col4 = 100
AND b.col5 = a.col5
table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
Now if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have worked around the issue by creating a collection, but that's not good practice.
I would like to get ideas from people how to resolve or speed-up the query?
Any idea how to use sub-factoring for the above scenario? Or others method (creating view or materialized view are not an option).
Thank you.
Shaun S.

Hi Shaun
Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results, then do something with them afterwards. It's a handy trick when dealing with remote sites, as sometimes you want the remote database to do the work. The reason that I ask you to use the MATERIALIZE hint for testing is just to force this; in 99.99% of cases it can be removed later. A WITH clause is also handled differently from an inline view like SELECT * FROM (SELECT..., but the same result can be mimicked with a NO_MERGE hint.
Looking at your case, I would be interested to see the explain plans and results for something like the following two statements (sorry - you're going to have to check them, it's late!)
WITH a AS
(SELECT /*+ MATERIALIZE */ *
FROM table_one),
b AS
(SELECT /*+ MATERIALIZE */ *
FROM table_two),
sourceqry AS
(SELECT b.col3 x
, a.col1 y
, a.col2 z
FROM a
, b
WHERE a.col3 = 12345
AND a.col4 = 100
AND b.col5 = a.col5)
SELECT apex_item.checkbox(1,x), y , z
FROM sourceqry
WITH a AS
(SELECT /*+ MATERIALIZE */ *
FROM table_one),
b AS
(SELECT /*+ MATERIALIZE */ *
FROM table_two)
SELECT apex_item.checkbox(1, b.col3), a.col1, a.col2
FROM a
, b
WHERE a.col3 = 12345
AND a.col4 = 100
AND b.col5 = a.col5

If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results, but different from the original query.
We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This normally hinders tuning, but I don't think it is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus, etc.?
Sorry for all the questions but it helps to answer the question, if I can.
Cheers
Ben
http://www.munkyben.wordpress.com
Don't forget to mark replies helpful or correct ;) -
Performance Problems Bex 7.0 and Office 2007 Workbooks
Hi
we have a performance problem with BEx 7.0 and workbooks in Office 2007.
The workbooks were created with Office 2003 and run with good performance, but in Office 2007 the performance is unacceptable.
E.g. open workbook with Office 2003 -- 30 seconds
open workbook with Office 2007 -- 15 minutes
We did everything we could find in SAP Notes, whitepapers or SDN messages.
For example:
- We installed all Excel patches described in: Microsoft Excel 2007 &
SAP Business Explorer Compatibility
- We set the optimization flag: RS_FRONTEND_INIT setting 'ANA_USE_OPTIMIZE_STG = X'
- We opened the workbooks in Office 2007 with the repair flag.
- We used the flag to open in XLS format
But the same workbooks are extremely slow.
We tried creating a new workbook with Office 2007 and it runs with good performance.
But there are 500 workbooks; we don't want to recreate them all.
System Information:
BW: 7.0 Netweaver 7.01 BI_CONT 7.05
Client: SAP Gui 7.10 BI Explorer: 902
Thank you for your help.
Edited by: Carsten Ziemann on Feb 2, 2011 4:36 PMHello Carsten,
Try to use Workbook compression:
- Open the specific workbook in BEx Analyzer
- Open the Workbook Settings dialog
- Check "Use Optimized Storage"
- Click on OK Button
- Save the workbook
But your front-end tools are also on a very old version.
I recommend installing the latest patch of SAP GUI 7.20 and Business Explorer 7.20.
Front-end version 7.10 will only be supported until April 2011.
But if you want to continue using 7.10, update to the latest patch:
http://service.sap.com/swdc
> Support Packages and Patches
> Browse our Download Catalog
> SAP Frontend Components
> SAP GUI FOR WINDOWS
> SAP GUI FOR WINDOWS 7.10 CORE
> Win32
_ > gui710_20-10002995.exe
| > BI ADDON FOR SAP GUI
| > BI 7.0 ADDON FOR SAP GUI 7.10
|_ > bi710sp14_1400-10004472.exe
Cheers,
Edward John -
Performance Problems - Index and Statistics
Dear Gurus,
I am having problems losing indexes and statistics on cubes. It seems my indexes are too old, which in fact they are not -- they were created just a month back -- and we check indexes daily and the Manage tab returns RED.
please help

Dear Mr Syed,
Solution steps I mentioned in my previous reply itself explains so called RE-ORG of tables;however to clarify more on that issue.
Occasionally, the Oracle <b>Cost-Based Optimizer</b> may calculate the estimated cost for a full table scan as lower than that for an index scan, although the actual runtime of an access via an index would be considerably lower than the runtime of the full table scan. Some imperative points should be considered in order to improve performance in problem areas such as extensive running times for change runs and aggregate activation and fill-ups.
Performance problems based on a wrong optimizer decision indicate that something serious is missing at the database level, and we need to reorganize the degenerated indexes in order to improve overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost similar indexes.
For <b>Re-organizing</b> degenerated indexes 3 options are available-
<b>1) DROP INDEX ..., and CREATE INDEX </b>
<b>2)ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)</b>
<b>3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]</b>
Each option has its pros & cons; option <b>2</b> seems to have the most advantages:
<b>Advantages- option 2</b>
1)Fast storage in a different table space possible
2)Creates a new index tree
3)Gives the option to change storage parameters without deleting the index
4)As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
I would still leave the Database tech team be the best to judge and take a call on these.
These modus operandi could be institutionalized for all fretful cubes & its indexes as well.
However,I leave the thoughts with you.
Hope it Helps
Chetan
@CP.. -
IPad 1 email and performance problems ios5
Since I upgraded my iPad 1 (wifi only) to iOS 5, I have been having numerous performance problems, and my email continuously says "checking Email". I have tried restarting my iPad with no luck. Deleting and re-adding the accounts works briefly, but after 15-20 minutes the email problem comes back. The update has really crippled my iPad.
Has anyone else had this problem? Any ideas to resolve it?
Regards,
Jason

While downgrading is not actually supported by Apple, I found a video on YouTube that walked me through it. Just do a search on YouTube and you will find it. The one I used was by lizards821. His videos cover any errors you might have.
Once I reverted, I then did a restore from the backup I made when upgrading to IOS 5 and all of my apps were restored while keeping my downgraded IOS.
Good Luck, it was well worth it for me. I have the stable IPAD I love so much back again. -
Hi,
This is similar - yet different - to a few of the old postings about performance
problems with using jdbc drivers against Sql Server 7 & 2000.
Here's the situation:
I am running a standalone java application on a Solaris box using BEA's jdbc driver
to connect to a Sql Server database on another network. The application retrieves
data from the database through joins on several tables for approximately 40,000
unique ids. It then processes all of this data and produces a file. We tuned
the app so that the execution time for a single run through the application was
24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
Sql Server 2000 version. I ran the app and got an alarming execution time of
5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
and set the "useVarChars" property to "true" on the driver. The execution time
for a single run through the application is now 56 minutes.
56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
over twice the execution time that I was seeing against the 6.5 database. Theoretically,
I should be able to switch out my jdbc driver and the DBMS conversion should be
invisible to my application. That would also mean that I should be seeing the
same execution times with both versions of the DBMS. Has anybody else seen a
simlar situation? Are there any other settings or fixes that I can put into place
to get my performance back down to what I was seeing with 6.5? I would rather
not have to go through and perform another round of performance tuning after having
already done this when the app was originally built.
thanks,
mike

Mike wrote:
Joe,
This was actually my next step. I replaced the BEA driver with
the MS driver and let it run through with out making any
configuration changes, just to see what happened. I got an
execution time of about 7 1/2 hrs (which was shocking). So,
(comparing apples to apples) while leaving the default unicode
property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
I then set the 'SendStringParametersAsUnicode' to 'false' on the
MS driver and ran another test. This time the application
executed in just over 24 minutes. The actual runtime was 24 min
16 sec, which is still ever so slightly above the actual runtime
against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
56 minutes that BEA's driver was giving me.
I think that this is very interesting. I checked to make sure that
there were no outside factors that may have been influencing the
runtimes in either case, and there were none. Just to make sure,
I ran each driver again and got the same results. It sounds like
there are no known issues regarding this?
We have people looking into things on the DBMS side and I'm still
looking into things on my end, but so far none of us have found
anything. We'd like to continue using BEA's driver for the
support and the fact that we use Weblogic Server for all of our
online applications, but this new data might mean that I have to
switch drivers for this particular application.

Thanks. No, there is no known issue, and if you put a packet sniffer
between the client and DBMS, you will probably not see any appreciable
difference in the content of the SQL sent by either driver. My suspicion is
that it involves the historical backward compatibility built in to the DBMS.
It must still handle several iterations of older applications, speaking obsolete
versions of the DBMS protocol, and expecting different DBMS behavior!
Our driver presents itself as a SQL7-level application, and may well be treated
differently than a newer one. This may include different query processing.
Because our driver is deprecated, it is unlikely that it will be changed in
future. We will certainly support you using the MS driver, and if you look
in the MS JDBC newsgroup, you'll see more answers from BEA folks than
from MS people!
Joe
>
>
Mike
The next test you should do, to isolate the issue, is to try another
JDBC driver.
MS provides a type-4 driver now, for free. If it is significantly faster,
it would be
interesting. However, it would still not isolate the problem, because
we still would
need to know what query plan is created by the DBMS, and why.
Joe Weinstein at BEA
PS: I can only tell you that our driver has not changed in its semantic
function.
It essentially sends SQL to the DBMS. It doesn't alter it. -
What are the best coding techniques to avoid performance problems
Hi Experts
What are the best coding techniques for avoiding memory problems and performance problems?
Sometimes a few of our reports take too much time to execute when handling large volumes of data.
1. What is the best way to declare an internal table to avoid performance problems?
2. What is the best way to process data?
3. What is the best way to clear memory?
Can you guys give me some good suggestions for writing better programs that avoid performance problems?
Thanks
Sailu

Hi,
Check this thread: "Please Read before Posting in the Performance and Tuning Forum" -- it is the first thread in the Performance and Tuning forum.
Search SCN first.
RMAN duplicate target database from active database - performance problem
Hello. I’m running into a major performance problem when trying to duplicate a database from a target located inside our firewall to an auxiliary located outside our firewall. Both target and auxiliary are located in the same equipment room just on different subnets. Previously I had the auxiliary located on the same subnet as the target behind the firewall and duplicating a 4.5T database took 12 hours. Now with the auxiliary moved outside the firewall attempting to duplicate the same 4.5T database is estimated to exceed 35 hours. The target is a RAC instance using ASM and so is the auxiliary. Ping, tnsping, traceroutes to and from target and auxiliary all indicate no problem or latency. Any ideas on things to consider while hunting for this elusive performance decrease?
Thanks in advance.

It would obviously appear network related. Have you captured any network/firewall metrics? Are all components set to full duplex? Would it be possible to take the firewall down temporarily and then test the throughput? Do you encounter any latency if you were to copy a large file across the subnets?
You may want to check V$RMAN_BACKUP_JOB_DETAILS, V$BACKUP_SYNC_IO or V$BACKUP_ASYNC_IO when the backup is running. -
Performance Problems with intrinsic array multiplication in Fortran with Su
Hello,
I found a serious performance problem in my application with intrinsic array multiplication. Therefore I wrote a small test program to check whether this is a general problem or only exists in my code.
The test program (seen below) was compiled with Sun Studio 12 and Solaris 5.10 on a 64-bit AMD Opteron machine, first with high optimization (f95 -fast -g) and a second time without optimization (f95 -O0 -g). In both cases the intrinsic array multiplication had a lot of TLB and L2 cache misses (I made some tests with the performance analyzer), which caused a huge increase in computation time compared to the explicit loop. In the ideal case the compiler should create the nested loops from the intrinsic statement, and both functions should use exactly the same computing time.
I also tried compiling with Studio 11; the problem occurs there too. Maybe it's a compiler bug, but I'm not sure.
Greetings
Michael Kroeger
program test
! Test of a simple array multiplication
implicit none
real,dimension(:,:,:),pointer::check1,check2
integer i,j,k,ni,nj,nk,status
ni=1000
nj=1000
nk=100
allocate(check1(0:ni,0:nj,0:nk),STAT=status)
write(*,*)status
allocate(check2(0:ni,0:nj,0:nk),STAT=status)
write(*,*)status
check1=25
check2=25
call intrinsic_f(check1)
call withloop(check2,i,j,k)
deallocate(check1,check2)
contains
subroutine intrinsic_f(check1)
real, dimension(:,:,:),pointer::check1
check1=check1*5
end subroutine intrinsic_f
subroutine withloop(check2,ni,nj,nk)
real, dimension(:,:,:),pointer::check2
integer i,j,k,nk,nj,ni
do k=0,nk
do j=0,nj
do i=0,ni
check2(i,j,k)=check2(i,j,k)*5
enddo
enddo
enddo
end subroutine withloop
end program

I will try the large pages, but what puzzles me is that the intrinsic function has cache misses while the loop function does not. All documentation says that the compiler expands the intrinsic function into 3 nested loops, so both functions should be essentially the same (and the compiler says it has created 3 loops from the intrinsic). Since they are not the same, this is a severe problem.
I have a fairly large code for high-performance simulations, and these intrinsic functions are a significant bottleneck. If it is not a compiler bug, I will have to rewrite all intrinsic functions into loops. -
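For readers following along without a Fortran compiler, the semantic claim being discussed — that a whole-array operation and explicit nested loops compute exactly the same thing — can be sketched in Python (this only illustrates the equivalence; it says nothing about the Sun Studio cache behaviour reported above):

```python
# Small shapes so the loop version stays fast.
ni, nj, nk = 8, 6, 4

# check1 and check2 start identical, filled with 25.0,
# mirroring the Fortran test program above.
make = lambda: [[[25.0 for _ in range(nk)] for _ in range(nj)] for _ in range(ni)]
check1, check2 = make(), make()

# Whole-array form (what the Fortran intrinsic "check1 = check1*5" expresses):
check1 = [[[x * 5 for x in row] for row in plane] for plane in check1]

# Explicit nested loops (the withloop subroutine):
for i in range(ni):
    for j in range(nj):
        for k in range(nk):
            check2[i][j][k] = check2[i][j][k] * 5

# Semantically identical; only the performance characteristics differ.
assert check1 == check2
```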
Performance problems in livecache
Hi All,
we are facing a huge performance loss in liveCache; jobs are taking almost 10x longer than usual.
The SCM/liveCache system is not giving any clue/hint why (performance loss).
We are using following release:
1. liveCache Version X64/LIX86 7.6.04 Build 015-123-189-221
2. Current ABAP SP LCAPPS 2005_700: patch 000
3. SCM 5.00
4. Oracle 10.2.0.4.0
Any help is highly appreciated
Thanks
sahmad

Hi,
Check these notes:
Note 497289 - Performance when reading shipment
Note 801419 - Performance problems in shipment (cost) processing
and related notes
Also, check whether you have implemented BAdIs related to shipment costs.
Another way is to do a trace with ST05 to find the bottleneck. Check with a Basis consultant; perhaps you can solve it with a secondary index. But first check the notes above.
Regards,
Eduardo -
CRM IC Webclient - Massive Performance Problems Searching in Agent Inbox
Hi Forum,
can somebody help me? We have massive search performance problems in the Interaction Center Webclient Agent Inbox. When agents search, for example for open emails, it takes approx. 2-3 minutes to get a result.
That is absolutely unacceptable and endangers the running business.
The query reading the emails from the workitem tables is very, very slow.
Could somebody help me with ideas to solve this big performance problem? Does anybody have the same problems?
Thank you very much in advance for any information.
We use Interaction Center Webclient on CRM Release 5.0
Thorsten

Another aspect you could check is the following:
Analysis performed didn't show anything strange nor any high consumers
of the response time. The processing time occurs, since IC-Web on CRM 5
is a bit demanding for CPU power.
The only thing that could improve the performance a bit is the
<b>buffering of org structure</b>, which is currently switched off for
SALES and SERVICE scenarios as per table T77OMATTSC.
The report <b>HRBCI_ATTRIBUTES_BUFFER_UPDATE</b> is running regularly;
however, no scenario is being placed in the buffer to speed up
org structure reads. Please check this and provide feedback.
How to use this report, can be found back in the Reports own documentation and also in SAP HELP.
cheers
Davy