Discoverer Performance / Response Time
Hi everyone,
I have a few questions regarding the response time of Discoverer.
I have Table A with 120 columns. I need to generate a report based on 12 columns from this Table A.
My questions are whether the factors below contribute to Discoverer's response time.
1. The number of items included in the business area folder (i.e. whether to include 120 cols or just 12 cols)
2. The actual size of the physical table (120 cols) although I only selected 12 cols. If the actual size of the physical table is only 12 cols, would it improve the performance?
3. Will more parameters increase the processing time?
4. Do joins increase the processing time?
5. Will using a Custom Folder and writing an SQL statement to select the 12 columns improve the performance?
Really appreciate anyone's help on this.
Cheers,
Angeline
Hi,
NP and Rod, thanks a lot for your replies!
Actually, I was experiencing something different that contradicts your replies.
1. When I reduced the number of items included in my Business Area from 120 to 12, the response time improved significantly, from around 5 minutes to 2-3 minutes.
2. When I tried creating a dummy table with just the 12 columns needed for the report, I got a very fast response time, i.e. 1 second to generate the report. But of course the dummy table contains much less data (only around 500K records). By the way, can Discoverer handle large databases? What is the largest number of records it can handle?
3. When I add more parameters, it seems to add more processing time in Discoverer.
4. Thanks for the clarification on this one.
5. And the funny thing is, when I use a custom folder to select just the 12 columns, the performance also improves significantly, with the estimated query time reduced from over 2 minutes to just 1 min 30 secs. But the performance is still inconsistent: sometimes it takes around 1 min 40 secs, but sometimes it runs up to 3 minutes for the same data.
Now I am building my report on the custom folder because it has given me the best response time so far. But based on your replies, using the custom folder is not really encouraged?
I need to improve the response time for Discoverer Viewer, as it is very slow and users don't really like it.
Would appreciate anyone's help in solving this issue :) Thanks..
Cheers,
Angeline
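For what it's worth, the custom-folder approach amounts to pushing the projection down into SQL, so only the columns the report needs are fetched. A minimal sketch of that idea (hypothetical table and column names, with SQLite standing in for the real database):

```python
import sqlite3

# Hypothetical stand-in for Table A: a wide 120-column table where the
# report only needs a few of the columns.
conn = sqlite3.connect(":memory:")
cols = ", ".join(f"col{i} TEXT" for i in range(120))
conn.execute(f"CREATE TABLE table_a (id INTEGER PRIMARY KEY, {cols})")
conn.executemany(
    "INSERT INTO table_a (col0, col1, col2) VALUES (?, ?, ?)",
    [("a", "b", "c")] * 1000,
)

# A custom-folder-style query selects only the columns the report uses,
# instead of SELECT * over all 120.
narrow = conn.execute("SELECT col0, col1, col2 FROM table_a").fetchall()
wide = conn.execute("SELECT * FROM table_a").fetchall()

# Same number of rows either way, but far fewer values per row are
# fetched and shipped with the narrow projection.
print(len(narrow), len(narrow[0]))
print(len(wide), len(wide[0]))
```

The row counts match, but the narrow query moves 3 values per row instead of 121, which is one plausible reason the custom folder responds faster.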
Similar Messages
-
Macbook diminished performance/response time
I have had my macbook for a few years and am not a heavy user but have noticed significantly reduced performance/response time for loading apps, mail, safari browser etc. Is there something I can do to check system performance or do I have to take it in?
When you stop using an application, how do you stop? Do you click the little red dot, or do you go to the application's name on the menu bar and Quit? The former does not terminate the program; it just closes the window and leaves the application in a suspended state, ready to reopen. Some applications will use resources in the background, depending on how the software is written. It is best to Quit an application so it is fully terminated and all memory has been released.
If you use multiple applications, you may be running into memory pressure and doing a lot of page-ins/page-outs to the hard drive. That slows the system down, since the drive's transfer rate is considerably slower than memory.
Activity Monitor will show you the memory allocation while you are running your applications. Check whether you are using all of your memory and whether it reports paging. If so, you might want to consider more memory to clear up that bottleneck.
If you are not using resources such as Bluetooth and wireless, turn them off as the radios use power. Just keep on those things you rely on for normal operation. -
Online response time in discoverer viewer?
Dear sir,
Can anyone help me,
In Discoverer Viewer and Plus, how do I minimise the response time for the viewer when n users request the same report at once?
How do I analyse that? What settings are there?
Please help; it's urgent for my project.
Regards
chandrakumar
In case this helps anyone else - this is the reply I got when I asked SAP this question:
The total user response time is response time + frontend network time.
GUI time is already included in the response time (it is the roll-in and wait time from the application server's point of view).
The 'FE (Frontend) net time' is the time consumed in the network for the first transfer of data from the frontend to the application server and the last transfer of data from the application server to the frontend
(during one dialog step). -
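Taken literally, SAP's reply gives a simple decomposition, sketched here with made-up numbers (all values hypothetical):

```python
# Per the reply: total user response time = server response time + frontend
# network time, and GUI time is already *inside* the server response time,
# so it must not be added again.
response_time_ms = 850   # hypothetical server-side response time (includes GUI time)
fe_net_time_ms = 120     # hypothetical frontend network time

total_user_response_ms = response_time_ms + fe_net_time_ms
print(total_user_response_ms)  # 970
```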
Performance Tuning: Http Task Time Response Time Threshold
I need to analyze the response time of the HTTP task on my instance, and I have pulled the values below from transaction ST03.
For HTTP task with my user I have these values:
N. Steps    T resp. time    Avg. resp. time    T DB time    Avg. DB time
3405        3072            902.1              1436         421.7
Now I would like to know whether these values are acceptable or very high.
Is there an OSS Note which explains the thresholds for all task types?
Thanks in advance.
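As a sanity check on the figures: assuming the totals are in seconds and the averages in milliseconds (an assumption - ST03 column units depend on the chosen layout), the averages are simply the totals divided by the step count:

```python
# ST03 consistency check: average time per step = total time / steps.
# Assumption: totals reported in seconds, averages in milliseconds.
steps = 3405
total_resp_s = 3072.0
total_db_s = 1436.0

avg_resp_ms = total_resp_s / steps * 1000
avg_db_ms = total_db_s / steps * 1000

# Both come out very close to the reported 902,1 and 421,7 ms,
# so the figures are internally consistent.
print(round(avg_resp_ms, 1), round(avg_db_ms, 1))
```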
Moreno
If it's an external ITS, here is one note:
Note 388198 - ITS performance problems - Monitors and log files
If it's an internal ITS:
Note 892442 - Integrated ITS configuration/performance
Regards,
Siddhesh -
Hi Team,
Any suggestions on tools for performance testing of applications designed for BlackBerry phones, which can capture details on CPU usage, memory, and response time? Any help on this is much appreciated.
Thank You,
Best Regards,
neeraj -
I have a HP Compaq Presario CQ62-360TX pre-loaded with Windows 7 home premium (64-bit) that I purchased just under a year ago.
Recently my experience has been interrupted by stuttering that ranges from annoying in general use to a major headache when playing music or videos from the hard drive.
The problem appears to be caused by extremely high hard drive response times (up to 10 seconds). As far as I know I didn't install anything that might have caused the problem before this happened, and I can't find anything of note looking back through Event Viewer.
In response to this I've run multiple hard drive scans for problems (chkdsk, scandsk, test through BIOS, test through HP software and others) all of which have passed with no problems. The only thing of any note is a caution on crystaldiskinfo due to the reallocated sector count but as none of the other tests have reported bad sectors I'm unsure as to whether this is causing the problem. I've also updated drivers for my Intel 5 Series 4 Port SATA AHCI Controller from the Intel website and my BIOS from HP as well as various other drivers (sound, video etc), as far as I can tell there are none available for my hard drive directly. I've also wanted to mess with the hard drive settings in the BIOS but it appears those options are not available to me even in the latest version.
System Specs:
Processor: Intel(R) Pentium(R) CPU P6100 @ 2.00GHz (2 CPUs), ~2.0GHz
Memory: 2048MB RAM
Video Card: ATI Mobility Radeon HD 5400 Series
Sound Card: ASUS Xonar U3 Audio Device or Realtek High Definition Audio (both have problem)
Hard Drive: Toshiba MK5065GSK
Any ideas?
Edit: The drive is nowhere near full, it's not badly fragmented, and as far as I can tell there's no virus or malware.
Sounds like failing sectors are being replaced with good spares successfully so far. This is done on the fly and will not show in any test. You have a failing drive; I would back up your data and replace the hard drive.
Sector replacement on the fly also explains the poor performance. Replacing sectors with spares is normal if it is just a few over many years, but CrystalDiskInfo is warning you there are too many - a sign that drive failure is around the corner. -
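One way to watch the reallocation trend described in the reply is to track SMART attribute 5 (Reallocated_Sector_Ct) over time, e.g. from `smartctl -A` output. A parsing sketch - the sample line is illustrative only, and real smartctl output varies by drive and version:

```python
def reallocated_sectors(smart_output: str):
    """Extract the raw Reallocated_Sector_Ct value (SMART attribute 5), if present."""
    for line in smart_output.splitlines():
        fields = line.split()
        # Attribute-table rows begin with the numeric attribute ID.
        if len(fields) >= 10 and fields[0] == "5" and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])  # RAW_VALUE is the last column
    return None

# Illustrative sample line, not real output from the drive in question.
sample = ("  5 Reallocated_Sector_Ct   0x0033   095   095   010    "
          "Pre-fail  Always       -       312")
print(reallocated_sectors(sample))  # 312
```

A raw value that keeps climbing between runs is the "too many, too fast" pattern CrystalDiskInfo was warning about.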
All,
I am in a predicament with internet browsing speeds... We have a 3rd party looking after our line and internet-facing firewall, so I can't troubleshoot them; at the moment I'm looking at ISA as the potential bottleneck. We have a fairly standard environment:
Internal > Local Host > Perimeter n/work > Firewall > Internet
I have been running custom reports on the ISA server to see what data can be collected. I have noticed that "Average response time for non cached requests" (traffic by time of day) can be as high as 76 seconds! Cached hits are between 0.5 and 2 seconds.
I have also configured a connectivity verifier, which is also flagging slow connectivity (massively over the 5000 ms threshold) and occasionally reporting "can't resolve server name" - and this is configured for
www.Microsoft.com --- DNS??? However, I have looked through DNS and found no obvious errors or config issues that I can see.
I have run the BPA on the ISA server to check its health - the connectivity verifier errors flagged timeouts to microsoft.com, as expected...
Can anyone advise any obvious areas to investigate, as I'm struggling! As always, the 3rd party have told us the internet pipe is fine.
Problem resolved.
DNS forwarders have been changed on the ISA server / DNS and this has improved lookup speed considerably.
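A quick way to quantify the kind of lookup slowness described here is to time name resolution directly (a sketch; the host name and any threshold you compare against are arbitrary choices):

```python
import socket
import time

def resolve_time_ms(hostname: str) -> float:
    """Time a single forward DNS lookup, in milliseconds."""
    start = time.perf_counter()
    socket.gethostbyname(hostname)
    return (time.perf_counter() - start) * 1000

# 'localhost' resolves without leaving the machine; a real check would use
# the external names the connectivity verifier was complaining about, before
# and after changing the DNS forwarders.
elapsed = resolve_time_ms("localhost")
print(f"{elapsed:.1f} ms")
```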
thanks all :) -
SAP GoLive : File System Response Times and Online Redologs design
Hello,
A SAP Going Live Verification session has just been performed on our SAP Production environnement.
SAP ECC6
Oracle 10.2.0.2
Solaris 10
As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
1/
We have been told that our file system read response times "do not meet the standard requirements"
The following datafile has been flagged as having too high an average read time per block.
File name                                       Blocks read    Avg. read time (ms)    Total read time (ms)
/oracle/PMA/sapdata5/sr3700_10/sr3700.data10    67534          23                     1553282
I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
2/
We have been asked to increase the size of the online redo logs which are already quite large (54Mb).
Actually, we have BW loads that generate "Checkpoint not complete" messages every night.
I've read in sap note 79341 that :
"The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
Frankly, I have trouble understanding this sentence.
Frequent checkpoints mean more redo log file switches, and hence more archived redo log files generated. Right?
But how is it that frequent checkpoints should decrease the time necessary for recovery?
Thank you.
Any useful help would be appreciated.
Hello
>> I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
The recommended ("standard") values are published at the end of sapnote #322896.
23 ms seems really a little bit high to me - for example we have round about 4 to 6 ms on our productive system (with SAN storage).
>> Frequent checkpoints means more redo log file switches, means more archive redo log files generated. right?
Correct.
>> But how is it that frequent chekpoints should decrease the time necessary for recovery ?
A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following three things happen in an Oracle database:
Every dirty block in the buffer cache is written down to the datafiles
The latest SCN is written (updated) into the datafile header
The latest SCN is also written to the controlfiles
If your redo log files are larger, checkpoints happen less often, and in that case dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN - ergo the recovery is faster.
But this concept does not fully match reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is met (for example FAST_START_MTTR_TARGET).
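Stefan's point can be made concrete with a toy model: with checkpoints driven by log switches, the redo that must be rolled forward after a crash is bounded by the log file size, and on average is about half of it. This is a deliberately simplified sketch that ignores incremental checkpointing and FAST_START_MTTR_TARGET:

```python
def mean_redo_to_replay_mb(log_size_mb: float) -> float:
    """Toy model: a crash lands uniformly inside the current online redo log,
    so on average half a log file of redo must be applied during roll forward."""
    return log_size_mb / 2

small = mean_redo_to_replay_mb(54)    # the current 54 MB logs
large = mean_redo_to_replay_mb(512)   # a hypothetical enlarged log size

# Bigger logs -> fewer checkpoints -> more redo to replay on instance recovery.
print(small, large)  # 27.0 256.0
```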
Regards
Stefan -
Response time of a function module
Hi Friends,
I'm creating a custom program that calls a BAPI which exists on another server.
I want to record the response time of the BAPI after placing the request, and display the time for the
corresponding record in the output.
Is there any procedure to record the response time in the program? I'm not asking about the transactions where we can
measure performance.
Moderator message - please do not ask for or promise rewards.
Thanks & Warm Regards
Krishna
Edited by: Rob Burbank on Oct 1, 2009 8:50 AM
Hello,
The correct method, as pointed out in previous posts, is with GET RUN TIME. Note that this returns time in microseconds, so you may want to scale this up to a larger unit.
As to the usefulness: it is perfectly legitimate to include time measurements in your program as long as this has a clear purpose, e.g. comparing response times between different remote systems, identifying erratic response times, etc. In that case I would advise you to also include some other measurement, e.g. the amount of data processed (whether you can do this and how depends on the BAPI, e.g. you could use the number of lines in the returned internal tables as a metric). If your time measurement creates separate log/trace records, then it would also be a good idea to have the option to enable and disable the time measurement.
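The same pattern Mark describes - a GET RUN TIME style measurement in microseconds plus a data-volume metric - looks like this outside ABAP; a Python sketch with a dummy function standing in for the remote BAPI:

```python
import time

def timed_call(func, *args):
    """Run func and return (result, elapsed_microseconds) - the GET RUN TIME idea."""
    start = time.perf_counter()
    result = func(*args)
    elapsed_us = (time.perf_counter() - start) * 1_000_000
    return result, elapsed_us

# Dummy stand-in for the remote BAPI call; returns a table-like result.
def fake_bapi(rows):
    return list(range(rows))

result, elapsed_us = timed_call(fake_bapi, 1000)
# Log the time scaled to a larger unit, alongside a data-volume metric
# (here, the number of returned rows), as the reply suggests.
print(f"{elapsed_us / 1000:.2f} ms for {len(result)} rows")
```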
Regards,
Mark -
Strange response time for an RFC call viewed from STAD on R/3 4.7
Hello,
On our R/3 4.7 production system, we have a lot of external RFC calls to execute an abap module function. There are 70 000 of these calls per day.
The mean response time for this RFC call is 35 ms.
Sometimes a few of them (maybe 10 to 20 per day) take much longer.
I am currently analysing with STAD one of these long calls which lasted 10 seconds !
Here is the info from STAD
Response time : 10 683 ms
Total time in workprocess : 10 683 ms
CPU time : 0 ms
RFC+CPIC time : 0 ms
Wait for work process 0 ms
Processing time 10.679 ms
Load time 1 ms
Generating time 0 ms
Roll (in) time 0 ms
Database request time 3 ms
Enqueue time 0 ms
Number Roll ins 0
Roll outs 0
Enqueues 0
Load time Program 1 ms
Screen 0 ms
CUA interf. 0 ms
Roll time Out 0 ms
In 0 ms
Wait 0 ms
Frontend No.roundtrips 0
GUI time 0 ms
Net time 0 ms
There is nearly no abap processing in the function module.
I really don't understand what this 10,679 ms processing time is, especially with 0 ms CPU time and 0 ms wait time.
A usual fast RFC call gives this data
23 ms response time
16 ms cpu time
14 ms processing time
1 ms load time
8 ms Database request time
Does anybody have an idea of what is the system doing during the 10 seconds processing time ?
Regards,
Olivier
Hi Graham,
Thank you for your input and thoughts.
I will have to investigate RZ23N and RZ21 because I'm not used to using them.
I'm used to investigate performance problems with ST03 and STAD.
My system is R/3 4.7 WAS 6.20. ABAP and BASIS 43
Kernel 6.40 patch level 109
We know these are old patch levels but we are not allowed to stop this system for upgrade "if it's not broken" as it is used 7/7 24/24.
I'm nearly sure that the problem is not an RFC issue, because I've found other slow dialog steps for web service calls and even for a SAPSYS technical dialog step of type <no buffer>. (What is this?)
This SAPSYS dialog step has the following data :
User : SAPSYS
Task type : B
Program : <no buffer>
CPU time 0 ms
RFC+CPIC time 0 ms
Total time in workprocs 5.490 ms
Response time 5.490 ms
Wait for work process 0 ms
Processing time 5.489 ms
Load time 0 ms
Generating time 0 ms
Roll (in+wait) time 0 ms
Database request time 1 ms ( 3 Database requests)
Enqueue time 0 ms
All hundreds of other SAPSYS <no buffer> steps have a less than 5 ms response time.
It looks like the system was frozen during 5 seconds...
Here are some extracts from STAD of another case from last saturday.
11:00:03 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 13 13 0 0
11:00:03 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 19 19 0 16
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 77 77 0 16
11:00:04 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 14 14 0 16
11:00:05 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 12 12 0 16
11:00:05 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 53 53 0 0
11:00:06 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 76 76 0 0
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 20 20 0 31
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 12 12 0 0
11:00:06 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 13 13 0 0
11:00:06 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 34 34 0 16
11:00:07 bt1sqkvh_PLG_18 RFC R 0 USER_LECDIS 15 15 0 0
11:00:07 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 13 13 0 16
11:00:07 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 19 19 0 0
11:00:07 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 23 13 10 0
11:00:07 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 38 38 0 0
11:00:08 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 20 20 0 16
11:00:09 bt1sqkvg_PLG_18 RFC R 0 USER_LECDIS 9 495 9 495 0 16
11:00:09 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 9 404 9 404 0 0
11:00:09 bt1sqkvh_PLG_18 RFC R 1 USER_LECKIT 9 181 9 181 0 0
11:00:10 bt1fsaplpr02_PLG RFC R 3 USER_LECDIS 23 23 0 0
11:00:10 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 8 465 8 465 0 16
11:00:18 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 18 18 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 89 89 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 75 75 0 0
11:00:18 bt1sqkvh_PLG_18 RFC R 1 USER_LECDIS 43 43 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 1 USER_LECDIS 32 32 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 1 USER_LECDIS 15 15 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 14 14 0 0
11:00:18 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 69 69 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 49 49 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 5 USER_LECKIT 19 19 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 15 15 0 16
The load at that time was very light with only a few jobs starting :
11:00:08 bt1fsaplpr02_PLG RSCONN01 B 31 USER_BATCH 39
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 31 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG /SDF/RSORAVSH B 33 USER_BATCH 64
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 33 USER_BATCH 43
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 34 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 35 USER_BATCH 37
11:00:09 bt1fsaplpr02_PLG RVV50R10C B 34 USER_BATCH 60
11:00:09 bt1fsaplpr02_PLG ZLM_HDS_IS_PURGE_RESERVATION B 35 USER_BATCH 206
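When scanning extracts like the one above, the handful of ~9-second steps can be flagged mechanically - anything far above the typical response time. A sketch with a simplified record format and an arbitrary cutoff (the thread reports a ~35 ms mean):

```python
# (timestamp, server, response_ms) - simplified from the STAD extract above.
steps = [
    ("11:00:07", "bt1sqkvh_PLG_18", 15),
    ("11:00:08", "bt1sqkvf_PLG_18", 20),
    ("11:00:09", "bt1sqkvg_PLG_18", 9495),
    ("11:00:09", "bt1sqk2t_PLG_18", 9404),
    ("11:00:10", "bt1sqkve_PLG_18", 8465),
    ("11:00:18", "bt1sqkvh_PLG_18", 18),
]

threshold_ms = 1000  # arbitrary cutoff, well above the normal ~35 ms
outliers = [s for s in steps if s[2] > threshold_ms]
print(outliers)  # the ~9-second steps clustered at 11:00:09-11:00:10
```

Noticing that the outliers cluster in the same one-second window across different servers supports the "system was frozen" reading rather than a per-call problem.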
I'm thinking also now about the message server as there is load balancing for each RFC call ?
Regards,
Olivier -
How to find the Response time for a particular Transaction
Hello Experts,
I am implementing a BAdI to achieve a customer enhancement for the XD01 transaction. I need to show the customer the system's response time before and after the implementation:
Response time BEFORE BAdI Implementation
Response time AFTER BAdI Implementation
Where can i get this.
Help me in this regard
Best Regards
SRiNi
Hello,
Within STAD, enter the time range during which the user was executing the transaction, as well as the user name. The time field indicates when the transaction would have ended; STAD adds some extra time onto your time interval. Depending on how long the transaction ran, you can set the length you want it to display. This means that if it is set to 10, STAD will display statistical records from transactions that ended within that 10-minute period.
The selection screen also gives you a few options for display mode.
- Show all statistic records, sorted by start time
This shows you all of the transaction steps, but they are not grouped in any way.
-Show all records, grouped by business transaction
This shows the transaction steps grouped by transaction ID (shown in the record as Trans. ID). The times are not cumulative. They are the times for each individual step.
-Show Business Transaction Tots
This shows the transaction steps grouped by transaction ID. However, instead of just listing them, you can drill down from the top level. The top level will show you the overall response time, and as you drill down, you can see the times of the individual steps.
Note that you also need to add the user into the selection criteria. Everything else you can leave alone in this case.
Once you have the records displayed, you can double click them to get a detailed record. This will show you the following:
- Breakdown of response time (wait for work process, processing time, load time, generating time, roll time, DB time, enqueue time). This makes STAD a great place to start for performance analysis as you will then know whether you will need to look at SQL, processing, or any other component of response time first.
- Stats on the data selected within the execution
- Memory utilization of the transaction
- RFCs executed (including the calling time and remote execution time - very useful with performance analysis of interfaces)
- Much more.
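The response-time breakdown listed above is additive, which is what makes STAD useful for deciding where to look first. A sketch with hypothetical component values (the real split comes from the detailed STAD record):

```python
# Hypothetical STAD-style response time components, in ms.
components = {
    "wait_for_work_process": 5,
    "processing": 320,
    "load": 10,
    "generating": 0,
    "roll": 2,
    "db_request": 150,
    "enqueue": 3,
}

# The components sum to the response time; the dominant one tells you
# whether to investigate SQL, ABAP processing, or something else first.
response_time_ms = sum(components.values())
print(response_time_ms)  # 490
print(max(components, key=components.get))  # processing
```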
As this chain of comments has previously indicated, you are best off using STAD if you want an accurate indication of response time. The ST12 trace (which combines the SE30 ABAP trace and the ST05 SQL trace) gives times that are less accurate than the values you get from STAD. I am not discounting the value of ST12 by any means; it is a very powerful tool to help you tune your transactions.
I hope this information is helpful!
Kind regards,
Geoff Irwin
Senior Support Consultant
SAP Active Global Support -
Spry menu response time problems with IE
We implemented the Spry vertical menu for showing the categories of a product catalog. It has almost 1400 categories organized in about 5 levels; some categories have about 20 subcategories. These categories are in a ColdFusion session variable.
It works perfect in mozilla, but in IE7 and IE6 in some
computers, present this problem:
- The response time is slow when you change from one category that has subcategories to another. If you watch the computer's Windows Task Manager while you use the menu, processor usage goes up to the top,
and the efficiency of the menu decreases.
See in
http://edit.panamericana.com.co/
Thanks,
Alejandro
mdr4win wrote:
I don't think you understood my question; it wasn't about the body background, but about getting the Spry image slideshow to work properly in IE.
I was not talking about body background, but about having markup that screws up your document when using a browser. Body background just happened to be there. Perhaps you would do well to have a look here http://validator.w3.org/check?verbose=1&uri=http%3A%2F%2Flittletreats.org%2F.
I have noticed that you ignored my solution; your document still shows two bodies.
Perhaps I should have mentioned that I tested in IE6 through to IE9 using IETester and the above was the only thing stopping IE from performing properly.
How did you determine that the slideshow was not working correctly in IE and which versions of IE?
Grumps -
Coherence and EclipseLink - JTA Transaction Manager - slow response times
A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
Java 1.7
Using Spring Framework 4.0.5
EclipseLink 12.1.2
TopLink grid 12.1.2
Coherence 12.1.2
javax.persistence 12.1.2
The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. From looking into DynaTrace, it appears that we're hitting a brick wall at the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager but we're not sure what. If we have a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request. When the JTA transaction manager is involved, it's 20-25 seconds.
Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type?
Hi Volker/Markus,
Thanks a lot for the response.
Yeah Volker, you are absolutely right: the 10-12 seconds happens when we have not used the transaction for several minutes... it looks like the transactions are moved out of the SAP buffer, or something similar, in a very short time.
And yes, the ABAP WPs are running in Pool 2 (*BASE), and the Java server I have set up in another memory pool of 7 GB.
I would say the performance of the JAVA part is much better than the ABAP part.
Should I just remove the ABAP part of SolMan from memory pool 2 and assign Java/ABAP a separate huge memory pool of, say, 12-13 GB?
Is that likely to improve my performance?
No, I have not changed RSDB_TDB in TCOLL from twice daily to once weekly on all systems on this box. It is still running twice daily.
Should I change it to once weekly on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
But my CPU utilization is anyway only around 30% on average. It's i570 hardware, currently running 5 CPUs.
So do you still think I should change this job from twice daily to once weekly on all systems on this box?
Markus, Did you open up any messages with SAP on this issue.?
I remember working on the 3.2 version of soultion manager on change management and the response times very much better than this as compared to 4.0.
Let me know guys and once again..thanks a lot for your help and valuable input.
Abhi -
Explain plan - lower cost but higher response time in 11g compared to 10g
Hello,
I have a strange scenario where 'm migrating a db from standalone Sun FS running 10g RDBMS to a 2-Node Sun/ASM 11g RAC env. The issue is with response time of queries -
In 11g Env:
SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
LAST_ANALYZED NUM_ROWS
11-08-2012 18:21:12 3413956
Elapsed: 00:00:00.30
In 10g Env:
SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
LAST_ANAL NUM_ROWS
07-NOV-12 3502160
Elapsed: 00:00:00.04
If you look at the response times, even a simple query on dba_tables takes ~8 times longer. Any ideas what might be causing this? I have compared the explain plans and they are exactly the same; moreover, the cost is lower in the 11g env than in the 10g env, but the response time is still higher.
BTW - 'm running the queries directly on the server, so no network latency in play here.
Thanks in advance
aBBy.
*11g Env:*
PLAN_TABLE_OUTPUT
Plan hash value: 4147636274
| Id | Operation                   | Name               | Rows | Bytes | Cost (%CPU) | Time     |
|  0 | SELECT STATEMENT            |                    | 1104 |  376K |     394 (1) | 00:00:05 |
|  1 | SORT ORDER BY               |                    | 1104 |  376K |     394 (1) | 00:00:05 |
|  2 | TABLE ACCESS BY INDEX ROWID | NCP_DETAIL_TAB     | 1104 |  376K |     393 (1) | 00:00:05 |
|* 3 | INDEX RANGE SCAN            | IDX_NCP_DET_TAB_US | 1136 |       |      15 (0) | 00:00:01 |
Predicate Information (identified by operation id):
3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
15 rows selected.
*10g Env:*
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4147636274
| Id | Operation                   | Name               | Rows | Bytes | Cost (%CPU) | Time     |
|  0 | SELECT STATEMENT            |                    | 1137 |  373K |     389 (1) | 00:00:05 |
|  1 | SORT ORDER BY               |                    | 1137 |  373K |     389 (1) | 00:00:05 |
|  2 | TABLE ACCESS BY INDEX ROWID | NCP_DETAIL_TAB     | 1137 |  373K |     388 (1) | 00:00:05 |
|* 3 | INDEX RANGE SCAN            | IDX_NCP_DET_TAB_US | 1137 |       |      15 (0) | 00:00:01 |
Predicate Information (identified by operation id):
3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
15 rows selected.
The query used is:
explain plan for
select
NCP_DETAIL_ID ,
NCP_ID ,
STATUS_ID ,
FIBER_NODE ,
NODE_DESC ,
GL ,
FTA_ID ,
OLD_BUS_ID ,
VIRTUAL_NODE_IND ,
SERVICE_DELIVERY_TYPE ,
HHP_AUDIT_QTY ,
COMMUNITY_SERVED ,
CMTS_CARD_ID ,
OPTICAL_TRANSMITTER ,
OPTICAL_RECEIVER ,
LASER_GROUP_ID ,
UNIT_ID ,
DS_SLOT ,
DOWNSTREAM_PORT_ID ,
DS_PORT_OR_MOD_RF_CHAN ,
DOWNSTREAM_FREQ ,
DOWNSTREAM_MODULATION ,
UPSTREAM_PORT_ID ,
UPSTREAM_PORT ,
UPSTREAM_FREQ ,
UPSTREAM_MODULATION ,
UPSTREAM_WIDTH ,
UPSTREAM_LOGICAL_PORT ,
UPSTREAM_PHYSICAL_PORT ,
NCP_DETAIL_COMMENTS ,
ROW_CHANGE_IND ,
STATUS_DATE ,
STATUS_USER ,
MODEM_COUNT ,
NODE_ID ,
NODE_FIELD_ID ,
CREATE_USER ,
CREATE_DT ,
LAST_CHANGE_USER ,
LAST_CHANGE_DT ,
UNIT_ID_IP ,
US_SLOT ,
MOD_RF_CHAN_ID ,
DOWNSTREAM_LOGICAL_PORT ,
STATE
from markethealth.NCP_DETAIL_TAB
WHERE UNIT_ID = :B1
ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
This is the query used for Query 1.
Stats differences are:
1. Row counts differ by approximately 90K (more rows in the 10g env).
2. RAC env has 4 additional columns (excluded in the select statement for analysis purposes).
3. Gather Stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g. -
Average HTTP response time going up week by week as said by SAP
Hello All,
I was looking into our SRM EarlyWatch report; SRM is on Windows SQL Server 2005 with Windows Server 2003.
Among the performance indicators there is an interesting trend, though nothing is in yellow or red:
the average response time in the HTTP task, the maximum number of HTTP steps per hour, and the average DB request time in the HTTP task are all going up.
I understand that as the number of HTTP steps increases, the response time increases, which is quite obvious.
But how can this load be balanced out so that the average response time comes down?
What steps should be taken so that the response time comes down even though the load on the server is going up?
Rohit
Hi Rohit,
Is your system in a high-availability setup? If yes, then try to balance the load across Node A and Node B. If this is already done, or if your system is not in high availability, then plan and install an additional application server (dialog instance) for load sharing.
Regards,
Sharath