Response Time on DB size reduction

Hi Folks,
If I have a database of 1TB and one of my reports has a response time of about 10 minutes, and I then reduce the database size by 20% (which gives me around 800GB), what will be the approximate impact on the response time for that report?
Srini

Considering that the load remains the same ...
First check how much time the report spends exchanging data with the database.
You can then expect that portion of the time to shrink roughly in proportion to the reduction in database size.
If the database size is reduced by 20%, you can expect the time to drop by approximately 5-15%,
because most of the time is usually spent exchanging data with the database.
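For example, on Oracle 10g or later you can get a rough split from V$SQL, assuming the report's main statement can be identified (the LIKE pattern below is a placeholder; ELAPSED_TIME is in microseconds):
-- database time consumed by the report's main statement
SELECT sql_id, executions, ROUND(elapsed_time / 1e6) AS elapsed_secs
FROM v$sql
WHERE sql_text LIKE '%<report query text>%';
Comparing elapsed_secs with the report's end-to-end 10 minutes shows how much of the response time a database-side reduction can actually affect.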
Regards,
Lalit Mohan Gupta.

Similar Messages

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    A SAP Going Live Verification session has just been performed on our SAP Production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been considered as having too high an average read time per block:
    File name                                      Blocks read    Avg. read time (ms)    Total read time (ms)
    /oracle/PMA/sapdata5/sr3700_10/sr3700.data10   67534          23                     1553282
    I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54MB).
    Actually we have BW loads that generate "Checkpoint not complete" messages every night.
    I've read in sap note 79341 that :
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have problems understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    The recommended ("standard") values are published at the end of SAP note #322896.
    23 ms really seems a little bit high to me - for example, we have roughly 4 to 6 ms on our productive system (with SAN storage).
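    For reference, you can compute per-file averages like these yourself (a minimal sketch against V$FILESTAT, which requires TIMED_STATISTICS; READTIM is in centiseconds, hence the factor of 10):
    -- average read time per block, worst files first
    SELECT file#, phyblkrd, readtim,
           ROUND(readtim * 10 / NULLIF(phyblkrd, 0), 1) AS avg_ms_per_block
    FROM v$filestat
    ORDER BY avg_ms_per_block DESC;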
    >> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    Correct.
    >> But how is it that frequent checkpoints should decrease the time necessary for recovery?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event the following 3 things happen in an Oracle database:
    Every dirty block in the buffer cache is written down to the datafiles
    The latest SCN is written (updated) into the datafile headers
    The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints do not happen as often, and the dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo logs to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN - ergo the recovery is faster.
    But this concept does not quite fit reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
    There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is kept (for example FAST_START_MTTR_TARGET).
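    For example, you can compare the configured target with Oracle's current estimate (a minimal sketch; both values are in seconds, and 300 is just an illustrative target, not a recommendation):
    -- configured recovery-time target vs. current estimate
    SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
    -- ask Oracle to keep instance recovery to roughly 5 minutes
    ALTER SYSTEM SET fast_start_mttr_target = 300;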
    Regards
    Stefan

  • Response Time of a query in 2 different enviroment

    Hi guys, Luca speaking, sorry for my badly written English.
    The question is:
    The same query on the same table - same definition, same number of rows, defined on the same kind of tablespace; the tables are analyzed.
    *) I have a query in Benchmark with good results in execution time, the execution plan is really good
    *) in Production the execution plan is not so good, the response time isn't comparable (hours vs seconds)
    #### The execution plans are different ####
    #### The stats are the same ####
    This is the table storico.FLUSSO_ASTCM_INC A, with these stats in benchmark:
    Owner: STORICO, Name: FLUSSO_ASTCM_INC, Partition/Subpartition: (none), Tablespace: TBS_DATA
    NumRows: 2861719, Blocks: 32025, EmptyBlocks: 0, AvgSpace: 0, ChainCnt: 0, AvgRowLen: 74
    UserStats: NO, GlobalStats: YES, LastAnalyzed: 10/01/2006 15.53.43, SampleSize: 2861719, Monitoring: NO
    Status: Normal, Successful Completion: 10/01/2006 16.26.05
    In Production the stats are the same.
    The other table is an external table.
    The only difference I have noticed so far is the tablespace the tables are defined on:
    Production
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
    Benchmark
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    I'm studying it at the moment.
    What do I have to check to obtain the same execution plan (without changing the query)?
    This is the query:
    SELECT
    'test query',
    sysdate,
    storico.tc_scarti_seq.NEXTVAL,
    NULL, --ROW_ID
    -- A.AZIONE,
    'I',
    A.CODE_PREF_TCN,
    A.CODE_NUM_TCN,
    'ADSL non presente su CRM' ,
    -- a.AZIONE
    'I'
    || ';' || a.CODE_PREF_TCN
    || ';' || a.CODE_NUM_TCN
    || ';' || a.DATA_ATVZ_CMM
    || ';' || a.CODE_PREF_DSR
    || ';' || a.CODE_NUM_TFN
    || ';' || a.DATA_CSSZ_CMM
    || ';' || a.TIPO_EVENTO
    || ';' || a.INVARIANTE_FONIA
    || ';' || a.CODE_TIPO_ADSL
    || ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
    || ';' || a.TIPO_RICHIESTA_CESSAZIONE
    || ';' || a.ROW_ID_ATTIVAZIONE
    || ';' || a.ROW_ID_CESSAZIONE
    FROM storico.FLUSSO_ASTCM_INC A
    WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
    WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
    AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
    AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
    AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
    'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
    Output of "set autotrace traceonly explain" in PRODUCTION (ESERCIZIO):
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
    4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
    Output of "set autotrace traceonly explain" in BENCHMARK:
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 By
    tes=291895338)
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=29189533 :Q810002
    8)
    3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 :Q810000
    Card=2861719 Bytes=183150016)
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q810001
    t=2 Card=1 Bytes=38)
    2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) US
    E_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
    3 PARALLEL_FROM_SERIAL
    4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
    EF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
    The differences in the init.ora are in these parameters - could they influence the optimizer, so that the execution plans come out so different?
    background_dump_dest
    cpu_count
    db_file_multiblock_read_count
    db_files
    db_32k_cache_size
    dml_locks
    enqueue_resources
    event
    fast_start_mttr_target
    fast_start_parallel_rollback
    hash_area_size
    log_buffer
    log_parallelism
    max_rollback_segments
    open_cursors
    open_links
    parallel_execution_message_size
    parallel_max_servers
    processes
    query_rewrite_enabled
    remote_login_passwordfile
    session_cached_cursors
    sessions
    sga_max_size
    shared_pool_reserved_size
    sort_area_retained_size
    sort_area_size
    star_transformation_enabled
    transactions
    undo_retention
    user_dump_dest
    utl_file_dir
    Please Help me
    Thanks a lot Luca

    Hi Luca,
    Are test and production nearly identical (same OS, same hardware platform, same software version, same release)?
    You're using external tables - is the speed of those drives identical?
    Have you analyzed the schema with the same statement? Could you send me the statement?
    Do you have system statistics?
    Have you tested the statement in an environment which is nearly like production (concurrent users etc.)?
    Could you send me the top 5 wait events from the Statspack report?
    Are the data in production and test identical? No data changed? No index dropped? No additional index? Are all tables and indexes analyzed?
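    One quick check (a sketch; the IN list is just the optimizer-relevant subset of the parameters you listed) is to run this on both systems and diff the output:
    SELECT name, value
    FROM v$parameter
    WHERE name IN ('db_file_multiblock_read_count', 'hash_area_size',
                   'sort_area_size', 'parallel_max_servers',
                   'query_rewrite_enabled', 'star_transformation_enabled')
    ORDER BY name;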
    Regards
    Marc

  • Transaction execution time and block size

    Hi,
    I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I would create a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
    2K oracle block database had 3 datafiles, each 7GB.
    4K oracle block database had 2 datafiles, each 10GB.
    8K oracle block database had 1 datafile of 20GB.
    The best transaction execution time was on the 8K block; the 4K block had slightly longer transaction times, but the 2K block definitely had the worst transaction time.
    I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, the largest table in the database (2.9GB), and that executed slowly (the number of executions was low compared to the 8K numbers).
    Now here is my question: is it possible that multiple datafiles are the reason for these low transaction times? I have AWR reports from that period, but as someone who is still learning the DBA trade, I would like to ask: how could I identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics?
    THX to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    >> I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
    A single drive does make it a little too easy for apparently random variation in performance.
    >> Each time I started a new test on a different block size, I would create a new database from scratch to avoid messing something up.
    Did you do anything to ensure that the physical location of the data files was a very close match across databases - inner tracks vs. outer tracks could make a difference.
    >> (each time the SGA and PGA parameters were identical).
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to changes in execution plan, and this can often be associated with different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    >> 2K oracle block database had 3 datafiles, each 7GB. 4K oracle block database had 2 datafiles, each 10GB. 8K oracle block database had 1 datafile of 20GB.
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    >> The best transaction execution time was on the 8K block ...
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation - and then it has to be consistent and reproducible.
    >> I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table ...
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table - if not then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
    >> Is it possible that multiple datafiles are the reason for these low transaction times?
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
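    For instance (a sketch; the session_id and serial_num values are placeholders to be looked up in V$SESSION first):
    -- switch on extended SQL trace, including wait events, for the test session
    EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 45, waits => TRUE, binds => FALSE);
    -- run the slow transactions, then switch tracing off again
    EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 45);
    -- then summarize the trace file, slowest statements first:
    --   tkprof <tracefile>.trc summary.txt sort=exeela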
    Regards
    Jonathan Lewis

  • High Response Times with 50 content items in a Publisher 6.5 portlet

    Folks,
    We have set up a load test, running with a single user, in which new News Article content items are inserted into a Publisher 6.5 portlet created from the News Portlet template. Inserts have good response times through the first 25 or so content items in the portlet. Then response times grow linearly, until it takes ten minutes to insert a content item when there are already 160 content items.
    This is a test system that is experiencing no other problems. There are no other users on the system, only the single test user in LoadRunner, inserting one content item at a time. The actual size of each content item is tiny. Memory usage in the Publisher JVM (as seen on the Diagnostics page) does not vary from 87% used with 13% free. So I asked for a DB trace, to determine whether there were long-running queries. I can provide this on request; it zips to less than 700k.
    Have seldom seen this kind of linear scalability!
    Looking at the trace through SQL Server Profiler, there are several items running for more than one second; the Audit Logout EventClass repeatedly occurs with long durations (ten minutes and more). The users are the publisher user, the workflow user, an NT user, and one DatabaseMail transaction taking 286173 ms.
    In most cases there is no TextData, and in the longest-running cases the ApplicationName is i-net opta 2000 (which looks like a JDBC driver).
    Nevertheless, for the short-running queries, there are many (hundreds) of calls to exec sp_execute and IF @@TRANCOUNT > 0 ROLLBACK TRAN; this is most of what fills the log. This is strange because only a few records were actually inserted successfully during the course of the test. I see numerous calls to sp_prepexec related to the main table in question, PCSCONTENTITEMS, but with very short durations (no apparent problems) on the execution of the stored procedures - usually completed within 20ms.
    I am unable to tell if a session has an active request but is being blocked, or is blocking others... can anyone with SQL Server DBA knowledge help me interpret these results?
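    For what it's worth, one way to check for blocking while the test runs (a sketch against master..sysprocesses, which exists on SQL Server 2000 and later):
    -- sessions that are currently blocked, and the spid blocking them
    SELECT spid, blocked, waittime, lastwaittype, program_name
    FROM master..sysprocesses
    WHERE blocked <> 0;
    Any row returned is a blocked session; its "blocked" column holds the spid of the blocker.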
    Thanks !!!
    Robert

    hmmm....is this the ootb news portlet? does it keep all content items in one publisher folder? if so then it is probably trying to re-publish that entire folder for every content item and choking on multiple republish executes. i don't think that ootb portlet was meant to cover a use case of multiple content item inserts so quickly. by definition, newsworthy stuff should not need bulk inserts. is there another way to insert all of the items using publisher admin and then do one publish for all?
    i know in past migration efforts when i've written utilities to migrate from legacy to publisher, the inserts and saves for each item took a couple of seconds each. the publishing was done at the end and took quite a long time.

  • Discoverer Performance/ Response Time

    Hi everyone,
    I have a few questions regarding the response time for discoverer.
    I have Table A with 120 columns. I need to generate a report based on 12 columns from this table A.
    The questions are whether the factors below contribute to the response time of Discoverer:
    1. The number of items included in the business area folder (i.e. whether to include 120 cols or just 12 cols)
    2. The actual size of the physical table (120 cols), although I only selected 12 cols. If the actual size of the physical table were only 12 cols, would it improve the performance?
    3. Will more parameters increase the processing time?
    4. Do joins increase the processing time?
    5. Will using a Custom Folder and writing a SQL statement to select just the 12 columns improve the performance? (A sketch of such a statement follows this list.)
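    For illustration, such a custom folder statement would just project the needed columns (the table and column names here are hypothetical):
    SELECT col1, col2, col3, col4, col5, col6,
           col7, col8, col9, col10, col11, col12
    FROM table_a;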
    Really appreciate anyone's help on this.
    Cheers,
    Angeline

    Hi,
    NP and Rod, thanks a lot for your replies!
    Actually I was experiencing a different thing that contradicts your replies.
    1. When I reduced the number of items included in my Biz Area from 120 to 12, the response time improved significantly, from around 5 minutes to 2-3 minutes.
    2. When I tried to create a dummy table with just the 12 cols needed for the report, I got a very fast response time, i.e. 1 second to generate the report. But of course the dummy table contains much less data (only around 500K records). Btw, is Discoverer able to handle large databases? What is the largest number of records it can handle?
    3. When I add more parameters, it seems to add more processing time in Discoverer.
    4. Thanks for the clarification on this one.
    5. And the funny thing is, when I use a custom folder to just select the 12 columns, the performance also improves significantly, with the estimated query time reduced from over 2 minutes to just 1 min 30 secs. But the performance is still inconsistent: sometimes it only takes around 1 min 40 secs, but sometimes it can run up to 3 minutes for the same data.
    Now I am creating my report using the custom folder because it has the best response time so far for me. But based on your replies it's not really encouraged to use the custom folder?
    I need to improve the response time for the Discoverer Viewer as the response time is very slow and users don't really like it.
    Would appreciate anyone's help in solving this issue :) Thanks..
    Cheers,
    Angeline

  • Response time of form

    I developed a form with triggers like key-up, key-down, post-query and on-lock in three blocks. It works fine in a client/server environment, but when I run the form from the web server it slows down, and a job that should happen on one click of a button actually happens only when clicked three times.
    In the master-detail blocks, which are both in tabular form, I have placed an additional field which works as a current record indicator, and when I scroll through the records the current record indicator lags behind.
    I have even removed most of the triggers, but the response time is still very poor.
    I am using Forms Server with the Apache web server.


  • Slow response time for JSP pages under iAS 6.0 SP4

    Hi,
    I have an application deployed on iPlanet App Server 6.0 SP4 on Solaris 2.8, using a single KJS engine and lite sessions. The KJS memory size is min 256 and max 256 MB, but verbose:gc shows memory is 98% free.
    when i restart the app server, all JSP pages are really rendered fast.
    After a while (1 or 2 days), the time to service the same request to JSP pages gets much longer (even for JSP pages containing only static content). The CPU is idle ... it just takes time. The KXS log shows a request taking 2-4 seconds instead of the roughly 150 milliseconds seen just after the engine is restarted.
    Now if I call a servlet (which does not dispatch to a JSP), the response time is OK! Memory is OK. It looks like it's related to JSP pages only.
    Anyone have an idea what the problem could be? One config param is the JSP page cache in iASAT. The default value is 10. What is a correct setting for production? I have 4 different web apps deployed in the same server instance.
    Thanks a lot for your input
    Andre Paradis

    Andre,
    I have found the answer to my problem, and perhaps yours. It seems that I18N (internationalization) in SP4 may have a performance bug in it.
    My soak tests show that with I18N checked in the iAS Admin Tool, testing the fortune cookie sample application under light load (1 request/sec) resulted in a kxs response time of initially 15 ms; however, this response time increased by roughly 1% per request (compounded, 1.01^100 ≈ 2.7, i.e. after 100 requests the response time had more than doubled).
    Switching I18N off yielded a steady 7ms kxs response time from the fortune cookie application.
    I would add that I turned I18N on AFTER the installation procedure.
    Is this a known issue in SP4? Is there a patch?
    regards,
    Owen

  • High Avg Response time for logon requests via CMS

    Hi Team ,
    We are continuously observing a high average response time for logon requests to the BO system via the Central Management Server.
    We observe response times of up to 25043 ms.
    Currently we are on SAP BO 4.0  SP7 patch 9  (4.0.7.9)
    DB = SQL server 2008 R2
    App Server = SAP NW 7.31 SP 7.
    Also, the size of our CMS DB is around 15 GB.
    What could be the possible reasons?
    Regards ,
    Abhinav

    Hi Abhinav,
    As one of the issues has been raised as a bug which is resolved in BI 4.1, you can upgrade to resolve it. Also, if the CMS database size is large then the CMS has to search for objects through a huge number of rows, which will affect overall performance. So you can try to reduce this size as per my previous update.
    Apart from this you can try the following steps:
    1. Try to ping the CMS DB server from the BO servers and confirm the response comes in 1 ms. Run tracert <DB server name> from CMD and check the number of hops. If the response time is not 1 ms or there are many hops, ask your network team to resolve the network latency issue.
    2. You can increase the "System database connections" for each CMS from the server properties. It is set to 14 by default, which means the CMS will establish 14 connections to the CMS database at any time. You can increase this value; however, please make sure that the system database allows more connections than the default 14 from the DB side. This needs to be confirmed with your DBA.
    3. Please add the CMS cluster members in the platformservices.properties file under the Tomcat folder. Please refer to the following SAP KBAs for the steps to add the cluster members:
       http://service.sap.com/sap/support/notes/1668515
       http://service.sap.com/sap/support/notes/1766935
    4. Also please confirm the number of users simultaneously logged in to the system at peak time. Usually one CMS is capable of handling around 500 requests, so if you have more than 1000 users then add another new CMS on the same nodes, provided there is enough free memory on the server.
    Regards,
    Hrishikesh

  • High Response time

    Hi All ,
    In my SANDBOX, transaction code execution is fine. But after that, during any customization or table viewing, the response time is much higher, and sometimes the session even gets terminated. How can I fix this problem? Can anybody help me?
    Regards
    Asad

    Hi,
    I would look into st03n, which would give you an idea which transactions are taking more time and where they are struggling, such as DB time, CPU time, number of DB reads and GUI time.
    Also check whether your system has sufficient resources in terms of memory, CPU and paging file (swap size). Please check the paging rates, as these would indicate problem areas.
    You can run a trace as mentioned earlier and check the ST22 dumps and the SM21 system log for what the error messages indicate. This should make it clear what the core issues are, and if you have any queries, please post them here so that we can check them for you.
    Cheers Sam

  • Problem with response time

    Hi, I am experiencing long response times for my dashboard (40 seconds on some PCs). I wonder what the normal response time for your dashboards is, and what the size of your worksheet is?
    In my case, I have a tab set with 5 tabs; under each tab there are about 100 rows by columns out to AH. And as we are still at the proof-of-concept stage in my company, I am using offline data - but if we use live data from BW, what would that mean for the response time? Even longer?
    Thank you very much.

    Hello,
    We try to keep our SWF size under 1024 KB, but that doesn't mean it will load fast, since the connections load shortly after the component initializes. We have around 100 components on average, and some custom components. One thing you should stay away from is doing a lot of grouping in the Object Browser.
    Questions for you:
    Which version of xcelsius and service pack are you using?
    How many connections run when the dashboard loads initially?
    How many total connections?
    How many rows and columns does the initial load connection bring?
    How much time does the same query take in the database?

  • Data warehouse response time

    Hi all,
    I have a warehouse about 1/2 TB in size, and its response time is very slow when reports are run against it.
    The shared pool stats are below:
    SQL Area get ratio = 95.2%
    pin ratio = 99.8%
    reloads/pins = .0016%
    What should I be looking at?
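    For reference, figures like those above come from V$LIBRARYCACHE (a minimal sketch, summed across namespaces; add WHERE namespace = 'SQL AREA' for the SQL Area alone):
    SELECT ROUND(SUM(gethits) / SUM(gets) * 100, 1) AS get_ratio_pct,
           ROUND(SUM(pinhits) / SUM(pins) * 100, 1) AS pin_ratio_pct,
           ROUND(SUM(reloads) / SUM(pins) * 100, 4) AS reloads_per_pins_pct
    FROM v$librarycache;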

    >> Are you certain that no query plans have changed?
    I am not 100% sure.
    >> What version of Oracle?
    9.2.0.4.0
    >> Which optimizer are you using?
    None set explicitly; the initialization parameters are:
    optimizer_index_caching=0
    optimizer_index_cost_adj=100
    optimizer_mode=choose
    >> When did you last gather statistics?
    Today
    Thanx

  • WebDynpro SSR / Browser Response Time

    Good morning,
    When we display a WebDynpro view, we see an unacceptable response time (of almost 1 minute) and the CPU of the client machine rises to almost 100%.
    The view is composed of a menu on the left (which is an embedded view) and a main view, formed by a group that contains a Table within a ScrollContainer. So the view is not very complex.
    The table is mapped to a simple structure whose attributes are simple objects (strings), and the maximum table record size is 100.
    Additionally, whenever any event takes place, either in the menu on the left or in the table itself, the response time remains at 1 minute, although no business logic is executed.
    We have tried deleting the ScrollContainer and showing the table directly, but the performance doesn't improve. We have also verified that there are no communication network problems.
    The performance of the client browser has been verified by including the SSR parameter ("sap.session.ssr.showInfo=true"). A document with an image is attached; it shows that the browser response time is 45 seconds to display 1 MB of content (isn't that too much?? Why does WD generate so much HTML??).
    SAP WAS 6.40 and SP15
    Browser:Internet Explorer 6.0.2800.1106 SP1
    Thanks in advance,
    Eloy

    Hi Eloy,
    We also faced a similar problem in our project. When the page size reaches 0.5MB+ the response becomes too slow.
    This is because WebDynpro gets marshalled data from the backend and unmarshals it based on your screen design. So in your case, if you have 100 rows * 50 columns, it will unmarshal all these records at the front end, i.e. the client. Hence you see the CPU of your client reaching 100%.
    You have very few options:
    1) Decrease the number of visible rows on the screen at a time - say max 10. If you have 40-50 columns, explore using tab strips with 12-15 columns in each tab.
    2) Increase the RAM and processing capabilities of your client PCs. We were kind of lucky that our customer agreed to this and got P4 1GB machines.
    Lets hope the performance is improved in the future releases.
    Regards,
    Shubham

  • Average Response Time ASM

    Hi,
    Installed 10g RAC SE on Enterprise Linux 64. We have two ASM instances with 2 disk groups, DATA and FRA. Database Control reports for disk group DATA (which consists of 3 RAID 5 volumes) that the Average Response Time is 54-58 ms; the Read Response Time is around 40-45 ms. Everything is done per Oracle recommendations, with a large stripe size and so on. We don't have a performance problem, but shouldn't the response time be better? Can someone share response time values for comparison?
    Regards Lars

    Grid Control has always been notoriously bad at measuring the I/O performance of ASM, sadly. The service time numbers are usually high, and the IOPS numbers often are not IOPS so much as "I/Os to date, with periodic resets to zero" numbers.
    As the previous poster said, use iostat. On Linux, iostat -x, the svctm column.

  • Average Response Time for Reports

    Hi Gurus,
    I am using OAS 10.1.2.0.2 with Business Intelligence and Forms Installation.
    Previously I had never seen the Average Response Time of the Reports server exceed 10000 ms,
    but it is increasing continuously, and within the last 2-3 days it has increased up to 114668 ms.
    CPU Usage (%): N/A
    Memory Usage (MB): N/A
    Average Response Time (ms): 114668
    Its maximum queue size is 1000.
    I am not able to find out why it is increasing in this manner.
    Please help.
    Thanx

    Hi,
    Today it has increased up to 170236 ...
    Thanx,
    Santosh
