Performance improvement for af:table

My page consists of a table and a button. The button displays a popup containing several tabs with trees inside them that allow the user to filter the data. Clicking OK in the popup runs the query and refreshes the table. The table is configured as follows:
autoHeightRows="1000"
fetchSize="The number of rows returned by the query"
contentDelivery="immediate"
immediate="true"
value="call a method returning a List<MyLineBean> from managed bean"
One requirement is to display the table with no scrollbar.
The first issue is that a table with 1000 rows is not only slow to render but also makes the browser sluggish (Chrome in my case). The corresponding JS file is about 11 MB, and I can understand that processing an 11 MB JS file can be slow, especially with DOM creation.
The other issue I noticed is that the time to display the popup depends on the size of the table. With a 1000-row table, I click the button and the first server request is sent after 3 s; the JS size is about 20 KB and network latency is low. Closing the popup with no processing is also slow (~2 s). If I repeat the same experiment with a 13-row table (180 KB of JS), the popup opens and closes instantaneously.
My priority is to improve the speed of displaying the popup. Is there any reason why this speed depends on the size of the table?
ADF 11gR1 + WebCenter Portal

Hi user,
Follow this link for better table performance: GEBS | ADF View Object Performance Tunning Analysis.
Thank You.

Similar Messages

  • Will there be a performance improvement with separate tables vs. a single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something different than the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage-technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking, there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share of your total runtime (which is unlikely), then partitioned tables could have a negative performance impact.
    Having said this: as with all performance-related questions, to get a conclusive answer you need to measure the times required for both alternatives (a minimal SQL sketch of the two set-ups follows this reply).
    - Lars
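    For illustration, a minimal HANA SQL sketch of the two alternatives being compared; all table and column names are hypothetical:

    -- Alternative 1: one big table, hash-partitioned on the key.
    -- Each of the 4 partitions gets its own delta store.
    CREATE COLUMN TABLE sales_items (
      id     BIGINT,
      region NVARCHAR(4),
      amount DECIMAL(15,2),
      PRIMARY KEY (id)
    ) PARTITION BY HASH (id) PARTITIONS 4;

    -- Alternative 2: separate tables, where the split itself carries
    -- semantics (here: one table per region).
    CREATE COLUMN TABLE sales_items_emea (
      id     BIGINT PRIMARY KEY,
      amount DECIMAL(15,2)
    );

    As noted above, only measuring both set-ups against the actual workload gives a conclusive answer.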

  • DMA Performance Improvements for TIO-based Devices

    Hello!
    DMA Performance Improvements for TIO-based Devices
    http://digital.ni.com/public.nsf/websearch/1B64310FAE9007C086256A1D006D9BBF
    Can I apply the procedure to NI-DAQmx 9? These ini-files don't seem to exist anymore in the newer version.
    Best, Viktor

    Hi Viktor,
    this page is 7 years old and doesn't apply to NI-DAQmx.
    Regards, Stephan

  • Performance tuning for BSAD table

    Hi All,
    How can I improve the performance of this SELECT query? I have only 'Clearing Date' on my selection screen, so I can use only 1 table key out of the 10 keys. Please also suggest whether I can use the BSEG table or not.
          SELECT augdt        "Clearing Date
                 vbeln        "Billing Document
          FROM bsad
          INTO TABLE it_bseg
          WHERE augdt EQ p_date      "Parameter: Clearing Date
          AND   bschl IN ('01' , '02' , '11' , '12')
          AND   vbeln NE space.

    Hi,
    You can do the following, which will not give 100% performance but performs better than your SELECT statement.
    You have the clearing date; from it you need to pick the fiscal year and move it into a field, say w_gjahr.
    w_gjahr = p_date+0(4).
    SELECT augdt        "Clearing Date
           vbeln        "Billing Document
           bschl        "Posting key (needed for the filter below)
      FROM bsad
      INTO CORRESPONDING FIELDS OF TABLE it_bseg
      WHERE augdt EQ p_date      "Parameter: Clearing Date
        AND gjahr EQ w_gjahr.
    IF sy-subrc = 0.
      SORT it_bseg BY vbeln.
    ENDIF.
    "Keep only the rows the original query wanted
    DELETE it_bseg WHERE vbeln EQ space
                      OR ( bschl NE '01' AND bschl NE '02'
                     AND   bschl NE '11' AND bschl NE '12' ).
    Reward with points if helpful.
    Regards,
    Vijay

  • Performance improvement for ALE/IDOC processing

    Dear experts,
    Could you let me know any information to tune performance?
    In our system (SAP R/3 Enterprise 6.20), material master data is distributed from one client to other clients in the same instance. This is because the material master is maintained centrally and is distributed to other clients for member companies.
    During the night batch, distributing master data takes more than 2 hours. Although the distribution usually finishes without errors, we would like to explore ways to improve processing performance. In particular, program RBDMIDOC runs for a long time to create IDocs for MATMAS, even when 0 master IDocs are created eventually.
    OSS notes listing OS/DB/SAP parameters related to ALE/IDoc, tips on organizing batch jobs, etc., would be very much appreciated.
    Many Thanks,
    Nori

    I'd recommend profiling the program at least once to see where there's possible room for improvement, i.e. an ABAP runtime analysis and SQL trace would be good for a start. You can check out the links in the thread Please Read before Posting in the Performance and Tuning Forum, which will give you some references if you need them. Once you have more details, you either know what you need to do or you can provide them and ask a more specific question (and thus you will receive much better answers).
    As a general remark though, I've often seen poor performance initially on the change pointer selection, because there are too many entries in the table (due to the system generating unnecessary change pointers, which should be disabled, and/or a lack of proper reorganization, i.e. deletion of old/processed change pointers via BD22). It sounds like this is most likely also the cause of your problem, because otherwise it's hard to explain long run times without generating any master IDocs. You can check the number of change pointers easily via view BDCPV or BDCP2 - it depends on how you store your change pointers (most likely you will find them via view BDCPV in a 6.20 system, unless somebody switched it to the newer BDCP2).
    If you still have them in BDCPV (or the underlying tables, to be more precise), check out OSS note 305462 - MOD: Migration change pointer to table BDCP2 (https://service.sap.com/sap/support/notes/305462), which will give you a general hint on how to do that (and thus also improve the performance). However, if you're currently not deleting any old change pointers, you should ensure that a regular job runs for transaction BD22 (program RBDCPCLR). You'll easily find other (possibly relevant) OSS notes by doing a search yourself...
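    A quick way to gauge the backlog is to count the change pointers, e.g. in SE16 or directly at the database level. A rough SQL sketch (the PROCESS flag is an assumption here; check the actual view definition in your system):

    -- Unprocessed change pointers waiting for RBDMIDOC
    SELECT COUNT(*) FROM bdcpv WHERE process = ' ';
    -- Total, including old/processed pointers that BD22 should clean up
    SELECT COUNT(*) FROM bdcpv;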

  • Control Center performance improvement for large (runtime) repositories

    Hi all,
    I'm currently working on a large OWB project. We have serious performance issues with the Control Center (version 10.2.0.3). I have investigated what happens when you start the Control Center (SQL traces) and have implemented the following:
    -- slow query 1
    -- also added parent_audit_object_id to this index to help with slow query 3
    CREATE INDEX colin_wb_rt_audit_objects_1 ON
      wb_rt_audit_objects(audit_object_id, parent_audit_object_id, audit_unit_id, number_of_script_errors, number_of_script_warnings);
    -- slow query 2
    CREATE INDEX colin_wb_rt_audit_scripts_1 ON
      wb_rt_audit_scripts(audit_object_id, audit_script_id);
    -- slow query 3
    CREATE INDEX colin_wb_rt_audit_objects_2 ON
      wb_rt_audit_objects(object_uoid, audit_object_id DESC, audit_unit_id);
    CREATE INDEX colin_wb_rt_audit_objects_3 ON
      wb_rt_audit_objects(parent_audit_object_id, audit_object_id);
    The reason this helps is that the indexes now contain all the data needed by the slow queries, obviating the need to go to the tables, which in these cases is particularly expensive as the tables contain large LOB columns.
    It is very interesting to see that two of the indexes can replace existing indexes that were added in a recent patch!
    For the wb_rt_warehouse_objects table I've implemented an alternative solution, and that is to move the storage of 2 of the 3 CLOBs out-of-row:
    ALTER TABLE wb_rt_warehouse_objects MOVE
      TABLESPACE <xxx>
      LOB (creation_script)
      STORE AS (
        TABLESPACE <xxx>
        DISABLE STORAGE IN ROW
      )
      LOB (client_info)
      STORE AS (
        TABLESPACE <xxx>
        DISABLE STORAGE IN ROW
      );
    where you should replace <xxx> with the tablespaces of your choice.
    I hope this will help some of you with large repositories.
    Cheers,
    Colin

    Hi David,
    I hope these improvements can be implemented! ;-)
    We have a runtime repository with some 2300 deployed tables and an equivalent number of mappings.
    The total number of rows in wb_rt_warehouse_objects is more than 40,000.
    I used an SQL trace and tkprof to identify some high cost queries -- individually these queries perform quite reasonably but when executed many times (as happens when starting the Control Center) these can cost many tens of extra seconds.
    If you're interested I can send you
    * traces
    * tkprofs
    * the slow queries
    and then you can see the before and after explain plans and why this works.
    Please contact me at colinthart at the Google mail service :-)
    Cheers,
    Colin

  • PERFORMANCE IMPROVEMENT for a DB view

    Hi,
    There are around 300,000 entries in MDBS and we are experiencing very slow access and low performance.
    Following is the query.
    The ima61v internal table has only a single entry in a sample run.
    SELECT wemng menge wepos elikz umrez
                 umren matnr werks lgort pstyp retpo           
            FROM  mdbs
            INTO (mdbs-wemng, mdbs-menge, mdbs-wepos, mdbs-elikz,
                  mdbs-umrez, mdbs-umren, mdbs-matnr, mdbs-werks,
                  mdbs-lgort, mdbs-pstyp, mdbs-retpo)         
            WHERE matnr  EQ ima61v-matnr
              AND werks  EQ ima61v-werks                       
              AND loekz  EQ space
              AND elikz  EQ space
              AND bstyp  IN ('F', 'L').
    The following is the ST05 analysis:
    Executions - 1
    Identical  - 0
    Records    - 0
    Time/exec  - 21,766,348
    Rec/exec   - 0
    AvgTime/R  - 21,766,348
    MinTime/R  - 21,766,348
    Obj. type  - MDBS
    The SQL explain is as follows:
    SELECT STATEMENT ( Estimated Costs = 7 , Estimated #Rows = 1 )
      6 TABLE ACCESS BY INDEX ROWID EKET
          ( Estim. Costs = 3 , Estim. #Rows = 1 )
        5 NESTED LOOPS
            ( Estim. Costs = 7 , Estim. #Rows = 1 )
          3 INLIST ITERATOR
            2 TABLE ACCESS BY INDEX ROWID EKPO
                ( Estim. Costs = 4 , Estim. #Rows = 1 )
              1 INDEX RANGE SCAN EKPO~1
                  ( Estim. Costs = 3 , Estim. #Rows = 1 )
                  Search Columns: 6
          4 INDEX RANGE SCAN EKET~0
              ( Estim. Costs = 2 , Estim. #Rows = 1 )
              Search Columns: 3
    1. The tables are not going for a full scan.
    2. DB stats are up to date.
    3. All indexes shown in the SQL explain are available in the DB.
    Apart from all these, what else can we check to identify the problem? If we change the variant to multiple materials and run it as a background execution, it takes more than 30 minutes.
    Also, please let me know how to resolve the issue.
    Thanks in Advance.
    Praneeth

    3 simple points:
    I am quite sure that you did not run the statement before you ran the trace; please repeat and show the result of the second or third execution.
    I guess that is the only point; the explain is so simple that it cannot take very long.
    And... there is no record coming back. I know that there are many executions where no record comes back, but is it really a good basis for a discussion of a performance problem? Is this statement never successful?
    Number of records:
    The view returned no records; you must check the two underlying tables, EKET and EKPO, to see how many records are in these tables.
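    As a plain SQL sketch, the base-table volumes behind the view can be checked like this (MDBS is a join view over EKPO and EKET, as the explain plan above shows):

    SELECT COUNT(*) FROM ekpo;
    SELECT COUNT(*) FROM eket;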

  • Performance improvement for select query

    Hi all,
    I need to improve the performance of the select query below, as it is taking a long time:
    SELECT vbeln pdstk
             FROM vbuk INTO TABLE it_vbuk1 FOR ALL ENTRIES IN it_likp
          WHERE vbeln = it_likp-vbeln       AND
                wbstk = 'C'  AND "pdstk = ' ' AND
                vbtyp IN gr_delivery AND
                ( fkstk = 'A' OR fkstk = 'B' ) OR
                ( fkivk = 'A' OR fkivk = 'B' ).
    Regards,
    Kumar

    Hi,
        Check if it_likp is sorted on vbeln.
    SELECT vbeln pdstk
      FROM vbuk INTO TABLE it_vbuk1 FOR ALL ENTRIES IN it_likp
      WHERE vbeln = it_likp-vbeln AND
            wbstk = 'C' AND
            vbtyp IN gr_delivery AND
            ( ( fkstk = 'A' OR fkstk = 'B' ) OR   "<-- check this condition, whether ( ) is needed
              ( fkivk = 'A' OR fkivk = 'B' ) ).
    Regards,
    Srini.

  • Query performance improvement using pipelined table function

    Hi,
    I have got two select queries. One is like:
    select * from table
    The other uses a pipelined table function:
    select *
    from table(pipelined_function(cursor(select * from table)))
    Which query will return the result set faster?
    Please suggest methods for retrieving the data set faster (using a pipelined table function) than with a normal select query.
    rgds
    somy

    Compare the performance between these solutions:
    create table big as select * from all_objects;
    First test the performance of a normal select statement:
    begin
      for r in (select * from big) loop
       null;
      end loop;
    end;
    /
    Second, a pipelined function:
    create type rc_vars as object 
    (OWNER  VARCHAR2(30)
    ,OBJECT_NAME     VARCHAR2(30));
    create or replace type rc_vars_table as table of  rc_vars ;
    create or replace
    function rc_get_vars
    return rc_vars_table
    pipelined
    as
      cursor c_aobj
             is
             select owner, object_name
             from   big;
      l_aobj c_aobj%rowtype;
    begin
      for r_aobj in c_aobj loop
        pipe row(rc_vars(r_aobj.owner,r_aobj.object_name));
      end loop;
      return;
    end;
    /
    Test the performance of the pipelined function:
    begin
      for r in (select * from table(rc_get_vars)) loop
       null;
      end loop;
    end;
    /
    On my system the simple select-statement is 20 times faster.
    Correction: It is 10 times faster, not 20.
    Message was edited by:
    wateenmooiedag
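    For reference, a minimal PL/SQL sketch of how such a comparison can be timed (assuming the BIG table and RC_GET_VARS function created above; dbms_utility.get_time returns hundredths of a second):

    declare
      l_start number;
    begin
      -- time the plain select
      l_start := dbms_utility.get_time;
      for r in (select * from big) loop
        null;
      end loop;
      dbms_output.put_line('plain select: ' || (dbms_utility.get_time - l_start) || ' cs');

      -- time the pipelined function
      l_start := dbms_utility.get_time;
      for r in (select * from table(rc_get_vars)) loop
        null;
      end loop;
      dbms_output.put_line('pipelined:    ' || (dbms_utility.get_time - l_start) || ' cs');
    end;
    /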

  • Performance Improvment for CTE

    I have the below query, which is used in a view that checks for duplicate rows and selects them for further processing:
    with MyCTETable (DefId, CustId, MyId, TacticId, cnt) as
      (select BaseTable.DefId,
              BaseTable.MyCustId,
              WL.Id,
              BaseTable.TacticId,
              COUNT(*)
       from WorkList WL
       inner join WorkListMeta WLMeta on WL.WorkListMetaId = WLMeta.Id
                                     and WLMeta.MyAnswer = '1'
       inner join BaseTable BaseTable on WL.Id = BaseTable.DefId
                                     and BaseTable.Org = '101'
                                     and BaseTable.State <> 'd'
       group by BaseTable.DefId,
                WL.Id,
                BaseTable.MyCustId,
                BaseTable.TacticId
       having COUNT(*) > 1)
    select list.Id
    from MyCTETable,
         BaseTable as list
    where list.DefId = MyCTETable.DefId
      and list.MyCustId = MyCTETable.CustId
      and list.MyCustId in
          (select distinct MyCustId
           from BaseTable
           where DefId = MyCTETable.DefId
             and Done <> '0'
             and Status <> 'd')
    The above query returns 532,084 records in around 15 minutes.
    I just need your opinion: can any changes/optimizations be made to the query to improve the performance?
    Note: Indexes and statistics are all maintained.
    -Vaibhav Chaudhari

    Here is a method that can (depending on some other factors) sometimes give you a way of eliminating one, maybe both, of your self joins, if you're using them just to identify duplicates. It uses the OVER() clause to give you your COUNT(*) as part of your primary result set.
    This is just an example of the possibility; I didn't look closely enough at your code to say for sure it'll help in your case, but on the surface, I bet you could eliminate one of those self joins.
    ;With X (letter) as (Select 'a' UNION ALL select 'b' UNION ALL select 'b' UNION ALL select 'c')
    , Y as (select *, count(letter) over(partition by letter) as count_this_letter from X)
    Select * from Y where count_this_letter > 1
    You don't have to use the WHERE clause as I did, that was just to show a way to refer to cases where a duplicate existed. 
    EDIT/Clarification: My mocked-up example is just an example. You would tweak your OVER clause to something like "Count(*) Over(Partition by ObDefListPKey, BpaPKey, JobDefid)" to count and group by those fields, while still presenting the full result set.
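    To make the idea concrete against the original query, here is a hedged T-SQL sketch (untested; it keeps only the CTE's filters, so the outer query's Done/Status subquery would still need to be folded in):

    ;WITH Counted AS (
        SELECT  bt.Id,
                COUNT(*) OVER (PARTITION BY bt.DefId, WL.Id,
                               bt.MyCustId, bt.TacticId) AS cnt
        FROM    WorkList WL
        INNER JOIN WorkListMeta WLMeta ON WL.WorkListMetaId = WLMeta.Id
                                      AND WLMeta.MyAnswer = '1'
        INNER JOIN BaseTable bt        ON WL.Id = bt.DefId
                                      AND bt.Org = '101'
                                      AND bt.State <> 'd'
    )
    SELECT Id
    FROM   Counted
    WHERE  cnt > 1;   -- rows that belong to a duplicate group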

  • SEQUENCE Object for Small Tables Only?

    QUOTE from a recent thread: "Long term solution should be to use SEQUENCE with NO CACHE for small tables instead of identity and benefit from performance improvement for large tables."
    Thread:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/cf63d145-7084-4371-bde0-eb3b917c7163/identity-big-jump-100010000-a-feature?forum=transactsql
    How about using SEQUENCE objects for large tables? Thanks.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

    Well Erland, either you calm down your manager (with a martini?) or use NO CACHE.
    QUOTE: "This could cause a sequence to run out of numbers much more quickly than an IDENTITY value. It could also cause managers to become upset that values are missing, in which case they’ll need to simply get over it and accept that
    there will be numbers missing.
    If you need SQL Server to use every possible value, configure a cache setting of NO CACHE. This will cause the sequence to work much like the IDENTITY property. However, it will impact the sequence performance due to the additional metadata writes."
    LINK: Microsoft SQL Server: The Sequencing Solution
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
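    For illustration, a minimal T-SQL sketch of a gap-averse sequence configured with NO CACHE (object names are hypothetical):

    CREATE SEQUENCE dbo.InvoiceSeq
        AS BIGINT
        START WITH 1
        INCREMENT BY 1
        NO CACHE;  -- every value is persisted: slower, but no cached-range gaps after a restart

    -- Usage:
    -- INSERT INTO dbo.Invoice (InvoiceId, CustomerId)
    -- VALUES (NEXT VALUE FOR dbo.InvoiceSeq, @CustomerId);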

  • Performance Optimization for Cubes

    Hi All,
    In our project, we have a daily process chain which refreshes four reporting cubes and takes 8-10 hours to complete. We suggested archiving the historical data to a new cube to improve the performance of the daily load.
    In UAT, the performance of the daily load did not improve after we performed the archiving.
    Kindly suggest performance improvements for the cubes.
    Regards
    Suresh Kumar

    Hi,
    Before loading the cube, you need to delete the indexes, and once the load is complete, recreate them. For this, go to the manage screen of the InfoCube -> Performance tab.
    Also create the DB statistics, from the same Performance tab. This will reduce the load time considerably.
    Also increase the maximum size of the data packet in the InfoPackage: go to the InfoPackage -> Scheduler in the menu bar -> Data S. Default Data Transfer, and increase the size by a reasonable amount (not very high). Also increase the number of data packets per info IDoc; this field is available just after the maximum data packet size in the InfoPackage.
    Hope It Helps,
    Regards,
    Amit
    Edited by: Amit Kr on Sep 4, 2009 5:37 PM

  • Can't we use BINARY SEARCH for TYPE SORTED TABLE? (Performance Improvement)

    Hi Experts!
    I have declared a sorted table type with a NON-UNIQUE key, and want to use BINARY SEARCH in a READ statement. But while using binary search in the READ statement I'm facing an error. The error is:
    "Table LI_MARC is a SORTED TABLE or INDEX TABLE. The BINARY SEARCH
    addition is only allowed for these tables if the key specified is an
    initial part of the table key."
    Please find the details:
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr,werks.
    DATA: li_marc type tt_marc.
    READ TABLE li_marc INTO marc WITH KEY matnr = i_mbew-matnr
                                          werks = i_mbew-bwkey BINARY SEARCH.
    To my understanding, there is no need to mention BINARY SEARCH for a sorted table type. Please let me know, can I use it?

    Hello,
    there is no need to mention BINARY SEARCH for a sorted table type.
    Yes, this is because for SORTED TABLEs the binary search algorithm is used by default for READ TABLE. Although you can use the BINARY SEARCH addition, its use is redundant.
    As for your case, you've defined the KEY fields incorrectly. There shouldn't be any comma(s) between the fields:
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr werks.
    When you define it with commas
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr, werks.
    the result is something like this:
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr.
    TYPES: werks.
    Hence you were getting the syntax error!
    BR,
    Suhas
    PS: As for MARC, you can use the UNIQUE KEY addition because MATNR & WERKS are the key fields in the table.

  • Select query performance improvement - Index on EDIDC table

    Hi Experts,
    I have a scenario where in I have to select data from the table EDIDC. The select query being used is given below.
      SELECT  docnum
              direct
              mestyp
              mescod
              rcvprn
              sndprn
              upddat
              updtim
      INTO CORRESPONDING FIELDS OF TABLE t_edidc
      FROM edidc
      FOR ALL ENTRIES IN t_error_idoc
      WHERE
      upddat GE gv_date1 AND
      upddat LE gv_date2 AND
      updtim GE p_time AND
      status EQ t_error_idoc-status.
    As the volume of data is very high, our client requested that we add an index or use an existing one to improve the performance of the data selection query.
    Questions:
    1. How do we identify the index to be used?
    2. On which fields should the indexing be done to improve the performance (if the available indexes don't cater to our case)?
    3. What will be the impact on the table performance if we create a new index?
    Regards ,
    Raghav

    Questions:
    1. How do we identify the index to be used?
    Generally the index is automatically selected by SAP (the DB optimizer). (You can still mention the index name in your select query by changing the syntax.)
    For your select query, the second index will be chosen automatically by the optimizer (because the select query has 'upddat' and 'updtim' in the sequence before the 'status').
    2. On which fields should the indexing be done to improve the performance (if the available indexes don't cater to our case)?
    Create a new index with MANDT and the 4 fields which are in the WHERE clause, in sequence.
    3. What will be the impact on the table performance if we create a new index?
    Since the index which will be newly created is only the 4th index for the table, there shouldn't be any side effects.
    After creation of the index, check the change in performance of the current program and also of some other programs which have select queries on EDIDC (with various types of WHERE clauses, preferably) to verify that the newly created index does not have a negative impact on their performance. Additionally, if possible, check whether you can avoid INTO CORRESPONDING FIELDS.
    Regards ,
    Seth
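    For reference, at the database level the suggested secondary index would look roughly like the sketch below (in practice you would create it in the ABAP Dictionary via SE11; the index name Z01 is illustrative only):

    CREATE INDEX "EDIDC~Z01" ON edidc (mandt, upddat, updtim, status);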

  • Why using a work area for an internal table is better for performance

    Please tell me
    why using a work area for an internal table is better for performance.

    Hi Vineet ,
      Why would we choose to use an internal table without a header line when it is easier to code one  with a header line?
    There are the following reasons:
    1) Separate internal table work area:
    The work area (staging area) defined for the internal table is not limited to use with just one internal table.
    For example:
    Suppose you want two internal tables for EMPLOYEE – one to contain all records and one to contain only those records where country = ‘USA’. You could create both of these internal tables without header lines and use only one work area to load data into both of them. You would append all records from the work area into the first internal table. You would conditionally append the ‘USA’ records from the same work area into the second internal table.
    2) Performance issues: Using an internal table without a header line is more efficient than one with a header line.
    3) Nested internal tables: If you want to include an internal table within a structure or another internal table, you must use one without a header line.
    If this is helpful, then reward me with points.
    Regards
    Shambhu
