Performance Considerations of ASO

Hi All,
I'm facing an issue when trying to retrieve data from the Excel add-in, where it takes around 5-10 minutes to retrieve the data.
There are a couple of members with MDX formulas (guessing these are the real culprits).
Retrieval at upper levels is pretty fast; if I drill down to level 0 (to the members with MDX formulas), retrieval takes much longer.
Can anyone suggest what can be done to improve the performance?
Thanks.

Thanks for suggesting I take a second look, specifically at the comments.
I did find http://richardfoote.wordpress.com/2007/12/14/constraints-dont-make-them-deferrable-or-novalidate-unless-you-need-to/#comment-10291
Dale Cook - December 18, 2010
Richard, interesting and useful blog. Do you know of any performance implications for making foreign key constraints deferrable?
Richard Foote - December 21, 2010
Hi Dale
Not that I know of although others may have experiences worth mentioning.
oraculix - February 17, 2011
Dale,
are you thinking about negative or positive performance implications?
To give an example with positive impact: We run a data synchronization interface between two databases. After passing through a staging area, all staged data gets merged into existing tables that are modeled with foreign keys.
The FK constraints are deferrable (check constraints, btw, aren't); primarily to import all the data on an "all-or-nothing" basis, but secondly, the run time of the batch process is better with all the FK constraints checked in bulk at commit.
So far we haven’t run into any issues with sub-optimal SQL plans related to deferred FKs.
Regards,
Uwe
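To make the pattern Uwe describes concrete, here is a minimal Oracle SQL sketch of deferring an FK for a bulk load; the table, column and constraint names are made up for illustration:

ALTER TABLE order_items
  ADD CONSTRAINT fk_items_orders
  FOREIGN KEY (order_id) REFERENCES orders (order_id)
  DEFERRABLE INITIALLY IMMEDIATE;

-- In the batch session: postpone FK checking, load everything, and let Oracle
-- validate the deferred constraints in bulk when the transaction commits.
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO orders      SELECT * FROM stage_orders;
INSERT INTO order_items SELECT * FROM stage_order_items;
COMMIT;  -- all-or-nothing: if a deferred FK fails here, the whole transaction rolls back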

Similar Messages

  • XML Embedded in Stored Function - Performance Considerations

    A developer in my company approached us with a question about performance considerations while writing stored procedures or functions that embed XML.
    The primary use for this function would be to provide a quick decision given a set of parameters. The function will use the input parameters along with some simple calculations and DB lookups to come up with an answer. These parameters will be stored in the database. Potentially even more parameters that are currently represented in the xml will be available in the DB and therefore could be looked up by the function.
    My biggest question is whether this way of using XML as an input parameter introduces any performance considerations or concerns for storage/bandwidth, etc.
    Thank you
    Edited by: user8699561 on May 19, 2010 9:24 AM

    user8699561 wrote:
    My biggest question is whether this way of using XML as an input parameter introduces any performance considerations or concerns for storage/bandwidth, etc.

    Storage/bandwidth will be determined by the size of the XML doc, but there are ways to keep that to a minimum (binary XML support in JDBC, for example). Performance overhead in general... eh... "it depends" (on how you set it up)...
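    As a rough illustration of the approach being asked about, here is a hedged PL/SQL sketch of a decision function that takes its parameters as an XMLTYPE; the table, element and column names (decision_rules, /request/amount, and so on) are hypothetical:

    CREATE OR REPLACE FUNCTION get_decision (p_request IN XMLTYPE)
      RETURN VARCHAR2
    IS
      v_amount NUMBER;
      v_region VARCHAR2(10);
      v_limit  NUMBER;
    BEGIN
      -- Extract the scalar values from the XML once, up front.
      SELECT x.amount, x.region
        INTO v_amount, v_region
        FROM XMLTABLE('/request' PASSING p_request
                      COLUMNS amount NUMBER       PATH 'amount',
                              region VARCHAR2(10) PATH 'region') x;

      -- Simple DB lookup; the actual decision is ordinary SQL/PLSQL.
      SELECT approval_limit
        INTO v_limit
        FROM decision_rules
       WHERE region = v_region;

      RETURN CASE WHEN v_amount <= v_limit THEN 'APPROVE' ELSE 'REFER' END;
    END;
    /

    The XML parsing cost is proportional to the document size, which is why keeping the input document small (or eventually reading the parameters straight from the tables) matters for a function that is called frequently.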

  • Physical Database Design Steps & Performance Considerations

    Hi,
    We have an Oracle 9i installation and need help creating a DB.
    We need to know the physical database design steps and performance considerations,
    like
    1 - Technical considerations for the DB as per server capacity. How do we calculate this?
    2 - What will be the best design parameters for the DB?
    Can you please help with how to do that? Any Metalink ID would help to get this information.
    thanks
    kishor

    there is SOOO much to consider . . . .
    Just a FEW things are . . .
    Hardware - What kind of Host is the database going to run on?
    CPU and Memory
    What kind of Storage
    What is the Network like?
    What is the database going to do OLTP or DW?
    Start with your NEEDS and work to fulfill those needs on the budget given.
    Since you say Physical Database Design . . . is your Logical Database Design done?
    Does it fulfill the need of your application?

  • FXMLLoader and fx:root performance consideration

    Hi, All
    While I have a customized table cell (three rows by two columns in a single cell; building it in plain Java code would not be easy to maintain), I am using fx:root FXML. But my table has thousands of TableView cells that need to be rendered. If the cellFactory loads the cell from FXML every time it paints a cell, the I/O usage will be very high, leading to poor rendering performance. Is there a better way to define a customized cell?

    Post a short executable sample which illustrates your performance consideration and maybe somebody will post back with a better performing solution.

  • Performance Consideration when updating NX-OS?

    Is there any performance consideration on the SAN switches we need to monitor prior to updating NX-OS?

    If it's a non-disruptive upgrade, then you never lose FC connectivity. Always read the release notes for the version you are upgrading to, to make sure there are no surprises (hardware support, etc.).
    @dynamoxxx

  • Document size performance considerations

    We are trying to determine the performance implications of different approaches to document storage with BDB. Most of the XML we need to store will contain anywhere between 5000 to 20000 nodes at about 6MB per XML block. Best practices or document size analysis/data breakdown isn't a topic that is well explained on the docs websites. We are working primarily in a single-threaded environment.
    1. Does performance degrade considerably when BDB containers hold documents upwards of 500MB? 2GB? or should documents generally be small? (Assuming Node type storage)
    2. Does anyone know of or have any best practices for data breakdown and storage within BDB?
    And completely unrelated:
    3. Are Environments completely portable when fully moved to different systems?
    Many thanks.

    Hi Taka,
    1. Does performance degrade considerably when BDB containers hold documents upwards of 500MB? 2GB? Or should documents generally be small? (Assuming Node type storage)
    Documents being written need to be parsed (larger docs take longer), inserted into the database and the appropriate indices updated (the more indices, the longer it takes). I think that the best thing to do is to build a prototype of your application, populate a database and benchmark the performance.
    3. Are Environments completely portable when fully moved to different systems?
    There are two issues with copying or moving databases: database page log sequence numbers (LSNs), and database file identification strings.
    Because database pages contain references to the database environment log records (LSNs), databases cannot be copied or moved from one transactional database environment to another without first clearing the LSNs. Note that this is not a concern for non-transactional database environments and applications, and can be ignored if the database is not being used transactionally. Specifically, databases created and written non-transactionally (for example, as part of a bulk load procedure), can be copied or moved into a transactional database environment without resetting the LSNs. The database's LSNs may be reset in one of three ways: the application can call the DB_ENV->lsn_reset method to reset the LSNs in place, or a system administrator can reset the LSNs in place using the -r option to the db_load utility, or by dumping and reloading the database (using the db_dump and db_load utilities).
    Because system file identification information (for example, filenames, device and inode numbers, volume and file IDs, and so on) are not necessarily unique or maintained across system reboots, each Berkeley DB database file contains a unique 20-byte file identification bytestring. When multiple processes or threads open the same database file in Berkeley DB, it is this bytestring that is used to ensure the same underlying pages are updated in the database environment cache, no matter which Berkeley DB handle is used for the operation.
    The database file identification string is not a concern when moving databases, and databases may be moved or renamed without resetting the identification string. However, when copying a database, you must ensure there are never two databases with the same file identification bytestring in the same cache at the same time. Copying databases is further complicated because Berkeley DB caches do not discard cached database pages when database handles are closed. Cached pages are only discarded when the database is removed by calling the DB_ENV->remove or DB->remove methods.
    Bogdan Coman

  • ESS Cats Timesheet Performance Considerations

    Hi Experts,
    We are using ESS Standard Portal Cats timesheet.
    But the client is not happy with the way the timesheet performs.
    For example, to fill in time in one of the cells: when the user places the cursor, it takes 3-4 seconds for the entire row to get selected (orange in color). Once it is selected, the user can fill in that cell.
    Our users' timesheets run to 20-30 lines, so you can imagine how painful it is for a user to complete booking his time.
    Row selection is one of the major pain points in our timesheet.
    Can we do anything about the slow performance of our timesheet? I mean something from the design perspective of Java Web Dynpro.
    We are also open to developing something custom, but we would still end up developing a Java WD application. Would it be worth the effort, considering the standard timesheet is also in Java WD?
    All your thoughts/experiences would be appreciated.
    Thanks a lot

    Hi,
    Maybe it is more of a Basis problem. Talk with the Basis guys and check this link: http://www.sdn.sap.com/irj/scn/shop?rid=/media/uuid/90eea55f-a10a-2a10-fca3-b9e42239c7c3
    Jose
    Regards.

  • Performance considerations for XI

    Hello
    Suppose we have a scenario where around 1000 IDocs are to be pumped into XI (say every day 100 orders are generated and sent to different vendors at a time). What typical performance problems should we anticipate while going ahead with the design, and how should we take care of them?
    A few things that came to my mind were:
    1] Going with message mapping with an increased heap size (since I guess message mapping performs best compared to the other mapping types)
    2] Taking unwanted segments off on the sending side itself by use of the IDoc distribution technique.
    Please provide any suggestions and additions to this.
    Thanks in advance.
    Regards
    rajeev

    1. You may also use XSL mapping in case the mapping is complex.
    2. This is a good way of handling IDocs.
    In addition, you may use the IDoc packaging option:
    /people/michal.krawczyk2/blog/2007/12/02/xipi-sender-idoc-adapter-packaging
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/877c0d53-0801-0010-3bb0-e38d5ecd352c
    Regards,
    Prateek

  • Performance considerations with mapping outside the BPE

    Hello all,
    we have an Integration Process with 2 more or less complex transformation steps (i.e. message merging).
    Since the overall performance is rather poor, we are considering letting the mapping take place outside the IP. This
    means we would no longer have a transformation step within our IP. Instead we would use interface mappings between
    the abstract and the inbound interfaces.
    Does anybody know whether this approach leads to any significant performance advantages or not?
    Regards Gunnar

    The design with mapping outside BPM is always better in terms of performance.
    Regards,
    Prateek

  • Performance considerations between a cross join and inner join

    Hi there,
    What is the performance difference and impact of running a cross-join-based query versus an inner-join-based query?
    Regards and thanks

    Before going to the performance issue, ensure you get the required data and not just some data shown.
    Performance should be compared only between equivalent queries which produce the same output but with different processing.
    Are you sure you get the same output with the cross join as with the inner join?
    If so, post your different queries and then we can discuss which one is better.
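    To illustrate the point about equivalence, a small sketch against the classic EMP/DEPT demo tables (any parent/child pair of tables would do):

    -- Cross join restricted by a WHERE predicate ...
    SELECT e.ename, d.dname
      FROM emp e CROSS JOIN dept d
     WHERE e.deptno = d.deptno;

    -- ... returns the same rows as the equivalent inner join,
    -- so only these two are fair to compare for performance.
    SELECT e.ename, d.dname
      FROM emp e INNER JOIN dept d
        ON e.deptno = d.deptno;

    -- An unrestricted cross join returns COUNT(emp) * COUNT(dept) rows
    -- and is not comparable to either query above.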

  • Any performance considerations with FABridge?

    Hi, I've put together my first Flash / Ajax application using FABridge, which looks like an exciting new technology. So far, my trials suggest I can create any number and type of Flash controls dynamically from Javascript... however, can anyone from Adobe advise what the performance difference is between a dynamically generated Flash application and a compiled .swf file, and whether performance should be similar in a production environment? Do they recommend a limit on the number or type of Flash controls which are created dynamically from Javascript?
    Thanks

    Do you speak Turkish?

  • Performance of Financial Reports against an ASO Cube vs. a BSO Cube

    Hi All,
    I am working on Financial Reporting and Essbase. I wanted to understand and find some relevant documentation that discusses the performance issues or differences when a financial report hits a BSO cube versus an ASO cube.
    1. If there is a difference in the performance for an ASO vs BSO for Financial Reports, where can I find the document or details for it?
    2. If there is a difference in the performance for an ASO vs BSO for Financial Reports, what is the reason for the same?
    3. How can we improve the ASO performance for the reports?
    Any insights for the same, would be highly appreciated.
    Thanks
    Ankur Jain.

    Thanks Sean V,
    It's quite amazing for me as well, and that is why I am trying to drill into any of the FR documentation which might contain something like this. As of now, since I don't have access to the cube, I have nothing to add, nor do I have insight into the cube based on which I could explain more about the cube and outline design.
    But as soon as I get the access, I will bring this back and have a discussion on the forum.
    Thanks for confirming what I had been explaining to my client as well. Though I still need some basis to explain and prove it to them exactly, and possibly a prototype as well.
    Thanks,
    Ankur Jain.

  • Query Performance and Data Loading Performance Issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • About report performance

    Hi Friends,
    I created a report with 45 ref cursors.
    All ref cursors are in a package,
    and the package is on the database side.
    The report is on the report server.
    If I start to run the report through the application,
    the report takes about 50% of CPU for around 40 seconds.
    Is this a report performance problem?
    If I have more ref cursors in the report,
    is there any problem with report performance?
    Can somebody help me?

    One performance improvement I'd try is to avoid multiple similar queries, or even repeats of the same query.
    Is
    from invoice
    where trunc(invoice_date) between :date1 and :date2
    and currency_code = '$' -- sometimes 'euro' and so on
    and ISSUE_PLACE = 'xx'
    and investor_code = :investor_code;
    return(v_comm*5.5137);
    in the main query? Can those formulas be included in / replaced by the main query? Are appropriate indexes created for the joins?
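    As a sketch of what "including the formulas in the main query" could look like, the per-currency lookups might be folded into one pass over INVOICE. The commission column, the aliases and the exact predicates are assumptions; only the filter columns and the 5.5137 factor come from the snippet above:

    SELECT i.investor_code,
           SUM(CASE WHEN i.currency_code = '$'    THEN i.commission END) * 5.5137 AS comm_usd,
           SUM(CASE WHEN i.currency_code = 'euro' THEN i.commission END) * 5.5137 AS comm_eur
      FROM invoice i
     WHERE i.invoice_date >= :date1
       AND i.invoice_date <  :date2 + 1   -- avoids TRUNC() on the column, so an index on invoice_date can still be used
       AND i.issue_place   = 'xx'
     GROUP BY i.investor_code;

    Run once and joined to (or embedded in) the report's main query, this replaces many near-identical single-row function calls with one set-based query.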

  • How to Improve Performance

    Hi all,
    My SELECT statement is like this. SM30 is showing that it has taken the maximum time. So how can I reduce the time spent hitting the DB table, or otherwise improve the performance?
    IF LT_JCDS[] IS NOT INITIAL.
        SELECT  OBJNR
                WKGBTR
                BELNR
                WRTTP
                BEKNZ
                PERIO
                GJAHR
                VERSN
                KOKRS
                VRGNG
                GKOAR
                BUKRS
                REFBZ_FI
                MBGBTR
                FROM COEP
          INTO CORRESPONDING FIELDS OF TABLE LT_COEP
          FOR ALL ENTRIES IN LT_JCDS
          WHERE KOKRS EQ 'DXES'
          AND  OBJNR EQ LT_JCDS-OBJNR
          AND GJAHR <= SO_GJAHR-LOW
          AND  VERSN eq '000'
          AND ( VRGNG EQ 'COIN'  OR VRGNG EQ 'RKU1' OR  VRGNG EQ 'RKL').
          IF SY-SUBRC <> 0.
           MESSAGE  e000(8i) WITH 'DATA NOT FOUND IN "CO Object: Line Items (by Period)"'.
         ENDIF.
      ENDIF.

    Hi
    see these points
    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For All Entries
    •     Select Over More than One Internal Table
    Selection Criteria
    1.     Restrict the data using the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
    2.     Select with selection list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below, which avoids CHECK and selects with a selection list.
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements - Select Queries
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below, which avoids CHECK, selects with a selection list and puts the data in one shot using INTO TABLE.
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements contd… - SQL Interface
    1.     Use column updates instead of single-row updates to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements contd… - Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements contd… - For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered for FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements contd… - Select Over More than One Internal Table
    1.     It's better to use a view instead of nested Select statements.
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V.
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    is much faster than using
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    is faster than using
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    Point # 7
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than using "LOOP-APPEND-ENDLOOP".
    13.   Specify the sort key as restrictively as possible to run the program faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    "SORT ITAB BY K." makes the program run faster as compared to "SORT ITAB."
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
