Aggregates Doubts

Hello All,
We have a cube with aggregates built on it. The daily delta load has been loaded and the rollup is also done, so data is now available in the aggregates, and the aggregate data has been compressed (not the cube data). Now we need to delete one request that was loaded with wrong selections.
My doubt is: if I delete a request in the cube,
a) will the data in the aggregate also be deleted, given that the data in the aggregate has been compressed?
b) will the data in the aggregate also be deleted when the data in the aggregate has not been compressed?
Please let me know, as this is very urgent for me; I need both scenarios covered.
Regards,
Balji

Hi Udo,
Helpful answer, but can you please address my two scenarios (a and b above) individually?
Regards,
Balji

Similar Messages

  • Doubt about aggregates?

    Hi,
    I have a doubt: do aggregates contain request IDs or not?
    And if one cube has 4 aggregates and I want to load the latest request into only 2 of those aggregates, where can we set that up?
    Bye,
    srinu

    Hi,
    No Siva, I don't want to deactivate the other aggregates and do it that way.
    I want a setup where we can load only those 2 aggregates, without activating and deactivating.
    Bye,
    srinu

  • Doubt regarding Aggregates

    Hello Experts,
    I have created an aggregate on an InfoCube in the development environment. When I execute the query and check the 'Usage' tab in 'Maintenance of Aggregates', it is non-zero, which means the aggregate is being used; but this usage counter is not incremented every time I execute the query.
    So my doubt is: if I execute a given query, say, 10 times, should the corresponding aggregate usage counter accordingly increase to 10?
    Because in my case it sometimes gets incremented and sometimes not.
    Thanks and Regards,
    Priyanka

    Hi Priyanka,
    Under 'Execute and Debug', near the bottom, there is an option not to use the cache. You should check that box so that the test will not look in the cache for results. You can confirm whether the cache was used: when the results are retrieved from the OLAP cache, the DB Rows Selected and Rows Transferred counts in RSDDSTAT will be 0, but when the query hits the aggregate or the base cube, you'll have counts in those fields.
    If the query can retrieve results from the OLAP cache, it never actually goes to the database (which is a good thing!).
    The pecking order a query uses is:
    1. look in the local cache for the results
    2. look in the global OLAP cache
    3. look for a suitable aggregate
    4. go to the base cube
    Hope this helps,
    Thanks,
    Ravindra.

  • Doubts in Aggregates?

    Hello all,
    I have a cube and an aggregate on this cube. One request has now been deleted from the cube due to some incorrect data; will the corresponding data in the aggregate also be deleted, or will it remain?
    Regards,
    balji

    Hi,
    1. If you delete data selectively from an InfoCube, the aggregate belonging to this InfoCube is automatically deleted.
    2. If the data in the aggregate is compressed, then the aggregate is deleted and reconstructed, which takes a long time.
    3. If the aggregate is not compressed, then the corresponding data is deleted automatically.
    Hope it's clear.
    shylaja.

  • Error handling in process chain-doubts

    Hi,
    I have some doubts about error handling in process chains.
    1) I have a process 'Load InfoPackage' whose subsequent process is 'Update from PSA'. The Load InfoPackage process failed, so I loaded the InfoPackage manually and repeated the next process, i.e., Update from PSA.
    How do I correct the process chain from here?
    2) I have a process 'Load InfoPackage' whose subsequent process is 'Delete Request in InfoCube'. The Load InfoPackage process failed, so I loaded the InfoPackage manually and repeated the next process, i.e., Delete Request in InfoCube. The chain continued by deleting the right request. How is this possible?
    Please help me, as this is urgent and I have to deal with these issues daily. Any documents on error handling are greatly appreciated.
    My mail id is [email protected]
    Regards,
    Pavan

    Hi Pavan,
    I hope the following links will give you a clear idea about process chains and clear your doubts.
    Business Intelligence Old Forum (Read Only Archive)
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    Errors in the monitoring of process chains can be categorized into 4 different sections in terms of data loading:
    Master data - Full Update
    Master data - Delta Update
    Transaction data - Full Update
    Transaction data - Delta Update
    Load failures can be due to a server shutdown or system maintenance, or to incorrect entries in the OLTP system, in which case you'll have to fix the errors in the PSA and load the data manually.
    Otherwise there can be errors from an attribute change run being locked by some other job, an aggregate rollup failing because an attribute change run is running at the same time, or problems with hierarchies and save-hierarchy steps.
    There can be problems with DataStore activation if the ODS object contains an incorrect request; in that case, you need to delete the incorrect request and reload the data again.
    In case of a transaction data delta failure, you'll have to request a repeat, either manually in the InfoPackage or using the repeat option, if available, on right-clicking the load event.
    For master data delta failures, you need to do a re-init by deleting the previous initialization condition under "initialization options for source systems" in the Scheduler menu, or reschedule the entire chain, because master data generally does not support a repeat of the last delta.
    For common data load errors check this link:
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    Regards,
    Ravikanth.

  • Aggregates on Technical Content

    Hi Gurus,
    I have installed the Technical Content cube (0BWTC_C05), which is for warehouse performance monitoring. Now I have to create a process chain to load data into this InfoCube. As we are not using the MultiProvider, we can't use the standard queries on this cube, so I have created a query to get the necessary data. I also want to create an aggregate for this InfoCube. How do I manage this in the process chain? And my doubt is: if we create the aggregate in the Dev system, will it be carried to the QA and Prd systems? I would be really glad if any of you could send me the process chain used to load this InfoCube, as I am not sure which processes to include.
    Thanks,
    Krish

    Hi Krish,
    You can transport the aggregates created in Dev to the QA and Prod boxes. Also, most projects do allow the creation of aggregates in Prod/QA, so you can create the aggregates there.
    The load steps for 0BWTC_C05 would be like this:
    1. Load all the attributes/texts associated with the cube 0BWTC_C05.
    2. Apply Hierarchy/Attribute Change.
    3. Load the data for 0BWTC_C05.
    Bye,
    Dinesh

  • Doubt in precedence

    Hello,
    I have a doubt about the order in which a SQL query executes.
    When there are joins and WHERE conditions in a SQL query, which is executed first: the joins or the WHERE conditions?
    Regards,
    Roopa.

    You can generate an execution plan and read it to see how Oracle intends to execute a query, something like:
    SELECT STATEMENT  ALL_ROWS
      SORT (AGGREGATE)
        HASH JOIN
          INDEX (FAST FULL SCAN) FEE_INDX1
          INDEX (FAST FULL SCAN) FIE_INDX1
    Generally you go in as far as you can, then out again, so the first one read should be FEE_INDX1, then FIE_INDX1. If other items existed, you would then go down to the next operation and repeat the process until you are done.

  • Correlation condition in Aggregate Pattern

    Hi Team,
    I have just started with the Aggregate Pattern.
    My requirement is to collect 3 pencil messages based on their color (which should be the same for all three messages).
    Below is my pattern:
    Start:
    I have initialized the counter to zero.
    CurrentPencilID is of type string (what should we initialize here?).
    NumberOfPencils is of type integer (what should we initialize here?).
    Mapping:
    I have incremented the loop and mapped the outputs.
    Exclusive choice:
    Here I have given the condition for the loop: if the counter reaches 3, it will send the message and end the activity.
    Else:
    Intermediate step.
    Here the incoming message should match the correlation field, but I am unable to achieve this.
    Please help me with this condition evaluation.
    It should collect the messages of three blue pencils, aggregate all three messages, and send the result to the target.
    Regards,
    Novveen

    Hi Novveen,
    From what I understand, the difficulty is what to initialize CurrentPencilID and NumberOfPencils with. I assume CurrentPencilID stores the color of the pencil (please correct me if that's not so).
    Perhaps you can try something like the condition below in the correlation. It should return true for the first message, i.e., when the counter is 0, accepting whatever color is supplied. In the IntermediateMessage output, you should map the first incoming color into CurrentPencilID, so the first pass initializes the color ID. You also increment the counter.
         IF(Counter=0, true, IncomingMessage/Color=CurrentPencilID)
    On subsequent passes, as the counter is non-zero, IncomingMessage/Color is evaluated against the initial color ID, i.e., CurrentPencilID (IncomingMessage/Color=CurrentPencilID), and the result is returned.
    For each successful match, NumberOfPencils is incremented (a small doubt: why is it used, isn't the counter doing the same thing?).
    -Abhijeet

  • Aggregates in query being used or not.

    Hi Guys,
    I have a doubt and wonder if somebody can clarify it. When we run a query on a cube with aggregates, how do we know whether the query is using the aggregates or not?
    Can we check in ST03?
    Thanks in advance.

    Hi,
    Go to transaction RSRT, give the query name, and choose 'Execute + Debug'. Among the options, select 'Display aggregates found' and check which aggregates are reported.
    Regards,
    Dina

  • Modifying the cube model & others doubts

    Hello Experts,
    I have a few doubts about certain BW 3.0B concepts which I would like to clear up with help from the experts in this very impressive forum.
    A. From which ODS table does the init/delta process load into the connected data target?
    B. When the data model is changed to add a new attribute, must the transfer structure/DataSource be changed as well to include it, just as the communication structure, transfer rules, and update rules are modified?
    C. To add a new characteristic to the model, I deleted the existing cube data and added a new characteristic to this cube, to avoid copying existing queries. A new cube is created with the existing cube as a template, and the existing cube data is loaded into this new cube. Then I adapted the transfer and update rules and the InfoSource. One way to complete the change is to load historical data for only the new characteristic from the source system and then reload the existing data from the new copy cube. The other way is simply to load all history from the source system into the emptied, modified existing cube. I would like clarification on the steps for the first method: I cannot understand how data for only the new characteristic can be loaded, and how the data is then reloaded from the copy cube back into the modified existing cube. Please help clarify this method.
    D. When adding a new key figure, the DataSource is changed accordingly, but it is not stated that for a characteristic change the DataSource also needs to be changed. I would think it needs to be changed for any characteristic or key figure added to the model. Please confirm this.
    E. For aggregates, how many characteristics at maximum can be used as line item dimensions, not including the data package and time/unit dimensions? Is it that only the unit dimension is optional? Please confirm this as well.
    F. Under what condition must the inventory marker not be updated?
    I would be grateful for any useful help.
    Warmest regards,
    Alfonso Spinelli

    Hi Alfonso,
    Some answers for you:
    A. Data is taken from the change log table.
    B. It depends. If you need to get the data for the new attribute from the source system, then yes, the DataSource and transfer structure will change. If the new data is being populated by reading values that are already in BW tables, then you would not need to change the DataSource; you would only change the communication structure and update rules.
    C. The method you are talking about (create a copy of the cube, move data from the original to the copy, then reload back to the original) is used to preserve the data. If you add a new characteristic to an existing dimension, you need to delete the data of the cube, so this method is the workaround: you have parked the data and would not need to reload it from the source. However, if you are adding the new characteristic to a new dimension, you do not need to delete the data. Of course, the new characteristic will have data only from this point onwards; if you need all history data for it, you would need to reload from the source.
    D. The answer is similar to point B. If you have the data for that characteristic and can reuse it in your cube load (by coding in your update rules), then you would not need to change the DataSource.
    E. A line item dimension always has exactly one characteristic, be it in a cube or an aggregate. Dimensions are groupings of characteristics; the P (package), T (time), and U (unit) dimensions are always there, e.g., the time dimension contains characteristics like 0CALMONTH and 0FISCPER.
    Hope this helps...

  • Secondary index on aggregate table and cube

    Hi to all,
    I have a small doubt.
    1. The primary indexes on the F and E tables of a cube are created by the system. How, then, is a secondary index created on the F and E tables of the cube? Is a B* tree index created as the secondary index on the F/E tables when we use the process type 'Create/Delete Index' in a process chain for the cube, or when we do it manually under the Performance tab of the cube?
    2. The primary indexes on the F and E tables of an aggregate are also created by the system. How, then, is a secondary index created on the F and E tables of the aggregate? Is a B* tree index created as the secondary index on the F/E tables of the aggregate when we use the process type 'Create/Delete Index' in a process chain for the cube, or when we do it manually under the Performance tab of the aggregate?
    Regards,
    Pavneet Rana

    Hi,
    You can't handle them through process chains.
    You can do it manually with help from the Basis or ABAP team:
    On the BW server, go to transaction SE11, enter the table name, and display it. On the next screen you can see 'Indexes' on the toolbar; click on it.
    Thanks

  • Query for functioning of Aggregate

    Hello experts,
    Could you please clarify one fundamental question about how an aggregate works?
    I have a cube with the following structure:
    CHAR1  CHAR2  CHAR3  KF1
    I have built an aggregate with:
    CHAR1  CHAR2  KF1
    My doubt is this. For example:
    If I create a query with CHAR1 and CHAR2 in the rows and KF1 in the columns, the data is fetched from the aggregate instead of the cube, thereby reducing the runtime. Is this correct?
    OR
    If I create a query with CHAR1, CHAR2, and CHAR3 in the rows and KF1 in the columns, is the data then read from the cube, with the aggregate never coming into the picture?
    Could you please clarify exactly how the aggregate works?

    Hi,
    Go to transaction RSRT, give the report name, and choose 'Execute + Debug'. A popup screen with multiple checkboxes will appear; select the 'Display aggregates found' option. It will show all the aggregates that the query hits.
    To propose aggregates, follow the procedure below to improve query performance:
    First, execute in RSRT the query for which you want to build aggregates. Check how long it takes to execute, and whether it is really necessary to build an aggregate for this query. To get this information, go to SE11, display the contents of table RSDDSTAT_DM (in BI 7.0) or RSDDSTAT (in BW 3.x), enter today's date as the from/to values, your user name, and the query name, and execute.
    You'll get a list with fields like object name (report name), time read, InfoProvider name (MultiProvider), part provider name (cube), aggregate name, etc. A time read of less than 100,000,000 (100 seconds) is acceptable; if it is more than 100 seconds, it is recommended to create aggregates for that query to increase performance. Keep this time read in mind.
    Again go to RSRT, give the query name, and choose 'Execute + Debug'. In the popup, select the 'Display aggregates found' checkbox and continue. If any aggregates exist for the query, they are displayed first; when you press Continue, it shows which fields are coming from which cube. Copy this list of objects, on which the aggregate can be created, into a text file.
    Then select that cube in RSA1, choose Maintain Aggregates from the context menu, then 'Create by own'. Click the Create Aggregate button at the top left, give a description for the aggregate, and continue. Take the first object from the list, click the Find button in the aggregate creation screen, enter the object name, and search; then drag and drop that object into the aggregate on the right side (drag and drop all the fields like this into the aggregate).
    Activate the aggregate; it will take some time. Once activation finishes, make sure the aggregate is in switched-on mode.
    Execute the query from RSRT again, find the time read, and compare it with the first time read. If it is less than the first time read, you can propose this aggregate to increase the performance of the query.
    I hope this helps. Go through the links below to understand aggregates more clearly:
    http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    Regards,
    Ramki.

  • Doubt regarding FOR ALL ENTRIES and INDEXES

    Hi, I am Aslam,
    and I have a few doubts regarding:
    1) What are the disadvantages of using FOR ALL ENTRIES?
    2) What are the disadvantages of using indexes?
    3) What are the disadvantages of using binary search?
    4) How can you do performance tuning if you have more than one SELECT statement between LOOP and ENDLOOP?
    Please answer these questions, or reply to me at [email protected]
    Thanks in advance,
    bye

    Hi,
    1) What are the disadvantages of using FOR ALL ENTRIES?
    If the driver table is empty, i.e., there is no data for the condition mentioned in the WHERE clause, the statement retrieves all the data from the database table, which we don't want; but we can solve that easily, as sketched below.
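    A minimal sketch of that fix (the names DRIVER_TAB, ZTABLE, and KEYFLD are assumed for illustration): guard the FOR ALL ENTRIES with an emptiness check, just as the example further below does with int_cntry.
    * Only run FOR ALL ENTRIES when the driver table has rows;
    * an empty driver table would make the WHERE clause match every record.
    IF NOT DRIVER_TAB[] IS INITIAL.
      SELECT * FROM ZTABLE INTO TABLE ITAB
        FOR ALL ENTRIES IN DRIVER_TAB
        WHERE KEYFLD = DRIVER_TAB-KEYFLD.
    ENDIF.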
    Ways of Performance Tuning
    1. Selection Criteria
    2. Select Statements
       • Select Queries
       • SQL Interface
       • Aggregate Functions
       • For All Entries
       • Select over more than one internal table
    Selection Criteria
    1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code using a CHECK statement.
    2. Select with a selection list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much better optimized by the code below, which avoids the CHECK and selects with a selection list:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    Select Statements: Select Queries
    1. Avoid nested SELECTs.
    2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement, rather than using APPEND statements.
    3. When a base table has multiple indexes, the WHERE clause should follow the field order of an index, either the primary or a secondary index.
    4. For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
    5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. With nested SELECT loops, the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore use nested SELECT loops only if the selection in the outer loop returns very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much better optimized by the code below, which avoids the CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields are in the same order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables, for example.
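    For illustration, a minimal sketch (the table ZSALES and its secondary index on BUKRS and GJAHR are assumptions for this example, not from the original post):
    * Assumption: ZSALES has a secondary index on (BUKRS, GJAHR).
    * Naming the indexed fields in index order helps the optimizer pick that index.
    SELECT * FROM ZSALES INTO TABLE T_SALES
      WHERE BUKRS = '1000'
        AND GJAHR = '2008'.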
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the WHERE condition, you can use SELECT SINGLE.
    SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.
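    A minimal sketch (the key values are assumed for illustration): SBOOK's full primary key apart from the client (CARRID, CONNID, FLDATE, BOOKID) is supplied, so SELECT SINGLE applies.
    * Every key field is fixed in the WHERE clause, so at most one row can match.
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'
        AND FLDATE = '20080101'
        AND BOOKID = '00000001'.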
    Select Statements contd…: SQL Interface
    1. Use column updates instead of single-row updates to update your database tables.
    2. For all frequently used SELECT statements, try to use an index.
    3. Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases the network load considerably:
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements contd…: Aggregate Functions
    • If you want to find the maximum, minimum, sum, or average value, or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, and COUNT( * ).
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements contd…: For All Entries
    • FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause (see the sketch below).
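    A hedged illustration of the generated clause, reusing the zfligh/int_cntry names from the example further down (the country values 'US' and 'IN' are assumed): with a driver table int_cntry containing 'US' and 'IN', the statement
    Select * from zfligh into table int_fligh
      For all entries in int_cntry
      Where cntry = int_cntry-cntry.
    reaches the database roughly as
    SELECT ... FROM zfligh WHERE ( cntry = 'US' ) OR ( cntry = 'IN' ).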
         The plus:
    • handles large amounts of data
    • allows mixing processing and reading of data
    • fast internal reprocessing of data
    • fast
         The minus:
    • difficult to program/understand
    • memory could be critical (use FREE or PACKAGE SIZE)
    Points that must be considered with FOR ALL ENTRIES:
    • check that data is present in the driver table
    • sort the driver table
    • remove duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements contd…: Select over more than one internal table
    1. It is better to use a view instead of nested SELECT statements.
    2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferable only if all the primary keys are available in the WHERE clause for the joined tables; if the primary keys are not provided in the join, the joining of the tables itself takes time.
    3. Instead of using nested SELECT loops, it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by reading all the data from the view DD01V instead:
    SELECT * FROM DD01V INTO DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1. Table operations should be done using explicit work areas rather than via header lines (see the Point # 1 sketch below).
    2. Always try to use binary search instead of linear search, but don't forget to sort your internal table before that.
    3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4. A binary search using a secondary index takes considerably less time (see the Point # 4 sketch below).
    5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6. Modifying selected components using MODIFY itab ... TRANSPORTING f1 f2 ... accelerates the task of updating a line of an internal table.
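    Point # 1
    A hedged sketch (it assumes ITAB was declared WITH HEADER LINE and has a FLAG component). Using an explicit work area:
    DATA WA LIKE LINE OF ITAB.
    LOOP AT ITAB INTO WA.
      WA-FLAG = 'X'.
      MODIFY ITAB FROM WA.
    ENDLOOP.
    This is clearer and less error-prone than operating on the implicit header line:
    LOOP AT ITAB.
      ITAB-FLAG = 'X'.
      MODIFY ITAB.
    ENDLOOP.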
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    is much faster than using
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    is faster than using
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
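    Point # 4
    A hedged sketch of one way to realize a "secondary index" on a classic internal table (the secondary field NAME is an assumed example): keep a copy sorted by that field and use a binary READ on it instead of a linear scan.
    * ITAB2 serves as a secondary index on NAME.
    ITAB2[] = ITAB[].
    SORT ITAB2 BY NAME.
    READ TABLE ITAB2 INTO WA WITH KEY NAME = 'X' BINARY SEARCH.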
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7. Accessing the table entries directly in a LOOP ... ASSIGNING ... considerably accelerates the task of updating a set of lines of an internal table.
    8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH followed by ADD.
    9. APPEND LINES OF itab1 TO itab2 accelerates the task of appending one table to another considerably, compared to LOOP-APPEND-ENDLOOP.
    10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably, compared to READ-LOOP-DELETE-ENDLOOP.
    11. DELETE itab FROM ... TO ... accelerates the task of deleting a sequence of lines considerably, compared to DO-DELETE-ENDDO.
    Point # 7
    Modifying the lines in place through a field symbol makes the program faster than modifying complete lines through a work area, e.g.:
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12. Copying internal tables using ITAB2[] = ITAB1[] is much faster than LOOP-APPEND-ENDLOOP.
    13. Specify the sort key as restrictively as possible to make the program run faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    "SORT ITAB BY K." makes the program run faster than "SORT ITAB."
    Internal Tables contd…: Hashed and Sorted Tables
    1. For single read access, hashed tables are better optimized than sorted tables.
    2. For partial sequential access, sorted tables are better optimized than hashed tables.
    Point # 1
    Consider the following example, where HTAB is a hashed table and STAB is a sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access than the same code on the sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly, for partial sequential access, STAB runs faster than HTAB:
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Is an aggregate DB index required for query performance?

    Hi,
    On the cube's Manage tab, when I check the aggregate DB index, it shows red. My doubt is: is the DB index required for query performance?
    Should the check status be green?
    Please let me know.
    Regards,
    Kiran

    Hi,
    Yes, it improves report performance.
    Using the Check Indexes button, you can check whether indexes already exist and whether these existing indexes are of the correct type (bitmap indexes).
    Yellow status display: there are indexes of the wrong type.
    Red status display: no indexes exist, or one or more indexes are faulty.
    You can also list missing indexes using transaction DB02, pushbutton 'Missing Indexes'. If a lot of indexes are missing, it can be useful to run the ABAP reports SAP_UPDATE_DBDIFF and SAP_INFOCUBE_INDEXES_REPAIR.
    See SAP Help
    http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    Thanks
    Reddy

  • Report data is from Cube or Aggregates ??

    Hello Friends,
    Can anyone please tell me:
    The data in the BW report comes from a cube on which aggregates are built. Data is loaded into the cube daily, but the aggregates were last rolled up one month ago; for a month, no rollup has taken place.
    When I run the report now, which data will I see: only the rolled-up data, which is one month old, or the freshly updated data in the cube?
    Please explain in detail.
    Thanks in advance,
    Tony

    Hi Tony,
    If a request has not been rolled up, it is not available for reporting; check the rollup flag in the Manage screen of the cube.
    So the data comes from the aggregates if your query can access them (i.e., the drilldown data is available in the aggregates), otherwise from the cube.
    Either way, the data stays in sync between the cube and the aggregates.
    You can also check RSRT ('Display aggregates found') to see whether your query is accessing the aggregates or not.
    Let me know if you have more doubts.
    Gaurav
