Database access in update rules

Hi all,
      I am planning to write an ABAP routine for an update rule, and I want to know whether I can look up data from certain tables. These data are part of my master data.
      Let me explain: I have a field in the communication structure called BookID, and two characteristics in the cube called BookID and AuthorName. Both are characteristic InfoObjects with master data already uploaded.
      I have to write an update rule routine in which I fetch the AuthorName from the BookID; AuthorName is an attribute of BookID.
      Can you tell me if this is possible? If yes, which database tables do I need to look at? Some code examples would really help.
      If it's not possible, please suggest another way to do it. Currently I have AuthorName set to "Master data attrib. of" BookID, but it still doesn't update the records: new records are created with only BookID and no AuthorName.
      Please help.
Thanks,
Satyajit.

hi,
are you going to update AuthorName in the BookID master data, or AuthorName in the cube? Is AuthorName set as a 'navigational' attribute of BookID?
Both can be updated.
Master data has 3 tables (besides other tables): text, attribute and hierarchy.
For master data that is not time-dependent, attributes are stored in /BI0/P[infoobject name without the leading 0], e.g. InfoObject 0CUSTOMER has table /BI0/PCUSTOMER for its attributes. For our own InfoObjects they are stored in /BIC/P[infoobject name], e.g. ZCUSTOMER has table /BIC/PZCUSTOMER.
For the update into the InfoCube, if you did not set it as a navigational attribute, you may use the update method 'Master data attrib. of'.
sample code (start routine):
data : begin of it_authorname occurs 0,
         /bic/zbookid     like /bic/pzbookid-/bic/zbookid,
         /bic/zauthorname like /bic/pzbookid-/bic/zauthorname,
       end of it_authorname.
tables : /bic/pzbookid.
* read the BookID attribute table once for the whole data package
select /bic/zbookid /bic/zauthorname
  from /bic/pzbookid
  into corresponding fields of table it_authorname
  for all entries in data_package
  where /bic/zbookid = data_package-/bic/zbookid
    and objvers = 'A'.
sort it_authorname by /bic/zbookid.
* fill AuthorName in every record (assumes ZAUTHORNAME is part of the
* communication structure; otherwise do this READ in the AuthorName
* update routine and assign RESULT instead)
loop at DATA_PACKAGE.
  read table it_authorname
       with key /bic/zbookid = DATA_PACKAGE-/bic/zbookid binary search.
  if sy-subrc = 0.
    DATA_PACKAGE-/bic/zauthorname = it_authorname-/bic/zauthorname.
    modify DATA_PACKAGE.
  endif.
endloop.

Similar Messages

  • Read from database (Access) and update fields using MS ADODB

    Hello,
    I am trying to get records from database using MS ADODB._connection & ADODB._Recordset objects (from LabVIEW 6)
    I can:
    1. open connection (with ADODB._connection)
    2. write into tables (with ADODB._command)
    I cannot: get records and update fields using ADODB._Connection & ADODB._Recordset.
    I do not know:
    1. how to connect between the connection that was opened and the recordset object
    2. where I can write the SQL text as input to the recordset
    Attached is DataBase.llb with DB_read.vi that displays my steps.
    Thanks.
    Attachments:
    DataBase.llb ‏40 KB

    Try this one, after updating the names for database and the table you want.
    Let me know if it's working.
    p.s.: if you have problems, it could be a different adodb version ... but the sequence of methods is the same
    Attachments:
    EditdatabaseMe.vi ‏57 KB

  • Update rule access to be given

    Hello Experts,
    I have to give one of the users display access to update rules and transfer rules. Which authorization objects need to be maintained in his role? We are currently on SAP BW 3.5. I tried searching in role maintenance but couldn't find any specific object, as there are only values for cube or ODS. I have added the S_DEVELOP object of the ABAP Workbench, but the user is still unable to see update or transfer rules.
    Please suggest.
    Regards,
    Priyanka Joshi

    Hi,
      To find the missing access, have the user who lacks it first go to RSA1 and try to display the transformation/transfer rule; this will throw an authorization error.
      Then run transaction SU53 (/oSU53); it shows the failed authorization check with the required authorization objects.
      Take a screenshot of it and send it to the Basis team; they will provide the required authorization.

  • Coding in Update Rules accessing a Z-Table

    Hi,
    I have a datarecord that looks as follows:
    0MATERIAL:                     1000
    ZMC (Manufacturing Company)      03
    ZFAC (Facility ID)               16
    ZSHIP (Total Shipments)         500 Units
    There is a Z-table in R/3 (ZPLANT) that maps ZMC and ZFAC to R/3 plants (ZPLANT).
    Assuming this Z-table is available in BW, what would the coding in the update rules into a cube look like to do that mapping?
    Thanks again
    Christian

    dear Christian,
    try using a start routine; assume the table in BW is ZPLANT and has fields ZMC and ZFAC ...
    PROGRAM UPDATE_ROUTINE.
    *$*$ begin of global - insert your declaration only below this line  -*
    TABLES: zplant.
    DATA: l_tabix LIKE sy-tabix,
          it_plant LIKE zplant OCCURS 0 WITH HEADER LINE.
    *$*$ begin of routine - insert your code only below this line        -*
    * fill the internal tables "MONITOR" and/or "MONITOR_RECNO",
    * to make monitor entries
    data : it_data_package like DATA_PACKAGE occurs 0 with header line.
    * first select the master data into an internal table, so that we do
    * not have to fetch from the database table for every single record
    * (better performance)
    select * from zplant
      into table it_plant.
    * if table zplant has high-volume data, you can restrict the select
    * with FOR ALL ENTRIES instead:
    *   select * from zplant
    *     into table it_plant
    *     for all entries in data_package
    *     where material = data_package-material.  "adapt key fields
    sort it_plant by zplant.                       "needed for BINARY SEARCH
    loop at data_package.
      l_tabix = sy-tabix.
      move-corresponding DATA_PACKAGE to it_data_package.
    * read with the key field(s) of your Z-table, e.g. zplant;
    * add further key fields to the WITH KEY clause if needed
      read table it_plant
           with key zplant = it_data_package-/bic/zplant
           binary search.
      if sy-subrc = 0.
        it_data_package-/bic/zmc = it_plant-zmc.
        it_data_package-/bic/zfac = it_plant-zfac.
        modify data_package from it_data_package index l_tabix.
      endif.
    endloop.
    take a look at some sample code:
    Re: Is it possible to read a third ODS in update rules between two ODS?
    Re: Update Rules
    hope this helps.

  • Access to update the GRC rule set is limited

    Hello - What is the process (tcode) to see who has access to update the GRC rule set?
    Thanks!

    Hi Sam,
       What is the version of your RAR (CC)? If it is CC 4.0 then you enter the product via tcode and go to rule architect to make changes. If you have CC 5.X then you go through the web browser and go to Rule architect to make changes to the rule set.
    The process to change a rule set is as below:
    1) Create function
    2) Create risk
    3) Create Rule
    Regards,
    Alpesh

  • Access to unmapped field of ODS in the update rule of CUBE

    Hi Gurus,
    I have 8 fields in ODS out of which only 5 are mapped to the characteristics in the CUBE. I have to write a start routine in the update rules of the cube. But, I need to access one field, which is in ODS but not mapped to characteristic in the cube.
    So, is it possible to use that field in the logic of the start routine, or do I have to create a new characteristic in the cube for that 6th field as well?
    Thanks,
    Regards,
    aarthi
    [email protected]

    Hi
    You certainly need to insert the InfoObject in the cube in order to populate it. After this you can choose whether to populate it by direct mapping or by a routine (start routine or transfer routine).
    A classic example of a start routine that populates an InfoObject from master data:
    tables : /bi0/pcostcenter.
    data : lt_data_package like DATA_PACKAGE occurs 0 with header line,
           lt_costcenter   like /bi0/pcostcenter occurs 0 with header line.
    * read the cost center master data once
    select * from /bi0/pcostcenter into table lt_costcenter.
    lt_data_package[] = DATA_PACKAGE[].
    loop at lt_data_package.
    * in a start routine the current record is lt_data_package,
    * not comm_structure
      read table lt_costcenter
           with key costcenter = lt_data_package-costcenter.
      if sy-subrc = 0.
        lt_data_package-/bic/zpcaccgh = lt_costcenter-/bic/zpcaccgh.
        modify lt_data_package.
      endif.
    endloop.
    DATA_PACKAGE[] = lt_data_package[].

  • How to access basic cube data as part of update rule routine?

    Dear colleagues:
    I am developing a routine as part of an update rule to derive a characteristic value feeding an InfoCube. However, I need to pick up a value that is available in another InfoProvider, in this case a basic cube.
    So far I have done this by picking up data from an ODS as part of the update routine, but this is the first time I need to pick up data from a basic cube.
    Does that work the same way as with an ODS? (A read sketch is added at the end of this thread.)
    Best regards
    Waldemar

    Hi Krzysztof Konitz:
    I posted an inquiry about getting that from the PSA. By mistake I posted it once before your reply and again after it; then I read your reply.
    In any case, your reply solved the problem.
    Best regards
    Waldemar
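    For completeness: reading a basic cube is not quite the same as reading an ODS active table, because the fact table only holds dimension IDs; the usual way is the generic InfoProvider read interface. A minimal sketch, assuming function module RSDRI_INFOPROV_READ is available on your release (check the exact interface in SE37) and using placeholder names for the cube (ZSALES), the characteristic (0MATERIAL), the key figure (0AMOUNT) and the result structure (ZS_CUBE_RESULT):
    * lt_data must be typed with a structure containing the requested
    * fields (here a hypothetical structure ZS_CUBE_RESULT with fields
    * MATERIAL and AMOUNT)
      DATA: lt_sfc   TYPE rsdri_th_sfc,
            ls_sfc   TYPE rsdri_s_sfc,
            lt_sfk   TYPE rsdri_th_sfk,
            ls_sfk   TYPE rsdri_s_sfk,
            lt_range TYPE rsdri_t_range,
            lt_data  TYPE STANDARD TABLE OF zs_cube_result,
            l_end    TYPE rs_bool.
    * characteristic(s) to read
      ls_sfc-chanm    = '0MATERIAL'.
      ls_sfc-chaalias = 'MATERIAL'.
      INSERT ls_sfc INTO TABLE lt_sfc.
    * key figure(s) to read
      ls_sfk-kyfnm    = '0AMOUNT'.
      ls_sfk-kyfalias = 'AMOUNT'.
      ls_sfk-aggr     = 'SUM'.
      INSERT ls_sfk INTO TABLE lt_sfk.
      CALL FUNCTION 'RSDRI_INFOPROV_READ'
        EXPORTING
          i_infoprov    = 'ZSALES'
          i_th_sfc      = lt_sfc
          i_th_sfk      = lt_sfk
          i_t_range     = lt_range
        IMPORTING
          e_t_data      = lt_data
          e_end_of_data = l_end
        EXCEPTIONS
          OTHERS        = 1.
      IF sy-subrc = 0.
    *   lt_data now holds one row per material with the summed amount
      ENDIF.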

  • Access master data in update rules

    Hi,
    I am trying to calculate a weight in my update rules. For this I need the product weight and the number of pieces. I try to read the product weight from master data, but the result is always zero. Is it possible to read master data in update rules?
    If I try to calculate the weight in the transfer rules instead, I get a message in the monitor that the transfer rules do not finish. Do you have an idea what to do?
    Regards,
    Gabi

    Hi Robert,
    here is my code in the update rule.
    * fill the internal table "MONITOR", to make monitor entries
      DATA: l_s_errorlog TYPE RSMONITOR.
      data: temp_prod_weight type /BIC/PZL2_P_ID.
    * general values
      RESULT = COMM_STRUCTURE-/BIC/ZL2_POTMW.   "weight
      UNIT = COMM_STRUCTURE-UNIT_OF_WT.
      RETURNCODE = 0.
      ABORT = 0.
    * calculate weight from master data, if it is zero
      IF COMM_STRUCTURE-/BIC/ZL2_POTMW = 0
      or COMM_STRUCTURE-/BIC/ZL2_POTMW = '0'
      or COMM_STRUCTURE-/BIC/ZL2_POTMW is initial.
        select single /BIC/ZL2_NETW UNIT_OF_WT
        from /BIC/PZL2_P_ID
        into corresponding fields of temp_prod_weight
        where /BIC/ZL2_P_ID = COMM_STRUCTURE-/BIC/ZL2_P_ID
        and objvers = 'A'.
        if sy-subrc <> 0.
          l_s_errorlog-MSGTY = 'I'.
          append l_s_errorlog to MONITOR.
          RESULT = 0.
    * abort, if calculation is not possible
          ABORT = 1.
        else.
    *     weight = net weight from master data * number of pieces
          RESULT = temp_prod_weight-/BIC/ZL2_NETW
                 * COMM_STRUCTURE-/BIC/ZL2_POTMQ.
          UNIT = temp_prod_weight-UNIT_OF_WT.
        endif.
      ENDIF.
    * convert unit
      CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
        EXPORTING
          INPUT                      = RESULT
    *     NO_TYPE_CHECK              = 'X'
    *     ROUND_SIGN                 = ' '
          UNIT_IN                    = COMM_STRUCTURE-UNIT_OF_WT
          UNIT_OUT                   = 'KG'
        IMPORTING
    *     ADD_CONST                  =
    *     DECIMALS                   =
    *     DENOMINATOR                =
    *     NUMERATOR                  =
          OUTPUT                     = RESULT
        EXCEPTIONS
          CONVERSION_NOT_FOUND       = 1
          DIVISION_BY_ZERO           = 2
          INPUT_INVALID              = 3
          OUTPUT_INVALID             = 4
          OVERFLOW                   = 5
          TYPE_INVALID               = 6
          UNITS_MISSING              = 7
          UNIT_IN_NOT_FOUND          = 8
          UNIT_OUT_NOT_FOUND         = 9
          OTHERS                     = 10.
      IF SY-SUBRC <> 0.
        l_s_errorlog-MSGTY = 'I'.
        append l_s_errorlog to MONITOR.
        UNIT = COMM_STRUCTURE-UNIT_OF_WT.
        RESULT = COMM_STRUCTURE-/BIC/ZL2_POTMW.
      ELSE.
        UNIT = 'KG'.
      ENDIF.
    I'm loading via the PSA, but I have problems debugging the code. If I set a breakpoint in my code (BREAK-POINT statement), the debugger does not stop at it. (A small workaround sketch is added below.)
    Thanks,
    Gabi
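    A general workaround for the breakpoint problem (not from this thread): a hard BREAK-POINT is ignored when the update runs in a background process, so either simulate the update from the PSA in the monitor (which runs in dialog and honours breakpoints), or temporarily add an endless loop that you catch via SM50 by debugging the running work process. A minimal sketch of the loop variant:
    * temporary debugging aid at the top of the routine - remove it
    * again after debugging!
      DATA: g_stop(1) TYPE c VALUE 'X'.
      DO.
    *   attach the debugger to the work process in SM50, then set
    *   g_stop to space in the debugger to leave the loop
        IF g_stop IS INITIAL.
          EXIT.
        ENDIF.
      ENDDO.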

  • Can you help me interpret the following lines in UPDATE rule?

    Hi,
    Can you help me interpret the following lines in UPDATE rule?
    1. What is the role of "CHANGING RESULT." and "CHANGING lc_local_value."? (A small parameter-passing example is added at the end of this thread.)
    2. What is the role of the CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY', in particular the EXPORTING and IMPORTING parts?
    3. Can I say that "COMM_STRUCTURE-ORDER_VAL" in the subroutine call is passed to "lc_document_value" in "FORM loc_curr_convert", and further passed to "foreign_amount" in the CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'?
    4. Finally, what becomes of my original "Actual Goods receipt quantity" (0GR_QTY) which I am writing the routine for? I don't see anywhere in the code that it is being referred to. Do any of these lines affect the value of 0GR_QTY?
    5. Also, if there are 3 different subroutines in the INCLUDE and I am making the change described in #4 above, how do I know which of the 3 subroutines to call?
    ===============================
    ===============================
    So I am reviewing a routine for "Actual Goods receipt quantity", and the routine has an INCLUDE statement: INCLUDE RS_BCT_MM_UPDATE_RULES.
    The update rule also includes the following properties to run the following subroutine in the Include:
    IF ...
    perFORM LOC_CURR_CONVERT
               USING    COMM_STRUCTURE-ORDER_VAL
                        COMM_STRUCTURE-DOC_DATE
                        COMM_STRUCTURE-ORDER_CURR
                        COMM_STRUCTURE-LOC_CURRCY
                        COMM_STRUCTURE-EXCHG_RATE
               CHANGING RESULT.
    I verified in the INCLUDE (RS_BCT_MM_UPDATE_RULES) and the subroutine is as follows:
    FORM loc_curr_convert
      USING    lc_document_value
               lc_date
               lc_document_currency
               value(lc_local_currency)
               lc_rate
      CHANGING lc_local_value.
    * conversion of lc_rate from floating-point to decimal. Necessary for
    * call of CONVERT_TO_LOCAL_CURRENCY.
      data lc_rate_dec type p decimals 5.
      lc_rate_dec = lc_rate.
      IF lc_document_currency = lc_local_currency
    * no conversion necessary -> Main case 1
        AND NOT ( lc_document_currency IS INITIAL
               OR lc_local_currency IS INITIAL ) .
        lc_local_value = lc_document_value.
      ELSEIF NOT ( lc_document_currency IS INITIAL
      OR lc_local_currency IS INITIAL OR lc_date IS INITIAL ) .
    * conversion necessary with lc_date -> Normally not possible
        CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
          EXPORTING
            date                 = lc_date
            foreign_amount       = lc_document_value
            foreign_currency     = lc_document_currency
            local_currency       = lc_local_currency
            rate                 = lc_rate_dec
          IMPORTING
    *       EXCHANGE_RATE        =
            local_amount         = lc_local_value
          EXCEPTIONS
            NO_RATE_FOUND        = 1
            OVERFLOW             = 2
            NO_FACTORS_FOUND     = 3
            NO_SPREAD_FOUND      = 4
            DERIVED_2_TIMES      = 5.
        IF sy-subrc NE 0.
          message a802 with lc_date lc_document_currency lc_local_currency
                            sy-subrc.
        ENDIF.
      ELSE.
    * if conversion not possible -> assign target values
        lc_local_value = lc_document_value.
        lc_local_currency = lc_document_currency.
      ENDIF.
    ENDFORM.

    Hi,
    Thanks so much for the explanations.
    I just verified again on our dev system and the update rule for 0GR_QTY (Actual goods receipt quantity) include the following:
        perFORM QUANTITY_CONVERT
           USING    COMM_STRUCTURE-CPQUAOU
                    COMM_STRUCTURE-po_UNIT
                    COMM_STRUCTURE-base_uom
                    COMM_STRUCTURE-numerator
                    COMM_STRUCTURE-denomintr
           CHANGING RESULT
    Now, in the include, I also found:
    FORM QUANTITY_CONVERT
      USING    QC_SOURCE_VALUE
               QC_SOURCE_UNIT
               VALUE(QC_TARGET_UNIT)
               QC_UMREZ
               QC_UMREN
      CHANGING QC_TARGET_VALUE.
    i.  Does it mean it actually does quantity conversion?
    ii. If you have access to the INCLUDE, I would appreciate some hints on what the subroutine QUANTITY_CONVERT is doing. It does not appear to say anything about quantity conversion, but it is supposed to be doing something with the parameters passed from the update routine.
    iii. In your response to #5: after all the computation in the INCLUDE, what comes back to the update rule, i.e. what becomes the value of 0GR_QTY?
    Is it "RESULT" in the update rule or "QC_TARGET_VALUE" in the subroutine in the INCLUDE?
    iv. So, I am to create an update rule for 0PSTNG_DATE where the source is BUDAT, and I need to write a routine using the INCLUDE RS_BCT_MM_UPDATE_RULES.
    I looked through the INCLUDE and identified all the subroutines in this INCLUDE as follows:
    QUANTITY_CONVERT
    LOC_CURR_CONVERT
    GET_WEEK
    WEEK_DAY
    QUARTER_DAY
    --Does it mean that to use this subroutine, I can only use the USING parameters of one of these listed subroutines?
    --Also, does it mean that because 0PSTNG_DATE is a date, I can only use one of
    GET_WEEK
    WEEK_DAY
    QUARTER_DAY
    --Or, are there other includes to be used for 0PSTNG_DATE
    Thanks
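    Regarding question 1: a CHANGING parameter is passed back to the caller, so whatever the FORM writes into it ends up in the variable given at PERFORM; that is how RESULT receives the converted value. A tiny self-contained illustration, independent of the include:
    REPORT zdemo_changing_param.

    DATA: gv_quantity TYPE p DECIMALS 2 VALUE '10.50',
          gv_result   TYPE p DECIMALS 2.

    START-OF-SELECTION.
      PERFORM double_it USING gv_quantity CHANGING gv_result.
    * gv_result is now 21.00 - the FORM wrote into the caller's variable,
    * just as loc_curr_convert writes into RESULT via CHANGING.
      WRITE: / gv_result.

    * lv_amount is only read (USING); lv_target is written back to the
    * caller (CHANGING)
    FORM double_it USING    lv_amount TYPE p
                   CHANGING lv_target TYPE p.
      lv_target = lv_amount * 2.
    ENDFORM.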

  • How to calculate the runtime of a routine in an update rule

    I'm facing a performance problem with the loading of an ODS. After some analysis I'm almost sure that the problem is due to two routines (in the update rules) that look into master data tables 200,000 times (the quantity of records loaded every day). But before moving that logic back into the extractor I need to be sure that the runtime will be reduced. Is there any way to measure the amount of time spent by those routines? I have been checking transactions ST03N and STAD but I'm not able to find this information. (A small measurement sketch is added at the end of this thread.)
    Thank you very much in advance.
    My best regards.
    Pablo Mazzucchi

    Hi,
    For the performance problem, you can fetch the lookup data into an internal table in the start routine and then READ it for the fields you need, instead of hitting the database directly for every record.
    This will bring a noticeable reduction in the runtime of the data load.
    Let's wait and see whether someone else has a way to measure the runtime of the routine itself.
    Thanks,
    Arun.
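    On the measurement question itself (standard ABAP, not BW-specific): you can try the ABAP Runtime Analysis (SE30) or the SQL trace (ST05) on a test load, or simply bracket the lookup coding with GET RUN TIME FIELD and log the difference. A minimal sketch of the latter:
    * rough timing of the lookup block inside the routine:
    * GET RUN TIME FIELD returns a microsecond counter, so the
    * difference of two calls is the elapsed time of the block
      DATA: l_t0   TYPE i,
            l_t1   TYPE i,
            l_usec TYPE i.

      GET RUN TIME FIELD l_t0.

    * ... the master data lookup you want to measure ...

      GET RUN TIME FIELD l_t1.
      l_usec = l_t1 - l_t0.
    * make the value visible after the load, e.g. via your own message
    * in the monitor table, or check it in the debugger on a test load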

  • SCCM 2012 SP1 - Offline Servicing failure - Failed to find or access the update binaries to be applied on the image

    Hi there
    Trying to patch a new Windows 7 SP1 image within SCCM 2012 SP1, but it's failing.
    I've searched for information on the failure messages I am seeing, but although there is a LOT of information online concerning Offline Servicing failures, I can't find anything on the errors I am seeing.
    I've tried injecting a single update, five updates and ten updates, no difference, same messages.
    We have McAfee Access Protection disabled, as we know Offline Servicing simply won't work if this is running.
    In the console, in Schedule Update Status for the image I am trying to update, the following message is shown:
    "Failed to find or access the update binaries to be applied on the image."
    That sounds as if the process can't find the actual .cab file for any update I've tried to inject, but I don't know why it wouldn't be able to do that, we have Software Updates configured and the .cab files are on the same server.
    When I looked at the OfflineServicingMgr.log file, I see the following entries:
    Processing image at index 1        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:52:49        8272 (0x2050)
    Mounting image at index 1. Image file='D:\ConfigMgr_OfflineImageServicing\PackageID\W7_Image.wim', MountDirectory='D:\ConfigMgr_OfflineImageServicing\PackageID\ImageMountDir', ImageFileType='WIM', Mode='ReadWrite'        SMS_OFFLINE_SERVICING_MANAGER       
    14/06/2014 14:52:49        8272 (0x2050)
    Image OS information : MajorVersionMS = 6, MinorVersionMS = 1, MajorVersionLS = 7601, MinorVersionLS = 17514        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:53:31       
    8272 (0x2050)
    Failed to find properties of file 4        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:53:31        8272 (0x2050)
    UnMounting Image (Commit Changes = 0) ...        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:53:31        8272 (0x2050)
    Completed processing image package PackageID. Status = Failed        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:54:04        8272 (0x2050)
    Updated history for image package PackageID in the database        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:54:04        8272 (0x2050)
    Schedule processing failed        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:54:04        8272 (0x2050)
    Processing completed for Schedule with ID 16777237        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:54:04        8272 (0x2050)
    STATMSG: ID=7910 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_OFFLINE_SERVICING_MANAGER" SYS=SCCMServer.domain SITE=Site_Code PID=8560 TID=8272 GMTDATE=Sat Jun 14 13:54:04.964 2014 ISTR0="16777237" ISTR1="" ISTR2=""
    ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0        SMS_OFFLINE_SERVICING_MANAGER       
    14/06/2014 14:54:04        8272 (0x2050)
    Schedule processing thread stopped        SMS_OFFLINE_SERVICING_MANAGER        14/06/2014 14:54:05        8272 (0x2050)
    I'm not sure what file "Failed to find properties of file 4" is referring to, whether dism.exe, an update or the image itself, but immediately after this message appears the image is unmounted. After that this message shows:
    "Completed processing image package PackageID. Status = Failed"
    As I say, there's a lot of information available re Offline Servicing but I haven't found anything with these particular messages.
    If anyone has encountered this before, I'd appreciate any information you have.
    Regards,
    John.

    Hi,
    I think a file named 'NO_SMS_ON_DRIVE.SMS' might be causing this issue. If this file is present on any of the logical drives, please give it one more try after deleting it from those drives.
    This file causes the 'smsexec' service to skip the drive when looking for content, so it is worth a try!
    After deleting the file, you also need to restart the 'smsexec' service for the change to take effect. You can also check the registry value below and ensure that all of your logical drives (especially the one where the SCCMContentLib directory resides) are listed
    there
    'HKLM\Software\Microsoft\SMS\DP\ContentLibUsableDrives'
    Hope this will help!
    Cheers | Navdeep Sidhu

  • Need to create a transformation based on Update Rules Logic

    Hi,
    I have an existing complex Update Rule. I need to manually create a Transformation based on this Update Rule logic. The Start Routine of the Update Rule comprises of:
    1) All data declarations in the Global Area
    2) The local coding area consists of various select statements from various R/3 tables that are used later for mapping. It also calculates and stores values in the data package internal table for a few InfoObjects that are not present in the source object but are used to populate data in the target.
    3) Then we have the various one-to-one InfoObject mappings/constants and many individual InfoObject routines that derive their result from the comm_structure.
    I am not very clear as to where each of the above pieces of coding should go in the transformation. I can see four main coding areas in the transformation: the global area, the 2nd global area, the method start_routine and the method inverse_start_routine. I think the global data declarations (point 1 above) should go into the global declaration area of the start routine. Should the local part of the update rule logic (point 2 above), which contains the select statements, go into the 2nd global part or into the method start_routine? Point 3 above, the individual field mappings, will be done through rule groups. Also, can anyone let me know what the method inverse_start_routine is used for?
    Thanks.

    Hi,
    Point 1 you mentioned should be put under the 2nd global declaration part of the start routine.
    Point 2 should be put under the method start_routine (see the placement sketch at the end of this thread).
    Point 3, as you mentioned, can be done by individual field rule mappings.
    Method inverse_start_routine
          This subroutine needs to be implemented only for direct access
          (for better performance) and for the Report/Report Interface
          (drill through)
    I am not very clear about the inverse routine myself, but it is definitely not used for the case you mentioned.
    Hope it helps.
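    A minimal placement sketch for point 2, i.e. what goes inside the generated method start_routine (the lookup table ZTLOOKUP and its fields are made-up names for illustration; note that in transformations the package table is called SOURCE_PACKAGE instead of DATA_PACKAGE):
    * inside METHOD start_routine, below the generated
    * "insert your code only below this line" marker:
      DATA: lt_lookup TYPE STANDARD TABLE OF ztlookup. "hypothetical table

    * former update-rule start routine selects, now reading against
    * SOURCE_PACKAGE instead of DATA_PACKAGE
      IF NOT SOURCE_PACKAGE[] IS INITIAL.
        SELECT * FROM ztlookup
          INTO TABLE lt_lookup
          FOR ALL ENTRIES IN SOURCE_PACKAGE
          WHERE matnr = SOURCE_PACKAGE-material.   "hypothetical fields
      ENDIF.

    * the per-InfoObject logic (point 3) does not go here; it belongs in
    * the field routines of the rule group, which still return their
    * value via RESULT.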

  • Can routine replace "master data attribute of" update rule for performance?

    Hi all,
    We are working on CRM-BW data modeling. We have to look up agent master data (agent level and position) for each transaction record, so right now we are using the 'Master data attribute of' update rule. Can we use a routine instead of 'Master data attribute of', and would it improve the loading performance? We load about 1 lakh (100,000) transaction records, while the agent master data holds about 20,000 agents. My understanding is that for each record in the data package the system goes to the master data table, fetches the agent details and stores them in the cube. Say one agent created 10 transactions: the 'master data attribute of' option will then read the agent master data 10 times, even though we pull the same details for all 10 transactions. If we use a routine, we can pull the agent details into an internal table, remove all duplicates, and read that internal table in the update routine.
    Will this way improve performance?
    let me know if you need further info?
    Thanks in advance.
    Arun Thangaraj

    Hi,
    your thinking is absolutely right!
    I don't recommend using the standard attribute derivation, since it performs a SELECT against the database for EACH record.
    Better to fill a sorted internal table in your start routine with SELECT <fields> FROM <master_data_table> FOR ALL ENTRIES IN DATA_PACKAGE WHERE OBJVERS = 'A' etc.
    In your update routine then perform a READ itab ... BINARY SEARCH (see the sketch at the end of this thread). I believe that you won't be able to go faster...
    hope this helps...
    Olivier.
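    A minimal sketch of the pattern Olivier describes, assuming the agent InfoObject is ZAGENT with attributes ZAGLEVEL and ZPOSITION (placeholder names, so the attribute table would be /BIC/PZAGENT); declare the buffer table in the global part so both the start routine and the individual routines can see it:
    * global part of the update rules
    data : begin of it_agent occurs 0,
             /bic/zagent    like /bic/pzagent-/bic/zagent,
             /bic/zaglevel  like /bic/pzagent-/bic/zaglevel,
             /bic/zposition like /bic/pzagent-/bic/zposition,
           end of it_agent.

    * start routine: buffer the agent attributes once per data package
    select /bic/zagent /bic/zaglevel /bic/zposition
      from /bic/pzagent
      into corresponding fields of table it_agent
      for all entries in data_package
      where /bic/zagent = data_package-/bic/zagent
        and objvers = 'A'.
    sort it_agent by /bic/zagent.

    * individual update routine (e.g. for agent level): read the buffer
    read table it_agent with key /bic/zagent = comm_structure-/bic/zagent
         binary search.
    if sy-subrc = 0.
      result = it_agent-/bic/zaglevel.
    endif.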

  • ABAP assistance - start routine logic in update rule

    I have used an existing update rule and have based my logic on it. The purpose of the rule is to look up customer master data and get a subset of customer numbers from the transaction records, so that a customer number from the transactional data is not updated if it does not match an existing master data customer number.
    The loads are full and we drop the data before we load.
    I have listed the logic below (the number at the front is to be considered as the line number) and a list of open questions that I have thereafter:
    Start routine logic:
    1  DATA: l_index LIKE sy-tabix.
    2  DATA: BEGIN OF ls_customer,
    3        customer TYPE /BI0/OICUSTOMER,
    4        objver TYPE RSOBJVERS,
    5     END OF ls_customer,
    6      lt_customer LIKE TABLE OF ls_customer.
    7  REFRESH: lt_customer.
    8  LOOP AT DATA_PACKAGE.
    all customers from data package
    9    ls_customer-custno = DATA_PACKAGE-custid.
    10  ls_customer-objver = 'A'
    11    APPEND ls_customer TO lt_customer.
    12   ENDLOOP.
    12  SORT lt_customer.
    13  DELETE ADJACENT DUPLICATES FROM lt_customer.
    14   IF NOT lt_customer[] IS INITIAL.
    15    SELECT /BI0/OICUSTOMER RSOBJVERS
    16      FROM /BI0/PCUSTOMER
    17      INTO CORRESPONDING FIELDS OF TABLE lt_customer
    18      FOR ALL ENTRIES IN lt_customer
    19      WHERE ls_customer-custno = DATA_PACKAGE-custid
    20      AND ls_customer-objver = 'A'
    21    SORT lt_customer BY customer ASCENDING                    
    22  ENDIF.
    Questions
    Line
    1 - what is the purpose of this line? What is it that is being declared
    2 - In some code I have seen this line with OCCURS 0 at the end; what does this mean, with and without that term?
    4 - I am using the Data Element name is this correct or should I use the field name?
    3 - 5 here I declare an internal structure/table is that correct?
    6 - here I declare a work area based on the internal table is that correct?
    7 - What would happen if I avoided using the REFRESH statement?
    8 - 12 - Is this syntactically correct? I am trying to get the set of customer numbers that match the master data customers where the master data record is an active version, and then append them to the work area.
    13 - My understanding is this will reduce the number of records in the work area is this correct and needed?
    14 - 22 I am trying to identify my required set of data but feel I am repeating myself, could someone advise?
    Finally what logic would I actually need to write in the key figure object, could I use something like:
    Result = lt_customer.
    Thanks
    Edited by: Niten Shah on Jun 30, 2008 8:06 PM

    1. This line is not required
    2. OCCURS 0 is the OLD way of defining an internal table with that structure.  As it is, it just defines a flat structure.
    3. Data element is usually best
    3-5 Yes
    6. No.  Here you are declaring a table of the type of the flat structure.  Just as the ABAP says!
    7. Nothing.  But by putting this in, you ensure that you know the state of the table (empty) before you start looping through the data package
    8-12. You can tell if it is syntactically correct by pressing Ctrl-F2 when in the editor.  Looks ok.
    13. Ensures your list of customers contains no duplicates.  The code up to this point is building a list of all the unique customers in the data package.
    14-22. Goes to the database and brings back ONLY those customers which are found in the master data.  Looks ok.
    This is a start routine (that's why you've got a data package).  You don't use RESULT; you should update the data package, but this you haven't done.  Double-click on the table name /BI0/PCUSTOMER to get the correct field names.
    So you have to loop through the data package again, and check if the customer in the datapackage is lt_customer.  If it is, fine, otherwise you blank it and report an error, or set an error message or whatever.
    I wouldn't do it like this.  I'd do something like this:
    STATICS: st_customer TYPE HASHED TABLE OF /bi0/oicustomer
                         WITH UNIQUE KEY table_line.
    * st_customer retains its value between calls, so only populate if empty
    * In one run of the infopackage, this will mean you do only one read of
    * the master data, so very efficient.
    IF st_customer IS INITIAL.
      SELECT customer FROM /BI0/PCUSTOMER
                              INTO TABLE st_customer
                              WHERE objvers EQ 'A'. " Only active values
    ENDIF.
    * Go through data package
    LOOP AT DATA_PACKAGE.
    * Check whether the customer exists.
      READ TABLE st_customer TRANSPORTING NO FIELDS
                  WITH TABLE KEY table_line = DATA_PACKAGE-custid.
      CHECK sy-subrc IS NOT INITIAL.
    * If you get here, the customer isn't valid.  So I'm just setting it blank
      CLEAR DATA_PACKAGE-custid.
      MODIFY DATA_PACKAGE. " Updates the datapackage record
    ENDLOOP.
    Even this is not fully optimised, but it's not bad.
    I strongly suggest that you get yourself sent on the basic ABAP programming course if you're going to do a lot of this.  Otherwise, read the ABAP documentation in the help.sap.com, and, from the editor, get the cursor on each ABAP keyword and press F1 to read the ABAP help.
    matt

  • Performance Tuning in case of Database Access

    Hi,
    I am using the following code. The database access for this code is huge; please help me reduce the database access to a minimum. I am using 3 internal tables.
    select partner1 partner2 into (mtab-busi_part, mtab-BUT051_PART)
    from but051.
    Select  name_first name_last PARTNER_GUID into (mtab-bp_first, mtab-bp_last, MTAB-R_PARTNER_GUID)
    From but000 where partner = mtab-busi_part.
    *MTAB-OBJECT_ID = ITAB-OBJECT_ID.
    append mtab.
    endselect.
    ENDSELECT.
    *ENDLOOP.
    loop at mtab.
    CONCATENATE mtab-bp_FIRST mtab-bp_LAST INTO mTAB-bp_full
                                        separated BY SPACE.
    modify mtab.
    endloop.
    loop at mtab.
    if mtab-bp_full = ' '.
      select name_org1 into (mtab-bp_full)
      from but000 where partner = mtab-busi_part.
    append mtab.
      endselect.
    endif.
    modify mtab.
    clear mtab.
    endloop.
    SELECT OBJECT_ID GUID INTO (NTAB-object_id, Ntab-guid)
    FROM CRMD_ORDERADM_H
    for all entries in itab
    where process_type = '1001' and object_id in o_id.
    select single date_1 date_2 from crmv_item_index into (ntab-date_1, ntab-date_2 )
    where object_id = ntab-object_id.
    endselect.
    Select partner_no partner_fct into (Ntab-partner_guid, Ntab-partner_fct)
    from bbp_pdview_bup where guid_hi = Ntab-guid .
    *and partner_fct <> '00000015'
    Select partner name_org1 into (Ntab-partner_no2, Ntab-others)
    from but000 where partner_guid = Ntab-partner_guid.
      if sy-subrc = 0.
      SELECT SINGLE DESCRIPTION FROM CDBC_PARTNER_FT INTO NTAB-DESC
      WHERE PARTNER_FCT = NTAB-PARTNER_FCT AND SPRAS = 'EN'.
      endif.
      SELECT  PAFKT ABTNR PAAUTH
      FROM BUT051 INTO corresponding fields of  nTAB
      WHERE PARTNER2 = ntab-partner_no2  .
           if sy-subrc = 0.
           SELECT single BEZ30 FROM TB913
        INTO CORRESPONDING FIELDS OF nTAB
        WHERE PAFKT = nTAB-PAFKT AND SPRAS = 'E'.
        endif.
         if sy-subrc = 0.
        SELECT single BEZ20 FROM TB915
        INTO CORRESPONDING FIELDS OF nTAB
        WHERE PAAUTH = nTAB-PAAUTH AND SPRAS = 'E'.
        endif.
    *endselect.
            if sy-subrc = 0.
        SELECT  single BEZ20 FROM TB911
        INTO (nTAB-BEZ2)
        WHERE ABTNR = nTAB-ABTNR AND SPRAS = 'E'.
    endif.
    endselect.
    APPEND NTAB.
    *clear ntab.
    *ENDSELECT.
    ENDSELECT.
    *clear ntab.
    ENDSELECT.
    ENDSELECT.
    loop at ntab.
      if ntab-others = ' '.
        select name_first name_last into (ntab-first_name1, ntab-last_name1)
        from but000 where partner = ntab-partner_no2.
        endselect.
        CONCATENATE ntab-FIRST_NAME1 ntab-LAST_NAME1 INTO nTAB-others
                                        separated BY SPACE.
      endif.
      modify ntab.
      clear ntab.
    endloop.
    SORT NTAB BY GUID.
    SELECT OBJECT_ID GUID INTO (KTAB-object_id, Ktab-guid)
    FROM CRMD_ORDERADM_H
    for all entries in itab
    where process_type = '1001' and object_id in o_id.
    Select  partner_no into (Ktab-partner_no1)
    From crmd_order_index where header = Ktab-guid and pft_8 = 'X' and object_type = 'BUS2000126'.
    *endselect.
    Select name_first name_last into (Ktab-first_name, Ktab-last_name)
    From but000 where partner = Ktab-partner_no1.
    *endselect.
    APPEND KTAB.
    ENDSELECT.
    ENDSELECT.
    ENDSELECT.
    loop at Ktab.
    CONCATENATE Ktab-FIRST_NAME Ktab-LAST_NAME INTO KTAB-RESP_EMPLOYEE
                                        separated BY SPACE.
    MODIFY KTAB.
    clear Ktab.
    endloop.
    loop at Ktab.
      if Ktab-RESP_EMPLOYEE = ' '.
        select name_ORG1 into (Ktab-RESP_EMPLOYEE)
        from but000 where partner = Ktab-partner_no1.
        endselect.
        endif.
      modify Ktab.
      clear Ktab.
    endloop.
    SELECT OBJECT_ID GUID INTO (itab-object_id, itab-guid)
    FROM CRMD_ORDERADM_H
    where process_type = '1001' and object_id in o_id.
    append itab.
    endselect.
    LOOP AT iTAB.
       LOOP AT NTAB .
         IF NTAB-object_id = iTAB-object_id .
              itab-date_1 = ntab-date_1.
              ITAB-DESC = NTAB-DESC.
              itab-partner_no2 = NTab-partner_no2.
              itab-partner_fct = ntab-partner_fct.
              itab-bez30 = ntab-bez30.
              itab-bez20 = ntab-bez20.
              itab-bez2 = ntab-bez2.
              itab-others = ntab-others.
              INSERT lines of nTAB INTO ITAB.
              modify itab.
              CLEAR ITAB.
             delete itab where object_id = ' ' and partner_no2 = ' '.
         ENDIF.
      endloop.
    endloop.
    sort itab by OBJECT_ID descending PARTNER_NO2 .
              delete adjacent duplicates from itab comparing partner_no2 object_id.
    sort itab by OBJECT_ID descending PARTNER_NO2 .
    loop at iTab where partner_fct = '00000015'.
      LOOP AT mTAB WHERE BUT051_PART = iTAB-partner_no2 .
       itab-busi_part = mtab-busi_part.
       itab-bp_full = mtab-bp_full.
       ITAB-R_PARTNER_GUID = MTAB-R_PARTNER_GUID.
      INSERT  LINES OF mTAB INTO iTAB.
       modify itab transporting busi_part bp_full r_partner_guid.
        endloop.
    endloop.
    sort itab by busi_part descending partner_no2.
    delete itab where object_id = ' '.
    loop at ITab.
      LOOP AT KTAB.
       IF KTAB-GUID = ITAB-GUID.
        move Ktab-partner_no1 to itab-partner_no1.
       move Ktab-R_partner_GUID to itab-R_partner_GUID.
        move Ktab-RESP_EMPLOYEE to itab-RESP_EMPLOYEE.
        modify itab.
       ENDIF.
    endloop.
    endloop.

    Hi,
    I will give you some tips to reduce the database load; please apply them. A short FOR ALL ENTRIES rework sketch for your first nested select follows at the end of this reply.
    Tips and Tricks
    Optimizing the load of the database
    Using table buffering
         Using buffered tables improves the performance considerably. Note that in some cases a statement can not be used with a buffered table, so when using these statements the buffer will be bypassed. These statements are:
    Select DISTINCT
    ORDER BY / GROUP BY / HAVING clause
    Any WHERE clause that contains a sub query or IS NULL expression
    JOINs
    A SELECT... FOR UPDATE
    If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition in the SELECT clause.
    2.  Use the ABAP SORT Clause Instead of ORDER BY
    The ORDER BY clause is executed on the database server while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
    If you are not sorting by the primary key ( E.g. using the ORDER BY PRIMARY key statement) but are sorting by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets it might not be a feasible solution and you would want to let the database server sort it.
    3.   Avoid the SELECT DISTINCT Statement
    As with the ORDER BY clause it could be better to avoid using SELECT DISTINCT, if some of the fields are not part of an index. Instead use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table, to delete duplicate rows.
    Additional Info
    Use of CONTEXT can highly optimize the code.
          Context can be created using Context Builder (SE33)
    Context advantages: no double fetch (on DEMAND: first call fetches, second gets it from the buffer), better performance (optimal SELECT statement), better overview of the code.
    Use of PARALLEL CURSOR  increases the Performance to a great extent.
    INDEXES help to speed up selection from the database. The primary index is always created automatically in the SAP System. It consists of the primary key fields of the database table. If you cannot use the primary index to determine a selection result (for example, WHERE condition may not contain any primary index fields), you can create a secondary index.
    Optimal number of indexes for a table  You should not create more than five secondary indexes for any one table because:
    Whenever you change table fields that occur in the index, the index itself is also updated.
    The amount of data increases.
    The optimizer has too many chances to make mistakes by using the 'wrong' index.
             If you are using more than one index for a database table, ensure that they do not overlap.
    reward if useful
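    As a concrete sketch of the FOR ALL ENTRIES point applied to your first nested SELECT (BUT051 -> BUT000), keeping your field names: two array fetches plus a sorted READ replace the SELECT ... ENDSELECT nesting, and the remaining nested selects can be reworked the same way.
    * 1) read the relationships in one array fetch
    DATA: BEGIN OF lt_but051 OCCURS 0,
            partner1 LIKE but051-partner1,
            partner2 LIKE but051-partner2,
          END OF lt_but051,
          BEGIN OF lt_but000 OCCURS 0,
            partner      LIKE but000-partner,
            name_first   LIKE but000-name_first,
            name_last    LIKE but000-name_last,
            partner_guid LIKE but000-partner_guid,
          END OF lt_but000.

    SELECT partner1 partner2
      FROM but051
      INTO TABLE lt_but051.

    * 2) read all matching partners in one array fetch instead of one
    *    SELECT ... ENDSELECT per relationship row
    IF NOT lt_but051[] IS INITIAL.
      SELECT partner name_first name_last partner_guid
        FROM but000
        INTO TABLE lt_but000
        FOR ALL ENTRIES IN lt_but051
        WHERE partner = lt_but051-partner1.
      SORT lt_but000 BY partner.
    ENDIF.

    * 3) join in memory with a sorted READ instead of nested selects
    LOOP AT lt_but051.
      READ TABLE lt_but000 WITH KEY partner = lt_but051-partner1
           BINARY SEARCH.
      IF sy-subrc = 0.
        mtab-busi_part      = lt_but051-partner1.
        mtab-but051_part    = lt_but051-partner2.
        mtab-bp_first       = lt_but000-name_first.
        mtab-bp_last        = lt_but000-name_last.
        mtab-r_partner_guid = lt_but000-partner_guid.
        CONCATENATE mtab-bp_first mtab-bp_last INTO mtab-bp_full
          SEPARATED BY space.
        APPEND mtab.
      ENDIF.
    ENDLOOP.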
