User exit / attribute derivation in BCS

Hi,
it is possible to use a user exit and attribute derivation in SEM-BPS.
I am looking for the same in SEM-BCS.
I want to fill an attribute of the transactional cube with variable values, depending on the versions and subversions used. This means that an additional program has to update every document with further information at the moment the document is written to the cube (and the ODS as well).
Does anybody know of a user exit, or any other way to make an update like this?
Thanks in advance
Thomas

Hi,
Good evening and greetings,
Please go through the following OSS Note
Note 536797 - General availability for SEM-BCS 3.1B
Please reward points if found useful
Thanking you
With kindest regards
Ramesh Padmanabhan

Similar Messages

  • Can routine replace "master data attribute of" update rule for performance?

    Hi all,
    We are working on CRM-BW data modeling. We have to look up the agent level and position from agent master data for each transaction record, so we are currently using the "master data attribute of" update rule. Can we use a routine instead of "master data attribute of", and will it improve loading performance? We have to load about 100,000 (1 lakh) transaction records, while the agent master data holds about 20,000 agents. My understanding is that for each record in the data package, the system has to go to the master data table and fetch the agent details to store in the cube. Say one agent created 10 transactions: the "master data attribute of" option will then read the agent master data 10 times, even though we pull the same details for all 10 transactions. If we use a routine, we can pull the agent details into an internal table, remove all duplicates, and then read that internal table in the update routine.
    Will this approach improve performance?
    Let me know if you need further info.
    Thanks in advance.
    Arun Thangaraj

    Hi,
    your thinking is absolutely right!
    I don't recommend using the standard attribute derivation, since it performs a SELECT against the database for EACH record.
    Better to implement a sorted table in your start routine; fill it with SELECT <fields> FROM <master_data_table> FOR ALL ENTRIES IN DATA_PACKAGE WHERE OBJVERS = 'A', etc.
    In your update routine, perform a READ TABLE itab ... BINARY SEARCH. I believe that you won't be able to go faster...
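    For illustration, here is a minimal sketch of this pattern in BW 3.x update rule syntax. All names are assumptions for the example: /BI0/PAGENT and the fields AGENT, AGENT_LEVEL, POSITION stand in for your actual master data table and attributes.
      " Global part of the update rules (shared by all routines).
      TYPES: BEGIN OF ty_agent,
               agent(10)      TYPE c,          " agent key
               agent_level(2) TYPE c,          " attribute: level
               position(8)    TYPE c,          " attribute: position
             END OF ty_agent.
      DATA: gt_agent TYPE SORTED TABLE OF ty_agent
                     WITH UNIQUE KEY agent.

      " Start routine: one SELECT per data package instead of one per record.
      " OBJVERS = 'A' restricts the read to the active master data version,
      " so each agent appears only once and the unique key is not violated.
      REFRESH gt_agent.
      IF NOT DATA_PACKAGE[] IS INITIAL.
        SELECT agent agent_level position
          FROM /bi0/pagent                     " P table of the InfoObject
          INTO TABLE gt_agent
          FOR ALL ENTRIES IN DATA_PACKAGE
          WHERE agent   = DATA_PACKAGE-agent
            AND objvers = 'A'.
      ENDIF.

      " Update routine: a keyed read on a sorted table is an implicit
      " binary search, so there is no database access per record.
      DATA: ls_agent TYPE ty_agent.
      CLEAR ls_agent.
      READ TABLE gt_agent INTO ls_agent
           WITH TABLE KEY agent = COMM_STRUCTURE-agent.
      IF sy-subrc = 0.
        RESULT = ls_agent-agent_level.
      ENDIF.
    Note that on a SORTED TABLE, READ TABLE ... WITH TABLE KEY already performs the binary search; the explicit BINARY SEARCH addition is only needed on a STANDARD TABLE that you have sorted yourself.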
    hope this helps...
    Olivier.

  • Show Display Attributes for LOV in af:query

    ADF 11g (11.1.1.3.0)
    Hi all,
    I'm using an af:query that uses an LOV for a PERSON_ID column, with an EMPLOYEE view as the View Accessor.
    Since the user experience may require the user to search for a particular employee, using either a "Combo Box with List of Values" or an "Input Text with List of Values" makes the most sense, but those UI controls only display the PERSON_ID when returning from the search pop-up. This is in contrast to using a Choice List, which would show, in addition to the original column (PERSON_ID), the selected Display Attributes (e.g. PersonId, FirstName, LastName).
    Is it possible to show the additional Display Attributes when using the "Combo Box with List of Values" or "Input Text with List of Values" controls in an af:query, and if so, how?
    Thanks!

    Hi user,
    Take your base VO and create a transient attribute, or an attribute derived from a reference entity, as the display value (let's say "PerFirstName"). Then create the employee list of values on this attribute and set the return values "FirstName" > "PerFirstName" and "EmployeeId" > "PersonId". Define a view criteria to be used for your af:query component and add "PerFirstName" with the '=' operator (this will render as an LOV).
    Let me know if it helps...
    Barbara

  • SEM-BCS on SAP-BW master data alignment

    We are implementing SEM-BCS release 3.2 on SAP BW 3.2.
    In the SEM InfoProviders, some standard BW InfoObjects are used, such as 0CO_AREA, 0COMPANY, etc.
    These InfoObjects are also used in BW InfoProviders.
    In SEM-BCS, the master data that is already available in these InfoObjects is not visible. The documentation states that the SEM system is leading with respect to master data.
    When we create master data in SEM for these InfoObjects with the same key as used in BW (the same company number), it creates new records in the BW master data table, thereby deleting the attribute values in the BW tables.
    Note 689229 describes some synchronization programs that can be used to synchronize master data from SEM into BW. It is also possible to use these programs to initially synchronize the BW master data into SEM, so you don't have to create the master data in SEM but only have to enhance it.
    Does anyone have experience with this issue, and can you provide me with information on what to do and what not to do?
    Thanks, Wil Heijmans
    Trespa International

    Hi Wil,
    it's right, you'll have to synchronize the master data. When we implemented SEM-BCS 3.1 we had some problems with the reports, so we used the following procedure.
    1. Upload attributes into SEM-BCS.
    2. Put them into the hierarchies if needed.
    3. Synchronize attributes to BW.
    4. Do attribute change run in BW.
    Synchronizing hierarchies was a bit painful, so we decided to use different hierarchies (this helped at some other points too, because we needed additional features of BW hierarchies, like other InfoObjects).
    Synchronizing BW attributes into SEM-BCS attributes was even worse, because the standard BCS attributes could not be filled this way.
    We worked on early SEM 3.1 as one of the ramp-up customers. The programs may have changed in the meantime but the one above has worked for us.
    Best regards
       Dirk

  • Apply QoS profile using RADIUS attributes

    Hi all,
    Anyone delved into the use of RADIUS attributes to apply QoS values (DSCP/802.1p) to wireless users via a WLC?
    With the emergence of ISE and the concept of a shared SSID for several user types I may want to apply QoS profiles by user rather than SSID.
    Do you need to apply the maximum value to the SSID for the attribute-derived value to work?
    Can non-WMM client traffic be marked using this approach?
    Plenty to think about here...
    Any discussion welcome!
    Cheers
    Rob

    You can apply a QoS RADIUS override.
    http://www.cisco.com/en/US/products/ps6307/products_tech_note09186a0080870334.shtml
    Yes, it would be best to set the WLAN's maximum QoS value to the level that you intend to use with the RADIUS override. For example, if you want to apply Platinum QoS for voice clients on the SSID, I would map the WLAN to Platinum QoS.
    I am not sure on the next question. I think you can assign a DSCP/802.1p value to non-WMM clients, but I don't think the non-WMM clients will benefit from it, as they will not tag their traffic, and hence the AP and subsequently the wired network will treat it as best effort (untagged).
    Thanks,

  • Material type not getting displayed in the cube

    Hi,
    In my InfoCube, the material type for one of the materials is not getting displayed.
    When I check the cube content for this material, all the fields are displayed except material type.
    However, it is present in the material master data, from which the update rules populate it into the cube.
    It is displayed correctly for some other materials, so we can't say that the mapping is wrong or that there is a problem with the update rules.
    Can somebody let me know what the reason could be?
    Thanks,
    Jeetu

    Hi Jeetu,
    can you check in your cube whether, for one material, you have entries with AND entries without MATL_TYPE? If this is the case, then you were loading transaction data before the corresponding material master data was available.
    You should adapt your scenario:
    - First, do not use the standard attribute derivation in your update rules: performance is very bad.
    - Implement a start routine that fills an internal table with the material and MATL_TYPE for all materials in your data package.
    - Implement an update routine on MATL_TYPE with a READ on this internal table, and raise ABORT = 4 if MATL_TYPE is initial or the material is not found (see the sketch below).
    To fix your current situation, you'll have to reload your cube, or alternatively just reload the materials with missing MATL_TYPE from the cube itself and selectively delete the records where it is empty.
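    For the start routine and update routine steps, here is a minimal sketch in BW 3.x update rule syntax; all names are illustrative, and gt_mat is assumed to be the global table that the start routine filled with material / MATL_TYPE pairs:
      " Global part: lookup table filled once per data package in the
      " start routine (SELECT ... FOR ALL ENTRIES IN DATA_PACKAGE).
      TYPES: BEGIN OF ty_mat,
               material(18) TYPE c,
               matl_type(4) TYPE c,
             END OF ty_mat.
      DATA: gt_mat TYPE SORTED TABLE OF ty_mat
                   WITH UNIQUE KEY material.

      " Update routine for MATL_TYPE.
      DATA: ls_mat TYPE ty_mat.
      CLEAR ls_mat.
      READ TABLE gt_mat INTO ls_mat
           WITH TABLE KEY material = COMM_STRUCTURE-material.
      IF sy-subrc <> 0 OR ls_mat-matl_type IS INITIAL.
        " Master data is missing: ABORT = 4 cancels the data package,
        " so the load fails visibly instead of writing an empty MATL_TYPE.
        ABORT = 4.
      ELSE.
        RESULT     = ls_mat-matl_type.
        RETURNCODE = 0.
      ENDIF.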
    hope this helps...
    Olivier.

  • Material not getting displayed in Create ASN screen

    Hi All,
    We are not able to see the material on the Create ASN screen in SUS, though it appears when we open the PO in SUS. Also, please let me know which function module in SUS governs the display of materials on the Create ASN screen.
    Thanks,
    Manu

  • HOW TO: Add/manipulate columns for a GridControl

    HOW TO: Add/manipulate columns for a GridControl when the columns (attributes) are from different entity objects.
    This HOWTO describes the basic steps of using attributes from different entity objects for a GridControl.
    One way you can create a GridControl which contains attributes from different entity objects is to create a view object and base it on the entity objects that contain the desired attributes.
    Here are the basic steps:
    1. Create a new view object (or use an existing view object) by selecting File > New from the menu, clicking the Business Components tab, and double-clicking the View Object icon.
    2. In the View Object wizard, change the name to something meaningful.
    3. Select the entity objects you will base your view object on.
    4. Navigate to the attribute screen and select the attributes you would like to include in your view object from each entity object. At this point you can also create a new attribute by clicking the "New" button. The new attribute can be a concatenation of other attributes, derived from a calculation, etc.
    5. In the query panel of the View Object wizard, click "Expert mode" and enter a query statement. You can write complex queries, such as decoding a set of attribute values.
    6. Add your newly created view object to the application module by double-clicking the application module in the navigation pane and selecting your view object from the list.
    7. Create a new row set.
    8. Bind the row set to a query by editing its queryInfo property and selecting your view object and its attributes in the queryInfo pane.
    9. Create a GridControl and bind it to the row set by editing the dataItemName property of the GridControl. Since the GridControl is bound at the row set level, all of the related attributes are automatically added.

    Michael,
    Are you intending this as a commercial solution or a workaround?
    To take an existing equivalent: one would build a view in the database tailored for each grid in an Oracle Forms application, or a separate query layered over tables for each form/grid in a Delphi or Access application? Even if it is ninety-nine percent the same over half a dozen forms/grids?
    And now you've added a whole slew of "slightly different" rowSetInfos to maintain.
    So if you wanted to add a column that needs to appear everywhere... you've just increased the workload multi-fold?
    That would be a management nightmare, wouldn't it? Not to mention yet more performance cost and a slower system?
    Hmmmm..... I'm not sure I like where this is headed... someone needs to do some convincing...

  • Decision summaries in what-if analysis spreadsheets

    We are testing what-if analysis spreadsheets and we love them.
    We have an interesting puzzle. In one test, we have 100 source attributes and another 50 attributes derived from the rules.
    The rules execute and return monetary values. Everything works as advertised.
    However, we want to discover which source attributes were really the key factors in the decision as we look at the spreadsheet. Ideally, we would like to be able to produce 2 additional summary columns in Excel. One summary column would tell us that attributes A, C, and D increased the monetary amount. The other column would tell us that attributes B, E, and F decreased the monetary amount.
    At the moment, the only thing I can think of doing is some complex formulas / reports in Excel itself combined with OPA rules. This is less than ideal.
    I can put logic in OPA, but I am basically trying to recreate my own summary decision report and I still have the problem of concatenating text to create the summaries. I am not sure how to concatenate text into a running description. For instance, I can't put a rule that concatenates text: txtPositiveKeyFactor = txtPositiveKeyFactor & "The age column was a major positive factor", etc... ...or can I?
    Any thoughts?
    BTW, 2nd question: how do you put a carriage return / line feed into a text attribute in OPA?

    To tell whether an attribute increased or decreased the overall value, you'll definitely need to create additional goal attributes for those things, e.g. "the attribute increased the outcome if ...".
    You could then have each of these as additional outcomes, and use Excel to create the concatenated text in an additional (non OPA-populated) column.
    As for your second question, I'm not sure that OPA actually supports CRLF in a text attribute - but I could be wrong.
    Davin

  • Query on EO inheritance

    Hi,
    I have a requirement to create inherited EOs. The process I normally follow is to identify a discriminator attribute and specify the default values for that attribute as required. The problem is that for the same child EO I have two values for the discriminator column. Can anyone suggest how to implement this without creating a separate EO for each default value?
    The ways we discussed to solve this are:
    1) Creating two EOs, one for each value -- does not meet the requirement.
    2) Creating a database view over the underlying table with a DECODE on that column that derives the same value for both defaults, e.g. select col1, col2, decode(column3, 'x', 'p', 'y', 'p', column3) from table1 (where 'x' and 'y' are the default values for the same child EO), and creating the EO based on that view; but the problem here is that the view becomes non-updatable.
    I am not very clear on the process below:
    3) We can create a transient attribute for the EO and derive its value from the two values -- but the problem here is that the EO gets created first and the transient attribute derives its value afterwards, while the discriminator value needs to be identified at the time the EO is created; so there is no use in deriving the transient attribute value after EO creation.
    So can anyone please suggest another way, or a way to use the above methods with improvements?

    Hi,
    did you see the developer guide on this matter?
    http://download.oracle.com/docs/html/B25947_01/bcadveo006.htm#sm0327
    Frank

  • Count rows on SAP BW table thru SAP BODS

    Hi guys.
    I have the following situation: I have two datastores defined, one for SAP BW and one for SQL Server.
    I need to know how many rows a table has before downloading it to the SQL Server database.
    I built a dataflow with only 2 objects, a SQL object and a template table. In the SQL editor I selected the SAP BW datastore and put the statement "Select count(1) from DS_SAPBW.."/BIC/AZTB_XXXXXX"" in the SQL text, but when I executed it, an error appeared with the description "Invalid object".
    I have tried writing the table name in different ways, but all of them generate the same error.
    Can I run a SQL command directly against SAP BW, or is there some restriction on that?
    Another way to count the rows is a dataflow with 3 objects: source, Query, and template table. In the Query I pick one column, apply count() to it, and write the result to the template table in SQL Server; but this solution takes a lot of time when the source table is large, because it reads all the rows before giving the result.
    Can anybody help me?
    That is the first situation. Another thing: when I save the dataflow, the database type disappears when I re-enter that SQL object. When I execute it, the following error appears.
    Please help me. I need to know how to execute this SQL query in the correct way.
    Thanks.

  • Master Data Management in JAPAN ECC 6 Deployment

    Deployment of SAP ECC 6.0 in Japan.
    We are located in the USA and are in the process of deploying FI, SD, and MM in our Japan branch in Tokyo. We are trying to find other companies that have done a similar deployment in Japan and learn from their experiences as to what to do and what not to do. Moreover, we want to understand how they resolved the language settings in master data for the customer master, especially addresses in Kanji and English, and how they arranged for standard reports to pick up the English and/or Japanese (Kanji) address. I hope I was clear as to what we are trying to find out.
    My apologies if I posted this in the wrong forum.
    Thank you
    Walid

    Hi and welcome to SDN!
    You are absolutely right, you cannot see (and load through BW) all attributes of the BCS FS items.
    You may use a flexible upload for this. Here is an example of how to load both master data and hierarchies of FS items:
    /people/sap.user72/blog/2006/07/21/how-to-upload-hierarchy-and-master-data-of-fs-items-in-sap-sem-bcs-40-by-using-flexible-upload-method
    Best regards,
    Eugene

  • Rename User View

    Hi,
    I'm going to adopt user renames, using the Rename User view, and automatic attribute derivation (like email addresses) is a must. Everything's working fine except changing attributes in the waveset namespace (email, organization).
    Here's the rule:
          <Rule name='Set Attributes for Rename'>
            <RuleArgument name='renameView' value='$(renameView)'/>
            <dolist name='resource'>
              <ref>renameView.toRename</ref>
              <set>
                <concat>
                  <s>renameView.resourceAccounts.currentResourceAccounts[</s>
                  <ref>resource</ref>
                  <s>].selected</s>
                </concat>
                <s>true</s>
              </set>
              <cond>
                <ref>user.newLogin</ref>
                <block>
                  <set name='renameView.newAccountId'>
                    <ref>user.newLogin</ref>
                  </set>
                  <set>
                    <concat>
                      <s>renameView.accounts[</s>
                      <ref>resource</ref>
                      <s>].login</s>
                    </concat>
                    <ref>user.newLogin</ref>
                  </set>
                  <set>
                    <concat>
                      <s>renameView.accounts[</s>
                      <ref>resource</ref>
                      <s>].accountId</s>
                    </concat>
                    <ref>user.newLogin</ref>
                  </set>
                  <set>
                    <concat>
                      <s>renameView.accounts[</s>
                      <ref>resource</ref>
                      <s>].email</s>
                    </concat>
                    <concat>
                      <ref>user.newLogin</ref>
                      <s>@domain.com</s>
                    </concat>
                  </set>
                </block>
              </cond>
            </dolist>
      </Rule>
    The problem is that the email address is only changed on accounts like Active Directory (account[AD].email), but the old value is still in waveset.email and global.email (because there isn't an account[Lighthouse].email attribute).
    Is there a way to change these values too? It seems that it's no use inserting lines into the rule about waveset and global attributes.
    Thanks,
    Adam

    Check the Workflows, Forms, and Views document. It's detailed there.
    Note that IDM does not support domain moves... only OU moves within the same domain.
    Dana Reed
    AegisUSA
    Denver, Co
    [email protected]
    "We are the Identity Company"

  • Repost based on Char Relationships help !

    Hi,
    I need help with the above function please. I'm doing this in IP but I think it would be equally relevant in BPS.
    I have 2 derivations:
    1. VAT is derived as it is an attribute of contract (a characteristic relationship of type attribute).
    2. Cash date is derived in an exit from event date and debtor days (a characteristic relationship of type exit).
    When doing a repost based on characteristic relationships (after changing master data attributes), the VAT derives properly, but I get an error on the cash date derivation.
    I'm not sure if it's important, but the VAT flag is actually in the cube, whereas debtor days is simply looked up from the contract master and then used to calculate the cash date in the exit.
    The characteristic relationship does work properly when simply planning data into the cube, but not on the repost function.
    Any help/ideas would be appreciated.
    Cheers
    sue

    Hi,
    No, it's not possible. If you put CALWEEK as a target in the derivation, the CHECK on the derivation will fail, as you cannot derive generic time characteristics.
    However, I overcame the problem by ticking CALWEEK in the Repost with Characteristic Relationships function; i.e. even though the function did not suggest that it could be derived, ticking it did not lead to an error, and this way the function worked.
    Just as further FYI, I've found that exit derivations and attribute derivations cannot be achieved in a single function; they have to be created separately.
    Regards
    Sue

  • Link between Financial Statement Item and Break-Down-Category

    Dear all,
    we are trying to create our own data entry functionality in SAP NetWeaver for our BCS data.
    Therefore we need to implement coding for a validation (IP coding). Do you know in which table we can find the link between the financial statement item and the breakdown category?
    The customizing path is:
    TA UCWB > Master Data > Items > Item (select item) > field Breakdown Category (field name: ITGRP).
    We need the technical link (table/structure) so that we can create a validation based on the financial statement item and the breakdown category.
    Thanks!
    XmchX

    The tables themselves are generated objects that are structured based on how you've defined your master data (custom attributes, etc). 
    You can probably find them through table UGMD2011 by entering the field name of your FS item (I think the delivered field is /1FB/CS_ITEM).
    The following snippet, from a function module I wrote to check movement type restrictions (min/max selection set), might be useful. I previously created a custom task for posting custom documents (ones that could not be defined with standard reclassifications or IU eliminations).
    I've never worked with the portal, but presumably you can create an RFC interface and replicate the selections which normally come from the cons monitor (cons area, GC, FYV, period, year, cons chart, plus the additional fields fixed in the cons area) on a portal page and inherit them that way. I actually thought about creating an Excel workbook that would send postings directly into BCS... just never got the time, and it was never a "requirement", as the volume of manual entries has been pretty low (or at least was always planned to be low).
      CONSTANTS:
      c_area            TYPE uc_area      VALUE 'US',              " Consolidation Area
      c_chart           type uc_value     value 'US',              " Chart of Accounts
      c_item_fieldname  TYPE uc_fieldname VALUE '/1FB/CS_ITEM',    " FS Item Fieldname
      c_chart_fieldname TYPE uc_fieldname VALUE '/1FB/CS_CHART',   " Chart of Accounts Fieldname
      c_mt_fieldname    TYPE uc_fieldname VALUE '/1FB/MOVE_TYPE'.  " Movement Type Fieldname
      TYPES:
    " Item attributes required from BCS databasis
      BEGIN OF s_item,
        /1fb/cs_chart TYPE /bi0/oics_chart,
        /1fb/cs_item TYPE /bi0/oics_item,
        txtmi TYPE uc_txtmi,
        itgrp TYPE uc_itgrp,
        itgrp_max_set TYPE  uc_itgrp_max_set,
      END OF s_item,
      t_item TYPE HASHED TABLE OF s_item WITH UNIQUE KEY /1fb/cs_chart /1fb/cs_item,
    " Breakdown Category attributes required from BCS databasis
      BEGIN OF s_itgrp,
        itgrp TYPE uc_itgrp,
        selid_max TYPE uc_selid,
        txtmi TYPE uc_txtmi,
      END OF s_itgrp,
      t_itgrp TYPE HASHED TABLE OF s_itgrp WITH UNIQUE KEY itgrp,
    " Movement Type and Description
      BEGIN OF s_mt_txt,
        move_type TYPE /bi0/oimove_type,
        txtsh TYPE uc_txtsh,
      END OF s_mt_txt,
      t_mt_txt TYPE HASHED TABLE OF s_mt_txt WITH UNIQUE KEY move_type.
      DATA:
    " BCS Interfaces
      do_factory        TYPE REF TO if_ug_md_factory,
      do_char           TYPE REF TO if_ug_md_char,
      do_value          TYPE REF TO if_ug_md_char_value,
      do_area           TYPE REF TO if_uc_area,
      do_model          TYPE REF TO if_uc_model,
      do_context        TYPE REF TO if_uc_context,
      do_char_itgrp     TYPE REF TO if_ug_md_char,
      lt_field_val      TYPE ugmd_ts_field_val,
      ls_field_val      TYPE ugmd_s_field_val,
      lt_value          TYPE uc0_ts_value,
      ls_value          TYPE uc0_s_value,
      lt_itgrp          TYPE t_itgrp,
      ls_itgrp          TYPE s_itgrp,
    " Lookup tables
      lt_sel            TYPE ugmd_ts_sel,
      ls_sel            TYPE ugmd_s_sel,
      lt_fieldname      TYPE ugmd_ts_fieldname,
      ls_fieldname      TYPE fieldname,
      ls_item           TYPE s_item,
      ls_item2          TYPE s_item,
      lt_item           TYPE t_item,
      lt_mt_txt         TYPE t_mt_txt,
      ls_mt_txt         TYPE s_mt_txt,
      l_tabname         TYPE tabname,
      l_itgrp           TYPE uc_itgrp,
    " Output tables
      lt_lookup_bdc     TYPE STANDARD TABLE OF zsbcs_movement_type_lookup_bdc,
      lt_lookup         TYPE STANDARD TABLE OF zsbcs_movement_type_lookup,
      ls_lookup_bdc     TYPE zsbcs_movement_type_lookup_bdc,
      ls_lookup         TYPE zsbcs_movement_type_lookup.
    " Initialize BCS interfaces
      IF do_area IS INITIAL.
        CALL METHOD cl_uc_area=>if_uc_area~get_area_instance
          EXPORTING
            i_area  = c_area
          IMPORTING
            eo_area = do_area.
      ENDIF.
      IF do_model IS INITIAL.
        CALL METHOD do_area->get_model
          IMPORTING
            eo_model = do_model.
      ENDIF.
      IF do_factory IS NOT BOUND.
        CALL METHOD cl_uc_area=>if_uc_area~get_md_factory
          EXPORTING
            i_area        = c_area
          IMPORTING
            eo_md_factory = do_factory.
      ENDIF.
    " Create reference to FS Item characteristic
      do_char = do_factory->get_char_instance(
          i_fieldname = c_item_fieldname ).
    " Define restrictions on FS Item for reading values
    " Always filter on chart of accounts
      ls_sel-fieldname = c_chart_fieldname.
      ls_sel-sign      = uc00_cs_ra-sign_i.
      ls_sel-option    = uc00_cs_ra-option_eq.
      ls_sel-low       = c_chart.
      INSERT ls_sel INTO TABLE lt_sel.
    " filter on FS Item if provided as input parameter
      IF NOT i_item IS INITIAL.
        ls_sel-fieldname = c_item_fieldname.
        ls_sel-sign      = uc00_cs_ra-sign_i.
        ls_sel-option    = uc00_cs_ra-option_eq.
        ls_sel-low       = i_item.
        INSERT ls_sel INTO TABLE lt_sel.
      ENDIF.
    " Get FS Item values
      CALL METHOD do_char->read_value
        EXPORTING
          it_sel   = lt_sel
        IMPORTING
          et_value = lt_item.
    " Exit if invalid item was passed as parameter
      if lines( lt_item ) = 0.
        raise no_data.
      endif.
    " Additional logic for next finding ITGRP details, etc.........
    Would recommend debugging these two methods to get an idea of what BCS is doing:
    cl_uc_tx_data_change->if_uc_tx_data_change~analyze_and_add_data
    cl_uc_tx_data_change->save_data
    Anyway, hope that helps.  It took me a lot of debugging to figure out exactly which objects were being used by the posting process, getting the refresh right (delete/reverse existing documents based on special version customizing), etc.  I wouldn't recommend slamming them into the cube and ODS directly with a RSDRI_CUBE_WRITE_PACKAGE or anything...
    - Chris
