Automatic mapping of fields in the transformation between DSO and InfoSource.

Hello BI Experts,
When I migrate the update rules between a DSO and an InfoSource into a transformation, the log reports that the routines were copied successfully and that the transformation was created. But when I open the transformation to activate it, there is no mapping between the fields, and no routines appear.
Please suggest a way out of this.
Points will be assigned.
Regards,
Vinayak

VGM,
By the way, how are you migrating the routines from 3.x into BI 7? Moreover, in BI 7 the old update routine has to become either a local (field-level) routine, a start routine, or an end routine. You need to decide where it will be applicable, then copy the old code, modify it, and paste it in. If you are creating a start or end routine to populate data for some object, remember that the object should map to at least one object from the InfoSource.
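For illustration only, here is a minimal sketch of what the body of a BI 7 end routine looks like once the old update rule logic is moved in. RESULT_PACKAGE and <result_fields> are provided by the generated routine class and replace the old COMM_STRUCTURE/RESULT pair; the field /BIC/ZSTATUS is just a placeholder:

* End routine sketch: the old 3.x routine logic goes inside the loop.
* <result_fields> plays the role of the old RESULT structure.
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  IF <result_fields>-/bic/zstatus IS INITIAL.
    <result_fields>-/bic/zstatus = 'OPEN'.  "placeholder logic
  ENDIF.
ENDLOOP.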
Hope this helps
Regards
Gattu

Similar Messages

  • What is the difference between DSO and DTP in BI 7.0

    Hi Gurus,
    What is the difference between DSO and DTP in BI 7.0, and how does the data come from R/3 to BI 7.0? Can you describe this?
    Points will be assigned.
    Thanks & Regards,
    Reddy.

    Hi,
    The data will be replicated in the same way as in 3.5:
    activate and transport the same DataSource in BW, and replicate it in BW from R/3.
    First you need to know the difference between 3.5 and 7.0; for that, check the docs below:
    http://help.sap.com/saphelp_nw04s/helpdata/en/a4/1be541f321c717e10000000a155106/content.htm
    blogs:
    /people/sap.user72/blog/2004/11/01/sap-bi-versus-sap-bw-what146s-in-a-name
    Re: How to identify Header, Item and Schedule item level data sources?
    For Transformations in BI:
    http://help.sap.com/saphelp_nw70/helpdata/en/33/045741c0c28447e10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/f8/7913426e48db2ce10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/a9/497f42d540d665e10000000a155106/frameset.htm
    For DTP:
    http://help.sap.com/saphelp_nw70/helpdata/en/20/a894ed07e75648ba5cf7c876430589/frameset.htm
    For DSO (DataStore objects):
    http://help.sap.com/saphelp_nw70/helpdata/en/f9/45503c242b4a67e10000000a114084/frameset.htm
    Reg
    Pra

  • What is the difference between DSO and InfoCube

    Hello,
             Kindly tell me: what is the difference between a DSO and an InfoCube?
    And please tell me how to decide in which case to use a DSO and in which case to use an InfoCube.

    Hi ,
    DataStore object serves as a storage location for consolidated and cleansed transaction data or master data on a document (atomic) level.
    This data can be evaluated using a BEx query.
    A DataStore object contains key fields (for example, document number/item) and data fields that, in addition to key figures, can also contain character fields (for example, order status, customer). The data from a DataStore object can be updated with a delta update into InfoCubes and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems.
    Unlike multidimensional data storage using InfoCubes, the data in DataStore objects is stored in transparent, flat database tables. The system does not create fact tables or dimension tables.
    Use
    The cumulative update of key figures is supported for DataStore objects, just as it is for InfoCubes, but with DataStore objects it is also possible to overwrite data fields. This is particularly important with document-related structures: if documents are changed in the source system, these changes include both numeric fields, such as the order quantity, and non-numeric fields, such as the ship-to party, status, and delivery date. To reproduce these changes in the DataStore objects in the BI system, you have to overwrite the relevant fields in the DataStore objects and set them to the current value. Furthermore, you can use the overwrite together with the existing change log to make a source delta-enabled. This means that the delta that is further updated to the InfoCubes, for example, is calculated from two successive after-images. For example, if an order quantity of 10 is first loaded and later changed to 15, the change log stores the before-image -10 and the after-image +15, so a connected InfoCube receives a net delta of +5.
    An InfoCube describes (from an analysis point of view) a self-contained dataset, for example, for a business-orientated area. You analyze this dataset in a BEx query.
    An InfoCube is a set of relational tables arranged according to the star schema: A large fact table in the middle surrounded by several dimension tables.
    Use
    InfoCubes are filled with data from one or more InfoSources or other InfoProviders. They are available as InfoProviders for analysis and reporting purposes.
    Structure
    The data is stored physically in an InfoCube. It consists of a number of InfoObjects that are filled with data from staging. It has the structure of a star schema.
    The real-time characteristic can be assigned to an InfoCube. Real-time InfoCubes are used differently to standard InfoCubes.
    ODS versus InfoCubes in a typical project scenario
    ODS
    Why do we use ODS?
    Why are PSA and ODS necessary?
    Hope this helps,
    Regards,
    CSM Reddy

  • I cannot transform between Arabic and English language in writing

    I cannot switch between Arabic and English when writing; every time I have to restart my browser. I need a solution.

    If you are talking about typing text then make sure that the language toolbar is visible.
    * http://windows.microsoft.com/en-US/windows7/The-Language-bar-overview
    In Firefox you can switch the bidi text direction via Ctrl+Shift+X.

  • Transformation between DS and DSO missing Non-cumulative KYF

    Hi SDN,
    I installed the business content for CML, and am working with 0CML_DELTA_CAP datasource.  I would like to create a transformation between this datasource and DSO 0CML_O03.
    By default, when you install the CML business content, the datasources use the 3.x dataflow concept (transfer rules, infosources, and update rules).
    I would like to use the 7.x dataflow concept and created a transformation between the 0CML_DELTA_CAP DataSource and the 0CML_O03 DSO. In this transformation, the fields 0CML_B330, 0CML_B340, 0CML_B360, and 0CML_B380 are missing in the data target. These key figures are non-cumulative with a value-change key figure (0CML_B330 uses 0CML_B331 as its value change). The value-change key figures show up in the transformation, but the non-cumulative key figures do not.
    Does anyone have any idea why the non-cumulative KYF are not showing up in the transformation?
    Thanks,

    Hi Brendon,
    The non-cumulative key figures themselves are not mapped in the transformation; only the 'inflow' and 'outflow' key figures for each non-cumulative key figure are mapped.
    You can check the properties of the non-cumulative KF in RSD1, where you will find the corresponding inflow and outflow KFs. If both of these are mapped in the transformation, the non-cumulative KF calculates its value in any report that includes it as:
    non-cumulative KF value = inflow value - outflow value.
    For example, with an inflow of 120 and an outflow of 20, the report shows a non-cumulative value of 100.
    Hope this helps.
    Regards,
    Abhishek.

  • How to map the unit field in case of DSO and InfoCube

    Dear Experts,
    I have an issue; please help me solve it.
    I have a DSO as provider,
    and I have to map the transformations between the DataSource and the DSO.
    In the generic DataSource I have unit fields like BASME and MEINS (quantity units) and STWAE (currency field),
    and normal quantity fields like KWMNG and OAUME (quantity-related) and OAUWE (value-related).
    In the DSO the data fields are key figure InfoObjects like 0QUANTITY (which has 0UNIT as unit of measure) and some other key figures which have their respective units of measure in the InfoObject definition.
    So please tell me how to map the quantity, amount, and unit fields to the key figures that we have.
    (How does it work for DSO and for InfoCube? Is there any difference?)
    Edited by: AnjunathNaidu on Jan 18, 2012 1:20 PM

    Navasamol,
    If it works, will you please tell me what the difference is between the transformations from DataSource to DSO,
    from DataSource to InfoCube, and from DSO to InfoCube (or cube to cube)?
    I have seen the quantity fields and their respective unit fields mapped directly to a key figure InfoObject
    in the case of an InfoCube, and it works fine.
    If only 1:1 mapping is allowed between the DSO data field key figures and their respective unit-of-measure characteristics,
    why is there this difference between DSO and InfoCube? Can anyone explain it to me in detail?
    Expecting your valuable suggestions.
    Thanks & Regards,
    Anjunath Naidu
    Edited by: AnjunathNaidu on Jan 18, 2012 4:05 PM

  • Delta between DSO and Cube

    Hi All,
    I have been working on a scenario where we have to realign the org structure information of some sales orders. We have existing sales order data in a DSO, which feeds a cube. As part of the realignment, some master data attributes, more specifically customer master attributes, will be changed in R/3 for those sales orders.
    These attributes are part of the transaction data load into the DSO from the source; since this transaction load takes time, to speed up the realignment in BW I have used a routine to look up the master data attributes from 0CUSTOMER in BW.
    So before BW pulls the realigned data (sales orders) from R/3, I will run a self-loop on the DSO in which it does a lookup on the 0CUSTOMER master data in BW to get the updated customer attributes (of course, the 0CUSTOMER load will happen before we do any realignment of the source data).
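    As an illustration, here is a sketch of how such a lookup could look in the end routine of the self-loop transformation. The attribute CUST_GROUP and the target field names are assumptions, and in a real routine you would rather read the needed master data into an internal table once instead of issuing one SELECT per record:

    * Sketch: overwrite customer attributes from 0CUSTOMER master data.
    * /BI0/PCUSTOMER holds the active, time-independent attributes.
    DATA: ls_cust TYPE /bi0/pcustomer.

    LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
      SELECT SINGLE * FROM /bi0/pcustomer INTO ls_cust
             WHERE customer = <result_fields>-customer
               AND objvers  = 'A'.
      IF sy-subrc = 0.
        <result_fields>-cust_group = ls_cust-cust_group.  "assumed attribute
      ENDIF.
    ENDLOOP.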
    Once the self loop transformation is completed and the sales orders are updated with the updated customer attributes, I will load the data to the cubes. (delta load)
    Now once the sales order realignment is finished in R3 , I will pull the same set of data in the DSO .
    Now the question is, is it necessary that I do a reinit of the cube before I load the second set of data to the cube?
    I guess it is not necessary.
    Please share your thoughts.

    Hello Partho,
    I understand your KFs are overwritten in the DSO. But in the BI 7 flow, the delta from DSO to cube is request-based.
    When you do the self-loop, the same records (which I guess you have already loaded to the cube) will get a different request ID.
    As a result, your change log table will get a different ID after activation, so the same records will be loaded to the cube again, which will double all your KFs in the cube.
    I believe that to correct the historical records in the cube, you either have to delete the cube contents and then run a delta from the DSO to the cube (as the first delta reads the active table of the DSO, you will get all KFs correct in the cube),
    or you have to selectively delete the previously loaded records from the cube.
    Regards
    Anindya

  • Field Status review between Dev and Prod.

    Hello All,
    While solving one of our tickets, I found some differences between the development and production servers in the selection of fields as suppressed, optional, or required for account groups with screen layout (customers) (transaction OBD2). Now I would like to check all account groups between production and development, but it takes a lot of time to go screen by screen.
    Can anybody throw some light on an alternative way, or can we download the table information?
    Thanks in advance for your cooperation.
    Regards,
    santosh

    Try this:
    transaction: SE16
    Table: T077D
    Execute
    Do the same in both systems and compare.
    Once you have identified differences, go back into the transaction mentioned (OBD2) for detailed analysis.
    GL
    Hein
    Please assign points if helpful.

  • Transformation between  xfaform and document types

    Hi again,
    As I understand that I cannot send an XFAForm as an attachment to an email (but a document is OK), I would like to know how I can transform an XFAForm (with an XML schema) into a document. Do I need to use exportData and importData to do this?
    This is important to me because I need to receive a form by email (either XML/XDP or PDF), do some manual work in my process, and return an email to the user with the answer in an attachment.
    Do you have any best practices for such a scenario?
    Sincerely
    Kim Christensen
    Dafolo A/S
    Denmark

    I tried using the exportData operation; I defined its input variable as document and output as xfaform, and I got the InvalidCoercionException below. Should I use the setValue operation instead?

    2007-12-05 19:56:13,795 INFO  [com.adobe.idp.scheduler.SchedulerServiceImpl] OneShot Trigger created ----------------------------------------
    2007-12-05 19:56:13,811 ERROR [com.adobe.workflow.AWS] stalling action-instance: 508 with message: ALC-DSC-119-000: com.adobe.idp.dsc.util.InvalidCoercionException: Cannot coerce object: <document state="passive" senderVersion="3" persistent="true" senderPersistent="false" passivated="true" senderPassivated="true" deserialized="true" senderHostId="127.0.0.1/192.168.100.234" callbackId="0" senderCallbackId="52" callbackRef="null" isLocalizable="true" isTransactionBound="false" defaultDisposalTimeout="600" disposalTimeout="600" maxInlineSize="65536" defaultMaxInlineSize="65536" inlineSize="5896" contentType="null" length="-1"><cacheId/><localBackendId/><globalBackendId/><senderLocalBackendId/><senderGlobalBackendId/><inline><?xml version="1.0" encoding="UTF-8"?><xfa:datasets xmlns:xfa="http://www.xfa.org/schema/xfa-data/1....</inline><senderPullServantJndiName>adobe/idp/DocumentPullServant/adobejb_server1</senderPullServantJndiName><attributes/></document> of type: com.adobe.idp.Document to type: class com.adobe.idp.taskmanager.form.impl.xfa.XFARepositoryFormInstance
        at com.adobe.workflow.datatype.CoercionUtil.toType(CoercionUtil.java:878)
        at com.adobe.workflow.datatype.runtime.impl.pojo.POJODataTypeRuntimeHandler.coerceFrom(POJODataTypeRuntimeHandler.java:101)
        at com.adobe.workflow.datatype.runtime.impl.pojo.POJODataTypeRuntimeHandler.getNode(POJODataTypeRuntimeHandler.java:127)
        at com.adobe.workflow.dom.VariableElement.setBoundValue(VariableElement.java:93)
        at com.adobe.workflow.pat.service.PATExecutionContextImpl.setProcessDataValue(PATExecutionContextImpl.java:729)
        at com.adobe.workflow.engine.PEUtil.invokeAction(PEUtil.java:583)
        at com.adobe.workflow.engine.ProcessEngineBMTBean.continueBranchAtAction(ProcessEngineBMTBean.java:2863)
        at com.adobe.workflow.engine.ProcessEngineBMTBean.asyncInvokeProcessCommand(ProcessEngineBMTBean.java:646)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.jboss.invocation.Invocation.performCall(Invocation.java:345)

  • Errors: Length for text entry field must be between 1 and 32

    Hi Guys,
    I get the following error message when defining and using a text variable in a series of columns in a characteristic structure. In addition, the text display does not work for one specific calculated key figure; it just displays the technical name of the text variable. Wherever this type of calculation is used, I get the text variable's technical name displayed!
    i.e. '&0T_FPER& -2 % on LY', where the calc =
    ('&0T_FPER& -2 Actual' / '&0T_FPER& -2 Last Year' - 1) * 100
    There is a calc KF as above for '-1 % on LY', and for 'LY'.
    Why do I get the above error message, and how can I get the text variable to display for this calc key figure?
    Thanks
    DV

    Hi Depesh,
    I had the same issue, and I solved it by moving the rest of the text beside the text variable to the next line, i.e.:
    &0T_FPER&
    -2 %
    To do this, just go to the end of the text variable and press Enter.
    Hope this helps.
    Kumar

  • How to map new field in DSO

    I have added a new field to the DSO and have replicated the DataSource as well, which now shows the field newly added in ECC. How can I map the new key figure in the transformation between the DataSource and the ODS? It is showing in the DataSource; can you please tell me how to map it now?

    Yes, but the problem is that I have an InfoSource in between, and the field is showing neither in the transformation between the DSO and the InfoSource nor in the one between the InfoSource and the DataSource. It is there in the first transformation on the DSO side, but not on the InfoSource side. How do I get that field onto the InfoSource side, so that I can map it in both transformations (between InfoSource and DSO, and between InfoSource and DataSource)? You know what I mean: the Z field is showing in the DSO field structure in the first-level mapping between DSO and InfoSource, but where do I map it?
    Edited by: Daniel on Nov 28, 2011 9:59 AM

  • DB-fields duplicated between FI and JVA

    Hello,
    I am a BW consultant and need the field "Transaction Type" (RMVCT) from FI (table FAGLFLEXT) for reporting in SAP BW. I know several fields are duplicated between FI and JVA whenever a transaction is created. In JVA (table JVSO1) there is a field named "Asset Transaction Type" (ANBWA). Do these two fields hold the same value?
    Is this documented anywhere? I can't find anything except general information in the documentation. I found some information on "integration activities" between FI and JVA, but nothing about the specific fields involved.
    Best regards,
    Christoffer Owe

    ANBWA is basically an asset-accounting transaction type, while RMVCT is used to report all types of movements in balances, whether for assets or any other accounts, for consolidation purposes. So these two fields are not the same.
    Thanks and regards
    Kedar

  • Mapping authorization field IDs to field names (description)

    Hello Folks,
    I am trying to map the authorization field (technical) names to their field descriptions (long names).
    Background: I do not yet have enough input from the functional team on restrictions at the data level. I have pulled a list of org field values from USORG. Now I am trying to map the other authorization fields to their descriptions.
    Table AUTHX only maps the field to its data element, and while table DD04V lists the long field names for a given data element, I need to drill down to get each description, once per field name / data element.
    Question: is going through SU20 for each object the best way to map the field descriptions, or is there a better way to map the auth field IDs to their descriptions? I mean for bulk input, as there are scores of auth fields...
    Regards,
    Prashant
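    (In case it helps others: instead of checking SU20 field by field, a small report can join AUTHX to the data element texts in DD04T. This is only a sketch; the report name is made up, and the inner join drops fields whose data element has no text in the logon language.)

    REPORT zauth_field_texts.

    * List all authorization fields with their data element descriptions.
    DATA: BEGIN OF ls_out,
            fieldname TYPE authx-fieldname,
            rollname  TYPE authx-rollname,
            ddtext    TYPE dd04t-ddtext,
          END OF ls_out,
          lt_out LIKE STANDARD TABLE OF ls_out.

    SELECT a~fieldname a~rollname t~ddtext
      INTO CORRESPONDING FIELDS OF TABLE lt_out
      FROM authx AS a
      INNER JOIN dd04t AS t
        ON t~rollname = a~rollname
      WHERE t~ddlanguage = sy-langu
        AND t~as4local   = 'A'.

    LOOP AT lt_out INTO ls_out.
      WRITE: / ls_out-fieldname, ls_out-rollname, ls_out-ddtext.
    ENDLOOP.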

    Thanks Jurjen, that helped a lot.
    I downloaded the authorizations into Excel, though (I did not use unconverted format).
    Solution in Excel, since there are a lot of blank cell values:
    -- name the columns clearly
    -- create a pivot table
    -- in the pivot, select only the field technical name and field description columns
    For some reason, when the other columns are selected, all the values go blank.
    I am sure others are better with Excel than I am
    Regards,
    Prashant

  • Purchasing Data Not Tying Up between BW and R/3.

    Hi,
    As part of our project, we reloaded all the data related to purchasing through the extractors
    2LIS_02_SCL
    2LIS_02_ITM
    2LIS_02_HDR
    as these were corrupted due to a patch application.
    We took all possible care during the reload of the data from R/3, such as locking users from making any transactions, deleting the setup tables, etc., in a step-by-step process.
    However, after the extraction, the data does not tie up between R/3 and the BW purchasing ODS.
    Did we do something wrong, or how can we get the missing records into BW again?
    Appreciate your help.
    Thanks.

    Hi,
    Did you fill the setup table with any selection? Previously, the number of records picked was larger than this time, right? Have you checked RSA3 in the source system? Is it showing the correct number of records? If yes, then check the selection tab of the InfoPackage, whether you have maintained any selection there.
    If not, then first check the selection in the setup table, and then the function module which extracts the data. You will find the function module in RSA2: just double-click on the extractor and you will get the function module name.
    Anyway, if you know which records are missing, just fill the setup table for those values only and then do a full load.
    I think you know the steps in LO extraction, but for your reference:
    1. Transfer the logistics DataSource for transaction data from Business Content
    With the transfer of the DataSource, the associated extraction structure is
    also delivered, but the extraction structure is based on LIS communication
    structures. Furthermore, based on the extraction structure for the DataSource,
    a restructuring table that is used for the initialization and full update to BI
    is generated.
    Naming convention:
    DataSource 2LIS_<Application>_<Event><Suffix>; where <Event> is
    optional
    Examples:
    2LIS_02_HDR: 02 = MM purchasing, HDR (→ HEADER) = document header
    Extraction structure MC<Application><Event/group of
    events>0<Suffix>; where MC is derived from the associated communication
    structures and <Suffix> is optional
    Examples:
    MC02M_0HDR: Extraction structure for the DataSource 2LIS_02_HDR,
    where M_ indicates the group for the events MA (order), MD (delivery
    schedule), ME (contract) and MF (request).
    Restructuring table (= setup table) <Extraction structure>SETUP
    Example:
    Extraction structure MC11VA0IT ⇒ restructuring table:
    MC11VA0ITSETUP
    2. Maintain extraction structure (transaction LBWE)
    This means that fields can be added to the extraction structure that is
    delivered with the DataSource without modifying anything. On the one
    hand, fields from the LIS communication structures that are assigned to the
    extraction structure can be used, that means standard fields that SAP has
    not selected, and on the other hand, customer fields that were attached to
    the LIS communication structures with the append technique can be used.
    After the extraction structure is created, it is generated automatically and the
    associated restructuring table is adapted.
    3. Maintain/generate DataSource
    In the DataSource maintenance, you can assign the properties Selection,
    Hide, Inversion (= Cancellation) and Field Only Known in Customer Exit to
    the fields of the extraction structure. After enhancing the extraction structure,
    the DataSource always has to be generated again!
    4. Replicate and activate DataSource in SAP BI (=metadata upload)
    5. Maintain Data Target (Data Store Object, InfoCube)
    6. Maintain Transformations between DataSource and Data Target
    7. Create a Data Transfer Process
    the Data Transfer Process will be used later to update the data from the PSA
    table into the Data Target.
    8. Set extraction structure for updating to active (transaction LBWE)
    In this way, data can be written to the restructuring table or the delta queue
    from then on using the extraction structure (see following steps).
    9. Filling the restructuring table/restructure (OLI*BW)
    During this process, no documents should be created or changed in the
    system! In some applications, it is possible to fill the restructuring table
    beforehand in simulation mode. These results are listed in a log (transaction
    LBWF). Before filling the restructuring table, you must ensure that the
    content of the tables is deleted (transaction LBWG), preventing the table
    from being filled multiple times. Once the restructuring tables are filled,
    document editing can resume as long as Unserialized V3 Update or Queued
    Delta is selected in the next step. Be absolutely sure that no V3 collection
    run is started until the next successful posting of an InfoPackage for delta
    initialization (see step 11).
    10. Select update method
    • Unserialized V3 update
    • Queued delta
    • Direct delta
    11. Create an InfoPackage for the DataSource and schedule the Delta
    Initialization in the Scheduler
    This updates the BI-relevant data from the restructuring table to the
    PSA table. Since the restructuring table is no longer needed after delta
    initialization, the content can be deleted (transaction LBWG).
    Use the Data Transfer Process created in step 7 to update the data from
    the PSA table into the Data Targets. After successful delta initialization,
    document editing can resume, as long as the direct delta update method
    was selected in step 10. This means that BI-relevant delta data is written
    directly to the delta queue.
    Note:
    If the DataSource supports early-delta initialization, the delta data can
    be written to the delta queue during delta initialization. This feature is
    controlled with an indicator in the Scheduler.
    12. Start V3 collection run (transaction LBWE)
    This step is only necessary when the update method unserialized V3 Update
    or Queued Delta was selected in step 10. By starting a corresponding job for
    an application, the BI-relevant delta data is read from the update tables or
    extraction queue and written to the delta queue.
    13. Create an InfoPackage for the DataSource in BI and schedule the Delta
    Update in the Scheduler
    The BI-relevant delta data from the delta queue for the DataSource is updated
    to the PSA table. Use the Data Transfer Process created in step 7 to update
    the data from the PSA table into the Data Targets.
    Hope this helps.
    Regards,
    Debjani...

  • Data model that takes part of data to DSO and part to cube

    Hello experts,
    I have a question about the data model I am currently working on. This model is about a reservation agency, where a customer contacts the agency through a sales representative and makes a reservation for a certain event. He later has the option of confirming or cancelling the reservation. We will be receiving data for future reservations every day, with the updated status of future and recently past reservations, sales representative in charge of the reservations, income that will be perceived from each reservation (if the reservation is confirmed) or that was perceived if the event already took place, and more.
    I have found the need to use a DSO to update the reservation status of incoming reservations, as well as other fields. I am loading from a flat file with 20 fields, and the destination is a cube, after the data gets cleansed in the DSO, where there are 2 key fields (reservation code and reservation date).  However, not all the remaining data from the flat file needs to get updated. Out of the other 18 fields, only around 9 may change on an update, while the others will never change (for example, the customer name, confirmation date and event name will not change, while the sales representative, reservation status and income perceived can change from one day to another).
    I was designing a model in which the DSO contained all 18 non-key fields as data fields, so that everything got updated and passed to the cube. But then I thought that, since many fields will never change, it would be better if the DSO only updated those fields that can actually change, while the other fields "jumped" directly to the cube; after the DSO updates the changing fields, the others will already be in the cube. This way, the DSO would receive and process less data, making the DSO tables smaller and the process more efficient. But I cannot find a proper way to do this from a single data source, since the data has to pass from the data source to the DSO and then through a transformation from the DSO to the cube.
    Is there a way to accomplish this, or should I just have all fields go through the DSO and from there to the cube?

    Hi, thanks for the suggestion. The reservation code, as of right now, is not master data, and it is almost always unique, except for rare occurrences where the same customer makes several reservations, each one for a different date (that is why I am using both reservation date and reservation code as key fields for the DSO). Because of that, and the fact that there are lots of reservations made each day, we defined that object without master data (since the master data table would become huge in a short amount of time). Perhaps I should change it to become master data? Or what should we do?
