Logic in PCA extraction

Hello,
I need to add field MATNR to the extract structure for data source 0EC_PCA_1 (ISPCACST) in order to be able to report on material number. But how should the user exit for this be written?
I know that the original table for actual PCA postings is GLPCA and the plan postings are in table GLPCP. All postings end up in table GLPCT (aggregated per period) and then function module ECPCA_BIW_GET_DATA reads this and populates structure ISPCACST. Am I right?
But how do I write the user exit code so that MATNR is added and populated in ISPCACST? Which source table should I use? I would have guessed GLPCT (since this seems to be the one used for 0EC_PCA_1), but I can't find MATNR in it.
How come DataSource 0EC_PCA_3, which uses the same function module for extraction and thereby reads from the same source table GLPCT, includes MATNR?
Please explain the logic of this and I'll be eternally grateful! :-)
Best regards,
Fredrik

pooja,
It depends on which fields you have defined as key fields in the DSO.
If the creation date is a key field, then the system will load the records as two separate records.
Thanks.
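For the original question: field enhancements of this kind are usually done by appending the field to the extract structure and filling it in the customer exit EXIT_SAPLRSAP_001 (enhancement RSAP0001, include ZXRSAU01). A minimal sketch, assuming a hypothetical append field ZZMATNR and a simplified lookup against the line-item table GLPCA; the key mapping below is an assumption to verify:

```
*---------------------------------------------------------------------*
* ZXRSAU01 - customer exit EXIT_SAPLRSAP_001 (enhancement RSAP0001).
* Sketch only: assumes ISPCACST has been extended with an append
* field ZZMATNR, and that the material can be read from the
* line-item table GLPCA. The WHERE clause is a hypothetical key
* mapping, not a verified one.
*---------------------------------------------------------------------*
DATA: ls_data TYPE ispcacst.

CASE i_datasource.
  WHEN '0EC_PCA_1'.
    LOOP AT c_t_data INTO ls_data.
      SELECT SINGLE matnr FROM glpca
        INTO ls_data-zzmatnr
        WHERE ryear  = ls_data-ryear      " hypothetical key mapping
          AND rbukrs = ls_data-bukrs
          AND rprctr = ls_data-prctr
          AND racct  = ls_data-racct.
      IF sy-subrc = 0.
        MODIFY c_t_data FROM ls_data.
      ENDIF.
    ENDLOOP.
ENDCASE.
```

A caveat worth checking first: GLPCT totals are aggregated per period, so a single totals record can correspond to several materials. That is likely why MATNR is not in ISPCACST in the first place, and a unique lookup is only possible if your posting data happens to allow it.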

Similar Messages

  • What is logic of LO extraction?

    Hi,
Can anyone explain the logic of LO extraction? I searched for it but couldn't find an answer.
    thanks,
    Nithi

    delta mode :
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f03da665-bb6f-2c10-7da7-9e8a6684f2f9?quicklink=index&overridelayout=true
    full mode :
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/50326ace-bac2-2b10-76bb-bd1a20ed2b57?quicklink=index&overridelayout=true
a quick search on LO extraction / set-up tables gave over 10,000 hits (articles, discussions, notes, ...); I'm sure you can find the information you're looking for
    M.

  • BW PCA extraction with the new GL

    Hi SCN
    I have a customer who is currently running classic GL loading into BW which also includes PCA and COPA.
    The R3 system is also running the new flexible ledger in parallel and the plan is to switch off classic GL at some point.
I am preparing to unwrap the new GL extractors 0FI_GL_10 and 0FI_GL_14, but can anyone advise what the effect of using the new GL will be for PCA? Do I need to replace the extractors, as for GL?
    regards
    Ian

    Hi Ian,
Generally speaking, New GL can replace PCA; that's why you will usually switch off PCA after activating New GL. Only the Profit Center as a master data object will remain, to be used as a characteristic in New GL.
As a consequence, you can deactivate the PCA transaction data flow(s) in BW at some point in the future. Especially once PCA is switched off in the ECC system, it won't make sense to continue loading from PCA. You should, however, continue the Profit Center master data loads.
    Best regards,
    Sander

  • How to extract this criteria? Logic For material Extraction?

    Hi,
    If we specify, say, 3 raw materials, we only want a listing of products that contain all three raw materials.
I know that with FM CS_WHERE_USED_MAT we get the list for a single material.
But how do I achieve this in a report that takes all three materials?
Basically CS15 fulfils their criteria, but they now need it for a range of materials.
    regards
    Kumar

I tried exactly that, but my requirement is to display only the products which contain all three of them (entered via the selection screen).
Right now it displays all products, irrespective of whether they contain 1, 2 or 3 of the materials.
    Check my code.
    SELECT matnr FROM mara
    INTO TABLE itab
    WHERE matnr IN s_matnr.
      IF sy-subrc IS INITIAL.
        LOOP AT itab.
          MOVE itab-matnr TO p_matnr.
          CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
            EXPORTING
              input  = p_matnr
            IMPORTING
              output = p_matnr.
          CALL FUNCTION 'CS_WHERE_USED_MAT'
            EXPORTING
              datub                      = p_datub
              datuv                      = p_datuv
              matnr                      = p_matnr
              postp                      = p_postp
              stlan                      = p_stlan
              werks                      = p_werks
              stltp                      = stltp_in
            IMPORTING
              topmat                     = selpool
            TABLES
              wultb                      = ltb
              equicat                    = equicat
              kndcat                     = kndcat
              matcat                     = matcat
              stdcat                     = stdcat
              tplcat                     = tplcat
              prjcat                     = prjcat
            EXCEPTIONS
              material_not_found         = 02
              no_where_used_rec_found    = 03
              no_where_used_rec_selected = 04
              no_where_used_rec_valid    = 05.
          IF sy-subrc IS INITIAL.
            LOOP AT ltb.
              MOVE ltb-matnr TO t_data-matnr.
              MOVE ltb-idnrk TO t_data-idnrk.
              MOVE ltb-stlan TO t_data-stlan.
              MOVE ltb-werks TO t_data-werks.
              MOVE ltb-ojtxb TO t_data-ojtxb.
              MOVE ltb-postp TO t_data-postp.
              WRITE ltb-menge TO t_data-menge.
              WRITE ltb-meins TO t_data-meins.
              WRITE ltb-bmein TO t_data-bmein.
              SHIFT t_data-matnr LEFT DELETING LEADING '0'.
              SHIFT t_data-idnrk LEFT DELETING LEADING '0'.
              SELECT SINGLE maktx FROM makt
              INTO t_data-maktx
              WHERE matnr = ltb-idnrk
              AND spras = 'EN'.
              APPEND t_data.
            ENDLOOP.
          ENDIF.
        ENDLOOP.
      ELSE.
        MESSAGE 'No data exists for chosen selection' TYPE 'I'.
        SUBMIT (sy-repid) VIA SELECTION-SCREEN.
      ENDIF.
    regards
    Kumar
    Message was edited by:
            praveen kp
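One way to get only products that contain all of the entered materials is to collect the where-used hits per input material and keep only parents that appear for every input. A hedged sketch of that intersection step, assuming t_data holds one row per (parent matnr, component idnrk) hit, with matnr as its first field so the AT END OF grouping works:

```
* Sketch only: keep parents found for EVERY input material.
DATA: lt_hits   LIKE t_data OCCURS 0 WITH HEADER LINE,
      lv_inputs TYPE i,
      lv_count  TYPE i.

DESCRIBE TABLE itab LINES lv_inputs.       " number of entered materials
lt_hits[] = t_data[].
SORT lt_hits BY matnr idnrk.
DELETE ADJACENT DUPLICATES FROM lt_hits COMPARING matnr idnrk.

LOOP AT lt_hits.
  AT NEW matnr.
    CLEAR lv_count.
  ENDAT.
  lv_count = lv_count + 1.                 " distinct components per parent
  AT END OF matnr.
    IF lv_count < lv_inputs.               " parent misses an input material
      DELETE t_data WHERE matnr = lt_hits-matnr.
    ENDIF.
  ENDAT.
ENDLOOP.
```

After this, t_data would only contain products whose BOMs reference all of the selection-screen materials; the field names follow the report above, but verify which of wultb-matnr/idnrk is the parent in your release.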

  • Needs the table from which "project definition" field is to be extracted

    Hi,
As a requirement in BW, we have to add the new field "project definition" to the BW cube 0COOM_C02 (cube description: CO-OM: Costs and Allocations (Delta Extraction)). For this, we have to add the same field to all the DataSources feeding this cube.
    Data sources under this cube are:
    0CO_OM_CCA_9 (Cost Centers: Actual Costs Through Delta Extraction)
Take any one of the DataSources and search for it on the R/3 side in transaction RSA6; double-click it and you will find the entire structure (fields) for that DataSource. Double-click the extract structure appearing above it to find the technical details of each field.
Now we have to add the field "project definition" to the DataSource. To extract it, we have to write the logic that reads the value of the project field from a table.
We don't have any problem writing the code, but our question is: from which table should we read the value, and on what basis (i.e. what should be the selection criteria for populating that field)?
If you need any more information on these DataSources, please reply.
    Thanking you in advance,
    Tarun Brijwani.

    The cube is related to module
    Financials Management & Controlling----->Controlling----->Overhead Cost Controlling
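If the project reference in these CO line items is carried as an object number (an OBJNR/PAROB beginning with 'PR' denotes a WBS element), one possible lookup chain for the project definition is WBS element → internal project number → project definition. A hedged sketch; the 'PR' convention and the field names are assumptions to verify against your data:

```
* Sketch: derive the project definition from a CO (partner) object number.
* Assumes lv_parob holds an object number like 'PR<pspnr>' (WBS element).
DATA: lv_parob TYPE j_objnr,
      lv_psphi TYPE ps_psphi,
      lv_pspid TYPE ps_pspid.

IF lv_parob(2) = 'PR'.
  " WBS element -> internal project number
  SELECT SINGLE psphi FROM prps INTO lv_psphi
    WHERE objnr = lv_parob.
  IF sy-subrc = 0.
    " internal project number -> external project definition
    SELECT SINGLE pspid FROM proj INTO lv_pspid
      WHERE pspnr = lv_psphi.
  ENDIF.
ENDIF.
```

Whether the DataSources above actually carry such an object number per record (e.g. in OBJNR or PAROB) needs to be checked in the extract structure first.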

  • Pl. provide ABAP logic

Can anybody help me with the logic (an ABAP routine in the InfoPackage) for extracting data for the 6th month back, based on sy-datum?
The logic should be on 0CALDAY.
Example: if I execute the InfoPackage today, i.e. 13.01.2012, I should get the data from 01.06.2011 to 30.06.2011.
Please help.
    Thanks in Advance.
    Maddali VSKP

    Hi,
As I am not a core ABAPer, I tried the logic below in development. The logic seems to work and I am getting the expected results.
DATA: l_cur_month(2) TYPE n,
      l_pre_month(2) TYPE n,
      l_cur_year(4)  TYPE n,
      z_ppredat      TYPE dats,
      z_ppredat1     TYPE int1,
      z_ppredat2(8)  TYPE n,
      z_ppredat3(2)  TYPE n,
      l_pre_year(4)  TYPE n.
DATA: l_idx LIKE sy-tabix.

  READ TABLE l_t_range WITH KEY
       fieldname = 'CALDAY'.
  l_idx = sy-tabix.
  l_cur_month = sy-datum+4(2).
  l_cur_year  = sy-datum(4).
  l_pre_year  = sy-datum(4).
* Months August..December: target month lies in the same year.
* (The boundary must be >= 8; with >= 7, July would yield month 00.)
  IF l_cur_month >= 8.
    l_pre_month = l_cur_month - 7.
  ENDIF.
* Months January..July: target month lies in the previous year.
  IF l_cur_month <= 7.
    l_pre_year  = l_cur_year - 1.
    l_pre_month = 12 - ( 7 - l_cur_month ).
  ENDIF.
  CONCATENATE l_pre_year l_pre_month '01'
    INTO z_ppredat.
* Determine the last day of the target month.
  CALL FUNCTION '/OSP/GET_DAYS_IN_MONTH'
    EXPORTING
      iv_date = z_ppredat
    IMPORTING
      ev_days = z_ppredat1.
  z_ppredat3 = z_ppredat1.
  CONCATENATE l_pre_year l_pre_month z_ppredat3
    INTO z_ppredat2.
  l_t_range-sign   = 'I'.
  l_t_range-option = 'BT'.
  l_t_range-low    = z_ppredat.
  l_t_range-high   = z_ppredat2.
  MODIFY l_t_range INDEX l_idx.
  p_subrc = 0.
Could you please help me validate this logic?
    Thanks in Advance,
    Maddali VSKP
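As a cross-check, the same window can be computed without any month/year branching by stepping back month by month from the first day of the current month. A sketch; the count of 7 steps matches the 13.01.2012 → June 2011 example:

```
* Sketch: first and last day of the month 7 months before sy-datum.
DATA: lv_date  TYPE dats,
      lv_first TYPE dats,
      lv_last  TYPE dats.

lv_date = sy-datum.
lv_date+6(2) = '01'.          " first day of the current month
DO 7 TIMES.
  lv_date = lv_date - 1.      " last day of the previous month
  lv_last = lv_date.
  lv_date+6(2) = '01'.        " first day of that month
ENDDO.
lv_first = lv_date.
* For sy-datum = 20120113: lv_first = 20110601, lv_last = 20110630.
```

This also avoids the dependency on /OSP/GET_DAYS_IN_MONTH, since ABAP date arithmetic handles month lengths and leap years itself.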

  • Datasources and Extract structure

    Hi guys,
I am pretty confused about DataSources and extract structures.
Can someone please explain them to me in simple words?

    Hi Zubin,
A DataSource is a consolidated list of the fields available, and the extract structure is a Data Dictionary structure which describes those fields with additional technical elements such as data element, domain and so on...
    To make it more clear look at this description from F4 help:
    A DataSource is an object for retrieving data. The DataSource is localized in the OLTP system.
    It has
    an extract structure,
    an extraction type, and
    an extraction method.
    The extract structure describes the fields of the internal table that contain the extracted data. The extraction type describes which type of extractor the DataSource uses. The extraction method extracts the data and transfers it into an internal table with the same type as the extract structure.
The DataSource also contains information on the type of data that it stages, for example attributes, transactional data, hierarchies, or texts. It can also support different types of data update.
    Extract Structure for a DataSource
The extract structure for a DataSource shows the format in which the DataSource, or the extractor for the DataSource, transfers its data.
A data element must be assigned to each field of the extract structure. This allows the Business Information Warehouse to map field names to InfoObjects intelligently, using just this data element.
The extract structure must be created in the DDIC as a dictionary structure or transparent table. A view is not permitted here, since it would not give you the option to add an append.
Appends enable you to implement your individual requirements and your own business logic in the extraction process. You can fill the fields of the append using function enhancements.
    Hope this helps
    Thanks,
    Raj

  • How to extract CATSDB ?

I have a request where I need to replicate the BW CATSDB extractor logic within R/3, i.e. extract data from CATSDB. I have used FM CATS_BIW_GET_DATA2 (extractor 0CA_TS_IS_1), but the FM requires a value in I_REQUNR and nothing works. Does anyone have any other suggestion on how to get the data?
Note: I have tried CATS_BIW_GET_DATA, and that worked when I entered i_requnr = '000'. However, I received a few thousand records fewer than the actual number in the table itself.

    Hi,
CATS_BIW_GET_DATA selects only status '30' (approved entries) from CATSDB. This might explain the difference.
    Regards
    Nicola
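If the goal is just to reproduce the extractor's result set directly from the table, a simple selection with the same status filter should come close. A sketch; whether status '30' is the only filter the FM applies is an assumption worth verifying in the source of CATS_BIW_GET_DATA:

```
* Sketch: read approved time sheet records directly from CATSDB.
DATA: lt_cats TYPE STANDARD TABLE OF catsdb.

SELECT * FROM catsdb INTO TABLE lt_cats
  WHERE status = '30'.        " approved entries, as the extractor selects
```

Comparing the row count of this selection with the extractor output would confirm whether the missing few thousand records are simply the unapproved ones.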

  • SRM Extraction Issue

    Hi,
We are extracting data for the shopping cart approval process with the standard SAP DataSource 0BBP_TD_SC_APPR_1 on our SRM system. The extract structure BBP_SC_A_BW_GET_TD_STRUC for this DataSource does not contain an item level. Adding an item level to the structure is possible, but we are not able to extend the logic of the extraction process to fill this new level.
Could you also send me the SRM-related tables and item fields?
    Regards,
    Jayapal.

I fixed the issue myself.

  • Same set of Records not in the same Data package of the extractor

    Hi All,
I have a scenario: while extracting records from ECC, I want to add some more records based on a condition. To be more clear, based on some condition I want to add additional lines of data by issuing APPEND c_t_data.
    For eg.
    I have  a set of records with same company code, same contract same delivery leg and different pricing leg.
    If delivery leg and pricing leg is 1 then I want to add one line of record.
There will be several records with the same company code, contract, delivery leg and pricing leg. In the extraction logic I copy with i_t_data[] = c_t_data[], then sort by company code, contract, delivery leg and pricing leg, then DELETE ADJACENT DUPLICATES to get one record per group; based on this record, under some condition, I populate the new line my business needs.
My concern is: if the same set of records overshoots the data package size, how do I handle this? Is there any option?
My data package size is 50,000. Suppose the same set of records (same company code, contract, delivery leg and pricing leg) starts at record 49,999. If there are 10 records with these characteristics, the extraction will happen across 2 data packages, and the DELETE ADJACENT DUPLICATES logic above will go wrong. How can I handle this scenario? Would a delta-enabled function module help? I want to do it only in the extraction, as a DataSource enhancement.
    Anil.
    Edited by: Anil on Aug 29, 2010 5:56 AM

    Hi,
You will have to enhance the DataSource; please follow the link below.
You can write your logic to add the additional records in the CASE statement for your DataSource.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c035c402-3d1a-2d10-4380-af8f26b5026f?quicklink=index&overridelayout=true
    Hope this will solve your issue.
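For the package-boundary problem specifically, one common pattern in such an enhancement is to hold back the trailing (possibly incomplete) key group of each package in a global buffer and prepend it to the next package, so a group is only processed once it is guaranteed complete. A rough sketch; the structure name and key fields are placeholders:

```
* Sketch: carry the trailing key group of each data package over to the
* next extractor call, so a group is only processed once complete.
* gt_carry must live in the TOP include (global) so it survives between
* calls; zoxd_struc and the key fields are placeholders.
DATA: gt_carry TYPE STANDARD TABLE OF zoxd_struc,
      ls_last  TYPE zoxd_struc,
      ls_row   TYPE zoxd_struc.

* Prepend whatever was held back from the previous package.
INSERT LINES OF gt_carry INTO c_t_data INDEX 1.
CLEAR gt_carry.

SORT c_t_data BY bukrs contract del_leg price_leg.   " placeholder keys

DESCRIBE TABLE c_t_data LINES sy-tfill.
IF sy-tfill > 0.
  READ TABLE c_t_data INTO ls_last INDEX sy-tfill.
* Hold back the last group: it may continue in the next package.
  LOOP AT c_t_data INTO ls_row
       WHERE bukrs     = ls_last-bukrs
         AND contract  = ls_last-contract
         AND del_leg   = ls_last-del_leg
         AND price_leg = ls_last-price_leg.
    APPEND ls_row TO gt_carry.
  ENDLOOP.
  DELETE c_t_data WHERE bukrs     = ls_last-bukrs
                    AND contract  = ls_last-contract
                    AND del_leg   = ls_last-del_leg
                    AND price_leg = ls_last-price_leg.
ENDIF.
```

One gap this sketch leaves open: the very last package must flush gt_carry (e.g. when the extractor signals its final call), otherwise the held-back group is lost.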

  • Report for delivered but not billed orders

    Hi All,
I am trying to build a query using a logical database to extract all orders for which delivery has been done but which are not yet billed. I believe I can extract this data using the VAV logical database, but I am not quite sure whether I should use VAV or VFV.
Can someone let me know which database to use, and also what input conditions I need to maintain to extract only the information I am looking for?
    Hope my question is clear, await inputs.
    Vivek

    Hi Vivek,
What I wanted to say is that you won't get billing information from VAV, since as far as I know it doesn't contain any relevant table.
You should try creating a query using LIKP - LIPS - (join type: left outer) - VBRK. This connects deliveries to billing documents, and you can also get the sales order number. Please try this.
(Unfortunately I don't know which logical database would suit you.)
    BR
    Csaba
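Outside of SAP Query, the same idea can be sketched directly in Open SQL with a left outer join. Since VBRK has no direct key link to the delivery, the sketch below joins the billing items via VBRP-VGBEL/VGPOS (the preceding-document fields) instead, and keeps delivery items with no billing item:

```
* Sketch: delivery items that have no corresponding billing item yet.
DATA: BEGIN OF ls_open,
        vbeln TYPE lips-vbeln,
        posnr TYPE lips-posnr,
      END OF ls_open,
      lt_open LIKE STANDARD TABLE OF ls_open.

SELECT lips~vbeln lips~posnr
  INTO TABLE lt_open
  FROM likp
  INNER JOIN lips ON lips~vbeln = likp~vbeln
  LEFT OUTER JOIN vbrp ON vbrp~vgbel = lips~vbeln
                      AND vbrp~vgpos = lips~posnr
  WHERE vbrp~vbeln IS NULL.    " no billing item references this line
```

Caveat: on older releases, classic Open SQL restricts WHERE conditions on the right-hand table of a LEFT OUTER JOIN; there the NULL check may need to be replaced by a two-step selection (e.g. FOR ALL ENTRIES as an anti-join).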

  • Open hub destination issue

    Hi,
    In our project we have Client Instance (eg: BWPD) and Application server (eg: BWPRD) defined for load balancing.
    We have created open hub and assigned destination server BWPD.
When I execute the DTP manually in BWPD, it runs successfully.
However, the same DTP fails when placed in the process chain, with the error message:
    No Such File or Directory
    Could not open file D:\usr\sap\BWPD\A01\work\Material on application server
    Error while updating to target ZXXXX.
Options tried:
Scheduling the process chain on the background server BWPD (the same server specified in the open hub destination) - the DTP still failed.
Trying the application server - failed.
Trying HOST as the option - failed.
I can't work out what is going wrong. Any thoughts?
    Regards.

Hi there,
I found the document below quite useful; maybe it sheds some light on your issue.
[Creating  Open Hub Destination  using a Logical file to extract the data|Creating  Open Hub Destination  using a Logical file to extract the data]
Also, what OS do you have, and has the syntax group been created accordingly?

  • Missing PARTNO field in Write Optimized DSO

    Hi,
I have a write-optimized DSO for which the partition has been deleted (reason unknown) in the Dev system.
For the same DSO, partition parameters exist in QA and production.
Now, while transporting this DSO to QA, I am getting the error "Old key field PARTNO has been deleted", and the DSO could not be activated in the target system.
Please let me know how I can re-insert this technical key PARTNO in my DSO.
I presume it has something to do with the partitioning of the DSO.
Please help.

    Hi,
    Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly.
    The loaded data is not aggregated; the history of the data is retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so that the aggregation of data can take place later in standard DataStore objects.
    The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary with this type of DataStore object. If standard key fields exist anyway, they are called semantic keys so that they can be distinguished from the technical keys. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
    You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    PS: Excerpt from http://help.sap.com/saphelp_nw2004s/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
    Hope this helps.
    Best Regards,
    Rajani

  • CO-PA Account Based enhancement : BELNR-CO

    Hi all,
    I have successfully implemented a CO-PA ETL for both Costing and account based CO-PA in my system.
    I enhanced the costing based Extraction with no major concern.
However, I now wish to enhance account-based CO-PA, and here I struggle, because I cannot get the controlling document number (part of the COEP key) into my extract.
I need this controlling document number in order to implement some logic in the extraction, but the creation of the account-based DataSource in KEB0 does not offer this field as part of the extract structure. (As a side note, it cannot be defined in KEQ3 either, as the granularity of the profitability segment would then be too fine.)
...Any idea on how to collect the CO document number in this scenario? I am missing something...
    I would very much appreciate some insight from experienced CO-PA - BW community members !
    Cheers
    Patrick

    Patrick,
The link between COSP/COSS and CE4xxxx is the OBJNR of tables COSP/COSS. The first two positions are 'EO'; the following positions contain the name of the operating concern, and the rest of the OBJNR corresponds to the PAOBJNR of table CE4xxxx. In table CE4xxxx, the field KNT_FRM_KZ = '1' indicates an account-based CO-PA object.
For the standard extraction process you don't need to build anything between these tables; the extractor already knows the relationships.
    Regards,
    Peter
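Based on Peter's description, the segment lookup from a CO object number could be sketched as follows; the offsets, the operating concern and the CE4 table name are assumptions to adapt:

```
* Sketch: from a COSS/COSP OBJNR of the form 'EO' + operating concern
* + PAOBJNR to the account-based segment in the CE4 table.
CONSTANTS: lc_erkrs TYPE erkrs VALUE 'IDEA'.  " hypothetical op. concern

DATA: lv_objnr   TYPE j_objnr,
      lv_paobjnr TYPE rkeobjnr,
      ls_ce4     TYPE ce4idea.                " table name depends on concern

IF lv_objnr(2) = 'EO' AND lv_objnr+2(4) = lc_erkrs.
  lv_paobjnr = lv_objnr+6.                    " remainder = PAOBJNR
  " KNT_FRM_KZ = '1' marks account-based CO-PA objects.
  SELECT SINGLE * FROM ce4idea INTO ls_ce4
    WHERE paobjnr    = lv_paobjnr
      AND knt_frm_kz = '1'.
ENDIF.
```

This only helps for custom logic outside the standard extractor; as Peter notes, the standard extraction resolves these relationships on its own.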

  • CO-PA Summarization Level

    Hi,
I am very unclear about summarization levels in CO-PA. Could anyone explain the real significance of summarization levels, why we need them, the steps to configure them, and the logic by which data is extracted from them?
    Thank you,
    Sam

1. What is the use of summarization in CO-PA?
    summarization level in Profitability Analysis (CO-PA):
    A summarization level is a tool for improving performance.
    Summarization levels store an original dataset in summarized form. This gives quick access to presummarized data for functions that do not need detailed information, and means that the data does not need to be summarized separately each time.
    you can check this in this link
    http://help.sap.com/saphelp_nw04/helpdata/en/ae/bd116e940f11d2b63c0000e82debc6/frameset.htm
2. If we don't create summarization levels in account-based CO-PA, will records still be available in RSA3? - Yes.
3. How do we create a summarization level for a DataSource?
You can do this in transaction KCDU/KCDV, or via SPRO.
    Hope it helps.
    Regards
