COPA Extraction - Basics to Advanced

Hi gurus,
I am new and practising COPA extraction. Sorry, I am editing this for more clarity.
I know that there are:
characteristics
characteristic values
value fields
segment level
segment table
How are they linked to each other? When I complete the selection they all appear in one group. And how do I model an InfoCube (do we need to go back to R/3 to check)?
On the R/3 side, when we define a COPA DataSource we do it at operating concern level, right?
I want to know: when defining it, we make a selection of characteristics from the DataSource. What is that selection based on? Do we have to do our own analysis, or will the client give us the required selection criteria?
Sorry once again for posting a confused message; I am desperate to understand this. Please try to understand and help me.
Message was edited by: neelkamal moganti

If I've understood correctly, you now have to link your DataSource to the InfoSource and then create the InfoPackage; there you can find the selection criteria.
Regards

Similar Messages

  • COPA Extraction

    Hi Friends,
    I'm new to BI 7.
    1) Please provide detailed info on how to go about COPA extraction.
    2) I need to make customer-specific DataSources, so please guide me on this.
    3) My existing COPA reports are from Report Painter/ABAP and they drill down through multiple screens. Is multiple drill-down functionality possible in BI? If so, please provide details on how to build a sample drill-down report.
    4) What are the crucial steps to take in COPA extraction?
    Please provide any links/PPTs/PDFs.
    It will be highly appreciated.
    Thanks & Regards.

    CO-PA:
    CO-PA collects all the OLTP data needed to calculate contribution margins (sales, cost of sales, overhead costs).
    CO-PA also has powerful reporting tools and planning functions; however, its reporting facility is limited:
    the integrated cross-application reporting concept is not as differentiated as it is in BW, and
    the OLTP system is optimized for transaction processing, so a high reporting workload has a negative impact on the overall performance of the system.
    Flow of Actual Values:
    During billing in SD, revenues and payments are transferred to profitability segments in Profitability Analysis; at the same time, sales quantities are valuated using the standard cost of goods manufactured.
    In overhead cost controlling, primary postings are made to objects in overhead cost controlling and assigned to the relevant cost objects. The actual cost of goods manufactured is also assigned to cost objects, and at the same time the performing cost centers are credited.
    The production variances calculated for the cost objects (for example production orders), i.e. the difference between the actual cost of goods manufactured and the standard cost, are divided into variance categories and settled to profitability segments.
    "What are the top products and customers in our different divisions?" This is one of the typical questions that can be answered with the CO-PA module.
    Characteristics are the fields in an operating Concern according to which data can be differentiated in Profitability Analysis.
    Each characteristic in an operating concern has a series of valid characteristic values.
    A profitability segment is a fixed combination of valid characteristic values.
    Characteristics:
    Some characteristics, such as material, customer and company code, are predefined in the operating concern. In addition to these fixed characteristics we can define up to 50 characteristics of our own. In most cases we will be able to satisfy our profitability analysis requirements with between 10 and 20 characteristics.
    Value Fields:
    Key figures such as revenue, cost of goods sold and overhead costs are stored in value fields.
    Organizational Structure:
    The value fields and characteristics required to conduct detailed analysis vary from industry to industry and between individual customers.
    In CO-PA we can configure the structure of one or more operating concerns in each individual installation.
    An operating concern is an organizational structure that groups controlling areas together, in the same way that controlling areas group companies together.
    Data Base Structures in CO-PA:
    Actual line items: table CE1xxxx
    Plan line items: table CE2xxxx
    Line items contain information at document level that in most cases is too detailed for analysis, for example the CO-PA document number, sales document number and posting date.
    CO-PA therefore also maintains summarized data that is used by all CO-PA functions such as reporting, planning, assessments, settlements and so on.
    Segment table: CE4xxxx
    The characteristics that describe the market are first separated from the rest of the line item.
    Each combination of characteristic values is stored under a profitability segment number. The link between the profitability segment number and the characteristic values is maintained in the segment table.
    Segment level: CE3xxxx
    The value fields are summarized at profitability segment and period level and stored together with these keys in table CE3xxxx.
    This table contains the period totals for each profitability segment number.
    Storage procedure:
    We can compare an operating concern, represented by its segment table and segment level, to an InfoCube: the InfoCube consists of dimension tables (the segment table) and a fact table (the segment level).
    Unlike the fact table, the segment level key contains additional keys, such as the processing type, on top of the key field from the segment table.
    Characteristics in CO-PA correspond to characteristics (or attributes) in the InfoCube.
    Value fields can be regarded as key figures.
    Summarization levels in an operating concern have the same function as aggregates for an InfoCube; the difference is that aggregates are managed together with the InfoCube itself, while summarization levels are updated at regular intervals, usually daily.
    Line items in CO-PA are comparable to an operational data store.
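    To make the analogy concrete, here is a minimal ABAP sketch that joins the two tables the way a query joins a dimension table to a fact table. The operating concern 'IDEA' (tables CE4IDEA/CE3IDEA), the characteristics shown and the value field VV010 are illustrative assumptions, not part of the original post; the real names depend on your operating concern.

    REPORT zcopa_segment_join.

    " Local result type: segment number plus a few characteristics from the
    " segment table and one value field from the segment level.
    TYPES: BEGIN OF ty_result,
             paobjnr TYPE n LENGTH 10,           " profitability segment number
             kndnr   TYPE c LENGTH 10,           " customer (characteristic)
             artnr   TYPE c LENGTH 18,           " product (characteristic)
             perio   TYPE n LENGTH 7,            " period
             vv010   TYPE p LENGTH 8 DECIMALS 2, " value field, e.g. revenue
           END OF ty_result.

    DATA lt_result TYPE STANDARD TABLE OF ty_result.

    " CE4xxxx (the "dimension") maps each segment number to one combination
    " of characteristic values; CE3xxxx (the "fact") keeps the value fields
    " summarized per segment number and period.
    SELECT s~paobjnr s~kndnr s~artnr l~perio l~vv010
      FROM ce4idea AS s
      INNER JOIN ce3idea AS l ON l~paobjnr = s~paobjnr
      INTO TABLE lt_result
      UP TO 100 ROWS.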
    Data Staging Overview:
    To provide data for BW, all CO-PA DataSources must be generated in the source system.
    DataSources are defined at operating concern and client level.
    A DataSource contains the following information:
    name of the operating concern
    client
    a subset of the characteristics
    a subset of the value fields
    a time stamp indicating which data has already been loaded into BW
    Creating a Data Source:
    Since a DataSource is always defined at operating concern and client level, a standard name is generated that starts with 1_CO_PA_<%CL>_<%ERK>. We can change this name if necessary; however, the prefix 1_CO_PA is mandatory.
    Data Source-Segment Table:
    The characteristics from the segment table are the characteristics maintained in transaction KEQ3.
    By default, all characteristics are selected.
    Data Source-Accounting Based:
    When we generate an accounting-based CO-PA DataSource, we select the accounting-based option.
    The fields KOKRS, BUKRS and KSTAR are compulsory.
    No characteristics from the line items are available, because accounting-based DataSources do not contain them.
    There are no value fields or calculated key figures available.
    KOKRS and PERIO must be selected as selection fields.
    Hope it helps you.

  • Upgrading Oracle SOA Suite 10.1.3.1 to 10.1.3.4 by basic and advanced installation

    Hi,
    I have installed the Oracle SOA Suite 10.1.3.1 basic installation and need to upgrade to 10.1.3.4. I searched on Google and some suggested:
    1) install database 10g XE or some 10.2+ version
    2) run the irca.cmd script to create the dehydration store
    3) install the Oracle SOA Suite 10.1.3.1 basic installation
    4) run the SQL scripts before applying the patch
    5) apply the patch to upgrade to 10.1.3.4
    Here I have one doubt: in the basic installation the Oracle Lite database is installed by default, so how do I point SOA Suite at an external database in a basic installation? Is it possible?
    If we use the default Oracle Lite database for the upgrade to 10.1.3.4, I think it will ask for the orabpel and oraesb user IDs and passwords when running the SQL scripts, which I do not know.
    And do I still need to run the irca.cmd script for creating the schemas?
    I am confused here; I just want to know the upgrade steps for both the basic and the advanced SOA installation.
    Any answer will be much appreciated.
    I will be keenly waiting for your answer.
    Thanks,
    Ahmed.

    Yes, there are some additional steps.
    If you are doing clustering, this should be your bible:
    http://download.oracle.com/docs/cd/E10291_01/core.1013/e10294/toc.htm
    To install 64-bit, have a look at these references:
    http://download.oracle.com/docs/cd/B31017_01/getstart.htm
    Metalink note 465159.1
    cheers
    James

  • COPA extraction issues - Incorrect field name for selection

    Dear gurus,
    I have issues with my COPA extraction; hope you guys can give me some clue here. Points will be rewarded if I find the solution.
    Requirement:
    To add an additional field.
    Steps:
    As usual: 1) delete the DataSource
                   2) create it again, selecting the new field (KEB0)
                   3) there are no issues when checking with RSA3 in DEV, but after transporting to QAS I hit this issue.
    Issue:
    In RSA3, the check hits the error "Errors occurred during the extraction".
    When checking the RSA3 log, here is what I get...
    ** I did try to find a solution in the SCN forum, with no luck. Hope to get some help here.
    thank you
    rgds
    Edi.

    Yes, I did get this feedback:
    Hi Edi,
    As per standard SAP, once you have assigned characteristics and value fields to your operating concern (KEA0), you cannot de-assign them. While characteristics or value fields are in assigned status, you cannot delete them.
    But I hope there won't be much of a problem having that characteristic as well. If you look at it performance-wise, then instead of deleting it, mark it as "not used" under the profitability segment characteristics, remove it from the characteristic groups, and so on. While doing so, please check any derivation rules written based on these, or any assessment cycles defined with them.
    Regards...
    Jose
    Forwarded this to my FICO team.

  • Basic or Advanced OBIEE installation

    We are trying to install the Advanced OBIEE installation. Question: can we use WebLogic Server instead of Oracle Application Server to perform the Advanced installation type for OBIEE? We are using WebLogic 10.3 and OBIEE 10.1.3.4.
    During the installation I am getting this message:
    ■ You are evaluating the Oracle Business Intelligence product.
    ■ The Web Server is a J2EE Application Server other than Oracle Application Server.
    ■ Choose the Advanced Installation type when deploying Oracle BI with Oracle Application Server.
    If you plan to perform an Advanced Installation, Oracle Application Server (version 10.1.3.1.0 or later) must be installed before you run the Oracle Business Intelligence installer.
    Lastly, what is the difference between the basic and advanced installation?

    Hi,
    I think you need to go ahead with the Basic installation (which uses OC4J as the application server). After your OBIEE installation is complete, install the right version of WebLogic and deploy the analytics.war file and the BI Publisher war file (if needed) so that the applications are created on your application server. Once this is done you are good to use OBIEE, which will run on the WebLogic server as an application listening on the port you configured.
    Hope this helps
    Regards,

  • Copa extraction and tables read

    We want to perform currency translation for a COPA extraction. Whether translation takes place is determined by currency type, company code and the plan/actual indicator.
    I understand COPA dynamically picks which table to read during extraction (I've read the notes).
    The problem we face is that CE3 has the plan/actual indicator but no company code, and CE4 has the company code but no plan/actual indicator.
    The only table that has all this information is CE1. How can we be sure all the correct records are translated if it reads from CE3 and CE4?
    Our COPA DataSource has line item information coming over, so I know it will be reading CE1, but what if it reads CE3 and CE4? This would create inconsistencies.
    Thanks,
    Mark.

    Totalling can be done in reporting.
    This is what I think: since you are extracting line-item-level data, summarized data can't be extracted at that level, as it would be incorrect (a total would be repeated for each line). If you have chosen even a single line-item characteristic, summary-level values can't be included in the DataSource.
    When you run a query, you have every option to see the totals in various ways based on the line-item-level values.
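    As a side note, a minimal ABAP sketch of the point made in the question: the line-item table is the only one of the three that carries all the translation criteria together. The operating concern 'IDEA' (table CE1IDEA) is an illustrative assumption; the table name and fields depend on your concern.

    REPORT zce1_translation_check.

    " Currency type, company code and plan/actual indicator all sit on the
    " CE1xxxx line item, unlike the summarized tables CE3xxxx and CE4xxxx.
    TYPES: BEGIN OF ty_ce1,
             paledger TYPE c LENGTH 2, " currency type
             bukrs    TYPE c LENGTH 4, " company code
             plikz    TYPE c LENGTH 1, " plan/actual indicator
             perio    TYPE n LENGTH 7, " period
           END OF ty_ce1.

    DATA lt_items TYPE STANDARD TABLE OF ty_ce1.

    SELECT paledger bukrs plikz perio
      FROM ce1idea
      INTO TABLE lt_items
      UP TO 100 ROWS.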

  • Any good basic to advanced material available online for syndication

    Hi Guys,
    Does anybody have basic to advanced material covering MDM syndication?
    Also, since I am trying my hand more at MDM nowadays, I would like some must-read blogs/articles on MDM, both technical and functional.
    Thanks,
    Mike

    Hi Michael,
    Apart from the detailed Syndicator guide, you can go through the following:
    A very crisp and detailed document describing the steps from various systems:
    MDM Syndication
    A good document on selective syndication:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80eb6ea5-2a2f-2b10-f68e-bf735a45705f?QuickLink=index&overridelayout=true
    An advanced Syndicator document:
    How to Trigger Syndication when certain fields of a Record are changed WITHOUT creating any Time Stamps
    Thanks,
    Ravi

  • How are old licenses migrated during basic and advanced Cisco ISE?

    Hello,
    How are old licenses migrated during basic and advanced Cisco ISE?
    Regards,
    Alvaro

    Hi,
    What do you mean by migration? Are you migrating to other hardware, or are you upgrading from a basic to an advanced license?
    Here is the install/upgrade process:
    http://www.cisco.com/en/US/docs/security/ise/1.1/user_guide/ise_man_license.html#wp1059946
    If you are migrating from one device to another I think you need to use this link:
    https://tools.cisco.com/SWIFT/LicensingUI/Home
    HTH
    Amjad

  • af:query basic and advanced mode

    When the UI Hints Search Region Mode is Advanced, how can I display <af:query> in basic mode on the first page load, without any user action?

    There is a use for 'Advanced' mode, so I cannot change the mode to 'Basic'.
    However, I only want to display "Basic" mode in my UI, with no buttons to switch between "Basic" and "Advanced".
    I set the following properties:
    displayMode="simple"
    saveQueryMode="hidden"
    modeChangeVisible="false"
    In this situation, how can I do it? The ways you suggested require some action (like clicking a button or changing an input text value); those do not exist in my UI.

  • Difference between LO and COPA Extraction and Steps

    Hello All,
    Can you please share the COPA extraction steps and also explain why COPA is different from other extraction mechanisms (e.g. LIS/LO)?
    Also, please advise what my approach should be for any COPA extraction related reporting projects.
    Thanks,
    Mainak

    Hi Mainak,
    COPA Extraction -steps
    R/3 System
    1. KEB0
    2. Select Datasource 1_CO_PA_CCA
    3. Select Field Name for Partitioning (e.g. company code)
    4. Initialise
    5. Select characteristics & Value Fields & Key Figures
    6. Select Development Class/Local Object
    7. Workbench Request
    8. Edit your Data Source to Select/Hide Fields
    9. Extract Checker at RSA3 & Extract
    BW
    1. Replicate Data Source
    2. Assign Info Source
    3. Transfer all Data Source elements to Info Source
    4. Activate Info Source
    5. Create Cube on InfoProvider (copy structure from InfoSource)
    6. Go to Dimensions and create dimensions, Define & Assign
    7. Check & Activate
    8. Create Update Rules
    9. Insert/Modify KF and write routines (const, formula, abap)
    10. Activate
    11. Create InfoPackage for Initialization
    12. Maintain Infopackage
    13. Under Update Tab Select Initialize delta on Infopackage
    14. Schedule/Monitor
    15. Create Another InfoPackage for Delta
    16. Check the Delta option
    17. Ready for Delta Load
    LIS, CO/PA, and FI/SL are customer-generated generic extractors, while LO extractors are BW content extractors.
    LIS is a cross-application component of SAP R/3 which includes the Sales Information System, Purchasing Information System, Inventory Controlling, and so on.
    Similarly, CO/PA and FI/SL are used for specific application components of SAP R/3.
    CO/PA collects all the OLTP data for calculating contribution margins (sales, cost of sales, overhead costs).
    FI/SL collects all the OLTP data for financial accounting and the special ledger.
    Differences (CO-PA / LIS / LO / FI-SL):
    1) Method for delta: time stamp / LIS info structure / change log / pseudo delta
    2) Application specific: all of them
    3) Extractors: customer generated (CG) / SAP BW content / CG / CG
    4) Tables used: line item tables (CO-PA) / one structure (S5nn), two tables (BIW1 and BIW2) and the change-log control table TMCBIW (LIS) / V3 update (asynchronous) and two tables, VBDATA and ARCFC, as change log from SAP R/3 (LO) / three types of tables: line item tables ZZP (plan) and ZZA (actual), totals table ZZT, object tables ZZO and ZZC (FI-SL)
    Pls refer:
    http://help.sap.com/saphelp_nw04/helpdata/en/ee/cd143c5db89b00e10000000a114084/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sapportals.km.docs/documents/a1-8-4/how%20to%20set%20up%20connection%20co-pa%20-%20extraction
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sapportals.km.docs/documents/a1-8-4/how%20to%20set%20up%20connection%20co-pa%20-%20retraction
    Differences between LIS,LO,CO/PA and FI-SL extractors
    Please assign points if the info is useful.
    Regards
    CSM Reddy

  • Step by step procedure for copa extraction

    hi experts,
                What are the steps involved in COPA extraction? Please give me a step-by-step procedure for COPA extraction.
    Thanks,
    Rk

    Hi Rama Krishna
    COPA steps:
    R/3 server: Controlling & Profitability Analysis -> enter COPA with transaction code KEB0 -> Create (operating concern: IDEAS; click on Execute) -> enter short text, medium text, long text -> field name for partitioning: BUKRS -> deselect some value fields -> click on the Check button -> click on Summarization Level -> click on InfoCatalog -> it goes to the DataSource (customer version edit) -> set the Selection / Hide field flags -> click on the Save button -> next click on Generate -> go back -> to check the data in the COPA DataSource use transaction RSA3 -> DataSource 1_CO_PA800SATISH -> click on Extraction.
    BW system: go to the source systems -> select your source system -> double-click on the source system -> select BW DataSource -> SAP Application Components -> Controlling -> Profitability Analysis -> right-click -> Replicate DataSources -> select the COPA DataSource (right-click) -> select Replicate DataSource -> double-click on the DataSource -> application proposal -> 1_CO_PA800SATISH -> click on the Transfer button -> Yes -> propose transfer rules -> if a warning occurs, choose Continue -> Activate -> go back -> create an InfoArea -> InfoProvider -> create an InfoCube -> InfoSource 1_CO_PA800SATISH -> create update rules -> go back to InfoSources -> refresh -> find 1_CO_PA800SATISH -> right-click on the source system and create an InfoPackage -> Scheduler -> Monitor.
    Regards
    Ashwin

  • Scenario of Delta Load problem in COPA Extraction

    Please get me the solution for this particular scenario.
    Let us suppose that I created the DataSource about a year back and have been running the deltas successfully since then.
    Let us number the requests 1, 2, 3, ... and so on since 1st Jan 2009, and let us suppose each request loads about 1,000 records every day from ECC to BI.
    Now, on 10th August, a change was made in ECC (such as a change in a user exit) that caused a mismatch in the number of data records (loading only 950 records) from the day the change was made. But I identified the problem only on 17th August, with request number 160. Let us say those request numbers are 153 (10th), 154 (11th), 155 (12th), 156 (13th), 157 (14th), 158 (15th), 159 (16th) and 160 (17th). I fixed the program in ECC on 17th, and from request number 160 (17th) onwards the requests fetch the correct number of data records. Now I need to delete the data in BI for request numbers 153 to 159, since they do not reconcile the ECC data with BI, and then reload the deltas for those requests from ECC to BI. So how do I reload the deltas in this scenario, given that ECC only keeps track of the last delta, i.e. the one on 17th August (request 160)? How do I load the deltas for request numbers 153 to 159 from ECC to BI?
    One proposed solution was setting the COPA timestamp back to 10th August, but it is not advised to set the timestamp for COPA DataSources using the KEB5 transaction in a production system.
    So in this scenario, how do I load request numbers 153 to 159 from ECC to BI to fetch the correct data records for that period? Please propose a solution for this problem ASAP. You can also call me back whenever you need.
    Thanks in advance

    As stated in a previous post...
    How you choose to do this can be summed up by the question: Does the target for this data have cumulative or non-cumulative Key Figures?
    If you are doing an overwrite to the Key Figures, you could just execute a Full Repair InfoPackage that extracts the data for the days that you found inconsistencies instead of attempting to change the timestamp in KEB5 on the source ECC environment.
    However, if your target has cumulative Key Figures, you're left with very few options (a couple of them are time-consuming):
    1) Changing the timestamp in KEB5, if your plug-in is PI 2003.1 or older. Newer versions of the plug-in do not allow the changing of the timestamp as was pointed out by Ferruccio earlier.
    2) Deleting the keys in the target InfoProvider for those records that you extract in a Full Repair extract.
    3) Delete all data in BW for COPA and start from the beginning. This may be your best bet for cleaning up the issues in this scenario.

  • COPA extraction is taking too much time

    Hi,
    We are extracting data from R/3 using a COPA DataSource with delta update. It is taking 3-4 hours to extract around 3,000 records.
    In the Job log:
    1 LUWs confirmed and 1 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
    Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 2,535 records
    Between these two statements, 3 hours elapse.
    Can anyone help solve this?
    Thanks in Advance
    Chetan.

    Try a smaller set of data, like one day if you can. Also look at what else is happening in the system (SM50/SM51/SM66). If you are running in the middle of the day while someone else is running a big job, that could severely slow down a small system.
    Also, the COPA datasets are usually very wide and contain a lot of data. You could potentially be going out and gathering a few GB of data, which will take a while.
    Also, look at transaction KEB2 and make sure there are no errors (you may have to scroll down very far to find one). Also, simulate it with KEB5 and see how fast it goes through that transaction.
    The key is to let it do its work. If you see nothing happening and cancel the job or start a new one, you have corrupted the queue and need to let everything finish running, then reinitialize. I did this once: restarted, waited 10 minutes, nothing, cancelled the job, restarted again, and so on.
    It wasn't until I walked away for a day and came back on a Saturday to reinitialize that everything went smoothly and quickly.
    Brian

  • All About COPA extraction and business view

    Hi All,
    I have gone through lots of questions posted in this forum about the functional side of COPA and also about the extraction side. Is there any document (preferably a PDF) where I can find all the info about the COPA functional aspects in one place?
    Thanks in advance.
    Regards
    Srini

    Hi Srinivas,
    Give us your mail ID and I can send you one doc.
    Re: COPA R/3 vs BW(aCCOUNT BASED-PA)
    One of the differences between account-based CO-PA and costing-based CO-PA is that account-based CO-PA has no delta extraction. There is also one other rule: you can only select one fiscal period. The set-up is otherwise similar to costing-based CO-PA.
    The big difference between account-based CO-PA and costing-based CO-PA is of course how the data is stored in the CO modules.
    Account-based CO-PA is part of the CO concept. This means that when you post to a cost element, the data is posted to a CO module (CCA, OPA, PA, PS, etc.). The way the data is stored is also different from costing-based PA.
    The CO tables COEP and COBK are used for the individual items, and COSS and COSP are used for the totals.
    The objects that the postings relate to are stored in the CE4... table. So if you want to analyse the data, there is one key figure and the cost element is a characteristic (account model).
    Costing-based CO-PA data is statistical data. This means that the update of CO-PA is not always equal to what is stored in the CO modules or in Finance. The cost element is also not always updated, and there are more key figures used to store information about the type of costs or revenues. Tables CE1, CE2, CE3 and CE4 are used.
    Extraction for account-based CO-PA is based only on the totals records (only full loads are possible). For costing-based CO-PA it is possible to work with init and delta packages and also to select data from the individual posting tables (CE1, CE2).
    The way you generate the extractor is similar.
    Hopefully this gives you a bit more background on the CO-PA module.
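    To make the storage difference a bit more tangible, here is a minimal ABAP sketch that reads a few account-based actual line items from the generic CO table COEP. The controlling area '1000' and the selected fields are illustrative assumptions, not something from the original reply.

    REPORT zcoep_peek.

    " Account-based CO-PA postings sit in the generic CO tables (COEP/COBK
    " for line items, COSS/COSP for totals); the cost element KSTAR acts as
    " a characteristic and each posting is tied to an object number.
    TYPES: BEGIN OF ty_coep,
             kokrs  TYPE coep-kokrs,  " controlling area
             belnr  TYPE coep-belnr,  " CO document number
             buzei  TYPE coep-buzei,  " line item
             perio  TYPE coep-perio,  " period
             objnr  TYPE coep-objnr,  " object number
             kstar  TYPE coep-kstar,  " cost element
             wkgbtr TYPE coep-wkgbtr, " value in controlling area currency
           END OF ty_coep.

    DATA lt_coep TYPE STANDARD TABLE OF ty_coep.

    SELECT kokrs belnr buzei perio objnr kstar wkgbtr
      FROM coep
      INTO TABLE lt_coep
      UP TO 100 ROWS
      WHERE kokrs = '1000'.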
    Assign points if it helps..
    Regards,
    ARK

  • COPA Extraction-Characteristics value not appearing in RSA3

    Hi All,
    We have developed a costing-based COPA DataSource in KEB0 based on our operating concern, and included all the standard fields (characteristics from the segment level, characteristics from the segment table, characteristics from the line items, value fields) in the DataSource.
    When we extract data in RSA3, all the key figure value fields appear correctly, but I cannot see any characteristic values; all the characteristic values appear blank.
    I have checked that everything is active, and all the tables (CE1****, CE2 and CE4) have data with characteristics and value fields.
    My questions are:
    1) Why are the COPA characteristic values not coming through in RSA3?
    2) What do I need to do to get this to work?
    Let me know if you need any further details.
    Thanks,
    L.Prathap

    Hi,
    I have worked on CO-PA but never hit such an issue.
    Do you have any characteristics selected from the line-items section other than REC_WAERS?
    Also check whether you have any summarization levels (transaction KEDV). If no other characteristic from the line items is selected, the data could be read from the summarization levels.
    There are two tables for a summarization level: the key table and the totals table. Check which characteristics are selected in the key table.
    http://help.sap.com/saphelp_470/helpdata/en/7a/4c40794a0111d1894c0000e829fbbd/content.htm
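    Not part of the original reply, but as a quick sanity check you could also look at the segment table directly and confirm that characteristic values are really stored there. A minimal sketch, assuming a hypothetical operating concern 'IDEA' and example characteristics (yours will differ):

    REPORT zce4_char_check.

    " Peek at a few rows of the segment table: each profitability segment
    " number should carry the characteristic values that RSA3 shows as blank.
    TYPES: BEGIN OF ty_ce4,
             paobjnr TYPE n LENGTH 10, " profitability segment number
             kndnr   TYPE c LENGTH 10, " customer
             artnr   TYPE c LENGTH 18, " product
             vkorg   TYPE c LENGTH 4,  " sales organization
           END OF ty_ce4.

    DATA lt_segments TYPE STANDARD TABLE OF ty_ce4.

    SELECT paobjnr kndnr artnr vkorg
      FROM ce4idea
      INTO TABLE lt_segments
      UP TO 100 ROWS.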
    Edited by: Murali Krishna K on Mar 8, 2011 3:32 PM
