Data Loading Problem into PP Cube

Hi Pradip & experts,
When I am loading data into cube 0PP_C11 (2LIS_04_P_COMP), it throws the following error, even though I installed everything properly:
Diagnosis
The application program for the extraction of the data was called using update mode C. However, the InfoSource does not support this.
System Response
The data extraction is terminated.
Procedure
Check for relevant OSS Notes, or send a problem message of your own.
Procedure for System Administration
Please help; it is a little bit urgent.
Cheers
Purushottam.

Hi Purushottam,
Check these links:
http://help.sap.com/saphelp_nw04/helpdata/en/42/f5030275b68546a1858764934e66ed/content.htm
http://help.sap.com/saphelp_nw04/helpdata/en/ae/e5e138d7d15678e10000000a11405a/content.htm
http://help.sap.com/saphelp_nw04/helpdata/en/2a/afce3879107218e10000000a11405a/content.htm
Bye
Dinesh

Similar Messages

  • Navigational attribute data is not displayed at cube level and in reporting

    Hello all,
    I am facing a problem: I created a navigational attribute and selected it at cube level as well, but the data is not displayed at cube level.
    What could be the problem? Please help me out.
    regards
    balaji

    Hi Dinesh,
    By base characteristic, do you mean loading data after creating the navigational attribute? Is it this way? (Or else, can you tell me about this base characteristic?)
    Yes, after making that particular attribute a navigational attribute, I loaded the data into the master data tables.
    How do I run an attribute change run? Can you give me the steps? (See the note below.)
    regards
    balaji
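    A note on the change run steps: in RSA1 choose Tools -> Apply Hierarchy/Attribute Changes..., select the changed characteristic in the InfoObject list, and execute; the run can be monitored in transaction RSATTR. It can also be scheduled as the standard change-run program RSDDS_AGGREGATES_MAINTAIN. A minimal sketch follows; that submitting it without parameters picks up all pending entries from the change-run list is an assumption, since normally the characteristics are supplied on its selection screen:
      " Hedged sketch: trigger the attribute change run in the background.
      " RSDDS_AGGREGATES_MAINTAIN is the standard change-run program, also
      " used by the corresponding process-chain process type. Assumption:
      " with no parameters it processes the pending change-run list.
      SUBMIT rsdds_aggregates_maintain AND RETURN.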

  • Nav.attr data is not coming in cube

    Hi guys,
            Nav attr data is not coming into the cube even though there is data in the master data tables.

    Hi Ajay,
    In your case, for the nav attr data to appear, first load the master data in which your nav attr resides and then apply the hierarchy/attribute change for that master data.
    Hope this helps.
    Bye
    Dinesh

  • Partial data loading from DSO to cube

    Dear All,
    I am loading data from a DSO to the cube through a DTP. The problem is that some records (around 50 crore) are added to the cube through data packet 1, while no records are added through data packet 2.
    It is a full load from DSO to cube.
    I have tried deleting the request and executing the DTP again, but the same number of records is added through data packet 1 and nothing after that; the request remains in yellow status.
    Please suggest .

    Nidhuk,
    A data load transfers package by package. It sounds like the load got stuck in the second package. I suggest you check the package size and try increasing it to see if anything changes; 50 records per package is rather low, and your load should not spread out into too many packages.
    Regards. Jen

  • How to load data from an ODS to a cube request-ID-by-request-ID?

    The problem is that some requests were deleted from the cube, and the delta control between the ODS and the cube was lost. The "data mart status of request" flag of all the ODS requests has been blanked.
    Now it is necessary to load some requests from the ODS into the cube.
    Notes:
    - it is not possible to make a complete load with a selection on the data to be loaded;
    - the PSA is not being used;
    - considering the data volume it is impracticable to reload the cube completely.
    Thanks in advance,
    Wesley.

    Dear R B,
    Considering the following:
    -> the delta control was lost;
    -> the data is already active in the ODS;
    -> part of the ODS data is already in the cube.
    The indicated procedure only guarantees loading the data that is in the ODS and not yet in the cube.
    Tks,
    Wesley.

  • Data not loading/updating into cube 0IC_C03

    Dear Friends,
    I extracted data for three DataSources, i.e. 2LIS_03_BX, 2LIS_03_BF and 2LIS_03_UM. I got the data into the BW PSA without any errors, but it is not updating into cube 0IC_C03.
    I have made the settings in R/3 in transactions MCNB and SBIW, and I activated the NDI flag in transaction FIBF.
    In SBIW I made the following settings under Inventory Controlling:
    1. Determine industry sector: I selected Standard (Core).
    2. Transaction key maintenance for SAP BW: I selected application MM and all operations, i.e. 0, 1, 2, 3, 4, 5, 6, and so on, and saved.
    3. Then stock initialization.
    I also deleted the setup table data for Inventory Controlling and, after assigning the keys, filled the setup tables again.
    I am getting data in R/3 RSA3 and in the BW PSA without errors for all three DataSources, but not in the cube.
    In the update rules of DataSource 2LIS_03_UM a few key figures are without a green light (i.e. marked 'X'), although the update rules are active. Do I need to write any routines at update rule level? Please guide me on how to get the data into the cube.
    Thanks in advance,
    rafeeq ahmed

    Hi rafeeq,
    Are you using Business Content update rules or custom ones?
    Check each rule separately, and run a simulation of the data load if you believe the update rules are correct.
    Also check whether your validity table is maintained.
    Hope it helps.
    kris

  • Problem loading 0CALDAY InfoObject date format through flat file into cube

    Hi All,
    I am facing a problem loading 0CALDAY InfoObject data through a flat file (format: test.csv) into an InfoCube.
    Suppose we have two flat files (test1.csv and test2.csv):
    1. The first file (test1.csv) has the proper date format (i.e. YYYYMMDD); it loads successfully.
    2. The second file (test2.csv) has an improper date format (i.e. DDMMYYYY); the load fails because of this format.
    Is it possible to write a routine (start routine) in the InfoPackage (External Data tab) such that if the flat file (test1.csv) has the proper date format, the 0CALDAY data is loaded without any conversion, and if the file is test2.csv, the date field is converted from DDMMYYYY to YYYYMMDD before the data is loaded into the cube?
    With regards,
    Hari.

    Hello Dinesh, Anil,
    There is no distinguishing field between the two flat file loads.
    We are using only one InfoObject (i.e. 0CALDAY) for both loads.
    We are using two external source systems: one generates its file with the date format YYYYMMDD, the other with the date format DDMMYYYY; the two file names are unique.
    My requirement is to compare the two file names in the start routine of the InfoPackage (External Data tab):
    if (test1.csv)
    load the 0CALDAY data as it is (since it is already in the proper format YYYYMMDD)
    else if (test2.csv)
    convert from DDMMYYYY to YYYYMMDD and load the data into the 0CALDAY InfoObject in the cube.
    Is it possible to compare the two file names in a start routine? (See the sketch below.)
    with regards,
    Hari
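    For illustration, a hedged sketch of what the conversion part of such a start routine might look like (BW 3.x transfer-rule style; DATAPAK, the CALDAY field name, and a variable g_v_filename holding the current file name are assumptions to be adapted to your transfer structure and InfoPackage):
      DATA: lv_in(8)  TYPE c,               " date as delivered in the file
            lv_out(8) TYPE c.               " date in internal format YYYYMMDD
      FIELD-SYMBOLS: <ls_data> LIKE LINE OF datapak.
      IF g_v_filename CS 'test2'.           " second source system: DDMMYYYY
        LOOP AT datapak ASSIGNING <ls_data>.
          lv_in = <ls_data>-calday.
          lv_out(4)   = lv_in+4(4).         " year
          lv_out+4(2) = lv_in+2(2).         " month
          lv_out+6(2) = lv_in(2).           " day
          <ls_data>-calday = lv_out.
        ENDLOOP.
      ENDIF.
      " test1.csv already delivers YYYYMMDD and needs no conversion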

  • Movement type data not captured from BW cube data load

    Dear Experts,
    We are supporting BPC 10.0 Netweaver consolidation activities.
    As a part of the monthly consolidation, we are loading the ECC data available in the BW cubes into BPC.
    0FIGL_C10 cube: from this standard cube we capture the movement types (i.e. transaction type details) for the balance sheet items, so opening minus closing should equal the movements.
    ZFIGL_CA10: from this cube the YTD closing balances of the balance sheet items are captured.
    Opening balances are copied from the last period of the previous year through balance carry-forward.
    For the balance sheet items, the F99 closing balances are captured correctly.
    But the movement types are not captured correctly, and the difference (opening + movements) - closing is sitting in the controls.
    I have checked the ECC data for these accounts in FS10N and in 0FIGL_C10. The monthly data all flows properly, but when it comes into BPC, the movement type alone has the problem.
    The transformation and conversion files worked fine up to June 2014, but from July 2014 the figures are incorrect.
    No modifications have been made to either file.
    Please find the attached screenshot and kindly help me with this.
    Regards,
    Shilpa

    Hi Shilpa,
    I had the same problem in my last consolidation implementation, and the solution in that case was to implement the complete data flow for 0FI_GL_014.
    Regards,
    Mario

  • Report with data that is present in cube and not in ODS from a MultiProvider

    Hi Friends,
    I have a MultiProvider with one cube and one ODS, and a requirement to create an exception report. This report should contain the records that are present in the cube but not in the ODS. If a record is present in both InfoProviders, the report should not display it.
    I hope I am clear on my question.
    Thanks and regards,
    Balaraj

    Thanks Timur,
    But this is going to give me all the data present in the selected cube. I want the report to display data only if it is present in the cube but not in the ODS.
    ODS data
    Material............Component...........Amount
    10001.................30001................100
    10001.................30002................200
    Cube Data
    Material....................Amount
    10001.......................100
    20002.......................200
    My report should display
    Material................Amount
    20002...................200
    It should not display 10001 as it is present in ODS.
    I hope I am clear this time.
    Thanks and regards,
    Balaraj
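    For reference: in BEx this is usually solved on the MultiProvider with restricted key figures on 0INFOPROV (one per InfoProvider) and a condition keeping only the rows whose ODS key figure is zero. Purely to illustrate the set logic, a minimal ABAP sketch, assuming the cube and ODS records have already been read into internal tables (all names are placeholders):
      TYPES: BEGIN OF ty_rec,
               material(18) TYPE c,
               amount(8)    TYPE p DECIMALS 2,
             END OF ty_rec.
      DATA: lt_cube TYPE STANDARD TABLE OF ty_rec,  " records from the cube
            lt_ods  TYPE STANDARD TABLE OF ty_rec,  " records from the ODS
            lt_out  TYPE STANDARD TABLE OF ty_rec,  " exception report lines
            ls_cube TYPE ty_rec.
      LOOP AT lt_cube INTO ls_cube.
        READ TABLE lt_ods TRANSPORTING NO FIELDS
             WITH KEY material = ls_cube-material.
        IF sy-subrc <> 0.                   " material not found in the ODS
          APPEND ls_cube TO lt_out.         " e.g. 20002 is kept, 10001 is not
        ENDIF.
      ENDLOOP.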

  • How is data stored in an InfoCube? What happens in the back end?

    Hi Experts,
    How is data stored in an InfoCube and a DSO? What happens in the back end?
    I mean, a cube contains a fact table and dimension tables; how is the data stored, and what happens in the back end?
    Regards,
    Swetha.

    Hi,
    Please check :
    How is data stored in DSO and Infocube
    InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.
    An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube, as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.
    An InfoCube fact table only contains key figures, in contrast to a DataStore object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.
    The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.
    Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume. This is beneficial in terms of performance. This InfoCube structure is optimized for data analysis.
    The fact table and dimension tables are both relational database tables.
    Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.
    http://help.sap.com/saphelp_nw04s/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/frameset.htm
    Check the threads below:
    Re: about Star Schema
    Differences between Star Schema and extended Star Schema
    What is the difference between Fact tables F & E?
    Invalid characters errors
    -Vikram
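    To make the fact/dimension linkage concrete: for a hypothetical InfoCube ZSALES, the fact table /BIC/FZSALES holds the key figures plus one dimension ID column per dimension (KEY_ZSALES1, ...), the dimension table /BIC/DZSALES1 maps its DIMID to the SIDs of the characteristics it groups (e.g. SID_0MATERIAL), and the SID table /BI0/SMATERIAL maps the SID back to the material number. A hedged read-only sketch of how the tables hang together (the names follow the usual BW naming conventions but are placeholders; check SE11 for a real cube):
      DATA: BEGIN OF ls_row,
              material(18) TYPE c,             " 0MATERIAL value
              amount(8)    TYPE p DECIMALS 2,  " key figure from the fact table
            END OF ls_row,
            lt_rows LIKE STANDARD TABLE OF ls_row.
      SELECT s~material f~/bic/zamount
        FROM /bic/fzsales AS f
        INNER JOIN /bic/dzsales1 AS d
          ON f~key_zsales1 = d~dimid           " fact row -> dimension row
        INNER JOIN /bi0/smaterial AS s
          ON d~sid_0material = s~sid           " SID -> characteristic value
        INTO TABLE lt_rows
        UP TO 100 ROWS.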

  • How the data is fetched from the cube for reporting - with and without BIA

    hi all,
    I need to understand the below scenario:(as to how the data is fetched from the cube for reporting)
    I have a query, on a multiprovider connected to cubes say A and B. A is on BIA index, B is not. There are no aggregates created on both the cubes.
    CASE 1: I have taken RSRT statistics with BIA on; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
    Cube B             | E          | 42.002653  | 1,669,126         | 6
    Cube B             | F          | 98.696442  | 2,426,006         | 6
    CASE 2: I have taken the RSRT statistics with the BIA index disabled; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube B             | E          | 46.620825  | 1,669,126         | 6
    Cube B             | F          | 106.148337 | 2,426,030         | 6
    Cube A             | E          | 61.939073  | 3,794,113         | 3,499
    Cube A             | F          | 90.721171  | 4,293,420         | 5,584
    Now my question: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same and the query results match, and there is no change in the number of records selected for cube A: it is 8,087,502 in both cases.
    Can someone please clarify this difference in the records being transported?

    Hi,
    Yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274).
    To get more detail about the selection, you can activate the execution plan for SQL/BWA queries in the data manager, and you can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
    Regards,
    Jens

  • How the data is fetched from the cube for reporting

    Hi all,
    (Same question as the previous message: RSRT statistics for a query on a MultiProvider over cubes A and B, with the BIA index on and off; the records selected for cube A are identical in both cases, but far more records are transported without BIA.)

    Hi Jay,
    Thanks for sharing your analysis.
    The only reason I can think of logically is that BWA holds the information from both the E and F tables in one place, so after selecting the records it can aggregate them and transport the aggregated records to the OLAP processor.
    In the second case, since the E and F tables are separate, the aggregation might happen in the OLAP processor, and hence you see a larger number of records transported.
    The experts in the BWA forum might be able to answer this better if you post the question there.
    Thanks,
    Krishnan

  • Is it possible to use the same DataSource for two InfoCubes?

    Hi,
    My problem is that in BW we cannot get the value of material at storage location level; in R/3, too, the value is maintained at plant level.
    We searched and found a how-to document for the summarized display of stock values at storage location level.
    The problem is that we went live last December and are using 0AFMM_C02, which contains around 18,126,000 records, while according to the note we have to use 0IC_C03.
    Both cubes use the same DataSources for their data. So how do we get the data into 0IC_C03, and how do we delete the data of the existing InfoCube? Also, is it possible to delete data selectively from the InfoCube?
    Pls. help.
    Regards,
    viren.

    Hi,
    You can't create update rules from the PSA. You can create them from an InfoSource, from an ODS, from cube to cube, or from ODS to ODS.
    In your scenario, you can create update rules from the ODS to the new cube and transfer the data from there. Alternatively, from the InfoSource, create rules to the new data target, upload the full data, and then set up the delta.
    The third option is to create update rules from the existing cube to the new cube and load all the data once. Then you can deactivate those update rules, as they were needed only for the one-time transfer.
    Cheers,
    Kedar

  • Error while loading data from 8<DSONAME> to cube

    Hi All,
    While loading data from 8<DSONAME> to the cube, the data loads successfully into the cube, but the request status stays yellow. Why is that, and how can I make the request successful (green)?
    Thanks and Regards,
    santosh.

    Hi All,
    In ST22 I am getting a short dump; the error message is MESSAGE_TYPE_X.
    What happened:
    The current application program detected a situation which really should not occur. Therefore, a termination with a short dump was triggered on purpose by the keyword MESSAGE (type X).
    Error analysis:
    Short text of error message: Generic process chain
    Technical information about the message:
    Message class....... "RSMPC"
    Number.............. 000
    Variable 1.......... " "
    Variable 2.......... " "
    Variable 3.......... " "
    Variable 4.......... " "
    "MESSAGE_TYPE_X" " "
    "CL_RSSM_LOADING===============CP" or "CL_RSSM_LOADING===============CM005"
    "IF_RSPC_EXECUTE~EXECUTE"
    Is this perhaps a patch problem?
    Please let me know how to resolve this issue.
    thanks and regards,
    santosh.

  • Is it possible to delete data selectively from Business content cubes

    Dear Experts,
    Please help me find out whether it is possible to delete data selectively from Business Content cubes.
    When I try to delete selectively from Business Content cubes, the background job gets cancelled, with the ST22 log stating:
    A RAISE statement in the program "SAPLRSDRD" raised the exception condition "X_MESSAGE".
    Since the exception was not intercepted by a superior program, processing was terminated.
    I tried a few more Technical Content cubes, but the same thing happens.
    Please let me know how to selectively delete data from Business Content cubes, if it is possible.
    Thanks in advance for your favorable assistance.
    Regards,
    Ramesh-Kumar.

    Hi Ramesh,
    Follow these steps for selective deletion:
    1. Transaction code: use transaction DELETE_FACTS.
    2. Generate the selective deletion program: a report program of the given name is generated (here ZDEL_EPBG).
    3. Selection screen: open the generated deletion program ZDEL_EPBG in transaction SE38 to view or execute it. Executing it takes you to a selection screen.
    As the deletion has to be selective on calendar week, we need the screen field behind the Calendar Week field: click on the Calendar Week field, press F1, and choose the Technical Information button to find the screen field name.
    ABAP program to carry out the calendar week calculation:
    Problem scenario: as stated earlier, the requirement is to delete data from the cube based on the calendar week. So we need code that takes a number of weeks as input and determines the corresponding calendar week; this calendar week is then passed to the deletion program to carry out the data deletion from the InfoCube.
    Transaction code: use SE38 to create the program.
    Logic: suppose we need to delete data older than 100 weeks.
    a. Get the number of weeks and the system date into variables and calculate the total number of days:
    lv_week = 100.                    " number of weeks
    lv_dte = sy-datum.                " system date
    v_totaldays = lv_week * 7.        " total days
    b. Get the corresponding calendar day from the total days; this is obtained by simply subtracting the total number of days from the system date:
    lv_calday = lv_dte - v_totaldays. " corresponding calendar day
    c. To get the calendar week corresponding to the calculated calendar day, call the function module DATE_TO_PERIOD_CONVERT. It takes a calendar day and a fiscal year variant as input and returns the corresponding fiscal period:
      " Get the sales week time elements
      call function 'DATE_TO_PERIOD_CONVERT'
        exporting
          i_date  = lv_calday
          i_periv = lc_sales
        importing
          e_buper = lv_period
          e_gjahr = lv_year
        exceptions
          input_false    = 1
          t009_notfound  = 2
          t009b_notfound = 3.
      if sy-subrc = 0.
        ls_time-calweek(4)   = lv_year.
        ls_time-calweek+4(2) = lv_period.
      endif.
    v_week = ls_time-calweek.
    Note: we pass the fiscal year variant, which can be obtained from table T009B; here, for example, the fiscal year variant is lc_sales = 'Z2'. LS_TIME can be any structure with suitable time fields.
    d. We have now obtained the required calendar week in the variable v_week. This is the week up to which we keep the data; anything older than this week will be deleted by the deletion program.
    Submitting the data deletion program for ZEPBGC01 with the key field:
    SUBMIT ZDEL_EPBG WITH C039 LT v_week.
    Here the calendar week value is passed to the deletion program ZDEL_EPBG through the calendar week screen field (C039).
    Hope this helps.
    Thanks,
    Jitendra
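    Pulling the fragments above together, a self-contained sketch of the calculation plus submit could look like this (ZDEL_EPBG, the fiscal year variant 'Z2' and screen field C039 come from the example above; the declarations are assumptions):
      REPORT zdel_epbg_driver.
      DATA: lv_week      TYPE i VALUE 100,     " keep only the last 100 weeks
            lv_dte       TYPE sy-datum,
            v_totaldays  TYPE i,
            lv_calday    TYPE sy-datum,
            lv_period(3) TYPE n,               " fiscal period (POPER)
            lv_year(4)   TYPE n,
            v_week(6)    TYPE c.               " calendar week as YYYYWW
      CONSTANTS: lc_sales(2) TYPE c VALUE 'Z2'. " fiscal year variant
      lv_dte      = sy-datum.
      v_totaldays = lv_week * 7.
      lv_calday   = lv_dte - v_totaldays.      " cutoff calendar day
      CALL FUNCTION 'DATE_TO_PERIOD_CONVERT'
        EXPORTING
          i_date         = lv_calday
          i_periv        = lc_sales
        IMPORTING
          e_buper        = lv_period
          e_gjahr        = lv_year
        EXCEPTIONS
          input_false    = 1
          t009_notfound  = 2
          t009b_notfound = 3.
      IF sy-subrc = 0.
        v_week(4)   = lv_year.
        v_week+4(2) = lv_period+1(2).          " POPER has 3 digits; keep the last two
        SUBMIT zdel_epbg WITH c039 LT v_week AND RETURN.
      ENDIF.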
