0IC_C01: No data available in the InfoCube maintenance

Hi Experts,
Recently I implemented Inventory Management. After loading inventory stock, material movements and revaluations, I can't view any data in the InfoCube contents.
Where am I going wrong? Please share your ideas.
Siri

Hi Siri,
   You mean to say that the data load was successful, but you couldn't find records in the InfoCube. What processing type did you use on the Processing tab page?
Try loading the data from the PSA manually if nothing else works (temporary solution only).
If there are any errors during the data load, please make the appropriate corrections.
Regards,
Harold.

Similar Messages

  • LMS 4.1 Prime PSIRT/EOX: no data available in the report

    Hi:
    I'm running LMS 4.1 Prime for Windows.
    I tried an immediate report with PSIRT/EOX Report option: Cisco.com
    The job succeeded, but without any data in the report
    No data available in the report. The problem could be any one of the following:
    1. No PSIRT data available in the LMS database for the selected device(s).
    2. PSIRT/EOX system job might not have run, or might have failed.
    3. You might have entered the wrong cisco.com credentials.
    Then I tried logging in to Cisco.com (with the same user/password as configured in LMS) and downloaded the PSIRT_EOX_OFFLINE.zip file.
    I unzipped it and put it in the folder C:/PROGRA~1/CSCOpx/files/rme/jobs/inventory/reports/EOX_PSIRT/local_xml (please read \ instead of /, because it is wrong in the LMS display).
    I set SIRT/EOX Report option: local
    And again, the job succeeded, but without any data in the report (same message as before)
    What do you think is happening? How could I debug the process?
    Thanks a lot
    Julio
    PS: I'm selecting old devices to be sure that they already had an EOL/EOS.

  • Splitting data based on the data available in the columns

    I have one table with columns id, amnt1 and amnt2, with id as the primary key:
    WITH table_1 AS
    (
    select '1' id, '200' amnt1, '100' amnt2 from dual union all
    select '2' id, '200' amnt1, '' amnt2 from dual union all
    select '3' id, '' amnt1, '100' amnt2 from dual union all
    select '4' id, '50' amnt1, '' amnt2 from dual union all
    select '5' id, '150' amnt1, '270' amnt2 from dual
    )
    select * from table_1
    Depending upon the data in amnt1 and amnt2, I need to split the record.
    In the first case (id = 1) I need to check whether an amount is available in both columns. If both amounts are available, then split that record into two, as below:
    WITH table_1 AS
    (
    select '1' id, '200' amnt1, '' amnt2 from dual union all
    select '1' id, '' amnt1, '100' amnt2 from dual
    )
    select * from table_1
    In the second case (id = 2) there is only one row, as only one amount contains data:
    WITH table_1 AS
    (
    select '2' id, '200' amnt1, '' amnt2 from dual
    )
    select * from table_1
    In the third case one row, in the fourth case one row, and in the fifth case again two rows.
    Basically, if both amnt1 and amnt2 contain data, the record has to be split into two. So the final result will look like:
    WITH table_1 AS
    (
    select '1' id, '200' amnt1, '' amnt2 from dual union all
    select '1' id, '' amnt1, '100' amnt2 from dual union all
    select '2' id, '200' amnt1, '' amnt2 from dual union all
    select '3' id, '' amnt1, '100' amnt2 from dual union all
    select '4' id, '50' amnt1, '' amnt2 from dual union all
    select '5' id, '150' amnt1, '' amnt2 from dual union all
    select '5' id, '' amnt1, '270' amnt2 from dual
    )
    select * from table_1
    Please help.
    WITH table_1 AS
    (
    select '200' amnt1, '2010-02-02' date1, '100' amnt2, '2010-03-08' date2 from dual union all
    select '500' amnt1, '2010-02-15' date1, '300' amnt2, '2010-02-08' date2 from dual union all
    select '500' amnt1, '2010-02-18' date1, '300' amnt2, '2010-04-09' date2 from dual
    )
    select * from table_1

    user10285699 wrote:
    Not correct. This is not what I need. Sorry.
    You said the final result should be...
    SQL> WITH table_1 AS
      2  (
      3  select '1' id, '200' amnt1,'' amnt2 from dual union all
      4  select '1' id, '' amnt1,'100' amnt2 from dual union all
      5  select '2' id, '200' amnt1,'' amnt2 from dual union all
      6  select '3' id, '' amnt1,'100' amnt2 from dual union all
      7  select '4' id, '50' amnt1,'' amnt2 from dual union all
      8  select '5' id, '150' amnt1,'' amnt2 from dual union all
      9  select '5' id, '' amnt1,'270' amnt2 from dual
    10  )
    11  select * from table_1
    12  /
    I AMN AMN
    1 200
    1     100
    2 200
    3     100
    4 50
    5 150
    5     270
    Saad Nayef's solution gives...
    SQL> WITH table_1 AS (SELECT '1' id, '200' amnt1, '100' amnt2 FROM DUAL
      2                   UNION ALL
      3                   SELECT '2' id, '200' amnt1, '' amnt2 FROM DUAL
      4                   UNION ALL
      5                   SELECT '3' id, '' amnt1, '100' amnt2 FROM DUAL
      6                   UNION ALL
      7                   SELECT '4' id, '50' amnt1, '' amnt2 FROM DUAL
      8                   UNION ALL
      9                   SELECT '5' id, '150' amnt1, '270' amnt2 FROM DUAL)
    10  SELECT id, amnt1, NULL amnt2 FROM table_1
    11  UNION ALL
    12  SELECT id, NULL, amnt2 FROM table_1
    13  MINUS
    14  SELECT id, NULL, NULL FROM table_1
    15  /
    I AMN AMN
    1 200
    1     100
    2 200
    3     100
    4 50
    5 150
    5     270
    And if I did it my way I would get...
    SQL> ed
    Wrote file afiedt.buf
      1  WITH table_1 AS
      2  (
      3  select '1' id, '200' amnt1,'100' amnt2 from dual union all
      4  select '2' id, '200' amnt1,'' amnt2 from dual union all
      5  select '3' id, '' amnt1,'100' amnt2 from dual union all
      6  select '4' id, '50' amnt1,'' amnt2 from dual union all
      7  select '5' id, '150' amnt1,'270' amnt2 from dual
      8  )
      9  select id, amnt1, null amnt2 from table_1 where amnt1 is not null
    10  union all
    11  select id, null, amnt2 from table_1 where amnt2 is not null
    12* order by 1, 2, 3
    SQL> /
    I AMN AMN
    1 200
    1     100
    2 200
    3     100
    4 50
    5 150
    5     270
    All of which are identical results.
    If there's something wrong in those, you'd better explain what you want, because you've been given a correct answer (hence why I didn't answer this question myself earlier as I could see it was correct).

  • Checking the Data Loaded in the InfoCube

    Hi all,
    With the 3.x version, and also with RSA1OLD, it is possible to see the InfoSources that can be loaded for a data target, and their status, via the InfoSources overview.
    Now, with the 2004s version, there are no InfoSources. It was really helpful to see whether all master data was loaded.
    Is there any new functionality? Is RSA1OLD still necessary?
    Thanks
    Best regards

    Hi Andreas and Michael,
    Thanks for replies.
    Andreas, not exactly; it's more about DTPs than the PSA.
    I know how to view all the information one by one. But an InfoCube can have x master data objects (texts, attributes and hierarchies).
    How can I see whether all data is loaded (date and status)?
    As Michael said, we can use the process chains to check whether all init, delta or full loads were successful. But we would need to check this in more than 5 process chains.
    With the InfoSources overview it was possible to check with just one click whether all data was loaded.
    You're right Michael, it's clearly more about ergonomics.
    Best regards

  • Wrong data posting in the infocube

    Hi Gurus,
    When we write to our InfoCube with BPS (regardless of interface type), data is stored on the wrong characteristic.
    For example: we write on characteristic "1" and the data is stored on another characteristic. As a result, no data is changed and we cannot do the budget.
    Thank you in advance for your help,

    Hi;
    I am not sure how to correct your wrongly assigned data, but if you don't find any good answers, maybe you can try deleting the wrongly assigned data from that characteristic and posting again to the correct one...
    BK

  • Solution manager 7.1  Wily (no data available in the BI)

    Hello, I am using Solution Manager 7.1 SP08 on HP-UX with Wily 9.
    There are times when Wily shows 200% capacity and then later shows 51% capacity.
    Trying to figure out what is going on, I click on the performance tab and get a 'no data available in BI' message.
    Any ideas why no data is showing in BI?
    We are getting Wily data.

    Hi Kenneth,
    How many agents have been connected to Wily EM ?
    For Wily EM capacity problem, you may check below SAP note
    1871677 - What to do when Wily Introscope Enterprise Manager Capacity is significantly above 100%
    Hope this helps.
    Thanks & Regards,
    Nisha

  • I have my iPad 2 synced to iTunes. My laptop crashed and no data is available on the HDD. I bought a new HDD and installed iTunes. Is there any way to restore to iTunes from my iPad?

    I have my iPad 2 synced to iTunes on my corporate laptop. The laptop crashed and I needed to send it away. IT re-imaged the laptop. I loaded iTunes, and my question to anyone is: is there a way to restore an iTunes library back to a PC from an iPad?

    It has always been very basic to always maintain a backup copy of your computer for just such an occasion.
    Use your backup copy to put everything back.

  • DTP to load master data, gives the message 'No more data available'.

    hi,
    When I execute the DTP to load master data from the DSO, it executes and gives the message 'No more data available'. The request is green but no data is transferred from the DSO to the master data.
    I want the DSO data to get into the master data. How do I do it?

    Hi Hardik,
    Since the request is green, there is no error in extraction. But, as they've rightly pointed out, the Update mode of the DTP must be Delta and there must not be any new Master data that you are looking for.
    Compare the data and check if the Delta DTP should bring anything at all i.e. new records. If there are no new records, then the Delta DTP will not bring any more data. Else, change mode to Full and check again.
    Reg,
    Dhaval
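    As a conceptual sketch (not the actual DTP mechanics) of why a delta request can finish green with zero records, assuming illustrative key names:

```python
# A delta DTP transfers only records that arrived after the last load.
# If the source DSO contains no keys beyond what was already loaded,
# the request finishes green with zero records transferred -- exactly
# the "No more data available" case described above.
source_keys = {"MAT01", "MAT02", "MAT03"}
already_loaded = {"MAT01", "MAT02", "MAT03"}

delta = source_keys - already_loaded
print(len(delta))  # 0
```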

  • Data availability report in SAP BI

    Hi All,
    I need to create a report on "Data Availability" using the standard InfoCubes from SAP BI statistics. For this I would like to know:
    The InfoCube (from BI statistics) which will show how much data got into the cube.
    The InfoCube (from BI statistics) which will show the date and the time duration taken for the data load.

    Hi Akash,
    The below standard queries will provide you the required details,
    InfoProvider | BEx Query Technical Name  | BEx Query Description
    0TCT_MC11    | 0TCT_MC11_Q0140           | InfoCube Status
    0TCT_MC11    | 0TCT_MC11_Q0141           | InfoCube Correctness
    0TCT_MC11    | 0TCT_MC11_Q0240           | InfoCube Status: Analysis
    0TCT_MC22    | ZCS_CUBE_DATADETAIL_STAT  | ZCS_CUBE_DATADETAIL_STAT
    0TCT_MC21    | ZTCT_MC21_Q_FB_01         | Dashboard - process chain historical loading time
    0TCT_MC22    | ZCS_STAT_SPEND_LOADS      | Statistics Spend Overview Loads
    0TCT_MC22    | ZCS_STAT_SPEND_LOADS_PERF | Statistics Spend Overview Loads Performance
    0TCT_MC22    | ZTCT_MC22_Q_FB_02         | Dashboard - DSO loading time top 10
    0TCT_MC22    | ZTCT_MC22_Q_FB_04         | Dashboard - IC loading time top 10
    0TCT_MC22    | ZTCT_MC22_Q_FB_06         | Dashboard - IO loading time top 10
    -Arun.M.D

  • How can I add my Sold to field in the infocube via transformation if it is a separate infoobject?

    Hello all,
    Really need your help... I want to include the Sold-to field in my Revenue BEx query, but it is not available in the InfoCube I am sourcing from. It is, however, in BW as a separate InfoObject. Is it possible to include the data in my InfoCube by adding a field to the transformation and sourcing it from 0SOLD_TO? If so, how can it be related to the current data in the InfoCube (i.e., how will it know which records the Sold-to values apply to if there is no document number in the cube)?
    Please advise if this is possible, or if there are better options which would not require updating the standard DataSource 0CO_PA_1.
    Thanks!

    Hi,
    Sorry, I'm unable to fully understand your requirement; however, below are options which may help you:
    1) Add the Sold-to InfoObject to the cube, enhance your DataSource, write logic to populate the DataSource field, and map accordingly.
    2) Add the Sold-to InfoObject to the cube and write a routine / read master data to populate it in the cube. You need the logic and relationship to derive it.
    Regards,
    Mayank
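    Option 2 (a master-data lookup inside the transformation) can be sketched as below. This is only an illustration: the 'customer' characteristic and all values are hypothetical, and it works only if the cube already carries some characteristic from which the Sold-to can be derived:

```python
# Hypothetical master data: some cube characteristic -> sold-to party.
master_sold_to = {"C100": "S500", "C200": "S600"}

# Records as they arrive in the transformation (field names illustrative).
cube_rows = [{"customer": "C100", "revenue": 250.0},
             {"customer": "C200", "revenue": 100.0}]

# Enrich each record by reading the master data; fall back to blank
# when no relationship exists (the case the question worries about).
for row in cube_rows:
    row["sold_to"] = master_sold_to.get(row["customer"], "")

print(cube_rows[0]["sold_to"])  # S500
```

    The key design point is the fallback: without a linking characteristic in the cube, the lookup cannot determine the Sold-to at all, which is why Mayank stresses having the relationship first.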

  • Remove double records during data upload from one InfoCube to another

    Dear Experts
    We have transactional financial data available in an InfoCube, including cumulated values by period. Some companies have 12 reporting periods (0FISCPER3) and some have 16 (but all 16 periods are not always filled). The data must be prepared for a consolidation system which expects only 12 periods. Therefore I built a routine with the following logic:
    If period > 12, result = 12, else result = source field.
    But as the data target is (must be) an InfoCube, the new values with reporting period 12 are not overwritten but summarised instead. This means the original records with period 12 and the new records coexist - see example:
    Records before transformation:
    Period   Amount
    12          100
    13           120
    Records after transformation in the InfoCube:
    Period   Amount
    12          100
    12           120
    This would lead to the following aggregation:
    Period   Amount
    12          240
    But as the values per period are cumulated, the consolidation system only needs the last period. So there should be only one record left in the InfoCube:
    Period   Amount
    12           120
    Is it possible to delete duplicate records, or do you have any other idea to keep only the record with the last period (e.g. period 13) and to assign it the value 12?
    Thanks a lot in advance for your help!
    Regards
    Marco

    Hi,
    You have two options here: you can put a DSO between the DataSource and the InfoCube and load the delta using the change log.
    The second is to delete the overlapping request from the InfoCube; it will delete the previous requests and load the new request.
    Check the below article:
    [Automatic Deletion of Similar or Identical Requests from InfoCube after Update|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/e0431c48-5ba4-2c10-eab6-fc91a5fc2719]
    Hope this helps...
    Rgs,
    Ravikanth
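    If the target can overwrite (e.g. a DSO, as suggested above), the end state Marco describes - keep only the record with the highest filled period per company and cap it at 12 - can be sketched as follows (field names illustrative, not actual BW routine code):

```python
# Since the values are cumulated, the latest period already carries the
# final amount; keep it and cap the period number at 12.
records = [
    {"company": "C1", "period": 12, "amount": 100},
    {"company": "C1", "period": 13, "amount": 120},
]

# Keep only the record with the highest period per company.
latest = {}
for rec in records:
    cur = latest.get(rec["company"])
    if cur is None or rec["period"] > cur["period"]:
        latest[rec["company"]] = rec

# Relabel any period beyond 12 as 12 for the consolidation system.
result = [{"company": c, "period": min(r["period"], 12), "amount": r["amount"]}
          for c, r in latest.items()]
print(result)  # [{'company': 'C1', 'period': 12, 'amount': 120}]
```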

  • Info IDoc received with status 8 in BI but Data available in source system

    Hello Experts,
    I am loading Budget Period (attributes) into PSA in BI.
    Budget Period data loading issue: no data is available while loading, but the source system (ECC) has data for that DataSource.
    Data source: 0PU_BDGTID_ATTR
    This is full load.
    I checked whether I was making any mistake during data selection, but could not find any issue. Message while loading data into the PSA (yellow traffic light):
    No data available
    Diagnosis
    The data request was a full update.
    In this case, the corresponding table in the source system does not
    contain any data.
    System Response
    Info IDoc received with status 8.
    Procedure
    Check the data basis in the source system.
    I already looked at this thread and googled as well:
    Re: IDoc received with status 8.
    Check points:
    SM58 - no issue
    BD87 in ECC (RSINFO IDoc status: 03) - successful
    BD87 in BI (RSINFO IDoc status: 53)
    ST22 - no issue
    SM37 - job is successful (I can see the IDoc number for the ECC source system in the job log)
    I am not sure if I am missing any thing.
    Thanks
    Rana

    Hi all,
    I've got exactly the same issue with several extractors. I'm initialising the master data in an all-new system; in Quality the loads succeeded, but in Production they failed.
    For instance with the basic DataSource "0FUNCT_LOC_TEXT", I've made the following test:
    RSA3 (R/3) -> 0 data
    ST22 -> no short dump
    WE05 (IDoc) -> everything is OK
    I reactivated the DataSource in R/3 (RSA5) and then deleted and replicated it in BW, but the same issue occurred.
    Any ideas?
    Thanks
    jak

  • Flash chart - Error - No chart data available

    Hi
    I have a Flash chart that shows the message "Error - No chart data available" (not the "no data found" message).
    If I run its query in SQL*Plus (using the same parameters) I get 4 rows (with url, label and value columns). I am clueless about how to start investigating this, as the chart does not show any other information... Any advice?
    What does that message mean? It is obviously something different from "no data found"?
    Thanks
    Luis

    Hi Marco
    Thanks for the reply. I did what you said and found out the problem:
    chart Flash Chart error: ORA-20001: get_data error: ORA-20001: Fetch error: ORA-01843: not a valid month
    This was caused because I used the following construction:
    where date_from = :p5_date_from
    If this were a report, it would run fine, as I set the date format mask using an application-level process. However, as Flash charts use separate database connections, they ignore that setting (btw, I posted a message about this problem).
    I fixed it by hardcoding a format mask in my SQL. Not nice, but it works!
    Thanks
    Luis
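    The underlying lesson generalises: never rely on an ambient session format when a date value crosses a connection boundary; always apply an explicit mask (in Oracle SQL that would be something like TO_DATE(:P5_DATE_FROM, 'DD-MON-YYYY'); the item name and mask here are assumptions for illustration). The same idea in Python:

```python
from datetime import datetime

# Parsing with an explicit format mask removes any dependence on
# session or locale defaults -- the analogue of hardcoding the mask
# in the chart query instead of relying on the session date format.
value = "08-MAR-2010"
parsed = datetime.strptime(value, "%d-%b-%Y")
print(parsed.date())  # 2010-03-08
```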

  • No Data Available (Flat File Loading Problem)

    Hi Friends,
    I loaded the data from a flat file (application server). I am able to see it in the preview,
    but the data is not loading into the cube. The request status is green, but with zero records.
    Following is the message
    No data available
    Diagnosis
    The data request was a full update.
    In this case, the corresponding table in the source system does not
    contain any data.
    System response
    Info IDoc received with status 8.
    Procedure
    Check the data basis in the source system
    Can anybody help
    Regards,
    CV

    Hi Chakri,
    1. First you have to check whether your flat file is on the presentation server or the application server.
    2. Your flat file field sequence must match the transfer structure. You should maintain the same sequence in the transfer structure as you have in the flat file.
    3. When you are loading data from a flat file source system, you have to use FULL UPDATE mode.
    Check the above.
    Thanks,
    Kiran

  • I have request in the report level but the same is missing in the infocube

    Dear Experts,
    I have a request at the report level, but the same is missing at the compressed InfoCube level. What could be the cause? Does compressing the InfoCube delete the request? Even so, I am able to view the other requests at the InfoCube manage level.
    Kindly provide enough information.
    Thanks.

    Hi
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    Edited by: Allu on Dec 20, 2007 3:26 PM
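    The explanation above can be condensed into a conceptual sketch: compression collapses all request IDs into request 0, summing records that are otherwise identical, and optional zero-elimination then drops rows whose key figures are all zero (field names illustrative, not the actual fact-table layout):

```python
from collections import defaultdict

# Fact rows differing only in request ID (reverse posting cancels M1).
facts = [
    {"request": 101, "material": "M1", "qty": 10},
    {"request": 102, "material": "M1", "qty": -10},
    {"request": 102, "material": "M2", "qty": 5},
]

# Compression: the request ID is dropped, so identical records are summed.
compressed = defaultdict(int)
for f in facts:
    compressed[f["material"]] += f["qty"]

# Zero-elimination: discard rows where all key figures are now 0.
result = [{"request": 0, "material": m, "qty": q}
          for m, q in compressed.items() if q != 0]
print(result)  # [{'request': 0, 'material': 'M2', 'qty': 5}]
```

    This also shows why compressed data can no longer be deleted by request ID: the original request numbers are gone.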
