Issue with Compression of Cube 0IC_C03

Dear Mates,
Before initiating this thread, I tried searching SDN but could not find anything that works in my case.
Issue: The compression scheduled in the process chain starts, runs for a long time and then gets cancelled, as I can see in Process (right-click) >> Display Messages.
Compression had been running fine for the last year; the behavior described above started in Nov 2010.
One area that feels fishy: in November we migrated the database to DB2. But I could not really relate the compression issue to the database change.
We are currently working on SAP BW 3.5.
Please share your comments/suggestions. Below is the log observed after the compression was cancelled.
Date     Time     Message text
12/19/2010     03:03:48     Job started
12/19/2010     03:03:48     Step 001 started (program RSPROCESS, variant &0000000530859, user ID BW_CPIC)
12/19/2010     03:03:48     Performing check and potential update for status control table
12/19/2010     03:03:58     FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSSM_PROCESS_COMPRESS; row 000200
12/19/2010     03:03:58     Request '758.334'; DTA '0IC_C03'; action 'C'; with dialog 'X'
12/19/2010     03:03:58     Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State ''
12/19/2010     03:03:58     FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSSM_PROCESS_COMPRESS; row 000200
12/19/2010     03:03:58     Request '758.348'; DTA '0IC_C03'; action 'C'; with dialog 'X'
12/19/2010     03:03:58     Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State ''
12/19/2010     03:03:58     FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSSM_PROCESS_COMPRESS; row 000200
12/19/2010     03:03:58     Request '761.202'; DTA '0IC_C03'; action 'C'; with dialog 'X'
12/19/2010     03:03:58     Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State ''
12/19/2010     03:03:58     FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSSM_PROCESS_COMPRESS; row 000200
12/19/2010     03:03:58     Request '763.019'; DTA '0IC_C03'; action 'C'; with dialog 'X'
12/19/2010     03:03:58     Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State ''
12/19/2010     03:03:58     FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSSM_PROCESS_COMPRESS; row 000200
12/19/2010     03:03:58     Request '763.397'; DTA '0IC_C03'; action 'C'; with dialog 'X'
12/19/2010     03:03:58     Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State ''
12/19/2010     03:04:06     SQL: 19.12.2010 03:04:06 BW_CPIC
12/19/2010     03:04:06     INSERT INTO "/BI0/L0IC_C03" ( "SID_0REQUID"
12/19/2010     03:04:06     ,"SID_0PLANT" ,"SID_0CALDAY_F" ,"SID_0CALDAY_T" )
12/19/2010     03:04:06     SELECT 2000000000 AS "SID_0REQUID"
12/19/2010     03:04:06     ,"/BI0/L0IC_C03"."SID_0PLANT" , MIN (
12/19/2010     03:04:06     "/BI0/L0IC_C03"."SID_0CALDAY_F"  )  AS
12/19/2010     03:04:06     "SID_0CALDAY_F" , MAX (
12/19/2010     03:04:06     "/BI0/L0IC_C03"."SID_0CALDAY_T"  )  AS
12/19/2010     03:04:06     "SID_0CALDAY_T" FROM "/BI0/L0IC_C03" WHERE (
12/19/2010     03:04:06     "/BI0/L0IC_C03"."SID_0REQUID" BETWEEN 0 AND
12/19/2010     03:04:06     763397 ) GROUP BY "/BI0/L0IC_C03"."SID_0PLANT"
12/19/2010     03:04:06     SQL-END: 19.12.2010 03:04:06 00:00:00
12/19/2010     03:04:06     SQL: 19.12.2010 03:04:06 BW_CPIC
12/19/2010     03:04:06     INSERT INTO "/BI0/L0IC_C03" ( "SID_0REQUID"
12/19/2010     03:04:06     ,"SID_0PLANT" ,"SID_0CALDAY_F" ,"SID_0CALDAY_T" )
12/19/2010     03:04:06     SELECT -1 AS "SID_0REQUID"
12/19/2010     03:04:06     ,"/BI0/L0IC_C03"."SID_0PLANT" , MIN (
12/19/2010     03:04:06     "/BI0/L0IC_C03"."SID_0CALDAY_F"  )  AS
12/19/2010     03:04:06     "SID_0CALDAY_F" , MAX (
12/19/2010     03:04:06     "/BI0/L0IC_C03"."SID_0CALDAY_T"  )  AS
12/19/2010     03:04:06     "SID_0CALDAY_T" FROM "/BI0/L0IC_C03" WHERE (
12/19/2010     03:04:06     "/BI0/L0IC_C03"."SID_0REQUID" BETWEEN 0 AND
12/19/2010     03:04:06     763397 ) GROUP BY "/BI0/L0IC_C03"."SID_0PLANT"
12/19/2010     03:04:06     SQL-END: 19.12.2010 03:04:06 00:00:00
12/19/2010     03:04:07     SQL: 19.12.2010 03:04:07 BW_CPIC
12/19/2010     03:04:07     TRUNCATE TABLE "/BI0/0100000095"
12/19/2010     03:04:07     SQL-END: 19.12.2010 03:04:07 00:00:00
12/19/2010     03:04:12     SQL: 19.12.2010 03:04:12 BW_CPIC
12/19/2010     03:04:12     TRUNCATE TABLE "/BI0/0100000091"
12/19/2010     03:04:12     SQL-END: 19.12.2010 03:04:12 00:00:00
Thanks & Regards
Sameer
Edited by: Sameer A Ganeshe on Dec 28, 2010 10:51 AM

Hi Zeeshan,
I have handled inventory scenarios. The total stock qty is actually a non-cumulative key figure based on the cumulative key figures issue qty and receipt qty; receipts minus issues gives you the total stock qty. In the inventory cube you can capture daily movements as well as monthly ones, since you have calmonth as a time characteristic in the dimensions.
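To illustrate with invented numbers (not taken from any system): if the opening stock of a plant is 100 PC and on a given day the receipt qty is 30 PC and the issue qty is 10 PC, the total stock for that day is 100 + 30 - 10 = 120 PC. The stock for any later date is again derived from the cumulated receipts minus issues up to that date, which is how the non-cumulative key figure is evaluated at query time.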
Regards
Ram

Similar Messages

  • Problem with data in Cube 0IC_C03 in Bex Query

    Hi,
(1) I am loading data into cube 0IC_C03 from datasources 2LIS_03_BX, 2LIS_03_BF and 2LIS_03_UM. I need one more field, Manufacturing date, in my cube.
(2) That's why I enhanced datasource 2LIS_03_BF and populated HSDAT (manufacturing date from the MSEG table). The date is visible in the datasource.
(3) I loaded data into the cube from 03_BX and collapsed, then loaded data from 03_BF and collapsed. In my cube data display (or LISTCUBE) I am able to see the manufacturing date and the key figure values (Received stock qty: transit and Issued stock qty: transit) in one row.
    eg.-
    Plant           Manufac.date            Issued Qty            Received qty
    5102            01.07.2007                 2000                       3000
But in my Bex query, I see -
    Plant           Manufac.date            Stock:in Transit
    5102            01.07.2007                 #
    5102                  #                         1000
    Ideally I want to show data in query -
    Plant           Manufac.date            Stock:in Transit
    5102            01.07.2007                 1000
Should I also enhance the 2LIS_03_BX datasource with manufacturing date?
Please suggest, it is urgent.
    Thanks
    Saurabh

That's great, Shalini.
If it is helpful, please assign points.
If you need any information on that, please mail me at [email protected]
    Regards,
    RK Ghattamaneni.

  • Issue with Building OLAP Cubes in Project Server 2010

    Hi
There is an issue while building OLAP cubes.
I created an OLAP cube and it built successfully. When I add a resource-level custom field that has lookup table values to the ASSIGNMENT cube, I get a cube failure message.
I deleted and recreated the custom field and lookup table, but no luck.
Below is the error message from Manage Queue Jobs:
    General
    CBS message processor failed:
    CBSOlapProcessingFailure (17004) - Failed to process the Analysis Services database <DB NAME> on the 10.3.66.12 server. Error: OLE DB error: OLE DB or ODBC error: 
    Warning: Null value is eliminated by an aggregate or other SET operation.; 01003. Errors in the OLAP storage engine: An error occurred while processing 
    the 'Assignment Timephased' partition of the 'Assignment Timephased' measure group for the 'Assignment Timephased' cube from the <DB NAME> database. 
    Internal error: The operation terminated unsuccessfully. Server:  Details: id='17004' name='CBSOlapProcessingFailure' uid='f2dea43a-eeea-4704-9996-dc0e074cf5c8'
     QueueMessageBody='Setting UID=afb5c521-2669-4242-b9f4-116f892e70f5 
    ASServerName=10.3.66.12 ASDBName=<DB NAME> ASExtraNetAddress= RangeChoice=0 PastNum=1 PastUnit=0 NextNum=1 NextUnit=0 FromDate=02/27/2015 02:10:15 
    ToDate=02/27/2015 02:10:15 HighPriority=True' Error='Failed to process the Analysis Services <DB NAME> on the 10.3.66.12 server. Error:
     OLE DB error: OLE DB or ODBC error: Warning: Null value is eliminated by an aggregate or other SET operation.; 01003. Errors in the OLAP storage engine: An error 
    occurred while processing the 'Assignment Timephased' partition of the 'Assignment Timephased' measure group for the 'Assignment Timephased' cube from the 
    <DB NAME> database. Internal  
    Queue:
    GeneralQueueJobFailed (26000) - CBSRequest.CBSQueueMessage. Details: id='26000' name='GeneralQueueJobFailed' uid='b7162f77-9fb5-49d2-8ff5-8dd63cc1d1d3' 
    JobUID='76837d02-d0c6-4bf8-9628-8cec4d3addd8' ComputerName='WebServer2010' GroupType='CBSRequest' MessageType='CBSQueueMessage' MessageId='2' Stage=''.
Please help me to resolve the issue.
    Regards
    Santosh

Are the SQL Server and Analysis Services server running on different servers and not on default ports?
If yes, then check whether the same alias name added in Project Server is also added on the Analysis Services server.
    Cheers! Happy troubleshooting !!! Dinesh S. Rai - MSFT Enterprise Project Management Please click Mark As Answer; if a post solves your problem or Vote As Helpful if a post has been useful to you. This can be beneficial to other community members reading
    the thread.

  • ACE issue with compression when SSL Initiation is turned on?

We are currently doing an evaluation of the Cisco ACE 4710 and have some sites where the backend is Tomcat and SSL is turned on. When we set the default L7 load-balancing action to Load Balance with compression method Deflate (I haven't tried gzip yet), requests to these sites return badly mangled content. For example, a GIF image of 7,700 bytes comes back as a 7-byte file, even though the default should only attempt compression on text/*.
    Has anyone seen a similar issue?

It turned out the problem was a configuration issue and my understanding of how the ACE works with compression, policies, etc.
In conjunction with this I seem to have found a bug in the GUI, which is also still present in A3 (2.3). I now have a default L7 policy which just sets SSL Initiation to ssl client. I added another L7 policy, but when looking at the virtual server afterwards the GUI doesn't show that policy.
    switch/Development# show running-config policy-map FORD-APP.PERF.AUTC.COM-l7slb
    Generating configuration....
policy-map type loadbalance first-match F-APP.PERF.AUTC.COM-l7slb
  class default-compression-exclusion-mime-type
    serverfarm F-APP.PERF.AUTC.COM
    compress default-method deflate
    insert-http rl_client_ip header-value "%is"
    ssl-proxy client Backend
  class class-default
    serverfarm F-APP.PERF.AUTC.COM
    insert-http rl_client_ip header-value "%is"
    ssl-proxy client Backend
    See attachment with screen shot of GUI

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, but I'm not sure yet, that this is causing the export to run extremely slow. The database is at 10.2.0.2 and we are using the exp utility, not datapump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

Can you give more details of the table structure (with dbms_metadata if possible), and explain how you are taking the export, please?
Did you try taking a SQL trace of the export process to see what is going on behind the scenes? This is an introduction, if you need it:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/

  • Compress the request of inventory cube 0ic_c03

    Hi experts,
I want to compress the InfoCube 0IC_C03 daily for the delta requests through process chains.
In the process chain, the compression process type has 2 options:
1. Collapse only those requests that were loaded XXX days ago
2. Number of requests that you do not want to collapse
Which one should I select, and how many days should I go back?
I want the daily delta requests to be compressed.
Please guide me accordingly.
    Regards,
    Nishuv.

    Hi,
Inventory data is non-cumulative data, and we usually keep 30 days before compressing. Once compressed, we cannot make any changes if there are errors on a particular day. We had a problem when the FICO consultants changed a valuation in the system directly and we were not aware of it. Later, when the values in FICO differed from Inventory, we figured out that there had been some manual changes in FICO. Luckily we had not compressed those requests, and later, through some routines, we fixed the issue.
    Thanks
    Srikanth

  • TRCS 2LIS_03_BF_TR - CUBE 0IC_C03

    Hi Gurus
I am unable to solve the routine for TRCS 2LIS_03_BF_TR -> CUBE 0IC_C03. It is giving me the following error.
    E:In PERFORM or CALL FUNCTION "ROUTINE_9998", the actual parameter
    "SOURCE_PACKAGE" is incompatible with the formal parameter
    "DATA_PACKAGE".
I have tried almost all the options available on SDN. Is there any better solution?
    Thanking You,
    Regards
    Mo

    Hi Muhammad Ali,
we are facing the same issue; most of the key figures in the transformation are not mapped from the InfoSource to the transformation.
Can you please share the transformations with the routines?
    thanks in advance
    Regards
    Raj.

  • Issue with loading of Delta data

    Hi all,
I have constructed a generic datasource using the tables ESLH, ESLL and ESKL, with AEDAT (Changed on) from the ESKL table as the delta field. I created an InfoCube and DSO based on the datasource and the loading is completed. Delta loads are running daily.
Now my question is how to delete the data from the InfoCube in the BW system if the fields ESLL-DEL (deletion indicator) and ESLL-STOKZ (reversal document) are set to active in the ESLL table in the R/3 system after the delta loading of data is completed.
Does anyone have an idea how to resolve this issue?
    Thanks in Advance,
    Vinay Kumar

    There are a couple of ways to do this, but I would question your logic as to why you need to delete them first:
    For records that are marked as "deleted" you should just filter these records in the Query designer. Doing it this way is non-destructive, and later if your business needs to know some KPI based around number of deleted records - then you will be able to answer this. If you permanently remove the data, then you cannot answer this KPI.
    For records marked "reversal" you should never remove these because they affect your key figures. A reversal can also be a partial reversal and so your KPIs will be wrong if you don't include them, and in any case in reporting you never see the reversal because almost all KFs are aggregated. The only time you will see a reversal record is when you include the characteristic that identifies it as a reversal, and in most reports that is unnecessary.
    Right - so now to your solution(s) - although I repeat I advise not doing any of these.
As these are custom datasources, make sure that in your source system you create ABAP code to fill in the RECORDMODE for each record.
    This way when you load your data to the DSO, the activation step will take care of processing deletions and reversals.
    Next compress your cube and use selective deletion on the cube to remove the deleted and reversal records.
    Alternatively:
    Compress your cube and use selective deletion on the cube to remove the deleted and reversal records.
Then, in your transformation to the cube, create a start routine which removes those records from the source package, i.e. the records are never allowed to reach the cube.
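A minimal sketch of such a start routine (assuming a BW 7.x transformation, and assuming the deletion indicator and reversal flag were mapped into the source structure as fields DEL and STOKZ; adjust the names to your own datasource fields):
* Inside the generated start routine method of the transformation:
* drop deleted and reversal records so they never reach the cube.
  DELETE SOURCE_PACKAGE WHERE del   = 'X'
                           OR stokz = 'X'.
If you instead follow the DSO/RECORDMODE approach described first, this routine is not needed, since the DSO activation step already takes care of deletions and reversals.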
    I hope that helps.

  • Issue with Delta Request

    Hi Friends,
    I have an issue with loading.
1. From source 2LIS_13_VDITM, data loads to 2 targets:
ZIC_SEG and 0SD_C03, and from 0SD_C03 it loads again to ZSD_C03W and ZSD_C03M through DTPs.
I did a repair full load on 08.08.2011 to the PSA and loaded this request manually to cube ZIC_SEG.
I forgot to delete this request from the PSA, as I did not want to load it to the other targets.
Before I noticed, the delta requests had already run and had pulled my repair full request into the other targets as well.
As I have not done any selective deletions on the other cubes, there may be double entries.
I am planning to do the steps below in order to rectify the issue:
1. Do a selective deletion, in all 3 targets, of the delta request which got loaded on the 8th along with the repair full.
2. Delete the repair full request from the PSA.
So now the delta request which got loaded after the repair full request is left in the PSA.
My question is: if my process chain runs today, will this delta request in the PSA be pulled again into the cube through the DTP?
Kindly share any other ideas.
    Regards,
    Banu

    Hi Banu,
    If the data in the CUBE's is not compressed, then follow the below steps
1) Delete the latest request in cube ZIC_SEG.
2) Delete the latest request from cubes ZSD_C03W and ZSD_C03M, and then from 0SD_C03.
3) Load the delta request manually using a DTP to cubes ZIC_SEG and 0SD_C03 (in the DTP you can load by giving the request number).
4) Now run the delta DTP to load the delta request from 0SD_C03 to ZSD_C03W and ZSD_C03M.
The next time your process chain runs, it will load only that particular day's delta request.
    It will work
    Regards,
    Venkatesh

  • Inventory loads to cube 0IC_C03

    Hi Guys ,
I am loading data to Inventory Cube 0IC_C03 and have done all the steps. I have one question: initially I loaded data into the cube from the 2LIS_03_BF datasource. I did the following steps:
• Init delta (2LIS_03_BF) with data transfer.
• Release the request with No Marker Update (ticked).
Now from tomorrow we will get delta loads. My question is: do we need to compress the delta requests with NO MARKER UPDATE (ticked)?
Or
Do I need to untick the option NO MARKER UPDATE?
The data is very critical here. Please let me know if you have the right answer.
    Regards
    Santosh

    Hi,
Please look at the explanation below.
    Marker Update is used to reduce the time of fetching the non-cumulative key figures while reporting. It helps to easily get the values of previous stock quantities while reporting. The marker is a point in time which marks an opening stock balance. Data up to the marker is compressed.
    The No Marker Update concept arises if the target InfoCube contains a non-cumulative key figure. For example, take the Material Movements InfoCube 0IC_C03 where stock quantity is a non-cumulative key figure. The process of loading the data into the cube involves in two steps:
1) In the first step, one should load the records pertaining to the opening stock balance, i.e. the stock present at the time of implementation. At this time we set the marker to be updated (uncheck 'No Marker Update') so that the value of the current stock quantity is stored in the marker. After that, when loading the historical movements (stock movements made prior to implementing the cube), we must check 'No Marker Update' so that the marker is not updated (these historical movements have already led to the opening stock quantity; i.e. we have already loaded the present stock, and the aggregation of the previous/historical data accumulates to the present data).
2) After every successful delta load, we should not check 'No Marker Update' (we should allow the marker to be updated) so that the changes in the stock quantity are reflected in the marker value. The marker is only updated for records which are compressed; it is not updated for uncompressed requests. Hence every delta load request should be compressed.
Checking or unchecking the marker option:
Compress the request with stock marker update => leave the 'No Marker Update' option unchecked.
Compress the loads without updating the stock marker => check the 'No Marker Update' option.
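A small worked illustration of the above (quantities invented for clarity): suppose the opening stock loaded from 2LIS_03_BX is 500 PC. Compressing the BX request with marker update ('No Marker Update' unchecked) sets the marker to 500. The historical movements from 2LIS_03_BF are then compressed with 'No Marker Update' checked, because their net effect is already contained in the 500. From then on, each delta request (say +20 PC receipts and -5 PC issues) is compressed with 'No Marker Update' unchecked, so the marker moves to 515 and queries can derive the current stock from the marker instead of summing all historical movements.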
    Relevant FAQs:
1) The marker isn't relevant when no data is transferred (e.g. during a delta init without data transfer).
    2) The marker update is just like a check point (it will give the snapshot of the stock on a particular date when it is updated).
    Reference information:
    Note 643687 Compressing non-cumulative InfoCubes (BW-BCT-MM-BW)
    Note 834829 Compression of BW InfoCubes without update of markers (BW-BEX-OT-DBIF-CON)
    Note 745788 Non-cumulative mgmnt in BW: Verifying and correcting data (BW-BCT-MM-BW)
    Note 586163 Composite Note on SAP R/3 Inventory Management in SAP BW (BW-BCT-MM-IM)
    Thanks and regards

  • Inventory Management Extractors via DSO to Cube 0IC_C03

    Dear experts,
    to fulfill the requirements of DataWarehouse spirit using Entry-, Harmonization- and Reporting-Layer I want to use for Inventory Management a Data Flow from extractors 2LIS_03_BX, 2LIS_03_BF and 2LIS_03_UM via DSO/DSOs and later on to Cube 0IC_C03.
In this forum there are several open threads about this topic, but none has been resolved! Many hints refer to the "How to... Inventory Management" document; some say "You can use DSOs!" while others say "Don't use DSOs!". So where is the truth? Is there anybody who can describe a really practicable way to implement the above-mentioned data flow?
I'm excited to hear your suggestions and answers.
    Thanks in advance and regards
    Patrick
    --> Data Flow has to be in BW 7.0

    Hi Patrick,
    Yes indeed there is.
    Using DSOs in inventory flow is absolutely possible. Here's what you need to know and do:
1 - Firstly, don't be apprehensive about it. Knowledge is power! We know that inventory uses the ABR delta update methodology, and that is supported by DSOs, so no doubt about it.
2 - Secondly, inventory is special because of the non-cumulative key figures and the several rule groups at cube level for BX and BF.
3 - Now, as you want to use a DSO, and I am presuming it will be used for staging purposes, use a write-optimized DSO as the first layer. While mapping from the datasource to this DSO, keep a one-to-one mapping from fields to InfoObjects.
    4- Keep in mind that from Infosource to 0IC_C03 there would be multiple rule groups present in transformations.
    These rule groups will have different KPIs mapped with routines. The purpose is that from ECC only 1 field for quantity (BWMNG) and one field for value (BWGEO) is coming but from Infosource to 0IC_C03 the same fields in different rule groups are mapped to different KPIs (Stock in transit, vendor consignment, valuated stock etc) based on stock category, stock type and BW transaction keys.
5 - So create a write-optimized DSO, map it one-to-one from the datasource fields, and create/copy the transformations from this DSO to the cube, maintaining the same rule groups and the same logic.
6 - You can also use a standard DSO, create the rule groups and logic at DSO level, and then map one-to-one from the DSO to the cube.
    Keep in mind that these rule groups and logic in them should be precisely the same as in standard flow.
    This should work.
    Debanshu

  • Common issues with BW environment

    Hello Experts,
could you please mention all the common issues faced in a BW environment, along with the solutions available?
I have many issues and I wanted to know whether all the teams face the same ones.
This would be a good opportunity to have everything in one thread.
    Thanks and regards
    meps

    Hi,
We face many such issues; please check some of them below.
1. DTP Failure
Select the step -> right-click and select "Display Messages" -> there we will get the message which gives the reason for the abend.
A DTP can fail due to the following reasons; in such cases we can go for restarting the job:
    System Exception Error
    Request Locked
    ABAP Run time error.
    Duplicate records
    Erroneous Records from PSA.
Duplicate records: In case of duplicate records, we can find them in the error message along with the InfoProvider's name. Before restarting the job after deleting the bad DTP request, we have to handle the duplicate records: go to the InfoProvider -> DTP -> Update tab -> check "Handle Duplicate Records" -> activate -> execute the DTP. After successful completion of the job, uncheck the "Handle Duplicate Records" option and activate again.
DTP long run: If a DTP is taking longer than the regular run time without a corresponding background job, we have to turn the status of the DTP request to red, delete the bad DTP request (if any), and repeat the step or restart the job.
Before restarting the job / repeating the DTP step, make sure you know the reason for the failure.
If the failure is due to a space issue in the F fact table, engage the DBA team and the BASIS team and explain the issue to them. The table size needs to be increased before performing any action in BW; this will be done by the DBA team. After the space in the F fact table has been increased, we can restart the job.
Erroneous records from PSA: Whenever a DTP fails because of erroneous records while fetching data from the PSA to the data target, the data needs to be corrected in ECC. If that is not possible, then after getting approval from the business we can edit the erroneous records in the PSA and run the DTP again: go to the PSA -> select the request -> select the error records -> edit the records and save -> then run the DTP.
2. Info Package Failure:
The following are the reasons for an InfoPackage failure:
    Source System Connection failure
    tRFC/IDOC failure
    Communication Issues
    Processing the IDOC Manually in BI
Check the source system connection with the help of SAP BASIS; if it is not fine, ask them to rebuild the connection. After that, restart the job (InfoPackage).
Go to RSA1 -> select the source system -> System -> Connection check.
In case of any failed tRFCs/IDocs, the error message will be something like "Error in writing the partition number DP2" or "Caller 01, 02 errors". In such cases, reprocess the tRFC/IDoc with the help of SAP BASIS, and the job will then finish successfully.
    If the data is loading from the source system to DSO directly, then delete the bad request in the PSA table, then restart the job
Info Pack long run: If an InfoPackage is running long, check whether the job has finished in the source system. If it has finished, check (with the help of SAP BASIS) whether any tRFC/IDoc is stuck or failed. If the job is still in yellow status even after reprocessing the tRFC, turn the status to red and then restart/repeat the step. After completion of the job, force-complete it.
Before turning the status to red/green, make sure whether the load is full or delta, and verify the time stamp properly.
Time Stamp Verification:
Select the InfoPackage -> Process Monitor -> Header -> select the request -> go to the source system (Header -> Source System) -> SM37 -> enter the request and check its status in the source system.
If it is still active, check whether any tRFCs/IDocs are stuck or failed.
If the request is in Cancelled status in the source system, check the InfoPackage status in the BW system. If the IP status is also failed/cancelled, check the data load type (full or delta).
If the load is full, we can turn the InfoPackage status to red and then repeat/restart the InfoPackage/job.
If the load is delta, go to RSA7 in the source system and compare the time stamp with the last updated time of the background job in SM37. If the RSA7 time stamp matches, turn the InfoPackage status to red and restart the job; it will fetch the data in the next iteration. If the time stamp is not updated in RSA7, turn the status to green and restart the job; it will fetch the data in the next iteration.
Source System              | BW System              | Source System RSA7                                  | Source System SM37                           | Action
I/P status RED (Cancelled) | I/P status (Active)    | Time stamp matching with SM37 last updated time     | Time stamp matching with RSA7 time stamp     | Turn the I/P status into Red and restart the job
I/P status RED (Cancelled) | I/P status (Cancelled) | Time stamp matching with SM37 last updated time     | Time stamp matching with RSA7 time stamp     | Turn the I/P status into Red and restart the job
I/P status RED (Cancelled) | I/P status (Active)    | Time stamp not matching with SM37 last updated time | Time stamp not matching with RSA7 time stamp | Turn the I/P status into Green and restart the job
I/P status RED (Cancelled) | I/P status (Cancelled) | Time stamp not matching with SM37 last updated time | Time stamp not matching with RSA7 time stamp | Turn the I/P status into Green and restart the job
    Processing the IDOC Manually in BI:
When an IDoc is stuck in BW although its background job completed successfully in the source system, we can process the IDoc manually in BW: go to the InfoPackage -> Process Monitor -> Details -> select the IDoc in yellow (stuck) status -> right-click -> process the IDoc manually -> it will take some time to get processed. Note: we can process the IDoc in BW only when the background job has completed in the source system and the IDoc is stuck in BW only.
3. DSO Activation Failure:
When there is a failure in the DSO activation step, check whether the data is loaded into the DSO from the PSA or directly from the source system. If the data is loaded into the DSO from the PSA, activate the DSO manually as follows:
Right-click the DSO activation step -> Target Administration -> select the latest request in the DSO -> select Activate -> after the request has turned green, restart the job.
If the data is loaded directly from the source system into the DSO, delete the bad request from the PSA table, then restart the job.
4. Failure in Drop Index / Compression Step:
When the Drop Index or Compression step fails, check the error message. If it failed due to a lock issue, the job failed because of a parallel process or action performed on that particular cube or object; before restarting the job, make sure the object is unlocked. The index step can also fail in case of TREX server issues; in such cases engage the BASIS team, get the information regarding the TREX server, and repeat/restart the job once the server is fixed. The compression job may fail when another job is trying to load data into, or read from, the cube; in that case the job fails with an error message such as "Locked by ......". Before restarting the job, make sure the object is unlocked.
5. Roll Up Failure:
Roll up fails due to contention. When a master data load is in progress, there is a chance of roll up failure due to resource contention. In such cases, before restarting the job/step, make sure the master data load has completed. Once the master data load finishes, restart the job.
6. Change Run - Job Finishes with Error RSM 756:
When the attribute change run fails due to contention, we have to wait for the other attribute change run (ACR) to complete; only one ACR can run in BW at a time. Once the other ACR job is completed, we can restart/repeat the job. We can also run the ACR manually in case of failures: go to RSA1 -> Tools -> Apply Hierarchy/Attribute Change Run -> select the appropriate request in the list for which the ACR has to run -> Execute.
7. Transformation Inactive:
If changes were moved to production without being saved properly, or a transformation was modified without being reactivated, a load can fail with the error message "Failure due to Transformation Inactive". In such cases we have to activate the inactive transformation: go to RSA1 -> select the transformation -> Activate. If there is no authorization to activate the transformation in the production system, we can do it using the function module RSDG_TRFN_ACTIVATE. When using "RSDG_TRFN_ACTIVATE" you will need to enter the following details:
Transformation ID: the transformation's technical name (ID)
Object Status: ACT
Type of Source: the source object type
Source name: the source's technical name
Type of Target: the target object type
Target name: the target's technical name
Execute. The transformation status will be turned to Active.
Then we can restart the job; it will complete successfully.
8. Process Chain Started from Yesterday's Failed Step:
In a few instances, a process chain starts from the step that failed in the previous iteration instead of starting from the "Start" step.
In such cases we have to delete the previous day's process chain log to start the chain from the beginning (from the Start variant).
    Go To ST13-> Select the Process Chain -> Log -> Delete.
    Or we can use Function Module for Process Chain Log Deletion: RSPROCESS_LOG_DELETE.
    Give the log id of the process chain, which we can get from the ST13 screen.
    Then we can restart the chain.
    Turning the Process Chain Status using Function Module:
At times, when a process chain has been running for a long time without any progress, we will have to set the status of the entire chain, or of a particular step, by using a function module.
    Function Module: RSPC_PROCESS_FINISH
    The program "RSPC_PROCESS_FINISH" for making the status of a particular process as finished.
To finish a DTP load that has been running long, please try the following steps to use the program "RSPC_PROCESS_FINISH"; here you need to enter the following details (a hedged example call follows the state list below):
    LOG ID: this id will be the id of the parent chain.
    CHAIN: here you will need to enter the chain name which has failed process.
    TYPE: Type of failed step can be found out by checking the table "RSPCPROCESSLOG" via "SE16" or "ZSE16" by entering the Variant & Instance of the failed step. The table "RSPCPROCESSLOG" can be used to find out various details regarding a particular process.
    INSTANCE & VARIANT: Instance & Variant name can be found out by right clicking on the failed step and then by checking the "Displaying Messages Options" of the failed step & then checking the chain tab.
    STATE: State is used to identify the overall state of the process. Below given are the various states for a step.
    R Ended with errors
    G Successfully completed
    F Completed
    A Active
    X Canceled
    P Planned
    S Skipped at restart
    Q Released
    Y Ready
    Undefined
    J Framework Error upon Completion (e.g. follow-on job missing)
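Putting the above together, a minimal example call (the parameter names I_LOGID, I_CHAIN, I_TYPE, I_VARIANT, I_INSTANCE and I_STATE are assumptions based on the fields listed above - please verify them in SE37 - and all values shown are invented placeholders; read the real ones from RSPCPROCESSLOG and from the failed step's display messages):
REPORT z_finish_pc_step.
* Set the failed process to "successfully completed" so the chain can continue.
CALL FUNCTION 'RSPC_PROCESS_FINISH'
  EXPORTING
    i_logid    = 'D4XXXXXXXXXXXXXXXXXXXXXXX'  " log ID of the parent chain
    i_chain    = 'ZPC_SALES_DAILY'            " chain that contains the failed process
    i_type     = 'DTP_LOAD'                   " type of the failed step (from RSPCPROCESSLOG)
    i_variant  = 'DTP_XYZ'                    " variant of the failed step
    i_instance = 'DTP_REQU_XYZ'               " instance of the failed step
    i_state    = 'G'.                         " G = successfully completed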
    9. Hierarchy save Failure:
When there is a failure in the hierarchy save step, we have to follow the process below.
If there is an issue with the hierarchy save, we have to schedule the InfoPackages associated with the hierarchies manually, and then run an attribute change run to update the changes to the associated targets. Please find the step-by-step process below:
ST13 -> select the failed process chain -> select the Hierarchy Save step -> right-click, Display Variant -> select the InfoPackage in the hierarchy -> go to RSA1 -> run the InfoPackage manually -> Tools -> Apply Hierarchy/Attribute Change Run -> select the hierarchy list (here you can find the list of hierarchies) -> Execute.

  • Error ORA-01422 while compressing a cube

    Hello,
I am trying to compress the initial balance of the European Inventory KPI (KPINVMOEU) cube after reloading it, and I encountered Oracle error ORA-01422, which says:
    system error:error-01422:exact fetch returns more than requested
    system error:CONDENSE_FACTTABLE-5-ERROR-01422
    Has anyone dealt with this error before? Any input is appreciated.
    Thanks in advance,
    madhu

    Madhu,
Can you please look into these notes: 485766 and 385660?
At the same time, look into these links, which are for similar problems:
    ERROR: System error: CONDENSE_FACTTABLE-5- ERROR:12805
    Error during compress 0IC_C03
    error ORA-01422 while compressing a cube
    Re: Cube compression failing
    All the best.
    Regards,
    Nagesh Ganisetti.

  • Multiple issues with Creative Cloud File Syncing

I'm having issues syncing files on my Mac to Adobe Creative Cloud. I don't seem to have issues syncing via drag and drop in my web browser. I keep getting this error while syncing via Finder:
I also have a fairly complex PSD with folders that I'm toggling on and off using layer comps. I don't see any difference when switching layer comps within Extract in the browser. I do see the changes when turning the folders on and off, though. What's the deal with that?
Why does it take so long to render the PSD after I turn on a folder or switch layer comps? It takes about 30 seconds to re-render, which is crippling for our workflow.
    This would be an amazing tool for our developers if these 3 issues were resolved.

    Thanks for the information.
Could you now send me some log files, please?
    The log files can be found here:
    Mac: /Users/<yourusername>/Library/Application Support/Adobe/CoreSync/
    Windows: C:\Users\<yourusername>\AppData\Roaming\Adobe\CoreSync\
    The logs have the date in the filename, like "CoreSync-2014-03-25.log". Please compress (zip) all the CoreSync-2014-MM-DD.log files and email them to me directly at [email protected]
    Thanks
    Warner

  • Design issue with the multiprovider

    Design issue with the multiprovider :
    I have the following problem when using my multiprovider.
The data flow is like this: I have the InfoObjects IObjectA, IObjectB and IObjectC in my cube (the source for this data is source system A).
And from another source system I am also loading the master data for IObjectA.
Now I have created the MultiProvider based on the cube and IObjectA.
However, surprisingly, the join is not working correctly in the MultiProvider.
    Scenario :
    Record from the Cube.
    IObjectA= 1AAA
    IObjectB = 2BBB
    IObjectC = 3CCC
    Records from IobjectA =1AAA.
    I expect the record should be like this :
    IObjectA : IObjectB: IObjectC
    1AAA       :2BBB       :3CCC
    However, I am getting the record like this:
    IObjectA : IObjectB: IObjectC
    1AAA       :2BBB       :3CCC
    1AAA         : #             :#
In the Identification section I have selected both entries for IObjectA; still I am getting this error.
My BW version is 3.0B and the SP is 31.
    Thanks in advance for your suggestion.

Maybe I was not clear enough in my first explanation. Let me try again to explain my scenario:
    My Expectation from Multi Provider is :
    IObjectA
    1AAA
    (From InfoObject)
    Union
    IObjectA     IObjectB     IObjectC
    1AAA     2BBB     3CCC
    (From Cube)
    The record in the multiprovider should be :
    IObjectA     IObjectB     IObjectC
    1AAA     2BBB     3CCC
Because this is what a union means, and the definition of the MultiProvider also says the same thing:
http://help.sap.com/saphelp_bw30b/helpdata/EN/ad/6b023b6069d22ee10000000a11402f/frameset.htm
Do you still think this is how the MultiProvider behaves? If that is the case, what would be the purpose of having an InfoObject in the MultiProvider?
    Thank you very much in advance for your responses.
    Best Regards.,
    Praveen.
