Data packet not getting processed

Hi SDNers,
I am loading data from one ODS to 4 regions. The source ODS is loaded successfully, but from there to the data targets the load is failing or taking a long time.
Up to the transfer rules the data is successful; in the update rules the data packets are not getting processed.
Kindly suggest a solution; points will be assigned.
Thanks in advance

Hi Katam,
In the target ODSs, go to the monitor screen for the particular request -> in the menu bar choose Environment -> Transact. RFC -> In the Data Warehouse -> enter the ID and date -> execute.
Check whether there are entries there. Usually this queue gets stuck, and you need to execute the LUWs in the queue manually.
If the status says "Transaction recorded", you need to execute it manually.
Please revert if there are any issues.

Similar Messages

  • Data packet not yet processing in ODS load??

    Hi all,
    I got an error when I loaded data from the InfoSource to the ODS. Can someone let me know why and how to resolve it? Thank you in advance.
    Here is the error message in the monitor:
    Warning: data packets 1 & 2 arrived in BW; processing: data packet not yet processed.
    (No data packet numbers could be determined for request REQU_77H7ERP54VXW5PZZP5J6DYKP7)
    Processing end:
    Transfer rules (0 records): missing messages
    Update PSA (0 records): missing messages
    Update rules (0 records): missing messages

    John,
    I don't think it's a space problem. In ST22, open the dump and read the detailed note: what happened and how to correct it. That will help you solve the problem.
    Check note 613440 as well.
    Note: 647125
    Symptom
    A DYNPRO_FIELD_CONVERSION dump occurs on screen 450 of the RSM1 function group (saplrsm1).
    Other terms
    DYNPRO_FIELD_CONVERSION, 450, SAPLRSM1
    Reason and Prerequisites
    This is caused by a program error.
    The screen contains unused, hidden fields/screen elements that are too small for the screen check that was intensified with the current Basis patch (kernel patch 880). These fields originate in the 4.0B period of BW 1.0 and are never used.
    Solution
    Depending on your BW system release, you must solve the problem as follows:
    BW 3.0B
               Import Support Package 14 for 3.0B (BW 3.0B Patch 14 or SAPKW30B14) into your BW system. This Support Package will be available when note 571695, with the short text "SAPBWNews BW 3.0B Support Package 14", which describes this Support Package in more detail, is released for customers.
    BW 3.1 Content
               Import Support Package 8 for 3.1 Content (BW 3.10 Patch 08 or SAPKW31008) into your BW system. This Support Package will be available when note 571743, with the short text "SAPBWNews BW 3.1 Content Support Package 08", is released for customers.
    The dump occurs with the invisible G_NEW_DATUM date field on the bottom right of the screen, which is only 1 byte long and can be deleted.
    You can delete the following unused fields/screen elements:
    %A_G_NEW_NOW    Selection field group
    G_NEW_ZEIT      Input/output field
    G_NEW_UNAME     Input/output field
    G_NEW_DATUM     Input/output field
    %#AUTOTEXT021   Text field
    G_NEW_NOW       Selection button
    G_NEW_BATCH     Selection button
    You can delete these fields/screen elements because they are not used anywhere.
    This deletion does not cause any problems.
    After you delete the fields/screen elements, you must also delete the following rows in the flow logic in screen 450:
    FIELD G_NEW_DATUM           MODULE DOKU_NEW_DATUM.
    FIELD G_NEW_ZEIT            MODULE DOKU_NEW_ZEIT.
    The function group is then syntactically correct.
    Unfortunately, we cannot provide an advance correction.
    The aforementioned notes may already be available to provide information in advance of the Support Package release. However, in that case the short text still contains the words "preliminary version".
    For more information on BW Support Packages, see note 110934.
    Thanks
    Ram

  • Arrived in BW Processing: Data packet not yet processed

    Hello Gurus,
    Data is being loaded from an export DataSource (8RL*****) to 2 data targets.
    The overall QM status is red (says processing is overdue).
    The details tab shows:
    Extraction: Error occurred
    Transfer (IDocs and TRFC): Error occurred
    Processing (data packet): Error occurred
       -- Transfer rules: Error occurred
         Transaction data received. Processing being started
         Missing message (35676 records): Transfer rules finished
      Update rules (0 records): Missing messages
      Update (0 new / 0 changed): Missing messages
      Processing end: Missing messages
    I checked the LUWs in SM58 but didn't find anything.
    I checked the InfoSource (PSA): one request is RED, but when I check the data in the PSA there are no errors.
    What should I do in this case?
    What might be the exact error?
    Kindly inform.
    Regards,
    NIKEKAB

    Hi,
    Check the DataSource in RSA3. If it works fine and you can see the data in RSA3, there is no problem at the DataSource level; then check the mappings and any routines in BW for that DataSource. If these are also fine, check the options below.
    Check for dumps in ST22 and the system log in SM21.
    Check the RFC connection between the ECC and BW systems: RSA1 -> Source Systems -> right-click on the source system -> Check.
    The BWREMOTE or ALEREMOTE user must have the following profiles, so add them; one of these two users is used in the background to extract the data from ECC:
    S_BI-WHM_RFC, S_BI-WHM_SPC, S_BI-WX_RFC
    Also check the following:
    1. Connections from BW to ECC and ECC to BW in SM59.
    2. Ports, partner profiles, and message types in WE20 in ECC and BW.
    3. Dumps in ST22 and SM21.
    4. If IDocs are stuck: on the details tab of the RSMO screen in BW, at the bottom, you can see the OLTP IDoc numbers. Take those IDoc numbers to ECC and check their status in WE05 or WE02. If they are in error, check the log; otherwise go to BD87 in ECC, enter the IDoc numbers, execute them manually, and refresh RSMO.
    5. Check for stuck LUWs in SM58 with User Name = * (star); select your LUW, execute it manually, and watch RSMO in BW.
    See this SDN thread:
    Re: Loading error in the production system
    Thanks
    Reddy

  • Popularity trend/usage report is not working in sp2013. Data was not being processed to EVENT STORE folder which was present under the Analytics_GUID folder.

    Hi
    I am working on a SharePoint migration project. We have migrated one SharePoint project from MOSS 2007 to SP2013. The issue is that when we click Popularity Trends > Usage Report, it throws an error.
    Issue: The data is not being processed to the EVENT STORE folder, which is present under the Analytics_GUID folder. Also, data is not present in the Analytics Store database.
    In the log viewer I found the errors below.
    HIGH -
    SearchServiceApplicationProxy::GetAnalyticsEventTypeDefinitions--Error occured: System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly
    secured fault was received from the other party.
    UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
    System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    HIGH - Getting Error Message for Exception System.Web.HttpUnhandledException
    (0x80004005): Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly secured fault was received from the other party.
    CRITICAL - A failure was reported when trying to invoke a service application:
    EndpointFailure Process Name: w3wp Process ID: 13960 AppDomain Name: /LM/W3SVC/767692721/ROOT-1-130480636828071139 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-
    UNEXPECTED - Could not retrieve analytics event definitions for
    https://XXX System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
    System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    I have verified a few things on the server, as mentioned below:
    Two timer jobs (Microsoft SharePoint Foundation Usage Data Processing, Microsoft SharePoint Foundation Usage Data Import) are running fine.
    The AppFabric Caching service has been started.
    The Analytics_GUID folder has been shared with WSS_ADMIN_WPG and WSS_WPG, and Read/Write access was granted.
    .usage files are getting created, and the temporary (.tmp) file has also been created.
    The usage logging database for usage data is being transported; the data is available.
    Please provide pointers on what needs to be done.

    Hi Nabhendu,
    According to your description, my understanding is that you could not use popularity trend after you migrated SharePoint 2007 to SharePoint 2013.
    In SharePoint 2013, the analytics functionality is a part of the search component. There is an article for troubleshooting SharePoint 2013 Web Analytics, please take a look at:
    Troubleshooting SharePoint 2013 Web Analytics
    http://blog.fpweb.net/troubleshooting-sharepoint-2013-web-analytics/#.U8NyA_kabp4
    I hope this helps.
    Thanks,
    Wendy Li
    TechNet Community Support

  • Data is not getting replicated to the destination DB.

    I have set up Streams replication on 2 databases running Oracle 10.1.0.2 on Windows.
    I followed the Metalink document's steps for setting up one-way replication between two Oracle databases using Streams at the schema level.
    I entered a few records in the source DB, but the data is not getting replicated to the destination DB. Could you please guide me on how to analyse this problem to reach a solution?
    Steps for configuration, as followed from the Metalink document:
    ==================
    Set up ARCHIVELOG mode.
    Set up the Streams administrator.
    Set initialization parameters.
    Create a database link.
    Set up source and destination queues.
    Set up supplemental logging at the source database.
    Configure the capture process at the source database.
    Configure the propagation process.
    Create the destination table.
    Grant object privileges.
    Set the instantiation system change number (SCN).
    Configure the apply process at the destination database.
    Start the capture and apply processes.
    Section 2 : Create user and grant privileges on both Source and Target
    2.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN;
    2.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;
    In 10g, additionally:
    execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
    2.3 Create streams queue :
    connect STRMADMIN/STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    /
    Section 3 : Steps to be carried out at the Destination Database PLUTO
    3.1 Add apply rules for the Schema at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_APPLY',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    /
    3.2 Specify an 'APPLY USER' at the destination database:
    This is the user who would apply all DML statements and DDL statements.
    The user specified in the APPLY_USER parameter must have the necessary
    privileges to perform DML and DDL changes on the apply objects.
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    /
    3.3 Start the Apply process :
    DECLARE
    v_started number;
    BEGIN
    SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
    FROM DBA_APPLY WHERE APPLY_NAME = 'STRMADMIN_APPLY';
    if (v_started = 0) then
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    end if;
    END;
    /
    Section 4 :Steps to be carried out at the Source Database REP2
    4.1 Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace.
    It is a good practice to create an alternate tablespace for the LogMiner
    tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
    MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    /
    4.2 Turn on supplemental logging for DEPT and EMPLOYEES table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk(deptno) ALWAYS;
    ALTER TABLE scott.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP dep_pk(empno) ALWAYS;
    Note: If there are many tables, supplemental logging can be set at the
    database level.
    4.3 Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN
    CREATE DATABASE LINK PLUTO connect to
    STRMADMIN identified by STRMADMIN using 'PLUTO';
    Test the database link to be working properly by querying against the
    destination database.
    Eg : select * from global_name@PLUTO;
    4.4 Add capture rules for the schema SCOTT at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'CAPTURE',
    streams_name => 'STREAM_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    /
    4.5 Add propagation rules for the schema SCOTT at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'SCOTT',
    streams_name => 'STREAM_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@PLUTO',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    /
    Section 5 : Export, import and instantiation of tables from
    Source to Destination Database
    5.1 If the objects are not present in the destination database, perform
    an export of the objects from the source database and import them
    into the destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each
    individual object at a particular system change number (SCN).
    exp USERID=SYSTEM/manager@rep2 OWNER=SCOTT FILE=scott.dmp
    LOG=exportTables.log OBJECT_CONSISTENT=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate
    information in the destination database corresponding to the SCN that
    is recorded in the export file.
    imp USERID=SYSTEM@pluto FULL=Y CONSTRAINTS=Y FILE=scott.dmp IGNORE=Y
    COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    5.2 If the objects are already present in the destination database, there
    are two ways of instantiating the objects at the destination site.
    1. By means of Metadata-only export/import :
    Specify ROWS=N during Export
    Specify IGNORE=Y during Import along with above import parameters.
    2. By manually instantiating the objects:
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@source
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    /
    Instantiate the objects at the destination database with
    this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure
    controls which LCRs for a table are to be applied by the
    apply process. If the commit SCN of an LCR from the source
    database is less than or equal to this instantiation SCN,
    then the apply process discards the LCR. Else, the apply
    process applies the LCR.
    connect STRMADMIN/STRMADMIN@destination
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    SOURCE_SCHEMA_NAME => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn => &iscn );
    END;
    /
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Note: In 9i, you must instantiate each table individually.
    In 10g, the recursive => true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN
    can be used for instantiation.
    Section 6 : Start the Capture process
    begin
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
    end;
    /

    I have the same problem: data is not replicated.
    It is captured and propagated from the source, but not applied.
    There are also no apply errors in DBA_APPLY_ERROR. It looks like the problem is that the LCRs propagated from the source DB do not reach the target queue. Can I get any help on this?
    The queried results are as follows:
    1. At source (capture process):
    Capture Process   Session ID   Session Serial   State               Total Redo Entries Scanned   LCRs Enqueued
    CP01              16           7                CAPTURING CHANGES   1010143                      72
    2. Data propagated from source:
    Total Time Executing (seconds)   Total Events Propagated   Total Bytes Propagated
    7                                13                        6731
    3. Apply at target (nothing is applied):
    Coordinator Process   Session ID   Session Serial   State      Total Trans Received   Total Trans Applied   Total Apply Errors
    A001                  154          33               APPLYING   0                      0                     0
    4. At target (nothing in buffer):
    Queue Owner   Queue Name      Captured LCRs in Memory   Spilled LCRs   Total LCRs in Buffered Queue
    STRMADMIN     STREAMS_QUEUE   0                         0              0
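    Since the capture and propagation counters move while the apply side receives nothing, a useful next step is to query the Streams dictionary views for propagation and capture errors. A minimal sketch, assuming the queue and process names used above (exact column availability varies slightly by release):

    -- Run as STRMADMIN on the source database (REP2).
    -- Delivery failures to the destination queue show up in the queue schedule:
    SELECT qname, destination, failures, last_error_msg
      FROM dba_queue_schedules
     WHERE qname = 'STREAMS_QUEUE';

    -- Confirm the capture process is enabled and has no recorded error:
    SELECT capture_name, status, error_message
      FROM dba_capture;

    -- On the destination (PLUTO), confirm the apply process is enabled:
    SELECT apply_name, status, error_message
      FROM dba_apply;

    A non-zero FAILURES count (after 16 consecutive failures the propagation schedule is disabled) together with LAST_ERROR_MSG usually points to the database link or the destination queue as the culprit.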

  • IDOCS Error - Not getting processed automatically

    Hi All,
    We are loading hierarchy for a Product from R/3 system using the standard datasource.
    When we execute the info package, IDOCs are not getting processed automatically.
    We are facing the below error message.
    Error when updating Idocs in Business Information Warehouse
    Diagnosis
    Errors have been reported in Business Information Warehouse during IDoc update:
    No status record was passed to ALE by the applicat
    System Response
    Some IDocs have error status.
    Procedure
    Check the IDocs in Business Information Warehouse . You do this using the extraction monitor.
    Error handling:
    How you resolve the errors depends on the error message you get.
    But when we go to BD87 and process the IDocs manually, they get posted and the hierarchy loads.
    Can someone please guide me on what the issue with the IDocs is and how to make them post automatically?
    Thanks in Advance
    Regards,
    Sachin

    Hi,
    This happens due to non-updated IDocs in the source system, i.e., it occurs whenever LUWs are not transferred from the source system to the destination system. If you look at the status tab in RSMO, the error message will read something like "tRFC Error in Source System", "tRFC Error in Data Warehouse", or simply "tRFC Error", depending on the system from which the data is being extracted. Sometimes IDocs also get stuck on the R/3 side because no processors were available to process them. The solution is to execute the LUWs manually: from RSMO, go to the menu Environment -> Transact. RFC -> In the Source System, which asks you to log on to the source system. The status text for stuck LUWs may be "Transaction recorded" or "Transaction waiting". Once you encounter this type of status, execute the LUWs manually using F6 or Edit -> Execute LUWs (F6). Keep refreshing until you get the status "Transaction is executing" in the production system. You can also see the stuck IDocs in transaction BD87. The data will then be pulled into BW.
    Hope it helps a lot.
    Thanks and Regards,
    Kamesham

  • IDOC status 64 not getting processed.

    Hi Gurus,
    I have a problem where IDocs with status 64 are not getting processed via the background job. The IDocs were being processed by the background job until today evening, but suddenly, although no changes were made, the processing of IDocs with status 64 stopped and they keep piling up. When I checked the log of the background job, it says finished, and the log has the following information:
    17.09.2009 19:05:23 Job started                                                                   00           516          S
    17.09.2009 19:05:23 Step 001 started (program RBDAPP01, variant POS_IDOCS, user ID SAPAUDIT)      00           550          S
    17.09.2009 19:05:25 No data could be selected.                                                    B1           083          I
    17.09.2009 19:05:25 Job finished                                                                  00           517          S
    Kindly advise on this.
    Regards,
    Riaz.

    When I process the IDocs using BD87, they get processed without any issues; no dump is displayed, and each IDoc gets status 53 after processing via BD87.
    There is another problem, however: other jobs scheduled to run in the background besides this one are getting cancelled, with two different error logs for different jobs, as follows.
    1) 18.09.2009 02:00:06 Job started                                             00           516          S
    18.09.2009 02:00:06 Logon not possible (error in license check)             00           179          E
    18.09.2009 02:00:06 Job cancelled after system exception ERROR_MESSAGE      00           564          A
    2) 18.09.2009 00:55:27 Job started                                                          00           516          S
    18.09.2009 00:55:27 Step 001 started (program RSCOLL00, variant , user ID JGERARDO)      00           550          S
    18.09.2009 00:55:29 Clean_Plan: Started by RSDBPREV                                DB6PM         000          S
    18.09.2009 00:55:29 Clean_Plan: Finished                                           DB6PM         000          S
    18.09.2009 01:00:52 ABAP/4 processor: DBIF_RSQL_SQL_ERROR                                00           671          A
    18.09.2009 01:00:52 Job cancelled                                                        00           518          A
    This is a production server, the license is valid until 31.12.9999, and there are no issues with the license.

  • Error "Data packet not complete; for example, 000013" when reading from PSA

    Dear all,
    I have an issue with my data loading. When I execute my InfoPackage, I specify that the data should be loaded into the PSA before going to the data target. In my InfoPackage request I notice that I have missing data packets,
    e.g.:
    Data package 1 : everything OK
    Data package 2 : everything OK
    Data package 3 : everything OK
    Data package 8 : everything OK
    Data package 10 : everything OK
    Data package 12 : everything OK
    Data package 13 : everything OK
    What happened to the data packets in between, e.g. DP4, DP5, etc.?
    In my "step by step analysis" under the status tab, everything is green and nothing seems to have gone wrong: no short dump, no cancelled job, nothing.
    Since everything "seems" to be all right, when I subsequently try to update to the data target, I get the following errors:
    Data packet not complete; for example, 000013
    Request has errors / is incomplete
    I have 2 questions here:
    1. Why do I have missing data packets while the indicator still shows green?
    2. How can I solve this, and is it something I should be worried about?
    Thank you very very much!

    Hi SCHT,
    You encounter these types of errors rarely.
    In the RSMO screen, check for any stuck tRFCs.
    Also go to the job in the source system and analyze the job log; there you can find some information about what that particular job did.
    After that, check the same in the BW system.
    Perhaps some records were missed while transferring/extracting them through data packages.
    Force the request to RED and repeat the InfoPackage.
    You don't need to worry about anything; simply force the request red and reload. Just let us know if the problem still persists after repeating the load.
    Regards
    Prince

  • Data is not getting displayed in the report from an Infoset.

    Hi All,
    I have a report based on an InfoSet. The report displays data in the Dev environment. When it is transported to QA, it does not display data in BEx or in RSRT in the QA environment. The patch levels of Dev and QA are the same, and the queries are also the same in Dev and QA.
    When I try to display the data from the InfoSet (right-click -> display data), I am able to view the data in QA.
    Could anyone please suggest why the data is not displayed in the Query Designer?
    Thanks & Regards,
    A.V.N.Rao

    Hi Ashish,
    I ran the "ZPS/!ZPS" in RSRT where ZPS is the infoset name. In Dev, it displayed the values. In QA, it displayed the below messages:
    ECharacteristic 0TCAKYFNM does not exist. Check authorizations
    WThere are calculated elements. These results are bracketed [  ]
    and below that, it displayed the values for Number of records. But, it has not displayed the values for the other figures.
    Does this has any impact in QA.
    Thanks & Regards,
    AVN Rao.

  • Data is not getting updated in table using RFC

    Hi Experts,
    In my scenario, I am calling an RFC using an RFC receiver channel. After running the scenario, the channel shows the status that the RFC executed successfully. But when I check the tables in the R/3 system, the data is not updated.
    Moreover, when we tried to execute the RFC manually in the R/3 system, the data was uploaded into the table successfully.
    Could anybody tell me the reason the data is not uploaded into the table when we send it through XI?
    Regards,
    Sari

    Hi Sari,
    As you have a scenario with an RFC receiver, and as you mentioned that it does not update the tables when run through PI but the tables do get updated when you execute the RFC manually, here are the options you can check:
    -- Check the RFC communication channel. If everything is OK there, your RFC is being triggered successfully; since the tables are still not updated, go to SXMB_MONI, take the payload after mapping from the log, and compare it with the input you use when executing manually. Most likely the manual input and the input PI passes to the RFC differ, and that difference is causing the problem. You will be able to see the difference in the input; the problem is the data, not the RFC communication channel.
    -- Otherwise, if possible, configure your own user ID in the RFC receiver channel in PI and set a breakpoint on the ABAP side, so that when PI hits the RFC you get it in debug mode and can see what is going wrong.
    Thanks,
    Bhupesh

  • Data is not getting refreshed in the dashboard

    Hi, I have implemented Xcelsius on top of a SAP BI system.
    I have the following architecture:
    SAP BI cube --> BO XI 3.1 universe based on the cube --> QaaWS on the universe --> Xcelsius dashboard on the QaaWS.
    When I created a dashboard and ran it, it worked fine and showed me the data lying in the cube.
    But when the data is updated in the cube and I refresh the dashboard again, it still shows me the old data; the data is not refreshed in the dashboard.
    What could be the possible cause?
    Does QaaWS always fetch the data at run time?
    Thanks

    The QaaWS web service call needs to be invoked to get data updates from the data source. Xcelsius provides several options to control how frequently web service connections are called/refreshed ('refresh on load', 'refresh every X seconds', or trigger cells).
    You should also keep in mind that, since QaaWS hits the data source each time the web service is called, this can cause performance issues (each web service call makes the WebI server run the corresponding query, which performs a refresh). To mitigate this, since XI 3.0, QaaWS features a cache mechanism that stores the data results from each query refresh (corresponding to a web service call) and serves them again more efficiently if the same request is performed within a specific time interval (which typically corresponds to dashboard interactions).
    Cache sessions are sorted according to the user names and prompt values used; the cache is emptied after a default duration without any interaction (request from the same session).
    The cache timeout duration is set for each QaaWS query and can be tuned in QaaWS Designer when modifying/creating the query: go to the Advanced... button on the first step of the query edition wizard; the cache lifetime corresponds to the timeout value (in seconds) displayed on the Advanced parameter panel (the default is 60 seconds).
    Cache lifetime may also explain why you do not see refreshed data (if you are on QaaWS XI 3.0 or later).
    Hope that helps,
    David.

  • Data is not getting posted to R/3 from CRM

    Hi to All,
    When I create a sales order in CRM, the data is not replicated to R/3. The system shows that the BDoc is created, posted, and validated. The outbound queue is generated in the CRM system, but there is no data in the inbound queue in R/3.
    Can anybody tell me the reason behind this?
    Regards,
    Anurag

    Hi,
    Could you please check whether the R/3 site in SMOEAC has the subscription "All Business Transactions (MESG)"? If not, please create a subscription for the R/3 site.
    If this doesn't work please update the post.
    Hope this helps.
    Thanks,
    Karuna.

  • Data is not getting updated in DB table

    hi all
    I am doing an IDoc-to-JDBC scenario.
    I am triggering the IDoc from R/3, and the data goes into a DB table.
    Sender side:    ZVendorIdoc
    Receiver side:
    DT_testVendor
      Table
        tblVendor
          action   UPDATE_INSERT
          access   1:unbounded
            cVendorName  1
            cVendorCode  1
            fromdate     1
            todate       1
          Key
            cVendorName  1
    If I trigger the IDoc for, say, vendors 2005, 2006, and 2010, the data is updated in the table.
    But if I trigger the IDoc again for the same vendor numbers, the data does not get updated in the DB table, although the message is successful in both MONI and RWB.
    Please suggest whether any change needs to be made to update the data.
    Regards
    sandeep sharma

    Hi Ravi
    You are right, vendor number is my key field. The problem is that when I send the data again, it should update the data, but it is not updating it.
    I did an experiment with this: I deleted all the records from the table and then triggered the IDoc for vendors 2005, 2006, and 2010; after this, the data was updated in the table. I then deleted the rows for vendors 2006 and 2010 and kept the row for vendor 2005.
    Then I again triggered the IDoc for vendors 2005, 2006, and 2010. Now this should update the row for 2005 and insert rows for vendors 2006 and 2010, but I am surprised it is not updating the data.
    Thanks
    sandeep
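    For reference, with the action UPDATE_INSERT the JDBC receiver first issues an UPDATE using the elements listed under Key, and only falls back to an INSERT when the UPDATE matches zero rows. A minimal SQL sketch of what this amounts to, using the tblVendor structure above with hypothetical sample values:

    -- What UPDATE_INSERT amounts to for one record (sample values are made up):
    UPDATE tblVendor
       SET cVendorCode = '2005',
           fromdate    = '2009-01-01',
           todate      = '9999-12-31'
     WHERE cVendorName = 'VENDOR_A';   -- the Key element

    -- Only if the UPDATE matched zero rows does the adapter issue:
    INSERT INTO tblVendor (cVendorName, cVendorCode, fromdate, todate)
    VALUES ('VENDOR_A', '2005', '2009-01-01', '9999-12-31');

    Note that a resend with unchanged values still counts as a successful UPDATE (the row is rewritten with the same data), so MONI and RWB report success even when nothing visibly changes in the table.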

  • Data is not getting posted in ABAP Proxy.

    Hi,
    I am working on a File-to-ABAP-proxy scenario. The data is not reaching the proxy.
    In PI the status in SXMB_MONI is successful, and in the R/3 SXMB_MONI the status is also successful, but the data is not posted into the tables (ABAP proxy). I have checked the inbound and outbound queues. With the same input data, the ABAPers checked their code in the ABAP proxy, and the table does get updated; when we trigger it from PI, the tables are not updated. Please help me.
    Thanks,
    Pragathi.

    Hi,
    The problem may be one of the following:
    Case 1: The cache is not getting refreshed (check SXI_CACHE).
    Case 2: The queue is blocked (check SMQ1 & SMQ2).
    Regards,
    Sainath

  • Table PA0045 is getting updated but data is not getting displayed in Infty

    Hi,
    When I save entries for any employee in infotype 0045, table PA0045 is updated with the record.
    But for a few users who created the record, the data is not displayed in one tab of the tabstrip, while the rest of the tabs are filled with data. This is the case for a few users only.
    Regards,
    Jyoti

    Check whether there is proper data to be shown in the tabstrip,
    e.g. Loan Amount Granted in the Basic Data tab.
