Entire scenario of how the data is processed

Hi,
I need the full scenario in detail: when the sender adapter picks up the file from the source directory, how is the data passed to the Integration Server? How is it handed to the Adapter Engine, and how does the Adapter Engine process it? How is the data sent to the Adapter Framework, and which steps does the Adapter Framework perform? At which step are the audit logs maintained, and how are messaging, logging and queuing done in the AFW? After processing in the Adapter Engine, how is the data passed to the Integration Engine, how are the pipeline steps executed, and how is the data transferred to the receiver?
In short: all the other steps performed while sending the data from the sender system to the receiver system, how the data is processed internally, where the audit log is maintained, etc.

Hi,
Please see the links below; they should help you a lot:
http://help.sap.com/saphelp_nw2004s/helpdata/en/fd/16e140a786702ae10000000a155106/content.htm
/people/siva.maranani/blog/2005/05/25/understanding-message-flow-in-xi
http://help.sap.com/saphelp_nw2004s/helpdata/en/6a/a12241c20af16fe10000000a1550b0/content.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/e4/6019419efeef6fe10000000a1550b0/content.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/327dc490-0201-0010-d49e-e10f3e6cd3d8
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/34a1e590-0201-0010-2c82-9b6229cf4a41
Regards
Chilla

Similar Messages

  • My performance is very slow when I run graphs. How do I increase the speed at which I can do other things while the data is being updated and displayed on the graphs?

    I am doing an acquisition and displaying the data on graphs. When I run the program it is slow. I think this is because I have the number of scans to read tied to my scan rate: it takes the number of seconds I want to display on the chart times the scan rate and feeds that into the number of samples to read at a time from the AI Read. The problem is that it stalls until the data points are acquired and displayed, so I cannot click or change values on the front panel until the updates occur on the graph. What can I do to help this?

    It may also be your graphics card. LabVIEW can max out the CPU and your
    screen may not be refreshing very fast.
    --Ray
    "There are very few problems that cannot be solved by
    orders ending with 'or die.' " -Alistair J.R Young

  • How much data is being used when i mirror my ipad to my tv through the apple tv?

    how much data is being used when i mirror my ipad to my tv through the apple tv?

    wwcswapmeet wrote:
    I will try that. Thank you. I am hoping to buy a wifi hotspot, but wondered what type of data plan I should get if I were to use it consistently for 8 hours a day (only on Saturdays and Sundays). A great tool for work & presentations!!
    It will use minimal internet bandwidth unless your content is streaming from web>iPad>AppleTV. 
    Local network comms for content already on the iPad (eg presentations) will not generally use internet bandwidth apart from protected iTunes material that requires brief internet authorisation.
    AC

  • Error RCIRAS0546, failure occurred while the report was being processed

    Hello all,
    We use Crystal Reports 2008 V1 server on Linux. With 2 reports we now get this error when viewing the report from the Central Management Console (CMC):
    "Your request could not be completed because a failure occurred while the report was being processed. Please contact your system administrator. [RCIRAS0546]"
    In /var/log/messages, I get messages like these:
    Sep 28 17:37:30 vsrv01 boe_crprocd[25630]: A failure occurred while the server was processing report file 2:11154 (RCIRAS0568)
    Sep 28 17:37:40 vsrv01 boe_crprocd[25630]: A failure occurred while the server was processing report '10. LEN Rapportage per scenario (kapitaallasten, investeringen, algemene gegevens) (V1)' (id=11154) for user 12 (RCIRAS0567)
    I tried several parameters on the CrystalReportsProcessingServer, like increasing/decreasing the Maximum Current Jobs and the Number of Prestarted Children. None of these made a difference. Also, the error occurred when viewing the report as Administrator.
    The report has a dynamic parameter, with content coming from the database (drop-down list). When selecting one value the report displays fine; with a selection of more values the error occurs. In Crystal Reports 2008 itself there is no problem.
    I restarted the CRProcessingServer with -trace. It seems that several subprocesses start to retrieve data based on the parameter values, and after the crash of the first child the above-mentioned error occurs. In the trace I see the following lines:
    2010/09/28 15:37:23.075|==| | |25630|1474829200| |||||||||||||||(ProcWorkerManager.cpp:82) PageChildDesc constructor (id=3)
    2010/09/28 15:37:23.075|==| | |25630|1474829200| |||||||||||||||(ProcWorkerManager.cpp:5489) doCreateChild() created a new child 3
    2010/09/28 15:37:30.011|==| | |25630|1474562960| |||||||||||||||(ProcWorkerManager.cpp:6814) cleanupChildren() starting
    2010/09/28 15:37:30.011|==| | |25630|1474562960| |||||||||||||||[ProcWorkerManager.cpp : 6854]  RAS-CORE-METRICS  (before cleanup) number of child processes = 3
    2010/09/28 15:37:30.011|==| | |25630|1474562960| |||||||||||||||(ProcWorkerManager.cpp:6979) child id=1 crashed
    2010/09/28 15:37:30.011|==| | |25630|1474562960| |||||||||||||||(ProcWorkerManager.cpp:6994) cleanupChildren() removing child id=1
    2010/09/28 15:37:30.012|==| | |25630|1474562960| |||||||||||||||(ProcWorkerManager.cpp:101) PageChildDesc destructor (id=1)
    2010/09/28 15:37:30.012|==| | |25630|1474562960| |||||||||||||||(ProcWorkerManager.cpp:7028) cleanupChildren() marking as stopped: worker id=1 in child id=1
    2010/09/28 15:37:30.012|==| | |25630|1474562960| |||||||||||||||(ProcWorker.cpp:2703) stopping worker id=1
    2010/09/28 15:37:30.012|==| | |25630|1474562960| |||||||||||||||(ProcWorkerManager.cpp:7168) cleanupChildren() ending
    Does anyone know this problem and what to do about it?
    With kind regards,
    Pim van Stam
    SvSnet

    We got this error message on a report that had 5 subreports, 3 of which were based on stored procedures. The report was running fine in our Dev environment and in the CR developer, but not when we published it to another environment. The problem was caused because the stored procedures had been changed in Dev (so that they ran correctly) but these changes had not been released to the other environment. Once the scripts were run to update the stored procedures, the report ran successfully. So it appears that the problem was that the stored procedure(s) the subreports were using were failing, but we only got the RCIRAS0546 error message.

  • How the data is fetched from the cube for reporting - with and without BIA

    hi all,
    I need to understand the scenario below (how the data is fetched from the cube for reporting):
    I have a query on a MultiProvider connected to two cubes, say A and B. A has a BIA index, B does not. There are no aggregates created on either cube.
    CASE 1: I have taken the RSRT stats with BIA on; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
    Cube B             | E          | 42.002653  | 1,669,126         | 6
    Cube B             | F          | 98.696442  | 2,426,006         | 6
    CASE 2: I have taken the RSRT stats with the BIA index disabled; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube B             | E          | 46.620825  | 1,669,126         | 6
    Cube B             | F          | 106.148337 | 2,426,030         | 6
    Cube A             | E          | 61.939073  | 3,794,113         | 3,499
    Cube A             | F          | 90.721171  | 4,293,420         | 5,584
    Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the records being transported?

    Hi,
    Yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274).
    What you can do to get more details about the selection is to activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
    Regards,
    Jens

  • How the data is fetched from the cube for reporting

    hi all,
    I need to understand the scenario below (how the data is fetched from the cube for reporting):
    I have a query on a MultiProvider connected to two cubes, say A and B. A has a BIA index, B does not. There are no aggregates created on either cube.
    CASE 1: I have taken the RSRT stats with BIA on; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
    Cube B             | E          | 42.002653  | 1,669,126         | 6
    Cube B             | F          | 98.696442  | 2,426,006         | 6
    CASE 2: I have taken the RSRT stats with the BIA index disabled; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube B             | E          | 46.620825  | 1,669,126         | 6
    Cube B             | F          | 106.148337 | 2,426,030         | 6
    Cube A             | E          | 61.939073  | 3,794,113         | 3,499
    Cube A             | F          | 90.721171  | 4,293,420         | 5,584
    Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the records being transported?

    Hi Jay,
    Thanks for sharing your analysis.
    The only reason I can think of logically is that BWA holds the information from both the E and F tables in one place, and hence after selecting the records it is able to aggregate and transport the records to OLAP.
    In the second case, since the E and F tables are separate, the aggregation might be happening in OLAP, and hence you see a larger number of records.
    Our experts in the BWA forum might be able to answer in a better way if you post this question over there.
    Thanks,
    Krishnan

  • Report to show me how many invoices are being processed by users

    Hi, 
    I am currently working as an Accounts Payable Supervisor and I would like to run a report to show me how many invoices are being processed by the users in my team.
    Currently I am using transaction F.98 (Posted Documents by User report), but this report includes postings from intercompany, so all users will have duplicate invoices showing on this report.
    Is there any other report I could run to get the result I am after?
    Many thanks
    Alex

    Hello Alex,
    You can also use the G/L line item report (FBL3N) on the offsetting accounts (for example the GR/IR account) that are posted along with the invoice. You can display the results with the user name and also create a layout of your own which can be used time and again.
    Kind Regards // Shaubhik
    Edited by: Shaubhikg on Nov 10, 2010 6:02 AM
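
    If a quick cross-check outside the standard reports is acceptable, a rough count of posted documents per user can also be read straight from the document header table BKPF. The sketch below is only an illustration under assumptions: the company code, fiscal year and the document types used for vendor invoices (KR, RE here) are example values that depend on your configuration.
      DATA: BEGIN OF ls_cnt,
              usnam TYPE bkpf-usnam,   "user who posted the document
              cnt   TYPE i,            "number of documents
            END OF ls_cnt.
      DATA: lt_cnt LIKE STANDARD TABLE OF ls_cnt.
      "Count posted documents per user; restrict to your own document types
      SELECT usnam COUNT( * )
        FROM bkpf
        INTO TABLE lt_cnt
        WHERE bukrs = '1000'              "example company code
          AND gjahr = '2010'              "example fiscal year
          AND blart IN ('KR', 'RE')       "example vendor invoice document types
        GROUP BY usnam.
      LOOP AT lt_cnt INTO ls_cnt.
        WRITE: / ls_cnt-usnam, ls_cnt-cnt.
      ENDLOOP.
    Intercompany postings could still be filtered out, for instance by excluding the document types or company codes used for those cross-company postings.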

  • Delivery Document has been created - showing the status of Being Processed.

    Hi All.
    I am facing the following typical problem.
    1. Sales order has been created - showing the status of COMPLETED.
    2. Delivery Document has been created - showing the status of Being Processed.
    3. Goods Issue document created successfully- showing status as COMPLETE
    4. Billing Document has created successfully - showing the status as COMPLETED.
    5. Customer Account got updated properly.
    Query: Despite the completion of the SO, delivery, goods issue and billing document - and the accounting document being cleared - why is the outbound delivery document status showing as BEING PROCESSED?
    If I go to transaction VL03N and look at the document flow, it shows:
    Delivery 5080789885 being processed.
    If I go to VF03, the document flow also shows:
    Delivery 5080789885 being processed.
    Aditya

    Hi ALL,
    Further to my query I would like to clarify the following:
    This has been happening since 2004: all BOM component items show the status 'Delivery being processed', while the header status of all BOM main items shows Completed.
    It is not impacting the business, but suddenly a user raised the query: the SO, delivery, goods issue and billing document are completed, so why is the delivery still showing as being processed?
    1. I have checked at SO level -- all the line items were delivered, i.e. delivery, goods issue and billing are completed.
    2. I checked the header-level data: it is completed, while the item-level data is being processed.
    3. If I look into each item, it is also showing as being processed.
    4. Even the SALES ORDER HEADER LEVEL shows the status Completed.
    5. I removed the credit check in the delivery.
    The only issue is the OUTBOUND DELIVERY DOCUMENT: its status alone is showing as 'Being Processed' -- surprisingly!
    Aditya.
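
    If it helps the analysis, the raw status flags behind the 'Being processed' text can be read from the classic delivery item status table VBUP (WBSTA = goods movement status, FKSTA = billing status, GBSTA = overall processing status). This is only a quick inspection sketch, not a fix; the delivery number is the one from this thread and the exact status fields can vary by release.
      DATA: lt_vbup TYPE STANDARD TABLE OF vbup,
            ls_vbup TYPE vbup.
      SELECT * FROM vbup
        INTO TABLE lt_vbup
        WHERE vbeln = '5080789885'.       "delivery from this thread
      LOOP AT lt_vbup INTO ls_vbup.
        "item / goods movement / billing / overall status
        WRITE: / ls_vbup-posnr, ls_vbup-wbsta, ls_vbup-fksta, ls_vbup-gbsta.
      ENDLOOP.
    Any item whose overall status GBSTA is not 'C' (completed) is what keeps the whole delivery in 'Being processed'; status correction reports such as SDVBUK00 exist for inconsistent documents, but check the relevant SAP notes before running anything of that kind.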

  • Error: Your request could not be completed because a failure occurred while the report was being processed.

    Post Author: sagimann
    CA Forum: Deployment
    Hello,
    I'm not sure if this is due to a bad deployment, but I suspect it is, or at least due to a bad environment.
    My env is:
    Solaris 10 64 bit
    Oracle 10g client installed under /opt/oracle/app/oracle/product/10.2.0/client_1
    BOXIR2 + FP2.6 installed under /opt/reporting, running under user 'bouser'. By the way, 'tnsping' works for that user.
    Important bouser env:
    ORACLE_HOME=/opt/oracle/app/oracle/product/10.2.0/client_1
    TNS_ADMIN=$ORACLE_HOME/network/admin
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
    I reboot the box, then start BO like so:
    cd /opt/reporting/bobje
    . setup/env.sh
    ./mysqlstartup.sh
    ./startservers
    ./tomcatstartup
    Then I import a BIAR file with some reports, and whenever I try to open any report via InfoView, I get a gray error message INSTEAD of the input parameter form:
    Error:
    Your request could not be completed because a failure occurred while the report was being processed. Please contact your system administrator.
    This does not occur in our BOXIR2 on Windows, which is why I'm guessing it's a deployment/env issue. The main obstacle I have is: how do I diagnose this? There's nothing in the bobje/logging folder, and nothing in the /var/adm/messages file except for the following line:
    May  5 15:23:23 testbox1 boe_pagesd[3865]: [ID 253862 user.error] A failure occurred while the Page Server was processing report 'test report' (id=777) for user Administrator
    pls advise,
    thank you.

    The solution is to turn off the printer preference (bottom of Page Setup screen) in the Crystal Report before uploading to the Server. Thanks to: http://pinchii.com/home/2011/12/error-adding-reports-to-crystal-reports-server/
    The explanation, from a comment in the above linked blog post:
    "To elaborate more on the error, what basically happens is that the Crystal Reports engine tries to get the status of the u201Creport printeru201D aka u201Cdefault printeru201D which was present on the system when the report was created, but the printer does not exist anymore. This causes the Crystal engine to error out. By setting the report to u201CNo Printeru201D, it tells the Crystal Engine not to look for that report printer when opening the report."
    Edited by: abirdman on Feb 3, 2012 9:49 PM

  • How the data is entered  in the customized table

    Hi,
    In an implementation scenario, when we create a generic extraction, how is the data entered
    into the customized table if the data volume is large (around 5,000 records)?
    Regards,
    Vivek

    Hi Vivek,
    Follow the steps below:
    1. Go to RSO2.
    Choose the DataSource type from the following three:
    a). Transaction data
    b). Master data attributes
    c). Master data texts
    2. Specify the application component (SD/MM, ...).
    3. There are three extraction methods to fill the DataSource.
    4. Select the extraction method that extracts the data from a transparent table or database view.
    5. Select Extraction from View; then we have to create the view.
    a). Specify the view name.
    b). Choose the view type (database view) from the views mentioned below:
    i). Database view
    ii). Projection view
    iii). Maintenance view
    iv). Help view
    6. Specify the tables and join conditions and define the view fields.
    7. Assign the view to the DataSource.
    8. Once you specify the view in the DataSource, the extract structure will be generated.
    9. You can check the data in RSA3.
    Regards,
    Suman

  • What is RT , BT in HR Reporting ? How the Data is Populated into RT, BT ?

    Hi
    I am debugging an HR report (it uses an LDB). In that report 'RT' is used.
    What is the meaning of 'RT', and how is the data populated into RT?
    What is the meaning of 'BT', and how is the data populated into BT?
    Kindly clarify my doubts.
    Regards,
    N.L.

    Hi N.L.,
    1. These are related to payroll results.
    2. Whenever salary is processed, a vast and varied amount of information needs to be stored.
    3. Hence, SAP uses the concept of CLUSTER tables.
    4. When salary is processed, some wage types, amounts etc. are generated, i.e. RESULTS are generated. That table is named RT.
    5. In the same way, BANK TRANSFER data, i.e. bank code, name, amount etc., also needs to be stored. Its table is named BT.
    Similarly there are other tables as well, e.g. WPBP.
    6. Payroll data can be retrieved using macros and also using function modules.
    7. Below is the technique (mypernr, entl and cumul are assumed to be declared in the surrounding report):
      DATA: myseqnr LIKE hrpy_rgdir-seqnr.
      DATA: mypy    TYPE payin_result.
      DATA: myrt    LIKE TABLE OF pc207 WITH HEADER LINE.

      "Find the active (srtza = 'A') payroll result for the period
      SELECT SINGLE seqnr FROM hrpy_rgdir
        INTO myseqnr
        WHERE pernr = mypernr
          AND fpper = '200409'
          AND srtza = 'A'.

      IF sy-subrc = 0.
        "Read the payroll cluster for that sequence number ('IN' = India)
        CALL FUNCTION 'PYXX_READ_PAYROLL_RESULT'
          EXPORTING
            clusterid                    = 'IN'
            employeenumber               = mypernr
            sequencenumber               = myseqnr
          CHANGING
            payroll_result               = mypy
          EXCEPTIONS
            illegal_isocode_or_clusterid = 1
            error_generating_import      = 2
            import_mismatch_error        = 3
            subpool_dir_full             = 4
            no_read_authority            = 5
            no_record_found              = 6
            versions_do_not_match        = 7
            error_reading_archive        = 8
            error_reading_relid          = 9
            OTHERS                       = 10.
        IF sy-subrc = 0.
          "RT (results table) sits in the INTER(national) part of the result
          myrt[] = mypy-inter-rt.
          "Pick the wage type of interest from RT
          READ TABLE myrt WITH KEY lgart = '1899'.
          IF sy-subrc = 0.
            entl-cumbal = myrt-betrg.
            MODIFY entl.
            cumul = entl-cumbal.
          ENDIF.
        ENDIF.
      ENDIF.
    regards,
    amit m.

  • How the data populated into tables like USR01,USR02 etc

    Hi,
    I have one theoretical doubt: how is the data populated into tables like USR01, USR02 etc. after creating a
    user with SU01? Let me know the process behind it.
    Rgds,
    Chandra.

    Hi Chinna,
    When you create users using the SU01 or SU10 transaction codes, the system uses BAPI_USER_CREATE1, which updates the data in the respective tables.
    In the same way, BAPI_USER_CHANGE is used when you modify any existing users.
    Hope this answers!!
    Warm Regards,
    Raghu
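
    For illustration, a minimal sketch of calling that BAPI directly is shown below. The structure and field names (BAPILOGOND, BAPIPWD, BAPIADDR3, the RETURN table) are quoted from memory, so verify the interface of BAPI_USER_CREATE1 in SE37 before relying on it; the user name and password are example values only.
      DATA: ls_logondata TYPE bapilogond,
            ls_password  TYPE bapipwd,
            ls_address   TYPE bapiaddr3,
            lt_return    TYPE STANDARD TABLE OF bapiret2.
      ls_logondata-ustyp  = 'A'.            "dialog user
      ls_password-bapipwd = 'Init1234!'.    "initial password (example)
      ls_address-lastname = 'TESTUSER'.
      CALL FUNCTION 'BAPI_USER_CREATE1'
        EXPORTING
          username  = 'ZTESTUSER'
          logondata = ls_logondata
          password  = ls_password
          address   = ls_address
        TABLES
          return    = lt_return.
      "USR01/USR02 etc. are only updated once the LUW is committed
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
        EXPORTING
          wait = 'X'.
    The same pattern applies with BAPI_USER_CHANGE (together with its corresponding X change-flag structures) when modifying an existing user.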

  • What's "Please wait while the document is being processed" mean ?

    When I open a WebI report and click "Refresh Data", a "Please wait while the document is being processed" page pops up.
    Does it mean the data has been refreshed completely once that page disappears?

    When you hit the "Refresh Data" button the data will be fetched. If you have LOVs (lists of values) in the report, then all the LOV data will be fetched first and the report data is refreshed according to the LOV values. If you hit the Refresh Data button again without changing the LOV values, then the report data is fetched from the cache and refreshed from there.

  • Please wait  while the document is being processed

    warning...newbie question....
    We are using the Crystal Report viewer in an ASP.NET application.
    On the parameter panel there is an edit icon. If the user double-clicks it, the report viewer triggers the page to reload, then displays a tiny dialog box that says "Please wait while the document is being processed", but the report viewer never refreshes. The dialog box never goes away.
    1) why does the edit icon show up on all the parameters?
    2) what is it doing when someone double clicks it?
    3) how can i hide or disable it?
    any help would greatly be appreciated!!!

    Is this happening on your development computer or after you deploy the app?
    It happens both in dev and on the servers.
    Is this an app you wrote? (I don't recognize this error "Please wait while the document is being processed" as a Crystal reports error)
    no, it's a small dialog that the viewer is generating when the user clicks on the edit icon in the viewer
    If this is on a deployed system - how was the CR runtime deployed?
    Ugh, I grabbed the MSI that was referenced in product.xml.
    What is the OS?
    XP on dev, win2k3 on the server
    What is the database?
    MS SQL 2005
    What is the database connection method?
    ADO Client
    Are you changing the database connection information (e.g. new server, database)?
    When the report is instantiated we set the DB connections and the report works; it just kind of goes off to never-never land when the user clicks the edit icon in the report viewer.

  • How the data is stored in Info cube...in the back end what will happen???

    Hi Experts,
    How is the data stored in an InfoCube and a DSO, and what happens in the back end?
    I mean, a cube contains a fact table and dimension tables; how is the data stored, and what happens in the back end?
    Regards,
    Swetha.

    Hi,
    Please check :
    How is data stored in DSO and Infocube
    InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.
    An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube, as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.
    An InfoCube fact table only contains key figures, in contrast to a DataStore object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.
    The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.
    Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume. This is beneficial in terms of performance. This InfoCube structure is optimized for data analysis.
    The fact table and dimension tables are both relational database tables.
    Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.
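    To make the star schema concrete, the sketch below reads a fact table and resolves one characteristic through its dimension and SID tables. The names are purely illustrative for an assumed custom cube ZSALES (fact table /BIC/FZSALES, dimension table /BIC/DZSALES1, SID table /BI0/SMATERIAL, key figure 0QUANTITY); check the generated names of a real cube in SE11 before using anything like this.
      DATA: BEGIN OF ls_row,
              material TYPE /bi0/oimaterial,
              quantity TYPE /bi0/oiquantity,
            END OF ls_row.
      DATA: lt_rows LIKE STANDARD TABLE OF ls_row.
      SELECT s~material f~quantity
        INTO TABLE lt_rows
        FROM /bic/fzsales AS f              "fact table: dimension keys + key figures
        INNER JOIN /bic/dzsales1 AS d       "dimension table: DIMID -> SIDs
          ON f~key_zsales1 = d~dimid
        INNER JOIN /bi0/smaterial AS s      "SID table: SID -> characteristic value
          ON d~sid_0material = s~sid.
    This is essentially the navigation the data manager performs when answering a query: from the small dimension tables into the large fact table via the DIMIDs, which is why keeping dimensions small matters for performance.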
    http://help.sap.com/saphelp_nw04s/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/frameset.htm
    Check the threads below:
    Re: about Star Schema
    Differences between Star Schema and extended Star Schem
    What is the difference between Fact tables F & E?
    Invalid characters erros
    -Vikram
