Design for Request of Mass Data

Hi,
I would like to hear your recommendations for a special PI design:
I have a synchronous scenario
3rdParty (SOAP) <-> PI <-> SAP (ABAP proxy)
where I expect responses of some 30,000 data sets. Gathering these would take up to 30 minutes in the SAP system.
How can such a scenario be set up with the performance challenges taken into account?
I considered inserting a request "package" parameter to split the response into 10 parts and shift the responsibility for requesting the right part to the client, but I'm not very happy with that.
Do you have any other proposals?
Regards,
Udo

> Udo Martens wrote:
> How can such a scenario be set up with the performance challenges taken into account?
In similar scenarios, this is what we have done:
1. Ask the sending application to limit the number of records per call. Each call then carries less data, and your response time target can be met (see the sketch after this list).
2. What is the relevance of the response to the sender? Is it a mock acknowledgement saying that the receiver system has received the data, or do you need data back from the receiver system to be sent to the sender? If it is only an acknowledgement and the data in the response is not relevant, then use a sync-to-async bridge: create the response in PI itself and send it to the sender, while you make an asynchronous call to your ABAP proxy to push the data to R/3.
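
For illustration, here is a minimal ABAP sketch of the "package" parameter idea from the original post: the server-side proxy implementation returns only the requested slice, so each synchronous call stays small. All names (zmass_get_package, ztab, ztab_tt) are hypothetical; this is a sketch of the approach, not a definitive implementation.

    FORM zmass_get_package USING    iv_package_no   TYPE i
                                    iv_package_size TYPE i
                           CHANGING ct_result       TYPE ztab_tt.
      DATA: lv_max  TYPE i,
            lv_skip TYPE i.

      lv_max  = iv_package_no * iv_package_size.
      lv_skip = ( iv_package_no - 1 ) * iv_package_size.

      " Read up to the end of the requested package; the fixed sort
      " order keeps the paging stable across calls.
      SELECT * FROM ztab
        INTO TABLE ct_result
        UP TO lv_max ROWS
        ORDER BY PRIMARY KEY.

      " Drop the rows that belong to earlier packages.
      IF lv_skip > 0.
        DELETE ct_result FROM 1 TO lv_skip.
      ENDIF.
    ENDFORM.

A request for a package beyond the end of the data returns an empty table, which the client can treat as the end-of-data signal.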

Similar Messages

  • Recommended Design for WAAS in both Data center and Branch Offices

    Hi All,
    I need to purchase different appliances for WAAS, but before I decide what to purchase, I need to know exactly how I am going to place these devices, so that I know which ones to purchase and what the designs will look like.
    My environment is as follows:
    I have two core routers (ASR 1000 series) at the data center, two 6509 switches (where I expect to insert the ACE module and FW module), and access switches which connect the servers.
    At the branch offices, I am expecting to place ASR 1000 series routers as well.
    Now, I need to know the recommended designs for these WAAS appliances so that I know in advance what to purchase (i.e., how many WAAS CM, Core WAE, and Edge WAE devices).
    Any input will be highly appreciated.
    Thanks,

    If you purchase the Standard Edition, your license supports:
    One installation of Cisco Security Manager on one Windows-based server.
    The configuration or management of 5 devices (in the Standard-5 option) or 25 devices (in the Standard-25 option). This excludes Catalyst 6500 and 7600 Series devices and their associated service modules.
    If you purchase either the Standard-5 or Standard-25 license, you cannot purchase an incremental device license. Your license is fixed at either 5 or 25 devices.

  • Best method for state machine design for multi-dimensional clustered data

    I have an application that is collecting analog tag data (1000 points) and displaying it on a graph. In each VI, I may collect data for as many as 32 channels, but only one channel at a time. Usually this is collected in numeric order, but maybe not all the time. I am also saving an "expected profile" to a file for each channel. I display both expected and actual data on the graph and select which channel to view with a ring control. My question is about how to design the state machine for the best memory usage and execution speed. If I use a shift register with a cluster datatype that holds both 1000-point arrays plus some statistical data like max/min/pass/fail/test limits for 32 channels, won't all of the unbundle/bundle functions use a ton of memory? Right now I am writing the cluster data to memory tags for each channel. I use the ring control to determine which tag to read. I have the tags grouped by channel into several groups for actual data, expected data, results data, etc. What design method would work best: a shift register, queues and notifiers, or memory tags in the tag engine? I also use Citadel to read the max and average as a trend display on each channel and to select a specific 1000-point dataset to view if the max and average are out of limits. Does anyone have similar applications?

    It sounds as if you will be working with a lot of data here, so I can understand your concerns about memory management. If you were to unbundle and bundle data in your application, you are correct in saying that it will require more memory. It is somewhat difficult to get the overall picture of what you need to set up, but from what I can gather, you could have an array of 32 elements, each element a cluster of your data as well as the statistical data, and then index through that array to determine what to display on the graph. The shift registers will reuse the same memory over and over, so there should not be a problem there. Queues wouldn't necessarily help you in any way in this situation. Depending on how many memory tags you are using, you could increase the amount of memory required by Citadel and the tag engine, but if you need to write your cluster data to the database, then there is not much you can do about it. Ultimately it sounds like a pretty involved application that will require a fair amount of memory regardless, to ensure smooth functionality.
    In general, questions about LabVIEW architecture and memory management are typically better answered if they are posted to the general LabVIEW discussion forum. But I hope that this has addressed some of your concerns. Have a good day.
    Patrick R.

  • Value Mapping Replication for Mass Data

    Hi SDN,
    I have to design an interface in XI 3.0 (SP14 on AIX) for replicating mass data from file, database, etc. to the XI runtime cache.
    I went through the following weblog, which proved to be helpful: /people/sreekanth.babu2/blog/2005/02/23/value-mapping-replication
    ValueMappingReplicationOut is my asynchronous outbound interface and ValueMappingReplication is the asynchronous inbound interface, in the software component SAP BASIS, software component version SAP BASIS 6.40, namespace http://sap.com/xi/XI/System.
    I am aware that for this scenario the receiver is predefined (it must be on the Integration Server).
    My question here is how I use the outbound interface to send data to XI. In my case I have a file and SQL Server as the sender.
    Can somebody detail the steps involved? Suppose I don't want to use an ABAP proxy, a Java proxy, or the HTTP adapter for sending the message:
    do I have other options to send data to the XI value mapping interface?
    Regards,
    Vineet

    When you want to send data from a file, just map the structure of the arriving XML to ValueMappingReplication. There is no need to use the interface ValueMappingReplicationOut or an ABAP proxy.
    Regards
    Stefan

  • Report Slow Due to Mass Data, Solution for Performance Tuning

    Dear All,
    I am making a report with mass data, so I have to use FOR ALL ENTRIES IN and ranges in loads of places.
    1. FOR ALL ENTRIES IN makes my report run very slowly.
    2. If I replace FOR ALL ENTRIES with ranges and the number of records is large, the system throws a dump.
    If the number of records is large or the logic applied is complex, the report takes a very long time to
    execute. Can anyone suggest a method by which I can optimize my report and improve its
    performance?
    Thanks
    Ankesh Jindal

    Hi,
    >
    Ankesh Jindal wrote:
    > The problem is with FAE and ranges, according to what I have discovered up to now.
    > As I have mentioned, if I use FAE and the number of records is large, the execution takes
    > a very long time; because of that I changed FAE to ranges, but still, if the number of records
    > in the ranges is large, the system will throw a dump.
    >
    So far, so good: SQL statements must not get too large.
    >
    Ankesh Jindal wrote:
    > So the best solution for this, which I have considered for
    > my reports, is to use ranges, but with some logic applied, that is:
    > suppose I have 20,000 records, then send the data to the ranges in 3k or 4k lots.
    > In this case I am using ranges with 3k or 4k lots, so the system will not throw a dump and I get
    > faster execution of the query with ranges...
    >
    General question: How big is the runtime difference between your SELECT with ranges (4k) and the default FAE (5 in the case of Oracle)? More than 20%? Could you post your actual runtime figures?
    Your range approach is faster because you make fewer database calls: 5 DB calls if you have 4,000 entries per range.
    You can influence the number of database calls for the FAE as well.
    Assuming you are running on Oracle with a default configuration, you have 5 entries per call
    (parameters rsdb/max_in_blocking_factor, rsdb/max_blocking_factor). So you will end up
    with 4,000 DB calls of 5 records each.
    You compare that with 5 DB calls of 4,000 records each for your ranges... this is not fair.
    Hint your FAE with the following; this leads to 5 DB calls for the FAE as well:
        %_hints oracle '&max_in_blocking_factor 4000&'.
    Now compare again.
    Note 1: Be careful with big ranges and blocking factors; cost-based optimizers may react sensitively to big in-lists or OR concatenations and may change plans suddenly.
    Note 2: If you are not on Oracle, your blocking factors may be considerably higher (30, 60, ...).
    Kind regards,
    Hermann
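
    For illustration, here is a minimal ABAP sketch of how the hint attaches to a FOR ALL ENTRIES select. The table and field names (ztab, zkey, keyfield) are made up; only the hint syntax is taken from the post above:

        " Fill lt_keys with up to 4,000 key values, then let the hint send
        " them to the database in one call instead of the default Oracle
        " packages of 5.
        DATA: lt_keys   TYPE STANDARD TABLE OF zkey,
              lt_result TYPE STANDARD TABLE OF ztab.

        " Caution: FOR ALL ENTRIES with an empty driver table selects ALL
        " rows, so always check the driver table first.
        IF lt_keys IS NOT INITIAL.
          SELECT * FROM ztab
            INTO TABLE lt_result
            FOR ALL ENTRIES IN lt_keys
            WHERE keyfield = lt_keys-keyfield
            %_HINTS ORACLE '&max_in_blocking_factor 4000&'.
        ENDIF.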

  • Error "cannot load request real time data targets" for new cube in BI 7.

    Hi All,
    We have recently upgraded our SCM system from 4.1 to SCM 7.0, which incorporates BI 7.0.
    I am using BI 7.0 for the first time and have the following issue:
    I created a new InfoCube and a flat-file data source, and successfully created the transformation and the Data Transfer Process. Everything looked fine. I added the flat file, checked the preview, and could see the data. But when I start the job to load data into the InfoCube, the following error is shown: "cannot load request real time data targets".
    I checked the cube type in the InfoCube settings; it shows as Standard. When I double-clicked the error, the following message showed up:
    You are trying to load data into a real-time InfoCube using a DTP.
    This is only possible if the correct load settings have been defined for the InfoCube.
    Procedure
    In the object tree of the Data Warehousing Workbench, call Load Behavior of Real-Time InfoCube from the context menu of the InfoCube. Switch load behavior to Transactional InfoCube can be loaded; planning not allowed.
    I did not understand what this means or how to change the setting. Can someone advise and walk me through it?
    Thanks
    KV

    Hi Kverma,
    Real-time InfoCubes can be filled with data using two different methods: using the transaction for entering planning data, or using BI staging; planning data cannot be loaded simultaneously. With a real-time cube you can select the update method you want to use:
    Real-time data target can be loaded with data; planning not allowed, or
    Real-time data target can be planned; data loading not allowed.
    You can change this behaviour by right-clicking on the cube, selecting "Change real-time load behaviour", and choosing the first option. You will then be able to load the data.
    Regards,
    Kams

  • Customizing tables not asking for Customizing Request while saving data

    Hi,
    I have some customizing tables in my development server (Delivery Class = 'C').
    They always used to ask for a customizing request whenever data was saved in them.
    Suddenly, I have noticed they no longer ask for any customizing request. I cross-checked in the Transport Organizer and confirmed that there are no customizing requests of mine that may already be holding data entries for these tables.
    I wonder why this may be happening (I believe it to be some Basis configuration issue; I also asked my Basis guy, but he has no clue).
    Kindly suggest.
    Thanks,
    Z

    Thanks Navneet and Gautham.
    My problem is now solved. Let me summarize the problem and the solution now.
    -> The customizing tables suddenly stopped asking for a request while saving data.
        Somehow the settings had been reset and, as Gautham pointed out, this was corrected in tcode SCC4.
    -> Most of the tables then worked fine, but a few of them still didn't ask for requests.
        Here, I found out that the developers had chosen "no, or user, recording routine" instead of "standard recording routine". For such tables, when I check in tcode SE54, menu path Environment -> Maintenance Objects -> Change, I find the transport category 'No Transport'. Regenerating the maintenance generator with "standard recording routine" fixes this, and the tables now ask for a customizing request.
    Thanks, both, for the quick response.

  • Standard tcode for (mass) data change of internal orders or ... ??

    Hi!
    I really need some info on whether SAP by any chance has a standard transaction for mass data changes to internal orders (more particularly, the distribution rules in the settlement rule section, which can be found in transaction KO02).
    I am trying to change the distribution rules for the settlement receivers in the settlement rule section, that is, close past distribution rules by filling the TO PERIOD and TO FISCAL YEAR fields to the right of each rule, and then enter new rules (which I get from an external source: flat file, MS Excel, CSV...).
    If I wanted to import the data into SAP, I guess I would have to develop a batch input. But that would take me some time to develop because it is pretty complicated.
    I found tcode KO08, but I do not really know how to use it. Maybe there is another tcode that I am not aware of?
    I would appreciate any suggestions!
    Thnx, UK

    Hi Srilakshimi,
    If you are familiar with the MASS transaction, you can modify the User Responsible field of internal orders from transaction KOK2.
    As a first step you must create a selection variant to define which orders you want to modify. Once the selection variant is created, execute the transaction with it and you'll get a screen similar to the MASS transaction. Select the field you want and replace it massively. Do not forget to save.
    Best Regards!
    Mgitur

  • Creating an external content type for reading and updating data from two tables in SQL Server using SharePoint Designer

    Hi
    How do I create an external content type for reading and updating data from two tables in SQL Server using SharePoint Designer 2010?
    I created a BCS service using the Central Administration site.
    I have two tables in SQL Server:
    1)Employee
    -empno
    -firstname
    -lastname
    2)EmpDepartment
    -empno
    -deptno
    -location
    I want to create a list that displays employee details from the two tables:
    empid firstname deptno location
    and at the same time updates both tables.
    adil

    When I try to create an external content type based on a view (AdventureWorks2012.vSalesPerson), I can display the data in an external list. When I attempt to edit it, I get an error:
    External list fails when attached to a SQL view:
    Sorry, something went wrong
    Failed to update a list item for this external list based on the Entity (External Content Type) 'SalesForce' in EntityNamespace 'http://xxxxxxxx'. Details: The query against the database caused an error.
    I can edit the view in SQL Server Management Studio, so it seems strange that it fails.
    Any advice would be greatly, GREATLY appreciated.
    Thanks,
    Randy

  • Report design for hierarchical xml data

    I need to create a report that shows hierarchical XML data. I already have the XML saved to a database. How would I go about creating a design for such a report? Should I create groups on every parent with children? Any example?
    Thanks

    Hi markgoldin,
    I tested the issue on my local machine with the following steps:
      1. Created a table and stored the XML in it with the following query:
    -- table to hold the sample XML document
    CREATE TABLE xmlTbl (id INT, xmlVal xml);
    INSERT INTO xmlTbl VALUES(1,
    '<Customers>
    <Customer ID="11">
    <FirstName>Bobby</FirstName>
    <LastName>Moore</LastName>
    </Customer>
    <Customer ID="20">
    <FirstName>Crystal</FirstName>
    <LastName>Hu</LastName>
    </Customer>
    </Customers>');
     2. Created a stored procedure to retrieve data from the table with the following query:
    create procedure xml_report
    as
    DECLARE @xmlDoc XML;
    SELECT @xmlDoc = xmlVal FROM xmlTbl WHERE id=1;
    -- shred the XML: one result row per /Customers/Customer element
    SELECT T.c.value('(@ID)','int') AS ID,
    T.c.value('(FirstName[1])','varchar(99)') AS firstName,
    T.c.value('(LastName[1])','varchar(99)') AS lastName
    FROM @xmlDoc.nodes('/Customers/Customer') T(c)
    GO
      3. In the Report Data pane, right-click Data Sources and click Add Data Source.
      4. For an embedded data source, verify that Embedded connection is selected. From the Type drop-down list, select a data source type, for example Microsoft SQL Server or OLE DB. Type the connection string directly, or click Edit to open the Connection Properties dialog box and select the server name and database name from the drop-down lists.
      5. For a shared data source, verify that Use shared data source reference is selected, then select a data source from the drop down list.
      6. Right-click Datasets and click Add Dataset. Type a name for the dataset or accept the default name. In Data source, select the name of the existing shared data source, select StoredProcedure as the Query type, then select xml_report from the stored-procedure name drop-down list.
      7. In the Toolbox, click Table, and then click on the design surface.
      8. Drag the dataset fields (ID, firstName, lastName) to the cells in the table.
    For more information about how to use the xml data type methods, please refer to the following document:
    http://msdn.microsoft.com/en-us/library/ms190798.aspx
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu

  • Value Mapping Replication for Mass Data - Performance Issues

    Hi All,
    We are looking into Value Mapping Replication for Mass Data. We have done this for a small number of fields.
    Now we might have to keep 15,000 records in the cache for the value mapping. I am not sure how this would affect the Java cache and the Java engine as a whole.
    There might be a situation where we will have to leave the 15K records in the cache table on the Java engine...
    Are there any parameters we can look into just to see how this affects performance?
    Any links/guidance in the right direction would help me.
    reg

    Naveen,
    Check Jin's reply in this thread (they have done it with the API, and without the API using graphical mapping, but still with some issues):
    Value mapping performance using LookUp API
    ---Satish

  • Anyone worked on Value Mapping Replication for Mass Data

    hi all,
    Is there anyone who has worked on the Value Mapping Replication for Mass Data stuff?
    What is this?
    Page 139 of the doc below talks about registering:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/627d1cbc-0601-0010-aea2-c275521673f2
    Can anyone explain or guide me in that direction?
    reg,

    Naveen,
    Have a look at the weblog /people/sreekanth.babu2/blog/2005/02/23/value-mapping-replication
    Refer to the below discussion too:
    Re: How to edit roles for specific tasks in IR & ID?
    Best regards,
    raj.
    Message was edited by:
            Raj

  • How do I AUTOMATICALLY generate Transport request for a Z table data?

    Hello friends,
    I would like to automatically create/generate a transport request for a Z table every time I change, add, or delete data in that table.
    Currently, every time I modify data in the Z table, I need to go and manually create a transport request and then indicate the keys of the row(s) which were modified. I find this a tedious, time-consuming, and error-prone process.
    So I would like to automate this process; that is to say, the transport request is generated for the rows that I modified, the same way as is done when you change a program: it automatically prompts you for a transport request.
    Any ideas?
    Your help is greatly appreciated.

    You need to make the table contents transportable by creating a table maintenance generator in SE11 under Utilities -> Table Maintenance Generator.
    In that screen, in the section "Dialog Data Transport Details", the "Standard recording routine" option should be selected.
    Only then will the system ask for a transport request when entering data through the generated maintenance view in SM30.
    Also change the "Delivery Class" in the table attributes to 'C' (customizing table).

  • Solution for mass data import to VB02

    hello,
    do you have any procedure for mass data import into the VB02 exclusion list for the ZSD1 3rd list key
    (sales department / distribution channel / material / customer)?
    We don't want to do that manually; it is more than 600 records.
    Do you have any idea how to import it, any easier way?

    Hi,
    you can import the records with either LSMW or a BAPI, whichever you are comfortable with.
    LSMW has 14 steps, as I hope you know.
    thanks
    surya

  • Alert: The backup operation for the cluster configuration data has been canceled due to an abort request

    Hello,
    Alert: The backup operation for the cluster configuration data has been canceled due to an abort request
    Alert description: The backup operation for the cluster configuration data has been canceled. The cluster Volume Shadow Copy Service (VSS) writer received an abort request.
    This is the VSS backup which is sending this alert every morning.
    Event ID 1544.
    All the fixes I found have been applied:
    KB2277439 has already been applied,
    978527 is there too,
    975921 is there too.
    Any other IDs?
    "Cluster Node /Status" shows both nodes A and B up.
    The error occurs only on node A.
    Any idea?
    Thanks,
    Dom
    System Center Operations Manager 2007 / System Center Configuration Manager 2007 R2 / Forefront Client Security / Forefront Identity Manager

    Hi,
    Which backup software do you use to do the backup? Please also try to apply these hotfixes on the cluster:
    A transient communication failure causes a Windows Server 2008 R2 failover cluster to stop working
    http://support.microsoft.com/kb/2550886
    The network location profile changes from "Domain" to "Public" in Windows 7 or in Windows Server 2008 R2
    http://support.microsoft.com/kb/2524478
    Recommended hotfixes and updates for Windows Server 2008 R2 SP1 Failover Clusters
    http://support.microsoft.com/kb/2545685/EN-US
    Regards,
    Mandy
