How to handle large payloads in WCF services?

Hi All,
I have developed a WCF service that takes a list of employee IDs and returns employee details.
The service works fine with 500 employee IDs, but when we receive a request with 5000 employee IDs the service takes a long time to complete.
Inside the method I am doing the following operations:
1) For each employee ID I call a stored procedure, get the employee object, and add it to a list of employees,
looping through all the employee IDs and constructing the list of all the employee objects.
Could you please suggest the best/fastest way to retrieve the employee details for more than 5000 employee IDs?
2) Can we use multithreading here? If possible, give me an idea.
Thanks
Amar

You can try using a table-valued parameter to a stored procedure.
Basically:
Create a stored procedure that takes the list of employee IDs as a table parameter.
Generate the schemas using the WCF Adapter Wizard.
Map the request message of employee IDs to the SQL request message.
In the stored procedure, select the employees by JOINing to the table parameter. In the SP, the table parameter behaves just like a physical table.
Return the result set of all employees in whatever format you need.
This way, SQL Server does the heavy data operation in one shot instead of many individual queries.
Here are two articles that describe how to use table-valued parameters:
http://msdn.microsoft.com/en-us/library/dd787869.aspx
http://connectedcircuits.wordpress.com/2011/08/17/using-sql2008-data-table-type-biztalk-to-insert-parent-child-rows/
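
If your WCF method talks to SQL Server directly through ADO.NET rather than through the BizTalk adapter, the same idea looks roughly like the sketch below. The table type dbo.EmployeeIdList and the stored procedure dbo.GetEmployeesByIds are invented for illustration; substitute whatever you create on the SQL side.

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public static class EmployeeLookup
    {
        // Put all IDs into a DataTable whose shape matches the SQL table type.
        static DataTable BuildIdTable(IEnumerable<int> ids)
        {
            var table = new DataTable();
            table.Columns.Add("EmployeeId", typeof(int));
            foreach (int id in ids) table.Rows.Add(id);
            return table;
        }

        public static DataTable GetEmployees(string connectionString, IEnumerable<int> ids)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.GetEmployeesByIds", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                SqlParameter p = cmd.Parameters.AddWithValue("@Ids", BuildIdTable(ids));
                p.SqlDbType = SqlDbType.Structured;   // marks the parameter as a TVP
                p.TypeName = "dbo.EmployeeIdList";    // the user-defined table type on the SQL side
                var result = new DataTable();
                conn.Open();
                new SqlDataAdapter(cmd).Fill(result); // one round trip for all 5000 IDs
                return result;
            }
        }
    }

On the SQL side the procedure JOINs the @Ids parameter to the Employee table, exactly as described above, so the whole lookup becomes a single set-based query instead of 5000 stored procedure calls.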

Similar Messages

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle a large result set from a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter. If I don't specify it, it returns only 100 records in the query result by default. If I specify it, the result is limited to that number.
    Is there any way to get around this row count issue? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data... in a grid, a chart, or a direct-connected link to the brain.
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on the data of relevance, rather than forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model so that you only deal with one time "slice" per request (e.g. an hour, a day, etc.) and provide a scrolling window; this is commonly how large datasets are dealt with in applications. A time-sliced query might look like the sketch below.
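    A minimal sketch of such a time-sliced query (the table and column names are invented for illustration):

        using System;
        using System.Data.SqlClient;

        class TimeSlicePager
        {
            // Fetch one time "slice"; the UI advances the window to scroll.
            static void FetchSlice(string connectionString, DateTime from, DateTime to)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT EventTime, Value FROM dbo.Measurements " +
                    "WHERE EventTime >= @from AND EventTime < @to " +
                    "ORDER BY EventTime", conn))
                {
                    cmd.Parameters.AddWithValue("@from", from);
                    cmd.Parameters.AddWithValue("@to", to);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}",
                                reader.GetDateTime(0), reader.GetDouble(1));
                }
            }
        }

    Each request then returns only one hour's (or one day's) worth of rows, regardless of how large the underlying table is.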
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick

  • How to properly pass credentials from one WCF service to another?

    I have a client application that calls a WCF service and passes client credentials like so:
    OrderService.OrderServiceClient sc = new OrderService.OrderServiceClient();
    sc.ClientCredentials.UserName.UserName = userName;
    sc.ClientCredentials.UserName.Password = password;
    The service uses custom authentication and validates the user/pass combination against my SQL db. All is well.
    Now I have another WCF service, let's say Customer. In the OrderService I'm adding a service reference to the CustomerService, and in the middle of one of the OrderService methods I want to call a CustomerService method. How can I pass the credentials? Please note that the Customer and Order services ARE on the same IIS server. I am also using custom authentication.
    It looks like I can access the username from the ServiceSecurityContext like so:
    ServiceSecurityContext.Current.PrimaryIdentity.Name
    Do I then have to make a trip back to the DB to get the password to send with the CustomerService call, since it is not otherwise available? Seems silly... Any help is much appreciated. Thanks.
    Should I be looking into impersonation?

    Hi BBauer42,
    Based on your description, your scenario is Client -> OrderService -> (service reference) CustomerService, so it seems that you want to implement something like double-hop authentication. I think you could try impersonation. Impersonation is a common technique that WCF services use to assume the original caller's identity in order to authorize access to service resources. To use impersonation, we need to do some configuration on both the service and IIS sides; for more information, please refer to the following articles:
    #Delegation and Impersonation with WCF:
    https://msdn.microsoft.com/en-us/library/ms730088(v=vs.110).aspx
    #WCF: Learning Impersonation:
    http://blogs.msdn.com/b/saurabs/archive/2012/07/16/wcf-learning-impersonation.aspx
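    As a minimal sketch of the service-side opt-in those articles describe (the operation and contract names are invented, and this applies to Windows credentials; with a custom UserName validator you would typically flow the caller's identity yourself, e.g. in a message header):

        using System.Security.Principal;
        using System.ServiceModel;

        public class OrderService : IOrderService   // IOrderService is your existing contract
        {
            // Run the operation under the original caller's identity, so the
            // nested CustomerService call is made as that caller rather than
            // as the service account.
            [OperationBehavior(Impersonation = ImpersonationOption.Required)]
            public void PlaceOrder(int customerId)   // hypothetical operation
            {
                var client = new CustomerService.CustomerServiceClient();
                // The first hop must allow delegation for the identity to
                // reach the second service:
                client.ClientCredentials.Windows.AllowedImpersonationLevel =
                    TokenImpersonationLevel.Delegation;
                client.GetCustomer(customerId);      // hypothetical method
            }
        }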
    Best Regards,
    Amy Peng

  • BizTalk Tracking Profile Editor not tracking the data and how to implement the Orchestration as wcf service over SSL

    Hi Ashwinprabhu,
    thank you very much for your answer.
    I have one more query. I have an orchestration published as a WCF service in IIS, and internally the orchestration calls one more service; that is, the orchestration sends a request and gets a response back from that service.
    We are implementing a copy of that called service through a BizTalk orchestration, for automation and for tracking failed messages and network failures.
    But the Tracking Profile Editor is not tracking the data.
    We also need to expose the HTTP service as HTTPS (over SSL). We implemented it in IIS using a self-signed certificate, and it works when browsing for the WSDL (in a browser), but we are not able to test the service in the WCF Test Client; it gives a WSDL error, and the schema references in the WSDL show HTTP only.
    Please help me resolve the issue.
    Teegala

    First things first, I think it's best to publish only schemas as a WCF service, for dependency management reasons. That said, WSDL availability is covered in the WCF adapter under the behaviors. If you're using WCF-BasicHttp this may be hard to modify, but WCF-Custom allows you to add the WSDL behavior and specify that it should be available via HTTPS.
    As to the BAM: are you using TPE within the orchestration or at the port level? I'd imagine your TPE tracks the start and end events of your orchestration using the Orchestration Schedule. If you're fairly confident that the TPE is correct and yet don't see BAM data, 1) make sure your SQL Agent is running healthily and all jobs look OK, and 2) check the TDDS tables in both the MessageBox and the BAMPrimaryImport databases. These will show you if there has been some sort of sync issue. There's even a TDDS errors table, so check that out.
    Kind Regards,
    -Dan
    If this answers your question, please Mark as Answer

  • How to handle large heap requirement

    Hi,
    Our application requires a large amount of heap memory to load data into memory for further processing.
    The application is load balanced, and we want to share the heap across all servers so that one server can use another server's heap.
    Server1 and Server2 have 8 GB of RAM and Server3 has 16 GB of RAM.
    If a request comes to Server1 and it requires more heap memory to load data, can Server1 use Server3's heap memory in this scenario?
    Is there any mechanism/product which allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?
    Thanks,
    Atul

    user13640648 wrote:
    Is there any mechanism/product which allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?

    That isn't how you design it (based on your brief description).
    For any transaction A you need a set of data X.
    For another transaction B you need a set of data Y which might or might not overlap with X.
    The set of data (X or Y) is represented by discrete hunks of data (form is irrelevant) which must be loaded.
    One can preload the server with this data or do a load on demand.
    Once in memory it is cached.
    One can refine this further with alternative caching strategies that define when loaded data is unloaded and how it is unloaded.
    JEE servers normally support this in a variety of forms. But one can custom code it as well.
    JEE servers can also replicate cached data across server instances. Custom code can do this but it is more complicated than doing the custom caching.
    A load balanced system exists for performance and failover scenarios.
    Obviously in a failover situation a "shared heap" would fail completely (as asked about) because the other server would be gone.
    One might also need to support very large data sets. In that case something like Memcached (google for it) can be used. There are commercial solutions in this space as well. This allows for distributed caching solutions which can be scaled.
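    The load-on-demand caching described above is language-agnostic; here is a minimal sketch in C# (the names are illustrative - a JEE cache or a Memcached client plays the same role in Java):

        using System;
        using System.Collections.Concurrent;

        public class DataCache<TKey, TValue>
        {
            private readonly ConcurrentDictionary<TKey, Lazy<TValue>> cache =
                new ConcurrentDictionary<TKey, Lazy<TValue>>();
            private readonly Func<TKey, TValue> loader;

            public DataCache(Func<TKey, TValue> loader)
            {
                this.loader = loader;
            }

            // Load on first request, serve from memory afterwards. Lazy<T>
            // ensures the expensive load runs only once even when several
            // requests ask for the same key concurrently.
            public TValue Get(TKey key)
            {
                return cache.GetOrAdd(key, k => new Lazy<TValue>(() => loader(k))).Value;
            }
        }

    A cache built as new DataCache<int, Record>(LoadFromStore) then hits the backing store only on the first request for each key; an eviction/unload strategy would be layered on top, as described above.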

  • How to get IDoc Status from WCF Service Adapter?

    I have successfully transferred a strongly-typed IDoc to SAP using a BizTalk WCF service adapter. I did so based on this example: [http://msdn.microsoft.com/en-US/library/cc185231(v=BTS.10).aspx]. We are using the IDoc to create a purchase order. The IDocClient.Send(idoc, ref guid); method returns only a GUID. I can translate that to a transaction ID using the SapAdapterUtilities. When I look at the IDoc in WE02, I can see that it has status information which includes the purchase order number. Is there any way for me to capture this status information in my .NET application when I send the IDoc? Can I query the transaction ID? Or do I just need to get the last PO number through an RFC?
    Scott

    I ended up using the SAP .Net Data Provider in the Biztalk Adapter Pack to retrieve the PO Number.  Essentially when you pass the IDoc you won't get anything in return unless the file transfer fails.
    Scott

  • How to handle large images?

    Hi,
    Does anyone know how to handle big JPEG images (1280*960) so that they can be presented in a MIDlet?
    The problem is that the images require so much memory that they can't be decoded to an Image object with the Image.createImage method. One solution would be to extract the thumbnail image from the EXIF headers. Unfortunately, images taken with a Nokia 6680, at least, don't contain a thumbnail in the EXIF headers.
    So the only solution seems to be to decode the byte representation of the image and resize it before creating an Image object.
    Does anybody know of a library for this, or have tips on where to start?
    Br, Ilpo

    Hi,
    I think it is not possible. My application contains a file browser (which uses JSR-75). The user can use the browser to select an image either from phone memory or from the memory card. After the selection I would like to present the selected image so that the user can be sure it is the right one. The selected image will then be sent to the server side with some additional data for further processing (but that is another story).
    Now the problem is that, for example, with a Nokia 6680 the user can take images as big as 1280*960 and I can't present them any more because of the memory restrictions. With a 640*480 image there is no problem, because I can create an Image object and then use a simple algorithm to resize the image for presentation.
    Br, Ilpo

  • How to handle large data in file adapter

    We have a scenario Proxy -> PI -> File Server using the file adapter.
    The file adapter uses FCC for conversion.
    Recently we had wave 2 products go live, and suddenly the message volume for this interface increased, due to which the file adapter is not performing well; PI slows down or frequently disconnects from the file server. As a result we either get duplicate records in the file or the file format created is wrong.
    The file size is somewhere around 4.07 GB, which I also think is quite high for PI to handle.
    Can anybody suggest how we can handle such large data?
    Regards,
    Vikrant

    Check this Blog for Huge File Processing:
    Night Mare-Processing huge files in SAP XI
    However, you can take a look also to this Blog, about High Volume Messages:
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    PI Performance Tuning Best Practice:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • How to send large files using web service

    hello everyone,
    I am new to this forum, so please pardon me if I post a silly problem...
    I have created a service which sends a file when the client (JSP) requests it. I am using JBoss as my server. The purpose of this application is that when the client requests some file, the service sends it, and most of the time we need to send only PDFs and PPTs.
    The problem is that this service sends .txt and .java files easily, of any size, but when I tried sending PDF or PPT files I got an xml.SAXParseException.
    I thought this error was caused by some characters, but how do I fix it?
    I am working on Linux.
    code snippet is:
    import java.io.FileInputStream;

    public class MyHelloService {
        public String file_size(String name) {
            String s = "";
            try {
                System.out.println("name received is: " + name);
                FileInputStream in = new FileInputStream(name);
                int size = in.available();
                System.out.println("file size is: " + size);
                byte[] sendata = new byte[size];
                int read = in.read(sendata);
                in.close();
                // NOTE: turning raw bytes into a String corrupts binary formats
                // such as PDF/PPT and can produce invalid XML on the wire, which
                // is the likely cause of the SAXParseException; binary content
                // should be Base64-encoded instead.
                s = new String(sendata, 0, read);
            } catch (Exception e) {
                System.out.println("exception in jws: " + e);
                s = "nofilefounderror";
            }
            return s;
        }
    }
    Please tell me what I am doing wrong and how to fix it.
    And one more thing: can I send a byte array from a web service? I tried but couldn't, so I am reading everything into a single byte array and then converting it to a String.
    Is it possible to send the file in chunks? If yes, how?
    Waiting for the reply... please reply as soon as possible.
    Rashi

    Hi,
    I am sending a file from the server to the client, i.e. the client requests a file and the service sends it back; there is no socket connection. I am using JBoss and Apache Axis.
    Please help me out.
    Rashi

  • Payload Streaming (for handling large payload) in Oracle JCA Adapter for AQ

    Hi All-
    The Oracle documentation indicates that payload streaming is supported in the Oracle JCA Adapter for AQ. Link: http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10231/adptr_aq.htm#CBAIAABF
    However, when I tried configuring an AQ adapter in JDeveloper, I was not able to see the check box for enabling payload streaming.
    Do we have to manually update the .jca file to add the property "EnableStreaming" to the AQ adapter activation spec? Is it supported, and is it going to work?
    What is the message size limit that the AQ adapter can handle?
    Please let me know.
    Thanks,
    Dibya

    If the StreamPayload property does not exist, then the default value false is assumed.
    <activation-spec className="oracle.tip.adapter.aq.inbound.AQDequeueActivationSpec">
    <property name="QueueName" value="RAW_IN_QUEUE"/>
    <property name="DatabaseSchema" value="SCOTT"/>
    <property name="StreamPayload" value="true"/>
    </activation-spec>
    You can add <property name="StreamPayload" value="true"/> to the .jca file, but remember that this property is applicable when processing RAW messages, XMLType messages, and ADT type messages for which a payload is specified through an ADT attribute.

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius

  • How to handle a large number (7200+) of identical HorizontalRules

    I have an application where performance is becoming an issue. I have 30 VBoxes that each contain identical HorizontalRules; the number of
    HorizontalRules is unknown until render time but can easily be up to 240 per VBox. As there are 30 VBoxes, this results in 7,200 HorizontalRules being added to the stage, which leads to large memory consumption and poor rendering time on lower-specification machines. To speed this up I have tried using Sprites and graphics.draw to render the lines, but I think it is the time taken to create the objects that is the problem.
    Is there any way to create one HorizontalRule and add it to the stage multiple times? I know that Flex will remove the child if it is added again with a different [x,y] coordinate so my attempts to do that failed.
    Thanks for any suggestions.

    The VBoxes are typically about 20 pixels wide and each has a line about 4 or 5 pixels apart all the way down their length. That is why there are so many of the little blighters.
    You are probably right, but I had a problem with graphics.draw in that the lines had to be added as rawChildren. The component is resizable and I found it impossible to move the drawn lines. I could not get a good reference to them, even when I stored each line in a separate Array before adding it as a rawChild, and so I could not even delete them reliably. I know that the rawChildren and children/elements have things at different indexes, but I didn't manage to find a way to use that information.
    You have given me the idea that I shouldn't draw them all on the screen at once, though. With the full 7,200 lines, only about 30% are on the screen; I will look into using an item renderer. Is that the correct way to do it?
    Does anyone know how to reliably access rawChildren so they can be moved? Maybe my app is just a little weird. I will try building a test case and see if I can add and remove from the rawChild list freely in a simpler application.

  • How to handle large data while acquisition? BNC 2110

    I want to acquire data using a BNC-2110. I am writing software in VB 6. We will use 3 channels. We are supposed to scan about 10,000 points before AcquiredData is triggered. In all we will need to scan 10,000 * 1,000 * 1,000 points before the data is put into a binary file. Can anybody let me know how to handle this large number of points?

    Hello Vjuno,
    In order to acquire 10,000,000,000 points you are going to have to stream the data to your hard drive as you go. To do this you'll need to write the data you read to a file on each loop iteration. In general it is good practice to make your "samples to read" at least 10% of your sample rate, so the buffer is drained several times per second and doesn't overflow; however, depending on your computer you may be able to go faster. I made an example program in LabVIEW and was able to read 10,000 points at a time from each of 3 analog inputs at 333 kHz and write the values to file without overflowing a buffer. However, even opening a web browser while the code was running was enough to delay the VI long enough for the buffer to overflow.
    You can use the DAQmx Configure Input Buffer call to increase the buffer size and account for spikes in CPU usage from other processes, and you should also monitor the "Available Samples Per Channel" property to make sure you aren't steadily gaining samples in your buffer.  Since you want to acquire 10 billion samples at 1MHz this acquisition will take several hours; if you're not able to keep the buffer empty then it will become apparent before the end of your acquisition.  By monitoring the samples in the buffer you can tell if you're pulling the samples out fast enough, if you find that this number is steadily increasing then you should either reduce the sample rate or increase the number of samples to read each time you call the DAQmx Read.
    In my example program I used a write to TDMS (binary) file and a PCI-6251.
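    In outline, that stream-to-disk loop looks like the sketch below (ReadChunk is a placeholder for the actual driver read, which would block until enough samples are available in the device buffer):

        using System.IO;

        class StreamToDisk
        {
            const int Chunk = 100_000;                 // "samples to read": ~10% of a 1 MHz rate
            const long TotalSamples = 10_000_000_000;  // the 10 billion points from the question

            // Placeholder for the DAQ read; a real program would call the
            // NI-DAQmx driver here.
            static double[] ReadChunk() => new double[Chunk];

            static void Main()
            {
                long written = 0;
                using (var writer = new BinaryWriter(File.Create("acquisition.bin")))
                {
                    while (written < TotalSamples)
                    {
                        double[] data = ReadChunk();   // drain the device buffer...
                        foreach (double sample in data)
                            writer.Write(sample);      // ...and stream it to disk every iteration
                        written += data.Length;
                    }
                }
            }
        }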
    I hope this helps, and have a good night.
    Cheers,
    Brooks

  • How to handle large library, limited data plan

    I've been using iTunes Match for about six months now, and I'm having problems...
    I live rurally, so I use an AT&T hotspot with a 10 GB/month data plan for my phone, iPad, and desktop Mac. My wireless connection speed is pretty good.
    I have about 15,000 tracks in my iTunes library, most not purchased through iTunes.
    When I signed up for Match and went through the initial process, it took about 24 hours and used up about six gigabytes, which caused a significant overage in my data plan. Consequently, I just use Match on my Mac and decided not to use it on my other devices because of the data limitations. My motivation for using it currently is as a backup for my music. I connect my phone to my Mac manually to transfer some of my music to my phone.
    I figured that the data overuse problem was a one-time deal if I didn't use my other devices.
    But recently, when I purchase a song from eMusic or Amazon, the iCloud processing image pops up after the download purchase is complete. iTunes will then start the process of sending data to Amazon, which was still going at 4 hours yesterday when I manually stopped it as I saw my data plan being used up. It seemed to restart the sending process periodically and never got to the analyzing or returning-data stages.
    Now my recently purchased music shows up with both the iCloud processing symbol and a faded iCloud download icon, one for each separate track. I can play the "processing" track, but iTunes won't allow me to add it to a playlist. Further, if I'm online with my hotspot, sometimes the regular iTunes icon is visible, but at other times the icon with the thunderbolt through it. I'm guessing this means it's streaming with the first icon and playing from my hard drive with the second.
    So a couple of questions, and sorry for the length; I'm a first-time support user:
    1) Does it make sense to use Match with such a large library and a limited data plan?
    2) "Where" is the music I've purchased - on my computer or in the cloud?
    3) Should it take all day to send data to iTunes when the only updates to my library are songs I've deleted and a handful of new albums I purchased?
    4) Am I using up my data plan when the regular iCloud icon appears and I'm online? Is there a way to manually play from the hard drive to reduce use of my data plan? I turned off Match once and paid with a half-day sending and receiving session when I turned it back on.
    5) Will I lose music if I unsubscribe from Match?
    Thanks to anybody who has the inclination to respond to any of these questions!

    This datasource sends After images in delta loads which is compatible with loading to a Std DSO only. You cannot load from this datasource directly to a cube.
    You can check with business the number of years they would need history. If they need for 5 years, you could delete data older than that or Archive.
    For GL, there would not be any changes to years that are closed for posting. There may be adjustments carried out for the previous fiscal year. I guess there would not be any changes to years prior to that. So archiving old data should not affect delta.

  • How to handle a large number of threads?

    hello friends,
    I wrote a program that digs through all files and folders to find a given file name.
    In this logic I create one thread per directory, and that thread lists all the sub-files and directories and matches the search string; if a subdirectory is found, it creates a new thread, and so forth.
    The problem is that after running the program, at some point it has created around 3,000 threads and the system goes down.
    So what should the solution be?
    I would appreciate suggestions for good thread scheduling that gives better system performance.
    logic code is given below:
    import java.io.File;

    // data and Filechooserpanel are the poster's own helper classes.
    class DiggtheFiles implements Runnable {
        private File currentdir = null;
        private String search = "";
        private File files[] = null;

        DiggtheFiles(File file, String search) {
            this.currentdir = file;
            this.search = search;
        }

        public void run() {
            files = currentdir.listFiles();
            if (files != null && files.length > 0) {
                synchronized (this) {
                    for (int i = 0; i < files.length; i++) {
                        if (files[i].isDirectory()) {
                            data.dircount++;
                            if (files[i].getName().contains(search))
                                data.filearry.add(files[i].getAbsolutePath());
                            // one new thread per subdirectory: this is what
                            // creates the ~3000 threads that bring the system down
                            new Thread(new DiggtheFiles(files[i], search)).start();
                        } else {
                            if (files[i].getName().contains(search))
                                data.filearry.add(files[i].getAbsolutePath());
                            data.filecount++;
                        }
                    }
                }
                Filechooserpanel.consolearea.setText("DIR:" + data.dircount
                        + " FILE:" + data.filecount + " Thread Completed:"
                        + data.threadcount);
                data.threadcount++;
            } else {
                data.threadcount++;
            }
        }
    }

    Remove the synchronized block and use a work list instead of creating threads directly: instead of new Thread(), do workList.add() and have a ThreadPoolExecutor take work items off the list; see the sketch below.
    Or go non-parallel and just use a recursive call.
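    A rough sketch of that work-list approach, using a fixed pool of workers and a shared queue instead of one thread per directory (shown in C# for brevity; a Java version built on ExecutorService and a BlockingQueue has the same shape):

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading;
        using System.Threading.Tasks;

        class FileSearch
        {
            static void Main(string[] args)
            {
                string search = args.Length > 0 ? args[0] : "txt";
                string root = args.Length > 1 ? args[1] : ".";

                var matches = new ConcurrentBag<string>();
                var workList = new BlockingCollection<string>();
                int outstanding = 1;        // directories queued but not yet finished
                workList.Add(root);

                var workers = new Task[4];  // fixed pool instead of a thread per directory
                for (int w = 0; w < workers.Length; w++)
                {
                    workers[w] = Task.Run(() =>
                    {
                        foreach (string dir in workList.GetConsumingEnumerable())
                        {
                            try
                            {
                                foreach (string entry in Directory.EnumerateFileSystemEntries(dir))
                                {
                                    if (Path.GetFileName(entry).Contains(search))
                                        matches.Add(entry);
                                    if (Directory.Exists(entry))
                                    {
                                        Interlocked.Increment(ref outstanding);
                                        workList.Add(entry);   // subdirectory becomes a work item
                                    }
                                }
                            }
                            catch (UnauthorizedAccessException) { /* skip unreadable directories */ }
                            if (Interlocked.Decrement(ref outstanding) == 0)
                                workList.CompleteAdding();     // nothing queued, nothing in flight
                        }
                    });
                }
                Task.WaitAll(workers);
                Console.WriteLine("{0} matches", matches.Count);
                foreach (string m in matches) Console.WriteLine(m);
            }
        }

    The outstanding counter tracks directories that are queued or being processed; when it reaches zero the traversal is complete and the workers shut down, so the thread count stays fixed at the pool size no matter how deep the directory tree is.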
