Loading in chunks

I just designed a friend's site and it seems to be loading in chunks, blocky, even though the image files are very small.
http://brettnicoletti.com/
ideas?

944098 wrote:
I want to create a procedure in ODI that writes data into a file by importing data in small chunks from a table stored in a database. How can I perform this task? Any help would be much appreciated.

What is your exact requirement? Because you wrote "create a procedure in ODI that writes data into a file" and again "importing data in small chunks from a table stored in a database". Which step do you want to do via an ODI procedure?
You can do both operations from a PL/SQL procedure:
you need to use the UTL_FILE package for writing data into a file, i.e. the 1st part;
the 2nd part is a simple, straightforward procedure.
Thanks,
Sutirtha
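
As a rough illustration of that approach, here is a minimal PL/SQL sketch (the directory object DATA_DIR, the table MY_TABLE, and the columns COL1/COL2 are placeholders) that fetches rows in chunks of 1,000 with BULK COLLECT ... LIMIT and writes them out with UTL_FILE:

DECLARE
  TYPE t_rows IS TABLE OF my_table%ROWTYPE;
  l_rows t_rows;
  l_file UTL_FILE.FILE_TYPE;
  CURSOR c_src IS SELECT * FROM my_table;
BEGIN
  -- DATA_DIR must be an Oracle directory object the session can write to
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'export.csv', 'w');
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;  -- one chunk per fetch
    FOR i IN 1 .. l_rows.COUNT LOOP
      UTL_FILE.PUT_LINE(l_file, l_rows(i).col1 || ',' || l_rows(i).col2);
    END LOOP;
    EXIT WHEN c_src%NOTFOUND;
  END LOOP;
  CLOSE c_src;
  UTL_FILE.FCLOSE(l_file);
END;
/

An ODI procedure step can then simply call such a block (or a stored procedure wrapping it).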

Similar Messages

  • Can we load data in chunks using data pump?

    We are loading data using data pump, so I want to check my understanding.
    Please correct me if I am wrong:
    ODI will fetch all data from the source (whether it is INIT or CDC) in one go and unload it into the staging area.
    If that is true, will performance suffer with very large data volumes (50 million records at source), since ODI tries to load the entire data set in one go? I believe it will give better performance if we load in chunks using data pump.
    Please confirm and correct.
    I would also like to know how we can configure chunked loading using data pump.
    Thanks in advance.
    Regards,
    Dinesh.

    You may consider using LKM Oracle to Oracle (datapump):
    http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/oracle_db.htm#r15c1-t2
    In 11g, ODI reads from the source and writes to the target in parallel. This is the case where you specify a SELECT query in the source command and an INSERT/UPDATE query in the target command. On the source side, ODI reads records and adds them to a data queue; on the target side, a parallel thread reads data from the queue and writes it to the target. So the overall performance is bounded by the slower of the read and write processes.
    Thanks,

  • Using URLLoader to send and load multiple xml nodes

    I am trying to use the URLLoader class to send and load multiple chunks of XML to a PHP script. Essentially I have the following bit of XML:
    <pages>
    <page>page children here</page>
    <page>more page children here</page>
    <page>even more page children here</page>
    </pages>
    I'm using E4X to loop through the XML and isolate each page node. I then want to send each node to a PHP script to be written to a .xml file on the server.
    I've tried inserting the URLLoader.load method within the loop, but it only sends the last iteration. Do I have to create a new URLLoader instance for each iteration, and if so, is there a way to dynamically set the instance names?
    I'm new to ActionScript 3.0 and have had success doing this in 2.0. I mostly want to get it right and in good form.
    Thanks for any help!

    The URLLoader class allows you to send and load data in the same pass. As the help suggests: "sendToURL sends a URL request to a server, but ignores any response. To examine the server response, use the URLLoader.load() method instead." The send and load works fine; my issue is with needing to send multiple XML chunks using a loop.
    Here are the basics of the code I'm using. I was thinking that the try {...} section could be put in a loop attempting to send multiple items to the PHP page, but only the last iteration is actually sent. I know I'm making a fundamental error here but need direction.
    var urlLoader:URLLoader = new URLLoader();
    urlLoader.dataFormat = URLLoaderDataFormat.VARIABLES;
    urlLoader.addEventListener(Event.COMPLETE, handleComplete);
    urlLoader.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
    var variables:URLVariables = new URLVariables();
    variables.xml = "<some_node>some child</some_node>";
    var request:URLRequest = new URLRequest("sendAndLoadXML.php");
    request.method = URLRequestMethod.POST;
    request.data = variables;
    // pages is the XML shown above; loop over each page node
    for each (var page:XML in pages.page) {
        try {
            sendToURL(request);
        } catch (error:Error) {
            trace("\n***\nUnable to load requested document. " + error);
        }
    }
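    One way out (a sketch, not from the original thread; pages and the PHP URL are the poster's names) is to serialize the sends: queue one payload per page node and let a single URLLoader send the next chunk from its Event.COMPLETE handler:
    var queue:Array = [];
    for each (var page:XML in pages.page) {
        queue.push(page.toXMLString());   // one payload per <page> node
    }
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, sendNext);
    function sendNext(e:Event = null):void {
        if (queue.length == 0) return;    // all chunks sent
        var vars:URLVariables = new URLVariables();
        vars.xml = queue.shift();
        var req:URLRequest = new URLRequest("sendAndLoadXML.php");
        req.method = URLRequestMethod.POST;
        req.data = vars;
        loader.load(req);                 // COMPLETE fires, then the next chunk is sent
    }
    sendNext();
    Since each send waits for the previous response, every iteration reaches the server, at the cost of sequential uploads.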

  • How to load a 500 MB file into noncontiguous memory and access the images

    My data file consists of a series of 2D images. Each image is 174*130*2 bytes, and there are up to 11,000 images in the file, which takes almost 500M bytes. I hope to load them into memory in one go, since I need to visit each image many times in random order during the data processing. When I load them into one big 3D array, an "out of memory" error happens frequently.
    First I tried to use a QUEUE to load big chunks of data into noncontiguous memory. A queue structure is a good way to load and unload data into noncontiguous memory, but it doesn't fit here, since I may need to visit any of the images at any time. And it is not practical to dequeue one image and enqueue it at the opposite end, since the images visited may not be in sequential order.
    Another choice is to put the whole file into multiple small arrays. In my case, I may load the data file into 11,000 small 2D arrays, each holding the data of one image. But I don't know how to visit these 2D arrays, and I didn't get any clues from the posts here.
    Any suggestions?
    PC with 4 GB physical memory, LabVIEW 7.1 and Win XP.
    Thanks.

    I'll try to get the ball rolling here -- hopefully there are some better ideas than mine out there.
    1. I've never used IMAQ, but I suspect that it offers more efficient methods than stuff I dream up in standard LabVIEW.  Of course, maybe you don't use it either -- just sayin'...
    2. Since you can't make a contiguous 11,000-image 3D array, and don't know how to manage 11,000 individual 2D arrays, how about a compromise?  Like, say, 11 3D arrays holding 1,000 images each?  Then you just need ~50 MB chunks of contiguous memory.
    3. I'm picturing a partially hardcoded solution in which there are a bunch of cloned "functional globals" which each hold 1,000 images of data.  This kind of thing can be done using (A) reentrant VIs or (B) template VIs.  You may need to use VI Server to instantiate them at run time.
    4. Then I'd build an "Action Engine" wrapper around that whole bunch of stuff.  It'd have an input for image #, for example image #6789.  From there it would do a quotient & remainder with 1000 to split 6789 into a quotient of 6 and a remainder of 789.  Then the wrapper would access functional global #6 and address image #789 from it.
    5. Specific details and trade-offs depend a lot on the nature of your data processing.  My assumption was that you'd primarily need random access to 1 or 2 images at a time.
    I hope to learn some better ideas from other posters...
    -Kevin P.
    P.S.  Maybe I can be one of those "other posters" -- I just had another idea that should be much simpler.  Create a typedef custom control out of a cluster which contains a 2D image array.  Now make an 11,000-element array of these clusters.  If I recall correctly, LabVIEW will allocate 11,000 pointers, and the clusters that they point to will *not* need to be contiguous one after the other.  Now you can retrieve an image by indexing into the array of 11,000 clusters and simply unbundling the 2D image data from the cluster element.

  • LoadClip fails to load Hi-Res image

    Hi Guys,
    Hi Guys,
    I am trying to load an image with a resolution of 3849 x 3535 using the loadClip method of a MovieClipLoader. When I catch the onLoadProgress event of the MovieClipLoader, it shows that the image has downloaded completely (by reading the bytesLoaded and bytesTotal parameters). But the image is not displayed completely: I can see about 75%-80% of the image, but the rest of the image is missing.
    Currently I am using Player 9.0, but I have also tried it with Player 8.0.
    Has this happened to anybody else? Kindly help me out with this.

    You're welcome. I wanted to say also that if resizing isn't an option, loading in chunks can work really well. We did an interactive JLo timeline, and the seamless background image was over 11,000 pixels wide; loading it in pieces, into one master clip, worked without a hitch.
    Dave -
    www.offroadfire.com
    Head Developer
    http://www.blurredistinction.com
    Adobe Community Expert
    http://www.adobe.com/communities/experts/

  • File Chunking on upload

    I have to solve the problem of uploading large 4 GB files.
    Does anyone have any experience with, or can point to an article on, the following?
    With a FileReference we can get to fileReference.data and load it into memory.
    I wonder if you could load a chunk of data, say 80 MB, upload that 80 MB, then discard it and load the next 80 MB.
    You would loop until the end of the data; I can reassemble the data at the server, that's not an issue.
    The trick here is that I do not want to load the entire file into the Flash Player's memory, just a chunk at a time, discarding each uploaded chunk to free up the memory.
    The idea to build on from this would be to chunk the next bit of data while the previous chunk is uploading, and also resumable uploading: if we start the upload again, start so many chunks (bytes) into the file.
    When we start the file process, take the file size from the file reference and divide by the chunk size; when we send the first chunk, we tell the server how many chunks of data to expect.
    Thoughts and ideas appreciated.
    TIA

    Not sure if you can do this outside of AIR, but I've done it as an AIR app:
    use FileStream to openAsync() a file for FileMode.READ;
    use FileStream's .readAhead property to define the number of bytes you want to read in at a time;
    define your own chunkSize value for the size of the chunked files;
    use the ProgressEvent fired by FileStream to let you know when enough bytes are available to fill a buffer of chunkSize;
    create a new File and write those chunkSize bytes into it;
    when that file has finished writing to the local disk, use File.upload() to upload it to your server;
    repeat until all chunks are there.  Using openAsync with a smaller chunk size means you don't have to load the entire file into memory.  Using File.upload() gives you upload progress.  (Sockets are still broken in AS3 in that they don't report the progress of the file.)
    It's quite simple to set up your own mini protocol for resuming broken transfers: just keep track of the last chunk index that was successfully uploaded, and start from that index to resume a broken upload.  Reassembling the chunked files back into the original file on the server side is trivial.
    I don't think this will work without a lot of user intervention in a Flash app running inside a browser, due to Flash's security restrictions.  With Flash 10 you can write to the local disk, yes, but I believe manual user intervention/permission is required each time.  Furthermore, I don't think you could upload() the chunk files that were dynamically created without manual user intervention/permission, for security reasons.
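    As a rough AIR-only sketch of that recipe (the 80 MB chunk size, file paths, and upload URL are placeholders; error handling and the resume index are omitted):
    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;
    import flash.events.ProgressEvent;
    import flash.net.URLRequest;
    import flash.utils.ByteArray;

    const CHUNK_SIZE:uint = 80 * 1024 * 1024;          // 80 MB per chunk
    var chunkIndex:uint = 0;
    var source:File = File.documentsDirectory.resolvePath("big.dat");
    var stream:FileStream = new FileStream();
    stream.readAhead = CHUNK_SIZE;                     // buffer only one chunk at a time
    stream.addEventListener(ProgressEvent.PROGRESS, onProgress);
    stream.openAsync(source, FileMode.READ);

    function onProgress(e:ProgressEvent):void {
        // wait until a full chunk (or the final partial chunk) is buffered
        var remaining:Number = source.size - stream.position;
        if (stream.bytesAvailable < CHUNK_SIZE && stream.bytesAvailable < remaining) return;
        var bytes:ByteArray = new ByteArray();
        stream.readBytes(bytes, 0, uint(Math.min(CHUNK_SIZE, stream.bytesAvailable)));
        // write the chunk to a temp file, then upload it with progress events
        var chunk:File = File.applicationStorageDirectory.resolvePath("chunk" + chunkIndex + ".part");
        var out:FileStream = new FileStream();
        out.open(chunk, FileMode.WRITE);
        out.writeBytes(bytes);
        out.close();
        chunk.upload(new URLRequest("http://example.com/upload.php?index=" + chunkIndex));
        chunkIndex++;
    }
    The server then concatenates the chunk files by index, and the last index acknowledged by the server doubles as the resume point for broken transfers.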

  • First load with DTP with mode Delta

    Hello Experts,
    I'd like some suggestions based on the following scenario.
    DSO-A
    - total records = 44,000,000
    - with criteria X = 230,000
    DSO-B (New DSO)
    - since this DSO is a new object, it is required to retrieve data from DSO-A with criteria X
    - this means that the target records to be added are 230,000 records
    - DTP for extracting data from DSO-A to DSO-B --> use Extraction Mode = Delta
    Questions
    - For the first-time load, how do we extract data in small chunks from DSO-A to DSO-B when Extraction Mode is set to Delta?
      I have concerns about performance if the data is loaded all at once with Extraction Mode = Delta. I do not want to disturb other existing scheduled jobs much.
      After the first load, this DTP will be set to run daily. From the next day on, data will not be loaded in huge amounts again.
    Any best practices on this?
    All suggestions would be appreciated.
    Thank you very much.
    -WJ-

    *- For the first-time load, how do we extract data in small chunks from DSO-A to DSO-B when Extraction Mode is set to Delta?*
    When you load for the first time from DSO-A to DSO-B using a DTP, it acts like a full load even if you keep it in Delta mode; subsequent loads are then treated as delta loads. If you have any selections active, you can load small chunks using the Filters option on the DTP's Extraction tab.
    I have concerns about performance if the data is loaded all at once with Extraction Mode = Delta. I do not want to disturb other existing scheduled jobs much.
    You have just 230,000 records to be loaded; I don't think that will give any performance issues. We can load it.
    After the first load, this DTP will be set to run daily. From the next day on, data will not be loaded in huge amounts again.
    Yes, further loads from this DTP get only delta records, which should be fewer.
    Hope this helps.
    Veerendra.

  • Open VI Reference prevents execution of other parallel threads

    I am using a splash screen to start an application. I use dynamic loading of Main.vi and an animation during the loading, both in parallel threads (see image below). However, when Open VI Reference is called, everything else stops executing (the animation stops running) until Open VI Reference is done. I must call Main.vi dynamically, because it takes some time to load and I want to notify the user that the application is loading (using the animation). Is there an option to prevent Open VI Reference from blocking the execution of other threads? Or should I use some other approach?

    andrej wrote:
    But the problem is not in the Run VI method, because Open VI Reference blocks the execution of the top loop, and I need Open VI Reference to call the Get VI Dependencies method. Another approach could be to create a static array of dependencies and then use this array to load VIs from the bottom up.
    The only question now is whether the Get VI Dependencies method returns dependencies in top-to-bottom order. If it does, then I can just load VIs in reverse order from the array.
    Well, the problem is the Open VI Reference! It executes in the UI thread, as several people have explained already (and really can't be made to do otherwise without causing a number of possible and nasty race conditions, some of them even related to the underlying OS and not just LabVIEW itself), just as your two Property Nodes in the upper loop have to use the UI thread too.
    Once Open VI Reference starts, it only returns once it has loaded the required VI fully into memory (including any dependencies that aren't already in memory) and linked them all properly together, or it runs into an error during loading. For this duration, NO UI Property Node can execute, which is what stalls your upper loop. If you used local variables or terminals instead, the upper loop would happily run along while Run VI loads the entire VI hierarchy, but it would still not show on the UI, because in order to draw the new data passed from the diagram to locals and terminals, LabVIEW has to catch the UI thread to do the drawing.
    So the first fix is to kick out any Property Nodes from the upper loop, and the second part of the solution is to load your VI hierarchy in chunks instead of simply loading only the top-level VI directly. It would be nice if Open VI Reference had an option to release the UI thread between loading chunks of VIs, but the implications are not that nice: it would be quite easy for a careless LabVIEW programmer to create a lockout situation, where two functions are in fact waiting on each other to release some locks, so that the program hangs. And a simple warning in the documentation to watch out, because this option can create such lockout situations, is not very useful, as nobody reads them anyhow.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • HT1947 The gyro no longer works when using the Remote app on my iPad after updating to iOS 7.0.1

    The gyro no longer works when using the Remote app on my iPad. I updated to iOS 7.0.1.
    This seems to have happened after the update. Any suggestions?

    Dear Apple friends,
    I have the exact same problem with some apps.
    Even when I try some websites in Safari and Chrome that require loading big chunks of data, they crash.
    Today it crashed in two new places: in Notification Center, and when I opened the Newsstand app I was no longer able to return to the main window and the dock disappeared.
    I have had my iPad Retina for about three months now, but lately this issue has become more frequent. What should I do?
    Should I update to the latest firmware and do what raymond73 said?
    Is there a fix for this issue?
    Has Apple said anything about this? Because I see that this issue is happening to a lot of people.
    Is it possible to downgrade to iOS 6?
    Is it a firmware problem that can be fixed in future releases, or a hardware issue?
    Should I wait, or go to the place I bought it and try to get my money back?

  • All websites garbled in Safari

    Hello everyone,
    I searched the topics in this forum about my problem, and I have already learned a lot. The responses were incredibly helpful providing advice--so thank you. Unfortunately, none of the existing suggestions worked for me, so I hope someone can shed new light on this problem.
    Safari is garbling every website that I load. Chunks of each website do get rendered properly, but then there are long strings of code that show up on screen. Here is a screenshot example, when I pull up the BBC news site:
    http://pages.nyu.edu/~mcd321/screenshot.jpg
    Normally, the website should look like:
    http://news.bbc.co.uk/
    (From what I can tell, the garbled text is just an output of the website title and then all of the comment tags starting with <!-- )
    My problem seems to be identical to the one discussed in this forum already at http://discussions.apple.com/thread.jspa?messageID=2037658
    I read that whole thread and followed all of the suggestions, but none of them resolved my problem. I am running Safari 2.0.4 (419.3) on OS X 10.4.7 on an iBook G4. I have run Software Update and am up to date. I searched for Helvetica Fractions and Times Phonetic, but as far as I can tell, they are not on my computer. Using Font Book, I resolved duplicates, validated all fonts, and disabled Classic fonts. Using Disk Utility, I repaired disk permissions. Safari was still garbling, so then I emptied all fonts from ~/Library/Fonts and /Library/Fonts. I removed all but the 8 essential fonts from /System/Library/Fonts. I used Font Finagler to clean my font cache. Then I restarted the system. And it's still producing the same garble.
    Any other ideas? Thanks for reading this whole post--I would appreciate any information or ideas that anyone has.
    iBook G4, 14"   Mac OS X (10.4.7)  

    Hello, I want to add that I have the exact same problem. It just started two days ago. I am using Internet Explorer until I can get a fix.
    I went to the Apple Store and no one in the store had ever seen or heard of this problem (I brought the laptop along for clarity). I was advised to save all user data on a separate drive and reinstall OS X. I haven't done that yet.
    Also, I noticed that certain emails in the Mail account I use come in with the same garbled appearance. It seems like it's all the HTML and Flash code or something.
    Can anyone help?
    Thanks, Lisa

  • Processing a cursor of 11,000 rows and Query completed with errors

    So I have 3rd-party data that I have loaded into a SQL Server table. I am trying to determine if the 3rd-party members reside in our database by using a cursor and going through all 11,000 rows, substituting the parameter values in a LIKE statement, trying to keep it pretty broad. I tried running this in SQL Server Management Studio and it chunked away for about 5 minutes and then just quit. I figured I was pushing the buffer limits within SQL Server Management Studio, so instead I created it as a stored procedure and changed my Query Options/Results to check "Discard results after execution". This time it chunked away for 38 minutes and then stopped with "Query completed with errors". I did throw a COMMIT in there, thinking that the COMMIT would free up resources and I'd see the table being loaded in chunks, but that didn't seem to work.
    I'm kind of at a loss here in terms of trying to tie this data back.
    Can anyone suggest anything on this?
    Thanks for your review; I am hopeful for a reply.
    WHILE (@@FETCH_STATUS=0)
    BEGIN
    -- build the dynamic INSERT ... SELECT, appending each fragment
    SET @SQLString = 'INSERT INTO [dbo].[FBMCNameMatch]' + @NewLineChar;
    SET @SQLString = @SQLString + ' (' + @NewLineChar;
    SET @SQLString = @SQLString + ' [FBMCMemberKey],' + @NewLineChar;
    SET @SQLString = @SQLString + ' [HFHPMemberNbr]' + @NewLineChar;
    SET @SQLString = @SQLString + ' )' + @NewLineChar;
    SET @SQLString = @SQLString + 'SELECT ';
    SET @SQLString = @SQLString + CAST(@FBMCMemberKey AS VARCHAR) + ',' + @NewLineChar;
    SET @SQLString = @SQLString + ' [member].[MEMBER_NBR]' + @NewLineChar;
    SET @SQLString = @SQLString + 'FROM [Report].[dbo].[member] ' + @NewLineChar;
    SET @SQLString = @SQLString + 'WHERE [member].[NAME_FIRST] LIKE ' + '''' + '%' + @FirstName + '%' + '''' + ' ' + @NewLineChar;
    SET @SQLString = @SQLString + 'AND [member].[NAME_LAST] LIKE ' + '''' + '%' + @LastName + '%' + '''' + ' ' + @NewLineChar;
    EXEC (@SQLString)
    --SELECT @SQLReturnValue
    SET @CountFBMCNameMatchINSERT = @CountFBMCNameMatchINSERT + 1
    IF @CountFBMCNameMatchINSERT = 100
    BEGIN
    COMMIT;
    SET @CountFBMCNameMatchINSERT = 0;
    END
    FETCH NEXT
    FROM FBMC_Member_Roster_Cursor
    INTO @MemberIdentity,
    @FBMCMemberKey,
    @ClientName,
    @MemberSSN,
    @FirstName,
    @MiddleInitial,
    @LastName,
    @AddressLine1,
    @AddressLine2,
    @City,
    @State,
    @Zipcode,
    @TelephoneNumber,
    @BirthDate,
    @Gender,
    @EmailAddress,
    @Relation
    END
    --SELECT *
    --FROM [#TempTable_FBMC_Name_Match]
    CLOSE FBMC_Member_Roster_Cursor;
    DEALLOCATE FBMC_Member_Roster_Cursor;
    GO

    Hi ITBobbyP,
    As Erland suggested, you can compare all rows at once. Based on my understanding of your code, the code below should lead to the same output as yours but have better performance than the cursor, I believe.
    CREATE TABLE [MemberRoster]
    (
    MemberKey INT,
    FirstName VARCHAR(99),
    LastName VARCHAR(99)
    );
    INSERT INTO [MemberRoster]
    VALUES
    (1,'Eric','Zhang'),
    (2,'Jackie','Cheng'),
    (3,'Bruce','Lin');
    CREATE TABLE [yourCursorTable]
    (
    MemberNbr INT,
    FirstName VARCHAR(99),
    LastName VARCHAR(99)
    );
    INSERT INTO [yourCursorTable]
    VALUES
    (1,'Bruce','Li'),
    (2,'Jack','Chen');
    SELECT * FROM [MemberRoster];
    SELECT * FROM [yourCursorTable];
    -- set-based equivalent of the cursor: match every roster row in one pass
    --INSERT INTO [dbo].[NameMatch]
    --(
    --[MemberNbr],
    --[MemberKey]
    --)
    SELECT y.MemberNbr,
    n.[MemberKey]
    FROM [dbo].[MemberRoster] n
    JOIN [yourCursorTable] y
    ON n.[FirstName] LIKE '%'+y.FirstName+'%'
    AND n.[LastName] LIKE '%'+y.LastName+'%';
    DROP TABLE [MemberRoster], [yourCursorTable];
    If you have any questions, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Bulk collect forall vs single merge statement

    I understand that a single DML statement is better than using BULK COLLECT/FORALL with intermediate commits. My only concern is this: if I'm loading a large amount of data, like 100 million records into an 800-million-record table with foreign keys and indexes, and the session gets killed, the rollback might take a long time, which is not acceptable. Using BULK COLLECT/FORALL with interval commits is slower than a single straight MERGE statement, but in the case of a dead session the rollback time won't be as bad, and reloading the not-yet-committed data will not be as bad either. To design a recoverable data load that is not affected as badly, is BULK COLLECT + FORALL the right approach?

    1. specifics about the actual data available
    2. the location/source of the data
    3. whether NOLOGGING is appropriate
    4. whether PARALLEL is an option
    1. I need to transform the data first, so I can build the staging tables to match the structure of the tables I'm loading into.
    2. It's in the same database (11.2).
    3. I cannot use NOLOGGING or APPEND because I need to allow DML on the target table, and I can't afford to lose the data in case of failure.
    4. PARALLEL is an option. I've done some research on DBMS_PARALLEL_EXECUTE and it sounds very cool. Can this be used to load two tables? I have parent and child tables. I can chunk the data and load these two tables separately, but the requirement is that I need to commit them together: I cannot load a chunk into the parent table and commit before I load the corresponding chunk into its child table. Can this be done using DBMS_PARALLEL_EXECUTE? If so, I think this would be the perfect solution, since it looks like exactly what I'm looking for. However, if this doesn't work, is BULK COLLECT + FORALL the best option I am left with?
    What is the underlying technology of DBMS_PARALLEL_EXECUTE?
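    For what it's worth, a minimal DBMS_PARALLEL_EXECUTE sketch of the parent/child idea (the task, table, and column names are placeholders): the framework splits the source into ROWID ranges, runs the statement once per chunk, and commits each chunk as a unit, so inserting into parent and child inside one PL/SQL block keeps them committed together.
    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'load_task');
      -- carve the source table into chunks of ~100,000 rows each
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => 'load_task',
        table_owner => USER,
        table_name  => 'SRC_TABLE',
        by_row      => TRUE,
        chunk_size  => 100000);
      -- each chunk executes this block; the framework commits per chunk
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'load_task',
        sql_stmt       => 'BEGIN
                             INSERT INTO parent_tgt (id, pcol)
                               SELECT id, pcol FROM src_table
                               WHERE rowid BETWEEN :start_id AND :end_id;
                             INSERT INTO child_tgt (id, ccol)
                               SELECT id, ccol FROM src_table
                               WHERE rowid BETWEEN :start_id AND :end_id;
                           END;',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 4);
      DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'load_task');
    END;
    /
    As for the underlying technology: DBMS_PARALLEL_EXECUTE runs the chunks as DBMS_SCHEDULER jobs, which is why parallel_level controls how many job slaves process chunks concurrently.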

  • Insert Text file based on URL Parameter

    Hello! I have been able to get data to load using PHP into my document. BUT - that is only data that is IN the URL. So, for example, the URL might be ...index.php?profile=123456789
    In that case I can make the customer ID appear in all the other URLs on the page, i.e.:
    <a href="page2.php?profile=<?php echo $profile; ?>">Click Here</a>
    Today, I would like a section of code (probably in a text file) to load in the page based on a parameter in the URL. Something like a URL ...index.php?destination=Australia, and then in the body something like "load australia.txt file here".
    I know this is probably bigger than I am making it sound, but I don't know how to load a chunk of code inside another chunk of code.
    Any starting points would be much appreciated. Are there any built-in Dreamweaver CS3 features that do this?
    Jon
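    For reference, a minimal PHP sketch of the idea (the file names are hypothetical; the parameter is whitelisted so URL input is never passed straight to include()):
    <?php
    // map the URL parameter onto a fixed set of snippet files;
    // never feed $_GET values directly into include()
    $allowed = array('australia' => 'australia.txt', 'fiji' => 'fiji.txt');
    $dest = isset($_GET['destination']) ? strtolower($_GET['destination']) : '';
    if (isset($allowed[$dest])) {
        readfile($allowed[$dest]);   // or include() if the snippet contains PHP
    } else {
        echo 'Destination not found.';
    }
    ?>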

    The purpose is to enter an earnest money dollar amount (Real Estate Short Sale contracts), and based on the amount, a corresponding file with a copy of a check with the proper amount will be inserted into the file at page 16.  A button would be fine , but it needs to be on the earnest money deposit line (the look of the form may not be altered).  The earnest money amounts are $1,500, $2,000, $2,500, $3,000, $3,500, $4,000, $4,500 and $5,000.

  • XML going into Arrays is killing my movie

    I'm loading a large XML file into Flash.
    I think the amount of data going into arrays is killing my movie. So what are the workarounds?

    If the built-in XML parser is getting bogged down because there is too much to handle, you ought to try loading smaller chunks of data.
    "mcshaw" <[email protected]> wrote in
    message
    news:e7ueac$md0$[email protected]..
    > I'm loading a large xml file into flash.
    > I think the amount of data going into Arrays is killing
    my move. So what
    are the work arounds?

  • Application redraw issue over Citrix and Terminal Server

    Hi All,
    We provide a client-server application which connects to a SQL Server database. The middle tier is hosted on an application server (Windows Server 2008 R2) which in turn connects to the SQL Server database. The fat client can either be installed on user laptops/desktops or published using Citrix/Terminal Services.
    We have a long-standing issue which frankly I just cannot fathom. A customer has published the client via Citrix to users using roaming profiles. If an employee is using the application in London, the roaming profile is created on a server in London and connects to the middle tier in London. If an employee is using the application in Glasgow, the roaming profile is created in Edinburgh and the user connects to the middle tier in London. The customer is also using DFS.
    The roaming profile consists of the 'My Documents', 'My Pictures', 'My Videos', 'My Music' and 'Windows' folders. Distributed File System (DFS) is used for roaming profile folder replication between offices. See http://technet.microsoft.com/en-gb/library/cc732863%28v=ws.10%29.aspx
    The Edinburgh users are experiencing an application redraw issue where the interface loads in chunks. For example, when a user scrolls up and down or left and right, the data loads immediately (from SQL Server) but the interface (GUI) loads in blocks. You can actually see each segment of the GUI components loading. The issue also occurs when connecting via a Terminal Server where the application is also installed.
    For London users, it all works fine. If an Edinburgh user comes to London, they have no issues.
    The network connection is super fast between the various offices.
    The application is built using C++ and Delphi and uses the GDI API to draw the objects.
    Any guidance is appreciated.

    Hello partner,
    Thanks for contacting Microsoft. This is Sophia, who is going to help with this issue. From your description, I learnt that users from Edinburgh have an application redraw issue, while London users work fine. Please let me know if I have misunderstood your purpose.
    Based on the information, it seems that the issue is located in the middle tier in London. Could you try building a middle tier in Edinburgh and then test how the issue goes?
    Besides, based on my experience and research, by default the allocation of bandwidth is 70 percent for graphics data and 30 percent for virtual channel data, meaning that when bandwidth usage is under pressure, graphics data is guaranteed to get 70 percent of the available bandwidth. We can tweak the settings a bit for some scenarios by setting registry values. Please reference the information below.
    Note: For these settings to take effect, the computer must be restarted.
    Following is the list of registry values that affect the bandwidth allocation behavior. These are all DWORD values under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TermDD:
    • FlowControlDisable: When set to 1, this value disables the new flow control algorithm, making it essentially First-In-First-Out (FIFO) for all packet requests. This provides results similar to Windows Server 2003. (The default for this value is 0.)
    • FlowControlDisplayBandwidth / FlowControlChannelBandwidth: These two values together determine the bandwidth distribution between display and virtual channels. You can set these values in the range of 0–255. For example, setting FlowControlDisplayBandwidth = 100 and FlowControlChannelBandwidth = 100 creates an equal bandwidth distribution between video and VCs. The default is 70 for FlowControlDisplayBandwidth and 30 for FlowControlChannelBandwidth, thus making the default distribution equal to 70–30.
    • FlowControlChargePostCompression: If set to 1, this value bases the bandwidth allocation on post-compression bandwidth usage. The default for this value is 0, which means that the bandwidth distribution is applied to pre-compressed data.
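    As a concrete example, the equal 100/100 split mentioned above could be applied with a .reg file like this (a sketch; 0x64 is 100 in decimal, and the machine must be restarted afterwards):
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TermDD]
    "FlowControlDisplayBandwidth"=dword:00000064
    "FlowControlChannelBandwidth"=dword:00000064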
    For more information about RDP bandwidth, please reference the articles below.
    Bandwidth Allocation for Terminal Server connections over RDP
    http://blogs.msdn.com/b/rds/archive/2007/04/09/bandwidth-allocation-for-terminal-server-connections-over-rdp.aspx
    Top 10 RDP Protocol Misconceptions – Part 1
    http://blogs.msdn.com/b/rds/archive/2009/03/03/top-10-rdp-protocol-misconceptions-part-1.aspx
    If you have any concerns about the action plan above, feel free to let me know.
    Best regards,
    Sophia Sun
