DSS problems when publishing large amounts of data fast

Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
There are several loops publishing data. One publishes approximately 50 items at a rate of 50 ms, another about 40 items at a 100 ms publishing rate.
I send a command to a subprogram (125 ms) that reads and publishes the answer on a DSS URL (approximately 125 ms), so that is one item on the DSS about every 250 ms. But this data is not seen in my main GUI window that reads the DSS URL.
My questions are
1. Is there any limit in speed (frequency) for data publishing in DSS?
2. Can DSS become unstable if loaded too much?
3. Can I lose/miss data in any situation?
4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
Regards
Idriz Zogaj
Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
Memory Professional
direct: +46 (0) - 734 32 00 10
http://www.zogaj.se

LuI wrote:
>
> Hi all,
>
> I am frustrated with VISA serial comm. It looks so neat and it's
> fantastic what it's supposed to do for a developer, but sometimes one
> runs into very deep trouble.
> I have an app where I have to read large amounts of data streamed by
> 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
> same time.)
> I use either a Moxa multiport adapter C320 with 16 serial ports or -
> for test purposes - a Keyspan serial-2-USB adapter with 4 serial
> ports.
Does it work better if you use the serial port(s) on your motherboard?
If so, then get a better serial adapter. If not, look more closely at
VISA.
Some programs have some issues on serial adapters but run fine on a
regular serial port. We've had that problem recently.
Best, Mark

Similar Messages

  • Freeze when writing large amount of data to iPod through USB

    I used to take backups of my PowerBook to my 60 GB iPod video. Backups are taken with tar in Terminal, directly to the mounted iPod volume.
    Now, every time I try to write a large amount of data to the iPod (from the MacBook Pro), the whole system freezes (the mouse cursor moves, but nothing else can be done). When the USB cable is pulled out, the system recovers and acts as it should. This problem happens every time a large amount of data is written to the iPod.
    The same iPod works perfectly (when backing up) on the PowerBook, and small amounts of data can easily be written to it (on the MacBook Pro) without problems.
    Does anyone else have the same problem? Any ideas why this happens and how to resolve the issue?
    MacBook Pro, 2.0Ghz, 100GB 7200RPM, 1GB Ram   Mac OS X (10.4.5)   IPod Video 60G connected through USB

    Ex-PC user... never had a problem.
    Got a MacBook Pro last week... having the same issues... and this is now with an exchanged machine!
    I've read elsewhere that it has something to do with the USB timing out, and that if you add a new USB port (one that's powered separately) and attach the iPod to it, it should work. Kind of a bummer, but those folks who tried it say it works.
    Me, I can upload to the iPod piecemeal, manually... but even then, it sometimes freezes.
    The good news is that once the iPod is loaded, the problem shouldn't happen. It's the large amounts of data that trigger it.
    Apple should DEFINITELY fix this though. Unbelievable.
    MacBook Pro 2.0   Mac OS X (10.4.6)  

  • Error when exporting large amount of data to Excel from Apex4

    Hi,
    I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet; this is not using a CSV file.
    It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    Has anyone else had this problem? We were wondering if there is a parameter in Apex 4 that needs to be set.
    We are using Application Express 4.1.1.00.23 on Oracle 11g.
    Any help would be appreciated.
    Thanks
    Sue

    Hi,
    > I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet; this is not using a CSV file.
    How? Application Builder > Data Workshop? Apex Page Process? (Packaged) procedure?
    > It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    > We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    Have you changed your web server in the process? Say, moved from OHS to ApexListener?
    > Has anyone else had this problem? We were wondering if there is a parameter in Apex 4 that needs to be set.
    > We are using Application Express 4.1.1.00.23 on Oracle 11g.
    > Any help would be appreciated.

  • Azure Cloud service fails when sent large amount of data

    This is the error:
    Exception in AZURE Call: An error occurred while receiving the HTTP response to http://xxxx.cloudapp.net/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.
    Calls with smaller amounts of data work fine. Large amounts of data cause this error.
    How can I fix this??

    Go to the web.config file, look for the <binding> that is being used for your service, and adjust the various parameters that limit the maximum length of the messages, such as maxReceivedMessageSize.
    http://msdn.microsoft.com/en-us/library/system.servicemodel.basichttpbinding.maxreceivedmessagesize(v=vs.100).aspx
    Make sure that you specify a size that is large enough to accommodate the amount of data that you are sending (the default is 64 KB).
    Note that even if you set a very large value here, you won't be able to go beyond the maximum request length that is configured in IIS. If I recall correctly, the default limit in IIS is 8 megabytes.

  • Power BI performance issue when loading large amounts of data from a database

    I need to load a data set from my database, which has a large amount of data, and it takes a long time to initialize the data before I can build a report. Is there any good way to process large amounts of data for Power BI? Since many people analyze data with Power BI, is there any suggestion for loading large amounts of data from a database?
    Thanks a lot for the help

    Hi Ruixue,
    We have made significant performance improvements to Data Load in the February update for the Power BI Designer:
    http://blogs.msdn.com/b/powerbi/archive/2015/02/19/6-new-updates-for-the-power-bi-preview-february-2015.aspx
    Would you be able to try again and let us know if it's still slow? With the latest improvements, it should take between half and one third of the time that it used to.
    Thanks,
    M.

  • Airport Extreme Intermittent Network Interruption when Downloading Large Amounts of Data.

    I've had an AirPort Extreme Base Station for about 2.5 years and had no problems until the last 6 months. I have my iMac and a PC directly connected through Ethernet and another PC connected wirelessly. I occasionally need to download very large data files that max out my download connection speed at about 2.5 Mbps. During these downloads, my entire network loses its connection to the internet intermittently for between 2 and 8 seconds, with roughly 20-30 seconds between connection losses. This includes the hard-wired machines. I've tested a download with a direct connection to my cable modem without incident; the base station is causing the problem. I've attempted to reset the base station, with good results after the reset, but then the problem simply returns after a while. I've updated the firmware to the latest version with no change.
    Can anyone help me with the cause of the connection loss and a method of preventing it?  THIS IS NOT A WIRELESS PROBLEM.  I believe it has to do with the massive amount of data being handled.  Any help would be appreciated.

    Ok, did some more sniffing around and found this thread.
    https://discussions.apple.com/thread/2508959?start=0&tstart=0
    It seems that the AEBS has had a serious flaw for the last 6 years that Apple has been unable to address adequately.  Here is a portion of the log file.  It simply repeats the same log entries over and over.
    Mar 07 21:25:17  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:25:17  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:26:17  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:26:17  Severity:5  Rotated CCMP group key.
    Mar 07 21:30:43  Severity:5  Rotated CCMP group key.
    Mar 07 21:36:41  Severity:5  Clock synchronized to network time server time.apple.com (adjusted +0 seconds).
    Mar 07 21:55:08  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:08  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:55:32  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:33  Severity:5  Rotated CCMP group key.
    Mar 07 21:59:47  Severity:5  Rotated CCMP group key.
    Mar 07 22:24:53  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:24:53  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 22:25:18  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 22:25:18  Severity:5  Rotated CCMP group key.
    Mar 07 22:30:43  Severity:5  Rotated CCMP group key.
    Mar 07 22:36:42  Severity:5  Clock synchronized to network time server time.apple.com (adjusted -1 seconds).
    Mar 07 22:54:37  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:54:37  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Anyone have any ideas why this is happening?

  • NMH305 dies when copying large amounts of data to it

    I have an NMH305 still set up with the single original 500GB drive.
    I have an old 10/100 3COM rackmount switch (the old white one) uplinked to my Netgear WGR614v7 wireless router. I had the NAS plugged into the 3COM switch and everything worked flawlessly. The only problem was that it was only running at 100 Mbps.
    I recently purchased a TRENDnet TEG-S80g 10/100/1000 'green' switch and basically replaced the 3COM with it. To test the 1 Gb speeds, I tried a simple drag & drop of about 4 GB worth of pics to the NAS on a mapped drive. After about 2-3 seconds, the NAS dropped and Explorer said it was no longer accessible. I could ping it, but the Flash UI was stalled.
    If I waited several minutes, I could access it again. I logged into the Flash UI and upgraded to the latest firmware, but had the same problem.
    I plugged the NAS directly into the Netgear router and transferred files across the wireless without issue. I plugged it back into the green switch and it dropped after about 6-10 pics transferred.
    I totally bypassed the switch and plugged it directly into my computer. I verified I can ping & log in to the Flash UI, then tried to copy files, and it died again.
    It seems to only happen when running at 1 Gb link speeds. The max transfer I was able to get was about 10 Mbps, but I'm assuming that's limited by the drive write speeds & controllers.
    Anyone ran into this before?
    TIA!

    Hi cougar694u,
    You may check this review "click here". This is a thorough review about the Media Hub's Write and Read throughput vs. File Size - 1000 Mbps LAN.
    Cheers

  • OS X Server crashes when copying large amounts of data

    OK. I have set up a Mac OS X Server on a dual 867 G4, set as a standalone server. The only services running are VPN, AFP and DNS (I am pretty sure the DNS is set up correctly). I have about 3 FireWire drives and 2 USB 2.0 drives hooked up to it.
    When I try to copy roughly 230 GB from one drive to another, it either just stops in the middle or CRASHES the server! I can't see anything out of the ordinary in the logs, though I am a newbie.
    I am stumped. Could this be hardware related? I just did a complete fresh install of OS X Server!

    This could be most anything: a disk error, a non-compliant device, a FireWire error (I've had FireWire drivers tip over Mac OS X with a kernel panic; if the cable falls out at an inopportune moment when recording in GarageBand, it all goes toes up), or a memory error. This could also be a software error, or a FireWire device (or devices) that's simply drawing too much power.
    Try different combinations of drives, and replace one or more of these drives with another; start a sequence of elimination targeting the drives.
    Here's what Apple lists about kernel panics as an intro; it's details from the panic log that'll most probably be interesting...
    http://docs.info.apple.com/article.html?artnum=106228
    With some idea of which code is failing, it might be feasible to find a related discussion.
    A recent study out of CERN found three hard disk errors per terabyte of storage, so a clean install is becoming more a game of moving the errors around than actually fixing anything. FWIW.

  • Finder issues when copying large amount of files to external drive

    When copying large amounts of data over FireWire 800, Finder gives me an error that a file is in use and locks the drive up. I have to force eject. When I reopen the drive, there are a bunch of 0 KB files sitting in the directory that did not get copied over. This happens on multiple drives. I've attached a screen shot of what things look like when I reopen the drive after forcing an eject. Sometimes I have to relaunch Finder to get back up and running correctly. I've repaired permissions, for what it's worth.
    10.6.8, by the way, 2.93 GHz 12-core, 48 GB of RAM, fully up to date. This has been happening for a long time; I'm just now trying to find a solution.

    Scott Oliphant wrote:
    Iomega, LaCie, 500 GB, 1 TB, etc.; it seems to be drive independent. I've formatted and started over with several of the drives, and it's the same thing. If I copy the files over in smaller chunks (say, 70 GB) as opposed to 600 GB, the problem does not happen. It's like Finder is holding on to some of the info when it puts its "ghost" on the destination drive before the file is copied over, and keeps the file locked when it tries to write over it.
    This may be a stretch since I have no experience with iomega and no recent experience with LaCie drives, but the different results if transfers are large or small may be a tip-off.
    I ran into something similar with Seagate GoFlex drives and the problem was heat. Virtually none of these drives are ventilated properly (i.e., no fans and not much, if any, air flow) and with extended use, they get really hot and start to generate errors. Seagate's solution is to shut the drive down when not actually in use, which doesn't always play nice with Macs. Your drives may use a different technique for temperature control, or maybe none at all. Relatively small data transfers will allow the drives to recover; very large transfers won't, and to make things worse, as the drive heats up, the transfer rate will often slow down because of the errors. That can be seen if you leave Activity Monitor open and watch the transfer rate over time (a method which Seagate tech support said was worthless because Activity Monitor was unreliable and GoFlex drives had no heat problem).
    If that's what's wrong, there really isn't any solution except using the smaller chunks of data which you've found works.

  • My phone is using large amounts of data; System Services shows it's my Mapping Services causing it. What are Mapping Services and how do I switch them off? I really need help.

    My phone is using large amounts of data. When I go to System Services, it's my Mapping Services that's causing it. What are Mapping Services and how do I switch them off? I really need help.

    I have the same problem. I switched off Location Services, Maps under cellular data, and whatever else Maps could be involved in, and then just last night it chewed through 100 MB... I'm also on Vodacom, so I'm seeing a pattern here somehow. Siri was switched on, however, so I've switched it off now and will see what happens. But I'm going to go into both Apple and Vodacom this afternoon, because this must be sorted out; it's a serious issue we have on our hands, and some uproar needs to be made against those responsible!

  • ERROR MESSAGE WHEN RETRIEVING AND DISPLAYING LARGE AMOUNT OF DATA

    Hello,
    I am querying my database (MySQL) and displaying my data in a DataGrid (note that I am using Flex 2.0).
    It works fine when the amount of data populating the grid is not much, but when I have a large amount of data I get the following error messages and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail: 'Was expecting
    mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail: 'Channel disconnected before and acknowledge was
    received'
    Note that my DataGrid is populated when I run the query on my server, but it does not work on my client PCs.
    Your help would be greatly appreciated here.
    Awaiting a reply.
    Regards

    Hello,
    I am using remote object services,
    using a component (ColdFusion as the destination).

  • ERROR MESSAGE WHEN DOING SIMPLE QUERY TO RETRIEVE LARGE AMOUNT OF DATA

    Hello,
    I am querying my database (MySQL) and displaying my data in a DataGrid (note that I am using Flex 2.0).
    It works fine when the amount of data populating the grid is not much, but when I have a large amount of data I get the following error messages and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail: 'Was expecting
    mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail: 'Channel disconnected before and acknowledge was
    received'
    Note that my DataGrid is populated when I run the query on my server, but it does not work on my client PCs.
    Your help would be greatly appreciated here.
    Awaiting a reply.
    Regards

    Hello,
    I am using remote object services,
    using a component (ColdFusion as the destination).

  • Streaming large amounts of data over a socket causes corruption?

    I'm writing an app to transfer large amounts of data via a simple client/server architecture between two machines.
    Problem: If I send the data too 'fast', the data arrives corrupted:
    - Calls to read() return wrong data (wrong 'crc')
    - Subsequent calls to read() do not return -1 but allow me to read e.g. another 60 or 80 KBytes.
    - available() always returns '0', but I'll get rid of that method anyway (as recommended in other forum entries).
    The behaviour is somewhat difficult to repeat, but it fails for me reliably when transferring the data between two separate machines and when setting the number of packets (Sender.TM) to 1000 or larger.
    Workaround: reduce the number of packets sent to e.g. 1, or introduce the 'sleep' on the sender side. Another workaround: changing to java.nio.* alone did not help, but when I got rid of the Streams and used solely ByteBuffers, the problem disappeared. Unfortunately, the Streams are required by other parts of my application.
    I'm running the code on two dual-CPU machines connected via
    Below is the code of the Sender and the Listener. Please excuse the style, as this is only to demonstrate the problem.
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.channels.Channels;
    import java.nio.channels.SocketChannel;
    import java.util.Arrays;
    public class SenderBugStreams {
        public static void main(String[] args) throws IOException {
            InetSocketAddress targetAdr = new InetSocketAddress(args[0], ListenerBugStreams.DEFAULT_PORT);
            System.out.println("connecting to: " + targetAdr);
            SocketChannel socket = SocketChannel.open(targetAdr);
            sendData(socket);
            socket.close();
            System.out.println("Finished.");
        }

        static final int TM = 10000;
        static final int TM_SIZE = 1000;
        static final int CRC = 2;
        static int k = 5;

        private static void sendData(SocketChannel socket) throws IOException {
            OutputStream out = Channels.newOutputStream(socket);
            byte[] ba = new byte[TM_SIZE];
            Arrays.fill(ba, (byte)(k++ % 127));
            System.out.println("Sending..." + k);
            for (int i = 0; i < TM; i++) {
                out.write(ba);
    //            try {
    //                Thread.sleep(10);
    //            } catch (InterruptedException e) {
    //                // TODO Auto-generated catch block
    //                e.printStackTrace();
    //                throw new RuntimeException(e);
    //            }
            }
            out.write(CRC);
            out.flush();
            out.close();
        }
    }
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.nio.channels.Channels;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    public class ListenerBugStreams {
        static int DEFAULT_PORT = 44521;

        /**
         * @param args
         * @throws IOException
         */
        public static void main(String[] args) throws IOException {
            ServerSocketChannel serverChannel = ServerSocketChannel.open();
            serverChannel.socket().bind(new InetSocketAddress(DEFAULT_PORT));
            System.out.print("Waiting...");
            SocketChannel clientSocket = serverChannel.accept();
            System.out.println(" starting, IP=" + clientSocket.socket().getInetAddress() +
                ", Port=" + clientSocket.socket().getLocalPort());
            //read data from socket
            readData(clientSocket);
            clientSocket.close();
            serverChannel.close();
            System.out.println("Closed.");
        }

        private static void readData(SocketChannel clientSocket) throws IOException {
            InputStream in = Channels.newInputStream(clientSocket);
            //read and ingest objects
            byte[] ba = null;
            for (int i = 0; i < SenderBugStreams.TM; i++) {
                ba = new byte[SenderBugStreams.TM_SIZE];
                in.read(ba);
                System.out.print("*");
            }
            //verify checksum
            int crcIn = in.read();
            if (SenderBugStreams.CRC != crcIn) {
                System.out.println("ERROR: Invalid checksum: " + SenderBugStreams.CRC + "/" + crcIn);
            }
            System.out.println(ba[0]);
            int x = in.read();
            int remaining = 0;
            while (x != -1) {
                remaining++;
                x = in.read();
            }
            System.out.println("Remaining:" + in.available() + "/" + remaining);
            System.out.println(" " + SenderBugStreams.TM + " objects ingested.");
            in.close();
        }
    }

    Here is your trouble:
    in.read(ba);
    read(byte[]) does not read N bytes, it reads up to N bytes. If one byte has arrived, then it reads and returns that one byte. You always need to check the return value of read(byte[]) to see how much you got (and also check for EOF). TCP chops up the written data into whatever packets it feels like, and that makes read(byte[]) pretty random.
    You can use DataInputStream which has a readFully() method; it loops calling read() until it gets the full buffer's worth. Or you can write a little static utility readFully() like so:
        // Returns false if hits EOF immediately. Otherwise reads the full buffer's
        // worth. If encounters EOF in mid-packet throws an IOException.
        public static boolean readFully(InputStream in, byte buf[])
            throws IOException {
            return readFully(in, buf, 0, buf.length);
        }

        public static boolean readFully(InputStream in, byte buf[], int pos, int len)
            throws IOException {
            int got_total = 0;
            while (got_total < len) {
                int got = in.read(buf, pos + got_total, len - got_total);
                if (got == -1) {
                    if (got_total == 0)
                        return false;
                    throw new EOFException("readFully: end of file; expected " +
                                           len + " bytes, got only " + got_total);
                }
                got_total += got;
            }
            return true;
        }
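
    For illustration only, here is a minimal sketch of how the listener's read loop could be rewritten around java.io.DataInputStream.readFully() (the class name ListenerReadFully is made up; the constants and port are reused from the SenderBugStreams and ListenerBugStreams classes posted above):

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.Channels;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    public class ListenerReadFully {
        public static void main(String[] args) throws IOException {
            ServerSocketChannel serverChannel = ServerSocketChannel.open();
            serverChannel.socket().bind(new InetSocketAddress(ListenerBugStreams.DEFAULT_PORT));
            SocketChannel clientSocket = serverChannel.accept();
            // Wrap the channel's stream so each packet is read in full,
            // no matter how TCP fragments the written data.
            DataInputStream in = new DataInputStream(Channels.newInputStream(clientSocket));
            byte[] ba = new byte[SenderBugStreams.TM_SIZE];
            for (int i = 0; i < SenderBugStreams.TM; i++) {
                in.readFully(ba);   // blocks until TM_SIZE bytes have arrived
            }
            int crcIn = in.read();  // the single trailing CRC byte
            if (crcIn != SenderBugStreams.CRC) {
                System.out.println("ERROR: Invalid checksum: " + SenderBugStreams.CRC + "/" + crcIn);
            }
            in.close();
            clientSocket.close();
            serverChannel.close();
        }
    }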

  • Deleting large amounts of data

    All,
    I have several tables that have about 1 million plus rows of historical data that is no longer needed, and I am considering deleting the data. I have heard that deleting the data will actually slow down performance as it will mess up the indexing; is this true? What if I recalculate statistics after deleting the data? In general, I am looking for advice on best practices for deleting large amounts of data from tables.
    For everyone's reference, I am running Oracle 9.2.0.1.0 on Solaris 9. Thanks in advance for the advice.
    Thanks in advance!
    Ron

    Another problem with DELETE is that it generates a vast amount of redo log (and archived log) information. The better way to get rid of the unneeded data would be to use the TRUNCATE command:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_107a.htm#2067573
    The problem with TRUNCATE is that it removes all the data from the table. In order to save some data from the table, you can do the following:
    1. create <stage_table> as select * from <main_table> where <data you want to keep clause>
    2. save the index, constraint, trigger and grant definitions from the <main_table>
    3. drop the main table
    4. rename <stage_table> to <main_table>.
    5. recreate indexes, constraints and triggers.
    Another method is to use partitioning to partition the data based on the key (you've mentioned "historical" - the key could be some date column). Then you can drop the historical data partitions when you need to.
    As far as your question about recalculating the statistics goes: it will not release the storage allocated for the index. You'll need to execute ALTER INDEX <index_name> REBUILD:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_18a.htm
    Mike
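
    As a rough JDBC sketch of the keep-then-truncate sequence above (the connection details, the table name SALES and the keep clause are invented for illustration; the index/constraint/trigger/grant DDL from steps 2 and 5 is only hinted at in comments):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;
    public class PurgeHistoricalData {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details; adjust URL, user and password.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement st = con.createStatement()) {
                // 1. Copy the rows worth keeping into a staging table.
                st.execute("CREATE TABLE sales_keep AS SELECT * FROM sales "
                         + "WHERE sale_date >= ADD_MONTHS(SYSDATE, -12)");
                // 2. Save index, constraint, trigger and grant definitions for SALES here.
                // 3. Drop the original table.
                st.execute("DROP TABLE sales");
                // 4. Rename the staging table back to the original name.
                st.execute("ALTER TABLE sales_keep RENAME TO sales");
                // 5. Recreate the indexes, constraints, triggers and grants.
            }
        }
    }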

  • JSP and large amounts of data

    Hello fellow Java fans
    First, let me point out that I'm a big Java and Linux fan, but somehow I ended up working with .NET and Microsoft.
    Right now my software development team is working on a web tool for a very important microchip manufacturer. This tool handles large amounts of data; some of our online reports generate more than 100,000 rows, which need to be displayed in a web client such as Internet Explorer.
    We make use of Infragistics, which is a set of controls for .NET. Infragistics allows me to load data fetched from a database into a control they call UltraWebGrid.
    Our problem comes up when we load large amounts of data into the UltraWebGrid; sometimes we have to load 100,000+ rows. During this loading our IIS server's memory gets killed, and it can take up to 5 minutes for the server to finish processing and display the 100,000+ row report. We have already proved that the database server (SQL Server) is not the problem; our problem is the IIS web server.
    Our team is now considering migrating this web tool to Java and JSP. Can you all help me with some links, information, or past experiences you have had with loading and displaying amounts of data like the ones we handle, in JSP? Help will be greatly appreciated.

    Who in the world actually looks at a 100,000 row report?
    Anyway, if I were you and I had to do it because some clueless management person decided it was a good idea... I would write a program that, once a day, week, year or whatever your time period is, produces the report (maybe as a PDF, but you could do it in HTML if you really must have it that way) and have it as a static file that you link to from your app.
    Then the user will just have to wait while it downloads, but the web server or web application server will not be bogged down trying to produce that monstrosity.
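
    A minimal Java sketch of that "generate once, link to a static file" approach (the output path, schedule and report contents are all invented; a real job would pull its rows from the database instead of the placeholder loop):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    public class NightlyReportJob {
        // Hypothetical location inside the web server's static content.
        private static final Path REPORT = Paths.get("/var/www/reports/big-report.html");
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Rebuild the report once a day; the JSP pages simply link to REPORT.
            scheduler.scheduleAtFixedRate(NightlyReportJob::writeReport, 0, 24, TimeUnit.HOURS);
        }
        private static void writeReport() {
            StringBuilder html = new StringBuilder("<html><body><table>");
            // Placeholder rows; a real job would stream them from the report query.
            for (int i = 0; i < 100_000; i++) {
                html.append("<tr><td>row ").append(i).append("</td></tr>");
            }
            html.append("</table></body></html>");
            try {
                Files.write(REPORT, html.toString().getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }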
