Handling a Huge Amount of Data in the Browser

Some information regarding large data handling in a web browser: data sent to the browser is downloaded to the
local machine's cache. So when the browser needs to download data on the order of MBs or GBs, how can we
handle that?
The requirement is as follows.
A performance monitoring application collects performance data of a system every 30 seconds.
The data collected is around 10 KB per interval and is logged to a database. If this application
runs for one day, the statistical data adds up to roughly 30 MB (28.8 MB). If it runs for one week, the data size will be
roughly 200 MB (201.6 MB). There is no limit on the number of days from the software's perspective.
The user needs to see this statistical data in the browser. We are not sure whether transferring this huge amount of data to the
browser in one go is feasible. The user should be able to get the overall picture of the logged data for a
particular period and, if needed, should be able to drill down step by step to smaller ranges.
For example, if the user queries for data between 10 Nov and 20 Nov, the user expects to get an overall idea of
those 11 days of data. Note that it is not possible to show every 30-second sample when displaying 11 days of data, so some logic
has to be applied to present the 11 days of data in a reasonably digestible form. The user can then select a
particular date in the graph, and the data for that day alone should be shown with finer granularity than the overall
graph.
Note: The applet may not be a signed applet.

How do you download gigabytes of data to a browser? The answer is simple. You don't. A data analysis package like the one you describe should run on the server and send the requested summary views to the browser.
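One common way to implement that server-side summarisation is to bucket the raw 30-second samples into a fixed number of intervals for whatever date range the user requests, and send the browser only the bucket aggregates; drilling down simply means re-querying with a narrower range. A minimal sketch of the idea in Python (the function and field names are assumptions for illustration, not from the original application):

    def summarize(samples, start, end, max_points=500):
        """Aggregate raw (timestamp, value) samples into at most `max_points`
        evenly sized time buckets between `start` and `end` (both datetimes).
        Returns (bucket_start, min, max, avg) tuples, small enough to send
        to the browser regardless of how much raw data sits in the database."""
        bucket_width = (end - start) / max_points
        buckets = [[] for _ in range(max_points)]
        for ts, value in samples:
            if start <= ts < end:
                idx = min(int((ts - start) / bucket_width), max_points - 1)
                buckets[idx].append(value)
        summary = []
        for i, values in enumerate(buckets):
            if values:
                summary.append((start + i * bucket_width,
                                min(values), max(values),
                                sum(values) / len(values)))
        return summary

With this shape, an 11-day query returns at most ~500 aggregated points instead of the ~31,680 raw 30-second samples, and selecting a single day in the graph just calls the same routine with a one-day range, so finer-grained data is only transferred for the range actually being viewed.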

Similar Messages

  • How can we transfer a huge amount of data from the database server to XML format

    Hi gurus,
    how can we transfer a huge amount of data from the database server to XML format?
    Regards,
    Subhasis

    Create the ABAP coding
    First we create the TYPES and DATA definitions for the internal table we want to fill with the XML data. I have declared the table "it_airplus" to mirror the structure of the XML file definition for a better overview, because it is a long XML definition (see the XSD file in the sample ZIP container from airplus.com).
    * the declaration
    TYPES: BEGIN OF t_sum_vat_sum,
             a_rate(5),
             net_value(15),
             vat_value(15),
           END OF t_sum_vat_sum.
    TYPES: BEGIN OF t_sum_total_sale,
             a_currency(3),
             net_total(15),
             vat_total(15),
             vat_sum TYPE REF TO t_sum_vat_sum,
           END OF t_sum_total_sale.
    TYPES: BEGIN OF t_sum_total_bill,
             net_total(15),
             vat_total(15),
             vat_sum TYPE t_sum_vat_sum,
             add_ins_val(15),
             total_bill_amount(15),
           END OF t_sum_total_bill.
    TYPES: BEGIN OF t_ap_summary,
             a_num_inv_det(5),
             total_sale_values TYPE t_sum_total_sale,
             total_bill_values TYPE t_sum_total_bill,
           END OF t_ap_summary.
    TYPES: BEGIN OF t_ap,
             head    TYPE t_ap_head,
             details TYPE t_ap_details,
             summary TYPE t_ap_summary,
           END OF t_ap.
    DATA: it_airplus TYPE STANDARD TABLE OF t_ap.
    * call the transformation
    CALL TRANSFORMATION zfi_airplus
         SOURCE XML l_xml_x1
         RESULT xml_output = it_airplus.
    See the complete report: Read data from XML file via XSLT program
    Create the XSLT program
    There are two options to create an XSLT program:
    Tcode SE80 -> create/choose package -> right-click on it | Create -> Others -> XSL Transformation
    Tcode XSLT_TOOL
    For a quick overview you can look at the SXSLTDEMO* programs.
    In this example we already use the three XSLT options explained later.
    As you can see, we define XSL and ASX (ABAP) tags to handle the ABAP and XML variables/tags. After "

  • Changes to write-optimized DSO containing huge amount of data

    Hi Experts,
    We have appended two new fields to a DSO containing a huge amount of data (the new InfoObjects are amount and currency).
    We are able to make the changes in Development (with the DSO containing data).  But when we tried to
    transport the changes to our QA system, the transport hangs.  The transport triggers a job which
    fills up the logs, so we had to kill the job, which aborts the transport.
    Has anyone of you had the same experience?  Do we need to empty the DSO so we can transport
    successfully?  We really don't want to empty the DSOs, as it will take time to reload.
    Any help?
    Thank you very much for your help.
    Best regards,
    Rose

    Emptying the DSO should not be necessary, not for a normal DSO and not for a write-optimized DSO.
    What is in the logs; some sort of conversion of all the records?
    Marco

  • Report in Excel format fails for huge amount of data with headers!!

    Hi All,
    I have developed an Oracle report which fetches up to 5,000 records.
    The requirement is to fetch up to 100,000 records.
    The report fetches data if the headers are removed; if headers are included it is not able to fetch the data.
    Has anyone faced this issue?
    Any ideas on fetching a huge amount of data with an Oracle report in Excel format?
    Thanks & Regards,
    KP.

    Hi Manikant,
    According to your description, performance is slow when displaying a huge amount of data with more than 3 measures in PowerPivot, so you want the hardware requirements for building a PowerPivot workbook that displays a huge amount of data with more than 3 measures, right?
    PowerPivot benefits from multi-core processors, large memory and storage capacities, and a 64-bit operating system on the client computer.
    Based on my experience, large memory, multiple processors and even
    solid-state drives benefit PowerPivot performance. Here is a blog about memory considerations for PowerPivot for Excel for your reference.
    http://sqlblog.com/blogs/marco_russo/archive/2010/01/26/memory-considerations-about-powerpivot-for-excel.aspx
    Besides, you can identify which query is taking the time by using tracing; please refer to the link below.
    http://blogs.msdn.com/b/jtarquino/archive/2013/12/27/troubleshooting-slow-queries-in-excel-powerpivot.aspx
    Regards,
    Charlie Liao
    TechNet Community Support

  • Data Transfer Process (several data packages due to huge amount of data)

    Hi,
    a)
    I've been uploading data from ERP via PSA, ODS and InfoCube.
    Due to a huge amount of data in ERP, BI splits the data into two data packages.
    When processing the data into the ODS, the system deletes a few datasets.
    This is not done in the "Filter" step but in "Transformation".
    General question: How can this be?
    b)
    As described in a), data is split by BI into two data packages due to the amount of data.
    To avoid this behaviour I entered a few more selection criteria in the InfoPackage.
    As a result I upload the data several times, each time with different selection criteria in the InfoPackage.
    Finally I have the same data in the ODS as in a), but this time without data being deleted in the "Transformation" step.
    Question: What is the general behaviour of BI when splitting data into several data packages?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusions from your answers are the following.
    a) Since the ODS is a standard ODS, within the transformation no datasets are deleted; they are aggregated.
    b) Uploading a huge number of datasets is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore an automatic split of the datasets into data packages
    c) both ways should have the same result in the ODS
    OK. Thanks for that.
    So far I have only checked the data in the PSA. In the PSA the number of datasets is not equal for variants b1 and b2.
    I guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten

  • I have a huge amount of data on a Windows external drive and want to transfer it to a Mac drive.  Does anyone know an easy way to do this?  I have almost 2 TB of data to transfer.  Thanks.

    I have a huge amount of data (2 TB) on a Windows Fantom external drive and want to transfer it to a Mac drive.  Does anyone know an easy way to do this?  Thanks.  I have an iMac 3.5 GHz Intel Core i7.  I haven't bought a Mac external drive yet.

    Move your data to a new Mac - Apple Support

  • Handle big amount of data

    Hello,
    I have to analyse a large amount of data (more than 20 MB) that I read from a logfile. I can't split the data into smaller parts because some of my analysis methods need all of the data (regression, ...).
    Are there any tricks for LabVIEW (5.0.1) on how to handle big amounts of data?
    hans

    You might be able to do what you would like. If the analysis process needs the whole amount of data,
    the entire run may take a couple of minutes, but that in itself is no problem. What it does not give you
    is any feedback on how the processing is going (for example whether it is hanging), so using "Read File",
    not "Read From Spreadsheet File", is better: you can then confirm progress, for instance with a graph
    monitor built into the analysing VI that you want to build (a generic sketch of this chunked reading
    idea follows after this thread).
    Thanks in advance,
    Tom
    hans wrote:
    > Hello,
    >
    > I have to analyse a great amount of data (more than 20MByte), that I
    > read from a logfile. I can't split these data into smaller parts
    > because some of my analysis-methods need all data (regression,....).
    >
    > Are there any tricks for LabVIEW (5.0.1), how to handle big amounts of
    > data?
    >
    > hans
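    For readers outside LabVIEW, the chunked approach Tom describes (reading the logfile in pieces so the program can report progress while the analysis still gets the complete dataset) might look roughly like this Python sketch; the file name and the analyse() call are made up for illustration:

        import os

        def read_log_with_progress(path, chunk_size=1024 * 1024):
            """Read a large logfile piece by piece, printing progress as we go,
            and return the full contents for an analysis that needs all the data."""
            total = os.path.getsize(path)
            done = 0
            parts = []
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    parts.append(chunk)
                    done += len(chunk)
                    # the equivalent of updating a progress graph/indicator in a VI
                    print(f"loaded {done}/{total} bytes ({100 * done // total}%)")
            return b"".join(parts)

        # data = read_log_with_progress("measurement.log")   # hypothetical file name
        # results = analyse(data)                            # regression etc. on the full dataset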

  • TS3992 My iCloud backup shows a huge amount of data stored from backup yet is listed as incomplete. Unable to access data. Is the data lost?

    My iCloud backup shows a huge amount of data stored, yet it is listed as an incomplete backup. I am unable to access the backed-up data. Is the data lost?
    Thank you in advance for your assistance.

    Unfortunately, if an iCloud backup is incomplete you can't access any of the data in it.  The only way to access anything in the backup is to restore the entire backup, which can't be done if it is incomplete.

  • How do you handle and distribute huge amounts of data?

    I've got about 315 channels and will be collecting data at about 1300 Hz.  Data collection will last about 3-4 minutes.  I'm at a loss on how to distribute the data to researchers for evaluation.  These researchers will not have LabVIEW or any other NI software to use.  I've tried the TDM stuff, and the add-in for Excel 2007, but the files are just too large for Excel to handle in a timely manner.  It took just over three hours to open one of my test files.  Anybody have any suggestions on how to manage and distribute files this large?

    Frank Rizzo wrote:
    Well I'm sitting in Cleveland Ohio right now so I am going to have to disregard your message.....hahhahaha...
    I'll be curious to see if Jamal Lewis's yardage improves for you guys this year after having some definite dropoffs with us the last couple.
    Though now that he's not playing against you,  your defense's numbers against the run should improve.
    I'd be surprised if you could get that data to the researchers at all using Excel.  It has a limit of 256 columns and 65,536 rows (in the older .xls format).  If you had a column per channel and a row per data point, you'd be talking 315 columns and about 312,000 rows for 4 minutes of data.  I guess you could always break it up into several files, being sure to leave some spare rows and columns to give them a chance to do some data calculation.
    Out of curiosity, I created a spreadsheet where every cell was filled with a 1.  It took a good 30 seconds to save and was over 100 MB.  That was probably about 1/10 of the amount of data you're dealing with.  And going back to my earlier calculations, I would guess that as a text file, that much data would need about 10 bytes per value, which gets you to about 1 GB in text files (the back-of-the-envelope numbers are sketched below this post).
    I would ask them what kind of software they will use to analyze these files and what format they would prefer it in.  There are really only two ways to get it to them: either an ASCII text file, which could be very large but would be the most flexible to manipulate, or a binary file, which would be smaller, but there could be a conflict if they don't interpret the file the same way you write it out.
    I haven't used the TDM file add-in for Excel before, so I don't know how powerful it is, or whether there is a lot of overhead involved that would make it take 3 hours.  What if you create multiple TDM files?  Let's say you break it down by bunches of channels and only 20-30 seconds of data at a time, something that would keep the size of the data array within the row and column limits of Excel (I am guessing about 10 files).  Would each file go into Excel so much faster with that add-in that, even if you need to do it 10 times, it would still be far quicker than one large file?  I am wondering if the add-in is spending a lot of time figuring out how to break down the large dataset on its own.
    The only other question: have you tried TDMS files as opposed to TDM files?  I know they are an upgrade and are supposed to work better with streaming data.  I wonder if they have any improvements for larger datasets.
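    For reference, the back-of-the-envelope numbers in the reply above work out as follows (a quick check in Python, assuming roughly 10 bytes per value as plain text):

        channels = 315
        sample_rate_hz = 1300
        duration_s = 4 * 60                       # about 4 minutes of collection
        rows = sample_rate_hz * duration_s        # 312,000 rows, one per data point
        values = rows * channels                  # about 98 million values
        text_bytes = values * 10                  # ~10 bytes per value as ASCII text
        print(f"{rows:,} rows x {channels} columns = {values:,} values")
        print(f"~{text_bytes / 1e9:.1f} GB as a plain text file")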

  • For a few days my iPhone 4 has suddenly been using huge amounts of data and battery; how do I find out which app is guilty?

    Hi Community,
    since Friday my iPhone 4 has started using huge amounts of mobile data and rapidly drains the battery (50% in 3 hours!). It also gets quite warm even when I don't use it at all.
    I suspect an app is doing this, because in flight mode the battery is OK.
    How do I find out which app is to blame without having to uninstall all apps?
    Thanks for your help.
    Kind regards
    Nymphenburg

    You need to look into using the SQL*Loader utility:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/part2.htm#436160
    Also, Oracle supports BULK inserts using PL/SQL procedures:
    http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#28178

  • Huge amount of data in a view

    Hi All,
    I need to fetch data from a view. The size of the view is huge. If SELECT statements are used, the time taken is so long that it results in a dump. Could anybody suggest how I can extract data from this view?
    Regards,
    Pavan.

    Hi Pavan,
    If the view contains tables like MKPF and MSEG, then the join that happens automatically in the view will really take a lot of time (at the database level).
    Depending on the view and its related tables (number of tables, simplicity, etc.),
    if the view consists of two tables, e.g. MKPF and MSEG,
    then it is better to use two SELECT queries, on MKPF and then MSEG (instead of a SELECT query on the view):
    1. First query MKPF (if possible, using primary key fields in the WHERE clause).
    2. Then query MSEG using FOR ALL ENTRIES in MKPF.
    This would definitely be a little better than the view query (the same two-step pattern is sketched below).
    Regards,
    Amit Mittal.
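    Outside ABAP, the same "query the header table selectively, then fetch only the matching items" pattern looks roughly like the sketch below (plain Python with an in-memory SQLite database standing in for the header/item tables; the table and column names are purely illustrative, not the real MKPF/MSEG fields):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE header (doc_no TEXT PRIMARY KEY, doc_date TEXT);
            CREATE TABLE item   (doc_no TEXT, item_no INTEGER, material TEXT);
            INSERT INTO header VALUES ('100001', '2011-11-10'), ('100002', '2011-12-01');
            INSERT INTO item   VALUES ('100001', 1, 'MAT-A'), ('100001', 2, 'MAT-B'),
                                      ('100002', 1, 'MAT-C');
        """)

        # Step 1: a selective query on the header table only (the MKPF role).
        headers = conn.execute(
            "SELECT doc_no FROM header WHERE doc_date BETWEEN ? AND ?",
            ("2011-11-01", "2011-11-30")).fetchall()
        doc_keys = [row[0] for row in headers]

        # Step 2: fetch only the items belonging to those headers
        # (the role FOR ALL ENTRIES plays on the ABAP side).
        placeholders = ",".join("?" * len(doc_keys))
        items = conn.execute(
            f"SELECT doc_no, item_no, material FROM item WHERE doc_no IN ({placeholders})",
            doc_keys).fetchall()
        print(items)   # [('100001', 1, 'MAT-A'), ('100001', 2, 'MAT-B')]

    Two narrow queries like this avoid materialising the full join of a huge view, which is essentially what the FOR ALL ENTRIES approach achieves.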

  • HUGE amount of data in flat file every day to external system

    Hello,
    I have to develop several programs to export all table data to a flat file for an external system (e.g. web).
    One of my worries is whether it is possible in SAP to export all KNA1 data, which contains a lot of records, to a flat file using an extraction like:
    SELECT * FROM KNA1 INTO TABLE TB_KNA1.
    I need some advice about this kind of huge extraction.
    I also have to extract from some tables only the data changes: new records, changed records and deleted records. To do this I thought of developing a program that every day extracts all data from MARA and saves the extraction in a custom table like MARA (ZMARA); the next day, when the program runs, it compares the new extraction with the old extraction in the ZMARA table to detect the data changes, new records or deleted records (a rough sketch of this compare idea is shown after the reply below). Is this the right approach? Can it have performance problems? Do you know of other methods?
    Thanks a lot!
    Bye

    You should not have a problem with this simple approach, transferring each row to the output file rather than reading all the data into an internal table first:
    DATA wa_kna1 TYPE kna1.
    OPEN DATASET <file> ...
    SELECT * FROM kna1 INTO wa_kna1.
      TRANSFER wa_kna1 TO <file>.
    ENDSELECT.
    CLOSE DATASET <file>.
    Thomas
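    For the delta question in the original post (comparing today's extraction against yesterday's ZMARA-style snapshot to find new, changed and deleted records), the compare logic itself is straightforward. Here is a rough sketch of it in Python, keyed on the material number (MATNR); the data values are made up purely for illustration:

        def diff_snapshots(old_rows, new_rows, key="matnr"):
            """Compare yesterday's and today's extractions (lists of dicts, one per
            record) and return the records that are new, changed, or deleted."""
            old_by_key = {row[key]: row for row in old_rows}
            new_by_key = {row[key]: row for row in new_rows}
            new = [row for k, row in new_by_key.items() if k not in old_by_key]
            deleted = [row for k, row in old_by_key.items() if k not in new_by_key]
            changed = [row for k, row in new_by_key.items()
                       if k in old_by_key and row != old_by_key[k]]
            return new, changed, deleted

        # Tiny illustrative snapshots:
        yesterday = [{"matnr": "M1", "mtart": "FERT"}, {"matnr": "M2", "mtart": "ROH"}]
        today     = [{"matnr": "M1", "mtart": "HALB"}, {"matnr": "M3", "mtart": "ROH"}]
        print(diff_snapshots(yesterday, today))
        # new: M3, changed: M1, deleted: M2

    The comparison stays roughly linear in the number of records as long as both snapshots fit in memory; for very large tables, sorting both extracts by key and merging them gives the same result without holding everything at once.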

  • How to manage huge amount of data into OBIEE 11g?

    Hi all Experts,
    I have some business requirements for a bank where I need to get data for around 2 crore (20 million) accounts across different product lines, with 50 columns,
    from a staging table generated from the data warehouse.
    ** I don't need to manage any modelling or business-model-based criteria (dimension and fact); it is going through a direct database request.
    How do I handle this and make the report output faster?
    *** If I create the same report from the OBIEE RPD-based subject area / presentation tables (with filters to get a smaller number of rows), it never comes up with any result and fails, then returns errors.
    any suggestion will help a lot.
    Thanks in advance
    Raj M

    "if the product does not peform"...
    Let's put the problem into a perspective that guys (which I assume we all are for the sake of the argument): cars.
    Using your direct database request as a starting point it's a bit like trying to use a Ferrari to pull a camper. Yes, it will work, but it's the wrong approach. Likewise (or conversely) it's a pretty bad idea to take a Land Rover Defender around the Nurburg Ring.
    In both cases "the product" (i.e. the respective cars) "will not perform" (i.e. not fulfill the intended duties the way you may want them to).
    I never get why everyone always bows to the most bizarre requests like "I MUST be able to export 2 million rows through an analysis exposed on a dashboard" or "This list report must allow scrolling through 500k records per day across 300 columns".

  • Best way of handling large amounts of data movement

    Hi
    I would like to know the best way to handle data in the following scenario:
    1. We have to create Medical and Rx claims tables for 36 months of data, about 150 million records each month (months 1, 2, 3, 4, ... 34, 35, 36).
    2. We then have to add the DELTA of the next month to the 36-month baseline. But the application requirement is ONLY 36 months, even though the size is now 37 months.
    3. Similarly, in the 3rd month we will have 38 months, and in the 4th month 39 months.
    4. At the end of the 4th month, how can I delete the first three months of data from the claim tables without affecting performance, given that this is a 24x7 online system?
    5. Is there a way to create partitions of 3 months each that can be deleted (delete partition number 1)? If this is possible, what kind of maintenance activity needs to be done after deleting a partition?
    6. Is there any better way of handling the above scenario? What other options do I have?
    7. My goal is to eliminate the initial months' data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestion
    sekhar

    Hi,
    You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
    Regards

  • Another victim of huge amounts of data being used, is downgrading a solution?

    I too am one of the latest victims of skyrocketing data usage on my 2-week-old iPhone 4. I randomly got a text from AT&T this morning stating that I had used 65% of my 200 MB data plan in just 10 DAYS. I am on wifi for 80% of the day, and at work I only check FB, Twitter, and the occasional email that comes through on 3G, but otherwise it's all wifi. I've read wifi isn't running when the phone is asleep, so maybe that's the problem. I deleted my iCloud account and made my phone ancient by only getting calls. Many of you say this started with iOS 5, so I was thinking that downgrading may not even be an option but a necessity. AT&T can't help at all since the data used is "consistent", yet neither my 3 BlackBerrys nor my brother's BlackBerry Curve ever went anywhere near 100 MB, and those were the latest BlackBerry phones. I still have time to return the phone and get a new one, but I'm hooked on it and don't want to lose it; it's just frustrating to go through this. Has anyone successfully downgraded the OS and noticed anything?

    How many email accounts are you accessing with the iPhone's Mail app?
    Any accounts that support push access for received messages?
    If not, is Fetch set to automatically check the account or accounts for new messages?
    If the answer is yes to either question, and your iPhone is not connected to a power source or not being actively used the entire time it is in range of an available wi-fi network it has access to, the connection with the wi-fi network will be dropped. If Cellular Data is enabled, your iPhone will then use the cellular data network to download emails in that situation. If you have Notifications enabled for any apps, notifications received will also be downloaded via the carrier's cellular data network. If your employer email account is an Exchange account, push access is supported for received messages and for contacts and calendar events. You can turn push access off for email, but if Fetch is set to automatic, the account will still be checked and new messages downloaded when the iPhone is not actively being used, which will be over the cellular network unless the iPhone remains connected to a power source the entire time it is connected to an available wi-fi network.
    If your employer email account is an Exchange account, that is likely the culprit. Turn push access off for received messages for starters, and set Fetch to automatic or manual.
