Efficient (Semi) Large Array Data Set Manipulation

Hi Everyone.
I am trying to figure out the most efficient way to manipulate a somewhat large array of data (up to about 120 megabytes) that I am currently getting from an FPGA. This data represents an image, and it needs to be manipulated before it can be displayed to the user. The data needs to be unpacked from U64 to I16, and some of it needs to be chopped off (essentially 10% on each side of the image, so an 800 x 480 image becomes 640 x 480).
I have tried several approaches, and the image below shows the one that is the quickest, but there might be further optimization that could be done.
I am looking forward to seeing what others can come up with.
Note 01: I am including a link to the benchmark VI, which has a quite large image in it, so this VI is about 40 MB.
Note 02: this is cross-posted on Lava
Thanks
Attachments:
BenchmarkImageDataManipulation.png ‏68 KB

johnsold wrote:
Using Array Subset rather than Reshape Array to truncate the 1D array is faster: 151 ms compared to 175 ms. 
Lynn
Thanks Lynn, this is good to know!
Unfortunately, the solution in this frame is still about 2x slower than the fastest one (the "Reshape & Chop & Reshape & Unpack & Reshape" approach).
PJM
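
For anyone skimming this thread without LabVIEW in front of them, here is a rough NumPy sketch of the transformation being benchmarked (the reshape / chop / unpack idea). It assumes four packed I16 samples per U64 word and the 800 x 480 example size from the first post; the byte order and the exact slice offsets are assumptions, not details taken from the actual VI.

import numpy as np

ROWS, COLS = 480, 800                    # assumed image size from the example
CHOP = COLS // 10                        # 10% trimmed from each side -> 640 kept

# Simulated FPGA payload: 4 packed I16 samples per U64 word (an assumption).
packed_u64 = np.zeros(ROWS * COLS // 4, dtype=np.uint64)

# Reinterpret the raw words as I16 without copying, then reshape to the image.
image = packed_u64.view(np.int16).reshape(ROWS, COLS)

# Chop 10% off each side: 800 columns -> 640. Slicing returns a view, so no
# extra copy is made at this step either.
cropped = image[:, CHOP:COLS - CHOP]     # shape (480, 640)
print(cropped.shape)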

Similar Messages

  • Large OLTP data set to get through the cache in our new ZS3-2 storage.

    We recently purchased a ZS3-2 and are currently attempting to do performance testing.  We are using various tools to simulate load within our Oracle VM 3.3.1 cluster of five Dell M620 servers: swingbench, vdbench, and dd.  The OVM repositories are connected via NFS.  The Swingbench load-testing servers have a base OS disk mounted from the repos and NFS mounts via NFSv4 from within the VM (we would also like to test dNFS later in our testing).
    The problem I'm trying to get around is that the 256 GB of DRAM (a portion of which is used for the ARC) is large enough that my reads never touch the 7200 RPM disks.  I'd like to create a data set large enough that the random reads cannot possibly be served from the ARC cache (NOTE: we have no L2ARC at the moment).
    I've run something similar to this in the past, but have adjusted the "sizes=" to be larger than 50m.  My thought here is that, if the ARC is up towards around 200 GB or so, and I create the following on four separate VMs and run vdbench on them at just about the same time, it will be attempting to read more data than can possibly fit in the cache.
    * 100% random, 70% read file I/O test.
    hd=default
    fsd=default,files=16,depth=2,width=3,sizes=(500m,30,1g,70)
    fsd=fsd1,anchor=/vm1_nfs
    fwd=fwd1,fsd=fsd*,fileio=random,xfersizes=4k,rdpct=70,threads=8
    fwd=fwd2,fsd=fsd*,fileio=random,xfersizes=8k,rdpct=70,threads=8
    fwd=fwd3,fsd=fsd*,fileio=random,xfersizes=16k,rdpct=70,threads=8
    fwd=fwd4,fsd=fsd*,fileio=random,xfersizes=32k,rdpct=70,threads=8
    fwd=fwd5,fsd=fsd*,fileio=random,xfersizes=64k,rdpct=70,threads=8
    fwd=fwd6,fsd=fsd*,fileio=random,xfersizes=128k,rdpct=70,threads=8
    fwd=fwd7,fsd=fsd*,fileio=random,xfersizes=256k,rdpct=70,threads=8
    rd=rd1,fwd=fwd1,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd2,fwd=fwd2,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd3,fwd=fwd3,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd4,fwd=fwd4,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd5,fwd=fwd5,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd6,fwd=fwd6,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd7,fwd=fwd7,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
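    As a sanity check on the sizing above (and assuming I am reading the fsd parameters correctly: width**depth directories per anchor, each holding `files` files, with sizes=(500m,30,1g,70) meaning 30% of files at 500 MB and 70% at 1 GB), the four VMs together should comfortably exceed a ~200 GB ARC. A quick back-of-the-envelope sketch of that arithmetic:

    # Rough working-set estimate for the parameter file above (vdbench semantics assumed).
    files, depth, width = 16, 2, 3
    n_files = files * width ** depth          # 144 files per anchor
    avg_size_gb = 0.30 * 0.5 + 0.70 * 1.0     # ~0.85 GB average file size
    per_vm_gb = n_files * avg_size_gb         # ~122 GB per VM
    total_gb = 4 * per_vm_gb                  # ~490 GB across four VMs
    print(per_vm_gb, total_gb)                # well beyond a ~200 GB ARC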
    However, the problem I keep running into is that vdbench's java processes will throw exceptions
    ... <cut most of these stats.  But suffice it to say that there were 4k, 8k, and 16k runs that happened before this...>
    14:11:43.125 29 4915.3 1.58 10.4 10.0 69.9 3435.9 2.24 1479.4 0.07 53.69 23.12 76.80 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 7.36 0.1 627.2 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.071 30 4117.8 1.88 10.0 9.66 69.8 2875.1 2.65 1242.7 0.11 44.92 19.42 64.34 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 12.96 0.1 989.1 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.075 avg_2-30 5197.6 1.52 9.3 9.03 70.0 3637.8 2.14 1559.8 0.07 56.84 24.37 81.21 16383 0.0 0.00 0.0 0.00 0.0 0.00 0.1 6.76 0.1 731.4 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:15.388
    14:12:15.388 Miscellaneous statistics:
    14:12:15.388 (These statistics do not include activity between the last reported interval and shutdown.)
    14:12:15.388 WRITE_OPENS Files opened for write activity: 89 0/sec
    14:12:15.388 FILE_CLOSES Close requests: 81 0/sec
    14:12:15.388
    14:12:16.116 Vdbench execution completed successfully. Output directory: /oracle/zfs_tests/vdbench/output
    java.lang.RuntimeException: Requested parameter file does not exist: param_file
      at Vdb.common.failure(common.java:306)
      at Vdb.Vdb_scan.parm_error(Vdb_scan.java:50)
      at Vdb.Vdb_scan.Vdb_scan_read(Vdb_scan.java:67)
      at Vdb.Vdbmain.main(Vdbmain.java:550)
    So I know from reading other posts that vdbench will do what you tell it (Henk brought that up).  But based on this, I can't tell what I should do differently in the vdbench parameter file to get around this error.  Does anyone have advice for me?
    Thanks,
    Joe

    Ah... it's almost always the second set of eyes.  Yes, it is run from a script.  And I just looked and realized that the last line didn't have the # in it.  Here's the line:
       "Proceed to the "Test Setup" section, but do something like `while true; do ./vdbench -f param_file; done` so the tests just keep repeating."
    I just added the hash to comment that line out and am rerunning my script.  My guess is that it'll complete.  Thanks, Henk.

  • Accessing large data sets via UME

    NW 7
    What is the best way to access large user data sets via the UME?  Attribute mapping provides String[] for user profiles, but what is the best approach for larger user data sets?
    Say there is a list of user data exceeding 5,000 records. Which UME API/approach should be used to access this type of data?  I want to use the UME API to access user data without being limited to 10 multi-valued String array attributes.
    Thanks


  • Just in case anyone needs an Observable Collection that deals with large data sets and supports FULL EDITING...

    the VirtualizingObservableCollection does the following:
    Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
    Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
    Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
    Natively works asynchronously – great for slow network connections and occasionally-connected models.
    Works great out of the box, but is flexible and extendable enough to customize for your needs.
    Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
    Works in any .NET project because it’s implemented in a Portable Code Library (PCL).
    The latest package can be found on NuGet: Install-Package VirtualizingObservableCollection. The source is on GitHub.

    Good job, thank you for sharing
    Best Regards,
    Please remember to mark the replies as answers if they help

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi ‏36 KB

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
    Do not use the Transpose and autoindex.  It is adding a copy of data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
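    The preallocation advice above is LabVIEW-specific, but the underlying pattern is language-neutral: allocate the final buffer once, then replace subsets in place instead of growing an array inside the loop. A minimal NumPy sketch of that pattern, offered purely as an illustration (the 30 kHz x 120 s sizing comes from the question; the channel count and block size are made up):

    import numpy as np

    FS = 30_000            # samples per second, from the question
    SECONDS = 120
    CHANNELS = 4           # made-up channel count for illustration
    BLOCK = 3_000          # samples delivered per loop iteration (assumed)

    # Preallocate the final buffer once, then fill it in place ...
    data = np.empty((CHANNELS, FS * SECONDS), dtype=np.float64)
    write_pos = 0
    while write_pos < data.shape[1]:
        block = np.random.rand(CHANNELS, BLOCK)       # stand-in for a DAQ read
        data[:, write_pos:write_pos + BLOCK] = block  # replace subset, no growth
        write_pos += BLOCK

    # ... instead of appending every iteration (the Build Array pattern), which
    # repeatedly reallocates and fragments memory.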

  • BC4J: Any way for fast storing a large array of data

    Hi!
    I know of 3 ways to store data using BC4J (EJB):
    1. ScrollableRowsetAccess rsAccessC = ...
    rsAccessC.setColumnValue(...
    2. oracle.jbo.Transaction currentTRS = ...
    currentTRS.executeCommand("insert ...
    3. Invoking a function embedded in the ViewObject that uses methods from the oracle.jbo.server.ViewObjectImpl class.
    The last approach is quicker than the previous ones, but still not fast enough.
    Does anybody know of a more efficient way? I need to store large arrays in an Oracle database, and the average array is about 130 fields by 15,000 records.
    Thanks in advance
    Gali

    "Assume you have a big text file that you need to store and work with; what would be the best way to do that?"
    Parse it into a database, then use SQL to manipulate it.
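    None of the three BC4J routes listed above does a true batch insert, and that is usually where the time goes with 130 columns x 15,000 rows. If talking to the database directly is acceptable (as the reply above suggests), an array/batch insert is the usual answer. A hedged sketch using the cx_Oracle driver in Python, with a made-up connection string, table, and columns, just to show the shape of the batch call:

    import cx_Oracle

    # Hypothetical connection details and table/column names.
    conn = cx_Oracle.connect("user/password@host/service")
    cur = conn.cursor()

    rows = [(i, "value_%d" % i) for i in range(15_000)]   # stand-in data

    # One round trip per batch instead of one per row.
    cur.executemany("INSERT INTO my_table (id, payload) VALUES (:1, :2)", rows)
    conn.commit()

    The same idea is available from Java as JDBC batch updates, or inside the database as a FORALL bulk bind.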

  • I have used a 'for loop' to create an array of output data; however, I want each of the data sets from the array to be placed into a 1-D array. How do I concatenate the output of the for loop into a 1-D array?

    I am using this to create a data set that will be passed as an analog output, so it needs to have the correct array dimensions to go into the Analog Write VI.

    I'm updating my request... I've figured out how to do this by copying an example that uses a simple FOR loop (as seen in the attached current version of my VI). My question now becomes this: Is there a way to save the Y values corresponding to those X values into two more arrays? That is, just for argument's sake, let's say I took the first 100 elements from the X array, and found them to be positive. Then I would like to take the first 100 elements of the Y array and put them into a 'Y Values for X > 0' array. ...And likewise with the negative X values, they should have a separate array of corresponding 'Y values for X < 0' array.
    I can see that I should somehow save the indices of the positive X values from the large array when I sort them out, and use those indices to pick out the elements with the same indices from the main Y array.
    I just can't seem to set up the code necessary to do this. ...Can anyone help, please?
    Attachments:
    Poling_Data_Reader_5i.vi ‏79 KB
    30BLEND.txt ‏3 KB
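    The index-matching step described above (keep the Y elements whose X partner is positive, and likewise for the negatives) is easy to see in text form. A small NumPy sketch of that selection logic, offered only as an illustration of the idea, not as LabVIEW code:

    import numpy as np

    x = np.array([0.5, -1.2, 3.3, -0.1, 2.0])
    y = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

    pos = x > 0                  # boolean mask marking the positive X values
    y_for_pos_x = y[pos]         # Y values paired with X > 0 -> [10., 30., 50.]
    y_for_neg_x = y[x < 0]       # Y values paired with X < 0 -> [20., 40.]

    # The equivalent LabVIEW approach is exactly what the post describes: record
    # the indices of the positive X values and use them to index the Y array.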

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit version running on a Win7 64-bit OS with Intel Xeon dual quad core processor, 16 gbyte RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 and 3-gbyte range in RAM since we now have access to all of the available RAM.  But I am having major problems - sluggish (and stoppage) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images. I store each of my 2d images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue which I regularly access using 'Preview Queue' and then operate on the image set, subsets of the images, or single images.
    Then enqueue:
    I remember talking to LabVIEW R&D years ago, and this was considered a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters) (R&D: this is what I remember, please correct me if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded, and I think disk access as well to obtain memory beyond 16 gbytes, I am wondering if I need to use a different storage strategy that will allow seamless program operation while still using RAM storage (do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 mbyte range with the 64-bit LabVIEW. 
    With LabVIEW 32-bit, 100 - 200 mbyte sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM.   We could have used other means, such as LV2-style globals.  But I believe that clustering each 2-D array (image) and then holding the series of those clustered arrays in an array (the final structure I showed in my diagram), rather than using a 3-D array, is what even allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 gbyte.  I probably need to have someone examine this code while I am explaining things to them live.  This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem.  In some of my applications I use the In Place Element structure for indexing data out of arrays to minimize data copies; I expect I might have to consider that strategy here as well.  Just a thought.
    What I can do is send someone (in the US) a 1.3 - 2.7 gbyte set of image data via large file transfer and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how to avoid making data copies.  The operations that I apply to the images are irrelevant; it is the storage, movement, and extraction that are causing the problems.  I can also show a screen shot (or shots) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications?  Would the use of this eliminate copies?   I currently have to wait for 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don

  • Writing large xmltype data to UTL_FILE and setting max row per file

    Hey Gurus,
    I am trying to create a procedure (in Oracle 9i) that writes the XML data I have created out to several XML files (the output would probably be too large for a single XML file; I am doing this for 270,000 rows of data), setting the maximum to 1,000 rows per file. I know one would have to create a looping construct to do this, but I am just not adept enough in PL/SQL to figure it out at the moment.
    So essentially there would be some sort of loop construct and substring process that creates a file after looping through 1,000 rows, then continues the count and creates another file, until all 270 XML files are created. Simple enough, right... lol? Well, I've tried doing this and haven't gotten anywhere. My PL/SQL coding skills are too elementary, I am guessing. I've only been doing this for about three months and could use the assistance of a more experienced person here.
    Here are the particulars...
    This is the xmltype view code that I used to create the xml data.
    select XMLELEMENT("macess_exp_import_export_file",
        XMLELEMENT("file_header",
          XMLELEMENT("file_information")),
        XMLELEMENT("items",
          XMLELEMENT("documents",
            (SELECT XMLAGG(XMLELEMENT("document",
              XMLELEMENT("external_reference"),
              XMLELEMENT("processing_instructions",
                XMLELEMENT("update", name)),
              XMLELEMENT("filing_instructions",
                XMLELEMENT("folder_ids",
                  XMLELEMENT("folder",
                    XMLATTRIBUTES(folder_id AS "id", folder_type_id AS "folder_type_id")))),
              XMLELEMENT("document_header",
                XMLELEMENT("document_type",
                  XMLATTRIBUTES(document_type AS "id")),
                XMLELEMENT("document_id", document_id),
                XMLELEMENT("document_description", document_description),
                XMLELEMENT("document_date",
                  XMLATTRIBUTES(name AS "name"), document_date),
                XMLELEMENT("document_properties")),
              XMLELEMENT("document_data",
                XMLELEMENT("internal_file",
                  XMLELEMENT("document_file_path", document_file_path),
                  XMLELEMENT("document_extension", document_extension)))))
             from macess_import_base WHERE rownum < 270000)))) AS result
    from macess_import_base WHERE rownum < 270000;
    This is the Macess_Import_Base table that I am creating xml data from
    create table MACESS_IMPORT_BASE (
    MACESS_EXP_IMPORT_EXPORT_FILE VARCHAR2(100),
    FILE_HEADER VARCHAR2(20),
    ITEMS VARCHAR2(20),
    DOCUMENTS VARCHAR2(20),
    DOCUMENT VARCHAR2(20),
    EXTERNAL_REFERENCE VARCHAR2(20),
    PROCESSING_INSTRUCTIONS VARCHAR2(20),
    PATENT VARCHAR2(20),
    FILING_INSTRUCTIONS VARCHAR2(20),
    FOLDER_IDS VARCHAR2(20),
    FOLDER_ID VARCHAR2(20),
    FOLDER_TYPE_ID NUMBER(20),
    DOCUMENT_HEADER VARCHAR2(20),
    DOCUMENT_PROPERTIES VARCHAR2(20),
    DOCUMENT_DATA VARCHAR2(20),
    INTERNAL_FILE VARCHAR2(20),
    NAME VARCHAR2(20),
    DOCUMENT_TYPE VARCHAR2(40),
    DOCUMENT_ID VARCHAR2(64),
    DOCUMENT_DESCRIPTION VARCHAR2(200),
    DOCUMENT_DATE VARCHAR2(100),
    DOCUMENT_FILE_PATH VARCHAR2(200),
    DOCUMENT_EXTENSION VARCHAR2(200)
    );
    Directory name to write output to "DIR_PATH"
    DIRECTORY PATH is "\app\cdg\cov"
    Regards,
    Chris

    I also would like to use UTL_FILE to achieve this functionality in the procedure.
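    The loop shape being asked for is the same whether it ends up in PL/SQL with UTL_FILE or anywhere else: walk the rows, open a new file every 1,000 rows, and close it when the counter wraps. Here is that structure as a small runnable Python sketch with a stand-in row source (the file names and the row generator are made up; it is only meant to show the chunking logic, not the requested PL/SQL):

    import itertools

    ROWS_PER_FILE = 1_000
    TOTAL_ROWS = 270_000

    def row_source():
        # Stand-in for the cursor over the XMLType view / macess_import_base.
        for n in range(TOTAL_ROWS):
            yield "<document>row %d</document>" % n

    rows = row_source()
    file_no = 0
    while True:
        chunk = list(itertools.islice(rows, ROWS_PER_FILE))
        if not chunk:
            break
        file_no += 1                                  # 270 files for 270,000 rows
        with open("macess_export_%03d.xml" % file_no, "w") as f:
            f.write("<macess_exp_import_export_file>\n")
            f.writelines(line + "\n" for line in chunk)
            f.write("</macess_exp_import_export_file>\n")

    In PL/SQL the same counter would drive UTL_FILE.FOPEN against the DIR_PATH directory object, UTL_FILE.PUT_LINE for each row, and UTL_FILE.FCLOSE when the 1,000-row boundary is reached.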

  • Large data sets and key terms

    Hello, I'm looking for some guidance on how BI can help me. I am a business analyst in a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
    Basically, I'm having to use Excel to manually search for key terms and apply values to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc. - anything that resembles an ambulance service. I also need to include abbreviations such as AMB or FD, and variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
    That's just one key term. I also have things like Dentists (DDS), Vision, Optometry, and a dozen other provider types that need to be flagged as "N/A".
    Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
    Some insight would be very helpful. Thanks.

    I am not sure if you are looking for an explanation of different BI products; if so, this forum may not be the place to get a straight answer.
    But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned, searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requests could also be implemented very easily using Endeca's product suite. Please reach out to Oracle's Endeca product team and they can guide you on how this product suite would help you.
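    Whichever product ends up doing it, the flagging step itself is a single vectorized pattern match. A hedged pandas sketch is below; the column names, sample rows, and term list are illustrative only, not taken from the actual claims file:

    import pandas as pd

    df = pd.DataFrame({
        "provider_name": ["CITY FIRE DEPT", "A M B SERVICE", "SMITH DDS", "ACME EMS"],
        "charges": [1200.0, 850.0, 300.0, 640.0],
    })

    # One regex per flag category; \s* tolerates spaced-out abbreviations (E M T).
    ambulance_terms = r"AMBULANCE|FIRE\s*DEPT|RESCUE|E\s*M\s*T|E\s*M\s*S|\bA\s*M\s*B\b|\bFD\b"

    mask = df["provider_name"].str.contains(ambulance_terms, case=False, regex=True)
    df.loc[mask, "flag"] = "N/A"
    print(df)   # the FIRE DEPT, A M B, and EMS rows get the N/A flag; the DDS row does not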

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GC's, namely the parallel GC.
    Regards
    Marius

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.

    hillelhalevi wrote:
    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.
    Use Oracle's free Sql Developer
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/

  • Can you have a Spry data set object array?

    Ok, this is what I need to do: I need a Spry object array to be created within a function. Just like this:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xmlns:spry="http://ns.adobe.com/spry">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
    <title>Hijax Demo - Notes 1</title>
    <script language="JavaScript" type="text/javascript" src="includes/xpath.js"></script>
    <script language="JavaScript" type="text/javascript" src="includes/SpryData.js"></script>
    <script language="JavaScript" type="text/javascript">
    /* @author nadeerak */
    function test5() {
      funObjects = new Array();
      funObjectsObserver = new Array();
      funListList = new Array();
      x = new Array();
      x[0] = "First";
      x[1] = "Second";
      x[2] = "Third";
      x[3] = "Fourth";
      for (y in x) {
        funObjects[x[y]] = new Spry.Data.XMLDataSet("notes"+y+".xml", "notes/note");
        funObjectsObserver[x[y]] = new Object;
        funListList[x[y]] = null;
        funObjects[x[y]].useCache = false;
        funObjects[x[y]].loadData();
        funObjectsObserver[x[y]].onDataChanged = function(dataSet, data) {
          funListList[x[y]] = funObjects[x[y]].getData();
          alert(funListList[x[y]].length);
        };
        funObjectsObserver[x[y]].onPreLoad = function(dataSet, data) {
          alert("preload");
        };
        funObjectsObserver[x[y]].onPostLoad = function(dataSet, data) {
        };
        funObjects[x[y]].addObserver(funObjectsObserver[x[y]]);
      }
    }
    </script>
    </head>
    <body>
    <input type="button" value="button" name="button"
    onclick="test5();">
    </body>
    </html>
    I have note0.xml to note6.xml.
    Once the button is clicked I need all the alerts to give the length, but these dynamically created objects do not load. Why?
    Can we do something like this in Spry, i.e. create a Spry object array? Can we?
    The error I'm getting after two alerts is "funListList[]. is null or not an object", yet individually it works very well. Why is this?
    To my knowledge the data is not loaded after the second one. Hmmmm, how come? I'm using separate objects...
    Please, all Spry lovers, help me.

    Hi,
    Check this sample:
    http://labs.adobe.com/technologies/spry/samples/DataSetSample.html
    Check the source code to see how we build a data set from an
    array.
    Hope this helps,
    Donald Booth
    Adobe Spry Team

  • AdvancedDataGrid - create Array (cfquery) with children for hierarchical data set

    I'm trying to create an AdvancedDataGrid with a hierarchical data set, as shown below. The problem I am having is how to feed it data from a ColdFusion remote call rather than from an ArrayCollection inside the Flex app (as below). I'm guessing that the problem is with the CFC that I've created, which builds an array with children; I assume that the structure of the children is the issue. Any thoughts?
    Flex App without Remoting:
    http://livedocs.adobe.com/labs/flex3/html/help.html?content=advdatagrid_10.html
    <?xml version="1.0"?>
    <!-- dpcontrols/adg/GroupADGChartRenderer.mxml -->
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
    <![CDATA[
    import mx.collections.ArrayCollection;
    [Bindable]
    private var dpHierarchy:ArrayCollection= new
    ArrayCollection([
    {name:"Barbara Jennings", region: "Arizona", total:70,
    children:[
    {detail:[{amount:5}]}]},
    {name:"Dana Binn", region: "Arizona", total:130, children:[
    {detail:[{amount:15}]}]},
    {name:"Joe Smith", region: "California", total:229,
    children:[
    {detail:[{amount:26}]}]},
    {name:"Alice Treu", region: "California", total:230,
    children:[
    {detail:[{amount:159}]}]}
    ]);
    ]]>
    </mx:Script>
    <mx:AdvancedDataGrid id="myADG"
    width="100%" height="100%"
    variableRowHeight="true">
    <mx:dataProvider>
    <mx:HierarchicalData source="{dpHierarchy}"/>
    </mx:dataProvider>
    <mx:columns>
    <mx:AdvancedDataGridColumn dataField="name"
    headerText="Name"/>
    <mx:AdvancedDataGridColumn dataField="total"
    headerText="Total"/>
    </mx:columns>
    <mx:rendererProviders>
    <mx:AdvancedDataGridRendererProvider
    dataField="detail"
    renderer="myComponents.ChartRenderer"
    columnIndex="0"
    columnSpan="0"/>
    </mx:rendererProviders>
    </mx:AdvancedDataGrid>
    </mx:Application>
    CFC - where I am trying to create an Array to send back to
    the Flex App
    <cfset aPackages = ArrayNew(1)>
    <cfset aDetails = ArrayNew(1)>
    <cfloop query="getPackages">
    <cfset i = getPackages.CurrentRow>
    <cfset aPackages[i] = StructNew()>
    <cfset aPackages[i]['name'] = name >
    <cfset aPackages[i]['region'] = region >
    <cfset aPackages[i]['total'] = total >
    <cfset aDetails[i] = StructNew()>
    <cfset aDetails[i]['amount'] = amount >
    <cfset aPackages[i]['children'] = aDetails >
    </cfloop>
    <cfreturn aPackages>

    I had similar problems attempting to create an array of arrays in a CFC, so I created two different scripts - one in CF and one in Flex - to build hierarchical data from a query result. The script in CF builds a hierarchical XML document, which is then easily accepted as hierarchical data in Flex. The script in Flex loops over the query object that is returned as an ArrayCollection. It took me so long to create the XML script, and I now regret it, since it is not easy to maintain and keep dynamic. However, it only took me a short while to build this ActionScript logic, which I quite like now (though it is not [ yet ] dynamic, and currently only handles two levels of hierarchy):
    (this is the main part of my WebService result handler)....
    // Create a new Array Collection to store the Hierarchical
    Data from the WebService Result
    var categories:ArrayCollection = new ArrayCollection();
    // Create an Object variable to store the parent-level
    objects
    var category:Object;
    // Create an Object variable to store the child-level
    objects
    var subCategory:Object;
    // Loop through each Object in the WebService Result
    for each (var object:Object in results)
    // Create a new Array Collection as a copy of the Array
    Collection of Hierarchical Data
    var thisCategory:ArrayCollection = new
    ArrayCollection(categories.toArray());
    // Create a new instance of the Filter Function Class
    var filterFunction:FilterFunction = new FilterFunction();
    // Create Filter on the Array Collection to return only
    those records with the specified Category Name
    thisCategory.filterFunction =
    filterFunction.NameValueFilter("NAMETXT", object["CATNAMETXT"]);
    // Refresh the Array Collection to apply the Filter
    thisCategory.refresh();
    // If the Array Collection has records, the Category Name
    exists, so use the one Object in the Collection to add Children to
    if (thisCategory.length)
    category = thisCategory.getItemAt(0);
    // If the Array Collection has no records, the Category Name
    does not exist, so create a new Object
    else
    // Create a new parent-level Object
    category = new Object();
    // Create and set the Name property of the parent-level
    Object
    category["NAMETXT"] = object["CATNAMETXT"];
    // Create a Children property as a new Array
    category["children"] = new Array();
    // Add the parent-level Object to the Array Collection
    categories.addItem(category);
    // Create a new child-level Object as a copy of the Object
    in the WebService Result
    subCategory = object;
    // Create and set the Name property of the child-level
    Object
    subCategory["NAMETXT"] = object["SUBCATNAMETXT"];
    // Add the child-level Object to the Array of Children on
    the parent-level Object
    category["children"].push(subCategory);
    // Convert the Array Collection to a Hierarchical Data
    Object and use it as the Data Provider for the Advanced Data Grid
    advancedDataGrid.dataProvider = new
    HierarchicalData(categories);
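    The ActionScript above boils down to a group-by: walk the flat result set, create a parent record the first time a category name appears, and append each row as a child of that parent. For comparison, here is the same grouping logic as a short Python sketch (the field names mirror the ones used above; the sample rows are invented):

    # Flat rows as they might come back from the query / web service.
    results = [
        {"CATNAMETXT": "Arizona", "SUBCATNAMETXT": "Barbara Jennings", "total": 70},
        {"CATNAMETXT": "Arizona", "SUBCATNAMETXT": "Dana Binn", "total": 130},
        {"CATNAMETXT": "California", "SUBCATNAMETXT": "Joe Smith", "total": 229},
    ]

    categories = {}                      # parent records keyed by category name
    for row in results:
        parent = categories.setdefault(
            row["CATNAMETXT"], {"NAMETXT": row["CATNAMETXT"], "children": []}
        )
        child = dict(row)                # copy the row to use as the child record
        child["NAMETXT"] = row["SUBCATNAMETXT"]
        parent["children"].append(child)

    hierarchy = list(categories.values())   # hand this to the grid's data provider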

  • XY graphs under-perform on large data sets

    If for example you have 3 signals with 8 million points each and you plot these on a regular waveform graph, the user interface is able to display the data smoothly. All graph palette operations (zoom, scroll etc.) respond in "real-time".
    Put the same 3x8 million points on an XY graph, and you have one sluuuuggish user interface. Scrolling is for example no longer possible in any practical fashion.
    I'm sure a lot of it has to do with the overhead of having all those X-values (often unnecessarily many - as discussed in this idea), but the performance degradation compared to a regular waveform graph (even if the latter is fed twice the amount of Y values for example) is severe.
    Are there ways around this performance issue? Sure. We can e.g. write code that decimates the data we send to the indicator, and refills it when the user zooms or scrolls and therefore needs additional data points. But this requires lots of code, and can never become as transparent/integrated and smooth as an implementation within the indicator itself. 
    And competing products are already there, that's what bugs me right now. I've got colleagues that get such functionality "for free" with the graphing tools they have.
    So, we're about to develop an XControl that makes it possible to present such large non-continuous data sets in a smooth manner. (Ironically one solution is to add data points so that I have continuous data - and then use the regular graph...) But has anyone already done this? And how far off is a native XY graph indicator that makes such code obsolete?
    MTO
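    Until something native shows up, the decimation idea mentioned above usually means min/max decimation per displayed pixel column, so that peaks are not lost when points are dropped. A small NumPy sketch of that idea, as an illustration of what such an XControl would do internally rather than anything from an actual implementation:

    import numpy as np

    def minmax_decimate(x, y, max_bins=2000):
        """Reduce (x, y) to at most ~2*max_bins points, keeping min/max per bin."""
        n = len(y)
        if n <= 2 * max_bins:
            return x, y
        xs, ys = [], []
        for idx in np.array_split(np.arange(n), max_bins):
            lo, hi = idx[np.argmin(y[idx])], idx[np.argmax(y[idx])]
            for i in sorted((lo, hi)):
                xs.append(x[i])
                ys.append(y[i])
        return np.array(xs), np.array(ys)

    # Example: 8 million points reduced to a few thousand before plotting.
    x = np.linspace(0, 1, 8_000_000)
    y = np.sin(2 * np.pi * 5 * x) + 0.01 * np.random.randn(x.size)
    xd, yd = minmax_decimate(x, y)
    print(xd.size)   # a few thousand points is plenty for a screen-wide plot

    On zoom or scroll the same routine would be re-run over just the visible X range, which is the refill step described above.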

    Have a look at the topic "Lost reference of main controller within popup".
    "I hate windows popups" and MVC too.
    In newer versions there is a nice popup managed via DHTML (like Web Dynpro does), but basically you should have a common reference to the data somewhere. You can use server-side cookies, attributes of your application class, public and static attributes of a specific controller...
    Sergio
