Best way to pass large amounts of data to subroutines?

I'm writing a program with a large amount of data, around 900 variables. What is the best way for me to pass parts of this data to different subroutines? I have a main loop on a PXI RT Controller that is controlling hydraulic cylinders, and the loop needs to run at 1 ms or better. How on earth should I pass 900 variables through a loop and keep it at 1 ms? One large cluster? Several smaller clusters? Local variables? Global variables? Help me please!

My suggestion, similar to Altenbach and Matt above, is to use a Functional Global Variable (FGV) with a 1D array of 900 values to store the data. You can retrieve individual data items from the FGV by passing in the index of the desired variable, and the FGV returns the value from the array. Instead of passing in an index you could also use a TypeDef'd Enum with all of your variables as elements of the Enum, which allows you to place the Enum constant on the diagram and makes selecting variables, as well as reading the diagram, simpler.
My group is developing a LabVIEW component/example code with this functionality that we plan to publish on DevZone in a month or two.
The attached RTF file shows the core piece of this implementation. This VI, of course, is non-reentrant. The Init case could be changed to allocate the internal 1D array as part of this VI rather than passing it in from another VI.
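LabVIEW code is graphical, so the following is only a rough textual analogue of the pattern, sketched in Java (the enum tags are invented for illustration): one shared 1D array hidden behind Init/Read/Write operations, with the enum playing the role of the TypeDef'd Enum.

public final class CurrentValueTable {
    // Plays the role of the TypeDef'd Enum: one entry per variable.
    public enum Tag { CYL1_PRESSURE, CYL1_POSITION, CYL2_PRESSURE } // ... ~900 entries

    private static double[] values; // the FGV's internal 1D array

    // "Init" case: allocate the array once.
    public static synchronized void init() {
        values = new double[Tag.values().length];
    }

    // "Read" case: fetch one variable by its enum index.
    public static synchronized double read(Tag t) {
        return values[t.ordinal()];
    }

    // "Write" case: update one variable in place.
    public static synchronized void write(Tag t, double v) {
        values[t.ordinal()] = v;
    }

    private CurrentValueTable() {} // single shared instance, like a non-reentrant VI
}

A caller would do CurrentValueTable.write(Tag.CYL1_PRESSURE, 152.0) in the acquisition loop and read(...) wherever the value is needed; the synchronization here mirrors the serialization you get from a non-reentrant VI.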
Christian Loew, CLA
Principal Systems Engineer, National Instruments
Attachments:
CVT_Double_MemBlock.rtf ‏309 KB

Similar Messages

  • Best way of handling large amounts of data movement

    Hi
    I'd like to know the best way to handle data in the following scenario:
    1. We have to create Medical and Rx claims tables holding 36 months of data, about 150 million records each (months 1, 2, 3, ..., 34, 35, 36).
    2. Each month we add that month's DELTA to the 36-month baseline, but the application requirement is ONLY 36 months, so after the first delta the tables hold 37 months.
    3. Similarly, in the 3rd month we will have 38 months, and in the 4th month 39 months.
    4. At the end of the 4th month, how can I delete the first three months of data from the claim tables without affecting performance? This is a 24x7 online system.
    5. Is there a way to create partitions of 3 months each that can be deleted (e.g., drop partition number 1)? If this is possible, what kind of maintenance activity needs to be done after dropping a partition?
    6. Is there a better way of handling the above scenario? What other options do I have?
    7. My goal is to eliminate the initial months' data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestions
    sekhar

    Hi,
    You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
    Regards
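    As a concrete illustration of the drop-partition idea, here is a minimal JDBC sketch; the connection string, table, and partition names are all invented, but the DDL is standard Oracle:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DropOldClaims {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/claims", "app", "secret");
                 Statement st = con.createStatement()) {
                // Dropping a range partition removes a whole quarter of rows as
                // one quick DDL operation instead of a long-running DELETE.
                st.execute("ALTER TABLE medical_claims DROP PARTITION p_months_01_03 "
                         + "UPDATE GLOBAL INDEXES");
                // Without UPDATE GLOBAL INDEXES, global indexes are marked
                // UNUSABLE and need an ALTER INDEX ... REBUILD afterwards.
            }
        }
    }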

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30 ms. This data is sorted into clusters with relevant information being grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues. I pass this cluster around to the various subVIs where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
    Or would it be more efficient to create one large cluster which I pass around? Then I can use unbundle by index to pick off the values I need. This large cluster can have all the values individually or it could be composed of the previously mentioned clusters (i.e. a large cluster of clusters).

    > I am acquiring data at a rate of once every 30mS. This data is sorted
    > into clusters with relevant information being grouped together. These
    > clusters are then added to a queue. I have a cluster of queue
    > references to keep track of all the queues. I pass this cluster
    > around to the various sub VIs where I dequeue the data. Is this the
    > most efficient way of moving the data around? I could also use
    > "Obtain Queue" and the queue name to create the reference whenever I
    > need it.
    > Or would it be more efficient to create one large cluster which I pass
    > around? Then I can use unbundle by index to pick off the values I
    > need. This large cluster can have all the values individually or it
    > could be composed of the previously mentioned clusters (i.e. a large
    > cluster of clusters).
    It sounds pretty good the way you have it. In general, you want to sort
    these into groups that make sense to you. Then if there is a
    performance problem, you can arrange them so that it is a bit better for
    the computer, but let's face it, our performance counts too. Anyway,
    this generally means a smallish number of groups with a reasonable
    number of references or objects in them. If you need to group them into
    one to pass somewhere, bundle the clusters together and unbundle them on
    the other side to minimize the connectors needed. Since the references
    are four bytes, you don't need to worry about the performance of moving
    these around anyway.
    Greg McKaskle
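    The "cluster of queue references" idea, sketched as a rough Java analogue (the field names are invented): the bundle carries only references, so handing it to a subVI/method copies no sample data.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueBundleDemo {
        // The "cluster": a small holder of queue references.
        static final class AcqQueues {
            final BlockingQueue<double[]> pressure = new ArrayBlockingQueue<>(100);
            final BlockingQueue<double[]> position = new ArrayBlockingQueue<>(100);
        }

        // A "subVI" that receives the whole bundle as one cheap reference.
        static double[] nextPressureBlock(AcqQueues q) throws InterruptedException {
            return q.pressure.take(); // dequeue where the data is needed
        }

        public static void main(String[] args) throws Exception {
            AcqQueues q = new AcqQueues();
            q.pressure.put(new double[]{1.0, 2.0});          // producer side
            System.out.println(nextPressureBlock(q).length); // consumer side
        }
    }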

  • Best way to store large amounts of data

    Greetings!
    I have some code that will parse through XML data one character at a time, determine if it's an opening or closing tag, what the tag name is, and what the value between the tags is. All of the results are saved in a 2D string array. Each parent result can have a variable number of child results associated with it and it is possible to have over 2,000 parent results.
    Currently, I initialize a new string array at the beginning of the method that I will use to store the values:
    String[][] initialXMLValues = new String[2000][45];
    I have no idea how many results will actually be returned when the method is called, so I don't know what to do besides make initialXMLValues around the maximum size I expect to have.
    As I parse through the XML, I look for a predefined tag that signifies the start of a result. Each tag/value that follows is stored in a single element of an ArrayList in the form "tagname,value". When I reach the closing parent tag, I convert the ArrayList to a String[], store the size of the array if it is bigger than the previous array (to track the maximum size of the child results), store it in initialXMLValues[i], then increment i.
    When I'm all done parsing, I create a new String[][] XMLValues = new String[i][j], where i is equal to the total number of parent results (from the last paragraph) and j is equal to the maximum number of child results. I then use a nested for loop to copy all of the values from initialXMLValues into XMLValues. The whole point of this is to minimize the overall size of the returned string array and minimize the number of null-valued fields.
    I know this is terribly inefficient, but I don't know a better way to do it. The problem is having to have the size of the array initialized before I really know how many results I'm going to end up storing. Is there a better way to do this?

    So I'm starting to rewrite my code. I was shocked at how easy it was to implement the SAX parser, but it works great. Now I'm on to doing away with my nasty string arrays.
    Of course, the basic layout of the XML is like this:
    <result1>
    <element1>value</element1>
    <element2>value</element2>
    </result1>
    <result2>
    I thought about storing each element/value in a HashMap for each result. This works great for a single result. But what if I have 1000 results? Do I store 1000 HashMaps in an ArrayList (if that's even possible)? Is there a way to define an array of HashMaps?
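    To answer the closing question directly: yes, a list of maps is possible, and it removes the fixed-size-array problem from the earlier post. A minimal sketch (the tag names follow the XML sample above):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ResultStore {
        public static void main(String[] args) {
            // One map per <resultN> element; the list grows as results arrive,
            // so no up-front new String[2000][45] sizing is needed.
            List<Map<String, String>> results = new ArrayList<>();

            Map<String, String> result1 = new HashMap<>();
            result1.put("element1", "value");
            result1.put("element2", "value");
            results.add(result1);

            // Lookup: element name -> value for the first result.
            System.out.println(results.get(0).get("element1"));
        }
    }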

  • What is the best way to migrate large amounts of data from a G3 to an Intel Mac?

    I want to help my mom transfer her photos and other info from an older blueberry G3 iMac to a new Intel one. There appears to be no migration provision on the older Mac. Also, the FireWire connections are different. Somebody must have done this before.

    Hello
    The cable above can be used to enable Target Disk Mode for data transfer.
    See http://support.apple.com/kb/ht1661 for more info.
    To enable Target Disk Mode, hold down the "T" key on the keyboard just after the startup sound until the FireWire symbol appears on the screen (it looks like a screen saver), then plug the FireWire cable in between the two Macs.
    HTH
    Pierre

  • How to pass a large amount of data between steps

    Hi all,
    I have some LabVIEW VIs for data acquisition.
    I need to pass a large amount of data (array size > 5,000,000 each time) from one step to another.
    But it is not allowed to set the array size larger than 5,000,000.
    Any suggestions?
    czhen
    Win 7 SP1 & LabVIEW 2012 SP1, Teststand 2012 SP1
    Attachments:
    Array Size Limits.png ‏34 KB

    In your LabVIEW code, put the data into a data value reference. Pass this reference between your TestStand steps. As an added bonus, you will not get an extra copy of the data at each step. You will need to use the In Place Element structure to get your data out of the data value reference.
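    In a textual language the same trick is simply sharing one buffer by reference; a rough Java sketch of the idea (the names are invented):

    public class SharedBufferDemo {
        // One allocation, shared by reference between "steps".
        static final class BigBuffer {
            final double[] samples = new double[5_000_000];
        }

        static void acquire(BigBuffer ref) { ref.samples[0] = 1.23; }    // fill in place
        static double analyze(BigBuffer ref) { return ref.samples[0]; }  // no copy made

        public static void main(String[] args) {
            BigBuffer buf = new BigBuffer();
            acquire(buf);
            System.out.println(analyze(buf)); // both steps touched the same array
        }
    }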

  • Is the only way to import a large amount of data and database objects into a primary database to shut down the standby, turn off archive log mode, do the import, then rebuild the standby?

    I have a primary database that needs to import a large amount of data and database objects. 1) Do I shut down the standby? 2) Turn off archive log mode? 3) Perform the import? 4) Rebuild the standby? Or is there a better way or best practice?

    Instead of rebuilding the (whole) standby, you can take an incremental (from SCN) backup from the Primary and restore it on the Standby. That way, for example:
    a. If only two out of 12 tablespaces are affected by the import, the incremental backup would effectively contain only the blocks changed in those two tablespaces (and some other changes in system and undo), provided that there are no other changes in the other ten tablespaces.
    b. If the size of the import is only 15% of the database, the incremental backup to restore to the standby is small.
    Hemant K Chitale

  • What is the best way to extract a large volume of data from a BW InfoCube?

    Hello experts,
    Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx. 70 million records) from an InfoCube. I've tried OpenHub and APD, but they are not working; I always need to separate the extracts into small datasets. Any advice is greatly appreciated.
    Thanks,
    David

    Hi David,
    We had the same issue, but that was loading from an ODS to a cube, with over 50 million records. I think there is no such option as parallel loading using DTPs. As suggested earlier in the forum, the best option is to split the load according to calendar year or fiscal year.
    But remember, even with the above criteria you might have a lot of data for some calendar years, and even that becomes a problem.
    What I can suggest is that, apart from just the calendar/fiscal year, you also include some other selection criteria such as company code or sales org.
    Yes, you will end up loading more requests, but the data loads will go smoothly with smaller volumes.
    Regards
    BN

  • Best way to store a big amount of data

    Hi, I need to store a big amount of data; written to a .txt file its size is almost 12 MB, though it depends on the computer it runs on, because what I want to store is all the shared files on a computer.
    Which is the best way to store it? A string array? A text file? A List? I don't need the data after the app closes.
    Thanks

    Well, then which is the best solution? LinkedList or Tree? I only need to store the full path. What I didn't say, my fault, is that I need to search for a file name once I have stored them...

    For searching, LinkedList will be very slow if it's very large. I think the same is true of javax.swing.tree.DefaultTreeModel, which is the JDK's only tree implementation. I don't know what Jakarta-collections has - it's possible they have a tree that offers fast searching. If you want to stick to the standard Java libraries, you'll want a Set for fast searching. TreeSet keeps the entries in sorted order. If you also need to display them as a tree, you can keep them in both a Set and a tree. If you don't have enough memory to do that, then displaying the whole tree isn't going to be useful to the user anyway, so rethink your goal.
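    A minimal sketch of the TreeSet suggestion (the paths are invented): fast exact lookup, plus sorted prefix scans for directory-style queries.

    import java.util.TreeSet;

    public class SharedFileIndex {
        public static void main(String[] args) {
            TreeSet<String> paths = new TreeSet<>(); // sorted, O(log n) search
            paths.add("C:/shared/docs/report.txt");
            paths.add("C:/shared/music/song.mp3");

            // Exact-match search:
            System.out.println(paths.contains("C:/shared/docs/report.txt")); // true

            // Every path under one directory, via the sorted tail view:
            for (String p : paths.tailSet("C:/shared/docs/")) {
                if (!p.startsWith("C:/shared/docs/")) break; // left the prefix range
                System.out.println(p);
            }
        }
    }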

  • Passing a large amount of data between steps

    Hello,
    in my application, I developed some step types for data acquisition and analysis.
    I need to pass large arrays (> 100,000 points) from one step to another.
    Currently I pass the sequence context, and I use Variants to retrieve the data (in the second step).
    This method, however, seems quite slow.
    Has anyone any suggestion?
    I use TS3.5 and CVI 8.0
    Thank you
    baloss

    Hi,
    You should be able to pass the data back directly via the parameter list of the function, and likewise directly into the next function, without having to use the sequence context.
    Regards
    Ray Farmer
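    The same advice in a tiny textual-language sketch (the names are invented): hand the array from step to step through return values and parameters rather than stashing it in the sequence context as a variant.

    public class DirectPassing {
        static double[] acquire() {
            return new double[100_000];   // step 1 produces the array
        }

        static double analyze(double[] data) {
            return data[0];               // step 2 consumes it directly
        }

        public static void main(String[] args) {
            System.out.println(analyze(acquire())); // no context lookup in between
        }
    }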

  • Best strategy to retrieve a large amount of data with EJB?

    Hi all
    I have an EJB 3 which is fronted by a Web Service. The EJB retrieves a large number of objects (TaskList) and returns them to the client.

    @Stateless
    public List<TaskList> findByRole(String role) {
        List<TaskList> tasklist = em.createNamedQuery("findByRole")
                                    .setParameter("Role", role)
                                    .getResultList();
        return tasklist;
    }

    The problem is that the query is rather slow, but especially that returning lots of objects in the SOAP envelopes takes a lot of time.
    So I'd like to retrieve only a "slot" of the data. Is there any EJB3/Hibernate object which could help?
    The only solution I found by myself is to turn the SLSB into a SFSB, add a parameter to the method, and return only a slot, not the whole TaskList.

    @Stateful
    List<TaskList> tasklist;

    public List<TaskList> findByRole(String role, int slot) {
        tasklist = em.createNamedQuery("findByRole")
                     .setParameter("Role", role)
                     .getResultList();
        // Take a piece of the TaskList
        return tasklist;
    }

    What do you think? Any idea on how to improve this?
    Thanks
    Frank

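    For what it's worth, a sketch of the "slot" idea that keeps the bean stateless: javax.persistence.Query supports paging directly via setFirstResult/setMaxResults (the slot-size constant is invented; TaskList is the entity from the post above).

    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class TaskListBean {
        private static final int SLOT_SIZE = 50; // invented page size

        @PersistenceContext
        private EntityManager em;

        @SuppressWarnings("unchecked")
        public List<TaskList> findByRole(String role, int slot) {
            return em.createNamedQuery("findByRole")
                     .setParameter("Role", role)
                     .setFirstResult(slot * SLOT_SIZE) // skip earlier slots
                     .setMaxResults(SLOT_SIZE)         // return one slot only
                     .getResultList();
        }
    }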

  • Best way to handle a large amount of text

    hello everyone
    My project involves handling a large amount of text (from conferences and reports).
    Most of them are in MS Word; I can turn them into RTF format.
    I don't want to use scrolling. I prefer turning pages (next, previous, last, contents), which means I need to break the documents into chunks.
    Currently the process is awkward and slow.
    I know there would be lots of people working on similar projects.
    Could anyone tell me an easy way to handle text: bring the documents into the cast and break them up?
    Any ideas would be appreciated
    thanks
    ahmed

    Hacking up a document with Lingo will probably lose the RTF formatting information.
    Here's a bit of code to find the physical position of a given line of on-screen text (counting returns is not accurate with word-wrapped lines). This strategy uses charPosToLoc to get the actual position for the text member's current width and font size:

    maxHeight = 780 -- arbitrary display height limit
    T = member("sourceText").text
    repeat with i = 1 to T.line.count
      endChar = T.line[1..i].char.count
      lineEndlocV = charPosToLoc(member "sourceText", endChar).locV
      if lineEndlocV > maxHeight then -- found the "1 too many" line
        -- extract the identified lines from "sourceText"
        -- perhaps repeat the parse with the remaining part of "sourceText"
        singlePage = T.line[1..i - 1]
        member("sourceText").text = T.line[i..99999] -- put the remaining text back into the source text member
        exit repeat
      end if
    end repeat

    Alternatively, you could use one of the roundabout ways to display PDF in Director. There might be some batch PDF production tools that can create your pages in a pretty scalable PDF format.
    I think FlashPaper documents can be adapted to Director.

  • Best way to store small amount of data to file?

    If I need to store, edit, and retrieve small amounts of data for a desktop app (let's say a small address book with name, address, phone, etc.), what are my choices with JavaFX? As far as I can tell, there is no built-in database to handle this...
    What about the same need for a web-based app?
    thanks

    With NetBeans Composer you can use a DataSource and manipulate items with the Query Language.
    Link
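    Since JavaFX runs on the JVM, plain Java file APIs are also an option; a minimal sketch using java.util.Properties, assuming flat key-value records are enough (the sample data is invented):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.util.Properties;

    public class AddressBookStore {
        public static void main(String[] args) throws Exception {
            Properties book = new Properties();
            book.setProperty("alice.phone", "555-0100");
            book.setProperty("alice.address", "1 Main St");

            // Persist to a simple text file next to the app.
            try (FileOutputStream out = new FileOutputStream("addressbook.properties")) {
                book.store(out, "small address book");
            }

            // Read it back on the next run.
            Properties loaded = new Properties();
            try (FileInputStream in = new FileInputStream("addressbook.properties")) {
                loaded.load(in);
            }
            System.out.println(loaded.getProperty("alice.phone"));
        }
    }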

  • Deleting large amounts of data

    All,
    I have several tables with about 1 million plus rows of historical data that is no longer needed, and I am considering deleting it. I have heard that deleting the data will actually slow down performance because it will mess up the indexing; is this true? What if I recalculate statistics after deleting the data? In general, I am looking for advice on best practices for deleting large amounts of data from tables.
    For everyone's reference, I am running Oracle 9.2.0.1.0 on Solaris 9.
    Thanks in advance!
    Ron

    Another problem with DELETE is that it generates a vast amount of redo log (and archived log) information. A better way to get rid of the unneeded data would be the TRUNCATE command:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_107a.htm#2067573
    The problem with TRUNCATE is that it removes all the data from the table. In order to keep some of the data, you can do the following:
    1. CREATE TABLE <stage_table> AS SELECT * FROM <main_table> WHERE <data you want to keep clause>
    2. Save the index, constraint, trigger, and grant definitions from the main_table
    3. Drop the main table
    4. RENAME <stage_table> TO <main_table>
    5. Recreate the indexes, constraints, and triggers.
    Another method is to use partitioning to partition the data based on a key (you've mentioned "historical" - the key could be some date column). Then you can drop the historical data partitions when you need to.
    As for your question about recalculating the statistics - it will not release the storage allocated for an index. You'll need to execute ALTER INDEX <index_name> REBUILD:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_18a.htm
    Mike
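    The keep-and-swap steps above as a JDBC sketch (the connection string, table, and column names are invented; step 2's saved definitions are represented by the one index recreated at the end):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PurgeOldRows {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "app", "secret");
                 Statement st = con.createStatement()) {
                // 1. Stage only the rows worth keeping.
                st.execute("CREATE TABLE claims_keep AS SELECT * FROM claims "
                         + "WHERE claim_date >= DATE '2004-01-01'");
                // 3. Drop the original (step 2, saving definitions, is manual).
                st.execute("DROP TABLE claims");
                // 4. Swap the staged copy into place.
                st.execute("ALTER TABLE claims_keep RENAME TO claims");
                // 5. Recreate indexes, constraints, triggers, grants.
                st.execute("CREATE INDEX claims_date_idx ON claims (claim_date)");
            }
        }
    }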

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use CachedRowSet on the bean for the page for getting the data. This works very well in my development environment, where at the moment I am testing on a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has to do with the logic of paging. How do you specify which set of 20 records to extract in SQL?
    Thanks for your help!!
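    The usual answer on Oracle-style databases is to window the query itself; a JDBC sketch of the classic ROWNUM trick for fetching one page at a time (the table and column names are invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PageFetcher {
        // Returns rows (page*size, (page+1)*size] of the ordered result;
        // the caller is responsible for closing the ResultSet and statement.
        static ResultSet fetchPage(Connection con, int page, int size) throws Exception {
            String sql =
                "SELECT * FROM ("
              + "  SELECT a.*, ROWNUM rn FROM ("
              + "    SELECT id, name FROM customers ORDER BY name"
              + "  ) a WHERE ROWNUM <= ?"
              + ") WHERE rn > ?";
            PreparedStatement ps = con.prepareStatement(sql);
            ps.setInt(1, (page + 1) * size); // upper bound of the window
            ps.setInt(2, page * size);       // lower bound (exclusive)
            return ps.executeQuery();
        }
    }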
