Can Express VIs handle large data?

Hello,
I'm facing a problem handling large data with Express VIs. The input to each Express VI is a large waveform of 2M samples, and I am using 4 such Express VIs connected in parallel. Processing this data takes much longer with the Express VIs than with comparable standard VIs or subVIs. Can anybody explain why the processing takes so long? My understanding is that displaying large data in LabVIEW is inefficient, and since Express VIs have an internal display in the form of the configure dialog box, I suspect most of the processing time is spent plotting the data on the graph of the configure dialog box. If this is correct, is there any solution to overcome it?
Waiting for a reply.
Thanks in advance

Hi sayaf,
I don't understand your reasoning for not using the "Open Front Panel" option to convert the Express VI to a standard VI. When converting the Express VI to a VI, you can save it with a new name and still use the Express VI in the same VI.
By the way, have you heard about the NI LabVIEW Express VI Development Toolkit? That is the choice if you want to be able to create your own Express VIs.
NB: Not all Express VIs can be edited with the toolkit - you should mainly use the toolkit to develop your own Express VIs.
Have fun!
- Philip Courtois, Thinkbot Solutions

Similar Messages

  • How to handle large data in file adapter

We have a scenario Proxy -> PI -> File Server using the File adapter.
The File adapter uses FCC for conversion.
Recently our wave 2 products went live, and the message volume for this interface suddenly increased. As a result the File adapter is not performing well: PI slows down or frequently disconnects from the file server, so we end up with either duplicate records in the file or a wrongly formatted file.
The file size is somewhere around 4.07 GB, which I also think is quite high for PI to handle.
Can anybody suggest how we can handle such large data?
    Regards,
    Vikrant

Check this blog for huge file processing:
Night Mare-Processing huge files in SAP XI
You can also take a look at this blog about high-volume messages:
Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
PI performance tuning best practice:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • Handling large data - urgent

    I have to compare field names from 2 different servers (as in newly added/deleted/changed length). The problem I'm facing here is handling the huge amount of data.
    I have made an RFC for fetching the field names from the remote server, then reading them by table name and comparing the fields.
    Even scheduling the job in the background doesn't help; it takes around 6 hours.
    Is there any way I can segregate the data and execute it in parts? I can't make a select-option for the table name on the selection screen.
    Is there some way I can pass alphabetical ranges/groups to the RFC dynamically?

    Pass a chunk of data (i.e. 200 to 300 records) to the RFC and fetch records only with respect to that chunk; this will improve RFC performance. A rough sketch of the idea follows.
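    A minimal sketch of that batching idea, written in Java for illustration (fetchFieldsViaRfc and loadTableNames are hypothetical stand-ins for the actual RFC call and the local table list):

    import java.util.ArrayList;
    import java.util.List;

    public class ChunkedFetch {
        // Hypothetical stand-in for the RFC call; returns field names for one batch.
        static List<String> fetchFieldsViaRfc(List<String> tableBatch) {
            return new ArrayList<>();
        }

        static List<String> loadTableNames() {
            return new ArrayList<>(); // assume the table names are available locally
        }

        public static void main(String[] args) {
            List<String> allTables = loadTableNames();
            int batchSize = 250; // 200 to 300 records per call, as suggested above
            List<String> fields = new ArrayList<>();
            for (int i = 0; i < allTables.size(); i += batchSize) {
                int end = Math.min(i + batchSize, allTables.size());
                // each call moves only one small chunk across the wire
                fields.addAll(fetchFieldsViaRfc(allTables.subList(i, end)));
            }
            System.out.println("fetched field names: " + fields.size());
        }
    }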

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well-equipped server;
    unfortunately my development machine is not as powerful, so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GC's, namely the parallel GC.
    Regards
    Marius
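    As a concrete illustration of the tuning discussed above, a launch line might look like this (hypothetical: it assumes the transformation is packaged as transform.jar, while -Xmx, -XX:+UseParallelGC, and -XX:-UseGCOverheadLimit are all standard HotSpot options):

    java -Xmx8g -XX:+UseParallelGC -XX:-UseGCOverheadLimit -jar transform.jar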

  • How to handle large data during acquisition? BNC 2110

    I want to acquire data using a BNC 2110, and I am writing the software in VB 6. We will use 3 channels. We are supposed to scan about 10,000 points before AcquiredData is triggered. In all we will need to scan 10,000 * 1000 * 1000 points before the data is put into a binary file. Can anybody let me know how to handle this large number of points?

    Hello Vjuno,
    In order to acquire 10,000,000,000 points you are going to have to stream this data to your hard drive as you go.  To do this you'll need to write the data you read to a file on each loop iteration.  In general it is good practice to make your "samples to read" at least 10% of your sample rate in seconds to avoid overflowing buffers; depending on your computer, however, you may be able to go faster.  I made an example program in LabVIEW and was able to read 10,000 points at a time from each of 3 analog inputs at 333 kHz and write the values to file without overflowing a buffer.  However, even opening a web browser while the code was running was enough to delay the VI long enough for the buffer to overflow.
    You can use the DAQmx Configure Input Buffer call to increase the buffer size and account for spikes in CPU usage from other processes, and you should also monitor the "Available Samples Per Channel" property to make sure you aren't steadily gaining samples in your buffer.  Since you want to acquire 10 billion samples at 1MHz this acquisition will take several hours; if you're not able to keep the buffer empty then it will become apparent before the end of your acquisition.  By monitoring the samples in the buffer you can tell if you're pulling the samples out fast enough, if you find that this number is steadily increasing then you should either reduce the sample rate or increase the number of samples to read each time you call the DAQmx Read.
    In my example program I used a write to a TDMS (binary) file and a PCI-6251; a generic sketch of the same chunked-streaming pattern follows this reply.
    I hope this helps, and have a good night.
    Cheers,
    Brooks
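    For reference, the chunked-streaming pattern Brooks describes, sketched in Java purely for illustration (the original uses LabVIEW and TDMS; readFromDevice and acq.bin are placeholders, and the chunk size follows the 10%-of-sample-rate rule of thumb above):

    import java.io.BufferedOutputStream;
    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class StreamToDisk {
        // Stand-in for the DAQ driver read; fills the buffer with dummy samples.
        static void readFromDevice(double[] buf) {
            for (int i = 0; i < buf.length; i++) buf[i] = Math.sin(i * 0.001);
        }

        public static void main(String[] args) throws IOException {
            double[] chunk = new double[100_000]; // ~10% of a 1 MS/s rate
            try (DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream("acq.bin")))) {
                for (int block = 0; block < 100; block++) { // the acquisition loop
                    readFromDevice(chunk);
                    for (double s : chunk) {
                        out.writeDouble(s); // append each chunk; nothing accumulates in RAM
                    }
                }
            }
        }
    }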

  • Help on handling large data - urgent

    I have to compare field names from 2 different servers using table DD03L.
    The problem I'm facing here is the huge amount of data.
    I have created an RFC to fetch the data from the higher server.
    Can anyone tell me of any possible way to reduce the load of the fetch?
    There is a memory issue on most of the servers because of the roughly 30 lakh (3 million) records on each server.
    Is there any way I can do the comparison in parts, or fetch the data in parts?
    Thank you

    Hi Renu,
    Instead of getting all the data in one go, it would be better to make multiple RFC calls and fetch a smaller amount of data to compare each time.
    Regards,
    Samson Rodrigues.

  • Dispatching large data from DB.

    Hi all.
    My web service will act as the middle tier between front-end web apps and a back-end Oracle DB. Some requests will return large data sets, say 30,000 - 40,000 records.
    How can I relay such large data from the back-end DB to the front-end web apps? Will a general SOAP string message suit this request? This approach would produce a very large string stream.
    Thanks in advance.
    Tanin

    Hi,
    I haven't tested the specific code below, but here is how it would go.
    In a Page Load process, Before or After Header, write a PL/SQL block that creates the collection and loads the CLOBs into it:
    declare
      l_clob clob;
    begin
      -- wipe out the collection if it already exists and create a new, empty one
      apex_collection.create_or_truncate_collection('LOBCOLLECTION');
      -- load the CLOBs one by one from the cursor
      for r in ( select * from remote_tab@db_link ) loop
        apex_collection.add_member('LOBCOLLECTION', p_clob001 => r.clob_column);
      end loop;
    end;
    Once this is done you can query the collection using the code below:
    select clob001 from apex_collections where collection_name = 'LOBCOLLECTION'
    Regards,

  • How can I open large files?

    The code in the screenshot really opens all file formats, but only small files. When the file becomes too large, LabVIEW says: "Not enough memory to complete this operation". But I have enough memory. This already happens at a file size of a few megabytes (maybe more than 10 MB).
    I have 1 GB of memory, which should be enough, but when I open a small file of only 10 MB it tells me that I don't have enough memory.
    OK, if I were opening a file of more than 1 GB, for example a zip file, I could understand it, but the files are really not large.
    Can somebody say what is wrong?
    ThX
    Attachments:
    Read all files.vi.png ‏12 KB

    You should also read the tutorial Managing Large Data Sets in LabVIEW.  Some things you will learn:
    Your current code is making several copies of your data.  The tutorial will teach you how to find them and eliminate them.  The tutorial has not been updated for LabVIEW 8.5 yet, and there are several enhancements in LabVIEW 8.5, so an updated version is attached below; the code examples did not change.
    For best speed in reading from the disk, you want to use 65,000-byte chunks (a sketch of chunked reading follows this list).
    Store your data in a single-element queue.  This will give you best performance for a large array.
    Store your data as a set of arrays instead of one array (this has been mentioned above).  You can break it up into several single-element queues or save it as an array of clusters, each cluster containing a sub-array of the data.  The cluster acts sort of like a handle in C.
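    For illustration, a minimal sketch of the 65,000-byte chunked read described above, in Java rather than LabVIEW (big.dat is a placeholder file name):

    import java.io.FileInputStream;
    import java.io.IOException;

    public class ChunkedRead {
        public static void main(String[] args) throws IOException {
            byte[] buf = new byte[65_000]; // chunk size recommended above
            long total = 0;
            try (FileInputStream in = new FileInputStream("big.dat")) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n; // process each chunk here instead of loading the whole file
                }
            }
            System.out.println("bytes read: " + total);
        }
    }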
    Let us know if you need more help.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
    Attachments:
    Memory Management in LabVIEW.doc ‏132 KB

  • How can I get the Airport Express to handle all the PPPoE stuff?

    Hi, I’m visiting my family in China, and I'm trying to help my dad set up a PPPoE connection with his Airport Express.
    We have currently set up the Airport Express in bridge mode (not distributing IP addresses, and selecting DHCP under the Internet tab in Admin Utility). The Airport settings on our two computers are set up to connect using PPPoE with the given login name and password. (PS: we cannot see the base station in Airport Admin Utility when using these settings; we have to select a new location from the Apple menu to see it and make configurations.)
    What we want is to have the Airport Express connect to the ISP using a PPPoE connection, rather than connecting through the computer.
    I know there is a 'Connect using PPPoE' option in Airport Admin Utility, letting me input an account name and password. If I select this setting instead of DHCP, enable distribution of IP addresses, and configure my Airport card NOT to connect using PPPoE, I will see my base station in Airport Admin Utility with the IP address 10.0.1.1 (or similar) and my computer will have x.x.x.2. Next to the Airport icon in the menu bar, a scrolling message says 'Looking for PPPoE host' without anything happening. I am sure my account name and password are correct, as they've both worked when using this computer to connect over PPPoE (like now).
    How can I get the Airport Express to handle all the PPPoE stuff without using bridge mode?
    PS: Both my dad and I have iPhones, which we can't seem to get to connect unless they're assigned an IP address, since as far as I know there's no option for entering a PPPoE user name and password.

    Any solutions to this? I'm in China also, in Beijing, trying to get my Airport Express to work with an ADSL modem.
    A direct ethernet cable connection to my MacBook works fine.
    When I configure the Airport Express with the ID and password, that seems to be fine also – the Airport Express shows a green light.
    But I cannot figure out the settings to connect wirelessly from my MacBook to the Airport Express. I get a constantly scrolling message: "Looking for PPPoE host..."
    thanks
    Paul

  • Can my 600 MHz P-3 with 256 MB RAM handle data acquisition at more than 1 kHz with waveform chart support?

    Please suggest tools for acquiring high-speed 8-channel analog data.
    Can my 600 MHz P-3 with 256 MB RAM handle data acquisition at more than 1 kHz with waveform chart support?

    Arun,
    I have performed data acquisition on a computer with lower specs than the one you described, at rates greater than 1 kHz. I don't quite understand the "waveform chart support" portion of your question, but if you are asking if it is possible to plot the acquired data on a Waveform Chart in LabVIEW, this should not be a problem.
    If you have more specific questions, let me know.
    -D
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman
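    For a rough sense of scale, assuming 16-bit samples: 8 channels x 1 kS/s x 2 bytes = 16 kB/s, a trivial data rate even for a 600 MHz machine, so the acquisition itself should not be the bottleneck.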

  • If I encrypt large data using RSA, what can I do?

    If I encrypt large data, where the data size > 1024 bits (the RSA key size), and the data is a byte[], what can I do?

    You'll have to block it yourself, and encrypt each block on its own. On the decrypt side, your algorithm needs to expect a series of blocks, and it needs to decrypt each of them and rebuild the original plaintext. It's not hard, but you're in for some tedious times with byte[] (a minimal sketch follows this reply).
    And you'll end up with something that runs at a snail's pace, and the security will be weaker! It's a very bad approach to encryption. I realize that you know that, but feel free to tell whoever is requiring you to do this that what they're asking for is stupid. ;)
    Grant
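    For illustration, a minimal sketch of the blocking approach described above, using only standard JCE calls (the 1024-bit key and PKCS#1 v1.5 padding match the question; as Grant notes, this is slow and cryptographically weak, and hybrid encryption with a symmetric cipher is the usual answer):

    import javax.crypto.Cipher;
    import java.io.ByteArrayOutputStream;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;

    public class RsaBlocks {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(1024); // matches the key size in the question
            KeyPair kp = kpg.generateKeyPair();

            byte[] data = new byte[5000]; // pretend payload, larger than one RSA block

            // PKCS#1 v1.5 padding: a 1024-bit key holds at most 128 - 11 = 117
            // plaintext bytes per block; each ciphertext block is 128 bytes.
            Cipher enc = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            enc.init(Cipher.ENCRYPT_MODE, kp.getPublic());
            ByteArrayOutputStream ct = new ByteArrayOutputStream();
            for (int off = 0; off < data.length; off += 117) {
                int len = Math.min(117, data.length - off);
                ct.write(enc.doFinal(data, off, len));
            }

            Cipher dec = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            dec.init(Cipher.DECRYPT_MODE, kp.getPrivate());
            byte[] blocks = ct.toByteArray();
            ByteArrayOutputStream pt = new ByteArrayOutputStream();
            for (int off = 0; off < blocks.length; off += 128) {
                pt.write(dec.doFinal(blocks, off, 128));
            }
            System.out.println("round-trip ok: " + (pt.size() == data.length));
        }
    }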

  • Large data downloads for no apparent reason, can be upwards of half a GB. Any ideas?

    Hi,
    My son's iPhone 4S with 64 GB of storage periodically has large data usage, around 0.5 GB for no apparent reason. Wi-Fi is on. Sometimes this download occurs in the middle of the night. Any ideas? Thanks

    If it is cellular data usage you should be able to identify the cause with a little detective work. Settings > Cellular > Use Cellular Data For will show data usage by app and if you click on System Services in that list it will also show the details of system service usage. The numbers are from the last time the statistics were reset (bottom of the same screen).

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up, passed through FCC, and then sent to XI. In XI it goes through message mapping and then an XSL transformation, in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows lots of problems, like (1) JCo call errors, or (2) sometimes XI even stops and we have to start it manually again for it to function properly.
    Please suggest some way to handle large volumes (file sizes up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file has to be processed in the target system, you could split your source file into several messages by setting the Recordsets per Message parameter in the FCC.
    However, you just want to convert a .txt file into an .xml file, so first try setting the EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This may solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine (I mean the JCo call error), because the file is first processed in the Adapter Engine (File Content Conversion and so on) and only then sent to the pipeline in the Integration Engine.
    Carlos

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying large/huge resultset of data on a crystal report.  The report takes about 4 minutes or more depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my webapplication (webapp) under Jboss.
    User specifies the filter criteria to generate a report (date range etc) and submits the request to the webapp.  Webapp  queries the database, gets a "resultset".
    I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of Crystal Report Viewer to display the report on browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step.  I put logs everywhere and noticed that the database query doesn't take long to return the resultset.  Everything processes pretty quickly until I call processHttpRequest on the CRV.  This method just hangs for a long time before displaying the report in the browser.
    CRV runs pretty fast when the resultset is smaller, but for a large resultset it takes a very long time.
    I do have subreports and use Crystal Reports formulas on the reports; some of them are used for grouping as well.  But I don't think subreports are the real culprit here, because some other reports that don't have any subreports are also really slow displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed.  But for this, CRXI must allow me to create my own buttons (previous, next, last), so I can control the click event and fetch data accordingly.  I tried capturing events by registering the event handler "addToolbarCommandEventListener" of CRV, but my listener gets invoked after the processHttpRequest method completes, which doesn't help.
    Somehow I need to be able to control the UI by adding my own previous-page, next-page, and last-page buttons and controlling their click events.
    B) Automagically have CRXI use a javascript functionality, to allow browser side page navigation.  So maybe the first time it'll take 5 mins to display the report, but once it's displayed, user can go to any page without sending the request back to server.
    C) Try using Crystal Reports 2008.  I'm open to using this version, but I couldn't figure out whether it has any features that can help me do external pagination or anything else that can handle large resultsets.
    D) Will using the Crystal Reports Servers like cache server/application server etc help in any way?  I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer etc....but I'm not sure if any of these things are going to solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially, the answer is to use smaller resultsets, or to have the report pull from the database directly instead of passing it a resultset. A generic sketch of external pagination follows.
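    For illustration, a sketch of external pagination with JDBC, assuming an Oracle-style ROWNUM query (the orders table and its ordering column are hypothetical; the point is that each request touches only one page of rows):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PagedQuery {
        // Fetches one page of rows using Oracle-style ROWNUM pagination,
        // so the report only ever sees pageSize rows at a time.
        static void fetchPage(Connection con, int pageSize, int pageNo) throws SQLException {
            String sql =
                "SELECT * FROM (" +
                "  SELECT t.*, ROWNUM rn FROM (SELECT * FROM orders ORDER BY id) t" +
                "  WHERE ROWNUM <= ?" +
                ") WHERE rn > ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, pageNo * pageSize);       // upper bound for this page
                ps.setInt(2, (pageNo - 1) * pageSize); // rows already shown
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // hand this page to the report instead of the full resultset
                    }
                }
            }
        }
    }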

  • Problem with large data report

    I tried to run a template I got from Release 12 using data from the release we are using (11i). The XML file is about 13,500 KB. When I run it from my desktop,
    I get the following output (mostly no output is generated; sometimes it is generated after a long time):
    Font Dir: C:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\fonts
    Run XDO Start
    RTFProcessor setLocale: en-us
    FOProcessor setData: C:\Documents and Settings\skiran\Desktop\working\2648119.xml
    FOProcessor setLocale: en-us
    I assumed there might be compatibility issues between Release 12 and 11i, so I tried to write my own template, and ran into the same issue when I added the third nested loop.
    I also noticed javaws.exe runs in the background hogging a lot of memory. I am using BI Publisher version 5.6.3.
    I tried to run the template through the Template Viewer. The process never completes.
    The log file is
    [010109_121009828][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setData(InputStream) is called.
    [010109_121014796][][STATEMENT] Logger.init(): *** DEBUG MODE IS OFF. ***
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setTemplate(InputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutput(OutputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutputFormat(byte)is called with ID=1.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setLocale is called with 'en-US'.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.process() is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.generate() called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] createFO(Object, Object) is called.
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] oracle.xdo Developers Kit 10.1.0.5.0 - Production
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] Scalable Feature Disabled
    End of Process.
    Time: 436.906 sec.
    FO Formatting failed.
    I can't seem to figure out whether this is a looping issue, a large-data issue, or a BI version issue. Please advise.
    Thank you

    The report will probably fail in a production environment if you don't have enough heap. 13 MB is a big XML file for the parsers to handle; it will probably crush the OPP. The whole document has to be loaded into memory, and preserving the relationships in the document is probably what's killing your performance. The OPP/FOProcessor does not use the SAX parser the way the bursting engine does. I would suggest setting a maximum range on the number of documents that can be created and submitting in a set of batches. That will reduce your XML file size, and performance will increase.
    An alternative to the previous approach would be to write a concurrent program that merges the PDFs using the document merger API. This would allow you to burst the document into a temp directory and then reassemble the pieces into one document. One disadvantage of this approach is that the PDF is going to be huge. Also, if you have to send that piggy to the printer you're gonna have some problems too: when you convert the PDF to PS the files become massive because of the loss of compression, and it gets even worse if the PDF has images. Then you'll have more problems with disk on the server and/or running out of memory on PS printers.
    All of the things I have discussed I have done in some fashion. Speaking from experience, a 13 MB XML file is just a really bad idea. I would go with option one.
    Ike Wiggins
    http://bipublisher.blogspot.com
