I need a more efficient method of transferring data from RT in an FP2010 to the host.

I am currently using LV6.1.
My host program currently uses DataSocket to read and write data to and from a FieldPoint 2010 system; my controls and indicators are defined as DataSocket items. On the FieldPoint, an RT loop talks to a communication loop through RT FIFOs, and the communication loop uses Publish to send and receive via the DataSocket indicators and controls in the host program. I am running out of bandwidth getting data to and from the host, and there is not very much data. The RT program includes 2 PIDs and 2 filters. There are 10 floats going to the host and 10 floats coming back from the host. The desired time-critical loop time is 20 ms; the actual loop time is about 14 ms. Data moves back and forth between host and FieldPoint several times a second without regularity (not a problem). If I add a couple more floats in each direction, the communication drops to once every several seconds (too slow).
Is there a more efficient method of transferring data back and forth between the host and the FieldPoint system?
Will LV8 provide faster communications between the host and the FP system? I may have the option of moving up.
Thanks,
Chris

Chris, 
Sounds like you might be maxing out the CPU on the FieldPoint.
DataSocket is considered a pretty slow method of moving data between hosts and targets, as it has quite a bit of overhead associated with it.  There are several things you could do. One: instead of using a DataSocket item for each float you want to transfer (which I assume you are doing), try using an array of floats and just one DataSocket transfer for the whole array.  This is often quite a bit faster than calling a Publish VI for many different variables.
Also, as Xu mentioned, using a raw TCP connection would be the fastest way to move data.  I would recommend taking a look at the TCP examples that ship with LabVIEW to see how to use them effectively.
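To illustrate the framing idea only (LabVIEW code is graphical, so here is a minimal host-side sketch in Java instead): the point is to batch all ten floats into one fixed-size packet per cycle rather than one transfer per value. The port number, hostname, and packet layout are assumptions for illustration; a real implementation would use LabVIEW's TCP Read/Write functions with the same layout. Java's DataOutputStream writes big-endian IEEE-754 floats, which happens to match LabVIEW's flattened-data byte order.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class FpHostLink {
    // Port, hostname, and float counts are assumptions for illustration.
    private static final int PORT = 5000;
    private static final int N_FLOATS = 10;

    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("fp-2010.local", PORT);
             DataInputStream in = new DataInputStream(sock.getInputStream());
             DataOutputStream out = new DataOutputStream(sock.getOutputStream())) {

            float[] toTarget = new float[N_FLOATS];
            float[] fromTarget = new float[N_FLOATS];

            while (true) { // runs until the connection drops
                // One write per cycle: all 10 setpoints in a single packet
                // (the batching idea -- one transfer instead of ten).
                for (float f : toTarget) {
                    out.writeFloat(f); // big-endian IEEE-754, 4 bytes each
                }
                out.flush();

                // One read per cycle: all 10 measurements at once.
                for (int i = 0; i < N_FLOATS; i++) {
                    fromTarget[i] = in.readFloat();
                }
                // ... hand fromTarget to the UI, update toTarget ...
            }
        }
    }
}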
LabVIEW 8 introduced the shared variable, which, when network-enabled, makes data transfer very simple and is quite a bit faster than a comparable DataSocket transfer.  While faster than DataSocket, shared variables are still slower than flat-out using a raw TCP connection, but they are much more flexible.  Also, shared variables can function in the RT FIFO capacity and clean up your diagram quite a bit (while maintaining the RT FIFO functionality).
Hope this helps.
--Paul Mandeltort
Automotive and Industrial Communications Product Marketing

Similar Messages

  • Why does my laptop give me a warning saying that Firefox is using too much memory and to restart Firefox to be more efficient? I just bought this laptop so I know it has the power to run what I need it to.


    You appear to have AVG installed:
    See: http://forums.avg.com/ww-en/avg-forums?sec=thread&act=show&id=173969#post_173969
    From reading on the internet, it appears that when there is a spike in memory usage, AVG "interprets" that as a memory leak, possibly caused by malware. AVG could be incorrect concerning that assumption. Maybe they are being a bit too conservative about memory usage; just my opinion.
    The decision is yours to turn off the "advisor" or leave it on.
    Not related to your question, but...
    You may need to update some plug-ins. Check your plug-ins and update as necessary:
    *Plug-in check: http://www.mozilla.org/en-US/plugincheck/
    *Adobe Shockwave for Director Netscape plug-in: https://support.mozilla.com/en-US/kb/Using%20the%20Shockwave%20plugin%20with%20Firefox#w_installing-shockwave
    *Adobe PDF Plug-In For Firefox and Netscape: https://support.mozilla.com/en-US/kb/Using%20the%20Adobe%20Reader%20plugin%20with%20Firefox#w_installing-and-updating-adobe-reader
    *Shockwave Flash (Adobe Flash): https://support.mozilla.com/en-US/kb/Managing%20the%20Flash%20plugin#w_updating-flash
    *Next Generation Java Plug-in for Mozilla browsers: https://support.mozilla.com/en-US/kb/Using%20the%20Java%20plugin%20with%20Firefox#w_installing-or-updating-java

  • Purchasing new iMac (27" i5), what is the best method to transfer programs/files/documents from a 24" iMac?  What is the best method to remove personal data from the OS?

    Purchasing a new iMac (27" i5), what is the best method to transfer programs/files/documents from a 24" iMac?
    What is the best method to remove personal data from the 24" iMac?

    Use Setup Assistant, which is offered when you set up your new Mac.  It will transfer information from a Time Machine backup, a clone, or another Mac.
    It's best to do this during setup to avoid issues with duplicate IDs.
    Regards

  • 3 Table Joins -- Need a more efficient Query

    I need a 3-table join but need to do it more efficiently than I am currently doing. The query is taking too long to execute (in excess of 20 minutes; these are huge tables with 10 million+ records). Here is what the query looks like right now. I need 100 distinct acctnum values from the query below, with all the conditions as requirements.
    THANKS IN ADVANCE FOR HELP!!!
    SELECT /*+ parallel */ *
      FROM (SELECT /*+ parallel */ DISTINCT
                   a.acctnum,
                   a.acctnum_status,
                   a.sys_creation_date,
                   a.sys_update_date,
                   c.comp_id,
                   c.comp_lbl_type,
                   a.account_sub_type
              FROM account a
                   LEFT JOIN company c
                      ON a.comp_id = c.comp_id AND c.comp_lbl_type = 'IND',
                   subaccount s
             WHERE a.account_type = 'I'
               AND a.account_status IN ('O', 'S')
               AND s.subaccount_status IN ('A', 'S')
               AND a.account_sub_type NOT IN ('G', 'V')
               AND a.sys_update_date <= SYSDATE - 4 / 24)
     WHERE ROWNUM <= 100;

    Hi,
    Whenever you have a question, post CREATE TABLE and INSERT statements for a little sample data, and the results you want from that data.  Explain how you get those results from that data.
    Simplify the problem, if possible.  If you need 100 distinct rows, post a problem where you only need, say, 3 distinct rows.  Just explain that you really need 100, and you'll get a solution that works for either 3 or 100.
    Always say which version of Oracle you're using (e.g. 11.2.0.3.0).
    See the forum FAQ: https://forums.oracle.com/message/9362002
    For tuning problems, also see https://forums.oracle.com/message/9362003
    Are you sure the query you posted is even doing what you want?  You're cross-joining s to the other tables, producing all possible combinations of rows, and then picking 100 of those in no particular order (not even random order).  That's not necessarily wrong, but it certainly is suspicious.
    If you're only interested in 100 rows, there's probably some way to write the query so that it picks 100 rows from the big tables first. 

  • Pointers: more efficient method(s), styles for making unconventional UI's

    I currently use images on my custom panels to give the customized look I want for my apps. But I just can't shake the feeling that there are more efficient ways to do it. I just need pointers to some materials (books, articles, documentation, etc.) on some technology I can use.
    Thanks!


  • Most efficient method of storing configuration data for huge volume of data

    The scenario I'm stuck on is as follows:
    I have a huge volume of raw data (as CSV files).
    This data needs to be rated based on the configuration tables.
    The output is again CSV data with some new fields appended to the original records.
    These new fields are derived from original data based on the configuration tables.
    There are around 15 configuration tables.
    Out of these 15 tables 4 tables have huge configurations.
    One table has 15 million configuration rows of 10 columns.
    The other three tables have around 1-1.5 million configuration rows of 10-20 columns.
    Now, in order to carry forward my rating process, I'm left with the following methods:
    1) Leave the configurations in database tables. Query the table for each configuration required.
    Disadvantage: Even if indexes are created on the tables, it takes a lot of time to query 15 configuration tables for each record in the file.
    2) Load the configurations as key-value pairs in RAM using a suitable collection (e.g. HashMap)
    Advantage: Processing is fast
    Disadvantage: Takes around 2 GB of RAM per instance.
    Also, when the CPU context switches (I'm using an 8-CPU server), the process hangs for 10 seconds.
    This happens very frequently, so the net speed I get is again low.
    3) Store the configurations as sorted CSV files and then perform a binary search on them.
    Advantages: No RAM usage; the same configuration can be shared by multiple instances.
    Disadvantages: Only one configuration table has an integer key, so I can't use this concept for the other tables
    (if I'm wrong about that, please correct me).
    4) Store the configurations as an XML file
    I don't know the advantages/disadvantages of it.
    Please suggest which methodology should be used.
    Edited by: Vishal_Vinayak on Jul 6, 2009 11:56 PM

    Vishal_Vinayak wrote:
    2) Load the configurations as key-value pairs in RAM using a suitable collection (e.g. HashMap)
    Advantage: Processing is fast
    Disadvantage: Takes around 2 GB of RAM per instance.
    Also, when the CPU context switches (I'm using an 8-CPU server), the process hangs for 10 seconds.
    This happens very frequently, so the net speed I get is again low.
    Sounds like you don't have enough physical memory. Your application shouldn't be hanging at all.
    How much memory is attached to each CPU? e.g. numactl --show
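    For reference, a minimal Java sketch of option 2, assuming a hypothetical configuration CSV whose first column is the lookup key; the file layout and class name are invented for illustration. Note that non-integer keys (the objection raised against option 3) are not a problem in memory: a composite string key works just as well.

    import java.io.BufferedReader;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    public class ConfigCache {
        // Maps lookup key -> the remaining columns of the config row.
        private final Map<String, String[]> rows = new HashMap<>();

        // The CSV path and layout (lookup key in column 0) are assumptions.
        public ConfigCache(String csvPath) throws Exception {
            try (BufferedReader r = Files.newBufferedReader(Paths.get(csvPath))) {
                String line;
                while ((line = r.readLine()) != null) {
                    String[] cols = line.split(",", -1);
                    // For tables without an integer key, a composite string
                    // key such as cols[0] + "|" + cols[1] works just as well.
                    rows.put(cols[0], cols);
                }
            }
        }

        public String[] lookup(String key) {
            return rows.get(key); // O(1) per record being rated
        }
    }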

  • RDA method to extract Master Data from R/3

    We need to implement a way to extract the master data from R/3 as soon as it is available in R/3, but it's not as if new master data is loaded in R/3 every 5 minutes; it's more like 3 or 4 times a month. I thought about the RDA method and just wanted your opinion on whether it is a good option for such infrequent loads to BW.
    Does this daemon eat a lot of system resources? Please, if the answer could be provided by someone who actually uses RDA, that would be great!

    Hi,
    We use RDA for systems where the load is very frequent, but a process chain would do what's required in your case, as the data gets updated only a few times a month.
    RDA permanently occupies a background process, which would create some problems, and it keeps on checking for new data.
    I feel it's unnecessary here, so go for process chains.
    Edited by: Vishal Sanghvi on Sep 16, 2008 12:33 PM

  • Method for Downloading Huge Data from SAP system

    Hi All,
    We need to download a huge amount of data from one SAP system and then migrate it into another SAP system.
    Is there any better method than downloading data through SE11/SE16? Please advise.
    Thanks
    pabi

    I have already done several system mergers, and we usually do not need to download data.
    SAP can talk to SAP, with RFC and by using ALE/IDoc communication.
    So it is possible to send e.g. material master data with BD10 per IDoc from system A to system B.
    You can define that the IDocs are collected, which means they are saved to a file on the application server of the receiving system.
    You can then use LSMW and read this file, with several hundred thousand IDocs, as the source.
    Field mapping is easy if you use the IDoc method for import, because then you have a 1:1 field mapping.
    So you only need to focus on the few fields where the values change from the old to the new system.

  • Connect a tablet as input method or push data asynchronously from the WebAS

    I have to connect a tablet to a WebDynpro application. Not a tablet PC or something, just an ordinary tablet: one of those big plates with hundreds of imaginary buttons.
    The user pushes some imaginary buttons on the tablet with a pen, and the WebDynpro program should act accordingly. E.g. the user pushes the button "TheMP3Player" in the order-entry WebDynpro, directly after that the button "colour green", then pushes the "battery pack" button two times, and then "type enhanced capacity". The WebDynpro component should react accordingly: jump between fields, etc.
    Do we have the possibility to add an input device like this to the WebDynpro application, or are we stuck with mouse and keyboard? At least, I could not find a way to do this in WebDynpro.
    If I were using an ordinary, classical Dynpro, I would write an RFC server, write the data from the tablet to the WebAS, and from there write the data directly to the Dynpro.
    Since WebDynpro is more or less stateless, we can't do this here, right? Or is there some "add-on" that I could use to poll data from the WebAS on a regular or on an event basis (an event not triggered by mouse or keyboard, though)?
    Any help would be appreciated.
    Thank you!
    Kind regards,
    Andreas

    Hi Maaniks,
    I'm a little puzzled by step 5. The return from the tpcall() will be a TypedBuffer, so you shouldn't need to create another TypedBuffer. You may need to downcast it to the particular type of TypedBuffer returned, but you shouldn't need to create a new TypedBuffer or TypedFML32.
    Otherwise, what you are doing is probably pretty reasonable and common. I'm not aware of any general-purpose classes that do what you are looking for, although creating one probably wouldn't be hard. Using reflection, you could make it so that the same converter class could handle any POJO or bean, assuming you can easily map the field IDs to attributes or properties of the POJO or bean.
    The only other comment I might have is that whether you use an iterator or look for specific fields will largely depend on which there are fewer of. If the class you are populating only takes a few fields from the FML32 buffer, you might just extract those fields instead of iterating through the entire FML32 buffer.
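    A minimal reflection sketch of that converter idea, assuming the FML32 fields have already been extracted into a Map keyed by field ID; the field-ID-to-property table and the Map input are assumptions for illustration (a real converter would pull them from the TypedFML32 buffer returned by tpcall()):

    import java.lang.reflect.Method;
    import java.util.Map;

    public class BeanPopulator {
        // Copies buffer fields onto any POJO/bean: for each known field ID,
        // find the matching setter by name and invoke it reflectively.
        public static <T> T populate(T bean, Map<Integer, Object> fields,
                                     Map<Integer, String> fieldToProperty) throws Exception {
            for (Map.Entry<Integer, String> e : fieldToProperty.entrySet()) {
                Object value = fields.get(e.getKey());
                if (value == null) {
                    continue; // field absent from the buffer
                }
                String prop = e.getValue();
                String setter = "set" + Character.toUpperCase(prop.charAt(0)) + prop.substring(1);
                for (Method m : bean.getClass().getMethods()) {
                    if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                        m.invoke(bean, value);
                        break;
                    }
                }
            }
            return bean;
        }
    }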
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • Need to create a transport of SD config data from DEV 120 to DEV 110

    During our conversion, some config modifications were made in DEV 120 instead of DEV 110.
    I need to get the config:
    SPRO> Sales Distribution> Billing> Billing Documents> copy sales documents to Billing documents
    From 120 to 110.
    I have tried to force a transport creation in 120 and used SCC1 to import into 110, but no data comes over.
    How can I force all the data from these config tables to load into a transport so I can send it from 120 to 110?
    Thanks,
    Bev Barbush

    Hi Beverly,
    I would try bringing this transport in via STMS and then reviewing the import log to see why the data is not going across. What I've normally seen in this situation is the data was not correctly added to the transport before it was imported.
    I would NOT copy config tables back from the QAS system. Copying select portions of config data from one system to another is likely to break all sorts of logical relationships between tables.
    Have you guys tried running a config comparison between 110 and 120 to see what the actual differences are?
    Hope that helps.
    J. Haynes

  • Need to specify LEFT OUTER JOIN while using data from logical database BRM?

    I'm trying to extract data for external processing using SQVI. The fields required are in tables BKPF (Document Header) and BSEG (document detail) so I'm using logical database BRM. Note: required fields include the SPGR* (blocking reasons) which don't appear to be in BSIS/BSAS/BSID/BSAD/BSIK/BSAK, hence I can't just use a Table Join on any of these but have to use BSEG, hence BRM.
    If the document type is an invoice, I also need to include the PO number from table EKKO (PO header), if present, hence I'd like to add this to the list. However, if I do this, it seems that some records are no longer displayed, e.g. AB documents.
    The interesting thing is that not all records are suppressed, so it's not a simple case of the logical database using an effective INNER JOIN, but the effect is similar.
    In any event, is there a way to specify that the link to table EKKO should be treated as an effective LEFT OUTER JOIN, i.e. records from BKPF/BSEG should be included irrespective of whether any records from EKKO/EKPO exist or not?
    Alternatively, is there some other way to get the SPGR* fields (for example) from BSEG and still join the BKPF? Of course, one solution is to use multiple queries, but I was hoping to avoid this.

    Thanks for everyone's responses. I know how to work around the problem with SQL; I want to see whether there is a way to make an outer join's filter go in the join clause instead of the where clause with Crystal Reports' standard functionality.
    We have some Crystal Reports users who are not SQL users, i.e. benefit specialists, payroll specialists and compensation analysts who have Crystal Reports.  I was hoping this functionality was available for them.  I made my example a simple one, but often reports have multiple outer joins, with maybe two or three of the outer joins needing a filter on them that won't turn them into an inner join.
    Such as
    Select person information
    outer join address record
    outer join email record
    outer join tax record (filter for active state record & filter for code = STATE )
    outer join pay rates record
    outer join phone#s (filter for home phone#)
    I thought maybe the functionality is available and I just don't know how or where to use it.  Maybe it is just not available.
    If it is not available, I will probably need to set up some standard views for them to query, rather than expecting them to pull the tables together themselves.

  • ADF method call to fetch data from DB before the initiator page loads

    Hello everyone
    I'm developing an application using Oracle BPM 11.1.1.6.0 and JDeveloper 11.1.1.6.0
    I want to fetch some data from the database before the initiator task, so that when the user clicks on the process name, his/her history will be shown to them before proceeding.
    It was possible to have a service task before the initiator task in JDeveloper 11.1.1.5.0, but I have moved to 11.1.1.6.0 and it clearly mentions this to be an illegal way.
    I came across the thread below, which suggested doing this using an ADF method call, but I don't know how, since I'm new to ADF.
    Re: Using Service Task Activity before Initiator Task issue
    Can anyone show me the way?
    Thanks in advance

    Thanks Sudipto
    I checked that article; however, I think I might be able to do what I want using ADF BC.
    See, what I'm trying to do is to get a record from a database and show it to the user on the initiator UI.
    I have been able to work with ADF BC and View Objects to get all the rows and show them to the user in a table.
    However, when I try to run the same query in the parameterized form to just return a single row, I hit a wall.
    In short, My problem is like this:
    I have an Application Module which has an entity object and a view object.
    My database is SQL Server 2008.
    When I try to create a new read-only view object to return a single row, I face the problem.
    The query I have in the query section of my View Object is like this:
    select *
    from dbo.Employee
    where EmployeeCode = 99
    which works fine.
    However, when I define a bind variable, input_code for example, and change the query to the following, it won't validate:
    select *
    from dbo.Employee
    where EmployeeCode = :input_code
    It just keeps saying "incorrect syntax near ':'". I don't know if this has to do with my DB not being Oracle or whether I'm doing something wrong.
    Can you help me with this problem please?
    thanks again
    bye
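    For what it's worth, the :name bind style is Oracle JDBC syntax; SQL Server's driver expects positional ? markers, which is consistent with the error above. Below is a minimal plain-JDBC sketch of the same query against SQL Server, with the connection URL, credentials, and printed column invented for illustration; in ADF BC the equivalent move would be switching the view object's SQL binding style to JDBC positional.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class EmployeeLookup {
        public static void main(String[] args) throws Exception {
            // URL and credentials are placeholders for illustration.
            String url = "jdbc:sqlserver://localhost:1433;databaseName=hr";
            try (Connection conn = DriverManager.getConnection(url, "user", "pass");
                 // '?' is the JDBC positional marker SQL Server understands;
                 // ':input_code' is Oracle-style syntax and fails to parse here.
                 PreparedStatement ps = conn.prepareStatement(
                         "select * from dbo.Employee where EmployeeCode = ?")) {
                ps.setInt(1, 99);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1)); // first column, for illustration
                    }
                }
            }
        }
    }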

  • Efficient way to copy business data from Production DB to Test DB

    Hi.
    I'm a DBA in a software dev company.
    The testing team has asked me to replicate a customer (and all its data) from the System-Test DB to a certain tester's DB.
    The problem is the CUSTOMER table has relations to 6 other tables (ORDERS, ORDER_ITEMS, CONTACTS, etc.), which in turn have relations to other tables.
    As I see it I have two options:
    1. Copy the entire System-Test DB to the other DB. This is bad - this DB is very large and we don't have enough disk space.
    2. Work out all the relations of CUSTOMER manually and write a script to copy just the relevant tables/records for the specific customer. This seems like too much work...
    Anybody familiar with a script/tool to perform this?

    Ok.. I gave this tool a test run. Pretty impressive.
    It allows you to set a 'root' entity (Customer in my case) and then it calculates all the relations by reading my schema's foreign-keys. I have one table that is logically connected, but has no FK defined in my schema. No problem - I can add this relation manually.
    Now, I need to set the WHERE on my root entity (e.g. CUSTOMER_ID = 1234567) and the tool just shows me all the relevant tables with only the appropriate records displayed. Charm!
    Lastly, I copied the tables/records to my test DB (the tool has a 'Sync' window).
    Mission accomplished.

  • Which is the more efficient way to get a result set from the database server

    Hi,
    I am working on a project where I need to query the database to fetch a result set and then iterate through it. What I want is to create one single piece of Java code that can call many different SQLs and build a list out of the result set. There are two approaches open to me:
    1.) Create a txt file where I can store my queries. My Java program can read this file and get the appropriate query to be used.
    2.) Create a stored procedure containing the queries and call the stored procedure from my Java program. Also, note that some of the queries need to be created dynamically depending upon the parameters supplied.
    Out of these two approaches, which is optimal and why?
    Also, following things to be noted.
    1. At times I want to create the where clause of the query dynamically depending upon the parameters passed.
    2. I want one single Java file that will handle all database calls.
    3. Parameters to the stored procedure can be passed using an array descriptor.
    4. The connection is made using JNDI.
    Please recommend the optimal of these two ways. You may also suggest other approaches, if any.
    Thanks,
    Rajan
    Edited by: RP on Jun 26, 2012 11:11 PM

    RP wrote:
    In case of queries stored in text files, I will need to replace some predefined placeholders with actual parameters and then pass the modified query to the DB engine. Even so, I liked the second approach, as it is more easily maintainable.
    There are a couple of issues. Shared SQL is one. Irrespective of the method used, the SQL cursor that is created needs to have bind variables. This ensures re-usability of the cursor, reduces the risk of Shared Pool fragmentation, lowers hard parsing, and reduces CPU utilisation.
    Another issue is flexibility. If the SQL cursors are created by stored procedures, this code resides on the server and abstracts the Java client from the complexities of SQL and SQL performance. The code can easily be updated and fine tuned to deliver faster/better SQL cursors, or modified to take new Oracle features, changes in data model, and so on, into consideration. This stored proc can be updated without having to touch or recompile a single byte of Java client code.
    There's also the security issue. What is more secure? SQL encapsulated in stored procs in a secure database and server environment? Or SQL "encapsulated" in text files on the client?
    The less code you have running on the client, the less code you have running in the wild that can be compromised without having to first compromise the server.
    Only I was worried about any performance issue that might happen using this approach.
    Performance is not a factor of who creates the SQL cursor.
    Whether a Java client creates a SQL cursor, a PL/SQL stored proc creates a SQL cursor, or a .NET client creates a SQL cursor - that SQL cursor does not know what the client is. It does not care what the client is. The SQL cursor performs as well as it is capable of, given the execution plan, data volumes, server resources and speed/performance of the server.
    The client language and SQL cursor interface used by the client (there are several in PL/SQL), determines the performance of the client's interaction with the cursor (e.g. round trips to the database when interfacing with the cursor). The client language (and its client interface to the cursor) does not dictate the actual performance of that SQL cursor on the database (does not make joins faster, or I/O faster)
    One more question: will my Java program close the cursor that I opened in the procedure?
    That you need to ask your Java code. Java code leaking ref cursors is unfortunately all too common. You need to make sure that your Java client interface to SQL cursors closes the cursor handle when done.
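    To make that last point concrete, here is a minimal Oracle JDBC sketch (the procedure name and signature are invented for illustration; assume get_accounts returns a SYS_REFCURSOR as its second, OUT parameter) where try-with-resources guarantees the ref cursor handle is closed:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import oracle.jdbc.OracleTypes;

    public class RefCursorCall {
        public static void printAccounts(Connection conn, String status) throws Exception {
            try (CallableStatement cs = conn.prepareCall("{call get_accounts(?, ?)}")) {
                cs.setString(1, status);                        // bind variable, not string concatenation
                cs.registerOutParameter(2, OracleTypes.CURSOR); // ref cursor OUT parameter
                cs.execute();
                // Closing this ResultSet is what releases the ref cursor on
                // the server -- the leak the reply warns about.
                try (ResultSet rs = (ResultSet) cs.getObject(2)) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }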

  • Method to hide Application data from Offshore Team

    Dear All,
    The client has a requirement that the offshore DBA team should not be able to view certain applications' data, but should still be able to perform their other activities.
    I am looking for options to achieve this. Please let me know if you are aware of any method to achieve this.

    It really depends on what kind of tasks will be assigned to the offshore team. Some tasks, e.g. applying Oracle patches, require SYS, which can do anything, including working around whatever home-built access control is put into the database. If they have admin access to the server where the database is housed, they can also gain access (O/S authentication) without needing to know any of the userids and passwords.
