Best practice for handling data for a large number of indicators

I'm looking for suggestions or recommendations on how best to handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously: binding network shared variables to each indicator, then using several subVIs to process each particular piece of data and write to the appropriate variables.
I was curious what others have done in similar circumstances.
Bill
“A child of five could understand this. Send someone to fetch a child of five.”
― Groucho Marx

I can certainly feel your pain.
Note: that's really what is going on in that PNG. You can see the Action Engine responsible for updating the display at the far right.
In my own defence: the FP concept was presented to the client's customer before they had identified a person familiar with LabVIEW, so I worked it this way through no choice of my own. I knew it would get ugly before I walked in the door and chose to meet the challenge head-on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info over a single ZigBee network, so I had the benefit of fairly low data rates as well, but even changing views (yes, there is a display mode that swaps what information is displayed for each sensor) updated fast enough that the user still got a responsive GUI.
(The GUI did scale poorly, though! That is a lot of wires! I was grateful to Jack for the idea to make Align and Distribute work on wires.)
Jeff

Similar Messages

  • Best practice to upload data for Appraisals: BDC, LSMW or function module

    Hi,
    I have heard that BDC and LSMW do not work for data upload in appraisals. Is it true?
    Can we use ECATT or SECATT to upload data in appraisals?
    I got the information somewhere that the function modules
    HRHAP_DOCUMENT_PREPARE
    HRHAP_DOC_UPDATE_BODY_AND_SAVE
    HRHAP_DOCUMENT_CREATE
    are used to upload the data for appraisals. Is that correct?
    Many of my earlier clients found it very hectic to create appraisal templates every year (PHAP_CREATE); they needed something automated for this. I could only suggest manual upload or SECATT to them, but I am not sure whether BDC/LSMW work for this.
    Can somebody throw some light on this?
    Best regards,
    Veera Sasidhar Jangam

    Hi,
    You need to write code for this and use the available function modules, as those do direct updates to the database.
    If the client is not bothered about look and feel during the data load, or does not care about the display of infotypes during data updates, then use the above method; otherwise a BDC needs to be written, with screen-control programming in it.
    Thanks,
    Ameet

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are the best practices for reducing downtime for database releases on 10.2.0.3? Which DB changes can be rolling and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle-tier environment so that you can point different middle-tier servers at one database or the other. When you want to upgrade, you point all the middle-tier servers at database A other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you point all the app servers at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle-tier environment to its normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • Best practice to define length for varchar field of table in sql server

    What is the best practice for defining the length of a varchar field in a table?
    For example, for a field such as "Remarks By Person": varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply...
    Dilip Patil..

    Hi Dilip,
    varchar(n | max) is variable-length, non-Unicode character data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered plus 2 bytes. We use varchar when the sizes of the column data entries vary considerably, and if the field's data size might exceed 8,000 bytes, we should use varchar(max).
    So the conclusion is, just as Uri said: whether to use varchar(max) or varchar(4000) depends on how many characters we are going to store.
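    For example, a minimal T-SQL sketch of that choice (the table and column names here are hypothetical):
    -- Remarks rarely exceed a known upper bound, so a bounded varchar keeps the
    -- value eligible for in-row storage and for use as an ordinary index key.
    CREATE TABLE dbo.PersonFeedback
    (
        PersonId        int           NOT NULL,
        RemarksByPerson varchar(4000) NULL,  -- bounded: use when a realistic limit exists
        LongNotes       varchar(max)  NULL   -- only when entries may exceed 8,000 bytes
    );
    Note that a varchar(max) column cannot be used as an index key column, which is one more reason to prefer a bounded length when the data allows it.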
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Best practice on Oracle VM for Sparc System

    Dear All,
    I want to test Oracle VM for SPARC, but I don't have a new-model server to test it on. What is the best practice for Oracle VM for SPARC?
    I have a Dell laptop which has spec as below:
    - Intel Core i7-2640M (2.8 GHz, 4 MB cache)
    - RAM: 8 GB DDR3
    - HDD: 750 GB
    - 1 GB AMD Radeon
    I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox. Is that possible?
    Please kindly give advice,
    Thanks and regards,
    Heng

    Heng Horn wrote:
    > How about a desktop or workstation computer whose latest-generation CPU supports Oracle VM for SPARC?
    Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

  • Database Log File becomes very big, What's the best practice to handle it?

    The transaction log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice for handling this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is only cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape (a hedged T-SQL sketch of these steps follows at the end of this reply):
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The command above shrinks the file to 10 GB (a recommended size for high-transaction systems).
    > Finke Xie wrote:
    > Should I shrink the database?
    "NEVER SHRINK DATA FILES"; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush
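    A hedged T-SQL sketch of steps 1-3 above (the database, file, and path names are made up):
    -- 1.) Take a transaction log backup so the inactive part of the log can be reused.
    BACKUP LOG MyProdDB TO DISK = N'E:\Backup\MyProdDB_log.trn';
    -- 2.) Shrink only the log file (never the data files); 10240 MB = 10 GB.
    USE MyProdDB;
    DBCC SHRINKFILE (N'MyProdDB_log', 10240);
    -- 3.) Schedule the BACKUP LOG statement above as a SQL Server Agent job that
    --     runs every 15 minutes, so the log never balloons again.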

  • What is the best practice to handle JPA methods in JSF app?

    I am building a JSF + JPA web app (no EJB).
    I have several methods that have JPQL inside.
    I believe I have to put those methods inside JSF beans in order to inject the EntityManagerFactory (am I right about this?).
    I want to separate those methods from the regular JSF beans that are used by page authors,
    and I may need to use them in different JSF managed beans.
    My question is: what is the best practice to handle this?
    I. Write one or a few separate JSF beans and inject them into the regular beans?
    II. Write one or a few separate JSF beans and access them from the regular beans using FacesContext?
    III. Something else?
    Waiting to hear your opinions.

    You can create named queries on your entities themselves, then just call entityMgr.createNamedQuery("nameOfQuery");
    Normally, we put these named queries in the class of the entity which will be returned. This allows for all information pertaining to a given entity and all ways of accessing that entity (except em.find() and stuff, of course) to be in one place. As long as the entity is defined in your persistence.xml file, any named queries which reside on that entity will be available through the EntityManager.
    As for the EntityManagerFactory, we normally create an application scope bean which holds the factory itself (because this is a heavy-weight object) and then just get all EntityManager instances from that by injecting this bean into whatever needs it. For example, I might have:
    // emfBB is the injected application-scope bean that holds the EntityManagerFactory.
    private EmfBB emfBB;
    private void lookupSomeData() {
        // Obtain a short-lived EntityManager from the shared factory, use it, then close it.
        EntityManager em = this.getEmfBB().getEmf().createEntityManager();
        // ... run queries with em ...
        em.close();
    }
    I hope this answered your question?
    ~Zack
    Edited by: zmarr on Nov 6, 2008 1:29 PM

  • Is it a best practice to Input Data directly to Parent Currency ?

    Dear Gurus,
    I understand this question is very simple, but I am wondering: is it a best practice to input data directly into <Parent Currency>?
    I have a scenario where users do translation in Oracle and wish to see the same number in HFM. As the process they follow is unique in nature, by Entity -> Account [or] for all Entities -> Account, creating override accounts is not helping us and is rapidly increasing the override hierarchy month over month.
    Thanks...
    Satya

    By overriding the accounts, I assume you are referring to accounts that are translated historically as opposed to the default translation performed by HFM?
    If so, I have seen two general approaches to achieve this:
         a) Allowing the user to directly override the translation at IsTransCurr (i.e. <parent currency>, if the parent is in a currency other than the entity's)
         b) Entering in a special currency rate in a rate account and then adjusting the translate routine to translate your account by that rate.
    Personally, I am a bigger fan of the first option.

  • The table for storing data for infocube and ODS

    Hi all:
        Could you please tell me how to find the tables that store the data for an InfoCube and an ODS?
    Thank you very much!

    Hi Jingying Sony,
    To find the tables for any InfoProvider, go to SE11.
    In the database table field, enter the following:
    Cube: has a fact table and dimension tables.
    For a customized cube (i.e., cube names not starting with '0'):
    Uncompressed fact table - /BIC/F<infocubename>
    Compressed fact table - /BIC/E<infocubename>
    Dimension table - /BIC/D<infocubename>
    For a standard cube (i.e., cube names starting with '0'):
    Uncompressed fact table - /BI0/F<infocubename>
    Compressed fact table - /BI0/E<infocubename>
    Dimension table - /BI0/D<infocubename>
    Click on display.
    For DSOs:
    For a standard DSO, the active table is /BI0/A<DSO name>00 (use 40 instead of 00 for the new-data table).
    For a customized DSO, use /BIC/A<DSO name>00.
    Click on display.
    An easier way: in the database table field, write the name of the cube/DSO preceded and followed by the '*' sign, then press F4. It will give you the names of the available tables for that InfoProvider. (A short example of how these naming templates resolve is sketched after this reply.)
    Double click on the name and choose display.
    Hope this helps,
    Best regards,
    Sunmit.
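    As a quick illustration of how those naming templates resolve, take a hypothetical customized cube ZSD_C01 (the cube name is made up); on the underlying database the tables look like this:
    -- These are the tables you would look up in SE11/SE16 (or query directly on the database):
    SELECT COUNT(*) FROM "/BIC/FZSD_C01";   -- uncompressed fact table
    SELECT COUNT(*) FROM "/BIC/EZSD_C01";   -- compressed fact table
    SELECT COUNT(*) FROM "/BIC/DZSD_C01P";  -- one of the dimension tables (P = data package dimension)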

  • Best practice to handle contents greater than 1 TB

    Hello All,
    I am using SharePoint 2010 and I need to know what's the best practice for handling content greater than 1 TB.
    Specifics:
    1) The content will be a collection of images (JPEG format), and collectively the size can go above 1 TB, up to 10 TB or more.
    2) Images will be uploaded to SharePoint through a web service.
    Are any of the options below suitable? If not, is there another option?
    - Document Library
    - Document Center
    - Record Center
    - Asset Library
    - Picture Library
    Thanks in advance ...

    There are several aspects to this.
    Large lists:
    http://technet.microsoft.com/en-gb/library/cc262813%28v=office.14%29.aspx
    A blog summarising large databases here:
    http://blogs.msdn.com/b/pandrew/archive/2011/07/08/articles-about-scaling-sharepoint-to-large-content-database-capacity.aspx
    Boundaries and limits:
    http://technet.microsoft.com/en-us/library/cc262787%28v=office.14%29.aspx#ContentDB
    If at all possible make your web service clever enough to split content over multiple site collections to allow you to have smaller individual databases.
    It can be done but you need to do a lot of reading on this to do it well. You'll also need a good DBA team to maintain the environment.

  • Best Practices on Routine Data Load.

    Can someone please tell me what the best practices are for routine data loads from one database to another?
    We have a PeopleSoft system where new employees' records are created; however, these new employees are required to take new-employee tests that are tracked by an application outside PeopleSoft on an Oracle DB. Therefore, we need to populate the Oracle DB with the new employees' information, on a daily basis or as needed. The data we will need to track are new employees or rehires, changes to existing employees (position, title, etc.), and terminated employees (date of termination, etc.).
    What is the best practice for getting the employees' information into the Oracle DB?
    Any suggestions are appreciated.
    -andy

    That depends on your source and your database versions, which you didn't mention. What is the easiest way to get the data out of your source database?
    Perhaps a database link, though that might be a security violation.
    Perhaps as a delimited ASCII file loaded using SQL*Loader or an external table.
    Can you provide more information and database version numbers?
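    To make the external-table suggestion concrete, here is a hedged Oracle SQL sketch assuming a daily pipe-delimited extract from PeopleSoft (the directory, file, table, and column names are all made up):
    -- One-time setup: a directory object pointing at where the extract file lands.
    CREATE OR REPLACE DIRECTORY hr_extract_dir AS '/data/extracts/hr';
    -- The extract exposed as a read-only external table.
    CREATE TABLE new_hire_ext (
      emplid    VARCHAR2(11),
      emp_name  VARCHAR2(100),
      job_title VARCHAR2(60),
      status    VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY hr_extract_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('new_hires.dat')
    )
    REJECT LIMIT UNLIMITED;
    -- Daily merge of the extract into the test-tracking application's table.
    MERGE INTO employee_tests t
    USING new_hire_ext s
       ON (t.emplid = s.emplid)
    WHEN MATCHED THEN UPDATE SET t.emp_name  = s.emp_name,
                                 t.job_title = s.job_title,
                                 t.status    = s.status
    WHEN NOT MATCHED THEN INSERT (emplid, emp_name, job_title, status)
                          VALUES (s.emplid, s.emp_name, s.job_title, s.status);
    A database link plus a scheduled MERGE would look similar, minus the external table; which approach is appropriate depends on the security and versioning questions raised above.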

  • Best Practices for Handling queries for searching XML content

    Experts: We have a requirement to get the count, out of 4M rows, of documents where a specific XML tag's value begins with a given string. I have a text index created, but the query is extremely slow when I use the CONTAINS operator.
    select count(1) from employee
    where
    contains ( doc, 'scott% INPATH ( /root/element1/element2/element3/element4/element5)') >0
    What is Oracle's best-practice recommendation for querying/indexing such searches?
    Thanks

    Can you provide a test case that shows the structure of the data and how you've generated the index? Otherwise, the generic advice is going to be "use prefix indexing".
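    Not an official recommendation, just a sketch of that generic "prefix indexing" advice: enable a prefix-enabled wordlist so a right-truncated term like scott% is answered from precomputed prefix entries, and keep a PATH section group so INPATH still works (the preference and index names are made up, and the existing index would have to be rebuilt with these parameters):
    BEGIN
      ctx_ddl.create_preference('emp_wordlist', 'BASIC_WORDLIST');
      ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_INDEX', 'TRUE');
      ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_MIN_LENGTH', '3');
      ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_MAX_LENGTH', '10');
    END;
    /
    CREATE INDEX employee_doc_idx ON employee (doc)
      INDEXTYPE IS CTXSYS.CONTEXT
      PARAMETERS ('section group CTXSYS.PATH_SECTION_GROUP wordlist emp_wordlist');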

  • Best practices with LDIF Development for RBAC?

    I'm currently working on enforcing RBAC (Role Based Access controls) in OID that may be subject to change every few months. What I've currently been doing is writing LDIF files to make changes to the existing RBAC once the changes have been finalized.
    Unfortunately, now we have ended up with a growing list of LDIF files that must be run in sequential order if we were to build a new environment. Any defects or development errors that slip through developer unit testing must be handled in the same manner.
    What is the best practice process for performing this type of development? Would it make more sense to have one LDIF file that removes all of the RBAC enforcement (via ldapmodify -c), and then a separate file that installs the latest and most up-to-date version? I've also considered just using one LDIF file, appending any updates to the end of it, and running the ldapmodify command with the -c parameter.

    With regard to the 29.97/30 thing, you'll find that video people are idiosyncratically imprecise about that. We say 60 when we mean 59.94, we say 30 when we mean 29.97 and we say 24 when we mean 23.976.
    We're quirky.
    Whenever somebody says one of those nice, round numbers, you can assume they're really talking about the corresponding ugly fraction.
    Unless they're film people, in which case 24 means 24, dangit.

  • Best Practice setting up NICs for Hyper V 2008 r2

    I am looking for suggestions on best practice for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one on the management VLAN and the other 4 on the data VLAN. This server will host 2 virtual machines: one is a DC and the other is a member server running local DHCP. The server is set up now with one NIC on the management VLAN and the other NICs set to get their IPs from the local DHCP server on the host. We have the virtual networks set up in Hyper-V to point to each of the NICs using an "external connection". The virtual servers (DHCP and AD) have their own IPs set within them. The issue we are seeing: when the site loses external connections for a while, clients cannot get IP addresses from the local DHCP server anymore.
    1. NIC on management Vlan -- IP Static -- Physical host
    2. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V  -- virtual server DHCP
    3. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- Virtual server domain controller
    4. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    5. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    Thanks in advance

    It looks like you may be overcomplicating things here. More and more of the recommendations from Microsoft at this point would be to create a Logical Switch and then layer on Logical Networks for your management layers, but here is what I would do for your simple remote office.
    Management NIC: looks good. (Teaming would be better, but only if you had 2 different switches to protect against link failures at the switch level; that doesn't seem relevant in this case.)
    NIC for the data network VLAN: I would use one NIC in your case if you have the ability to trunk multiple VLANs at the switch level to that NIC. That way you set the VLAN you want to access on each VM's NIC, and your virtual switch configuration is very simple. On this virtual switch, however, I would uncheck IPv4 and IPv6. There is no need to give this NIC an address, as you are just passing traffic through it from the VMs that are marked with VLAN tags. Again, if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office.
    Even if you keep your virtual switches linked to separate NICs, unchecking IPv4 and IPv6 makes sense.
    Disable all the other NICs.
    Beyond that, check your routing. Can you ping between all hosts when there is no interruption? Which DHCP server are they getting their addresses from normally? Where are your name resolution servers (DNS, WINS)?
    No silver bullet here, but maybe a step in the right direction.
    Rob McShinsky (VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • Archiving Best Practices / How To Guide for Oracle 10g - need urgently

    Hi,
    I apologize if this is a silly question, but I need a step-by-step archiving guide for Oracle 10g and cannot find any reference document. I am in a rather remote part of S.E. Asia and can't seem to find DBAs with the requisite experience to do the job properly. I have had one database lock up this week at a big telecoms provider, and another one at a major bank is about to go. I can easily add LUNs and restructure mirrors, etc., at the Unix level [I am a Unix engineer], but I know that is not the long-run solution. I am sure the two databases I am concerned about have never been archived properly.
    This is the sort of thing DBAs must do all the time. Can someone point me to the proper documentation so I can do a proper job and archive a few years' data out of these databases? I do not want to do a hack job. At least I can clone the databases and practise on the clones first before I actually touch production.
    -thanks very much
    -gregoire

    I'm not so sure this is a general database question, as it would be specific to an application and implementation, and as the technology has changed, the database options to support it have changed too.
    So for example, if you have bought the partitioning option, there may be some sensible procedure for partitioning off older data.
    Things may depend on whether you are talking about an OLTP, a DW, a DSS, or mixed systems.
    DBA's do it all the time because the requirements are different everywhere. Simply deleting a lot of data after copying the old data to another table (as some older systems do) may just wind up giving you performance problems scanning swiss-cheesed data.
    Some places may not archive at all, if they've separated out OLTP from reporting. If all the OLTP stuff is accessed through indexes, all the older stuff just sits there. The reporting DB may only have what needs to be reported on, or be on a standby DB where range scans are sufficient to ignore old data. Then there's Exadata, which has its own strengths.
    Best Practices have to be on similar enough systems, otherwise they are a self-contradiction.
    Get yourself someone who understands your requirements and can evaluate the actual problem. No apology needed, it is not a silly question. But what is silly is assuming what the problem is with no evidence.
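    If the partitioning route mentioned above is open to you (the Partitioning option is licensed and the data is naturally date-based), the usual pattern looks roughly like this hedged sketch (all names are made up):
    -- Range-partition the large transactional table by date.
    CREATE TABLE call_detail (
      call_id    NUMBER,
      call_date  DATE,
      payload    VARCHAR2(4000)
    )
    PARTITION BY RANGE (call_date) (
      PARTITION p_2006 VALUES LESS THAN (TO_DATE('2007-01-01', 'YYYY-MM-DD')),
      PARTITION p_2007 VALUES LESS THAN (TO_DATE('2008-01-01', 'YYYY-MM-DD')),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );
    -- Archiving then becomes a partition-level operation instead of mass DELETEs:
    -- export or transport p_2006 out, then drop it,
    ALTER TABLE call_detail DROP PARTITION p_2006;
    -- or keep it online but compressed in a cheaper tablespace:
    -- ALTER TABLE call_detail MOVE PARTITION p_2006 TABLESPACE archive_ts COMPRESS;
    Whether anything like that fits depends entirely on the requirements discussed above, which is exactly why there is no universal recipe.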
