Best practice for updating plots?

Folks -
I'm looking for best-practice advice, or better yet, point me to the FAQ. What's the one true LabVIEW way to keep a stacked plot on a waveform chart updated? I've got a main loop consisting of a flat sequence, the first two frames of which may be updating either of two 1-D arrays. There is a time axis common to both. I need both plotted soon (1-2 sec) after the update happens. Right now, the three arrays are just shared variables, written in the subVIs, while the plot is outside the flat sequence, inside the until-stop loop. I put the three together into a waveform, but I'm not at all sure this is good practice. Advice?
thanks
Alex
Attachments:
OUTLINE-PPS-V2.vi (74 KB)

Thanks 10^6. I am confused, but I have a hardware blockage to events (the 6133), and I can't find coherent guidance from NI on the one true path to LabVIEW goodness, short of asking stupid questions.
Whatever you are doing to the chart data does NOT create multiple traces: you create a single waveform but write the Y data twice, so the lower set simply overwrites the data wired in higher up.
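Purely as an illustration of the "one shared time axis, one Y array per trace" idea - sketched in Python/matplotlib rather than G, only because that is easier to type into a forum post; none of this is LabVIEW-specific:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)                # the common time axis
y1, y2 = np.sin(t), np.cos(t)              # the two 1-D arrays being updated

# two stacked plots sharing the time axis; each trace gets its own Y array
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, y1)
ax2.plot(t, y2)
ax2.set_xlabel("time (s)")
plt.show()

In the VI the usual equivalent is wiring an array of waveforms (or a 2-D array) to the graph, each element carrying its own Y data against the shared time base, rather than writing two Y sets into one waveform.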
Ahh, thanks. I tried to find documents on how to build a waveform with multiple plots; the NI examples I've found don't have time axes. I can't find one summary document about plots, so I have to try things until they work. XY charts did work, but I could not reliably update them. Trying to ...
Never hide an event structure inside a long sequence. If you press the "commit" button while the code is elsewhere, you might lock up the front panel forever.
I was afraid of that; I intended to disable the commit button except when needed (second frame).
Among my problems: in a 2-minute period, I wait for a switch signal external to me. That must start a sequence of waiting for another switch to close and checking a file frequently for updates. If the operator likes that new file, then commit (copy to another SV array). The second signal is my cue to set up and arm a few USB and PCI digitizers; then about 30 seconds of things happen in a sequence. If I could get events out of the 6133, a state machine would be possible, but NI says no, gotta poll.
What is the point of all these network shared variables?
Bind variables from subVIs to indicators.
What else needs to access those? Any other remote code?
A few are actual network SVs from elsewhere; others are local copies I make and then need on indicators.
In any case, I recommend rearchitecting this entire thing as a plain state machine: one outer loop, one case structure, with each frame becoming a state of that single case structure. Then you only need one instance of each variable.
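Even without hardware events from the 6133, that structure still works - the poll just lives inside whichever state needs it. Purely a sketch of the shape (Python standing in for the block diagram; every name here, read_switch, operator_approved, arm_digitizers and the state list, is invented for the example):

import time
from enum import Enum, auto

class State(Enum):
    WAIT_FIRST_SWITCH = auto()
    WAIT_FILE_AND_COMMIT = auto()
    ARM_DIGITIZERS = auto()
    ACQUIRE = auto()
    DONE = auto()

# Placeholders for the polled hardware/file/front-panel reads. They return
# True immediately so the sketch runs to completion when executed.
def read_switch(name): return True
def new_file_available(): return True
def operator_approved(): return True      # the "commit" button, read once per loop
def arm_digitizers(): pass                # USB/PCI digitizer setup would go here

def run(poll_period_s=0.1):
    state = State.WAIT_FIRST_SWITCH
    while state is not State.DONE:
        if state is State.WAIT_FIRST_SWITCH:
            if read_switch("external"):
                state = State.WAIT_FILE_AND_COMMIT
        elif state is State.WAIT_FILE_AND_COMMIT:
            if new_file_available() and operator_approved():
                pass                      # commit: copy the new data to the committed array
            if read_switch("second"):
                state = State.ARM_DIGITIZERS
        elif state is State.ARM_DIGITIZERS:
            arm_digitizers()
            state = State.ACQUIRE
        elif state is State.ACQUIRE:
            state = State.DONE            # ~30 s of acquisition steps would run here
        # update the chart once per iteration from the current arrays, then poll again
        time.sleep(poll_period_s)

if __name__ == "__main__":
    run()

The point is just that one loop, one case per state, and one copy of each piece of data replaces the flat sequence plus shared variables.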
Yeah, that was the plan until I found out that the 6133 doesn't support events. Need to try harder to rearrange.
thanks again.

Similar Messages

  • Best practice - updating figure numbers in a file, possibly to sub-sub-chapters

    Hi,
    I'm a newbie trying to unlearn my InDesign mindset to work in FrameMaker. What is best practice for producing figure numbers to accompany diagrams throughout a document? A quick Ctrl+F in the FrameMaker 12 Help book doesn't seem to point me in a particular direction. Do diagrams need to be inserted into a table, with one cell for the image and another cell for the figure details? I've read that I should use a letter and colon in the tag to keep it separate from other things that update, e.g. F: (then the figure number descriptor). Is there anything else to be aware of, such as when resetting counts for chapters etc.?
    Some details:
    Framemaker12.
    There are currently 116 chapters (aviation subjects) to make.
    Each of these chapters will be its own book in PDF form; some of these chapters run to over 1000 pages.
    The figure number ideally takes the form "Figure (the number of the chapter used, from 1-116) - figure number", e.g. "Figure 34 - 6" would be the 6th image in the book 'chapter 34'.
    The figure number has to cross reference to explaining text, possibly a few pages away.
    These figures are required to update as content is added or removed.
    The (aviation) chapter is an individual book.
    H1 is the equivalent of the sub-chapter.
    H2 is the equivalent of the sub-sub-chapter.
    H3 is used in the body copy styling, but is not a required detail of the figure number.
    I'm thinking of making the sub-chapters into individual files. These will be more manageable on their own. They will then be combined in the correct order to form the book for one of these (1 of 116) subject chapters.
    Am I on the right track?
    Many thanks.
    Gary

    Hi,
    Many thanks for the link you provided. I have implemented your recommendation into my file. I have also read somewhere about sizing anchored frames to an imported graphic using 'esc' + 'm' + 'p'.
    What confuses me, coming from InDesign, is how to import these graphics at the size they were made (W x H in mm at 300 ppi) and keep them anchored to a point in the text flow.
    I currently have 1 and 2 column master pages built. When I bring in a graphic my process is:
    insert a single-cell table in the next space after the current text > drop the title below the cell > give the title a 'figure' format. When I import a graphic, it tries to fit it into the current 2-column layout, with only part of it showing in a box that is half the width of a single column!
    A current example: page 1 (2 column page) - the text flows for 1.5 columns. At the end of the text I inserted a single-cell table, then imported an image into the cell.
    Page 2 (2 column page) has the last line of page 1's text in the top left column.
    Page 3 (2 column page) has the last 3 words of page 1 in its top left column. The right column has the table in it, with part of the image showing. The image has also been distorted, like it's trying to fit. These columns are 14 cm wide; the cell is 2 cm wide at this point. I have tried to give cells for images 'wider' attributes using the object style designer, but with no luck.
    Ideally I'm trying to make 2 versions: 1) an anchored frame that fits in a 1-column width on a 2-column page; 2) an anchored frame that fits the full width of my landscape pages (minus some border dimension) - this full-width frame should be created on a new following page. I'd like to be able to drop in images to suit these different frames with as much automation as possible.
    I notice many tutorials tell you how to do a given area of the program, but I haven't been able to find one that discusses workflow order. Do you import all the text first, then add empty graphic boxes and/or tables throughout, and then import the images? I'm importing text from Word, but the images are separate, having been vectored or cleaned up in Photoshop - they won't be imported from the same Word file.
    many thanks

  • Best Practices - Update Explorer Properties of BW Objects

    What are some best practices of using the process chain type "Update Explorer Properties of BW Objects"?
    We have the option of updating Conversion Indexes, Hierarchy Indexes, Authorization Indexes, and RKF/CKF Indexes.
    When should we run each update process?
    Here are some options we're considering:
    Conversion Indexes - Run this within InfoCube load process chains that contain currency conversions within explorer objects.
    Hierarchy Indexes - When would this need to be run? Does this need to be run for PartProviders and/or Snapshots? Do ACRs handle this update? Should this be run within InfoCube load chains, or after ACRs?
    Authorization - We plan to run this a couple times a day for all explorer objects.
    RKF/CKF - Does this need to run after InfoCube loads? With PartProvider and/or Snapshot indexes? After transports have completed?
    Thanks,
    Cote

    Does anyone productively use explorer and this process type for process chains?

  • Best Practice: Update OWB 10.2.0.1 to 10.2.0.3 ?

    Hey OWB-Guys,
    I've searched the forum and metalink, but I could not find what I'm looking for. I want to update my OWB from version 10.2.0.1 to 10.2.0.3 (I'm working on WinXP Pro Sp2, so I can't use 10.2.0.4, right?).
    Our system architecture will change soon, so I have to start the Control Center Service locally on my client computer in the future. I think there will be no OWB installed on DB server, if I understood our DBA correctly.
    What will I have to do, to update OWB? Install patchset from Metalink on my client computer? What about the repository owner and repository user in the database?
    Do you have a document / link with a guideline through the things to do?
    Thanks in advance and have a nice weekend!
    Steffen

    Hey, nobody who upgraded from 10.2.0.1 to 10.2.0.3 ??? Please help me with some general hints or links describing that issue...
    Thank you!
    Steffen

  • Best Practice for Software Update Structure?

    Is there a best practice guide for Software Update Structure?  Thanks.  I would like to keep this neat and organized.  I would also like to have a test folder for updates with test group.  Thanks.

    Hi,
    Meanwhile, please refer to the following blog to get more inspiration.
    Managing Software Updates in Configuration Manager 2012
    http://blogs.technet.com/b/server-cloud/archive/2012/02/20/managing-software-updates-in-configuration-manager-2012.aspx
    We are trying to better understand customer views on the social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update (which we actually have to):
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only one repository. So when you have installed the service pack on one node, the new versions of the bundles and other content are unpacked into the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time. And as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
    cheers,
    Jörg

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM SP1 R2. I have read a blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up-to-date best-practice process, for dealing with expired updates.
    On one hand I was hoping to keep a software update group intact, to have a history of what was deployed, but I also want to keep things clean and avoid the issues I used to see in 2007 with expired updates.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

    The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed?  For instance, when a Canvas is resized, I would like to update all the children UIComponents height and width so the content scales properly.
    Right now I am trying to loop over the children, calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    you would only do that if it makes your job easier.  generally speaking, it would not.
    when trying to sync sound and animation i think most authors find it easiest to use graphic symbols because you can see their animation when scrubbing the main timeline.  with movieclips you only see their animation when testing.
    however, if you're going to use actionscript to control some of your symbols, those symbols should be movieclips.

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the Characteristic Value of a Classification, it does not update in the MM record. We have to go into MM02 for each material number that references the Char Value and manually change it for the row in AUSP to get updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function or BAPI, etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Best practice for auto update flex web applications

    Hi all
    Is there a best practice for auto-updating Flex web applications, much in the same way that AIR applications have an auto-update mechanism?
    can you please point me to the right direction?
    cheers
    Yariv

    Hey drkstr
    I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application, etc.
    I can always query the server for the version and prevent loading from cache when a module needs to be updated,
    but I was hoping for something easy like the AIR auto-update feature.

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting the brick wall in updating, and the end result is a non-recoverable project. In a production environment and with projects due, it's best that you never update while in the middle of projects. Wait until you have a day or two of down time, then test.
    For best practice, get into the habit of saving off your projects to a new name by incremental versions.  i.e. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. This way you'll always have two copies and will not lose the entire project. Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived.  Always cheaper to buy more memory than recouping lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.
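    If you want to automate that incremental-save habit, a throwaway script along these lines can make the versioned copy for you (just a sketch of the _v001/_v002 naming described above; the file names are examples and nothing in it is specific to any Adobe app):

    # Sketch: copy a project file to the next _vNNN version before you keep working.
    import re
    import shutil
    from pathlib import Path

    def next_version_name(path: Path) -> Path:
        """my_edit_v003.prproj -> my_edit_v004.prproj (or append _v001 if unnumbered)."""
        m = re.match(r"(.*)_v(\d{3})$", path.stem)
        base, num = (m.group(1), int(m.group(2)) + 1) if m else (path.stem, 1)
        return path.with_name(f"{base}_v{num:03d}{path.suffix}")

    def save_new_version(path: Path) -> Path:
        """Copy the project file to the next version; the original stays untouched."""
        dst = next_version_name(path)
        shutil.copy2(path, dst)   # copy2 keeps timestamps
        return dst

    if __name__ == "__main__":
        print(next_version_name(Path("my_edit_v003.prproj")))  # my_edit_v004.prproj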

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's good mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events you don't need and don't expect, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type as the event payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you can create and enqueue events either by a call from the event-source application itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views plus an INSTEAD OF trigger on the object view works pretty well.
    And finally, implement a listener on the Coherence side which reads the event, makes the necessary extracts, and assembles the Java object to be placed into the cache, based on the event ID and the event's set of primary keys. After the Java object is assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other relational-to-object framework; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features: they're much faster and transaction-safe. Usage of these frameworks within a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about the database features in this area.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager plus slave processes spawned on the other nodes. The work manager has to be auto-fail-safe and auto-scalable, so that if the node holding the work manager instance fails due to a cache cluster member departure, a reset or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work on cache member join/departure.
    Out of the box, Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/recover-work features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
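    Stripped of the Oracle AQ and Coherence specifics, the listener described above has a very simple shape. A toy sketch, in Python only for brevity - the queue, the extraction step and the cache dict are stand-ins for the real AQ consumer, your extraction SQL and the target cache:

    import queue

    events = queue.Queue()            # stand-in for the Oracle AQ queue

    def load_rows(object_id, primary_keys):
        # stand-in for the extraction SQL keyed by the event's primary keys
        return {"id": object_id, "keys": list(primary_keys)}

    cache = {}                        # stand-in for the target cache

    def listen_once():
        object_id, primary_keys = events.get()      # blocking read of one event
        entry = load_rows(object_id, primary_keys)  # extract + assemble
        cache[object_id] = entry                    # place the ready object in the cache

    if __name__ == "__main__":
        events.put(("ORDER:42", ("42",)))
        listen_once()
        print(cache)

    The fail-safety and work-distribution concerns above are exactly about running many of these consumers across nodes without losing or duplicating events.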

  • Best practice how to retrieve & update data w/o any jsf-lifecycle-overhead

    I have a request scoped jsf managed bean called "ManagedBean". This bean has a method annotated with "@PostConstruct" that retrieves data from a database. The data is shown in a jsp "showAndEditData.jsp" in <h:inputText /> components - so the data is editable.
    The workflow is as follows:
    First, when navigating to "showAndEditData.jsp", the ManagedBean is created, the "@PostConstruct"-method is invoked, and the data retrieved from the database is shown to the user.
    Second, the user changes the data.
    Third, the user presses the submit button, the ManagedBean is created again, the "@PostConstruct"-method is invoked again, and the data is retrieved from the database again. Then the data is overridden by the changes the user made and passed to the business-tier (where it will be saved to the database).
    Every step that I marked with *again* is completely unnecessary and a huge overhead.
    Is there a way to prevent these unnecessary steps?
    Or, asking in other words: is there a best practice for retrieving and updating data efficiently, without any overhead, using JSF?
    I do not want to use session scoped managed beans, because this would be a huge overhead as well.

    The first "again" is neccessary, because after successfull validation, you need new object in request to store the submitted value.
    I agree to the second and third, really unneccessary and does not make sense.
    Additionally I think it�s bad practice putting data in session beansTotal agree, its a disadvantage of JSF that we often must use session.
    Think there is also an bigger problem with this.
    Dont know how your apps are working, my apps start an new database transaction per commit on every new request.
    So in this case, if you do an second query on postback, which uses an different database transaction, it could get different data as for the inital request.
    But user did his changes <b>accordingly</b> to values of the first snapshot during the inital request.
    If these values would be queried again on postback, and they have been changed meanwhile, it becomes inconsistent, because values of snapshot two, do not fit to user input.
    In my opionion zebhed has posted an major mistake in JSF.
    Dont now, where to store the data, perhaps page scope could solve this.
    Not very knowledge of that section, but still ask myself, if this data perhaps could be stored in the components and on an postback the data are rendered from components + submittedvalues instead of model.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan - 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to be used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample percentage/size will reduce the total processing time, but it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
    So in a nutshell, I'm looking to fully understand why the process of updating statistics can cause access issues, and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with a whole lot more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them - and produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that can be used for executing the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
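    For what it's worth, the generator itself is small. A rough sketch (Python + pyodbc; the connection string and the cardinality-to-sample thresholds are pure placeholders, to be replaced by your own rules and by the stats-usage metadata mentioned above):

    import pyodbc

    CONN_STR = "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"

    ROWS_PER_TABLE = """
    SELECT s.name, t.name, SUM(p.rows) AS row_count
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
    GROUP BY s.name, t.name
    """

    def sample_clause(row_count):
        # placeholder mapping from cardinality to sample size
        if row_count < 1_000_000:
            return "WITH FULLSCAN"
        if row_count < 100_000_000:
            return "WITH SAMPLE 25 PERCENT"
        return "WITH SAMPLE 5 PERCENT"

    def main():
        with pyodbc.connect(CONN_STR) as conn:
            for schema, table, rows in conn.cursor().execute(ROWS_PER_TABLE):
                print(f"UPDATE STATISTICS [{schema}].[{table}] {sample_clause(rows)};")

    if __name__ == "__main__":
        main()

    It only prints the commands, so the generated script can be reviewed before anything is executed against the VLDB.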
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • Updating of data in APO DP - what is 'best practice'?

    I am using APO DP V5.
    I am using XI as an integration broker to receive data from external sources which is to be loaded into APO DP.
    There are of course different ways of doing this, for example:
    1. From XI, pass an XML message which then loads a delta queue in APO BW, loads an InfoCube, which is then used to load data into an APO planning area
    2. From XI, pass the data into BAPI parameters, which are then used to directly update data in an APO planning area via a BAPI call (eg BAPI PlanningBookAPS)
    What is the recommended 'best practice' here?
    Thanks,
    Bob Austin

    Hi Bob,
    I do not have experience with SCM 5.0 and XI, but I shall give my views from general experience.
    The kind of data you are loading into APO DP is the driver for choosing between the two ways you mention.
    Traditionally (older versions of SCM/APO and non-XI interfaces), sales history is captured in an InfoCube and then loaded into the DP Planning Area. The reason: you normally use this InfoCube for generation of CVCs, which are the master data in APO DP. Moreover, you will use this data to generate proportional factors for disaggregation of the statistically generated forecast from the aggregate level to the lowest level. So there is a requirement for loading historical data from external systems into APO DP via an InfoCube.
    The other kind of data would be market and sales forecasts. This is time-series data which needs to be loaded directly into the Planning Area for immediate interactive planning. In that case, rather than putting it in an InfoCube and then periodically loading the data from the InfoCube to the Planning Area, it's better to load the data directly into the Planning Area. There the BAPI will be a good option.
    Hope this gives some ideas to your query.
    Thanks,
    Somnath
