Data Logger Design

I am working on an app for personal use that will log changes in the
environment and in target objects over time (units like temperature,
volume, and a subjective rating on a fixed scale). The logged data is
visualized (graphs, which creates some problems) and stored for reference.
New log entries have to be created manually, which means entries are made
at irregular intervals and not all of the details are necessarily updated
every time.
I am wondering what kind of design would be best: arrays/vectors of
objects, or vectors of variables within objects? Or maybe objects in a
HashMap with the date and time as the key?
I am thinking that the following would be an OK solution, but I am not
sure it would be the best.
All of the needed loggable info is modeled as an object. For example, a
Room object could have some constants and then variables like temperature
and noise level. When logging, a logging wizard asks the user to input
all the variable data. Once the wizard is finished, the created Room
object is stored in a HashMap, with the current date as the key.
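
For what it's worth, here is a minimal sketch of that model in Java. The names (RoomEntry, RoomLog) are my own invention, not a required API; the one design suggestion baked in is a TreeMap rather than a HashMap, because a sorted map keeps the log in chronological order, which the graphing step will want anyway.

import java.time.LocalDateTime;
import java.util.NavigableMap;
import java.util.TreeMap;

// One logged observation of a room. Fields the user skipped in the
// wizard stay null, so "not all details updated" is representable.
class RoomEntry {
    final String roomName;     // a constant of the room
    final Double temperature;  // nullable: may not have been measured
    final Double noiseLevel;   // nullable, same reason

    RoomEntry(String roomName, Double temperature, Double noiseLevel) {
        this.roomName = roomName;
        this.temperature = temperature;
        this.noiseLevel = noiseLevel;
    }
}

class RoomLog {
    // TreeMap instead of HashMap: iteration order is chronological,
    // which is exactly what a time-series graph needs.
    private final NavigableMap<LocalDateTime, RoomEntry> entries = new TreeMap<>();

    void log(LocalDateTime when, RoomEntry entry) {
        entries.put(when, entry);
    }

    // All entries between two instants, inclusive; handy for graphing a range.
    NavigableMap<LocalDateTime, RoomEntry> between(LocalDateTime from, LocalDateTime to) {
        return entries.subMap(from, true, to, true);
    }
}

A plain HashMap keyed by date would work too; you would just have to sort the keys every time you graph.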
Once the user wants data graphed, a graphing wizard pops up and asks
which variables should be graphed and how (i.e. what the x and y axes
represent, which data to graph, etc.). Then the HashMap is iterated for
the data, which is then visualized [this part is completely fantasy at
the moment, no code written yet]. Do you see any problems with this kind
of design, or maybe easier solutions?
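
Under the same invented names as the sketch above, the "iterate the HashMap for the data" step might look like this: pull one chosen variable out as (time, value) pairs, skipping entries where the wizard left that field blank, which handles the irregular updates mentioned earlier.

import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

class SeriesExtractor {
    // One (x, y) point for the chart.
    record Point(LocalDateTime time, double value) {}

    // getter selects which variable to graph, e.g. e -> e.temperature.
    static List<Point> extract(Map<LocalDateTime, RoomEntry> log,
                               Function<RoomEntry, Double> getter) {
        List<Point> series = new ArrayList<>();
        for (Map.Entry<LocalDateTime, RoomEntry> e : log.entrySet()) {
            Double y = getter.apply(e.getValue());
            if (y != null) {                 // skip fields that were never logged
                series.add(new Point(e.getKey(), y));
            }
        }
        return series;
    }
}

The graphing wizard would then simply call something like extract(log.between(from, to), e -> e.temperature) and hand the resulting points to whatever chart library is used.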
Any help is welcome. I am working on the app mostly to practice my Java
skills. Are there any design patterns that I should know of?
Thanks for reading
Jani

*bump*
anyone?

Similar Messages

  • How to install "Compact FieldPoint Embedded Data Logger Example Program" on cfp2200

    I finally managed to get MAX to see the devices of my cFP-2200
    (I had to disable every unused network adapter due to overlapping subnets).
    Now I want to deploy the example program "Compact FieldPoint Embedded Data Logger Example Program":
    http://zone.ni.com/devzone/cda/epd/p/id/3221
    I have therefore tried to install the 8.5 runtime on the target. However, this is not possible, because I have LabVIEW 2010 installed and it gives the message:
    "the host has a newer version available"
    OK, the 2010 runtime should work too, no?
    I open a new project, import my *.iak file, and drag and drop the content of "cfp_data_logger_source.zip" onto the target.
    Now if I open the file cFPEmbeddedDataLogger.vi, the run button is broken and pressing it opens an error list.
    The errors say things like:
    target does not support this function/subvi
    and
    vi has been modified with a different application instance
    Please help me make this work.
    I know that it is possible to get the program to run on the cFP-2200 even though it is primarily designed for the cFP-2000, cFP-2010, cFP-2020, cFP-2100, cFP-2110, and cFP-2120.
    Big THANKS already for your help!

    Hi, thanks for your help =) I've managed to drag the file into my targeted controller. I am now configuring the data logger program, but there's one thing I'm unsure of: what is meant by the cFP controller clock? http://zone.ni.com/devzone/cda/tut/p/id/3219 . Thanks. =)
    Log On Startup: Start logging when the controller powers up.
    Start Log Time: The time to start logging.
    Stop Log Time: The time to stop logging.
    Note: Make sure the clock is set correctly on the controller. The settings described above refer to the cFP controller clock.
    Log Rate: The number of milliseconds to wait between each acquisition.
    FP Drive For Data Files: The drive to store the log files in. The C: drive is the default for all cFP controllers. If you want to save to the removable CompactFlash drive on the cFP controller, select the D: drive.
    Digital "Pause" Line: The Digital Input item to use as a "pause" button for data logging. Logging pauses when the signal is high. When the signal goes low again, a new file is created and logging continues if appropriate.
    Note: Use only Digital Input items for the Digital "Pause" Line.
    Tip: You can use DIP Switch 3 to block the current data from being logged. This switch works the same way as the Digital "Pause" line.
    Logging Session Tag: Stamped on the spreadsheet that the data logger creates.

  • How to Plot number and string in one row (data logger counter) ?

    Hi all, I made a data logger for quantity using a digital counter via Modbus, to monitor the quantity and reject counts together with the operator name, machine, and part number.
    I have a problem plotting the number and string in one row, as shown in the picture below:
    how do I move that string onto one row? I have attached my VI.
    Thanks~
    Attachments:
    MODBUS LIB Counter.vi ‏39 KB

    Duplicate and answered - http://forums.ni.com/t5/LabVIEW/How-to-Plot-number-and-string-in-one-row-data-logger-counter-via/m-p...

  • I purchased a Holux M-1200E Bluetooth GPS Data Logger.  The device paired with my laptop just fine so I know it is working right.  The device will not even show up on my iphone or ipad to pair via bluetooth.  Is there a way to pair without jailbreaking???

    I purchased a Holux M-1200E Bluetooth GPS Data Logger. The device paired with my laptop just fine, so I know it is working right. The device will not even show up on my iPhone or iPad to pair via Bluetooth. Is there a way to pair without jailbreaking? I haven't had any trouble to date pairing any device with my iPhone or iPad, so this is ridiculous! Is there a way to update the Bluetooth settings on my iPhone 4S so that this device can be recognized? I love my Apple devices, but this is very frustrating. I bought this data logger for a specific purpose, to help out with my Search and Rescue volunteer activities, and I really need help with this! I am hoping Apple will help me!
    Thanks,
    Melissa

    melissafromlenexa wrote:
    That is not the right device; it is the Holux M-1200E Bluetooth GPS Logger. I looked at the same site for that and it does not say that. This is the first time I have posted a question; you don't have to be mean about it. I am trying to get this to work for a good cause.
    It may be that the best source of assistance is the manufacturer of the device. I suspect you will need a specific app for the iPhone to get it to work, but the site is rather ambiguous about that.

  • How do I use the High Speed Data Logger with multiple I/O devices?

    I am using the High Speed Data Logger VI to read from a 16-channel A/D card (NI PCI-MIO-16E). The project may require more than 16 channels. How can I use the High Speed Data Logger to read from two A/D cards? Will it be able to write the data to one file?

    The High Speed Data Logger VI will not acquire from and write to multiple DAQ boards at the same time without modification. LabVIEW is more than capable of doing what you are trying to do, but you will have to modify the code.
    Regards,
    Anuj D.

  • How can I automatically control the channel selection on the data logger

    I am an inexperienced user with a simple system which measures and displays temperature from 16 thermocouples. Not all 16 thermocouples are always connected at once, and those that aren't are blanked off from view. I would also like to use the Advanced Data Logger VI to log the measurements, but would like the logger to only record or display the thermocouples that are connected. Unconnected channels give a signal of 1264 degrees, so I can use a comparator to give a Boolean output to switch them off. Can anyone please tell me how to do this?
    Also, I need to expand the VI to read all 16 channels instead of the current 8. Any help would be appreciated.

    Mike,
    Unfortunately I couldn't find the example you mention; you may want to add it as an attachment so that whoever answers this question knows exactly which example you are describing. As for making some of the channels visible or invisible, I attached an example that I built to demonstrate this feature. Simply open the VI and run it; if you want one of the channels or plots on the graph to be invisible, just click the push button that corresponds to it. Then switch to the block diagram and review the use of property nodes, in particular the Visible property.
    If you want to generate a support request, please visit www.ni.com/ask to see the options for reaching us.
    Good luck!
    Nestor Sanchez
    Applications Engineer
    National Instruments
    Attachments:
    Visible_Plots.vi ‏46 KB

  • I am using a GPIB card to send a pulse to data logger to control a firing system

    I need to be able to fire my linear accelerator from the computer by sending a negative pulse, and I need to be able to control the pulse duration and its voltage. I am using an HP 34970A data logger with an HP 34907A module.

    The Trigger 488.2 command might be of help. Trigger sends the Group Execute Trigger (GET) GPIB message to the device described by address. If address is the constant NOADDR, the GET message is sent to all devices that are currently listen-active on the GPIB.

  • Question - new to BO Data Services Designer - how to automatically transform varchar column data to upper(varchar) across all columns in a table?

    Hello,
    I am a new user of BO Data Services Designer. My company is using Data Services version 12.2.
    I have many tables that all need the same transformation: converting varchar fields to upper(varchar) fields.
    Example: I have a table called Items. It has 40 columns, 25 of which are of type varchar.
    What I have been doing is this: I make the Items table the source table, then create a Query transform that is attached to a new target table called ITEMS_Production. I can manually drag and drop columns from the source table to the query, and then in the mapping area I can manually type upper(table_name.column_name), or select the upper function and then pick the table.column name from the function drop-down list.
    Obviously, I want to do this more quickly, as I have to do it for lots and lots of tables. How can I set up Data Services so that I can apply an upper transform quickly and easily, or automate this process?
    I also know Python and Java, so I am happy to script something if need be. The logic would be something like: if a column is of type varchar, add the upper() transformation to the column, then go to the next column.
    Excited to be using this new tool.
    Thanks in advance.
    Ray

    Use the DS Workbench.
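
    If the Workbench route does not pan out, and since the poster offered to script something in Java: a minimal sketch that generates the upper() mapping text from the source schema via JDBC metadata. The JDBC URL, credentials, and table name are placeholders to adapt, and the emitted lines follow the upper(table.column) mapping syntax described above; this only prints text to paste into the Query transform's mapping area, it does not call any Data Services API.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Types;

    // Prints an upper() mapping line for every varchar column of a table,
    // and a pass-through mapping for everything else.
    public class UpperMappingGenerator {
        public static void main(String[] args) throws Exception {
            String table = "ITEMS";  // placeholder table name
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//host:1521/service", "user", "password")) {
                DatabaseMetaData meta = con.getMetaData();
                try (ResultSet cols = meta.getColumns(null, null, table, "%")) {
                    while (cols.next()) {
                        String name = cols.getString("COLUMN_NAME");
                        int type = cols.getInt("DATA_TYPE");
                        if (type == Types.VARCHAR || type == Types.NVARCHAR) {
                            System.out.println(name + " = upper(" + table + "." + name + ")");
                        } else {
                            System.out.println(name + " = " + table + "." + name);
                        }
                    }
                }
            }
        }
    }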

  • Some queries on Data Guard design

    Hi, I have some basic questions about Data Guard design.
    Q1. Looking at the Oracle instructions for creating a logical standby, they first advocate creating a physical standby and then converting it to a logical standby. However, I thought a logical standby could have a completely different physical structure from the primary. How can that be the case if a logical standby starts its life as a physical standby (where the structure needs to be identical)?
    Q2. Is it normal practice to back up your standby database as well, and if so, why?
    Q3. Can RMAN back up a standby database while it is in the mounted state (rather than open)?
    Q4. What's the point of Cascaded Redo Apply rather than simply getting the primary to ship to each standby? I guess you could be trying to reduce node-to-node latency if some of the standbys are quite distant from the primary.
    Q5. Is it possible to convert a logical standby back to a physical one?
    Q6. What is the maximum number of standbys you can have? Oracle suggests 30, but I thought I read somewhere, in relation to 11gR2, that this limit has been increased.
    thanks,
    Jim

    Hi,
    Q5. Is it possible to convert a Logical Standby back to a Physical one? Yes, it is possible.
    1) Check the switchover status
    -- Primary
    select switchover_status from v$database;
    SWITCHOVER_STATUS
    SESSIONS ACTIVE
    2) Switch operation
    -- Primary: start the database switch
    alter database commit to switchover to physical standby with session shutdown;
    -- see the alert log:
    Switchover: Complete - Database shutdown required
    Completed: alter database commit to switchover to physical standby with session shutdown
    -- database role and switchover_status
    select NAME,OPEN_MODE,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL from v$database;
    NAME      OPEN_MODE    DATABASE_ROLE     SWITCHOVER_STATUS  PROTECTION_MODE       PROTECTION_LEVEL
    TESTCRD   READ WRITE   PHYSICAL STANDBY  RECOVERY NEEDED    MAXIMUM AVAILABILITY  UNPROTECTED
    -- Primary
    shutdown abort
    -- Standby: switch to primary
    alter database commit to switchover to primary with session shutdown;
    -- alert log:
    Completed: alter database commit to switchover to primary with session shutdown
    alter database open
    3) Open the old primary
    sqlplus / as sysdba
    startup nomount
    alter database mount standby database
    -- database role and switchover_status
    select NAME,OPEN_MODE,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL from v$database;
    NAME      OPEN_MODE    DATABASE_ROLE     SWITCHOVER_STATUS  PROTECTION_MODE       PROTECTION_LEVEL
    TESTCRD   READ WRITE   PHYSICAL STANDBY  RECOVERY NEEDED    MAXIMUM AVAILABILITY  UNPROTECTED
    recover managed standby database using current logfile disconnect from session;
    4) Old standby
    SQL> select NAME,OPEN_MODE,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL from v$database;
    NAME      OPEN_MODE    DATABASE_ROLE  SWITCHOVER_STATUS  PROTECTION_MODE      PROTECTION_LEVEL
    AZKKDB    READ WRITE   PRIMARY        TO STANDBY         MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE
    Q2. Is it normal practice to back up your standby database as well, and if so, why?
    You only have to back up one of the two; you can decide to back up only the standby if you want to remove load from the primary. If you are confident in the backup of the primary, there is no need to back up the standby, since you can recreate the standby from the primary's backup.
    Please see this link: http://blog.dbvisit.com/rman-backups-on-your-standby-database-yes-it-is-easy/
    Thank you

  • How to save data model design in pdf or any format..?

    How do I save a data model design to PDF or any other format?
    I've created a design but am not able to save it as an image or PDF.

    File -> Print Diagram -> To PDF File

  • Aspnet_compiler web.config(132): error ASPCONFIG: Could not load type 'System.Data.Entity.Design.AspNet.EntityDesignerBuildProvider'

    My latest MVC 5 project, built with VS 2013, compiles and works fine on my PC but cannot be published to the server using the command-line aspnet_compiler. I keep getting: web.config(132): error ASPCONFIG: Could not load type 'System.Data.Entity.Design.AspNet.EntityDesignerBuildProvider'
    In my web.config file, I have the following, as suggested by many as a solution:
    <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
     <add assembly="System.Data.Entity.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
    However, the error persists. I have no clue how to fix this. I hope someone can help.

    Hello FredVanc,
    As you mention, it works in your development environment, so I wonder whether the assembly is actually missing on the server machine. Here is some information I found which might be helpful:
    https://support.microsoft.com/en-gb/kb/958975?wa=wsignin1.0
    https://social.msdn.microsoft.com/Forums/en-US/1295cfa8-751b-4e9b-a7a7-14e1ad1393b6/compiling-error?forum=adodotnetentityframework
    By the way, since this issue is related to an ASP.NET project, I suggest you also post it to the ASP.NET forum, where web experts can help you:
    http://forums.asp.net/
    Regards.

  • Performance impacts of attributes versus entities in data model design

    I'm trying to understand the performance implications of two possible data model designs.
    Here's my entity structure:
    global > the person > the account > the option
    Typically at runtime I instantiate one person, one account, and five options.
    There are various amounts, determined by the person's age, that need to be assigned to the correct option.
    Here are my two designs:
    Design one
    attributes on the person entity:
    the person's age
    the person's option 1 amount
    the person's option 2 amount
    the person's option 3 amount
    the person's option 4 amount
    the person's option 5 amount
    attributes on the option entity:
    the option's amount
    supporting rule table:
    the option's amount =
    the person's option 1 amount if the option is number 1
    the person's option 2 amount if the option is number 2
    the person's option 3 amount if the option is number 3
    the person's option 4 amount if the option is number 4
    the person's option 5 amount if the option is number 5
    Design two
    attributes on the person entity:
    the person's age
    attributes on the option entity:
    the option's amount
    the option's option 1 amount
    the option's option 2 amount
    the option's option 3 amount
    the option's option 4 amount
    the option's option 5 amount
    supporting rule table:
    the option's amount =
    the option's option 1 amount if the option is number 1
    the option's option 2 amount if the option is number 2
    the option's option 3 amount if the option is number 3
    the option's option 4 amount if the option is number 4
    the option's option 5 amount if the option is number 5
    Given the two designs, I can see what looks like an advantage for Design one in that at runtime you have fewer attributes (6 on the one person + 1 on each of 5 options = 11) than in Design two (1 on the one person + 6 on each of 5 options = 31), but I'm not sure. An advantage for Design two might be that the algorithm has to do less traversing of the entity structure: the supporting rule table finds everything for the option's amount on the option itself.
    Either way there is a rule table to determine the amounts:
    Design one
    the person's option 1 amount =
    2 if the person's age = 10
    5 if the person's age = 11
    7 if the person's age = 12, etc.
    Design two
    the option's option 1 amount =
    2 if the person's age = 10
    5 if the person's age = 11
    7 if the person's age = 12, etc.
    Here it looks like the rulebase would have to do more traversing of the entity structure for Design two.
    Which design is going to have better performance with a large amount of rules, or would it make a difference at all?

    Hi!
    In our experience you only need to think about things like this if you are dealing with hundreds or thousands of instances (typically via ODS). As you have a very low number, the differences will be negligible, and you should (usually) go with the solution which is most similar to the source material or the business user's understanding. I also assume this is an OWD project? That can be even better, since the inferencing is done incrementally as new data is added to the rulebase, rather than in one "big bang" as with ODS.
    It looks like Design one is the simplest to understand and explain. I'm just wondering why you need the option entity at all, since it seems to be a to-one relationship: the person can only have one option 1 amount, one option 2 amount, etc., and there are only ever going to be (up to) 5 options. Is that assumption correct? If so, you could just keep these as attributes at the person level without the need for instances. If there are other requirements for an option instance then of course use it, but given the information here, the option entity doesn't seem to be needed. That would be the fastest of all :-)
    Either way, as the number of instances is so low, you should have nothing to worry about in terms of performance.
    Hope this helps! Write back if you have any more info / questions.
    Cheers,
    Ben

  • Data logger and online processing application

    Dear All
    My application should include both a data logging thread and an online processing thread, which must be synchronized. I already implemented this by means of something we call a "ring buffer": an array containing the data where, when the write position reaches the end, it starts over again from the first index. It works almost well, but sometimes my timed loop, which is my processing thread, runs a little ahead of my data logger thread, and this causes some distortion in the data stream in the processing thread, though of course not in the data logger.
    Since this type of application is very classic, there should be a kind of general prototype (example) for its implementation which provides full synchronization between the processing and logger threads, so I was wondering if somebody knows where I should look?
    Best regards
    Afshin

    Hello Afshin,
    Have a look at Software Circular Buffer in LabVIEW and Software Circular Buffer Reference Library for Multi-Channel Data Acquisition.
    The latter comes with an example.
    Regards,
    Eirikur Runarsson
    Platinum Applications Engineer
    NI Denmark
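
    The libraries above are LabVIEW-specific, but the pattern itself is classic. For illustration only (this is not the NI reference library), a minimal blocking single-producer/single-consumer ring buffer in Java: the consumer waits when the buffer is empty instead of running ahead of the logger, which is precisely the distortion described above.

    // Minimal single-producer / single-consumer ring buffer.
    // The consumer blocks when the buffer is empty and the producer blocks
    // when it is full, so the processing thread can never read stale slots.
    class RingBuffer {
        private final double[] buffer;
        private int head = 0;   // next slot to write
        private int tail = 0;   // next slot to read
        private int count = 0;  // number of filled slots

        RingBuffer(int capacity) {
            buffer = new double[capacity];
        }

        synchronized void put(double sample) throws InterruptedException {
            while (count == buffer.length) wait();   // full: wait for consumer
            buffer[head] = sample;
            head = (head + 1) % buffer.length;
            count++;
            notifyAll();
        }

        synchronized double take() throws InterruptedException {
            while (count == 0) wait();               // empty: wait for producer
            double sample = buffer[tail];
            tail = (tail + 1) % buffer.length;
            count--;
            notifyAll();
            return sample;
        }
    }

    Whatever the implementation language, the same idea carries over: the reading side must never be allowed to pass the write position.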

  • Embedded data logger gaps

    Hi all,
       It seems that the embedded data logger does not record gaps when the log trigger is triggered multiple times while recording to the same file.
    Is my assumption correct? I tried viewing the Excel export and the TDMS Viewer output, and there is no trace of the "missing" data.
    If I record for 10 minutes and stop the log for 2 minutes, I get a solid file which shows 8 minutes of recording with no gaps. Does the TDMS format support gaps, such that a custom TDMS extraction tool could show them (implying a limitation of the TDMS Viewer that ships with VeriStand 2011)?

    The embedded data logger does not log gaps, as you noticed. There are two possible approaches that can help you organize your data:
    1. Timestamp your file names. Whenever you want to enable logging again, set the Log File Command channel to 1 to open a new file instance, then set the Log Status channel to 1 to start logging to the new file. This way every file contains only contiguous data.
    2. Log a Timestamp channel such as System Time in every channel group. Then your data becomes xy data, and you should be able to plot it as such in Excel.
    Jarrod S.
    National Instruments

  • Suggest best data flow design

    Hi
    We need to implement HR-related data; the main concentration is PA and OM.
    Please suggest the most suitable data flow design.
    Note: none of my DataSources support delta.
    How can we handle these loads on the SAP BI side, i.e. how should they be handled using DTPs?
    Please suggest a good data flow.
    Regards
    Madan Mohan

    Hi Madan,
       You can find the data flow in metadata repository in RSA1. Goto the metadata repository-> select your datasource ->
       network  display.
       DTP: You can extract both full/delta using the DTP.
       Please refer the below links also.
       Personnel Administration:
       [http://help.sap.com/saphelp_nw04/helpdata/EN/7d/eab4391de05604e10000000a114084/frameset.htm]
       Organizational Management:
       [http://help.sap.com/saphelp_nw04/helpdata/EN/2d/7739a2a6d5cc4c8d63a514599dc30f/frameset.htm]
    Regards
    Prasad
