Can I write the data to the cache myself?

Hi all!
We have an MSS system, but our company has almost one hundred thousand employees. When the boss wants to view the employees' data, it is very slow.
I found that when he logs on a second time the same day, it is faster, because the data has already been written to the cache.
So, can I make the system write the data to the cache every night?

I'm not an expert on network communication, so maybe you can explain how data can travel on an IP-based network without being wrapped in packets from a protocol like TCP or UDP? Maybe it's possible for LabVIEW to do what you want, or maybe compiling your C code into a DLL (no need for a CIN) and calling that will be fast enough for what you want.
In any case, I think I can answer one question:
So, how long would we have to wait for LabVIEW to add/improve these functions? Could it be in the next minor update?
Not likely, nor is it likely to be in any other version soon. LabVIEW is not built to handle this sort of thing any more than PHP or Perl are, and I don't think NI will change it to be able to do so. For that you have C, which you can call through a DLL. Some DLLs are supported on RT targets as well, but I'm not sure what the criteria are; they are probably listed somewhere on this site.
I assume that calling a DLL on an RT target will hurt your determinism, but communication code should not be time-critical, unless your application requires high-speed data transfers and synchronization between different nodes.
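To illustrate the DLL route, here is a minimal sketch (the function name, signature and build command are only an assumption for the example, not something from this thread) of a C routine that could be compiled into a DLL and called from LabVIEW through a Call Library Function Node:

/* sum_u16.c -- hypothetical example of a routine exported from a DLL
   so LabVIEW can call it via a Call Library Function Node.
   One possible build command (MinGW): gcc -shared -o sum_u16.dll sum_u16.c */
#include <stdint.h>

#ifdef _WIN32
#define DLL_EXPORT __declspec(dllexport)
#else
#define DLL_EXPORT
#endif

/* Sum an array of unsigned 16-bit samples; LabVIEW passes the array
   as a pointer plus a separate length parameter. */
DLL_EXPORT double sum_u16(const uint16_t *samples, int32_t count)
{
    double total = 0.0;
    for (int32_t i = 0; i < count; ++i)
        total += samples[i];
    return total;
}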
Try to take over the world!

Similar Messages

  • Why can I only write "samples to read" samples during continuous acquisition?

    I am acquiring analog voltage signals and attempting to write them to a text file when a condition is true, so the acquisition is continuous. However, the written files only contain 100 samples (the size of my buffer). I need my files to keep recording samples until the while loop is stopped. How can I go about doing this?
    Thanks,
    TFlax
    Solved!
    Go to Solution.

    I have been able to make this work, but it still only writes the buffer size (100 samples) to file. I am new to LabVIEW and programming in general, and I have tried to make a producer/consumer design, but I don't seem to understand the concept properly and fail to run my program properly... So here's my code, before any attempt at a producer/consumer design, which only writes 100 samples to file. Any advice on how to modify it to a producer/consumer design would be appreciated, because I'm not even sure where to start.
    Thanks,
    Tflax
    Attachments:
    Test_Signals_Dec17.vi ‏745 KB

  • How can I use "Write To Spreadsheet File" while the data is being taken, not only at the end of all the loops?

    Hi,
    I have to run an experiment on LabVIEW 6 or 5. I don't have LabVIEW 7 on that computer for some reason. My experiment takes about 1000 hours, and I have a problem with storing the data. Right now I am using "Write To Spreadsheet File" with "append" set to false, so the data is only written at the end of the experiment, that is, after 1000 hours. In the meantime, if something goes wrong, like a power cut, I will lose all the data. So what I want to do is save the data every time it is taken. I tried setting "append" to true, but then it does not let me choose the file path: when I select write to a new or existing file, it says "you may not be able to save to an existing file", and it does not let me create a new file either. If anyone has an idea of how I can use the "Write To Spreadsheet File" function and at the same time save the data on every loop iteration, please let me know. Thanks a lot.
    KL

    KL,
    It sounds like you want to periodically save your data so it is in smaller files (great idea). For this operation you will not want to append the data... if something happens to your system while the file is open, it could become corrupt and ruin all the data. You need to write each new block of data to a new file. Now, this depends on how big your data is: if you only have about 500 kB of data to write in a block, you should probably write several blocks before starting a new file; I don't know enough about how much data you are acquiring. In either case, the Write To Spreadsheet File VI will need a different file name each time it is called, and you will not want to append (append = false).
    -Brett
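    To illustrate the idea of a different file name for each block, here is a sketch in plain C rather than LabVIEW; the file-name pattern, block size and number format are arbitrary choices for the example:

    #include <stdio.h>

    /* Hypothetical sketch: write each block of samples to its own
       sequentially numbered file, e.g. data_0000.txt, data_0001.txt, ... */
    static int write_block(int block_index, const double *samples, int count)
    {
        char name[64];
        FILE *f;
        int i;

        snprintf(name, sizeof(name), "data_%04d.txt", block_index);
        f = fopen(name, "w");            /* a fresh file per block, no append */
        if (f == NULL)
            return -1;
        for (i = 0; i < count; ++i)
            fprintf(f, "%.11f\n", samples[i]);
        fclose(f);
        return 0;
    }

    The same idea in LabVIEW would be to build the file path from the block or loop index (for example with Format Into String) before each call to Write To Spreadsheet File, with append set to false.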

  • How can I write data to a PIC16F819 using LabVIEW?

    How can I write data to a PIC16F819 using LabVIEW?
    Need help!
    I'm using LabVIEW to gather the data that I need to write to the PIC; after getting all the data, I use another program, ICD2, to write it to the PIC. Is it possible to do this whole task through LabVIEW? We are spending a lot of time transferring the data from LabVIEW to ICD2 manually, and it's prone to mistakes as well.
    any suggestion?
    thanks,
    Pedz

    LabVIEW does not currently have a built-in method to communicate over I2C, but other vendors sell devices that communicate in this manner and offer LabVIEW development kits. One that I know of is from MCC; here is a link: http://www.mcc-us.com
    They sell a device called iPort, and then you can buy LabVIEW VIs to go with it. I hope this is helpful to you!
    john m

  • How can I save/write data on the hard disc of the Real-Time System?

    I would like to acquire a huge amount of data via a LabVIEW Real-Time system.
    Since the data is so large, I am thinking that it might be a good idea to write/save the data on the hard disc of the Real-Time system first and then transfer the data file to a data-processing PC using FTP, etc.
    If you know how to save/write data on the hard disc of the Real-Time system (version 7.1), please let me know.
    Thanks,

    Just to add to Aitortxo's good answer:
    Since you have only one drive (C:\) on your RT target, just keep monitoring the available disc space.
    After every few sets of file writes, transfer backups and delete the transferred files, so that you keep C:\ drive space free for running your applications.
    Regards

  • Is there a routine one can use to shift the column of data by one each time the loop index increments? In other words, increment the column that the data is saved to by using the index?

    The device, an Ocean Optics spectrometer, returns columns of about 9000 cells. I'm saving this as an .lvm file using the "Write To Measurement File" VI, but as far as I can tell it doesn't give me the flexibility I need.
    I need to move the column by the index of the for loop, so that when i = n, the data takes up column n+1 (the first column is used for wavelength). How do I use the "Write To Spreadsheet File" VI to do this? Also, if I use "Write To Spreadsheet File", is there a way to increment the file name so that the data isn't overwritten? I like what "Write To Measurement File" does.
    I'd really appreciate any help someone can give me. I'm a novice at this, so the greater the detail, the better. Thanks!!!

    You cannot write one column at a time to a spreadsheet file, because a file is arranged linearly, and adding a column would require moving (that is, reading and rewriting elsewhere) almost all existing elements to interleave the new data. You can only append new rows without having to touch the already written data.
    Fields typically don't have a fixed width. An exception would be binary files that are pre-allocated at the final size; in that case you can write columns by setting the file position for each element, but it will still be very inefficient.
    What you could do is append rows until all data is written, then read, transpose, and write back the final file.
    What you could also do is build the final array in a shift register and write the entire thing to file at once after all data is present.
    LabVIEW Champion. Do more with less code and in less time.
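    As a language-neutral illustration of that last suggestion (accumulate everything in memory, then write the file once at the end), here is roughly the same idea sketched in C; the array sizes, values and file name are made up for the example:

    #include <stdio.h>

    #define N_POINTS  5     /* one value per wavelength; ~9000 in the real case */
    #define N_SPECTRA 3     /* one data column per loop iteration */

    int main(void)
    {
        /* Column 0 holds the wavelength axis, columns 1..N_SPECTRA the spectra. */
        double table[N_POINTS][N_SPECTRA + 1];
        int row, col;
        FILE *f;

        for (row = 0; row < N_POINTS; ++row)
            table[row][0] = 400.0 + row;               /* fake wavelength axis */

        for (col = 1; col <= N_SPECTRA; ++col)         /* "loop index" i = col - 1 */
            for (row = 0; row < N_POINTS; ++row)
                table[row][col] = col * 100.0 + row;   /* fake spectrum data */

        /* Only now, after all columns exist in memory, write the file once. */
        f = fopen("spectra.csv", "w");
        if (f == NULL)
            return 1;
        for (row = 0; row < N_POINTS; ++row)
            for (col = 0; col <= N_SPECTRA; ++col)
                fprintf(f, "%g%c", table[row][col], col == N_SPECTRA ? '\n' : ',');
        fclose(f);
        return 0;
    }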

  • Can multiple threads write to the database?

    I am a little confused by the statement in the documentation: "Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time."
    1. Can multiple threads write to the "Simple Data Store"?
    2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    #include "stdafx.h"
    #include <stdio.h>
    #include <windows.h>
    #include <db.h>
    static DB *db = NULL;
    static DB_ENV *dbEnv = NULL;
    DWORD WINAPI th_write(LPVOID lpParam)
    DBT key, data;
    char key_buff[32], data_buff[32];
    DWORD i;
    printf("thread(%s) - start\n", lpParam);
    for (i = 0; i < 200; ++i)
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    sprintf(key_buff, "K:%s", lpParam);
    sprintf(data_buff, "D:%s:%8d", lpParam, i);
    key.data = key_buff;
    key.size = strlen(key_buff);
    data.data = data_buff;
    data.size = strlen(data_buff);
    db->put(db, NULL, &key, &data, 0);
    Sleep(5);
    printf("thread(%s) - End\n", lpParam);
    return 0;
    int main()
    db_env_create(&dbEnv, 0);
    dbEnv->open(dbEnv, NULL, DB_CREATE | DB_INIT_MPOOL | DB_THREAD, 0);
    db_create(&db, dbEnv, 0);
    db->open(db, NULL, "test.db", NULL, DB_BTREE, DB_CREATE, 0);
    CreateThread(NULL, 0, th_write, "A", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "C", 0, 0);
    th_write("C");
    Sleep(2000);
    }

    Here is some clarification about BDB locking and multi-threaded behavior.
    Question 1. Can multiple threads write to the "Simple Data Store"?
    Answer 1.
    Please Refer to http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    A Data Store (DS) set-up (that is, not using an environment, or using one without any of the DB_INIT_LOCK, DB_INIT_TXN, DB_INIT_LOG environment-region flags, each of which corresponds to the appropriate subsystem: locking, transactions, logging) will not guard against data corruption due to threads accessing the same database page and overwriting the same records, corrupting the internal structure of the database, etc. (Note that for the Btree, Hash and Recno access methods we lock at the database page level; only for the Queue access method do we lock at the record level.)
    So, if you want multiple threads in the application writing concurrently or in parallel to the same database, you need to use locking (and properly handle any potential deadlocks); otherwise you risk corrupting the data itself or the database (its internal structure).
    Of course, if you serialize access to the database at the application level, so that no more than one thread writes to the database at a time, there will be no need for locking. But that is likely not the behavior you want.
    Hence, you need to use either a CDS (Concurrent Data Store) or a TDS (Transactional Data Store) set-up.
    See the table comparing the various set ups, here: http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    Berkeley DB Data Store
    The Berkeley DB Data Store product is an embeddable, high-performance data store. This product supports multiple concurrent threads of control, including multiple processes and multiple threads of control within a process. However, Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time. The Berkeley DB Data Store is intended for use in read-only applications or applications which can guarantee no more than one thread of control updates the database at a time.
    Berkeley DB Concurrent Data Store
    The Berkeley DB Concurrent Data Store product adds multiple-reader, single writer capabilities to the Berkeley DB Data Store product. This product provides built-in concurrency and locking feature. Berkeley DB Concurrent Data Store is intended for applications that need support for concurrent updates to a database that is largely used for reading.
    Berkeley DB Transactional Data Store
    The Berkeley DB Transactional Data Store product adds support for transactions and database recovery. Berkeley DB Transactional Data Store is intended for applications that require industrial-strength database services, including excellent performance under high-concurrency workloads of read and write operations, the ability to commit or roll back multiple changes to the database at a single instant, and the guarantee that in the event of a catastrophic system or hardware failure, all committed database changes are preserved.
    So, clearly DS is not a solution for this case, where multiple threads need to write simultaneously to the database.
    CDS (Concurrent Data Store) provides locking features, but only for multiple-reader/single-writer scenarios. You use CDS when you specify the DB_INIT_CDB flag when opening the BDB environment: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envopen.html#envopen_DB_INIT_CDB
    TDS (Transactional Data Store) provides locking features, adds complete ACID support for transactions and offers recoverability guarantees. You use TDS when you specify the DB_INIT_TXN and DB_INIT_LOG flags when opening the environment. To have locking support, you would need to also specify the DB_INIT_LOCK flag.
    Now, since the requirement is to have multiple writers (multi-threaded writes to the database), TDS is the way to go (CDS is useful only in single-writer scenarios, when there is no need for recoverability).
    To summarize:
    The best way to understand which set-up is needed is to answer the following questions:
    - What is the data access scenario? Are there multiple writer threads? Will the writers access the database simultaneously?
    - Are recoverability/data durability, atomicity of operations and data isolation important for the application? http://docs.oracle.com/cd/E17076_02/html/programmer_reference/transapp_why.html
    If the answers are yes, then TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
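    For completeness, here is a minimal sketch of what a transactional put with deadlock retry could look like under that TDS set-up. It assumes the environment was opened with the flags above and that the DB handle was opened transactionally (for example with DB_AUTO_COMMIT); the retry count is an arbitrary choice for the example:

    #include <string.h>
    #include <db.h>

    /* Sketch: put one key/data pair inside its own transaction,
       retrying a few times if a deadlock is reported. */
    static int txn_put(DB_ENV *dbEnv, DB *db, const char *k, const char *d)
    {
        int ret = 0, attempts = 5;
        while (attempts-- > 0) {
            DB_TXN *txn = NULL;
            DBT key, data;
            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            key.data = (void *)k;   key.size = (u_int32_t)strlen(k);
            data.data = (void *)d;  data.size = (u_int32_t)strlen(d);

            dbEnv->txn_begin(dbEnv, NULL, &txn, 0);
            ret = db->put(db, txn, &key, &data, 0);
            if (ret == 0) {
                txn->commit(txn, 0);
                return 0;
            }
            txn->abort(txn);                /* roll back this attempt */
            if (ret != DB_LOCK_DEADLOCK)    /* only deadlocks are retried */
                break;
        }
        return ret;
    }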
    Question 2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    Answer 2.
    Definitely yes, you can see data loss and/or data corruption.
    You can check the behavior of your test case in the following way:
    1. Run your test case.
    2. After the program exits, run db_verify to verify the database (db_verify -o test.db).
    You will likely see db_verify complaining, unless the thread scheduler on Windows happens to start each thread only after the previous one has finished; in other words, no two or more threads write to the database at the same time, which effectively serializes the writes.
    Question 3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    Answer 3.
    In your case TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    With proper deadlock handling and proper transaction usage in place, you are then protected against potential data corruption/data loss.
    See http://docs.oracle.com/cd/E17076_02/html/gsg_txn/C/BerkeleyDB-Core-C-Txn.pdf
    Multi-threaded and Multi-process Applications
    DB is designed to support multi-threaded and multi-process applications, but their usage
    means you must pay careful attention to issues of concurrency. Transactions help your
    application's concurrency by providing various levels of isolation for your threads of control. In
    addition, DB provides mechanisms that allow you to detect and respond to deadlocks.
    Isolation means that database modifications made by one transaction will not normally be
    seen by readers from another transaction until the first commits its changes. Different threads
    use different transaction handles, so this mechanism is normally used to provide isolation
    between database operations performed by different threads.
    Note that DB supports different isolation levels. For example, you can configure your
    application to see uncommitted reads, which means that one transaction can see data that
    has been modified but not yet committed by another transaction. Doing this might mean
    your transaction reads data "dirtied" by another transaction, but which subsequently might
    change before that other transaction commits its changes. On the other hand, lowering your
    isolation requirements means that your application can experience improved throughput due
    to reduced lock contention.
    For more information on concurrency, on managing isolation levels, and on deadlock
    detection, see Concurrency (page 32).

  • How to create a procedure in Oracle to write the data into a file

    Hi All,
    I am just wondering how to create a procedure that will do the following tasks:
    1. Concat the field names
    2. Union all the particular fields
    3. Convert the date field into IST
    4. Prepare the statement
    5. write the data into a file
    Basically, what I am trying to achieve is to convert a MySQL procedure to Oracle. The MySQL procedure is as follows:
    DELIMITER $$
    USE `jioworld`$$
    DROP PROCEDURE IF EXISTS `usersReport`$$
    CREATE DEFINER=`root`@`%` PROCEDURE `usersReport`(IN pathFile VARCHAR(255),IN startDate TIMESTAMP,IN endDate TIMESTAMP )
    BEGIN
    SET @a= CONCAT("(SELECT 'User ID','Account ID','Gender','Birthdate','Account Registered On') UNION ALL (SELECT IFNULL(a.riluid,''),IFNULL(a.rilaccountid,''),IFNULL(a.gender,''),IFNULL(a.birthdate,''),IFNULL(CONVERT_TZ(a.creationDate,'+0:00','+5:30'),'') INTO OUTFILE '",pathFile,"' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '' LINES TERMINATED BY '\n' FROM account_ a where a.creationDate>='",startDate,"' and a.creationdate <='",endDate,"')");
    PREPARE stmt FROM @a;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt ;
    END$$
    DELIMITER ;
    Regards,
    Vishal G

    1. Concat the field names
    Double pipe (||) is the concatenation operator in Oracle. There is also a CONCAT function for this purpose.
    2. Union all the particular fields
    Not sure what you mean by UNION ALL of particular fields? UNION ALL is a set operation applied to two different result sets that have the same projection.
    3. Convert the date field into IST
    SQL> select systimestamp "Default Time"
      2       , systimestamp at time zone 'Asia/Calcutta' "IST Time"
      3    from dual;
    Default Time                                       IST Time
    05-05-15 03:14:52.346099 AM -04:00                 05-05-15 12:44:52.346099 PM ASIA/CALCUTTA
    4. Prepare the statement
    What do you mean by "prepare the statement"?
    5. write the data into a file
    You can use the UTL_FILE API to write to a file.

  • Race conditions - can you actually lose the data in memory?

    I've seen posts where a race condition can actually destroy the contents of a global memory. I understand about not getting the right information if your read just beats your write, but can it actually lose the data altogether? This would mean you have to use a semaphore every time you write to a global, if you have something polling on it, to guarantee the data doesn't get lost. I've never seen this in any example programs - it's hard to believe!

    I think the idea is merely that when you write to a global (or any) variable, the previous contents are overwritten. If you intended to read the global before the write, but the write ends up happening first, then you lose that data.
    "mikema111" wrote in message
    news:[email protected]..
    > I've seen posts where a race condition can actually destroy the
    > contents of a global memory. I understand about not getting the right
    > information if your read just beats your write, but can it actually
    > lose the data altogether? This would mean you have to semaphore every
    > time you write to a global if you have something polling on it to
    > guarantee the data doesn't get lost. I've never seen this in any
    > example programs - its hard to be
    lieve!

  • How to write the Oracle data in XML format (.xml file)

    create or replace procedure pro(p_number in number)
    is
      cursor c1 is select * from emp where empno = p_number;
      v_file utl_file.file_type;
    begin
      v_file := utl_file.fopen('dirc', 'filename.txt', 'w');
      for i in c1 loop
        utl_file.put_line(v_file, i.ename || i.empno || i.job);
      end loop;
      utl_file.fclose(v_file);
    end;
    Now my client wants .xml files instead of a .txt file; the file should contain XML tags. Can anyone help with this, ideally with an example of how to write the Oracle data in XML format (an .xml file)?

    Hi,
    I hope this example helps:
    SQL> select employee_id, first_name, last_name, phone_number
    2 from employees where rownum < 6
    EMPLOYEE_ID FIRST_NAME LAST_NAME PHONE_NUMBER
    100 Steven King 515.123.4567
    101 Neena Kochhar 515.123.4568
    102 Lex De Haan 515.123.4569
    103 Alexander Hunold 590.423.4567
    104 Bruce Ernst 590.423.4568
    SQL> select dbms_xmlgen.getxml('select employee_id, first_name,
    2 last_name, phone_number from employees where rownum < 6') xml
    3 from dual;
    *<?xml version="1.0"?>*
    *<ROWSET>*
    *<ROW>*
    *<EMPLOYEE_ID>100</EMPLOYEE_ID>*
    *<FIRST_NAME>Steven</FIRST_NAME>*
    *<LAST_NAME>King</LAST_NAME>*
    *<PHONE_NUMBER>515.123.4567</PHONE_NUMBER>*
    *</ROW>*
    *<ROW>*
    *<EMPLOYEE_ID>101</EMPLOYEE_ID>*
    *<FIRST_NAME>Neena</FIRST_NAME>*
    *<LAST_NAME>Kochhar</LAST_NAME>*
    *<PHONE_NUMBER>515.123.4568</PHONE_NUMBER>*
    *</ROW>*
    *<ROW>*
    *<EMPLOYEE_ID>102</EMPLOYEE_ID>*
    *<FIRST_NAME>Lex</FIRST_NAME>*
    *<LAST_NAME>De Haan</LAST_NAME>*
    *<PHONE_NUMBER>515.123.4569</PHONE_NUMBER>*
    *</ROW>*
    *<ROW>*
    *<EMPLOYEE_ID>103</EMPLOYEE_ID>*
    *<FIRST_NAME>Alexander</FIRST_NAME>*
    *<LAST_NAME>Hunold</LAST_NAME>*
    *<PHONE_NUMBER>590.423.4567</PHONE_NUMBER>*
    *</ROW>*
    *<ROW>*
    *<EMPLOYEE_ID>104</EMPLOYEE_ID>*
    *<FIRST_NAME>Bruce</FIRST_NAME>*
    *<LAST_NAME>Ernst</LAST_NAME>*
    *<PHONE_NUMBER>590.423.4568</PHONE_NUMBER>*
    *</ROW>*
    *</ROWSET>*
    Ask if you want more assistance.
    Thanks.

  • I Need to Write the current Date and Time

    I need to write the current date and time to calculate the time spent in each instruction.
    What is the instruction to do that?
    Thanks.
    MIGUEL ANGEL CARO
    [email protected]

    The current time can be determined with a Date object:
    Date now = new java.util.Date();
    or, if you are interested in the milliseconds since the epoch:
    System.currentTimeMillis();
    If you wanted to print them out:
    System.out.println(new java.util.Date());
    System.out.println(System.currentTimeMillis());
    Kenny

  • How can I write the DOM tree to an XML file?

    Peace be upon those who follow the guidance.
    My problem, briefly, is that I can't write the DOM tree to an XML file.
    I wrote the following code to give the user the ability to enter the name of the database he wants to create.
    If I can write the DB name, its attributes and its values successfully to an XML file, then I'll read it back into RAM successfully.
    It'll be a simple DBMS.
    ***My code***
    import javax.swing.*;
    import javax.xml.parsers.*;
    import org.w3c.dom.Document;
    import org.w3c.dom.DOMException;
    import org.w3c.dom.Element;

    public class Main {
        /**
         * @param args
         */
        public static void main(String[] args) {
            String DBname;
            DBname = JOptionPane.showInputDialog("Enter Your Data Base Name");
            OPerations O = new OPerations();
            O.creatDB(DBname);
        }
    }

    class OPerations {
        public void creatDB(String name) {
            // get an empty Document
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            try {
                DocumentBuilder builder = f.newDocumentBuilder();
                Document doc = builder.newDocument();
                // build the DOM tree with the root node
                Element root = doc.createElement(name);
                // add the root element to the document
                doc.appendChild(root);
            } catch (ParserConfigurationException e) {
                e.printStackTrace();
            }
        }
    }

    Do an identity transformation to output it:
    // needs imports from javax.xml.transform, javax.xml.transform.dom, javax.xml.transform.stream and java.io
    TransformerFactory tf = TransformerFactory.newInstance();
    Transformer tr = tf.newTransformer();
    tr.transform(new DOMSource(doc), new StreamResult(new FileOutputStream(new File(filename))));
    (typed without syntax check)

  • Why can't I see the data in the spreadsheet?

    Why can't I see the data in the spreadsheet file in which I wrote my output data? I set the format of Write To Spreadsheet File to %.11f. Though I see only columns of zeros in the file, I can plot the data.

    I tried the "%.11f" format and it worked in the attached VI (LabVIEW 6.1). Please respond and attach a small example VI that demonstrates your problem.
    Chris_Mitchell
    Product Development Engineer
    Certified LabVIEW Architect
    Attachments:
    array_test.vi ‏15 KB

  • Can you write the following query without using the GROUP BY clause?

    select sp.sid, p.pid, p.name from product p, supp_prod sp
    where sp.pid= p.pid and
    sp.sid = ( select sid from supp_prod group by sid
    having count(*) =(select count(*) from product));
    Through this, we are retrieving all the products delivered by the supplier.
    Can you write the above query without using the GROUP BY clause?

      select sp.sid, p.pid, p.name
        from product p, supp_prod sp
       where sp.pid = p.pid
    The above query will still retrieve all the products supplied by the supplier; the sub-query is not necessary.
    Maybe if you can post some sample data and the expected output, it will help us understand what you want to achieve.

  • Data warehouse Loader did not write the data

    Hi,
    I need to know which products are the most searched. I know the tables responsible for storing this information are ARF_QUERY and ARF_QUESTION. I already have the Data Warehouse Loader module running; if anyone knows why the Data Warehouse Loader did not write the data to the database, I would appreciate it.
    Thanks.

    I have configured the Data Warehouse Loader and its components, and I have also enabled the logging mechanism.
    I can manually pass the log files into the queue and then populate the data into the Data Warehouse database through scheduling.
    The log file data is populated into this queue through JMS message processing, and this should be automated; I am unable to configure this.
    Which method is responsible for adding the log file data to the loader queue, and how do I automate this?
