Does Concurrent Data Store use synchronous writes?

When specifying the DB_INIT_CDB flag while opening a DB environment (to use a Concurrent Data Store), you cannot specify any subsystem flags other than DB_INIT_MPOOL. Does this mean that logging and transactions are not enabled, and in turn that Berkeley DB does not use synchronous disk writes? It would be great if someone could confirm...

Hi,
Indeed, when setting up CDS (Concurrent Data Store) the only other subsystem you may initialize is the shared memory buffer pool subsystem (DB_INIT_MPOOL). CDS suits applications that need no full recoverability or transaction semantics, but do need deadlock-free, multiple-reader/single-writer access to the database.
You will initialize neither the transaction subsystem (DB_INIT_TXN) nor the logging subsystem (DB_INIT_LOG). Note that you also cannot specify the recovery configuration flags (DB_RECOVER or DB_RECOVER_FATAL) when opening the environment with DB_INIT_CDB.
I assume that by synchronous/asynchronous writes you are referring to the possibility of using DB_TXN_NOSYNC and DB_TXN_WRITE_NOSYNC in TDS (Transactional Data Store) applications to change the default commit behavior (DB_TXN_SYNC), which is to synchronously flush the log when a transaction commits. Since a CDS setup has no log buffer, no logs and no transactions, these flags do not apply.
The only aspect of writes in CDS applications that needs discussion is flushing the cache. Flushing the environment or database cache - DB->sync(), DB->close(), DB_ENV->memp_sync() - ensures that any changes made to the database are written to disk (stable storage). So you could say that, since there are no durability guarantees (including recoverability after failure), disk writes in a CDS application are not synchronous: they do not reach stable storage until you explicitly flush the environment/database cache.
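To illustrate, here is a minimal sketch using Python's bsddb module (the same module that appears in the first thread below); the environment directory, file name and key/value are made up for the example:

    from bsddb import db

    env = db.DBEnv()
    # CDS: only the cache subsystem is initialized besides DB_INIT_CDB
    # (no DB_INIT_TXN, no DB_INIT_LOG, so no synchronous log flushes at all)
    env.open('/tmp/cds_home', db.DB_CREATE | db.DB_INIT_CDB | db.DB_INIT_MPOOL)

    d = db.DB(env)
    d.open('example.db', db.DB_BTREE, db.DB_CREATE)
    d.put('key', 'value')   # the change may still sit only in the cache
    d.sync()                # explicitly flush dirty pages to stable storage
    d.close()
    env.close()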
More information on CDS applications is here:
[http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/cam.html]
Regards,
Andrei

Similar Messages

  • Reclaiming memory when using concurrent data store

    Hello,
    I use the concurrent data store in my python application and I'm noticing that
    the system memory usage increases and is never freed when the application
    is done. The following python code is a unit test that simulates my app's workload:
    ##########BEGIN PYTHON CODE##################
    """TestCases for multi-threaded access to a DB.
    #import gc
    #gc.enable()
    #gc.set_debug(gc.DEBUG_LEAK)
    import os
    import sys
    import time
    import errno
    import shutil
    import tempfile
    from pprint import pprint
    from random import random
    try:
        True, False
    except NameError:
        True = 1
        False = 0
    DASH = '-'
    try:
        from threading import Thread, currentThread
        have_threads = True
    except ImportError:
        have_threads = False
    import unittest
    verbose = 1
    from bsddb import db, dbutils
    class BaseThreadedTestCase(unittest.TestCase):
        dbtype       = db.DB_UNKNOWN  # must be set in derived class
        dbopenflags  = 0
        dbsetflags   = 0
        envflags     = 0
        def setUp(self):
            if verbose:
                dbutils._deadlock_VerboseFile = sys.stdout
            homeDir = os.path.join(os.path.dirname(sys.argv[0]), 'db_home')
            self.homeDir = homeDir
            try:
                os.mkdir(homeDir)
            except OSError, e:
                if e.errno <> errno.EEXIST: raise
            self.env = db.DBEnv()
            self.setEnvOpts()
            self.env.open(homeDir, self.envflags | db.DB_CREATE)
            self.filename = self.__class__.__name__ + '.db'
            self.d = db.DB(self.env)
            if self.dbsetflags:
                self.d.set_flags(self.dbsetflags)
            self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE)
        def tearDown(self):
            self.d.close()
            self.env.close()
            del self.d
            del self.env
            #shutil.rmtree(self.homeDir)
            #print "\nGARBAGE:"
            #gc.collect()
            #print "\nGARBAGE OBJECTS:"
            #for x in gc.garbage:
            #    s = str(x)
            #    print type(x),"\n ", s
        def setEnvOpts(self):
            pass
        def makeData(self, key):
            return DASH.join([key] * 5)
    class ConcurrentDataStoreBase(BaseThreadedTestCase):
        dbopenflags = db.DB_THREAD
        envflags    = db.DB_THREAD | db.DB_INIT_CDB | db.DB_INIT_MPOOL
        readers     = 0 # derived class should set
        writers     = 0
        records     = 1000
        def test01_1WriterMultiReaders(self):
            if verbose:
                print '\n', '-=' * 30
                print "Running %s.test01_1WriterMultiReaders..." % \
                      self.__class__.__name__
            threads = []
            for x in range(self.writers):
                wt = Thread(target = self.writerThread,
                            args = (self.d, self.records, x),
                            name = 'writer %d' % x,
                            )#verbose = verbose)
                threads.append(wt)
            for x in range(self.readers):
                rt = Thread(target = self.readerThread,
                            args = (self.d, x),
                            name = 'reader %d' % x,
                            )#verbose = verbose)
                threads.append(rt)
            for t in threads:
                t.start()
            for t in threads:
                t.join()
        def writerThread(self, d, howMany, writerNum):
            #time.sleep(0.01 * writerNum + 0.01)
            name = currentThread().getName()
            start = howMany * writerNum
            stop = howMany * (writerNum + 1) - 1
            if verbose:
                print "%s: creating records %d - %d" % (name, start, stop)
            for x in range(start, stop):
                key = '%04d' % x
                #dbutils.DeadlockWrap(d.put, key, self.makeData(key),
                #                     max_retries=12)
                d.put(key, self.makeData(key))
                if verbose and x % 100 == 0:
                    print "%s: records %d - %d finished" % (name, start, x)
            if verbose:
                print "%s: finished creating records" % name
    ##         # Each write-cursor will be exclusive, the only one that can update the DB...
    ##         if verbose: print "%s: deleting a few records" % name
    ##         c = d.cursor(flags = db.DB_WRITECURSOR)
    ##         for x in range(10):
    ##             key = int(random() * howMany) + start
    ##             key = '%04d' % key
    ##             if d.has_key(key):
    ##                 c.set(key)
    ##                 c.delete()
    ##         c.close()
            if verbose:
                print "%s: thread finished" % name
            d.sync()
            del d
        def readerThread(self, d, readerNum):
            time.sleep(0.01 * readerNum)
            name = currentThread().getName()
            for loop in range(5):
                c = d.cursor()
                count = 0
                rec = c.first()
                while rec:
                    count += 1
                    key, data = rec
                    self.assertEqual(self.makeData(key), data)
                    rec = c.next()
                if verbose:
                    print "%s: found %d records" % (name, count)
                c.close()
                time.sleep(0.05)
            if verbose:
                print "%s: thread finished" % name
            del d
        def setEnvOpts(self):
            #print "Setting cache size:", self.env.set_cachesize(0, 2000)
            pass
    class BTreeConcurrentDataStore(ConcurrentDataStoreBase):
        dbtype  = db.DB_BTREE
        writers = 10
        readers = 100
        records = 100000
    def test_suite():
        suite = unittest.TestSuite()
        if have_threads:
            suite.addTest(unittest.makeSuite(BTreeConcurrentDataStore))
        else:
            print "Threads not available, skipping thread tests."
        return suite
    if __name__ == '__main__':
        unittest.main(defaultTest='test_suite')
        #print "\nGARBAGE:"
        #gc.collect()
        #print "\nGARBAGE OBJECTS:"
        #for x in gc.garbage:
        #    s = str(x)
        #    print type(x),"\n ", s
    ##########END PYTHON CODE##################
    Using the Linux command 'top' prior to and during the execution of
    the Python script above, I noticed that a considerable amount of memory
    is used up and never reclaimed when it ends. If you delete db_home,
    however, the memory is reclaimed.
    Am I setting up the bsddb concurrent data store incorrectly somehow?
    I'm using Python 2.5.1 and the built-in bsddb module.
    Thanks,
    Gerald

    I think I am seeing what you are reporting, but I need to check further into
    the reason for this.
    Running your program and monitoring with top/vmstat before and after the test, and
    after deleting db_home, gives:
    BEFORE RUNNING PYTHON TEST:
    ++++++++++++++++++++++++++
    top - 17:00:17 up 7:00, 6 users, load average: 0.07, 0.38, 0.45
    Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
    Cpu(s): 3.6% us, 0.7% sy, 0.0% ni, 95.5% id, 0.0% wa, 0.2% hi, 0.0% si
    Mem: 1545196k total, 1407100k used, 138096k free, 20700k buffers
    Swap: 2040212k total, 168k used, 2040044k free, 935936k cached
    [swhitman@swhitman-lnx python]$ vmstat
    procs -----------memory---------- ---swap-- -----io---- system ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 0 160 247096 20860 833604 0 0 31 22 527 675 7 1 91 1
    AFTER RUNNING PYTHON TEST:
    ++++++++++++++++++++++++++
    top - 17:02:00 up 7:02, 6 users, load average: 2.58, 1.36, 0.80
    Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
    Cpu(s): 3.7% us, 0.5% sy, 0.0% ni, 95.8% id, 0.0% wa, 0.0% hi, 0.0% si
    Mem: 1545196k total, 1508156k used, 37040k free, 20948k buffers
    Swap: 2040212k total, 168k used, 2040044k free, 1035788k cached
    [swhitman@swhitman-lnx python]$ vmstat
    procs -----------memory---------- ---swap-- -----io---- system ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    0 0 160 143312 21120 935784 0 0 31 25 527 719 7 1 91 1
    AFTER RUNNING PYTHON TEST & DB_HOME IS DELETED:
    ++++++++++++++++++++++++++++++++++++++++++++++
    top - 17:02:48 up 7:02, 6 users, load average: 1.22, 1.17, 0.76
    Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
    Cpu(s): 8.8% us, 0.5% sy, 0.0% ni, 90.5% id, 0.0% wa, 0.2% hi, 0.0% si
    Mem: 1545196k total, 1405236k used, 139960k free, 21044k buffers
    Swap: 2040212k total, 168k used, 2040044k free, 934032k cached
    [swhitman@swhitman-lnx python]$ vmstat
    procs -----------memory---------- ---swap-- -----io---- system ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 0 160 246208 21132 833852 0 0 31 25 527 719 7 1 91 1
    So the top/vmstat memory usage summary is:
                           before test       after test        after rm db_home/*
    top (mem used)         1407100k          1508156k          1405236k
    vmstat (free/cache)    247096/833604     143312/935784     246208/833852

  • Lock-timeout does not work with Concurrent Data Store

    Hello,
    If my application (Concurrent Data Store) does not release all of its locks (e.g. after a crash), those
    locks are never removed. My application sets the following timeout values:
    DB_ENV->set_timeout(DB_SET_LOCK_TIMEOUT,60*1000*1000);
    DB_ENV->set_lk_detect(DB_LOCK_EXPIRE);
    Once a minute:
    DB_ENV->lock_detect( 0, DB_LOCK_EXPIRE, NULL );
    If a process keeps a lock open, all further processes will be blocked. "db_deadlock"
    also does not remove these old locks.
    Thank you very much
    Josef

    The functions you called only take effect when the locking subsystem is enabled, and with CDS you can't start up the locking subsystem. When using CDS you normally don't have to worry about locks: BDB takes care of them and keeps access deadlock-free. Have a look at the documentation at http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/frame.html
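    For instance, with the Python bsddb module (a sketch along the lines of the commented-out write-cursor code in the first thread above; 'd' stands for a DB handle opened in a DB_INIT_CDB environment, and 'somekey' is just an example key):
        c = d.cursor(flags=db.DB_WRITECURSOR)  # CDS grants the single writer slot here
        if d.has_key('somekey'):
            c.set('somekey')                   # position the write cursor on the record
            c.delete()                         # modify or delete under the write cursor
        c.close()                              # close promptly so other writers can proceed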

  • Concurrent Data Store (CDB) application hangs when it shouldn't

    My application hangs when trying to open a concurrent data store (CDB) database for reading:
    #0 0x0000003ad860b309 in pthread_cond_wait@@GLIBC_2.3.2 ()
    from /lib64/libpthread.so.0
    #1 0x00007ffff7ce67de in __db_pthread_mutex_lock (env=0x610960, mutex=100)
    at /home/steve/ldm/package/src/Berkeley-DB/dist/../mutex/mut_pthread.c:318
    #2 0x00007ffff7ce5ea5 in __db_tas_mutex_lock_int (env=0x610960, mutex=100,
    nowait=0)
    at /home/steve/ldm/package/src/Berkeley-DB/dist/../mutex/mut_tas.c:218
    #3 0x00007ffff7ce5c43 in __db_tas_mutex_lock (env=0x610960, mutex=100)
    at /home/steve/ldm/package/src/Berkeley-DB/dist/../mutex/mut_tas.c:248
    #4 0x00007ffff7d3715b in __lock_id (env=0x610960, idp=0x0, lkp=0x610e88)
    at /home/steve/ldm/package/src/Berkeley-DB/dist/../lock/lock_id.c:68
    #5 0x00007ffff7da1b4d in __fop_file_setup (dbp=0x610df0, ip=0x0, txn=0x0,
    name=0x40b050 "registry.db", mode=0, flags=1024, retidp=0x7fffffffdd94)
    at /home/steve/ldm/package/src/Berkeley-DB/dist/../fileops/fop_util.c:243
    #6 0x00007ffff7d70c8e in __db_open (dbp=0x610df0, ip=0x0, txn=0x0,
    fname=0x40b050 "registry.db", dname=0x0, type=DB_BTREE, flags=1024,
    mode=0, meta_pgno=0)
    at /home/steve/ldm/package/src/Berkeley-DB/dist/../db/db_open.c:176
    #7 0x00007ffff7d673b2 in __db_open_pp (dbp=0x610df0, txn=0x0,
    fname=0x40b050 "registry.db", dname=0x0, type=DB_BTREE, flags=1024, mode=0)
    at /home/steve/ldm/package/src/Berkeley-DB/dist/../db/db_iface.c:1146
    I suspect that the database environment believes that another process has the database open for writing. This cannot be the case, however, as all applications that access the database do so via an interface library I wrote that registers a termination function via the atexit() function to ensure that both the DB and DB_ENV handles are properly closed -- and all previously-executed applications terminated normally.
    The interface library opens the database like this (error-handling branches elided for brevity):
    int status;
    Backend* backend = (Backend*)malloc(sizeof(Backend));
    if (NULL == backend) {
        /* error handling elided */
    }
    else {
        DB_ENV* env;
        if (status = db_env_create(&env, 0)) {
            /* error handling elided */
        }
        else {
            if (status = env->open(env, path,
                    DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL, 0)) {
                /* error handling elided */
            }
            else {
                DB* db;
                if (status = db_create(&db, env, 0)) {
                    /* error handling elided */
                }
                else {
                    if (status = db->open(db, NULL, DB_FILENAME, NULL,
                            DB_BTREE, forWriting ? DB_CREATE : DB_RDONLY, 0)) {
                        /* error handling elided */
                    }
                    else {
                        backend->db = db;
                    } /* "db" opened */
                    if (status)
                        db->close(db, 0);
                } /* "db" allocated */
                if (status) {
                    env->close(env, 0);
                    env = NULL;
                }
            } /* "env" opened */
            if (status && NULL != env)
                env->close(env, 0);
        } /* "env" allocated */
        if (status)
            free(backend);
    } /* "backend" allocated */
    This code encounters no errors.
    The interface library also registers the following code to be executed when any process that uses the interface library exits:
    if (NULL != backend) {
        DB* db = backend->db;
        DB_ENV* env = db->get_env(db);
        if (db->close(db, 0)) {
            /* error handling elided */
        }
        else {
            if (env->close(env, 0)) {
                /* error handling elided */
            }
            else {
                /* database properly closed */
            }
        }
    }
    As I indicated, all previously-executed processes that use the interface library terminated normally.
    I'm using version 4.8.24.NC of Berkeley DB on the following platform:
    $ uname -a
    Linux gilda.unidata.ucar.edu 2.6.27.41-170.2.117.fc10.x86_64 #1 SMP Thu Dec 10 10:36:29 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
    Any ideas?

    Bogdan,
    That can't be it. I'm using a structured programming style in which the successful initialization of a cursor is ultimately followed by a closing of the cursor. There's only one place where the code does this and it's obvious that the cursor gets released.
    I've also read the CDB section.
    --Steve Emmerson

  • Where does Master Data Store?

    Hi All,
    Where is master data stored?
    What are limitations while deleting master data?
    All the answers will be rewarded with points.
    Regards,
    Jackie.

    Hi,
    Master data is stored in the tables that are generated when you activate the characteristics...
    You can see the tables in the InfoObject maintenance...
    There are tables for master data texts, attributes and hierarchies...
    e.g. /BI0/TMATERIAL is the text table. You can find it in the Master data/texts tab in InfoObject maintenance.
    Secondly, deletion of master data is allowed only if the master data is not linked to any transaction data within BW. This is because in the InfoCubes the SIDs would be generated that point to the master data tables. Hence SAP does not allow the data to be deleted in case it exists in dependent tables.
    Hope it helps!!!!
    Regards,
    Nitin

  • Reverse engg data store using ODI SDK

    How do I reverse-engineer a data store, and not a whole model, using the ODI SDK?
    Is there any class for reverse engineering, or should I use KMs?

    When I reverse-engineer my delimited or fixed file, I am able to see the data of the datastore; I used the same datastore in an interface.
    Yes, I used the data store in an interface using code (the SDK).
    I am not doing anything manually in Studio; everything is done through Java code (using the SDK)... but I have checked the data of the datastore by right-clicking on it, and it shows proper data.
    The file descriptor contains the start position and the number of bytes.
    Your text file looks like FIXED format;
    my Delimited format text file:
    empNo,empName
    1,Deepa
    2,Deepali
    3,Patil
    4,Deeps
    and FIXED format text file:
    10   
    Georges                                     
    Hamilton                                    
    15/01/2001 00:00:00
    11   
    Andrew                                      
    Andersen                                    
    22/02/1999 00:00:00
    12   
    John                                        
    Galagers                                    
    20/04/2000 00:00:00
    13   
    Jeffrey                                     
    Jeferson                                    
    10/06/1988 00:00:00
    20   
    Jennie                                      
    Daumesnil                                   
    28/02/1988 00:00:00
    21   
    Steve                                       
    Barrot                                      
    24/09/1992 00:00:00
    22   
    Mary                                        
    Carlin                                      
    14/03/1995 00:00:00
    30   
    Paul                                        
    Moore                                       
    11/03/1999 00:00:00
    31   
    Paul                                        
    Edwood                                      
    18/03/2003 00:00:00
    32   
    Megan                                       
    Keegan                                      
    29/05/2001 00:00:00
    40   
    Rodolph                                     
    Bauman                                      
    29/05/2000 00:00:00
    41   
    Stanley                                     
    Fischer                                     
    12/08/2001 00:00:00
    42   
    Brian                                       
    Schmidt                                     
    25/08/1992 00:00:00
    50   
    Anish                                       
    Ishimoto                                    
    30/01/1992 00:00:00
    51   
    Cynthia                                     
    Nagata                                      
    28/02/1994 00:00:00
    52   
    William                                     
    Kudo                                        
    28/03/1993 00:00:00
    If the file format is Delimited we need not worry about the physical or logical length, because reverse engineering handles that and automatically sets all fields.
    The odiColumn.setLength(length) method sets both the physical and the logical length.

  • Why does my app store use a different account for downloading and another for updating apps?

    My app store used my own account for downloading and my sister's for updating, and it keeps telling me to reset my account. I already did that the other few times it told me to do that!

    Have you restored your device from your sister's backup?

  • Does the data transfer used in the back up in time capsule included in the data allowance from my service provider?

    Is the data transfer used in the backup with airport time capsule included in the data allowance with my service provider?

    gheefromsa wrote:
    I am using Ethernet cable to connect TC to the modem. I am only backing up my mac book air within range of the TC. So I guess it will be OK from your reply. As a trial I did interrupt the back up by turning off the TC, disconnecting the Ethernet cable, turning TC back on. The mac would not connect wirelessly with the TC without the Ethernet connection.
    It is not using your ISP allowance, absolutely for sure. Well, 99.999%; absolutes tend to exist only in maths.
    But you did the test wrongly. I said to disconnect the modem during the backup, not turn off the TC; obviously, once interrupted, the TC will not work again until everything is plugged back in, but it will keep working even bridged for a short time. Just unplug the WAN port from the modem.
    The test is moot now, as I can give you a 99.999% promise it is not using the ISP.

  • Why does the app store use so much bandwidth?

    My download is taking days and slowing down the connection for everyone else at home. With the download as slow as it is, it shouldn't be having the effect it is. Is the App Store similar to a BitTorrent client? Is there file sharing happening? Does the download improve if the app is not free? I would hesitate to buy anything else using the App Store. How long does it take to download something really big like Lion?

    It is not a BitTorrent client. The download speed usually depends on your ISP's service. My 10-megabit service usually downloads things in a couple of minutes. Lion took a couple of hours because it is 7 GB or so.
    Something is wrong. I would check with the ISP.

  • Why does new Date() sometimes use different timezone?

    I have date values stored as longs (milliseconds). I debug my app because I somehow get time parts that are not zero (00:00:00). Then I noticed this:
    315529200000 : Tue Jan 01 00:00:00 CET 1980
    1104447600000 : Fri Dec 31 00:00:00 CET 2004
    1059602400000 : Thu Jul 31 00:00:00 CEST 2003
    All values have a time of zero (correct) but show a different time zone? How is that possible? I created the time strings all via the same
    System.out.println("Date of long: "+new Date(l));
    The last number suddenly has "CEST" instead of "CET" and therefore somewhere else leads to a time of 23:00:00.

    I just checked this simple case:
    I have a date and add exactly 1 day and the result is not the original result + 1 day:
            long d1 = 1099173600000L; //2004-10-31 00:00:00
            System.out.println(new Date(d1));
            long d2 = 86400000L; //1 day in milliseconds
            long d3 = d1 + d2;
            System.out.println(new Date(d3)); //2004-10-31 23:00:00 !!!
    the output is:
    Sun Oct 31 00:00:00 CEST 2004
    Sun Oct 31 23:00:00 CET 2004
    So, instead of adding 1 day, this just added 23 hours!

  • TT0819: Data store not compatible with library

    Hi,
    We have TimesTen 6.0.4 being upgraded to TimesTen 7.0.5.
    I need to destroy the data store (which was created using library version 6.0.4) after the upgrade and I receive the following error:
    ===
    "Failed to destroy data store: TT0819: Data store not compatible with library (lib 7.0.5.tt60, file 6.0.4.) -- file "db.c", lineno 21089, procedure "sbDbDestroy"
    To force removal of the data store, specify the '-force' option."
    ===
    I need to upgrade from 6.0.4 to 7.0.5 and also downgrade from 7.0.5 to 6.0.4.
    Which means that I need to:
    1-) UPGRADE: Destroy the data store (created with the TT library version 6.0.4) using the TT library 7.0.5.
    2-) DOWNGRADE: Destroy the data store (created with the TT library version 7.0.5) using the TT library 6.0.4.
    Can it be done? How?
    Thanks,
    Milena

    If I use the option ttDestroy -force, I do manage to destroy a 6.0.4 data store using the 7.0.5 library.
    I just don't know if this procedure is recommended.

  • Data Acquisition - using local variables to write data to a file

    Hello,
    I am running a Data Acquisition VI (currently in LabVIEW 7.1 but soon to be updated to 8.2) that collects ~100 parameters of data from several sources contained in a while loop. The current configuration (which I did not write) uses very few subVIs and writes to ~100 local variables to store each parameter. It then reads all the local variables, builds an array of all the strings, converts them to a spreadsheet string, and then uses the Write Characters To File function to append to a data file. I am trying to clean things up and have come up with subVIs to collect the data from the following sources:
    8 serial port sources collecting btwn 8 and 20 parameters each
    ~15 thermocouple readings
    ~10 analog inputs
    ~20 parameters read off an ARINC 429 bus.
    I have come up with a subVI to read each of the sources and have placed the subVIs in the while loop. Each subVI outputs the data that it collects in array or cluster form. I was wondering how best to write each parameter to a CSV file at between 1 and 10 Hz. Should I write each subVI output to a local variable and then read them off as was done before (the difference being that I have reduced the number of local variables to ~10 vs. >100)?
    I should add that precise timing is not that important, so if all the subVIs are not collecting simultaneously (which I understand that they won't be), it does not really matter.
    Thanks.

    Hi jilla,
    jilla wrote:
    What I think that you are saying is to turn the outputs of the 4 subVIs into inputs of a 5th subVI that writes to the data file. Correct?
    Yes.  It may sound like a fine point, but I believe it's better to create a VI specifically for formatting data - in your example, 4 arrays IN, a single string OUT.  Then write the string to file as a separate operation.  GUI-displayed data can go through a similar transformation, the four arrays wired to a subVI which builds output-structures specifically for display.  It's a beginner's mistake to put lots of individual controls and indicators on the screen when groups of them are naturally related (in an object-oriented sense).  Use clusters to group related controls - this will keep the diagram much cleaner.
    One more question: at what point (either # of data points or frequency of data collection) does it become necessary to use queues? Thanks.
    Well, there's not really a clearly definable "point".  I'd say if your update rate climbs above 100Hz, or you witness poor program or system performance, then it's time.  The scenario you've described is a fairly simple acquire/display&log loop - and simple is good.  Then again, people can't see or react to updates faster than about 10Hz, so it doesn't make sense to sacrifice performance for them - if performance becomes an issue.
    Re: queues:  Queues are sometimes used to buffer data that's "produced" in one place and "consumed" in another.
    Here, if/when logging data, you're logging with every DAQ.  I wouldn't recommend using a queue to transport data from a "DAQ loop" to a "logging loop" - those functions can be in the same loop.  Should/could a queue be used to get data from a "DAQ loop" to update the GUI at a lower frequency?  Sure, but a Notifier might be a better choice.  Further, in the (simple?) program you've described, you might use a case structure (True/False) to only update FP indicators every "X" iterations - a simple solution that doesn't require Queues or Notifiers.
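    (Not LabVIEW, but the same producer/consumer queue idea sketched in Python just to show the shape of the pattern; all names here are illustrative.)
        import threading
        import Queue                # named 'queue' in Python 3

        q = Queue.Queue()           # buffers samples between the two loops

        def daq_loop():             # producer: acquire and enqueue
            for i in range(100):
                q.put(i)            # pretend 'i' is one acquired sample
            q.put(None)             # sentinel tells the consumer to stop

        def log_loop():             # consumer: dequeue and log to file
            f = open('log.csv', 'w')
            while True:
                sample = q.get()
                if sample is None:
                    break
                f.write('%d\n' % sample)
            f.close()

        t = threading.Thread(target=daq_loop)
        t.start()
        log_loop()
        t.join()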
    Cheers!
    "Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)

  • JDODataStoreException: The instance null does not exist in the data store

    I'm unable to figure out how this exception occurs.
    I have a class IDCounter which has a number of fields such as
    'm_Name' (String)
    'm_AccountName' (String)
    'm_UserName' (String)
    'm_Description' (String)
    'm_CreationDate' (Date)
    'm_LastModifiedDate' (Date)
    'm_DeletedDate' (Date)
    'm_Count' (long)
    The filter I'm using is "m_AccountName == \"test\" && m_UserName
    ==\"test\" && m_DeletedDate == null"
    The generated SQL statement is "SELECT t0.M_IDX, t0.JDOCLASSX,
    t0.JDOLOCKX, t0.M_ACCOUNTNAMEX, t0.M_CREATIONDATEX, t0.M_DELETEDDATEX,
    t0.M_DESCRIPTIONX, t0.M_LASTMODIFIEDDATEX, t0.M_NAMEX, t0.M_USERNAMEX,
    t0.M_COUNTX FROM ABSTRACTENTITYX t0 WHERE ((t0.M_DELETEDDATEX IS NULL) AND
    t0.JDOCLASSX = 'com.ewarna.pdm.entities.IDCounter')"
    Exception Trace:
    javax.jdo.JDODataStoreException: The instance null does not exist in the
    data store.
         at
    com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.instantiateRow(LazyResultList.java:165)
         at
    com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.get(LazyResultList.java:96)
         at java.util.AbstractList$Itr.next(AbstractList.java:416)
         at
    com.solarmetric.kodo.runtime.ResultListIterator.next(ResultListIterator.java:49)
         at
    com.solarmetric.kodo.impl.jdbc.runtime.ResultListFactory.createResultList(ResultListFactory.java:85)
         at
    com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.executeQuery(JDBCStoreManager.java:646)
         at
    com.solarmetric.kodo.impl.jdbc.runtime.JDBCQuery.executeQuery(JDBCQuery.java:150)
         at com.solarmetric.kodo.query.QueryImpl.executeWithMap(QueryImpl.java:580)
         at com.solarmetric.kodo.query.QueryImpl.execute(QueryImpl.java:428)
         at
    com.solarmetric.kodo.query.QueryImpl$SynchronizedQuery.execute(QueryImpl.java:1331)
         at
    com.ewarna.pdm.sessions.persistence.BasicQuery.getByAdvancedFormula(BasicQuery.java:78)
         at
    com.ewarna.pdm.sessions.persistence.BasicQuery.getByFormula(BasicQuery.java:119)
         at
    com.ewarna.pdm.sessions.persistence.BasicQuery.getByFormula(BasicQuery.java:95)
         at
    com.ewarna.pdm.sessions.persistence.BasicQuery.getAll(BasicQuery.java:131)
         at
    com.ewarna.pdm.sessions.persistence.GenericEntityManager$7.execute(GenericEntityManager.java:305)
         at
    com.ewarna.pdm.sessions.persistence.GenericEntityManager.execute(GenericEntityManager.java:251)
         ... 18 more

    You can no longer display a workbook. You receive an error message when opening it: <Internal error>: 1201 document storage
    Cause and prerequisites
    In very rare cases, when you store a workbook, you might not be able to open it again.
    Solution
    Function module BDS_PHIOS_GET_RIGHT has to be changed so that the last available version of the Workbooks can be displayed.

  • What do you recommend to use as an offline data store, since SQL CE support is not in VS 2013?

    A few years back I was architecting an occasionally connected .Net desktop application. 
    VS 2010 was offering full support for Microsoft Sync Framework and SQL CE with Entity Framework. 
    This seemed like the perfect marriage, so I ran with it, and the resulting software solution is still successfully running in production, years later. 
    Jump forward to today, and I am architecting a new occasionally connected .Net desktop application. 
    I was really looking forward to taking advantage of the advances made by Microsoft in using the tools built into VS 2013. 
    However, what I discovered has dumbfounded me.  VS 2013 has no designer support for Sync Framework, and worse, built in support for SQL CE has been completely removed, including the ability to generate Entity Framework models from a
    CE database using the designer. 
    My question to the community is, what tools should I be using to solve the problem of offline storage in my brand new .Net application? 
    I am aware of ErikEJ’s SQL Server Compact Toolbox, which brings back some support for these features in VS 2013, but it is not as fully featured as the VS 2010 native support was, plus it does not have the institutional “Microsoft” stamp on it. 
    I am building a multimillion dollar corporate solution that I will have to support for many years.
     I would like to have some comfort that the technologies I select, today, will still be supported 5 years from now, unlike the way Microsoft has discontinued supporting Sync Framework and CE in the most recent VS. 
    I can accept open source technologies, because there is a community behind them, or off the shelf corporate solutions, since they will be driven by financial gain, but I have trouble committing to a solution that is solely supported by an individual,
    even if that person is a very talented Microsoft MVP.
    Some of the features of SQL CE that I would like to keep are
    Built in encryption of the file on disk
    Easy querying with an ORM, like Entity Framework
    Tools to easily sync up the offline data store with values from SQL Server (even better if this can be done remotely, such as over WCF)
    Does not require installation of additional software on the client machine, as SQL Express would
    Please, provide your feedback to let me know how you have achieved this, without resorting to simply using an older version of VS or Management Studio. 
    Thank you.

    Hello,
    Based on your description, you can try to use SQL Server 2012 Express LocalDB.
    LocalDB is created specifically for developers. It is very easy to install and requires no management, but it offers the same T-SQL language, programming surface and client-side providers as the regular SQL Server Express.
    SQL Server LocalDB can work with Entity Framework and the ADO.NET Sync Framework. However, there is no built-in encryption feature in LocalDB that
    lets you encrypt the database. You should encrypt/decrypt data on your own, for example, using
    Cryptographic Functions.
    Reference: SQL Express v LocalDB v SQL Compact Edition
    Regards,
    Fanny Liu
    TechNet Community Support

  • Where does BPM context data store ?

    hi experts,
    When we start a BPM process, the process instance gets created, and we use the BPM context to store data for the purpose of passing it to another task. Where does this data reside? In the server's primary memory (RAM)? Or is there some portal-local database where this context data is stored?
    If it is in RAM and we restart the server, will we lose this context data? Or is there some place (a portal-local database) where active process instances are saved?
    I am confused: why do we say BPM should not hold large volumes of data? By this I believe the data is putting weight on the RAM.

    Hi,
       Context data is stored in the DB. I highly recommend you read the CE architecture guide. Below is a quote (page 14) that relates to context:
    "Instead, at every save point, the data context of a process is serialized to XML and stored as one
    u201Eblob‟. When the data needs to be read back, it is fetched from the DB and parsed to re-instantiate the data
    objects in the memory."
    You can read Ulf's blog:
    /people/ulf.fildebrandt/blog/2010/04/20/composite-development-architecture-guidelines
    The document itself is here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/109b805f-2e28-2d10-ed9c-94eea0e8ae5c?quicklink=index&overridelayout=true
    HTH,
    O.
