Usage as cache / data store?

I've read of people using Elasticsearch or Solr as a caching layer for their application, which on the surface at least makes a lot of sense: the search engine is fast, and often you've got all your data in there anyway. I imagine that the appropriateness
of this is primarily an architecture question, but is there anything specific to Azure Search that would make using it as a caching layer an especially bad idea?
Thanks!
I'm not sure how much context that question requires, but for what it's worth: we make a kind of social CRM/intranet product, so our data consists of contacts, employees, projects, companies, and social posts/comments (which can include uploaded documents).
Currently, most of this is pulled from a (fairly normalized) SQL database, or from a small-ish in-role Azure cache. We want to implement a great search feature across all this data, but because using this data often requires some complex joins or formatting,
we also want to implement a larger cache to keep all of our data in a denormalized form. So if not Azure Search or Elasticsearch, we're probably looking at Redis or maybe even a NoSQL like Mongo.

Yes, we do see many cases where it's natural to use Azure Search (or some other search stack) both for search and for serving other queries/content as well, since as you say you already have the data there.
One particular pattern we see is having an operational store such as SQL which stays normalized (and sometimes even on-premises) and the denormalized Azure Search index is created out of it. Then online projections (websites, mobile apps, etc.) are served
entirely from the Azure Search index.
Whether you should use Azure Search or a different cache depends on your access patterns. If you need full-text search (e.g. basic keyword search and potentially things such as linguistics, ranking, etc.) then Azure Search could be a good fit. If you only
retrieve data by exact key and won't need search in the future, you could still use Azure Search, but other stores (e.g. Redis Cache or DocumentDB) might offer advantages of their own.
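To make the cache-aside idea concrete, here is a minimal sketch against the Azure Search REST API (the lookup-by-key call is the documented Lookup Document operation). The service name, index name, api-key, and api-version are placeholders, and load_from_sql / index_document stand in for your own SQL query and indexing code; treat this as an illustration, not a prescribed implementation.

    # Hypothetical cache-aside read: try the Azure Search index first,
    # fall back to SQL on a miss and push the denormalized document back.
    import requests

    SERVICE = "https://<your-service>.search.windows.net"   # placeholder service URL
    INDEX = "contacts"                                       # placeholder index name
    API_VERSION = "2015-02-28"                               # use whatever version is current
    HEADERS = {"api-key": "<your-key>"}                      # placeholder key

    def get_document(key, load_from_sql, index_document):
        # Lookup Document: GET /indexes/{index}/docs/{key}
        url = f"{SERVICE}/indexes/{INDEX}/docs/{key}?api-version={API_VERSION}"
        resp = requests.get(url, headers=HEADERS)
        if resp.status_code == 200:
            return resp.json()              # hit: serve straight from the index
        doc = load_from_sql(key)            # miss: do the joins/formatting once...
        index_document(INDEX, doc)          # ...then store the denormalized result
        return doc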

Similar Messages

  • Could not start cache agent for the requested data store

    Hi,
    This is my first attempt with TimesTen. I am running TimesTen on the same Linux host (RHES 5.2) that is running Oracle 11g R2. The version of TimesTen is:
    TimesTen Release 11.2.1.4.0
    Trying to create a simple cache.
    The DSN entry for ttdemo1 in .odbc.ini is as follows:
    [ttdemo1]
    Driver=/home/oracle/TimesTen/timesten/lib/libtten.so
    DataStore=/work/oracle/TimesTen_store/ttdemo1
    PermSize=128
    TempSize=128
    UID=hr
    OracleId=MYDB
    DatabaseCharacterSet=WE8MSWIN1252
    ConnectionCharacterSet=WE8MSWIN1252
    Using ttisql I connect
    Command> connect "dsn=ttdemo1;pwd=oracle;oraclepwd=oracle";
    Connection successful: DSN=ttdemo1;UID=hr;DataStore=/work/oracle/TimesTen_store/ttdemo1;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=WE8MSWIN1252;DRIVER=/home/oracle/TimesTen/timesten/lib/libtten.so;OracleId=MYDB;PermSize=128;TempSize=128;TypeMode=0;OracleNetServiceName=MYDB;
    (Default setting AutoCommit=1)
    Command> call ttcacheuidpwdset('ttsys','oracle');
    Command> call ttcachestart;
    10024: Could not start cache agent for the requested data store. Could not initialize Oracle Environment Handle.
    The command failed.
    The following is shown in the tterrors.log:
    15:41:21.82 Err : ORA: 9143: ora-9143--1252549744-xxagent03356: Datastore: TTDEMO1 OCIEnvCreate failed. Return code -1
    15:41:21.82 Err : : 7140: oraagent says it has failed to start: Could not initialize Oracle Environment Handle.
    15:41:22.36 Err : : 7140: TT14004: TimesTen daemon creation failed: Could not spawn oraagent for '/work/oracle/TimesTen_store/ttdemo1': Could not initialize Oracle Environment Handl
    What are the reasons that the daemon cannot spawn another agent? FYI the environment variables are set as:
    ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
    ANT_HOME=/home/oracle/TimesTen/ttdemo1/3rdparty/ant
    CLASSPATH=/home/oracle/TimesTen/ttdemo1/lib/ttjdbc5.jar:/home/oracle/TimesTen/ttdemo1/lib/orai18n.jar:/home/oracle/TimesTen/ttdemo1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/ttdemo1/3rdparty/jms1.1/lib/jms.jar:.
    oracle@rhes5:/home/oracle/TimesTen/ttdemo1/info% echo $LD_LIBRARY_PATH
    /home/oracle/TimesTen/ttdemo1/lib:/home/oracle/TimesTen/ttdemo1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
    Cheers

    Sure thanks.
    Here you go:
    Daemon environment:
    _=/bin/csh
    DISABLE_HUGETLBFS=1
    SYSTEM=TEST
    INIT_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/init+ASM.ora
    GEN_APPSDIR=/home/oracle/dba/bin
    LD_LIBRARY_PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
    HOME=/home/oracle
    SPFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
    TNS_ADMIN=/u01/app/oracle/product/11.2.0/db_1/network/admin
    INITFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
    HTMLDIR=/home/oracle/+ASM/dba/html
    HOSTNAME=rhes5
    TEMP=/oradata1/tmp
    PWD=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin
    HISTSIZE=1000
    PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/oci:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/jdbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc_drivermgr:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/proc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1/sdk:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant/bin:/usr/kerberos/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/bin/X11:/usr/X11R6/bin:/usr/platform/SUNW,Ultra-2/sbin:/u01/app/oracle/product/11.2.0/db_1:/u01/app/oracle/product/11.2.0/db_1/bin:.
    GEN_ADMINDIR=/home/oracle/dba/admin
    CONTROLFILE_DIR=/u01/app/oracle/backup/+ASM/controlfile_dir
    ETCDIR=/home/oracle/+ASM/dba/etc
    GEN_ENVDIR=/home/oracle/dba/env
    DATAFILE_DIR=/u01/app/oracle/backup/+ASM/datafile_dir
    BACKUPDIR=/u01/app/oracle/backup/+ASM
    RESTORE_ARCFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_arcfiles.txt
    TMPDIR=/oradata1/tmp
    CVS_RSH=ssh
    ARCLOG_DIR=/u01/app/oracle/backup/+ASM/arclog_dir
    REDOLOG_DIR=/u01/app/oracle/backup/+ASM/redolog_dir
    INPUTRC=/etc/inputrc
    LOGDIR=/home/oracle/+ASM/dba/log
    DATAFILE_LIST=/u01/app/oracle/backup/+ASM/datafile_dir/datafile.list
    LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
    PS1=rhes5:($ORACLE_SID)$
    G_BROKEN_FILENAMES=1
    SHELL=/bin/ksh
    PASSFILE=/home/oracle/dba/env/.ora_accounts
    LOGNAME=oracle
    ORA_NLS10=/u01/app/oracle/product/11.2.0/db_1/nls/data
    ORACLE_SID=mydb
    APPSDIR=/home/oracle/+ASM/dba/bin
    ORACLE_OWNER=oracle
    RESTOREFILE_DIR=/u01/app/oracle/backup/+ASM/restorefile_dir
    SQLPATH=/home/oracle/dba/bin
    TRANDUMPDIR=/tran
    RESTORE_SPFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_spfile.txt
    RESTORE_DATAFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_datafiles.txt
    ENV=/home/oracle/.kshrc
    SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
    SSH_CONNECTION=50.140.197.215 62742 50.140.197.216 22
    LESSOPEN=|/usr/bin/lesspipe.sh %s
    TERM=xterm
    GEN_ETCDIR=/home/oracle/dba/etc
    SP_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/spfile+ASM.ora
    ORACLE_BASE=/u01/app/oracle
    ASTFEATURES=UNIVERSE - ucb
    ADMINDIR=/home/oracle/+ASM/dba/admin
    SSH_CLIENT=50.140.197.215 62742 22
    TZ=GB
    SUPPORT=oracle@linux
    ARCHIVE_LOG_LIST=/u01/app/oracle/backup/+ASM/arclog_dir/archive_log.list
    USER=oracle
    RESTORE_TEMPFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_tempfiles.txt
    MAIL=/var/spool/mail/oracle
    EXCLUDE=/home/oracle/+ASM/dba/bin/exclude.lst
    GEN_LOGDIR=/home/oracle/dba/log
    SSH_TTY=/dev/pts/2
    RESTORE_INITFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_initfile.txt
    HOSTTYPE=i386-linux
    VENDOR=intel
    OSTYPE=linux
    MACHTYPE=i386
    SHLVL=1
    GROUP=dba
    HOST=rhes5
    REMOTEHOST=vista
    EDITOR=vi
    ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
    ODBCINI=/home/oracle/.odbc.ini
    TT=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/
    SHLIB_PATH=/u01/app/oracle/product/11.2.0/db_1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1//lib
    ANT_HOME=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant
    CLASSPATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/ttjdbc5.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/orai18n.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/jms1.1/lib/jms.jar:.
    TT_AWT_PLSQL=0
    NLS_LANG=AMERICAN_AMERICA
    NLS_COMP=ANSI
    NLS_SORT=BINARY
    NLS_LENGTH_SEMANTICS=BYTE
    NLS_NCHAR_CONV_EXCP=FALSE
    NLS_CALENDAR=GREGORIAN
    NLS_TIME_FORMAT=hh24:mi:ss
    NLS_DATE_FORMAT=syyyy-mm-dd hh24:mi:ss
    NLS_TIMESTAMP_FORMAT=syyyy-mm-dd hh24:mi:ss.ff9
    ORACLE_HOME=
    DaemonCWD = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info
    DaemonLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/tterrors.log
    DaemonOptionsFile = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttendaemon.options
    Platform = Linux/x86/32bit
    SupportLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttmesg.log
    Uptime = 136177 seconds
    Backcompat = no
    Group = 'dba'
    Daemon pid 8111 port 53384 instance ttimdb1
    End of report
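    Not a diagnosis from this thread, but OCIEnvCreate failures from the cache agent are usually an environment problem: the agent is spawned by the TimesTen main daemon, so the Oracle client libraries and NLS settings have to be visible in the daemon's environment (note ORACLE_HOME is empty in the report above). A small sketch, assuming you restart the daemon from the shell being checked:
    # Hypothetical sanity check before recycling the TimesTen daemon
    # (ttDaemonAdmin -stop / ttDaemonAdmin -start) so that the cache agent
    # inherits a usable Oracle client environment.
    import os

    def check_oracle_client_env():
        ld_path = os.environ.get("LD_LIBRARY_PATH", "")
        for var in ("ORACLE_HOME", "TNS_ADMIN", "NLS_LANG"):
            print(var, "=", os.environ.get(var, "(not set)"))
        print("Instant Client on LD_LIBRARY_PATH:", "instantclient" in ld_path)

    if __name__ == "__main__":
        check_oracle_client_env()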

  • Reclaiming memory when using concurrent data store

    Hello,
    I use the concurrent data store in my python application and I'm noticing that
    the system memory usage increases and is never freed when the application
    is done. The following python code is a unit test that simulates my app's workload:
    ##########BEGIN PYTHON CODE##################
    """TestCases for multi-threaded access to a DB.
    #import gc
    #gc.enable()
    #gc.set_debug(gc.DEBUG_LEAK)
    import os
    import sys
    import time
    import errno
    import shutil
    import tempfile
    from pprint import pprint
    from random import random
    try:
        True, False
    except NameError:
        True = 1
        False = 0
    DASH = '-'
    try:
        from threading import Thread, currentThread
        have_threads = True
    except ImportError:
        have_threads = False
    import unittest
    verbose = 1
    from bsddb import db, dbutils
    class BaseThreadedTestCase(unittest.TestCase):
        dbtype       = db.DB_UNKNOWN  # must be set in derived class
        dbopenflags  = 0
        dbsetflags   = 0
        envflags     = 0
        def setUp(self):
            if verbose:
                dbutils._deadlock_VerboseFile = sys.stdout
            homeDir = os.path.join(os.path.dirname(sys.argv[0]), 'db_home')
            self.homeDir = homeDir
            try:
                os.mkdir(homeDir)
            except OSError, e:
                if e.errno <> errno.EEXIST: raise
            self.env = db.DBEnv()
            self.setEnvOpts()
            self.env.open(homeDir, self.envflags | db.DB_CREATE)
            self.filename = self.__class__.__name__ + '.db'
            self.d = db.DB(self.env)
            if self.dbsetflags:
                self.d.set_flags(self.dbsetflags)
            self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE)
        def tearDown(self):
            self.d.close()
            self.env.close()
            del self.d
            del self.env
            #shutil.rmtree(self.homeDir)
            #print "\nGARBAGE:"
            #gc.collect()
            #print "\nGARBAGE OBJECTS:"
            #for x in gc.garbage:
            #    s = str(x)
            #    print type(x),"\n ", s
        def setEnvOpts(self):
            pass
        def makeData(self, key):
            return DASH.join([key] * 5)
    class ConcurrentDataStoreBase(BaseThreadedTestCase):
        dbopenflags = db.DB_THREAD
        envflags    = db.DB_THREAD | db.DB_INIT_CDB | db.DB_INIT_MPOOL
        readers     = 0 # derived class should set
        writers     = 0
        records     = 1000
        def test01_1WriterMultiReaders(self):
            if verbose:
                print '\n', '-=' * 30
                print "Running %s.test01_1WriterMultiReaders..." % \
                      self.__class__.__name__
            threads = []
            for x in range(self.writers):
                wt = Thread(target = self.writerThread,
                            args = (self.d, self.records, x),
                            name = 'writer %d' % x,
                            )#verbose = verbose)
                threads.append(wt)
            for x in range(self.readers):
                rt = Thread(target = self.readerThread,
                            args = (self.d, x),
                            name = 'reader %d' % x,
                            )#verbose = verbose)
                threads.append(rt)
            for t in threads:
                t.start()
            for t in threads:
                t.join()
        def writerThread(self, d, howMany, writerNum):
            #time.sleep(0.01 * writerNum + 0.01)
            name = currentThread().getName()
            start = howMany * writerNum
            stop = howMany * (writerNum + 1) - 1
            if verbose:
                print "%s: creating records %d - %d" % (name, start, stop)
            for x in range(start, stop):
                key = '%04d' % x
                #dbutils.DeadlockWrap(d.put, key, self.makeData(key),
                #                     max_retries=12)
                d.put(key, self.makeData(key))
                if verbose and x % 100 == 0:
                    print "%s: records %d - %d finished" % (name, start, x)
            if verbose:
                print "%s: finished creating records" % name
    ##         # Each write-cursor will be exclusive, the only one that can update the DB...
    ##         if verbose: print "%s: deleting a few records" % name
    ##         c = d.cursor(flags = db.DB_WRITECURSOR)
    ##         for x in range(10):
    ##             key = int(random() * howMany) + start
    ##             key = '%04d' % key
    ##             if d.has_key(key):
    ##                 c.set(key)
    ##                 c.delete()
    ##         c.close()
            if verbose:
                print "%s: thread finished" % name
            d.sync()
            del d
        def readerThread(self, d, readerNum):
            time.sleep(0.01 * readerNum)
            name = currentThread().getName()
            for loop in range(5):
                c = d.cursor()
                count = 0
                rec = c.first()
                while rec:
                    count += 1
                    key, data = rec
                    self.assertEqual(self.makeData(key), data)
                    rec = c.next()
                if verbose:
                    print "%s: found %d records" % (name, count)
                c.close()
                time.sleep(0.05)
            if verbose:
                print "%s: thread finished" % name
            del d
        def setEnvOpts(self):
            #print "Setting cache size:", self.env.set_cachesize(0, 2000)
            pass
    class BTreeConcurrentDataStore(ConcurrentDataStoreBase):
        dbtype  = db.DB_BTREE
        writers = 10
        readers = 100
        records = 100000
    def test_suite():
        suite = unittest.TestSuite()
        if have_threads:
            suite.addTest(unittest.makeSuite(BTreeConcurrentDataStore))
        else:
            print "Threads not available, skipping thread tests."
        return suite
    if __name__ == '__main__':
        unittest.main(defaultTest='test_suite')
        #print "\nGARBAGE:"
        #gc.collect()
        #print "\nGARBAGE OBJECTS:"
        #for x in gc.garbage:
        #    s = str(x)
        #    print type(x),"\n ", s
    ##########END PYTHON CODE##################
    Using the linux command 'top' prior to and during the execution of the python script above, I noticed that a considerable amount of memory is used up and never reclaimed when it ends. If you delete db_home, however, the memory is reclaimed.
    Am I conjuring up the bsddb concurrent db store incorrectly somehow?
    I'm using python 2.5.1 and the builtin bsddb module.
    Thanks,
    Gerald

    I think I am seeing what you are reporting, but I need to check further into the reason for this.
    Running your program and monitoring with top/vmstat before/after the test, and after deleting db_home, gives:
    BEFORE RUNNING PYTHON TEST:
    ++++++++++++++++++++++++++
    top - 17:00:17 up 7:00, 6 users, load average: 0.07, 0.38, 0.45
    Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
    Cpu(s): 3.6% us, 0.7% sy, 0.0% ni, 95.5% id, 0.0% wa, 0.2% hi, 0.0% si
    Mem: 1545196k total, 1407100k used, 138096k free, 20700k buffers
    Swap: 2040212k total, 168k used, 2040044k free, 935936k cached
    [swhitman@swhitman-lnx python]$ vmstat
    procs -----------memory---------- ---swap-- -----io---- system ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 0 160 247096 20860 833604 0 0 31 22 527 675 7 1 91 1
    AFTER RUNNING PYTHON TEST:
    ++++++++++++++++++++++++++
    top - 17:02:00 up 7:02, 6 users, load average: 2.58, 1.36, 0.80
    Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
    Cpu(s): 3.7% us, 0.5% sy, 0.0% ni, 95.8% id, 0.0% wa, 0.0% hi, 0.0% si
    Mem: 1545196k total, 1508156k used, 37040k free, 20948k buffers
    Swap: 2040212k total, 168k used, 2040044k free, 1035788k cached
    [swhitman@swhitman-lnx python]$ vmstat
    procs -----------memory---------- ---swap-- -----io---- system ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    0 0 160 143312 21120 935784 0 0 31 25 527 719 7 1 91 1
    AFTER RUNNING PYTHON TEST & DB_HOME IS DELETED:
    ++++++++++++++++++++++++++++++++++++++++++++++
    But I think DB_ENV->close
    top - 17:02:48 up 7:02, 6 users, load average: 1.22, 1.17, 0.76
    Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
    Cpu(s): 8.8% us, 0.5% sy, 0.0% ni, 90.5% id, 0.0% wa, 0.2% hi, 0.0% si
    Mem: 1545196k total, 1405236k used, 139960k free, 21044k buffers
    Swap: 2040212k total, 168k used, 2040044k free, 934032k cached
    [swhitman@swhitman-lnx python]$ vmstat
    procs -----------memory---------- ---swap-- -----io---- system ----cpu----
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 0 160 246208 21132 833852 0 0 31 25 527 719 7 1 91 1
    So the top/vmstat memory usage summary is:
                          before test       after test        after rm db_home/*
    Top mem used          1407100k          1508156k          1405236k
    vmstat free/cache     247096/833604     143312/935784     246208/833852
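    As an aside (an assumption about the cause, not a confirmed fix): the CDS environment's mpool cache lives in shared region files under db_home, so its pages show up as used/cached memory until those files are removed, which would match the numbers above. Capping the cache and removing the environment when a test finishes keeps that bounded. A sketch of drop-in replacements for two methods of BaseThreadedTestCase:
    from bsddb import db

    def setEnvOpts(self):
        # must run before env.open(); setUp() already calls it at the right time
        self.env.set_cachesize(0, 8 * 1024 * 1024, 1)   # cap the mpool at ~8 MB

    def tearDown(self):
        self.d.close()
        self.env.close()
        # remove the __db.* region files so the OS can drop the cached pages
        db.DBEnv().remove(self.homeDir)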

  • Cache data in Mobile Application

    Hi,
    I am in the process of evaluating Mobile for field sales representatives, but I am encountering a problem: coverage is not good in the areas where the field users will be using the system. What I need to do is cache data and transmit it whenever the device gets a signal. Is it possible to do this programmatically, or is there any other way we can achieve it?
    Regards
    Hardik Mehta

    Hi Hardik,
    According to my knowledge the simplest answer to your query is a Smart Sync Application.
    Such applications are designed for exactly this purpose, i.e. an offline environment / an environment which is not always connected.
    A Smart Sync application enables you to occasionally sync and save backend data onto your client (data relevant to your application). Once you have done this you can remain disconnected and use this data in all the ways you need: viewing it time and again, making changes to it, or even deleting some of the records. You may even create new records/data.
    The Smart Sync application will store all these changes in the local database of the device. The next time the user syncs the device, the device communicates all these changes to the backend via the middleware and thereby updates the backend database.
    Also, if changes are made to the backend data while the user is disconnected, the MI client will download those changes and apply them to its local database, so your local database also remains up to date.
    So if you want, you can call this caching the data and transmitting it to the backend whenever you get connectivity. Technically it is done through the Java persistence APIs.
    I hope this has solved your query to a certain extent. If this is not what you meant/desired, kindly revert back.
    Cheers.

  • Prevent multiple users from updating coherence cache data at the same time

    Hi,
    I have a web application with a huge amount of data; instead of storing the data in the HTTP session we are storing it in Coherence. Multiple groups of users can use or update the same data in Coherence. There are hundreds of groups with several thousand users in each group. How do I prevent multiple users from updating the cached data at the same time? Here is the scenario: a user logs in, checks whether the data is in Coherence, and if it is there gets it from Coherence and displays it on the UI; if not, the application gets it from the backend (i.e. mainframe systems) and stores it in Coherence before displaying it on the screen. Some other user can perform the same function at the same time and, not finding the data in Coherence, can fetch it from the backend and start saving it in Coherence while the first user is still in the process of saving or updating. How do I prevent this in Coherence? We have to use the same key when storing in Coherence because the same data is shared across users and we don't want to keep multiple copies of the same data. Is there something Coherence provides out of the box, or what is the best approach to handle this scenario?
    Thanks

    Hi,
    actually I believe that if we are speaking about multiple users, each with their own HttpSession, then when two users access the same session attribute in their own sessions, the cache keys actually used will not be the same.
    On the other hand, this is probably not what you would really like, you would possibly like to share that data among sessions.
    You should probably consider using either read-through caching with the CacheLoader implementor doing the expensive data retrieval (if the data to be cached can be obtained outside of an HTTP container), or side caching using Coherence locks or entry processors for concurrency control on the data retrieval operations for the same key (take care of retries in this case).
    Best regards,
    Robert
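    To make the "only one caller loads a given key" idea concrete, here is the pattern in plain Python rather than Coherence API code (in Coherence the per-key lock role is played by cache.lock()/unlock() or an entry processor; this sketch only illustrates the control flow):
    import threading

    _cache = {}
    _key_locks = {}
    _registry_lock = threading.Lock()

    def _lock_for(key):
        # one lock per cache key, created lazily
        with _registry_lock:
            return _key_locks.setdefault(key, threading.Lock())

    def get_or_load(key, load_from_backend):
        value = _cache.get(key)
        if value is not None:
            return value
        with _lock_for(key):            # serialize loaders for this key only
            value = _cache.get(key)     # re-check: another thread may have loaded it
            if value is None:
                value = load_from_backend(key)   # expensive mainframe call happens once
                _cache[key] = value
            return value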

  • How can you manage data usage when cellular data is off, but you are using WiFi where your WiFi provider charges for data use?

    I spend quite a few months each year in Canada where I use a Telus Cellular Hub device which is also my WiFi Router.  My iPhone 5 has Cellular Data set to "Off" which insures I won't be charged via my Verizon Wireless Service Provider for charges while in Canada.  Trouble is Telus, Rogers and all the Canadian Internet Providers charge for all Data going through their Systems.  Again, My Cellular Data on the iPhone 5 is turned off, but I use WiFi for such things as checking the Weather, or FaceBook, or searching the Web. 
    I believe that things may be happening in the background from various Apps that use quite a lot of Data.  It could be that iCloud is part of the issue, with things being backed up automatically.  It also could be that Apps like AP or other News Apps are sending large amounts of Data in the photos associated with their news stories, etc.  I typically turn off the App Store "Updates" so that they don't load automatically.  Facebook now also works so that posted videos play when you are just scrolling through the News Feed.
    I have been trying to find an article somewhere which focuses on this specific problem, but unfortunately many if not most articles are about folks worried about using Cellular Data while in a WiFi environment when their Cellular Data is turned on.
    Does anyone know of a fairly comprehensive article about what settings on which Apps might reduce the Data Usage when Cellular Data is turned "Off" but you are going through a Service Provider who charges for all Data Accessed even when you are using WiFi?

    Thanks for your comments, it is clear you understand my plight.  The trouble is fully understanding which Apps and App features are transferring data in the background any time you happen to turn WiFi on (even if you have had it off most of the day or night).  Obviously things like Location Services can constantly be sending and receiving data from my iPhone without any action on my part.  Also, if you have things like photo backup to iCloud, then each time you take a photo you are sending a copy out.  App Updates, if set to Automatic, can also add up to quite a bit of Data.  Reading the News on AP or scrolling through the FB News Feed is actually adding up to a lot of Data.  There could be other culprits that I am not even thinking of.  I don't want to turn Apps like Find My iPhone off or turn iCloud off and lose the value of such a program entirely.  Again thanks for your quick response.

  • How to select from multiple tables which reside on different data stores ?

    Suppose I have two data stores in one TimesTen instance:
    1) Datastore A:
    table1
    2) Datastore B:
    table2
    I want to make a query like this:
    select ... from table1, table2 where table1.colA = table2.colB
    Can I ? If not, is there a workaround ?
    BTW, because of business requirements we have to use two or more different data stores, so we cannot put table1/table2 in the same data store.
    Thanks very much.

    You can query multiple TimesTen databases, but your original question was about joining tables from two databases, which is not supported.
    Using Cache Connect to Oracle to query an Oracle database is not distributed. It's still one single Oracle database you are querying. You cannot join a table in the TimesTen database with a table in the Oracle database, this is not allowed.
    If you are willing to share your business requirements, we can take a look and see what solution might work for you. Would you like to discuss this offline?
    Susan
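    Not something proposed in the thread, but the usual workaround when one statement cannot span two data stores is to open a connection per DSN and join on the application side. The pyodbc usage and the DSN/column names below are illustrative assumptions only:
    import pyodbc

    conn_a = pyodbc.connect("DSN=datastoreA")   # hypothetical DSN for data store A
    conn_b = pyodbc.connect("DSN=datastoreB")   # hypothetical DSN for data store B

    # Build a lookup from the smaller table, then probe it while scanning the other.
    lookup = {}
    for col_b, payload in conn_b.execute("SELECT colB, other_col FROM table2"):
        lookup.setdefault(col_b, []).append(payload)

    joined = []
    for col_a, name in conn_a.execute("SELECT colA, name FROM table1"):
        for payload in lookup.get(col_a, []):
            joined.append((col_a, name, payload))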

  • Approach when the used Live cache data area crosses the threshold

    Hi,
    Could any of you please let me know the detailed approach when the used Live cache data area crosses the threshold in APO system?
    The approach I have as of now is :
    1) When it is identified that data cache usage is nearly 100%, check the hit rate for OMS data in the data cache in LC10, because generally the hit rate for OMS data in the data cache should be at least 99.8% and data cache usage should be well below 100%.
    2) To monitor unsuccessful accesses to the data cache, choose Refresh and compare the values now and before; unsuccessful accesses result in physical disk I/O and should generally be avoided.
    3) The number of OMS data pages (OMS Data) should be much higher than the number of OMS history pages (History/Undo); a ratio of 4:1 is desirable. If the OMS history is nearly the same size as the OMS data, use Problem Analysis -> Performance -> OMS versions to find out whether named consistent views (versions) have been open for a long time. The maximum age should be 8 hours.
    4) If consumption of the OMS heap and data cache is large, one reason may be a long-running transaction simulation that accumulates heap memory and prevents the garbage collector from releasing old object images.
    5) To display existing transactional simulations in LC10, use Problem Analysis -> Performance -> OMS versions and SM04 to find the user of the corresponding transaction; it may be necessary to cancel the session after contacting the user if the version has been open for a long time.
    Please help me by providing additional information on the issue.
    Thanks,
    Varada Reddy.

    Hi Mayank, sorry, one basic question - are you using some selection criteria during extraction? If yes, then try extraction without the selection criteria.
    If you maintain selection based on, let's say, material, you need to use the right number of zeros as prefix (based on how you have defined the characteristic for material) otherwise no records would be selected.
    Is this relevant in your case?
    One more option is to try to repair the datasource. In the planning area, go to extraction tools, select the datasource, and then choose the option to repair the datasource.
    If you need more info, pls let me know.
    - Pawan

  • Caching data with Entity Bean

    Hello,
    I am performing some tests concerning the benefit of caching data with Entity Beans.
    Here is the case :
    I have an Entity Bean with a business method getName() to retrieve a name field in the EJB.
    I understand that in order to cache data, I have to set the NOT_SUPPORTED transaction attribute for this method. That way, when the method is called, ejbLoad() is not invoked and the data is retrieved from the ready EJB instance (and not from the database).
    Is this true, and is it the right way to use the cache mechanism?
    Now, considering that this instance is the only one in the ready stage and that it is never pooled (it seems so!), what about a modification of the database from another tier (or from another EB instance)? The Entity Bean is not able to see this modification since it does not call the ejbLoad method.
    Is there a way to force an Entity Bean to periodically reload its data from the data store when activated?
    Thanks in advance,
    Thierry

    No, this is the wrong way of doing what you want. Most application servers provide various configuration settings for this, e.g. the caching mechanism, intervals for when to call ejbLoad and ejbStore, and read-only beans. You have to check the documentation for this.
    --Ashwani

  • Accessing the planning cache data (In IP)

    Hello...
    I need help in retrieving the live cache data.
    While we are using the transactional cube, every time we need to switch into planning mode, and while loading we need to switch to load mode.
    Is there any other way to access the IP cache data?
    Appreciate any suggestions....
    Regards,
    Pari.

    Hi Parimala,
    Custom Planning Functions can be created using the transaction RSPLF1. Give a technical name and click on create button. It will ask for a class name to be attached with the function type. The class should be created in SE24 transaction. The implementation of the class will be in object oriented ABAP, where you will write the implementation logic in class methods.
    The custom planning function so created is available for usage when you create a planning function attached with an aggregation level. You can see it in the drop-down list of all the function types.
    You can refer to the standard function type for delete (0RSPL_DELETE), which will give you an idea of how the class is implemented.
    The link provided by Marc above is helpful :
    http://help.sap.com/saphelp_nw70/helpdata/en/43/332530c1b64866e10000000a1553f6/frameset.htm
    Also, go through this how to guide:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c0ac03a4-e46e-2910-f69d-ec5fbb050cbf
    Hope this helps you.
    Regards,
    Srinivas Kamireddy.

  • Error during retrieval of the logon data store

    Hi guys,
    well, after checking and changing the RFC destination we are still not able to resend the stuck data in the Monitor. We still get the error "Error during retrieval of the logon data store".
    What can we do to solve the issue?!
    br

    Hi, well we did this and got:
    if_http_client send http_communication_failure
    Therefore I checked SM59 INTEGRATION_DIRECTORY_HMI and the user password was invalid?! How come?!
    We changed it and did a new cache refresh - but we are still not able to resend the messages!
    br
    Edit:
    in Monitor we get this error:
    +  <SAP:Code area="IDOC_ADAPTER">ATTRIBUTE_IDOC_RUNTIME</SAP:Code>
      <SAP:P1>FM NLS_GET_LANGU_CP_TAB: Could not determine code page with Q01CLNT100 System-dependent data for entry /RFC/Q01CLNT100 ch FM NLS_GET_LANGU_CP_TAB</SAP:P1>
      <SAP:P2 />  +
    Seems to me like role problems we once solved!

  • Is there an app for monitoring CELLPHONE usage? (Not data, calls, SMS, etc.)

    Hey! Is there an app for monitoring cellphone usage (not data, but calls, SMS, etc.)? I know there's one for data, but I want to know if there's one that keeps track of calls, SMS, etc.
    Thanks!

    Unless O2 has an app available via the iTunes app store that provides for this as AT&T does for the iPhone sold in the U.S., I don't believe so.
    The iPhone includes a usage indicator for Call Time and Cellular Network Data usage, which can be reset on a monthly basis based on your billing cycle, but there is no usage indicator for the number of messages sent or received.

  • Error 25109: The install program could not create the config data store. Is this a known issue with a solution?

    Hi, I can't install the Management Console.
    I keep getting:
    Error 25109: The installation program could not create the configuration data store. Please see the installation log for more info.
    Has someone come across this problem before?
    I am using
    MS Server 2008 Std x86
    SQL Server 2005 SP2 Std x86
    MDOP 2008 R2
    APP-V 4.5
    Please can someone help.
    Thanks

    I'm running into the same problem myself, I have an almost identical setup.
    Trying to do a clean install of App-V Management Server 4.5 from the MDOP 2008 R2 CD.
    Server OS: Windows Server 2008 Enterprise SP1 (Has WDS and IIS installed (IIS 6 Management Compatibility too), as well as .Net 3.5 SP1 Framework)
    Database: Microsoft SQL Server 2005 Enterprise (Version 9.00.3068.00) (Note: SQL server is running on a different server than the server App-V is being installed on.)
    Installer gets to the point where it starts creating the database than dies, reporting a 25109 error.
    Checked out the logfile the installer created; here's a snip of the last few lines before it starts rolling back the install:
    [2008-12-23 15:58:35] (2284:3084) SQL state: ``01000'', Native: 0, Text: ``[Microsoft][ODBC SQL Server Driver][SQL Server]<<< CREATED TRIGGER dbo.TR_U_SYSTEM_OPTIONS >>>''. 
    [2008-12-23 15:58:36] (2284:3084) SQL state: ``42000'', Native: 18058, Text: ``[Microsoft][ODBC SQL Server Driver][SQL Server]Failed to load format string for error 16873, language id 1033.  Operating system error: 122(The data area passed to a system call is too small.). Check that sqlevn70.rll is installed in C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\Resources\1033, where 1033 is the language ID of US English, or the appropriate folder for the locale in use. Also check memory usage.''. 
    [2008-12-23 15:58:36] (2284:3084) SQL state: ``01000'', Native: 3621, Text: ``[Microsoft][ODBC SQL Server Driver][SQL Server]The statement has been terminated.''. 
    [2008-12-23 15:58:36] (2284:3084) ::SQLExecDirectW error 0xffffffff. 
    [2008-12-23 15:58:36] (2284:3084) Failed to execute SQL ``/* ------------------------------------------------------------------------- *\ 
        Copyright (c) Microsoft Corporation.  All rights reserved. 
        Description: 
            This script creates all user-defined messages. 
            The types of messages allowed (and their severities) are: 
            FATAL   - SQL Server Severity = 17; ours = 1 
            ERROR   - SQL Server Severity = 16; ours = 2 
            WARNING - SQL Server Severity = N/A; ours = 3 
    Can anyone help me shed some light on what's going on here?

  • 925: Cannot create data store semaphores (Invalid argument)

    I'm trying to connect to TimesTen, but I'm getting this error.
    I have looked at other similar discussions, but so far I could not solve the problem.
    [timesten@atd info]$ ttisql "dsn=tpch"
    Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "dsn=tpch";
      925: Cannot create data store semaphores (Invalid argument)
      703: Subdaemon connect to data store failed with error TT925
    The command failed.
    Done.
    Here is my information.
    [tpch]
    Driver=/home_sata/timesten/TimesTen/tt1122/lib/libtten.so
    DataStore=/home_sata/timesten/TimesTen/tt1122/tpch/tpch
    LogDir=/home_sata/timesten/TimesTen/tt1122/tpch/logs
    PermSize=1024
    TempSize=512
    PLSQL=1
    DatabaseCharacterSet=US7ASCII
    kernel.sem = 400 32000 512 5029
    kernel.shmmax=68719476736
    kernel.shmall=16777216
    [timesten@atd info]$ cat /proc/meminfo
    MemTotal:       297699764 kB
    MemFree:        96726036 kB
    Buffers:          582996 kB
    Cached:         155831636 kB
    SwapCached:            0 kB
    Active:         115729396 kB
    Inactive:       78767560 kB
    Active(anon):   44040440 kB
    Inactive(anon):  8531544 kB
    Active(file):   71688956 kB
    Inactive(file): 70236016 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:      112639992 kB
    SwapFree:       112639992 kB
    Dirty:               160 kB
    Writeback:             0 kB
    AnonPages:      38082348 kB
    Mapped:         15352480 kB
    Shmem:          14489676 kB
    Slab:            3993152 kB
    SReclaimable:    3826768 kB
    SUnreclaim:       166384 kB
    KernelStack:       18344 kB
    PageTables:       245352 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    261457104 kB
    Committed_AS:   74033552 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:      903384 kB
    VmallocChunk:   34205870424 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:  35538944 kB
    HugePages_Total:      32
    HugePages_Free:       32
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:        6384 kB
    DirectMap2M:     2080768 kB
    DirectMap1G:    299892736 kB
    ------ Shared Memory Limits --------
    max number of segments = 4096
    max seg size (kbytes) = 67108864
    max total shared memory (kbytes) = 67108864
    min seg size (bytes) = 1

    The error message suggests that the system is running out of semaphores although plenty seem to be configured:
    kernel.sem = 400 32000 512 5029
    Could it be that there are other programs on this machine as this user using semaphores?
    Have you made changes to the kernel parameters and haven't made them permanent with
    # /sbin/sysctl -p
    or a re-boot?
    If you've done a # /sbin/sysctl -p have you recycled the TT daemon
    $ ttDaemonAdmin -restart
    so that TT picks up the new settings?
    Tim
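    One quick check along the lines Tim suggests (an assumption about the usual cause, not a confirmed diagnosis): compare the limits the running kernel is actually using with the kernel.sem line above, since edits to /etc/sysctl.conf only take effect after sysctl -p or a reboot, and the TimesTen daemon only sees them after a restart:
    # Print the live System V semaphore limits from the running kernel.
    # The order in /proc/sys/kernel/sem is SEMMSL SEMMNS SEMOPM SEMMNI.
    def show_semaphore_limits():
        with open("/proc/sys/kernel/sem") as f:
            semmsl, semmns, semopm, semmni = f.read().split()
        print("SEMMSL (max semaphores per set)    :", semmsl)
        print("SEMMNS (max semaphores system-wide):", semmns)
        print("SEMOPM (max ops per semop call)    :", semopm)
        print("SEMMNI (max semaphore sets)        :", semmni)

    if __name__ == "__main__":
        show_semaphore_limits()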
