Mobile App Best Practice When Using SQLite Database

Hello,
I have a mobile app that has several views.
Each view calls a different method of a custom Database class that basically returns an array from a synchronous execute call.
So, each view has a creationComplete handler in which I have something like this:
var db:Database = new Database();
var connectResponse:Object = db.connect('path-to-database');
if (connectResponse.allOK) { // allOK is true if the connection was successful
   // Do stuff with the data
} else {
   // Present an error notice
}
However, this seems redundant. Is it OK to do this once (connect to the database) in the main application file?
Then do something like FlexGlobals.topLevelApplication.db?
And generally speaking, can constants and other things that I need throughout the app be placed in the main application? I mean as a best practice, not whether it is technically possible (it is).
Thank you.
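
For what it's worth, here is a minimal sketch of the "connect once, share everywhere" idea, assuming AIR's SQLConnection wrapped in a hypothetical singleton Database class (the class shape and method names are illustrative, not from the original post):

package
{
    import flash.data.SQLConnection;
    import flash.data.SQLResult;
    import flash.data.SQLStatement;
    import flash.filesystem.File;

    // Hypothetical singleton wrapper around a single SQLite connection.
    public class Database
    {
        private static var _instance:Database;
        private var conn:SQLConnection = new SQLConnection();

        public static function get instance():Database
        {
            if (!_instance)
                _instance = new Database();
            return _instance;
        }

        // Open the database file once; later calls are no-ops.
        public function connect(path:String):void
        {
            if (!conn.connected)
                conn.open(File.applicationStorageDirectory.resolvePath(path));
        }

        // Run a query synchronously and return the result rows (null if there are none).
        public function select(sql:String):Array
        {
            var stmt:SQLStatement = new SQLStatement();
            stmt.sqlConnection = conn;
            stmt.text = sql;
            stmt.execute();
            var result:SQLResult = stmt.getResult();
            return result ? result.data : null;
        }
    }
}

With something like this, the main application calls Database.instance.connect(...) once, and each view's creationComplete handler just calls Database.instance.select(...), with no need to reach through FlexGlobals.topLevelApplication.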

No, I only connect it once.
I figured several views would want to use it, so I made it a static singleton, as I only have one database.
I actually use synchronous calls, but there is a function that syncs with a remote MySQL database, hence the EventDispatcher.
...although I am thinking it might be better to go asynchronous, dispatch a custom event, and have the relevant views subscribe (a rough sketch of that follows).
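
A rough sketch of that asynchronous variant, assuming the same kind of hypothetical singleton but extended from EventDispatcher, with an illustrative "databaseReady" custom event that views subscribe to in their creationComplete handlers:

package
{
    import flash.data.SQLConnection;
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flash.events.SQLErrorEvent;
    import flash.events.SQLEvent;
    import flash.filesystem.File;

    // Async variant of the hypothetical singleton (names are illustrative).
    public class AsyncDatabase extends EventDispatcher
    {
        public static const DATABASE_READY:String = "databaseReady";
        public static const DATABASE_ERROR:String = "databaseError";

        private static var _instance:AsyncDatabase;
        private var conn:SQLConnection = new SQLConnection();

        public static function get instance():AsyncDatabase
        {
            if (!_instance)
                _instance = new AsyncDatabase();
            return _instance;
        }

        public function get connection():SQLConnection
        {
            return conn;
        }

        // Open asynchronously; views listen for the custom events below.
        public function connect(path:String):void
        {
            conn.addEventListener(SQLEvent.OPEN, onOpen);
            conn.addEventListener(SQLErrorEvent.ERROR, onError);
            conn.openAsync(File.applicationStorageDirectory.resolvePath(path));
        }

        private function onOpen(e:SQLEvent):void
        {
            dispatchEvent(new Event(DATABASE_READY));
        }

        private function onError(e:SQLErrorEvent):void
        {
            dispatchEvent(new Event(DATABASE_ERROR));
        }
    }
}

A view would then call AsyncDatabase.instance.addEventListener(AsyncDatabase.DATABASE_READY, onDbReady) in creationComplete and run its queries (using SQLStatement with result/error listeners) once the event fires, so no view blocks the UI waiting for the connection.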

Similar Messages

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with WebSphere is that "classpath" is a rather vague concept; we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid doing a lot of trial-and-error corrections to a file just to find that it's not actually being used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per

  • Need advice for best practice when using TopLink with external transactions

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions so that we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to comment on whether these recommendations are valid and in line with best practice. And for folks who have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor to read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (highlighted in blue in the original post), the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Best practice when using auto complete in view layer

    Hello
    I have a question regarding the best way to store/cache data when using an auto-complete function in the view layer.
    I know that there will be a lot of visitors who will use this function, and I don't want to kill the application server, so I need some advice.
    It's about 6000 words that should be searchable... My first thought was to create a singleton bean that stores the current items and that I will iterate over, but there is a lot of "waste" doing it that way.
    I would be very glad if anyone could advise me on the best way to do this, or whether there is any de facto standard for auto completion in the "view layer".
    Thanks!
    Best Regards/D_S

    I don't know what your design is, but here are some ideas:
    To me, autocomplete means you have some user-specific data that the user entered previously, such as their home address, and some generic data that is not specific to any particular user. I would store all of that in a database. For the user-specific data I would store the userID along with the data. Then, when populating a JSP page, I would call up just the data specific to that user plus the generic data from the database, and store it client-side as an array of some type in JavaScript. When the user clicks the autopopulate button, that button calls a JavaScript function that retrieves the data from the JavaScript array and populates the various text fields. All this is done client-side, so the form does not have to be redrawn.
    I also question why you have 6000 items. Normally, autocomplete has at most a few dozen items. If you still need 6000 items, I suggest adding a text field to the form to filter the data down to a manageable amount. For example, rather than fetching all names from a telephone book, put a text field on the form that allows the end user to enter a letter from a to z, such as 'b', and then only fetch last names from the phone book that begin with 'b'.

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (Grid Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel ops team. They are looking into purchasing the Siebel plugin for OEM. The question I have is: is there a general guide or best practice for managing the Siebel plugin? I understand that there will be agents installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel ops team do the install and have permissions set on the agent so that it can communicate with Grid Control? Also, they will want access to Grid Control to see the performance of their components; how do we limit their access to only the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains about installation
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization: a DBA team, which is responsible for installing the agent and owns Grid Control, and a Siebel ops team, which is responsible for monitoring the Siebel deployment.
    Following is my opinion based on the above assumption:
    -- DBA team installs agent as a monitoring user
    -- Siebel ops team provides execute permission to the above user for the server manager (srvrmgr.exe) utilities and read permission to all the files under the Siebel installation directory
    -- DBA team provisions a new admin for Siebel ops team and restrict the permissions for this user
    -- Siebel ops team configures the Siebel pack in Grid Control. [Discovery/Configuration etc]
    -- With the above set up Siebel ops team can view only the Siebel specific targets.
    Thanks

  • Best Practices when using MVMC to improve speed of vmdk download

    I've converted a number of machines already from ESXi 4.1 to Hyper-V 2012 successfully and learnt pretty much all the gotchas and potential issues to avoid along the way, but I'm still stuck with extremely slow downloading of the source vmdk files to the host I'm using for the MVMC. This is not so much an issue for my smaller VMs, but it will be once I hit the monster-sized ones.
    To give you an idea, on a 1GB network it took me 3 hours to download an 80GB VM. Monitoring the network card on the Hyper-V host I have MVMC running on shows that I'm at best getting 30-40Mbps download, and there are large patches where that falls right down to 20Kbps or thereabouts before ramping back up to the Mbps range again. There are no physical network issues that should be causing this as far as I can see.
    Is there some undocumented trick to get this working at an acceptable speed?
    Copying large files from a Windows guest VM on the ESX infrastructure to the Hyper-V host does not have this issue, and I get the full consistent bandwidth.

    It's VMware in general that is the reason why... Ever since I can remember (which was ESX 3.5), if you copy using the web service from the datastore, the speeds are terrible. Back in the 3.5 days the max speed was 10Mbps; FastSCP came around and threaded it to make it fast.
    Backup software like Veeam goes faster only if you have a backup proxy that has access to all datastores running in VMware. It will then utilize the backend VMware pipe and VM network to move the machines, which is much faster.
    That being said, in theory, if you nested a Hyper-V server in a VMware VM just for conversions, it would be fast, provided the VM server has access to all the datastores.
    Oh, and if you look at MAT and MVMC, the reason it's fast is that NetApp does some SAN offloading to get around VMware and make it array-based. So then it's crazy fast.
    As a side note, that was always one thing that has pissed me off about VMware.

  • Best practices when using collections in LR 2

    How are you using collections and collection sets? I realize that each photographer uses them differently, but I'm looking for some inspiration because I have used them only a little.
    In LR 1 I used collections a bit like virtual folders, but it doesn't work as naturally anymore in LR 2.
    In LR 1 it was like:
    Travel (1000 images, all Berlin images)
    Travel / Berlin trip (400 images, all Berlin trip images)
    Travel / Berlin trip / web (200 images)
    Travel / Berlin trip / print (100 images)
    In LR 2 it could be done like this, but this somehow feels unnatural.
    Travel (Collection Set)
    Travel / Berlin trip (CS)
    Travel / Berlin trip / All (collection, 400 images)
    Travel / Berlin trip / web (collection, 200 images)
    Travel / Berlin trip / print (collection, 100 images)
    Or is this kind of use stupid, because the same could be done with keywords and smart collections, and it would be more transferable?
    Also, how heavily are you using collections? I'm kind of on the fence now (should I start using them heavily or not), because I just lost all my collections when I had to rebuild my library from scratch due to weird library/database problems.

    Basically, I suggest not using collections to replicate the physical folder structure, but rather to collect images independent of physical storage to serve a particular purpose. The folder structure is already available as a selection means.
    Collections are used to keep a user-defined selection of images as a persistent collection that can be easily accessed.
    Smart collections are based on criteria that are understood by the application and automatically kept up to date, again as a persistent collection for easy access. If this is based on a simple criterion, it can also, perhaps even more easily, be done by using keywords. If however it is a set of criteria with AND/OR combinations, or includes any other metadata field, the smart collection is the better way to do it. So keywords and collections in Lightroom are complementary to each other.
    I use (smart) collections extensively; check my website www.fromklicktokick.com, where I have published a paper and an essay on the use of (smart) collections to add controls to the workflow.
    Jan R.

  • Best practices for creating Discoverer database connections - public vs. private

    I have enabled SSO for Discoverer. So when you browse to http://host:port/discoverer/viewer you get prompted for your SSO
    username/password. I have enabled users to create their own private
    connections. I log in as portal and created a private connection. I then from
    Oracle Portal create a portlet and add a discoverer worksheet using the private
    connection that I created as the portal user. This works fine... users access
    the portal and they can see the worksheet. When they click the analyze link, the
    users are prompted to enter a password for the private connection. The
    following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or
    because the public connection password was invalid. Please enter the correct
    password now to continue.
    I originally created a public connection...and then follow the same steps from Oracle portal to create the portlet and display the
    worksheet. Worksheet is displayed properly from Portal, when users click the
    analyze link they are taken to Discoverer Viewer without having to enter a
    password. The problem with this is that when a user browses to
    http://host:port/discoverer/viewer they enter their SSO information and then
    any user with an SSO account can see the public connection...very insecure!
    When private connections are used, no connection information is displayed to
    SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database
    Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is what are the best practices for creating Discoverer Database
    Connections.
    Is there a way to create a public connection, but not display it in at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal Page Group. I then want to
    display portlets with Discoverer worksheets. Certain worksheets I want to have
    the ability to display the analyze link. When the SSO user clicks on this they
    will be taken to Discoverer Viewer and prompted for no logon information. All
    SSO users will see the same data...there is no need to restrict access based on
    SSO username...1 database user will be set up in either the public or private
    connection.

    You can make it happen by creating a private connection for the 40 users via a capi script, and when creating the portlet select the 2nd option in the Users Logged In section. With this, the portlet uses their own private connection every time a user logs in,
    so it won't ask for a password.
    Another thing: there is an option for entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks
    kiran

  • Best practice to create a database

    Please can you send me the best practice for creating a database which will be used in the future for a data warehouse?

    Hi,
    If it is only for data warehousing purposes, you can create the database by using DBCA; if it is only for transactional purposes, you can create it manually, i.e. create the control file, datafiles and so on.
    Thanks and Regards
    Venkat.K.Raju
    Mindlance,
    Oracle Applications Team
    BANGLORE..66
    Mobile:+919986556688
    Land:080-41464843 Ext-4942
    [email protected]

  • Want the best practices docs on Oracle database Admin provided by Oracle

    Hi there,
    I looked everywhere and didn't find best-practices docs on Oracle database administration, especially on creating DBs in Oracle 10g. I can find bits and pieces here and there, but I didn't find it all incorporated in one place. Could somebody direct me to, or provide, such a document?

    OK, I'm not looking for the Oracle-provided manuals to find out the best practices in the DB field. I'm looking for the best practices when creating a DB in Oracle, for example. I can read all the Oracle manuals on Oracle's Tahiti documentation site, but I don't know the most common and practical things to do when creating a DB. If I had to define the best practices when creating a DB, here is the checklist:
    Here are the best practices when creating Oracle 10g DBs:
    1.     Create a meaningful database name
    2.     Create the directory structure following the Optimal Flexible Architecture (OFA)
    a.     Give redo logs the suffix .LOG
    b.     Give data files the suffix .DBF
    c.     Give control files the suffix .CTL
    3.     Enable password complexity
    4.     Enable ARCHIVELOG mode
    5.     Use Oracle Managed Files
    6.     Create separate tablespaces for data files and indexes
    7.     Put archive logs on multiple different drives
    8.     Multiplex the redo log files and control files
    etc.
    I want to see what other DB gurus consider the best practices in the field.

  • Best practice for use of spatial operators

    Hi All,
    I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results which are within a given geometry - for example, selecting parish boundaries within a county.
    Our boundary data is highly detailed, commonly containing upwards of 50,000 vertices for a county-sized polygon.
    I've currently been experimenting with queries such as:
    select
    from
    uk_ward a,
    uk_county b
    where
    UPPER(b.name) = 'DORSET COUNTY' and
    sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
    However the speed is unacceptable, especially as most of the implementations of the toolkit will be web based. The query above takes around a minute to return.
    Any comments or thoughts on the best practice for use of Oracle spatial in this way will be warmly welcomed. I'm looking for a solution which is as quick and efficient as possible.

    Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results from the execution plan run in SQL*Plus:
    Elapsed: 00:01:24.81
    Execution Plan
    Plan hash value: 598052089
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
    |* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
    | 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
    |* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
    Predicate Information (identified by operation id):
    2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
    4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+inside')='TRUE')
    Statistics
    20431 recursive calls
    60 db block gets
    22432 consistent gets
    1156 physical reads
    0 redo size
    2998369 bytes sent via SQL*Net to client
    1158 bytes received via SQL*Net from client
    17 SQL*Net roundtrips to/from client
    452 sorts (memory)
    0 sorts (disk)
    125 rows processed
    The wards table has 7545 rows, the county table has 207.
    We are currently on release 10.2.0.3.
    All I want to do with this is generate results which fall in a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon - I guess due to the number of vertices.
    Also looking through the forums now for tuning topics...

  • Best practices on using EVALUATE functions

    Hi experts,
    I want to know the best practices for using EVALUATE functions in OBIEE (calling Oracle user-defined functions).
    I found that if I use EVALUATE functions in Answers,
    OBIEE will construct a SQL statement behind the scenes and then execute it.
    Sometimes OBIEE constructs unexpected SQL and returns errors.
    So, is it better to use EVALUATE functions in logical columns?
    thanks

    EVALUATE('DB_Function(%1)' as returntype, {Comma separated Expression})
    Even when used in logical columns, it's going to fire the same SQL.

  • Best practice to use Tortoise SVN with LV

    Can anyone recommend the best practice for using and structuring a project with TSVN and LV? I have seen the JKI tool and have also read about some linkage issues when using TSVN with LV, as posted on the forum here. I suppose these linkage issues still exist? Other than Perforce, are there any suggestions for source control that integrates well with LV?
    TIA
    CLD,CTD

    We use Tortoise SVN with LV and it works very well. It's not integrated, in that I cannot check stuff in and out from within LV; I have to do that in Explorer. That's not a problem for me.
    SVN is a very good source and version control system regardless.
    One small issue with external handling is if you want to change an already used and active filename. In LV you can save to another filename and references will update, but of course SVN doesn't pick up on that automagically. There are two solutions to this:
    1. When you check in, you'll get 1 added and 1 deleted file, select both, r-click and "Repair move".
    2. After changing the filename in LV, change it back in explorer and r-click the file for a SVN rename and rename it to the new name.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Best practice when modifying SAP Standard Development Component

    Hello Experts,
    What is the best practice when modifying an SAP Standard Development Component (Java Web Dynpro)? I'm looking for the best method of making modifications to a SAP Standard DC so that my changes will be kept (or need low maintenance) after a new service package (or EHP) is applied.
    Thanks,
    Kevin

    Hi,
      'How to use Business Packages in Enterprise Portal 6.0' is available at this link.
    http://help.sap.com/bp_epv260/EP_EN/documentation/How-to_Guides/misc/Using_Business_Packages.pdf
    Check out for the best practices.
    Regards,
    Harini S
