Programmatically update a SQLCE database file

Hello All,
I have two programs that I want to share the same database, so that the tables containing duplicate information don't have to be replicated across two separate databases (one for each program).
When I deploy these applications and later make changes (adding tables, or adding fields to existing tables), I'd like to verify that those changes are present in my client's database and, if not, add them.
For example, I sell a program to a client. Later in the year, I make changes to the database in terms of tables and fields. If the client downloads the latest update to the program, and the program refers to a table or field that isn't in their current database, their program will error out.
Can I check a SQLCE database for items and add them if they are not there?
ADawn

You can query the INFORMATION_SCHEMA views for object existence and take corrective action
(i.e. run ALTER TABLE ... ADD etc.)
You can also use my scripting API from exportsqlce.codeplex.com if you are unsure about how to query the INFORMATION_SCHEMA views.
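For example, here is a minimal C# sketch of that check (the .sdf path, table name, and column name are illustrative placeholders, not taken from the original question):

using System;
using System.Data.SqlServerCe;

class SchemaUpgrader
{
    static void Main()
    {
        // Hypothetical database path - adjust for your deployment.
        using (var cn = new SqlCeConnection(@"Data Source=|DataDirectory|\MyApp.sdf"))
        {
            cn.Open();
            // Check INFORMATION_SCHEMA.COLUMNS for the column's existence.
            var check = new SqlCeCommand(
                "SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS " +
                "WHERE TABLE_NAME = 'Customers' AND COLUMN_NAME = 'Phone'", cn);
            if ((int)check.ExecuteScalar() == 0)
            {
                // Column is missing: take corrective action.
                var alter = new SqlCeCommand(
                    "ALTER TABLE Customers ADD Phone NVARCHAR(30) NULL", cn);
                alter.ExecuteNonQuery();
            }
        }
    }
}

The same pattern works for whole tables via INFORMATION_SCHEMA.TABLES (run CREATE TABLE instead of ALTER TABLE when the table is absent).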
Please mark as answer, if this was it. Visit my SQL Server Compact blog http://erikej.blogspot.com

Similar Messages

  • Why is Lightroom CONSTANTLY updating its database files?

    Whenever Lightroom is up and running, multitudes of files associated with the program appear to be constantly changing and being written (as in every few seconds) ... even though I'm doing absolutely nothing in Lightroom and there are no processes (i.e. preview generation etc.) in progress within Lightroom.
    Why do I say this, how do I know? ... I have a backup program that monitors my data and automatically backs up files "as they change". With any other program, that means when open files are closed.
    With Lightroom, the backup program is constantly backing up files associated with the Lightroom session, and that starts to slow down the responsiveness of Lightroom as the two programs start vying for control of the same files (... absolute pure speculation on my part!).
    It's no showstopper, just annoying ... and ... I'm curious to know what in the heck is going on behind the curtain. It's the only program I have that does this.
    Any ideas??
    (I'm running LR V3.4.1)
    Thanks --- Ken

    Since I can't edit the subject line to add " ... while the program is running and there is NO user activity"
    I'll steal a few key words from your response and rephrase my inquiry to:
    "Why is LR's database in a constant state of instability even when there is NO user activity within the program"
    Taking LR files out of a robust back-up scheme (controlled by a program whose sole function in life is to back things up!)  ... is a band-aid, not a solution IMO (no offense intended).
    Unless there is more to LR's built-in backup than is documented, i.e. does it support versioning (keeping x copies of older files), have the ability to restore files the user has deleted, write backups to multiple locations, etc.? Not that I can find; correct me if I'm wrong.
    LR is the only program I run (among dozens) that exhibits such spastic behavior. As I said in my orig post, no show-stopper, just annoying and illogical (at this point anyway).
    Ok ... maybe an answer can be had by asking this then:
    Is it just my installation of LR that does this AND/OR is this known and accepted LR program behavior?
    Inquiring obsessive minds need to know

  • Why did the latest update (31.1.2) delete database files for primary email account in profile?

    After restarting TB to apply the 31.1.2 update, all of the database files for my primary email account were apparently deleted. Restarting TB again got it to start syncing with the IMAP server again, but it has to download around 100,000 messages again because of this. Was this a design decision or is there a problem with the update itself? And why did it only affect my primary email account and leave the other 4 (much smaller ones) unscathed?

    1. Norton Internet Security 21.5.0.19
    2. Application Basics
    Name: Thunderbird
    Version: 31.1.2
    User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.1.2
    Profile Folder: Show Folder
    (Local drive)
    Application Build ID: 20140923203151
    Enabled Plugins: about:plugins
    Build Configuration: about:buildconfig
    Memory Use: about:memory
    Mail and News Accounts
    account1:
    INCOMING: account1, , (imap) imap.googlemail.com:993, SSL, passwordCleartext
    OUTGOING: smtp.googlemail.com:465, SSL, passwordCleartext, true
    account2:
    INCOMING: account2, , (none) Local Folders, plain, passwordCleartext
    account3:
    INCOMING: account3, , (imap) imap.googlemail.com:993, SSL, passwordCleartext
    OUTGOING: smtp.googlemail.com:465, SSL, passwordCleartext, true
    account4:
    INCOMING: account4, , (imap) imap.mail.yahoo.com:993, SSL, passwordCleartext
    OUTGOING: smtp.mail.yahoo.com:465, SSL, passwordCleartext, true
    account5:
    INCOMING: account5, , (imap) imap-mail.outlook.com:993, SSL, passwordCleartext
    OUTGOING: smtp-mail.outlook.com:587, alwaysSTARTTLS, passwordCleartext, true
    account6:
    INCOMING: account6, , (imap) mail.centurylink.net:143, alwaysSTARTTLS, passwordCleartext
    OUTGOING: smtp.centurylink.net:587, alwaysSTARTTLS, passwordCleartext, true
    Crash Reports
    http://crash-stats.mozilla.com/report/index/bp-ef9e8cc4-d975-4a65-8417-652332140926 (9/25/2014)
    Extensions
    Important Modified Preferences
    Name: Value
    accessibility.typeaheadfind.flashBar: 0
    browser.cache.disk.capacity: 358400
    browser.cache.disk.smart_size.first_run: false
    browser.cache.disk.smart_size.use_old_max: false
    browser.cache.disk.smart_size_cached_value: 358400
    extensions.lastAppVersion: 31.1.2
    font.internaluseonly.changed: false
    font.name.monospace.el: Consolas
    font.name.monospace.tr: Consolas
    font.name.monospace.x-baltic: Consolas
    font.name.monospace.x-central-euro: Consolas
    font.name.monospace.x-cyrillic: Consolas
    font.name.monospace.x-unicode: Consolas
    font.name.monospace.x-western: Consolas
    font.name.sans-serif.el: Calibri
    font.name.sans-serif.tr: Calibri
    font.name.sans-serif.x-baltic: Calibri
    font.name.sans-serif.x-central-euro: Calibri
    font.name.sans-serif.x-cyrillic: Calibri
    font.name.sans-serif.x-unicode: Calibri
    font.name.sans-serif.x-western: Calibri
    font.name.serif.el: Cambria
    font.name.serif.tr: Cambria
    font.name.serif.x-baltic: Cambria
    font.name.serif.x-central-euro: Cambria
    font.name.serif.x-cyrillic: Cambria
    font.name.serif.x-unicode: Cambria
    font.name.serif.x-western: Cambria
    font.size.fixed.el: 14
    font.size.fixed.tr: 14
    font.size.fixed.x-baltic: 14
    font.size.fixed.x-central-euro: 14
    font.size.fixed.x-cyrillic: 14
    font.size.fixed.x-unicode: 14
    font.size.fixed.x-western: 14
    font.size.variable.el: 17
    font.size.variable.tr: 17
    font.size.variable.x-baltic: 17
    font.size.variable.x-central-euro: 17
    font.size.variable.x-cyrillic: 17
    font.size.variable.x-unicode: 17
    font.size.variable.x-western: 17
    gfx.direct3d.last_used_feature_level_idx: 0
    mail.openMessageBehavior.version: 1
    mail.winsearch.firstRunDone: true
    mailnews.database.global.datastore.id: 0bb49554-fa41-4ecf-af58-17c0753468e
    network.cookie.prefsMigrated: true
    places.database.lastMaintenance: 1411474192
    places.history.expiration.transient_current_max_pages: 104858
    plugin.importedState: true
    print.printer_Brother_HL-2270DW_series.print_bgcolor: false
    print.printer_Brother_HL-2270DW_series.print_bgimages: false
    print.printer_Brother_HL-2270DW_series.print_colorspace:
    print.printer_Brother_HL-2270DW_series.print_command:
    print.printer_Brother_HL-2270DW_series.print_downloadfonts: false
    print.printer_Brother_HL-2270DW_series.print_duplex: 0
    print.printer_Brother_HL-2270DW_series.print_edge_bottom: 0
    print.printer_Brother_HL-2270DW_series.print_edge_left: 0
    print.printer_Brother_HL-2270DW_series.print_edge_right: 0
    print.printer_Brother_HL-2270DW_series.print_edge_top: 0
    print.printer_Brother_HL-2270DW_series.print_evenpages: true
    print.printer_Brother_HL-2270DW_series.print_footercenter:
    print.printer_Brother_HL-2270DW_series.print_footerleft: &PT
    print.printer_Brother_HL-2270DW_series.print_footerright: &D
    print.printer_Brother_HL-2270DW_series.print_headercenter:
    print.printer_Brother_HL-2270DW_series.print_headerleft:
    print.printer_Brother_HL-2270DW_series.print_headerright:
    print.printer_Brother_HL-2270DW_series.print_in_color: true
    print.printer_Brother_HL-2270DW_series.print_margin_bottom: 0.5
    print.printer_Brother_HL-2270DW_series.print_margin_left: 0.5
    print.printer_Brother_HL-2270DW_series.print_margin_right: 0.5
    print.printer_Brother_HL-2270DW_series.print_margin_top: 0.5
    print.printer_Brother_HL-2270DW_series.print_oddpages: true
    print.printer_Brother_HL-2270DW_series.print_orientation: 0
    print.printer_Brother_HL-2270DW_series.print_page_delay: 50
    print.printer_Brother_HL-2270DW_series.print_paper_data: 1
    print.printer_Brother_HL-2270DW_series.print_paper_height: 11.00
    print.printer_Brother_HL-2270DW_series.print_paper_name:
    print.printer_Brother_HL-2270DW_series.print_paper_size_type: 0
    print.printer_Brother_HL-2270DW_series.print_paper_size_unit: 0
    print.printer_Brother_HL-2270DW_series.print_paper_width: 8.50
    print.printer_Brother_HL-2270DW_series.print_plex_name:
    print.printer_Brother_HL-2270DW_series.print_resolution: 0
    print.printer_Brother_HL-2270DW_series.print_resolution_name:
    print.printer_Brother_HL-2270DW_series.print_reversed: false
    print.printer_Brother_HL-2270DW_series.print_scaling: 1.00
    print.printer_Brother_HL-2270DW_series.print_shrink_to_fit: true
    print.printer_Brother_HL-2270DW_series.print_to_file: false
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_bottom: 0
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_left: 0
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_right: 0
    print.printer_Brother_HL-2270DW_series.print_unwriteable_margin_top: 0
    Graphics
    Adapter Description: NVIDIA GeForce GTX 560
    Vendor ID: 0x10de
    Device ID: 0x1201
    Adapter RAM: 1024
    Adapter Drivers: nvd3dumx,nvwgf2umx,nvwgf2umx nvd3dum,nvwgf2um,nvwgf2um
    Driver Version: 9.18.13.4052
    Driver Date: 7-2-2014
    Direct2D Enabled: true
    DirectWrite Enabled: true (6.2.9200.16492)
    ClearType Parameters: ClearType parameters not found
    WebGL Renderer: false
    GPU Accelerated Windows: 2/2 Direct3D 10
    AzureCanvasBackend: direct2d
    AzureSkiaAccelerated: 0
    AzureFallbackCanvasBackend: cairo
    AzureContentBackend: direct2d
    JavaScript
    Incremental GC: 1
    Accessibility
    Activated: 0
    Prevent Accessibility: 0
    Library Versions (expected minimum version / version in use):
    NSPR: 4.10.6 / 4.10.6
    NSS: 3.16.2.1 Basic ECC / 3.16.2.1 Basic ECC
    NSS Util: 3.16.2.1 / 3.16.2.1
    NSS SSL: 3.16.2.1 Basic ECC / 3.16.2.1 Basic ECC
    NSS S/MIME: 3.16.2.1 Basic ECC / 3.16.2.1 Basic ECC
    Account5 is the one affected, and it is still attempting to sync. It also appears that my sync choices were altered without permission or notification. I always keep the Deleted folder and the POP subfolder from Hotmail accounts disabled, since they merely duplicate content in other folders. When re-checking the sync settings because it was taking so long, I found that the POP subfolder was selected for sync again.
    The crash report is from a later TB crash, the first one ever on this system.
    3. Windows 7 Ultimate Signature Edition, Cox Communications (shouldn't matter whatsoever), the affected account is Hotmail/Outlook.com, unaffected accounts: 2 GMail, 1 Yahoo, 1 CenturyLink

  • [iPhone SQLite] how to download database file updates

    hi everyone,
    I want to build an application that shows some info stored in a SQLite db, but I have an issue:
    the user should be able to periodically upgrade the database file, to get new entries etc. Can I do that with iPhone?
    thanks, bye
    s.g.

    You need the bytes: use FileReference.download(), and after the download you can save it to disk with FileReference.save(). You also need FP 10 at least, I think. Use the docs; they are less error prone than my memory :).
    C

  • Can I sync database files over to Mobile Me and then from MobileMe to iPod?

    Many programs will allow you to sync over Mobile Me but not over a USB connection. Can I sync the files on my Mac to Mobile Me and then sync them to my iPod when I am somewhere that has a WiFi connection? I am asking this because the place I work does not have WiFi, and I need to have a WiFi connection to sync database files (such as OmniFocus, Billings, and Bento).

    "I hope they fix this in a later update."
    I don't think that it is broken. I think that it is intentional (by design), possibly to make it more difficult to use the iPod touch to pirate/steal music/movies/etc.
    The iPod touch and iPhone have never had this feature.
    Who knows?
    You can leave feedback for Apple at:
    http://www.apple.com/feedback

  • Identify database file usage

    I would like to identify, over the last X seconds, how much data has been written to and read from each database file by SPID. Is this possible? I want to gather information on our busiest files and processes.
    I'd like to be able to do this in T-SQL rather than using Profiler.

    Hi Jameslester,
    We can capture information about reading and writing the transaction log file via DMVs or SQL Server Profiler when you perform any database activity. We can run a few DML scripts to check how data insertion, updating, or deletion is logged in the database log file. During this operation you can also track how a page is allocated or de-allocated; for example, how many times page splits occur, on which page, and during which operation. However, personally, we could not identify how much data has been written to and read from a database file in the last X seconds by using T-SQL.
    For more information about how to read the SQL Server database transaction log, you can review the following article:
    http://www.mssqltips.com/sqlservertip/3076/how-to-read-the-sql-server-database-transaction-log/
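    That said, per-file byte deltas (without the per-SPID breakdown originally asked for) can be approximated in T-SQL by sampling the cumulative counters in sys.dm_io_virtual_file_stats twice and diffing them. A minimal C# sketch of that sampling approach (the connection string and the 10-second window are placeholders, not from the original thread):

    using System;
    using System.Data.SqlClient;

    class FileIoDelta
    {
        // Snapshot per-file I/O counters, wait, snapshot again, report the deltas.
        const string Sql = @"
    SELECT database_id, file_id, num_of_bytes_read, num_of_bytes_written
    INTO #before
    FROM sys.dm_io_virtual_file_stats(NULL, NULL);

    WAITFOR DELAY '00:00:10';  -- the 'last X seconds' window

    SELECT s.database_id, s.file_id,
           s.num_of_bytes_read    - b.num_of_bytes_read    AS read_delta,
           s.num_of_bytes_written - b.num_of_bytes_written AS write_delta
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS s
    JOIN #before AS b ON b.database_id = s.database_id AND b.file_id = s.file_id
    ORDER BY write_delta DESC;";

        static void Main()
        {
            // Placeholder connection string - point it at your own server.
            using (var cn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            using (var cmd = new SqlCommand(Sql, cn) { CommandTimeout = 60 })
            {
                cn.Open();
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine("db {0} file {1}: {2} B read, {3} B written",
                            rdr.GetInt16(0), rdr.GetInt16(1), rdr.GetInt64(2), rdr.GetInt64(3));
            }
        }
    }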
    Regards,
    Sofiya Li
    TechNet Community Support

  • Is there a size limit on the iPod for the song database file?

    I have been running into the same issue for the last 2 weeks: once I exceed 110 GB on my iPod Classic 160 GB, iTunes is no longer able to update the database file on the iPod.
    When clicking (on the iPod) on Settings/About, the iPod displays the wrong number of songs. Also, the iPod is no longer able to play any songs.
    Is there a size limit for the database file on the iPod?
    I am making excessive use of the 'comments' field in every song that I load onto the iPod. This increases the size of the database file.
    Is there a way that I can manually update the database file on the iPod?
    Thanks for your help!

    Did you experience some crashing of the iPod as well? Do you know how many separate items you had?

  • Edit the [LDF] database file

    Hi,
    I'm searching for a way to edit the database file *.ldf, because some customers won't put the Signal_encoding_types in the LDF file. I need a way to save the "encoding" in the file with a LabVIEW UI. I've found some old APIs for LDF files, but I've tried them out and it seems they only work on old versions. The version of my LDF file is LIN_protocol_version = "2.0"; LIN_language_version = "2.1". Could anyone tell me if there's any way to realise my goal?
    Thank you in advance
    best regards,
    Melo 
    Solved!
    Go to Solution.

    Do you actually want to edit the LDF file? Because if so, there is nothing NI currently provides that will work for you. If you want to use the LDF in XNet, and you want to modify settings imported from the LDF, then you can use the XNet API to achieve this. When you use XNet you import the LDF (or other database files) and then you can edit the database, but this doesn't edit the LDF; it edits the database that XNet uses, which was created by importing the LDF.
    Recently there have been updates to XNet that allow exporting a database, but I believe they only support exporting to a CAN database. So with CAN you can import a DBC into XNet, edit it using the database editing tools, then export it back to a DBC that should have the edits you made.
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

  • How to stop BDB from Mapping Database Files?

    We have a problem where the physical memory on Windows (NT kernel 6 and up, i.e. Windows 7, 2008R2, etc.) gets maxed out after some time when running our application. On an 8GB machine, if you look at our process loading BDB, it's only around 1GB. But when looking at the memory using RAMMap, you can see that the BDB database files (not the shared region files) are being mapped into memory, and that is where most of the memory consumption is taking place. I wouldn't care normally, as memory mapping can have performance and usability benefits, but the result is that the system comes to a screeching halt. This happens when we are inserting records at a high rate, e.g. 10s of millions of records in a short time frame.
    I would attach a picture to this post, but for some reason the insert image is greyed out.
    Environment open flags: DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_MPOOL | DB_THREAD | DB_LOCKDOWN | DB_RECOVER
    Database open flags: DB_CREATE | DB_AUTO_COMMIT

    An update for the community
    Cause
    We opened a support request (SR) to work with Oracle on the matter. The conclusion we came to was that the main reason for the memory consumption was the Windows System Cache. (For reference, see http://support.microsoft.com/kb/976618.) When opening files in buffered mode, the equivalent of calling CreateFile without specifying FILE_FLAG_NO_BUFFERING, all I/O to a file goes through the Windows System Cache. The larger the database file, the more memory is used to back it. This is not the same as memory-mapped files, which Berkeley DB uses for the region files (i.e. the environment). Those also use memory, but because they are bounded in size, they will not cause an issue (e.g. need a bigger environment, just add more memory). The obvious reason to use the cache is for performance optimizations, particularly in read-heavy workloads.
    The drawback, however, is that when there is a significant amount of I/O in a short amount of time, that cache can get really full, resulting in physical memory being close to 100% used. This has negative effects on the entire system.
    Time is important, because Windows needs time to transition active pages to standby pages, which decreases the amount of used physical memory. What we found is that when our DB was installed on flash disk, we could generate a lot more I/O and our tests could run in a fraction of the time, but the memory would get close to 100%. If we ran those same tests on slower disk, while the result was the same, i.e. 10 million records inserted into the database, it takes a lot longer and memory utilization does not come even close to 100%. Note that we also see the memory consumption happen when we use the hot backup in the BDB library. The reason for this is obvious: in a short amount of time we are reading the entire BDB database file, which makes Windows utilize the system cache for it. The total amount of memory might be a factor as well. On a system with 16GB of memory, even with flash disk, we had a hard time reproducing the issue where the memory climbs.
    There is no Windows API that allows an application to control how much system cache is reserved, usable, or maximum for an individual file. Therefore, BDB does not have fine-grained control of this behavior on an individual-file basis. BDB can only turn buffering on or off in total for a given file.
    Workaround
    In Berkeley DB, you can turn off buffered I/O on Windows by specifying the DB_DIRECT_DB flag on the environment. This is the equivalent of calling CreateFile with FILE_FLAG_NO_BUFFERING specified. All I/O goes straight to the disk instead of memory, and all I/O must be aligned to a multiple of the underlying disk sector size. (The NTFS sector size is generally 512 or 4096 bytes, and normal BDB page sizes are generally multiples of that, so for most this shouldn't be a concern, but know that Berkeley DB will test the page size to ensure it is compatible, and if it is not, it will silently disable DB_DIRECT_DB.) What we found in our testing is that using the DB_DIRECT_DB flag had too much of a negative effect on performance with anything but flash disk, and therefore we cannot use it. We may consider it acceptable for flash environments where we generate significant I/O in short time periods. We could not reproduce the memory effect when the database was hosted on a SAN disk running 15K SAS, which is more typical, and therefore we are closing the SR.
    However, Windows does have an API that controls the total system-wide amount of system cache space to use, and we may experiment with this setting; please see http://support.microsoft.com/kb/976618. We are also going to experiment with using multiple database partitions so that Berkeley DB spreads the load to those other files, possibly giving the system cache time to move active pages to standby.

  • Managing a single Oracle Lite database file

    Hi,
    I was wondering if there's the possibility of using a single database file with Oracle Lite, just like it can be done with SQL Server CE. At the moment, I'm using the SQLCE driver of .NET to manipulate my SDF file (a SQLCE database for Pocket PCs) without using SQL Server Merge replication; however, I was trying to change my SQLCE database to an Oracle Lite database without using the whole replication thing. I've already installed the whole Oracle Lite 10g kit, but it seems it's necessary to create some DSNs (which I don't fully understand :S) and that's not what I'm looking for. I hope my explanation isn't too vague or ambiguous. Thanks in advance.
    Best regards,
    César C.

    See Connection string and DSN.
    It appears that c:\windows\polite.ini and c:\windows\odbc.ini need to be installed; odbc.ini must contain the DSN entry for your DB.
    Note you can create/modify these files when you install your application that uses Oracle Lite. If you happen to have an application that dynamically creates the DB, then you can reuse the one DSN entry for multiple DBs. Just provide the DB location in the connection string along with the DSN reference.
    // Build the database directory path next to the executable (e.g. <app dir>\Oracle).
    string dbpath = Path.Combine(
        Path.GetDirectoryName(System.Windows.Forms.Application.ExecutablePath), "Oracle");
    // Reuse a single DSN entry; the actual DB location travels in the connection string.
    string constr = string.Format(@"DataDirectory={0};Database={1};DSN={2};uid=system;pwd=manager",
        dbpath, DB, DSN);
    OdbcConnection cn = new OdbcConnection(constr);

  • Kodo.util.FatalDataStoreException: Wrong database file version

    Hi,
    I am using Kodo JDO 3.0.2 together with HSQLDB (non-cached, same process). It runs fine. However, after having used a SQL tool such as Aqua Data Studio to inspect the database, my Java code complains with the message "kodo.util.FatalDataStoreException: Wrong database file version". I have to rebuild the database and extend my classes again to get rid of this error.
    Is there some information in the database script that does not survive the inspection with the SQL tool? How can I work around this?
    Thanks for your help
    --Bruno

    Marc,
    It was indeed a version mismatch with my hsqldb libs. My SQL tool used version 1.7.2 whereas Kodo used 1.7.0. A quick update of the property file of Aqua Data Studio fixed the problem. Thanks for the hint.
    --Bruno
    Marc Prud'hommeaux wrote:
    Bruno-
    Without being at all familiar with "Aqua Data Studio", I'll make a complete shot-in-the-dark guess about what might be happening: you are using version x of Hypersonic to access the database, and then "Aqua Data Studio" is using version x+1. When the database is opened with HSQL version x+1, some internal version identifier in the database file is incremented, which disallows the previous version of HSQL (which is being used by Kodo) from opening the file.
    Again, this is a blind guess, but if it is the case, then the solution would be to ensure that you are using the same version of HSQL in both Kodo and "Aqua Data Studio".
    Otherwise, can you post the stack trace of the exception? That might give some more insight as to why this might be happening.
    As an aside, note that Kodo doesn't store or verify any internal "version" or anything like that, so I very much doubt that it is a problem with Kodo itself.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Database file size

    I am using Berkeley DB 5.1.19 with replication manager.
    I am seeing big differences in the size of the db files between the master and the client. Is that expected, and if so, what is the reason? This has an impact on the size of the backup too.
    On the master:
    [root@sde1 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    29M replica_data/__db.002
    11M replica_data/__db.003
    2.9M replica_data/__db.004
    25M replica_data/__db.005
    12K replica_data/__db.006
    2.3M replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    11M replica_data/log.0000000158
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    On the client:
    [root@sde2 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    146M replica_data/__db.002
    133M replica_data/__db.003
    3.3M replica_data/__db.004
    33M replica_data/__db.005
    12K replica_data/__db.006
    8.0K replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    For example:
    The following 2 files on the master are small:
    29M replica_data/__db.002
    11M replica_data/__db.003
    whereas on the client, the same files are:
    146M replica_data/__db.002
    133M replica_data/__db.003
    Thx in advance.

    The __db.00* files are not replicated database files. They are internal Berkeley DB files that back our shared memory regions and they are specific to each separate site's database. It is expected that they can be different sizes reflecting the different usage patterns and potentially different configuration options on the master and the client database. To read more about these files, please refer to the Programmer's Reference section titled "Shared memory regions".
    I am assuming that your replicated databases are the QM* and Qm* files. These look like they are the same size on the master and client, as we would expect.
    Paula Bingham
    Oracle

  • Delete a physical bdb database file

    Let's say I have a bdb database file x. Now, how do I delete this file? Of course you can't just delete it via the command line, or else there will be inconsistency in the bdb logs.
    I saw the remove API and used it the following way, but the file is not deleted yet: http://www.oracle.com/technology/documentation/berkeley-db/xml/java/com/sleepycat/db/Database.html#remove(java.lang.String,%20java.lang.String,%20com.sleepycat.db.DatabaseConfig)
    // Get the database handle to the database named "DbToDelete"
    Database dbHandle = GetDataBaseHandle("DbToDelete");
    // Delete this database
    dbHandle.remove("DbToDelete", "DbToDelete", null);
    It doesn't work – any ideas why? Is this the wrong API? Or does this API not remove the database from your system, but update its log to "delete" it?
    Thanks
    Neha

    Well, maybe I was using the API in an incorrect manner. Now I'm using it like:
    // Delete this database
    dbHandle.remove(Absolute_Path_To_DbFile_To_Delete, null, null);
    This does remove the file from the system. But I'm just concerned: is that the right way? Shouldn't the bdb env path somehow be specified in this API?
    Neha

  • How to get AW7 to update an Access database...

    I have created a simple Access database file to be used for student records, and need to figure out how to set up AW7 to read data from and write data to the file. I have scoured the internet, blogs, forums, and such, but haven't found anything helpful or straightforward, and most info is for old versions... so does anyone know of a resource or link, or have experience with this? Some sample code for my calc icons to use as a guide? If anyone has experience with this ODBC Microsoft Access communication, I would greatly appreciate some guidance. Thanks to everyone in advance, I really appreciate it!!

    Hey Steve-
    I would like your opinion/thoughts on the use of AW. I know that AW has not been updated for a while, and won't ever be again, so it wouldn't be a good idea to start now, and I would normally agree. But I still believe AW by itself, as a standalone program to develop courseware, is a great tool. (Wouldn't bother learning it from scratch, but I've used it before.) It seems to me that AW is not practical in this day and age, because the AW Web Player is outdated, AW web publishing was created around IE3 probably, and LMSs are updated every day... my point is that AW seems to be outdated when it comes to delivery over the web.
    To the point: I may still be tasked with developing some courseware solely for a stand-alone, independent classroom, completely isolated from the web; I don't even care if IE or Mozilla is installed on any computer. (So no web packaging, just as .exe or .a7r.) I even have full control over whatever version of Windows is used. So would you still consider AW? I still am, obviously; I have worked with Captivate and it just seems so simple and basic compared to the capability AW still has.
    To the point again: so taking the web environment out of the equation, *have you encountered any issues using AW in 2011 that are show-stoppers?* Maybe a strictly OS issue with Windows 7?
    I would be very interested in any brief comments or thoughts you have on what I have said. You have been a huge help to me already and I really appreciate your time. Thanks!!
    Terry

  • Flash and database files

    Solaris 8 10/02
    I archived a Netra 20 running Solaris 8 10/02 - the archive creation worked great. However, when I tried to install it on another system (boot - install tape) with the same hardware configuration, the sparse Sybase database files were expanded and no longer fit on the partition that was created.
    Any ideas?
    thanks

    Thanks again Barney...
    I make a new version of this app every 3 months and I have just created one this week (after updates) and it works fine once burnt onto a CD.
    This is what makes me think that the OS is now looking at the older .dat files differently after the updates. Could it be a security issue that they may have been trying to resolve??
    It is also PC compatible and all the old CD-ROMs still work fine on PC's.
    Thanks again for your help.
