MGP Compose Timeout

Hi people,
When we try to sync about 80000 new rows to the server we receive the following in mgptrace_sys1.log
log-1: ============== Server Exception - Begin ==================
java.lang.Exception: MGP compose timed out for C2131_01
     at oracle.lite.sync.Consolidator$O8Server.fastPush(Consolidator.java, Compiled Code)
     at oracle.lite.sync.MGP$MGPG.run(MGP.java, Compiled Code)
     at java.lang.Thread.run(Thread.java, Compiled Code)
================== Server Exception - End ====================
Does anyone know how to prevent this?
Many thanks
Andy

This is expected, since the compose cycle for the user C2131_01 exceeded the default value for parameter COMPOSE_TIMEOUT, which is 5 minutes. I suggest two things:
1. Tune the system to optimize MGP performance (use the Consperf utility to get the numbers and explain plans)
2. Increase COMPOSE_TIMEOUT
The reason for this timeout is to prevent the MGP from being blocked by one particular user.
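For the second suggestion, COMPOSE_TIMEOUT can be set as an instance parameter (or added to the [CONSOLIDATOR] section of webtogo.ora) and the mobile server restarted. A sketch only - I believe the value is in seconds, so the 5 minute default would be 300, but check the documentation for your version:
[CONSOLIDATOR]
# allow each user's compose up to 30 minutes before timing out
# (value assumed to be in seconds - verify for your release)
COMPOSE_TIMEOUT=1800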

Similar Messages

  • Blocking sessions during MGP compose

    We keep getting the MGP compose process blocked by inactive sessions
    We raised an SR with Oracle and they suggested adding the instance parameter DO_APPLY_BFR_COMPOSE=YES to webtogo.ora, but this does not seem to have cured the problem.
    From tracing, it seems that the problem is an inactive apply session that ended in an error and is still sitting there for the user the MGP is trying to compose. The actual block is on the table MOBILEADMIN.C$ALL_CLIENT_ITEMS.
    If an apply fails due to an error (eg: constraint violation, trigger failure etc), it executes the statement
    UPDATE MOBILEADMIN.C$ALL_CLIENT_ITEMS SET CRR='Y' WHERE CLIENTID=? AND PUBLICATION_ITEM IN ( SELECT PUBLICATION FROM MOBILEADMIN.C$ALL_TEMPLATE_ITEMS ati WHERE ati.TEMPLATE=?)
    as part of the copy to the error queue. It also looks to execute the same statement at the beginning of the compose for the user, hence the block.
    The other odd thing is that, looking at the blocks this morning, the MGP apply/compose kicked off at 7.09 am. The user synchronised at just after 8 am, by which time the MGP cycle was well into the compose phase, and there is no record of apply activity for the user within the cycle (the actual apply is in the following cycle), yet the content of the blocking session was definitely apply code.
    MGP used to do all of the applies first, and then all of the composes, with a test to skip the compose if there was data in the user's in queue. The upgrade to 10.2 (or one of the many patches since) looks to have changed the default behaviour to attempt an apply before each user compose, but the parameter above was supposed to set the server back to its old behaviour (ie: keep apply and compose separate). NOTE I have not seen any of the old 'compose deferred because of unprocessed data in the in queue' messages in the compose recently either.
    The upshot is that it looks like it is still mixing apply and compose together, and where the apply hits an error, the apply thread is not closing correctly or releasing its locks.
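    For reference, this is the sort of check we run from a second session while the MGP is stuck, to see what is holding the lock (a sketch using the standard dictionary views, nothing Oracle Lite specific):
    -- blocked sessions and the session blocking them (10g dictionary)
    select sid, serial#, status, blocking_session, seconds_in_wait
      from v$session
     where blocking_session is not null;
    -- sessions currently holding a lock on the table mentioned above
    select o.object_name, l.session_id, l.locked_mode
      from v$locked_object l, dba_objects o
     where l.object_id = o.object_id
       and o.object_name = 'C$ALL_CLIENT_ITEMS';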
    Does anyone have any information about the parameters
    DO_APPLY_BFR_COMPOSE or
    SKIP_INQ_CHK_BFR_COMPOSE (this appeared with a value of NO when we added the other parameter).
    The information from Oracle is that the two are mutually exclusive (?), but they are not documented anywhere that I can find (the one reference on Metalink leads to a non-existent note), and Oracle seem reluctant to supply any,
    so PLEASE
    a) any information about the parameters
    b) any dependencies (ie: parameters not working because we are missing a patch)
    c) location of documentation
    d) any other ideas

    which version are you on?
    we are on 10.2.0.2, so unsure if this works in other versions
    The standard mobile manager parameters screen does not by default show these settings, but we were asked to add them into the webtogo.ora file in the [CONSOLIDATOR] section eg:
    [CONSOLIDATOR]
    # Installer will change these values
    SERVER_VERSION=8.1.7
    SKIP_INQ_CHK_BFR_COMPOSE=YES
    DO_APPLY_BFR_COMPOSE=NO
    RESUME_FILE_SIZE=512
    # 8.1.7
    # Installer won't change these values
    MAX_THREADS=3
    JDBC_DRIVER=oracle.jdbc.driver.OracleDriver
    Once this was done we stopped and restarted the mobile server, and the new parameters then appear in the normal data synchronisation>administration>instance parameters screen.
    Our current settings on the live system are
    SKIP_INQ_CHK_BFR_COMPOSE YES
    DO_APPLY_BFR_COMPOSE NO
    and this does the compose whether or not there is pending data in the in queues for the client. Just my opinion, but there seems little point in doing the check, as even if you force two tries of the apply process (one in the main apply phase before the compose phase, and one just before composing for a particular user), the data in the second apply will not be picked up on fast refresh publication items, as the 'snapshot' of the changes has already been done in the process logs phase

  • MGP compose postponed because of unprocessed data in the in_queue

    Could anybody shed some light on what does this mean?
    "MGP compose postponed for B010MRM because of unprocessed data in the in_queue"
    Thanks in advance,
    dliu
    We saw this one in the "Mobile Manager" under Data Synchronization > MGP Apply/Compose Cycles > MGP History Cycle >
    "Compose Error:
    java.lang.Exception: MGP compose postponed for B010MRM because of unprocessed data in the in_queue
         at oracle.lite.sync.Consolidator$O8Server.fastPush(Unknown Source)
         at oracle.lite.sync.MGP$MGPG.run(Unknown Source)
         at java.lang.Thread.run(Thread.java:534)"

    Basically there was data sitting in the in queue for the particular user when their turn came around for the compose cycle of the MGP.
    If this is the case, there could be a conflict between the incoming data and the data that could be selected from the main server database for download. To prevent this the compose will not run for the user.
    During the next MGP cycle the in queue data will be posted first (assuming no errors) and then the compose will run as normal for the user. This is a relatively normal situation where there is a lot of incoming traffic and continual compose processes.
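    If you want to confirm this, you can look at what is still sitting in the in queue for that client before the next MGP cycle runs. A sketch only - the CFM$ prefix is the in queue naming convention described elsewhere, but the publication item name here is made up and the exact columns vary by version, so describe the tables first:
    -- header records for uploads that have not yet been applied
    select * from mobileadmin.c$in_messages;
    -- in queue rows for one publication item (name is hypothetical)
    select count(*) from mobileadmin.cfm$my_pub_item;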

  • Exclude Pub Items from Compose

    G'day all,
    Do you know how to exclude a certain Publication Item during MGP compose cycle?
    I have a PI that only receives data from clients and does not need to bring the data back to clients. Unfortunately this PI has the biggest volume of submitted data, so I think excluding it from the compose cycle will speed up the MGP process.
    Thanks
    Vute

    There are a few options
    1) keep it as fast refresh, but (if you are sure that the data traffic is purely one way up from the client) set the selection to 'where 1=2'. This would still run a compose, but it would be pretty fast. The uploaded data would trigger the MGP after the apply/insert, but this would just do the comparison within the CMP$ tables and generate delete transactions for the client. In this case data would stay on the client until the sync after it was applied to the server
    2) disable the triggers created by the publish (<table_name>i/u/d). This would mean that the table is never marked as dirty, and therefore will not be composed. In this case data created on the client would not be removed unless a complete refresh of the object was done (a sketch of this follows the list)
    3) make it a complete refresh object - this suppresses it from the compose, but will flush data not yet applied from the client (at least until a subsequent sync). If you use a dummy selection then you can get it to flush the client table after each sync
    4) If you have a lot of dependencies, you could create a MyCompose class for the PI, and use the needCompose method to provide more control over whether or not the object is composed
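    As a sketch of option 2 (the schema and table names here are made up - the i/u/d suffix convention is the one described above, so check the actual trigger names created by your publish):
    -- disable the replication triggers created by the publish, so changes
    -- to this table are never logged and it is skipped by the MGP compose
    alter trigger app_schema.BIG_UPLOAD_TABLEI disable;
    alter trigger app_schema.BIG_UPLOAD_TABLEU disable;
    alter trigger app_schema.BIG_UPLOAD_TABLED disable;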

  • MGP "record flags" from multiple tables?

    Hello All.
    I've run into a client requirement/business rule change that is causing problems in regards to data synchronizations and MGP processing. Hopefully, someone here can provide some insight in this situation.
    We have three tables in the scenario...
    - A table of sales figures with a unique key based on Store ID and month
    - An "org table" associating facilities with territories
    - A user data table...associating specific users with territory numbers.
    The goal with this was to provide only the set of sales figures related to a user's specific set of facilities, based on sales territory. This is provided with a view joining these tables together, with a bound variable tied to the user ID (set as a Data Subsetting Parameter in Mobile Manager).
    With this configuration, we should get an entirely new set of sales figures whenever we...
    - load new territory assignments to the User Table
    - load new organizational info into the Stores Table
    - update the individual records in the Sales Figures table
    And this also avoids a lot of administrative overhead in resetting Data Subsetting Parameters, since there are massive rates of change in territory assignments.
    The problem I'm running into comes when defining the Publication Item for this view. Mobile Database Workbench still wants to define a "primary table" for the view, placing triggers on only one of the three tables. This means that synchronization/MGP flags are only being set when one of the tables is updated. I need them set whenever ANY of the constituent tables are updated.
    Since even a subset of this data is massive, we want to stick to Fast Refresh, so refusing to set a Primary and forcing Complete Refresh is not an option.
    Outside of forcibly de-normalizing our data into a single, messy, massive table (or a huge de-normalized materialized view), how do I handle this? How do I get Oracle Lite to trigger Fast Refresh off of multiple constituent tables from a join/view?

    We use views a lot for data denormalisation, as there can be client side performance issues when joining tables in msql (either directly or via local views).
    In order to create a view as a snapshot you WILL need to define a main base table, as this is the source of the primary key. Composite PKs from a number of tables will not work correctly, as this causes problems in the out queue creation.
    For your scenario, in the view you will need a unique primary key - this may determine which table you use as the base table. When published, this will automatically create the triggers for fast refresh on the base table. To get the triggers on the other tables in the view, you can include these tables manually in the publication as well, either with create on client=NO or with a dummy select clause (we use where 1=2). This will trigger the compose if any of the tables change.
    There are a couple of issues
    1) you will see when looking at the details of a published view object that the base table and all subsidiary tables are shown, but ONLY if directly referenced in the view itself. Derived data via functions, for example, will not show the table, and therefore changes to these tables will not trigger the compose of the object
    2) the out queue is only populated if the version number of the record is changed - this is associated with the base table ONLY. If you get the replication triggers on all of the tables as above, then any change to any table will trigger the compose, and if the nature of the change means that a new record is to be sent to the client, or one deleted, this will happen. HOWEVER if the nature of the change is just a change to data, the update will only be sent if the base table has altered.
    in some cases in our system we have gone for a strategy like
    the view is based on table a, but also includes data from tables b and c, and a function selecting data from table d
    create and publish based on table a
    create triggers in the base schema on tables b, c and d that, in case of a change, do an update to the associated table a record (a sketch of such a trigger follows at the end of this reply).
    This method means that changes to any of the tables cause a change to table a, and therefore all inserts, updates and deletes will be replicated.
    an alternative is to create publication items for the view and also tables b, c and d
    this will create <tablename>i,u and d on all of the tables
    if you look at the triggers for table a, you will see that what they do is log a record in c$all_sid_logged_tables and update the version number in CVR$<table a>. If you copy this code to the corresponding <table b/c/d> triggers, this will 'fool' the MGP compose into processing the changes without the need for physical updates to the tables themselves
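    As a rough sketch of the first strategy above (triggers on tables b, c and d that touch the associated table a record) - every table and column name here is made up for illustration:
    -- any change to table_b does a dummy update of the related table_a row,
    -- so table_a's own replication trigger fires and the view is composed
    create or replace trigger table_b_touch_a
    after insert or update or delete on table_b
    for each row
    begin
      update table_a a
         set a.last_touched = sysdate              -- any harmless column will do
       where a.id = nvl(:new.table_a_id, :old.table_a_id);
    end;
    /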

  • Long running MGP process / Tuning?

    Our MGP compose process runs for 25 minutes. This seems like a long time for it to be running. I have read about consperf, but it seems that you can only enable it by going into each user's publication item and enabling it.
    Is there any global setting that will do the same thing?

    What version are you on?
    We use 10.0 live (testing out 10.2, but found a (to us) showstopper bug).
    To run consperf (for 10.0; I think it is similar for earlier versions), you can run it from the command line on the server, but it is a bit temperamental unless you are using Windows servers. Otherwise, from the mobile manager try following this route
    1) look at the MGP apply/compose logs (preferably one with a number of logged tables processed), and drill into one of the users. This should give you a list of publication items (WTGPI_nnnnn names) composed, with a number of milliseconds to process after each one. On the whole ignore anything less than 2000 for now, but see if there are any with large numbers; anything into 5 digits or above needs looking at. If they are all low numbers, there is not a lot to be done - you must have a lot of users!
    2) assuming you found something, then go to data synchronisation>performance, and press the link behind the word users in the middle of the text describing consperf
    3) select the user you looked at in step one, and press the subscriptions button.
    4) check the application that is the problem and press the consperf performance analysis button
    5) take the set parameters link. For now leave everything as seen, except in the pubitemlist type in wtgpi_nnnnn, where nnnnn is the number of the slow running publication item from step 1. NOTE you need the wtgpi prefix. Press ok, and it will run with a spinning timer. NOTE this is not fast and may take 10 minutes, but you can switch to other pages and do other things and then come back.
    When finished you should see links to two files, a timing file and an execution plan. Look at the timing file first. This has two sections: SYNC is about the synchronisation (ie: download), while the ldel, lins sections relate to the compose performance. For each area (the file tells you about them), there are a number of different templates, and it will give example timings for them. If all of LDEL_1 to LDEL_4 show -10000 then you have a problem (it means it gave up). One of the values for each set will have chevrons around it, and this is its default.
    The execution plan file gives the corresponding SQL statements and explain plans for them

  • 10gLiteR3 publishing ORA-00942: table or view does not exist error

    Hi All,
    I am encountering table or view does not exist error while publishing using the api.
    Below is the code:
    try {
        consolidatorManager.openConnection("MOBILEADMIN", "PASSWORD", ADMIN_JDBC_URL);
        mobileResourceManager = new MobileResourceManager("MOBILEADMIN", "PASSWORD", ADMIN_JDBC_URL);
        consolidatorManager.createPublicationItem(EXT_CONN, "RMT_TEST_TABLE", "MOBILEADMIN", "RMT_TEST_TABLE", "F", "SELECT * FROM MOBILEADMIN.RMT_TEST_TABLE", null, null);
        consolidatorManager.addPublicationItem(mobileResourceManager.getPublication("/mobileApp"), "RMT_TEST_TABLE", null, null, "S", null, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
    When I execute the above code it does not throw any exceptions, but I see the error below in err.log. The I,U,D triggers are getting created on the remote table, the CMP, CFM, CVR, CLG tables are also created, and I also see a primary key for this table in VPKS. Publishing seems to be fine, but the following error appears in err.log.
    java.sql.SQLException: ORA-00942: table or view does not exist
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:180)
         at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
         at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:543)
         at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1451)
         at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteDescribe(TTC7Protocol.java:651)
         at oracle.jdbc.driver.OracleStatement.doExecuteQuery(OracleStatement.java:2117)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2331)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:422)
         at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:366)
         at oracle.lite.sync.Subscription.getVirtualTablePrimaryKey(Subscription.java:7522)
         at oracle.lite.sync.Subscription.getTablePrimaryKey(Subscription.java:7365)
         at oracle.lite.sync.Subscription.CreatePublicationItem(Subscription.java:2334)
         at oracle.lite.sync.Subscription.CreatePublicationItem(Subscription.java:2157)
         at oracle.lite.sync.Subscription.CreatePublicationItem(Subscription.java:2129)
         at oracle.lite.sync.Subscription.CreatePublicationItem(Subscription.java:2108)
         at oracle.lite.sync.Subscription.CreatePublicationItem(Subscription.java:2093)
         at oracle.lite.sync.Subscription.CreatePublicationItem(Subscription.java:2079)
         at oracle.lite.sync.ConsolidatorManager.createPublicationItem(ConsolidatorManager.java:1253)
    I am not able to figure out what table it is looking for. Any ideas why this is happening?

    Check the MGP compose process - is it also showing the error?
    If it is, then at some point in 10.2 Oracle introduced a 'feature' (ie: bug) that causes problems for the housekeeping routines when new versions of publication items are published
    In 10.0 and early 10.2, when making changes to publication items, VIEWS called CPV$.. were created to log the column list for each version of a publication item and to check whether you are making changes when you publish. In the case of old versions, once all clients have synchronised and been moved to the latest version it tries to delete this view.
    At some point the publish started creating TABLES instead of views (but using the same naming convention, ie: CPV$..). This works fine as far as the logging of changes and old column lists goes, but the housekeeping called at the end of the publish and MGP process still tries to do a DROP VIEW and this causes the error.
    The only way of clearing this i have found is to look at the log file to get the name of the object that it is trying to drop, set up a view script to create a dummy view with the list of columns in the table, drop the table and then create the view. It then gets dropped on the next MGP cycle and the error goes away
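    Roughly the sequence I use to clear it (a sketch - CPV$XXXXX stands for whatever object name appears in your log, and the column list must be taken from the real table first):
    -- 1. capture the column list of the offending CPV$ table
    select column_name from all_tab_columns
     where owner = 'MOBILEADMIN' and table_name = 'CPV$XXXXX'
     order by column_id;
    -- 2. drop the table that the housekeeping keeps trying to DROP VIEW
    drop table mobileadmin."CPV$XXXXX";
    -- 3. recreate it as a dummy view with the same column names, so the
    --    next MGP cycle can drop it cleanly and the error goes away
    create view mobileadmin."CPV$XXXXX" (column_1, column_2) as
    select 'x', 'x' from dual;   -- list the captured columns here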
    If the error is not showing up in the MGP process, only in the publish, and your changes appear to be publishing OK, then the problem is likely to be with a snapshot definition AFTER the change you have made - a real pain to identify on re-publish - possibly a missing schema name in the snapshot definition

  • Data subsetting depending on connected user

    hi there.
    It is possible to define parameters in the where clause when defining a publication item. This parameter is resolved at runtime, which provides the possibility to send just a subset of the data of the given publication item back to the remote client.
    In my case the subset is driven by the connected user. How can I configure this requirement? Do I need to define my own MyCompose class, or even my own queue for this purpose?
    thx for any hints.
    j.

    Here is a section on shared maps from the documentation. You accomplish your task by assigning the user to a group.
    It is very common for publications to contain publication items that are used specifically for lookup purposes. That is, a publication item that creates a read-only snapshot. The server may change these snapshots, but the client would never update them directly. Furthermore, many users often share the data in this type of snapshot. For example, there could be a publication item called zip_codes, which is subscribed to by all Mobile users.
    The main function of Shared Maps is to improve scalability for this type of publication item by allowing users to share record state information and reduce the size of the resulting replication map tables. By default, if you have a non-updatable publication item, it defaults to using shared maps.
    Note:
    Shared Maps can also be used with updatable snapshots if the developer is willing to implement their own conflict detection and resolution logic; however, normally shared maps are only for non-updatable snapshots.
    Shared maps shrink the size of map tables for large lookup publication items and reduce the MGP compose time. Lookup publication items contain read-only data that is not updatable on the clients and that is shared by multiple subscribed clients. When multiple users share the same data, their query subsetting parameters are usually identical.
    For example, a query could be the following:
    SELECT * FROM EMP WHERE DEPTNO = :dept_id
    In the preceding example, all users that share data from the same department have the same value for dept_id. The default sharing method is based on subscription parameter values.
    In the following example, the query is:
    SELECT * FROM EMP WHERE DEPTNO = ( SELECT DEPTNO FROM EMP WHERE EMPNO = :emp_id )
    In this example, users from the same departments still share data. Their subsetting parameters are not equal, because each user has a unique emp_id. To support the sharing of data for these types of queries (as illustrated by the example), a grouping function can be specified. The grouping function returns a unique group id based on the client id.
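    As an illustration of the grouping function idea for the second query (a sketch only - how the function is registered against the publication item, and the exact signature expected, should be checked in the Consolidator documentation for your version):
    -- returns the same group id for every client in the same department,
    -- so those clients share one set of map records
    create or replace function emp_group_fn (p_clientid in varchar2)
    return varchar2 is
      v_deptno number;
    begin
      select deptno into v_deptno
        from emp
       where empno = to_number(p_clientid);   -- assumes the client id is the empno
      return 'DEPT_' || v_deptno;
    end;
    /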
    There is also another possible use for shared maps. It is possible to use shared maps for shared updatable publication items. However, this type of usage requires implementation of a custom dml procedure that handles conflict resolution.

  • Publication not sending records (from two pub items) down to Oracle Lite

    Hi there,
    I have several publication items that are limited by queries. This query limits the data coming back to a time span of several days. When I run this limiting query in SQL Developer against the main database, the query brings back several rows.
    Two tables on the device, however, do not receive any rows upon first synchronization. I know that all of the publication items containing this limiting query have worked correctly in the past (on several devices over the last month).
    Any ideas? Is there something that I can look for in the server logs to indicate why this publication did not transfer any records for only these two publication items? All of the other publications seemed to have worked correctly, and they are actually driven off the same limiting query as the problem publication.
    Allen

    we have a similar issue with some of our data, where jobs are allocated to users for a period of time that may be set in advance.
    The way the MGP compose process works does not go well with selections by date range, as it only processes records that have altered since the last compose process, so you get the scenario:
    record created 1/1/08 with a 'send to client date' of 1/3/08
    snapshot has the query select * from table where send to client date <= system date
    compose runs on 1/1/08, processes the record as it has changed, but it does not meet the selection criteria, so is not sent
    compose runs on 1/3/08, but as the record has not changed, it is not processed
    there are a few options
    1) batch job that does a dummy update of all records where send date=system date; this will force the MGP process to pick them up, but it is not too efficient (see the sketch below)
    2) as above but the update is to the CVR$ table, incrementing the version number - achieves the same objective and saves updates to the real data
    3) use MyCompose for the object, with a needCompose test either always true, or based on records for the date range existing
    4) use a queue based publication item
    NOTE if using 1 or 2, ALL matching records need the update, not just one for the table
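    As a sketch of option 1 (the table and column names are made up; option 2 is the same idea but done against the CVR$ table that the publish created, which saves touching the real data):
    -- dummy update of every record whose 'send to client' date has now been
    -- reached; the values do not change, but the replication trigger still
    -- fires, so the next MGP compose picks the records up
    update job_allocations
       set send_to_client_date = send_to_client_date
     where send_to_client_date <= trunc(sysdate)
       and send_to_client_date >  trunc(sysdate) - 1;
    commit;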

  • Conflicts resolution methods in Oracle Lite

    Can anyone please provide answers to the following questions?
    1_What Methods are used for Conflict detection and resolution for concurrent updates by multiple clients in Oracle lite databases?
    2_ Is there any method that extract semantic relation from the concurrent update transactions and create a global update schedule?
    3_ Does oracle lite use conflict avoidance mechanism?
    4_ What replication method is used by Oracle Lite Database?

    In terms of conflict resolution with Oracle Lite, which end do you mean: conflict resolution in the client database (ie: Oracle Lite), or on the server side when processing client uploads (this is just a standard Oracle database)? Also I am not sure what you are trying to achieve.
    *1_What Methods are used for Conflict detection and resolution for concurrent updates by multiple clients in Oracle lite databases?*
    I assume in the following that you are talking about dealing with uploads
    Depending on how the publication items are defined, the process is quite different.
    a) fast refresh publication items
    When the client synchronises, the upload data is uploaded as a packed binary file which is then unpacked and inserted into in queue tables in the mobileadmin repository (table names begin CFM$ followed by the publication item name). This is the only action that happens during the actual sync process.
    A second and independent process, not linked to the actual synchronisation - the MGP process, runs on the mobile server, and this has three phases - apply, process logs and compose that run one after the other. You can set the MGP to only do the apply phase, or all three.
    during the apply phase the data in the in queue tables for a particular user/transaction will be applied to the server database. Normally the MGP process is set to have three threads (this can be changed, but three is the default), and therefore three client uploads will be processed in parallel, but each of these threads is independent of the others and therefore should be seen as separate transactions.
    It should be noted that even if you have 50 or 100 users synchronising concurrently, only three upload sets will be processed at any one time, and almost certainly a period of time after the synchronisation has completed (may be many hours depending on the MGP cycle time)
    As each of the apply threads is a separate transaction, there is no concept of concurrency built in, and the only conflict resolution by default is based on the server wins/client wins setting of the publication item. Where multiple users are updating the same server record with 'client wins', the first user will update the data, and then the next user will update the data (just a repeat of the previous one). NOTE also that in the case of an update, ALL columns in the record are updated; there is no column level update.
    There are customisation options available to provide finer grained control over the apply process: look at the PLSQL callback packages registered against each publication item for beforeapply, afterapply, beforetranapply and aftertranapply, the Apply DML procedures against the publication items, and also the CUSTOMIZE package at the MGP level (see the outline sketch below)
    b) complete refresh publication items
    where the publication as a whole has a mixture of fast and complete refresh publication items, these normally work in the same way as the fast refresh described above. Where however you just have complete refresh items the data MAY be written directly to the server table on upload
    c) queue based publication items
    These work in realtime, rather than with a delay for the MGP process to pick up the data.
    When the user synchronises, the uploaded data is written to the in queue tables in the same way, but when this is completed, a package (defined as part of the publication definition) is called, and the procedure upload_complete is run, passing in the user and transaction identifiers. This package needs to be hand crafted, but you have full control over what and how all of the uploaded data is processed; again though, this is a single transaction for that user. If you want to look at other sessions running, you need to find a way to implement this.
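    As an outline of the MGP level CUSTOMIZE package mentioned under (a) above (a sketch only - the procedure names follow the callbacks listed there, but check the Consolidator documentation for the exact signatures and the schema it must be created in before relying on this):
    -- skeleton only: each procedure is called by the MGP at the matching
    -- point in the apply cycle, so cross-transaction checks can go here
    create or replace package customize as
      procedure BeforeTranApply (tranid     in number);
      procedure AfterTranApply  (tranid     in number);
      procedure BeforeApply     (clientname in varchar2);
      procedure AfterApply      (clientname in varchar2);
    end customize;
    /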
    *2_ Is there any method that extract semantic relation from the concurrent update transactions and create a global update schedule?*
    As noted above, the uploads may be processed in parallel, but they are separate transactions, so there are no built-ins
    *3_ Does oracle lite use conflict avoidance mechanism?*
    Only the basic oracle stuff, unless you use the customisation options to write your own
    *4_ What replication method is used by Oracle Lite Database?*
    The different types of publication items select data from the server database for download in different ways
    a) fast refresh
    change logging tables and triggers are created in the server database. These are scanned during the MGP process logs phase to determine what changes have happened since the last MGP compose, and what publication items they affect. The MGP compose then runs, and this uses the same three threads to process the users in alphabetical order, using the change keys to populate data in out queue tables (prefixed CMP$) in the repository. These have the PK values for the data, plus a transaction type (insert/update/delete). All the MGP process does is populate these out queue tables.
    When the user synchronises, the data in the out queue tables is used as a key list to extract the data from the actual server tables into a packed binary file, and this is sent to the client.
    b) complete refresh
    there is no pre-preparation in this case, the data is streamed directly from the server database into the packed binary download file
    c) queue based items
    in real time, when the user is synchronising, after the apply has been done by the upload_complete procedure, a second procedure download_init is called. Within this you have to code the data extract, and MUST populate tables (you also need to create them) CTM$<publication item name>; these are effectively out queue tables, but contain all of the data, not just the PK values. At the end of the procedure, the data is streamed from these into the binary file for download.
    Depending on the definition of your publication, you could have one or more of the above types (VERY bad idea to mix queue based and fast refresh unless you are very sure about what you are doing) and therefore there may be a mix of different actions happening at different times
    In conclusion I would say: try to send separate data to clients so that they do not interfere with each other, and for inserts use unique keys or sequences. If you MUST send the same data to different clients for update, then the queue based approach provides the best control, but as it is real time it is not as scalable for large data sets.

  • Sync Behaviour

    Hi all,
    I'm using a 10gR2 (10.2.0.4.0) database as standalone and using Oracle Lite R3 (10.3.0.2)
    I have an application created with oracle branch office platform.
    I downloaded a client in a mobile device and started doing data entry.
    I made an entry in parent table and also made a corresponding entry in the child table.
    The parent table is a non-syncable table.
    And I also did some updates in some syncable tables
    Now I did a normal sync. The sync was successful on the client side.
    As the parent entry was not created in the server, i got an error “parent key not found”.
    Now all the records will be in error queue until i clear the error.
    Now, I didn’t clear the error.
    Scenario 1:
    Without doing any DML operations in the ODB which i synced earlier, i synced again.
    The sync was successful on the client side.
    In the synced ODB, I noticed that all the records which I inserted earlier had been deleted, and the updates which I made were not there either.
    The updates had been changed back to how the data currently is on the server.
    Scenario 2:
    In this scenario, I did some updates in some other tables (different from previously updated tables) in the ODB which i synced earlier.
    Now I did a normal sync. The sync was successful on the client side.
    When I checked in the server, the updates which I made now were reflected in the server.
    When I checked in the ODB, the updates which I made now were unchanged, and so were the previous inserts and updates.
    Can someone explain how oracle lite behaves in the above mentioned scenarios?
    Thanks in advance for your help.
    Regards,
    Prasnnaa.

    The standard behaviour for fast refresh publication items is
    a) data created/updated on the client
    1) the sync process copies the changed data from the client database and creates a packed binary file
    2) file uploaded to the server and unpacked into the cfm$ tables (in queue) with 'header' data in c$in_messages
    3) any new changes waiting in the cmp$ (out queue tables) for the client are extracted into a packed file
    4) packed file of server changes downloaded and applied to the client database
    NOTE no data removed from the client unless a delete was sent from the server out queue
    5) at some point the server MGP apply process will run, and this takes the in queue data and attempts to write to the server tables
    6) if this is successful the in queue tables are purged; if not, the data is copied to the error queue (see the checks sketched at the end of this reply)
    7) if the apply was successful, then the triggers on the tables updated during the apply will fire, logging the data for the compose process
    8) MGP compose process runs, finds all server side changed data and builds the out queue for the next sync
    9) in the next sync 'acknowledgement' updates will be sent to the client for previously uploaded data (if the data no longer meets the selection criteria, for example status=ACTIVE , this is the point at which it gets deleted)
    b) data created/updated on the server
    1) the data updates cause the triggers to fire, logging the records for processing by the MGP process
    2) the MGP compose process uses the logged records and builds the out queue
    3) changes downloaded when the user next synchronises (NOTE the data will only be available if the client sync is AFTER the MGP compose process)
    for complete refresh items
    1) data upload from client
    2) all data matching the selection is downloaded in real time
    3) client table cleared
    4) client table rebuilt with the new download
    The difference in behaviour, and the process delay whilst waiting for the MGP process to complete, is the reason newly created data can temporarily get removed from the client, and put back once applied to the server
    There is a third type of publication item that works in real time on both upload and download, but this is more complex to use
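    If you want to see what is still waiting after the "parent key not found" failure, these are the sort of checks we do on the repository (a sketch - on our installation the error queue tables are prefixed CEQ$ per publication item, but verify the names and owners on your version before trusting this):
    -- uploads still sitting in the in queue (steps 2 and 5/6 above)
    select * from mobileadmin.c$in_messages;
    -- error queue rows for one publication item (the name is made up)
    select * from mobileadmin.ceq$my_pub_item;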

  • Using the APIs - how do you define the application?

    After using Lite for a long while, I am finally getting around to evaluating ways to override/improve the compose process, as this is becoming a real problem for us.
    as i see it there are 3 main options
    1) call out pl/sql package - got this working, but does not directly affect the compose
    2) mycompose - despite not being a java person, got this working, and this does reduce the compose impact (the inconsistencies in the PK name mapping threw me a bit, but that's oracle lite for you)
    3) queue based publication items to bypass the MGP process entirely. Following the examples (and a lot of make it up as you go along) I managed to create a publication via the consolidator APIs, added the publication items, added and instantiated a user, synced, and all seems to be working (at least in outline), but the publication I have created is not attached to any application, so I cannot maintain the users via the applications tab as normal. I found the resourcemanager API to create the application (I think), but cannot find any way to associate my publication with it
    Am i missing something? or can i just not manage to find the correct API?

    Firstly i will point out that the underlying source of our 'issues' is poor main system database design (politics and in place before i arrived), and this has pushed us down a couple of possibly non standard routes
    1) quite a number of publication items based on complex updatable views (too much normalisation on the server side, and the performance overheads for high numbers of table joins on the PDA clients we use have forced this)
    2) complex data subsetting (using relationships disconnected from much of the actual data)
    the issues we have are
    a) poor performance in the MGP compose on a few of the view based objects (they have been extensively tuned, consperf settings as good as they can be, but the execution plans generated by the standard compose process wrapping are poor)
    b) updates to view 'content' only happen by default if the base table of the view is updated (this is where it gets the version from)
    c) increasing MGP compose time (for roughly the same number of users this has gone from an average of about 1-2 hours to 8-12 over the past 12 months - this is probably more related to server application design issues, increasing data volumes and possibly conflicts with streams being used for other applications)
    d) performance overhead in terms of resource (CPU, IO) usage when compose is running
    e) locking conflicts between the main server system and the MGP process (either the process log phase failing in mobile due to transaction locks on all_sid_logged_tables preventing the compose from starting, or server transaction being blocked for a few minutes due to the exclusive locks needed for the process logs phase)
    Due to the above i decided to do a proof of concept type study on the 3 alternative approaches for controlling the MGP process (before/after callouts, mycompose and queue based publication items).
    results so far
    - before/after callouts do not affect the actual compose processing, so not much use (but we are looking at these for intercepting some standard issues from the client applications and correcting the data before it hits the error queues)
    - mycompose - despite minimal java knowledge (and the undocumented issues like the poor execution of moving data from the cpd tables to the cmp tables), I have been able to demonstrate a 60-90% reduction in compose time for the most problematical objects - sorts out issues a-c, helps with d, but does not do much about e
    - queue based publication items - seriously cumbersome at the moment (may be able to make it more generic), but PL/SQL that I am more comfortable with (except for the actual publish) - still working on this, but so far, at the cost of an increase in average client sync time from about 1-2 minutes to about 5-8, I have managed to get a working application (30 complete refresh items plus 44 queue based) that does not require the MGP process to be running or enabled, and needs no CVR/CLG/all_sid_logged_tables logging, so it looks to achieve all of the objectives plus a performance improvement overall on the main server system
    No final decisions yet; the queue based approach looks best, but would be problematical in doing the switchover without losing any client data, while mycompose is easy to overlay over the existing application but does not fully meet all objectives

  • Server-Side Wins with Shared Maps

    We are using shared maps/publication items in our solution. Using this approach, the MGP process composes the set of publication items once for each group of users instead of once for each user. This helps a great deal in reducing MGP compose times.
    What we are looking to implement, however, is server-side conflict resolution with shared maps. Oracle does not support this by default, so we were told to write custom conflict resolution for this to work.
    As it stands now, we have written custom DML for the conflict resolution. However, in the apply phase we are only given the group-id for the current sync user. What we need is the client-id of the current sync user.
    Using the client-id, we can check to see if the client (and not the entire group) has any dirty records in the out-queue. If there are dirty records, then the server wins and the conflict-records are not applied.

    shared items have the same selection criteria for all users (ie: no data subsetting). Therefore the same data is composed and placed in the out queue for each user that is part of the group.
    I assume you are trying to check if there is a copy of the incoming record waiting to go to the user, so you can compare the data on update. The problems you may have with this are
    1) data being present in the out queue (or not) depends as much on when the last compose ran as on the fact that the data has changed (can you hand on heart guarantee that the compose will have run if the user sends an update within half an hour of the server data being changed?)
    2) the same record (because it is shared) may have been downloaded to a number of different clients. What will you do if they all send back updates?
    normally shared items are used for reference data (ie: static), but if you want to use them in this way it may be better to do your conflict tests against the actual underlying data, rather than what is stored in the MGP temporary tables. This will provide more consistent results as it does not rely on the timing of the compose process, and will be consistent for each user transaction
    You know which publication item is being processed. From mobileadmin.C$all_publications you can pick up the schema name, object name and template query. You can then use this to compare the current server data against the incoming data
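    As a sketch of that lookup (the column names vary between versions, so describe the table first rather than trusting the ones implied here):
    -- the repository's record of each publication item: the owning schema,
    -- the server object name and the template (selection) query are held here
    select * from mobileadmin.c$all_publications;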

  • Help understanding fast and complete refresh

    hi
    -i have a table with 3 rows with col2='T' and 1 row with col2=null.
    -my query (fast refresh) is "select * from table where col2='T'"
    -after first sync the 3 rows are on my mobile.
    -then i set col2='T' in the 4th row and make a sync with msync, but on my mobile there are only 3 rows after the sync.
    with complete refresh all 4 rows would be on my mobile.
    but what will happen with changed data in my 3 rows on mobile at this complete refresh???
    i want this 4th row on my mobile too without change the changes in my 3 rows on mobile.
    thx
    rico

    If you are using fast refresh, then after changes are made on the server side, the MGP compose process must run to build the out queue data for the client before the change will be available for download.
    For tables with a fast refresh, if you look at the database you will see that when you publish, three triggers are created for the base table (<tablename>i/u/d) and a table CLG$<tablename> (there are others as well). When a change is made, it will fire the triggers. This will put the primary key of the changed record into the CLG$ table, and also create a record in mobileadmin.c$all_sid_logged_tables; this drives the list of objects to process for the compose cycle.
    the MGP compose sometimes stops processing for no apparent reason, so this may be a problem. Also, for snapshots on tables this works fine; if however the object is a view, or the column is derived (say from a function), the trigger just goes on the base table(s), and if the base table data does not change, no data will be composed.
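    A quick way to check whether the update to your 4th row has actually been logged for the next compose (a sketch - CLG$<tablename> follows the convention above, so substitute your real table and schema names):
    -- the change-log table created by the publish; after the update it
    -- should contain the primary key of the 4th row
    select * from clg$mytable;
    -- and the list of tables the next MGP compose cycle will process
    select * from mobileadmin.c$all_sid_logged_tables;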
    complete refreshes should not necessarily take longer to process (they may take longer to download if it is a big table), but they have the disadvantage of overwriting data just created on the client; this data will not re-appear until the next synchronise (after it has been posted to the server) - this is the main disadvantage

  • IOException during synchronization

    Hi all,
    I have a problem when I try to sync a publication. All publication items are "out of sync" (normal, it's the first sync).
    After 220 of 606 publication items, the sync failed. The error in mobile manager is:
    Synchronization session exception stack trace:
    java.io.IOException: L'entrée est fermée (the entry is closed)
         at oracle.lite.sync.resume.SpoolEntry.checkClosed(Unknown Source)
         at oracle.lite.sync.resume.SpoolEntry.write(Unknown Source)
         at oracle.lite.sync.resume.Client.send(Unknown Source)
         at oracle.lite.sync.HeliosSession.sendConv(Unknown Source)
         at oracle.lite.sync.HeliosSession.sendCompress(Unknown Source)
         at oracle.lite.sync.HeliosSession.sendDML(Unknown Source)
         at oracle.lite.sync.HeliosSession.sendI(Unknown Source)
         at oracle.lite.sync.HeliosSession.sendPayload(Unknown Source)
         at oracle.lite.sync.HeliosSession.sendSubData(Unknown Source)
         at oracle.lite.sync.HeliosSession.downloadSubs(Unknown Source)
         at oracle.lite.sync.HeliosSession.startSession(Unknown Source)
         at oracle.lite.sync.resume.Client$1.run(Unknown Source)
         at oracle.lite.sync.resume.ThreadPool$PoolTask.run(Unknown Source)
    When I remove publication item #201 (the one that was being synchronized when the error was raised), the sync ends without error.
    What are the potential problems with this type of error?
    Thanks for your help.

    1651 seconds is a long time for a sync, but 606 PIs is also a lot of objects. We have two quite comprehensive applications and they only have 29 and 82 PIs respectively.
    For this number of objects, the average sync time is 2.5 seconds per PI, which is still quite a long time. Are the PIs fast refresh? If so, what is the average MGP compose time for each user? If they are complete refresh, then it may be worth looking at queue based items to give you more control over the selection process
    You do not say what types of objects you use, but have you considered using updateable views on the server side as the base for the PIs to denormalise the data before download? If appropriate, this can make significant reductions in data traffic and selection speed. Use of the myCompose classes can improve the MGP compose time, but will not help much with the actual sync
