TimesTen's Query Latency and Commit Latency

Hi,
Can I get details on TimesTen's query latency (for read transactions) and commit latency (for update transactions)?
I was not able to find a detailed explanation in any of the TimesTen documentation sources.
Regards
Pratheej

Hi Pratheej,
There are a huge number of factors that affect the numbers you are asking about, so we cannot simply give a figure. We can quote figures for very specific workloads on very specific hardware, but that may not be so helpful to you. By far the best way is to try TimesTen out with your workload on your hardware.
Typical figures for TimesTen 11.2.2 running on a modern Intel Xeon 64-bit CPU with a clock speed of around 2.7 GHz in direct mode with the TPTBM benchmark that ships as part of TimesTen are around 2.4 microseconds for a key based single row SELECT and around 7.7 microseconds for a committed single row key based update.
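For reference, the two operations being timed have roughly this shape (the table and column names below are from memory of the TPTBM schema and should be treated as illustrative, not authoritative):

    -- Key-based single-row SELECT (the read transaction)
    SELECT directory_nb FROM vpn_users WHERE vpn_id = :id AND vpn_nb = :nb;

    -- Key-based single-row UPDATE plus commit (the update transaction)
    UPDATE vpn_users SET last_calling_party = :cli
     WHERE vpn_id = :id AND vpn_nb = :nb;
    COMMIT;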
Regards,
Chris

Similar Messages

  • How to query latency with HistoryVerbose 0

    We're using Transactional Replication in SQL 2008R2 SP2
    I'm trying to evaluate whether using a HistoryVerbose setting of 0 on the Log Reader and Distribution agents is a worthwhile modification to make, but I still need to know the latency of these agents at points in time.
    I know it's possible to use Windows Performance Monitor and enable a couple of replication counters but I have reason to want to do this in T-SQL.
    Tracer Tokens don't give you the answer until the token has finished replicating - it tells you about the problem AFTER the problem's happened.
    It's also possible to get Current Latency from the MSDistribution_History and MSLogReader_History tables, but this information isn't recorded when the HistoryVerbose setting is 0.
    It's possible to calculate what the latency is going to be based on the agent's transactions per second and the number of pending transactions, but querying these values can be costly on the Distribution database.
    So there may not be an answer to this other than Windows Performance Monitor, but has anyone managed to query Latency in a light-weight query which isn't reliant on the logging in MSDistribution_History and MSLogReader_History?
    Thank you if anyone can help!
    Tim Johnstone Senior Technical Consultant Computacenter (UK) Ltd

    Hi Tim,
    Does Replication Monitor meet your requirement? For transactional replication, Replication Monitor displays the number of transactions in the distribution database that have not yet been distributed to a Subscriber, and the estimated time for distributing them.
    If not, I suggest looking into the links below. They cover using tracer tokens to monitor replication latency.
    How to: Measure Latency and Validate Connections for Transactional Replication (Replication Transact-SQL Programming)
    http://technet.microsoft.com/en-us/library/ms147309(v=sql.105).aspx
    SQL Server Transactional Replication Latency Monitoring Tool
    http://gallery.technet.microsoft.com/scriptcenter/SQL-Server-Transactional-e34ed1e8
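    If you specifically want something you can run from T-SQL, one option worth testing is sp_replmonitorsubscriptionpendingcmds in the distribution database; it returns a pending-command count and an estimated processing time without depending on the agent history tables (the parameter values below are placeholders). It does query the distribution database, so do check that the cost is acceptable at your polling frequency:

    -- Returns pendingcmdcount and estimatedprocesstime (seconds)
    -- for one subscription; run in the distribution database.
    USE distribution;
    EXEC sys.sp_replmonitorsubscriptionpendingcmds
         @publisher         = N'MyPublisher',
         @publisher_db      = N'MyPublishedDb',
         @publication       = N'MyPublication',
         @subscriber        = N'MySubscriber',
         @subscriber_db     = N'MySubscriberDb',
         @subscription_type = 0; -- 0 = push, 1 = pull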
    Cheers,
    Ying

  • Query latency vs number of partitions

    I am trying to measure the average latency per object retrieval using the query approach. What I am finding is that as the number of server instances increases (a server being a single JVM running DefaultCacheServer with backup-count set to 1 and partition-count set to 1021), the latency increases significantly, as shown in the results below:
    Latency (microseconds) by number of server instances:

    Objects in cache   2 srvrs   4 srvrs   8 srvrs   16 srvrs
    1K                     593       730       889       1175
    10K                    592       720       880       1206
    50K                    580       724       879       1200
    100K                   585       726       877       1204
    The query code is as follows:
    final Filter filter = new EqualsFilter(idExtractor, key);
    final Filter limitedFilter = new LimitFilter(filter, 1);
    final Set spotPricesKeys = cache.keySet(limitedFilter);
    if (spotPricesKeys != null && spotPricesKeys.size() > 0) {
        spotPrice = (SpotPrice) cache.get(spotPricesKeys.iterator().next());
    }
    and
    private static ValueExtractor idExtractor = new KeyExtractor("intValue");
    Any idea why the latency is increasing with the added number of server instances?
    Thanks
    Adsen

    A filter-based query like your keySet() call is executed on every storage-enabled node, so as you add members each query fans out to more servers; that is presumably why your latency grows with cluster size. One way to limit the fan-out is a partitioning strategy, which is an "advanced" feature of Coherence. It works much like a hash function, but instead of a hash code it calculates the partition number that will be used for each key. You can use more or less any deterministic algorithm based only on key information. Let's assume that you have a cache with strings as keys (representing, for instance, a person's name) and you would like to partition based on the first letter. It could look something like the code below (this is just an example - in production code you would want to make sure the algorithm statistically spreads keys evenly across all the partitions in your cluster!):
    public class MyPartitionStrategy implements KeyPartitioningStrategy, Serializable {
        private PartitionedService partitionedService;

        public MyPartitionStrategy() {
        }

        public void init(final PartitionedService partitionedService) {
            this.partitionedService = partitionedService;
        }

        public int getKeyPartition(final Object key) {
            // Modulo by the partition count keeps the result in the valid
            // range 0 .. getPartitionCount() - 1; Math.abs guards against
            // negative hash codes.
            final int partitionCount = partitionedService.getPartitionCount();
            if (key instanceof String) {
                return ((String) key).charAt(0) % partitionCount;
            } else {
                return Math.abs(key.hashCode() % partitionCount);
            }
        }
    }
    With this strategy you would know that all keys starting with the same first letter end up in the same partition. You would specify the use of this strategy once in the Coherence configuration (I don't remember the element to use off the top of my head, though).
    Now assume that for some reason you want to run a query that should only apply to entries with a key starting with the letter 'a'. You could do that with a PartitionedFilter (you would have to write some code that determines which partition the key 'a' would end up in if it existed, and then restrict the query to that single partition). With this solution Coherence will only send the request to the single node holding the primary copy of the one partition holding all the "axxxx" keys, and not to all the nodes in the cluster, making the query scale close to linearly no matter how many nodes you have!
    A problem when the number of partitions grows is to find a partitioning strategy that allocates a useful set of objects into a single partition. If you for instance have 4000+ partitions and would like to divide addresses based on state in order to perform queries against all of your customers in a particular state, there is no obvious way to do that, since the number of states is much smaller than the number of partitions :-(
    This kind of partitioning can however be very useful in many cases (we have used it to good effect in my current project).
    In production code it is often some part of a composite key that is useful as a key partitioning criterion, and you could then, for instance, return the standard hash code of that key part modulo the partition count.
    Another possibly useful feature you may look at, when it comes to limiting the number of nodes involved in queries, is the KeyAssociatedFilter (if you use key association at all in your cache).
    /Magnus

  • Weird result of Oracle query before and after function-based index creation

    Hi All,
    Here is the unique situation we are facing after creating just an index.
    The query result before the index and the query result after the index do not match.
    This is a very illogical situation. Shidhar and I have done a lot of R&D and also tried to find information on Google, but we couldn't decipher the reason for it.
    I am giving you all the details about the query, index and tables in the following steps.
    Please let us know if anything is going wrong from our side, or if it is a bug at Oracle level - the rarest of possibilities, but a possibility.
    Step 1 :- Create table
    create table TEMP_COMP
    (
      ID VARCHAR2(10),
      GROUP_ID VARCHAR2(10),
      TRAN_DATE DATE,
      AMT_1 NUMBER,
      AMT_2 NUMBER,
      AMT_3 NUMBER
    );
    Step 2 :- Insert Sample data
    set feedback off
    set define off
    prompt Deleting TEMP_COMP...
    delete from TEMP_COMP;
    commit;
    prompt Loading TEMP_COMP...
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('01', 'G01', to_date('01-03-2007', 'dd-mm-yyyy'), 1, 11, 111);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('02', 'G01', to_date('02-03-2007', 'dd-mm-yyyy'), 2, 22, 222);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('03', 'G01', to_date('03-03-2007', 'dd-mm-yyyy'), 3, 33, 333);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('04', 'G01', to_date('04-03-2007', 'dd-mm-yyyy'), 4, 44, 444);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('05', 'G01', to_date('05-03-2007', 'dd-mm-yyyy'), 5, 55, 555);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('06', 'G01', to_date('01-03-2008', 'dd-mm-yyyy'), 6, 66, 666);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('07', 'G01', to_date('02-03-2008', 'dd-mm-yyyy'), 7, 77, 777);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('08', 'G01', to_date('03-03-2008', 'dd-mm-yyyy'), 8, 88, 888);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('09', 'G01', to_date('04-03-2008', 'dd-mm-yyyy'), 9, 99, 999);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('10', 'G01', to_date('05-03-2008', 'dd-mm-yyyy'), 10, 100, 1000);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('01', 'G01', to_date('01-03-2007', 'dd-mm-yyyy'), 1, 11, 111);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('02', 'G01', to_date('02-03-2007', 'dd-mm-yyyy'), 2, 22, 222);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('03', 'G01', to_date('03-03-2007', 'dd-mm-yyyy'), 3, 33, 333);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('04', 'G01', to_date('04-03-2007', 'dd-mm-yyyy'), 4, 44, 444);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('05', 'G01', to_date('05-03-2007', 'dd-mm-yyyy'), 5, 55, 555);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('06', 'G01', to_date('01-03-2008', 'dd-mm-yyyy'), 6, 66, 666);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('07', 'G01', to_date('02-03-2008', 'dd-mm-yyyy'), 7, 77, 777);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('08', 'G01', to_date('03-03-2008', 'dd-mm-yyyy'), 8, 88, 888);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('09', 'G01', to_date('04-03-2008', 'dd-mm-yyyy'), 9, 99, 999);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('10', 'G01', to_date('05-03-2008', 'dd-mm-yyyy'), 10, 100, 1000);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('01', 'G02', to_date('01-03-2007', 'dd-mm-yyyy'), 1, 11, 111);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('02', 'G02', to_date('02-03-2007', 'dd-mm-yyyy'), 2, 22, 222);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('03', 'G02', to_date('03-03-2007', 'dd-mm-yyyy'), 3, 33, 333);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('04', 'G02', to_date('04-03-2007', 'dd-mm-yyyy'), 4, 44, 444);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('05', 'G02', to_date('05-03-2007', 'dd-mm-yyyy'), 5, 55, 555);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('06', 'G02', to_date('01-03-2008', 'dd-mm-yyyy'), 6, 66, 666);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('07', 'G02', to_date('02-03-2008', 'dd-mm-yyyy'), 7, 77, 777);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('08', 'G02', to_date('03-03-2008', 'dd-mm-yyyy'), 8, 88, 888);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('09', 'G02', to_date('04-03-2008', 'dd-mm-yyyy'), 9, 99, 999);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('10', 'G02', to_date('05-03-2008', 'dd-mm-yyyy'), 10, 100, 1000);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('01', 'G03', to_date('01-03-2007', 'dd-mm-yyyy'), 1, 11, 111);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('02', 'G03', to_date('02-03-2007', 'dd-mm-yyyy'), 2, 22, 222);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('03', 'G03', to_date('03-03-2007', 'dd-mm-yyyy'), 3, 33, 333);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('04', 'G03', to_date('04-03-2007', 'dd-mm-yyyy'), 4, 44, 444);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('05', 'G03', to_date('05-03-2007', 'dd-mm-yyyy'), 5, 55, 555);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('06', 'G03', to_date('01-03-2008', 'dd-mm-yyyy'), 6, 66, 666);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('07', 'G03', to_date('02-03-2008', 'dd-mm-yyyy'), 7, 77, 777);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('08', 'G03', to_date('03-03-2008', 'dd-mm-yyyy'), 8, 88, 888);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('09', 'G03', to_date('04-03-2008', 'dd-mm-yyyy'), 9, 99, 999);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('10', 'G03', to_date('05-03-2008', 'dd-mm-yyyy'), 10, 100, 1000);
    insert into TEMP_COMP (ID, GROUP_ID, TRAN_DATE, AMT_1, AMT_2, AMT_3)
    values ('11', 'G01', to_date('03-03-2008', 'dd-mm-yyyy'), 100, 200, 300);
    commit;
    prompt 41 records loaded
    set feedback on
    set define on
    prompt Done.
    Step 3 :- Execute the query.
    SELECT GROUP_ID
         , SUM(LAST_YR_REV) as "Year_2007_Amt"
         , SUM(CURR_YR_REV) as "Year_2008_Amt"
    FROM (
          SELECT GROUP_ID,
                 CASE WHEN TO_CHAR(TRAN_DATE,'YYYYMM') BETWEEN '200701' AND '200712' THEN SUM(AMT_1) ELSE 0 END AS LAST_YR_REV,
                 CASE WHEN TO_CHAR(TRAN_DATE,'YYYYMM') BETWEEN '200801' AND '200812' THEN SUM(AMT_1) ELSE 0 END AS CURR_YR_REV
          FROM TEMP_COMP t
          WHERE GROUP_ID = 'G01'
          AND TO_CHAR(TRAN_DATE,'YYYYMM') BETWEEN '200601' AND '200912'
          GROUP BY GROUP_ID, TRAN_DATE
         )
    GROUP BY GROUP_ID;
    The result of the above query:
    GROUP_ID Year_2007_Amt Year_2008_Amt
    G01 30 180
    Step 4 : Create composite index
    create index GROUP_ID_TRAN_DATE_IDX on TEMP_COMP (GROUP_ID, TO_CHAR(TRAN_DATE,'YYYYMM'));
    Step 5 : Execute once again query from step 3.
    SELECT GROUP_ID
         , SUM(LAST_YR_REV) as "Year_2007_Amt"
         , SUM(CURR_YR_REV) as "Year_2008_Amt"
    FROM (
          SELECT GROUP_ID,
                 CASE WHEN TO_CHAR(TRAN_DATE,'YYYYMM') BETWEEN '200701' AND '200712' THEN SUM(AMT_1) ELSE 0 END AS LAST_YR_REV,
                 CASE WHEN TO_CHAR(TRAN_DATE,'YYYYMM') BETWEEN '200801' AND '200812' THEN SUM(AMT_1) ELSE 0 END AS CURR_YR_REV
          FROM TEMP_COMP t
          WHERE GROUP_ID = 'G01'
          AND TO_CHAR(TRAN_DATE,'YYYYMM') BETWEEN '200601' AND '200912'
          GROUP BY GROUP_ID, TRAN_DATE
         )
    GROUP BY GROUP_ID;
    The result of the above query:
    GROUP_ID Year_2007_Amt Year_2008_Amt
    G01 0 210
    Thanks
    Sunil

    I just wanted to make a comment. I believe the predicates in both your queries are flawed. You convert a date column to a character string and then pick the converted result between two sets of characters:
    TO_CHAR(TRAN_DATE,'YYYYMM') BETWEEN '200601' AND '200912'
    For a proper date range comparison it should be coded like this:
    TRAN_DATE BETWEEN TO_DATE('200601','YYYYMM') AND TO_DATE('200912','YYYYMM')
    That will eliminate the need for the function-based index you created. You should now be able to create a regular (B*Tree) index on the TRAN_DATE column.
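    One boundary subtlety with that rewrite: TO_DATE('200912','YYYYMM') is midnight on 1 December 2009, so a BETWEEN ending there excludes the rest of December. A half-open range avoids that while still allowing a plain index on TRAN_DATE (a sketch against the table above):

    -- Half-open date range: includes all of December 2009 and can use
    -- a regular B*Tree index on TRAN_DATE.
    SELECT GROUP_ID, TRAN_DATE, AMT_1
      FROM TEMP_COMP
     WHERE GROUP_ID = 'G01'
       AND TRAN_DATE >= TO_DATE('200601', 'YYYYMM')
       AND TRAN_DATE <  ADD_MONTHS(TO_DATE('200912', 'YYYYMM'), 1);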
    Also, now that we have the structure of your tables and sample data, what business question are you trying to answer? I think there is a better query that can be written if we know the requirements.
    Hope this helps!

  • Need to Return immediately and commit the App Module on a different thread

    I have an action that I want to return fast (immediately) but the server processing takes longer than acceptable. The results of the operation don't matter to the page submitting it and I want it to be able to navigate away even if the operation is not complete. I want to either be able to send a non-blocking server event from the browser or on the server side start a new thread that performs the operation allowing the original thread to return immediately. The new thread would need access to an Application Module in order to commit data. How would I go about accomplishing this?
    Some thoughts
    I've tried creating a ConcurrentLinkedQueue and putting the DataControl on the queue; then in the other thread I pull it off the queue, process, and commit the data. This works unless the page is navigated away from - then calling dc.getApplicationModule() returns null.
    I thought about using createRootApplicationModule in the new thread (since the new thread has no context) but don't know how that would work.
    This is the code in the run method of the new thread. In this example, I'm adding data to the app module in the original thread and committing the data in a new thread.
    (like I said, it works most of the time.)
    Object[] req = (Object[]) que.poll();
    DCDataControl dc = (DCDataControl) req[0];
    try {
        ApplicationModule am = dc.getApplicationModule();
        if (am != null) {
            am.getTransaction().commit();
        } else {
            System.out.println("AM:null unable to commit ");
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (dc != null) { dc.resetState(); } // release app module
    }

    Thanks for the replies. I am aware of the inherent risks of running a separate thread within a managed container.
    The use case is a performance logging operation. We have an internal web app used by a network of franchises with over 1000 users, and we log response time and performance statistics to the database. When the user clicks to navigate or commit data, the response time that the user experiences is logged after the page has fully rendered, either through a PPR or a full submit. This is done by submitting an ADFCustomEvent from javascript on the page after rendering is complete. The event sends up the time difference from when the user first clicked to when the page was fully rendered. This information is then merged with logged events stored on the user's Session that show the name and response time of every query executed during the previous request. Depending on the page this could be up to half a dozen to a dozen or more queries.

    The logging operation as experienced by the browser is generally fast (<200ms) but can sometimes take a second or more when the database gets busy. Even half a second is too long, as it makes the app appear sluggish if the user can't type or click immediately after the page has finished rendering. The logged data is aggregated so we know exactly how much of the page load was due to a slow browser/network, how much was database time, webservice call time, etc. If it's due to a slow database we can drill down and see which query is the culprit. These performance metrics are critical to operations and are charted throughout the day, so we know exactly what our users are experiencing. All of our users use a custom Firefox client that we control. Using this logging framework we were able to determine that upgrading to a Firefox 4.0 based client cut browser render time by more than half a second on average. We can also tell what type of hardware the user is running, so we can place the blame for poor performance where appropriate. We have determined that pages render considerably faster on Windows 7 than on Windows 98 with the same hardware.

    We are moving the logging tables off of our Exadata database to a separate box to remove that load from the application database. Since we expect the other database not to perform as well, we don't want it to affect the user experience, hence the need to log asynchronously. I would like to put the data on a queue and have a background daemon process read from the queue and commit to the database. I would like the daemon thread to be able to use BC components. I would prefer not to resort to using a web service because of the inherent overhead. The logging operation is not a long operation but it is high frequency, so it should be as streamlined as possible. The load is spread over 6 servers with 4 JVMs each (24 weblogic instances). I know it's possible to use BC components from a plain Servlet (which runs on its own thread), so what I want is something like a servlet thread that loops forever processing my logging queue.
    One other method I am investigating is using my own non-blocking ajax call that calls a servlet to perform the logging. I will need to pull out the timestamp contained within a client side ADF component, along with the page's ctrl-state variable that is included with every ADF request, as it uses this as the key to get to the data on the session. ADF really needs a non-blocking ADFCustomEvent for this type of request (send and don't care about the response).
    The client component with the server listener looks like this
    <af:outputText value="#{pageFlowScope.perfClientTS}" visible="false"
    id="perfClientTSField" clientComponent="true">
    <af:serverListener type="logPerfData" method="#{perfLog.logPerfDataAction}"/>
    </af:outputText>
    The script that queues the ajax call after the page loads looks like this
    AdfCustomEvent.queue(perfClientTSField, "logPerfData", {
        typeId : typeId,
        subTypeId : subTypeId,
        responseTime1 : new String(responseTime1),
        responseTime2 : new String(responseTime2),
        openedVia : via
    },
    true);
    I also tried calling the noResponseExpected() method on the event before queuing it but it still blocked the UI and caused an additional side effect in that the client sent two ajax requests instead of one. It somehow thought something on the client side needed to be synced with the server.
    email me and I can send a doc with more details about how our performance logging framework works.
    Edited by: Don Kleppinger on Mar 14, 2012 2:52 PM

  • Transactional triggers and commit processing

    I have only been an active member of this forum for a couple of weeks and have tried to contribute to a few postings. But I have also noticed lots of postings relating to the use of DML statements inside what I would call non-commit-time triggers.
    What I mean here is, for example, a WHEN-BUTTON-PRESSED trigger that does inserts, updates, deletes, followed by the COMMIT_FORM procedure.
    I thought it might be useful to draw attention to the possible pitfalls of this approach. I'm not saying that this approach is wrong - far from it, but sometimes it means that people aren't leveraging the functionality that you get "for free" from Forms.
    By coding some DML followed by a Commit_Form, you are not getting any rollback functionality from Forms. For example:
    INSERT INTO A
    INSERT INTO B
    COMMIT_FORM;
    Let's imagine that the insert into A worked but the insert into B failed for some reason. The user gets an unhandled exception. Now imagine that the cause of the error is cleared up, and the user presses the button (or whatever the invocation action was) again, and the commit works. You will have 2 records in A and 1 in B. You may not have expected that, and without coding your own rollback/savepoint that's what you would get.
    You also (1) don't get a Working... message without coding it yourself, and (2) get the "No changes to commit" message; removing this is the subject of many threads on this site.
    If you have a block which is based over a table which you need to update, then all of the DML statements ought to be coded inside PRE/POST/ON-INSERT/UPDATE/DELETE triggers, or other Forms transactional triggers (pre-database-commit etc.). If you need to change other tables then put the DML for them inside those triggers. Then you don't have to worry about which records the user changed - if the user didn't touch them then Forms won't fire the triggers. And when you call the COMMIT_FORM procedure, Forms will fire the triggers for you, and if any of them fail it will roll back to where the COMMIT_FORM started.
    An example in 1 thread I saw this morning, was a Delete button, which was to delete the current record in a multi-record block. This button performed the “DELETE FROM <table> WHERE <key=:block.key>” followed by a Commit. Then of course the block needed to be re-queried to reflect the fact that the record had gone. Using the power of Forms to simply call the DELETE_RECORD procedure would have achieved the same result without needing to requery the block. And Forms would do the delete by ROWID, the fastest way of doing it.
    If you don’t have a block based over a table, then you can consider creating a dummy block which uses Transactional Triggers. Code an ON-INSERT on that block which includes the DML you want to execute. Then in the trigger initiating the commit processing you would do something like:
    :DUMMY_BLOCK.ITEM1 := 'X';
    COMMIT_FORM;
    IF (NOT FORM_SUCCESS) OR :SYSTEM.FORM_STATUS != 'QUERY' THEN
      -- the commit failed
    END IF;
    Then you’ll get a nice Working.. message and full rollback control.
    I think the moral of what I’m trying to get across is to use the power of Forms in the way it handles the transactions. Whilst we moan about it, it is actually quite good at that!
    I hope this posting is taken in a positive light, I am certainly not trying to teach anyone to “suck eggs”.
    PS. I find it ironic that you were prevented from coding DML statements outside of commit time triggers, in Forms 2.3!

    Thank you, Kevin. Very informative.

  • External Task - prepared and commit method

    Hey guys,
    I have a problem with External Task. I don't know how to implement the prepare and commit methods for use with a J2EE Application Server.
    Does anyone have an example?
    Thanks

    I am having some issues with the External Task prepare and commit methods.
    I am pointing the interactive activity to an external task, and in the external task I am getting instanceId, participantId and activity as query parameters. Using those query parameters I am initializing the ProcessServiceSession and then making a call to the prepareActivate method, as below.
    I am getting a query string instanceId value like "%2FScorecardChallenge%23Default-1.0%2F4%2F0".
    I tried to call InstanceId.getProcessId(instanceId), but this getProcessId method returns a null value.
    Does anyone have an idea why I am getting a null processId value for the instanceId string "%2FScorecardChallenge%23Default-1.0%2F4%2F0"?
    My sample code looks like this:
    String instanceId = this.getRequest().getParameter("instanceId");
    String activity = this.getRequest().getParameter("activity");
    String participantId = this.getRequest().getParameter("participantId");
    this.getRequest().setAttribute("activity", activity);
    this.getRequest().setAttribute("isFromBpm", "true");
    try {
        System.out.println("########### BPM params " + instanceId + " : " + activity + " : " + participantId);
        ProcessServiceSession bpmSession = ConnectPAPI.createSession(participantId, this.getRequest().getRemoteHost());
        // Note: instanceId arrives URL-encoded ("%2F" = "/", "%23" = "#"),
        // which may be why getProcessId cannot parse it.
        String pid = InstanceId.getProcessId(instanceId);
        System.out.println("Process ID value " + pid);
        System.out.println("Instance ID value " + InstanceId.getInstanceId(instanceId));
        System.out.println("Instance ID value " + InstanceId.getInstanceIn(instanceId));
        if (bpmSession != null) {
            try {
                Arguments args = bpmSession.activityPrepare(activity, instanceId, Arguments.create());
                Map argument = (Map) args.getArguments();
                Iterator it = argument.entrySet().iterator();
                while (it.hasNext()) {
                    Map.Entry pairs = (Map.Entry) it.next();
                    String key = (String) pairs.getKey();
                    Object value = pairs.getValue();
                    System.out.println("########### BPM args " + key + " : " + value);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    But my problem here is that I am getting a null pointer exception like the one below (I found that the null pointer exception occurs because the processId value is null).
    Stack trace output:
    Process ID value null
    Instance ID value %2FScorecardChallenge%23Default-1.0%2F4%2F0
    Instance ID value -1
    bpmSesssion instance details abstract class fuego.papi.ProcessServiceSesion
    bpmSesssion instance is not null
    java.lang.NullPointerException
         at fuego.directory.DirDeployedProcess.isConsolidatedId(DirDeployedProcess.java:114)
         at fuego.papi.impl.ProcessServiceSessionImpl.getProcessControl(ProcessServiceSessionImpl.java:2328)
         at fuego.papi.impl.ProcessServiceSessionImpl.processGetInstance(ProcessServiceSessionImpl.java:1926)
         at fuego.papi.impl.ProcessServiceSessionImpl.activityPrepare(ProcessServiceSessionImpl.java:1128)
         at com.rollsroyce.gsp.poc.samplePOC.SamplePOCController.begin(SamplePOCController.java:57)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:879)
         at org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
         at org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
         at org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
         at org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
         at org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:52)
         at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
         at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.access$201(PageFlowRequestProcessor.java:97)
         at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor$ActionRunner.execute(PageFlowRequestProcessor.java:2044)
         at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors$WrapActionInterceptorChain.continueChain(ActionInterceptors.java:64)
         at org.apache.beehive.netui.pageflow.interceptor.action.ActionInterceptor.wrapAction(ActionInterceptor.java:184)
         at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors$WrapActionInterceptorChain.invoke(ActionInterceptors.java:50)
         at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors$WrapActionInterceptorChain.continueChain(ActionInterceptors.java:58)
         at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors.wrapAction(ActionInterceptors.java:87)
         at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processActionPerform(PageFlowRequestProcessor.java:2116)
         at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
         at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:556)
         at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:853)
         at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:631)
         at org.apache.beehive.netui.pageflow.PageFlowActionServlet.process(PageFlowActionServlet.java:158)
         at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
    Please help me if anyone knows the problem.

  • ADF Form Submision and Commit on Same button

    HI All,
    I have created a jsff which contains 2 forms. My requirement is a single button which first submits the form and then performs the commit, all in the same button click.
    I am able to do it in 2 steps, as I have a Submit button and the Commit operation available.
    But my problem is that Submit is not exposed as an operation, so how can I write a method in my bean to make sure both tasks happen on the same button?
    Regards
    Harsh

    Hey John,
    The Commit button on my page does not get enabled unless I submit the form.
    Sorry if I am sounding stupid - I am kind of new to ADF.
    Regards
    Harsh

  • Can we use Begin Tran and commit Tran

    Hi,
    Can we use Begin Tran and Commit Tran in SQL user-defined functions?
    Thanks,

    That's also documented:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_5009.htm#SQLRF01208
    "Restrictions on User-Defined Functions User-defined functions are subject to the following restrictions:
    * User-defined functions cannot be used in situations that require an unchanging definition. Thus, you cannot use user-defined functions:
    o In a CHECK constraint clause of a CREATE TABLE or ALTER TABLE statement
    o In a DEFAULT clause of a CREATE TABLE or ALTER TABLE statement
    *In addition, when a function is called from within a query or DML statement, the function cannot*:
    o Have OUT or IN OUT parameters
    o *Commit or roll back the current transaction, create a savepoint or roll back to a savepoint, or alter the session or the system. DDL statements implicitly commit the current transaction, so a user-defined function cannot execute any DDL statements.*
    o Write to the database, if the function is being called from a SELECT statement. However, a function called from a subquery in a DML statement can write to the database.
    o Write to the same table that is being modified by the statement from which the function is called, if the function is called from a DML statement.
    "

  • Differences between commit work and commit work & wait

    Hi people,
    I have a theory question:
    What differences are there between commit work and commit work & wait?
    Thx

    Hi,
    <b>COMMIT WORK:</b>
    This statement will apply any outstanding database updates and wait until they have actually been put on the database before proceeding to the next statement.
    An ordinary commit work will initiate the process to update the databases in a separate task and will press on in your abap.
    COMMIT WORK: ( Asynchronous)
    Your program does not wait for any acknowledgement. it just start executing the next statment after COMMIT WORK.
    <b>COMMIT WORK and WAIT</b>: (Synchronous)
    Whereas For <b>COMMIT WORK and WAIT</b>, the system waits for the acknowledgment, and then moves to the next statement.
    Hope this resolves your query.
    Reward all the helpful answers.
    Regards

  • Problem with sqlldr and commit

    Hi,
    I have a problem with sqlldr and commit.
    I have a simple table with one column [ col_id number(6) not null ]. The column "col_id" is the primary key of the table. I have one file with 100,000 records (numbers from 0 to 99,999).
    I want to load the file into the table with sqlldr (SQL*Loader), but I want to commit only if all records are loaded. If one record is discarded, I want all records of the file discarded.
    The problem is that in conventional path the commit happens in batches (by default every 64 rows), so all-or-nothing loading isn't possible, and in direct path sqlldr disables the primary key :(
    Thanks
    Sorry for the bad English

    This is my table:
    DROP TABLE TEST_SQLLOADER;
    CREATE TABLE TEST_SQLLOADER
    (
      COL_ID NUMBER NOT NULL,
      CONSTRAINT TEST_SQLLOADER_PK PRIMARY KEY (COL_ID)
    );
    This is my ctlfile ( test_sql_loader.ctl )
    OPTIONS
    (
      DIRECT=false
      ,DISCARDMAX=1
      ,ERRORS=0
      ,ROWS=100000
    )
    load data
    infile './test_sql_loader.csv'
    append
    into table TEST_SQLLOADER
    fields terminated by "," optionally enclosed by '"'
    ( col_id )
    test_sql_loader.csv
    0
    1
    2
    3
    99999
    i run sqlloader
    sqlldr xxx/yyy@orcl control=test_sql_loader.ctl log=test_sql_loader.log
    output on the screen
    Commit point reached - logical record count 92256
    Commit point reached - logical record count 93248
    Commit point reached - logical record count 94240
    Commit point reached - logical record count 95232
    Commit point reached - logical record count 96224
    Commit point reached - logical record count 97216
    Commit point reached - logical record count 98208
    Commit point reached - logical record count 99200
    Commit point reached - logical record count 100000
    Logfile
    SQL*Loader: Release 11.2.0.1.0 - Production on Sat Oct 3 14:50:17 2009
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Control File: test_sql_loader.ctl
    Data File: ./test_sql_loader.csv
    Bad File: test_sql_loader.bad
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 0
    Bind array: 100000 rows, maximum of 256000 bytes
    Continuation: none specified
    Path used: Conventional
    Table TEST_SQLLOADER, loaded from every logical record.
    Insert option in effect for this table: APPEND
    Column Name Position Len Term Encl Datatype
    COL_ID FIRST * , O(") CHARACTER
    value used for ROWS parameter changed from 100000 to 992
    Table TEST_SQLLOADER:
    100000 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array: 255936 bytes(992 rows)
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records read: 100000
    Total logical records rejected: 0
    Total logical records discarded: 0
    Run began on Sat Oct 03 14:50:17 2009
    Run ended on Sat Oct 03 14:50:18 2009
    Elapsed time was: 00:00:01.09
    CPU time was: 00:00:00.06
    The commit is every 992 rows (the 256,000-byte bind array only holds 992 rows, which is why the log shows the ROWS parameter changed from 100000 to 992).
    If I have an error on record 993, the first 992 rows are already committed :(
    Edited by: inter1908 on 3-Oct-2009 15.00
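    One way to get the all-or-nothing behavior asked for here (an approach, not something suggested in the thread) is to load into an unconstrained staging table and then publish the rows with a single INSERT ... SELECT, which succeeds or fails as one statement and commits exactly once. A sketch, with TEST_SQLLOADER_STG as an illustrative staging table name:

    -- 1. Staging table with no constraints; point the ctl file at this table.
    CREATE TABLE TEST_SQLLOADER_STG (COL_ID NUMBER);

    -- 2. sqlldr loads TEST_SQLLOADER_STG (its intermediate commits only
    --    affect the staging table).

    -- 3. Publish atomically: if any row violates TEST_SQLLOADER_PK the whole
    --    statement rolls back and nothing reaches the real table.
    INSERT INTO TEST_SQLLOADER (COL_ID)
    SELECT COL_ID FROM TEST_SQLLOADER_STG;
    COMMIT;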

  • BEx Variable defined in BI 7.0 Query Designer and using in BW 3.5 Analyzer

    Hi,
    I have defined a variable in the BI 7.0 Query Designer for a characteristic which is present in the characteristic restrictions area, to restrict it. Ideally, when that query is executed, a pop-up box should open to enter a value for the variable. But when that query is opened in the BW 3.5 BEx Analyzer, instead of the pop-up box the error message "Error specify a value for the Variable 'Variable Name'" is displayed. Please help me to resolve this problem.
    <i><b>Details of Variable:</b></i>
    Type: Characteristic Value
    Processed by: Manual Entry/Default Value
    Variable is : Mandatory
    and Ready for input
    No default values are defined for this variable.
    Thanks
    Hari Prasad.

    Hi,
    Once a query has been opened and changed in the BI 7.0 Query Designer, you cannot use the same query in the lower version.
    That's the reason the error message is coming.
    If you need the 3.5 query, keep it as it is; copy the same query under a different name, open the copy in BI 7.0, and make the changes there.
    Thanks,
    Debasish

  • Lock and Commit work in INBOUND IDOC

    Hi Experts,
    My problem is about locking and commit work.
    I need to receive IDocs for goods receipts against purchase orders.
    For one purchase order I can receive many goods receipt IDocs at the same time; when the first one comes in it locks the purchase order, and the IDocs that arrive after it fail with errors because the purchase order is locked.
    The problem is not serialization (the sequence is correct), but the lock.
    Any idea how to fix this issue? (Maybe there is some standard setting??)
    Cheers
    Boris

    Hello guys,
    the packet size is already set to 1 but the problem still occurs... and where can I find this setting: "in Customizing choose Engineering Change Management -> Define statuses for master record"?
    Anyway, in the inbound function module I tried to check the lock object with this sample code:
          DO 30 TIMES.
            CALL FUNCTION 'ENQUEUE_EMEKKOS'
             EXPORTING
               mode_ekko            = 'S'
               mandt                = sy-mandt
               ebeln                = goodsmvt_item-po_number
               _scope               = '2'
             EXCEPTIONS
               foreign_lock         = 1
               system_failure       = 2
               OTHERS               = 3.
            IF sy-subrc <> 0.
              WAIT UP TO 1 SECONDS.
            ELSE.
              EXIT.
            ENDIF.
          ENDDO.
    but even with this, the problem still occurs...
    thanks
    Boris

  • Issues with Bex query structures and Crystal Reports/Webi

    Hi experts,
    I'm having an issue with Bex Query structures and nulls. I've built a Crystal Report against a Bex query that uses a Bex Query structure. The structure looks like the following
    Budget $
    Budget %
    Actual $
    Actual %
    Budget YTD
    etc
    if I drag the structure into the Crystal Report detail section with a key figure it displays like this
    Budget $     <null>
    Budget %     <null>
    Actual $     300
    Actual %     85
    Budget YTD     250
    the null values are displayed (and this is what is required). However if I filter using a Record selection or group on a profit centre then the nulls along with the associated structure component are not displayed.
    Actual $     300
    Actual %     85
    Budget YTD     250
    Webi is also behaving similarly. Can anyone explain why the above is happening and suggest a solution either on the Bex side of things or on the Crystal Reports side of things? I'm confused as to why nulls are displayed in the first example and not the second.
    Business Objects Edge 3.1 SP2
    SAP Int Kit SP2
    OS: Linux
    BW 701 Level 6
    Crystal Reports 2008 V1
    Thanks
    Keith

    Hi,
    Crystal Reports and Web Intelligence will only show data which is in the cube. Without the grouping you may see an actual 0 or null entry, but after changing the selection / grouping in the report, the data no longer includes such an entry.

  • How to get query results in comma-delimited text or Excel file?

    Does anybody know how to get query results in a comma-delimited text file or an Excel file? I tried spool abc.txt, but the result showed some ------ lines.
    Thanks

    Try doing this in your sql scripts:
    set heading off
    set pagesize 0
    set linesize 4000
    set feedback off
    set verify off
    set trimspool on
    set colsep ","
    spool output.txt
    select * from dual;  -- or whatever you are querying
    spool off
    There may be a couple of other set statements that you could add, but this should get you started in the right direction.
