On Commit data is written to incorrect row

Running JDev 11.1.2.4
Our ADF application is set up so there is a Tree control in the first facet of a panel splitter and a panelStretchLayout in the second.  In the center facet of the panelStretchLayout we have a switcher component that shows different regions (forms) based on what is selected in the tree.  We are running into a case where the user is editing a third-level record and, upon commit, the data is committed to a different id in the database.  The save/commit is done from a managed bean that calls the Commit binding and then refreshes the tree in case the name changed.  We have been unable to track down the steps to reproduce this, and in production the users do not necessarily recognize the issue right away.
I am looking for suggestions on possible debug options, or whether anyone else has run into a similar issue.
Thank you
Rudy

Sure:
  private boolean commit() {
    DCIteratorBinding ipIter = (DCIteratorBinding) getBindings().get("AllIntPropertiesIterator");
    Row currentRow = ipIter.getCurrentRow();
    if (currentRow == null) {
      JboException ex = new JboException("The current IntProperties row is null.");
      BindingContext bctx = BindingContext.getCurrent();
      ((DCBindingContainer) bctx.getCurrentBindingsEntry()).reportException(ex);
      return false;
    }
    String origIntId = "" + currentRow.getAttribute("Intid");
    logger.warning("Commit: intid=" + origIntId);
    OperationBinding operationBinding = getBindings().getOperationBinding("Commit");
    operationBinding.execute();
    if (!operationBinding.getErrors().isEmpty()) {
      JboException ex = new JboException(operationBinding.getErrors().toString());
      BindingContext bctx = BindingContext.getCurrent();
      ((DCBindingContainer) bctx.getCurrentBindingsEntry()).reportException(ex);
      return false;
    }
    // check if the IntProperties row changed after the commit
    currentRow = ipIter.getCurrentRow();
    String currentIntId = "" + currentRow.getAttribute("Intid");
    if (!currentIntId.equals(origIntId)) {
      logger.severe("!! Commit - orig id != current id (" + origIntId + " / " + currentIntId + "); setting row back to " + origIntId);
      // show msg
      JboException ex = new JboException("The Integration Id is not consistent: original id (" + origIntId + ") != current id (" + currentIntId + "). Setting it back to the original id (" + origIntId + ").");
      BindingContext bctx = BindingContext.getCurrent();
      ((DCBindingContainer) bctx.getCurrentBindingsEntry()).reportException(ex);
      // set it back
      ipIter.setCurrentRowWithKeyValue(origIntId);
      // check it again
      currentRow = ipIter.getCurrentRow();
      currentIntId = "" + currentRow.getAttribute("Intid");
      if (!currentIntId.equals(origIntId)) {
        logger.severe("The original id (" + origIntId + ") != current id (" + currentIntId + ") after setting it back.");
        // show msg
        JboException ex2 = new JboException("The original id (" + origIntId + ") != current id (" + currentIntId + ") after setting it back.");
        BindingContext bctx2 = BindingContext.getCurrent();
        ((DCBindingContainer) bctx2.getCurrentBindingsEntry()).reportException(ex2);
        return false;
      }
    }
    return true;
  }
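
One possible debug aid (my suggestion, not something from the original thread): log the selected row's key every time the tree selection changes, so you can see exactly when the iterator's current row drifts away from the selected node. Running with the ADF BC diagnostic flag -Djbo.debugoutput=console can also help correlate the binding-layer activity. Here is a minimal sketch of such a selection listener, assuming the usual tree-binding setup (the class and method names are illustrative):

  import java.util.logging.Logger;
  import oracle.adf.view.rich.component.rich.data.RichTree;
  import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;
  import org.apache.myfaces.trinidad.event.SelectionEvent;

  public class TreeSelectionLogger {
    private static final Logger logger = Logger.getLogger(TreeSelectionLogger.class.getName());

    // Registered on the tree as selectionListener="#{...treeSelectionLogger.selectionLogged}"
    public void selectionLogged(SelectionEvent selectionEvent) {
      RichTree tree = (RichTree) selectionEvent.getSource();
      Object oldKey = tree.getRowKey();
      for (Object rowKey : selectionEvent.getAddedSet()) {
        tree.setRowKey(rowKey);
        JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding) tree.getRowData();
        logger.warning("Tree selection moved to key path " + node.getKeyPath()
                       + ", row " + node.getRow());
      }
      tree.setRowKey(oldKey); // restore the tree's stamping position
    }
  }

Comparing these log lines with the warning already emitted in commit() should show whether the current row drifts at selection time or during the commit/tree refresh.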

Similar Messages

  • Reducing data rate written to an LVM file

    I have a LabVIEW 7.1 question regarding data logging. I have designed a system that reads the pressure of six 4-20 mA pressure transducers. This data is then filtered to remove any noise and displayed to the user on the computer's screen.
    I am currently reading the data at 1000 Hz using the DAQ Assistant, which seems to give a clear picture of the pressure changes. My supervisor has asked me to allow the system to log the results for what may be 3 or 4 days. As you can imagine, 48 hours at 1000 samples a second is a lot of data! I need some method to reduce the data rate written to the file. Ideally, I would like to take one second of data, find the highest reading, and then write a single entry with a timestamp interval of 1 second. So, for each hour, there would be 3600 rows with 7 columns (1 timestamp and 6 sensors). It would be nice if the 1-second interval could be adjustable, so that if long-term logging is to be done (a week, for example), the interval can be changed to 5 seconds or whatever.
    I have tried reducing the data read rate as much as possible, but it seems that 100 Hz is the smallest that is possible. That is still too high... as the hard drive would fill up in no time.
    Any help in this area would be appreciated. I have tried a few things, all to no avail. I have included a copy of the code for anyone to review.
    Hardware:
    - 1 X P3 Laptop running LabVIEW 7.1 data acquisition software
    - 1 X NI DAQCard-6062E interface PC Card
    - 1 X NI SC-2345 data acquisition hardware with
    - 3 X SCC-CI20 current input module
    - 6 X Omega PX 0-600PSIA 4-20mA pressure transducer
    thanks so much!
    Andrew
    Attachments:
    Testing.vi 812 KB

    You would have to talk with your supervisor first to determine what he intends to do with the log data and what degree of resolution he actually needs. You probably want the 1000 Hz sampling rate so you can do decent filtering on the signal (hopefully your pressures aren't actually changing at a faster rate than that). I'm assuming you are returning a single result for those 1000 readings for each sensor.

    Specify some file logging duration of n seconds (or minutes or whatever). Between file writes, pop each filtered measurement into an array (either 1 array of 6 dimensions or 6 1-dimension arrays). After n seconds have passed, determine the min, avg, and max values from each sensor (array) and log those with your timestamp. So if you set your log timer for 1 minute, you would log a single min, max, and average reading of 60 readings for 6 sensors (this would only require 1 row with, say, a timestamp and 3x6 (18) columns for each sensor's min, max, and average data). After a 24-hour period you will have logged 1440 rows of data. In 3 days that would only be 4320 rows of data.

    All that is as easy as using a timer and a case structure around your logging function, triggered every n seconds. Everything else you're doing would be the same. None of this really has much to do with LabVIEW; it is more a logical explanation of how and when to acquire and log your data.

    What method are you using for storing your data? CSV, BIN, etc.? If you also want to display the data in a chart, I would recommend charting the same data you're logging, otherwise your chart will probably crash your system at 1000 samples/second for 60 hours. Once again, it depends on how your supervisor is analyzing the logged data. Make your log duration programmable and change it until he (or she) is happy. If Excel is being used, your maximum log timer would be 9 seconds (equates to ~6.67 logs per minute, ~400 logs per hour, ~9600 logs per day, for a total of 28,800 logs (rows) for 3 days -- Excel is limited to 32000 rows).
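
    In case it helps to see that aggregation spelled out as code, here is a minimal sketch of the same min/avg/max decimation in Java rather than LabVIEW (the class name, console output, and clock-based window are illustrative assumptions); in LabVIEW the equivalent is the timer plus case structure around the write, as described above:

      import java.util.ArrayList;
      import java.util.List;

      /** Minimal sketch: buffer filtered readings, log min/avg/max every n seconds. */
      public class DecimatingLogger {
          private final int sensorCount;
          private final long logIntervalMillis;          // programmable log duration
          private final List<double[]> buffer = new ArrayList<>();
          private long windowStart = System.currentTimeMillis();

          public DecimatingLogger(int sensorCount, long logIntervalMillis) {
              this.sensorCount = sensorCount;
              this.logIntervalMillis = logIntervalMillis;
          }

          /** Called once per filtered measurement (one value per sensor). */
          public void addSample(double[] sample) {
              buffer.add(sample.clone());
              long now = System.currentTimeMillis();
              if (now - windowStart >= logIntervalMillis) {
                  logWindow(now);
                  buffer.clear();
                  windowStart = now;
              }
          }

          /** One row per window: timestamp + min/avg/max per sensor (3 * sensorCount columns). */
          private void logWindow(long timestamp) {
              StringBuilder row = new StringBuilder(String.valueOf(timestamp));
              for (int s = 0; s < sensorCount; s++) {
                  double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY, sum = 0;
                  for (double[] sample : buffer) {
                      min = Math.min(min, sample[s]);
                      max = Math.max(max, sample[s]);
                      sum += sample[s];
                  }
                  row.append(',').append(min)
                     .append(',').append(sum / buffer.size())
                     .append(',').append(max);
              }
              System.out.println(row);   // in practice, append to the log file instead
          }
      }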

  • How to identify date & time of insertion of rows in a table

    Hi,
    Is it possible to identify the date & time of insertion of rows in a table?
    for example:
    Table name: emp
    I have rows like this
    emp_code name dept
    pr01 ram edp
    ac05 gowri civil
    pr02 sam pro
    I want to know the date and time of the insertion of this row (ac05 gowri civil).
    Could you please help me.....
    Thanks in advance....

    psram wrote:
    sorry for the confusion. I don't want to store date and time. I said that for example only.
    I have a table which consists of thousands of rows. I want to know the insertion date & time of a particular row. Is it possible?

    So, if I have a table that stores a load of employee numbers, do you think I could get the database to tell me their names? If you don't store the information, you can't query the information.
    Ok, there are some dribs and drabs of information available from the back end of the database via the SCNs, archive logs etc., but those sorts of things disappear or get overwritten as time goes by. The only way to know the information is to ensure you store it in the first place.
    So, in answer to your question, no, there isn't a true and reliable way to get the date/time that rows were inserted into the table, unless you have chosen to store it alongside the data.
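
    For completeness, those "dribs and drabs" can sometimes be reached through Oracle's ORA_ROWSCN pseudocolumn together with SCN_TO_TIMESTAMP, with heavy caveats: the SCN is tracked per block unless the table was created with ROWDEPENDENCIES, and SCN_TO_TIMESTAMP raises ORA-08181 once the SCN is too old to map. A rough JDBC sketch reusing the emp example above (connection details are placeholders):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      public class RowScnDemo {
          public static void main(String[] args) throws Exception {
              // connection details are placeholders
              try (Connection conn = DriverManager.getConnection(
                       "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                   // ORA_ROWSCN is block-level unless the table has ROWDEPENDENCIES;
                   // SCN_TO_TIMESTAMP fails once the SCN is too old to map.
                   PreparedStatement ps = conn.prepareStatement(
                       "SELECT ORA_ROWSCN, SCN_TO_TIMESTAMP(ORA_ROWSCN) AS approx_time "
                     + "FROM emp WHERE emp_code = ?")) {
                  ps.setString(1, "ac05");
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          System.out.println("SCN " + rs.getLong(1)
                              + " ~ last inserted/updated around " + rs.getTimestamp(2));
                      }
                  }
              }
          }
      }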

  • To find the date type fields in the row and validate those date fields

    TYPES : BEGIN OF ty_mara,
              matnr TYPE mara-matnr,
              ersda TYPE mara-ersda,
              ernam TYPE mara-ernam,
              laeda TYPE mara-laeda,
              mtart TYPE mara-mtart,
            END OF ty_mara.
    DATA : it_mara  TYPE STANDARD TABLE OF ty_mara,
           it_mara1 TYPE STANDARD TABLE OF ty_mara,
           wa_mara  TYPE ty_mara,
           c_data   TYPE c.                       " field type from DESCRIBE ('D' = date)
    LOOP AT it_mara INTO wa_mara.
      DESCRIBE FIELD wa_mara-ersda TYPE c_data.
      IF c_data EQ 'D'.
        CALL FUNCTION 'DATE_CHECK_PLAUSIBILITY'
          EXPORTING
            date                      = wa_mara-ersda
          EXCEPTIONS
            plausibility_check_failed = 1
            OTHERS                    = 2.
        IF sy-subrc NE 0.
          " not a plausible date - assign the initial value
          wa_mara-ersda = '00000000'.
        ENDIF.
        APPEND wa_mara TO it_mara1.
        WRITE : wa_mara-matnr, wa_mara-ersda.
      ENDIF.
    ENDLOOP.
    This issue is about how to find the date-type fields in a row and validate those date fields. If a field is not a valid date, I have to assign an initial value to it.
    I've tried that for a single field using DESCRIBE FIELD. Please help me do that for all fields.

    Hi Sam,
     I believe we discussed the same issue in the thread below. Can you please refer to it?
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/d93e16ff-c123-4b36-b60b-60ccd34f6ca7/calculate-time-differences-in-infopath?forum=sharepointcustomizationprevious
    If it's not helping you, please let us know.
    Sekar - Our life is short, so help others to grow

  • Can Data Be Written To A Called PDF When Using Adobe Reader 9?

    Can data be written and saved to another called pdf when using Adobe READER 9?

    You'll have to expand on your description for a full answer - but all things considered, it would be possible.
    If the "called" (this is one of the things you need to clarify - do you mean a PDF opened via the JavaScript openDoc method?) PDF discloses itself, then the "calling" PDF can execute an importXFDF or similar call on that PDF as long as they are Reader Extended.
    Past that, you'll need to give us more information on exactly what you're trying to do.

  • I want the max date but only look at rows with a certain category value.

    I want a way to get the max date but only look at rows with a certain category value, ignoring the other rows.  My detail table contains expenditures, including date (col A) and category (col D); the number of rows will increase with expenditures over time.  My summary table will have a cell for each category and will display the last expense date for that category, using a functionality that I must ask of you, dear community.
    I am using the latest Numbers on an iPad (4) with iOS 6.
    Secondarily, I would like to add another cell in the summary table with the value (col E) of the last expense for each category.
    Thank you,
    Warren

    ...later...
    With the addition of an auxiliary column to the Main table, a second header row to the Most recent table, and a minor modification to the formula on the second table, the tables can handle a range of dates, set by entering the first and last date into A1 and B1 respectively of the summary table, Most recent.
    Note that the selected range, shown with a green background in the auxiliary column, does not contain any category B expenses. Using LOOKUP, this would result in a repeat of the January 6 expense being listed in this row. Switching to VLOOKUP, which can be set to require an exact match, allows the result shown: if there are no expenses in a given category, the formula returns "none" (or whatever message you substitute for "none" in the formula in that column).
    Formulas:
    Main::A2: =IF(OR(B<Most recent :: $A$1,B>Most recent :: $B$1),"x"&E,E)
    Fill down to the end of column A.
    This column must be located on the left side of the table (ie. must be column A), but may be hidden.
    Most recent::A2: =IFERROR(VLOOKUP($D,Main :: $A:$D,COLUMN()+1,FALSE),"none")
    "FALSE" will display as "Exact match" in Numbers's formula editor.
    Fill down to the end of the table and right to column C.
    Regards,
    Barry

  • How data is written in table

    Hi All,
    I have a table which contains 2 main columns apart from other columns. These two columns are REC_ID and ENTRY_DATE_TIME.
    In this table, data is inserted through a data load process using SQL Loader. REC_ID is populated through a sequence and ENTRY_DATE_TIME is populated with SYSDATE.
    select rec_id, to_char(entry_date_time, 'dd-mon-yyyy hh24:mi:ss') entry_date_time from itpcs_grt_oa_tran_rdpk_stock order by 1;
    REC_ID ENTRY_DATE_TIME
    1     12-oct-2012 07:06:23
    2     12-oct-2012 07:06:31
    3     12-oct-2012 07:06:35
    4     12-oct-2012 07:06:21
    5     12-oct-2012 07:06:32
    Looking at the data, the time for REC_ID 4 is earlier than the time for REC_ID 3.
    Now the question: if REC_ID is populated through a sequence, then sequence value 3 must have been created before REC_ID 4, so why is the ENTRY_DATE_TIME of REC_ID 4 less than that of REC_ID 3?
    Further to the question, can anyone please explain to me how data is written to tables? What steps are performed at the database level when any DML (insert) is performed?
    My DB version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit

    583003 wrote:
    (full question quoted above)

    Does the DB reside in/on RAC?
    Note that REC_ID 4 has a timestamp before all of REC_ID 1 - 3. A sequence only guarantees unique values, not that rows arrive in sequence order: one session can draw the next value while a second session draws the following value and still performs its insert (where SYSDATE is evaluated) first. On RAC this effect is amplified, because each instance caches its own range of sequence values.
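
    The interleaving is easy to reproduce outside the database. In this purely illustrative Java simulation, two "sessions" draw values from a shared counter and then record their insert time; run it a few times and the counter order and timestamp order will disagree, just like the REC_ID data above:

      import java.util.Map;
      import java.util.concurrent.ConcurrentSkipListMap;
      import java.util.concurrent.atomic.AtomicLong;

      public class SequenceRace {
          public static void main(String[] args) throws InterruptedException {
              AtomicLong sequence = new AtomicLong();            // stands in for the Oracle sequence
              Map<Long, Long> insertTimes = new ConcurrentSkipListMap<>();

              Runnable session = () -> {
                  for (int i = 0; i < 5; i++) {
                      long recId = sequence.incrementAndGet();   // value is drawn first...
                      try { Thread.sleep((long) (Math.random() * 5)); }
                      catch (InterruptedException e) { return; }
                      insertTimes.put(recId, System.nanoTime()); // ...the insert happens later
                  }
              };

              Thread a = new Thread(session), b = new Thread(session);
              a.start(); b.start();
              a.join(); b.join();

              long prev = Long.MIN_VALUE;
              for (Map.Entry<Long, Long> e : insertTimes.entrySet()) {
                  System.out.printf("REC_ID %d inserted at %d%s%n", e.getKey(), e.getValue(),
                      e.getValue() < prev ? "   <-- earlier than the previous REC_ID" : "");
                  prev = e.getValue();
              }
          }
      }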

  • How to increase rate at which data is written to file?

    I have a program that reads data at 200kHz and I would like the program to write the data at the same rate. However, the fastest I seem to be able to get my program to write is about 5kHz. I have been using the DAQmx Read and the Format into File functions to read and write the data.
    I have tried moving the write function into a separate loop, so that the data would be written to a file after the data collection was complete. However, this did not change the rate at which data was written. Any suggestions for increasing the rate at which data is being written to a file? Thanks for looking over my problem!
    Attachments:
    SampleWrite_Read.vi 58 KB

    Well, writing to a file is always slower, since it takes some time to access the hard drive. I noticed in your program that you are writing into an ASCII file. That is also slower than writing to a binary file. There are several examples that ship with LabVIEW that allow you to do High-Speed Datalogging (I actually believe that is the name of the examples). Those examples come in pairs: one does the datalogging, and another helps you read the file. I recommend taking a look at them.
    The previous suggestion by Les Hammer is a great idea: instead of acquiring 1 sample at a time, try acquiring 100 or 1000 samples and writing them to the file.
    I hope that helps!
    GValdes
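
    As a language-neutral illustration of those two points (binary instead of ASCII, and block writes instead of per-sample writes), here is a minimal Java sketch; the block size, iteration count, and file name are arbitrary assumptions:

      import java.io.BufferedOutputStream;
      import java.io.DataOutputStream;
      import java.io.FileOutputStream;
      import java.io.IOException;

      public class BlockWriter {
          public static void main(String[] args) throws IOException {
              int blockSize = 1000;                    // write 1000 samples per call, not 1
              double[] block = new double[blockSize];

              // BufferedOutputStream batches small writes into large disk writes;
              // DataOutputStream stores raw binary doubles (8 bytes) instead of ASCII text.
              try (DataOutputStream out = new DataOutputStream(
                       new BufferedOutputStream(new FileOutputStream("samples.bin"), 1 << 16))) {
                  for (int iteration = 0; iteration < 200; iteration++) {
                      acquireBlock(block);             // stand-in for a DAQmx Read of N samples
                      for (double sample : block) {
                          out.writeDouble(sample);     // buffered, so this rarely touches disk
                      }
                  }
              }
          }

          /** Placeholder for the acquisition call; fills the block with dummy data. */
          private static void acquireBlock(double[] block) {
              for (int i = 0; i < block.length; i++) {
                  block[i] = Math.sin(i * 0.01);
              }
          }
      }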

  • How to aggregate a column based date column (for weekly single row)?


    Hi,
    Consider the statement below for a daily basis, which is OK:
    SELECT ID, DATE, SUM(AMOUNT) FROM TABLE_NAME GROUP BY ID, DATE ORDER BY ID, DATE
    Like the statement above, I want output on a weekly, fortnightly, and monthly basis. How do I do this? Need your help...
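
    Assuming an Oracle database (as elsewhere in this thread), one approach is to group on a truncated copy of the date: TRUNC(date_col, 'IW') gives the ISO week start, TRUNC(date_col, 'MM') the month start, and a fortnight takes a little date arithmetic. A rough JDBC sketch with placeholder connection details, where date_col stands in for the question's DATE column (DATE itself is a reserved word in Oracle):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class WeeklyTotals {
          public static void main(String[] args) throws Exception {
              try (Connection conn = DriverManager.getConnection(
                       "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "pass");
                   Statement stmt = conn.createStatement();
                   // TRUNC(d,'IW') = start of ISO week; use 'MM' for monthly totals.
                   // Fortnightly needs arithmetic, e.g. grouping on
                   // TRUNC(date_col) - MOD(TRUNC(date_col) - DATE '2000-01-03', 14).
                   ResultSet rs = stmt.executeQuery(
                       "SELECT id, TRUNC(date_col, 'IW') AS week_start, SUM(amount) AS total "
                     + "FROM table_name "
                     + "GROUP BY id, TRUNC(date_col, 'IW') "
                     + "ORDER BY id, week_start")) {
                  while (rs.next()) {
                      System.out.println(rs.getInt("id") + "  "
                          + rs.getDate("week_start") + "  " + rs.getBigDecimal("total"));
                  }
              }
          }
      }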

  • Compressing data when written to disk not when written to cache

    I am doing a lot of compression before the data is written to JE, and this compression has a lot of CPU overhead. I was wondering whether the data could be kept uncompressed in the JE cache, and compressed only when JE decides to persist it.
    Message was edited by:
    Chad Boucher

    Hi Chad,
    No, there is no way to compress on disk but not in the cache. We may add this capability at a future date, but it would likely be part of built-in compression. There is no firm plan for doing this work.
    I understand that you may not want to incur the CPU overhead of compress and decompress every time an item is retrieved from or stored into the cache. However, keeping items compressed in the cache has the advantage of using less cache memory -- more items fit in cache. This is, for many applications, at least as important as compression on disk. So there is a trade-off.
    --mark
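
    Until built-in compression exists, the usual workaround is to compress in application code before each put and decompress after each get, accepting the trade-off Mark describes (items are then compressed in the cache as well). A minimal sketch using java.util.zip; wrapping the resulting bytes in a DatabaseEntry for Database.put/get is left out:

      import java.io.ByteArrayOutputStream;
      import java.util.zip.Deflater;
      import java.util.zip.Inflater;

      /** Minimal compress/decompress helpers for record values stored as byte arrays. */
      public class ValueCompression {

          public static byte[] compress(byte[] raw) {
              Deflater deflater = new Deflater(Deflater.BEST_SPEED);
              deflater.setInput(raw);
              deflater.finish();
              ByteArrayOutputStream out = new ByteArrayOutputStream(raw.length / 2 + 16);
              byte[] chunk = new byte[4096];
              while (!deflater.finished()) {
                  out.write(chunk, 0, deflater.deflate(chunk));
              }
              deflater.end();
              return out.toByteArray();
          }

          public static byte[] decompress(byte[] compressed) throws Exception {
              Inflater inflater = new Inflater();
              inflater.setInput(compressed);
              ByteArrayOutputStream out = new ByteArrayOutputStream(compressed.length * 2 + 16);
              byte[] chunk = new byte[4096];
              while (!inflater.finished()) {
                  out.write(chunk, 0, inflater.inflate(chunk));
              }
              inflater.end();
              return out.toByteArray();
          }
      }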

  • Commit data and put it to database

    Hi,
    Anyone know how to put data from objects to database table?
    Now I have a simple application which gets data from a table and modifies some rows, but I don't know how to put this result into another table in the database.
    Thanks for any suggestions.
    I have to add that I read data from one database, modify it, and insert new rows into another database (on another physical machine). So I probably need to create two connections. But do I need to map the tables to objects in the second database if the tables are similar in both databases?
    Message was edited by:
    tutus

    What version of TopLink are you using? Are you using JPA? I'll assume EclipseLink JPA.
    The easiest thing to do is to create two persistence units (PU), one for each database schema. Using orm.xml mapping you can map the same class to both the source and target table. If the tables are pretty similar you could map the common mappings with annotations and use xml to map the things that are different.
    Using this approach you can read an object from the source PU, modify it and then persist it in the second PU.
    --Shaun
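
    A minimal sketch of that read-from-one-PU, persist-into-the-other flow; the persistence-unit names and the Employee entity are made up for illustration:

      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Id;
      import javax.persistence.Persistence;

      @Entity
      class Employee {                 // hypothetical entity, mapped in both persistence units
          @Id Long id;
          String name;
      }

      public class CopyBetweenDatabases {
          public static void main(String[] args) {
              // Two persistence units, one per database, defined in persistence.xml
              EntityManagerFactory sourceEmf = Persistence.createEntityManagerFactory("sourcePU");
              EntityManagerFactory targetEmf = Persistence.createEntityManagerFactory("targetPU");
              EntityManager source = sourceEmf.createEntityManager();
              EntityManager target = targetEmf.createEntityManager();
              try {
                  Employee emp = source.find(Employee.class, 42L);  // read from the source DB
                  emp.name = emp.name.toUpperCase();                // modify it

                  target.getTransaction().begin();
                  target.merge(emp);           // copy the detached state into the target DB
                  target.getTransaction().commit();
              } finally {
                  source.close();
                  target.close();
                  sourceEmf.close();
                  targetEmf.close();
              }
          }
      }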

  • ORA-27063: number of bytes read/written is incorrect

    Hello -
    I am getting this error because my filesystem is at 100%:
    ORA-01114: IO error writing block to file 202 (block # 423324)
    ORA-27063: number of bytes read/written is incorrect
    However, when I query the dba_data_files and v$datafile views, I do not see a reference to file 202. Where can I get this information?
    Thanks,
    Mike

    Mike,
    Looks like you got a solution; however, just FYI, tempfiles are numbered starting with db_files + 1, so likely your value of db_files is 200 and the error occurred on tempfile number 2. You can confirm by querying v$tempfile or dba_temp_files.
    -Mark

  • Audit Commit Delay exceeded, written a copy to OS Audit Trail

    OS=Redhat 5
    DB=11.1.0.7.0
    We are running a 2-node RAC and are seeing the following error in the alert log file. Based on my research this may be a bug, but I am not convinced it is. Also, this error is only occurring on one of the nodes. Any ideas?
    AUD: Audit Commit Delay exceeded, written a copy to OS Audit Trail
    Mon Feb 11 18:23:54 2013
    AUD: Audit Commit Delay exceeded, written a copy to OS Audit Trail
    Mon Feb 11 18:23:54 2013
    (the same two lines repeat many more times with the same timestamp)

    After further investigation, it turns out there was something wrong with the Network Attached Storage. As a result, the database was not able to write to this storage, which explains why we were seeing the Audit Commit Delay exceeded error.

  • Is there any difference between ways to commit data?

    Hello friends at www.oracle.com,
    when you need to commit data in Forms using PL/SQL, is there any difference between the COMMIT and FORMS_DDL('COMMIT') instructions?
    Thanks for your answer, and best regards,
    Franklin Gongalves Jr.

    commit is a synonym for commit_form, which means any changes in the form will be posted (firing any pre/post insert/update/delete triggers).
    If there are no changes in the form, then you get a message to that effect, and nothing happens at the database end.
    forms_ddl('commit') sends a commit statement straight to the database. Changes made to the form are not posted to the database first.
    It's generally used when you've made 'behind the scenes' database changes (e.g. update/insert/delete statements in the body of a trigger/function/procedure) but no changes to base-table data, and it ensures the changes are committed at the database, irrespective of the nasty message about 'no changes to commit'.

  • How to commit data when item is being validated

    The problem is that I have an input field on the form. Whenever I make a change in it I need to validate the change first, and if it validates I then need to make certain changes in the database and commit.
    I have written code in when-validate-item, which is called whenever I make a change in the input field. In this trigger I first check that the data entry is correct and then make certain changes in the database, but when I finally try to commit the changes I get an error message about an illegal restriction: the COMMIT procedure cannot be used in when-validate-item.
    Can anyone help me with how I can validate an input field first and then make transactions in the database and commit?
    thanks

    Hi
    I do not know exactly what you want to do, so use this with care.
    If you want to commit only the database (not Forms itself), you can use:
    forms_ddl('commit');
    If you need to commit the form itself, in when-validate-item you can create a timer, and in the when-timer-expired trigger do the commit.
    Luis Cabral

Maybe you are looking for

  • Forms 6i, open default mail client and add attachment to message

    Hi, I would like to use some functionality in my Forms application. I want to call some action that opens the default mail client on the Windows platform and adds some attachment to the message. I want to do this for my reports. I am exporting reports to PDF files.

  • What u hear problems in vista 32bit

    I just installed the newer variety of the x-fi notebook card with the wireless broadcast function, and for whatever reason installing the drivers failed the first time, I tried a clean install after that, and the

  • Compaq Presario CQ57 Laptop System Bios recovery problem

    A customer of mine has a Compaq Presario CQ57 that has developed a BIOS problem. I am unsure if it had had a BIOS update through HP Support Assistant or if it just decided to start for no reason but it now intermittently decides to recover the System

  • Mouse cursor acceleration - any way to fix?

    Just got my first Mac and I love it. I only have one problem - when I connect an external mouse via USB, the cursor tracking is terrible. I have the speed cranked up all the way in System Preferences, but it is still too slow when I'm not shoving the

  • Syncing troubles with the new iTunes software.

    Why won't CDs I put in iTunes sync to my iDevices?  I'll burn a CD to my iTunes library, no problem.  But if I want to put that CD on my iPod/iPad, it won't do it.  HELP!