Kodo 3.0 Bug with simple optimistic transaction

Hi guys,
Using the attached code, I find that a very simple situation that should result in an optimistic
verification exception does not, because Kodo 3.0 is reloading the persistent fields of the
data object when it becomes transactional. It doesn't always do this, just usually for this
example. For other, more complicated data classes, this apparently does not happen. This bug
appears when using MySQL with InnoDB and isolation = read-committed. It also occurs when using
Oracle 8i.
The output within the zip shows just one TestToggles run. But because of the reloading, the
exception does not occur when the PNT (persistent-nontransactional) version is stale prior to applying the modification.
As far as I can tell, the only difference between the failing case and the non-failing case is
that the non-failing data class has more fields, including relations.
David
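
For readers who do not have the zip, here is a schematic reconstruction of the kind of scenario being described. It is not the attached TestToggles code; SimpleToggle is a hypothetical stand-in for the simple persistent class, and the persistence metadata, enhancement, and connection setup are elided:

import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

public class StaleUpdateSketch {

    /** Hypothetical persistent class (metadata and enhancement elided). */
    public static class SimpleToggle {
        private boolean flag;
        public boolean getFlag() { return flag; }
        public void setFlag(boolean flag) { this.flag = flag; }
    }

    /** oid identifies an existing SimpleToggle row; pmf is configured for optimistic transactions. */
    public static void demonstrate(PersistenceManagerFactory pmf, Object oid) {
        PersistenceManager pm1 = pmf.getPersistenceManager();
        PersistenceManager pm2 = pmf.getPersistenceManager();
        try {
            pm1.currentTransaction().begin();
            SimpleToggle stale = (SimpleToggle) pm1.getObjectById(oid, true);

            // A second user updates the same instance and commits first.
            pm2.currentTransaction().begin();
            SimpleToggle fresh = (SimpleToggle) pm2.getObjectById(oid, true);
            fresh.setFlag(!fresh.getFlag());
            pm2.currentTransaction().commit();

            // pm1's copy is now stale. Modifying it makes it transactional;
            // this commit should throw an optimistic verification exception,
            // but per the report Kodo 3.0 reloads the persistent fields at
            // that transition and the commit succeeds silently.
            stale.setFlag(!stale.getFlag());
            pm1.currentTransaction().commit();
        } finally {
            pm1.close();
            pm2.close();
        }
    }
}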

Thanks. Setting the "kodo.RetainValuesInOptimistic: true" appears to provide a workaround until it
is fixed.
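
For reference, this is how the workaround looks in a kodo.properties file. Only the kodo.RetainValuesInOptimistic key comes from this thread (and per Abe's reply below, 3.0.1 reverts to the old behavior anyway); the javax.jdo.option.* keys are the usual optimistic settings, as in the property listings further down this page:

# workaround for the 3.0 field-reload behavior described above
kodo.RetainValuesInOptimistic: true
javax.jdo.option.Optimistic: true
javax.jdo.option.RetainValues: true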
Abe White wrote:
Thanks for the heads-up, David. Believe it or not, I thought I was
correcting a bug when I changed this for 3.0. Silly me.
I think there is a quantum law in software development that goes something like this:
there is a minimum number of bugs that a piece of software can have. For any software
of significant size, the minimum is greater than zero. Of course, this law is counter-intuitive,
but experience verifies it again and again.
See the following bug report for the whole story:
http://bugzilla.solarmetric.com/show_bug.cgi?id=512
We'll have it go back to the previous behavior in 3.0.1.

Similar Messages

  • Optimistic Locking - Possible bug with Weblogic

    After extensive testing of a J2EE application I'm involved with, it would appear there exists a problem with WebLogic's optimistic locking (OL) mechanism.
    The exact problem is as follows:
    The ejbCreate and ejbRemove methods of a particular entity bean are as follows:
    public abstract class ProductBean implements javax.ejb.EntityBean {

        public Object ejbCreate() throws CreateException {
            FolderEntityHome folderEH = FolderComponent.getFolderEntityHome();
            folderEH.create(getId());
            return null; // CMP bean: the container supplies the primary key
        }

        public void ejbRemove() throws RemoveException {
            FolderEntityHome folderEH = FolderComponent.getFolderEntityHome();
            try {
                folderEH.findByProductId(getId());
                // (folder removal elided in the original post)
            } catch (InvalidAccessRightsException iare) {
                throw new RemoveException();
            }
        }
    }
    Previously, before OL was added, when a RemoveException was thrown it would cause the ejbRemove to fail, so both the product and the folder would still exist.
    After adding OL, when an InvalidAccessRightsException occurs, giving rise to a RemoveException being thrown, WebLogic simply ignores the RemoveException and deletes the Product even though the Folder could not be deleted. This causes system errors when users try to access the folder, which contains a link to a product that no longer exists!
    Is anyone aware of this particular problem? Is it indeed a bug with Weblogic? For clarity, I believe I am using version 8.1 and the way in which I have implemented OL is to use an additional version column in the underlying tables for all entity beans.

    In case anyone's interested, it appears from further testing that the problem I've been having with the way the RemoveException behaves is down to the difference in how version 6.0 treats this exception compared to version 8.1!
    In version 6.0, if you threw a RemoveException at any point in ejbRemove(), the entity would not be removed!
    In version 8.1, something weird happens. If a RemoveException is thrown in ejbRemove() and, at some point during the same transaction up to the commit, the entity on which the exception is thrown is accessed (through a finder), then the entity is still deleted! If, on the other hand, a RemoveException is thrown and no access/modification is attempted on that entity within the same transaction, then at the point of commit the entity is not removed!
    Seems this is indeed a problem which needs to be addressed in future releases.
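    One observation that is not from the original thread but may explain the behaviour: under EJB 2.x, RemoveException is an application exception, so throwing it does not by itself mark the transaction for rollback, and the container may still complete the remove. If the goal is to guarantee that the Product survives whenever the Folder cannot be deleted, the bean can mark the transaction for rollback explicitly before throwing. A minimal sketch, reusing the types from the post above and assuming container-managed transactions:

    import javax.ejb.EntityContext;
    import javax.ejb.RemoveException;

    public abstract class ProductBean implements javax.ejb.EntityBean {
        private EntityContext ctx;

        public void setEntityContext(EntityContext context) { this.ctx = context; }
        public void unsetEntityContext() { this.ctx = null; }

        public void ejbRemove() throws RemoveException {
            FolderEntityHome folderEH = FolderComponent.getFolderEntityHome();
            try {
                folderEH.findByProductId(getId());
                // (folder removal as in the original code)
            } catch (InvalidAccessRightsException iare) {
                // RemoveException alone is an application exception and does not
                // force a rollback; mark the transaction so the container cannot
                // go on and delete the Product row.
                ctx.setRollbackOnly();
                throw new RemoveException("Folder could not be removed");
            }
        }
        // remaining lifecycle callbacks and CMP accessors omitted
    }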

  • Plz tell me BDC  CALL TRANSACTION steps with simple example

    Hi,
    Please tell me the steps for BDC CALL TRANSACTION, with a simple example.

    Hi,
    BATCH DATA COMMUNICATION
    About Data Transfer In R/3 System
    When a company decides to implement SAP R/3 to manage business-critical data, it usually does not start from a no-data situation. Normally, an SAP R/3 project comes in to replace or complement an existing application.
    In the process of replacing current applications and transferring application data, two situations might occur:
    - The first is when the application data to be replaced is transferred at once, and only once.
    - The second is when data has to be transferred periodically from external systems to SAP and vice versa.
    In either case there is a period of time when information has to be transferred from the existing application to SAP R/3, and often this process is repetitive.
    The SAP system offers the following methods for transferring data from non-SAP or legacy systems; they are collectively called "batch input" or "batch data communication":
    1. SESSION METHOD
    2. CALL TRANSACTION
    3. DIRECT INPUT
    Advantages offered by BATCH INPUT method:
    1. Can process large data volumes in batch.
    2. Can be planned and submitted in the background.
    3. No manual interaction is required when data is transferred.
    4. Data integrity is maintained because all data reaches the tables through transactions, so batch input data is subjected to all the usual checks and validations.
    To implement one of the supported data transfers, you must often write a program that exports the data from your non-SAP system. This program, known as a "data transfer" program, must map the data from the external system into the data structure required by the SAP batch input program.
    The batch input program must build all of the input to execute the SAP transaction.
    Two main steps are required:
    - To build an internal table containing every screen and every field to be filled in during the execution of an SAP transaction.
    - To pass the table to SAP for processing.
    Prerequisite for Data Transfer Program
    Writing a Data Transfer Program involves the following prerequisites:
    Analyzing data from local file
    Analyzing transaction
    Analyzing the transaction involves the following steps:
    - The transaction code, if you do not already know it.
    - Which fields require input, i.e., which are mandatory.
    - Which fields you can allow to default to standard values.
    - The names, types, and lengths of the fields that are used by the transaction.
    - The screen numbers and the name of the module pool program behind the particular transaction.
    To analyze a transaction:
    - Start the transaction by menu or by entering the transaction code in the command box.
    (You can determine the transaction name by choosing System -> Status.)
    - Step through the transaction, entering the data that will be required for processing your batch input data.
    - On each screen, note the program name and screen (dynpro) number.
    (dynpro = dynamic program, i.e., the screen)
    - Display these by choosing System -> Status. The relevant fields are Program (dynpro) and Dynpro number. If pop-up windows occur during execution, you can get the program name and screen number by pressing F1 on any field or button on the screen.
    The technical info pop-up shows not only the field information but also the program and screen.
    - For each field, check box, and radio button on each screen, press F1 (help) and then choose Technical Info.
    Note the following information:
    - The field name for batch input, which you'll find in its own box.
    - The length and data type of the field. You can display this information by double-clicking on the Data Element field.
    - Find out the identification code for each function (button or menu) that you must execute to process the batch input data (or to go to a new screen).
    Place the cursor on the button or menu entry while holding down the left mouse button, then press F1.
    In the pop-up window that follows, choose Technical Info and note the code shown in the Function field.
    You can also run any function that is assigned to a function key by way of the function key number. To display the list of available function keys, click the right mouse button and note the key number assigned to the function you want to run.
    Once you have the program name, screen number, and field name (screen field name), you can start writing the DATA TRANSFER program.
    Declaring internal tables
    First, an internal table with a structure similar to the local file.
    Second, an internal table like BDCDATA.
    The data from internal table is not transferred directly to database table, it has to go through transaction. You need to pass data to particular screen and to particular screen-field. Data is passed to transaction in particular format, hence there is a need for batch input structure.
    The batch input structure stores the data that is to be entered into SAP system and the actions that are necessary to process the data. The batch input structure is used by all of the batch input methods. You can use the same structure for all types of batch input, regardless of whether you are creating a session in the batch input queue or using CALL TRANSACTION.
    This structure is BDCDATA, which can contain the batch input data for only a single run of a transaction. The typical processing loop in a program is as follows:
    - Create a BDCDATA structure.
    - Write the structure out to a session or process it with CALL TRANSACTION USING; and then
    - Create a BDCDATA structure for the next transaction that is to be processed.
    Within a BDCDATA structure, organize the data of screens in a transaction. Each screen that is processed in the course of a transaction must be identified with a BDCDATA record. This record uses the Program, Dynpro, and Dynbegin fields of the structure.
    The screen identifier record is followed by a separate BDCDATA record for each value to be entered into a field. These records use the FNAM and FVAL fields of the BDCDATA structure. Values to be entered in a field can be any of the following:
    - Data that is entered into screen fields.
    - Function codes that are entered into the command field. Such function codes execute functions in a transaction, such as Save or Enter.
    The BDCDATA structure contains the following fields:
    - PROGRAM: name of the module pool program associated with the screen. Set this field only for the first record for the screen.
    - DYNPRO: screen number. Set this field only in the first record for the screen.
    - DYNBEGIN: indicates the first record for the screen. Set this field to 'X' only for the first record for the screen. (Reset to ' ' (blank) for all other records.)
    - FNAM: field name. The FNAM field is not case-sensitive.
    - FVAL: value for the field named in FNAM. The FVAL field is case-sensitive. Values assigned to this field are always padded on the right if they are less than 132 characters. Values must be in character format.
    Transferring data from local file to internal table
    Data is uploaded to the internal table by the UPLOAD or WS_UPLOAD function.
    Population of BDCDATA
    For each record of the internal table, you need to populate the second internal table, which has the same structure as BDCDATA.
    All these five initial steps are necessary for any type of BDC interface.
    The DATA TRANSFER program can then use either the SESSION METHOD or CALL TRANSACTION; the initial steps for both methods are the same.
    The first step for both methods is to upload the data to an internal table. From the internal table, the data is transferred to the database tables in one of two ways, i.e., the Session method or Call Transaction.
    SESSION METHOD
    About Session method
    In this method you transfer data from the internal table to the database tables through sessions.
    An ABAP/4 program reads the external data that is to be entered in the SAP system and stores the data in a session. A session stores the actions that are required to enter your data using normal SAP transactions, i.e., data is transferred to the session, which in turn transfers the data to the database tables.
    The session is an intermediate step between the internal table and the database tables. Data along with its actions is stored in the session: the data for the screen fields, which screen it is passed to, the program name behind it, and how the next screen is processed.
    When the program has finished generating the session, you can run the session to execute the SAP transactions in it. You can either explicitly start and monitor a session or have the session run in the background processing system.
    Unless the session is processed, the data is not transferred to the database tables.
    BDC_OPEN_GROUP
    You create the session in your program with the BDC_OPEN_GROUP function.
    The parameters of this function are:
    - User Name: the user name
    - Group: the name of the session
    - Lock Date: the date on which you want to process the session
    - Keep: pass 'X' to retain the session after processing, or ' ' (blank) to delete it after processing
    BDC_INSERT
    This function inserts the batch input data for one transaction into the session.
    Its parameters are:
    - Tcode: the transaction code
    - Dynprotab: the BDC data (BDCDATA table)
    BDC_CLOSE_GROUP
    This function closes the BDC Group. No Parameters.
    Some additional information for session processing
    When the session is generated with the KEEP option set in BDC_OPEN_GROUP, the system always keeps the session in the queue, whether or not it has been processed successfully; if you no longer need a processed session, you have to delete it manually. When session processing completes successfully and the KEEP option was not set, the session is removed automatically from the session queue; the log for that session is not removed.
    If a batch input session terminates with errors, it appears in the list of incorrect sessions and can be processed again. To correct an incorrect session, you can analyze it: the Analysis function allows you to determine which screen and value produced the error. If you find small errors in the data, you can correct them interactively; otherwise you need to modify the batch input program that generated the session, or often even the data file.
    CALL TRANSACTION
    About CALL TRANSACTION
    This technique is similar to the Session method, but while batch input sessions are a two-step procedure, Call Transaction does both steps online, one after the other. In this method, you call a transaction from your program with:
    CALL TRANSACTION <tcode> USING <bdctab>
      MODE <A/N/E>
      UPDATE <S/A>
      MESSAGES INTO <msgtab>.
    Parameter 1 is the transaction code.
    Parameter 2 is the name of the BDCTAB table.
    Parameter 3 specifies the mode in which you execute the transaction:
    A is all-screens mode: all the screens of the transaction are displayed.
    N is no-screen mode: no screen is displayed when you execute the transaction.
    E is error-screens mode: only those screens are displayed on which you have an error record.
    Parameter 4 specifies the update type by which the database tables are updated:
    S is synchronous update: the program waits until all the related tables are updated, and sy-subrc reflects the outcome of the whole update, once and for all.
    A is asynchronous update: sy-subrc is returned as soon as the data is handed over, and the update of the other affected tables takes place afterwards. So if the system fails to update those tables, sy-subrc is still 0 (set when the first step succeeds).
    Parameter 5: when you update a database table, the operation is either successful, unsuccessful, or successful with warnings. These messages are stored in an internal table that you specify with the MESSAGES INTO addition. This internal table should be declared like BDCMSGCOLL, a structure available in ABAP/4. It contains the following fields:
    1. Tcode: transaction code
    2. Dyname: module pool (program) name
    3. Dynumb: screen (dynpro) number
    4. Msgtyp: batch input message type (A/E/W/I/S)
    5. Msgspra: language ID of the message
    6. Msgid: message ID
    7. MsgvN: message variables (N = 1 to 4)
    For each entry that is updated in the database, a message is available in BDCMSGCOLL. Since BDCMSGCOLL is a structure, you need to declare an internal table with this structure so that it can hold multiple records.
    Steps for CALL TRANSACTION method
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. UPLOAD or WS_UPLOAD function to upload the data from the local file into itab (assuming the file is a local file).
    4. LOOP AT itab.
         "Populate bdctab for this record.
         CALL TRANSACTION <tcode> USING bdctab
           MODE <A/N/E>
           UPDATE <S/A>.
         REFRESH bdctab.
       ENDLOOP.
    (To populate BDCTAB, you need to transfer each and every field.)
    The major differences between the Session method and Call Transaction are as follows:
    Session method:
    1. Data is not updated in the database tables unless the session is processed.
    2. No sy-subrc is returned.
    3. An error log is created for error records.
    4. Updating of the database tables is always synchronous.
    Call Transaction:
    1. The database tables are updated immediately.
    2. Sy-subrc is returned.
    3. Errors need to be handled explicitly.
    4. Updating of the database tables can be synchronous or asynchronous.
    Error Handling in CALL TRANSACTION
    When the Session method updates records in the database tables, error records are stored in the log file. With Call Transaction there is no such log file, and error records are lost unless they are handled. Usually you need to report all the error records, i.e., the records that were not inserted or updated in the database tables. This can be done with the following method:
    Steps for the error handling in CALL TRANSACTION
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. Internal table BDCMSG like BDCMSGCOLL
    4. Internal table similar to the 1st internal table
    (The third and fourth steps are for error handling.)
    5. UPLOAD or WS_UPLOAD function to upload the data from the local file into itab (assuming the file is a local file).
    6. LOOP AT itab.
         "Populate bdctab for this record.
         CALL TRANSACTION <tcode> USING bdctab
           MODE <A/N/E>
           UPDATE <S/A>
           MESSAGES INTO bdcmsg.
         PERFORM check.
         REFRESH bdctab.
       ENDLOOP.
    7. FORM check.
         IF sy-subrc <> 0.
           "CALL TRANSACTION sets sy-subrc <> 0 if the update was not successful.
           CALL FUNCTION 'FORMAT_MESSAGE' ...
           "This function is called to format the message returned by the system
           "so it can be displayed along with the record.
           APPEND itab2.
           "Display the record and message.
         ENDIF.
       ENDFORM.
    DIRECT INPUT
    Thanks &regards,
    Sravani

  • Optimistic transaction - Recovering after exception

    I'm looking for a usage pattern / hint for the following problem:
    We are using long-running optimistic transactions, where a large number of
    JDO objects is modified during a transaction. Since multiple end users can
    modify the same JDO objects, we can get OptimisticVerificationExceptions.
    The following recovery strategies are trivial to implement:
    - accept changes of the user who committed first by calling refresh() on the
    conflicting JDO objects, discarding all changes made during the transaction
    - step back to the state where the transaction began (with restoreValues set
    to true), discarding all changes made during the transaction
    However, we did not find a simple solution for recovering the following way:
    - commit the changes of the user who received the exception, thus
    overwriting the changes of the user who committed first
    After retrieving a conflicting JDO object via getFailedObject(), it is
    either in hollow or persistent-nontransactional state (depending on the
    restoreValue property), so all changes that were made during the
    transaction are lost. The only way I can think of is to copy/clone mapped
    fields in preStore/preDelete instance callbacks and storing them in
    transient fields within the JDO object. After receiving the exception, the
    mapped fields could be set back to the values stored in the transient
    fields. This approach appears somewhat clumsy, however. Are there better
    ideas / proven usage patterns around?
    Thanks,
    Contus

    Hi,
    The JDO specification mandates that an OptimisticVerificationException is a
    fatal exception, so the transaction is implicitly rolled back and there is
    no longer an active optimistic transaction (which is why you are seeing
    objects as hollow or PNT depending on RestoreValues).
    The only thing that I can think of is to maybe use the detach() API to take
    a copy of the objects you've changed prior to trying the commit. Then if
    there is a failure you can begin a new tx and attach the detached
    copies...not something I've tried but might work?
    Cheers
    - Keiron
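    A minimal sketch of Keiron's suggestion. The detachAll()/attachAll() calls on KodoPersistenceManager are assumptions about the Kodo 3.x vendor extensions he refers to (check the javadoc of your release for the exact class and method names); everything else is standard JDO 1.0 API:

    import java.util.Collection;
    import javax.jdo.JDOException;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Transaction;
    import kodo.runtime.KodoPersistenceManager; // assumed home of the vendor extensions

    public class OverwriteOnConflict {

        /**
         * Commits the dirty objects; on an optimistic failure (which rolls the
         * transaction back), re-applies detached copies in a fresh transaction,
         * overwriting the changes of the user who committed first.
         * Sketch only: detachAll()/attachAll() are assumed Kodo-specific calls.
         */
        public static void commitOverwriting(PersistenceManager pm, Collection dirtyObjects) {
            KodoPersistenceManager kpm = (KodoPersistenceManager) pm;

            // Take detached snapshots of the changed objects before trying to commit.
            Collection detached = kpm.detachAll(dirtyObjects);

            Transaction tx = pm.currentTransaction();
            try {
                tx.commit();
            } catch (JDOException optimisticFailure) {
                // The failed commit has already been rolled back; begin a new
                // transaction and attach the detached copies, which still carry
                // the in-memory changes.
                tx.begin();
                kpm.attachAll(detached);
                tx.commit();
            }
        }
    }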
    "contus" <[email protected]> wrote in message
    news:[email protected]...
    I'm looking for a usage pattern / hint for following problem:
    We are using long-running optimistic transactions, where a large number of
    JDO objects is modified during a transaction. Since multiple end users can
    modify the same JDO objects, we can get OptimisticVerificationExceptions.
    Following recovering strategies are trivial to implement:
    - accept changes of the user who committed first by calling refresh() onthe
    conflicting JDO objects, discarding all changes made during thetransaction
    - step back to the state where the transaction began (with restoreValuesset
    to true), discarding all changes made during the transaction
    However, we did not find a simple solution for recovering the followingway:
    - commit the changes of the user who received the exception, thus
    overwriting the changes of the user who committed first
    After retrieving a conflicting JDO object via getFailedObject(), it is
    either in hollow or persistent-nontransactional state (depending on the
    restoreValue property), so all changes that were made during the
    transaction are lost. The only way I can think of is to copy/clone mapped
    fields in preStore/preDelete instance callbacks and storing them in
    transient fields within the JDO object. After receiving the exception, the
    mapped fields could be set back to the values stored in the transient
    fields. This approach appears somewhat clumsy, however. Are there better
    ideas / proven usage patterns around?
    Thanks,
    Contus

  • Blocking bug with 10.8.3+Adobe Lightroom+Canon iPF5000 (but workaround found)

    Hi all,
    since I have updated my MacBook Pro (early 2011 model) with 10.8.3, I have a blocking bug with Adobe Lightroom (either 4.3 or 4.4) for launching prints on my Canon iPF5000 (firmware 1.33, either MacOS driver 2.67 or 3.06, connection through USB).
    When I was under 10.8.2 with the Canon's driver 3.06, everything works fine.
    Basically, after clicking Print a Copy or Print... in Lightroom, the file spooled to the driver stays in the MacOS printer queue. Within the printer queue window, the printer is in pause mode, and each time I try to resume printing, the driver puts it back on hold immediately. When I look into the history in the Console, the driver complains about an unknown error... The data stays within the driver, as my Canon iPF5000 "Data" LED indicator stays unlit.
    My print settings are 600dpi or 300dpi, print sharpening Standard/Glossy, 16 bit output. And Color managed by printer (because I let the Canon driver managing my B&W prints).
    Strangely enough, when I launch a print from the Preview application, the driver does its job and sends it successfully to my Canon iPF5000...
    As this was driving me crazy, I have tried a clean install of MacOS 10.8.3 (as 10.8.2 is no longer available from Apple's servers...). No success, same blocking issue.
    Eventually, I've found a workaround: using the imagePROGRAF Advanced Preview. You have to click on "Print..." in Lightroom in order to see the driver settings window, and then check the "Preview before printing" option in the main tab of the driver settings. Then I can preview my Lightroom print job and launch my printing successfully.
    Simple workaround, but very strange bug...
    For me, the root cause could be any of several things and is not very obvious.
    Is it Mac OS 10.8.3? But then why can I print from the Preview application, for instance, and not from LR?
    Is it the Canon driver? That leads me to the same question.
    Is it Lightroom? Does it send corrupted data that drives the driver crazy but not the imagePROGRAF Advanced Preview?
    So, if any user of Canon iPF5000 + Mac OS 10.8 + Lightroom is around, please share your experience.
    And if any Apple, Adobe or Canon engineer read this, please try to fix this.
    Happy printing,
    Amaury

    Applications differ in how they write print data. If you use Advanced Preview from the driver, it already calculates the output, which is afterwards only sent to the printer.
    Have you tried printing with the fast graphic process switched off (an option in the advanced settings)?
    Hope it helps
    Renate

  • Optimization bug with C++ inlining

    Hi,
    While evaluating Sun Studio 11 I have identified an optimization bug with C++ inlining.
    The bug can easily be reproduced with the small program below. The program produces
    wrong results with -xO2, because an inline access function always returns the value 0.0
    instead of the value given on the commandline:
    djerba{ru}16 : CC -o polybug  polybug.cc
    djerba{ru}17 : ./polybug 1.0
    coeff(0): 1.000000
    djerba{ru}18 : CC -o polybug -xO2 polybug.cc
    djerba{ru}19 : ./polybug 1.0
    coeff(0): 0.000000            <<<<<<<<<< wrong, should be 1.000000
    This occurs only with optimization level O2; levels below or above O2 don't
    exhibit the bug.
    Compiler version is
    Sun C++ 5.8 Patch 121017-01 2005/12/11
    on Solaris 8 / Sparc.
    I include a preliminary analysis at the end.
    Best Regards
    Dieter R.
    -------------------- polybug.cc -------------------------
    // note: this may look strange, but this is a heavily stripped down
    // version of actual working application code...
    #include <stdio.h>
    #include <stdlib.h>
    class Poly {
      public:
        // constructor initializes number of valid coefficients to zero:
        Poly() { numvalid = 0; };
        ~Poly() {};
        // returns coefficient with index j, if valid. Otherwise returns 0.0:
        double coeff(int j) {
            if (j < numvalid) {
                return coefficients[j];
            } else {
                return 0.0;
            }
        };
        // copies contents of this Object to other Poly:
        void getPoly(Poly& q) { q = *this; };
        // data members:
        // valid coefficients: 0 ... (numvalid - 1)
        double coefficients[6];
        int numvalid;
    };
    void troublefunc(Poly* pC) {
        // copies Poly-Object to local Poly, extracts coefficient
        // with index 0 and prints it. Should be the value given
        // on commandline.
        // Poly constructor, getPoly and coeff are all inline!
        if (pC) {
            Poly pol;
            pC->getPoly(pol);
            printf("coeff(0): %f\n", pol.coeff(0));
        }
    }
    int main(int argc, char* argv[]) {
        double d = atof(argv[1]);
        // creates Poly object and fills coefficient with index
        // 0 with the value given on commandline
        Poly* pC = new Poly;
        pC->coefficients[0] = d;
        pC->numvalid = 1;
        troublefunc(pC);
        return 0;
    }
    The disassembly fragment below shows that the access function coeff(0), instead
    of retrieving coefficients[0], simply returns the fixed value 0.0 (presumably because the
    optimizer "thinks" numvalid still holds the value 0 from the constructor and that therefore
    the comparison "if (j < numvalid)" can be omitted).
    Note: disassembly created from code compiled with -features=no%except for simplicity!
    00010e68 <___const_seg_900000102>:
            ...     holds the value 0.0
    00010e80 <__1cLtroublefunc6FpnEPoly__v_>:
       10e80:       90 90 00 08     orcc  %g0, %o0, %o0      if (pC) {   
       10e84:       02 40 00 14     be,pn   %icc, 10ed4
       10e88:       9c 03 bf 50     add  %sp, -176, %sp
                                                       local Poly object at %sp + 120
                                                             numvalid at %sp + 0xa8 (168)
       10e8c:       c0 23 a0 a8     clr  [ %sp + 0xa8 ]      Poly() { numvalid = 0; };
                                                             pC->getPoly(pol):
                                                             loop copies *pC to local Poly object
       10e90:       9a 03 a0 80     add  %sp, 0x80, %o5
       10e94:       96 10 20 30     mov  0x30, %o3
       10e98:       d8 5a 00 0b     ldx  [ %o0 + %o3 ], %o4
       10e9c:       96 a2 e0 08     subcc  %o3, 8, %o3
       10ea0:       16 4f ff fe     bge  %icc, 10e98
       10ea4:       d8 73 40 0b     stx  %o4, [ %o5 + %o3 ]
                                                             pol.coeff(0):
                                                             load double value 0.0 at
                                                             ___const_seg_900000102 in %f0
                                                             (and address of format string in %o0)
       10ea8:       1b 00 00 43     sethi  %hi(0x10c00), %o5
       10eac:       15 00 00 44     sethi  %hi(0x11000), %o2
       10eb0:       c1 1b 62 68     ldd  [ %o5 + 0x268 ], %f0
       10eb4:       90 02 a0 ac     add  %o2, 0xac, %o0
       10eb8:       82 10 00 0f     mov  %o7, %g1
                                                             store 0.0 in %f0 to stack and load it
                                                             from there to %o1/%o2
       10ebc:       c1 3b a0 60     std  %f0, [ %sp + 0x60 ]
       10ec0:       d2 03 a0 60     ld  [ %sp + 0x60 ], %o1
       10ec4:       d4 03 a0 64     ld  [ %sp + 0x64 ], %o2
       10ec8:       9c 03 a0 b0     add  %sp, 0xb0, %sp
                                                             call printf
       10ecc:       40 00 40 92     call  21114 <_PROCEDURE_LINKAGE_TABLE_+0x54>
       10ed0:       9e 10 00 01     mov  %g1, %o7
       10ed4:       81 c3 e0 08     retl
       10ed8:       9c 03 a0 b0     add  %sp, 0xb0, %sp
    Hmmm... This seems to stress this formatting tags thing to its limits...

    Thanks for confirming this.
    No, this happens neither in an Open Source package nor in an important product. This is an internal product, which is continuously developed with Sun Tools since 1992 (with incidents like this one being very rare).
    I am a bit concerned about this bug though, because it might indicate a weakness in the area of C++ inlining (after all, the compiler fails to correctly aggregate a sequence of three fairly simple inline functions, something which is quite common in our application). If, on the other hand, this is a singular failure caused by unique circumstances which we have hit by sheer (un)luck, it is always possible to work around it: explicitly defining an assignment operator instead of relying on the compiler-generated one is sufficient to make the bug go away.

  • Newbie: Simple Reads, Transactions, Pooling and Code Form

    I am currently refactoring a set of code (assembly) called by a windows service that provides persistence to an Oracle DB (DataAccessComponent). The motivation for the request to perform this work was to reduce connections to the Oracle DB.
    The most salient cause I believe of the excessive connections is the intermingling of ODP and the MS.Net Oracle provider. This has been removed and all code now utilizes ODP.
    My problem: the DataAccessComponent undergoing the refactor itself depends heavily (for half of its data access methods) on another external assembly that was created to manage all access to the Oracle DB (DataManagment). This DataManagment component requires a call to its own BeginTransaction() method. This BeginTransaction() behavioral facade then proceeds to create a new OracleConnection, then an OracleTransaction, and adds it to an internal STACK. From what I can tell, this stack and its Transactions are initially independent of an OracleCommand. Then, when a Command is executed in this DataManagment component, it checks its internal state/stack for an active Transaction and either joins the currently executing internally-referenced Transaction if it has not been committed, or POPs the STACK for an available Transaction. The ostensible reasons (assumptions) for this architecture are a) a joined Transaction only utilizes one Connection to the Oracle DB, thereby minimizing connections, b) an attempt to execute two Transactions on a single Connection will (reportedly) cause an exception, and c) it served as the basis for a home-grown data access code generation tool.
    The side effects (as I see them) of utilizing this external DataManagment assembly (with its Transaction stack) are that even simple ExecuteReader() (database read only) must participate in a Transaction (indeed create one if none exists). Furthermore, in my more familiar MS SQL .Net provider world, the general rule I believe is that the .Net provider takes care of caching the Connection and two separate Transactions that attempt execute a command at roughly the same time (overlap) will not throw an exception. Furthermore, I am not accustomed to wrapping simple database reads in Transactions - seems like overkill to me.
    My preferred method of performing reads and writes to Oracle would be to utilize what I know of ADO styled data access in general:
    using (OracleConnection conn = new OracleConnection(ConnectionString))
    using (OracleCommand cmd = new OracleCommand(SqlStatementString, conn))
    {
        cmd.CommandType = CommandType.Text;
        cmd.Connection.Open();
        using (OracleDataReader rdr = cmd.ExecuteReader(CommandBehavior.CloseConnection))
        {
            while (rdr.Read())
            {
                // Do something with reader data
            }
            rdr.Close();
        }
        //Should I call conn.Close();
    }

    using (OracleConnection conn = new OracleConnection(ConnectionString))
    using (OracleCommand cmd = new OracleCommand(OraclePackageString, conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        //cmd.Parameters.Add(...);
        //cmd.Parameters.Add(...);
        cmd.Connection.Open();
        using (OracleTransaction tran = cmd.Connection.BeginTransaction())
        {
            try
            {
                cmd.ExecuteNonQuery();
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
                //Handle exception
            }
        }
        //Should I call conn.Close();
    }
    So I have done some research and know that Transactions are scoped to connections. Do that mean that if the above two routines execute simultaneously they will a) create two separate connections? b) throw an exception?
    Furthermore, if I call two of the second routines (transacted) simultaneously, will I a) get an exception? b) create two separate connections?
    And lastly, if I am calling a bunch of the first type of routine above (non-transacted, read-only) will I a) create a new connection with every call? b) use the same ODP provider cache connection?
    Also most of these access routines that do writes, only update a single table and row. If that’s all that is being done and I am not participating in Save Points for rollback do I need a transaction on the writes at all? (Given that I would need to respond to an exception as part of the application logic)
    I am not familiar with Oracle connection pooling so I am also not sure if I should be implicitly calling Dispose() on the Connection object(s) above -> which in turn calls Close() or even calling close manually. I read in a post on these boards that if you are running connection pooling you should leave the connection open?
    Any help much appreciated... I
    Thanks, Brad

    I may not be able to answer all of your questions but I can certainly attempt to answer your questions related to connection pooling. Based on the code samples you posted, I do not see any threading issue with connection or transaction since you are always creating a new connection object and always a new txn.
    Once you are done with the connection, you should call conn.Close() so that the underlying session is returned to the ODP.NET connection pool; you need not keep the connection open to utilize ODP.NET connection pooling. In your sample, though, you are using the "using" clause, which will implicitly dispose of the connection object, hence you do not need to call Close here, as Dispose closes the connection.

  • Virtual Interface generation bug with NetWeaver ?

    I use the Web Service generation wizard to generate a virtual interface which exposes methods in an EJB. As one of the parameters I pass a simple Java bean, let's say:
    public class StatusCode {
        private String name;
        private boolean buggy;
        public String getName() {...}
        public void setName(String newName) {...}
        public boolean isBuggy() {...}
        public void setBuggy(boolean trueOrFalse) {...}
    }
    When I look at the virtual interface generated, especially at the Types tab, my boolean attribute is not included.
    If I change the getter to "getBuggy()", the attribute is included in the VI. The only problem is that the typical Java Bean framework, say from sun or ibm, always map 'boolean' getters to 'isSomething()' and not 'getSomething()'.
    Is this a bug with NW? Is there a fix or workaround for this ?
    My version of NW:
    Version: 2.0.7
    Build id: 200407270250
    Thanks in advance,
    Mark
    (If this is not the right forum, could someone suggest where I can post this question?)
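    For clarity, the workaround Mark describes amounts to exposing the boolean through a get-style accessor so the wizard picks it up; a minimal sketch of the adjusted bean:

    // Workaround sketch: expose the boolean via a get-style accessor.
    // isBuggy() follows the JavaBeans convention, but per the post the
    // virtual interface generator only recognizes getBuggy().
    public class StatusCode {
        private String name;
        private boolean buggy;

        public String getName() { return name; }
        public void setName(String newName) { this.name = newName; }

        public boolean getBuggy() { return buggy; }          // picked up by the VI
        public void setBuggy(boolean trueOrFalse) { this.buggy = trueOrFalse; }
    }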

    Hi. As of NetWeaver 2004s, the virtual interface piece has been absorbed into the creation of the web service definition, so there is no need to create it as well. When you are getting the 403, how are you trying to run this? Using the Web Service Homepage? If so, you will need to configure which J2EE engine you want to use. You can do this in WSADMIN under the Administration settings. Check that this is set. You must know the URL of your J2EE engine.
    Regards,
    Rich Heilman

  • MTS dies on 8.1.5 with simple EJB!

    Invoking a simple, hello-world like EJB against 8.1.5 occasionally (well, sort of frequently) causes a core dump, as traced in the alert file (excerpt follows). Apparently MTS on 8.1.5 is not very robust... or what?
    Has anybody experienced the same problem?
    Mauro
    Errors in file /u01/app/oracle/admin/DB8I/bdump/s000_18045.trc:
    ORA-07445: exception encountered: core dump [11] [0] [] [] [] []
    Thu May 4 18:27:40 2000
    Load Indicator not supported by OS !
    found dead multi-threaded server 'S000', pid = (8, 2)

    The server has to be 8.1.6 or above to
    use the XA features.
    Originally posted by Bill Burcham ([email protected]):
    I'm using Oracle's 8.1.6 XADataSource. I'm enlisting the XAResource associated with an XAConnection with my JTA transaction manager. I'm hitting an 8.1.5 RDBMS server.
    Apparently, I'm missing the PL/SQL package JAVA_XA. Can I get a .sql for it and load it into the 8.1.5 server and succeed, or do I have to get an 8.1.6 server?
    Here's the error message I get at commit-time (the transaction manager (mine) is issuing an "end" to the XAResource which in turn appears to be trying to invoke JAVA_XA.XA_END via a prepared statement):
    java.sql.SQLException: ORA-06550: line 1, column 14:
    PLS-00201: identifier 'JAVA_XA.XA_END' must be declared
    ORA-06550: line 1, column 8:
    PL/SQL: Statement ignored
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java)
    I searched my catalog (all_source table) for this package and can't find it.
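    For context, the flow Bill describes (enlisting the XAResource of an Oracle XAConnection with a JTA transaction manager) looks roughly like the sketch below; the class names are the standard JDBC/JTA and Oracle driver ones, but the surrounding setup is assumed, not taken from the thread:

    import javax.sql.XAConnection;
    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import javax.transaction.xa.XAResource;
    import oracle.jdbc.xa.client.OracleXADataSource;

    public class XaEnlistSketch {

        // Rough sketch of the enlistment flow. Against an 8.1.5 server the
        // xa_end step fails with "PLS-00201: identifier 'JAVA_XA.XA_END' must be
        // declared", since per the reply the server must be 8.1.6 or above.
        public static void doWork(TransactionManager tm, OracleXADataSource xads) throws Exception {
            tm.begin();
            XAConnection xaconn = xads.getXAConnection();
            XAResource xares = xaconn.getXAResource();
            Transaction tx = tm.getTransaction();
            tx.enlistResource(xares);

            java.sql.Connection conn = xaconn.getConnection();
            // ... JDBC work on conn ...
            conn.close();

            tx.delistResource(xares, XAResource.TMSUCCESS);
            tm.commit(); // driver issues JAVA_XA.XA_END / prepare / commit here
            xaconn.close();
        }
    }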

  • Explain User exit concept with simple example

    Hi friends
    I am new member of this forum & i am learning ABAP.
    Kindly send me the user exit concept with a simple example to my mail ID so that I am able to use it.
    mail ID  [email protected]
    thanks in advance
    Thanks & Regards

    T.Code SE18 is used to identify the BADIs available.
    Look for the string 'CL_EXITHANDLER' in the standard program. This is a class which has a method 'GET_INSTANCE' which is used to trigger BADI's from the Standard Program. The interface parameter for this static method 'EXIT_NAME' is used to pass the BADI to the method.
    Open Standard Program and do a global search 'CL_EXITHANDLER'.
    SE18 > give the BADI name found through above search.
    CUSTOMER_ADD_DATA > which has a method SAVE_DATA.
    T.Code SE19 is used to Implement BADI.
    SE19 > give the implementation name > Give the Definition name as CUSTOMER_ADD_DATA and the Short Text.
    Intro.....
    http://help.sap.com/saphelp_nw04/helpdata/en/e6/d54d3c596f0b26e10000000a11402f/content.htm
    Check these links for info about badi..
    BADI's
    http://support.sas.com/rnd/papers/sugi30/SAP.ppt
    BADI's
    http://help.sap.com/saphelp_erp2005/helpdata/en/73/7e7941601b1d09e10000000a155106/frameset.htm
    http://support.sas.com/rnd/papers/sugi30/SAP.ppt
    http://www.sts.tu-harburg.de/teaching/sap_r3/ABAP4/abapindx.htm
    http://members.aol.com/_ht_a/skarkada/sap/
    http://www.ct-software.com/reportpool_frame.htm
    http://www.saphelp.com/SAP_Technical.htm
    http://www.kabai.com/abaps/q.htm
    http://www.guidancetech.com/people/holland/sap/abap/
    http://www.planetsap.com/download_abap_programs.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/c8/1975cc43b111d1896f0000e8322d00/content.htm
    /people/thomas.weiss/blog/2006/04/03/how-to-define-a-new-badi-within-the-enhancement-framework--part-3-of-the-series
    /people/thomas.weiss/blog/2006/04/18/how-to-implement-a-badi-and-how-to-use-a-filter--part-4-of-the-series-on-the-new-enhancement-framework
    http://www.esnips.com/web/BAdI
    http://www.allsaplinks.com/badi.html
    New to Badi
    check any fo the below links. this will def help u.
    http://www.allsaplinks.com/badi.html
    And also download this file....
    http://www.savefile.com/files.php?fid=8913854
    There are other tutorials on this site...
    http://sapbrain.com/Tutorials/tuto_download.html
    What are BAdIs?
    -> is an anticipated point of extension – these points act like sockets and exist in the original source code
    -> based on ABAP Objects. BAdI defines an interface that can be implemented by BAdI-implementations that are transport objects of their own
    ->Important! There are 2 roles: Enhancement Option-provider & Implementer.
    -> In the above context, Enhancement Implementation can be done only if option (hook) is provided by the Option-provider. In simple words there are no implicit BAdIs.
    Note: In the following slides, Definitions are created so as to understand the method of BAdI definition & for example purpose. As stated above this is the role of Enhancement Option-Provider.
    Classic BAdIs already exist since SAP Release 4.6
    BAdIs have been Re-implemented in ECC7.0 under the new Enhancement Framework & Switch Framework
    Classic BAdIs
    To understand what a powerful pattern a BAdI is, we will now define & then implement a BAdI
    BADI Class is created automatically.
    The various options are described below in detail:
    1. Enhanceable: Enhanceability of filter types can only be specified for filter-dependent BADI definitions under very special conditions. For example, the domain belonging to the filter type must be linked with a value table that is of the type E or G. A BADI implementation can then be created in one step by creating a new filter value that is automatically entered into the value table at save and also copied into the transport order of the BADI implementation. In addition, it is also possible to create a new filter value and, at the same time, a BADI implementation with the same name. Naturally, you can also specify existing filter values.
    You should select this feature if there is a prerequisite that a new filter value is created together with a new BADI implementation - that is, that BADI implementations are not created solely with existing filter values, although this, too, is possible.
    2. Multiple-Use
    3. Filter-Dependent
    Instance Methods can access all of the attributes of a class and can trigger all events of a class. Static Methods can only access static attributes and static events.
    Exceptions:
    Events:
    Events can be defined in classes or in interfaces. Corresponding methods can trigger these events with the RAISE EVENT statement. Each class (or interface) that is going to handle the corresponding event must implement a relevant handler method, and register it using the SET HANDLER statement. When an event occurs, the system calls all of the handler methods registered for that event.
    Like method definitions, events have a parameter interface. The only difference is that events may only have EXPORTING parameters.
    BADI: Business Add-Ins
    Business Add-Ins are an SAP enhancement technique based on ABAP Objects.
    Where the SAP standard program is not going to fulfill the client's requirement, we add our own program to the SAP standard program, without changing the standard program.
    Each Business Add-In has
    – at least one Business Add-In definition
    – a Business Add-In interface
    – a Business Add-In class that implements the interface
    Each BADI has two different Views.
    1.Definition view
    2.Implementation view
    T.C for BADI Definition is SE18.
    T.C for BADI Implementation is SE19.
    There are multiple ways of searching for BADI.
    • Finding BADI Using CL_EXITHANDLER=>GET_INSTANCE
    • Finding BADI Using SQL Trace (TCODE-ST05).
    • Finding BADI Using Repository Information System (TCODE- SE84).
    1. Go to the Transaction, for which we want to find the BADI, take the example of Transaction VD02. Click on System->Status. Double click on the program name. Once inside the program search for ‘CL_EXITHANDLER=>GET_INSTANCE’.
    Make sure the radio button “In main program” is checked. A list of all the programs with call to the BADI’s will be listed.
    The export parameter ‘EXIT_NAME’ for the method GET_INSTANCE of class CL_EXITHANDLER will have the user exit assigned to it. The changing parameter ‘INSTANCE’ will have the interface assigned to it. Double click on the method to enter the source code.Definition of Instance would give you the Interface name.
    2. Start transaction ST05 (Performance Analysis).
    Set flag field "Buffer trace"
    Remark: We need to trace also the buffer calls, because BADI database tables are buffered. (Especially view V_EXT_IMP and V_EXT_ACT)
    Push the button "Activate Trace". Start transaction VA02 in a new GUI session. Go back to the Performance trace session.
    Push the button "Deactivate Trace".
    Push the button "Display Trace".
    The popup screen "Set Restrictions for Displaying Trace" appears.
    Now, filter the trace on Objects:
    • V_EXT_IMP
    • V_EXT_ACT
    Push button "Multiple selections" button behind field Objects
    Fill: V_EXT_IMP and V_EXT_ACT
    All the interface class names of view V_EXT_IMP start with IF_EX_. This is the standard SAP prefix for BADI class interfaces. The BADI name is after the IF_EX_.
    So the BADI name of IF_EX_CUSTOMER_ADD_DATA is CUSTOMER_ADD_DATA
    3. Go to “Maintain Transaction” (TCODE- SE93).
    Enter the Transaction VD02 for which you want to find BADI.
    Click on the Display push buttons.
    Get the Package Name. (Package VS in this case)
    Go to TCode: SE84->Enhancements->Business Add-inns->Definition
    Enter the Package Name and Execute.
    Here you get a list of all the enhancement BADIs for the given package.
    Have a look at http://help.sap.com/saphelp_nw04/helpdata/en/04/f3683c05ea4464e10000000a114084/content.htm
    How to develop BADI
    Rewards if useful.

  • Commit work and roll back with simple language and simple example

    Hi guru,
    Please explain COMMIT WORK and ROLLBACK WORK in simple language with a simple example.

    Hi,
    The statement COMMIT WORK completes the current SAP LUW and opens a new one, storing all change requests for the current SAP LUW in the process. In this case, COMMIT WORK performs the following actions:
    It executes all subroutines registered using PERFORM ON COMMIT.
    The sequence is based on the order of registration or according to the priority specified using the LEVEL addition. Execution of the following statements is not permitted in a subroutine of this type:
    PERFORM ... ON COMMIT|ROLLBACK
    COMMIT WORK
    ROLLBACK WORK
    The statement CALL FUNCTION ... IN UPDATE TASK can be executed.
    ROLLBACK WORK:
    The statement ROLLBACK WORK closes the current SAP-LUW and opens a new one. In doing so, all change requests of the current SAP-LUW are canceled. To do this, ROLLBACK WORK carries out the following actions:
    1) Executes all subprograms registered with PERFORM ON ROLLBACK.
    2) Deletes all subprograms registered with PERFORM ON COMMIT.
    3) Raises an internal exception in the Object Services that makes sure that the attributes of persistent objects are initialised.
    4) Deletes all update function modules registered with CALL FUNCTION ...IN UPDATE TASK from the VBLOG database table and deletes all transactional remote Function Calls registered with CALL FUNCTION ... IN BACKGROUND TASK from database tables ARFCSSTATE and ARFCSDATA.
    5) Removal of all SAP locks set in the current program in which the formal parameter _SCOPE of the lock function module was set to the value 2.
    6) Triggers a database rollback, which also ends the current database-LUW.

  • Optimistic transactions

    I'm catching JDOUserException in an optimistic transaction environment.
    My first question is why do I get this Exception when my process is the
    only user of the database?
    My second question is why, when I retry the transaction (after a
    rollback() and a begin() ), do I get the same Exception? What should I
    do prior to repeating the transaction?
    I'm using two different PMs, one for a read-only database and the other
    for the database I'm updating.
    Thanks
    # Kodo JDO Properties configuration
    # for 2.4.0:
    com.solarmetric.kodo.LicenseKey=[[LICENSE KEY CENSORED]]
    com.solarmetric.kodo.Logger=sql.txt
    com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=java:/TransactionManager
    javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.JDBCPersistenceManagerFactory
    javax.jdo.option.ConnectionURL=jdbc\:microsoft\:sqlserver\://SERVER1\:1433;DatabaseName=AML;SelectMethod=cursor
    #javax.jdo.option.ConnectionURL=jdbc\:JSQLConnect\://SERVER1\:1433/database=AML
    javax.jdo.option.ConnectionDriverName=com.microsoft.jdbc.sqlserver.SQLServerDriver
    #javax.jdo.option.ConnectionDriverName=com.jnetdirect.jsql.JSQLDriver
    # new 12/4/02:
    com.solarmetric.kodo.impl.jdbc.FlatInheritanceMapping=true
    javax.jdo.option.ConnectionUserName=sa
    javax.jdo.option.ConnectionPassword=
    #problems w/ SQLServerDriver and long-lived connections: need to set pool sizes to zero for Thomson database load
    javax.jdo.option.MinPool=1
    #javax.jdo.option.MinPool=0
    javax.jdo.option.MaxPool=80
    #javax.jdo.option.MaxPool=0
    javax.jdo.option.Optimistic=true
    javax.jdo.option.RetainValues=true
    javax.jdo.option.NontransactionalRead=true
    javax.jdo.option.RestoreValues=true
    javax.jdo.option.NontransactionalWrite=false
    javax.jdo.option.Multithreaded=true
    javax.jdo.option.MsWait=5000
    javax.jdo.option.IgnoreCache=false
    com.solarmetric.kodo.impl.jdbc.WarnOnPersistentTypeFailure=true
    com.solarmetric.kodo.impl.jdbc.SequenceFactoryClass=com.solarmetric.kodo.impl.jdbc.schema.DBSequenceFactory
    com.solarmetric.kodo.impl.jdbc.AutoReturnTimeout=10
    com.solarmetric.kodo.EnableQueryExtensions=false
    com.solarmetric.kodo.DefaultFetchThreshold=30
    com.solarmetric.kodo.DefaultFetchBatchSize=10

    I was getting the JDOUserException for concurrency problems before I
    started catching that exception. After I modified the code, I started
    catching the same exception (the e.getMessage() was the same as the
    uncaught exceptions.)
    The process is attempting a database load from a read-only database into
    the active database. I have a separate PM (and properties file) for
    each database. The point at which it starts throwing the
    JDOUserException seems to be random (but only after a substantial number of records have
    been processed.)
    Seems the only way to recover from the exception is to call evictAll()
    on the PM. This takes considerable time--is there a better way to recover?
    I'll try to research whether the jdolockx fields are changing. I know
    it's not happening in my code, but I'll check.
    Thanks,
    Mike
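    A hedged sketch of the retry pattern being discussed, using only standard JDO 1.0 API. Refreshing the failed object is the lighter-weight option; evictAll(), as Mike notes above, also works but takes considerable time:

    import javax.jdo.JDOException;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Transaction;

    public class OptimisticRetry {

        /**
         * Runs a unit of work, retrying after an optimistic failure. The work
         * itself (re-reading and re-applying the changes) is application code.
         */
        public static void runWithRetry(PersistenceManager pm, Runnable work, int maxAttempts) {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                Transaction tx = pm.currentTransaction();
                tx.begin();
                try {
                    work.run();
                    tx.commit();
                    return;
                } catch (JDOException e) {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                    // Drop the stale state before retrying: refresh the specific
                    // failed object if the exception names one, otherwise evict
                    // everything (the slower recovery mentioned above).
                    Object failed = e.getFailedObject();
                    if (failed != null) {
                        pm.refresh(failed);
                    } else {
                        pm.evictAll();
                    }
                }
            }
            throw new JDOException("could not commit after " + maxAttempts + " attempts");
        }
    }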
    Petr Bulanek wrote:
    Hi Michael,
    Just a thought:
    Have you tried to check if the value in jdolockX column changes during
    your transaction? I thought that this column is pretty much a version of
    the record and should always change when the record is updated.
    Consequently, I would expect Kodo to store its value when tx.begin() is
    called and compare this value to the current one in the database prior to
    commit.
    How do you actually distinguish between a JDOUserException thrown as a
    result of an optimistic transaction failure and a JDOUserException thrown for some
    other reason?
    It may be just that I am new to JDO, but I really thought it would not
    hurt to have a special exception (maybe subclass of JDOUserException)
    dedicated to this purpose.
    Having said that, I would love to know how do you do it now.
    Thank you,
    Petr
    Michael Welter wrote:
    I'm not doing any updates, begin(), or commit() in the read-only database.
    What can I do to recover from a JDOUserException? I can't seem to
    proceed once I catch the exception.
    Thanks.
    This are the properties for the read-only db:
    com.solarmetric.kodo.LicenseKey=[[LICENSE KEY CENSORED]]
    com.solarmetric.kodo.Logger=sql.txt
    com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=java:/TransactionManager
    javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.JDBCPersistenceManagerFactory
    javax.jdo.option.ConnectionURL=jdbc\:microsoft\:sqlserver\://SERVER1\:1433;DatabaseName=ScottWekly;SelectMethod=cursor
    javax.jdo.option.ConnectionDriverName=com.microsoft.jdbc.sqlserver.SQLServerDriver
    com.solarmetric.kodo.impl.jdbc.FlatInheritanceMapping=false
    javax.jdo.option.ConnectionUserName=sa
    javax.jdo.option.ConnectionPassword=
    javax.jdo.option.MinPool=1
    javax.jdo.option.MaxPool=20
    javax.jdo.option.Optimistic=true
    javax.jdo.option.RetainValues=true
    javax.jdo.option.NontransactionalRead=true
    javax.jdo.option.RestoreValues=true
    javax.jdo.option.NontransactionalWrite=false
    javax.jdo.option.Multithreaded=true
    javax.jdo.option.MsWait=5000
    javax.jdo.option.IgnoreCache=false
    com.solarmetric.kodo.impl.jdbc.WarnOnPersistentTypeFailure=true
    com.solarmetric.kodo.impl.jdbc.SequenceFactoryClass=com.solarmetric.kodo.impl.jdbc.schema.DBSequenceFactory
    com.solarmetric.kodo.impl.jdbc.AutoReturnTimeout=10
    com.solarmetric.kodo.EnableQueryExtensions=false
    com.solarmetric.kodo.DefaultFetchThreshold=30
    com.solarmetric.kodo.DefaultFetchBatchSize=10
    Properties for the db being updated:
    # Kodo JDO Properties configuration
    com.solarmetric.kodo.LicenseKey=[[LICENSE KEY CENSORED]]
    com.solarmetric.kodo.Logger=sql.txt
    com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=java:/TransactionManager
    javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.JDBCPersistenceManagerFactory
    javax.jdo.option.ConnectionURL=jdbc\:microsoft\:sqlserver\://SERVER1\:1433;DatabaseName=AML;SelectMethod=cursor
    javax.jdo.option.ConnectionDriverName=com.microsoft.jdbc.sqlserver.SQLServerDriver
    com.solarmetric.kodo.impl.jdbc.FlatInheritanceMapping=true
    javax.jdo.option.ConnectionUserName=sa
    javax.jdo.option.ConnectionPassword=
    javax.jdo.option.MinPool=1
    #javax.jdo.option.MinPool=0
    javax.jdo.option.MaxPool=20
    #javax.jdo.option.MaxPool=0
    javax.jdo.option.Optimistic=true
    javax.jdo.option.RetainValues=true
    javax.jdo.option.NontransactionalRead=true
    javax.jdo.option.RestoreValues=true
    javax.jdo.option.NontransactionalWrite=false
    javax.jdo.option.Multithreaded=true
    javax.jdo.option.MsWait=5000
    javax.jdo.option.IgnoreCache=false
    com.solarmetric.kodo.impl.jdbc.WarnOnPersistentTypeFailure=true
    com.solarmetric.kodo.impl.jdbc.SequenceFactoryClass=com.solarmetric.kodo.impl.jdbc.schema.DBSequenceFactory
    com.solarmetric.kodo.impl.jdbc.AutoReturnTimeout=10
    com.solarmetric.kodo.EnableQueryExtensions=false
    com.solarmetric.kodo.DefaultFetchThreshold=30
    com.solarmetric.kodo.DefaultFetchBatchSize=10
    Stephen Kim wrote:
    You can still get an optimistic exception if you do concurrent
    modifications across different PMs. Be sure that your read-only PM is not
    doing any transactional work (e.g. currentTransaction().commit()).
    On Wed, 08 Jan 2003 11:20:25 +0000, Michael Welter wrote:
    I'm catching JDOUserException in an optimistic transaction environment.
    My first question is why do I get this Exception when my process is the
    only user of the database?
    My second question is why, when I retry the transaction (after a
    rollback() and a begin() ), do I get the same Exception? What should I
    do prior to repeating the transaction?
    I'm using two different PMs, one for a read-only database and the other
    for the database I'm updating (see the sketch after the properties below).
    Thanks
    # Kodo JDO Properties configuration
    # for 2.4.0:
    com.solarmetric.kodo.LicenseKey=[[LICENSE KEY CENSORED]]
    com.solarmetric.kodo.Logger=sql.txt
    com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=java:/TransactionManager
    javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.JDBCPersistenceManagerFactory
    javax.jdo.option.ConnectionURL=jdbc\:microsoft\:sqlserver\://SERVER1\:1433;DatabaseName=AML;SelectMethod=cursor
    #javax.jdo.option.ConnectionURL=jdbc\:JSQLConnect\://SERVER1\:1433/database=AML
    javax.jdo.option.ConnectionDriverName=com.microsoft.jdbc.sqlserver.SQLServerDriver
    #javax.jdo.option.ConnectionDriverName=com.jnetdirect.jsql.JSQLDriver
    # new 12/4/02:
    com.solarmetric.kodo.impl.jdbc.FlatInheritanceMapping=true
    javax.jdo.option.ConnectionUserName=sa
    javax.jdo.option.ConnectionPassword=
    # problems w/ SQLServerDriver and long-lived connections: need to set pool sizes to zero for Thomson database load
    javax.jdo.option.MinPool=1
    #javax.jdo.option.MinPool=0
    javax.jdo.option.MaxPool=80
    #javax.jdo.option.MaxPool=0
    javax.jdo.option.Optimistic=true
    javax.jdo.option.RetainValues=true
    javax.jdo.option.NontransactionalRead=true
    javax.jdo.option.RestoreValues=true
    javax.jdo.option.NontransactionalWrite=false
    javax.jdo.option.Multithreaded=true
    javax.jdo.option.MsWait=5000
    javax.jdo.option.IgnoreCache=false
    com.solarmetric.kodo.impl.jdbc.WarnOnPersistentTypeFailure=true
    com.solarmetric.kodo.impl.jdbc.SequenceFactoryClass=com.solarmetric.kodo.impl.jdbc.schema.DBSequenceFactory
    com.solarmetric.kodo.impl.jdbc.AutoReturnTimeout=10
    com.solarmetric.kodo.EnableQueryExtensions=false
    com.solarmetric.kodo.DefaultFetchThreshold=30
    com.solarmetric.kodo.DefaultFetchBatchSize=10
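    Here is the sketch mentioned above: two factories built from two properties
    files, with the read-only PM never beginning a transaction. The resource names
    kodo-readonly.properties and kodo-active.properties are made up -- point them at
    the two files quoted above -- and this is only an outline, not the real loader.

    import java.io.InputStream;
    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;

    public class TwoDatabaseLoad {
        // Resource names are placeholders; substitute your own properties files.
        private static PersistenceManagerFactory factoryFrom(String resource) throws Exception {
            Properties props = new Properties();
            InputStream in = TwoDatabaseLoad.class.getResourceAsStream(resource);
            props.load(in);
            in.close();
            return JDOHelper.getPersistenceManagerFactory(props);
        }

        public static void main(String[] args) throws Exception {
            PersistenceManager roPm =
                factoryFrom("/kodo-readonly.properties").getPersistenceManager();
            PersistenceManager rwPm =
                factoryFrom("/kodo-active.properties").getPersistenceManager();

            // Read nontransactionally from the read-only database: no begin()/commit()
            // is ever issued on roPm, so it cannot be the source of optimistic failures.
            // ... run queries against roPm here ...

            // All writes go through rwPm inside an explicit (optimistic) transaction.
            rwPm.currentTransaction().begin();
            // ... makePersistent() / field updates on rwPm here ...
            rwPm.currentTransaction().commit();

            roPm.close();
            rwPm.close();
        }
    }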

  • AIR bug with iOS fullscreen video! (easy repro)

    Here's a bug with fullscreen AIR apps not dealing with iOS fullscreen video resize. Easy repro:
    1. Create an mc with a background graphic aligned with the top and bottom of the stage. (This is so you can easily see the bug.)
    2. Add a StageWebView and load an HTML page with embedded video (or just load a video directly).
    3. Play video and enable fullscreen mode.
    4. Rotate the device 90 degrees while in fullscreen mode and then hit the "Done" button to return to the stage (exit fullscreen).
    5. BUG: Stage is now "pushed down" and off-screen. (even though the mc still traces x = 0)
    Conclusion: the display bug occurs when the user exits the iOS native player's fullscreen mode in a different orientation than they started with (common if the original aspect ratio was portrait).
    Can someone else please confirm this? (The amount of stage "push" appears to correlate with the height of the iOS status bar.)
    Thanks.
    PS. You don't necessarily need to load an HTML page. You can just have StageWebView load a video directly and see the same result.
    PPS. I also tried using the new UIWebView Native Extension and am seeing the same thing, so this is really looking like an AIR/Flash bug with the iOS native player.

    A simpler summary for this issue:
    If the device has been rotated since entering Fullscreen native player, a fullscreen app will be "pushed downwards" and off-screen slightly.
    BUG: The Adobe AIR app does not stay in fullscreen mode after such an event, ruining the graphic UI for fullscreen apps.
    Simple workaround: listen for orientationChange and, if stage.displayState == StageDisplayState.NORMAL, set stage.displayState = StageDisplayState.FULLSCREEN.
    However, the iOS native player then loses its traditional status bar appearance (and risks rejection by Apple??).
    The complex workaround (as mentioned earlier) keeps the native player's status bar but results in the app's screen "flashing" when the stage is resized.
    Please help fix this bug by commenting in my bug report. Thanks again.
    https://bugbase.adobe.com/index.cfm?event=bug&id=3486264

  • 30EA3: German language bug with user defined reports still not fixed

    In "30EA2: Limited folder functionality with German language"
    I wrote about a bug in 30EA2. Was that message noticed by the developers?
    The bug still exists in 30EA3.
    In new folders for user-defined reports, only the options "Kopieren" and "Exportieren" (Copy and Export) are available when SQL Developer runs in German language mode.
    A workaround is to force language to English with the line
    AddVMOption -Duser.language=en
    in sqldeveloper.conf

    I made a new thread for the same bug in 3.1EA2:
    3.1EA2 Old bug with German language settings still exists
    I have some hope that my problem has finally been noticed by the developers.

  • 3.1EA2 Old bug with German language settings still exists

    In the past I wrote several times about a bug in SQL Developer when running in German language mode. This bug is still not fixed in 3.1 EA2:
    In new folders for user-defined reports, only the options "Kopieren" and "Speichern unter ..." ('Copy' and 'Save as') are available when SQL Developer runs in German language mode. Other options like "Bearbeiten, Neuer Ordner, Neuer Bericht, Ausschneiden, Einfügen, Löschen" (Edit, New Folder, New Report, Cut, Paste, Delete) are missing. A workaround is to force the language to English with the line
    AddVMOption -Duser.language=en
    in sqldeveloper.conf
    I mentioned it first in "Re: Folders with limited functionality" (about version 2.1.1.64), then in "30EA2: Limited folder functionality with German language" (3.0 EA2), and in "German language bug with user defined reports still not fixed" (Beta Release 3.0 EA3).

    I have raised a bug on this issue and I am actively looking into it.
