Batch tables not generating the Int. object no.

Hi,
In the batch-related tables the internal object number is not being generated.
What could be the reasons?
-ashok

Issue closed as it is not yet answered.

Similar Messages

  • Error while generating the Business Object in Mobile Sales

    Hi,
    I made changes to the Business Object BOCAPGEN.
    Now I am trying to generate the business object, but it gives the following error:
    Error intializing RT Generator !.RT Generator Failed for ".Error arsrep.dat is in use so Generation cannot be done
    I am not an MSA developer and have no clue regarding MAS (Mobile Application Studio).
    Can anyone please give me a clue regarding the issue?
    I have solved it by closing all other applications other than the client console.
    Thanks Guys
    Message was edited by: zack taylor

    Hi!
    One year later, I am facing the same problem.
    I want to build MSA 4.0 SP8. Everything runs fine until the end of the generation of the tiles; the next step fails:
    Error intializing RT Generator !.RT Generator Failed for ".Error arsrep.dat is in use so Generation cannot be done
    Then around ten thousand error messages of that kind follow in the output window; however, in the end it says "Generation End" without telling whether it was successful or not (the first time, the output window was too small and I thought all was OK).
    Anyway, the Mobile Sales icon appeared on the desktop, and when I try to launch it I get the error "Starting MobileSales failed".
    During another attempt, I checked with "Unlocker" the 2 arsrep.dat files I found under the BOL directory; it reported that none was in use or locked at that moment.
    I also killed the vbagen.exe process before starting the build, but it was automatically launched afterwards. (The first build was launched after a reboot, the 2nd after the 1st failed and the vbagen.exe process was killed.)
    Does anyone have any clue?
    Another question is: what rights are required? I am a local administrator but I do not have full admin rights (the only thing I have seen so far is that I cannot access Add/Delete Programs in the Control Panel).
    Thanks & Regards,
    François
    -edit-
    Thanks to the one who moved it to the right forum.
    Message was edited by: Francois Feugier

  • DirectToXMLTypeMapping "create-tables" not generating XMLTYPE column type

    Can someone tell me how to code an XMLTYPE field such that "create-tables" will generate the XMLTYPE column and the IntegrityChecker will not throw an error?
    I am forced to run these ALTER statements after "create-tables" has run:
    ALTER TABLE XML_SYS_MSG drop column message;
    ALTER TABLE XML_SYS_MSG add (message XMLType);
    Snippets:
    <persistence...
      <property name="eclipselink.ddl-generation" value="create-tables" />
    </persistence>
    import org.eclipse.persistence.config.DescriptorCustomizer;
    import org.eclipse.persistence.descriptors.ClassDescriptor;
    import org.eclipse.persistence.mappings.xdb.DirectToXMLTypeMapping;

    public class XmlMessageCustomizer implements DescriptorCustomizer {
        @Override
        public void customize(final ClassDescriptor descriptor) throws Exception {
            final DirectToXMLTypeMapping mapping = new DirectToXMLTypeMapping();
            descriptor.removeMappingForAttributeName("message");
            // name of the attribute
            mapping.setAttributeName("message");
            // name of the column; the IntegrityChecker requires uppercase for Oracle
            mapping.setFieldName("MESSAGE");
            descriptor.addMapping(mapping);
        }
    }
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import org.eclipse.persistence.annotations.Customizer;
    import org.w3c.dom.Document;

    @Entity(name = "XmlMessage")
    @Table(name = "XML_MSG")
    @Customizer(XmlMessageCustomizer.class)
    public class XmlMessage {
        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Column(name = "ID")
        private long id;
        // @Column(columnDefinition = "XMLTYPE")
        // private String message;
        // ALTER TABLE XML_SYS_MSG drop column message;
        // ALTER TABLE XML_SYS_MSG add (message XMLType);
        private Document message;

        public XmlMessage() {
        }

        public long getId() {
            return id;
        }

        public void setId(final long id) {
            this.id = id;
        }

        public Document getMessage() {
            return message;
        }

        public void setMessage(final Document message) {
            this.message = message;
        }
    }
    Secondly, if I turn on the IntegrityChecker, it will fail:
    import org.eclipse.persistence.config.SessionCustomizer;
    import org.eclipse.persistence.sessions.Session;

    public class EnableIntegrityChecker implements SessionCustomizer {
        @Override
        public void customize(final Session session) throws Exception {
            session.getIntegrityChecker().checkDatabase();
            session.getIntegrityChecker().setShouldCatchExceptions(false);
        }
    }

    Adding
    mapping.getField().setColumnDefinition("XMLTYPE");
    to the customizer should solve the problem.
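    For illustration, the complete customizer with that line folded in would look like the sketch below (same EclipseLink API as in the question; only the setColumnDefinition call is new):

    import org.eclipse.persistence.config.DescriptorCustomizer;
    import org.eclipse.persistence.descriptors.ClassDescriptor;
    import org.eclipse.persistence.mappings.xdb.DirectToXMLTypeMapping;

    public class XmlMessageCustomizer implements DescriptorCustomizer {
        @Override
        public void customize(final ClassDescriptor descriptor) throws Exception {
            final DirectToXMLTypeMapping mapping = new DirectToXMLTypeMapping();
            descriptor.removeMappingForAttributeName("message");
            mapping.setAttributeName("message");
            mapping.setFieldName("MESSAGE");
            // Declare the column type so "create-tables" emits MESSAGE XMLTYPE
            // and the IntegrityChecker sees a matching definition; the two
            // ALTER statements are then no longer needed.
            mapping.getField().setColumnDefinition("XMLTYPE");
            descriptor.addMapping(mapping);
        }
    }

    The descriptor customizer is picked up through the @Customizer annotation already on the entity; the session customizer would be registered separately, e.g. via the eclipselink.session.customizer persistence-unit property.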
    --Shaun

  • Java.sql.SQLException: Could not generate the DTD because..

    I am trying to reverse-engineer an XML file and am running into the following exception:
    java.sql.SQLException: Could not generate the DTD because the file could not be created. Verify that you have write permission in the directory.
    I have write permission to the directory, so I suspect something else is causing this. I saw in another thread that file name length sometimes caused this; I tried that approach and it did not help.
    I am trying to get data from an Oracle table and put it in an XML file using an interface.
    I have my interface set to use an Oracle source table and an XML target. My LKM is SQL to SQL and my IKM is SQL Control Append. I have the staging area set to be different from the target.
    My model (for the XML file) has technology set to XML, uses the SnpsXML JDBC driver, and I use the following parameters in my file path: f, d, s, dp. This tests successfully.
    I am trying to reverse-engineer the XML file using standard reverse.
    I am running this on a remote agent.
    I am using ODI 10.1.3.5.5.
    Thank you for any help you can provide me.
    Edited by: user13279807 on Oct 19, 2010 11:44 AM

    The exact error message I received was the following:
    The Technology or the Driver used does not support Reverse Engineering.
    java.sql.SQLException: Could not generate the DTD because the file could not be created. Verify that you have write permission in the directory.
    Details:
    java.sql.SQLException: Could not generate the DTD because the file could not be created. Verify that you have write permission in the directory.
         at com.sunopsis.jdbc.driver.xml.bw.a(bw.java:810)
         at com.sunopsis.jdbc.driver.xml.bw.<init>(bw.java:450)
         at com.sunopsis.jdbc.driver.xml.bx.b(bx.java:292)
         at com.sunopsis.jdbc.driver.xml.bx.a(bx.java:270)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlDriver.connect(SnpsXmlDriver.java:110)
         at com.sunopsis.sql.SnpsConnection.v(SnpsConnection.java)
         at com.sunopsis.sql.SnpsConnection.a(SnpsConnection.java)
         at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java)
         at com.sunopsis.dwg.reverse.Reverse.a(Reverse.java)
         at com.sunopsis.dwg.reverse.Reverse.a(Reverse.java)
         at com.sunopsis.dwg.reverse.Reverse.a(Reverse.java)
         at com.sunopsis.dwg.reverse.Reverse.getMetaData(Reverse.java)
         at com.sunopsis.graphical.frame.a.ip.a(ip.java)
         at com.sunopsis.graphical.frame.a.ip.a(ip.java)
         at com.sunopsis.graphical.frame.a.hq.b(hq.java)
         at com.sunopsis.graphical.tools.utils.swingworker.v.call(v.java)
         at edu.emory.mathcs.backport.java.util.concurrent.FutureTask.run(FutureTask.java:176)
         at com.sunopsis.graphical.tools.utils.swingworker.l.run(l.java)
         at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:665)
         at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:690)
         at java.lang.Thread.run(Thread.java:619)
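    Since the driver generates the DTD when the connection is opened (the trace shows the failure inside SnpsXmlDriver.connect), one way to narrow this down is to open a plain JDBC connection with the same driver on the machine where the remote agent runs, so that the write check happens under the agent's OS user rather than under the user running Designer. The following is only a sketch, assuming the usual jdbc:snps:xml URL properties (f = XML file, s = schema name) with placeholder paths:

    import java.sql.Connection;
    import java.sql.DriverManager;

    // Connect test for the Sunopsis XML JDBC driver. Run it on the remote
    // agent's host: connecting triggers DTD generation, so this fails with
    // "Could not generate the DTD ..." if the target directory is not
    // writable by the agent's OS user. File path and schema are placeholders.
    public class SnpsXmlConnectTest {
        public static void main(final String[] args) throws Exception {
            Class.forName("com.sunopsis.jdbc.driver.xml.SnpsXmlDriver");
            final String url = "jdbc:snps:xml?f=/data/odi/export.xml&s=MYSCHEMA";
            final Connection c = DriverManager.getConnection(url);
            try {
                System.out.println("Connected; DTD was generated successfully.");
            } finally {
                c.close();
            }
        }
    }

    If this test fails with the same SQLException, the directory referenced by the f (or d) property is not writable for, or not visible to, the agent's process user, even though it may be writable for the user running Designer.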

  • Create UDF for table not in the List of tables

    Hi all,
    I know that my question may be easy or may have been asked before, but I couldn't find the answer.
    To create a UDF in SAP B1 version 9.0 you go to Tools -> Customization Tools -> User-Defined Fields - Management...
    which is OK and works perfectly. But my question is:
    If I want to create a UDF for a table that is not in the list of tables there, what should I do? I need to create 2 UDFs for table OMRC [Manufacturers], and I can't find it in the master data tables.
    Has anyone had this issue before?
    EDIT: Is it good to add the field by using SQL Server? I know it's possible, but will it be visible in SAP?
    thank you
    Message was edited by: Samira Haroun

    Hi Samira,
    There is not a simple link for this; I advise you to study the documentation for the TB1300 SBO Development certification.
    You should also have knowledge of .NET and C# or VB, because you have to write a small program/add-on to add the fields.
    Kind regards
    Ad Kerremans

  • Thread: Could not generate the XML in single thread mode

    Hi all,
    I have created a report using the PL/SQL procedure method. After submitting the request I am getting the following error, and I couldn't sort out why I get it while running the report:
    Error in "Multi threaded or single threaded execution block" of XXRX_REPORT_OUTPUT_PKG.insert_into_nested_table procedure
    ERROR :ORA-20005: Could not generate the XML in single thread mode
    XXRXERROR: XXRX_REPORT_OUTPUT_PKG.run_report SQLERROR: ORA-20005: ORA-20005: Could not generate the XML in single thread mode
    Can someone help me find the issue?
    Thanks in Advance

    Hi,
    Please read the SQL and PL/SQL FAQ.
    We cannot guess what the error is if you don't post any part of your code.
    Additionally, when you post some code or output, please enclose it between two lines consisting of the {noformat} tag, i.e.:
    {noformat}
    SELECT ...
    {noformat}
    Regards.
    Al

  • Error While Generating the Proxy Objects

    Hi All,
    While generating the proxy objects in SPROXY on the SAP R/3 side,
    I am getting this error:
    http://img169.imageshack.us/img169/5752/proxylm0.jpg
    Has anyone encountered this type of error?
    Regards
    Suman

    Hi PT,
    Now a strange issue has come up.
    While generating the proxy objects I got the above-mentioned error.
    After that I ran the 3 report programs according to Ramesh's suggestion.
    Now when I go to SPROXY it says No Connection to Integration Builder.
    I have checked the RFC destination created in SM59 of SAP R/3, of type H; it is fine.
    What more do I have to check?
    Also, whenever I open SPROXY, it asks for the user & password every time.
    Can you please throw some light on where to check?
    Regards
    Suman

  • CiscoWorks LMS 4.0.1 - Could not generate the report

    Hello,
    I have been running CiscoWorks LMS 4.0.1 for 6 months, and today I wanted to generate a report about the interface utilization on 2 Cisco switches (Catalyst 3750G). The corresponding job is created, it runs, and then I get "succeeded with info" in the "Run Status" column. When I then click on the "View Report" link, I get the following error: "Could not generate the report. Either data is not available for the specified duration or the report job failed."
    I tried the same procedure with 2 other switches but got the same result.
    Does anybody have an idea of how I can fix this issue?
    Thanks a lot in advance.
    Best regards,
    Marc Hoffmann

    Hi Marc,
    I have this problem too. I rebooted my Windows machine but that did not solve it. Do you know the service name responsible for this error? Do you have any other suggestion?
    Thank you !!!

  • Stamp tool not generating the same colour as the sample area I select.

    Heya, I am having an issue with the stamp tool not generating the same colour as the sample area I select. I have tried reducing the flow rate, but my understanding was that it should be an exact copy of the sample area. I have only had this issue with the last two versions of Photoshop.
    Cheers, Andy

    UltimateTop Trumps wrote:
    I create a shape and then put a shape inside, right click and choose Make Selection and the inner shape disappears leaving the larger outer shape.
    UltimateTop Trumps wrote:
    Another weird thing is that when I use a filter it applies it outside of the selection, but this is not as annoying as the original issue.
    This occurs when you use the quick mask to create a selection. Did you?
    miss_marple

  • Could not generate the flash file(SWF)

    Hi all,
    I am working with a dashboard that has a GMap plugin, and it was working fine on the BO server.
    But now the plugin has been changed from the GMap to the CMaps plugin. I am able to download the same dashboard from the BO server to my desktop.
    A snapshot of the warning I get while downloading the dashboard is attached.
    When the dashboard is downloaded from the server, instead of displaying the map the GMap plugin shows a warning "Invalid request this request is invalid", but when I open its properties the warning goes away and the map appears again.
    Then I tried to upload the same dashboard to the BO server, and I get an error "Could not generate the flash file(SWF)...". A snapshot of this is attached as well.
    I am unable to make any modification to this dashboard.
    Regards

    Hi,
    In your discussion, you have downloaded the dashboard file from the BO server to your desktop.
    You can export the dashboard to the BO server in SWF format, but how can you download the physical dashboard file from the BO server? Is that possible?
    Make your changes in the physical .xlf file and export the .xlf file to the BO server via File --> Save to Platform.
    Regards,
    Venkat P

  • Batch Job not Generating Spool No

    Hi Experts,
    We have a custom program that prints multiple invoices in a single run, i.e. all invoices for a particular sales office are printed in a single spool request. In foreground execution it runs fine. But after creating a variant for it on the selection screen and submitting the program to a batch job, no spool number is created when the job finishes. What could be the reason?
    I would also like to mention that we ask the user to give an invoice number range on the selection screen.
    Edited by: priyeshosi on Jan 4, 2012 5:03 PM

    Hi priyeshosi,
    If you use function modules starting with GUI_* or WS_* in your report, then you can't generate the spool: these modules need the frontend GUI, which is not available in background processing.
    For the reason behind this, please check this link:
    http://www.sap-img.com/ab004.htm
    Regards,
    Selva M

  • Batch session not generated in FF67

    Dear All,
    I have entered some transactions in FF67, saved them, and posted them. My processing type is 2. I have defined the posting keys for the BRS and also assigned the GL account. However, the system has not generated a batch input session in SM35. In the overview of the statement, the system shows an entry for IN51 as GL Posted and IN01 as Not to be Posted. What could be the error?
    Regards
    Sanil Bhandari

    Hi
    With the same config and figures, the batch session is generated and posted in testing. But in the production environment I am not able to generate the session. Moreover, there is no statement with that ID in FEBA.
    Regards
    Sanil Bhandari

  • Compile pll (ONLY not generate the plx) in unix

    hi all,
    I want to just compile my pll and save it, but not generate the .plx.
    I use
    f90genm.sh module=x.pll userid=user/pass@db module_type=library compile_all=yes batch=yes logon=yes build=no output_file=x.pll
    and this produces x.plx; I want x.pll to just be compiled.
    Thanks for any contribution

    As Steve has said, the compile_all=yes option will rebuild the pll and save it. That's the way I've done it for years. Then I've either deleted the plx if I'm not using them, or split the pll and plx into different directories: Forms Builder has the FORMS_PATH set for the pll and the Application Server has its FORMS_PATH set for the plx.
    HTH
    Steve
    EG:
    Q:\Forms>dir jacob.pll
    Volume in drive Q is Home
    Volume Serial Number is 047F-1902
    Directory of Q:\Forms
    13/03/2007  11:45           126,976 jacob.pll
                   1 File(s)        126,976 bytes
                   0 Dir(s)  25,464,627,200 bytes free
    Q:\Forms>frmcmp module=jacob.pll module_type=library userid=prism_adm/password@devprop compile_all=yes window_state=minimise batch=yes
    Q:\Forms>dir jacob.*
    Volume in drive Q is Home
    Volume Serial Number is 047F-1902
    Directory of Q:\Forms
    06/07/2007  09:08           122,880 jacob.pll
    06/07/2007  09:08            53,248 jacob.plx
                   2 File(s)        176,128 bytes
                   0 Dir(s)  25,464,483,840 bytes free
    Q:\Forms>

  • Transport the table not just the DDIC structure

    I had a table in Dev as a local object. I assigned a package to the database table as follows:
    Goto -> Object Directory Entry -> here click on edit mode and enter the package name.
    When I transported the object to QDF, only the DDIC structure was transported and not the table. How do I transport the table and not just the DDIC structure? Thanks,

    Hi,
    What do you mean by the entire table?
    Do you mean that the table contents should also be transported?
    Table contents will be transported only if you have included them in a customising request and your table is of delivery class customising (C).
    Regards,
    Ankur Parab

  • AbstractSession.checkAndRefreshInvalidObject(..) not refreshing the passed object

    While investigating an issue regarding stale data within optimistic locks (eventually causing an OptimisticLockException) in a setup with two nodes and JMS synchronized caching, I started to debug into EclipseLink and to my understanding found the place where the problem arises but unfortunately I do not quite understand its reason.
    Please note: I looked in the forum for a similar question and did not find one; should I have missed it, please feel free to move this topic. Secondly, I am fairly new to EclipseLink, so maybe this question is a no-brainer; unfortunately it is not for me.
    In org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject the object is checked for validity with checkAndRefreshInvalidObject(). The query which is issued eventually after this.executeQuery(query) in checkAndRefreshInvalidObject() returns the correct lock code, but when the control flow returns to cloneAndRegisterObject(), the object "original" still has the old values which are incorporated into "workingClone" a few lines later.
    I was a bit surprised to see that checkAndRefreshInvalidObject() has no return value and that the object "original" is not updated directly; the result of the query within it seems to be simply discarded. I further assume there should be some behind-the-scenes updating initiated by "query.refreshIdentityMapResult()" in checkAndRefreshInvalidObject(), correct?
    Environment: EclipseLink 2.6.0, Oracle-DB, Two Payara nodes with a shared cache synchronized by ActiveMQ.
    Additional remarks: The table in question uses SINGLE_TABLE inheritance, and the child has a column OPT_LOCK annotated with @Version.
    My questions would be:
    If checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) is called, where is the actual refresh happening if isConsideredInvalid(..) returns true (which is the case for the scenario being described)?
    Should the first parameter of checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) have been updated already after the call to checkAndRefreshInvalidObject()?
    If not, how are the refreshed attributes merged into the variable workingClone in org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject(..)? After checkAndRefreshInvalidObject(..), the method keeps working with the possibly stale "original", or rather creates a "workingClone" of it.
    I checked the members "object" on unitOfWorkCacheKey & parentCacheKey in cloneAndRegisterObject(..) and they are both null, should the refresh of checkAndRefreshInvalidObject(..) be visible in these members instead?
    The overall logic of cloneAndRegisterObject(..) puzzles me a bit: even if the results of checkAndRefreshInvalidObject(..) end up in the UOW session, the method keeps working with the old value "original" and does not retrieve it out of the session afterwards. Therefore I suppose that either in UnitOfWorkImpl.populateAndRegisterObject(..) or among the interactions between ObjectLevelReadQuery and ObjectBuilder some registration in the UOW session should be happening. Still, even if this were the case, how are these changes incorporated into workingClone at the end of cloneAndRegisterObject(..)?

    tl;dr With UnitOfWorkImpl.checkAndRefreshInvalidObject(original, parentCacheKey, descriptor), where parentCacheKey is the variable used to access the variable original at a later point in ObjectBuilder, is it actually possible to manifest the changes visible on the database directly in the first level cache, and not the second level cache, if the object being updated is the member object of parentCacheKey (which by my current understanding is its second level cache representation)?
    After investigating the issue further, I think I now have some understanding of what is causing the observed behaviour. First I would like to summarize my insights; if anyone notices any fundamental flaws in them, I would appreciate corrections.
    Andreas Schmidt wrote on Thu, 06 August 2015 12:22
    While investigating an issue regarding stale data within optimistic locks (eventually causing an OptimisticLockException) in a setup with two nodes and JMS synchronized caching, I started to debug into EclipseLink and to my understanding found the place where the problem arises but unfortunately I do not quite understand its reason.
    In org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject the object is checked for validity with checkAndRefreshInvalidObject(). The query which is issued eventually after this.executeQuery(query) in checkAndRefreshInvalidObject() returns the correct lock code, but when the control flow returns to cloneAndRegisterObject(), the object "original" still has the old values which are incorporated into "workingClone" a few lines later.
    I was a bit surprised to see that checkAndRefreshInvalidObject() has no return value and that the object "original" is not updated directly; the result of the query within it seems to be simply discarded. I further assume there should be some behind-the-scenes updating initiated by "query.refreshIdentityMapResult()" in checkAndRefreshInvalidObject(), correct?
    The object is updated in org.eclipse.persistence.internal.descriptors.ObjectBuilder. One possible path is through buildWorkingCopyCloneNormally, which later on passes through buildObject, where the CacheKey of the parent session is retrieved and with it the object reference from checkAndRefreshInvalidObject(), which is then updated in buildAttributesIntoObject.
    Andreas Schmidt wrote on Thu, 06 August 2015 12:22
    Environment: EclipseLink 2.6.0, Oracle-DB, Two Payara nodes with a shared cache synchronized by ActiveMQ.
    Additional remarks: The table in question uses SINGLE_TABLE inheritance, and the child has a column OPT_LOCK annotated with @Version.
    Two important details missing were:
    There is an explicit call to flush within the transaction, which according to https://www.eclipse.org/eclipselink/documentation/2.6/concepts/cache003.htm marks it as dirty, effectively disabling the second-level cache for the remainder of the transaction.
    The cache coordination is set to INVALIDATE_CHANGED_OBJECTS.
    Andreas Schmidt wrote on Thu, 06 August 2015 12:22
    My questions would be:
    If checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) is called, where is the actual refresh happening if isConsideredInvalid(..) returns true (which is the case for the scenario being described)?
    Should the first parameter of checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) have been updated already after the call to checkAndRefreshInvalidObject()?
    If not, how are the refreshed attributes merged into the variable workingClone in org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject(..)? After checkAndRefreshInvalidObject(..), the method keeps working with the possibly stale "original", or rather creates a "workingClone" of it.
    I checked the members "object" on unitOfWorkCacheKey & parentCacheKey in cloneAndRegisterObject(..) and they are both null, should the refresh of checkAndRefreshInvalidObject(..) be visible in these members instead?
    The overall logic of cloneAndRegisterObject(..) puzzles me a bit: even if the results of checkAndRefreshInvalidObject(..) end up in the UOW session, the method keeps working with the old value "original" and does not retrieve it out of the session afterwards. Therefore I suppose that either in UnitOfWorkImpl.populateAndRegisterObject(..) or among the interactions between ObjectLevelReadQuery and ObjectBuilder some registration in the UOW session should be happening. Still, even if this were the case, how are these changes incorporated into workingClone at the end of cloneAndRegisterObject(..)?
    In org.eclipse.persistence.internal.descriptors.ObjectBuilder (see above)
    Yes
    Covered by the previous answer.
    I think it should be visible on parentCacheKey but not on unitOfWorkCacheKey, which may be part of the root cause of the issue.
    Covered by the previous answers.
    While we might be able to circumvent the issue by skipping the intermediate flush, I still think this could be a bug, possibly a design issue with the call hierarchy and the parameters of checkAndRefreshInvalidObject(original, parentCacheKey, descriptor), although I am rather uncertain about the latter, as my experience with EclipseLink is still very limited and I might not see some interdependencies yet.
    My train of thought is as follows:
    If a flush occurs within a transaction, the transaction is marked as dirty and the shared persistence unit cache is ignored; according to https://www.eclipse.org/eclipselink/documentation/2.6/concepts/cache003.htm the objects are then built directly in the first-level cache, i.e. in the unit-of-work context.
    By my understanding this means, with respect to checkAndRefreshInvalidObject(original, parentCacheKey, descriptor), that if the CacheKey has been invalidated, the data should be retrieved from the database but, because the transaction is dirty, not be written to the second-level cache; instead it should be created directly in the first-level cache.
    Within the depths of ObjectBuilder, I think the object original from checkAndRefreshInvalidObject is retrieved by its CacheKey, in particular the parentCacheKey, to be updated by buildAttributesIntoObject, for example by
    ObjectBuilder:965: cacheKey = session.retrieveCacheKey(primaryKey, concreteDescriptor, joinManager, query);
    My new question is whether it is actually possible with the current implementation to materialize the fresh data from the database directly in the persistence context cache, effectively bypassing the second-level cache.
    As far as I understand, the CacheKey passed into checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) is the second-level cache representation of the object, holding a reference to it in its member object. If the parentCacheKey has to be used in ObjectBuilder to access and update the object original from checkAndRefreshInvalidObject, will these changes not automatically be visible in the second-level cache as well?
    The reason why the update of the variable original was not visible after checkAndRefreshInvalidObject(original, parentCacheKey, descriptor) is, I think by now, that the code flow went through ObjectBuilder:2207: workingClone = buildNewInstance();, originating from buildWorkingCopyCloneFromRow due to unitOfWork.wasTransactionBegunPrematurely() being true in buildObjectInUnitOfWork. With workingClone = buildNewInstance(), the updates made by ObjectBuilder:2250: buildAttributesIntoWorkingCopyClone will not affect the object original from checkAndRefreshInvalidObject.
    One way to make the changes visible on the object original is by replacing
    ObjectBuilder:2207: workingClone = buildNewInstance()
    with
    workingClone = unitOfWork.getParentIdentityMapSession(query).retrieveCacheKey(primaryKey, descriptor, joinManager, query).getObject()
    but there I definitely lack the experience with the code base to have any clue regarding possible side effects.
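    Until that is clarified, a workaround on the application side is to force a database refresh of the affected instance inside the transaction, so the working copy carries the current lock value rather than a stale cached one. The following is only a sketch; Child is a placeholder for the affected @Version-carrying entity:

    import javax.persistence.EntityManager;

    public class RefreshBeforeUpdate {
        // Sketch: re-read the row inside the current transaction so the
        // persistence context holds the current OPT_LOCK value instead of
        // a stale one. "Child" stands in for the affected entity.
        public static Child loadFresh(final EntityManager em, final long id) {
            final Child c = em.find(Child.class, id);
            em.refresh(c); // issues a SELECT and overwrites cached state, including @Version
            return c;
        }
    }

    The same effect can be achieved per query with the EclipseLink hint QueryHints.REFRESH ("eclipselink.refresh"), which forces the query result to be rebuilt from the database rather than from the cache.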
