IBR on separate server - Custom Conversions

I have a custom conversion process that interrupts the IBR process to do some work on a document before the rendering step. It works well while IBR is installed on the same server as the Content Server, but I am struggling to get it going when I move IBR onto a separate server.
When run on the same server, the "DocConverter.hda" contains the original location of the document in the vault, but when run on a separate server, everything points to a local folder. It appears that a pre-process copies the document into a temp directory, and after the rendering completes, a post-process copies the PDF out to the weblayout folder. My process needs to know the location of the original document in the vault, but I cannot find any references to these pre/post processes in the documentation or even in the logs.
Anyone know how this works?
I am running CS 7.5.1 and PDF Converter 7.6 on a Windows Server 2003 platform with a SQL Server 2005 back end.
Cheers,
Desmo
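
For reference: DocConverter.hda is a plain HDA file, so one way to see exactly which path variables each topology hands your process is to dump its LocalData section before your custom step runs. A minimal sketch in Java; the file location on the refinery and the exact variable names you will find there are assumptions, not guaranteed:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Dumps the name=value pairs in the LocalData section of a HDA file such as
// DocConverter.hda, so you can compare what the refinery sees when it runs
// locally vs. on a separate server. Pass the .hda path as the first argument.
public class HdaDump {
    public static Map<String, String> readLocalData(String path) throws IOException {
        Map<String, String> data = new LinkedHashMap<String, String>();
        BufferedReader in = new BufferedReader(new FileReader(path));
        try {
            String line;
            boolean inLocalData = false;
            while ((line = in.readLine()) != null) {
                if (line.startsWith("@Properties LocalData")) {
                    inLocalData = true;
                } else if (line.startsWith("@end")) {
                    inLocalData = false;
                } else if (inLocalData) {
                    int eq = line.indexOf('=');
                    if (eq > 0) {
                        data.put(line.substring(0, eq), line.substring(eq + 1));
                    }
                }
            }
        } finally {
            in.close();
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        for (Map.Entry<String, String> e : readLocalData(args[0]).entrySet()) {
            System.out.println(e.getKey() + " = " + e.getValue());
        }
    }
}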

David is correct. Depending on exactly what you are changing (it should always be in a component), you need to restart the Content Server first and then restart the refinery, because of the way the conversion information gets passed from the Content Server to the refinery. If you are changing the top-level file formats entry, you may need to check in a new version of the file rather than just resubmitting the file to the refinery using the Repository Manager applet, to make sure the right conversion gets called.
If you don't want to create this yourself, Fishbowl Solutions has a product called Native File Updater (part of the Document Automation Suite) that will likely work for you.
http://www.fishbowlsolutions.com/StellentSolutions/StellentComponents/fs_docauto_webcopy?WT.mc_id=fb_sig_to
Hopefully that helps,
Tim

Similar Messages

  • IBR : Custom conversion

    Hi,
    I am trying to test the custom conversion with IBR.
    As a first step, I would like to be able to execute a Java method before the PDF conversion of an MS Word document.
    I have created a component with the following tables:
    <@table CustomConversion_DocumentConversions@>
    <table border=1><caption><strong>CustomConversion_DocumentConversions</strong></caption>
    <tr>
    <td>drConversion</td>
    <td>drSteps</td>
    <td>drDescription</td>
    <td>drIsEnabledFlag</td>
    </tr>
    <tr>
    <td>CheckLinks</td>
    <td>CheckLinksFile</td>
    <td>checks links in doc files</td>
    <td>TRUE</td>
    </tr>
    <tr>
    <td>MSOffice</td>
    <td>ExecuteSubconversion<$stepsToWebviewable="CheckLinksFile, MSOfficeToPostscript, PostscriptToPDF, ExecuteSubconversion<$subConversion='CreateThumbnail'$>", subConversion="UniversalConversion", supportsOIX="true", supportsXX="true", supportsOO="true", supportsIX="true"$></td>
    <td>Customized MS Office conversion</td>
    <td>TRUE</td>
    </tr>
    </table>
    <@end@>
    <@table CustomConversion_ConversionSteps@>
    <table border=1><caption><strong>CustomConversion_ConversionSteps</strong></caption>
    <tr>
    <td>drStep</td>
    <td>drStepType</td>
    <td>drStepAction</td>
    <td>drStepParameters</td>
    <td>drStepTimeoutName</td>
    <td>drStepReport</td>
    <td>drStepControlCodes</td>
    <td>drStepDescription</td>
    <td>drStepIsEnabled</td>
    </tr>
    <tr>
    <td>CheckLinksFile</td>
    <td>Code</td>
    <td>oracle.customconversion.CheckLinksFileStep</td>
    <td></td>
    <td></td>
    <td>!csCheckFiles</td>
    <td>onErrorFail</td>
    <td>Sample step that checks word files</td>
    <td>TRUE</td>
    </tr>
    </table>
    <@end@>
    The "CheckLinksFile" step should be executed as it is added to the conversion steps of the MSOffice conversion.
    However, it does not seem to be executed ...
    Below, I copy the console output :
    starting conversion: WORD; is subconversion: false
    Script: TRUE
    Evaluated to: TRUE
    Script: ExecuteSubconversion<$subConversion="MSOffice",msOfficeApp="WORD"$>
    Evaluated to: ExecuteSubconversion
    Starting step: ExecuteSubconversion
    Script: ignoreStatus
    Evaluated to: ignoreStatus
    Script: <$subConversion$>
    Evaluated to: MSOffice
    starting conversion: MSOFFICE; is subconversion: true
    Script: TRUE
    Evaluated to: TRUE
    Script: ExecuteSubconversion<$stepsToWebviewable="CheckLinksFile, MSOfficeToPostscript, PostscriptToPDF, ExecuteSubconversion<$subConversion='CreateThumbnail'$>", subConversion="UniversalConversion", supportsOIX="true", supportsXX="true", supportsOO="true", supportsIX="true"$>
    Evaluated to: ExecuteSubconversion
    Starting step: ExecuteSubconversion
    Script: ignoreStatus
    Evaluated to: ignoreStatus
    Script: <$subConversion$>
    Evaluated to: UniversalConversion
    starting conversion: UNIVERSALCONVERSION; is subconversion: true
    Script: TRUE
    Evaluated to: TRUE
    Script: <$include universal_conversion$>
    Evaluated to: ExecuteSubconversion
    Starting step: ExecuteSubconversion
    Script: ignoreStatus
    Evaluated to: ignoreStatus
    Script: <$subConversion$>
    Evaluated to: Direct PDFExport
    starting conversion: DIRECT PDFEXPORT; is subconversion: true
    Script: true
    Evaluated to: true
    Script: PDFExport, ExecuteSubconversion<$subConversion='CreateThumbnail', supportsIX='true'$>
    Evaluated to: PDFExport, ExecuteSubconversion
    Starting step: PDFExport
    Script: onErrorFail, producesRequiredWebviewable
    The Java method that should be executed does a System.out.println that should appear in that log, but nothing shows up.
    Does anyone have an idea?
    Thank you,
    Romain.
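
    One thing worth ruling out: System.out from a class the refinery loads does not necessarily end up in the console window you are watching. A minimal file-based trace you could call from the step method instead, to confirm whether the step runs at all; the log path is an assumption, pick any directory the refinery user can write to:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Crude execution tracer: call CheckTrace.mark("...") from the custom step
    // instead of System.out.println, then check the file after a conversion.
    public final class CheckTrace {
        private CheckTrace() {}

        public static synchronized void mark(String message) {
            try {
                PrintWriter out = new PrintWriter(new FileWriter("C:/temp/checklinks-trace.log", true));
                try {
                    out.println(System.currentTimeMillis() + " " + message);
                } finally {
                    out.close();
                }
            } catch (IOException e) {
                // tracing must never break the conversion
            }
        }
    }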

    Hi Raphael,
    Thank you for your answer.
    I am just trying to do a simple task to see how I could customize the IBR conversions.
    I have written a simple Java class with a method that just writes a comment into the conversion logs.
    I've also made a component that adds rows to the conversion and step tables to enable my custom conversion.
    My Java method should be executed before the PDF conversion, but I do not see the comment in the logs even though the conversion succeeded!
    I don't know whether I am looking at the wrong log files (the IBR output in the IBR console) or whether I missed something.
    Do you have any idea?
    Did you use the same method to create custom conversions (Java class + custom component)?
    Thank you,
    Romain.

  • Where should I place the RTF file (locally or in the server custom top)?

    Hi guys,
    I am very new to XML Report Publisher. I have completed one RTF template successfully, and I am accessing Oracle Apps through our client instances, which are in the USA.
    I don't know where to place the RTF file: on my local system or in the custom top.
    Please reply if anyone knows; this is an urgent task to complete.
    Regards
    Praburam s

    Hi,
    I know that Oracle Apps has a separate responsibility called XML Publisher Administrator.
    I just want to know: in the Template tab, where we attach the RTF, should I browse for the RTF on my local machine or in the server custom top?
    Can I save my RTF file locally, or must it be in the server custom top?
    I hope you can see what my question is.
    Please reply soon.
    Regards
    Prabu

  • Custom conversion of MSOB 10.7 / 11.0 to MO 11i

    Was anyone involved in a project where a custom conversion of the Multiple Sets of Books architecture on 10.7 or 11.0 was done prior to upgrading to 11i MO?

    Has anybody been through this scenario before, or can anyone give a stepwise approach?
    We have been through the same scenario and successfully finished the upgrade on a development machine with the May release of 11i.
    1) On your new machine, run Rapid Install and create the technology stack.
    2) Finish the Category 1 steps in your existing database (10.7 on HP 10). You can also do this after the export using server partitioning. You have to apply the relevant patches.
    3) Export data from 10.7 (whichever database you have for 10.7; ours was 7.3.4) to the new machine's 8.1.6.x database.
    4) We tested exporting data from the 10.7 application using server partitioning. You can apply patches if you want to run the 10.7 application against the 8.1.6 database.
    Now your new machine is ready with the old data in 8.1.6 and the new 11i application.
    5) Apply all critical patches (for your licensed products, plus AD and FND) to the new machine, then continue with the rest of the upgrade process.
    If you need more detailed/specific information, contact me at [email protected]. Glad to help you.
    Hope this helps,
    Shibu
    Thanks!!

  • Session broker and custom conversion manager

    I'm having some problems using a session broker and a custom conversion manager. I just moved from using a single session to using a session broker in sessions.xml. I'm using a custom conversion manager as shown in this tech tips URL:
    http://www.oracle.com/technology/products/ias/toplink/technical/tips/customconversion/index.html
    Here's my conversion manager setup code -
    public class JpmiConversionManagerSetup extends SessionEventAdapter {
        /**
         * During the pre-login event the new MyConversionManager must be installed
         * @see oracle.toplink.sessions.SessionEventAdapter#preLogin
         * @param event
         */
        public void preLogin(SessionEvent event) {
            ConversionManager cm = new JpmiConversionManager();
            ConversionManager.setDefaultManager(cm);
            event.getSession().getLogin().getPlatform().setConversionManager(cm);
        }
    }
    My session broker manages two sessions. In sessions.xml, one session has the <event-listener-class> entry because I need the conversion there; the other session has no such entry, as I don't need any conversion for it.
    Now when I try to run a named query using the session broker, the conversion part blows up and throws a ConversionException. Any idea? Do I need to configure the session broker instead of the session in preLogin, or anything like that?

    I think the sessions editor is not available in 10.1.3dp4 yet, so I have to write sessions.xml by hand. But the parser throws an error saying that <session-broker> is not allowed in sessions.xml.
    SessionLoaderExceptions:
    org.xml.sax.SAXParseException: <Line 41, Column 18>: XML-24534: (Error) Element 'session-broker' not expected.
         at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:415)
         at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:284)
         at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:302)
         at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:199)
         at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:155)
         at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:111)
         at oracle.toplink.platform.xml.xdk.XDKParser.parse(XDKParser.java:160)
         at oracle.toplink.platform.xml.xdk.XDKParser.parse(XDKParser.java:190)
         at oracle.toplink.tools.sessionconfiguration.XMLSessionConfigLoader.loadDocument(XMLSessionConfigLoader.java:191)
         at oracle.toplink.tools.sessionconfiguration.XMLSessionConfigLoader.loadDocument(XMLSessionConfigLoader.java:151)
         at oracle.toplink.tools.sessionconfiguration.XMLSessionConfigLoader.load(XMLSessionConfigLoader.java:88)
         at oracle.toplink.tools.sessionmanagement.SessionManager.getSession(SessionManager.java:364)
         at oracle.toplink.tools.sessionmanagement.SessionManager.getSession(SessionManager.java:331)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
    Any idea how or where to declare a session broker in sessions.xml for 10.1.3dp4?
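
    For reference, in the 10.1.3 schema-based format the broker is no longer a standalone <session-broker> element; it is declared as another <session> with an xsi:type. A sketch follows, but the type tokens and element names here are assumptions from the dp4-era schema, so verify them against the sessions XSD shipped with your install:

    <toplink-sessions version="10g (10.1.3.0.0)"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <session xsi:type="server-session">
        <name>SessionA</name>
        <!-- login, project, and event-listener-class entries as before -->
      </session>
      <session xsi:type="server-session">
        <name>SessionB</name>
      </session>
      <session xsi:type="session-broker">
        <name>MyBroker</name>
        <session-name>SessionA</session-name>
        <session-name>SessionB</session-name>
      </session>
    </toplink-sessions>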

  • Which database driver is required for WebLogic 10.3 and Oracle DB 11g, both on MS 2008, on separate servers?

    Hi,
    I am trying to configure JDBC with WebLogic. Can anyone tell me which driver needs to be selected for WebLogic 10.3 and Oracle DB 11g, both on MS 2008, on separate servers?
    If I use the BEA Oracle Driver (Type 4) version 9.0.1, 9.2.0, 10, or 11, I get the following error (see snap 2):
    Connection test failed.
    [BEA][Oracle JDBC Driver]Error establishing socket. Unknown host: hdyhtc137540d
    weblogic.jdbc.base.BaseExceptions.createException(Unknown Source)
    weblogic.jdbc.base.BaseExceptions.getException(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.makeConnectionHelper(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.makeConnection(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.connectAndAuthenticate(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.open(Unknown Source)
    weblogic.jdbc.base.BaseConnection.connect(Unknown Source)
    weblogic.jdbc.base.BaseConnection.setupImplConnection(Unknown Source)
    weblogic.jdbc.base.BaseConnection.open(Unknown Source)
    weblogic.jdbc.base.BaseDriver.connect(Unknown Source)
    com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:505)
    com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:369)
    sun.reflect.GeneratedMethodAccessor826.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
    org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
    org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
    org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
    org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
    ...
    and
    when I use Oracle's Thin driver, version 9.0.1, 9.2.0, 10, or 11, I get this error:
    Connection test failed.
    Io exception: The Network Adapter could not establish the connection
    oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
    oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:112)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:173)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:229)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:458)
    oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:411)
    oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:490)
    oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:202)
    oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
    oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:474)
    com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:505)
    com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:369)
    sun.reflect.GeneratedMethodAccessor826.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
    org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
    org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
    org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
    org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
    ...

    I get this error when I click the Test Configuration button to test the connection with the Oracle DB.
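
    Both stack traces point at basic connectivity (host-name resolution and listener reachability) rather than the driver choice. A quick way to isolate this from the console is a standalone test with the Oracle Thin driver; the host, port, SID, and credentials below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    // Standalone connectivity check with the Oracle Thin driver; the ojdbc jar
    // must be on the classpath. If this fails with "unknown host" or "network
    // adapter" errors, the problem is DNS/listener reachability, not WebLogic.
    public class ThinDriverTest {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
            conn.close();
        }
    }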

  • R12 Customer Conversion using the TCA APIs or the standard interface tables?

    Should R12 customer conversion be done via the TCA APIs or the standard interface tables? Why?
    Which approach is more suitable? Kindly let us know your thoughts.

    We have used the interface tables, just to have better control and reprocessing abilities. You may need to be a little careful with the TCA tables.
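
    For comparison, the API route goes through the TCA V2 packages; a minimal PL/SQL sketch of creating an organization party with HZ_PARTY_V2PUB. The field values are placeholders and the record is only partially populated - check the TCA API reference for the full record types:

    DECLARE
      l_org_rec        hz_party_v2pub.organization_rec_type;
      l_return_status  VARCHAR2(1);
      l_msg_count      NUMBER;
      l_msg_data       VARCHAR2(2000);
      l_party_id       NUMBER;
      l_party_number   VARCHAR2(30);
      l_profile_id     NUMBER;
    BEGIN
      -- Placeholder values; CREATED_BY_MODULE is mandatory for TCA auditing
      l_org_rec.organization_name               := 'Acme Corp';
      l_org_rec.party_rec.orig_system_reference := 'LEGACY-0001';
      l_org_rec.created_by_module               := 'XXCONV';

      hz_party_v2pub.create_organization(
        p_init_msg_list    => FND_API.G_TRUE,
        p_organization_rec => l_org_rec,
        x_return_status    => l_return_status,
        x_msg_count        => l_msg_count,
        x_msg_data         => l_msg_data,
        x_party_id         => l_party_id,
        x_party_number     => l_party_number,
        x_profile_id       => l_profile_id);

      DBMS_OUTPUT.put_line('Status: ' || l_return_status || ' party_id: ' || l_party_id);
    END;
    /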

  • Customer conversion for Non-English languages

    Hi:
    We have a requirement to convert customer data from a legacy system to Oracle EBS, supporting both English and non-English languages. Our conversion program for English is working fine, but we are not sure about the approach for other-language conversions. Can anybody share their knowledge if they have faced similar requirements?
    Thanks /Santanu

    Duplicate thread (please post only once) ...
    Customer conversion for non-english language
    Re: Customer conversion for non-english language

  • Custom Conversion Exit for a Custom InfoObject

    Has anyone created a custom conversion exit for an InfoObject?
    I have a requirement to mask the first five digits of the Social Security Number on output only. I created a custom conversion exit, ZCONV for example, and added code to the output function module CONVERSION_EXIT_ZCONV_OUTPUT to accomplish the masking. In CONVERSION_EXIT_ZCONV_INPUT, I just pass INPUT to OUTPUT.
    It works OK for masking the output results, but any input variable returns an "Invalid Format" error. So I am not able to have a selection screen variable where you enter the SSN and run the query for it; it tries to convert the entry to the masked form and then run the query.
    Any pointers? Helpful answers will be promptly rewarded.
    Thanks
    Vineet

    Since the ultimate goal was not to display Social Security Numbers, I changed the custom conversion exit to display SID values. In case anyone needs to do the same: the output function module CONVERSION_EXIT_ZCONV_OUTPUT is coded to read the SID value for the SSN from the SID table of the InfoObject, and the input function module CONVERSION_EXIT_ZCONV_INPUT does the reverse, reading the SSN for the SID.
    Dropdowns on variable values work OK, displaying the SID value and the employee name. The only drawback is that you cannot type an SSN directly into the selection variable; you have to use the dropdown and select.
    Thanks
    Vineet

  • Import PO Receipts using custom conversion.

    I am importing PO receipts through a custom conversion by populating the PO interface tables and running the Receiving Transaction Processor. But when I look for the receipt in the application, I cannot find it. The receipts correspond to existing approved POs, the tables are populated in the database, the concurrent program reports no errors, and the processing mode has been set to both Batch and Immediate. Has anyone else experienced this issue? If so, were you able to find a resolution? Any help would be greatly appreciated. Thanks.
    Ray

    INSERT INTO RCV_TRANSACTIONS_INTERFACE(INTERFACE_TRANSACTION_ID,
    PROCESSING_STATUS_CODE,
    TRANSACTION_DATE,
    GROUP_ID,
    LAST_UPDATE_DATE,
    LAST_UPDATED_BY,
    CREATION_DATE,
    CREATED_BY,
    TRANSACTION_TYPE,
    PROCESSING_MODE_CODE,
    TRANSACTION_STATUS_CODE,
    QUANTITY,
    ITEM_DESCRIPTION,
    UOM_CODE,
    VENDOR_ID,
    VENDOR_SITE_ID,
    FROM_ORGANIZATION_ID,
    SOURCE_DOCUMENT_CODE,
    PO_HEADER_ID,
    PO_LINE_ID,
    PO_UNIT_PRICE,
    PO_DISTRIBUTION_ID,
    DESTINATION_TYPE_CODE,
    BILL_OF_LADING,
    VENDOR_ITEM_NUM,
    COMMENTS,
    ATTRIBUTE15,
    VENDOR_NAME,
    VALIDATION_FLAG,
    SHIPMENT_LINE_STATUS_CODE)
    VALUES (rcv_transactions_interface_s.NEXTVAL,
    (PO_RECORD.PO_RECEIPTS_STG_ID),
    PO_RECORD.PROCESSING_STATUS_CODE,
    TO_DATE(PO_RECORD.TRANSACTION_DATE, 'YYYYMMDD'),
    TO_NUMBER(PO_RECORD.GROUP_ID),
    SYSDATE, --TO_DATE(PO_RECORD.T_LAST_UPDATE_DATE, 'YYYYMMDD'),
    v_user_id, --TO_NUMBER(PO_RECORD.T_LAST_UPDATED_BY),
    SYSDATE, --TO_DATE(PO_RECORD.T_CREATION_DATE, 'YYYYMMDD'),
    v_user_id, --TO_NUMBER(PO_RECORD.T_CREATED_BY),
    PO_RECORD.T_TRANSACTION_TYPE,
    PO_RECORD.PROCESSING_MODE_CODE,
    PO_RECORD.T_TRANSACTION_STATUS_CODE,
    TO_NUMBER(PO_RECORD.QUANTITY),
    PO_RECORD.ITEM_DESCRIPTION,
    PO_RECORD.UOM_CODE,
    --TO_NUMBER(PO_RECORD.T_VENDOR_ID),
    v_vendor_id,
    -- TO_NUMBER(PO_RECORD.T_VENDOR_SITE_ID),
    v_vendor_site_id,
    TO_NUMBER(PO_RECORD.TO_ORGANIZATION_ID),
    PO_RECORD.SOURCE_DOCUMENT_CODE,
    TO_NUMBER(PO_RECORD.PO_HEADER_ID),
    TO_NUMBER(PO_RECORD.PO_LINE_ID),
    TO_NUMBER(PO_RECORD.PO_UNIT_PRICE),
    TO_NUMBER(PO_RECORD.PO_DISTRIBUTION_ID),
    PO_RECORD.DESTINATION_TYPE_CODE,
    PO_RECORD.BILL_OF_LADING,
    PO_RECORD.VENDOR_ITEM_NUM,
    PO_RECORD.T_COMMENTS,
    PO_RECORD.T_ATTRIBUTES15,
    PO_RECORD.VENDOR_NAME,
    PO_RECORD.T_VALIDATION_FLAG,
    PO_RECORD.SHIPMENT_LINE_STATUS_CODE);
    INSERT INTO RCV_HEADERS_INTERFACE(HEADER_INTERFACE_ID,
    PROCESSING_STATUS_CODE,
    RECEIPT_SOURCE_CODE,
    TRANSACTION_TYPE,
    AUTO_TRANSACT_CODE,
    TEST_FLAG,
    LAST_UPDATE_DATE,
    LAST_UPDATED_BY,
    RECEIPT_NUM,
    VENDOR_NUM,
    VENDOR_ID,
    VENDOR_SITE_ID,
    FREIGHT_CARRIER_CODE,
    EXPECTED_RECEIPT_DATE,
    COMMENTS,
    FREIGHT_TERMS,
    ATTRIBUTE14,
    ATTRIBUTE15,
    VALIDATION_FLAG,
    TRANSACTION_DATE,
    ORG_ID,
    OPERATING_UNIT)
    VALUES (rcv_headers_interface_s.NEXTVAL,
    (PO_RECORD.XXPO_RECEIPTS_STG_ID),
    PO_RECORD.PROCESSING_STATUS_CODE,
    PO_RECORD.RECEIPT_SOURCE_CODE,
    PO_RECORD.H_TRANSACTION_TYPE,
    PO_RECORD.AUTO_TRANSACT_CODE,
    PO_RECORD.TEST_FLAG,
    SYSDATE, --TO_DATE(PO_RECORD.T_LAST_UPDATE_DATE, 'YYYYMMDD'),
    v_user_id,
    PO_RECORD.RECEIPT_NUM,
    V_VENDOR_ID,
    -- TO_NUMBER(PO_RECORD.VENDOR_CODE),
    -- TO_NUMBER(PO_RECORD.H_VENDOR_ID),
    v_vendor_id,
    -- TO_NUMBER(PO_RECORD.H_VENDOR_SITE_ID),
    v_vendor_site_id,
    PO_RECORD.FREIGHT_CARRIER_CODE,
    TO_DATE(PO_RECORD.EXPECTED_RECEIPT_DATE, 'YYYYMMDD'), -- date?
    PO_RECORD.H_COMMENTS,
    PO_RECORD.FREIGHT_TERMS,
    PO_RECORD.ATTRIBUTES14,
    PO_RECORD.H_ATTRIBUTES15,
    PO_RECORD.H_VALIDATION_FLAG,
    TO_DATE(PO_RECORD.TRANSACTION_DATE, 'YYYYMMDD'),
    TO_NUMBER(PO_RECORD.ORGANIZATION_ID),
    PO_RECORD.OPERATING_UNIT);
    We are using Oracle Apps R12, and the transaction type we pass is 'RECEIVE', by the way.
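
    When the Receiving Transaction Processor completes without reporting errors but no receipt appears, the interface rows have usually flipped to an error status rather than being processed. A hedged diagnostic query - the table and column names are standard R12, but verify them in your instance:

    -- PROCESSING_STATUS_CODE moves from PENDING to COMPLETED/ERROR as the
    -- processor runs; row-level failures are logged in PO_INTERFACE_ERRORS.
    SELECT rti.interface_transaction_id,
           rti.processing_status_code,
           rti.transaction_status_code,
           pie.error_message
    FROM   rcv_transactions_interface rti,
           po_interface_errors pie
    WHERE  pie.interface_transaction_id (+) = rti.interface_transaction_id
    ORDER  BY rti.interface_transaction_id;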

  • RollbackException using UserTransaction when calling EJB in separate server

    I'm using WL 6.1 on Solaris and am calling a stateless session EJB that is
    running in a separate server. I'm looking up the remote EJB using JNDI and
    calling through its home interface. This works fine, but if the client code
    begins a UserTransaction, then calls the EJB that's in the separate server,
    and then calls commit on the transaction, I get the following:
    weblogic.transaction.RollbackException: Aborting prepare because some
    resources could not be assigned - with nested exception:
    [javax.transaction.SystemException: Aborting prepare because some
    resources could not be assigned]
    The code that works looks like:
    rgData = HomeHolder.ROUTING_GUIDE_MGR_HOME.create().getResourceOptions(qd);
    whereas the code that fails is:
    UserTransaction transaction = new UserTransaction();
    transaction.begin();
    rgData = HomeHolder.ROUTING_GUIDE_MGR_HOME.create().getResourceOptions(qd);
    transaction.commit();
    If I put the EJB in the same server as the client, I don't get the exception,
    so it seems to be related to running it in the separate server and using the
    UserTransaction. The deployment descriptor of the EJB states that it
    "Supports" transactions.
    Any ideas?
    Thanks,
    John

    Yes, actually we are using:
    AppServerTransaction transaction = new AppServerTransaction();
    which is a wrapper which does what you say:
    Context ctx = new InitialContext(...); // connect to another WLS
    UserTransaction tx = (UserTransaction)ctx.lookup("java:comp/UserTransaction");
    and our HomeHolder does what you say as well:
    Homexxx home = ctx.lookup(...);
    Any ideas why surrounding the EJB call with a UserTransaction causes a problem when committing?
    Thanks,
    John
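
    For later readers, the lookup pattern from the quoted reply below, written out end to end; the host, port, and the EJB home's JNDI name are placeholders for your own environment:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    // Obtain both the UserTransaction and the EJB home from the JNDI tree of
    // the server that hosts the bean, so the transaction is coordinated there.
    public class RemoteTxClient {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://otherhost:7001");
            Context ctx = new InitialContext(env);

            UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            Object home = ctx.lookup("RoutingGuideMgrHome"); // narrow via PortableRemoteObject in real code

            tx.begin();
            // ... create the bean from the home and call it here ...
            tx.commit();
        }
    }
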
    Dimitri Rakitine wrote:
    > John Hanna <[email protected]> wrote:
    > > I'm using WL 6.1 on Solaris and am calling a stateless session EJB that
    > > is running in a separate server. I'm looking up the remote EJB using
    > > JNDI and calling through it's home interface. This works fine but if
    > > the client code begins a UserTransaction, then calls the EJB that's in
    > > the separate server, and then calls commit on the transaction, I get the
    > > following:
    >
    > > weblogic.transaction.RollbackException: Aborting prepare because some
    > > resources could not be assigned - with nested exception:
    > > [javax.transaction.SystemException: Aborting prepare because some
    > > resources could not be assigned]
    >
    > > The code that works looks like:
    >
    > > rgData =
    > > HomeHolder.ROUTING_GUIDE_MGR_HOME.create().getResourceOptions(qd);
    >
    > > whereas the code that fails is:
    >
    > > UserTransaction transaction = new UserTransaction();
    >
    > It's an interface, how did this work? Assuming that you do not want
    > distributed tx, does this work:
    >
    > Context ctx = new InitialContext(...); // connect to another WLS
    > UserTransaction tx = (UserTransaction)ctx.lookup("java:comp/UserTransaction");
    > Homexxx home = ctx.lookup(...);
    > tx.begin();
    > home.create().getResourceOptions(qd);
    > tx.commit();
    >
    > ?
    >
    > > transaction.begin();
    > > rgData =
    > > HomeHolder.ROUTING_GUIDE_MGR_HOME.create().getResourceOptions(qd);
    > > transaction.commit();
    >
    > > If I put the EJB in the same server as the client, I don't get the
    > > exception so it seems to be related to running it in the separate server
    > > and using the UserTransaction. The deployment descriptor of the EJB
    > > states that it "Supports" transactions.
    >
    > > Any ideas?
    >
    > > Thanks,
    >
    > > John
    >
    > --
    > Dimitri

  • SharePoint 2013 workflows for SPD - need a separate server to run workflow engine?

    Hi there,
    I have a single-machine dev installation of SharePoint 2013. How can we build SP 2013 workflows in SharePoint Designer so that we can migrate them from one environment to another without having to make manual changes again?
    Does this need any special software installed? Can we install it on our APP server?
    Thanks.

    Hi Frob,
    If you want to be able to move Workflow Manager to another environment, you need to install Workflow Manager on a separate server.
    When you want to use Workflow Manager in another environment, you need to leave the existing farm with Workflow Manager, then join Workflow Manager to the farm where you want to use it, and re-register the workflow service.
    Here is a similar issue for you to take a look at:
    http://sharepoint.stackexchange.com/questions/132524/move-workflow-manager-to-new-farm-in-a-new-domain
    More references:
    https://msdn.microsoft.com/en-us/library/jj193527(v=azure.10).aspx
    https://msdn.microsoft.com/en-us/library/jj193433(v=azure.10).aspx
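
    For the re-registration step, the PowerShell looks roughly like this (run it in the SharePoint Management Shell on the target farm; the URLs are placeholders, and -AllowOAuthHttp is only needed for HTTP endpoints):

    # Re-point the farm's workflow service at the workflow host; URLs are placeholders.
    Register-SPWorkflowService -SPSite "https://sharepoint.contoso.com/sites/pwa" `
        -WorkflowHostUri "https://workflowserver.contoso.com:12290" -Force
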
    Best regards,
    TechNet Community Support

  • Is it possible to take the CDR data from a v4.2 Call Manager and copy it to a separate server where it would be made available for reporting?

    Is it possible to take the CDR data from a v4.2 Call Manager and copy it to a separate server where it would be made available for reporting? We are not interested in migrating the CDR data to v6 because of the concerns it introduces to the upgrade process. Is it possible to get the raw data and somehow serve it from a different machine? (knowing it would be 'old' data that stops as of a certain date). If so, what would be the complexity involved in doing so?
    It seems like the CDR data lives within MSSQL and the reporting interface is within the web server portion of the Call Manager... that's as far as we've dug so far.

    Hi
    It is absolutely possible to get the data - anyone in your org with basic SQL skills can move the data off to a standalone SQL Server. This could be done most simply by backing up and restoring the DB using SQL Enterprise Manager.
    Moving the CAR/ART reporting tool would be more difficult... if you do actually use it for reporting (most people find it doesn't do what they need, use it for nothing beyond basic troubleshooting, and get a third-party package), then the best option may be to keep your publisher (possibly assigning it a new IP) and leave it running for as long as you need reporting.
    You would then need a new server to run your upgraded V6 CCM; you may find you need this anyway.
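
    A minimal T-SQL sketch of the backup/restore described above, assuming the CallManager 4.x default database name CDR; the file paths and logical file names are placeholders (use RESTORE FILELISTONLY to get the real logical names):

    -- On the CallManager publisher: back up the CDR database to a file.
    BACKUP DATABASE CDR TO DISK = N'D:\Backups\CDR.bak'

    -- On the standalone SQL Server: restore it, relocating the data/log files.
    RESTORE DATABASE CDR FROM DISK = N'D:\Backups\CDR.bak'
    WITH MOVE 'CDR_Data' TO N'D:\Data\CDR.mdf',
         MOVE 'CDR_Log' TO N'D:\Data\CDR.ldf'
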
    Regards
    Aaron

  • Best method for archiving .mpp files on a separate server or location?

    We want to be able to run a program or job on Project Server 2013 that will export all currently published project .mpp files to a separate server or location on our network. What is the best, or suggested, method for something like this? Our managers want this job to run on a weekly or bi-monthly basis in order to have backup files of each active project schedule. This would be beyond the Project Server archive database; it is for business continuity purposes, so that those schedules are available should our servers ever crash.
    Any help would be much appreciated. I am not a developer, but if there is code available for something like this we have developers in-house that can perform the work. 
    Thank you,
    Travis
    Travis Long IT Project Manager Entry Idaho Power Co. Project Server Admin

    Project Server already has an archiving mechanism which backs up project plans on a schedule and maintains versions that can be restored at any point - check administrative backup in Central Admin under PWA settings.
    However, I wouldn't say that is the best method. You can instead run a macro which exports all projects and saves them to a location (which could be a network file share), something like this (I haven't tested it recently with 2013, but I believe it should work):
    Sub Archiving()
    Dim Conn As New ADODB.Connection
    Dim Cmd As New ADODB.Command
    Dim Recs As New ADODB.Recordset
    'Connect to Project Server Reporting DB, Get Project Names
    Conn.ConnectionString = "Provider=SQLOLEDB;Data Source=servername;Initial Catalog=ProjectServer_Reporting;User ID=; Password=; Trusted_Connection=yes"
    Conn.Open
    With Cmd
        .ActiveConnection = Conn
        .CommandText = "Select ProjectName From MSP_EpmProject_UserView"
        .CommandType = adCmdText
    End With
     With Recs
        .CursorType = adOpenStatic
        .CursorLocation = adUseClient
        .LockType = adLockOptimistic
        .Open Cmd
      End With
    Dim prjName As String
    Dim ArchivePrjName As String
    Dim x As Integer
    'Suppress confirmation dialogs while opening and saving projects
    Application.Alerts False
    If Recs.EOF = False Then
       Recs.MoveFirst
       For x = 1 To CInt(Recs.RecordCount)
        'Open each published project ("<>\" stands for your Project Server URL prefix)
        prjName = "<>\" & Recs.Fields(0)
        FileOpenEx Name:=prjName, ReadOnly:=True
        'Build the archive name from the bare project name (prjName still carries the server prefix)
        ArchivePrjName = "C:\Temp\" & Recs.Fields(0) & "4Sep2014_9PM"
        FileSaveAs Name:=ArchivePrjName, FormatID:="MSProject.MPP"
        FileCloseEx
        prjName = ""
        Recs.MoveNext
       Next x
    End If
       Recs.Close
       Conn.Close
    End Sub
     Let us know if this helps
    Thanks | epmXperts | http://epmxperts.wordpress.com

  • Can the GG SQL Server data pump process run on a separate server to the Extract?

    It's clear from the GoldenGate SQL Server documentation that the Extract process for SQL Server has to be on the database server. Is it possible to split the data pump off onto a separate server in order to limit the load on the DB server? It seems like this would require a separate GoldenGate installation with just a data pump reading the trail files from the Extract installation.
    I'd appreciate any comments on the possibility, worthwhileness, etc.
    Stonemason

    An extract (or pump) process reads trail files locally, and writes to either local or remote file system. The pump itself -- if you configure it as "passthru" -- has minimal load.   But if you don't want the pump running on the DB server, then just have the primary extract write out remote trails instead of local trails.  This architecture presents more disadvantages than advantages, so in general it's not recommended.   For example, if the network goes down and the remote trails can't be written, then your primary extract stops processing database logs, and this is something that should continue running, even if just to create local trails until the network is back up.
    Alternatively, you can also have your trail files written to "local" trails (extTrail instead of rmtTrail) that is actually remote storage (NAS/SAN), and then you would have a pump running on a remote machine and reading these trails.  This is more common on linux/unix -- if you're running Windows, you might want to check with support for supported configurations. (Also, even if using linux/unix, check for KM's (support notes) on support.oracle.com for optimal and/or required configurations for the network attached storage/NFS.)
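
    For the second layout, a sketch of what the split-out pump's parameter file might look like; the process name, host, and paths are placeholders, and PASSTHRU assumes the pump does no filtering or mapping:

    -- Data pump running on a separate server, reading trails from shared
    -- storage and shipping them on to the target. All names are placeholders.
    EXTRACT dpump1
    PASSTHRU
    RMTHOST targethost, MGRPORT 7809
    RMTTRAIL ./dirdat/rt
    TABLE dbo.*;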
