OIM API portability issue with OIM 9.1 / WebLogic 10.3

Hi, we have an existing piece of code which does some user mutation through the OIM API.
[I am not well versed with OIM ]
The code was running fine with WebLogic 8.1 and the previous OIM version.
Here is the piece of code.
logger.info("Initializing OIM Params from config location: " + oimConfigFileUtil.getOIMConfigBase());
System.setProperty("XL.HomeDir", oimConfigFileUtil.getOIMConfigBase().getAbsolutePath());
System.setProperty("java.security.auth.login.config", oimConfigFileUtil.getOIMAuthWLFile().getAbsolutePath());
ConfigurationClient.ComplexSetting configClient = ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
env = configClient.getAllSettings();
try {
    oimAccessFactory = new tcUtilityFactory(env, oimConfigFileUtil.getUserID(), oimConfigFileUtil.getPassword());
I traced all the dependencies for this piece of code.
If I run this with weblogic.jar [8.1] it gives me:
java.io.InvalidClassException: com.thortech.xl.dataaccess.tcDataSet; local class incompatible: stream classdesc serialVersionUID = -5446056666465114187, local class serialVersionUID = -8857647322544023100
Because of the compatibility issue I substituted weblogic.jar 10.3, and now it's giving me classpath issues everywhere.
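For context on that exception: the serialVersionUID is baked into each compiled class (explicitly declared, or computed from the class's shape), and deserialization fails whenever the client jar's copy of a class differs from the one the server serialized. A minimal, self-contained way to see which UID a given class carries (the class below is illustrative, not from OIM):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class SerialUidCheck {
    // Stand-in for a class shipped in two different jar versions.
    static class Sample implements Serializable {
        private static final long serialVersionUID = 42L;
    }

    public static void main(String[] args) {
        // ObjectStreamClass.lookup reports the UID the JVM compares during
        // deserialization; running this against each jar's copy of a class
        // (e.g. com.thortech.xl.dataaccess.tcDataSet) confirms a mismatch.
        long uid = ObjectStreamClass.lookup(Sample.class).getSerialVersionUID();
        System.out.println(uid); // prints 42
    }
}
```

Running the same lookup with the 8.1 jar and the 10.3 jar on the classpath in turn would show the two UIDs from the exception message, which is why the jars must come from the matching server version.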
Can someone lay out the exact jars that are required for this to work?
Thanks
Vignesh

Install a Design Console. Copy any files that are required. Then take the classpaths that are listed in the classpath and basecp files and put those into your application classpath files.
-Kevin
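A sketch of that last step, assuming the Design Console's classpath and basecp files each list jar paths one per line (file and jar names below are demo stand-ins, not necessarily what your install contains):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FoldDesignConsoleClasspath {
    // Concatenate the jar lists from the given files into one classpath string.
    static String fold(List<Path> listFiles) throws IOException {
        List<String> jars = new ArrayList<>();
        for (Path f : listFiles) {
            for (String line : Files.readAllLines(f)) {
                if (!line.trim().isEmpty()) jars.add(line.trim());
            }
        }
        return String.join(File.pathSeparator, jars);
    }

    public static void main(String[] args) throws IOException {
        // Demo stand-ins for the real classpath/basecp files.
        Path dir = Files.createTempDirectory("xlclient");
        Path classpath = dir.resolve("classpath");
        Path basecp = dir.resolve("basecp");
        Files.write(classpath, Arrays.asList("lib/xlDataObjects.jar", "lib/xlUtils.jar"));
        Files.write(basecp, Arrays.asList("lib/wlfullclient.jar"));
        System.out.println(fold(Arrays.asList(classpath, basecp)));
    }
}
```

The resulting string can be passed to `java -classpath` or appended to the application server's classpath script.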

Similar Messages

  • Issue with deploying EAR to Weblogic 10.3.4 from STS

    Hi,
    I am using SpringSource Tool Suite 2.6 as the IDE. Publishing the EAR to WebLogic from STS is not happening, and it doesn't give any error either. When I check the admin console under Deployments, it is not found. However, I am able to deploy the same EAR file from the admin console without any issue. Am I missing something while configuring the domain? Is there any workaround for this? (Note: I am creating the domain by selecting the default options in a dev environment.)
    Thanks,
    Vishnu

    There appears to be an issue with tooling versions for Oracle tools and the underlying Eclipse; see the following:
    http://forum.springsource.org/showthread.php?104597-STS-2.6.0-M2-does-not-allow-adding-Weblogic-Servers

  • Issue with Distributed Queue and WebLogic Clustering

    Hi, when a message is received by the distributed queue, the MDB is processing the message on two managed servers. There seems to be an issue with clustering, and the physical queues present on both managed servers are receiving the message.
    Our environment configuration details are as below:
    One WebLogic cluster with 2 nodes (2 managed WebLogic servers).
    One MDB deployed on the cluster listening to a queue with JNDI name “xng/jms/CODEventsQueue”
    One Distributed queue with two members on the two nodes of the cluster, and with JNDI name “xng/jms/CODEventsQueue”
    Two members of the distributed queue deployed on two JMS servers, which are separately deployed on each managed server.
    And the distributed queue is deployed on the cluster.
    Any help is appreciated.
    Thanks
    Sampath

    It is not clear to me how you concluded that "both the managed servers are receiving the message". Did you monitor the queues' statistics, or did you see both MDB instances receive the same message?
    It looks like you are using a weighted distributed queue. Do the two physical queues that compose the distributed queue have their own JNDI names? If so, what are they?
    Have you tried to use a uniform distributed queue and see if the same behavior shows up?
    You can find more about uniform distributed destination at
    http://edocs.bea.com/wls/docs103/jms/dds.html#wp1313713
    BTW, which WebLogic Server releases are you using? Could you provide the distributed queue configuration?
    Thanks,
    Dongbo
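For reference, a sketch of what a uniform distributed queue might look like in a WebLogic JMS module descriptor (the subdeployment name is a placeholder; only the queue's JNDI name is taken from this thread):

```xml
<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
  <!-- A uniform distributed queue: WebLogic creates one member per JMS
       server in the targeted subdeployment, so the members do not need
       individual JNDI names of their own. -->
  <uniform-distributed-queue name="CODEventsQueue">
    <sub-deployment-name>ClusterJmsServers</sub-deployment-name>
    <jndi-name>xng/jms/CODEventsQueue</jndi-name>
  </uniform-distributed-queue>
</weblogic-jms>
```

Because the members are managed by WebLogic rather than configured individually, this avoids the per-member JNDI-name questions that come up with weighted distributed queues.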

  • Log4j issue with two ears in weblogic cluster

    Hi,
    I am having an issue with log4j. I have 2 EAR files, say AppA and AppB. I have used log4j (DailyRollingFileAppender) in both so that logs from AppA get written to a.log and logs from AppB get written to b.log.
    This approach works fine in a non-clustered environment (WebLogic admin server).
    But when I try the same in a clustered (WebLogic cluster) environment, the logs from both AppA and AppB get written to a.log.
    Could you please help me resolve this issue?

    Include ${weblogic.Name} in your log4j property file; this will create the log file name with the server name:
    log4j.appender.TEST_LOGGER.File=/logs/${weblogic.Name}_TEST_LOGGER_LOG.txt
    Do not keep log4j within your EAR;
    move it to the server and include it in the classpath of your server.
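A fuller sketch of such an appender configuration (the appender name, file path, and patterns here are placeholders to adapt):

```properties
# Per-server log file: ${weblogic.Name} expands to the managed server's name,
# so the same EAR on server_1 and server_2 writes to different files.
log4j.rootLogger=INFO, TEST_LOGGER
log4j.appender.TEST_LOGGER=org.apache.log4j.DailyRollingFileAppender
log4j.appender.TEST_LOGGER.File=/logs/${weblogic.Name}_TEST_LOGGER_LOG.txt
log4j.appender.TEST_LOGGER.DatePattern='.'yyyy-MM-dd
log4j.appender.TEST_LOGGER.layout=org.apache.log4j.PatternLayout
log4j.appender.TEST_LOGGER.layout.ConversionPattern=%d %-5p [%c] %m%n
```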

  • Issue with JPDs in my weblogic application

    I have a WebLogic Integration application running on WL 8.1 SP5. There are 2 JPDs and 2 web services in the application. I am having an issue on the live server when the application has been running for quite a while: one of the JPDs stops working. The issue is resolved by redeploying the application. On redeploying from the WebLogic console, the code starts to run again! Has anyone come across such an issue? Is this something to do with server settings? I would welcome any suggestions to solve this recurring issue.

    Hi Darshan,
    Thanks for your answer.
    I change my package to do an another test.
    My XML :
    <dataTemplate name="XXAPINMOXML" description="SUIVI DES INTERETS MORATOIRES" defaultPackage="XXAPINMO_PKG">
    <dataQuery>
    <parameters>
    <parameter name="p_type_edition"/>
    </parameters>
    <PROPERTIES>
    <property name="include_parameters" value="true"/>
    </PROPERTIES>
    My package :
    CREATE OR REPLACE PACKAGE XXAPINMO_PKG AS
    p_type_edition varchar2(150);
    FUNCTION beforeReport return boolean;
    END XXAPINMO_PKG;
    show errors package XXAPINMO_PKG;
    CREATE OR REPLACE PACKAGE BODY XXAPINMO_PKG
    AS
    FUNCTION beforeReport return boolean is
    BEGIN
    fnd_file.put_line(FND_FILE.LOG,'p_type_edition ' || p_type_edition);
    IF p_type_edition = 'Y' THEN
         fnd_file.put_line(FND_FILE.LOG,'Case Y');
    ELSE
         fnd_file.put_line(FND_FILE.LOG,'Case N');
    END IF;
    return true;
    END beforeReport;
    END XXAPINMO_PKG;
    show errors package body XXAPINMO_PKG;
    Then I launch my concurrent program with 'Y' in parameter and I see the log of my concurrent program :
    XDO Data Engine Version No: 5.6.3
    Resp: 51213
    Org ID : 45
    Request ID: 2468909
    All Parameters: p_type_edition=Y
    Data Template Code: XXAPINMOXML
    Data Template Application Short Name: XXAP
    Debug Flag: N
    {p_type_edition=Y}
    Calling XDO Data Engine...
    Début des messages de journalisation à partir du fichier FND_FILE.
    p_type_edition
    Case N
    Fin des messages de journalisation à partir du fichier FND_FILE.
    But normally it should be 'Case Y' that is written ...
    And when I view the XML via Diagnostics I don't see the parameter, even though I added <property ...>
    I really don't understand ...
    Maybe you have another idea?
    Frédérique

  • Critical issue with Connection.setReadOnly on WebLogic

    Hi All,
    We've just hit a critical issue while running on WebLogic.
    After establishing a connection from a connection pool, we set the ReadOnly status to true on the Connection.
    (We do this so user entered SQL can be executed)
    In testing, we put in a DELETE FROM TEST_TABLE_NAME command, and the code is actually performing the delete.
    (Updates the same)
    I've connected a remote debugger and seen that the ReadOnly flag is set to true inside the connection Object.
    This is how my connection pool is configured:
    driver class name: oracle.jdbc.OracleDriver
    properties:
    user=V42_ISMART_DEMO_USER
    ejb=sisPool
    class=oracle.jdbc.pool.OracleConnectionPoolDataSource
    protocol=thin
    xa-location=jdbc/xa/sisPool
    Transaction is using One-Phase commit.
    database is 11.1.0.7.0
    Weblogic 10.3
    Java 1.6
    Any pointers in the right direction would be greatly appreciated, as this is currently blocking a product release.
    Thanks,
    Ronan

    Do tell us more about the product release and how this issue has come to
    block it. How does the behavior you see differ from anything earlier in your
    development cycle?
    You have already taken WebLogic out of the equation by confirming that
    the read-only setting is passed to the driver's connection object, so this is
    only a driver issue. You could take it up in the JDBC forum, but the main
    issue is probably that you have not investigated the specified semantics of
    setReadOnly(). From a user perspective, it sounds like it might be intended
    to stop any user SQL from altering the DBMS, though with deeper thought
    on the issue, that could only be implemented at the DBMS itself. The driver
    could never know if a stored procedure you call or a trigger on a select
    query you make altered the DBMS.
    The spec for the call, however, carves out a totally different and vastly
    less interesting meaning to the method:
    "Puts this connection in read-only mode as a hint to the driver to enable
    database optimizations. "
    This allows the driver to totally ignore the input in cases where it does not
    enable any 'dbms optimizations' which it does not in this case, so the driver
    does ignore the setting, except to helpfully return it to you later if you should
    call getReadOnly()... ;)
    HTH,
    Joe
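If the requirement is to genuinely stop user-entered DML, one option (an assumption about your setup, not something setReadOnly() gives you) is to open an Oracle read-only transaction before executing the user's SQL, since that is enforced by the database itself:

```sql
-- Enforced by the DBMS, unlike the JDBC read-only hint:
SET TRANSACTION READ ONLY;

-- Queries still succeed:
SELECT COUNT(*) FROM test_table_name;

-- DML now fails with ORA-01456 ("may not perform insert/delete/update
-- operation inside a READ ONLY transaction"):
DELETE FROM test_table_name;
```

Note that the read-only state lasts only until the transaction ends (COMMIT or ROLLBACK), so it must be re-issued at the start of each transaction that runs user SQL.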

  • Clustering issue with WLCS 3.2 weblogic 5.1 and service pack 8

    I'm having a problem with Portlets on a page with commerce server 3.2 on
              Cluster A below. One Portal page has multiple portlets on it. When the
              Portal Page executes the portlets are being scheduled across both SCCWLS00
              and SCCWLS01 and they are not able to share the session. Thus they will
              throw exceptions trying to read properties from the WLCS profile which is
              shared.
              In Cluster B, when the portal page executes, all portlets are "pinned" to that
              server and they share the information perfectly. Both cluster groups have
              their CLUSTER-wide weblogic.properties file set for
              weblogic.httpd.clustering.enable=true
              weblogic.httpd.session.persistence=true
              weblogic.httpd.session.persistentStoreType=replicated
              weblogic.httpd.register.cluster=weblogic.servlet.internal.HttpClusterServlet
              Any ideas on why this is happening? And How I can correct it?
              David Marshall
              ==============================================
              Environmental Background
              CLUSTER Group A (Production)
              VLAN1
              StrongHold 3 (Apache 1.3.19 w/SSL) (Solaris 8)
              ! mod_wl-ssl.so
              ! WebLogicCluster
              sccwls00:nnnn,sccwls01:nnnn
              V
              <firewall translates IPs from 1 VLAN 1 VLAN 2>
              VLAN2 !
              V V
              SCCWLS00 (Solaris 8) SCCWLS01 (Solaris 8)
              CLUSTER Group B (Development)
              StrongHold 3 (Apache 1.3.19 w/SSL) (Solaris 7)
              ! mod_wl-ssl.so
              ! WebLogicCluster
              weblogic00:nnnn,weblogic01:nnnn
              V V
              WEBLOGIC00 (Solaris 8) WEBLOGIC01 (Solaris 8)
              

    What kind of problem do you have?
    If it's not connected with JDBC, you'd better post your question
    in webservices newsgroup.
    Regards,
    Slava Imeshev
    "leopld goess" <[email protected]> wrote in message
    news:[email protected]..
    hy out there.
    i've been working with apache soap on wl 5.1 for a while now, and
    everything seems to be working allright- as long as i don't try to
    install a servicepack, namely sp8 or sp10. if i do that, the entire
    soap service fails to deploy.
    any ideas...
    thanx
    leopold

  • Solution stack with Ariba Buyer, BEA Weblogic, Oracle

    Has anyone set up such a stack? I am trying to implement it but am having some
    issues with Buyer going through WebLogic to the back-end database. How do I route
    requests to Oracle from Ariba Buyer through WebLogic?
    Thanks,
    VC

    Never mind - I am following the instructions in this thread. Sounds like the problem that I have:
    http://forums.bea.com/bea/thread.jspa?forumID=2022&threadID=200007092&messageID=200020921&start=-1#200020921

  • Issue with deleting a group using Request APIs in OIM 11g R1

    Hi,
    I am facing an issue with request-based provisioning in OIM 11g R1.
    I am currently testing a scenario where I have imported a data set for 'Modify Provisioned Resource' and am able to add a group/entitlement to an already provisioned resource by using the following code:
        RequestBeneficiaryEntityAttribute childEntityAttribute = new RequestBeneficiaryEntityAttribute();
        childEntityAttribute.setName("AD User Group Details");
        childEntityAttribute.setType(TYPE.String);
        List<RequestBeneficiaryEntityAttribute> childEntityAttributeList = new ArrayList<RequestBeneficiaryEntityAttribute>();
        RequestBeneficiaryEntityAttribute attr = new RequestBeneficiaryEntityAttribute("Group Name", <group>, RequestBeneficiaryEntityAttribute.TYPE.String);
        childEntityAttributeList.add(attr);
        childEntityAttribute.setChildAttributes(childEntityAttributeList);
        childEntityAttribute.setAction(RequestBeneficiaryEntityAttribute.ACTION.Add);
        beneficiaryEntityAttributeList = new ArrayList<RequestBeneficiaryEntityAttribute>();
        beneficiaryEntityAttributeList.add(childEntityAttribute);
        beneficiarytEntity.setEntityData(beneficiaryEntityAttributeList);
    This works fine for adding a group, but if I try to remove a group by changing the action to Delete in the same code, the request fails. The only change made is in the following line:
    childEntityAttribute.setAction(RequestBeneficiaryEntityAttribute.ACTION.Delete);
    Could you please suggest where this could possibly be wrong?
    Thanks for your time and help

    Hi BB,
    I am trying to follow up on your response.
    You are suggesting using a prepopulate adapter to populate the resource object name; that means we just have to use an SQL query on the OBJ table to get the resource object name, right? It could be like below - what value should entity-type have here?
    <AttributeReference name="Field1" attr-ref="act_key"
    available-in-bulk="false" type="Long" length="20" widget="ENTITY" required="true"
    entity-type="????"/>
    <PrePopulationAdapter name="prepopulateResurceObject"
    classname="my.sample.package.prepopulateResurceObject" />
    </AttributeReference>
    <AttributeReference name="Field2" attr-ref="Field2" type="String" length="256" widget="lookup-query"
    available-in-bulk="true" required="true">
    <lookupQuery lookup-query="select lkv_encoded as Value,lkv_decoded as Description from lkv lkv,lku lku
    where lkv.lku_key=lku.lku_key and lku_type_string_key='Lookup.xxx.BO.Field2'
    and instr(lkv_encoded,concat('$Form data.Field1', '~'))>0" display-field="Description" save-field="Value" />
    </AttributeReference>
    Then I need to think about the 'Lookup.xxx.BO.Field2' format.
    Could you please let me know if my understanding is correct? What is the entity-type value of the first attribute reference?
    Thanks for all your help.

  • OIM integration with OpenDJ : Issue with OU Reconciliation

    Hi All,
    I am trying to integrate OIM and OpenDJ for provisioning of users. I am facing issues with OU reconciliation. I performed the following steps:
    1. Copied OID 11.1.1.5 connector binaries under OIM_HOME\server\ConnectorDefaultDirectory.
    2. Installed connector by choosing DSEE server from the list.
    3. Created an IT Resource and Application Instance.
    4. Modified Lookup.LDAP.Configuration based on OpenDJ schema.
    5. Modified the "LDAP Connector OU Lookup Reconciliation" scheduled task parameters for the IT resource name and executed it. It executes successfully without any errors, but the OUs from OpenDJ are not populated into the organization lookup (Lookup.LDAP.Organization - this is the Lookup name value specified in the scheduled task).
    6. I enabled the connector logs for debugging; per the logs, when the scheduled job is run I can see the organization units from OpenDJ. The logs even state that the scheduled task executed successfully.
    Please suggest why the lookup is not being populated with the organizations, and provide any suggestions or steps to debug this further.
    For example, the lines below from the logs show the organization (ou=Internal) from OpenDJ.
    [2013-01-13T01:55:16.040-08:00] [oim_server1] [TRACE] [] [ORG.IDENTITYCONNECTORS.LDAP.CONNECTOROBJECTUTIL] [tid: Thread-322] [userId: oiminternal] [ecid: 0000JkmuRveFg4Uay5R_6G1GwaZX000001,1:28786] [SRC_CLASS: com.thortech.util.logging.Logger] [APP: oim#11.1.2.0.0] [SRC_METHOD: debug] org.identityconnectors.ldap.ConnectorObjectUtil : createConnectorObject : ENTRY createConnectorObject(ldapGroups=org.identityconnectors.ldap.LdapConnection@1773f6bb,posixRefAttrs=ObjectClass: OU,posixGroups=org.identityconnectors.ldap.GroupHelper@22244683,roles=org.identityconnectors.ldap.EDIRRolesHelper@7b9153f7,parentDN=o=sca,rdnAttributes=ou=Internal,ou=External,ou=spe: null:null:{entryuuid=entryUUID: 873193ec-b60d-35fb-b7e2-f8da3c50aa8e, ou=ou: Internal, objectclass=objectClass: organizationalUnit, top},rdnAttributeType=[__NAME__, __UID__, objectClass, Organisation Unit Name, ou],value=true)
    [2013-01-13T01:55:16.041-08:00] [oim_server1] [TRACE] [] [ORG.IDENTITYCONNECTORS.LDAP.LDAPENTRY] [tid: Thread-322] [userId: oiminternal] [ecid: 0000JkmuRveFg4Uay5R_6G1GwaZX000001,1:28786] [SRC_CLASS: com.thortech.util.logging.Logger] [APP: oim#11.1.2.0.0] [SRC_METHOD: debug] org.identityconnectors.ldap.LdapEntry : create : ENTRY create(baseDN=o=sca,*result=ou=Internal,ou=External,ou=spe*: null:null:{entryuuid=entryUUID: 873193ec-b60d-35fb-b7e2-f8da3c50aa8e, ou=ou: Internal, objectclass=objectClass: organizationalUnit, top})
    [2013-01-13T01:55:16.041-08:00] [oim_server1] [TRACE] [] [ORG.IDENTITYCONNECTORS.LDAP.LDAPENTRY] [tid: Thread-322] [userId: oiminternal] [ecid: 0000JkmuRveFg4Uay5R_6G1GwaZX000001,1:28786] [SRC_CLASS: com.thortech.util.logging.Logger] [APP: oim#11.1.2.0.0] [SRC_METHOD: debug] org.identityconnectors.ldap.LdapEntry : create : RETURN create(baseDN=o=sca,result=ou=Internal,ou=External,ou=spe: null:null:{entryuuid=entryUUID: 873193ec-b60d-35fb-b7e2-f8da3c50aa8e, ou=ou: Internal, objectclass=objectClass: organizationalUnit, top}) returns: org.identityconnectors.ldap.LdapEntry$SearchResultBased@73366019
    Thanks,
    Anumolu.
    Edited by: Anumolu111 on Jan 13, 2013 4:14 AM


  • Issues with offline provisioning in OIM 11G

    We are facing an issue with OIM 11g where we are trying to provision a few resources via offline provisioning. The issue is that when I do a provisioning/disable/enable on the resource, the status of the resource says something like "Provisioning in queue/Disable in queue/Enable in queue". This is not happening all the time, but it seems to be consistent when I repeatedly disable/enable the resource. Once the status of the resource remains "in queue", it is never changed back to the actual status (provisioned/disabled/enabled). Can anyone give me insight into what is happening here and how the offline events are processed within OIM? Is there any way to get the status of the resource back to normal? Please let me know.
    Thanks!

    Check
    http://docs.oracle.com/cd/E14899_01/doc.9102/e14761/offline_prov.htm
    Configuring the Remove Failed Off-line Messages Scheduled Task
    Configure the Remove Failed Off-line Messages scheduled task to schedule deletion of failed requests from the OPS table. While configuring this scheduled task, set a value for the Remove Failed Messages Older Than (days) attribute.
    Regards
    Shashank

  • Issue with Apps adapter for Create_Cust_Account API

    Hi,
    I need to invoke this package from the Apps adapter in BPEL (10.1.3.5).
    Create Customer: HZ_CUST_ACCOUNT_V2PUB.create_cust_account. This is an overloaded procedure; look for the procedure with these parameters:
    PROCEDURE create_cust_account (
    p_init_msg_list IN VARCHAR2 := FND_API.G_FALSE,
    p_cust_account_rec IN CUST_ACCOUNT_REC_TYPE,
    p_organization_rec IN HZ_PARTY_V2PUB.ORGANIZATION_REC_TYPE,
    p_customer_profile_rec IN HZ_CUSTOMER_PROFILE_V2PUB.CUSTOMER_PROFILE_REC_TYPE,
    p_create_profile_amt IN VARCHAR2 := FND_API.G_TRUE,
    x_cust_account_id OUT NUMBER,
    x_account_number OUT VARCHAR2,
    x_party_id OUT NUMBER,
    x_party_number OUT VARCHAR2,
    x_profile_id OUT NUMBER,
    x_return_status OUT VARCHAR2,
    x_msg_count OUT NUMBER,
    x_msg_data OUT VARCHAR2
    );
    But I'm getting the following error:
    An error occurred while running Jpublisher.missing method
    · I've tried with the Database adapter, but at runtime I'm not able to pass the Oracle Apps initialization parameters in spite of using the transaction and idempotent properties in the partner link.
    · Then I've tried to invoke fnd_global.apps_initialize first and then call the package from the Database adapter, but it fails again; apparently it's not able to execute both DB adapters in the same database session, although they are in the same BPEL transaction.
    When I pass the initialization parameters in the adapter-created wrapper procedure, it works fine and the customer gets created.
    Please let me know where I'm going wrong, or whether there is an issue with the Apps adapter.
    It's urgent ...
    Thanks in Advance,
    Shreekanta

    Thanks for the reply.
    I'm able to execute the BPEL flows using the DB adapter in the same session, and the customer got created.
    But I'm wondering why I can't invoke the API using the Apps adapter, though it's a standard one.
    The WSDL file is not getting generated, as the adapter wizard does not complete.
    Do you have any idea why it's giving the 'error occurred while running Jpublisher.missing method' error? Should I conclude that the Apps adapter does not support overloaded procedures?

  • Issue with JAVA Mail API

    Hi
    We have a requirement to create a custom email. For this I am trying to use the Java Mail API. I am facing an issue with the following code:
    session session1 = session.getInstance(properties, null);
    System gives an error Sourced file: inline evaluation of: ``Properties props = new Properties(); session session1 = session.getInstance(prop . . . '' : Typed variable declaration : Class: session not found in namespace
    Is there some specific API I need to import for the Session class? Kindly suggest.
    Regards
    Shobha

    Hi Shobha,
    I was also facing the same issue for the last couple of weeks, and just now I have achieved the working functionality.
    Please find below working code, and replace the values as per your server's configuration.
    import com.sap.odp.api.util.*;
    import java.util.*;
    import javax.mail.*;
    import javax.mail.internet.*;
    import javax.activation.*;
    import java.io.File;
    import java.net.*;
    // SUBSTITUTE YOUR EMAIL ADDRESSES HERE!!!
    String to =<email address>;
    String from =<email address>;
    // SUBSTITUTE YOUR ISP'S MAIL SERVER HERE!!!
    String host = <smtp host name>;
    String user = <smtp user name>;
    // Create properties, get Session
    // Properties props = new Properties();
    Properties props = System.getProperties();
    props.put("mail.transport.protocol", "smtp");
    props.put("mail.smtp.host", host);
    props.put("mail.debug", "false");
    props.put("mail.smtp.auth", "false");
    props.put("mail.user",user);
    props.put("mail.from",from);
    Session d_session = Session.getInstance(props,null);//Authenticator object need to be set
    Message msg = new MimeMessage(d_session);
    msg.setFrom(new InternetAddress(from));
    InternetAddress[] address = {new InternetAddress(to)};
    msg.setRecipients(Message.RecipientType.TO, address);
    msg.setSubject("Test E-Mail through Java");
    msg.setSentDate(new Date());
    msg.setText("This is a test of sending a " +
    "plain text e-mail through Java.\n" +
    "Here is line 2.");
    Transport.send(msg);
    Deepak!!!

  • Issue with "firstRecord" Business Component method of JAVA Data bean API.

    Hi,
    Following is my use-case scenario:
    I have to add or associate a child MVG business component (CUT_Address)
    with the parent business component (Account) using the Java Data Bean API.
    My requirement is: first check whether the child business component (i.e. CUT_Address) exists. If it exists, then associate it with the parent business component (Account);
    otherwise create a new CUT_Address and associate it with the account.
    The code (using Java Data Bean APIs) goes as follows:
    SiebelBusObject sBusObj = connBean.getBusObject("Account");
    SiebelBusComp parentBusComp = sBusObj.getBusComp("Account");
    SiebelBusComp childBusComp;
    SiebelBusComp associatedChildBusComp;
    // retrieve required account.. Please assume Account1 exists
    parentBusComp.activateField("Name");
    parentBusComp.clearToQuery();
    parentBusComp.setSearchSpec("Name", "Account1");
    sBusComp.executeQuery2(true, true);
    sBusComp.firstRecord();
    counter = 0;
    while (counter < Number_Of_Child_Records_To_Insert) {
        childBusComp = parentBusComp.getMVGBusComp("City");
        associatedChildBusComp = childBusComp.getAssocBusComp();
        childBusComp.activateField("City");
        childBusComp.clearToQuery();
        childBusComp.setSearchSpec("City", Vector_of_city[counter]);
        sBusComp.executeQuery2(true, true);
        if (sBusComp.firstRecord()) {
            // Child already exists; process accordingly
        } else {
            // Child does not exist; process accordingly
        }
        childBusComp.release();
        childBusComp = null;
        associatedChildBusComp.release();
        associatedChildBusComp = null;
        counter++;
    }
    Now the issue with this code is: for the first iteration, sBusComp.firstRecord() returns 0 if the record does not exist. However, from the second iteration onward, sBusComp.firstRecord() returns 1 even if there is no record matching the search specification.
    Any input towards the issue is highly appreciable.
    Thanks,
    Rohit.

    Setting the view mode to "AllView" helped.
    Thanks for the lead!
    In the end, I also had to invoke the business component method SetAdminMode with "true" as the argument so that I could also modify the records from my script.

  • After upgrading from Prosight V6 to V7.5 we are having issue with API's

    Team,
    We upgraded from ProSight version 6 to version 7.5. We have an external web service that consumes the ProSight web service, which by default is installed under the ProsightWS virtual directory. After the upgrade, the external application is unable to communicate with the new version's web services. It fails when calling the Login method itself in the psPortfoliosSecurity service.
    Exception as follows:
    “System.Web.Services.Protocols.SoapException: Could not create Security Token for specified User and Password at ProSight.Portfolios.WebServices.WS.psPortfoliosSecuritySOAP.Login(String sUser, String sPassword, Int32 lTimeOut) “
    Our web application uses a dedicated user account to interact with the ProSight web service, and it runs under Windows authentication. The application is built on ASP.NET 2.0 and later.
    The sad part here is, our web application runs perfectly with ProSight v6. :( I would appreciate it if anyone could help us resolve this issue. This is blocking our upgrade from 6 to a higher version of ProSight.
    Thanks for your valuable time
    Regards,
    Jithesh

    Here is the official documentation from version 8.0 on the backward compatibility of the APIs. This document comes with the version 8 installer. Not sure this helps you with the 7.5 issues, but hoping it will help:
    Considerations for applications using the PPM Open API via COM
    Applications which use the Primavera Portfolio Management (PPM) Open API via any interface (COM, SOAP/RCP or Document/Literal) and were developed and used with previous versions of PPM will continue to operate without the need for recompilation, as Primavera Portfolio Management 8.0 provides full binary backwards compatibility for all of the Open API interfaces. New functionality is available only to applications that are written to take advantage of such functionality.
    Note however that any process can only load and use one version of the Microsoft .NET Framework. Therefore, applications developed with the .NET Framework 1.1 cannot use APIs to communicate with software developed using the .NET Framework 2.0 if this would cause the same process to need to load both versions of the .NET Framework.
    This means that if any part of an application that uses the Open API which was developed using .NET Framework 1.1 communicates directly with PPM using the COM API (which is in-process), that part of the application would need to be recompiled to target the .NET Framework 2.0. If on the other hand, the application developed using .NET Framework 1.1, only communicates with PPM using SOAP RPC or Web Services (which are out-of-process) then there is no issue.
    Instead of recompiling, it is usually sufficient to "force" the existing executables to actually use the .NET Framework 2.0 instead of the native version of the .NET Framework for which they were compiled (1.1). This can be achieved by using an application config file with the following content:
    <?xml version ="1.0"?>
    <configuration>
    <startup>
    <supportedRuntime version="v2.0.50727" />
    </startup>
    </configuration>
    The application config file can easily be created using Notepad or something similar. It does not need to be compiled in any way. It should be named with the exact same name as the executable file plus the suffix “.config”. For example: if the application executable file is called “Tester1.exe” then the config file must be named “Tester1.exe.config”. It must be placed in the same directory as the executable file.
    The config file “forces” the application to use the .NET Framework 2.0, which would mean that even when using COM (in-process) Open API calls to PPM, there would still be only one version of the .NET Framework involved in the process (.NET 2.0).
    ----------------------------------------------------------------------------------------
