Third-Party Authentication: Search User identity problem

I have installed the OpenSSO agent on my SGD server. I have followed this doc: http://wikis.sun.com/display/SecureGlobalDesktop/HOWTO+Use+OpenSSO+With+SGD
Everything works except the part where the SGD server tries to map the username against the Local + LDAP repository.
I have enabled Third-Party Authentication and selected the option "Search the User Identity in the LDAP Repository and use the closest matching LDAP Profile from the Local Repository".
I have also kept the already working System Authentication (Active Directory) enabled.
I have two problems:
1 - If the user does not exist in the local repository (but exists in Active Directory), the user is not logged in automatically (the SGD login screen appears).
In the log file (catalina.out) I see an "Invalid credentials" message.
The user is then able to log in manually.
2 - If the user exists in both repositories (local and Active Directory), the user is logged in automatically, but the profile is not the same (the application list is different and the client settings are reset).
I have checked the open sessions in the Administration Console and I see a difference in the case of the user identity.
If I log in manually I see "DC=COM / DC=DOMAIN / CN=User Name (LDAP)" (this is the right user).
If I log in through OpenSSO I see "DC=com / DC=domain / CN=User Name (LDAP)".
I can be logged in with both at the same time; SGD seems to treat them as two different users.
Thanks
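
A side note on problem 2: the two identities shown above differ only in DN case, and standard LDAP comparison treats them as equal, which suggests SGD is comparing the identity strings literally. A minimal standalone check with plain JNDI (not SGD code) that illustrates this:

    import javax.naming.ldap.LdapName;

    public class DnCaseCheck {
        public static void main(String[] args) throws Exception {
            // The two user identities reported by SGD, differing only in case.
            LdapName manual  = new LdapName("CN=User Name,DC=DOMAIN,DC=COM");
            LdapName opensso = new LdapName("CN=User Name,DC=domain,DC=com");

            // LdapName/Rdn compare attribute types and string values
            // case-insensitively, so a standards-compliant comparison
            // reports the two DNs as the same identity.
            System.out.println(manual.equals(opensso));  // prints: true
        }
    }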

Hi,
You need to create at least one high-level "LDAP Profile" user profile in the SGD ENS.
Regards,
Arno Staal
Divider B.V.
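
If the "Invalid credentials" message in catalina.out is coming from the LDAP repository search itself, it can also help to reproduce the bind and search outside SGD. A rough sketch using plain JNDI; the host, bind DN, password, base DN and filter below are placeholders, not values taken from this setup:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class LdapBindAndSearch {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");                   // placeholder AD host
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "CN=svc-sgd,CN=Users,DC=example,DC=com"); // placeholder bind DN
            env.put(Context.SECURITY_CREDENTIALS, "secret");                              // placeholder password

            // A wrong bind DN or password fails here with "Invalid credentials" (LDAP error 49).
            InitialDirContext ctx = new InitialDirContext(env);

            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            NamingEnumeration<SearchResult> results =
                    ctx.search("DC=example,DC=com", "(sAMAccountName=username)", sc);     // placeholder base/filter
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
            ctx.close();
        }
    }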

Similar Messages

  • SGD with Third Party Authentication issue

    Hi
    I am trying to set up SGD with Third-Party Authentication and have completed all the prerequisites for this.
    I enter the SGD URL and get the third-party login page, but after I enter my credentials I get redirected to the SGD default login page, which should not be the case. I had already set "Tomcat Authentication" to false in server.xml and enabled the third-party authentication scheme in Array Manager.
    What else am I missing ?
    Kindly advise
    SGD ver4.31
    Thanks

    Every now and then I have seen the same thing. One thing that almost always solved the problem was recreating the trusted user; you can follow the steps from:
    http://docs.sun.com/source/820-1088/trusted_users.html
    In particular, the step to test the trusted user is a very good check to see if the trusted user is OK: http://server/axis/services/rpc/externalauth
    When prompted, log in as the trusted user.
    Another way to test it is via the api-test functionality: http://server/sgd/admin/apitest/
    First set up a session: webtopsession->startSession(0)
    Then authenticate via externalauth->setSessionIdentity
    These are the minimal steps to perform third-party authentication. A quick way to script the trusted-user check is shown after this post.
    (There is also an example JSP for third-party authentication on wikis.sun.com: http://wikis.sun.com/display/SecureGlobalDesktop/Single+sign-on+(before+4.40) )
    - Remold
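
    A small sketch of that trusted-user check, assuming the /axis/services/rpc/externalauth URL uses HTTP Basic authentication as the browser prompt suggests; the host and credentials below are placeholders:

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Base64;

        public class TrustedUserCheck {
            public static void main(String[] args) throws Exception {
                // Placeholders: replace with your SGD server and trusted-user credentials.
                URL url = new URL("http://server/axis/services/rpc/externalauth");
                String credentials = "trusteduser:password";

                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestProperty("Authorization",
                        "Basic " + Base64.getEncoder().encodeToString(credentials.getBytes("UTF-8")));

                // 200 suggests the trusted user was accepted; 401 means it was rejected.
                System.out.println("HTTP status: " + conn.getResponseCode());
            }
        }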

  • OAAM Integration with Third Party Authentication tool

    Hi Guys,
    In our project we are planning to integrate OAAM11GR2 with OIM11GR2 and OAM11GR2 through Advanced integration. We have a requirement to call a third party authentication service from OAAM as a step up authentication for a particular user base (based on the group membership). Kindly suggest if this requirement is feasible and if you can provide any pointers to implement this requirement.
    Thanks

    Yes, you can use third party step up authentication.
    You can customize the challenge flow. Here is the link.
    http://docs.oracle.com/cd/E28389_01/doc.1111/e15480/igotp.htm
    (It is for 11gR1 but same applies to 11gR2)

  • Open Directory, third party LDAP search path problem on Snow Leopard

    Happy new year folks,
    I ran into an interesting problem this past week in regards to a third party LDAP directory in the Search path (which used to work on previous versions). The issue brings the server to its knees eventually. I'm still digging through the logs, but here's the general breakdown...
    1. Add third-party LDAP to the OD node list. This has always worked on previous versions, and appears to still work at the most basic level. I can navigate the node with DSCL, read records, etc.
    2. Add third-party LDAP to the OD search path.
    3. Wait a few minutes...
    4. The server begins to slow down. Apache, SSH, and the ServerAdmin service stop responding. I'm able to run "top" briefly, which shows an increase in threads.
    5. Restart the server and quickly remove the directory from the OD search path.
    6. The server goes back to being rock solid, with very nice response times for Apache, SSH, ServerAdmin, etc.
    If anyone has any debugging suggestions, or has seen this before, let me know.
    Jaime
    --- Below is some console output leading up to the chaos. Before adding to search path, everything looks good --------------------
    bash-3.2# dscl
    Entering interactive mode... (type "help" for commands)
    read /LDAPv3/ldap.itd.umich.edu/Users/jaimelm cn
    dsAttrTypeNative:cn:
    Jaime Magiera
    Jaime L Magiera 1
    Jaime L Magiera
    --- Add to Search Path, which hangs ------------------------------------------------------------------------------
    bash-3.2# dscl /Search -append / CSPSearchPath /LDAPv3/ldap.itd.umich.edu
    --- DSCL in debug mode contains the following ----------------------------------------------
    2010-01-01 19:26:25 EST - T[0x00000001037A5000] - Client: ipfw, PID: 1097, API: libinfo, Server Used : libinfomig DAR : Procedure = getprotobynumber (13) : Result code = 0
    2010-01-01 19:26:25 EST - T[0x00000001037A5000] - Client: sso_util, PID: 1103, API: dsFindDirNodes(), Server Used : DAR : 1 : Dir Ref = 16779669 : Requested nodename = /Search
    2010-01-01 19:26:25 EST - T[0x00000001037A5000] - Plug-in call "dsDoPlugInCustomCall()" failed with error = -14292.
    2010-01-01 19:26:25 EST - T[0x00000001037A5000] - Port: 27151 Call: dsDoPlugInCustomCall() == -14292
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsFindDirNodes(), Server Used : DAR : 1 : Dir Ref = 16779707 : Requested nodename = /LDAPv3/ldap.itd.umich.edu
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsFindDirNodes(), Server Used : DAR : 2 : Dir Ref = 16779707 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsVerifyDirRefNum(), Server Used : DAC : Dir Ref 16779707
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsVerifyDirRefNum(), Server Used : DAR : Dir Ref 16779707 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsFindDirNodes(), Server Used : DAC : Dir Ref 16779707 : Data buffer size = 128
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsFindDirNodes(), Server Used : DAR : 1 : Dir Ref = 16779707 : Requested nodename = ConfigNode
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsFindDirNodes(), Server Used : DAR : 2 : Dir Ref = 16779707 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: Requesting dsOpenDirNode with PID = 1114, UID = 0, and EUID = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsOpenDirNode(), Configure Used : DAC : Dir Ref = 16779707 : Node Name = /Configure
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsOpenDirNode(), Configure Used : DAR : Dir Ref = 16779707 : Node Ref = 33556926 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsVerifyDirRefNum(), Server Used : DAC : Dir Ref 16779707
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsVerifyDirRefNum(), Server Used : DAR : Dir Ref 16779707 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsGetDirNodeInfo(), Configure Used : DAC : Node Ref = 33556926 : Requested Attrs = dsAttrTypeStandard:OperatingSystemVersion : Attr Type Only Flag = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsGetDirNodeInfo(), Configure Used : DAR : Node Ref = 33556926 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsGetDirNodeInfo(), Search Used : DAC : Node Ref = 33556924 : Requested Attrs = dsAttrTypeStandard:LSPSearchPath : Attr Type Only Flag = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsGetDirNodeInfo(), Search Used : DAR : Node Ref = 33556924 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Client: dscl, PID: 1114, API: dsDoPlugInCustomCall(), Search Used : DAC : Node Ref = 33556924 : Request Code = 444
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Checking for Search Node XML config file:
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - /Library/Preferences/DirectoryService/SearchNodeConfig.plist
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Have written the Search Node XML config file:
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - /Library/Preferences/DirectoryService/SearchNodeConfigBackup.plist
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - Setting search policy to Custom search
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - CSearchPlugin::SwitchSearchPolicy: switch - reachability of node </LDAPv3/127.0.0.1> retained as <true>
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - CSearchPlugin::CheckNodes: checking network node reachability on search policy 0x0000000000002201
    2010-01-01 19:26:36 EST - T[0x00000001037A5000] - CCachePlugin::EmptyCacheEntryType - Request to empty all types - Flushing the cache
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - Client: Requesting dsOpenDirNode with PID = 0, UID = 0, and EUID = 0
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - Internal Dispatch, API: dsOpenDirNode(), LDAPv3 Used : DAC : Dir Ref = 16777216 : Node Name = /LDAPv3/127.0.0.1
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - Internal Dispatch, API: dsOpenDirNode(), LDAPv3 Used : DAR : Dir Ref = 16777216 : Node Ref = 33556929 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - CSearchPlugin::CheckNodes: calling dsOpenDirNode succeeded on node </LDAPv3/127.0.0.1>
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - Internal Dispatch, API: dsCloseDirNode(), LDAPv3 Used : DAC : Node Ref = 33556929
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - Internal Dispatch, API: dsCloseDirNode(), LDAPv3 Used : DAR : Node Ref = 33556929 : Result code = 0
    2010-01-01 19:26:36 EST - T[0x0000000103181000] - mbr_mig - dsFlushMembershipCache - force cache flush (internally initiated)
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - Client: Requesting dsOpenDirNode with PID = 0, UID = 0, and EUID = 0
    2010-01-01 19:26:36 EST - T[0x0000000103181000] - Membership - dsNodeStateChangeOccurred - flagging all entries as expired
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - Internal Dispatch, API: dsOpenDirNode(), LDAPv3 Used : DAC : Dir Ref = 16777216 : Node Name = /LDAPv3/ldap.itd.umich.edu
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - CLDAPNodeConfig::InternalEstablishConnection - Node ldap.itd.umich.edu - Connection requested for read
    2010-01-01 19:26:36 EST - T[0x000000010070A000] - CLDAPNodeConfig::FindSuitableReplica - Node ldap.itd.umich.edu - Attempting Replica connect to 141.211.93.133 for read
    2010-01-01 19:26:36 EST - T[0x0000000102481000] - CCachePlugin::SearchPolicyChange - search policy change notification, looking for NIS
    2010-01-01 19:26:36 EST - T[0x0000000102481000] - Internal Dispatch, API: dsGetDirNodeInfo(), Search Used : DAC : Node Ref = 33554436 : Requested Attrs = dsAttrTypeStandard:SearchPath : Attr Type Only Flag = 0
    ------- From another screen, I do "id jaimelm", which hangs ------------------------------------------------------------------------
    : Requested Rec Names = jaimelm : Rec Name Pattern Match:8449 = eDSiExact : Requested Rec Types = dsRecTypeStandard:Users
    2010-01-01 19:36:55 EST - T[0x00000001082A2000] - Internal Dispatch, API: dsGetRecordList(), Search Used : DAC : 2 : Node Ref = 33554436 : Requested Attrs = dsAttrTypeStandard:AppleMetaNodeLocation;dsAttrTypeStandard:RecordName;dsAttrTypeStandard:Password;dsAttrTypeStandard:UniqueID;dsAttrTypeStandard:GeneratedUID;dsAttrTypeStandard:PrimaryGroupID;dsAttrTypeStandard:NFSHomeDirectory;dsAttrTypeStandard:UserShell;dsAttrTypeStandard:RealName;dsAttrTypeStandard:Keywords : Attr Type Only Flag = 0 : Record Count Limit = 1 : Continue Data = 0
    2010-01-01 19:37:03 EST - T[0x0000000108325000] - Client: httpd, PID: 157, API: mbr_syscall, Server Used : process kauth result 0x0000000102022B30
    2010-01-01 19:37:03 EST - T[0x00000001083A8000] - Client: httpd, PID: 151, API: mbr_syscall, Server Used : process kauth result 0x0000000102022C50
    2010-01-01 19:37:05 EST - T[0x000000010842B000] - Client: httpd, PID: 203, API: mbr_syscall, Server Used : process kauth result 0x0000000102022D70
    2010-01-01 19:37:15 EST - T[0x00000001084AE000] - Client: httpd, PID: 994, API: mbr_syscall, Server Used : process kauth result 0x0000000102023890
    2010-01-01 19:37:26 EST - T[0x0000000108531000] - Client: httpd, PID: 198, API: mbr_syscall, Server Used : process kauth result 0x0000000102023980
    2010-01-01 19:37:31 EST - T[0x00000001085B4000] - Client: httpd, PID: 161, API: mbr_syscall, Server Used : process kauth result 0x0000000~

    Hi
    I'm in agreement with Harry here, but what I'm struggling to understand is why you are seeing this as a problem. I'm also struggling to see how this is possible in a single-server environment, if I understand your post correctly.
    Promotion to OD Master, with all that entails, absolutely rests on a properly configured and tested internal DNS service. The Kerberos Realm's foundation (and with that the ability of the server to perform its function as KDC and offer LDAP services) depends entirely on what is configured in the DNS service. This includes the server name, domain name and TLD. The Kerberos Realm automatically configures itself using that information, and likewise the search base.
    It's more than possible to change the Realm name, and with it the LDAP search base (in certain circumstances), and still have an OD Master; however, Kerberos won't start, and it won't need to, as the KDC will be elsewhere. You generally see this when augmenting Windows AD with MCX. In that situation the Realm name and search base will reflect what is set on the Active Directory. Client computers will use what is set there for contact and authentication information before looking at the OD Master for anything else.
    Does this help? Tony

  • From sales order to third party sales order user exit

    Hi... My client's business process is normal: he produces the product himself, and whenever stock is not available the system should automatically send a purchase requisition to a third party. How do I write a user exit for this?
    Can you please explain how we maintain the functional specs for this requirement?
    thanks,
    hari shankar...
    Message was edited by:
            hari shankar

    You should use a user exit such as MV45AFZZ to do this. Get an ABAPer to help you with it.
    You'll need to read the unrestricted stock level for the material in question, and if that level is less than the order quantity, change the item category of the material to BANS. This will prompt the creation of a purchase requisition for the material.
    You'll need to set up the item category assignment table to add BANS as a manual entry. You will also need to ensure that you create a purchasing information record as well as a source list for your material and vendor. You will need to set up the program behind transaction ME59 in a daily batch job (this converts the purchase requisition, which is automatically generated from the sales order, into a purchase order).
    Go to http://www.sap-img.com/sap-sd/process-flow-for-3rd-party-sales.htm for further details.
    Please award points if helpful.
    Thanks
    Jon

  • Original Cisco GLC-GE-100FX gbics are not working in WS-C3560CG-8PC-S while Third-Party gbics work without any problems

    Hi all,
    I have a GBIC problem with my WS-C3560CG-8PC-S.
    My original 100FX GBICs are not working:
    Mar 30 01:28:41.575: %PHY-4-SFP_NOT_SUPPORTED: The SFP in Gi0/10 is not supported
    Mar 30 01:28:41.575: %PM-4-ERR_DISABLE: gbic-invalid error detected on Gi0/10, putting Gi0/10 in err-disable state
    100FX Third-Party Gbics are working without any problems.
    NAME: "GigabitEthernet0/9", DESCR: "100BaseFX-FE SFP"
    PID: Unspecified       , VID:      , SN: AFFxxxxx
    What can I do?
    I tried different firmware versions (12.2 and 15.2(1)E3)
    I also tried the "service unsupported-transceiver" command.
    It still does not work with 8 Switches and 16 original gbics :(

    GLC-GE-100FX and GLC-T are not supported, as documented in the 3560C data sheet.
    You could try contacting your Cisco SE/AM and request for a Product Enhancement Request (PER).

  • I get a popup in firefox stating that "Although this page is encrypted, the information you have entered is to be sent over an unencrypted connection and could easily be read by a third party" This is a recent problem

    I keep getting the above popup. I do not get it when using IE. I came home from vacation to find this problem, which I did not have before leaving, so it is a new issue.

    You can ignore that warning, but report it to the developers of the websites on which you are seeing this message. Ask them to deploy a secure HTTP connection, and use secure website (https) addresses.
    Site Identity Button
    * https://support.mozilla.com/en-US/kb/Site%20Identity%20Button

  • Third Party Authentication

    Hi,
    I have been trying, unsuccessfully, to configure SiteMinder and 9iAS in the following configuration:
    9iAS, Portal and Database on an HP-UX Server
    Netegrity SiteMinder 5.5 on Windows NT
    I successfully did the verified direct integration with SiteMinder 4.5 (ssoxnete, ssonete) with the SiteMinder WebAgent loaded into the 9iAS Apache.
    However SiteMinder are going to desupport 4.5 at the end of the year and I can't move to 9iAS R2 as I have some Oracle E-Business Apps integration.
    What I am trying to do is configure SiteMinder 5.5 with its own Apache on the NT server and integrate the SiteMinder Apache with the 9iAS Apache to enable the authentication to be passed through.
    I am unable to get the session information SiteMinder sets in the browser to be picked up by 9iAS; I have tried setting up proxy passes, reverse proxies, etc., unsuccessfully.
    If any one has done this, or has any suggestions I would really appreciate it (running out of ideas).
    Regards
    Stephen Gunn

    Just configure AccessManager to use AD or another directory to authenticate the users.

  • Trigger for blocking user using third party tool !

    Dear Friends ,
    I have to block users from using SQL*Plus, TOAD, PL/SQL Developer, etc. (except the SYSTEM user) from the client end, using the trigger below:
    create or replace trigger check_logon
    after logon on database
    declare
      -- Look up the module and program of the session that just logged on.
      cursor c_check is
        select sys_context('userenv','session_user') username,
               s.module,
               s.program
          from v$session s
         where sys_context('userenv','sessionid') = s.audsid;
      lv_check c_check%rowtype;
    begin
      open c_check;
      fetch c_check into lv_check;
      if lv_check.username in ('SYSTEM') then
        null;  -- SYSTEM is always allowed in
      elsif upper(lv_check.module)  like '%SQL*PLUS%'
         or upper(lv_check.program) like '%SQLPLUS%'
         or upper(lv_check.module)  like '%T.O.A.D%'
         or upper(lv_check.program) like '%TOAD%'
         or upper(lv_check.program) like '%PLSQLDEV%'
         or upper(lv_check.program) like '%BUSOBJ%'
         or upper(lv_check.program) like '%EXCEL%'
      then
        -- Reject the logon for the blacklisted client tools.
        close c_check;
        raise_application_error(-20100, 'Banned! Contact the Database Admin!');
      end if;
      close c_check;
    end;
    It works fine: normal users cannot access the database using the above third-party tools.
    But the problem is that a user with DBA privileges can still access the database, only generating a trace file. Is there any way to restrict DBA-privileged users? Or is there any mechanism to create a log/trace file, so that if any DBA-privileged user accesses the database we can get that information from the specified log/trace file?
    Awaiting your kind reply...

    Hi,
    Users who have the DBA role granted to them will bypass the logon trigger. For example, the SYSTEM user has the DBA role, and the DBA role has the ADMINISTER DATABASE TRIGGER privilege; ADMINISTER DATABASE TRIGGER bypasses the logon trigger. If you want to restrict access for DBA users, you need to revoke the ADMINISTER DATABASE TRIGGER privilege from the DBA role, or grant individual privileges (except ADMINISTER DATABASE TRIGGER) to the DBA users.
    Cheers
    Legatti

  • Sending query to third party portal from TREX Search Engine

    Hi Experts,
    We need to implement a normal search option in SAP Portal.
    When we search anything from the portal, it has to search EP & KM and also
    Windows SharePoint Portal, and bring the data back to SAP Portal.
    For this purpose one solution is:
    1) implementing enterprise search,
       but my client does not want to go for enterprise search (federated search).
    2) The other way:
    sending the query to the third-party portal from the TREX search engine.
    In this case, what I want to know is
    how TREX can send a query to a third-party portal's search engine
    (in my case, the SharePoint Portal search engine).
    There is no problem searching in EP & KM because it is the default.
    To search in Microsoft SharePoint Portal,
    TREX should pass the query to the SharePoint Portal search engine.
    Is there any API to send a query to a third-party portal like SharePoint?
    I searched SDN and other sites but did not find exactly what I require.
    If anyone has ideas or has implemented this already, please guide me.
    My client requires a search option like this:
    we need to provide a drop-down box in SAP Portal with 3 options:
    1) search in SharePoint Portal
    2) search in SAP EP & KM
    3) search in both portals
    Please provide any code samples if you have them.
    Please help me, it's urgent.
    Thanks in advance.
    Regards
    Bala

    Hi Bala,
    Please check the information on the KM IMS (Index Management Service) in the KMC developer guide. A connection to a 3rd-party search engine is done from KM, which then calls TREX and the 3rd-party engine in parallel, not serially from TREX.
    Here's a paper describing this for an older KM release:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5e514b57-0701-0010-3796-deb3636835fa
    Regards, Karsten

  • Problem in Third Party Sales (W & W/0 Shipping Notification)

    Hi Gurus,
    When I am doing the third-party scenario, I am having trouble understanding why no outbound
    delivery takes place. If I am taking goods in with MIGO against the sales order, it means
    my stock is increased, and only after a goods issue can I deduct that stock.
    If I want to do an outbound delivery, is it recommended or not?
    Please explain.
    I checked all the relevant messages but am still confused.
    Regards,
    Sai

    Hi Sai,
    I think there is some slight confusion here with respect to third-party sales. In this process we are not actually delivering the goods to the customer; that part is done on our behalf by our vendor. The vendor sends the goods directly to the customer, does the outbound delivery and everything else from his own system, and sends us a corresponding invoice (MIRO) for it.
    Based on that MIRO vendor invoice we create a final customer invoice (VF01) and send it to the customer to collect the payment.
    You are somewhat right that once we do MIGO (goods are placed in storage, so stock should increase first and be reduced accordingly after PGI), but since we are not making any physical delivery here, there is no movement type or PGI from our side, and thus no addition to or reduction of stock. MIGO is just one component of the whole process.
    VA01 > ME51 (PR auto-created from the sales order) > ME21N (create PO / convert PR to PO) > MIGO > MIRO > VF01.
    Config: in place of the normal CP schedule lines, for TAS it is CS, movement type nil, and the "item relevant for delivery" box is unchecked.

  • Crazy JNDI Problem... (Third Party DB Driver Effects Lookup???) (OAS 10g)

    Ok folks, I need some help here...
    The scenario is this...
    I have a simple web app (one jsp) which is used as a "status checker" to ensure that all the session beans we expected to be deployed are actually deployed and that they can be looked up with JNDI and run some small status routine... That war file for that app is included in the ear file with all of the session ejb jar deployments...
    Now, the collection of session beans references three different datasources at one time or another... 2 of which are oracle databases and 1 is a DB2 database...
    Lets name them as follows...
    OracleDataSource1
    OracleDataSource2
    DB2DataSource
    Now, if I configure the three datasources in the oracle standalone OC4J 10g datasource file and deploy the ear file to standalone OC4J 10g, all of the lookups work, and it successfully uses the datasources... no problems at all...
    Ok, so now I try to deploy the ear on OAS 10g using the enterprise manager console, and deployment works fine...
    If I run the app before configuring the datasources, of course the lookups work, but the status routines of certain beans fail because they are not able to look up the datasources... no big deal, just start adding the datasources...
    I configure OracleDataSource1 and rerun... now things still work, the lookups for all the beans works fine, and some pass now, but others still want the other datasources...
    Now configure OracleDataSource2 and rerun... as before, things all work great but there are still complaints looking for the DB2DataSource from a couple of the routines...
    Herein lies the problem...
    I configure DB2DataSource and rerun, and boom, my program crashes claiming that the lookup of the bean failed...
    Now mind you, these beans have all been looked up numerous times during previous runs, and absolutely nothing has been changed with the application... in fact, even the beans that wanted the DB2DataSource were looked up successfully and then just complained of no datasource during their status routine execution...
    It's just that once I configure the DB2DataSource, all of a sudden my lookups don't work... I have tried commenting out a couple of the beans, and regardless of which lookups are commented out, it still fails saying it can't find any of the beans, even if the beans i'm trying to look up don't use the DB2DataSource, it doesn't seem to matter...
    The InitialContext used for the lookup uses the default "new InitialContext()" constructor and the lookups work fine when the DB2DataSource is not configured, but once it is, the lookups fail saying the objects are not found...
    I have consulted with a few other developers here, and no one can seem to understand why this behavior is happening... I have added debugging statements to print the contents of the context's environment, and it is always empty regardless of whether the lookups pass or fail...
    The ONLY difference I can see with this DB2DataSource is that it uses an external 3rd party driver... The oracle connections use a driver which was packaged with the OAS installation... I feel pretty strongly that it has to do with the driver because if I have the DB2DataSource configured (causing the failure) and then I edit that datasource to say that it should use the oracle driver, miraculously the lookups work again, except now I get the error saying the oracle driver doesn't like my DB2 jdbc url...
    I'm sorry for the long post, but I'm hoping that at least one person has encountered this before... I cannot think of any reason why the configuration of that datasource with the third party driver would cause these problems, especially when the exact same configuration and setup DOES work with standalone OC4J 10g, even with the DB2DataSource configured... ????? And the fact that the lookups work fine until that datasource is configured really blows my mind... i wouldn't think the datasource configuration should have anything to do with whether the lookups of the session beans succeed or not... hahaha...
    Any help would be great... I'm pullin my hair out here... :)
    Thanks,
    -Garrett
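
    For reference, a minimal sketch of the lookup pattern described above; the JNDI names are placeholders, not the actual datasource or bean names from this application:

        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class LookupCheck {
            public static void main(String[] args) throws Exception {
                // Default constructor, as in the post: all settings come from the
                // container-supplied environment / jndi.properties.
                InitialContext ctx = new InitialContext();

                // Placeholder JNDI names; substitute the real datasource and bean names.
                DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDataSource1");
                Object bean   = ctx.lookup("SomeStatusBean");

                System.out.println("datasource = " + ds + ", bean = " + bean);
            }
        }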

    No, I have not asked on the oracle forums yet... :)
    It seems that the cause is the driver itself... if I set a different driver for the datasource configuration, the lookups work but it just complains about the driver not being correct for the individual beans...
    I have found some documentation claiming that there is an OAS version of the DB2 driver, as well as a Merant version, but I can't seem to find jar files for either...
    Does anyone know where I can download the "YMdb2.jar" file? Supposedly this contains the DB2 DataDirect driver that I need...
    Thanks,
    -Garrett

  • Problems with enhancers, haxies, other third party apps?

    Many posts in Apple Discussions have blamed haxies and various third-party apps for many operational problems. I found the Dock so irritating (and the Launcher) that I primarily used OS 9 until 3 haxies were released with the classic drop-down Apple Menu. I installed FruitMenu, Xounds and Application Enhancer on November 11, 2002 on 2 computers. I also have 3 GUIs installed. I have installed every OS X update and have had no problem of any type with computer operations. With each OS X update you should update your third-party apps when required.

    Stick with brand-name RAM and reputable distributors. Macs are finicky about low-quality RAM.
    Try OWC http://www.macsales.com/
    or Crucial http://www.crucial.com/
    Most RAM comes with a lifetime guarantee, so return it where you got it.

  • VPRS in third party flow

    Hello Expert:
    About the third-party flow, I ran into a problem recently.
    In the PO, there is a material cost of 100 and a freight cost of 10.
    In the sales billing document, VPRS also equals 110.
    But after MIRO,
    the VPRS in the sales billing document has changed to 100 (110 - 10).
    Does anyone have an idea how to keep VPRS from changing after MIRO?
    Thanks,
    Linda

    Hello Linda,
    As of Release 4.6, the costs for a third-party business transaction are only transferred from the billing document to costing-based profitability analysis via condition type VPRS. Please read note 322497 regarding system behavior in the third-party case.
    Individual purchase orders should be handled in the same way as third-party billing documents regarding the determination of condition type VPRS. The VPRS condition will contain the right value.
    If we also passed the costs during the invoice receipt, you would have duplicate costs, i.e. once during IR and again during billing.
    If you wish to bypass the standard behavior, you can activate the exit in note 1376862; however, this is not recommended. You can use the user-exit solution described in that note. The code is already part of your system, so the only step necessary is to implement and activate the RKE_EXIT as described in the note.
    Note 1376862 is a pilot-release note, so I think it may be better for you to create an incident for this issue.
    Thanks and best regards,
    Ronghua Fan

  • Can third-party memory ruin my hard drive?

    Sorry this is a wee bit off topic, but since (a) I am more likely to get a straight answer from people here, and (b) I've already brought this problem up before, I'm going to ask..
    I have 2 identical laptops. We bought both for the lab about 2 years ago. They are G4 ppc. I bought an extra half gig of memory for each at the time of purchase, but I think it is from ramjet, not Apple.
    Both drives failed within a few weeks of one another. The second one came back from Apple today with a snotty message saying that the third-party memory had caused the problem and that they will refuse to do a repair if we ever send them a computer in the future with a third-party memory chip in it.
    This strikes me as absolute horse-shiite, but then again, maybe I am not aware of something I should be.

    The letter states "During the testing process, it was
    determined that a part Apple has not approved for use
    with your product resulted in your product's failure.
    When the part was removed, your product successfully
    passed all Apple diagnostic and reliability tests"
    (then it checks "RAM" in the space provided.)
    (A new hard drive, which is 15 GB larger than the
    original, was put in, along with an obsolete version
    of OS X. One has to wonder why the drive was
    replaced if removing the memory resulted in all tests
    being passed.)
    It has been well documented that Mac memory must meet strict requirements because of very tight timing and sync on the MLB... memory with sloppy gating (inconsistent timing) will cause read/write errors on HDs, and it can cause corruption in the boot record and index sectors... this in turn can cause the drive to seek for data trying to fix itself, excessively working the drive and shortening its MTBF. So, indirectly... it can cause damage... sorta... but not like taking a hammer to it. You basically got a form letter... but the memory that you are using may be causing some problems and contributing to problems on your HD. Even if your system does not call for matched memory, if you always stick to using matched sticks (especially the speeds, not just the size) you will save yourself a bunch of headaches in the long run. Pull any 3rd-party memory before sending a machine in for repair... you don't always get the system back with the same sticks (even if they weren't officially replaced).
    OS 9 was problematic in this same regard... I went through 3 HDs before figuring out that an extension conflict was the cause of my data being hosed on my old Performa. (Apple had replaced all components at least once and I was still having problems until I killed the offending extension.)
