ODI 11g: Issue with load plan variable in CASE WHEN step

Hi,
I am trying to retrieve the value of the current context using the following code in a refreshing variable:
Variable name: Context
Refreshing code: select '<%=odiRef.getContext( "CTX_Name" )%>' from dual
The variable returns correct results when I refresh it or call it as part of a Run Scenario step.
Requirement: I want to use the variable Context in a CASE step; depending on the value it returns, I want to execute a selective set of scenarios. I have defined the CASE WHEN steps per the requirements.
Issue: The behavior of the variable is not as expected when it is used in a CASE WHEN step. Load plan execution always triggers only the ELSE part of the CASE WHEN step.
I have tried the force refresh option on the variable Context in the CASE step.
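One way to see the exact string the CASE step compares against is to persist the refreshed value from a step inside the load plan. A minimal sketch, where VAR_LOG is a hypothetical logging table (use #PROJECT_CODE.Context instead of #Context for a project-level variable):
    -- Hedged sketch: log the refreshed variable value for inspection.
    -- VAR_LOG is a hypothetical table. The WHEN values of the CASE step are
    -- compared against this string literally, so check case and whitespace.
    insert into VAR_LOG (run_ts, var_value)
    values (sysdate, '#Context')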
Need your help and advice on how to fix the issue.
Thank you,
Bharani

Hi Bharani,
The link below may help you with load plans in ODI:
Help on Load Plan Execution
Regards,
Phanikanth

Similar Messages

  • I am having issues with loading Gmail in Safari. When I open Gmail in Safari, it just shows loading but does not open. It neither shows that Safari cannot display the page, nor is there any internet problem. I am confused about what it is.

    cannot browse gmail in safari

    If you're still having problems, or anyone else......
    For future info, you're in the wrong forum; you need the MBP area, this is the MacPro area. No big deal, I can barely keep up with the M & B & P, etc...
    Sounds like when it migrated over your old settings, it may have gotten corrupted. It has been known to work if you just make another "Location" and re-enter the information again; sometimes that will help. You might take it back to the store and see if it does it on their connection also. The silly person should have tested it first before letting you go.
    Hope you got it fixed...

  • Issues with load from Excel

    Dear all,
    I have issues with loading data from Excel.
    My Excel file looks like this:
    Time    Store  Neto_prodaja_ACT  Neto_prodaja_TAR
    Jan-12  C1     16                16
    Feb-12  C1     2                 2
    Jan-12  C2     1                 1
    Feb-12  C2     3                 3
    My procedure for load is:
    CLEAR STATUS 
    Across Var Down Time, Store
    Sel Neto_prodaja_ACT, Neto_prodaja_TAR
    Sel Store Input
    access lslink
         connect test1
         select * from my_list
         peek only 10
         read
    end
    When I load the data I receive the following error:
          Time        Store  Neto_prodaja_ACT  Neto_prodaja_TAR
        1 01/01/2012  C1     16.00             16.00
        2 02/01/2012  C1     2.00              2.00
        3 01/01/2012  C2     1.00              1.00
        4 02/01/2012  C2     3.00              3.00
    4 Record(s) Read, 0 Record(s) Skipped.
    DAT096:
    Unexpected Dimensions in ACROSS/DOWN List For Variable NETO_PRODAJA_ACT
    DAT096:
    Unexpected Dimensions in ACROSS/DOWN List For Variable NETO_PRODAJA_TAR
    The loaded data looks like this:
    The data for store C2 is also loaded on store C1, so everything is messed up.
    I also have a question regarding dimensions. I have a model in PAS which has data from different data sources (BW and Excel). In Excel I have dimensions which are not loaded from BW (they don't exist there). How do I create such a dimension?
    Thank you a lot in advance.
    Best regards,
    Petra

    Petra,
    The forum isn't really designed as a training system but rather as a place where people can share questions or get a separate pair of eyes to look afresh at issues that aren't working for some reason. This is particularly the case for something as important as creating dimensions.
    The idea of the SSM Cube Builder/Model Designer was to enable people to build models with their relevant dimensions and metrics for demos and simple initial systems using manual data entry. If you are getting into building dimensions that will be outside BW, then you are moving into the implementation arena rather than demo creation and need to work carefully so that things tie up.
    I doubt people would expect to be able to set up/implement BW without training, and SSM is the same.
    If you would like training, or would like to collaborate on a first project to enable skills transfer, then my colleague Pedro and I would be happy to discuss this. We have done this with other people and it has worked well.
    Regards
    Colin

  • Performance issue with loading Proclarity Main Page..

    Hi All,
    I have Proclarity 6.3 installed on a Windows 2008 R2 OS. The Proclarity reports were working well until last week. For the last few days I have been seeing a slow response time in loading the Proclarity main page.
    Loading the Proclarity main page in Internet Explorer 8 takes 150 seconds, while the same page loads in Google Chrome in 30 seconds.
    Have any of you faced a similar issue?
    I have already explored the following:
    1. Cleared the cache in the PAS tool
    2. Checked Event Viewer for any errors or warnings
    3. Tried browsing the Proclarity URL from the server itself (the performance is still slow)
    4. Validated memory consumption on the server side. MSSQLServer was consuming more space, hence restarted it. After the restart, the same issue remained (with loading the main page in IE only)
    5. Checked drive space: all drives have a minimum of 1.5 GB of free space
    6. Cleared the Proclarity event logs
    The issue is not only with loading the main page: navigating to any further web pages in the Proclarity STANDARD and PROFESSIONAL versions responds very slowly.
    The only other option that I am thinking of now is restarting the Windows server, which may not be an easy deal since it is a production server.
    But the loading of the web page in Chrome is 30 seconds and in IE it is 150 seconds (i.e., 5 times more). So does proposing to restart the server make sense?
    Any help, suggestions, or thoughts on what I am facing? Thanks
    Regards,
    Aravind

    onInputProcessing for two pages
      DATA: event TYPE REF TO if_htmlb_data.
      event = cl_htmlb_manager=>get_event_ex( request ).
      IF event IS NOT INITIAL AND event->event_name = 'button'.
        navigation->goto_page( event->event_server_name ).
      ENDIF.
    page1.htm
      <%@page language="abap" otrTrim="true"%>
      <%@extension name="htmlb" prefix="htmlb"%>
      <htmlb:content design="design2003">
        <htmlb:page>
          <htmlb:form>
            <htmlb:button       text          = "next"
                                design        = "NEXT"
                                onClick       = "page2.htm" />
          </htmlb:form>
        </htmlb:page>
      </htmlb:content>
    page2.htm
    <%@page language="abap" otrTrim="true"%>
      <%@extension name="htmlb" prefix="htmlb"%>
      <htmlb:content design="design2003">
        <htmlb:page>
          <htmlb:form>
            <htmlb:button       text          = "Page 1"
                                design        = "PREVIOUS"
                                onClick       = "page1.htm" />
          </htmlb:form>
        </htmlb:page>
      </htmlb:content>
    The above will work fine.
    Another way: you can define a global variable in your application class and subsequently change its value, according to your requirement, as the name of the page.
    Whenever you want to move to some page, just assign it on the onClick event of the button:
    navigation->goto_page( global_variable ).
    where global_variable is the variable you have defined.
    Hope this works for you.
    If not, reply.
    regards,
    Hemendra

  • Issue with Hyperion Planning 11.1.2 EPM

    Hi All,
    I am facing an issue with Hyperion Planning 11.1.2, which was working until yesterday. When I log in and go to Navigate --> Administer --> Dimension Library, I get an error which I am not able to understand. Kindly help me out. I checked all my EPM services and they are up and working; I have also restarted them, but no use. I have also logged in at http://<machine name>:19091/awb but get the same error below.
    Can someone help me out?
    org.apache.jasper.JasperException
         org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:510)
         org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:393)
         org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
         org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         com.hyperion.awb.web.util.HttpFilter.doFilter(HttpFilter.java:19)
         com.hyperion.bpm.LocalizationServletFilter.doFilter(LocalizationServletFilter.java:69)
    root cause
    java.lang.NullPointerException
         com.hyperion.bpm.logon.CSSAuthenticate.authenticateSSOUser(CSSAuthenticate.java:334)
         com.hyperion.bpm.logon.CSSAuthenticate.getAuthenticatedUser(CSSAuthenticate.java:130)
         org.apache.jsp.index_jsp._jspService(index_jsp.java:182)
         org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:332)
         org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
         org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         com.hyperion.awb.web.util.HttpFilter.doFilter(HttpFilter.java:19)
         com.hyperion.bpm.LocalizationServletFilter.doFilter(LocalizationServletFilter.java:69)
    note The full stack trace of the root cause is available in the Apache Tomcat/5.5.17 logs.
    Edited by: [email protected] on Jul 18, 2009 10:38 PM

    Hi,
    First question: has anything changed since it worked, such as admin accounts?
    Reboot the machine.
    Next, just start the "Hyperion Foundation Shared Services - Web Application" service.
    Can you log into Shared Services?
    If you can, then start the "Hyperion EPM Architect - Web Application" service.
    Go to http://localhost:19091/awb/ - does the web page display?
    If not, then it looks like EPMA needs reconfiguring.
    Next, start the "Hyperion EPM Architect - Process Manager" service; make sure the service starts, and it should start another 4 services.
    Next, refresh http://localhost:19091/awb/ and try to log in.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Issues with loads

    There have been issues with loads to a Sales cube. The issue is that the delta loads for the first few days in January have been lost (also deleted from the PSA). I wanted to check the following before I make a suggestion:
    1. Data up until the end of last year is available and okay
    2. Create a copy of the original Sales cube
    3. Generate a DataSource from the original cube
    4. Load up until the end of last year to the new cube via the Datamart
    5. Load from Sales LO extractors, carrying out the init from the beginning of this year to 9999 into the new copy
    6. Load Deltas into the new copy
    Thanks

    Hi,
    I assume you took no restriction on the date when you were doing the delta init; that's why you are getting deltas for all records. It would continue in the future also.
    We can easily find out a sales document number which was created somewhere near the end of December (I am not saying the document should be the last document of December). Use that document number as the starting document number to fill the setup table.
    Now do a selective deletion of the data from the cube which has Calendar Day >= 01.01.2007. Then do a repair request with a selection on Calendar Day (or the field which is mapped to Calendar Day) with the range 01.01.2007 to 12.12.9999.
    The next time you do a delta load to the cube, it will bring the records in as if nothing had happened.
    Of course, as Roberto mentioned, it is possible only in a specific case: if you are sure that every sales document created before 31/12/06 cannot change anymore after that date!
    Use whatever date you feel comfortable with (i.e., a date such that the records created before it were not changed after Jan 01) instead of 01.01.2007 in the above explanation.
    With rgds,
    Anil Kumar Sharma .P
    Message was edited by:
            Anil Kumar Sharma

  • I am having issues with my computer recognizing my phone when attached. I am trying to transfer pictures to my computer but it doesn't even see the phone.

    I am having issues with my computer recognizing my phone when attached. I am trying to transfer pictures to my computer but it doesn't even see the phone.

    skinaked101,
    Look no further, help is here! I can understand the importance of being able to keep your pictures safe on your computer.
    Have you tried using different USB ports on your computer, as your fellow community member suggested?
    Visit http://www.verizonwireless.com/support/knowledge-base-97259/ for complete steps to transfer the pictures from your phone to computer.
    JohnB_VZW
    Follow us on Twitter @VZWSupport
    If my response answered your question, please click the "Correct Answer" button under my response. This ensures others can benefit from our conversation. Thanks in advance for your help with this!!

  • OIM 11g - Issue with Bulk Load Utility for Account Data

    Hi,
    We are trying to load the account data for users in OIM 11g using the bulk load utility.
    We are trying to load the account data for the resource "iPlanet". For testing purposes, we made one account entry in the csv file and ran the bulk load utility. After the bulk load process completed, we noticed that the resource was provisioned to the user multiple times and multiple entries were created in the process form table.
    We have tried to run the utility multiple times with a different user record each time.
    The output of the below SQL query:
    SELECT MSG FROM OIM_BLKLD_LOG
    WHERE MODULE = 'ACCOUNT' AND LOG_LEVEL = 'PROGRESS_MSG'
    ORDER BY MSG_SEQ_NO;
    is coming as follows:
    MSG
    Number of Records Loaded: 126
    Number of Records Loaded: 252
    Number of Records Loaded: 504
    Number of Records Loaded: 1008
    Number of Records Loaded: 2016
    Number of Records Loaded: 4032
    We have noticed that each time, the number of records loaded doubles from the records loaded in the last run, even though the csv file contains only one record.
    Provided below are the parent and child csv file entries.
    Parent file:
    UD_IPNT_USR_USERID,UD_IPNT_USR_FIRST_NAME,UD_IPNT_USR_LAST_NAME,UD_IPNT_USR_COMMON_NAME,UD_IPNT_USR_NSUNIQUEID
    KPETER,Peter,Kevin,Peter Kevin,
    Child file 1:
    UD_IPNT_USR_USERID,UD_IPNT_GRP_GROUP_NAME
    KPETER,group1
    Child file 2:
    UD_IPNT_USR_USERID,UD_IPNT_ROL_ROLE_NAME
    KPETER,role1
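    A quick way to confirm the duplication on the target is to count process form rows per user. A minimal sketch, assuming the parent process form table is UD_IPNT_USR (inferred from the column prefixes above; the actual table name may differ in your schema):
        -- Hedged sketch: count iPlanet process form rows per user to confirm
        -- duplicate provisioning. Table and column names are assumptions
        -- inferred from the csv headers above.
        SELECT UD_IPNT_USR_USERID, COUNT(*) AS row_count
        FROM UD_IPNT_USR
        GROUP BY UD_IPNT_USR_USERID
        HAVING COUNT(*) > 1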
    Can you please offer some insight into what could be the potential cause of this issue and how it could be resolved?
    Thanks
    Deepa
    Edited by: user10955790 on Jun 25, 2012 6:45 AM

    Hi Deepa,
    I know from the 'user load' perspective that it is required to restart Oracle Identity Manager when you need to reload data that was not loaded during the first run.
    So my suggestion is to restart it before reloading.
    Reference: http://docs.oracle.com/cd/E21764_01/doc.1111/e14309/bulkload.htm#CHDEICEH
    I hope this helps,
    Thiago Leoncio.

  • Load issues with integrated planning

    I have a cube XYZ. Sometimes some key figures are wrong and they should be manually changed. I use a real-time cube for the adjustment. I loaded all data from cube XYZ to the real-time cube and changed some key figures with Integrated Planning. Now I have to load the correction data back to XYZ. If I use a delta, the first request will load all the data to XYZ; in this case, I have double data in XYZ.
    My question: how can I load only the correction data to XYZ?

    Hi,
    After you have loaded from XYZ to the real-time InfoCube, you do an initialization without data transfer back to XYZ. This means that the change pointer is reset.
    Then you change the records in the real-time InfoCube. These changes will be put in a new request, which equals the delta queue (because you reset the change pointer).
    Make a delta load from the real-time cube to the XYZ cube => only the changes are moved.
    Hope this answer helps
    Best regards,
    Keld

  • Performance issues with query input variable selection in ODS

    Hi everyone
    We've upgraded from BW 3.0B to NW04s BI using SP12.
    There is a problem with input variable selection. This happens regardless of whether we use BEx (new or old 3.x) or RSRT. When using the F4 search help (or "Select from list" in the BEx context) to list possible values, it takes forever for a large ODS (containing millions of records).
    Using ST01 and SM50 to trace the code in the same query, we see a difference here:
    NW04s BI SQL command:
    SELECT "P0000"."COMP_CODE" AS "0000000032", "T0000"."TXTMD" AS "0000000032_TXTMD"
    FROM ( "/BI0/PCOMP_CODE" "P0000" )
      LEFT OUTER JOIN "/BI0/TCOMP_CODE" "T0000"
        ON "P0000"."COMP_CODE" = "T0000"."COMP_CODE"
    WHERE "P0000"."OBJVERS" = 'A'
      AND "P0000"."COMP_CODE" IN ( SELECT "O"."COMP_CODE" AS "KEY" FROM "/BI0/APY_PP_C100" "O" )
    ORDER BY "P0000"."COMP_CODE" ASC
    BW 3.0B SQL command:
    SELECT ROWNUM < 500 ....
    In 3.0B, ROWNUM is limited to 500, and this results in a speedy, though limited, query. In the new NW04s BI, this renders the selection screen unusable, as ABAP timeout dumps occur first due to the large data volume searched with a sequential read.
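    For illustration only (the actual 3.0B statement is truncated above), the 3.0B pattern caps the candidate list with ROWNUM, along these lines:
        -- Hedged illustration of the BW 3.0B pattern: stop scanning after
        -- ~500 rows instead of reading the full ODS table.
        SELECT DISTINCT "COMP_CODE"
        FROM "/BI0/APY_PP_C100"
        WHERE ROWNUM < 500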
    It will not be feasible to create indexes for every single query selection parameter (issues with performance when loading, space required, etc.). Is there a reason why SAP seems to have fallen back on less effective code for this?
    I have tried to change the number of selected rows to <500 in the BEx settings, but one must reach a responsive screen in order to get to that setting, and it is not always possible or saved for the next run.
    Anyone with similar experience or can provide help on this?

    Here is the reason why the F4 help on the ODS was faster in BW 3.x:
    In BW 3.x the ODS did not support the read mode "Only values in InfoProvider". So, comparing the different SQL statements, I propose to change the F4 mode in the InfoProvider-specific properties to "About master data". This is the fastest F4 mode.
    As an alternative, you can define indexes on your ODS to speed up F4. You would need a non-unique index on InfoObject 0COMP_CODE in your ODS.
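    For illustration, a minimal sketch of such an index at the database level, assuming the ODS active table is /BI0/APY_PP_C100 as in the traced statement (the index name is made up; in practice the index would be defined in the ODS maintenance so that it survives activation):
        -- Hedged sketch: non-unique index on COMP_CODE in the ODS active
        -- table, to support the F4 value lookup.
        CREATE INDEX "/BI0/APY_PP_C100~F4" ON "/BI0/APY_PP_C100" ("COMP_CODE")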
    Check below for insights
    https://forums.sdn.sap.com/click.jspa?searchID=6224682&messageID=2841493
    Hope it Helps
    Chetan
    @CP..

  • ODI 11g issue (JDK)

    Hi
    I have a problem with the installation of ODI 11g under Windows 7 (64-bit). I have installed the 64-bit JDK.
    When I start setup.exe (run as administrator), the Oracle Universal Installer starts, but it needs the path to the JDK:
    "Preparing to launch Oracle Universal Installer from C:\Users\*. Please wait ...
    Please specify JDK location ( Ex. /home/jdk ), <location>/bin/java should exist:"
    When I input a correct path, the installer looks like it is working, but then the installer window suddenly closes without any error.
    Does anybody have the same problem?
    Thanks for any advice.
    S. Mahesh

    Hi Mahesh,
    You have installed the 64-bit JDK on your Windows 7 machine, right?
    1) Make sure that JDK 1.6.0_21 or a higher version is installed on your Windows 7 machine, then set the environment variables for that JDK as follows.
    In your Windows environment:
    JAVA_HOME=c:\Program Files\Java\jdk1.6.0_21
    path=%path%;c:\Program Files\Java\jdk1.6.0_21\bin;c:\Program Files\Java\jre1.6.0_21\bin;
    and refresh your desktop.
    2) Download the generic 64-bit WebLogic 10.3.3
    and install WebLogic 10.3.3,
    e.g.: java -jar wls1033_generic.jar (give the Middleware path, e.g. D:\Oracle\Middleware, at the time of the WebLogic installation)
    3) Start the installation of ODI 11g, and when it asks for the JDK/JRE path location, provide the following JDK/JRE location and press Enter:
    "D:\Oracle\Middleware\jdk1.6.0_18"
    Hope the above steps help you; otherwise follow this link:
    http://oracleintelligence.blogspot.com/2010/10/how-to-install-odi-11g.html
    Thanks,
    Phani

  • ODI Multi Execution of Load Plans

    Hi,
    I am using ODI version 11.1.1.6.0.
    I want to know the number of instances running for my load plan. I tried using the ODI APIs (odiRef) but got no results.
    Can anyone help on this topic?
    Thanks in advance.
    Regards
    Gaurav

    AJ,
    1) We see the status remains in the R state if a load plan fails during execution of ON CONNECT. The status does not change until ODI Studio is closed (version 11.1.1.6).
    2) Can we run your SQL as part of a load plan (the load plan executes on a different target database from the work repository instance)?
    3) Is it possible to get the number of load plan instances in a running state through some API or class in ODI, like [http://docs.oracle.com/cd/E28389_01/apirefs.1111/e17060/oracle/odi/runtime/agent/invocation/LoadPlanStatusInfo.html]? If yes, please help with some example.
    Thanks
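    For reference, one way to approximate the running-instance count without the Java API is to query the work repository directly. A minimal sketch, where the SNP_LPI_RUN table and the 'R' status code are assumptions based on the ODI 11g repository schema; verify both against your own work repository before relying on this:
        -- Hedged sketch: count load plan runs currently in the running state.
        -- SNP_LPI_RUN and STATUS = 'R' are assumed repository details; join
        -- to the load plan instance table to filter by load plan name.
        SELECT COUNT(*) AS running_instances
        FROM SNP_LPI_RUN
        WHERE STATUS = 'R'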

  • Issue with load from livecache

    Dear all
    In our APO system we have encountered a strange issue with data loaded from liveCache into the InfoCube.
    The problem is in the InfoCube, where there are two requests with the same amount of forecast version data.
    When I look in my InfoCube in APO, I see e.g. the two following request IDs for a given product:
    REQU_4CZ4SQ5PSM9V5PHJCHTIS24GK is the most recent load from liveCache (replacing any previous liveCache loads).
    APO_R43U3KZLM0V3WVK112YEK3KKLE cannot be located. It is not available in the InfoCube manage section.
    Ultimately this results in double data for this given product.
    Do any of you have an idea as to what this APO_* request could be triggered by?
    This is the input from the BW-team:
    This indicates that an APO program has modified the InfoCube data without updating the request tab of the manage InfoCube screen, but it has updated the request ID of the data which was transformed. With subsequent loads into the InfoCube, these records are split based on different request IDs and loaded collectively into BI.
    I hope you can help me out.
    Best regards,
    Anders Thinggaard

    Hi Visu
    It seems I made a typing error in my description of the problem. It is of course the DESTINATION combination, not the SOURCE combination, that is written into the InfoCube when realigning on the InfoCube.
    This, of course, creates the request ID starting with APO. Besides that, I get the request starting with REQU when I load from liveCache using an InfoPackage. So far so good.
    What confuses me, though, is that when I load back into liveCache (load my planning area) it seems to pick up the correct amount of data (not taking the APO* request into account), whereas my load to my external BW out of my APO InfoCube seems to pick up both the REQU* and the APO* requests, resulting in duplicate data.
    Have you had this challenge as well?
    My first idea is to have the BW team write some ABAP code leaving out any request ID starting with APO*. However, it seems to me that this is standard functionality of APO, and I'd like to get to the bottom of this...
    Best regards,
    Anders
    P.S. As I understand it, the copy logic only determines whether data from the source combination is added to or overwrites the data for the destination combination.

  • OBIEE 11g Issue with "group by"

    Hello,
    I have issues with the physical queries generated by the OBIEE engine.
    All the facts are at a detailed granular level. When I query the table, the GROUP BY clause is excluded from the query. The following is a sample query generated by the OBIEE 11g engine:
    select 0 as c1,
    D1.c2 as c2,
    D1.c1 as c3,
    D2.c3 as c4,
    D2.c2 as c5,
    D2.c1 as c6
    from
    (select distinct T48494.SMBL as c1,
    T48494.EXP_DT as c2
    from
    VW_CONTRACT_DETAILS T48494
    where ( T48494.EXP_DT in (TO_DATE('2012-09-03 00:00:00' , 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2012-09-19 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')) )
    ) D1,
    (select sum(T48494.TRADED_VOL) as c1,
    sum(T48494.TRADED_VAL) as c2,
    sum(T48494.TOTAL_TRADES) as c3
    from
    VW_CONTRACT_DETAILS T48494
    where ( T48494.EXP_DT in (TO_DATE('2012-09-03 00:00:00' , 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2012-09-19 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')) )
    ) D2
    order by c3, c2
    I'm getting similar issues (exclusion of GROUP BY from the query) even when I query other fact tables. Is there any global change to be made?
    Regards,
    Kishore

    Does it mean you are expecting a group by on D1.c1, D1.c2?
    So the query should look like:
    select 0 as c1,
    D1.c2 as c2,
    D1.c1 as c3,
    D2.c3 as c4,
    D2.c2 as c5,
    D2.c1 as c6
    from
    (select distinct T48494.SMBL as c1,
    T48494.EXP_DT as c2
    from
    VW_CONTRACT_DETAILS T48494
    where ( T48494.EXP_DT in (TO_DATE('2012-09-03 00:00:00' , 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2012-09-19 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')) )
    ) D1,
    (select sum(T48494.TRADED_VOL) as c1,
    sum(T48494.TRADED_VAL) as c2,
    sum(T48494.TOTAL_TRADES) as c3
    from
    VW_CONTRACT_DETAILS T48494
    where ( T48494.EXP_DT in (TO_DATE('2012-09-03 00:00:00' , 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2012-09-19 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')) )
    ) D2
    group by c3, c2 ---> added line
    order by c3, c2
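    For comparison, this is the shape of the statement one would expect once the measure columns carry a SUM aggregation rule in the RPD (a hedged sketch, not the exact query OBIEE would emit):
        -- Hedged sketch: with aggregation rules on the measures, the
        -- generated SQL groups the sums by the selected dimension columns.
        SELECT T48494.SMBL,
               T48494.EXP_DT,
               SUM(T48494.TRADED_VOL)   AS traded_vol,
               SUM(T48494.TRADED_VAL)   AS traded_val,
               SUM(T48494.TOTAL_TRADES) AS total_trades
        FROM VW_CONTRACT_DETAILS T48494
        WHERE T48494.EXP_DT IN (TO_DATE('2012-09-03', 'YYYY-MM-DD'),
                                TO_DATE('2012-09-19', 'YYYY-MM-DD'))
        GROUP BY T48494.SMBL, T48494.EXP_DT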

  • OBIEE 11G: Issue with Agent

    Hello All,
    I am trying to make agents work in OBIEE 11g together with Active Directory settings, but the agents are failing for some users. Some users are not able to send the iBots:
    We get the following message with Agent Failed:
    Agent ID: XXXXXXX
    No devices for user: XXXX
    Agent ID: XXXXXXXX
    No Content for user: XXXX
    Kindly note that I have already checked the My Account page of these users, and the email address is present there.
    Can anyone help us investigate this issue?
    Thanks and Regards

    Where do you have your LDAP configured?
    Is it in the RPD or in WebLogic?
