Ideal approach

I am about to write a custom JSP portlet which will take a few parameters as input and generate a web report.
What is the best approach to access data from the Oracle database using this portlet?
Should I use a JDBC/JSP combination, or Oracle 9i Reports?
I want to keep it really simple.
Thank You,
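If you go the JDBC/JSP route, the usual shape is a small data-access class that the JSP calls, with a PreparedStatement for the input parameters. A minimal sketch — the table, columns, and class name here are hypothetical, not from any Oracle portlet API:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ReportDao {
    // Hypothetical report query; the table and column names are illustrative only.
    private static final String SQL =
        "select dept_name, total_sales from sales_report where region = ? and year = ?";

    /** Runs the report query with bind variables (avoids SQL injection from portlet parameters). */
    public static List<String> fetchReport(Connection con, String region, int year) throws SQLException {
        List<String> rows = new ArrayList<String>();
        PreparedStatement ps = con.prepareStatement(SQL);
        try {
            ps.setString(1, region);
            ps.setInt(2, year);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                rows.add(formatRow(rs.getString(1), rs.getDouble(2)));
            }
            rs.close();
        } finally {
            ps.close();
        }
        return rows;
    }

    /** Pure formatting helper: the JSP only renders these strings. */
    public static String formatRow(String name, double value) {
        return name + ": " + value;
    }
}
```

Keeping the SQL behind a method like this keeps the page itself simple: the JSP just iterates over the returned strings, which is usually easier to maintain than wiring in Oracle 9i Reports for a small report.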

Hi Usha,
Normally they are ignored, as the objects reflected under the DELETED node are not required in the newer version.
Oddly, I didn't find any documentation from SAP on this.
Maybe you can raise an OSS message and check with SAP. Do update the thread with SAP's reply.
Thanks,
Best regards,
Prashant

Similar Messages

  • Design approach suggestion

    Hello Experts,
    Here is the scenario -
    I have an Oracle database from which I need to fetch the data through a record adapter (SQL passthrough). The data is from different tables which are "unrelated" to each other. There are in all 8 such unrelated tables.
    Now the approaches that come to my mind -
    Approach 1: Create 8 different applications and fetch the respective tables. Run ITL over it and push the indexed data to the same Dgraph.
    Risk 1: Running multiple applications on the same Dgraph at the same time. But this could be mitigated by the processes followed.
    Risk 2: Manageability.
    Any other risk you see?
    Approach 2: Create 1 application with 8 record adapters and finally join all on the basis of a unique identifier column.
    Do you foresee any risks or challenges in this?
    If you have any other ideal approach, please share!
    Thanks!
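For what it's worth, the "join all on a unique identifier column" idea in Approach 2 can be sketched independently of Endeca (no record adapter or switch-join API is shown; all names below are illustrative): records from each source are folded into one map keyed by the identifier, with later sources contributing additional attributes.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RecordJoin {
    /** Merge attribute maps from several sources, keyed by a shared identifier column. */
    public static Map<String, Map<String, String>> joinOnKey(
            String keyColumn, List<List<Map<String, String>>> sources) {
        Map<String, Map<String, String>> joined = new LinkedHashMap<>();
        for (List<Map<String, String>> source : sources) {
            for (Map<String, String> record : source) {
                String key = record.get(keyColumn);
                if (key == null) continue;              // records without the key cannot be joined
                joined.computeIfAbsent(key, k -> new LinkedHashMap<>())
                      .putAll(record);                  // later sources add or override attributes
            }
        }
        return joined;
    }
}
```

The main risk this makes visible is attribute collision: if two sources share a non-key column name, the last one wins, so column naming across the 8 adapters needs to be disjoint apart from the identifier.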

    Hello Kristen,
    Thanks for a quick reply!
    Yes, creating 8 applications doesn't seem like a good idea, and I am focusing more on the second approach -
    "Create a pipeline with 8 record adapters that fetch the data from tables and then I will do a switch join and provide it to mapper"
    Do you foresee any issues in the above approach?
    I am less familiar with how CAS works at the moment. Can you please point me to the correct document, or just briefly describe the approach you suggested?
    Thanks a ton for your reply!

  • One Computer, Two Accounts - How Best to Proceed with MacOS 10.8.2?

    My wife and I want to share our new iMac, and I'd like some advice on how best to do this without messing things up.
    We both have iOS devices to sync, and we both are using the same iTunes Store Account. However, we each have our own iCloud account. We each have our own email accounts, Hotmail accounts, and Yahoo accounts. We each have our own account for the iMac, which I assume you would need if you wanted to maintain two different sets of Mail accounts and two different Facetime "accounts," not to mention different desktop customization options.
    Things we would like to share without duplicating files include: iPhoto library (~100 GB), iTunes media files (over 300 GB), Contacts list, (maybe) Calendar, and (maybe) Safari bookmarks. We do NOT want to share e-mail, Reminders, Facetime. Ideally, I would like for her to be able to sync her devices entirely through her iMac account. However, I am not opposed to her syncing her iOS devices through my account for some things and through her account for other things, if that is not easily avoidable.
    Regarding the iTunes library, I could go either way on her having separate library files with separate ratings & playlists or just sharing the same library setup that I have. However, I would want her to have access to the same metadata that I have for all music and video files, since I have lots of custom album art as well as lyrics loaded in. Do we have to use the same library file in order to have access to the same metadata, which will be updated on a regular basis?  If so, then is there a way to do that via two different computer accounts, or does she pretty much have to sync with my account?
    Now regarding iPhoto libraries, I wouldn't mind us sharing the same "library" as well as the files, including ratings and albums. I would like for the "faces" setup to be the same for both of us, if possible, so that she does not have to repeat the process of face tagging. I read in an older article that sharing the iPhoto library could be accomplished by sending the iPhoto library file to an external hard drive, but as that seemed to be an older approach, I thought I would ask here if that is still the ideal approach. I might want to send the iPhoto library file to an external HD anyway for space considerations. But if I do this, and we are both using the same library file for viewing photos, then does she still see the same albums, events, and tags that I have created already?
    Is there a way to share contacts and calendars without having to share other things like Notes and Reminders? I suppose I could have her sync this information via my account, but I'd be grateful to learn of a better way if there is one.

    You can share out both the iTunes library and the iPhoto Library in their respective preferences. However, the sharing account must have both iTunes and iPhoto running. You can keep the account running with those things open in the background using Fast User Switching. Enable that in the Users & Groups system prefs and then never log out, just switch to the login screen or the other account using the Fast User switching menu.
    I haven't done much actual use in either, so I don't know what the limits are (like making a playlist out of the shared tracks, syncing with iPhone/iPod, etc). For iPhoto, all the photos are available, but if you want to edit one, you have to import a copy into your own library. Another option might be to not keep the media inside the Library, but store it in the /Users/Shared folder. Then, set up your Libraries to leave the content where it is (Preferences). However, I don't think you'd get the metadata for the iTunes music. I believe that is all stored within the iTunes Library.
    You can share iCloud calendars so the other person can see your calendar by right-clicking on the Calendar name. There will be an option to share the calendar, there. Also, you can do so from the iCloud web interface.
    I've got nothing on sharing contacts. Not sure of a good way to do that besides maintaining two sets. You can start from the same source, but it will diverge over time as contacts are added and removed individually.

  • Planning and Inter Org Transfers

    Hi all,
    We have a scenario wherein kanban items procured from a supplier (using the consignment process) are received and stocked in an IO/3rd party warehouse (say IO-A). Whenever the main plant (say IO-B) requires these kanban items, they source them from IO-A. The two plants are just 2-3 km away from each other, and a truck is used to move the goods between the two plants. We are wondering what would be the best way to move goods between the two plants/IOs.
    Using IR / ISO approach?
    Using Inter Organization transfers between IO-A and IO-B?
    We want to use the Inter Organization transfer approach as it's simple and doesn't generate too many transactions. However, my query is: if we use the 'Inter Org Transfer' approach, then how does plant IO-B request plant IO-A to dispatch goods? If we use ASCP on the IO-B main plant, then can ASCP initiate the inter-org transfer from IO-A to IO-B? Or if we use min-max planning, then can the min-max report output be used by IO-A to dispatch goods to IO-B?
    Please provide your suggestions.
    Thanks,
    Nitin

    Hi Nitin,
    As mentioned earlier, 'Inter-Org Transfer' is not a controlled process. That means the 3rd Party WH would never be able to see any kind of request in the Oracle system from Org A (Main Plant) regarding the requirements.
    And hence I had mentioned that, if the 3rd Party WH wants a formal request placed in the system from Org-A, then Kanban planning with source type 'Inter-Org' would be the ideal approach.
    Further, ASCP will not be able to initiate the Inter-Org transfer process from IO-A to IO-B.
    Regarding Min-Max planning, You would be able to achieve the inter-org transfer with the source type as 'Inventory'.
    But again in this case, the 3rd Party WH would not be able to see anything in the Min-Max planning report for what is planned in the Main Plant (Org-A). They will have to follow the ISO process in order to process the Internal Orders which would be created for the Internal Requisitions through Min-Max planning in the Main Plant. It would be more or less similar to Kanban planning, except that the system will plan it automatically and generate IRs based on the Min-Max set-up.
    Regards
    Shabbir

  • ALE or EDI ?

    Hi Gurus,
    One of our clients wants to exchange transactional data (Purchase Order and Invoice related data only) with one of their clients.
    Both parties have SAP systems behind their own firewalls.
    Only business data related to purchase orders and invoices is to be exchanged.
    Now I have few questions :
    1. What would be the ideal approach ALE or EDI as both parties have SAP in place?
    2. Is it advisable to go for EDI considering only exchange of business data for PO/Invoice ?
    3. What is the Extra cost if going for EDI ?
    4. If, via ALE IDocs, system X sends a PO for material ABC (the MATNR in their SAP system X), once it reaches system Y, does system Y need to have the same MATNR? If not, how do we usually cater for this type of situation?
    How do we take care of master data sent via IDocs in the other system?
    Regards ,

    >
    Santosh Rawat wrote:
    > Hi Matt ,
    >
    > I am sorry for all this . I am a person from SAP PI forum, there i used to get prompt reply . This is the reason why I overdid myself.
    > Thanks for correcting me .
    > May I know in which forum should i raise these questions ? or which is the correct forum for these queries ?
    >
    > Regards ,
    Hey, no problem.  The ABAP forums have a high turnover, so the best thing to do is simply to edit your original post a bit and bump it to the top.  I've moved the post to the correct forums.
    I'm no expert, but these are my thoughts.
    1. If the parties are both legally separate entities, then I'd think that ALE is not appropriate.  You'd have to establish trust between the systems, and what would you do if another, non-SAP client wants to exchange POs and Invoices?
    2. EDI just means exchanging data electronically.  ALE is a form of EDI.
    3. Don't know.
    4. You'll need some kind of conversion routines, I would expect.
    matt

  • Need to persist data in JVM...

    Hi,
    I have a requirement in my project where, once the user logs in, the database is queried and some information
    related to the user is retrieved and stored in the JVM, so the number of hits to the database is decreased.
    The requirement in detail :
    This is Simple Cart-Shopping application.
    There is a UI which displays various products which one can view and purchase.
    1) Once the user logs in he can view the various products.
    2) There are restrictions which decide which products that particular user can purchase,
    although a user can view all the products listed in the store.
    3) Every time a user clicks on a product, the database is queried to retrieve access info for the user on that product.
    4) If a user has proper access, only then can he purchase the product.
    Presently the database is queried in all four steps explained above.
    I intend to:
    Hit the database once the user logs in and retrieve information regarding the access levels of the user for all products listed in the store.
    Persist this info in the JVM (serialization or by other means...).
    Now for any further access info, the persisted objects will provide the data.
    This reduces the database hits.
    Other details :
    App Server - Jboss
    Database - oracle
    Expected Concurrent users - 2000 - 3000
    Please provide your suggestions as to what should be the ideal approach.
    regards,
    Abhishek.

    Hi,
    ValueHolder is a name I suggested; it is used in the Axis web service implementation to hold a value by reference (an array), which is, if I remember correctly, an implementation of the WSDL array type.
    Spring also uses this type of pattern to hold a constructor argument as an inner static class of another class.
    Ex:
    import java.util.HashMap;
    import java.util.Map;
    public class ValueHolder {
         private static final Map<String, Information> valueHolder = new HashMap<String, Information>();
         public static void addValue(String userId, Information info) {
              valueHolder.put(userId, info);
         }
         public static Information getValue(String userId) {
              return valueHolder.get(userId); // check for a non-existing userId to avoid a null return value
         }
         /*
          * Clean on certain occasions, e.g. if the heap size reaches a certain point
          * or other predefined conditions are met.
          */
         public void cleanOnEvent() {
              valueHolder.clear();
         }
    }
    Regards,
    Alan Mehio
    London,UK
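A self-contained variant of the same pattern, sketched for the original question (class and method names here are mine, not from any framework). Since the poster expects 2000-3000 concurrent users, a ConcurrentHashMap avoids having to synchronize the map externally:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class AccessCache {
    // userId -> set of product ids the user may purchase (illustrative structure)
    private static final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

    /** Called once at login, after the single database query for this user. */
    public static void store(String userId, Set<String> purchasableProducts) {
        cache.put(userId, purchasableProducts);
    }

    /** Every later product click reads from the cache instead of hitting the database. */
    public static boolean canPurchase(String userId, String productId) {
        Set<String> allowed = cache.get(userId);
        return allowed != null && allowed.contains(productId);
    }

    /** Evict on logout or session timeout so the heap does not grow without bound. */
    public static void evict(String userId) {
        cache.remove(userId);
    }
}
```

Eviction matters here, as the cleanOnEvent hint above suggests: without it, the map grows with every login for the life of the JVM.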

  • Arranging Hierarchy tables in BMM layer

    Hi
    I have a different structure of dimensions in the database:
    Year->Season->Quarter->Month->Week
    For every level in the hierarchy I have a table, i.e., Dim_Year, Dim_Season, Dim_Quarter, Dim_Month, Dim_Week.
    They are joined in the physical layer as per the hierarchy, that is, Dim_Year is joined to Dim_Season by Year_id, Dim_Season is joined to Dim_Quarter by Season_id, and so on.
    In the BMM layer I have to create a hierarchy.
    How should I keep these tables?
    Option 1: Pull all the tables in separately.
    Create a complex join between all the dim tables: Year->Season->Quarter->Month->Week.
    Create the hierarchy from these tables.
    Option 2:
    Create a logical table, Time.
    Pull all the tables into the Time table as different logical table sources.
    Then create the hierarchy from these tables.
    This will avoid the complex join in the BMM layer.
    Please suggest which option will lead to fewer performance issues and which is the ideal approach in such scenarios.
    Thanks in Advance

    Hi,
    I am new to OBIEE, but I'll still suggest one option.
    Drag and drop any one of the Time tables (for example Dim_Year) from the physical layer to the BMM layer. Create a complex join of this table with the other tables in the BMM layer. Then simply drag and drop the columns from the other tables (Dim_Season, Dim_Quarter, Dim_Month, Dim_Week) from the physical layer one by one onto the LTS of Dim_Year.
    Hope I haven't given you a funny answer.
    Edited by: user9149257 on Mar 8, 2010 9:20 PM

  • Multiple records in MessageInOutEvents table

    I have enabled tracking the message body in my receive port before the file is processed, but I get two entries in the [dta_MessageInOutEvents] table, due to which BTS.MessageId is returning the message id of the second entry, for which no message tracking is available in the [Tracking_Parts1] table.
    Kindly advise how the insertion of the second entry can be avoided.
    Also, there are two entries for a message in the Custom] for which no tracking is enabled.
    Kindly advise.
    Regards, Vivin.

    I would agree with la Cour; retrieving content from the Tracking database is not an ideal approach.
    If you need the EDI content later in the app, then you should take steps to keep it in the running app.  You can do this by using a PassThrough in the initial Receive, then let an Orchestration manage it through the whole process.  You can parse/debatch the EDI in a Receive Pipeline through a Loopback Adapter (http://social.technet.microsoft.com/wiki/contents/articles/12824.biztalk-server-list-of-custom-adapters.aspx), then correlate back to the Orchestration which holds the original EDI message for further processing.
    You are getting the MessageID of the second (Send) message because that is what is hitting the Orchestration; the original (Receive) message no longer exists because the Disassembler creates a new message.  You can relate them through the uidServiceInstanceId, which is the id of that Pipeline instance.

  • Iteration problem in ref cursor

    hi all
    declare
    type ref_cur is ref cursor;
    p_ref_cur ref_cur;
    type namelist is table of varchar2(10);
    type address is table of varchar2(20);
    p_namelist namelist;
    p_address address;
    begin
    open p_ref_cur for 'select emp_code,EMP_ADDR from emp_det_bak';
    fetch p_ref_cur bulk collect into p_namelist,p_address;
    for i in p_ref_cur
    loop
    dbms_output.put_line('this is for testing '||p_ref_cur(i));
    end loop;
    end;
    I get an error like the following:
    ERROR at line 11:
    ORA-06550: line 11, column 10:
    PLS-00221: 'P_REF_CUR' is not a procedure or is undefined
    ORA-06550: line 11, column 1:
    PL/SQL: Statement ignored
    If I use
    for i in p_ref_cur.first..p_ref_cur.last
    I get an error message like:
    Invalid reference to variable 'P_REF_CUR'
    Can anyone please tell me what I'm doing wrong? If I want to iterate through, should I use something
    like while(p_ref_cur.next) loop?
    Please suggest.
    Message was edited by:
    p.bhaskar

    > please suggest me.
    Well, there is a fairly easy way to create a refcursor with the ability to loop through the columns of the cursor - without knowing what the columns are.
    However, this is not always an ideal approach. It uses VARCHAR2 as the data type for all columns and requires the source SQL to construct a collection.
    Here's a brief example of this approach:
    SQL> create or replace type TColumns as table of varchar2(4000);
    2 /
    Type created.
    SQL>
    SQL> create or replace procedure DisplaySQL( sqlStatement varchar2 ) is
    2 c SYS_REFCURSOR;
    3 colList TColumns;
    4 rowCnt integer := 0;
    5 begin
    6 open c for sqlStatement;
    7
    8 loop
    9 fetch c into colList;
    10 exit when c%NOTFOUND;
    11
    12 rowCnt := rowCnt + 1;
    13
    14 W( '****************' ); -- W = DBMS_OUTPUT.put_line
    15 W( 'row: '||rowCnt );
    16 W( 'columns: '||colList.Count );
    17
    18 for i in 1..colList.Count
    19 loop
    20 W( 'column='||i||' value='||colList(i) );
    21 end loop;
    22 end loop;
    23
    24 W( '****************' );
    25 close c;
    26 end;
    27 /
    Procedure created.
    SQL>
    SQL>
    SQL> var SQL varchar2(4000);
    SQL>
    SQL> exec :SQL := 'select TColumns(created, object_type, object_name) from ALL_OBJECTS where rownum = 1';
    PL/SQL procedure successfully completed.
    SQL> exec DisplaySQL( :SQL )
    row: 1
    columns: 3
    column=1 value=2005/07/21 19:04:20
    column=2 value=TABLE
    column=3 value=CON$
    PL/SQL procedure successfully completed.
    SQL>
    SQL> exec :SQL := 'select TColumns(created, object_type, object_name) from
    ALL_OBJECTS where rownum = 1 UNION ALL select TColumns(dummy,SYSDATE) from
    DUAL';
    PL/SQL procedure successfully completed.
    SQL> exec DisplaySQL( :SQL )
    row: 1
    columns: 3
    column=1 value=2005/07/21 19:04:20
    column=2 value=TABLE
    column=3 value=CON$
    row: 2
    columns: 2
    column=1 value=X
    column=2 value=2007/05/16 13:14:56
    PL/SQL procedure successfully completed.
    SQL>

  • Alternative Application Module Coding Style

    I want to preface this post with the fact that I'm relatively new to the OA Framework, but not general OO concepts.
    Typically you'd see an application module with this structure:
    public class testAMImpl extends OAApplicationModuleImpl {
       /** This is the default constructor (do not remove) */
       public testAMImpl() {}
       /** Sample main for debugging Business Components code using the tester. */
       public static void main(String[] args) {
         launchTester("oracle.apps.test.server", "testAMLocal");
       }
       /** Container's getter for SomeVO1 */
       public SomeVOImpl getSomeVO1() {
         return (SomeVOImpl)findViewObject("SomeVO1");
       }
       public void runCheck() {
          // Code Here
       }
    }
    As you can see, there is a public function to return the view object, and one to run a method. Also note that the AM extends the OAApplicationModuleImpl class. Typically, to invoke a method one would have to call am.invokeMethod("runCheck").
    What about coding the AM like this instead:
    public class testAMImpl extends OAApplicationModuleImpl implements oracle.apps.test.testAM {
       /** This is the default constructor (do not remove) */
       public testAMImpl() {}
       /** Sample main for debugging Business Components code using the tester. */
       public static void main(String[] args) {
         launchTester("oracle.apps.test.server", "testAMLocal");
       }
       /** Container's getter for SomeVO1 */
       private SomeVOImpl getSomeVO1() {
         return (SomeVOImpl)findViewObject("SomeVO1");
       }
       public void runCheck() {
          // Code Here
       }
    }
    I would think that this would be a more ideal approach for a couple of reasons:
    1. The client methods can be exported (e.g. runCheck()) and thus have compile-time checks vice run-time checks.
    2. The client can call the method via am.runCheck() vice am.invokeMethod("runCheck").
    3. One can make the "getVO" methods private so you prevent the client from calling these methods, which it shouldn't be doing anyway.
    I do realize that in this current example the client could still invoke the am.findViewObject method.
    Thanks!
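The compile-time-vs-run-time point above can be shown outside the OA Framework (OAApplicationModuleImpl and invokeMethod are not reproduced here; the service below is a stand-in): a call through an interface is checked by the compiler, while name-based reflective dispatch, analogous to invokeMethod, only fails at run time.

```java
import java.lang.reflect.Method;

public class InterfaceVsReflection {
    /** Exported client interface: callers compile against this, so typos fail at compile time. */
    public interface OrderService {
        int runCheck(int amount);
    }

    /** Implementation; internals such as VO getters can stay private, unreachable via the interface. */
    public static class OrderServiceImpl implements OrderService {
        public int runCheck(int amount) { return amount * 2; }
        private String getSomeVO1() { return "SomeVO1"; } // hidden from clients
    }

    /** Name-based dispatch, analogous to invokeMethod: a misspelled name surfaces only at run time. */
    public static Object invokeByName(Object target, String name, int arg) {
        try {
            Method m = target.getClass().getMethod(name, int.class);
            return m.invoke(target, arg);
        } catch (Exception e) {
            throw new RuntimeException(e); // e.g. NoSuchMethodException for a typo
        }
    }
}
```

The trade-off the poster notes still applies: the interface route needs the implementation to be regenerated/kept in sync with the interface, but it moves an entire class of errors from testing time to build time.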

    No comments? : )

  • ADF Master Detail Forms

    Hi,
    We have master-detail views, and the master-detail relationship is working fine between the views. We have a requirement to display one master form and 2 detail forms (the user has to see two detail forms). The default master-detail relationships in ADF display one master form and one detail form with the navigation controls. Can you please suggest an ideal approach for displaying multiple detail forms?
    We tried the af:iterator for displaying the detail collection model as multiple forms. But the problem with the current approach is that we are unable to get the reference to the detail record when a value is changed in the detail form.
    <af:iterator id="i1" varStatus="vs" value="#{bindings.BillDetailVO11.collectionModel}" var="row">
      <af:inputDate id="serviceDate"
                    label="Service Date"
                    value="#{row.ServiceDate}"
                    valueChangeListener="#{backingbean.myvaluechangelistener}"
                    columns="9"/>
      <af:inputText id="it1122" label="POS"
                    value="#{row.PlaceServiceCode}"
                    columns="4"/>
    </af:iterator>
    I am comfortable with the iterator approach if I can get the value of PlaceServiceCode for the same row in the detail form when the ServiceDate value changes. I am ready to try for alternative approaches for displaying multiple detail forms.
    Thanks and Regards,
    Prasad

    Hi Shay,
    Thank you for responding to my query. I followed the steps in the blog and created two detail forms, but both detail forms show the same detail record. Our requirement is a little different.
    We have to create a master-detail form (Bill Summary with Bill Lines) to edit the bill summary with lines, and display two Bill Lines along with the summary to the user. If the number of bill lines is more than 2, then we have to display navigation controls to the user to navigate across the bill lines.
    As an example, the form data comes from the Bill_Summary (master) and Bill_Lines (detail) tables. If the number of Bill Lines is 5, then we have to display the form as follows; if the user selects next, the form should display lines 2-3, then 3-4, etc.
    BILL_SUMMARY
    BILL_LINE 1
    BILL_LINE 2
    <Navigation control for the details>
    Thanks and Regards,
    S R Prasad

  • PLS-00394 - Using collections

    Oracle 9i
    I get the following error when I run the below stored procedure.
    PLS-00394: wrong number of values in the INTO list of FETCH statement.
    CREATE OR REPLACE procedure TEST_PROC
    is
    cnt number := 1;
    type my_record is record
    (
    a varchar2(20),
    b number(10),
    c varchar2(50),
    d varchar2(10),
    e number(2),
    l char(2) :='ll',
    m varchar2(2) :='mm',
    n varchar2(2) :='nn',
    s varchar2(2) :='ss'
    );
    type tab_my_rec is table of my_record;
    tab_rec tab_my_rec;
    cursor cur is
    select a,b,c,d,e from table2;
    begin
    open cur;
    loop
    fetch cur bulk collect into tab_rec limit 100;
    forall t in 1.. tab_rec.count
    insert into (select a,b,c,d,e,l,m,n....s from table1) values tab_rec(t);
    -- Commit every 100 records
    cnt := cnt+1;
    if mod(cnt, 100) = 0 THEN
         COMMIT;
    end if;
    EXIT WHEN tab_rec.COUNT = 0;
    end loop;
    commit;
    close cur;
    end;
    If I specify a default value in the select clause then it works
    cursor cur
    is
    select a,b,c,d,e,'ll','mm',.,.,.,.,'ss' from table2;
    Question:
    1) I need to insert default values into the last columns (column 7 ... column 10) of table1. Should I specify them in the cursor, or can I do it only by defining them at the record level? I do not want to define them in the SELECT statement.
    Should the number of columns selected in the SELECT query match the number of variables used in the FETCH statement?
    2) I am using records to group related data items from table2 and load them into table1 with some additional constant fields. Is using records an ideal approach?
    Thanks in advance.

    Thanks for the reply.
    Yes, you are right, I do have a workaround. But it goes back to my original question, where I didn't want to define the constant values in the cursor declaration section but wanted to specify them while inserting the values. I don't think it's possible to do this directly in Oracle 9i or 10g because of the restriction stated above.
    This works
    forall t in 1.. tab_rec.count
    insert into (select a,b,c,d,e,l,m,n....s from table1) values tab_rec(t);
    This doesn't
    forall t in 1.. tab_rec.count
    insert into (select a,b,c,d,e,l,m,n....s from table1) values (tab_rec(t).a,tab_rec(t).b,tab_rec(t).c,tab_rec(t).d,tab_rec(t).e,'ll','mm','nn',....,'ss');
    Individual attributes of records cannot be referenced with FORALL in Oracle 9i or 10g.
    I think the only solution is to define the constant values in the cursor select statement and in the record declaration to hold these values, and insert them all. Like this:
    forall t in 1.. tab_rec.count
    insert into (select a,b,c,d,e,l,m,n....s from table1) values tab_rec(t);
    Committing once at the end of all the inserts might not work, as I have 1 million records to insert.
    Solomon is right, my code for the commit is not efficient. If I have 1 million records to commit, then is it good to commit every 1,000 or 10,000 records? How do I code it more efficiently?
    Thanks in advance. Appreciate your input.
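One language-neutral note on the commit-interval question: in the original loop, cnt counts fetch iterations (batches of 100 rows), so mod(cnt, 100) = 0 commits roughly every 10,000 rows, not every 100. Counting rows rather than iterations makes the interval explicit. A sketch of that counting logic (in Java purely for illustration; the PL/SQL version is the same arithmetic):

```java
public class CommitPolicy {
    /** Simulate fetching in batches and committing every 'commitEvery' rows; returns total commits. */
    public static int commitsFor(int totalRows, int batchSize, int commitEvery) {
        int commits = 0, rowsSinceCommit = 0, fetched = 0;
        while (fetched < totalRows) {
            int batch = Math.min(batchSize, totalRows - fetched);
            fetched += batch;
            rowsSinceCommit += batch;
            if (rowsSinceCommit >= commitEvery) { // count rows, not loop iterations
                commits++;
                rowsSinceCommit = 0;
            }
        }
        return commits + 1; // one final commit after the loop
    }
}
```

For 1 million rows fetched 100 at a time with a 10,000-row interval, this gives 101 commits; widening the interval (or committing once at the end, undo space permitting) reduces commit overhead further.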

  • Changing Partner Details in SD Billing document header

    In the configuration table V_TPAER_SD we have switched off the field AENDB so that we can amend certain elements of the header partners of the billing document.
    However, we are still unable to amend any element of the header partners within the billing document via VF02.
    Any further thoughts?
    Please advise.
    Steve Dennis +44 7812 607901

    Steve,
    It is a bit strange; there is a bit of a contradiction between OSS notes 43296 and 499918. How successful have you been using this user exit?  We have a similar process as stated in 499918, which doesn't seem to be an ideal approach.
    Thanks
    Bala

  • Shift of Customer service department from Non SAP system

    Hi All
    I have two companies, one on SAP and the other on a non-SAP system. Now the client has planned to move/merge the Customer Service department (ONLY) of the non-SAP legacy system into the existing SAP company code, though there is no physical movement of goods from the newly merged SAP plant, since all sales processes involve 100% drop shipment. So for this, what should be the ideal approach?
    My guess would be to create a new sales area/sales office/sales group assigned to the SAP company code, plus data upload of material master/customer master/vendor/open orders/open billing etc.
    Please advise if this is the apt approach?
    thanks
    Viswa

    Hi Will,
    Please check configuration "Reset Time Series for Incoming ProductActivityNotifications". This should help you reset old values with new values.
    IMG - Supply Network Collaboration -> Basic Settings -> Processing Inbound and Outbound Messages -> Display of External Exceptions and Logs -> Reset Time Series for Incoming ProductActivityNotifications
    Hope this helps you.
    Thanks,
    Sunil

  • OVD - OIM Query

    Hi all,
    There is OID 11g, AD, OVD 11g in our environment. OID has external users, AD has internal users. OVD 11g is used for Virtualization - nothing new so far.
    This implementation is almost completed and we are heading to next phase i.e., OIM 11g implementation. Here, we can think of two trusted sources - AD, OID. Nothing complex until now.
    Now the question is: Can OIM talk to OVD for provisioning and reconciliation? If so, do we need to develop a custom connector (as an OOTB connector is not available), and what is the trusted source to be used here? Is this a recommended approach?
    I have seen the LDAP Sync feature in OIM 11g which keeps OVD/OID in sync with OIM. So, is it recommended to design OIM with OVD using LDAP Sync, which automatically syncs users from OID and AD?
    Can someone throw light on this?
    Thanks,
    Mahendra.

    Hi Mahendra,
    OVD acts like another LDAP server, with the same capabilities as any other standard LDAP service. Because OVD and OID are both Oracle products, their base LDAP schemas will be almost the same. From the OIM connector point of view, you can use the OID connector for OVD. We used the same approach in the OIM 10g version and did not see any issues, even for provisioning operations, since we had a single source LDAP connected to the OVD.
    But ideally OVD is not recommended for LDAP modifications, in the sense of provisioning operations in the OIM world. The OVD product architecture is more robust for read operations, since the product can consolidate different data structures into a single LDAP view; that adds complexity, and if we use the product for modify operations, then there are many other factors we need to consider manually, like performance, ACLs, customization, etc.
    Hope this info helps you get an idea for your decision point.
    On OIM synchronization with LDAP, I don't have any insight.
    Cheers,
    Srini
