Design Question - BPM and dynamic JDBC adapters

Hello,
I need help to finish my scenario.
scenario:
Step 1: IDoc > PI (7.1) <> JDBC stored procedure call
Step 2: If the synchronous JDBC call is successful, make a synchronous BAPI call to R/3.
Step 3: If the JDBC call fails (in step 1), trigger an email and do not make the BAPI call (do not execute step 2).
I have 200 SQL servers, and each IDoc goes to exactly one of these 200 servers (yes, only one server), depending on the connection parameters in one of the IDoc segments.
Questions:
1. Can we do this without BPM?
2. Can we configure a dynamic JDBC adapter based on the login credentials in the IDoc (server name, port, user name, password)?
3. If a dynamic JDBC adapter configuration is not possible, what should my design be? Do I need to create 200 communication channels, 200 receiver determinations, 200 interface determinations and 200 receiver agreements? I don't think that is a good design.

Hello,
It seems doable without using BPM.
Step 1: IDoc > PI (7.1) <> JDBC stored procedure call
Step 2: If the synchronous JDBC call is successful, make a synchronous BAPI call to R/3.
You can use a two-step mapping.
1.) The first mapping calls the stored procedure using a UDF or a Java mapping (as suggested in earlier threads).
2.) The input to the second mapping is the response from step 1. You can use the RFCAccessor to execute the BAPI.
Step 3: If the JDBC call fails (in step 1), trigger an email and do not make the BAPI call (do not execute step 2).
Use the XI/PI alerting framework for failed messages. Whether the BAPI call is executed can be controlled with a try-catch statement in the root node of the second mapping (1..1 occurrence): return suppress on the root node if the conditions are not met, or return a value otherwise.
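For what it is worth, below is a rough, untested sketch of what the first mapping step could look like. It builds the connection from the parameters carried in the IDoc segment (so no fixed JDBC channel is needed for the 200 servers) and calls the stored procedure with plain java.sql; the segment fields, driver URL format and procedure name are invented, and the matching JDBC driver must be available to the mapping runtime. The second mapping could call the BAPI the same way via LookupService.getRfcAccessor. Note that this is exactly the kind of side effect the blog in the note below warns about, so weigh it carefully.

    // Hypothetical UDF for the first mapping step (PI 7.1, Java 5 syntax).
    // host/port/db/user/password come from the IDoc segment; dbo.process_idoc is a placeholder.
    public String callStoredProcedure(String host, String port, String db,
            String user, String password, String docNum, Container container)
            throws StreamTransformationException {
        String url = "jdbc:sqlserver://" + host + ":" + port + ";databaseName=" + db;
        java.sql.Connection con = null;
        try {
            con = java.sql.DriverManager.getConnection(url, user, password);
            java.sql.CallableStatement cs = con.prepareCall("{call dbo.process_idoc(?)}");
            cs.setString(1, docNum);
            cs.execute();
            cs.close();
            return "OK"; // the second mapping only builds the BAPI request when it sees "OK"
        } catch (java.sql.SQLException e) {
            // Failing the mapping lets the alerting framework pick up the message and send
            // the email; alternatively return "ERROR" and suppress the BAPI call in mapping 2.
            throw new StreamTransformationException("JDBC call failed: " + e.getMessage());
        } finally {
            if (con != null) {
                try { con.close(); } catch (java.sql.SQLException ignore) { }
            }
        }
    }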
Note: Consider this blog in your design, /people/thorsten.nordholmsbirk/blog/2008/11/04/why-xi-mappings-should-be-free-of-side-effects
Hope this helps,
Mark

Similar Messages

  • Questions about example "Dynamic JDBC Credentials for Model 1 and Model 2"

    Hello,
    I am trying to set up dynamic JDBC authentication in my ADF BC application - I want it to work like in Forms: a database user with the proper privileges can log into my ADF BC application using his database login and password, and work with the application.
    I've read the paper "How To Support Dynamic JDBC Credentials" at
    http://www.oracle.com/technology/products/jdev/howtos/10g/dynamicjdbchowto.html
    and tested the very useful example created by Steve Muench, which I got from
    http://radio.weblogs.com/0118231/stories/2004/09/23/notYetDocumentedAdfSampleApplications.html#14
    The example works, but when I transfer its implementation into my application, it does not work the right way. The problems are the following:
    1. I can connect and work successfully only as the owner of the schema - the username and password I wrote into the "jbo.server.internal_connection" string of the AM configuration.
    2. When I connect as other users, who have all the rights to work with the DB objects used by the application, I get the main page with the "Access Denied" message - as if I have no privileges on the tables.
    3. The big surprise is that if I enter a fake username and password - a random letter combination - then I get the same behaviour as in point 2 - the main page with the "Access Denied" message!
    And the last question is:
    4. Is it possible to set up dynamic JDBC authentication using the built-in JDeveloper features - I mean without the additional code, without overriding the ADF Binding Filter and so on - but to set up similar behaviour (users log in using their DB names and passwords) in several minutes following the standard documentation?
    Thanks in advance!

    One more question:
    I have 2 independent Application Modules in my application - to keep the 2 transactions independent from one another when working with different parts of the project - and while using dynamic JDBC authentication, the user connects only in the first AM under the username he entered; the 2nd AM works under the connection predefined earlier (during development) for that AM.
    How can I make the 2nd AM connect under the logged-in user (the same as the 1st AM)?

  • SCA design question - PIX and SCA with dual logical SSL server.

    I have an SCA design question; please correct or verify my solution.
    1. connectivity.
    <Client with port 443>--<ISP>--<PIX>--<SCA>--<SERVER(two IP on single NIC and each IP associates to WEB server) with port 81>
    * Clients will access the web server via x.x.1.100 or x.x.1.101
    2. physical IP address
    - PIX outside=x.x.1.1
    - PIX inside=x.y.1.1
    - SCA device=x.y.1.2
    - SERVER NIC1=x.y.1.10
    - SERVER NIC2=x.y.1.11
    3. PIX NAT
    - static#1=x.x.1.100 map to x.y.1.10
    - static#2=x.x.1.101 map to x.y.1.11
    4. SCA configuration.
    mode one-port
    no mode one-port
    ip address x.y.1.2 netmask 255.255.255.0
    ip route 0.0.0.0 0.0.0.0 x.y.1.1
    ssl
    server SERVER1
    ip address x.y.1.10
    localport 443
    remoteport 81
    server SERVER2
    ip address x.y.1.11
    localport 443
    remoteport 81
    Thanks,

    The document http://www.cisco.com/univercd/cc/td/doc/product/webscale/css/scacfggd/ has a link to a page which describes how to use the configuration manager command line interface to configure the Secure Content Accelerator. Several configuration examples are also included in this page.

  • BPM and Dynamic JMS

    Hi
    Here's some sample code written to receive a message from a JMS queue. The external resources needed to connect to the JMS queue are configured.
         logMessage ":::::::Before Receiving:::::::"
         jmsMessage as JmsMessage= receiveMessage(DynamicJMS, configuration : "jmsConfig")
         logMessage ":::::::Receiving:::::::"
         msg as String=jmsMessage.textValue
         messagId as String = jmsMessage.messageId
         logMessage ":::::::message received is :"+msg
    This code is written in the Global Automatic Activity which is configured as an Automatic JMS Listener.
    The weird thing is that when I debug the method in BPM Studio (10.3.1), it works fine and I am able to receive the message.
    However, when I start the Process Engine, I find that the message is not received and the logs show the following error:
    The method 'CIL_globalAutomatic' from class 'Sample.BPMOSB.Default_1_0.Instance' could not be successfully executed.
    Caused by: java.lang.NullPointerException
    fuego.lang.ComponentExecutionException: The method 'CIL_globalAutomatic' from class 'Sample.BPMOSB.Default_1_0.Instance' could not be successfully executed.
        at fuego.component.ExecutionThreadContext.invokeMethod(ExecutionThreadContext.java:519)
        at fuego.component.ExecutionThreadContext.invokeMethod(ExecutionThreadContext.java:273)
        at fuego.fengine.FEEngineExecutionContext.invokeMethodAsCil(FEEngineExecutionContext.java:219)
        at fuego.server.execution.EngineExecutionContext.runCil(EngineExecutionContext.java:1280)
        at fuego.server.execution.GlobalAutomaticJMSListeningHelper.executeJmsListener(GlobalAutomaticJMSListeningHelper.java:94)
        at fuego.server.AbstractProcessBean$45.execute(AbstractProcessBean.java:3017)
        at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
        at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
        at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
        at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
        at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
        at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
        at fuego.server.AbstractProcessBean.runGlobalJmsActivity(AbstractProcessBean.java:3023)
        at fuego.server.execution.GlobalJMSExecutor$1.run(GlobalJMSExecutor.java:113)
        at fuego.component.Message.process(Message.java:576)
        at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:780)
        at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:755)
        at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:142)
        at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:134)
        at fuego.fengine.FEngineProcessBean.processBatch(FEngineProcessBean.java:244)
        at fuego.component.ExecutionThread.work(ExecutionThread.java:839)
        at fuego.component.ExecutionThread.run(ExecutionThread.java:408)
    Caused by: java.lang.NullPointerException
        at Sample.BPMOSB.Default_1_0.Instance.CIL_globalAutomatic(Instance.xcdl:11)
        at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at fuego.component.ExecutionThreadContext.invokeMethod(ExecutionThreadContext.java:512)
        ... 21 more
    Please help me, I have spent a lot of hours on this...
    Edited by: user13017288 on May 6, 2010 4:55 AM


  • Design question, UCS and Nexus 5k - FCP

    Hi,
    I need some advice (mainly from a Nexus person); I have drawn and attached the proposed solution (below).
    I am designing a solution with 3 UCS chassis, Nexus 5Ks and 2 x NetApp 3240s (T1 and T2). FC will be used to access disk on the SAN. Also, non-UCS compute will need access to the T2 SAN only (UCS will access T1 and T2). It is a requirement for this solution that non-UCS devices do not connect to the same Nexus switches that the UCS chassis use.
    UCS Compute:
    The 3 chassis will connect to 2 x 6296 FIs, which will cross-connect to 2 x Nexus 5Ks through an FC port channel; the Nexus 5Ks will be configured in NPIV mode to provide access to the SAN. FC from each Nexus 5K to the NetApp controllers will be provided through a total of 4 FC port channels (2 FC member ports per PC) from each Nexus 5K, one going to controller A and the other to controller B.
    Non UCS compute:
    These will connect directly through their HBAs to their own Nexus 5Ks and then to the T2 SAN; they will be zoned to never have access to the T1 SAN.
    Questions:
    1. As the UCS compute will need access to T1, what is the best way to connect the Nexus 5Ks on the LHS in the image below to the Nexus on the RHS (this should be an FC connection)?
    2. Can Fibre Channel be configured in a vPC domain like Ethernet? Is this a better way for this solution?
    3. Is FC better than FCoE for this solution? I hear FCoE is still not highly recommended.
    4. Each NetApp controller is only capable of pushing 20 Gbps max, which is why each port channel connecting to each controller has only 2 members. However, I'm connecting 4 port-channel members from each Fabric Interconnect (6296) to each Nexus switch. Is this a waste? Remember that connectivity from each FI is also required to the T2 SAN.

    Max,
    What you are implementing is a traditional FlexPod design with slight variations.
    I recommend looking at the FlexPod design zone for some additional material if you have not done so yet.
    http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns1050/landing_flexpod.html
    To answer your questions:
    1) FC and FCoE do not support vPC. If UCS needs to have access to T1, then there is no need to have an ISL between sites. If UCS needs to have access to T1 and T2, then the best option would be to set up VSAN trunking on the N5K and UCS and configure the vHBAs accordingly.
    2) Both should work just fine. If you go with FCoE, then UCS would need to be on the latest version for multi-hop FCoE support.
    3) If you are only worried about storage throughput, then yes, you will never utilize a 40Gb port channel if your source is a 20Gb port channel. What are your projected peak and average loads on this network?

  • Dynamic JDBC Credentials and using ADF Region ERROR

    Hi,
    I used the solution by Steve Muench, Dynamic JDBC Credentials (for ADF Faces Rich Client)
    (129. 11g Dynamic JDBC Credentials for Model 1, Struts, Trinidad, and ADF Faces Rich Client 11.1.1.0.0 06-AUG-2008), but it does not work correctly when using an ADF Dynamic Region (or simply an ADF Region).
    I have added to ViewControllerJSFRichFaces a page (main.jspx) including an ADF Dynamic Region consisting of 2 simple task flows. The action of the Login button, if the login and password are correct, redirects to main.jspx.
    There are two cases.
    1) Login and password are correct:
    In this case everything works fine.
    2) Login and password are not correct:
    It does not work. The redirect to login.jspx does not occur as expected; instead the following page (main.jspx) is loaded and this exception is thrown: oracle.jbo.DMLException: JBO-26061: Error while opening JDBC connection.
    I would be very grateful for your help.

    Hi,
    Hard to say from your description if this is a bug or an implementation problem. Note that errors that occur in regions are not handled by the exception handler of the parent page and require special handling within the region. In other words, if the problem is in the region then it needs to be handled there. I think the right strategy here is not to show the region until a valid connection exists. Is the login performed from the parent page?
    Frank
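    If it helps, here is a minimal sketch of that strategy (names invented, not from the thread): keep a session-scoped flag that the login action sets only after the dynamic JDBC connection has been validated, and bind the region's rendered attribute to it (e.g. rendered="#{connectionState.connected}") so the region is never stamped before a valid connection exists.

        // Hypothetical session-scoped managed bean; the login action sets connected=true
        // only after the dynamic JDBC credentials have been verified against the database.
        public class ConnectionState implements java.io.Serializable {
            private boolean connected;

            public boolean isConnected() {
                return connected;
            }

            public void setConnected(boolean connected) {
                this.connected = connected;
            }
        }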

  • BPM and ADF integration - some questions

    Hi,
    I have a few questions about communication between BPM 11.1.1.5 and a Human Task based task flow:
    1) What is the best way to pass data (task id, process id, process data) from the BPM Workspace to the Human Task task flow (and get them as TF params)?
    2) Where can I find a description of how an ADF app communicates with a BPM process?
    3) Where can I find a description of all the data controls created by JDeveloper (BPM Suite) when the HT task flow is created?
    4) Where can I find a description (and the functionality) of the managed beans created by JDeveloper (BPM Suite) when the HT task flow is created?
    Kuba

    Hi,
    Sorry, but I'm still not sure how ADF communicates with BPM (I know that it uses EJB services and hwtaskflow.xml). But I still don't have answers to the following questions:
    1) I know ADF quite well. Having a method in the data controls (in our case getTaskDetails()), we need to invoke it somehow. I don't see an invocation of this method anywhere.
    2) In the generated task flow there are some managed beans and params - what is their role? Having over 50 task flows, do I need them in all of them? Where can I find a description of those beans and params?
    3) In our approach we use BPM, ADF RC for the UI and Business Components to persist data into the database. The only data from the payload we need is the ID of the master-level row. My question is: do I have to generate data controls for every human task? In my opinion there should be only one communication point between BPM and ADF, rather than one for every human task --> task flow pair.
    All the information I need from BPM is:
    - task ID
    - task flow name (to open the appropriate tab in my application)
    - available outcomes
    - whether the BPM operation is enabled
    Kuba
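    In case it is useful, here is a rough sketch of reading task details directly through the 11g human workflow client API instead of the generated data controls; the class and method names are written from memory and should be verified against the BPM 11.1.1.5 javadoc, and the identity context "jazn.com" is only the default.

        import oracle.bpel.services.workflow.client.IWorkflowServiceClient;
        import oracle.bpel.services.workflow.client.WorkflowServiceClientFactory;
        import oracle.bpel.services.workflow.query.ITaskQueryService;
        import oracle.bpel.services.workflow.task.model.Task;
        import oracle.bpel.services.workflow.verification.IWorkflowContext;

        public class TaskDetailsLookup {

            // Loads one task by id; task id, title and state cover most of the data
            // listed above (outcomes and the task flow name come from the task metadata).
            public static Task loadTask(String taskId, String user, char[] password) throws Exception {
                IWorkflowServiceClient client = WorkflowServiceClientFactory
                        .getWorkflowServiceClient(WorkflowServiceClientFactory.REMOTE_CLIENT);
                ITaskQueryService queryService = client.getTaskQueryService();
                IWorkflowContext ctx = queryService.authenticate(user, password, "jazn.com");
                Task task = queryService.getTaskDetailsById(ctx, taskId);
                System.out.println(task.getSystemAttributes().getTaskId() + " / "
                        + task.getTitle() + " / "
                        + task.getSystemAttributes().getState());
                return task;
            }
        }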

  • BPM design question

    Hello folks,
    I have this requirement and have designed a BPM for it; I would appreciate it if you could give me any improvements/suggestions:
    Req: Receive a message from Sender A; the message has a transaction ID associated with it. Send the message to Receiver B and from then on wait an hour to receive an acknowledgement from Receiver B for that transaction. If no ack is received, send a mail to the users saying that the transaction didn't make it through. If it succeeds, don't do anything, just end the process.
    Design:
    1. Recv step (to receive the message/ start the BPM)
    2. Send step (to send the message to Receiver B)
    3. Block --- the block has the following: a. Receive step (to receive the ACK from Receiver B); b. Deadline branch (with a wait time of 1 hour) - inside the deadline branch there is a Send step to send the email, followed by a Control step to end the process.
    Thank you! I would appreciate any improvement to the design.

    In my opinion, this is not a very good design. Keeping a BPM instance open for 1 hour is not recommended. If you have hundreds or thousands of such messages coming in, it would badly hit performance.
    You haven't mentioned what kind of system your receiver system is. You may think about the following points:
    1.  What is taking so much time to send the ack?
    2. Could this ack be sent later as an async interface?
    Regards,
    Prateek

  • How would I design the relationship between "question", "subquestion", and "answer"?

    Hi all. Consider the following scenario:
    Scenario:
    A Question has an Answer, but some Questions have Subquestions. For example:
    1. Define the following terms: (Question)
    a) Object (1 marks) (Subquestion)
    An instance of a class. (Answer)
    b) ...
    2. Differentiate between a constructor and a destructor (2 marks)
    (Question)
    A constructor constructs while a destructor destroys.
    (Answer)
    Question:
    I want to model Question, Subquestion, and Answer as entities with relationships/associations, preferably binary relationships, as I feel ternary relationships will be problematic while programming. Any suggestions on how I would go about this?
    There are never infinite resources.
    For the Question entity, a question has the attributes "QuestionPhrase <String>", "Diagram <Binary>", and "Marks <Decimal>".
    For the SubQuestion entity, a subquestion has the attributes "SubQuestionPhrase <String>", "Diagram <Binary>", and "Marks <Decimal>".
    For the Answer entity, an answer has the attributes "AnswerPhrase <String>" and "Diagram <Binary>".

    Yes, I am in .NET. I sure do hope I did not ask in the wrong forum. :-|
    Hi KCWamuti,
    If you need to design the relationship between the Question table and the Answer table in SQL Server, as per Uri's and Visakh's posts, you can create foreign keys to establish the relationships between the tables and use joins in your queries to get the desired result. For more information about JOINs in SQL Server, please review this article:
    Different Types of SQL Joins.
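    Just to make the foreign-key approach above concrete, here is a small sketch issued through plain JDBC against SQL Server; the table, column and connection names are invented, and only binary relationships are used (SubQuestion -> Question, Answer -> Question or SubQuestion), matching the original requirement.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class QuestionSchemaDemo {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=Exams", "user", "password");
                     Statement st = con.createStatement()) {

                    // Question is the parent; SubQuestion and Answer point back via foreign keys.
                    st.execute("CREATE TABLE Question (QuestionId INT PRIMARY KEY, "
                            + "QuestionPhrase NVARCHAR(500), Diagram VARBINARY(MAX), Marks DECIMAL(5,2))");
                    st.execute("CREATE TABLE SubQuestion (SubQuestionId INT PRIMARY KEY, "
                            + "QuestionId INT NOT NULL REFERENCES Question(QuestionId), "
                            + "SubQuestionPhrase NVARCHAR(500), Diagram VARBINARY(MAX), Marks DECIMAL(5,2))");
                    // An answer belongs to either a question or a subquestion (one FK stays NULL).
                    st.execute("CREATE TABLE Answer (AnswerId INT PRIMARY KEY, "
                            + "QuestionId INT NULL REFERENCES Question(QuestionId), "
                            + "SubQuestionId INT NULL REFERENCES SubQuestion(SubQuestionId), "
                            + "AnswerPhrase NVARCHAR(1000), Diagram VARBINARY(MAX))");

                    // LEFT JOIN keeps questions that do not have an answer yet.
                    ResultSet rs = st.executeQuery(
                            "SELECT q.QuestionPhrase, a.AnswerPhrase "
                            + "FROM Question q LEFT JOIN Answer a ON a.QuestionId = q.QuestionId");
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " -> " + rs.getString(2));
                    }
                }
            }
        }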
    However, if you need to model Question, Subquestion, and Answer as entities in .NET, then the issue concerns data platform development. I suggest you post the question in the Data Platform Development forums at
    http://social.msdn.microsoft.com/Forums/en-US/home?category=dataplatformdev . It is appropriate and more experts will assist you.
    Thanks,
    Lydia Zhang

  • Method design question... and passing an object as a parameter to a web service

    I am new to web services... one design question.
    I am writing a web service to check whether a user is a valid user or not. The users are categorized as Member, Admin and Professional. For each user type I have to hit a different data source to verify.
    I can get this user type as a parameter. What is the best approach to define the method?
    Should I have one single method "isValidUser" that all the web service clients can always call, providing the user type, or should I define a method for each type, like isValidMember, isValidAdmin?
    One more thing... in the future the requirements may change for professionals to have more required fields, in which case the parameter needs more attributes. But on the client side not much changes if I have a single isValidUser method... all they have to do is pass additional values:
    isValidUser(String username, String usertype, String[] userAttributes) {
        if ("member".equals(usertype)) {
            // call member validation code
        } else if ("professional".equals(usertype)) {
            // call professional validation code
        } else if ("admin".equals(usertype)) {
            // call admin validation code
        } else {
            // throw an error for an unknown user type
        }
    }
    or
    isValidMember(String username, String[] userAttributes) {
        // call member validation code
    }
    One last question: can the parameter be passed as an object in a web service, like a USER object?
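    If it helps, a minimal JAX-WS sketch of the single-operation option is below; yes, an object parameter works, because JAXB maps the bean to a complex type in the WSDL. All names are illustrative (the two classes would normally live in separate files), and since the User object is what travels over the wire, adding fields for professionals later only changes the User class, not the operation signature.

        import javax.jws.WebMethod;
        import javax.jws.WebService;

        // JAXB maps this bean to a complex type in the generated WSDL.
        public class User {
            private String username;
            private String userType;      // "member", "admin" or "professional"
            private String[] attributes;  // extra fields professionals may need later

            public String getUsername() { return username; }
            public void setUsername(String username) { this.username = username; }
            public String getUserType() { return userType; }
            public void setUserType(String userType) { this.userType = userType; }
            public String[] getAttributes() { return attributes; }
            public void setAttributes(String[] attributes) { this.attributes = attributes; }
        }

        @WebService
        public class UserValidationService {

            @WebMethod
            public boolean isValidUser(User user) {
                if ("member".equalsIgnoreCase(user.getUserType())) {
                    return checkMemberStore(user);
                } else if ("professional".equalsIgnoreCase(user.getUserType())) {
                    return checkProfessionalStore(user);
                } else if ("admin".equalsIgnoreCase(user.getUserType())) {
                    return checkAdminStore(user);
                }
                throw new IllegalArgumentException("Unknown user type: " + user.getUserType());
            }

            // Placeholders for the three per-type data source lookups mentioned above.
            private boolean checkMemberStore(User user) { return true; }
            private boolean checkProfessionalStore(User user) { return true; }
            private boolean checkAdminStore(User user) { return true; }
        }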

    First of all, here is my code
    CREATE OR REPLACE
    TYPE USERCONTEXT AS OBJECT
    user_login varchar2,
    user_id integer,
    CONSTRUCTOR FUNCTION USERCONTEXT (
    P_LOGIN IN INTEGER
    P_ID_ID IN INTEGER
    ) RETURN SELF AS RESULT
    Either your type won't compile or this is not the real code...

  • XI Design Issue- BPM Usage and Performance

    Hi All
    System A is sending multiple messages to XI, and every message has a node called TEVEN which has line items. The TEVEN node is repeated, and the receiver has to be decided based on the EId value; that means a single message can contain several occurrences of the same EId, which have to be collected into one set of messages. XI will keep receiving such messages for 30 minutes, and once the grouping of all messages and their payloads is done, a file will be created for the respective receivers (for EId 1 the receiver will be System A, for EId 2 the receiver will be System B).
    How do I achieve this in my BPM? The problem is to go through every message payload, collect the TEVEN headers into one single message, keep doing so for all messages received within 30 minutes, and then use the file adapter to put those files on the file server (the receiving system wants only one file and will check for it every 30 minutes).
    Any thoughts on designing this scenario in XI are welcome, as well as comments on designing a BPM to handle this and the performance implications of such a BPM.
    <ns0:TEVEN>
    <ns0:EText />
    <ns0:EId>0001</ns0:EId>
    </ns0:TEVEN>
    <ns0:TEVEN>
    …
    </ns0:TEVEN>
    BR / Swetank

    Hi,
    If you have to collect the messages for 30 minutes and then create a file, then I see you have to use a BPM.
    You can use correlation for the different EIds, or you can use the option of Enhanced Receiver Determination.
    The help for both is available on SDN.
    with regards,
    Ravi Siddam
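    If you go the Enhanced Receiver Determination route, the mapping that produces the standard Receivers structure (namespace http://sap.com/xi/XI/System) can derive the receiver service from the EId with a small UDF along these lines; the business system names below are placeholders.

        // Hypothetical UDF feeding the Receiver/Service element in the enhanced
        // receiver determination mapping.
        public String receiverForEid(String eid, Container container)
                throws StreamTransformationException {
            if ("0001".equals(eid)) {
                return "BS_SYSTEM_A";   // EId 1 -> System A
            } else if ("0002".equals(eid)) {
                return "BS_SYSTEM_B";   // EId 2 -> System B
            }
            throw new StreamTransformationException("No receiver configured for EId " + eid);
        }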

  • Regarding file content conversion and jdbc adapters

    hi
    Can anyone send me the details about the sender JDBC adapter and the receiver JDBC adapter?
    I need the output structure of the sender JDBC adapter, and also for the receiver JDBC adapter.
    For file content conversion -
    can anyone send details about the sender and receiver sides of FCC?
    Also, please provide some blog links about these.
    Thanks in advance.

    Hi..
       You can search those links in SDN.
    FILE to JDBC Adapter using SAP XI 3.0 --Receiver Jdbc adapter .
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/xi/jdbcTOJDBC& ---for both sender and receiver jdbc adapter configuration
    Content Conversion (Pattern/Random content in input file) ---for FCC
    Regards,
    Leela

  • Credit Management: Difference Between Static and Dynamic Credit Check

    Hi,
    Could anyone tell me the difference between a static and a dynamic credit check?
    According to website: http://www.sap-basis-abap.com/sd/difference-between-static-and-dynamic-credit-check.htm ... this is the answer:
    ====================
    Simple credit check: Tr. code - FD32
    It considers the document value + open items.
    Doc. value: the sales order has been saved but not delivered.
    Open item: the sales order has been saved, delivered, billed and transferred to FI, but payment has not yet been received from the customer.
    Static credit check: it checks all these document values against the credit limit:
    1) Open doc. value / sales order value: saved but not delivered
    2) Open delivery doc. value: delivered but not billed
    3) Open billing doc. value: billed but not posted to FI
    4) Open item: transferred to FI but payment not yet received from the customer
    Dynamic credit check:
    1) Open doc
    2) Open delivery
    3) Open billing
    4) Open items
    5) Horizon period = e.g. 3 months
    Here the system will not consider the above 1, 2, 3 and 4 values outside the horizon period (e.g. the last 3 months).
    ====================
    Question 1: Could you explain the above information further, if there is anything to add?
    Question 2: What is the T-code to customize the settings for:
    a) Simple credit check (isn't this the same as b) below?)
    b) Static Credit Check
    c) Dynamic Credit Check

    Hi Tanish,
    The difference between static and dynamic filters:
    Example one, at report level.
    Create a variable for an InfoObject, say Material.
    1) In the Query Designer, if you restrict it to some 10 materials at query level, the report will display only those 10 materials. This is a static filter: you are hardcoding it to those materials. You can't change them at query run time, i.e. it is not changeable by the user.
    2) If you make the variable an input, then when you run the query you can choose the materials - maybe 10, maybe 1, maybe 20. It is dynamic, changeable by the user at run time.
    Example two, at DTP and start routine level, say Document Type.
    1) If you set filters in the start routine, it is static, as you cannot change them in production - not changeable by the user.
    2) If you set filters in the DTP, it is dynamic, as you can change them in production. You can give any doc type - changeable by the user at run time.
    I hope this is understood.
    Rgds
    SVU

  • Merging files using BPM-Prod Issue - JDBC to File.

    Hi all,
    We have a scenario in which data is available in two database tables, and that data needs to be merged and dropped onto a file system.
    To meet this requirement we developed a BPM with a constant correlation, and it was working fine.
    But on some days one file doesn't get generated, as the JDBC adapters poll the DB tables based on date. So there is a chance that records won't be present in the DB for that particular date and the file may not be created. If only one file is generated, my entire business scenario fails, as the fork step used to merge the two files will not complete.
    The records pulled from the DB tables are selected based on date, and I have written a UDF based on these dates.
    Could someone help me find a solution?
    Thanks & Regards,
    Lekshmi.

    Hi ,
    I have included a deadline and an exception branch in my scenario, and it seems to be working fine. I need to do some more testing in this regard.
    But I have one doubt regarding this issue. In the transformation step I used a multi-message mapping where I set the occurrence of both input messages to 1. After making the changes I was expecting this scenario to throw an error at the transformation step (since only one message was given as input), but surprisingly it didn't happen. Could you please tell me why this didn't happen?
    Rgds,
    Lekshmi.

  • Dynamic JDBC credentials example application from Steve Muench

    Apologies for this newbie question...but I'm trying to understand the Dynamic JDBC credentials example application from Steve Muench:
    http://radio.weblogs.com/0118231/stories/2004/09/23/notYetDocumentedAdfSampleApplications.html#14
    I think I understand most of it, but the one bit I don't understand is why it customizes the ADF page lifecycle (DynamicJDBCADFPhaseListener, DynamicJDBCPageLifecycle, DynamicJDBCPageLifecycleContext).
    Can anyone explain to an ex-forms developer why this code is there?
    I'm also trying to work out a way for the session to be invalidated when the user logs in again,
    e.g. a user logs in, doesn't use the logout function but uses the back button to go back to the login page. When he logs in with another set of credentials, would a new session start, or, as I suspect, would it use the original login credentials?
    Thanks

    You can ignore those three classes in the example. They are not related to the dynamic credential solution, and must have been left over from some other example I evolved into what you see. Sorry to have cluttered up the implementation with stuff that isn't really contributing to the actual solution. DOH!
