Avoiding implementing getPassword() in SSOWorkspaceLoginInterface

So I've implemented the SSOWorkspaceLoginInterface interface and have custom SSO working -- i.e., BPM calls my class and I can provide getUser() and getPassword(), which results in successfully logging in without showing the login dialog (by having skipFDIAuthentication return false).
However, I'm trying to figure out how the skipFDIAuthentication method works. Basically, I want to do my own authorization, but it appears that I must return a valid password that matches what BPM has for that user, regardless of whether I return true or false from skipFDIAuthentication. I'm assuming I want to return true in this case.
I was hoping to get called for getUser, do my own validation there, and then return the username. Since skipFDIAuthentication would return true, that's all that would happen: as long as that user was in the BPM datastore, I would be good to go.
That doesn't seem to be happening... am I missing something?
(Another way of asking this question: what is supposed to happen if you return true from skipFDIAuthentication?)

OK, so it is failing in JDBCAuthenticationAccessor.checkParticipantTrustInternal, and I notice that this method checks two session properties.

SKIP_CHECK_TRUST:

    ldc <String "SKIP_CHECK_TRUST"> [147]
    6  iconst_0
    7  invokevirtual fuego.directory.provider.DirectorySessionImpl.getBooleanProperty(java.lang.String, boolean) : boolean [91]
    10 istore_1 [skipCheckTrust]

and skip-auth:

    ldc <String "skip-auth"> [90]
    30 iconst_0
    31 invokevirtual fuego.directory.provider.DirectorySessionImpl.getBooleanProperty(java.lang.String, boolean) : boolean [91]
    34 istore_3 [skip_auth]

Am I supposed to be setting these properties somewhere if I return true for skipFDIAuthentication?
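For reference, here is the shape of what I'm attempting, written against a stand-in interface: the three method names are the ones discussed in this thread, but the real signatures in fuego.workspace.security.SSOWorkspaceLoginInterface may well differ, so treat this purely as a sketch of the intent.

```java
// Stand-in for the real SSOWorkspaceLoginInterface -- method names are
// taken from this thread; the actual fuego signatures may differ.
interface WorkspaceLogin {
    String getUser();
    String getPassword();
    boolean skipFDIAuthentication();
}

public class CustomSSOLogin implements WorkspaceLogin {

    @Override
    public String getUser() {
        // Do our own validation here (e.g. check a trusted header or token),
        // then return a username that exists in the BPM datastore.
        return validateExternally();
    }

    @Override
    public String getPassword() {
        // The hope: this is never consulted when skipFDIAuthentication()
        // returns true, so no matching BPM password should be needed.
        return null;
    }

    @Override
    public boolean skipFDIAuthentication() {
        // true = "trust me, I already authenticated this user myself"
        return true;
    }

    private String validateExternally() {
        return "someUser"; // placeholder for the real SSO validation
    }
}
```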

Similar Messages

  • Is this a documented Collision Avoidance method???

    I keep the avatar in my app above the terrain by checking the x and z location every frame and then positioning it at the appropriate y point (i.e. the height of the terrain at that point). I've seen methods of collision avoidance implemented before using picking, and it seemed fairly complex. I want mine as simple as possible, so I plan to do it like this:
    - Add the objects to the terrain so the avatar would go over them regardless of their height
    - At every frame, store the current x, y and z coordinates
    - Then calculate the height at what the new x and z coordinates will be
    - If the new y coordinate has too great a difference between it and the old one (relative to the differences between the x and z, as this reflects the speed), then keep the avatar at the position of the old x, y and z.
    The value that represents the difference between the old and new y coordinates would in effect be how steep the avatar can climb.
    Is this a documented method of collision avoidance?
    How good is this, speed-wise, compared to other methods? I favour speed over accuracy.
    Thanks.

    I think it is quite a standard technique - the only danger is that if you are moving fast with a low frame rate you may jump up vertical faces like this:

        B               B
        |              /
        |             /
    A   |         A  /
    -------       ---

    If you only have a frame drawn at A and B you would find no difference between those two examples, whereas from a gameplay point of view you may want there to be one.

    Hmmm, yes this thought had crossed my mind. Anyways, I'll give it a go and see how it works :)

  • Please help me with the code!!

    Dear All, I am going to simulate TCP over UDP, and I have been asked to create some specific methods. Additional helper methods are allowed, but I can't avoid implementing any of the functions described below.
    I have bolded the functions which are ambiguous to me.
    [click here to see the detailed assignment explanation|http://www.net.t-labs.tu-berlin.de/teaching/ws0809/PD_labcourse/PDF/u5en.pdf|click here to see the detailed assignment explanation]
    If you see anything wrong, please post. Thanks.
    • rdt_connect()
    This function sets all the parameters required for connecting to a server. Moreover, it opens a UDP socket.
    • rdt_listen()
    This function is the server-side counterpart of rdt_connect(). It opens a UDP socket for incoming connections and sets all the required parameters.
    • rdt_send()
    This function takes an arbitrary amount of data as input and writes it into a send buffer. It does not send anything to the network.
    • rdt_recv()
    This function returns the whole receive buffer. It does not really read data from the network, but only from the receive buffer.
    • rdt_select()
    This function does the whole work of moving data over the network. It behaves similarly to the system call select(), i.e. it blocks until there is some data in the receive buffer. The received data can subsequently be read with rdt_recv(). A flow diagram of the functionality of this function follows:
    – send_all()
    This function sends all the data in the send buffer over the network to the receiver. If more than 500 bytes are present in the send buffer, more packets must be sent. The MSS (Maximum Segment Size) of an RDT packet is 534 bytes.
    *– select()*
    This is the well-known system call that waits until the receive buffer contains data.
    [I don't know what to do with the buffer, or how to make it empty]
    – recvfrom()
    This is the well-known system call recvfrom() that reads data from the network.
    *– statemachine_send()*
    This is a function that processes a newly received packet. In RDT 1.0 nothing is done here, because the sender does not expect any ACK packets from the receiver.
    – statemachine_recv()
    This function processes incoming packets. In RDT 1.0 the header is removed and the payload copied to the receive buffer.
    – Payload
    rdt_select() returns at this point if there's any data in the receive buffer. If not, it starts from the beginning.
    [Where should the data be returned?]
    This is my code:
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketException;

    public class Ex5 {
        DatagramSocket socket, client_socket;
        DatagramPacket rcv_pkt;
        DatagramPacket send_pkt;
        byte[] sendData;
        byte[] receivedData;
        InetAddress ipaddr;
        int client_port;

        public void rdt_connect(String host, int client_port) throws IOException {
            this.client_port = client_port;
            client_socket = new DatagramSocket();
            ipaddr = InetAddress.getByName(host);
        }

        public void rdt_listen(int port) throws SocketException {
            //byte[] buffer = new byte[1024];
            socket = new DatagramSocket(port);
            //DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        }

        public DatagramPacket rdt_send() throws IOException {
            sendData = new byte[534];
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String inString = in.readLine();
            sendData = inString.getBytes();
            DatagramPacket send_packet = new DatagramPacket(sendData, sendData.length, ipaddr, client_port);
            return send_packet;
        }

        public DatagramPacket rdt_recv(byte[] receivedData) throws IOException {
            DatagramPacket received_packet = new DatagramPacket(receivedData, receivedData.length);
            //socket.receive(received_packet);
            return received_packet;
        }

        public byte[] rdt_select() throws IOException {
            rcv_pkt = new DatagramPacket(receivedData, receivedData.length);
            while (!receivedData.equals(null)) {
                return receivedData;
            }
            send_all();
            return null;
        }

        public void send_all() throws IOException {
            rcv_pkt = this.rdt_send();
            client_socket.send(rcv_pkt);
        }

        public DatagramPacket select() {
            DatagramPacket local_rcv_pkt = null;
            while (receivedData != null) {
                //local_rcv_pkt = rcv_pkt;
            }
            return local_rcv_pkt;
        }

        public void recv_from() throws IOException {
            rcv_pkt = select();
            socket.receive(rcv_pkt);
        }

        public void statemachine_send() {
        }

        public void statemachine_recv() throws IOException {
            String data = new String(rcv_pkt.getData());
            receivedData = data.getBytes();
            rcv_pkt = this.rdt_recv(receivedData);
        }
    }

    public class Ex5 {
        DatagramSocket socket, client_socket;
    Surely you're not expected to implement the client and the server in the same class?
        while (!receivedData.equals(null)) {
            return receivedData;
        }
    This is just nonsense.
        public DatagramPacket select() {
    See java.nio.channels.Selector.
        while (receivedData != null) {
            //local_rcv_pkt = rcv_pkt;
        }
    More nonsense.
    I suggest that if you don't understand the assignment you advise those who gave it to you, and/or review whatever supporting information you were supplied or have been taught.
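    For what it's worth, the segmentation part of send_all() can be sketched on its own. From the MSS note in the assignment (534-byte packets, more than 500 bytes of payload forces extra packets), I'm assuming a 34-byte header and at most 500 bytes of payload per packet; the class and method names below are invented.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of MSS-based segmentation: split the send buffer into
// payload-sized chunks; each chunk would then get a 34-byte header
// prepended and go out as one DatagramPacket of at most 534 bytes.
public class Segmenter {
    static final int MAX_PAYLOAD = 500;

    static List<byte[]> segments(byte[] sendBuffer) {
        List<byte[]> out = new ArrayList<>();
        for (int off = 0; off < sendBuffer.length; off += MAX_PAYLOAD) {
            int end = Math.min(off + MAX_PAYLOAD, sendBuffer.length);
            out.add(Arrays.copyOfRange(sendBuffer, off, end));
        }
        return out;
    }
}
```

    A 1200-byte send buffer would come out as three packets (500 + 500 + 200 bytes of payload); after sending, the buffer can simply be emptied by resetting it to length zero.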

  • Monthly TPM1 without reset for FX forward in Hedge Accounting

    Hi,
    I would like a clarification on the "standard" way of resetting the valuated position of an FX forward (using a Hedge Accounting position management procedure) when using TPM1 and the "mid-year valuation without reset" option.
    When executing TPM1, the typical postings for an FX forward consist of:
    - debiting or crediting the G/L balance sheet account representing the FX deals position
    - against the OCI interim account
    - then further classifying the OCI interim account to OCI and P&L
    When reaching the deal maturity, balances consequently exist on the following accounts:
    - B/S account for FX deals position
    - OCI (unrealized)
    - unrealized P&L
    When posting TBB1, a new B/S account (FX flows reconciliation) is debited and credited with the incoming and outgoing flows of the FX.
    When posting TPM18 subsequently, the following postings are done:
    --> REVERSAL OF UNREALIZED:
    - OCI (unrealized) is reversed (in customizing, the position management procedure specifies that 'With Reset' of classification flows must be used when realizing the deal)
    - P&L (unrealized) is reversed
    --> POSTING OF REALIZED:
    - the FX reconciliation account (the one posted with TBB1) is posted with the realized gain or loss
    - against the OCI interim account
    - then the OCI interim is further classified to OCI and realized P&L
    At the end of the hedging relationship, the OCI (realized) is reclassified to realized P&L if needed (THM10).
    As you understand, the B/S account representing the FX deals position that was initially posted with TPM1 has never been reversed, while the deal has been realized and the position should be closed. In fact, that's the only account that remains open (has a balance) at the end. Why is it so? How is it supposed to be offset when using the 'no reset' procedure of TPM1?
    In table TRLV_TRANS_POS for the deal, I see that everything has status 'posted' except 2 records: flow types OTC001 and OTC002. These remain in status 'scheduled'. Is the issue related to that?
    By the way, we are investigating the 'no reset' procedure because the 'with reset' option prevents TPM18 from working afterwards. I saw an OSS note for this, but it is huge and we would like to avoid implementing it.
    Thanks for any advice.
    Regards,
    Christophe.

    Hi Christophe,
    I left the thread open for some possible hints from others, but I believe your question was answered by Rudolf through the SAP Support message. Here is his summary again:
    The valuation together with classification works in two possible ways:
    1. valuation without reset + reset of the unrealized OCI and P/L
    At the maturity of the deal, the valuation is reset with the derived business transaction. The derived business transactions can distinguish between asset and liability accounts. This has to be set up in the position management procedure. The classification resets the unrealized OCI and P/L and posts the new realized classification.
    2. valuation with reset + reset of the unrealized OCI and P/L
    In this case every valuation is reset (usually on the next date) and the corresponding classification is reset as well. At maturity no reset of the valuation is necessary, and the same is valid for the unrealized OCI + P/L.
    We do not support a scenario where valuation with reset is used but the corresponding classification is not reset together with the valuation.
    BR, Tomislav

  • Sending Email if PO approver does not approve after 2 days

    Hi Experts,
    I am not sure whether I am posting to the right forum, but I will still try to ask.
    We have a requirement for an agent or report that runs daily, checks all POs needing approval, and sends an email to each PO approver's Outlook address.
    Can this be done in ABAP, e.g. by creating a report like ME28 that gets all POs per approver and then mails each approver to tell them these POs need their approval? Or is something like workflow more suitable?
    I am thinking of doing it in an ABAP program to avoid implementing workflow if possible.
    Hoping for your positive feedback.
    Thanks.

    Hi Tina,
    1.         Firstly, I would suggest you check the report RSSCD100.
    2.         If you want to solve this using workflow, then this link may help you: http://help.sap.com/saphelp_srm30/helpdata/en/ec/fe163ff8519a06e10000000a114084/content.htm
    3.      The way you are looking for is:
             3.a Use table EKKO to get Purchasing Doc. No., Purch. Doc. Category and Release Status (FRGZU)
             3.b Pass these 2 values into FM ME_CHANGEDOC_SELECT to get the Username (Approver Name) against the required status, that is FRGZU = 'X'
             3.c Use FM SO_NEW_DOCUMENT_ATT_SEND_API1 to send the mail (put in date logic to compare)
             3.d If you are maintaining e-mail IDs in some Z table then fine; otherwise FM BAPI_USER_GET_DETAIL may help you.
    Just give it a try and let me know if this was what you intended.
    Regards,
    N. Singh

  • OBPM 10gR3 Dynamic Role Assignment at user login

    Hi,
    For all the great integration with LDAP in 10gR3, unfortunately, the system is unable to deal with dynamically-defined LDAP groups.
    Our goal is to apply a BPM Role to ALL humans defined in our LDAP.
    All humans happen to already be defined by a dynamically-defined LDAP group called 'AllPeople'.
    It would have been perfect if we could simply assign our BPM Role, 'Employee', to the LDAP group, 'AllPeople'. Sadly you can't (one for the next release pls).
    So as a workaround, what we want to do instead is assign the BPM Role 'Employee' to each individual user dynamically when they first login.
    Since the FDI library is useless outside of a BPM context (you'll find that some of the familiar methods of RoleAssignment are missing), we opted to create an actual BPM process to conduct role assignments, which we then trigger via PAPI.
    The question then was, where/when do we invoke the process such that it does the role assignment quickly and soon enough for the appropriate views and applications to appear in their workspace straight after login?
    We opted for a customised implementation of the SSOWorkspaceLoginInterface class.
    However, we tried making the invocation in the setupAuthenticatedSession() and processRequest() methods, but although the role assignment was successfully done in either case, sadly the user's session was loaded without the new changes - perhaps loaded quicker than the role assignment could be fed back through the directory.
    Therefore, we dumped the invocation in the actual constructor - and this seems to work for the most part. Yet on the odd occasion the role assignment is not quick enough to be realised in the user's workspace session - the user has to log out and back in before the changes are realised.
    We've even tried making the execution sleep for a second or two while the PAPI thread goes about doing the role assignment - again, not much success.
    So I really have 2 questions:
    1. Where during login can we make a PAPI call to do a role assignment so that it should be picked up by the time the session is created? perhaps we already are doing it in the right place.
    2. How could we refresh/request a new session cookie without explicitly logging out and back in again? Note, page refresh is not enough.
    Thanks for reading.

    Sorry for the belated response - I don't get notified of replies.
    The code for my custom SSOLoginModule class is:-
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;
    import fuego.workspace.security.SSOWorkspaceLoginInterface;
    import fuego.papi.Arguments;
    import fuego.papi.CommunicationException;
    import fuego.papi.InstanceInfo;
    import fuego.papi.OperationException;
    import fuego.papi.ProcessService;
    import fuego.papi.ProcessServiceSession;
    import fuego.sso.SSOLoginException;
    import fuego.sso.SSOUserLogin;
    import fuego.jsfcomponents.Util;
    import fuego.workspace.model.common.WorkspaceApplicationBean;
    public class CustomSSOWorkspaceLogin extends SSOUserLogin implements SSOWorkspaceLoginInterface {

        private ProcessService pService;
        private ProcessServiceSession pServiceSession;
        private Properties properties;

        public CustomSSOWorkspaceLogin() {
            // Do the role assignment here because it works; it does not work
            // in the ideal location, the setupAuthenticatedSession method.
            pService = createProcessService();
            pServiceSession = createProcessServiceSession();
            assignDefaultRole(Util.getHttpServletRequest().getRemoteUser());
        }

        private ProcessService createProcessService() {
            return WorkspaceApplicationBean.getCurrent().getProcessService();
        }

        private ProcessServiceSession createProcessServiceSession() {
            return pService.createSession("yourdirectoryusername", "yourdirectorypassword", null);
        }

        // Remotely invokes a BPM process to do the role assignment - there is
        // no external API to do this directly!
        private void assignDefaultRole(String email) {
            try {
                String processId = "myRoleAssignmentProcessId";
                String argumentName = "argumentName"; // name of the input argument for the participant
                String argumentValue = email;
                Arguments arguments = Arguments.create();
                arguments.putArgument(argumentName, argumentValue);
                InstanceInfo instance = pServiceSession.processCreateInstance(processId, arguments);
                long waitTime = 1000;
                long timeLimit = 5000;
                boolean roleAssigned = false;
                boolean timeLimitExceeded = false;
                long startTime = System.currentTimeMillis();
                // Allow the role assignment thread to complete
                while (!roleAssigned && !timeLimitExceeded) {
                    try {
                        Thread.sleep(waitTime);
                        if (pServiceSession.processGetInstance(instance.getId()).isCompleted()) {
                            roleAssigned = true;
                        }
                        if (System.currentTimeMillis() - startTime > timeLimit) {
                            timeLimitExceeded = true;
                        }
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                // Close the process service session
                pServiceSession.close();
                // Do not close the service itself as it is shared with the Workspace!
                //pService.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void setupAuthenticatedSession(HttpServletRequest httpservletrequest, HttpServletResponse httpservletresponse) throws SSOLoginException {
            // Unfortunately, the below does not work here because the role assignment is not fast enough.
            // The result is that the user logs in but cannot see any applications,
            // because the role assignment has not been made in time.
            // Therefore, we run these statements from the constructor - ugly, but it functions.
            //pService = createProcessService();
            //pServiceSession = createProcessServiceSession();
            //assignDefaultRole(httpservletrequest.getRemoteUser());
        }

        public void processRequest(HttpServletRequest httpservletrequest, HttpServletResponse httpservletresponse) throws SSOLoginException {
        }
    }

  • Intercompany scenario in manufacturing

    We're in the manufacturing business with several companies in the US and Mexico. As we roll out SAP at a Mexico location, we need to prototype an intercompany scenario. Manufacturing of some of our products is started in the US and completed in Mexico. We were hoping to use the Intercompany functionality, but it seems that it works only with finished goods. I.e., if we sold one product from the US and one product from Mexico, it would work. But we sell one product; it's just manufactured in two locations.
    We explored an option to enter 2 materials instead of one (US and Mexico pieces separately), but this would mess up the reporting, require customer service re-training, etc.
    In another company on a different SAP box we use the following process:
    - sales order and production order are created in the US company
    - subcontracting purchase order is created for Mexico company
    - Mexico company creates their own sales order/delivery/invoice and completes their piece
    - US company completes their delivery and invoice.
    Since in this case US and Mexico companies will be in the same box, we were hoping to avoid implementing the same scenario because it involves many additional documents (POs, SOs, etc.).
    Is there any way to utilize Intercompany functionality in our scenario? If not, is there any way to simplify the process described above?
    We've done a lot of research and are at the end of our wits right now. Any suggestions would be appreciated.
    Thank you.

    Hi
    This is just a wild thought about the process to be mapped...
    I am thinking about using both the Intercompany and Third Party sales processes to map the requirement.
    Scenario 1: If the Mexican plant has to make the delivery to the customer, the Mexican plant is selected as the delivering plant in the sales order; the availability check happens as per the lead time required to receive the partially processed material (raw material for the Mexican plant) from the US plus the other lead times, and the finished goods are delivered to the customer directly.
    Scenario 2: If the US plant has to make the delivery to the customer, the US plant is selected as the delivering plant in the sales order; a different item category should be determined (like TAS) and a PR generated automatically, as in the third-party process, and the non-SD people should take care of getting the finished goods from the Mexican plant to the US plant (sending the semi-finished product and getting back the finished goods).
    This is just an input for your thoughts.
    Thanks,
    Ravi

  • Load balancing connections in 3.7

    Hi,
    in our current setup we use read-write and read-only proxies. With 3.7 a client that is supposed to connect to a read-only proxy can be redirected to a read-write proxy (and a read-write to a read-only).
    Is there a way to partition proxies for load balancing?
    I haven't checked the API yet; maybe it is possible by implementing a custom load balancer, but I really want to avoid implementing a custom one as much as possible.
    Thanks,
    Alberto

    Hi NJ,
    we need to access the same cache in a read-only or read-write way.
    Anyway, thanks - you are right: the key point is using different proxy schemes, specifically different proxy services. I've just read the documentation more carefully and it says:
    "... proxy – (default) This strategy attempts to distribute client connections equally across proxy service members..."
    They should be partitioned by proxy service.
    I'll test it tomorrow.
    Thanks,
    Alberto
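    For the record, a sketch of how the partitioning might look in the cache configuration: two separate proxy services, so each client population is balanced only within the service it connects to. The service names, address, and ports below are made up; treat the exact element layout as an assumption to verify against the 3.7 cache configuration reference.

```xml
<!-- Hypothetical sketch: separate proxy services for read-only and
     read-write clients, each using the default "proxy" balancing
     strategy within its own service. -->
<proxy-scheme>
  <service-name>ReadOnlyProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>192.168.1.10</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <load-balancer>proxy</load-balancer>
  <autostart>true</autostart>
</proxy-scheme>

<proxy-scheme>
  <service-name>ReadWriteProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>192.168.1.10</address>
        <port>9100</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <load-balancer>proxy</load-balancer>
  <autostart>true</autostart>
</proxy-scheme>
```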

  • A question about inheritance and overwriting

    Hello,
    My question is a bit complicated, so let's first explain the situation with a little pseudo code:
    class A {...}
    class B extends A{...}
    class C extends B {...}
    class D extends C {...}
    class E extends B {...}
    class F {
      ArrayList objects; // contains only objects of classes A to E
      void updateObjects() {
        for (int i = 0; i < objects.size(); i++) {
          A object = (A) objects.get(i); // A as superclass
          update(object);
        }
      }
      void update(A object) { ... }
      void update(B object) { ... }
      void update(D object) { ... }
    }
    My question now:
    For all objects in the objects list, the update(? object) method is called. Is it now called with parameter class A each time, because the object was cast to A before, or does Java look for the best-fitting method depending on the object's real class?
    Regards,
    Kai

    Why extends is evil
    Improve your code by replacing concrete base classes with interfaces
    Summary
    Most good designers avoid implementation inheritance (the extends relationship) like the plague. As much as 80 percent of your code should be written entirely in terms of interfaces, not concrete base classes. The Gang of Four Design Patterns book, in fact, is largely about how to replace implementation inheritance with interface inheritance. This article describes why designers have such odd beliefs. (2,300 words; August 1, 2003)
    By Allen Holub
    http://www.javaworld.com/javaworld/jw-08-2003/jw-0801-toolbox.html
    Reveal the magic behind subtype polymorphism
    Behold polymorphism from a type-oriented point of view
    http://www.javaworld.com/javaworld/jw-04-2001/jw-0413-polymorph_p.html
    Summary
    Java developers all too often associate the term polymorphism with an object's ability to magically execute correct method behavior at appropriate points in a program. That behavior is usually associated with overriding inherited class method implementations. However, a careful examination of polymorphism demystifies the magic and reveals that polymorphic behavior is best understood in terms of type, rather than as dependent on overriding implementation inheritance. That understanding allows developers to fully take advantage of polymorphism. (3,600 words) By Wm. Paul Rogers
    multiple inheritance and interfaces
    http://www.javaworld.com/javaqa/2002-07/02-qa-0719-multinheritance.html
    http://java.sun.com/docs/books/tutorial/java/interpack/interfaceDef.html
    http://www.artima.com/intv/abcs.html
    http://www.artima.com/designtechniques/interfaces.html
    http://www.javaworld.com/javaqa/2001-03/02-qa-0323-diamond_p.html
    http://csis.pace.edu/~bergin/patterns/multipleinheritance.html
    http://www.cs.rice.edu/~cork/teachjava/2002/notes/current/node48.html
    http://www.cyberdyne-object-sys.com/oofaq2/DynInh.htm
    http://www.gotw.ca/gotw/037.htm
    http://www.javajunkies.org/index.pl?lastnode_id=2826&node_id=2842
    http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=1&t=001588
    http://pbl.cc.gatech.edu/cs170/75
    Downcasting and run-time
    http://www.codeguru.com/java/tij/tij0083.shtml
    type identification
    Since you lose the specific type information via an upcast (moving up the inheritance hierarchy), it makes sense that to retrieve the type information - that is, to move back down the inheritance hierarchy - you use a downcast. However, you know an upcast is always safe; the base class cannot have a bigger interface than the derived class, therefore every message you send through the base class interface is guaranteed to be accepted. But with a downcast, you don't really know that a shape (for example) is actually a circle. It could instead be a triangle or square or some other type.
    To solve this problem there must be some way to guarantee that a downcast is correct, so you won't accidentally cast to the wrong type and then send a message that the object can't accept. This would be quite unsafe.
    In some languages (like C++) you must perform a special operation in order to get a type-safe downcast, but in Java every cast is checked! So even though it looks like you're just performing an ordinary parenthesized cast, at run time this cast is checked to ensure that it is in fact the type you think it is. If it isn't, you get a ClassCastException. This act of checking types at run time is called run-time type identification (RTTI). The following example demonstrates the behavior of RTTI:
    //: RTTI.java
    // Downcasting & Run-Time Type
    // Identification (RTTI)
    import java.util.*;

    class Useful {
        public void f() {}
        public void g() {}
    }

    class MoreUseful extends Useful {
        public void f() {}
        public void g() {}
        public void u() {}
        public void v() {}
        public void w() {}
    }

    public class RTTI {
        public static void main(String[] args) {
            Useful[] x = {
                new Useful(),
                new MoreUseful()
            };
            x[0].f();
            x[1].g();
            // Compile-time: method not found in Useful:
            //! x[1].u();
            ((MoreUseful)x[1]).u(); // Downcast/RTTI
            ((MoreUseful)x[0]).u(); // Exception thrown
        }
    } ///:~
    As in the diagram, MoreUseful extends the interface of Useful. But since it's inherited, it can also be upcast to a Useful. You can see this happening in the initialization of the array x in main(). Since both objects in the array are of class Useful, you can send the f() and g() methods to both, and if you try to call u() (which exists only in MoreUseful) you'll get a compile-time error message.
    If you want to access the extended interface of a MoreUseful object, you can try to downcast. If it's the correct type, it will be successful. Otherwise, you'll get a ClassCastException. You don't need to write any special code for this exception, since it indicates a programmer error that could happen anywhere in a program.
    There's more to RTTI than a simple cast. For example, there's a way to see what type you're dealing with before you try to downcast it. All of Chapter 11 is devoted to the study of different aspects of Java run-time type identification.
    One common principle used to determine when inheritance is being applied correctly is the Liskov Substitution Principle (LSP). This states that an instance of a subclass should be substitutable for an instance of the base class in all circumstances. If not, then it is generally inappropriate to use inheritance - or at least not without properly re-distributing responsibilities across your classes.
    Another common mistake with inheritance is definitions like Employee and Customer as subclasses of Person (or whatever). In these cases it is generally better to employ the Party-Role pattern, where Person and Organization are types of Party, and a Party can be associated with other entities via separate Role classes, of which Employee and Customer are two examples.
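    To answer Kai's original question directly: Java picks the overload at compile time, from the declared (static) type of the argument, so after the cast to A the update(A) version is chosen every time; the runtime class is only consulted for overriding, not overloading. A small demonstration (class names follow the pseudo code above):

```java
// Overload resolution happens at compile time, based on the declared
// type of the argument -- not on the object's runtime class.
public class OverloadDemo {
    static class A {}
    static class B extends A {}
    static class D extends B {}

    static String update(A o) { return "update(A)"; }
    static String update(B o) { return "update(B)"; }
    static String update(D o) { return "update(D)"; }

    public static void main(String[] args) {
        A[] objects = { new A(), new B(), new D() };
        for (A obj : objects) {
            // Always prints update(A): the declared type of obj is A.
            System.out.println(update(obj));
        }
        // The runtime class only matters if you downcast explicitly:
        System.out.println(update((B) objects[1])); // update(B)
    }
}
```

    So with the loop in class F, update(A) is called for every element; to dispatch on the real class you would need explicit instanceof checks and downcasts, or a polymorphic method on A that the subclasses override.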

  • Set idoc Packet Size to always be a single packet

    Hi All,
    WE20 config has a field for packet size.  If I set it to 1, the system will create 1 packet/LUW for each idoc.
    If I set it to two, it will wait until there are 2 idocs available and then create 1 packet/LUW for both idocs. However, if that second idoc is never created, the first idoc will never be sent.
    Is there a way to specify that the system always send 1 packet/LUW for any number of idocs? So let's say today I created 10 idocs - the system would send them as 1 LUW. Tomorrow I create 25 idocs, and the system would send those as 1 LUW.
    I'm guessing the answer to this is NO, because the system would never know when to stop waiting for more idocs and send them to the receiving system. The PI developers here are looking for a way to avoid implementing BPM, because they say it will cause performance issues. This was suggested as an alternative. As I said, I doubt it's possible, but I wanted to make certain before going back to them with that info.
    Thanks,
    Bryan

    For some reason I was under the impression that RSEOUT00 wouldn't send the IDocs unless the packet size had been reached, but I see that is not the case.
    I always make things more complicated than they need to be.
    Thanks!

  • Help needed: Memory leak causing system crashes...

    Hello guys,
    As helped and suggested by Ben and Guenter, I am opening a new post in order to get help from more people here. A little background first...
    We are doing LabVIEW DAQ using a cDAQ9714 module (with AI card 9203 and AO card 9265) at a customer site. We run the executable on an NI PC (PPC-2115), and a couple of times (3 so far) the PC simply froze (it was back to normal after a reboot). After monitoring the code running on my own PC for 2 days, I noticed there is a memory leak (memory usage increased 6% after one day of running). Now the question is: where is the leak?
    As a newbie in LabVIEW, I tried to figure it out by myself, but not very successfully so far. So I think it's probably better to post my code here so you experts can help me with some suggestions. (Ben, I also attached the block diagram in PDF for you.) Please forgive me that my code is not written in good manner - I'm not really a trained programmer but more of a self-taught user. I put all the sequence structures in a flat layout as I think this might be easier to read, which makes it quite wide - really wide.
    This is the only VI for my program. Basically what I am doing is the following:
    1. Initialization of all parameters
    2. Read seven 4-20mA current inputs from the 9203 card
    3. Process the raw data and calculate the "corrected" values (I used a few formula nodes)
    4. Output 7 4-20mA current via 9265 card (then to customer's DCS)
    5. Data collection/calculation/output are done in a big while loop. I set the wait time to 5 seconds to save the CPU some juice
    6. There is a configuration file I read/save every cycle in case the system reboots. I also log data to a file (every 10 min by default).
    7. Some other small things like local display and stuff.
    Again, I know my code is probably a mess and hard to read for you guys, but I truly appreciate any comments you provide! Thanks in advance!
    Rgds,
    Harry
    Attachments:
    Debug-Harry_0921.vi ‏379 KB
    Debug-Harry_0921 BD.pdf ‏842 KB

    Well, I'll at least give you points for neatness. However, that is about it.
    I didn't really look through all of your logic, but I would highly recommend that you check out the examples for implementing state machines. Your application suffers greatly in that once you start, you have basically jumped off the cliff: there is no way to alter your flow. Once in the sequence structure you MUST execute every frame. If you use a state machine architecture you can take advantage of shift registers and eliminate most of your local variables. You will also be able to stop execution if necessary, such as on a user abort or an error. Definitely look at using subVIs, and try to avoid implementing most of your program in formula nodes - you have basically written most of your processing there. While formula nodes are easier for very complex equations, most of what you have can easily be done in native LabVIEW code. Also, if you create subVIs you can iterate over the data sets; you don't need to duplicate the code for every data set.
    I tell this to new folks all the time: take some time to get comfortable with dataflow programming. It is a different paradigm than sequential text-based languages, but once you learn it, it is extremely powerful. Use data flow to control execution rather than relying on the sequence frame structure. A state machine will also help quite a bit.
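A LabVIEW state machine is graphical, but the idea described above can be sketched in text form (the state names and cycle logic here are made up to mirror the poster's steps, not taken from the attached VI): a while loop around a case structure, with a shift register - here the `state` variable - carrying the next state, so any state can branch straight to STOP on an error or user abort instead of being forced through every frame of a flat sequence.

```java
// States roughly matching the poster's acquisition cycle.
enum State { INIT, ACQUIRE, PROCESS, OUTPUT, LOG, STOP }

public class DaqStateMachine {
    // Runs the loop for maxCycles acquisition cycles, then stops.
    // In LabVIEW: a while loop + case structure, `state` in a shift register.
    static int run(int maxCycles) {
        State state = State.INIT;
        int cycles = 0;
        while (state != State.STOP) {
            switch (state) {
                case INIT:    state = State.ACQUIRE; break; // initialise parameters
                case ACQUIRE: state = State.PROCESS; break; // read the AI inputs
                case PROCESS: state = State.OUTPUT;  break; // apply correction formulas
                case OUTPUT:  state = State.LOG;     break; // write the AO outputs
                case LOG:     // log, then decide: another cycle, or stop
                    state = (++cycles < maxCycles) ? State.ACQUIRE : State.STOP;
                    break;
                default:      state = State.STOP;
            }
        }
        return cycles;
    }

    public static void main(String[] args) {
        System.out.println("cycles=" + run(3)); // prints cycles=3
    }
}
```

The point is that the LOG case decides where to go next; an ERROR or user-abort branch could jump to STOP from any state, which a flat sequence structure cannot do.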
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot

  • Central contract distribution with WSRM with SRM 7.01 (EHP1)

    Hi Gurus,
    We are implementing Central contract distribution with SRM 7.01 and ECC 6 EHP 5 backends.
    Can we use WSRM for point-to-point communication for contract distribution? As the current customer does not have PI, I am happy to avoid implementing PI.
    Thanks in advance.

    Hello Naveen,
    Find the error details:
    We have followed the steps for point-to-point service configuration as per note 1268336 for the synchronous service QUERYCODELIST in our XI-independent scenario for contract management.
    As per the point-to-point setup steps, we are done with the configuration on the SRM side, i.e. the provider system, and have configured the QUERYCODELIST service.
    Now, while doing the same on the ECC side (BDE), i.e. the consumer system, when we maintain this service and create the logical port, we get a pop-up to enter the details.
    Once we enter the details and click the APPLY setting button, we get the error Muthu mentioned:
    "SRT Framework exception: Error in WSDL access: Exception occurred in communication framework:Error in HTTP Framework:404Connection Failed"
    I have configured this in the past for another project, where a similar error (402 or 403) occurred because the WSDL node was not active under SRT in SRM, but this time we have checked it and all the required services are active.
    Can you help check further on this?
    Regards.
    Paresh.

  • Drawbacks to developing JBI Components

    From a developer's perspective, my instinct tells me there appear to be two drawbacks to developing a JBI component.
    First, I have to waste thousands of brain cells creating/updating a WSDL.
    Second, each component is responsible for XML transformation (binding). For example, each component takes the incoming content (XML) and uses JAXB or XMLBeans to get some Java objects it can work with. Then, right before it sends the response, it uses the same objects to set the response content.
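To make that binding chore concrete, here is a minimal sketch of what every component ends up doing to the message payload before its real work starts. The element names ("order", "customer") are made up; JAXB or XMLBeans would generate this plumbing from a schema, and plain DOM is used here only to keep the sketch dependency-free.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class OrderBinding {
    // "Unmarshal": pull a value out of the incoming XML payload so the
    // component has a plain Java value to work with.
    static String extractCustomer(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagName("customer").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extractCustomer("<order><customer>ACME</customer></order>"));
        // prints ACME
    }
}
```

Every component repeats some variant of this in both directions (parse request, build response), which is the per-component cost being questioned here.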
    I was under the impression that we were getting away from all that complexity? Axis, XFire, JAX-WS, JSR 181, etc. I thought they all shared a common goal of providing developers a quick way to create web services by not requiring them to create WSDLs, by supporting annotations, and by handling the binding automatically.
    To me there is a big difference in effort between writing a JAX-WS web service running in GlassFish and writing a JBI component.
    Am I seeing JBI component development differently than how it was originally intended? Perhaps I am unfamiliar with potential existing solutions.
    Thanks in advance.
    James

    JBI has to qualify as one of the most unusual standards to ever come out of the JCP, because it primarily addresses a very small audience (implementors of JBI environments and plug-in components), yet it is of great value to a much wider audience (the users of those plug-in components), who use JBI only indirectly.
    When should you develop a new service engine? My short answer is "only when you have no other choice." SEs and BCs provide support for specific application and protocol technologies, respectively. If you need such a technology, and the appropriate SE (or BC) doesn't exist, you may have a case for creating a new SE (or BC).
    You always have the option of implementing support for such a technology separately, outside of the JBI environment, and using standard protocols such as SOAP over HTTP or JMS to connect to your JBI environment. In theory this lets you avoid implementing some JBI component features, but in practice it ends up being more costly in terms of development effort. For example, a JBI SE doesn't have to worry about implementing SOAP, or any other protocol for that matter.
    JAX-WS and JBI really have different goals. JAX-WS is an API for applications, to add WS support easily to Java code. It is meant for "regular" developers. JBI defines a "container of containers", defining an infrastructure for accomplishing integration of different application technologies and communications protocols.
    JBI was designed as an integration technology. Any integration technology requires "adapters" for existing applications, protocols, resources, and services. That has always been a weak point for any integration technology -- do you have adapters for X, Y and Z? The big problem was that all those adapters were proprietary, which really locked you in to a particular vendor. JBI changed that, by introducing the standardized adapter: the JBI component. From that perspective, there are plenty of opportunities to create new JBI components out there. Packageware, CICS, custom apps, etc.
    JBI also creates a "composite application" environment, which I suspect is your primary interest -- composing useful applications from a variety of application technologies, such as EJB, BPEL, even XSLT, and making those applications available over a variety of transports & protocols. From this perspective, there is still plenty of space for new components. I'd like to see a service engine that supports ebXML BPSS, for example. I'd also like to see a component that serves as a UDDI registry, exposing all of the internal services within the JBI environment to design tools. A business rules engine would be cool (I believe one is being developed already). I'd like to see an engine that integrates portal support, perhaps using WSRP. I'd like to see some scripting engines, as well as a lighter-weight process engine (BPEL is very capable, but can be a problem if latency/throughput are of primary concern). I'd love to see a workflow engine (XPDL would be very useful).
    That's off the top of my head. I'd say that the current crop of components is enough to accomplish some very interesting, useful things. Their main accomplishment is to isolate you from all the plumbing underneath the covers -- you can concentrate on creating BPEL, or EJB, or XSLT, or configuring (not coding) SOAP, JMS, FTP, etc, access to/from BPEL, etc. Although JBI is a very interesting technology, I'd say it is doing its job when it 'disappears' into the plumbing, and application developers aren't even aware that it is there.

  • Child frame problem

    My intention:
    I wrote a parent frame. In one of the menu item actionPerformed handlers, the program creates a child frame and tries to get some input. After getting the input, in the same actionPerformed code, the parent updates its display pane using the data obtained from the child frame.
    My code:
    private void jMenuItemAddCActionPerformed(java.awt.event.ActionEvent evt) {
        // get the type of the carrier set (this pops up a dialog)
        String sort = getSort("AddC");
        if ((sort != null) && (sort.length() > 0)) {
            CarrierSet aSet = new CarrierSet(); // create an empty set
            aSet.setSort(sort);
            // Let the user enter the elements.
            // UAEnterElements is a subclass of JFrame (I also tried extending JDialog).
            // It adds elements to aSet; it has a text field to let the user enter stuff,
            // plus NEXT, DONE and CANCEL buttons.
            UAEnterElements getCarrierD = new UAEnterElements(this, aSet);
            if (getCarrierD.getStatus() == 1) { // user clicked DONE in UAEnterElements
                _algebra.addCarrierSet(aSet);   // _algebra is a private member of this
                // redisplay the content of _algebra
                jLabelShowAlg.setText(_algebra.toString());
            }
        }
    }
    My problem:
    The parent just goes ahead and updates its display pane without waiting for the child to return, so it displays old information.
    I tried making the child a subclass of JDialog, but it still did not work.
    I wrote _parent.setEnabled(false) in the child constructor; the parent still just goes ahead and displays the old information.
    How should I make the parent wait until the user clicks DONE, and only then execute
    if (getCarrierD.getStatus() == 1) {
    }
    Can I avoid implementing threads?
    Thanks a lot for any help.

    Thanks for your help. 2 Duke dollars are in your account now.
    It worked when I wrote a class extending JDialog and called setModal(true).
    One more question related to this one: what if I still want the child to be a frame?
    What should I do to make the parent wait until the child is done?
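For anyone landing here later, the setModal(true) fix can be sketched roughly like this (EnterElementsDialog is a hypothetical stand-in for UAEnterElements): because the dialog is modal, the parent's call to setVisible(true) blocks until the dialog is disposed, so the status check only runs after the user has clicked DONE. No threading code is needed.

```java
import javax.swing.JButton;
import javax.swing.JDialog;
import javax.swing.JFrame;

// Hypothetical stand-in for UAEnterElements as a modal dialog.
class EnterElementsDialog extends JDialog {
    private int status = 0; // 1 = user clicked DONE

    EnterElementsDialog(JFrame parent) {
        super(parent, "Enter elements", true); // true => modal: setVisible(true) blocks
        JButton done = new JButton("DONE");
        done.addActionListener(e -> { status = 1; dispose(); }); // unblocks the caller
        add(done);
        pack();
    }

    int getStatus() { return status; }
}

// In the parent's actionPerformed:
//   EnterElementsDialog d = new EnterElementsDialog(this);
//   d.setVisible(true);              // blocks here until DONE/close
//   if (d.getStatus() == 1) { ... }  // now runs with fresh data
```

This blocking behaviour is specific to modal dialogs; a plain JFrame child has no equivalent, which is why the frame version kept showing stale data.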

  • Primary mailbox (on Exchange 2010) and Personal Archive (on Exchange 2013), possible?

    Current environment is Exchange 2010 SP3 RU5 supporting 4,000 Users. Client estate is Outlook 2010 SP1 going on SP2.
    We're pulling our archiving solution away from a 3rd party and back into Exchange. Implementing a new set of Exchange 2010 servers (in the old DAG or in a new archive DAG) would be easy. But is there Exchange 2013 stepping-stone potential?
    Can the Archive DAG / Archive mailboxes be on 2013? i.e. for any given User, leave their primary mailbox on Exchange 2010 and create new Archive mailbox on 2013.
    I want to avoid implementing 2010 Archive Servers and then go 2013 Archive 6 months or a year later.
    This article suggests 'no':
    http://technet.microsoft.com/en-gb/library/dd979800(v=exchg.150).aspx
    "Locating a user’s mailbox and archive on different versions of Exchange Server is not supported."
    I've found little info but the odd statement here / there.
    Is this the latest position? Is it that cut and dried? Has anyone tried it? Why won't it work (or will it, but it's just not supported)?
    Thanks!

    <I had a response from MS>
    Below is a summary of the case for your records:
    Symptom:
    =============
    Is it possible to implement a 2013 environment to host the Archive mailboxes? i.e. for any given User, their primary mailbox is on Exchange 2010 and their Archive
    mailbox is on 2013. 
    Resolution:
    =============
    It’s not supported to have a user’s primary mailbox reside on an older Exchange version than the user’s archive. If the user’s primary mailbox is still on Exchange
    2010, you must move it to Exchange 2013 before or at the same time when you move the archive to Exchange 2013.
    http://technet.microsoft.com/en-us/library/jj651146(v=exchg.150).aspx
    as per the repro in our lab, having the archive mailbox in higher version of exchange would fail with the error above
    <the scenario isn't completely relevant, looks like he's trying to put the Primary on 2013 and not the Archive, no matter, we've established there are problems, question is whether they are looking into this area / to patch, they go on...>
    At this point in time we don't have a confirmation from the product team as to whether the above will change in future Exchange versions.
    <MS did say on the call that they were not looking at fixing it, naturally this isn't a "never", as per previous statement - they can't commit 100% to the future, but they've provided me the answer - they are not currently looking at resolving/providing
    this as a migration scenario, end.>
