Loading IDoc Metadata fails

Hi,
I am trying to load IDoc metadata in IDX2.
It works fine for several systems/ports (4.6C), but when I try to load data from a certain system (ECC 6.0), I get an information box that just says "I::000", no matter which IDoc I try to read. It is always the same message, even if I try to load metadata for an IDoc that does not exist in the backend (like "TEST", for instance).
The RFC connection (SM59) passes the connection test, but I suspect the user is missing some authorization. Is that reasonable? Or could it be a release conflict? The XI system is XI 3.0, release NW04_20_REL, and the backend is ECC 6.0 as mentioned above. The "good" systems are all 4.6C.
Or what else could it be?
Thanks in advance
Karsten

Hi,
>  So a user in a SM59 connection can be "good enough" for a connection test but may lack some authorization to read IDoc metadata?
Exactly!
There is an additional authorization test.
In SM59 try: Utilities --> Test --> Authorization Test
This test must also work.
It can happen that the system converts the password to capital letters. Maybe that is the reason.
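As a cross-check outside the GUI, connectivity and authorization can be distinguished with a small standalone program. Below is a minimal, hypothetical sketch using SAP JCo 3; the destination name "XI_BACKEND" is an assumption and should be configured with the same user and password as the SM59 destination:

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;

    public class RfcAuthCheck {
        public static void main(String[] args) {
            try {
                // "XI_BACKEND" is an assumed destination name (XI_BACKEND.jcoDestination file)
                JCoDestination dest = JCoDestinationManager.getDestination("XI_BACKEND");
                dest.ping(); // roughly what the SM59 connection test verifies
                System.out.println("Connection test OK");

                // Actually executing a function module also exercises the user's RFC
                // authorization (S_RFC), which the plain connection test does not.
                JCoFunction ping = dest.getRepository().getFunction("RFC_PING");
                ping.execute(dest);
                System.out.println("RFC execution OK - the user is allowed to call RFCs");
            } catch (JCoException e) {
                // A logon or "no authorization" error here, despite a green connection
                // test in SM59, points to the user/password/authorization, not the network.
                System.err.println("RFC check failed: " + e.getKey() + " - " + e.getMessage());
            }
        }
    }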
Regards
Patrick

Similar Messages

  • Can't load IDoc Metadata!

    Hi all,
    I don’t know why, but I can’t see the IDoc metadata in IDX2 on my Integration Server.
    I have already defined the RFC connection between the two systems (the user used to connect has the required authorizations). The port and the partner profile are created (WE20 and WE21 on the backend side).
    When I try to load the metadata manually (XI Integration Server, transaction IDX2) with IDoc type ZSX_P1100 and the source port of the backend, I receive the message "Basic type ZSX_P1100 does not exist". But this type exists on the backend side, I already checked! Why can't I load it?
    Does anybody know what's happening?
    Thanks in advance,
    Ricardo.

    Hi Ricardo -
    >>> I need to release the IDoc segments before the transfer of metadata to my Integration Server (XI -> IDX2)?
    It's definitely good practice when you set up a new port in IDX1 to see if you can manually load the metadata from IDX2, but it's not a requirement. At runtime, if the metadata isn't there, the adapter will go and get it and load it into the cache.
    Regards,
    Jin

  • Exception message, failed to get value, could not load managed metadata, invalid field name

    Hi, I have created some site collection columns with managed metadata and taxonomy term sets, and then created some site content types from those site columns. Some of them work properly and some don't.
    When I create or upload a document in the document library, I start to tag the document by first choosing which content type I want to use, but when I save the document it renders an error message (this is not the full content of the message):
    "exception message, failed to get value, could not load managed metadata, invalid field name"
    I have created other site content types before with the same site columns and they do not generate an error message. Is there a solution for my dilemma?

    try these links:
    https://prashobjp.wordpress.com/2014/02/14/failed-to-get-value-of-the-column-name-column-from-the-managed-metadata-field-type-control-see-details-in-log-exception-message-invalid-field-name-guid-site-url/
    http://www.sharepointconfig.com/2011/03/issues-provisioning-sharepoint-2010-managed-metadata-fields/
    http://blog.goobol.com/category/sharepoint/sharepoint-issue-troubleshooting/
    http://www.instantquick.com/index.php/correctly-provisioning-managed-metadata-columns?c=elumenotion-blog-archive/random-whatnot
    https://pholpar.wordpress.com/2010/03/03/taxonomy-issues/
    Please mark answer as correct if it is correct else vote for it if you find it useful Happy SharePointing

  • Problem in importing idoc metadata

    Hi all,
    I have a problem loading metadata in IDX2. The IDoc details are AFS/ORDERS05/ZADIORDCONF, port SAPAGS, client 003. I can see IDoc metadata with the name ORDERS05 under the corresponding SAP system (SAPAGS with client 003), but without any extension.
    Now I tried to create another entry with IDoc type ORDERS05, extension ZADIORDCONF and source port SAPAGS.
    It gives the error message "Extension ZADIORDCONF is not assigned to basic type ORDERS05".
    What could be the reason? Please help me solve it.
    Thanks and Regards
    Jhansi

    Hi,
    There is no need to add anything in IDX2 manually. You can delete the existing metadata from IDX2 and reimport it.
    In some cases even this does not work. In that case, you can refresh the entire metadata by running report IDX_RESET_METADATA. However, use this report only if the option above does not work, since it resets your entire metadata.
    Thanks,
    Bhavish
    Reward points if comments helpful

  • Loading IDOC meta data into Xi not working

    Hello,
    I'm trying to load IDoc metadata using transaction IDX2, but the program does not recognize the port "SAPRD1_222" that I previously defined in transaction IDX1.
    Any ideas what I'm missing here?
    Ruud

    Ruud,
    Please take a look at this thread:
    Re: Importing Metadate from IDES 4.7 into Integration Server Rep

  • How to get IDoc Metadata and Structure without connection to sender

    Hello folks!
    Is there any chance to receive and process an IDoc with PI without having loaded the IDoc metadata over a direct connection from the sending SAP system? Can't this be done by hand?
    Or is it not possible to bypass the IDoc adapter and send the IDoc via RFC?
    Thanks in advance
    Gunnar

    Hi!
    The idea behind the question is the following:
    Usually, any non-SAP RFC server of an integration tool can receive IDocs just by being registered at the sending system. How you get the metadata into the receiving integration tool is a completely different story; it can be done manually, or by downloading the metadata from the DDIC once by hand.
    But with PI you need to connect to the sending SAP system directly with IDX1 and import the structure in the Integration Builder, right?
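    Just to illustrate the registered-server idea: with the standalone SAP Java IDoc library a receiver only needs to be registered at the gateway under the program ID that the sending system's SM59 destination (type T, registered) uses. A rough, hypothetical sketch - the server configuration name is an assumption and the handler wiring is only indicated in comments:

        import com.sap.conn.idoc.jco.JCoIDoc;
        import com.sap.conn.idoc.jco.JCoIDocServer;

        public class RegisteredIDocReceiver {
            public static void main(String[] args) throws Exception {
                // "IDOC_SERVER" is an assumed server configuration name; it maps to a
                // properties file with the gateway host, gateway service and program ID.
                JCoIDocServer server = JCoIDoc.getServer("IDOC_SERVER");

                // An IDoc handler factory and a TID handler still have to be registered
                // here so that incoming IDoc packets and their transactional IDs are
                // processed; see the examples shipped with the JCo/IDoc libraries.

                server.start();
                System.out.println("Registered at the gateway, waiting for IDocs ...");
            }
        }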

  • Idoc metadata

    hi
    Why should we import IDoc metadata when using the IDoc adapter, and what do we mean by metadata? Also, in mapping the IDoc structure, why do we assign the segment and BEGIN attributes to constant values and disable the control record fields?
    regards
    raghu

    Hi Raghu,
    What is Metadata:
    IDoc (Intermediate Document) metadata comprises the structures of the corresponding IDoc types. This metadata is just a description of the available structures and fields. Using an RFC connection, metadata of this type can either be called directly at runtime or loaded to the Integration Server beforehand. Thus it can be loaded in two ways:
    1. During runtime
    2. Before the actual message flow.
    To find out what metadata has already been loaded, call transaction Metadata Overview for IDoc Adapter (IDX2). The system displays a screen with the directory of all systems connected with the IDoc adapter (including a description) for which metadata has already been loaded. Choose Port Maintenance in IDoc Adapter (IDX1) to call the corresponding transaction and to create additional ports.
    Why import it:
    To process IDocs in the form of IDoc-XML messages, an IDoc's metadata must be loaded into the Integration Server so that SAP XI's IDoc adapter can convert the native IDoc format into its XML representation.
    All IDocs are created according to specific rules: this is the general structure (the record types) of every IDoc. Special rules apply to the different IDoc types.
    Segment:
    A segment is like a container that stores all the administration information for technical processing as well as the actual application data of an IDoc.
    Fields:
    A segment comprises segment fields, the smallest units of the IDoc, which contain the actual values and information.
    Control record:
    The control record is identical for all IDocs and contains the administration information, for example sender, recipient and message. Note that the control record also contains the last processing status (STATUS field).
    If you disable the Apply Control Record Values from Payload indicator in the receiver IDoc adapter, the fields are filled from system information rather than from the payload:
    1. From the XI sender service, the configuration in the Integration Directory and the System Landscape Directory
    2. Constant LS
    3. From the XI receiver service, the configuration in the Integration Directory and the System Landscape Directory
    If you enable the Apply Control Record Values from Payload indicator in the receiver IDoc adapter, the fields are filled from the IDoc-XML payload.
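    For illustration, the control record fields discussed above are ordinary attributes of an IDoc document. A minimal, hypothetical sketch with the standalone SAP JCo and Java IDoc libraries (the destination name and the partner values are assumptions, not taken from this thread):

        import com.sap.conn.idoc.IDocDocument;
        import com.sap.conn.idoc.IDocFactory;
        import com.sap.conn.idoc.IDocRepository;
        import com.sap.conn.idoc.jco.JCoIDoc;
        import com.sap.conn.jco.JCoDestination;
        import com.sap.conn.jco.JCoDestinationManager;

        public class ControlRecordDemo {
            public static void main(String[] args) throws Exception {
                // "BACKEND" is an assumed destination name.
                JCoDestination dest = JCoDestinationManager.getDestination("BACKEND");
                IDocRepository repo = JCoIDoc.getIDocRepository(dest); // loads and caches the IDoc metadata
                IDocFactory factory = JCoIDoc.getIDocFactory();

                // Creating the document already requires the metadata of the basic type.
                IDocDocument doc = factory.createIDocDocument(repo, "ORDERS05");

                // Control record values - in XI these come either from the payload or
                // from the sender/receiver configuration, depending on the indicator above.
                doc.setMessageType("ORDERS");
                doc.setSenderPartnerType("LS");
                doc.setSenderPartnerNumber("SENDER_LS");      // assumed logical system
                doc.setRecipientPartnerType("LS");
                doc.setRecipientPartnerNumber("RECEIVER_LS"); // assumed logical system
            }
        }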
    **Pls: Reward points if helpful **
    Regards,
    Jyoti

  • "IDOC Metadata not Found" - Error with Business Connnector 4.7

    I have been trying to get our 4.7 installation of Business Connector going after having all kinds of issues getting 3.5 to run as a service on Windows Server 2003.
    I believe I am just about able to successfully load the IDocs as I did in 3.5. However, when it gets to the inboundProcess step, I get an error saying "The IDOC Metadata for ORDERS05 is not available." I seem to be stuck at this point.
    If anyone has any ideas on what I can do to get around or fix this issue, I would really appreciate it. I am trying to load these IDocs via ALE and I am getting this error just after the OutboundProcess (ALE.java) step.
    Thanks for any help you can give.
    Damon Hicks

    Hi,
    Did you configure transaction IDX2?
    See the link below:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8a/b8b13bb3ace769e10000000a11402f/content.htm
    Regards
    Chilla

  • BCExeption: The IDOC metadata for null is not available in SID

    Hello Guys,
    we need some information about the error message:
    com.wm.pkg.sap.BCExeption: The IDOC metadata for null is not available in <SID>
    We use Business Connector 4.8 and the backend is an ECC 6.0 system; we have an inbound XML scenario with FTP.
    We pick up the ORDRSP from the vendor's FTP server and would like to pass it to our backend system.
    We can read the XML file into a string, but we cannot send it to the backend system. We use the following steps:
    - bytesToString
    - stringToDocument
    - documentToRecord
    - recordToIDOC
    - lockSession
    - createTID
    so far everything works perfectly
    - sendIDoc
    - releaseSession
    In the "sendIDoc" step we see the error message described above, but what is this message actually telling us?
    Any Ideas?
    Thank you and regards,
    Michael

    Hello Michal,
    thank you for your quick answer!
    In my opinion, I don't need any input for this service, because we pick up all data from the vendor's FTP server at a defined time interval.
    All required fields are available in the XML file. In the result tab the IDocList is filled correctly, but nothing is sent to the backend system because the error is displayed.
    Is it a problem with my concept?
    Regards,
    Michael
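    For comparison, the same send flow (build the IDoc, create a TID, send, confirm) looks roughly like this with the standalone SAP JCo and Java IDoc libraries. This is only a hypothetical sketch: the destination name is an assumption and the exact send(...) overload may differ between library versions. The "metadata for null" message usually indicates that the basic type could not be resolved against the IDoc repository before sending:

        import com.sap.conn.idoc.IDocDocument;
        import com.sap.conn.idoc.IDocFactory;
        import com.sap.conn.idoc.IDocRepository;
        import com.sap.conn.idoc.jco.JCoIDoc;
        import com.sap.conn.jco.JCoDestination;
        import com.sap.conn.jco.JCoDestinationManager;

        public class SendOrdrsp {
            public static void main(String[] args) throws Exception {
                JCoDestination dest = JCoDestinationManager.getDestination("ECC_BACKEND"); // assumed name
                IDocRepository repo = JCoIDoc.getIDocRepository(dest);
                IDocFactory factory = JCoIDoc.getIDocFactory();

                // Equivalent of documentToRecord/recordToIDOC: the basic type must be
                // resolvable here, otherwise the metadata lookup has nothing to work with.
                IDocDocument doc = factory.createIDocDocument(repo, "ORDERS05");
                // ... fill the control record and data segments from the parsed ORDRSP XML ...

                // Equivalent of lockSession/createTID/sendIDoc/releaseSession:
                String tid = dest.createTID();
                JCoIDoc.send(doc, IDocFactory.IDOC_VERSION_DEFAULT, dest, tid);
                dest.confirmTID(tid);
            }
        }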

  • I have elements 7 loaded on my laptop. I bought a new one and want to load it onto the new one. I talked to someone on adobe site last night and he gave me a link to down load it. it failed so i googled it, and the downloaded it just fine. I went on to th

    I have Elements 7 loaded on my laptop. I bought a new one and want to load it onto the new one. I talked to someone on the Adobe site last night and he gave me a link to download it. It failed, so I googled it and then downloaded it just fine. I went to the Adobe site, signed in, looked at my product history and got the product number; however, when I entered the product number it didn't like it. What do I do?

    Is your laptop running Windows XP, Windows 7 or Windows 8/8.1? If so, I suggest downloading a tool that will retrieve the serial number from the old laptop, which you can then use on your new machine. The tool is called:
    Belarc Advisor
    It can be downloaded from here:
    <http://www.belarc.com/Programs/advisorinstaller.exe>
    Install it and then run it by double-clicking its icon, then wait about 5 minutes while it generates all the information about your machine and software keys.

  • Refresh Idoc Metadata in PI 7.31 Java Only

    Hello experts,
    We are facing an issue: we have to refresh the IDoc metadata every time we change the IDoc structure in ECC for an outbound IDoc scenario.
    We have imported the latest IDoc structure into the ESR and assigned it to the MM, OM and iFlow. Still, in message monitoring we can see the old IDoc structure.
    Is there anywhere else to import IDoc metadata, like there used to be in the dual-stack IDX2?
    Regards,
    Suman

    Hi Suman,
    If you are new to IDocs in PI, try the SAP Help documentation: http://help.sap.com/saphelp_nw73ehp1/helpdata/en/50/980951964146f1a7f189b411796bae/content.htm
    Regards

  • Error reading IDoc Metadata from CRM 5.0 (Basis 7.0) through XI 3.0

    We are not able to generate the IDoc metadata from XI 3.0 through IDX2 for port/RFC destination pointing to SAP CRM 5.0 which runs on the Basis 7.0 Web application server.
    Has anyone run into this issue?  We get the dialog box with information "I::000" Message no. 000.  The metadata is not generated at all.
    We have no problem generating IDoc metadata for other SAP systems including 6.40 and 4.6C basis layers.  Is there something different with 7.0 Basis?
    Thanks,
    Jay Malla
    SAP XI Consultant
    Licensed To Code

    Hi Renjith,
    Thanks for the suggestion.  We've already tried that out - the RFC call IDX_STRUCTURE_GET works from SE37 using the RFC destination that we have defined for the CRM system which is referenced in the port.  We can debug from the XI system into the CRM system and the data is returned correctly.  We can also generate the IDoc schema from the Integration Builder Repository.
    However, we cannot generate the metadata from IDX2.  If we point the port in IDX1 to another RFC destination for a non Basis 7.0 system, the metadata generation works.  So it seems that something is strange regarding generating the IDoc metadata for Basis 7.0 systems.
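    As a side note, the SE37 check can also be reproduced outside the GUI with a small JCo program against the same RFC destination. This is only a hypothetical sketch: the destination name is an assumption, and the import parameters of IDX_STRUCTURE_GET are deliberately not hard-coded here (look them up in SE37 or via the printed template):

        import com.sap.conn.jco.JCoDestination;
        import com.sap.conn.jco.JCoDestinationManager;
        import com.sap.conn.jco.JCoException;
        import com.sap.conn.jco.JCoFunction;
        import com.sap.conn.jco.JCoFunctionTemplate;

        public class Idx2Probe {
            public static void main(String[] args) throws JCoException {
                JCoDestination crm = JCoDestinationManager.getDestination("CRM_DEST"); // assumed name
                JCoFunctionTemplate tmpl = crm.getRepository().getFunctionTemplate("IDX_STRUCTURE_GET");
                if (tmpl == null) {
                    System.err.println("IDX_STRUCTURE_GET not found in the remote repository");
                    return;
                }
                // Print the real import parameter names instead of guessing them here.
                System.out.println(tmpl.getImportParameterList());

                JCoFunction fn = tmpl.getFunction();
                // Set the IDoc type parameters exactly as in the SE37 test run, then:
                // fn.execute(crm);
            }
        }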
    Regards,
    Jay

  • OCS on a cluster with Load balancing and fail safe environment

    Dear all,
    I want to ask whether there is any document or hints on how to do an OCS R2 installation on three servers with the RAC option (clustered Fail Safe). How can I install OCS on a cluster with load balancing in a fail-safe environment?
    Please, I need your help.
    Thank you,
    [email protected]

  • WLS 6.1 SP1 stateful EJB problem - load-balancing and fail-over

               I have three problems.
               1. I have two clustered servers. My weblogic-ejb-jar.xml is here:
              <?xml version="1.0"?>
              <!DOCTYPE weblogic-ejb-jar PUBLIC '-//BEA Systems, Inc.//DTD WebLogic 6.0.0 EJB//EN'
              'http://www.bea.com/servers/wls600/dtd/weblogic-ejb-jar.dtd'>
              <weblogic-ejb-jar>
              <weblogic-enterprise-bean>
                   <ejb-name>DBStatefulEJB</ejb-name>
                   <stateful-session-descriptor>
                   <stateful-session-cache>
                        <max-beans-in-cache>100</max-beans-in-cache>
                        <idle-timeout-seconds>120</idle-timeout-seconds>
                   </stateful-session-cache>
                   <stateful-session-clustering>
                        <home-is-clusterable>true</home-is-clusterable>
                        <home-load-algorithm>RoundRobin</home-load-algorithm>
                        <home-call-router-class-name>common.QARouter</home-call-router-class-name>
                        <replication-type>InMemory</replication-type>
                   </stateful-session-clustering>
                   </stateful-session-descriptor>
                   <jndi-name>com.daou.EJBS.solutions.DBStatefulBean</jndi-name>
              </weblogic-enterprise-bean>
              </weblogic-ejb-jar>
               When I use "<home-call-router-class-name>common.QARouter</home-call-router-class-name>" and deploy this EJB, the following exception is raised:
               <Warning> <Dispatcher> <RuntimeException thrown by rmi server: 'weblogic.rmi.cluster.ReplicaAwareServerRef@9 - jvmid: '2903098842594628659S:203.231.15.167:[5001,5001,5002,5002,5001,5002,-1]:mydomain:cluster1', oid: '9', implementation: 'weblogic.jndi.internal.RootNamingNode@5f39bc''
               java.lang.IllegalArgumentException: Failed to instantiate weblogic.rmi.cluster.BasicReplicaHandler due to java.lang.reflect.InvocationTargetException
                    at weblogic.rmi.cluster.ReplicaAwareInfo.instantiate(ReplicaAwareInfo.java:185)
                    at weblogic.rmi.cluster.ReplicaAwareInfo.getReplicaHandler(ReplicaAwareInfo.java:105)
                    at weblogic.rmi.cluster.ReplicaAwareRemoteRef.initialize(ReplicaAwareRemoteRef.java:79)
                    at weblogic.rmi.cluster.ClusterableRemoteRef.initialize(ClusterableRemoteRef.java:28)
                    at weblogic.rmi.cluster.ClusterableRemoteObject.initializeRef(ClusterableRemoteObject.java:255)
                    at weblogic.rmi.cluster.ClusterableRemoteObject.onBind(ClusterableRemoteObject.java:149)
                    at weblogic.jndi.internal.BasicNamingNode.rebindHere(BasicNamingNode.java:392)
                    at weblogic.jndi.internal.ServerNamingNode.rebindHere(ServerNamingNode.java:142)
                    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:362)
                    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                    at weblogic.jndi.internal.BasicNamingNode.rebind(BasicNamingNode.java:369)
                    at weblogic.jndi.internal.RootNamingNode_WLSkel.invoke(Unknown Source)
                    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:296)
               So must I use it or not?
               2. When I don't use "<home-call-router-class-name>common.QARouter</home-call-router-class-name>", there is no exception, but load balancing does not happen. According to the documentation, load balancing should happen when I call the home.create() method.
               My client program goes like this:
                    DBStateful the_ejb1 = (DBStateful) PortableRemoteObject.narrow(home.create(), DBStateful.class);
                    DBStateful the_ejb2 = (DBStateful) PortableRemoteObject.narrow(home.create(3), DBStateful.class);
               The result is like this:
                    the_ejb1 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@4695a6)/397
                    the_ejb2 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@acf6e)/398
                    or
                    the_ejb1 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@252fdf)/380
                    the_ejb2 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@6a0252)/381
               I think the result should instead look like this, shouldn't it?
                    the_ejb1 = ClusterableRemoteRef(203.231.15.167 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@4695a6)/397
                    the_ejb2 = ClusterableRemoteRef(203.231.15.125 weblogic.rmi.cluster.PrimarySecondaryReplicaHandler@6a0252)/381
               In this case I think the_ejb1 and the_ejb2 should have instances on different cluster servers, but they go to the same server.
               3. If I don't use "<home-call-router-class-name>common.QARouter</home-call-router-class-name>" and "<replication-type>InMemory</replication-type>", then load balancing happens but there is no fail-over.
               So how can I get load balancing and fail-over together?

  • Load-balancing and fail-over between web containers and EJB containers

    When web components and EJB components are run in different OC4J instances, can we achieve load-balancing and fail-over between web containers and EJB containers?

    It looks like there is clustering, but not load balancing, available for RMI from the rmi.xml configuration. The application will treat any EJBs on the cluster as one-to-one look-ups: Orion will go out and get the first EJB available on the cluster. See the docs on configuring rmi.xml (and also the note below).
    That is a kind of failover, because if machine A goes down and the myotherAejbs.jar is on machine B too, Orion will go and get the bean from machine B when it can't find machine A. But it doesn't alternate between machine A and machine B for each remote instance of the bean. You could also specify the maximum number of instances of a bean, and as one machine gets "loaded", Orion would go to the next available machine, but that's not really load balancing.
    That is, you can set up your web apps with EJBs, but mark all of the EJBs as remote="true" in the orion-application.xml file:
    <?xml version="1.0"?>
    <!DOCTYPE orion-application PUBLIC "-//Evermind//DTD J2EE Application
    runtime 1.2//EN" "http://www.orionserver.com/dtds/orion-application.dtd">
    <orion-application deployment-version="1.5.2">
    <ejb-module remote="true" path="myotherAejbs.jar" />
    <ejb-module remote="true" path="myotherBejbs.jar" />
    <ejb-module remote="true" path="myotherCejbs.jar" />
    <web-module id="mysite" path="mysite.war" />
    ... other stuff ...
    </orion-application>
    In the rmi.xml you would define your clustering:
    <cluster host="230.0.0.1" id="123" password="123abc" port="9127" username="cluster-user" />
    This tag is defined if the application is to be clustered. It is used to set up a local multicast cluster. A username and password used for the servers to intercommunicate also need to be specified.
    host - The multicast host/IP to transmit and receive cluster data on. The default is 230.0.0.1.
    id - The id (number) of this cluster node to identify itself with in the cluster. The default is based on the local machine IP.
    password - The password configured for cluster access. Needs to match that of the other nodes in the cluster.
    port - The port to transmit and receive cluster data on. The default is 9127.
    username - The username configured for cluster access. Needs to match that of the other nodes in the cluster.
