EO_INBOUND_PARALLEL - equally distribute the load

We have used EO_INBOUND_PARALLEL to increase the number of parallel processes running at a time, but when processing started the load was not equally distributed among the queues; a few queues were loaded heavily and a few lightly. Is there any other setting that needs to be made so that the load is shared equally amongst the queues?
Edited by: SCK on Oct 6, 2010 9:19 AM

Check page 10 of this document, and also add the other relevant parameters:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/423f5046-0a01-0010-2698-b2dc7c3185f1?quicklink=index&overridelayout=true

Similar Messages

  • RDS Connection Broker does not distribute the load among Session Hosts

    Hello Folks,
    I have a three server RDS setup in which the roles are distributed as follows:
    S1 -> RD Web Acc / RD Gateway / RD Connection Broker /Session Host
    S2 -> Licensing Server / Session Host
    S3 -> Session Host (and most powerful server)
    I would like the load to be distributed among the session hosts, depending on the resources of the servers. But in my setup, all the apps launched by the users are run on S1 for some reason.
    Also, when I disable S1 in the list of session hosts that accept new connections and start a new app from the console, I get an error saying: "An authentication error has occurred (Code 0x607)"
    Any tips? 
    Edit: I had a workaround for Code 0x607 but I still get an error saying that "Couldn't open this program or file. Either there was a problem with <appname> or the file you're trying to open couldn't be accessed" 

    Hi,
    Thanks for your comment.
    Yes, we can deploy RDSH on all 3 servers and use them as session hosts; that is a normal scenario for an RDS farm. You can refer to the following article for reference.
    Checklist: Create a Load-Balanced RD Session Host Server Farm by Using RD Connection Broker
    http://technet.microsoft.com/en-in/library/cc753891.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Web Cache distributing the load among instances

    I'm testing clustering with Oracle10g 10.1.2
    installed: infrastructure (1 machine), one Web Cache (1 machine), and 2 instances (application servers on 2 different machines).
    It is a database repository cluster (DCM).
    Presently there are 2 application server instances in the cluster.
    I have some doubts regarding the cluster.
    Say there are 4 instances in the cluster, and each application server instance's capacity is 100 users (we will define this in Web Cache under Origin Servers), with all instances servicing 100 users. If I bring down one of the servers in the cluster, what happens to the 100 users it was servicing? Will all 100 users be queued, or will the remaining 3 servers share those 100 users (33/33/33)?
    How does Web Cache distribute the load among instances?

    Hi,
    From the viewpoint of Web Cache, AFAIK, it doesn't matter/know that the application server instances are in a cluster; i.e. it doesn't do anything special w.r.t. distributing the load whether or not the server instances are in a cluster.
    There is a 'Capacity' parameter to be configured on the 'Origin Servers' page in the Web Cache Administration pages. Set it to the appropriate value.
    For the particular case you mentioned, this is the general rule Web Cache uses: say a site is mapped to 2 servers (s1 and s2), with s1 first in the mapping and s2 second. On receiving a request for a page on that site, it will first try to send the request to s1 (since it is first in the mapping), but if it finds s1 to be down, it routes the request to s2.
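    The mapping-order/failover rule described above can be sketched in plain Python (the server names and the `is_up` probe are illustrative assumptions, not Web Cache APIs):

```python
# Sketch of Web Cache's mapping-order routing with failover, as described
# above. Server names and the is_up() probe are illustrative assumptions.
def route(servers, is_up):
    """Return the first server in mapping order that is up."""
    for server in servers:
        if is_up(server):
            return server
    raise RuntimeError("no origin server available")

# s1 is first in the mapping, so it receives requests while it is up:
print(route(["s1", "s2"], lambda s: True))        # s1
# if s1 goes down, requests are routed to s2 instead:
print(route(["s1", "s2"], lambda s: s != "s1"))   # s2
```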
    Hope this helps.
    Regards,
    Priyanka GES
    Oracle Web Cache Team

  • How the Load balancing happens in CPO

    Hi All,
    On what basis does the process engine select a process or request, and how does the load balancing happen?

    Hi!
    I am a little confused by the question (as it refers to "request"), but I am going to assume that you are asking how a High Availability Process Orchestrator environment with several servers decides which processes run on which server.
    The answer to that question is...
    In general, processes to be executed are split equally between all servers. The only piece of data taken into account during process instance assignment is the current load on the servers (counted as the number of top-level processes, not counting child processes). For example, suppose there are 3 servers in the environment: server A is running 5 top-level processes, and servers B and C are each running 3. When a new process is started (e.g. on a schedule, manually, or triggered via an external event), it will be assigned to either server B or server C for execution, because servers B and C have less load. If under the same circumstances (A:5, B:3, C:3) 4 processes are started at the same time, then when the work is distributed, the total of 11 existing top-level processes (5+3+3) plus 4 new ones will be spread equally, with servers B and C each getting 2 of the new processes.
    This is the general load-balancing algorithm used by the servers in an HA environment to decide which server runs which process instance.
    There are other factors that come into play, as some processes/activities can only run on server A or server B for technical limitations (e.g. SAP work against a particular SAP System can only be executed from one server in the environment). When those come into play, the work may end up distributed unevenly.
    Note that available memory, CPU load, or disk space on servers are not directly taken into account during load distribution.
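    As a rough illustration of that assignment rule (a plain-Python sketch with assumed names, not Process Orchestrator code):

```python
# Sketch of least-loaded assignment of new top-level processes, per the
# description above. Names are illustrative; this is not CPO code.
def assign(loads, new_count):
    """Assign new_count processes, one at a time, to the least-loaded server."""
    loads = dict(loads)
    picks = []
    for _ in range(new_count):
        target = min(loads, key=loads.get)  # least-loaded server wins
        loads[target] += 1
        picks.append(target)
    return loads, picks

# A:5, B:3, C:3 with 4 new processes -> servers B and C each receive 2.
final, picks = assign({"A": 5, "B": 3, "C": 3}, 4)
print(final)  # {'A': 5, 'B': 5, 'C': 5}
```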

  • 3rd party distributed SW load balancing with In-Memory Replication

              Hi,
              Could someone please comment on the feasibility of the following setup?
              I've started testing replication with a software load balancing product. This
              product lets all nodes receive all packets and uses a kernel-level filter
              to let only one node at the time receive it. Since there's minimum 1 heartbeat
              between the nodes, there are several NICs in each node.
              At the moment it seems like it doesn't work. I use the SessionServlet; with
              a 2-node cluster I first have the 2 nodes up and I access it with a single client:
              the LB is configured to be sticky w.r.t. source IP address, so the same node gets
              all the traffic. When I stop the node receiving the traffic, the other node takes
              over (I changed the colours of SessionServlet); however, the counter restarts
              at zero.
              From what I read of the in-memory replication documentation I thought that it
              might work also with a distributed software load balancing cluster. Any comments
              on the feasibility of this?
              Is there a way to debug replication (in WLS6SP1)? I don't see any replication
              messages in the logs, so I'm not even sure that it works at all. I do get a
              message about "Clustering Services starting" when I start the examples server
              on each node - is there anything to look for in the console to make sure that
              things are working? The evaluation license for WLS6SP1 on NT seems to support
              In-Memory Replication and Cluster. However, I've also seen a Cluster-II somewhere:
              is that needed?
              Thanks for your attention!
              Regards, Frank Olsen
              

    We are considering Resonate as one of the software load balancers. We haven't certified
              them yet. I have no idea how long it's going to take.
              As a base rule, if the SWLB can do the load balancing and maintain stickiness, that is fine
              with us, as long as it doesn't modify the cookie or the URL if URL rewriting is enabled.
              Having said that, if you run into problems we won't be able to support you, since it is not
              certified.
              -- Prasad
              Frank Olsen wrote:
              > Prasad Peddada <[email protected]> wrote:
              > >Frank Olsen wrote:
              > >
              > >> Hi,
              > >>
              > > We don't support any 3rd party software load balancers.
              >
              > Does that mean that there are technical reasons why it won't work, or just that
              > you haven't tested it?
              >
              > > As I said before, I am thinking your configuration is incorrect if in-memory
              > > replication is not working. I would strongly suggest you look at the webapp deployment
              > > descriptor and then the config.xml file.
              >
              > OK.
              >
              > > Also, doing sticky based on source IP address is not good. You should do it based
              > > on passive cookie persistence or active cookie persistence (with cookie insert,
              > > a new one).
              > >
              >
              > I agree that various source-based sticky options (IP, port; network) are not the
              > best solution. In our current implementation we can't do this because the SW load
              > balancer is based on filtering IP packets on the driver level.
              >
              > Currently I'm more interested in understanding whether our SW load balancer
              > can work with your replication at all.
              >
              > What makes me think that it could work is that in WLS6.0 a session failed over
              > to any cluster node can recover the replicated session.
              >
              > Can there be a problem with the cookies?
              > - are the P/S for replication put in the cookie by the node itself or by the proxy/HW
              > load balancer?
              >
              > >
              > >The options are -Dweblogic.debug.DebugReplication=true and
              > >-Dweblogic.debug.DebugReplicationDetails=true
              > >
              >
              > Great, thanks!
              >
              > Regards,
              > Frank Olsen
              

  • ConnectionFactory - who does the load balancing

              Consider creating a connection factory (with server affinity unticked, load balancing
              ticked, and using a message delivery policy of round robin); we then go on to
              create a distributed destination targeted at the cluster of two managed servers (managed1
              and managed2).
              If I create a simple Java app that puts messages to that distributed destination,
              using the connection factory above, who is responsible for doing the load balancing?
              Does the client create the session knowing that the connection factory requires
              load balancing and thus take responsibility for it, or does the client just
              put a constant stream of JMS messages to the WLS and the connection factory class
              takes responsibility for the load balancing?
              Who maintains the delivery state, the client application or WLS (i.e. whose job
              is it to look up the last message's queue destination)?
              

    Hi Barry,
              A JMS client's produced messages are first delivered to the WL server
              that hosts the client's JMS connection. The JMS connection
              host remains unchanged for the life of the connection.
              Once produced messages arrive on the connection host,
              they are load balanced to their JMS destination.
              For more information I suggest reading the clustering
              sections of the JMS Performance Guide white-paper. You can find
              the white-paper here:
              http://dev2dev.bea.com/technologies/jms/index.jsp
              Tom Barnes
              

  • Can I distribute the open source SWF compiler with my application?

    I'm unclear reading the documentation if there is a difference between Adobe's official Flex SDK, and the open source version?
    Can I distribute the open source SWF compiler with my application?
    I have a flash application that users can change the fonts being displayed, if they supply the fonts in a compiled SWF. I found I can let the user select a font from their computer system, and using the mxmlc command I can easily generate a SWF with the font, which can be loaded by my application so the font will be part of the run time application when played on systems without that font already in the System fonts.
    I was wondering if I could distribute the open source SDK so that I could compile these font SWFs for the users, so they would not have to get involved in complicated Flash development. The audience is a non-Flash audience.
    I tried using SWFMill but the fonts don't seem to work as they do with the mxmlc compile.
    Thank you,
    Scott Kerr

    Moreover, also check the compatibility of your open source license with the MPL.
    Regards, Giuseppe

  • 2012R2 Checkpoint Backups Fail - "The disk signature of disk1 is equal to the disk signature of disk0"

    Before reading any further, please note there are no actual disk signature problems per other threads on TechNet. I mounted all VHDs on the host and none reported offline. I manually compared all VHD disk signatures on the host and all were unique. I loaded a brand new, fresh 2012 R2 VM/VHD and at first it was fine, but after installing updates the problem occurs.
    I believe a Windows Update is causing this error but do not know which one. I had a 2012 (non-R2) VM that backed up fine; however, after upgrading it to 2012 R2 integration services and updates, this VM also has the errors listed below. Whether there is only one disk in the VM or multiple, the following error occurs. If I save all the VMs first, I get a good and successful backup, and the checkpoint/snapshot transfers to the backup location correctly. It is only when the VMs are running and checkpointed that the process fails.
    When a backup is initiated on the 2012R2 host the VM's enter into the backing up state and are still live (not saved).
    The following sequence of events occur in the VMs:
    Event ID: 58
    Source: partmgr
    "The disk signature of disk 1 is equal to the disk signature of disk 0"
    Event ID: 7036
    Source: Service Control Manager
    "The portable device enumerator service service entered the running state"
    Event ID: 58
    Source: partmgr
    "The disk signature of disk 1 is equal to the disk signature of disk 0"
    Event ID: 98
    Source: Ntfs
    "Volume E: (\Device\HarddiskVOlume3) is healthy. No action is needed"
    Event ID: 98
    Source: Ntfs
    "Volume F: (\Device\HarddiskVolume4) is healthy. No action is needed"
    Event 157
    Source: Disk
    "Disk 1 has been surprise removed"
    I can see in device manager that a Microsoft Virtual Disk is created and then disappears (greys out if viewing hidden devices)
    Once Event 157 occurs, the VM's are merged back successfully and the following error occurs on the host.
    Log Name:      Application
    Source:        VSS
    Date:          4/26/2014 8:00:58 PM
    Event ID:      8229
    Task Category: None
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      SERVER
    Description:
    A VSS writer has rejected an event with error 0x800423f3, The writer experienced a 
    transient error.  If the backup process is retried,
    the error may not reoccur.
    . Changes that the writer made to the writer components while handling the event will not 
    be available to the requester. Check the event log for related events from the application 
    hosting the VSS writer. 
    Operation:
       PostSnapshot Event
    Context:
       Execution Context: 
    Writer
       Writer Class Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Name: Microsoft 
    Hyper-V VSS Writer
       Writer Instance ID: {2aa82577-e446-4340-9afe-1e75fa3a52d4}
       Command 
    Line: C:\Windows\system32\vmms.exe
       Process ID: 2088
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="VSS" />
        <EventID Qualifiers="0">8229</EventID>
        <Level>3</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-04-27T03:00:58.000000000Z" />
        <EventRecordID>2254</EventRecordID>
        <Channel>Application</Channel>
        <Computer>SERVER</Computer>
        <Security />
      </System>
      <EventData>
        <Data>0x800423f3, The writer experienced a transient error.  If the backup process is 
    retried,
    the error may not reoccur.
    </Data>
        <Data>
    Operation:
       PostSnapshot Event
    Context:
       Execution Context: Writer
       Writer 
    Class Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Name: Microsoft Hyper-V VSS 
    Writer
       Writer Instance ID: {2aa82577-e446-4340-9afe-1e75fa3a52d4}
       Command Line: C:
    \Windows\system32\vmms.exe
       Process ID: 2088</Data>
    I have found no relevant information regarding this error; please advise if anyone has opened a case on this and has a reported solution. It appears to be an issue with the new checkpoint process in 2012 R2, possibly created by a recent update.
    Thanks in Advance

    Hi CCPD,
    Thanks for contacting Microsoft.
    From your description, I learnt that the issue you are experiencing is that you fail to back up VMs while the VMs are running. Please let me know if I misunderstand anything.
    Firstly, please let me know which backup tool you are using to perform the backup. And please let me know how the issue goes if you use the Windows Server Backup feature.
    After that, please install the newest Windows Server 2012 R2 update rollup, as below.
    ==========================================================
    Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 update rollup: February 2014
    http://support.microsoft.com/kb/2919394
    Best regards,
    Sophia Sun
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Class Equality In Checking Loader Constraint

    Hi,
    Class loaders are supposed to throw a constraint violation (LinkageError) if a class with the same name is loaded by two separate class loaders and the classes being loaded are not the same class. How is this equality of classes checked? Does the class have to come from the same location (jar file), or does it merely have to be the 'same'? If it is the latter, how is the equality determined?
    We have run into a strange situation where org.w3c.dom classes are loaded from the JRE 1.4 runtime (rt.jar) by the web server, but these classes also exist in our web application's WEB-INF/lib and get loaded by the web application context classloader. We have tried a couple of versions of jars that contain the org.w3c.dom classes in the WEB-INF/lib directory, and one of them causes a constraint violation while the other one does not. When we looked specifically at the class (Node.class) that is causing the constraint violation, the strange thing is that the version which causes the constraint violation contains the same stuff (same after decompilation) as the one in rt.jar, while the one which didn't cause a constraint violation actually had more methods. We are pretty confused as to how the loader is checking the constraint. Can someone shed some light on what is happening?
    Regards,
    Len Takeuchi

    Hi,
    Class loaders are supposed to throw a constraint
    violation (Linkage Error) if a class with the same
    name is loaded by two separate class loaders if the
    classes being loaded are not the same class. Huh?
    What does "same name" mean? The fully qualified name? It certainly has nothing to do with the name of the class itself.
    And what do you mean by "are not the same class"? That is definitely not the case if the functionality is different. Hot-loading works, and it would be pointless if the functionality and even the interface couldn't change.
    >
    We have run into a strange situation where
    org.w3c.dom classes are loaded from the JRE 1.4 runtime
    (rt.jar) by the web server, but these classes
    also exist in our web application's WEB-INF/lib and
    get loaded by the web application context classloader.
    It sounds to me like you have a class loaded by the system class loader and one loaded by a custom class loader.
    That has nothing to do with the problem above though.
    The ideal solution is for you to not do that. Other than that, I believe (but I could be wrong) that you can implement a custom class loader which does NOT try to resolve to the parent. This means that you must write it yourself. The URL one in the Java API will not work.
    We have tried a couple of versions of jars that
    contain the org.w3c.dom classes in the WEB-INF/lib
    directory and one of them causes a constraint
    violation while the other one does not. When we
    looked specifically at the class (Node.class) that is
    causing the constraint violation, the strange thing
    is that the version which causes the constraint
    violation contains the same stuff (same after
    decompilation) as the one in rt.jar while the one
    which didn't cause a constraint violation actually had
    more methods. We are pretty confused as to how the
    loader is checking the constraint. Can someone shed
    some light on what is happening?
    It isn't calling the class that you think it is.
    If you search this site there are examples of code that allows you to determine exactly where a class is loaded from.

  • Distributing the data files

    Hi All,
    We are working on the redistribution process of data files at the SQL level. We know that we could achieve it by using the empty-file option through shrink. But I would like to know: is there anything I need to consider in terms of the number of data files (any formula to arrive at a total number of data files for better performance at the SQL database level)?
    In my environment, the current database size is 420 GB and it is running with 24 data files. SAP advised us to minimize the data files because the data files are not uniform in size (some having 40 GB and a few having 90 GB), so SAP recommended that we make the data file sizes common and distribute the data equally among the files.
    So I am wondering what data file size and number suit our environment (it would be kind of you to suggest).
    Database size :420GB
    Database version : SQL 2005
    CPU core : 4 cores running and its a Xeon processor
    Regards
    Vijay

    Hi,
    But I would like to know: is there anything I need to consider
    in terms of the number of data files (any formula to arrive at a total number of data files
    for better performance at the SQL database level)?
    You can refer to the topic "Number and Size of SQL Server Data Files" in this [Best Practice document|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/4ab89e84-0d01-0010-cda2-82ddc3548c65&overridelayout=true] (pages 37 to 44). I hope you will get all the required information there that can be considered for file management in SQL Server.
    Also you can refer the following SAP Notes to get more information.
    Note 987961 - FAQ: SQL Server I/O performance
    Note 363018 - File management for SQL Server
    Note 1488135 - Database compression for SQL Server
    Regards,
    Bhavik G. Shroff

  • PCR to equally distribute amount across WPBP splits

    Hi All
    Is there a PCR operation I can use to equally distribute an amount across WPBP splits?
    For example:
    WT currently processed as below (if a WPBP split occurs):
    /0Z1 02 50.00
    I need to write a PCR which then sets the WT as follows:
    /0Z1 01 50.00
    /0Z1 02 50.00
    I've tried the many WPBPC operations but they don't seem to work correctly for this.
    Any help would be appreciated.
    Thanks
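    In plain terms, the transformation being asked for is the following (a Python sketch for illustration only, not PCR syntax):

```python
# Illustration only (not PCR syntax): replicate a wage type amount onto
# every WPBP split, so each split carries the full amount.
def replicate_to_splits(wage_type, amount, splits):
    """Return one (wage_type, split, amount) row per WPBP split."""
    return [(wage_type, split, amount) for split in sorted(splits)]

rows = replicate_to_splits("/0Z1", 50.00, {1, 2})
print(rows)  # [('/0Z1', 1, 50.0), ('/0Z1', 2, 50.0)]
```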

    You can check the standard PCR U111 for reference:
    OPIND
    ELIMI A
    PRINT
    WPBPCD - add this for the custom specification of PRCL 47 for rule X011

  • Creating Report- PS- Distributing the plan amount by period

    Hi,
    This is a question for PS module in BI.
    I have a planning amount as a KF. I need to distribute the amount based on the activity period, splitting it equally across the number of periods. E.g. planning amount = 600 and activity period = 180 days (i.e. 6 months); then I need to display the planning amount divided into 6 periods (because of the 6 months), i.e. 100 each. The starting period of the calculation is based on the fiscal period (or month) of the start date. The important point here is that the distribution changes with the activity period: if the activity period is 4 months, then the division should be over only 4 months.
    If the report is to be derived for the actual values then it is easy. I will place the fiscal period and the KFs in the columns. But how to derive such display for planning KFs? How to write the logic for this?
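    The underlying arithmetic is simple; as a sketch (assumed names, and assuming the periods stay within one fiscal year):

```python
# Sketch of the equal split described above (assumed function/argument
# names; assumes the periods do not cross a fiscal-year boundary).
def distribute_plan(amount, start_period, months):
    """Split a plan amount equally across `months` fiscal periods."""
    share = amount / months
    return {start_period + i: share for i in range(months)}

plan = distribute_plan(600, 1, 6)   # periods 1..6, each carrying 100.0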
    Regards,
    Shailesh Naik

    Hi,
    The reports starting with 'Y' or 'Z' are not standard ones. You have to check with the person who developed the report. Ask your ABAP team to help you.
    Regards,
    Eli

  • How to check the load balancing in Oracle 11gR2 2 node RAC

    Dear All,
    Can anyone please tell me how to check whether the incoming connections are being evenly distributed across the nodes?
    We have two nodes; when we check the session counts on both nodes, most of the time node 1 has more sessions than node 2. So I just wanted to know whether load balancing is happening or not. If not, how do I enable it and distribute the incoming connections evenly?
    Oracle 11gR2 / RHEL5

    SQL> select inst_id, count(*) from gv$session where username is not null group by inst_id;
       INST_ID   COUNT(*)
             1         43
             2         40
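    As a quick sanity check, per-instance session counts like those above can be reduced to a single skew figure (a plain sketch, nothing Oracle-specific):

```python
# Quick sketch: how far do per-instance session counts deviate from a
# perfectly even split? Counts come from the gv$session query above.
def session_skew(counts):
    """Max relative deviation from an even distribution (0.0 = perfect)."""
    ideal = sum(counts) / len(counts)
    return max(abs(c - ideal) / ideal for c in counts)

skew = session_skew([43, 40])  # ~0.036, i.e. about 3.6% off an even split
```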
    I am not sure how to check whether the users are connecting through SCAN or not, but below are the SCAN settings...
    SQL> !srvctl config scan_listener
    SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
    SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
    SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
    SQL> !srvctl status scan_listener
    SCAN Listener LISTENER_SCAN1 is enabled
    SCAN listener LISTENER_SCAN1 is running on node za-rac-prd-02
    SCAN Listener LISTENER_SCAN2 is enabled
    SCAN listener LISTENER_SCAN2 is running on node za-rac-prd-01
    SCAN Listener LISTENER_SCAN3 is enabled
    SCAN listener LISTENER_SCAN3 is running on node za-rac-prd-01
    SQL> !srvctl config scan
    SCAN name: rac_prd.abc.local, Network: 1/10.100.130.0/255.255.255.192/eth6.64
    SCAN VIP name: scan1, IP: /rac_prd.abc.local/10.100.130.55
    SCAN VIP name: scan2, IP: /rac_prd.abc.local/10.100.130.54
    SCAN VIP name: scan3, IP: /rac_prd.abc.local/10.100.130.53
    SQL>

  • OSB 10gR3 (WLS 10.3) - Distributed Queues & Load Balancing

    I have a question in relation to distributed queues and its JMS proxy service consumer in OSB
    I've set up a uniform distributed queue deployed using a sub-deployment resulting in the queue being targeted to the respective JMS servers in the cluster.
    I've then set up a messaging service using JMS as the transport with the following URI
    jms://server1:7011,server2:7012/weblogic.jms.XAConnectionFactory/myQueue
    When I look at the monitoring tab of my distributed queue, I can see 16 current consumers to one of the members but none for the other one. My understanding is that the proxy is just a mere MDB and as such I thought WLS was optimised to make sure all MDB instances would listen to all members of the distributed queue. Why do I have 16 consumers to one member only?
    Since only one member has consumers, any producer will always push messages to this member only. (I believe it is optimised to get a member with consumer(s) if any available)
    I've also tried to use a custom Connection Factory deployed the same way my distributed queue was, and ensure the connection factory had load balancing enabled. But no success with this either.
    jms://server1:7011,server2:7012/jms.MyConnectionFactory/myQueue
    I looked at the deployment - though not directly performed by me but rather the bus console - and it looks like the application is targeted to the cluster.
    How can I achieve true load balancing here, ensuring both members are consumed by my JMS proxy service?
    In that case, would any produced message go to either member then as both have consumers?
    Also, is the load balancing decision made by the producer when the Queue connection is created?
    If so, how do you achieve true load balancing? Do you need to ask for a new Q connection each time you want to send a message rather than caching the connection?
    Hope I am clear enough
    Thanks
    Arnaud

    This confused me too!
    The way I understand it is that, as you say, a proxy service is like a single MDB. The MDB will bind to the queue it first finds when it connects.
    The URL that you specify contains your two servers, but the first address in the URL is the one that will be used for the connection. If the first server is unavailable, then the second one will be used.
    If you have a distributed queue, this doesn’t help much, as you do end up with one of the queue members with no consumers on it.
    You can configure a forward delay for the distributed queue, which will cause WLS to forward messages to a queue with consumers, but this isn’t a good idea if you have large JMS messages as WLS needs to serialize and de-serialize across the network to move the message.
    I think that what you have to do, is define two proxy services, one connecting to the first server, and the other connecting to the second.
    I haven’t found a better way so far, and it does seem a bit over the top. But then, if you wrote an external Java client which attached to a distributed queue, you would specify the connection URL and it would behave in the same way – if you wanted it to bind to both distributed destination members, you would have to code it or run two instances. So maybe it is just working as it should – even though it seems strange.
    I think the producer will simply load balance across the distributed queue members; it doesn’t pay any attention to whether there are consumers attached – this happened to me the other day!!
    Pete
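    The producer-side behaviour described above (rotating across members regardless of whether they have consumers) amounts to a simple round robin; a hypothetical sketch with made-up member names, not the WebLogic API:

```python
# Hypothetical sketch of round-robin producer balancing across distributed
# queue members, ignoring whether a member has consumers. Member names are
# made up; this is not the WebLogic API.
from itertools import cycle

def make_producer(members):
    """Return a picker that rotates through the members round-robin."""
    ring = cycle(members)
    return lambda: next(ring)

pick = make_producer(["member1", "member2"])
sends = [pick() for _ in range(4)]
print(sends)  # ['member1', 'member2', 'member1', 'member2']
```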

  • How to get rid of the loading status at the bottom

    The loading status is like how it is in Chrome now. I absolutely hate that. I use the add-on bar, so why can't I put it back into the add-on bar? It's just additional stuff I don't need. I would also like it so that if you hover over a link, you can see it in the add-on bar instead of the address bar before you open it. The problem with it being in the address bar is that I can't see the whole link address.

    See also http://forums.mozillazine.org/viewtopic.php?f=23&t=2087945
