Coherence Extend Address-provider for dynamic load balancing

Hi,
I want to have load balancing across my proxies, so I am trying to use the <address-provider> feature.
Here is my setup,
I have two proxies and many clients. I am configuring both the proxies and the clients to use a load-sharing scheme.
There are existing implementations of the AddressProvider interface, so I am going to use one of those.
Here are the proxy server settings:
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>10</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <address-provider>
        <class-name>com.tangosol.net.RefreshableAddressProvider</class-name>
      </address-provider>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
I know there is some silly mistake in there; I am getting the following error in the log:
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.emc.srm.common.daemon.SrmDaemon.start(SrmDaemon.java:59)
... 5 more
Caused by: (Wrapped: Missing or inaccessible constructor "com.tangosol.net.RefreshableAddressProvider()"
<address-provider>
<class-name>com.tangosol.net.RefreshableAddressProvider</class-name>
</address-provider>) java.lang.InstantiationException
at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2542)
at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2426)
at com.tangosol.net.ConfigurableAddressProvider.createAddressProvider(ConfigurableAddressProvider.java:277)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.acceptor.TcpAcceptor.configure(TcpAcceptor.CDB:78)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ProxyService.configure(ProxyService.CDB:67)
at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:6)
at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1057)
at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:892)
at com.tangosol.net.DefaultCacheServer.startServices(DefaultCacheServer.java:81)
at com.tangosol.net.DefaultCacheServer.intialStartServices(DefaultCacheServer.java:250)
at com.tangosol.net.DefaultCacheServer.startAndMonitor(DefaultCacheServer.java:55)
at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
... 10 more
Caused by: java.lang.InstantiationException
at sun.reflect.InstantiationExceptionConstructorAccessorImpl.newInstance(InstantiationExceptionConstructorAccessorImpl.java:30)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at com.tangosol.util.ClassHelper.newInstance(ClassHelper.java:587)
at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2501)
... 23 more
Cannot start daemon
My questions:
1) Without an address-provider section in the proxy server configuration XML, will a default address provider implementation be used?
2) What is wrong with the proxy server settings XML that causes the above error?
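For context: whatever class is named in <class-name> is instantiated reflectively by Coherence, so it must have a public no-argument constructor; the log above is complaining that com.tangosol.net.RefreshableAddressProvider does not expose one. Below is a minimal sketch of a custom provider with such a constructor - the class name and the hard-coded addresses are made up for illustration only:
import com.tangosol.net.AddressProvider;
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical class; reference it from <class-name> inside <address-provider>.
public class MyAddressProvider implements AddressProvider {
    // Placeholder addresses purely for illustration.
    private final List<InetSocketAddress> addresses = Arrays.asList(
            new InetSocketAddress("PROXY-A", 9099),
            new InetSocketAddress("PROXY-B", 9099));
    private Iterator<InetSocketAddress> iter = addresses.iterator();

    // Public no-arg constructor - required for reflective instantiation via <class-name>.
    public MyAddressProvider() {
    }

    // Return the next address to try, or null once the list is exhausted.
    public InetSocketAddress getNextAddress() {
        return iter.hasNext() ? iter.next() : null;
    }

    // The last address returned was usable; start over on the next round.
    public void accept() {
        iter = addresses.iterator();
    }

    // The last address returned could not be used.
    public void reject(Throwable eCause) {
    }
}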
Thanks
Prab

Hi LuK,
It sounds like no load balancing is happening in my environment.
I didn't use the address-provider tag, since the AddressProvider interface API mentions that the ConfigurableAddressProvider implementation is used by default.
Here is an XML snippet from the remote client cache configuration:
<tcp-initiator>
  <remote-addresses>
    <socket-address>
      <address>PROXY-A</address>
      <port>9099</port>
    </socket-address>
    <socket-address>
      <address>PROXY-B</address>
      <port>9099</port>
    </socket-address>
  </remote-addresses>
  <connect-timeout>10s</connect-timeout>
</tcp-initiator>
What I mean by load balancing is this:
I will have a lot of web service queries that run very fast, and I would like to see both proxies serving those requests.
What I am observing is that one connection is made to one proxy at startup and is held on to (regardless of whether the other proxy is free or not).
Is this a bug, or am I making a mistake?
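For what it's worth: a Coherence*Extend client opens a single TCP connection to one proxy and keeps using it for all of its requests, so balancing happens per client connection rather than per request; a single client process will therefore always appear to use only one proxy. If the goal is to spread many client connections evenly across the two proxies, later Coherence releases (3.7 and up, if I remember correctly) also support a <load-balancer> element on the proxy-scheme. A rough sketch under that assumption - element ordering and availability should be checked against your version's cache-config schema:
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>10</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <!-- listen address configuration omitted -->
    </tcp-acceptor>
  </acceptor-config>
  <load-balancer>proxy</load-balancer>
  <autostart>true</autostart>
</proxy-scheme>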
Thanks
Prab

Similar Messages

  • NIC teaming - what is dynamic load balancing?

    When setting up NIC teaming in Windows Server 2012 I have the option of selecting "Address Hash", "Hyper-V Port", or "Dynamic" for the load balancing mode. The TechNet documentation explains "Address Hash" and "Hyper-V Port", but there is nothing about "Dynamic". Is there anywhere I can find a description of what the "Dynamic" option provides?

    Microsoft's official recommendation is to use Dynamic load balancing in most configurations.
    Section 3.3 of the NIC Teaming Deployment Guide explains what Dynamic is, and Section 3.4 suggests when to use Dynamic load balancing and when to use other modes.
    I suggest reading the Guide from start to finish. I learn new things every time I look at it.

  • Best way for HTTP load balancing in OSB

    Hi everybody,
    We have set up an OSB cluster and we need to load balance HTTP requests across the managed servers. Looking for information about load balancing in OSB, I found that there are mainly two options: using a hardware load balancer, or a software solution like the WebLogic HttpClusterServlet. At the moment we have no hardware balancer available, so we will have to take the software option. I found some articles about configuring HttpClusterServlet, like http://redstack.wordpress.com/2010/12/20/using-weblogic-as-a-load-balancer.
    But I have a question about this configuration. If we use a managed server as an HTTP proxy that balances requests between OSB managed servers, what would happen if this server goes down? I think one of the main goals of a clustered deployment is avoiding a single point of failure, but with that setup all requests would depend on the availability of the proxy managed server.
    Could you recommend us a setup for implementing load balancing in OSB?
    Thank you in advance,
    Daniel.

    Load balancing of HTTP requests in a cluster can be achieved in at least four different ways:
    (1) - use a hardware load balancer like F5 BIG-IP LTM
    (2) - use a web server with the WebLogic plug-in to front the cluster
    (3) - use WebLogic with HttpClusterServlet (see the web.xml sketch after this reply)
    (4) - use DNS round robin - this works if you have managed servers running on two machines (say mach1 and mach2) but on the same port. HTTP clients use the hostname 'mach' to access the URLs, and DNS resolves 'mach' to the mach1 and mach2 IP addresses in round-robin fashion.
    All the options except (1) achieve only load balancing, not automatic failover, on all instances. Hardware load balancers have the extra feature of probing (sending periodic pings to the targets), by which they can detect whether a target resource is alive and, if not, send the traffic to other nodes that are alive; this is why hardware load balancers are worth their investment.
    The other options may work if the client is coded to retry on failure, so that on the second or subsequent attempt the request is routed to a machine that is alive.
    For options (1), (2) and (3) you also need some redundancy in the load balancing device (web server, WebLogic instance or hardware load balancer) to prevent a single point of failure; hardware load balancers are usually deployed in redundant pairs to achieve this.
    Edited by: atheek1 on 22/11/2011 15:31
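    To illustrate option (3) above: HttpClusterServlet is configured in the web.xml of a small proxy web application deployed to the fronting WebLogic server. A rough sketch, with placeholder host names and ports that you would replace with your OSB managed servers:
    <servlet>
      <servlet-name>HttpClusterServlet</servlet-name>
      <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
      <init-param>
        <param-name>WebLogicCluster</param-name>
        <!-- placeholder OSB managed server addresses -->
        <param-value>osb1.example.com:8011|osb2.example.com:8011</param-value>
      </init-param>
    </servlet>
    <servlet-mapping>
      <servlet-name>HttpClusterServlet</servlet-name>
      <url-pattern>/</url-pattern>
    </servlet-mapping>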

  • GSS act as an authoritative DNS for non-load balanced sites?

    I have a client asking if a GSS can be the authoritative DNS server for their entire domain.  This would include sites that are not load balanced.
    TIA,
    Dan

    Hi Dan,
    Yes, you would just have to create a new domain under the Domain Lists, create an Answer Group associated with that domain, and then you can start adding DNS answers. For non-load-balanced sites you would just have one answer in your answer group.
    Sincerely,
    Kyle

  • Getting IP address for Internal Load Balancer

    I've recently been experimenting with internal load balancing for VMs. I'm able to create and delete an internal load balancer (ILB) using the .NET wrapper for the API (https://github.com/Azure/azure-sdk-for-net). What I cannot do, though, is actually get the internal address for it, nor does it seem you can get it from the REST API (which, as far as I know, is what the .NET wrapper wraps). The only method I can see that claims to get the address is PowerShell.
    Can anyone confirm whether there is any way, using the REST API or its .NET wrapper, to obtain the internal address for the ILB?

    I have not looked into the .NET wrapper that you mentioned here, but according to this PowerShell script:
    http://msdn.microsoft.com/en-us/library/azure/dn690125.aspx
    $svc="<Cloud Service Name>"
    $ilb="<Name of your ILB instance>"
    $subnet="<Name of the subnet within your virtual network-optional>"
    $IP="<The IPv4 address to use on the subnet-optional>"
    Add-AzureInternalLoadBalancer -ServiceName $svc -InternalLoadBalancerName $ilb -SubnetName $subnet -StaticVNetIPAddress $IP
    The IP address is optional, so maybe the wrapper hasn't implemented this, which is kind of undesirable. But maybe it allows you to specify the IP?
    Frank

  • SUNW.gds for jboss + Load Balancing Group = Failed to connect to host ...

    Hi all!
    In a simple two node cluster (Solaris cluster 3.2) with quorum server I've created a resource for jboss 5.1.0 using SUNW.gds. It is supposed to be load-balanced. To achieve that I've followed instructions from [http://download.oracle.com/docs/cd/E18728_01/html/821-1258/gds-25.html]
    The command I've used to create the resource was:
    clresource create -g scalable-rg -t SUNW.gds -p resource_dependencies=vip -p Scalable=TRUE -p Start_timeout=400 -p Stop_timeout=400 -p Probe_timeout=30 -p Port_list=8080/tcp -p Start_command="/opt/jboss-5.1.0.GA/bin/run.sh -b 0.0.0.0" -p Child_mon_level=0 -p Failover_enabled=TRUE -p Stop_signal=15 -p Load_balancing_policy=LB_STICKY_WILD jboss-rs
    The whole configuration seems to work, but when the second node joins the cluster, the resource with JBoss can't bind to the shared IP address. There are many entries in /var/adm/messages like:
    Jan 19 13:46:35 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:35 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:37 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:37 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:39 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:39 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:41 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:41 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:43 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:43 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    However, after some time it finally binds to the shared IP and port 8080.
    Is this something I should be worried about, or is it normal, since it takes some time to bring up the interface with the shared IP? I've never had to install such a configuration, and I have neither the intuition nor the experience. Any help would be very appreciated.
    Thanks a bunch,
    Bart

    Hi, a couple of things to check:
    - Did you check that both JBoss instances were up and running?
    - Can you check in the logs (on both nodes) when a message saying something like "service .... registered..." showed up? It should show up for both JBoss instances. From the time of the second "registered" message, the load balancer should send incoming requests to both instances using its distribution mechanism.
    - Did you check that the VIP address is on an external interface on one of the nodes and on lo0 on the other?
    - "Connection refused" looks like a server-side problem; can you connect to the JBoss instance locally?
    Hth
    Hartmut

  • Dynamic Load Balancing vs Static Load Balancing

    What do you guys recommend, dynamic or static load balancing with logon groups? This is for a two-instance environment: one central instance and one dialog instance.
    Thank you,
    Samer

    I would say (as so often) - it depends.
    If your CI also carries the batch and update processes and is not overloaded, you could add it to the logon group. If you have a heavy batch load (such as we have) and the CI is already loaded, just use the DI for the interactive work.
    Markus

  • Can't see Native Authentication Provider while configuring Load Balance Manager

    I am configuring the Load Balance Manager in FDM 11.1.1.3. I followed the steps as per Oracle, and am setting my local Windows 2003 default Administrator account as the username for everything. Everything worked fine up to the point where I specify the authentication provider. I wanted to use Native Authentication, but when I try to add an authentication provider, the only options I have are Shared Services, Visual Basic Script Authentication and Visual Basic SSO. I can't figure out why Native Authentication is not there.
    To be completely honest, I don't care what mode it uses as long as it works. What are the implications of using Native vs. Shared Services? Say I choose Shared Services, do I have to do anything in Shared Services as part of the configuration?
    I am running Hyperion Planning, Essbase.

    When you say "native authentication" are you referring to the shared services native directory or are you expecting to see NTLM, MSAD, or LDAP as available authentication providers?
    All authentication is now handled via shared services in 11.1.1.3. You will need to specify the provider as shared services and then add your MSAD or LDAP providers within shared services and provision the users for the FDM application(s).
    NTLM is no longer supported and has been removed from the FDM list of providers as well as an available external provider option in shared services.

  • Not able to find mod_oc4j.conf file for setting Load Balancing on ESB

    Hi Guys,
    I am trying to set up load balancing on ESB. I got a document from the Oracle site about setting up load balancing on ESB, and according to that document we have to change the mod_oc4j.conf file to set the roundrobin:weighted, metric or random algorithms.
    But I am not able to find the mod_oc4j.conf file in my ORACLE_HOME directory.
    The expected path is
    ORACLE_HOME/Apache/Apache/conf/mod_oc4j.conf
    I have all the folders up to conf, but there is no mod_oc4j.conf file in the conf folder.
    I just want to know whether there is an installation problem, or whether we have to create the mod_oc4j.conf file in that conf folder to set up load balancing on ESB.
    Kindly give me the exact solution for this.
    Thanks in advance
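    For reference, the algorithm the document refers to is selected with mod_oc4j directives in that file; the sketch below is from memory of the 10g OHS documentation, so treat the exact directive names and syntax as an assumption to verify against your release:
    # roundrobin is the default; random and metric are the alternatives,
    # and :weighted / :local are modifiers
    Oc4jSelectMethod roundrobin:weighted
    # relative routing weights per host when weighted selection is used
    Oc4jRoutingWeight host1 3
    Oc4jRoutingWeight host2 1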

    Hi Chintan,
    I have the Apache folder, and in it I have the conf folder, but I am not able to find mod_oc4j there. I know ESB maintains the load in a transparent way, but in my project we have a heavy load on ESB and we have to manage this load properly.
    Please give me the exact solution to set up load balancing on ESB.
    Thank you
    Bollineni

  • Forms Services availability checking for BIGIP Load Balancer

    We are load balancing across a number of 10.1.2.2 Forms servers using a BIGIP load balancer. Currently our load balancing is done based on which server has the "least connections" to the BIGIP. So far we have been using the following test URL to allow BIGIP to check the availability of the Forms Services on each server.
    http://server:port/forms/frmservlet?ifcmd=status
    This works well; however, it only checks down to the HTTP level within Forms Services. We encountered a problem where Forms Services failed on a particular server, yet the above URL showed that everything was OK. The effect of this was that all new users attempting to log in were directed to the failed server, as this server had the "least number of connections".
    After raising an SR with Oracle they advised that the forking of runtime processes had probably failed and this was not detectable by the load balancer with the above URL. So they have recommended a number of options for checking the status of the Forms Services. These are:
    a) http://server:port/forms/frmservlet
    This loads the default Form and therefore by definition tests the forking of runtime processes. However BIGIP is unable to automatically process the information to distinguish whether the service is up or down. Oracle recommended that if using this method we would need to customise BIGIP to handle the various FRM-xxxx error codes.
    b) http://server:port/forms/frmservlet?userid=scott/tiger@YOURDB&form=yourtestform.fmx
    Even more thorough would be to actual log on to the database using a test form as above.
    My question is: does anyone out there have experience in checking Forms Services availability using these last two methods? I'm not sure how to customize the load balancer so that it can handle the output of these URLs. Also, when using the original URL, is it normal to load balance using a "least connections" method, or do people out there use a different algorithm?
    Thanks for any help/advise that you can give.
    Regards,
    Philippe

    Well, the SR was followed up and it looks like the only course of action is to use the standard HTTP check: http://server:port/forms/frmservlet?ifcmd=status ...
    ... unless, that is, you want to do some serious customisation. Oracle doesn't support any other form of checking.
    I'm guessing from the lack of responses to this thread that this hasn't been an issue for anybody else ... ???
    Any thoughts/suggestions really welcome as we go into production in 4 weeks.
    a) What do people recommend for load balancing Forms ... least connection, round robin ... ?
    b) Do people use http://server:port/forms/frmservlet?ifcmd=status or have some of you used something else?
    Thanks,
    Philippe

  • Question on WCCPv2 - bucket assignment for WCCP2 load balancing

    Hello,
    I would like to know if any one has tried out running Cisco's WAAS/WAFS/WAE or
    Squid proxy as a cache cluster to leverage load balancing support in WCCPv2.
    I am trying to understand WCCP-based transparent network redirection in a lab setup using Squid's WCCP support and Cisco routers only. When I tried with 2 proxies for load balancing, I saw that the router *always* allocates buckets in the reverse order of the specified assignment - it's confusing as it's not mentioned in Cisco's WCCP2 protocol drafts.
    In my case, the lead cache with the lowest IP assigns buckets 0-127 to itself and 128-255 to the other; but the router assigns buckets 0-127 to the second cache and 128-255 to the lead cache.
    I have attached the ethereal trace. Can someone explain what is going wrong here?
    The issue was found in the following router versions:
    Cisco 3600, IOS 12.3(1a);
    Cisco 2600 IOS 12.3(9a);
    Cisco 2800 IOS 12.4(3d)
    Squid proxy:
    2.5
    WCCP status output - all the routers above show the same behavior.
    From the trace, 192.168.8.231 specifies the bucket distribution as it's the lead cache with a lower IP than 192.168.41.232.
    router#sh ip wccp 99 detail
    WCCP Cache-Engine information:
    Web Cache ID: 192.168.41.232
    Protocol Version: 2.0
    State: Usable
    Initial Hash Info: 00000000000000000000000000000000
    00000000000000000000000000000000
    Assigned Hash Info: 00000000000000000000000000000000 FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    Hash Allotment: 128 (50.00%)
    Packets Redirected: 0
    Connect Time: 00:23:06
    Bypassed Packets
    Process: 0
    Fast: 0
    CEF: 0
    Web Cache ID: 192.168.8.231
    Protocol Version: 2.0
    State: Usable
    Initial Hash Info: 00000000000000000000000000000000
    00000000000000000000000000000000
    Assigned Hash Info: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    00000000000000000000000000000000
    Hash Allotment: 128 (50.00%)
    Packets Redirected: 0
    Connect Time: 00:23:05
    Bypassed Packets
    Process: 0
    Fast: 0
    CEF: 0
    Thanks in advance.

    I'm not an expert on the details of WCCP, but it looks like Squid is not setting the bucket info correctly.
    From the draft [below], the first bit is the A flag - for alternative hashing.
    And the alternative hashing is determined by another flag in the service info.
    So why is Squid setting this bit?
    I feel like they forgot to shift the index by 1 bit to the left.
    Bucket 0-255
    Contents of the Redirection Hash Table. The content of each bucket is a
    web-cache index value in the range 0-31. If set the A flag indicates
    that alternative hashing should be used for this web-cache. The value
    0xFF indicates no web-cache has been assigned to the bucket.
    0 1 2 3 4 5 6 7
    +-+-+-+-+-+-+-+-+
    | Index |A|
    +-+-+-+-+-+-+-+-+
    I'm double-checking with the developers of our cache, but I feel like this is the explanation.
    More info to come if I'm wrong.
    Gilles.

  • Azure: "Cloud Services" for VM - Load Balancing, yes, and other things?

    I'm trying to get a handle on the significance of the cloud service (that is created when a new VM is created). I understand that a group of VMs needs to belong to the same cloud service in order to participate in load balancing. I can't see any other reason to group VMs into a single cloud service. On the other hand, it seems like overkill to create a cloud service for each VM.
    Are there any advantages/reasons for adding a group of VMs to a cloud service other than load balancing?

    Hi,
    If you make a group of VMs into a cloud service, you can configure and manage them yourself. You can select Linux or Windows Server VMs and either compose the VM images in the cloud or upload a VHD you previously created using Hyper-V. You can capture a VM and add it to your image gallery for easy reuse, and you can also run products like Active Directory, SQL Server or SharePoint Server successfully, etc.
    I suggest you have a look at the following article (creating a VM as a cloud service belongs to IaaS):
    http://davidpallmann.blogspot.in/2012/07/windows-azure-is-3-lane-highway-how-to.html
    Best Regards

  • ConnCacheBean Setting for a Load Balance URL

    Hi,
    We are using the ConnCacheBean to set the URL property. We have a new load-balanced database; how can I pass that on as a parameter? Thanks.
    <jsp:setProperty name="tdbean" property="URL" value= "jdbc:oracle:thin:@----" />
    Thanks.
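    One common approach - an assumption here, since the exact database setup isn't described - is to pass a full connect descriptor with LOAD_BALANCE=on as the URL value; the hosts and service name below are placeholders:
    <jsp:setProperty name="tdbean" property="URL" value="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=db1.example.com)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=db2.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))" />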

    Hi,
    I would pass this to the database forum as this class belongs to the Oracle JDBC package
    Frank

  • Dynamic Load Balancing

    We have developed client-server communication using a Tuxedo server and workstation clients. This approach should handle 500 concurrent requests.
    We want to configure our Tuxedo server so that it can handle all the requests in parallel.
    Our current UBB configuration is below:
    WSL SRVGRP="WSLGRP" SRVID=1 RESTART=Y MAXGEN=255 GRACE=3600
    ##CLOPT="-A -- -n //10.223.26.28:12791 -m 5 -M 10 -x 15 -p 16914 -P 16924"
    ##CLOPT="-A -- -n //10.223.26.28:12791 -m 2 -M 4 -x 45" ##Minimum 90 ports needs to be available
    CLOPT="-A -- -n //10.223.26.28:12791 -m 2 -M 9 -x 60" ##Minimum 120 ports needs to be available
    "REQ_HANDLER" SRVGRP="DATA" SRVID=22280
    CLOPT="-A -r -e /log/timings/stderr -o /log/tuxlog/req_handler_stdout"
    RQPERM=0666 RPPERM=0666 MIN=1 MAX=200 CONV=N
    SYSTEM_ACCESS=FASTPATH MAXGEN=255 GRACE=3600 RESTART=Y
    The problem with the above configuration is that it does not boot another instance of the REQ_HANDLER server when multiple requests arrive.
    As an alternative, we have set both MIN and MAX to 10 to make sure it can handle concurrent requests.
    But we want a configuration where, if multiple requests are received, another instance of the server is booted automatically and shut down again (except for the MIN instances) when there are no requests.
    Please advise.
    Thanks in advance

    Hi Vikas,
    It's not really possible to give you an answer just by looking at the config; you will need to observe the queue behaviour to get an idea of what is happening; but perhaps the following will help.
    You mention that two servers have booted, so since MIN=1 that does indicate that automatic spawning has taken place. What I think is happening is that the two servers are coping with the workload. When you say you "push 20 requests", how quickly did these arrive, and were the servers able to cope with them? You can use the pq command in tmadmin to view the queue "REQ_HANDLER" - based on your config, the queue would need to exceed the load of 50 for at least 5 seconds to cause another server to start, and you may find that by the time the second server has started up the 20 messages have already been processed (I think I am correct in saying that automatic spawning will only be tested again once the second server has completed its startup and is able to process messages). Also remember that if the queue depth drops below the level you specified, even briefly, the timer starts again from zero. Do you know how long it takes for the service to execute (you can use the -r CLOPT option and run "txrpt" to determine the execution time)? You can also use the psc and psr commands of tmadmin to see which servers are actually processing the messages.
    I notice you have used the "L" option in -p, and are working with service loads rather than number of messages. If the messages all have the same load value then you can simply use the number of messages as an alternative in the -p option.
    Regards,
    Malcolm.

  • CSS for Dynamically Loaded External Text

    I'm externally loading text from a .txt file. I need to format that file using CSS. I've got the CSS file set up and everything. I've tried every tutorial I can find to try and load the CSS, but nothing is working.
    Here is the script for loading the text:
    var textLoader:URLLoader = new URLLoader();
    var textReq:URLRequest = new URLRequest("content_files/commissions.txt");
    textLoader.load(textReq);
    textLoader.addEventListener(Event.COMPLETE, textLoadComplete);
    function textLoadComplete(event:Event):void {
        // Normalize Windows line endings before displaying the text.
        commissionscontent.htmlText = textLoader.data.split("\r\n").join("\n");
        if (commissionscontent.maxScrollV <= 1) {
            scrollBox_mc.visible = false;
            scrollLine.visible = false;
        } else {
            scrollBox_mc.visible = true;
            scrollLine.visible = true;
        }
    }
    Can anyone help me load the CSS for this? I'm really new to Flash and have just been teaching myself, but now I've hit a wall and I am so incredibly frustrated that it's not even funny.

    var css_loader:URLLoader = new URLLoader();
    var textLoader:URLLoader = new URLLoader();
    var my_css:StyleSheet = new StyleSheet();
    var my_txt:TextField = new TextField();
    // Load the stylesheet first; load the text only after the CSS has been parsed.
    css_loader.load(new URLRequest("style.css"));
    css_loader.addEventListener(Event.COMPLETE, onCSSComplete);
    function onCSSComplete(e:Event):void {
        my_css.parseCSS(e.target.data);
        textLoader.addEventListener(Event.COMPLETE, textLoadComplete);
        textLoader.load(new URLRequest("commissions.txt"));
    }
    function textLoadComplete(e:Event):void {
        // Assign the stylesheet before htmlText so the styles are applied.
        my_txt.styleSheet = my_css;
        my_txt.htmlText = e.target.data;
        my_txt.width = 300;
        my_txt.autoSize = TextFieldAutoSize.LEFT;
        my_txt.wordWrap = true;
        addChild(my_txt);
    }
