Xserve Load Balancing

Hi,
Our company is planning to use two Apple Xserves as our media streaming servers, and our estimated transfer rate is around 6Gbps. We are planning to use load balancing hardware. Could you recommend a load balancer that works well with the Xserve?
Thanks in advance.

Well, at the simplest level, assuming you're using gigabit ethernet, you're going to need at least 6 servers even if you could max out each gig-e port, which never happens in practice.
Assuming you can get a 50% utilization rate (which is good going), you'll need 12 machines.
Are you sure about those 10,000 connections? If that's accurate, each client would be pulling a 600kbps stream, which is at the high end for streaming.
Also, if these are internet-based clients, do you have the network infrastructure to handle that kind of throughput?
Routers with the capacity to handle that kind of bandwidth typically cost hundreds of thousands of dollars, not to mention the bandwidth costs - even at $10/Mbps you're talking $60,000/month.
Don't get me wrong, if your numbers are right it's a nice project to run, but it's very difficult to jump from 0 to 10,000 concurrent connections, or 6Gbps, in no time at all.
Also, if this isn't an ongoing thing you might want to consider using a content delivery network such as Akamai to handle the client traffic - your two servers would be enough to seed the content to Akamai, leaving them to serve it to the 10,000 end users.
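For reference, here's the back-of-envelope arithmetic behind the figures above as a quick shell sketch (the 600kbps per-stream figure and $10/Mbps price are the assumptions already stated, not measured numbers):

    echo $(( 6000000 / 10000 ))   # kbps per stream if 6Gbps is shared by 10,000 clients -> 600
    echo $(( 6000 / 1000 ))       # gig-e ports needed at a theoretical 100% utilization -> 6
    echo $(( 6000 / 500 ))        # ports needed at a more realistic 50% utilization -> 12
    echo $(( 6000 * 10 ))         # monthly bandwidth cost in dollars at $10/Mbps -> 60000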

Similar Messages

  • Round robin DNS for load balancing between multiple network adapters (Xserve)

    I'm attempting to use 'round robin' DNS to load balance between the two ethernet adapters of an Xserve.
    Both ethernet adapters are connected to the same LAN and have static IP addresses of 192.168.2.250 and 192.168.2.251.
    The DNS zone for the server's local domain/host (macserver.private) has a machine record with both IP addresses (set up in the Lion Server UI).
    Having read up on round robin DNS, I would have expected DNS requests for 'macserver.private' to be answered with the two IP addresses ordered at random, achieving my aim of requests being served at random via each ethernet adapter.
    However, this doesn't seem to be the case. Doing an 'nslookup' from any of the network clients lists the two IP addresses in the same order every time, and pinging 'macserver.private' only ever gets a response from the same address.
    Does anyone know why this is the case? Does Lion Server use a non-standard DNS configuration? Are there any additional settings I need to configure in Lion's DNS server to make it adopt a round-robin approach to responding to requests?
    Thanks in advance for any help!

    Be careful what you wish for
    Round Robin DNS is rarely the best option for 'load balancing'. At the very least it's subject to caching at various points on the network - even at the client side: once the client looks up the address it will cache the response, which means subsequent lookups may be served from the client's cache and never refer back to the server. Therefore any given client will always see the same address until its cache expires.
    I suspect this is what you're seeing.
    You can minimize this by setting a lower TTL on the records. This should result in the response being cached for a shorter period, meaning the client will make more requests to the server, with a higher chance of getting the 'other' address.
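    As a rough illustration of the low-TTL idea (a sketch only - in Lion Server the records are normally managed through the Server UI rather than edited by hand, and the 60-second TTL is just an example value):

        # BIND-style records behind the Server UI entry, with a 60s TTL:
        #   macserver.private.  60  IN  A  192.168.2.250
        #   macserver.private.  60  IN  A  192.168.2.251
        # check what clients actually receive, and in what order:
        dig +noall +answer macserver.private @192.168.2.250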
    However, you're also going to run into issues with the server having two interfaces/addresses in the same LAN. This isn't recommended.
    As Jonathon mentioned, you may be better off just bonding the two interfaces. This will provide a level of dynamic load balancing without the latency of DNS caches, as well as automatic failover should one link fail (as opposed to round robin DNS, which will cause 50% of requests to fail until the client cache expires and a new lookup is performed - and even then there's still a chance the client will try the failed address).
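    If you do go the bonding route, a minimal command-line sketch (this assumes the switch ports support 802.3ad link aggregation; the bond name is arbitrary, and the same thing can be done from the Network preference pane, which may be the easier route):

        sudo networksetup -createBond "bond0" en0 en1
        networksetup -listAllHardwarePorts    # the new bond should appear as a virtual interface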

  • iChat Load Balancing or failover solution?

    Hello, I am working on a plan to deploy an iChat server. I think a Mac Mini would be a good start for a group of 50 users. The users are all over the country and my role is to unite them all in one iChat domain. I thought about building two Mac Mini servers and having them serve the same domain, with all users registered in it, so we would not be impacted when one of them goes down.
    Anyway, the question is: how can I have a load balancing or failover solution for the iChat domain?

    On the issue of load balancing: whilst I don't have any experience with Mac Minis, you will not need to worry about load balancing with 50 users. You could probably put a few noughts on the end of that number before you need to worry.
    The design you are proposing will not work for iChat services, and for that matter most Apple server services. For high-availability services (i.e. transparent failover) I think you are going to struggle to get this working, and it seems Apple no longer offers guidance on this subject for 10.6.x.
    You will increase availability by using an Xserve with dual PSUs and RAID disks. If you are only running highly available iChat services, I would buy a pair of second-hand Xserves running 10.5 and set the IP failover service running. Personally I would buy one plus a service kit and not bother with HA, as you will find the servers are very reliable.
    If you have to use Minis then just have one live and keep a near-constant clone of it on another, ready to manually swap in if you have a hardware failure (see the sketch at the end of this reply).
    Your proposed design will not work without a lot of effort, none of which is supported by Apple - although it would be rewarding if you did get it working. You cannot have server-to-server traffic for the same domain, as all your application data needs to be stored centrally; OD only provides services for authentication. The iChat server also has its own data store, and this is not distributed, nor can it be. It is possible to move the data store over to, say, an enterprise version of MySQL and have that distributed.
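    A minimal sketch of the manual-clone approach mentioned above, assuming Apple's bundled rsync (its -E flag copies extended attributes) and a purely illustrative path and hostname; a truly bootable clone would normally be made with asr or a dedicated cloning tool instead:

        # run periodically on the live mini to keep the standby's copy fresh
        sudo rsync -aE --delete /Library/Server/ admin@standby-mini.example.com:/Library/Server/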

  • Distributed HA cluster with load-balancing and failover: advice?

    My workplace has a Xeon Xserve, which acts as our primary external server, with an attached ActiveStorage XRAID. We have just purchased a second Xserve/XRAID set to act as a mirror, which we will colocate. Both have Leopard Server installed, along with an array of additional software.
    What we want to do is have both servers load-balance between the two, with failover in case of a server or XRAID fault. I plan on using RSYNC to mirror static files between the two, and I'm looking into PostgreSQL replication and load-balancing solutions for our database. I gather that Apache supports web-server failover and load-balancing, as well. But, that still leaves the actual host and network setup to arrange.
    Does Leopard server support such a thing? The only information I found on IP failover instructs the user to place the two servers on the same subnet, directly connected via ethernet cable; obviously, this would not work in my case.
    Ideally, what we'd end up with is a situation in which the two systems kept each other in sync, both in static files and database data, and load-balanced between themselves; in cases of failure, the remaining system would transparently assume all duties until the other was restored, at which point they would resynchronize.
    Any suggestions on how I could arrange such a thing?

    Interesting. Does this DNS-based approach support session tracking, though? I would need to have a user directed to just one of the two servers for the duration of their session, to avoid having to synchronize temporary files and such.
    You can't have it both ways. You need to build tolerance into the app.
    At the simplest level where you run all traffic to one site and use the second site as a failover/standby site you'll be OK most of the time - all users will go to the same server and their sessions will be intact.
    However, under any failover situation (your primary site is down for some reason), there is going to be some level of session traffic that is going to switch over to the other site. If your site depends on sessions then you're going to need to tolerate this kind of situation - your app will need to fail gracefully if a user comes in with an invalid session cookie.
    Note, though, that this may be less of an issue than you at first think - all DNS clients will cache DNS data for whatever TTL you set. This means that if a user looks up your site name and you return an IP address with a 30 minute TTL, then that user is going to use the same IP address for the next 30 minutes and isn't going to ask the server again. This should negate most chance of a user suddenly switching from one server location to the other in mid-session.
    The trick comes in setting the DNS TTL low enough to effect a failover, yet not so low that you impact performance - e.g. you don't want the user to perform a DNS lookup on every page load. You may find that 10 minutes is appropriate. Just bear in mind that this also affects how long a user could see your site 'down' before the failover DNS kicks in. Clearly you don't want to set the DNS TTL to a day, since that may prevent the user switching to the secondary site for 24 hours, by which time, hopefully, the primary site is back up anyway.
    The 'right' TTL value may take some analysis on your traffic to see how long a typical user 'session' is. If the average user spends 20 minutes on your site, then it would make sense to set your TTL to somewhere around 20 minutes to give the best chance of their entire session staying on the same server.
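    If it helps, you can watch the TTL a client is actually working with - the value counts down on repeated queries against a caching resolver (the hostname here is a placeholder):

        dig +noall +answer www.example.com A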

  • IP failover, load balancing and notification...

    Pretend I have the following setup/hardware:
    Two Intel Xserves running 10.4. One is for http traffic, the other https. The http server contains a static html website, while the other server has a large dynamic database-driven website, all of whose pages require ssl encryption. I'll refer to the first as server 1 and the other as server 2.
    Now I want to implement a solution for providing high availability and performance.
    If I wanted IP failover I would need two additional servers, one for the first webserver and the second for the other. Likewise if I wanted to address load balancing I would also need two additional servers, one for server 1 the other for server 2. Now my questions:
    1) It seems that implementing load balancing as described on page 32 of Apple's High Availability PDF would also provide high availability, like IP failover does. If two additional servers were purchased to provide high availability via a load balancing strategy, would there be any need to implement IP failover? Does load balancing provide the same benefits as IP failover when talking about high availability? When, if ever, would one need to implement both strategies?
    2) Can you somehow provide IP failover with only one server as the backup using the setup above (a third server to provide IP failover for both servers 1 and 2)? Assume the third server has all the data of both server 1 and server 2.
    3) Is it possible to have Server Admin or Raid admin notify you of a problem via calling your cell phone or sending you a text message as opposed to only email, maybe via a third party solution? I think (not 100% sure) APC offers this when the power left in their batteries reaches a certain level.
    Thanks.
    G5 xserve   Mac OS X (10.4.8)  

    1) There's generally no need to implement IP Failover at the server level if you're already using a separate load balancing solution. The load balancer should be able to take care of dealing with a failed server.
    2) Good question - it's not clear whether IPFailover will failover for one machine or more than one.
    3) Most cellphone providers offer an email-to-SMS gateway, allowing you to send an email to an address that's forwarded to your phone as a text message. Check with your cellphone provider for what that address might be (e.g. Cingular uses <phonenumber>@cingularme.com, Verizon uses <phonenumber>@msg.myvzw.com, etc.).
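    As a simple sketch, any box that can send mail can drive such a gateway from a monitoring script; the number and gateway domain below are placeholders you'd replace with your own:

        echo "Xserve RAID: drive 3 degraded" | mail -s "Server alert" 5551234567@cingularme.com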

  • Homedir with load balancing

    I have right now 3 Xserves with 1 Xserve RAID and 3 licenses of Xsan. I want to be able to do load balancing and fault tolerance with the Xserves.
    The problem I am having is that when I create a share point I have to point it at one server, and that's not a good idea because when that server goes down I lose the connection to the home folders.
    I would like to know if there is a solution out there?

    3 Xserves, 1 Xsan Volume, you can do:
    Xserv1: MDC + Open Directory replica + DNS slave
    Xserv2: MDC replica + Open Directory master + AFP share for home dirs + heartbeatd + DNS slave
    Xserv3: MDC replica + Open Directory replica + failoverd + script to move the AFP share from Xserv2 -> Xserv3 + DNS master
    1. You have to assign an additional alias IP address to en0. When you want to fail over, just make a script to move the alias IP address from Xserv2 -> Xserv3, start the AFP service, etc. (a sketch follows below).
    2. Make sure that when you're binding the clients you configure the alias IP address on the client.
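    A rough sketch of the failover script from point 1 - the shared alias address here is purely illustrative, and AFP is assumed to be managed through serveradmin:

        #!/bin/sh
        # run (as root) on Xserv3 once Xserv2 is declared dead:
        # take over the shared alias address, then bring AFP up locally
        ifconfig en0 alias 192.168.1.200 netmask 255.255.255.255
        serveradmin start afp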
    I have done this and it works very well!!! But you'll have to do scripting...
    Good luck!

  • Network Load Balancing for AFP Sharing

    Dear all,
    Can anyone kindly teach me how to configure network load balancing with 2 Xserves?
    Currently I have successfully bonded 6 ethernet ports into a single virtual IP on one machine, with a link aggregation setup on my switch. It works fine.
    How do I configure 2 Xserves, with 6 ethernet ports each, to share a single virtual IP?
    My switch does not support link aggregation with a virtual IP across two machines to do the load balancing, so I can only consider doing it at the software level.
    Does anyone know whether Leopard/Snow Leopard can do this? Or any suggestions for 3rd-party software that can?
    Thanks, experts!
    Karl

    It sounds like you need a load balancer. There's nothing built-in to Mac OS X Server that's going to support one virtual IP address shared across multiple physical servers.
    Your problem, though, is likely to be one of throughput - I don't know any cheap load balancers that will sustain that kind of throughput, so you may be looking at some serious $$$$s.
    There are some software-based load balancers that might be able to handle the load balancing side of things but many of them are designed around HTTP so might not work so well for other protocols.
    In addition, a software load balancer is going to suffer the same bottleneck as your AFP servers, but doubly so - with two servers with 6 x 1Gbps links each, you have a theoretical limit of 12Gbps.
    To run that through a load balancer, the load balancer needs double that - 12Gbps on the client side, plus 12Gbps on the server side. In reality you're probably looking at 10Gbps interfaces and switches if you're really pulling that much bandwidth.

  • Network Load Balancing and failover for AFP Sharing

    Dear all,
    Somebody kindly taught me to use round robin DNS to perform network load balancing; that works, but it doesn't give me failover.
    I have 4 Xserves and want to do load balancing and failover at the same time.
    I have read the IP failover document and set it up successfully, but does anyone know whether it is possible to do IP failover for more than 2 servers?
    For example, 4 servers serving AFP at the same time, with maybe 1 extra server doing IP failover for those 4 servers.
    As far as I know, IP failover requires FireWire for heartbeat detection, but one Xserve only has 2 FireWire ports. Can I set up IP failover using only an ethernet port and an IP address? Is it possible to detect a failure on any of the servers and fail over for whichever one has gone down?
    I believe a load balancer is maybe the best solution, but its cost is too high.
    Thanks any advance!
    Karllee

    Well, you have 2 options here.
    Software load balancing:
    A request comes in to foo.com -> the WS7u2 instance hosting foo.com is configured to run as a reverse proxy. This server sends each incoming request to one of the four back-end Web Server 7 instances handling your traffic.
    Hardware load balancing (this is where you need to invest):
    A request comes to the hardware load balancer, which answers for foo.com -> it sends requests to the four WS7 servers hosting your application.
    You could try out how software load balancing works for you before you invest in hardware load balancing.
    Here are more instructions on configuring WS7 as a reverse proxy (the software load balancing configuration):
    - Install WS7 on foo.com
    - Create a new configuration (choose port 80, disable Java)

  • Howto load balancing

    Hi
    Currently using Dell 2U servers running FreeBSD 6, we are very excited to get some new Xserves for our web needs.
    We plan to buy 2 Xserves to share the load for a huge website, plus an Xserve RAID.
    As MySQL can be master-master replicated on its own, we only want to balance the network load coming from the internet. What do you suggest we put in front of the 2 Xserves? And how can we sync files between the 2 Xserves other than with manual rsyncs?
    Thank you for your tips
    PowerBook 12" + MacBook rev1   Mac OS X (10.4.8)   Airport Express / 23" Cinema Display / Freebox

    What kind of traffic levels are you planning for?
    There are various load balancing techniques around ranging from the free to the very expensive, and the inefficient to the highly effective.
    At the lowest end of the scale is simple round-robin DNS. You configure your site's name with two IP addresses and the DNS server alternates between the answers. This gives you a crude load balancing option - there's no direct control over which server gets the traffic, levels may be uneven and, worst of all, there's no redundancy if one server is down - the DNS server will continue to hand out its IP address. Its advantage, though, is that it's free.
    Moving up the scale a little, there are various Linux-based solutions that can do simple load balancing through iptables (or ipchains in older distributions).
    I've never used them, so I don't know how effective they are.
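    For what it's worth, a rough, untested sketch of that iptables approach (addresses are placeholders): alternate new port-80 connections between two back-end servers on the Linux box doing the NAT.

        # send every other new port-80 connection to .11, the rest to .12
        iptables -t nat -A PREROUTING -p tcp --dport 80 -m state --state NEW \
          -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination 10.0.0.11:80
        iptables -t nat -A PREROUTING -p tcp --dport 80 -m state --state NEW \
          -j DNAT --to-destination 10.0.0.12:80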
    At the top end of the scale are load balancing appliances such as those from F5, Cisco, NetScaler and others.
    These move up the price chain a fair way, but offer far more features: server health monitoring (to make sure a server is actually able to service requests), advanced load balancing rules to decide which server should handle each request, and multi-gigabit-per-second throughput.
    If you just have a couple of servers, the appliance path may be overkill, although if you expect to grow then it may be something worth considering.
    As for the replication question, there are many ways of doing that. At its simplest level, rsync can replicate a directory or filesystem using an efficient protocol that just transfers the differences. It's included in Mac OS X and the man page gives examples of its use.
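    For example (hostname and paths are placeholders), a one-liner like this, run from a cron job, keeps the second Xserve's web root in step with the first and deletes anything removed from the source:

        rsync -av --delete /Library/WebServer/Documents/ admin@xserve2.example.com:/Library/WebServer/Documents/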

  • Error while selecting Load Balancing in JCO creation

    While creating the JCo I am facing this error. It works fine with a single-server connection, but when I choose load balancing the error comes up. Please tell me the solution.
    I have read a couple of forum threads that mention you need to start both the Portal and ECC.
    For your information, my Portal and Java systems are on different boxes.
    com.sap.mw.jco.JCO$Exception: (102) RFC_ERROR_COMMUNICATION: Connect to message server host failed Connect_PM  TYPE=B MSHOST=olameccpdvr GROUP=PUBLIC R3NAME=DVR MSSERV=sapmsDVR PCS=1 LOCATION    CPIC (TCP/IP) on local host with Unicode ERROR       service 'sapmsDVR' unknown TIME        Thu Feb 24 12:19:54 201 RELEASE     701 COMPONENT   NI (network interface) VERSION     38 RC          -3 MODULE      nixxhsl.cpp LINE        776 DETAIL      NiHsLGetServNo: service name cached as unknown COUNTER     5

    Is your backend system configured correctly in your SLD?
    Go to transaction SMMS on the backend system that you are connecting to. Click Goto => Parameters => Display and look for the "server port" value.
    This should give you the TCP/IP port for your message server. It could be 3600 or 3601 (36NN, where NN is the instance number).
    In your services file, if you made the entry at the end of the file, make sure you press Enter (Return) after your entry so the line is properly terminated.
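    For example, if the message server port turns out to be 3600, the entry would look something like this (a sketch; the Unix path is shown, on Windows the file is C:\WINDOWS\system32\drivers\etc\services, and the port must match what SMMS reports):

        # append the message server entry for system DVR (run as root or via sudo)
        echo "sapmsDVR    3600/tcp" >> /etc/services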
    Try restarting your server after making the above changes.
    - Shanti

  • Error in creation of JCO with Load balancing server

    Hi,
    We are using an ABAP user base for our Web AS 6.40 server (with ABAP+Java). I have created a PUBLIC logon group in the ECC 5.0 system concerned. I have already configured the SLD, maintained the data supplier bridge in the SLD, and run RZ70 in the ECC 5.0 system to load the system information; I can see the details in the SLD.
    Now I am trying to create JCo connections, but I am unable to create a JCo with the load balancing option; I get:
    com.sap.mw.jco.JCO$Exception: (102) RFC_ERROR_COMMUNICATION: Connect to message server host failed Connect_PM  TYPE=B MSHOST=<servername> GROUP=PUBLIC R3NAME=SID MSSERV=sapms<SID> PCS=1 ERROR       service 'sapms<SID>' unknown TIME        Fri Jun 16 12:41:20 2006 RELEASE     640 COMPONENT   NI (network interface) VERSION     37 RC          -3 MODULE      ninti.c LINE        505 DETAIL      NiPGetServByName2: service 'sapms<SID>' not found SYSTEM CALL getservbyname_r COUNTER     1
    I am able to create a single-server JCo, but it fails with load balancing. Is there anything I have missed in the settings?
    Thanks and regards,
    Sudhir

    Thanks, Bogdan Rokosa
    I had the same problem and solved it following the steps provided by Bogdan Rokosa:
    You must insert an entry for your R3 system (like: sapms<SID> 3600/tcp) in the services file (C:\WINDOWS\system32\drivers\etc\services) on the Java WAS.
    I tested the JCo successfully without restarting the J2EE Engine.

  • ISE 1.2 - Multiple NICs/Load Balancing for DHCP Probe

    Hello guys
    Just prepping an ISE 1.2 patch 8 setup in our organization. I am going for the virtual appliances with multiple NICs. It will be a distributed deployment with 4 x PSNs behind a load balancer, and there is no requirement for wireless or guest users at the moment. I've got 2 points I would like some guidance on:
    Our DC has a dedicated mgmt network and I plan to IP the gig0 interface of the PANs, MNTs and PSNs from this subnet. All device admin, clustering, config replication, etc will be over this interface. However, RADIUS/probe/other user traffic to the ISE PSNs will be over the gig1 interface which will be addressed from another L3 network. Is this a supported configuration in ISE?
    I intend to use the DHCP probe as part of device profiling and would ideally like to have just one additional ip helper to add to our switch SVI config. Also, it appears that WLCs can only be configured with 2 DHCP servers for a given network, so that is another consideration for when we bring our WLAN into scope. We use ACE load balancers within our DC, and from what I have read they do not support DHCP load balancing. Are there any workarounds for using the DHCP probe with multiple PSNs without having to add each node as an ip helper/DHCP server on the NADs?
    Thanks in advance
    Sayre

    Hello Sayre-
    For Question #1:
    Management is restricted to GigabitEthernet 0 and that cannot be changed so you should be good there
    You can configure Radius and Profiling to be enabled on other interfaces
    Even though you are not using guest services yet, you can dedicate an interface just for that. As a result, you can separate guest traffic completely from your production network
    Take a look at this link for more info:
    http://www.cisco.com/c/en/us/td/docs/security/ise/1-2/installation_guide/ise_ig/ise_app_c-ports.html
    For Question #2
    If you are using a Cisco WLC and running code 7.4 and newer you don't need to mess with the IP helper configurations. 
    The controller can be configured to act as a collector for client profiling and interact with the DHCP thread along with the RADIUS accounting task that is running on the controller. The controller receives a copy of the DHCP request packet sent from the DHCP thread and parses the DHCP packet for two options:
    - Option 12: the hostname of the client
    - Option 60: the Vendor Class Identifier
    After this information is gathered from the DHCP_REQUEST packet, a message is formed by the controller with these option fields and is sent to the RADIUS accounting thread, which is in turn transmitted to the ISE in the form of an interim accounting message.
    Both DHCP and HTTP profiling settings are located under the "Advanced" configuration tab in the WLC
    On the other hand, you can also use Anycast for profiling. You can check out some of Cisco Live's sessions for more info on that. Here is one from a couple of years ago (there are more recent ones available as well):
    http://www.alcatron.net/Cisco%20Live%202013%20Melbourne/Cisco%20Live%20Content/Security/BRKSEC-3040%20%20Advanced%20ISE%20and%20Secure%20Access%20Deployment.pdf
    I hope this helps!
    Thank you for rating helpful posts!

  • SAP GLM Print Request - Load Balancing of WWI server

    Hi GLM Experts,
    I am using the new GLM+ module that generates labels based on print requests. I am unable to understand how I can load-balance the WWI services when there are multiple label printing requests.
    In GLM+ we associate a WWI with a print station, which can then be associated with a printer. So in the configuration we are tying a printer to a WWI.
    Also, during label printing, if the scenario uses the print request module, then the user needs to select a print station and printer. What happens if the WWI related to the print station is down?
    For example, I have two WWI services on servers GENPC1 and GENPC2. I created WWI1 and WWI2 as two print stations. I will associate my printer PRNWWI with both print stations WWI1 and WWI2.
    During label printing, if the user picks WWI1 and printer PRNWWI, and the GENPC1 WWI server associated with print station WWI1 is busy or down, I want the WWI on GENPC2 to generate the label.
    How do I set up this kind of load balancing or fallback? Please let me know.
    Thanks
    Pugal

    Dear Pugal
    We are not using GLM+ and I am not sure about the technique used there to handle load balancing. Regarding general WWI setup, I assume you know this note: "EH&S: Availability and performance of WWI and Expert servers".
    On top of that there is a further SAP Note available which might be of interest. It is referenced here:
    http://de.scribd.com/doc/191576739/011000358700000861002013-e
    Maybe check OSS Note 1958655; OSS Note 1155294 is more related to normal WWI topics, but maybe check it as well. Note 1934253 might help more.
    Maybe this helps.
    C.B.
    PS: maybe check as well: consolut - EHS_MD_140_01 - EH&S-Management-Server einrichten
    The load balancing of synchronous WWI servers is done in the RFC layer, so you have no influence there; for asynchronous WWI servers you can do a lot to manage the WWI load balancing by using exits etc.

  • APEX SSO and Load balancing: Could not determine workspace for application

    We had a single HTTP Server serving APEX in a 10.2.0.2 database configured with SSO to be used by the developers. APEX has been registered as a partner application and the login url has been CA Siteminder protected so that the SM_USER details are forwarded in the header for the application to use for authorization. Everything is fine so far.
    Now we have added an HTTP Server on another host and have it all set up for APEX, pointing to the same database. APEX_ADMIN access works as normal, but applications previously using SSO now get the following error after entering the URL.
    Expecting p_company or wwv_flow_company cookie to contain security group id of application owner.
    Error ERR-7620 Could not determine workspace for application ().
    Using HTTP Watch I find that the application is not even trying to redirect to the login page.
    What is wrong here?

    APEX has been registered as a partner application as described in
    http://www.oracle.com/technology/products/database/application_express/howtos/sso_partner_app.html
    In the meantime I found Metalink document 368746.1, which describes the cause of this problem. Please read carefully what I wrote: it all works when the new APEX web server is turned off in the server farm on the load balancer and requests are directed through the original web server. When running regapp.sql, the hostname in the listener token was the virtual hostname. This works fine if the request comes from the original APEX server, which proves that there is nothing wrong with the installation and setup of SSO. When directing the request to the new APEX web server, the APEX_ADMIN page still works, but existing workspaces using SSO no longer seem to work, resulting in the error described in the subject.
    As for metalink document 368746.1 naming the causes of this error:
    - there are no duplicate entries in WWSEC_ENABLER_CONFIG_INFO$
    - LISTENER_TOKEN clearly works for requests coming from the first web server
    - theoretically the web server listener port could be changed from 7777, but port 80 needs to be maintained here, as production is mimicked as far down as possible.
    Is there some cache table which can be cleared? How is it that the flows schema (the APEX engine) cannot find the workspace when the request comes from the new web server, which can nevertheless access the APEX_ADMIN pages?
    anyone?
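    For what it's worth, a quick way to eyeball the enabler configuration that Metalink 368746.1 points at (the flows schema name and password here are placeholders for your own APEX engine schema):

        echo 'SELECT * FROM wwsec_enabler_config_info$;' | sqlplus -s flows_020200/change_me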

  • SSO with SAP R/3 with load balancing as backend over the Web AS

    Hi,
    we have Netweaver 2004 at this time and we have to connect the portal to a BSP application in a load balancing environment.
    We set up user mapping for the users and changed the connection type from SAPLOGONTICKET to UIDPW. This works in a test environment with only one R/3 system and no load balancing.
    Does anyone know the setting parameters for a load balancing environment (ok, the message server and...?).
    Thank you.
    Best regards
    Patrizia

    Hi all,
    I've run into the same problem. Setting up a mapping with UIDPW in a non-load-balanced Web AS environment for BSP or Web Dynpro for ABAP works fine. But when I set it up in a load-balanced system I see the following behavior: the HTTP request is sent to the message server, and this request includes my mapped user and password. The message server responds with an HTTP 301 which points to one of my application servers; so far so good. The client then sends a new request to that application server, but this time without the UIDPW, so the user is not logged in.
    I was wondering if my backend has to issue logon tickets too, because today it only accepts tickets from the portal.
    Is this a bug or a feature?
    Regards,
    Bernd
