Networking performance question

Hi guys, at the company I work for we have J2EE distributed systems running on application servers, in a (greatly simplified) scenario like the following:
Box1: App1...Appn
Box2: App1...Appn
where often Box1.App1 depends on Box2.App1 for some business activity.
Currently we implement the BusinessDelegate pattern, so Box1.App1 uses a delegate to Box2.App1, which in turn uses Box2.App1's remote business interfaces to invoke business methods (we are using EJB 2.1).
This setup caused us many problems in the past when we changed some interfaces of Box2.App1 and the ejb-client.jar packaged with Box1.App1 was not updated. The class versions then became incompatible, so when Box1.App1 tried to invoke a business method on Box2.App1, which was running with a different version of one or more classes, we got an exception.
Currently the only way to solve this problem is to re-build Box1.App1 against the latest version of Box2.App1. Obviously this is not desirable, for a number of reasons:
1) We don't want to re-build/re-deploy Box1.App1 every time a remote type changes.
2) Box1.App1 may be part of a set of other key live services which we don't want to take down (downtime would be an issue for our clients).
3) It breaks existing applications such as Box1.App1, stopping live services from working.
So I thought we could define some stable interfaces between Box1 and Box2, describe those interfaces with XML schema, and then use tools like Castor or JAXB to exchange XML over HTTP in a way that is objectified for Java clients (i.e. Java clients would continue to deal with types, not straight XML).
This way the implementation of Box2.App1 can change without necessarily having to change/redeploy/break Box1.App1, provided that the XML interface doesn't change.
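To make the idea concrete, here is a rough sketch of what such a delegate could look like on the Box1.App1 side. The OrderRequest/OrderResponse types and the URL are made up purely for illustration; in practice the types would be generated from the agreed schema (e.g. by JAXB's xjc or Castor's tooling):

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.annotation.XmlRootElement;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.io.StringWriter;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical request/response types derived from the agreed XML schema.
    @XmlRootElement(name = "orderRequest")
    class OrderRequest {
        public String orderId;
    }

    @XmlRootElement(name = "orderResponse")
    class OrderResponse {
        public String status;
    }

    // The delegate in Box1.App1: callers deal only in Java types, the wire format is XML over HTTP.
    class OrderDelegate {

        OrderResponse getOrderStatus(OrderRequest request) throws Exception {
            JAXBContext ctx = JAXBContext.newInstance(OrderRequest.class, OrderResponse.class);

            // Marshal the request object to XML text.
            StringWriter xml = new StringWriter();
            ctx.createMarshaller().marshal(request, xml);

            // POST the XML to Box2.App1's endpoint (URL is invented for the example).
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://box2/app1/orders").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml");

            OutputStream out = conn.getOutputStream();
            out.write(xml.toString().getBytes("UTF-8"));
            out.close();

            // Unmarshal the XML response back into a Java object.
            InputStream in = conn.getInputStream();
            try {
                return (OrderResponse) ctx.createUnmarshaller().unmarshal(in);
            } finally {
                in.close();
            }
        }
    }

As long as Box2.App1 keeps accepting/emitting documents valid against the schema, its internals (and its EJB client classes) could change freely without Box1.App1 ever noticing.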
I was wondering what kind of performance penalty we would face if we followed such an approach.
Thanks for any reply.

Enormous.
You need to give better thought to how you change your interfaces between versions. It is possible, with discipline, to retain binary and wire-protocol compatibility between old and new.
What exception(s) did you get?
Have a look at the Versioning section of the Serialization Specification.

The exceptions we get mainly have to do with incompatible serial version UIDs. You see, all our distributed clients are packaged with an ejb-client.jar of the app they depend upon. An ejb-client.jar typically contains:
1) Local (even if not used) and remote EJB component interfaces. Those interfaces are automatically generated through XDoclet.
2) Business Delegates of the applications they depend on. These are automatically generated by a customization of XDoclet.
3) Value Objects (for the Transfer Object pattern) of the applications they depend on
4) Exceptions
5) Helper classes
I'm working on a project that many clients depend upon, and I started an optimization effort in which I removed unnecessary session beans, refactored a lot of code, removed some unused entity beans (and therefore the associated VOs), and added business methods to existing session beans. All of this can be a normal part of a project lifecycle, especially when working on legacy systems like the one I'm on at the moment. The problem is that these changes, perfectly legitimate in a project lifecycle, alter the structure of the ejb-client.jar, and thus clients break.
We tried to solve this problem by adding the serialVersionUID to all our remote objects, but still got versioning problems.
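For reference, this is roughly what we mean by pinning the UID; the class below is a simplified, made-up value object rather than one of our real ones. As I understand it, a fixed serialVersionUID only helps when the change itself is compatible under the serialization versioning rules (e.g. adding a field is fine, changing a field's type is not):

    import java.io.Serializable;

    // Sketch of a value object with an explicit serialVersionUID (the class is hypothetical).
    public class CustomerVO implements Serializable {

        private static final long serialVersionUID = 1L;  // never change once published

        private String customerId;
        private String name;

        // Adding a new field later is a compatible change: old clients simply see
        // the default value (null here) after deserialization.
        private String email;

        public String getCustomerId() { return customerId; }
        public void setCustomerId(String customerId) { this.customerId = customerId; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }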
What I'm thinking now is to create another layer of indirection that holds independent interfaces/VOs, and to use the decorator pattern. By moving all project dependencies to this extra layer, we could change all the implementations without breaking the clients. All this project would have to do is offer a set of methods to clients, go to the real projects to retrieve the data, and set/return independent VOs to clients by using the decorator pattern. All clients would then depend on this facade, and the facade would be the only project depending on the remote services. If we can keep the interfaces and VOs stable, we should solve the problem without limiting the projects' ability to change their internals over a normal project lifecycle. What do you think?
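To make it concrete, here is a rough sketch of the kind of thing I have in mind. All names are made up; the "internal" types below stand in for the real project's VOs and BusinessDelegates:

    import java.io.Serializable;

    // --- what clients see (kept stable) ---
    interface CustomerFacade {
        StableCustomerVO findCustomer(String customerId) throws FacadeException;
    }

    class StableCustomerVO implements Serializable {
        private static final long serialVersionUID = 1L;
        private String customerId;
        private String name;

        public String getCustomerId() { return customerId; }
        public void setCustomerId(String id) { this.customerId = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    class FacadeException extends Exception {
        public FacadeException(String msg, Throwable cause) { super(msg, cause); }
    }

    // --- what only the facade project sees (free to change) ---
    class InternalCustomerVO implements Serializable {    // stand-in for the real project's VO
        String customerId;
        String name;
    }

    class CustomerRemoteDelegate {                        // stand-in for the existing BusinessDelegate
        InternalCustomerVO findCustomer(String customerId) {
            InternalCustomerVO vo = new InternalCustomerVO();
            vo.customerId = customerId;
            vo.name = "example";
            return vo;
        }
    }

    // --- the adapter/decorator that copies internal VOs into stable ones ---
    class CustomerFacadeImpl implements CustomerFacade {
        private final CustomerRemoteDelegate delegate;

        CustomerFacadeImpl(CustomerRemoteDelegate delegate) {
            this.delegate = delegate;
        }

        public StableCustomerVO findCustomer(String customerId) throws FacadeException {
            try {
                InternalCustomerVO internal = delegate.findCustomer(customerId);
                StableCustomerVO stable = new StableCustomerVO();
                stable.setCustomerId(internal.customerId);
                stable.setName(internal.name);
                return stable;
            } catch (RuntimeException e) {
                throw new FacadeException("Lookup failed for customer " + customerId, e);
            }
        }
    }

The facade project would ship its own small client.jar (just the stable interfaces, VOs and exceptions), and only the facade would need rebuilding when the real projects change.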

Similar Messages

  • Xserve G5 Network Performance Questions.

    Hello,
    I have the latest 2.3GHz Xserve G5 and was wondering what the optimal configuration would be for the best network performance. The unit seems to come with a couple of GB Ethernet interfaces, but would a fibre connection improve bandwidth at all? Currently this unit has direct access to a layer-3 managed switch, but is GB Ethernet more limiting than a fibre connection into the same switch, and is a fibre network connection even possible on this unit?

    Are you talking gigabit ethernet over fiber vs. gigabit ethernet over copper? Or Fiber Channel?
    There's no significant difference between gigabit ethernet over fiber vs. over copper. There are some minor issues (such as a lower propagation delay in fiber vs. copper), but over short distances they're not relevant.
    There are Fiber Channel cards available, but they're usually used for connectivity to storage systems (arrays, backup devices, etc.), not host-to-host connectivity.
    Attaching a Fiber Channel storage array (such as the XServe RAID) would have a trickle-down effect on your network (there would be less overhead reading and writing data to disk, so data can be pushed or pulled from the network more quickly), but it's not going to be a huge difference.
    For maximum impact you could consider either trunking (combining multiple ethernet ports into a larger, virtual port) or moving to 10GbE (10 gigabit ethernet), although that will require new switches as well.
    You might want to check out Small Tree, they're well versed in high performance networking and sell a range of multi-port gigabit ethernet cards as well as 10GbE solutions.

  • Very poor network performance with a G4

    I have a white & grey G4 (1.25GHz and 256MB) running OS X 10.4.5. It's exhibiting very poor network performance, e.g. it will not FTP files over a LAN at more than about 40KB/second.
    Other nodes on the same switch perform fine, as do other ports on the switch. The machine is set to autodetect for speed & duplex, so my question is -- how do I debug this? I've tried swapping network cables already...
    Thanks!
    Chris

    This came down to an incompatibility between the G4 and the Cisco. When the switch is set to force 100Mbit but negotiate duplex, the G4 doesn't like it. Going back to auto speed & duplex fixed the problem...

  • PC and Network Performance

    Hello:
    The City of Detroit is planning to upgrade to 11.03 or 11i. We are looking for network and PC performance minimums/maximums to support, at a minimum, 11i and any known future upgrades that are coming but not yet released. Is there a publication/white paper that would provide information not only on the above, but also on the expected performance benchmarks for AP, AR, PO, GL, Projects & Grants, HRMS, OTA, and FA? At the moment we are on 10.7 and interface with EMPAC. Users are spread across the City in numerous locations. We must assume that the user will have other non-Oracle apps open, such as Office and GroupWise.
    Thanks,
    J
    Gerald R J Heuer
    313-224-2109
    Beeper: 313-275-6714
    F:313-224-9342
    [email protected]
    1232 Coleman A Young Municipal Center
    2 Woodward Ave
    Detroit MI 48226

    The information is available in Appsnet under the 'Technology' Tab and you would need to look at some of the White Papers - including one with the name 'Nothing but Net' and the Network Performance Presentation.

  • Performance questions in Oracle Forms/Reports of Application Server 10g

    Hello.
    I have a dual Xeon 2.6 computer with 3 GB of memory and 6 GB of cache, running RedHat EL 3 with Oracle AS 10g (9.0.4.0.0).
    Application Server is used to run some applications created with Forms, and it generates several reports, but the forms are also used as a front end to another Oracle database (Oracle 7.0, running on Sun Solaris 5.6).
    I've noticed (and so have my users) that the application is getting very slow (it's running on a dual Xeon, with 2 Gbit NICs, on a 3 Gb backbone).
    The speed (or lack of it) is most noticeable when the forms are generating the reports: it used to be fast (2 seconds) and now it can take up to 5 minutes.
    When AS 10g generates the reports, almost all of the queries go to the other server (the Solaris one), the results come back, and then the report is generated. The network has no problems (at least, none that I know of), and network performance is not an issue - both servers are connected to a Gbit switch.
    Does anyone have an idea of what it could be?
    Even on the clients, when running the Forms application, the transition between the various menus is slow, and it used to be faster.
    10g is using Java JInitiator 1.3.18. Could it be from that? I'm not certain, but I'd say these things started happening when we upgraded JInitiator from 1.3.13 to 1.3.18...
    My users are really getting on my nerves and I can't do anything about it, because they have a point...
    Any help would be appreciated!
    Cheers
    Bruno Santos

    Bruno, did you solve the performance problem? I'm experiencing the same lack of speed.

  • Solaris 9, V210, bge0 network performance issue

    Setup: Solaris 9, V210, single network (bge0).
    ndd shows: 100FD
    LAN Switch set to 100FD
    Network performance bad, in fact very bad. System shows upwards of 19% collisions with netstat -- odd since FD should have no collisions. Network folks have checked switch settings and they are 100FD. I have checked more than once that ndd returns 100FD.
    I am not sure what to check next. Thanks for any help you can provide in what to check next or pointing me to something I have missed.
    Thanks - Wes

    Means your system and the switch aren't playing nicely.
    Anytime a Sun internal interface is showing collisions (regardless of what ndd says), it means the interface is probably flip-flopping between FD and HD (causing your collisions). The flip-flops make performance MUCH worse than the collisions alone ever would.
    You might try a new factory made CAT5 network cable.
    Then if that doesn't work, you should try using ndd to force the bge interface to 100FD and forcing the switch to 100FD.

  • Simple performance question

    Simple performance question, put the simplest way possible: assume
    I have an int[][][][][] matrix and a boolean add. The array has several dimensions.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][...] += constant;
                        else
                            matrix[i][ii][iii][...] -= constant;
    }

    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] -= constant;
    }

    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second only one is. It is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    > I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro-optimization?
    Almost certainly not; the main reason being that
        matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    > but I will follow amickr's advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
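    One common way to avoid the per-element branch entirely (not necessarily what Mel suggested; just a standard approach) is to fold the sign into the constant once, before the loops. A self-contained sketch, assuming four dimensions to match the loops above (the original [...] is filled in here purely for illustration):

        class MatrixUpdater {
            private final int dimension1 = 10;                 // example size
            private final int constant = 5;                    // example constant
            private boolean add = true;
            private final int[][][][] matrix =
                    new int[dimension1][dimension1][dimension1][dimension1];

            private void process() {
                final int delta = add ? constant : -constant;  // branch evaluated once, not n^d times
                for (int i = 0; i < dimension1; i++)
                    for (int ii = 0; ii < dimension1; ii++)
                        for (int iii = 0; iii < dimension1; iii++)
                            for (int iiii = 0; iiii < dimension1; iiii++)
                                matrix[i][ii][iii][iiii] += delta;
            }
        }

    In practice, as noted above, the array accesses dominate, so the measurable gain is likely to be small.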

  • Regarding Coherence Network Performance test?

    Hi,
    We have performed the datagram tests for our two servers using the Coherence datagram test scripts.
    We have shared the results that we got after a run of 60 – 65 seconds each for the unidirectional and bidirectional tests.
    Please refer to
    https://docs.google.com/spreadsheet/ccc?key=0Atn1ma-myp6ndEc0THBKRTFDSnBaN0NqV1o1Ym1jRnc#gid=0
    for the results. But we want to understand from these results whether this performance is within an acceptable range, for which we don't have any absolute reference to compare against.
    Can someone suggest if this network performance is good enough or if there are any problems based on these statistics?
    Quick help much appreciated. Thanks in advance.
    Edited by: saiyansharwan on Jan 11, 2012 5:40 AM

    I appreciate your response very much. For your reference, I am putting the details from the Excel sheet right in the reply.
    For unidirectional test
    =============
    Test Parameters -
    1  Buffer size            89    (changed because an exception occurred due to OS buffer size limitations)
    2  Packet size            1468  (default size adopted)
    3  Command on Server A    sh datagram-test.sh serverb
    4  Command on Server B    sh datagram-test.sh -rxBufferSize 89    (to limit the receive buffer size assumed by Coherence to match the actual value in the OS)
    Results:
    Server A as publisher and Server B as listener
    Publisher details
      Byte transfer rate for Server A      111 MB/sec
      Packet transfer rate for Server A    79550 packets/sec
    Listener details
    Sl.No  Parameter       Server B values
    1      elapsed         61547ms
    2      packet size     1468
    3      throughput      111 MB/sec
                           79614 packets/sec
    4      received        4900000 of 4902366
    5      missing         2366 packets
    6      success rate    0.9995174
    7      out of order    0
    8      avg offset      0
    9      gaps            802
    10     avg gap size    2
    11     avg gap time    0ms
    12     avg ack time    -1.0E-6ms; acks 0
    For Bidirectional test
    ============
    Test Parameters -
    Sl.No  Parameter            Value                                                   Remarks
    1      Buffer size          89                                                      changed because an exception occurred due to OS buffer size limitations
    2      Packet size          1468                                                    default size adopted
    3      Command on Server A  sh datagram-test.sh -polite serverb -rxBufferSize 89    started in polite mode to wait until it receives data
    4      Command on Server B  sh datagram-test.sh servera -rxBufferSize 89
    Results:
    Publisher details
      Byte transfer rate for Server A      114 MB/sec
      Packet transfer rate for Server A    81367 packets/sec
      Byte transfer rate for Server B      84 MB/sec
      Packet transfer rate for Server B    59768 packets/sec
    Listener details
    Sl.No  Parameter       Server A                   Server B
    1      elapsed         65037ms                    64932ms
    2      packet size     1468                       1468
    3      throughput      107 MB/sec                 65 MB/sec
                           76618 packets/sec          46202 packets/sec
    4      received        4982987 of 5050738         3000000 of 3001389
    5      missing         67751 packets              1389 packets
    6      success rate    0.9865859                  0.9995372
    7      out of order    0                          0
    8      avg offset      0                          0
    9      gaps            20097                      119
    10     avg gap size    3                          11
    11     avg gap time    0ms                        0ms
    12     avg ack time    1.149157ms; acks 309602    1.169717ms; acks 1017381
    As you may observe, unlike the test you performed, the bidirectional tests show a much better success rate in this case.
    But what I was looking for was some standard to compare against, to decide whether these statistics indicate good network performance.
    Any suggestions?
    Best Regards.

  • BPM performance question

    Guys,
    I do understand that ccBPM is very resource-hungry, but what I was wondering is this:
    Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
    More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the time out higher for the sync processing.
    Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more of them waiting now: raise SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout lower than the one you set in XI; otherwise yours will carry on and finish, and your partner may end up sending the message twice.
    When you go for BPM, the whole workflow has to come into action. So, for example, if your mapping lasts < 1 sec without BPM, in a BPM the transformation step can last 2 seconds plus one second of mapping... (that's just an example).
    So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than having the same without BPM.
    See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Network Performance Troubleshooting?

    Greetings all,
    I have a new Sun X4240 server installed as an enterprise backup media server and it's suffering terrible network performance to our new Exagrid disk storage backup appliance. The shares on the Exagrid are mounted as NFS3 shares on the server. If I cable the server directly to the Exagrid appliance, I get a reasonable 450-500 Mb/s on a simple copy from local disk to the NFS share. When I connect them back into our Cisco switch, the performance on the same copy drops to well under 20 Mb/s. A Windows server on the same switch sending traffic to CIFS shares on the Exagrid is performing quite well.
    Does anyone have any ideas how to start troubleshooting this problem? Any other recommendations? Anything will be very welcome!
    Sun X4240, Solaris 10 05/09 X64
    Cisco Catalyst 4506, Supervisor II+ version 12.2(46), (cat4500-ipbasek9-mz.122-46.SG.bin)
    WS-X4548-GB-RJ45 (48 port 10/100/1000BaseT module)
    Many thanks,
    Tim

    I tried forcing the port settings but that still didn't work. In the end it turned out to be a problem with the code on the Cisco router. We tried a different, much cheaper Cisco router and it worked great.

  • X6450 network performance issue

    We are seeing a network performance issue using the Blade 6000 chassis with two (10x 1 Gbps Ethernet) NEM modules. The x6450 blades in the chassis have Solaris 10 u7 installed. In our tests, we used two blades in the following manner: each blade had only one network interface connected (and plumbed at the OS level) and each blade was connected to a different NEM. Both blades were connected to the same gigabit switch. There were two other test systems (x2270-s) also connected to the same switch. We ran the following tests:
    both the application client and server are on the same blade (i.e. application is using the loopback network interface): n operations/second (the number here is irrelevant, only the relation to other test numbers are relevant)
    application client on first blade, server on second blade: 0.77 x n operations / second (i.e. 23% degradation)
    application client on blade, server on x2270: 0.77 x n operations /second (i.e. 23% degradation)
    application client on x2270, server on blade: 0.75 x n operations /second (i.e. 25% degradation)
    We ran similar tests with the x2270. We did not see any difference between running the test on the loopback interface vs. running the test between two x2270-s.
    Is there a known issue with these network modules or their drivers?
    Edited by: kbertold on Jul 14, 2009 11:54 AM

    Just use a long interval and you'll see how much throughput you get in that time.
    Example: dlstat vnet1 120
    Best regards,
    Marcel

  • Windows 7 file copy and overall network performance

    I have noticed some major problems with network performance when trying to access files stored on our NSS based file servers when using Windows 7 Pro.
    Our file servers are OES2 SP1 and SLES 10 SP3. I did not have this type of problem when I was on Windows XP. I am currently using the Novell Client 2 SP1 on Windows 7. I did have the BSOD referring to the nccache.sys file. I read a TID that said to turn off file caching, but since doing that it is painfully slow.
    Any help in advance will be greatly appreciated. If I should post this elsewhere please let me know as well. Thanks again.

    Originally Posted by mrosen
    On 27.01.2011 20:06, dshofkom33 wrote:
    >
    > Our file servers are OES2 SP1 and SLES 10 SP3.
    I sure hope that's incorrect, as OES2SP1 runs on SLES10 SP2, not 3.
    Also, you should really be running OES2 SP2 at least (SP3 just having
    been released).
    > I did not have this type
    > of problem when I was on Windows XP. I am currently using the Novell
    > Client 2 SP1
    No IR5?
    > on Windows 7. I did have the BSOD referring to the
    > nccache.sys file. I read a TID that said to turn off file caching, but
    > since doing that it is painfully slow.
    Caching doesn't have any noticeable or even measurable performance
    impact on mere copying of files, as caching doesn't come into play here.
    It can only have an effect on repeated activity on the same file, which
    file copying by design isn't.
    > Any help in advance will be greatly appreciated. If I should post this
    > elsewhere please let me know as well. Thanks again.
    This would probably be better suited for the client forums, but you need
    to post more information. "Painfully slow" is neither a technically
    precise nor a helpful description.
    CU,
    Massimo Rosen
    Novell Product Support Forum Sysop
    No emails please!
    You are correct about our server installations. Sorry for the typo.
    As far as my client goes, I will try the IR5 client version to see if that is any better.
    As far as being "painfully slow": I mean it was taking hours to install a product from an .ISO hosted on one of our NSS mapped drives. It would take 15 minutes to copy a 3MB file while the .ISO was installing. It took even longer to try to copy the file down. I had no problems with any of the aforementioned when running Windows XP SP3 with client 4.91 SP5. What other details do you need?

  • Help: network security question

    I just bought a PowerBook G4 running OSX 10.4.5 and was wondering about network security. What are some good anti-virus protection programs? I was searching the Apple store and found Net Barrier X4 and Virus Barrier X4 by INTEGO. What is the difference between the two? Are there other programs out there that are better? I will be the only person using this computer and it's for personal use, not business. Does anybody have any recommendations?
    powerbook G4   Mac OS X (10.4.5)  

    What you mention are anti-virus software programs, yet your topic reads "network security question".
    There is a difference between the two. Network security would be protecting a local LAN or WAN home network used for gaining access to the net. If this is what you want to do, then you should have your network WEP- or WPA-password-protected and enable OS X's personal firewall by going to System Preferences->Sharing->Firewall->Start Firewall. Some good tips to remember are:
    * Never leave your network unlocked.
    * Keep your network password complex (12 digits and letters).
    * Don't hesitate to tell your ISP if someone is "using" your network.
    * If you see any unknown files, don't open them!
    Now, if you were talking about a software virus that affects your computer and causes it to malfunction/crash/break, then you don't have very many worries, as there are no "real" viruses for the Mac right now other than two worms, one spread via iChat and the other via Bluetooth, both of which require you to open them and give your admin password to run them.
    In other words, the moral of the story is: don't open unknown files/programs, and don't give your Mac your password unless you know what it's for and why it's asking.
    Net Barrier acts as a firewall with more options, although I have found it to cause trouble with my network and have stopped using it.
    Virus Barrier attempts to keep viruses from affecting your OS by scanning for them, warning you if it finds one, and deleting them. Once again, two different types of software.
    -Internet Wiz

  • DMVPN - improve network performance

    Hi All,
    We have a dual-hub, dual-DMVPN-cloud network running EIGRP for about 50 sites, heading to 100 sites in the near future.
    I have configured it so that 25 spokes designate hub1 as their primary link and the other 25 spokes designate hub2 as theirs, to load balance.
    I need some suggestions or recommendations on how to improve its network performance.
    This is to anticipate queries from customers running systems & apps complaining about why the link is slow after implementing DMVPN.
    Are there any parameters that can be fine-tuned to help increase its performance?
    Please advise.
    Thanks

    Hello, I know this thread is old but it is exactly relevant to what I have now. We have implemented the dual hub dual DMVPN solution over the last year on our remote sites. The head ends are 7200s w/ C7200P-ADVSECURITYK9-M, Version 12.4(24)T3, and the remotes are 1700s (slowly being replaced with 1800s) and 1800 series routers. There are about 60 sites, most of them riding over Comcast cable (preferred) or Verizon DSL. Many of our sites have both, where Comcast is primary and DSL is secondary, so on these sites there are 4 tunnels.

    Our connections are getting very slow. For instance, at one site they have paid for a 50mb cable connection, which, when plugged directly into the cable modem, reaches those speeds. When going through the tunnel back to our core, where we have multiple GB ISP connections out to the internet, they are getting 2mb download speeds. Actually, they don't even have to be going out to the internet; just hitting our internal servers in the core is slow for them. We started testing multiple sites and it seems all of them are getting very slow compared to the service they have.

    In looking at the troubleshooting options you listed above, I am very curious about making sure none of the devices are oversubscribed. Since we have no spoke-to-spoke connections, I am assuming that this troubleshooting should be done on the 7200s. What commands would be good to run to check for oversubscription on the 7200s regarding CPU and the crypto accelerator? Also, when you mention QoS, where does this get applied? I am familiar with manual QoS config for voice and video, but how does it relate to VPN? Is there anything else I can look at/modify that will help alleviate the slowness of these tunnels? Any help would be greatly appreciated!!
    Thank you,
    Noel

  • Hard drive and network performance

    Can someone give me the intuitive, if not mathematical, explanation of where a hard drive's performance becomes a bottleneck in network performance?
    For example, say a given internal hard drive in a computer has a read/write of 40MB/s = 320Mbps. This is 1/3 the bandwidth of a gigabit ethernet cable, assuming everything works like it should.
    So when you are downloading from your big RAID 5 file store on the gigabit network to your local desktop (the hard drive that reads/writes at 320Mbps), how do you ever utilize the full gigabit ethernet connection?
    Just seems to me like the drive cache would be constantly full if you are downloading a large video file or such.
    So what obvious thing am I missing?
    Thanks

    The simple answer is that you don't. At least not in this scenario.
    As you've correctly surmised, the slower components are going to limit your throughput, but the disk speed is only one of several factors. Network latency, TCP/IP packet overhead and more will all contribute.
    So you'll rarely get true gigabit throughput on file transfers. 300-400 Mbps on a gigabit ethernet connection is typical.
    Where you can and do get more is on memory-based transfers - data in one system's RAM (or cache) being copied to another system's RAM (or cache). That's why switches and routers can handle line-speed throughput - they get a packet on one interface and send it out another, never having to think about it again (oversimplified, of course).
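    As a back-of-the-envelope illustration (using the round numbers from this thread purely as an example), the effective rate of a file transfer is set by the slowest link in the chain:

        // Sketch: the effective transfer rate is the minimum of the disk and network rates.
        public class ThroughputEstimate {
            public static void main(String[] args) {
                double diskMBps        = 40.0;          // local disk read/write, MB/s
                double rawNetworkMBps  = 1000.0 / 8.0;  // gigabit ethernet, ~125 MB/s raw
                double realNetworkMBps = 350.0 / 8.0;   // ~300-400 Mbps typical after overhead

                double fileMB = 1024.0;                 // a 1 GB video file

                double effective = Math.min(diskMBps, Math.min(rawNetworkMBps, realNetworkMBps));
                System.out.printf("Effective rate: %.1f MB/s -> about %.0f s for a 1 GB file%n",
                        effective, fileMB / effective);
            }
        }

    With these example numbers the 40 MB/s disk is the limiting factor, which is why the drive's cache stays full and the gigabit link is never saturated.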
