Oracle VM 2.2.2 - TCP/IP data transfer is very slow

Hi, I've encountered a disturbing problem with OVM 2.2.2.
My dom0 network setup (4 identical servers):
eth0/eth1 (ixgbe 10Gbit) -> bond0 (mode=1) -> xenbr0 -> domU vifs
Apart from the bonding setup, it is a default OVM 2.2.2 installation.
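For anyone comparing setups: the whole dom0 path above can be inspected read-only before changing anything. The helper below just summarizes a bonding status file; the /proc path and field names follow the standard Linux bonding driver output, and the file argument is only there so you can run it against a saved copy from another host.

```shell
#!/bin/sh
# Summarize a Linux bonding status file: mode and currently active slave.
# Defaults to the live /proc entry for bond0; pass another file path to
# run it against a saved copy.
bond_summary() {
    f="${1:-/proc/net/bonding/bond0}"
    grep -E '^(Bonding Mode|Currently Active Slave):' "$f"
}
# Also worth a look alongside it:
#   brctl show xenbr0   # which vifs are attached to the bridge
#   ethtool -k eth0     # offload settings (tso/gso/gro) on each slave
```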
Problem description:
TCP/IP data transfer speed:
- between two dom0 hosts: 40-50MB/s
- between two domU hosts within one dom0 host: 40-50MB/s
- between dom0 and a locally hosted domU: 40-50MB/s
- between any single domU and anything outside its dom0 host: 55KB/s -
something is definitely wrong here.
domU network config:
vif = ['bridge=xenbr0,mac=00:16:3E:46:9D:F1,type=netfront']
vif_other_config = []
I have a similar installation on Debian/Xen and everything runs fine
there, i.e. I don't have any data transfer speed related issues.
regards
Robert

There is also an issue with the ixgbe driver in the stock OVM 2.2.2 kernel (bug 1297057 on MOS). We were getting abysmal results for receive traffic (measured in hundreds of kilobytes per second at times) compared to transmit. It's not exactly the same as your problem, so don't blindly follow what I say below!
### "myserver01" is a PV domU on Oracle VM 2.2.2 server running stock kernel ###
[root@myserver02 netperf]# ./netperf -l 60 -H myserver01 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver01.mycompany.co.nz (<IP>) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384   16384    60.23         1.46
### Repeat the test in the opposite direction, to show TX is fine from "myserver01" ###
[root@myserver01 netperf]# ./netperf -l 60 -H myserver02 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver02.mycompany.co.nz (<IP>) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384   16384    60.01      2141.59
In my case, a workaround as advised by Oracle Support is to run:
ethtool -C eth0 rx-usecs 0
ethtool -C eth1 rx-usecs 0
against the slaves within your bond group. This will give you better performance (in my case, up to ~1.2Gbit/s), although there are fixes coming in the next kernel which get even better speeds (~2.2Gbit/s in my tests).
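If you have more than two slaves, or several bonds, it can be handy to generate those ethtool lines instead of typing them. This is only a sketch: it assumes the standard sysfs bonding layout (`/sys/class/net/<bond>/bonding/slaves`), and it prints the commands rather than running them, so you can review the output before piping it to a root shell.

```shell
#!/bin/sh
# Print the interrupt-coalescing workaround for every slave of a bond.
# Takes the path to a bonding "slaves" file (space-separated interface
# names); defaults to the sysfs entry for bond0.
print_rx_workaround() {
    slaves_file="${1:-/sys/class/net/bond0/bonding/slaves}"
    for slave in $(cat "$slaves_file"); do
        echo "ethtool -C $slave rx-usecs 0"
    done
}
# Review, then apply as root:  print_rx_workaround | sh
```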
Edited by: user10786594 on 11/09/2011 02:22

Similar Messages

  • Internal Disk to Disk Data Transfer Speed Very Slow

I have a G5 Xserve running Tiger, with all updates applied, that has recently started experiencing very slow drive-to-drive data transfer speeds.
When transferring data from one drive to another (internal to internal, internal to USB, internal to FW, USB to USB, or any other combination of the three) we only get about 2GB/hr transfer speeds.
I initially thought the internal drive was going bad. I tested the drive and found some minor header issues etc. that could be repaired, so I replaced the internal boot drive.
I tested again and immediately got the same issue.
I also tried booting from a FW drive and got the same issue.
If I connect to the server over the ethernet network, I get what I would expect to be typical data transfer rates of about 20GB+/hr, much higher than the internal rates, and I am copying data from the same internal drives, so I really don't think the drive is the issue.
I called AppleCare and discussed the issue with them. They said it sounded like a controller issue, so I purchased a replacement MLB from them. After the replacement, drive data transfer speeds jumped back to normal for a day, maybe two.
Now we are back to experiencing slow data transfer speeds internally (2GB/hr) and normal transfer speeds (20GB+/hr) over the network.
Any ideas on what might be causing the problem would be appreciated.

    As suggested, do check for other I/O load on the spindles. And check for general system load.
I don't know of a good built-in GUI I/O monitor here (particularly for Tiger Server), though there are iopending, DTrace, and Apple-provided [performance scripts|http://support.apple.com/kb/HT1992] with Leopard and Leopard Server. top will show you busy processes.
    Also look for memory errors and memory constraints and check for anything interesting in the contents of the system logs.
The next spot after the controller (and it's usually my first "hardware" stop for these sorts of cases, usually before swapping the motherboard) is the disks involved, plus whatever widgets are in the PCI slots: loose cables, bad cables, and spindle swaps. Yes, disks can sometimes slow down like this, and that's usually not a Good Thing. I know you think this isn't the disks, but they remain one of the common hardware factors. And don't presume any SMART disk monitoring has predictive value; SMART can miss a number of these cases.
    (Sometimes you have to use the classic "field service" technique of swapping parts and of shutting down software pieces until the problem goes away. Then work from there.)
    And the other question is around how much time and effort should be spent on this Xserve G5 box; whether you're now in the market for a replacement G5 box or a newer Intel Xserve box as a more cost-effective solution.
    (How current and how reliable is your disk archive?)
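One way to put numbers on the internal-vs-network gap is a crude sequential write with dd, which bypasses the Finder entirely. A minimal sketch; the target path is a placeholder for the volume under test, and note this measures writes only:

```shell
#!/bin/sh
# Crude sequential-write check: write N megabytes of zeros to the given
# path, print dd's summary line (which includes the throughput), then
# delete the test file. Compare the figure across volumes.
write_test() {
    path="$1"
    mb="${2:-64}"
    dd if=/dev/zero of="$path" bs=1048576 count="$mb" 2>&1 | tail -1
    rm -f "$path"
}
# Example (placeholder volume name): write_test /Volumes/SuspectDrive/ddtest 256
```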

  • Need to build communication redundancy using serial RS-232 for Data Transfer b/w Host and RT irrespective of TCP/IP Data Transfer

Hi - I would like to build logic that provides communication redundancy, using serial RS-232 for data transfer between host and RT independently of the TCP/IP transfer.
I want data transfer between host and RT to fall back to the RS-232 VISA port whenever the TCP/IP ethernet cable is unplugged from the controller. It should keep checking continuously for the TCP/IP link to be re-established, and whenever the link comes back, communication should switch to using that link again. This is accomplished by deploying the RT VI as an executable. I wrote some logic along these lines, but it does not work as well as I expected.
I request you to go through the two attached VIs and let me know what I did wrong.
Please do the needful.
    Attachments:
    TCP_Serial_Host.vi ‏33 KB
    TCP_Serial_RT.vi ‏41 KB

Even I am new to this topic and am trying to get familiar with these protocols.
Refer to the TCP server/client examples in the LabVIEW examples.

  • Data Base is very slow

    Dear All,
Certain queries on my database are very slow.
One of the queries sometimes does not execute at all. This query involves a big table of about 5 million records.
    Some Facts About my Database.
    OS: SUN Solaris
    DataBase: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    RAM:32GB
    Dedicated oracle server
    Processors:16
    DB Block Size:2048
    Large pool:150994944
    log_buffer:10485760
    shared_pool_size:150994944
    There are in total 21 production Database running on the same BOX.
    Previously my buffer cache hit ratio was 27%.
    So I recommended an increase in DB_CACHE_SIZE from 101MB to 300MB
    and SGA_MAX_SIZE to 800MB from 600MB.
As a result, the buffer cache hit ratio increased to 75%.
But the queries still run slow.
I even tried partitioning the big table; it didn't help.
    My question is ,is the system over loaded ?
    or increasing the db_cache_size will help ?
    Regards.

    By itself the buffer hit cache ratio is a meaningless statistic. It can in fact be a misleading indicator since it does not actually reflect application performance.
    Tune the query. Make sure it is running as well as it can.
    Then look at overall machine resources: average and peak cpu, memory, and IO loads.
If spare resources exist, then consider giving more resources to the more important databases on the system.
Document any performance changes that occur after each change. It is possible that the database performance problem is latching, that is, shared pool access, and you might need space in the shared pool more than in the buffers. It depends on the application and user load.
    Why are you using a 2K database block size? I would think 4k or 8k would probably be better even for a true OLTP with almost all access by index.
    To get help on the query you will need to post it, the explain plan, and information on available indexes, table row counts, and perhaps column statistics for the indexed columns and filter conditions.
    HTH -- Mark D Powell --

  • Data Services Designer - Very Slow on VPN

    Hello,
Any idea why Data Services Designer is very slow and frequently goes into a Not Responding state? I'm using this client tool to connect to the Data Services repository and servers via VPN.
It takes a few minutes to load jobs or to save changes, and it sometimes hangs.
    Wanted to know if anyone is facing similar issues, and any workaround/setup changes to eliminate these delays...
    Regards,
    Madan
    Edited by: Madan Mohan Reddy Zollu on Mar 12, 2010 9:24 AM

Data Services Designer communicates with the repository (to store/retrieve objects) and the job server (to execute jobs and get status/log files), so if the network connection is slow, response time in the Designer can become problematic.
One way to solve this is to use Citrix or Terminal Services so that your Designer runs close to the database and only screen updates are sent over the slow connection. The Windows installation guide has a chapter that documents how to set up Designer in a (multi-user) Citrix environment.

  • TCP/IP data transfer to PDA?

I'm trying to display indicator values on my laptop through a wireless connection on my PDA, by whatever means available in LabVIEW. I'm currently using TCP/IP. I haven't been successful with my own program, so I've tried the Simple Data Server and Simple Data Client available in the LabVIEW examples. I can't get them to work. Attached are the example programs I'm using. I run the server on my laptop with port 5060. I created a PDA executable from the LabVIEW example Simple Data Client and successfully downloaded it to my Dell Axim 3i. I'm directing the Data Client to port 5060 at 192.168.2.100, my laptop (Dell Latitude C510/C610) data server. It worked one time, immediately after I disabled the VI Server in LabVIEW. It hasn't worked since that one time; moreover, when it was running it wouldn't update the different waveforms when I switched the server from random to sine to chirp. It stayed on random.
    I can ping the laptop at 192.168.2.100 from the PDA and I can ping the PDA at 192.168.2.101 from the laptop.
    Attachments:
    PDA_TCP.zip ‏65 KB

Try the "TCP data server" and "TCP data client" examples that come specifically for the PDA. If those work fine, then we can have a look at why the other server/client pair does not work. It does not matter which of them is on your laptop.
Most example programs do not adapt completely to the PDA.
Try it and let me know if I can help you more.

  • Official release of data modeler is very slow on Linux (64bit)

I was testing the beta version on Linux (64-bit) and it was fast (reverse engineering from a database, for example), but I found the official release of the data modeler slower than the beta version. Same JDK version, same 64-bit Linux distro. Is there a memory leak in the official release? Has anyone else observed such sluggish performance with the official release of the data modeler?
    thanx


  • Master Data loading is very slow.

    Hi Experts,
I have scheduled the Master Data Attribute process chain daily. The 0EMPLOYEE_ATTR InfoObject has only 487,315 records, yet it takes more than 12 hours. A number of much bigger InfoObjects take only 10 minutes. 0EMPLOYEE attributes grow by only 5-10 records daily. Earlier it took 4-5 hours.
    Regards,
    Anand Mehrotra.

    Hi,
You must have the following profiles assigned to the BWREMOTE or ALEREMOTE user, so add them. One of these two users is used in the background to extract the data from ECC, so add these profiles in BW.
    S_BI-WHM_RFC, S_BI-WHM_SPC, S_BI-WX_RFC
And also check the following things:
1. Connections from BW to ECC and ECC to BW in SM59.
2. Check port, partner profiles, and message types in WE20 in ECC & BW.
3. Check dumps in ST22 and SM21.
4. If IDocs are stuck: look at the OLTP IDoc numbers in the RSMO screen (in BW), details tab, at the bottom. Take the IDoc numbers, go to ECC and check their status in WE05 or WE02. If there is an error, check the log; otherwise go to BD87 in ECC, enter the IDoc numbers, execute manually, and check again in RSMO after a refresh.
5. Check for stuck LUWs in SM58 with User Name = * (star), run it, select your LUW, execute it manually, and check in RSMO in BW.
    See in SDN
    Re: Loading error in the production  system
    Thanks
    Reddy

  • Data load is very slow

    Hi Experts,
I am working on CRM Analytics. I am loading address data from extractor 0BP_DEF_ADDRESS_ATTR to the business partner, with 19 lakh (1.9 million) records. When I execute the DTP, it takes 3 to 4 days to complete the load.
Please provide a solution so that my data load becomes faster.
    With Regards,
    Avenai

    Hi,
Increase the number of parallel processes.
To increase the parallel processes, from the menu of the DTP choose Goto -> "Settings for Batch Manager" and increase the number of parallel processes (by default it is 3; increase it to 6).
Increase the data packet size in the DTP extraction tab.
Do you have any routines in the transformations? If yes, try to debug the code to find where it is taking time, and fine-tune the code with the help of an ABAP person.
The option below may also be one of the reasons when using CRM data sources:
The data source contains lots of fields which are not used or mapped in the transformation; try to hide those fields, or create a copy of your data source using transaction BWA1 in the CRM system.
    Regards
    KP

  • HDD data transfer rate is slow in my new i7

    Hi
I had a MacBook Pro 15", 2.8 GHz, 500 GB HDD, and because of some hardware problems Apple replaced this laptop with a new MacBook Pro 15", 2.66 GHz i7 CPU, 500 GB HDD. I installed the same Windows 7 64-bit on both via Boot Camp. The disk transfer rate score on the old MacBook is 5.9 and on my new laptop it is 5.6?!
I installed the Boot Camp drivers and also installed all Windows 7 updates.
Can you tell me the reason for this difference, and is there any solution to improve this rate?

    Hi n, and welcome to Apple Discussions.
    As you're still covered under warranty, call AppleCare or go to an Apple Store or AASP and explain the situation. A replacement computer is supposed to meet or exceed the machine it replaced, and if yours isn't performing to that level . . . .

  • Data load becomes very slow

Hi, after a migration from version 5 to 6.5 the data load has become very slow. With V5 the data load took 1 hour; with 6.5 it takes about 3 hours. The calculation takes the same time. Any idea?

Too many subVIs could not be found, so I cannot give you more than some advice. But I see that you run all your loops at full speed. I do not think that is very wise. Insert the "Wait (ms)" function in all while loops, except the loops handling the DAQ functions, since those are controlled by an occurrence. In loops handling user input only, you may set the wait time as high as 500 ms; in more important loops use a shorter time.
Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
(Sorry, no LabVIEW "brag list" so far)

  • Large SGA issue-- insert data is very slow--Who can help me?

I set sga_max_size to 10G and db_cache_size to 8G, but the value of db_cache_size shows as a negative number in OEM, and I also found that inserting data was very slow. I checked the OS and found neither CPU nor I/O saturation.
    The OS is HP-UX B11.23 ia64.
    Oracle server 9.2.0.7
    Physical memory : 64G
    CPU: 8
    (oracle server and os are all 64-bit).
If I decrease the SGA to 3G and db_cache_size to 2G, the same data insert is very fast and everything is fine.
So I guess some OS parameters need to be set when using large memory.
Does anyone know this issue, or have experience using a large SGA on HP-UX?
    Message was edited by:
    user548543

Sounds like you might have a configuration issue on the OS side.
Check that the kernel parameters are set as recommended in the installation guide.
The first thing that came to mind after reading the problem description is that you might have too low a SHMMAX for that 10GB SGA, which would cause multiple shared-memory segments to be created and thus explain the performance degradation you're experiencing.
A quick way to check is to run "ipcs -m" and see whether there are multiple shm segments when the SGA is set to 10GB.
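To make that check less eyeball-driven, you can count the segments per owner. A small sketch; it assumes the common ipcs layout where the owner is the third column (true on Linux; verify the column position on HP-UX before trusting the count):

```shell
#!/bin/sh
# Count shared-memory segments per owner from "ipcs -m" output on stdin.
# If a single Oracle instance shows several large segments, SHMMAX is
# likely smaller than the SGA. "Owner in column 3" is an assumption.
count_shm_segments() {
    awk -v u="${1:-oracle}" '$3 == u { n++ } END { print n + 0 }'
}
# Usage: ipcs -m | count_shm_segments oracle
```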

  • [Solved]Computer slow down and freeze during data transfer syncing

    Hello,
Every time I make a data transfer, my computer slows down and sometimes freezes for a few seconds, works for a fraction of a second, and freezes again. I don't understand why.
As an example, here is a bunch of files I transferred while syncing with Grsync.
It took about 12 hours to sync 64 GB.
I've read a lot of topics about this problem but nothing works. I tried switching off swap and setting vm.dirty_ratio and vm.dirty_background_ratio, but nothing works.
I'm making the transfer between two Western Digital USB 2.0 external hard drives formatted as NTFS.
    Does anyone know how to fix this?
    Thanks in advance
    Last edited by Neldar (2015-03-04 15:54:29)
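For reference, the writeback knobs mentioned above are set like this (the parameter is spelled vm.dirty_background_ratio). The values are illustrative only; treat this as tuning for slow-device writeback stalls, not a fix, since the actual cause in this thread turned out to be hardware:

```shell
# Lower the dirty-page thresholds so writeback to a slow USB disk starts
# earlier and in smaller bursts (illustrative values; needs root):
sysctl -w vm.dirty_ratio=10
sysctl -w vm.dirty_background_ratio=5
# To persist across reboots, add the same keys to /etc/sysctl.conf
# (or a file under /etc/sysctl.d/).
```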

It seems it was because one of my external hard drives was about to die. I performed the same operation with two Western Digital 1 TB USB 3.0 external hard drives and it works perfectly.

  • Megastick 511 slow data transfer

    Hi
I have a 1GB MegaStick that should support USB 2.0, but its data transfer is extremely slow. It doesn't give more than about 0.5 MB/s, or even less. Is it some kind of firmware or software problem? I tried the latest firmware but it didn't help...

    https://forum-en.msi.com/index.php?topic=90221.0
    https://forum-en.msi.com/index.php?topic=81136.0
    the Search button is your friend...
Now, without knowing more details of the system you are connecting your MegaStick to, it is difficult to advise further...

  • Data Transfer from Oracle database 10g to Oracle database 9i

    Hi Experts,
We need to insert records at a speed of at least 10,000 records per second under the following conditions.
Source: Oracle 10g and Oracle 9i
We need to select the data from Oracle 10g and insert it into the Oracle 9i database.
We are not allowed to create a database link, and also not allowed to create a view or materialized view,
because these two databases are not on the same network.
So we are developing a small Java application on an intermediate server, with a process that obtains connections to both database servers.
From the Java application we call the procedure for selecting data from Oracle 10g and inserting it into the Oracle 9i database.
What is the best way to achieve this?
Or can you suggest any other way to achieve it?
(In this scenario, would a materialized view work?)

Thanks freiser,
But that creates another problem for my business logic. There will be two database servers: one is the online server, where online users fill in the form (generated with Java, Spring, and Hibernate) backed by database 10g.
At day end I need to execute a process that transfers data from the online server to the offline server, which runs Oracle database 9i. This process is scheduled. For security reasons the client does not keep these two databases on the same network.
My challenge is to transfer data from the online server to the offline server while complying with the client's security norms.
I have options like:
1) Using Oracle replication: creating a materialized view on the remote server and refreshing it at a regular interval. But database connectivity is not continuous; should I go for that?
2) Writing a Java application on an intermediate server with a process that obtains connections to the two database servers. From the Java application we call the procedure for selecting data from Oracle 10g and inserting it into the Oracle 9i database, using a flag on both sides to identify how many rows are transferred and how many remain.
Please tell me which is the best way to achieve this.
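Since the two machines are never on the same network, one more option besides the custom Java bridge is the classic export/import utilities, moving a dump file through the intermediate server. This is a sketch with placeholder credentials and table names; note that 9i's imp cannot read Data Pump (expdp) dumps, so the original exp must be used on the 10g side:

```shell
# On the 10g (online) side: classic export of the day's tables.
# app_user/app_pass and the table list are placeholders.
exp userid=app_user/app_pass file=day_end.dmp tables=ORDERS,CUSTOMERS

# Move day_end.dmp through the intermediate server to the 9i host
# by whatever transfer the security norms allow.

# On the 9i (offline) side: import the dump into existing tables.
imp userid=app_user/app_pass file=day_end.dmp full=y ignore=y
```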
