MPLS TE - Latency
Is TE able to detect high latency on a circuit? I understand it solves problems like efficient use of all available bandwidth based on RSVP, but what about long-distance links with increasing latency over time? Will TE detect that?
Hello Francisco,
the answer should be negative.
With MPLS TE you can associate resources to links (RSVP bandwidth) and you can run a modified version of OSPF or IS-IS that takes into account current usage of these resources over the links, but not performance-related info about the links.
For that you should look at OER / Performance Routing, where you can "test" the links over time and decide to stop using links that are beyond a threshold.
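PfR is configured in IOS rather than scripted, but the decision logic described here - probe each path, stop using paths beyond a threshold - can be sketched in a few lines. All names and thresholds below are illustrative, not real PfR behaviour:

```python
# Minimal sketch of threshold-based path selection, in the spirit of
# OER/PfR active probing. Names and numbers are illustrative only.

LATENCY_THRESHOLD_MS = 150  # example out-of-policy boundary

def pick_path(probes):
    """probes: dict mapping path name -> measured RTT in ms.
    Prefer the primary path unless it is beyond threshold and a
    compliant alternative exists."""
    in_policy = {p: rtt for p, rtt in probes.items()
                 if rtt <= LATENCY_THRESHOLD_MS}
    if "MPLS" in in_policy:
        return "MPLS"                             # primary still in policy
    if in_policy:
        return min(in_policy, key=in_policy.get)  # best compliant alternative
    return min(probes, key=probes.get)            # everything OOP: least-bad

print(pick_path({"MPLS": 80, "DMVPN": 120}))   # -> MPLS (in policy)
print(pick_path({"MPLS": 310, "DMVPN": 120}))  # -> DMVPN (MPLS is OOP)
```

The real feature adds learning, histories, and route control, but the core idea is this comparison of measured performance against a policy threshold.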
And of course QoS can be of great help in providing better treatment to some traffic classes (so bounded latency and limited jitter)
What kind of WAN links do you mean?
Hope to help
Giuseppe
Similar Messages
-
Hello All,
I have a MPLS VPN setup for one of my sites. We have a 10M pipe (Ethernet handoff) from the MPLS SP, and it is divided into 3 VRFs.
6M - Corp traffic
2M - VRF1
2M - VRF2
The users are facing a lot of slowness while trying to access an application on VRF1. I can see the utilization on VRF1 is almost 60% of its total capacity (2M). Yesterday, when pinging across to the VRF1 peer in the MPLS cloud, I was getting a max response time of 930 ms.
xxxxx#sh int FastEthernet0/3/0.1221
FastEthernet0/3/0.1221 is up, line protocol is up
Hardware is FastEthernet, address is 503d.e531.f9ed (bia 503d.e531.f9ed)
Description: xxxxx
Internet address is x.x.x.x/30
MTU 1500 bytes, BW 2000 Kbit, DLY 1000 usec,
reliability 255/255, txload 71/255, rxload 151/255
Encapsulation 802.1Q Virtual LAN, Vlan ID 1221.
ARP type: ARPA, ARP Timeout 04:00:00
Last clearing of "show interface" counters never
I also see a lot of Output drops on the physical interface Fa0/3/0. Before going to the service provider, can you please tell me if this can be an issue with the way QoS is configured on these VRFs?
xxxxxxx#sh int FastEthernet0/3/0 | inc drops
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3665
Appreciate your help.
Thanks
Mikey
Hi Kishore,
Thanks for the clarification. Let me speak to the service provider and see if we can sort out the Output drops issue.
I had a few more queries.
1) Will output drops also contribute to the latency here?
2) The show int fa0/3/0.1221 output below only shows the load on the physical interface (fa0/3/0) and not that of the particular sub-interface, right?
xxxxxx#sh int fa0/3/0.1221 | inc load
reliability 255/255, txload 49/255, rxload 94/255
xxxxx#sh int fa0/3/0 | inc load
reliability 255/255, txload 49/255, rxload 94/255
I can try and enable IP accounting on that sub-interface (VRF) and see the load. Thoughts?
3) As you said, if the 2M gets maxed out I would see latency as the shaper is getting fully utilized. But I don't see that on the interface load as mentioned above? I have pasted the ping response taken at the same time as the load output. I can't read much into the policy-map output, but does it say anything about the 2M being fully utilized and hence packets getting dropped?
xxxxxxx#ping vrf ABC x.x.x.x re 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to x.x.x.x, timeout is 2 seconds:
Success rate is 99 percent (997/1000), round-trip min/avg/max = 12/216/1972 ms
xxxx#sh policy-map interface fa0/3/0.1221
FastEthernet0/3/0.1221
Service-policy output: ABC
Class-map: class-default (match-any)
114998 packets, 36909265 bytes
5 minute offered rate 11000 bps, drop rate 0 bps
Match: any
Traffic Shaping
Target/Average Byte Sustain Excess Interval Increment
Rate Limit bits/int bits/int (ms) (bytes)
2000000/2000000 12500 50000 50000 25 6250
Adapt Queue Packets Bytes Packets Bytes Shaping
Active Depth Delayed Delayed Active
- 0 114998 36909265 1667 2329112 no
Thanks
Mikey -
Dear All,
I am midway through delivery of an MPLS network for a customer. We have the option of adding a DMVPN internet backup. The WAN routing protocol is BGP for both DMVPN and MPLS (MPLS is straight routing - no overlay or GETVPN).
The customer has some pretty odd ideas about how to use the service; mainly they are very paranoid about oversubscribing the MPLS link for their "golden" application, so they have come up with all kinds of weird and wacky solutions that they call "requirements". As a result, they have the idea that they cannot route all traffic over MPLS, and instead want to create a mix of route filters, PBR, NAT, ACLs and all kinds of horrible things to keep the MPLS link "sacred".
I would like to guide them to use the service in a better way leveraging PfR, something simple to start with like:
- use MPLS all of the time by default
- when it reaches 80% utilisation, switch the non-critical traffic to DMVPN yet keep the "golden" app on MPLS
- this policy is applied at both branch and hub BRs
I have read this cisco wiki (it is rather good!!!!):
http://docwiki.cisco.com/wiki/PfR:Solutions:EnterpriseDualVPN#PfR_performance_and_load_policy_test_case
Where I get a bit lost is in the symmetry. Is it symmetrical?
For example, if a branch MPLS link hits 81% outbound utilisation, it then uses dynamic PBR to switch outbound traffic to VPN. Sorted! But what about the inbound? If that is hitting 81% also, what does PfR do?
This is where I get a bit lost. For the inbound to the branch, the hub must do something, surely? Consider that the MPLS capacity is 100M - the 8.1M to the branch is in policy, so it should keep forwarding via MPLS. How does it know that the branch is OOP?
If it doesn't do that, you end up with asymmetric traffic (branch outbound DMVPN, inbound MPLS).
Am I missing something here? Any advice you can give us much appreciated.
Thanks a million!
Phil
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
Ah, well my experience with asymmetrical routing has often been with similar bandwidth paths, so it's not normally a problem unless there's some "stateful" device between hosts. But I can see why inadvertent transit via VSAT might be considered a problem. ;)
I think PfR can help us with the outbound traffic from the branch, but I struggle to see how it can help with the inbound IF using utilization only on the hub BR - agreed, it would never go out of policy (10M to a single branch means the hub is only at 10%).
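A quick back-of-the-envelope check makes the asymmetry of the utilization view obvious (the 100M hub, 10M branch and 8.1M figures are from this thread):

```python
# Why hub-side egress utilization alone never flags one congested branch.
hub_mbps = 100      # hub MPLS link
branch_mbps = 10    # branch MPLS link (figure quoted above)
flow_mbps = 8.1     # traffic heading inbound to that branch

print(round(flow_mbps / branch_mbps * 100, 1))  # 81.0 - OOP at the branch
print(round(flow_mbps / hub_mbps * 100, 1))     # 8.1 - comfortably in policy at the hub
```

The same flow is 81% of the branch link but only 8.1% of the hub link, which is why a pure egress-utilization policy at the hub can never catch branch-inbound congestion.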
Ah, it works because, as noted, PfR will see the jump in latency (passively and/or actively) to the destination if you have it do more than monitor egress port utilization. (When I used OER/PfR, I had international multipoint, so "in" congestion to any one site could be caused by the aggregate of multiple other sites' "out" to it.)
With active monitoring, PfR would see the MPLS link latency worse than your VPN backup (to same destination), and when it does, juggle traffic flows.
No, don't have experience with using PfR's inbound control (at the time I set it up, most of my devices only supported OER, and inbound isn't an option with OER).
Also, I cannot say what's typical "real world". I can say that when I set up "my" OER/PfR, our network monitoring section complained that almost all our WAN performance problems disappeared. (Laugh - we then had to work out how to "see" WAN performance problems in spite of OER/PfR.) -
Hi:
Based on paper titled "L3 MPLS VPN Enterprise Consumer Guide" page 52, figure 44. (http://www.cisco.com/en/US/partner/netsol/ns465/networking_solutions_white_papers_list.html).
1) The figure discards the "streaming video" and "bulk data" traffic classes within the mapping process. Why? What happens to this traffic? Is it discarded, or does it simply need to be mapped to "Best Effort"? Please explain.
2) In the same figure, "Interactive Video" is mapped to the "Realtime" SP class along with "Voice" traffic. Is this "Interactive Video" traffic always non-TCP-based? If the opposite is true, why mix TCP & UDP in the same "Realtime" class?
Hi,
That article mentions that these protocols tend to use transport-layer protocols such as UDP and RTSP. That is true, but there are a lot of different streaming protocols around and some of them do use TCP. In fact, even RTSP supports the use of TCP. And you can also stream via HTTP (Windows Media supports this, for example).
So you see, there can be a mix of TCP and UDP traffic here.
The other, more critical, reason for not mixing interactive-traffic with streaming (one-way) traffic is the drastically different jitter/latency requirements for the two. Streaming traffic will easily sustain latency in the order of seconds and jitter is not even a problem. Whereas interactive traffic will not. That is why you should not mix the two.
Hope that helps - pls rate the post if it does.
Paresh -
SLA monitoring of MPLS service
Hi guys, we have MPLS links to about 5 offices around the globe; bandwidth is around 2 Mb across all links, managed by a single ISP. We have had various outages recently and we do not have visibility into the average bandwidth usage. The ISP has its own portal but it doesn't work when needed the most. They have an option where we can pay for SNMP feeds, but there is no provisioning for capturing NetFlow. I guess they use that for their own portal purposes.
The routers at the CPE side (our side), are managed by the ISP.
What tools/applications can I use on my side to maintain visibility over the MPLS links provided to us?
Hi,
You can make use of the "IP SLA" feature between your CPE devices. Though the CPEs are connected via an MPLS VPN network, the enterprise network (your network) is actually unaware of MPLS technology, and all you need for IP SLA to work is IP reachability between devices. CPE-to-CPE "IP SLA" can be configured, which will give you a lot of information. It also has MIB and OID values associated with it, so you can use a free network monitoring tool with those OID values to get a pictorial presentation of your network uptime and many performance parameters (jitter, packet loss, latency, etc.).
You can get some insight to "IP SLA":
http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6555/ps6602/prod_presentation0900aecd8047bab5.pdf
HTH.. Pls rate if useful..
cheers
Arun Kumar -
Data Center Latencies !!!!
Hello,
I keep coming across these phrases saying MPLS is legacy, old school, etc.
My topic here is not an issue but more of a knowledge-gathering exercise: I would like to know more about the new WAN technologies in the market and the efficient ways of connecting two cities miles away from each other, or countries, or continents. Is getting latency in the WAN a thing of the past? Are there better ways to get low latencies?
Please be kind enough to share insights on the new efficient WAN technologies that will reduce latencies drastically.
Thx in advance.
Regards,
Amol.
This line of questioning sounds very much like schoolwork.
Anyway, MPLS is not "legacy". It's still there, and growing. The only things slowing down the implementation of MPLS/VRF are cost and the required skill set.
Have you heard of WAN Acceleration? Have you heard of "dark fibre"? These two are predominantly used to link two (or more) sites together. -
Data Center Interconnect using MPLS/VPLS
We are deploying a backup data center and need to extend a couple of VLANs to the backup data center. These two DCs are interconnected by a fibre link which we manage, terminating on ODC2MAN and ODCXMAN. We run MPLS on these devices, ODC2MAN and ODCXMAN (Cisco 6880), as PE routers. I configured OSPF between these devices and advertised their loopbacks.
I need configuration assistance on my PEs (odcxman and odc2man) to run the VFI and the VPLS instances. The VLANs on the ODCXAGG need to extend to the ODC2AGG.
Also, I am looking for the configuration assistance such that each core devices should have 3 eigrp neighbors.
For example:
ODC2COR1 should have EIGRP neighbors with ODCXCOR1, ODCXCOR2 and ODC2COR2, and my VPLS cloud should be emulated as a transparent bridge to my core devices such that it appears ODC2COR1 is directly connected to ODCXCOR1 and ODCXCOR2 and has a CDP neighbor relationship. I have attached the diagram. Please let me know your inputs.
Hello.
If you are running an Active/Backup DC scenario, I would suggest making the network design and configuration exactly the same. This includes platforms, interconnectivity types, etc.
Do you know what is the latency on the fiber between these two DCs?
Another question: why do you run 6880 in VSS, do you really need this?
Q about the diagram: are you going to utilize 4 fibers for DC interconnection?
PS: did you think about OTV+LISP instead of MPLS? -
MPLS P-to-PE OSPF Inter-Area failover
Hi Guys.
I am simulating a MPLS core using OSPF for the control plane IGP.
Here's the setup:
Area 0 - backbone
Area 1 - PE routers in location A (PE-A)
Area 2 - PE routers in location B (PE-B)
Network is running MPLS/VPN
Here are the requirements:
1. There will be n x GE links between PE-A and PE-B to meet latency requirements and bypass Area 0 for Location A<>B destined traffic.
- I can probably use a new direct route between PE-A and PE-B to establish MP-BGP.
2. When Area 1 to Area 0 links are down, Area1 should failover via Area 2.
And when Area 2 to Area 0 links are down, Area2 should failover via Area 1.
- I can probably use virtual links here... but I don't want to complicate things.
Any recommendations on better design?
Thanks
A long time ago the rule of thumb was that you could have up to 50 routers in one area. This was at a time when routers and switches had low CPU speed and little memory. Nowadays, routers and switches are powerful enough to handle the database of more than 50 routers. I don't think this is going to be an issue with 24 routers, especially since you already have 20 routers in one area.
HTH -
VoIP deployment over MPLS links
Hi
We want to deploy VoIP for multiple locations over the MPLS lines we have between them. Can anybody advise on the feasibility, how to do it, and point me to any related documents and guidance?
Thanks
Manish Gaur
Manish,
That is probably the best way to connect multiple sites without having to connect each one with a T1, etc. Our MPLS vendor has an SLA guaranteeing latency of 50 ms or less. With voice you want to make sure the latency is definitely less than 150 ms. For ease of management we're using a router at each remote site with SRST capabilities, so that all of the configuration remains on the main HQ CallManager, but if the connection ever goes down, the remote-site phones will still be able to call each other (though not the main office until the connection is restored). Alternatively you can put a router at each site with CallManager Express; you will just have to manage each site separately. Hope that helps a little. -
Hi,
We are already having leased line for our branch office connectivity. We are planning to extend the connectivity through MPLS.
Is it possible to convert my traffic from the LL to MPLS, or do we need to extend the connectivity with MPLS only?
Existing
Branch Office --> LL --> Head Office
Proposed
Branch Office --> LL --> MPLS --> MPLS Cloud --> Head Office
Also, please let me know how to check the network latency in the MPLS cloud.
Best Regards,
M.K
Hi,
What kind of traffic do you have?
You should be able to run MPLS over the LL. You can also run MPLS over the other links, extending the MPLS cloud.
To check round-trip latency, you can use IP SLA. The easiest is to just use ICMP:
http://www.cisco.com/en/US/partner/docs/ios/12_4/ip_sla/configuration/guide/hsicmp.html
Thanks,
Luc -
We have a branch office with about 220 ms of latency between it and head office. The branch comprises about 15 client machines (Windows 7 desktops and laptops). We have been struggling with application performance across this link. The branch office has a 2 Mbps MPLS connection and head office has a 50 Mbps MPLS connection. Because of the latency and BDP we get horrible throughput and rarely reach 2 Mbps. Increasing the bandwidth to this site would not have much effect, I think. One thing that was suggested was getting a second 2 Mbps MPLS connection there and routing certain traffic over each pipe. The idea is that we can double the throughput by having 2 separate pipes rather than 1 larger pipe.
My question is, does anyone have any experience with this sort of setup? Will this improve overall performance to the site?
Yes, I have lots of experience (supporting international WAN links).
No, I don't see a 2nd pipe being better than one twice as large. In fact, it can be detrimental. However, the killer in these situations is the distance-based latency, which isn't "fixed" by adding bandwidth.
For distance based latency, there's no 100% "fix" beyond "just don't do it". If possible, use local resources especially for "interactive" network access. WAAS/WAFS, with their "tricks", can mitigate much of the distance latency impact (basically the most benefit comes from local caching which avoids the WAN's RTT).
For long-distance throughput, you want to avoid any drops (because of slow error recovery), and for TCP apps the receiving host's RWIN needs to support the (available) BDP. This is difficult to optimize manually, although often, on older hosts, you need to increase their RWIN to allow a TCP flow to utilize all available bandwidth. Again, WAAS/WAFS or dynamic traffic shapers can do "tricks" to optimize throughput (basically they often spoof the connection across the WAN link so you don't need to adjust a host's RWIN).
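To put numbers on the RWIN/BDP point, here is the arithmetic for the scenario above (2 Mbps circuit, ~220 ms RTT):

```python
# BDP and TCP window-limited throughput for a long-latency link.
link_bps = 2_000_000   # 2 Mbps MPLS circuit
rtt_ms = 220           # ~220 ms round-trip time

bdp_bytes = link_bps * rtt_ms / 1000 / 8
print(bdp_bytes)       # 55000.0 bytes must be in flight to fill the pipe

# With the classic 64 KB default RWIN, a single TCP flow tops out at:
rwin_bytes = 65535
max_bps = rwin_bytes * 8 * 1000 / rtt_ms
print(round(max_bps / 1e6, 2))  # ~2.38 Mbps ceiling for one loss-free flow
```

In this particular case a 64 KB window just covers the 2 Mbps circuit; as noted above, it's packet loss and TCP's slow recovery at this RTT that really hurt throughput, and on bigger pipes an unscaled RWIN becomes the hard cap.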
QoS can keep bulk transfers from disrupting interactive applications, which can make a noticeable improvement.
Bulk transfers, w/o WAN acceleration, can be faster if concurrent flows (not multiple links) are used. This is because the high distance-based RTT slows how fast a TCP flow will initially increase its flow rate and/or (especially) recover its transmission rate when there's packet loss.
PS:
BTW, different hosts, with different "vintage" TCP stacks and/or file processing logic, can fare quite differently across a high latency network. -
In our enterprise MPLS network we are using the 192.168.20.0/24 subnet. Within this subnet we have not assigned 192.168.20.200/30 and 192.168.20.204/30, but these subnets are still reachable. Are these NNI IPs...? Please explain.
I have checked with the ISP; their response is below:
Those are the NNI to GBNET IPs for Dominican Republic. They are Network IPs. You should be able to ping them-that means they are working.
WANRT01#show ip route | include 192.168.20.20
B 192.168.20.200/30 [20/0] via 192.168.20.226, 02:18:29
B 192.168.20.204/30 [20/0] via 192.168.20.226, 02:18:29
This shows that from any of our MPLS sites we are able to trace the IP, and it seems as if 192.168.20.204/30 is one more site, but in reality it is not.
INMUMWANRT01#ping 192.168.20.205
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.20.205, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 224/232/260 ms
INMUMWANRT01#trace
INMUMWANRT01#traceroute 192.168.20.205
Type escape sequence to abort.
Tracing the route to 192.168.20.205
VRF info: (vrf in name/id, vrf out name/id)
1 192.168.20.226 24 msec 24 msec 24 msec
2 192.168.20.206 [AS 8035] 232 msec 232 msec 252 msec
3 192.168.20.205 [AS 8035] 224 msec 224 msec * -
Latency issue in Desktop Sharing
We are planning to develop a conferencing solution using LCCS. I am trying to evaluate the screen/desktop-sharing application. I am experiencing 5-10 seconds of latency during the transfer of the screen data to the other end.
I am using the demo application(ScreenShareSubscriber and ScreenSharePublisher), provided in the SDK.
Some more details:
- Current OS is Windows 7 (32 bit).
- I am behind a proxy.
- I am running the applications in India.
- Using Flex builder 4.6 with Flash player 11.1.
- Using the developer account to test the application.
Questions:
- Can the delay be reduced programmatically? If yes, then how?
- The final solution may be used by people distributed across the globe. Is there a possibility that the latency is affected by location?
- If the above is true, does Adobe provide cloud services (for commercial applications) distributed across different locations to reduce the latency?
- Can a proxy server be an issue? We have port 443 open on the proxy server for TLS connections.
- If the above is true, then how can we avoid the issue? The final application may be used in a corporate network and we cannot ask everyone to change their network settings to connect to LCCS services.
I have checked some posts on the forum which say that performance is faster on Macs. We are currently not targeting the Mac platform.
Thanks,
SubratHi,
Please double-check the following firewall ports between the two subnets:
- Front End Servers - Lync Server Application Sharing service: 5065 TCP, used for incoming SIP listening requests for application sharing.
- Front End Servers - Lync Server Application Sharing service: 49152-65535 TCP, media port range used for application sharing.
- Clients: 1024-65535 TCP, application sharing.
If the issue persists, you can use the Lync server logging tool on FE server to test the process of desktop sharing.
Here is the link of using the Lync logging tool:
http://blog.schertz.name/2011/06/using-the-lync-logging-tool/
Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.
Best Regards,
Eason Huang
TechNet Community Support -
A quick primer on audio drivers, devices, and latency
This information has come from Durin, Adobe staffer:
Hi everyone,
A common question that comes up in these forums over and over has to do with recording latency, audio drivers, and device formats. I'm going to provide a brief overview of the different types of devices, how they interface with the computer and Audition, and steps to maximize performance and minimize the latency inherent in computer audio.
First, a few definitions:
Monitoring: listening to existing audio while simultaneously recording new audio.
Sample: The value of each individual bit of audio digitized by the audio device. Typically, the audio device measures the incoming signal 44,100 or 48,000 times every second.
Buffer Size: The "bucket" where samples are placed before being passed to the destination. An audio application will collect a buffer's worth of samples before feeding it to the audio device for playback. Likewise, the audio device will collect a buffer's worth of samples before handing it to the audio application when recording. Buffers are typically measured in samples (common values being 64, 128, 512, 1024, 2048...) or in milliseconds, which is simply a calculation based on the device sample rate and buffer size.
Latency: The time span between providing an input signal to an audio device (through a microphone, keyboard, guitar input, etc.) and when each buffer's worth of that signal is provided to the audio application. It also refers to the other direction, where the output audio signal is sent from the audio application to the audio device for playback. When recording while monitoring, the overall perceived latency can often be double the device buffer size.
ASIO, MME, CoreAudio: These are audio driver models, which simply specify the manner in which an audio application and audio device communicate. Apple Mac systems use CoreAudio almost exclusively, which provides for low buffer sizes and the ability to mix and match different devices (called an Aggregate Device). MME and ASIO are mostly Windows-exclusive driver models and provide different methods of communicating between application and device. MME drivers allow the operating system itself to act as a go-between and are generally slower, as they rely upon higher buffer sizes and have to pass through multiple processes on the computer before the audio is sent to the device. ASIO drivers provide an audio application direct communication with the hardware, bypassing the operating system. This allows for much lower latency, while limiting an application's ability to access multiple devices simultaneously or to share a device channel with another application.
Dropouts: Missing audio data as a result of being unable to process an audio stream fast enough to keep up with the buffer size. Generally, dropouts occur when an audio application cannot process effects and mix tracks together quickly enough to fill the device buffer, or when the audio device is trying to send audio data to the application more quickly than it can handle it. (Remember when Lucy and Ethel were working at the chocolate factory and the machine sped up to the point where they were dropping chocolates all over the place? Pretend the chocolates were samples, Lucy and Ethel were the audio application, and the chocolate machine is the audio device/driver, and you'll have a pretty good visualization of how this works.)
Typically, latency is not a problem if you're simply playing back existing audio (you might experience a very slight delay between pressing PLAY and when audio is heard through your speakers) or recording to disk without monitoring existing audio tracks since precise timing is not crucial in these conditions. However, when trying to play along with a drum track, or sing a harmony to an existing track, or overdub narration to a video, latency becomes a factor since our ears are far more sensitive to timing issues than our other senses. If a bass guitar track is not precisely aligned with the drums, it quickly sounds sloppy. Therefore, we need to attempt to reduce latency as much as possible for these situations. If we simply set our Buffer Size parameter as low as it will go, we're likely to experience dropouts - especially if we have some tracks configured with audio effects which require additional processing and contribute their own latency to the chain. Dropouts are annoying but not destructive during playback, but if dropouts occur on the recording stream, it means you're losing data and your recording will never sound right - the data is simply lost. Obviously, this is not good.
Latency under 40ms is generally considered within the range of reasonable for recording. Some folks can hear even this and it affects their ability to play, but most people find this unnoticeable or tolerable. We can calculate our approximate desired buffer size with this formula:
(Samples per second / 1000) * Desired latency in ms
So, if we are recording at 44,100 Hz and we are aiming for 20ms latency: 44100 / 1000 * 20 = 882 samples. Most audio devices do not allow arbitrary buffer sizes but offer an array of choices, so we would select the closest option. The device I'm using right now offers 512 and 1024 samples as the closest available buffer sizes, so I would select 512 first and see how this performs. If my session has a lot of tracks and/or several effects, I might need to bump this up to 1024 if I experience dropouts.
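The formula above is easy to script; this sketch reproduces the 882-sample figure and shows the latency of the two buffer sizes the example device actually offers:

```python
# Buffer-size arithmetic from the formula above.
def buffer_for_latency(sample_rate_hz, target_latency_ms):
    """Ideal buffer size (in samples) for a desired latency."""
    return sample_rate_hz * target_latency_ms / 1000

def latency_ms(sample_rate_hz, buffer_samples):
    """Latency (in ms) that a given buffer size introduces."""
    return buffer_samples * 1000 / sample_rate_hz

print(buffer_for_latency(44100, 20))  # 882.0 samples for 20 ms at 44.1 kHz

# Latency of the two nearest buffer sizes the device offers:
for size in (512, 1024):
    print(size, round(latency_ms(44100, size), 1))
# 512 -> ~11.6 ms, 1024 -> ~23.2 ms; both well within the ~40 ms comfort zone
```

This also shows why starting at 512 is reasonable: even falling back to 1024 only costs about 23 ms, still comfortably below the 40 ms threshold mentioned above.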
Now that we hopefully have a pretty firm understanding of what constitutes latency and under what circumstances it is undesirable, let's take a look at how we can reduce it for our needs. You may find that you continue to experience dropouts at a buffer size of 1024 but that raising it to larger options introduces too much latency for your needs. So we need to determine what we can do to reduce our overhead in order to have quality playback and recording at this buffer size.
Effects: A common cause of playback latency is the use of effects. As your audio stream passes through an effect, it takes time for the computer to perform the calculations to modify that signal. Each effect in a chain introduces its own amount of latency before the chunk of audio even reaches the point where the audio application passes it to the audio device and starts to fill up the buffer. Audition and other DAWs attempt to address this through "latency compensation" routines which introduce a bit more latency when you first press play as they process several seconds of audio ahead of time before beginning to stream those chunks to the audio driver. In some cases, however, the effects may be so intensive that the CPU simply isn't processing the math fast enough. With Audition, you can "freeze" or pre-render these tracks by clicking the small lightning bolt button visible in the Effects Rack with that track selected. This performs a background render of that track, which automatically updates if you make any changes to the track or effect parameters, so that instead of calculating all those changes on-the-fly, it simply needs to stream back a plain old audio file which requires much fewer system resources. You may also choose to disable certain effects, or temporarily replace them with alternatives which may not sound exactly like what you want for your final mix, but which adequately simulate the desired effect for the purpose of recording. (You might replace the CPU-intensive Full Reverb effect with the lightweight Studio Reverb effect, for example. Full Reverb effect is mathematically far more accurate and realistic, but Studio Reverb can provide that quick "body" you might want when monitoring vocals, for example.) You can also just disable the effects for a track or clip while recording, and turn them on later.
Device and Driver Options: Different devices may have wildly different performance at the same buffer size and with the same session. Audio devices designed primarily for gaming are less likely to perform well at low buffer sizes than those designed for music production, for example. Even if the hardware performs the same, the driver mode may be a source of latency. ASIO is almost always faster than MME, though many device manufacturers do not supply an ASIO driver. Third-party, device-agnostic drivers such as ASIO4ALL (www.asio4all.com) allow you to wrap an MME-only device inside a faux-ASIO shell. The audio application believes it's speaking to an ASIO driver, and ASIO4ALL has been streamlined to work more quickly with the MME device, or even to allow you to use inputs and outputs on separate devices, which ASIO would otherwise prevent.
We also now see more USB microphone devices which are input-only audio devices that generally use a generic Windows driver and, with a few exceptions, rarely offer native ASIO support. USB microphones generally require a higher buffer size as they are primarily designed for recording in cases where monitoring is unimportant. When attempting to record via a USB microphone and monitor via a separate audio device, you're more likely to run into issues where the two devices are not synchronized or drift apart after some time. (The ugly secret of many device manufacturers is that they rarely operate at EXACTLY the sample rate specified. The difference between 44,100 and 44,118 Hz is negligible when listening to audio, but when trying to precisely synchronize to a track recorded AT 44,100, the difference adds up over time and what sounded in sync for the first minute will be wildly off-beat several minutes later.) You are almost always going to have better sync and performance with a standard microphone connected to the same device you're using for playback, and for serious recording, this is the best practice. If USB microphones are your only option, then I would recommend making certain you purchase a high-quality one and have an equally high-quality playback device. Attempt to match the buffer sizes and sample rates as closely as possible, and consider using a higher buffer size and correcting the latency post-recording. (One method of doing this is to have a click or clap at the beginning of your session and make sure this is recorded by your USB microphone. After you finish your recording, you can visually line up the click in the recorded track with the click in the original track by moving your clip backwards in the timeline. This is not the most efficient method, but this alignment is the reason you see the clapboards in behind-the-scenes filmmaking footage.)
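The clock-drift problem above is easy to quantify. A small sketch using the 44,118 Hz figure from the example (real drift varies from device to device):

```python
# How far a recording made at a slightly-off sample rate drifts from a track
# played back at the nominal rate. 44,118 Hz echoes the example in the text;
# actual device error differs per unit.

def drift_seconds(nominal_hz, actual_hz, minutes):
    """Offset between audio recorded at actual_hz and a track at nominal_hz."""
    samples_recorded = actual_hz * minutes * 60
    return samples_recorded / nominal_hz - minutes * 60

print(round(drift_seconds(44100, 44118, 5), 3))   # ~0.122 s off after 5 minutes
print(round(drift_seconds(44100, 44118, 30), 3))  # ~0.735 s off after 30 minutes
```

An 18 Hz error is inaudible as pitch, but a tenth of a second of offset after five minutes is very audible as timing, which is exactly why the two-device USB-mic setup falls apart on longer takes.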
Other Hardware: Other hardware in your computer plays a role in the ability to feed or store audio data quickly. Modern CPUs are fast and, with multiple cores, capable of spreading the load, so the bottleneck for good performance - especially at high sample rates - tends to be your hard drive or storage media. It is highly recommended that you configure your temporary files location, and your session/recording location, to a physical drive that is NOT the one your operating system is installed on. Audition and other DAWs have absolutely no control over what Windows or OS X may decide to do at any given time, and if your antivirus software or system file indexer decides it's time to start churning away at your hard drive at the same time that you're recording your magnum opus, you raise the likelihood of losing some of that performance. (In fact, it's a good idea to disable all non-essential applications and internet connections while recording to reduce the likelihood of external interference.) If you're going to be recording multiple tracks at once, it's a good idea to purchase the fastest hard drive your budget allows. Most cheap drives spin at around 5400 rpm, which is fine for general use but does not allow for the fast read, write, and seek operations the drive needs to perform when recording and playing back multiple files simultaneously. 7200 RPM drives perform much better, and even faster options are available. While fragmentation is less of a problem on OS X systems, you'll want to defragment your drive frequently on Windows - this process realigns all the blocks of your files so they're grouped together. As you write and delete files, pieces of each tend to get placed in the first location that has room. This ends up creating lots of gaps or splitting files up all over the disk.
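For a sense of scale, the raw data rate of a multitrack recording is easy to estimate (track count and format below are arbitrary examples):

```python
# Sustained write rate needed to record uncompressed PCM audio.
# The track count and bit depth are illustrative, not a recommendation.

def recording_throughput_mb_s(tracks, sample_rate_hz, bit_depth):
    """MB/s of audio data generated by `tracks` simultaneous mono streams."""
    bytes_per_sec = tracks * sample_rate_hz * (bit_depth // 8)
    return bytes_per_sec / 1_000_000

# 16 mono tracks of 24-bit / 96 kHz audio:
print(round(recording_throughput_mb_s(16, 96000, 24), 2))  # 4.61 MB/s
```

The raw throughput is modest even for a 5400 rpm drive; what actually punishes slow disks is that those 16 streams live in 16 different files, so the drive head is seeking constantly - which is also why fragmentation hurts so much.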
The act of reading or writing to these spread-out areas causes the operation to take significantly longer than it needs to and can contribute to glitches in playback or loss of data when recording.

There is one point in the above that needed a little clarification, relating to USB mics:
_durin_ wrote:
If USB microphones are your only option, then I would recommend making certain you purchase a high-quality one and have an equally high-quality playback device.
If you are going to spend that much, then you'd be better off putting a little more money into an external device with a proper mic pre, and a little less money by not bothering with a USB mic at all, and just getting a 'normal' condenser mic. It's true to say that over the years, the USB mic class of recording device has caused more trouble than any other, regardless.
You should also be aware that if you find a USB mic offering ASIO support, then unless it's got a headphone socket on it as well, you aren't going to be able to monitor what you record if you use it in its native ASIO mode. This is because your computer can only cope with one ASIO device in the system - that's all the spec allows. What you can do with most ASIO hardware, though, is share multiple streams (if the device has multiple inputs and outputs) between different software.
Seriously, USB mics are more trouble than they're worth.
-
Windows Server 2012 Hyper-V network latencies
Hi All,
I have an issue with our Windows Server 2012 Hyper-V hosts that I can't seem to figure out. Situation:
2 x Dell PowerEdge R815 servers with AMD Opteron 6376 16-core CPUs and 128 GB RAM, running Windows Server 2012 with Hyper-V.
2 virtual machines running on the same physical host and connected to the same virtual switch show high TCP connection latencies. One virtual machine runs a SQL Server 2012 database instance and a Dynamics AX 2012 R2 instance. The other machine runs a SharePoint 2013 instance and the AX client. We see latencies of 20 ms and higher on most of the TCP connections made from the SharePoint machine to the SQL Server machine.
At first I thought it might have something to do with the physical NICs. It turned out that VMQ wasn't correctly supported by the firmware of the Broadcom BCM5709c cards; by default this setting is enabled. Turning off the VMQ setting somewhat improved the situation, but the latencies are still at 8 ms and higher.
What I don't understand is what influence enabling/disabling VMQ should have on network performance. As I understand it, virtual machines connected to the same virtual switch bypass the physical NIC altogether. Another point is that VMQ should actually improve performance, not decrease it.
Another question I have is about the various TCP offloading settings on the physical NICs. After installing the new firmware and drivers from Dell, most of these settings are set to disabled. The documents I have been able to find talk about Windows Server 2008; any thoughts on how these settings relate to Windows Server 2012 and whether they should be enabled?
Thanks in advance for your time and thoughts
Kind regards,
Dennes Schuitema

Hi Denes,
Please try to update your Broadcom NIC driver; the newest version should be 7.8.51.
For details, please refer to the following link:
http://www.broadcom.com/support/ethernet_nic/netxtremeii.php
Best Regards
Elton Ji
If it is not the answer, you can unmark it to continue.