How to alter High/Mid/Lows of audio?
I have an audio track in which I want to bring down the high levels. Is there any way in FCE to change the high/mid/low levels of audio?
Use an EQ filter or Low Pass filter.
-DH
Similar Messages
-
High, mid, low reference level calculation
Hi,
Does anyone know the equations for calculating the high, mid, and low reference voltages? I have set the reference levels to 10%, 50%, and 90%; I just don't know what the digitizer bases the calculation on. In Example 1, the mid level is calculated as (Vtop+Vbase)*0.5. Shouldn't it be (Vtop-Vbase)*0.5? The high and low reference calculations also do not add up. I was using a signal generator to generate the signals.
I am using a PXI-5224 and niScope.lib version 3.5.
Example 1
3 MHz square wave, 3.3 Vpp
ch0 ch1
Vpp=4.704506 Vpp=4.575151
Vtop=3.304177 Vtop=3.304474
Vbase=-0.003679 Vbase=-0.618361
VHigh=3.304177 VHigh=3.304474
VLow=-0.003679 VLow=-0.001788
VMax=4.011691 VMax=3.956790
VMin=-0.692815 VMin=-0.618361
LowRef =0.327107 V LowRef =-0.226078 V
MidRef =1.650249 V MidRef =1.343056 V
HiRef =2.973391 V HiRef =2.912190 V
RiseTime =7.437264 RiseTime =141.967975
FallTime =7.438606 FallTime =9.800559
Example 2
ch0 ch1
Vpp=4.612737 Vpp=4.580664
Vtop=3.295510 Vtop=3.311956
Vbase=-0.001876 Vbase=-0.016182
VHigh=3.295510 VHigh=3.311956
VLow=-0.001876 VLow=-0.016182
VMax=3.971204 VMax=3.965059
VMin=-0.641533 VMin=-0.615605
LowRef =0.327863 V LowRef =0.316632 V
MidRef =1.646817 V MidRef =1.647887 V
HiRef =2.965772 V HiRef =2.979142 V
RiseTime =7.605729 RiseTime =7.745428
FallTime =7.536156 FallTime =7.708538
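For reference, the posted numbers are consistent with each reference level being computed as Vbase plus the corresponding fraction (10%, 50%, 90%) of the Vtop-to-Vbase amplitude. A small Python sketch (illustration only, using the Example 1 ch0 values) reproduces the posted LowRef/MidRef/HiRef:

```python
def reference_levels(v_top, v_base, fractions=(0.10, 0.50, 0.90)):
    """Reference voltages at the given fractions of the Vbase-to-Vtop amplitude."""
    amplitude = v_top - v_base
    return [v_base + f * amplitude for f in fractions]

# Example 1, ch0: Vtop = 3.304177 V, Vbase = -0.003679 V
low, mid, high = reference_levels(3.304177, -0.003679)
print(round(low, 6), round(mid, 6), round(high, 6))
# 0.327107 1.650249 2.973391  (matches the posted ch0 LowRef/MidRef/HiRef)
```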
Message Edited by kkwong on 04-16-2009 03:56 AM
Solved!
Go to Solution.
Hi all,
I think I messed up the calculation. The reference voltages are correct: it is (Vtop+Vbase)/2 for the mid reference level, and the low and high references are also correct. -
How can I get a low resolution audio file?
Hello to all. I have recently upgraded my iTunes to 11.0.03. I remember being able to create a new audio file with specifications offered by a menu in iTunes. My present requirement is to make a low-quality MP3 file from an existing higher-quality file, and I find that I am unable to do it with this new version of iTunes, which only creates a new version of like quality and does not offer me options for selecting the quality of the file I need to make. Any help in this matter will be deeply appreciated.
Thanks Jim for the leads. I did go into Preferences > Import Settings and set it to the lowest offered quality. When I then selected the file and clicked, I was not given the option of converting the file, only of creating a new version.
When I had earlier clicked this option, it had only resulted in a same-sized file being replicated.
Anyway, I proceeded and clicked create MP3 version and, 'lo and behold' it created one according to the import settings that had been made!
Thanks Jim, you have solved my problem. -
Two mp4 videos on skydrive - low res mp4 video plays but higher res only plays audio
I have uploaded two MP4 video files to Windows Live SkyDrive. When I browse to them in Firefox (10.0), the lower-res one plays perfectly, but the higher-res one plays only audio; the video remains blank.
'''this file plays video and audio correctly :'''
File name: P1000505_x264
Dimensions: 480 x 320
Duration: 0:43
Bitrate: 774 Kbps
Size: 4,565 KB
Type: MP4 File
'''this higher resolution file plays audio but no video ''':
File name: P1000805_x264
Dimensions: 960 x 640
Duration: 1:10
Bitrate: 1 Mbps
Size: 13,573 KB
Type: MP4 File
I have not got QuickTime on my desktop PC, and I have not loaded any plugins for QuickTime, so why does one play and not the other? And how can I get the other to play, please?
(I have updated my graphics driver.)
Are forums any good? Whenever I post I always end up having to answer my own question eventually! Ah well, anyway, for the benefit of anyone interested: I suspect it is the Flash player in the Firefox browser that is playing the MP4s, and perhaps it cannot cope with the higher-resolution format.
-
I am retrieving high and low limits from step results in VB code that looks something like this:
' (This occurs while processing a UIMsg_Trace event)
Set step = context.Sequence.GetStep(previousStepIndex, context.StepGroup)
'(etc.)
' Get step limits for results
Set oStepProperty = step.AsPropertyObject
If oStepProperty.Exists("limits", 0&) Then
dblLimitHigh = step.limits.high
dblLimitLow = step.limits.low
'(etc.)
So far, so good. I can see these results in
VB debug mode.
Immediately after this is where I try to put the limits into the results list:
'Add Limits to results
call mCurrentExecution.AddExtraResult("Step.Limits.High", "UpperLimit")
call mCurrentExecution.AddExtraResult("Step.Limits.Low", "LowerLimit")
(No apparent errors here while executing)
But in another section of code when I try to extract the limits, I get some of the results, but I do not get any limits results.
That section of code occurs while processing a UIMsg_EndExecution event and looks something like this:
(misc declarations)
'Get the size of the ResultList array
Call oResultList.GetDimensions("", 0, sDummy, sDummy, iElements, eType)
'Step through the ResultList array
For iItem = 0 To iElements - 1
Dim oResult As PropertyObject
Set oResult = oResultList.GetPropertyObject("[" & CStr(iItem) & "]", 0)
sMsg = "StepName = " & oResult.GetValString("TS.StepName", 0) & _
", Status = " & oResult.GetValString("Status", 0)
If oResult.Exists("limits", 0&) Then
Debug.Print "HighLimit: " & CStr(oResult.GetValNumber("Step.Limits.High", 0))
Debug.Print "LowLimit: " & CStr(oResult.GetValNumber("Step.Limits.Low", 0))
End If
'(handle the results)
Next iItem
I can get the step name, I can get the status, but I can't get the limits. The "if" statement above which checks for "limits" never becomes true because, apparently, the limit results never made it into the results array.
So, my question again is how can I pass the low and high limit results to the results list, and how can I retrieve the same from the results list?
Thanks,
Griff

Griff,
Hmmmm...
I use this feature all the time and it works for me. The only real
difference between the code you posted and what I do is that I don't
retrieve a property object for each TestStand object, instead I pass the
entire sequence context (of the process model) then retrieve a property
object for the entire sequence context and use the full TestStand object
path to reference sub-properties. For example, to access a step's
ResultList property called "foo" I would use the path:
"Locals.ResultList[0].TS.SequenceCall.ResultList[].Foo"
My guess is the problem has something to do with the object from which
you're retrieving the property object and/or the path used to obtain
sub-properties from the object. You should be able to break-point in the
TestStand sequence editor immediately after the test step in question
executes, then see the extra results in the step's ResultList using the
context viewer.
For example, see the attached sequence file. The first step adds the extra
result "Step.Limits" as "Limits", the second step is a Numeric Limit (which
will have the step property of "Limits") test and the third step pops up a
dialog if the Limits property is found in the Numeric Limit test's
ResultList. In the Sequence Editor, try executing with the first step
enabled, then again with the first step skipped and a breakpoint on the third
step. Use the context viewer to observe where the Limits property is added.
That might help you narrow in on how to specify the property path to
retrieve the value.
If in your code, you see the extra results in the context viewer, then the
problem lies in how you're trying to retrieve the property. If the extra
results aren't there, then something is wrong in how you're specifying them,
most likely a problem with the AddExtraResult call itself.
One other thing to check... it's hard to tell from the code you posted... but
make sure you're calling AddExtraResult on the correct execution object and
that you're calling AddExtraResult ~before~ executing the step you want the
result to show up for. Another programmer here made the mistake of assuming
he could call AddExtraResult ~after~ the step executed and TestStand would
"back fill" previously executed steps. That's not the case. Also, another
mistake he made was expecting the extra results to appear for steps that did
not contain the original step properties. For example, a string comparison
step doesn't have a "Step.Limits.High" property, so if this property is
called out explicitly in AddExtraResult, then the extra result won't appear
in the string comparison's ResultList entry. That's why you should simply
specify "Step.Limits" to AddExtraResult so the Limits container (whose
contents vary depending on the step type) will get copied to the ResultList
regardless of the step type.
I call AddExtraResult at the beginning of my process model, not in a UI
message handler, so there may be some gotcha from calling it that way. If
all else fails, try adding the AddExtraResult near the beginning of your
process model and see if the extra results appear in each step's ResultList.
Good luck,
Bob Rafuse
Etec Inc.
[Attachment DebugExtraResults.seq, see below]
Attachments:
DebugExtraResults.seq 20 KB -
How can I change my Camera resolution from very high to lower for email use.
How can I change my iPhone 4 (8 GB) camera resolution from very high to lower for emailing? It is now 26-- x 19-- approx.
Are both of your Apple routers configured for an extended or roaming network?
-
Random 3 note tone, low high mid tone sound in succession.
Hi All,
Occasionally I get a three-tone sound from my MBP. It sounds like something you would hear at a station: a low, high, mid tone sequence in succession. As it is too random, it is hard to capture using opensnoop and filemon.
Thanks, Dave.
If you are running Skype, it could be alerts that contacts are available. You can turn them off in Skype > Preferences.
-
How can I extract the high and low parts of a string that represents a 64-bit decimal number?
I want to extract the high and low parts so I can interpret the number and convert it to binary code, but with such a huge number (represented by a string) it isn't easy to extract the high and low parts directly from the string.
LabVIEW can't handle a 64-bit integer. You will have to store it as two 32-bit integers. If you need exact math on those 64-bit integers you will have to write your own routines to handle the carries and whatnot. If you just need pretty good accuracy, convert both 32-bit integers to doubles, multiply the upper 32-bit number by 2^32 (also a double), and then add the lower 32-bit number to it.
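The arithmetic suggested above can be sketched outside LabVIEW as well. In Python (used here purely to illustrate; the function names are my own), the exact and double-precision approaches look like this:

```python
def combine_exact(high, low):
    """Exact 64-bit value from two unsigned 32-bit halves."""
    return (high << 32) | low

def combine_double(high, low):
    """Approximate value as a double: high * 2^32 + low.

    Loses precision once the result exceeds 2^53, which is the
    "pretty good accuracy" trade-off described above.
    """
    return float(high) * 2.0**32 + float(low)

value = combine_exact(0x12345678, 0x9ABCDEF0)
print(hex(value))  # 0x123456789abcdef0
```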
Good luck. -
How to view records from highest to lowest using a SQL statement
Dear all
I have a table with a RATING column:
50
76
55
60
80
90
95
100
20
30
32
34
I want to view these records from highest to lowest, and only 5 records should appear in SQL.
Like this :
100
95
90
80
76
thank you very much

select rating
from
  (select rating
   from table_name
   order by rating desc)
where rownum <= 5;
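For anyone checking the expected result, the same top-5 selection can be sanity-checked in Python (illustration only, using the ratings posted above):

```python
# Ratings from the question, in their original order
ratings = [50, 76, 55, 60, 80, 90, 95, 100, 20, 30, 32, 34]

# Sort descending and keep the first five, like ORDER BY ... DESC with ROWNUM <= 5
top_five = sorted(ratings, reverse=True)[:5]
print(top_five)  # [100, 95, 90, 80, 76]
```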
Message was edited by: jeneesh (misread the requirement first) -
How can I have uniform volume of audio for my vocal in FCPX?
Hello to all!
I have sung a song in GB and have imported the same into FCPX:
I had sung the first stanza at a higher volume than the second stanza, as you may discern from the screenshot above.
Can anyone tell me how I can get the audio volume of both the first and second stanzas to be of the same level?
Thanks in advance!
Dr. Somanna

Thanks Karsten for your reply.
I shall try to get in a clean audio from GB as you suggested.
Let me explain my need by this screenshot:
Arrow 1 is pointing to the audio file which was shown in the screenshot in my opening post. Arrow 2 is the same audio file, which I edited in FCPX; the effects I added to the file can be seen in the inspector window. These effects have been able to decrease the audio levels of both the first and second stanzas. Also, FYI, I was watching the audio meter peaks while the song was playing and used the volume control parameter in the inspector window to bring down the volume wherever required (allowing them to hover between -6 and 0 dB).
What I would like further directions on is this: how can I get a uniform audio volume throughout the clip using the various controls available in FCPX? My problem is that even within the first stanza above, I sang the opening verses at a higher level than the latter part of the stanza! I have watched my music composer edit my completed vocal in his editing software: he had some effect by which he was able to make the entire audio clip have the same volume throughout... highs were brought down and lows were brought up. I would like to know how to achieve such an effect in FCPX.
Regards and take care. -
Alignment of external MIDI events and audio
Hi,
I've been doing some careful measurements of the timing of MIDI events under Logic 8.0.2. I record a lot of external MIDI hardware, so the timing of external MIDI events and its relation to recorded audio is very important to me.
My setup is OS X 10.5.5, Logic 8.0.2., all relevant patches and updates. I'm using an RME fireface 800 as both the audio and MIDI interface in these examples.
I've created a project in which there is no plug-in of any kind: neither instrument nor effect. Even the Klopfgeist is gone.
My preferences settings:
Audio/Devices:
buffer=1024 samples
sw monitoring=off
indep mon level=off
I/O safety buffer=off
Audio/General:
PDC=Off (I also measured with PDC=all, to verify that it has no effect)
low latency mode=off
I create an external MIDI track of quantized whole notes: C3, playing on the first tick of each bar. BPM is 120.
I then create another external MIDI track, cable MIDI OUT to MIDI IN on the Fireface, and record the first MIDI track onto the second (i.e., loopback test). The note data is right on the money; within one tick of the original values. I set PDC to all, repeat the test, same result.
I then cable MIDI out on the Fireface to a hardware synth, and call up a patch with a sharp attack. I route the audio output of the synth to an audio input of the Fireface. I create an audio track with this input selected, and record the output of the synth. The recorded audio is early by 20ms or so -- roughly equivalent to the buffer size.
I repeat the test with PDC set to All, same result. PDC has no effect, as you would expect, as there are no plug-ins in this project.
I then set Preferences/MIDI/Sync/All MIDI Output delay to 21ms. I repeat the audio recording test, and now the recorded events are on the money.
Here are my questions. First, am I doing something wrong, or does Logic have a fundamental bug concerning the linkage between MIDI timing and audio timing?
It seems clear to me that this timing error has nothing to do with PDC. There is no "P". Right?
I thought that the RME has timestamped MIDI. The RME manual speaks of very accurate MIDI timing, and I can verify that the MIDI jitter of the Fireface is indeed low, in the 1ms neighborhood. This is borne out by the MIDI loopback test I did above. But MIDI timestamps don't mean anything if they are not correlated to the time code of the audio, haha! What am I missing here?
If timestamping the MIDI data can't work for some reason, could this problem not be solved by having Logic automatically delay the transmission of external MIDI data by an amount equal to the audio buffer size? Put differently, my solution to this problem is to set a MIDI output delay in preferences equal to the audio buffer size, i.e., 21.3ms. And this seems to work. Is that a proper solution, and if so, why does Logic not do such a thing automatically?
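As a sanity check on the 21.3 ms figure mentioned above: one buffer's worth of delay is simply the buffer size divided by the sample rate (assuming a 48 kHz project here, since 1024 / 48000 is about 21.3 ms):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Latency contributed by one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# 1024-sample buffer at 48 kHz, as in the setup described above
print(round(buffer_latency_ms(1024, 48000), 1))  # 21.3
```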
Thanks,
-Luddy

OK, so I just retested, and here is the best part... I recorded a loop from my drum machine, cut the first note nicely to the 1 on the first bar, using a count-in etc.
I played it back and looped outs 3-4 back into 1-2 on my Ensemble. I also played back the same kick on my drum machine just to check this reverse latency issue. I was not concerned with hit for hit layering accuracy. What I got back astounds me!
The "loopback" recording was spot on to the original being played out... totally locked. BUT, the one that was MIDI-clocked by Logic and played from my drum machine came in ahead of the 1??? How is this possible, when while monitoring this recording I hear it slightly behind? RIDICULOUS! What does this mean?
Logic is NOT recording what I am hearing, but rather recording some timewarped version of it in which it places events ahead of when they happened, but ONLY on midi clocked or midi sent notes!
I have lurked here a long time and I know that there is not much known here about this issue, so before anyone tells me the typical solution I will say as I did in my last post:
I have done a ton of tests like this, I also followed the loopback testing as described on Logicprohelp.com.
Yes I tried it with all PDC modes, SW monitoring on AND off.
I am truly beginning to think that this is a major bug of some sort.
Anyone else out there with hardware that reports latency to Logic (RME, Metric Halo, and Apogee units are ones I know do this) experiencing this aberration?
thanks -
My business specializes in audio and music apps for the Windows ecosystem. For this new project that I’m considering (a virtual instrument of sorts), I need to achieve the lowest possible audio latency from capture to render.
A measured latency below 3 ms would be ideal, 20 ms would be ok, and anything above 30 ms would be a deal breaker. And just to be precise about my definition of latency, I’m strictly talking about “mic-to-speaker” latency (includes the processing in-between).
I’m not talking about “glass-to-speaker” latency, although this topic is also of interest to me.
I figured that I should be using WASAPI, as this is apparently the API that sits at the bottom of the audio stack in WinRT / WinPRT. So I spent a couple of days re-familiarizing myself with C++ (happy to meet an old friend)
and I coded a prototype capturing, processing and rendering audio using WASAPI having in mind the goal of achieving the lowest possible latency. After trying and comparing various methods, workflows, threading mechanisms, buffer values and mix formats, the
best I could achieve using WASAPI on Windows Phone was ~140 ms of latency. This is using 3 ms buffers. And although the audio is glitch-free, I could have been using 30 ms buffers and it wouldn’t have made the slightest difference! The latency would still
be around 140 ms. As I found out, the API/driver is extremely defensive in protecting against audio starvation, so it adds quite a bit of buffering at both ends. This is very unfortunate, because it basically disqualifies real-time audio/musical
applications.
I’d love to be able to provide quality audio/musical apps for the platform (both Windows Phone and Windows 8), but right now, this latency issue is kind of a deal breaker.
I've been pointing out the importance of low latency audio to Microsoft for quite a while, and I know I'm not the only one, and I know a lot of people at Microsoft realize this is important. But in its execution,
it seems Microsoft constantly fails to deliver a truly low-latency audio stack. In the pre-XP days, I've had talks with the sysaudio devs about this, and I was told, "yeah, we're working on a new architecture that will come out after XP and
it will solve the latency problem for all audio and musical applications." Fast forward to mid-2010 (pre-Windows Phone 7), and I was still there pointing out the horrible latency figures one would get from the APIs that were about to ship. And
now that WASAPI is available on WP8 (our best hope yet for low-latency audio), I discover the overly defensive and buffer-happy architecture of WASAPI (even though one of its promises was precisely low-latency audio).
So, the question is…
Is Microsoft aware of this issue? If so, is Microsoft giving up and simply conceding the pro-audio territory to iOS? If not, I’d be glad to discuss this issue with an engineer at Microsoft. I’m serious about bringing audio apps
to the platform. I just need some assurance that Microsoft is taking action on its end, so that I can sync my development with the next product cycle.
Thanks in advance for any help, advice, or insights!
/Antoine

Any update on this issue? Is there a roadmap for a fix?
I ported a realtime sound analyzing application (requiring both audio input and output) to WinRT and was quite surprised at the high latency from WASAPI. I then searched around and posted some questions regarding this on various MS forums and got to the
following conclusions based on feedback from people including MVPs and MS employees:
A lot of those people are under the impression that WASAPI qualifies as a low-latency API.
Apparently WinRT apps are not supposed to have high CPU usage, and this is purposefully baked into the framework (minimal thread priority control, async/await everywhere which results in thread priority inversion, and other issues).
Why would anyone ever need high-priority threads? They must use a lot of CPU, right?
A lot of people think low-latency audio means high CPU usage. You can see where this is going when you look at the previous point.
Async/await is being forced down to all levels, even though it should only be used at the UI level. What some people are now calling "old-school" multithreading (lock, etc.) is being pushed out. Async/await has horrible overhead and results in thread-priority inversion, among other issues. For example, witness that there is no StorageFile.Read, just StorageFile.ReadAsync. Do some I/O benchmarks with some of these async methods and you will see horrible performance compared to desktop file I/O.
To get an understanding of what low latency audio means and why it is important, see this video. It compares the latency of Android to iOS using the exact same music app. Ever wondered
why there are no quality music apps for Android? Well now you know. And then realize that WinRT has
twice as much latency as Android.
And if anyone thinks this is a niche use case, consider that Apple created ads showcasing "musicians" playing their iPad "instruments", a use case that is essentially unavailable for WinRT apps. Why would Apple create ads for a niche use case?
This should have been one of the high priority issues solved from the start in WinRT. MS solved this issue in Windows (desktop) long ago, with the ability to get insanely low latency there (0.3 ms in some stress tests,
see here), even beating out OSX. It is as if there is a new generation of architects at MS that know nothing about this previous work and are doomed to make the same mistakes over again. I really don't
understand why these pre-existing APIs can't be exposed in WinRT. No need to re-invent the wheel. But I guess it just isn't important enough.
I find this situation really sad since otherwise it could be a great and powerful platform. -
Low output audio level MBP Retina, 15-inch, Late 2013
All of a sudden I am getting very low-level audio output from my speakers, even when the volume setting is high. Any suggestions, please?
1. System Preferences > Sound > Output > Internal Speakers/Headphone
Settings for the selected device:
Balance:
Make sure that the slider is set at middle.
2. Audio MIDI Setup
Applications/Utilities/Audio MIDI Setup.app
Audio Devices window
Side Bar
Click the Built-in Output.
Make sure that CH1 and CH2 volume sliders are matched.
3. Reset SMC and PRAM.
Reset PRAM. http://support.apple.com/kb/PH4405
Reset SMC. http://support.apple.com/kb/HT3964
Choose the method for:
"Resetting SMC on portables with a battery you should not remove on your own". -
Make USB-6001 digital output always high or low in C
Hello all,
I am new to the NI DAQ interface. I have an USB-6001 and I am trying to use this device to control some logic circuit in C. So what I want to do is:
* set some digital output lines to high or low levels, and change their status when needed (in C).
I have tested the device in NI MAX -> Test Panels and found that the device is able to do this. Then I tried to do it in C. I have checked the examples, and the function I should use is the one called "DAQmxWriteDigitalU32". I have trouble understanding its input parameters. I have tried something based on my own understanding, but it does not work as I expected. Here is one test I did:
uInt32 data = 1;   /* bit mask; bit 0 maps to the first (only) line in the channel */
int32 written;
TaskHandle taskHandle = 0;
/* Drive Dev1/port0/line7 high */
DAQmxErrChk (DAQmxCreateTask("", &taskHandle));
DAQmxErrChk (DAQmxCreateDOChan(taskHandle, "Dev1/port0/line7", "", DAQmx_Val_ChanForAllLines));
DAQmxErrChk (DAQmxStartTask(taskHandle));
DAQmxErrChk (DAQmxWriteDigitalU32(taskHandle, 1, 1, 10.0, DAQmx_Val_GroupByChannel, &data, &written, NULL));
DAQmxStopTask(taskHandle);   /* note: my original snippet skipped stop/clear and leaked this task */
DAQmxClearTask(taskHandle);
taskHandle = 0;
/* Drive Dev1/port0/line0 high */
DAQmxErrChk (DAQmxCreateTask("", &taskHandle));
DAQmxErrChk (DAQmxCreateDOChan(taskHandle, "Dev1/port0/line0", "", DAQmx_Val_ChanForAllLines));
DAQmxErrChk (DAQmxStartTask(taskHandle));
DAQmxErrChk (DAQmxWriteDigitalU32(taskHandle, 1, 1, 10.0, DAQmx_Val_GroupByChannel, &data, &written, NULL));
I want to simply set "Dev1/port0/line7" and "Dev1/port0/line0" to a high level, but only "Dev1/port0/line0" responds. The second parameter of DAQmxWriteDigitalU32 corresponds to numSampsPerChan. If I replace it (currently 1) with a larger value, e.g. 100, I can see that "Dev1/port0/line7" sends out a train of 1s, then returns to 0. So I guess the problem is just that I do not understand all the parameters of DAQmxWriteDigitalU32 well. Can anyone please tell me how I can set a digital output line to 1 or 0?
Thanks!
Hongkun
Solved!
Go to Solution.
Hello,
Here is a link explaining the inputs of the function:
http://zone.ni.com/reference/en-XX/help/370471W-01/daqmxcfunc/daqmxwritedigitalu32/
Also here you can find a lot of examples in ANSI C
http://www.ni.com/example/6999/en/
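One detail from that documentation worth spelling out: the data argument of DAQmxWriteDigitalU32 is a bit mask, with bit i mapping to line i of the lines in the channel. A small Python helper (the function name is my own, purely to illustrate the mask arithmetic) shows how a port-wide pattern is built:

```python
def port_mask(lines_high):
    """Build the U32 bit mask for a digital write: bit i set for each line i in lines_high."""
    mask = 0
    for line in lines_high:
        mask |= 1 << line
    return mask

# Lines 0 and 7 high on a port -> binary 1000_0001
print(port_mask([0, 7]))  # 129
```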
Randy @Rscd27@ -
Just bought Photoshop Elements 13. I'm trying to make a slideshow but can't figure out how to alter the duration each slide stays on the screen; they presently move from one to another way too quickly. I also need a different pan-and-zoom option. Where are all the options I had in PSE 10? Also, can I burn this to a DVD?
The changes have brought improvements but also drawbacks compared with the old slideshow editor.
The templates are now fairly fixed, but I find the "Classic Dark" one gives reasonable results with some panning, and you can click the audio button and browse your PC for any track. Unfortunately there are only three speed choices, linked to the music track. The improvement for most people is that you can now export to your hard drive at 720p or 1080p and upload to sites like YouTube and Vimeo.