Low 'levels' in Logic meters???

Why are the readings on the channel meters so low in Logic?
The meters in my Fireface TotalMix are much higher than in Logic... why is that? Any way to fix this?

Same problem with a MOTU 828. Try this: run a steady-state signal into your box (a test-tone oscillator is the best tool for tests like this). By this I mean a sustained note (C6 on a Roland, say) from a keyboard organ patch with no modulation. Make sure the input sensitivity on your box matches the keyboard's output as closely as possible. Latch the key so it sustains (sustain pedal) and set the input meter on your box to read "0" (just touching the red). Switch to the output on the box and trim so that you get the same reading (near "0"). Oh, and turn your speakers OFF whilst doing these tests. Next, send this signal to Logic and assign it to a track. Make sure pre-fader metering is on. Whatever signal is displayed on the track meter is a PEAK reading, not RMS!
All my boxes show similar readings to yours. Consider the gap in Logic's track meter as your headroom.
A steady-state tone is not a real-world signal, so when recording percussion (for example) you will have to back off the input trim on your hardware box.
Make sure your MONITORING setup is POWERFUL enough to be used for all stages of the recording process. Bookshelf and powered monitors might not be "loud" enough. DJR
G4 450DP   Mac OS X (10.4.4)   digidesign, motu, apogee
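The peak-vs-RMS point above is easy to verify numerically. Here is a small illustrative Java sketch (nothing to do with Logic's actual metering code; the class and method names are my own): a steady sine that just touches 0 dBFS on a peak meter reads about -3 dBFS on an RMS meter, which is part of why two meters can disagree about the same signal.

```java
// Peak vs. RMS of the same steady-state tone (illustrative sketch only)
public class PeakVsRms {
    // Convert a linear amplitude (1.0 = full scale) to dBFS
    static double toDb(double linear) { return 20.0 * Math.log10(linear); }

    static double peak(double[] x) {
        double p = 0.0;
        for (double s : x) p = Math.max(p, Math.abs(s));
        return p;
    }

    static double rms(double[] x) {
        double sum = 0.0;
        for (double s : x) sum += s * s;
        return Math.sqrt(sum / x.length);
    }

    public static void main(String[] args) {
        // One second of a full-scale 440 Hz sine at 44.1 kHz
        int n = 44100;
        double[] sine = new double[n];
        for (int i = 0; i < n; i++) sine[i] = Math.sin(2 * Math.PI * 440.0 * i / n);
        // Peak reads ~0 dBFS; RMS reads ~-3 dBFS for the very same tone
        System.out.printf("peak: %.2f dBFS, RMS: %.2f dBFS%n",
                toDb(peak(sine)), toDb(rms(sine)));
    }
}
```

Real programme material has a much bigger peak-to-RMS gap than a sine, which is why Logic's peak-reading track meters look "low" next to a hardware RMS or VU-style meter.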

Similar Messages

  • Setup help - low signal level in Logic

    I'm getting low input signal levels when recording in Logic (-17dB to about -10dB, on average), and am hoping someone here can point out an obvious oversight on my part.
    I'm recording mic'd guitars and vocals (one track at a time), with the mic(s) running into a Mackie 1402 mixer via XLR. Signals are strong and fine when I solo on the Mackie.
    The signal runs out of the Mackie via the Alt 3-4 bus into the first and second 1/4" inputs on the back of a Motu 828 mkII, then into Logic.
    Even if I drive the signal to near clipping on the Mackie, I get these low levels when recording in Logic. This is true whether I'm using a condenser mic with phantom power for vocals, or a dynamic mic on a guitar amp. I've also tried switching XLR mic cables.
    I'm running Logic Pro 7.2.3 and OS X (10.4.9) on a G5 2.5 DP.
    Is my signal path in need of some tweaking, or perhaps it's a setting I'm overlooking on the Motu 828?
    Any help greatly appreciated to get this rookie back on track!
    G5 2.5 DP   Mac OS X (10.4.9)  

    Hi,
    You wrote:
    I'm recording mic'd guitars and vocals (one track at
    a time), with the mic(s)running into a Mackie 1402
    mixer via XLR. Signals are strong and fine when I
    solo on the Mackie.
    The solo on the Mackie is probably louder. Check to see whether you have it set to AFL (after-fader listen) or PFL (pre-fader listen, sometimes labelled "level set").
    The signal runs out of the Mackie via the Alt 3-4 bus
    into the first and second 1/4" inputs on the back of
    a Motu 828 mkII, then into Logic.
    The first thing to check is what kind of cables you are running into the MOTU mkII:
    A. Unbalanced (i.e. "guitar-type", with two wires)
    B. Balanced (i.e. "stereo-type", with three wires)
    The second thing to check is how the inputs on your MOTU are set; there are two choices: +4dBu ("pro") or -10dBV ("consumer").
    Here's the catch:
    You MUST MATCH the correct cables to the correct levels. If the Mackie is outputting -10dBV but your MOTU thinks it is getting +4dBu, you will see less level no matter how loud you make the Mackie outputs.
    The Mackie AUTOMATICALLY outputs either -10dBV or +4dBu depending on which cable is hooked up to it.
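The size of that mismatch is easy to put a number on. A rough Java sketch using the standard reference voltages (0.775 V for dBu, 1 V for dBV; the class and method names here are mine, for illustration only): nominal +4 dBu is about 1.23 V while nominal -10 dBV is about 0.32 V, so a -10 dBV source feeding an input calibrated for +4 dBu meters roughly 12 dB low.

```java
// dBu vs. dBV nominal levels (illustrative sketch; reference voltages are the standard ones)
public class LevelRefs {
    // dBu is referenced to 0.775 V RMS, dBV to 1.0 V RMS
    static double dbuToVolts(double dbu) { return 0.775 * Math.pow(10.0, dbu / 20.0); }
    static double dbvToVolts(double dbv) { return 1.0 * Math.pow(10.0, dbv / 20.0); }

    public static void main(String[] args) {
        double pro = dbuToVolts(4.0);        // "pro" nominal level, ~1.23 V
        double consumer = dbvToVolts(-10.0); // "consumer" nominal level, ~0.32 V
        double gapDb = 20.0 * Math.log10(pro / consumer);
        System.out.printf("+4 dBu = %.3f V, -10 dBV = %.3f V, gap = %.1f dB%n",
                pro, consumer, gapDb);
    }
}
```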
    Even if I drive the signal to near clipping on the
    Mackie, I get these low levels when recording in
    Logic. This is true whether I'm using a condenser mic
    with phantom power for vocals, or a dynamic mic on a
    guitar amp. I've also tried swithcing XLR mic cables.
    This is not where the problem lies. It lies between the output of the Mackie and the input to Logic.
    I'm running Logic Pro 7.2.3 and OS X (10.4.9) on a G5
    2.5 DP.
    Is my signal path in need of some tweaking, or
    perhaps it's a setting I'm overlooking on the Motu
    828?
    Yes, a setting somewhere in the MOTU is not set correctly. See above for how to check that.
    Any help greatly appreciated to get this rookie back
    on track!
    G5 2.5 DP
      Mac OS X (10.4.9)  
    Cheers

  • Levels in Logic

    There has been a lot of discussion on whether or not it is important to try and keep channel levels from clipping (with pre-fader metering) in a 32-bit float app like Logic, as long as the output is not clipping. I took the position that it still is, and some disagreed.
    The following is my exchange with Paul Frindle, who designed the Sony Oxford series of plug-ins and is an acknowledged expert in digital audio.
    I apologize for the length but I think it is important.
    PAUL WROTE:
    There are 2 reasons to record, process and master at less than flat out dBFS:
    1. To avoid hidden overs not shown on metering due to the reconstruction of the signal. This occurs mostly in D/As (variably, depending on how they are designed) and to some extent within some plug-ins and digital processes. Errors from this range from limiting all the way to loud 'splats', as values may fold over completely.
    A safe margin for normal programme to avoid most of this is -3dBFS. Although some artificially maximised programme can create more than this and some test material can create 6dB of over - so real safety is only gained at around -6dBFS.
    If you have a reconstruction meter you can monitor this effect yourself (for the mix output) and compensate manually, or if you have a suitably equipped limiting app (i.e. Oxford Limiter) you can correct these errors automatically. This does not help much with stuff within the channels of the mix itself - so prudence is still advisable.
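A minimal numerical illustration of these hidden overs (my own Java sketch, not from any product): sample a full-scale sine at exactly a quarter of the sample rate, offset 45 degrees, and every sample lands at about 0.707, so a sample-peak meter reads -3 dBFS. Flip it around - scale those same samples up until the meter reads 0 dBFS - and the reconstructed analogue waveform actually peaks near +3 dBFS, exactly the kind of over a plain peak meter never shows.

```java
// Intersample peaks: samples can sit well below the true reconstructed peak
public class IntersamplePeak {
    static double samplePeakDb(double[] x) {
        double p = 0.0;
        for (double s : x) p = Math.max(p, Math.abs(s));
        return 20.0 * Math.log10(p);
    }

    public static void main(String[] args) {
        // A 0 dBFS sine at exactly fs/4, sampled 45 degrees away from its peaks
        int n = 64;
        double[] x = new double[n];
        for (int i = 0; i < n; i++) x[i] = Math.sin(Math.PI / 2.0 * i + Math.PI / 4.0);
        // Every sample is +/-0.707, so the meter shows about -3 dBFS
        // even though the underlying analogue waveform peaks at 0 dBFS
        System.out.printf("sample peak: %.2f dBFS%n", samplePeakDb(x));
    }
}
```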
    2. To create headroom, using lower levels within your mix allows you to avoid clipping signals every time you do anything - it frees you up to concentrate on sound rather than red lights and radically eases the mixing process. Some plugs may actually sound better because internal overs may be avoided.
    To do this you need to reduce levels to something sensible first thing in the playback channel - process at lower levels - end up with a mix at less than flat out - then make up the level at the very end of the mix. It sounds like you are doing this already. But please note this is NOT to avoid math overs in the PT summing buss - as these are catered for already in the PT mixer.
    Ok - you talk of making mixes that are suitably modified by the output limiter? Yes, this is common practice and in fact mixing with the limiter in place is a really good idea as you instinctively adjust the mix for the best final sound. But these days we have to watch it as the industry is obsessed with loudness at the expense of absolutely everything else - we produce 2 dimensional programme that has no dynamic range. Therefore it isn't possible for you to create real dynamics in this current environment (if you want to stay in business) - instead you are limited to trying to create the impression of dynamics from the extra artefacts and distortions the limiter generates.
    It was with this in mind that I designed the Oxford Limiter - basically to create the impression of dynamic range when in fact there was none - and do it in a way that sounded as natural as possible. You can use this effect either to produce stuff that is loud as ever but sounds less artificial - or you can use it to produce stuff that is as bad as before but is even louder.
    I hope this is helpful.
    I THEN ASKED: Paul, would you say this advice holds true for a 32-bit float app like Logic as well as fixed-point apps like PT?
    PAUL RESPONDED:
    Yes I would.
    The intersample peaking reconstruction problem is the same at the output of the mix, as it must be represented in a fixed point output format anyway (i.e. CD or DVD).
    Whilst with float it's possible to accommodate internally numbers bigger than flat out, any process that needs to refer to actual real values might be at risk of overload (or unspecified behaviour). Why take the risk?
    From the point of the headroom issue, things might be different in that an entirely float system from start to finish might handle overs properly - however the meters will be calibrated to a fixed point reference (and will come on willy nilly, whether the signal is clipped or not). Some systems using expansion DSP pass and process signals in fixed point (PowerCore being one example) and may not have any of the float headroom and may mess up with overs.
    Again, with 140dB or so dynamic real range at your disposal, why bother to risk it?
    The most important thing to remember is that recording and processing at lower levels DOES NOT waste 'bits'. It doesn't work like that - all your 'bits' are there all the time at all levels
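That last point can be demonstrated directly in floating point (a hypothetical Java sketch, assuming the mixer works in standard IEEE floats as a "32f" engine does): attenuating by a power of two only changes the exponent, so pulling a signal down by about 120 dB and making the level back up afterwards returns the original samples exactly - no 'bits' are lost. (Arbitrary, non-power-of-two gains do add rounding, but at roughly -150 dB it is far below anything audible.)

```java
// Lower levels in float do not discard resolution: power-of-two gain is exact
public class NoBitsLost {
    public static void main(String[] args) {
        java.util.Random r = new java.util.Random(1);
        float down = (float) Math.pow(2.0, -20); // about -120 dB of attenuation
        float up = (float) Math.pow(2.0, 20);
        float worst = 0.0f;
        for (int i = 0; i < 10000; i++) {
            float sample = r.nextFloat() * 2.0f - 1.0f;
            float roundTrip = (sample * down) * up; // attenuate, then restore the level
            worst = Math.max(worst, Math.abs(roundTrip - sample));
        }
        // The round trip is bit-exact: the mantissa (the "bits") is untouched
        System.out.println("worst round-trip error: " + worst);
    }
}
```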

    Thanks for that Jay.
    Paul certainly knows his stuff - interestingly, I have been trying out the Oxford AU's of late, and reading the manuals - they are pretty dry, but an excellent source of technical material on this stuff.
    Everything he says in your post I agree with, but it doesn't really have anything to do with mixing with high channel levels. Let me take a few points:-
    To avoid hidden overs not shown on metering due to the
    reconstruction of the signal.
    This is absolutely true. In fact, it is possible to clip a D/A (i.e. go over 0dBFS) from sample values that are as low as 70% of 0dBFS. If people don't know about reconstructed waveforms, go and have a read of the Oxford Limiter manual - you can download it from oxfordplugins.com - which explains this.
    However, this applies from the master output onwards. As long as your master output is not clipping, including the reconstructed waveform, then this has nothing at all to do with mixing at high channel levels. So we can happily ignore this.
    Next:-
    2. To create headroom, using lower levels within your mix allows
    you to avoid clipping signals every time you do anything - it frees
    you up to concentrate on sound rather than red lights and radically
    eases the mixing process. Some plugs may actually sound better
    because internal overs may be avoided.
    A few points to make here. When tracking, it absolutely makes sense to track at lower levels and not try to reach 0dBFS, for exactly the reasons noted above. Obviously if you clip your recording at the A/D stage, your recording will always have distortion regardless of what happens to the audio from that point on - it's "burned in" to the recording.
    When mixing, the maths makes sure we are not clipping signals even if we add +500dB of gain. In effect, in a well-designed digital mixer, we always have a ton of headroom. So no need to worry about clipping channels in normal use - it simply doesn't happen. In short, in a 32f mixer, we are always mixing at fairly low levels - one of the benefits of this is so we can mix lots of signals together.
    Now, I'm not sure of the exact figures here, but someone with a more DSP bent will doubtless chime in and state how many 24-bit 0dBFS signals you can mix together in a 32f mixer without clipping the entire signal (i.e. running out of maths headroom), but I seem to recall it's a lot - i.e. thousands - but again, don't quote me on that!
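For a feel for the numbers, here is a toy Java sketch (under the assumption that "32f" means plain IEEE single-precision summing): a million channels all pinned at 0 dBFS sum to a bus level of +120 dBFS, still absurdly far below single-precision float's ceiling of roughly +770 dBFS, and one master-gain division brings the mix straight back to full scale with nothing clipped.

```java
// How much headroom a 32-bit float mix bus really has
public class FloatSumHeadroom {
    public static void main(String[] args) {
        int n = 1_000_000;             // a million "channels", each at 0 dBFS
        float mix = 0.0f;
        for (int i = 0; i < n; i++) mix += 1.0f; // sum them all on one bus
        // +120 dBFS on the bus, vs. a float ceiling of ~+770 dBFS (Float.MAX_VALUE)
        System.out.printf("bus level: %.1f dBFS%n", 20.0 * Math.log10(mix));
        float mastered = mix / n;      // master fader pulls it back down
        System.out.println("after master trim: " + mastered); // exactly full scale again
    }
}
```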
    Whilst with float it's possible to accomodate internally numbers
    bigger than flat out, any process that has need to refer to actual
    real values might be at risk of overload (or unspecified behaviour).
    Why take the risk?
    I'm not sure exactly what he's saying here, but I take it to mean "If a system is properly designed, high internal values are no problem. But if you've got a badly designed digital mixer and/or processing, you might get problems".
    Whilst that is true, it's also true of anything else - badly designed gear is badly designed gear. Now it could be that Logic's mixer is not designed well, or some plugins may not handle 32f values properly, and if so, then it is a very real cause for concern. But for the most part, I do not believe that to be true, although I cannot back that up with empirical evidence.
    However, some null tests would probably be able to verify whether the maths is working properly, both without processing and using various plugins or plugin chains.
    From the point of the headroom issue, things might be different in
    that an entirely float system from start to finish might handle overs
    properly - however the meters will be calibrated to a fixed point
    reference (and will come on willy nilly, whether the signal is clipped
    or not).
    I'm not sure what he's saying here. Some plugins might get their metering wrong? That may be the case. I've always said that bad plugins can be a problem, but I don't believe there are that many out there. So let's leave aside the "crap plugins" issue and take that as read, and just concentrate on the straight 32f mixer as contained in Logic and other DAWs.
    Some systems using expansion DSP pass and process signals in fixed
    point (PowerCore being one example) and may not have any of the
    float headroom and may mess up with overs.
    Possibly - I think for those things, there is often some maths juggling going on to do with the DSP processors they use. I can't really speak on that, but again, it's not (as I understand it) the case in an ideally implemented 32f software mixer.
    The most important thing to remember is that recording and
    processing at lower levels DOES NOT waste 'bits'. It doesn't work
    like that - all your 'bits' are there all the time at all levels
    Also absolutely true.
    So as far as I can see - and do correct me if I'm understanding or interpreting things wrongly - there isn't really anything here that suggests that mixing in a 32f environment at high-ish levels, then turning the master fader down to bring the combined mix down below the fixed 0dBFS level for your converter/file format, is bad.
    What he's basically saying, if I'm understanding him correctly, is that there is the potential for implementation errors in these systems, that those can make things sound bad, and that by mixing at very high levels you can make those problems more likely to occur. That sounds plausible to me.
    But the maths alone quite happily supports mixing multiple signals at high levels - in fact, let's say +12dB over your individual channel's 0dBFS point would be considered high - but with the maths, that's still a really small signal; that additional gain is tiny in relation to the overall 32f mixer's headroom.
    Now, if you are mixing channels together that are clipping the mixer internally (and again, I'm not sure of the values), but let's say you were adding +1000dB to every channel and adding 150 of these, then yes, I would fully expect your signal to be degraded, as the maths would not be able to retain the integrity of the full signal. But that's a wildly over-the-top example which would never ever happen in practice.
    So, my view still holds - given my interest in this, I see no reason that a well-designed system shouldn't be behaving properly, and the variety of tests performed by people infinitely smarter than me that I've seen over the past few years seem to bear out that things by and large are performing perfectly, at least in digital mixer's summing processing.
    I'm not saying I know for sure they are, or that I have real evidence at my fingertips to back it up, and as such I always remain open to investigation and findings. But I'm pretty sure people have added a bunch of signals and bounced the result, then added the same bunch of signals at +1000dB, reduced the master fader to compensate (bringing the mix down to the exact same level as the previous test), bounced that, and then phase-inverted and compared the files, to reveal that they null out down to impossibly small values.
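That null test is simple to sketch. Here is my own toy Java version, using a power-of-two gain so the float arithmetic stays bit-exact (with an arbitrary dB boost you would see a residual down at the rounding floor instead of a perfect zero): sum some channels normally, sum them again boosted by about +240 dB with the "master fader" compensating, and subtract.

```java
// Null test: absurdly hot mix + compensating master trim vs. plain mix
public class NullTest {
    public static void main(String[] args) {
        java.util.Random r = new java.util.Random(42);
        int n = 1024, channels = 8;
        float gain = (float) Math.pow(2.0, 40); // ~ +240 dB per channel (power of two)
        float[][] ch = new float[channels][n];
        for (float[] c : ch)
            for (int i = 0; i < n; i++) c[i] = r.nextFloat() * 2.0f - 1.0f;
        float[] plain = new float[n], hot = new float[n];
        for (int i = 0; i < n; i++) {
            for (float[] c : ch) {
                plain[i] += c[i];      // mix at unity
                hot[i] += c[i] * gain; // mix absurdly hot
            }
            hot[i] /= gain;            // master fader compensates
        }
        float residual = 0.0f;
        for (int i = 0; i < n; i++)
            residual = Math.max(residual, Math.abs(plain[i] - hot[i]));
        // Phase-inverted comparison nulls completely for power-of-two gains
        System.out.println("max residual: " + residual);
    }
}
```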
    As in all of these things, there are additional complications - when people say they hear a difference, then either they are mistaken or there is something going on - but I'm not (yet) convinced that that something is the native performance of a 32f mixer.
    Phew! Lots of typing, and I've probably forgotten things, but that'll do for this post anyway - my brain's hurting!
    PS Do go and read the Oxford manuals though - they contain some of the best technical info I've come across on DSP and signal processing. Great stuff! (And the plugs aren't too shabby, neither!)
    Out.

  • Setting the record straight for setting recording levels in Logic

    Hey everyone,
    I asked a similar question a while back and got some great feedback, but I'm still a little confused about how to set a perfect recording level and work with the fader in Logic. A lot of what I've been doing has been trial and error and I would like to ask a few more specific questions:
    I record with a MOTU Traveler and have recently, for example, been recording direct with my guitar into the MOTU inputs. I am using "pre-fader metering", which is checked, and "software monitoring", which is also checked. When I am watching the fader, what should I be looking for? I always have the fader set at 0.0dB. How do I set the perfect level in this situation? Should the fader be hitting red as I record? How do I make sure I don't clip, and what should I be watching for overall? AND, is it all the same process when recording with microphones? I've heard Logic records at a low level with MOTU products, but what exactly does this mean in terms of setting levels and watching the fader as I record? I have been recording a lot lately and have some good-sounding tracks and some that are not so great, and I think it has a lot to do with the topic of this post.
    Thanks for the help and clarification.

    The fader position in Logic does nothing to affect the recording level. That is totally dependent on the gain setting on your MOTU. The Logic fader merely allows you to adjust the level at which you listen to the material you are recording. If you set the gain correctly on your MOTU then you shouldn't see the Logic audio object hit the red unless your Logic fader is set higher than 0dB. If your Logic fader is at 0 and you still see red, then your gain is set too high. Remember, digital distortion is not like analog distortion. Once you have recorded the signal and you are in playback mode, things change somewhat. The internal headroom of Logic allows individual channels to hit the red without causing audible distortion. You should definitely avoid hitting the red on your audio output objects, though.
    I haven't heard about low levels with MOTU stuff. My own 896 has three settings on the inputs - mic, line and fixed +4dB. If you have variable gain as well, then you should be able to get healthy gain. If your recording level is very low, then recording at 24bit will help a lot towards a more detailed recording, as 16 bit can get grainy if the recorded levels are low.

  • How do you Control SPDIF input levels in Logic Pro using Apogee Ensemble

    Hello,
    Question. I have an Apogee Ensemble audio interface hooked up to my Macbook Pro. I have a Motif XS8 that I record audio from through SPDIF.
    I am able to get sound through to my audio track once I set the inputs in Logic; however, the levels are not even close to halfway. I cannot figure out how to adjust the SPDIF level to make it hotter, because I know once I add virtual tracks and vocals through my preamp, the audio from my Motif will likely not be audible.
    Please let me know.
    Also, when I record an audio track it says about 15 minutes remaining to record. Does that mean I have 15 mins of audio for each track, or for all my recording in total???

    Not owning an Apogee Ensemble, maybe someone else who does has more definitive info and can jump in, but do not confuse the lower level of the SPDIF input with "lesser quality". It's a digital connection, with no analog conversion.
    By a common calibration convention, -18 dBFS on the digital scale corresponds to 0 dB (0 VU) on an analog scale.
    I think what you're experiencing, is the fact that so many people think they need to get the meters in Logic closer to 0dB, which is actually NOT what you want (when staying ITB). This thinking is left over from the days of analogue recording, or the early days of 16 bit digital recording.
    Turn your other sources in Logic down, and you'll reap the benefits of not overloading the 2-buss, allowing the plug-ins to do their computations without distorting, etc. Then you can bring the overall level of your mix back up in mastering.
    If the Motif is considerably lower in volume than that, you could always insert a gainer plug-in on the audio track.
    As to the 15 minutes, that is what Logic has pre-allocated for recording to your hard drive. It "resets" each time you go into record, so it's not saying "you only have 15 minutes of record time available". It's saying, "you have 15 minutes of hard disk space allocated each time you go into record, based on your sample rate and bit depth settings". This can be changed in Logic's Audio settings, but it's recommended to leave this number as low as possible/necessary, to avoid disk fragmentation.

  • Low Levels Assistance Please

    Hi Guys
    I need some help with my new set up
    Mac G5 version 10.3.9
    Logic Pro 7
    Audio Interface = motu 828
    I don't understand why I am not getting decent levels through Logic audio, and maybe someone who knows my setup can assist, as I am sure it is an easy solution.
    When I record using a mic into Logic via the MOTU, I get the LEDs lighting up nicely on the MOTU and also in the software interface. But in the Logic audio track mixer the signal is coming through very low and I have to normalize the file every time.
    I am assuming that I need to set up the levels through Logic? Even when I peak the trim knobs on the MOTU and get feedback, I still don't get a signal at a decent volume through Logic.
    Any ideas?
    Thanks in anticipation Carlo

    The level meters in Logic are peak meters (0 dBFS), not RMS meters. I would check your interface and make sure you have it set up correctly.
    Is it a MKI or MKII 828 interface? How are you connecting your mic to it? A little more info might help solve your problem.
    Alan
    Powerbook G4, Dual 1.25 G4   Mac OS X (10.4.2)  

  • Question about CHKDSK ,S.M.A.R.T and low level format

    hi,
    I would like to know the exact difference between CHKDSK, S.M.A.R.T. and a low-level format program. I know that a low-level format writes zeros - but besides that, what other benefits does it have? Does it repair some bad sectors, or mark bad sectors, etc.?
    A very short explanation will be enough.
    thanks
    johan
    h.david

    H.david
    1. CHKDSK: http://en.wikipedia.org/wiki/CHKDSK - CHKDSK verifies the file system integrity on hard disks or floppy disks and fixes logical file system errors.
    2. S.M.A.R.T.: http://en.wikipedia.org/wiki/S.M.A.R.T. - S.M.A.R.T. is a monitoring system for computer hard disk drives (HDDs) and solid-state drives (SSDs) to detect and report on various indicators of reliability, in the hope of anticipating failures.
    3. Low-level formatting: http://en.wikipedia.org/wiki/Low_level_format
    Wanikiya and Dyami--Team Zigzag

  • High level & low level Design in OBIEE

    Hi Gurus/Experts,
    I am new to OBIEE. Please let me know about the high-level and low-level design of OBIEE projects.
    It would really help me develop my career skills.
    Thanks in advance,
    Sriram

    Hi,
    OBIEE 11g Basic Security Guide from Deliver BI
    http://www.box.net/shared/5ef1alb2sp
    http://www.rittmanmead.com/2008/04/migration-obiee-projects-between-dev-and-prod-environments/
    http://obiee101.blogspot.com/2009/07/obiee-how-to-get-started.html
    http://www.scribd.com/doc/60264784/OBIEE11g-Logical-Table-Souorce
    tutorial from the Oracle By Example range
    http://st-curriculum.oracle.com/obe/fmw/bi/bi1113/createanalysis/ps.htm
    Also refer to the one below for installation, migration and essential basics:
    OBIEE 10g repository and web catalogue in OBIEE 11g
    http://www.rittmanmead.com/2010/08/oracle-bi-ee-11g-upgrading-from-bi-ee-10g-repository-web-catalog/
    new features OBIEE 11g is having
    http://obieeelegant.blogspot.com/2011/10/obiee11g-features.html
    There are the full new features about OBIEE 11g.
    Top5 New Features in Oracle Business Intelligence Security
    http://oracleintelligence.blogspot.com/2010/10/top5-new-features-in-oracle-business.html
    Top5 New Features for Oracle Business Intelligence System Administrators
    http://oracleintelligence.blogspot.com/2010/10/top5-new-features-for-oracle-business.html
    Top10 New Features for Oracle Business Intelligence Users
    http://oracleintelligence.blogspot.com/2010/10/top10-new-features-for-oracle-business.html
    List of Bug Fixes Included In OBIEE 11.1.1.5.0
    http://obieeelegant.blogspot.com/2011/11/list-of-bug-fixes-included-in-obiee.html
    OBIEE 11g (11.1.1.5.0) Software Only Installation on Windows Server 2008/2003/XP 64 bit SP1 &SP2
    http://obieeelegant.blogspot.com/2011/09/obiee-11g-111150-software-only.html
    Migrating Application Roles from Dev to UAT server and Production server.
    To move Application Roles, please kindly review the following information:
    Oracle Fusion Middleware Application Security Guide 11g Release 1 (11.1.1)
    7.3.2 Migrating Policies with the Command migrateSecurityStore
    http://download.oracle.com/docs/cd/E14571_01/core.1111/e10043/cfgauthr.htm#JISEC2929
    http://www.rittmanmead.com/2010/10/obiee-11gr1-security-explained-working-with-the-default-security-configuration/
    http://www.rittmanmead.com/2011/04/oracle-bi-ee-11g-migrating-security-credential-store-part-3/
    The same guide can be used, but some features have changed in it. Here is the RPD 11g step-by-step guide:
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/bi/bi11115/biadmin11g_02/biadmin11g.htm
    Refer:
    Re: OBIEE 11G Beginners guide
    Thanks
    Deva

  • Need help in copying Invoice date to lower level item in Sales order report

    Hello Experts,
    I am debugging a sales order report and need a little help. The report displays the invoice date for
    sales order billing documents for the higher-level item in bill-of-material structures. But as per the user requirement,
    I am supposed to show the invoice date for lower-level items as well. The field for the higher-level item is 'UEPOS'.
    I want to copy the invoice date from the higher-level item to the lower-level items. Can you please guide me on the logic?
    Thanking you in anticipation.
    Best Regards,
    Harish

    Hi BreakPoint,
    Thanks for the information.
    I have applied it the same way, but now it is showing only the lower-level line items;
    the invoice dates for higher-level items are not there.
    I am pasting the code here which I have applied.
    Then you can give me more guidence.
    This is to be done only for 'ZREP' sales orders.
    IF w_vbak-auart EQ 'ZREP' AND w_vbak-uepos IS NOT INITIAL.
      READ TABLE t_final INTO w_final_zrep WITH KEY vbeln = w_vbak-vbeln
                                                    posnr = w_vbak-uepos.
      w_final-erdat_i = w_final_zrep-erdat_i.
    ELSEIF w_vbak-auart EQ 'ZREP' AND w_vbak-uepos IS INITIAL.
      w_final-erdat_i = w_invdate.
    ENDIF.
    Can you please sugest me changes here?
    Best Regards,
    Harish
    Edited by: joshihaa on Jul 13, 2010 6:22 PM

  • Audio levels in logic

    Why is the recording level in Logic way lower than my preamp? If I send a signal into my analog preamp, it's way lower in Logic and the waveform is way lower too.

    Hi Tegtours
    The Apogee interfaces have the ability to be calibrated to different operating levels. So it could be a case of re-calibrating or at least resetting to the "factory" conditions.
    The 1272 will probably have been racked by an independent tech, so that's a bit of a wild card in terms of how it's wired. Dan Alexander says it's a line amp, and I'll defer to him as he's a Neve expert.
    WHAT IS A NEVE 1272?
    A 1272 is a line amp/summing buss amp from the circa 1970-76 era of Neve consoles, such as the 8048, 8014, 8016, and BCM10 portable consoles.
    A 1272 is not a mic pre, nor was it ever used by Neve as such.
    However, when asked what he thought about people rewiring the 1272 as a mic pre, Rupert's answer was "why not? It's all the same stuff in there".
    Inside a 1272, one finds an input transformer, a line amp card, and an output transformer.
    Some enterprising techno gentlemen rewire these components and, with the addition of appropriate volume and gain switches, build a preamp that emulates that found in a 1073 (or other Class A) input module.
    The only problem occurs when one attempts to get much over 50 dB of gain, which is accomplished in a 1073 by the insertion of an additional amp card. When wringing that extra gain out of one amp card, one encounters frequency response and distortion anomalies. However, assuming that the wiring is done properly, a 1272 can be made into a mic pre that exactly emulates the classic Neve Class A preamp 1073/1064/1066/1089/1084 type of circuit and, of course, done right, that sounds wonderful...
    http://www.danalexanderaudio.com/neverap.html
    If you've purchased a 1272 ( modified to be a mic amp ) from a reputable tech, I'm sure they'll be happy to show you how to interface it correctly.
    Which output are you using from the P2 and what kind of cabling ?
    James

  • Concurrence of low level and semantic EventListener

    Hello,
    When in the following program [Cancel] is clicked right after program start, I don't mind that the low-level FocusEvent is processed first. But I do wonder why
    the ActionListener is not processed at all.
    Only when [Cancel] is clicked a second time (meanwhile the caret is in the third text field, where there is no FocusListener) does the program exit.
    Removing the comment slashes in the focusLost method is a solution to this problem. However, I did not find anything concerning this topic in the documentation. Does anybody know a page where I can find a rule on which listener eats which?
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    public class Cancel extends JFrame
    { JTextField tf1, tf2, tf3;
      public Cancel()
      { setSize(300,300);
        setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
        Container cp = getContentPane();
        cp.setLayout(null);
        tf1 = new JTextField(20);
        tf1.setBounds(50,50,200,20);
        tf1.addFocusListener(new FocusAdapter()
        { public void focusLost(FocusEvent e)
          { // if (!(e.getOppositeComponent() instanceof JButton))
            tf3.requestFocus();
          }
        });
        tf2 = new JTextField(20);
        tf2.setBounds(50,100,200,20);
        tf3 = new JTextField(20);
        tf3.setBounds(50,150,200,20);
        JButton b = new JButton("Cancel");
        b.setBounds(100,200,100,30);
        b.addActionListener(new ActionListener()
        { public void actionPerformed(ActionEvent evt)
          { System.out.println("Cancel");
            System.exit(0);
          }
        });
        cp.add(tf1);
        cp.add(tf2);
        cp.add(tf3);
        cp.add(b);
        setVisible(true);
      }
      public static void main(String arg[])
      { new Cancel();
      }
    }
    Regards
    Joerg

    I think the process must be
    (1) Click on [Cancel] (above)
    (2) Current focus is moved
    (3) New focus is given
    (4) Component receives click
    but in the code above there is
    (2.5) focus moved to text field
    and this presumably cuts off the rest of the sequence. I'm not sure where in the depths of Swing this could be confirmed/refuted, but since AbstractButton is only the receiver in this chain, I'd doubt it's there.
    Maybe java.awt.EventDispatchThread? Or some kind of FocusManager?
    The exact logic is probably distributed among lots of classes, though :(
    Frankly, if this behaviour is intentional, I don't understand why.
    Hm, if manually reassigning focus didn't take precedence over the user implicitly reassigning focus by clicking on components, then it wouldn't have any point, though (?)
    asjf
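    The four-step sequence above can be sketched as a toy model. To be clear, this is not Swing's real dispatch code (which lives somewhere in KeyboardFocusManager and the button internals, as noted above); the class, field, and method names are all invented for illustration. It only demonstrates the presumed rule: if the focusLost handler re-targets focus itself (step 2.5), the pending button click never arrives.

    ```java
    import java.util.ArrayDeque;
    import java.util.Queue;

    // Toy model of: (1) click -> (2) focus lost -> (3) focus given ->
    // (4) component receives click. If the focusLost handler re-targets
    // focus (step 2.5), steps 3 and 4 are cut off.
    public class FocusOrderSketch {
        String focusOwner = "tf1";               // caret starts in tf1, which has the listener
        final Queue<String> log = new ArrayDeque<>();

        void clickCancel() {
            boolean stealingListenerActive = focusOwner.equals("tf1");
            log.add("focusLost:" + focusOwner);  // step 2
            if (stealingListenerActive) {
                focusOwner = "tf3";              // step 2.5: tf3.requestFocus()
                // sequence cut off: steps 3 and 4 never happen
            } else {
                focusOwner = "button";           // step 3
                log.add("actionPerformed:Cancel"); // step 4
            }
        }

        public static void main(String[] args) {
            FocusOrderSketch s = new FocusOrderSketch();
            s.clickCancel(); // first click: listener steals focus, action lost
            s.clickCancel(); // second click: caret is in tf3, no listener
            System.out.println(s.log); // [focusLost:tf1, focusLost:tf3, actionPerformed:Cancel]
        }
    }
    ```

    This reproduces the reported behaviour: the first click only produces the FocusEvent, and only the second click reaches the ActionListener.
    
    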

  • CS11_not show low level component

    Please find my question below.
    Example BOM:
    A: Header Material
    1 -> Child Material 1
    2 -> Child Material 2
    When I check my header material (A) in CS11, I find that the assembly indicator for Child Material 2 is missing.
    Then I found that in the material master for Child Material 2 the selection method was 1 (Selection by Explosion Date). I changed the selection method to 2 (Selection by Production Version), and now when I check header material A in CS11, I can see the assembly indicator for Child Material 2.
    Please explain the logic: why does changing the BOM selection method make the low-level component appear in CS11?
    Awaiting your replies.

    Dear Kumar,
    Please refer to this link for the BOM selection logic:
    http://help.sap.com/saphelp_47x200/helpdata/en/f4/7d2b0444af11d182b40000e829fbfe/frameset.htm
    Also try this while the BOM selection method is still 1:
    In CS11, after entering your details, press the view icon next to the execute icon and check Multi-level in the extended view.
    It may work for you.
    Madhava

  • Copy higher level item texts to lower level items

    Hi All,
    I want to copy the texts of the higher-level items to the lower-level items. I created the logic to copy the texts from the higher level to the lower level, but the problem is that there are many text IDs in the higher-level items. How can I determine all the IDs dynamically, without hard-coding them? Is there a table that determines the text IDs automatically, depending on the sales order and item number?
    Helpful answers will be rewarded fully.
    Regards,
    Krishna

    Hi,
    You have to go as below:
    From the sales order item, you get the item category, i.e. VBAP-PSTYV.
    With the item category, go to table TVAP to determine field TXTGR, i.e. the text determination procedure.
    Now check table TTXERN with TDOBJECT = VBBP (denotes sales order item) and the TXTGR retrieved from TVAP. Retrieve the TDID values, which are the text IDs associated with that sales order item.
    So the relation is VBAP -> TVAP -> TTXERN.
    Check and let me know if you have any problem.
    Kind Regards,
    Eswar
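    The lookup chain in that answer can be mocked up outside SAP to make the join order concrete. Everything below is invented sample data standing in for table rows; only the VBAP -> TVAP -> TTXERN relation itself is taken from the answer.

    ```java
    import java.util.List;
    import java.util.Map;

    // Mock of the chain: item -> item category (VBAP-PSTYV)
    //   -> text determination procedure (TVAP-TXTGR)
    //   -> text IDs (TTXERN-TDID, keyed by TDOBJECT + TXTGR).
    // Keys and values are placeholder sample data, not real SAP content.
    public class TextIdLookup {
        // VBAP: sales order item number -> item category (PSTYV)
        static final Map<String, String> vbap = Map.of("000010", "TAN");
        // TVAP: item category -> text determination procedure (TXTGR)
        static final Map<String, String> tvap = Map.of("TAN", "01");
        // TTXERN: "TDOBJECT:TXTGR" -> list of text IDs (TDID)
        static final Map<String, List<String>> ttxern =
                Map.of("VBBP:01", List.of("0001", "0002"));

        static List<String> textIdsForItem(String itemNo) {
            String pstyv = vbap.get(itemNo);                 // VBAP lookup
            String txtgr = tvap.get(pstyv);                  // TVAP lookup
            return ttxern.getOrDefault("VBBP:" + txtgr,      // TTXERN lookup
                                       List.of());
        }

        public static void main(String[] args) {
            System.out.println(textIdsForItem("000010")); // [0001, 0002]
        }
    }
    ```

    In ABAP the same chain would be three SELECTs (or one join), which answers Krishna's question: the text IDs are never hard-coded, they fall out of the item's category via TVAP and TTXERN.
    
    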

  • Unable to create a folder in lower levels of root folder

    It appears I cannot create a new folder in the lower levels of the root folder. Currently, I have to create a new folder right below the root folder and then drag it to the lower-level folder. Why can't I do it in one go? Thanks!

    Hi Ray. That is normal behavior for Snow Leopard now. The OS wants to reserve the root level for itself. Also, folders you create at the root level may not accurately reflect the correct permissions.
    You want to try to keep all your files/folders in your Home (User) folder. That way all your documents remain portable when you move to another Mac or migrate your data back after an Erase & Install. If your docs are scattered all over the drive there is a chance some stuff will not get transferred during a move. So try to keep everything inside your Home folder.
    If you have to place a new folder at the root level, try creating the folder on your Desktop, then dragging it to the root level.
    And Welcome to the Macintosh!
    You might find the following links helpful:
    http://www.apple.com/support/switch101/
    http://www.apple.com/support/mac101/
    Cheers!

  • Issue in determining Low level codes

    Hi Gurus,
    We are having an issue running SNP heuristics together with temporary low-level codes. We are on SCM 7.0.
    The SNP heuristics jobs are failing with the message: "Low-level code
    not available for product XXXXXXX at location XXXX.
    Job cancelled after system exception ERROR_MESSAGE"
    Also, the product named in the job log message was flagged for deletion, and I am not sure why it is being picked up for processing and then failing.
    For testing purposes I set SDP relevance to '1' for the product flagged for deletion, and then the program did not consider that SKU, but it stopped at another SKU which was also flagged for deletion. The flip side of using SDP relevance is that the setting is at the global level and may cause issues where the SKU is active.
    I have a couple of questions:
    1) Is there a way for the program to skip the code which has an error and move forward instead of failing the job, and to provide a spool with the error codes? Also, could it generate the LLC numbers for all the correct SKUs instead of stopping at that point and generating no LLCs at all?
    2) Why is the program considering codes which are flagged for deletion, which should not be the case?
    I also tried maintaining only the codes which are not flagged for deletion in the variant and running the heuristics in the background, but it failed at some point with the same message: "Low-level code not available for product XXXXXXX at location XXXX. Job cancelled after system exception ERROR_MESSAGE"
    But the product displayed in the job log is an active SKU, so I am not sure why the job failed with this error.
    When I ran that individual SKU in the background, including the temporary determination of LLCs, it was successful. I am not sure why it failed when it was part of the selection.
    Any suggestions would really help us a lot.
    Thanks and regards,
    Murali

    Hi Datta - Thanks for the replies. I found a workaround by using selection profiles that exclude procurement type 'P', and it worked.
    At our client, whenever a product is made 'non-X0' in ECC, the procurement type is set to 'P', so by excluding that in the variant I was able to continue with my processing.
    But still, don't you think products which are flagged for deletion shouldn't be picked up by the heuristics?
    Thanks and regards,
    Murali
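    Murali's first question (skip the failing SKU, report it, and keep going, instead of cancelling the whole job) is the standard per-item error-handling pattern. A minimal sketch of that pattern follows; the actual low-level-code determination is replaced here by a dummy rule that rejects deletion-flagged products, and all names are invented for illustration, not SAP APIs.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Sketch of "skip and spool": process each SKU independently,
    // collect failures into an error spool, and continue the run
    // instead of cancelling the job on the first bad product.
    public class LlcJobSketch {
        record Sku(String product, String location, boolean deletionFlag) {}

        // Dummy stand-in for LLC determination: deletion-flagged
        // products raise the same kind of error the job log shows.
        static int computeLlc(Sku sku) {
            if (sku.deletionFlag()) {
                throw new IllegalStateException(
                    "Low-level code not available for product "
                    + sku.product() + " at location " + sku.location());
            }
            return 0; // dummy low-level code
        }

        static List<String> run(List<Sku> skus) {
            List<String> errorSpool = new ArrayList<>();
            for (Sku sku : skus) {
                try {
                    computeLlc(sku);
                } catch (RuntimeException e) {
                    errorSpool.add(e.getMessage()); // record and move on
                }
            }
            return errorSpool;
        }

        public static void main(String[] args) {
            List<String> spool = run(List.of(
                    new Sku("P1", "L1", false),
                    new Sku("P2", "L1", true))); // flagged for deletion
            System.out.println(spool.size() + " error(s): " + spool);
        }
    }
    ```

    With this shape, the active SKUs all get processed and the flagged ones end up in the spool, which is exactly the behaviour the question asks SAP to provide.
    
    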
