Generation of test limit masks for frequency response (limit testing in SAV)

Does anyone know a simple way to generate test limit masks from a saved frequency response?
The saved frequency response comes from a reference measurement, and I need the limits to follow this response. I must be able to punch in different dB values for the upper and lower limits, and for different frequency bands.
Thanks
Truls
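
In case it helps while waiting for a LabVIEW-native answer, the arithmetic behind such a mask is simple: shift the saved reference curve up and down by per-band dB offsets. The Python sketch below only illustrates the idea; the band edges, offsets and the build_limit_masks helper are made-up placeholders, not an existing toolkit function.
[code]
import numpy as np

def build_limit_masks(freqs_hz, ref_db, bands):
    """bands: list of (f_lo, f_hi, upper_offset_db, lower_offset_db) tuples."""
    upper = np.full(ref_db.shape, np.nan)
    lower = np.full(ref_db.shape, np.nan)
    for f_lo, f_hi, up_db, low_db in bands:
        sel = (freqs_hz >= f_lo) & (freqs_hz < f_hi)
        upper[sel] = ref_db[sel] + up_db   # upper limit follows the reference
        lower[sel] = ref_db[sel] - low_db  # lower limit follows the reference
    return upper, lower

# Example with made-up numbers: a fake reference curve and three bands
freqs = np.linspace(20, 20000, 1000)
ref = -3.0 * np.log10(freqs / 1000.0) ** 2
bands = [(20, 200, 6.0, 6.0), (200, 5000, 3.0, 3.0), (5000, 20001, 6.0, 9.0)]
upper, lower = build_limit_masks(freqs, ref, bands)
[/code]
A point-by-point comparison of the measured response against upper and lower then gives the pass/fail result per frequency bin.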

Ciao emmanuele,
I think what you are searching for is the Extract Single Tone Information.vi that you can find under Waveform-->Analog Wfm-->Measurements-->Extract Tone.
Have you already taken a look?
Antonios

Similar Messages

  • Using USB 6341 device for frequency generation, but level is dropped when connected

    I am using a USB-6341 DAQ device for frequency generation. The desired frequency is generated correctly, but when I connect it to another device the voltage level (I mean the peak-to-peak voltage of the generated signal) drops to 0.6 to 1 V and the frequency also fluctuates. As soon as I disconnect it from the load, the frequency becomes constant and the peak-to-peak voltage returns to 5 V.
    Please help me to resolve my problem.
    Thanks
    Best Regards
    Naseeb
    Solved!
    Go to Solution.

    Hi Naseeb,
    I just wanted to clarify a few more things. Can you answer the following questions?
    What frequency are you trying to generate?
    When there is no load connected do you see the desired frequency?
    When the load is connected, does the frequency always change to the same value or does it fluctuate within a certain range? If it fluctuates, what is the range?
    Also, what microcontroller are you using?
    Can you provide any images?
    Regards,
    Travis Ann
    Applications Engineer
    National Instruments

  • Frequency response requirements for headphones with CMSS on XFi ???

    Hi,
    I would like to know if someone could tell me what kind of headphones are suitable for the CMSS mode with the XFi.
    I mean between: flat response / free-field correction / diffuse-field correction.
    Applying HRTF filtering should mean that headphones with a flat response are the best option (same configuration as binaural recordings).
    But I strongly doubt that the Creative team expects customers to own such a pair of headphones, as they are rather for scientific uses (psychoacoustics, audiology, etc.).
    So, if we look at the technical solutions for a wide audience, we have two options (FF correction and DF correction). Here is the trick: these corrections intend to reproduce some of the effects of the HRTF (for two different environment configurations of HRTF measurements). That is why the frequency response of most headphones has a notch between roughly 4 kHz and 10 kHz.
    To simplify: if we listen to binaural sounds with classical headphones, the effect of the outer pinna is reproduced twice.
    So I guess Creative has implemented some kind of normalization/equalization/correction process to deal with the non-flat frequency response of headphones, but does someone know whether they have chosen diffuse-field or free-field correction?
    This might seem like a detail, but the issue can be very important for the accurate localisation and coloration of 3D sounds with headphones.
    Thank you, and please forgive my English!

    The only possibility that I can think of is that 2/2.1 mode is NOT as simple as headphone mode with crosstalk cancellation. Perhaps the HRTF only kicks in for sound sources outside of the arc directly in front of the listener. If that were the case, you wouldn't perceive any distortion for sound sources in front of you.
    Also, you are wrong regarding DirectSound3D. Keep in mind that Direct3D and DirectSound3D are not the same. The whole point of OpenAL and DirectSound3D is that they present an API to the programmer through which there is NO specification of the number of speakers. When using OpenAL or DirectSound3D, the only thing a programmer can do is specify the location of a mono sound source in 3D space relative to the listener. The speaker settings for your DirectSound3D or OpenAL device will then determine how this sound is "rendered" by the soundcard. It is not under control of the game. For example, if you have 5.1 speakers and the 3D position is behind you, the SOUND CARD will make the decision to use the rear speakers. If you use headphones, the SOUND CARD will decide to apply an HRTF to create the illusion of a rear sound source. The point is that the game does not have control over how many speakers you will get sound from.
    However, to further complicate the situation, there are SOME games (HL2 is an example) where DirectSound3D is used, BUT the sound output of the game itself IS a function of the Windows speaker settings. This is not how programmers are SUPPOSED to use DirectSound3D. I've written about this countless times. There is a good post on [H]ard|Forum about this. Do an "advanced" search with my username (thomase) looking for the terms "hl2" and "cmss".

  • What is the frequency response for the new iPhone 5 earbuds?

    I have recently purchased a new iPhone 5 with iOS 6.0.1. I was wondering about the specifications of the new earbuds. What is the frequency response? Thank you for the information.

    Apple has released 2 sets of earbuds with the iPhone 5 release.  There are the EarPods and also the In-Ear Headphones.  Here are the In-Ear Headphones: http://store.apple.com/us/product/MA850G/B/apple-in-ear-headphones-with-remote-and-mic?fnode=49.  The frequency response on these is 5 Hz to 21 kHz.  I was unable to find the EarPods' response for certain, but the best figure I found was 30 Hz to 16 kHz.  Hope this answers your question.

  • Frequency Response VS FFT for measuring the frequency response of an audio output signal

    We have purchased the Sound and Vibration Toolkit and I have some questions.
    From the frequency response example I looked at, you measure the input, then the output, and the VI gives you the difference.
    I want to measure the audio frequency response of a radio.
    So all I have is the output; I don't have the audio input to use as a reference.
    How would I measure frequency response with the Frequency Response VI with only the output audio signal?
    I was also looking at the FFT example to measure the audio signal frequency response.
    But from what I can tell so far, this only does 1 channel; I want to do both channels at the same time.
    Is there a way to do an FFT on 2 channels at once and have them output on the same graph?
    Thanks for any help you can provide.

    Hi,
    I was looking through the examples, and if you look at the SVXMPL_Multichannel FFT (simulated) examples, this shows how to take an FFT with multiple signals. You just pass in an array of the waveforms.  If you need to put your signals together you can use the Merge Signals VI, which is located in the Express»Signal Manipulation palette.  Let me know if you need any further help with this issue.
    Have a great day,
    Michael D
    Applications Engineering
    National Instruments
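
    For reference, outside of LabVIEW the same idea (an FFT of each channel overlaid on one graph) is only a few lines. This is just an illustrative sketch with simulated data, not the SVXMPL example itself:
    [code]
import numpy as np
import matplotlib.pyplot as plt

fs = 44100.0
t = np.arange(0, 1.0, 1.0 / fs)
left = np.sin(2 * np.pi * 1000 * t)            # simulated channel 1
right = 0.5 * np.sin(2 * np.pi * 3000 * t)     # simulated channel 2

freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
for name, ch in (("left", left), ("right", right)):
    spectrum = np.abs(np.fft.rfft(ch)) / len(ch)          # single-sided magnitude
    plt.plot(freqs, 20 * np.log10(spectrum + 1e-12), label=name)

plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.legend()
plt.show()
    [/code]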

  • FS7 audio Frequency Response 50Hz - 20KHz?

    Hello guys, I'd like to point out an issue that I couldn't find on the forum regarding the audio frequency response. I've always worked with pro camcorders that recorded 20Hz to 20KHz, which is the standard frequency range us humans can hear. So I was surprised when I saw in the FS7 manual that the audio frequency response is 50Hz to 20KHz. Is this a typo? Or is the camera actually limited to 50Hz for the lower frequencies? I would assume that the internal mic would be limited to that range, yes, but not the internal recording of the sound? Why limit the camera to 50Hz? Any light on the subject would be appreciated. Thanks!

    The tests I posted are for the mic inputs, not line inputs. I did test both, but since I often run & gun and cannot support all the extra gear needed to bring the FS7 mic inputs on a par with Sony's EX3 mic inputs, I'm requesting that the high-pass filter added with firmware 2.0 be removed in future updates. We have the wind filter already; no additional filter is needed. While Genelec monitors are pretty good, they're not noted for deep bass response. Most studios use the ubiquitous NS10, which is only good to 80 Hz, which is why few people notice the deficiency. The custom-built system in my screening room has a -3 dB point of 7 Hz. I noticed that even piano sounds thin on this camera. That doesn't surprise me, since the rolloff starts at 200 Hz. All I'm asking for is the same quality I'm used to with my EX3's mic inputs: only 0.5 dB down @ 20 Hz and no thinning of the midrange. When I do large-budget orchestra shoots (rare) I use 24/96 sampling, 8 channels, and large-diaphragm studio condensers. My recordings earned critical praise from Peter Aczel at The Audio Critic. I was a sound engineer for 4 decades before I got my first digital camera. In addition to that, I was consulted as an independent expert on infrasonics for chapter 17 of Ethan Winer's THE AUDIO EXPERT, published by Focal Press. I was responsible for the information on subwoofers and how they operate. Acoustic and speaker design share a long history with me, and my 'day job' is amplifier design, modification and repair. I'm too intimately aware of what's going on in the signal path of an audio system, and when modern digital systems deviate intentionally from DAT quality, it really bothers me. I made a big deal about this in 2007 with the Sony HVR-V1U, and Sony must have listened, because they got it right with the EX1 and EX3 cameras. But sadly, they've reverted to these games with the FS7, though not as badly as in 2007.

  • Where can I find the amplitude and phase frequency response when the AC coupling is set on the 4472

    My friends,
      I need to know the amplitude attenuation but, most importantly, the phase distortion introduced on the low-frequency components of the signals when AC coupling is enabled (i.e. when the high-pass filter is connected) on the 4472 and 4472B DAQ boards.
      I can construct the amplitude frequency response by generating and acquiring a sine waveform of a known amplitude. But I cannot construct the phase distortion introduced by the circuitry.
      However, I assume that this crucial information should be available in the DAQ manual or on the NI website, but until now I haven't found it.
    Thanks in advance,
    crimolvic from Chile

    crimolvic,
    Here are the Specifications and Datasheet for the 4472.  They indicate a phase non-linearity of less than 0.5 degrees across all frequencies.
    For information on how this varies with frequency, see the attached spreadsheet.  This response was the result of testing on a single 4472.  Although this is classified as a "typical" response, it is not guaranteed.
    Have a great day!
    Travis
    Attachments:
    4472 Phase Linearity.xls ‏21 KB
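
    As a rough illustration of what such a curve looks like, AC coupling can be modelled as a first-order analog high-pass filter; the sketch below computes its magnitude and phase response in Python. The 0.5 Hz cutoff is only a placeholder, not the 4472's documented corner frequency:
    [code]
import numpy as np
from scipy import signal

fc = 0.5                                   # hypothetical -3 dB cutoff, Hz
w0 = 2 * np.pi * fc

# First-order high-pass: H(s) = s / (s + w0)
b, a = [1.0, 0.0], [1.0, w0]
freqs = np.logspace(-1, 3, 500)            # 0.1 Hz to 1 kHz
w, h = signal.freqs(b, a, worN=2 * np.pi * freqs)

mag_db = 20 * np.log10(np.abs(h))          # attenuation at low frequencies
phase_deg = np.degrees(np.angle(h))        # phase lead introduced near the cutoff
    [/code]
    The real board's coupling network may be of a different order, so a curve like the attached spreadsheet (or a direct measurement) is still the authoritative source.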

  • Microphone frequency response function

    Hi all,
    I am trying to do an impact test with 3 types of sensors to investigate surface properties of a material.
    The first sensor is the sensor in the modal hammer.
    The second sensor is a geophone, whilst the last sensor is an air-coupled sensor in the form of a microphone.
    Aside from the microphone, which is directly connected to the laptop by USB, the instrumented hammer and the geophone are connected to the NI 9233, which is connected to the NI USB-9162, which finally connects to the laptop.
    This hardware doesn't support analog triggering, therefore software triggering is used.
    I am using a VI example from http://www.ni.com/example/28438/en/ 
    I have successfully set up the LabVIEW VI to read the raw data coming in from all the sensors and to obtain the frequency response of the geophone due to the stimulus signal created by the impact hammer.
    I have tried to obtain the frequency response of the microphone due to the modal hammer; however, no results show up in the graph for the frequency response of the microphone.
    My question is this, what is going on and how do I fix this so that I may obtain the frequency response of the microphone?
    I have attached my vi for your consideration.
    Solved!
    Go to Solution.
    Attachments:
    Y2014M01D28 IRnMicFRFRecAll .vi ‏190 KB

    I cannot fix your VI because I do not have DAQmx or the SVFA toolkit. What I have attached is a simulation which shows some of the concepts.
    The left side simulates acquisition of two channels (hammer and geophone) via the DAQ device and one channel of sound (microphone).  Don't worry too much about the details; I just threw this together quickly. The hammer signal is a square pulse, the geophone signal is one cycle of a sine wave with noise, and the microphone signal is one cycle of a triangle wave plus noise. Each of the pulses is delayed from the start of the acquisition by a different amount to represent the transmission delays of the sound and vibration.  The values chosen are arbitrary and do not simulate any physics.
    I used the sampling rate, block size, and duration controls from your VI, so the numbers of samples and the sampling rates should be the defaults from your VI. The three graphs on the left show the three signals as generated. Note that the Microphone graph has X-axis autoscaling turned off. Also, the data is all in waveforms or arrays. I avoid the use of the Dynamic Data Type produced by Express VIs because it effectively obscures the data structure.
    The right side shows one way to do the triggering. I used Basic Trigger Level Detection.vi from the Waveform Monitoring palette and the Get Waveform Subset.vi from the Waveform palette. Note that there are separate trigger VIs for the DAQ channels and the sound channel. Because they are not started at the exact same time and they do not have the same dt (= 1/sample rate), the waveforms cannot be combined to use a single trigger. The geophone signal is synchronized with the hammer signal so one trigger from the hammer can be used to get both subsets.
    Note that with the default delays the microphone pulse occurs 30 ms after the geophone pulse but in the subsets it occurs about 4 ms before the geophone. This is due to the independent triggering.
    Since you do not have a common signal or trigger for both devices (DAQ and sound), you cannot know exactly what the timing relationship is between them. On every run the differences in the start times will vary. So you will need to do some kind of calibration to determine the time delays.  I am thinking of some kind of periodic stimulus which will produce several pulses to both the geophone and microphone. The first pulses will have indeterminate delays but subsequent pulses should have reproducible delays. The relative spacing and directions during the calibration runs should be the same as the hammer position for the real experiment.
    Lynn
    Attachments:
    Sound and hammer sim.vi ‏28 KB
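
    A text-language analogue of the triggering step described above (detect a level crossing, then take a fixed-length subset starting slightly before it) might look like the sketch below. The names, level and durations are illustrative only and are not taken from the attached VI:
    [code]
import numpy as np

def trigger_subset(y, fs, level, pretrigger_s=0.005, length_s=0.1):
    """Return a window of y starting just before the first rising crossing of level."""
    rising = np.where((y[:-1] < level) & (y[1:] >= level))[0]
    if rising.size == 0:
        return None                          # no trigger found
    start = max(rising[0] - int(pretrigger_s * fs), 0)
    stop = start + int(length_s * fs)
    return y[start:stop]

# Simulated hammer channel: a pulse at 0.3 s plus a little noise
fs = 25000.0
t = np.arange(0, 1.0, 1.0 / fs)
hammer = np.where((t > 0.3) & (t < 0.31), 1.0, 0.0) + 0.01 * np.random.randn(t.size)
pulse = trigger_subset(hammer, fs, level=0.5)
    [/code]
    Each device (DAQ and USB sound) would get its own trigger call, which is exactly why the unknown relative start times mentioned above still need a calibration step.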

  • How to equalize an analog output by a known frequency response?

    I'd like the analog output of my system to have a flat response. What I'm going to do is first measure the stimulus signal, then use a filter to compensate the frequency response so that the subsequent output signal is flat. The difficulty is how to build a filter according to the measured frequency response. I know it's easy to do using the digital filter design part of the Signal Processing Toolkit, but I need to do it in LabVIEW, and the response changes frequently. Any suggestions?
    Bill

    The filter does the signal "adjusting." Filters are typically characterized in the frequency domain, but they work on the signal fed to them, which usually occurs sequentially in time. The lookup table is just to select the appropriate filter so you do not have to do a lot of calculations at run time. As an example, suppose you have just treble and bass, and only one cut and one boost setting for each. You measure the stimulus and find the bass is too high and the treble too low. You select the bass-cut and treble-boost filters for this run. If this is expanded to octave (or third-octave) filters and 16 gain/attenuation settings in each band, the lookup-table approach saves time and may also provide a compact means of recording the equalization settings with your test data (rather than filter coefficients, which do not actually indicate the response without characterizing the filter).
    Lynn
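
    One possible sketch of the "build a filter from a measured response" step is frequency-sampling FIR design: take the inverse of the measured magnitude curve as the desired gain and fit an FIR filter to it. The measured curve, tap count and sample rate below are invented for illustration; this is not the lookup-table mechanism described above, only the filter-construction part:
    [code]
import numpy as np
from scipy import signal

fs = 48000.0
# Hypothetical measured magnitude response of the output chain (Hz, dB)
meas_freqs = np.array([0, 100, 1000, 5000, 10000, fs / 2])
meas_gain_db = np.array([0.0, 2.0, 0.0, -4.0, -6.0, -6.0])

# Equalizer gain = inverse of the measurement, so chain * equalizer is flat
eq_gain = 10 ** (-meas_gain_db / 20.0)
taps = signal.firwin2(513, meas_freqs, eq_gain, fs=fs)

# Apply the equalizer to an output buffer before it goes to the DAC
x = np.random.randn(4800)
y = signal.lfilter(taps, 1.0, x)
    [/code]
    Recomputing the taps each time the measured response changes is cheap enough that a lookup table may not be needed, but the lookup-table approach above remains attractive when the settings must be logged compactly.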

  • Frequency response function modal analysis

    After reviewing the signal analysis functions in DIAdem I have realized that they are a bit limited for modal analysis.  I have a couple of hammer impact tests that I need to process a frequency response function for, and since this is brand new to me I'm not seeing anything in the embedded function list that is going to help me.  I was wondering if anyone out there has a couple of pointers on generating an FRF plot for modal hammer impact tests.  I did notice that the ChnFFT2 command allows me to generate transfer function, coherence, and FFT cross spectrum channels for analysis.  Though I might be confused and this may be everything I need.  My FFT2 settings are below.
    [code]
    FFTIndexChn      = 0
    FFTIntervUser    = "NumberStartOverl"
    FFTIntervPara(1) = 1
    FFTIntervPara(2) = 2500
    FFTIntervPara(3) = 1
    FFTIntervOverl   = 0
    FFTNoV           = 0
    FFTWndFct        = "Rectangle"
    FFTWndPara       = 10
    FFTWndChn        = "[1]/Time axis"
    FFTWndCorrectTyp = "No"
    FFTAverageType   = "No"
    FFTAmplFirst     = "Amplitude"
    FFTAmpl          = 1
    FFTAmplType      = "PSD"
    FFTCrossSpectr   = 1
    FFTCoherence     = 1
    FFTTransFctType  = "Spectrum H0"
    FFTCrossPhase    = 0
    FFTTransPhase    = 0
    Call ChnFFT2("[1]/Time axis","'[1]/H_1' - '[1]/H_4'","'[1]/A_1' - '[1]/A_4'") '... XW,ChnNoStr1,ChnNoStr2
    [/code]

    Standard modal analysis has something denoted as FRF.  I have a LabVIEW application note "The Fundamentals of FFT-Based Signal Analysis..."
    Frequency Response Function
    The frequency response function (FRF) gives the gain and phase versus frequency of a network and is typically computed as
    FRF(f) = SAB(f) / SAA(f)
    where A is the stimulus signal, B is the response signal, SAB(f) is the cross power spectrum of the stimulus and response, and SAA(f) is the power spectrum of the stimulus.
    The frequency response function is in two-sided complex form. To convert to the frequency response gain (magnitude) and the frequency response phase, use the Rectangular-To-Polar conversion function. To convert to single-sided form, simply discard the second half of the array.
    You may want to take several frequency response function readings and then average them. To do so, average the cross power spectrum, SAB(f), by summing it in the complex form then dividing by the number of averages, before converting it to magnitude and phase, and so forth. The power spectrum, SAA(f), is already in real form and is averaged normally.
    Refer to the Frequency Response and Network Analysis topic in the LabVIEW Help (linked below) for the most updated information about the frequency response function.
    http://zone.ni.com/devzone/cda/tut/p/id/4278
    So the options for FFT2 are
    No
    DIAdem does not calculate a transfer frequency response.
    Spectrum H0
    DIAdem calculates the transfer frequency response by dividing the FFT of the output signal (A) by the input signal (E): FFT(A)/FFT(E). DIAdem averages the amplitudes of the individual transfer functions.
    Spectrum H1
    DIAdem specifies the cross spectrum and the auto spectrum for each signal pair. DIAdem calculates the transfer frequency response by dividing the averaged spectra: Middle(cross(A,E))/Middle(auto(E)). DIAdem does not average phases, because phases can cancel each other out.
    Spectrum H2
    DIAdem specifies the cross spectrum and the auto spectrum for each signal pair. DIAdem calculates the transfer frequency response by dividing the averaged spectra: Middle(auto(A))/middle(cross(E,A))
    If you assign the values Spectrum H1 or Spectrum H2 to the variable FFTTransFctType, DIAdem averages and divides the cross spectra and the auto spectra and calculates the amplitudes last.
    This indicates which auto spectrum is used when the FRF amplitude type is set to a power spectrum.
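
    For comparison, the H1 and H2 estimators described above can be written out directly from cross and auto spectra. The following is a hedged sketch using SciPy rather than DIAdem's ChnFFT2, with a simulated "structure" standing in for real hammer/response data:
    [code]
import numpy as np
from scipy import signal

fs = 2500.0
n = 25000
excitation = np.random.randn(n)                               # stimulus E (e.g. hammer)
# Fake structure: a simple IIR filter plus measurement noise
response = signal.lfilter([0.2, 0.3], [1.0, -0.8], excitation) + 0.01 * np.random.randn(n)

f, P_ee = signal.welch(excitation, fs=fs, nperseg=2500)         # auto spectrum of E
_, P_aa = signal.welch(response, fs=fs, nperseg=2500)           # auto spectrum of A
_, P_ea = signal.csd(excitation, response, fs=fs, nperseg=2500) # cross spectrum of E and A

H1 = P_ea / P_ee             # cross(E,A) / auto(E): best when noise is on the output
H2 = P_aa / np.conj(P_ea)    # auto(A) / cross(A,E): best when noise is on the input
coherence = np.abs(P_ea) ** 2 / (P_ee * P_aa)

mag_db = 20 * np.log10(np.abs(H1))
phase_deg = np.degrees(np.angle(H1))
    [/code]
    The averaging happens inside welch/csd, which mirrors the "average the spectra first, divide last" behaviour of Spectrum H1 and H2 described above.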

  • Loudspeaker Frequency Response

    Hi all! I'm making a program to measure the frequency response of a loudspeaker. I'll be using a sweep to test the loudspeaker. 
    In particular, I would like the program to be like this, which is real time. How can I implement this in LabVIEW? This is the screenshot.
    Details about the plot.
    First graph:
    Shows a plot in real time of a frequency sweep with a constant sine sweep amplitude of 1 V. When the sweep is started, the graph shows a plot of the FFT moving from left to right, with the peak of the FFT at the maximum amplitude of 1 at the corresponding frequency of the sweep.
    Second graph:
    Shows the plot of the Sound Pressure Level in dB versus frequency.
    Please refer to the picture and video link below.
    https://www.youtube.com/watch?v=sKC3ioWXG38, skip to 4:10

    Assuming your idea is to sweep a frequency into an amplifier connected to the loudspeaker, and measure the frequency response with a microphone:
    You need to monitor the signal at the loudspeaker terminals to account for any non-linearities in the signal generator and amplifier.
    You need to know the frequency response of the microphone; this is difficult, and that is why calibrated microphones are expensive.
    You need an anechoic chamber so that the results are not affected by any room resonances.
    Your sound level plot is in dB(A). My understanding of A-weighting is that it makes the human-perceived loudness constant across the audio frequency range. If you are concerned about loudspeaker performance, it may be worth discarding this additional weighting curve and its complexity.
    This will be a difficult project. Please let us know how you get on.
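
    As a starting point, here is a simulation-only sketch of the sweep idea: excite with a logarithmic sweep and divide the FFT of the microphone signal by the FFT of the sweep to get relative level versus frequency. The band-pass filter below merely stands in for the amplifier/loudspeaker/microphone chain; a real measurement would acquire both signals from the hardware instead:
    [code]
import numpy as np
from scipy import signal

fs = 48000.0
T = 2.0
t = np.arange(0, T, 1.0 / fs)
sweep = signal.chirp(t, f0=20, t1=T, f1=20000, method="logarithmic")

# Stand-in for amplifier + loudspeaker + microphone (replace with acquired data)
b, a = signal.butter(2, [80 / (fs / 2), 12000 / (fs / 2)], btype="band")
mic = signal.lfilter(b, a, sweep)

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
H = np.fft.rfft(mic) / np.fft.rfft(sweep)          # deconvolve the sweep
level_db = 20 * np.log10(np.abs(H) + 1e-12)        # relative level in dB
# Only the 20 Hz - 20 kHz portion of level_db is meaningful (the swept band)
    [/code]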

  • Frequency Response Analyser or Impedance Spectroscopy

    Hi
    I am looking for a LabVIEW program to measure the frequency response of, say, a tuned circuit.
    I will be using a GPIB function generator from 1Hz to 10MHz and measuring the peak-to-peak value across a shunt resistor that is in series with the tuned circuit.
    Thus as the frequency increases the tuned circuit will respond accordingly. If we take samples every 100Hz then we can produce a spectrum of the tuned circuit.
    I am actually going to use this for measuring the characteristics of an electrochemical cell.
    Any suggestions?
    Wayne

    Thanks Carlos
    The latter method I did not think of but will try it.
    Cheers
    Wayne
    "JuanCarlos" wrote in message
    news:[email protected]..
    > Wayne,
    >
    > Looks like a sweep sine analysis should give you a good idea about the
    > frequency response. Make sure that you measure the input and output
    > signals of the circuit; this way you can just compare RMS values and
    > get the frequency response.
    >
    > Another method that you may want to look into is a broad band
    > frequency response. Basically you send white noise to the circuit;
    > then you acquire the input noise and the output noise; then you
    > calculate the FFTs of this data and compare them; after some averaging
    > you get the frequency response graph of your device. LabVIEW has a VI
    > called Frequency Response Function that does a large part of the job;
    > together with the "Frequency Analysis of a Filter Design.vi" example
    > you can get a good idea on how to perform this test.
    >
    > I hope this helps.
    >
    > Regards,
    >
    > Juan Carlos
    > N.I.
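
    For what it's worth, the stepped-sine version of this is a simple loop: set the generator frequency, measure the RMS of input and output, and take the ratio. The sketch below is only a skeleton; the generator and acquisition functions are placeholders for the real GPIB/DAQ calls, and the numbers are arbitrary:
    [code]
import numpy as np

def set_generator_frequency(freq_hz):
    pass    # placeholder: send the frequency command to the GPIB function generator here

def acquire(n_samples, fs):
    # placeholder: return (input_signal, output_signal) from the DAQ hardware
    t = np.arange(n_samples) / fs
    return np.sin(2 * np.pi * 100 * t), 0.5 * np.sin(2 * np.pi * 100 * t)

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

fs = 1_000_000.0                            # placeholder sample rate
freqs = np.logspace(0, 7, 200)              # 1 Hz to 10 MHz, log spaced
response_db = []
for f in freqs:
    set_generator_frequency(f)
    vin, vout = acquire(10000, fs)
    response_db.append(20 * np.log10(rms(vout) / rms(vin)))
    [/code]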

  • Calculate frequency response using FFT and inverse FFT

    Hi,
    Attached is the program using FFT and inverse FFT to filter a time-domain signal. The frequency response of the LPF can be obtained by using the chirp signal from 0 to 5kHz. However, I don't know why the signal obtained from a sine wave input is so strange: the amplitude is wrong and there is an envelope around it. Please help point out what's wrong.
    Bill
    Attachments:
    fft filter.vi ‏87 KB

    If you check the help text for sine wave.vi you'll see that it generates the sine wave based on the following formula:
    yi = a*sin(phase[i])
    for i = 0, 1, 2, …, n – 1 and where
    a is amplitude,
    phase[i] = initial_phase + f*360*i
    This means that when you input a=1, f=0.1 and initial_phase=0 you will get a sine wave that is sampled every 36 degrees, i.e. at 0, 36, 72, etc. Due to this sample rate you never hit the peaks at +/- 90 degrees, so the wave is clipped at the top. If you input an initial phase of 54 degrees you will get the full amplitude, but the wave is still deformed due to the coarse sampling...
    The lower the frequency you put in, the closer the sampled representation will be to the true sine.
    Use the Waveform Generator VIs from the analyze palette if you want to have more control over the wave generation (sample rates etc.). (Not available if you have the base package.)
    MTO
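
    A quick numeric check of that explanation (a made-up script, not part of the original VI): with f = 0.1 and initial phase 0 the samples land on multiples of 36 degrees and never reach the +/- 90 degree peaks, so the observed amplitude tops out below 1.
    [code]
import numpy as np

f = 0.1                                   # cycles per sample
n = np.arange(100)
phase_deg = 0.0 + f * 360.0 * n           # same formula as the help text above
y = 1.0 * np.sin(np.radians(phase_deg))
print(y.max())                            # ~0.951 (sin 72 deg), not 1.0
    [/code]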

  • Audio Frequency Response?

    I'm comparing it with another unit that I have now, and I know nothing, I mean nothing, about Hz frequency stuff.
    The iPod Touch is rated at 20Hz to 20,000Hz. The unit I have is 100Hz to 12.5kHz. Which is better, as in more/better frequency response?

    Having been an audiophile for decades, I can tell you specs lie. Cheap audio equipment often has specs that ON PAPER best units costing literally 100 times as much. It's best to read trusted sources for reviews and use your own ears. I learned about the "arms race" in specsmanship back in the 70's when I was in and out of high-end audio salons for years, replacing components in my home system. I finally got what I really like when I got, among other things, a Mark Levinson amp. NOTHING beats listening, and since our ears have short-term memory, the best way to know which of several pieces of equipment is best for you is to do a carefully controlled A/B test. A/B tests CAN be biased, but done fairly (no salesman shenanigans) they are the best way to know which item to buy (presuming your pocketbook matches the quality of your hearing).
    Message was edited by: David.

  • MacBook core duo Sept. 2006  - the audio is a mystery.  Any analogue out has bass boost with bass distortion.  With digital out, by USB or Airport Express Airtunes, the frequency response is normal.  Somewhere, Apple put in an analogue bass boost.. why?!

    Since new, my Macbook core duo has sent all audio to all analogue outputs with frequency distortion.  Some physical hardware or firmware in the analogue section adds a bass boost to the frequency response of the audio....  And, this boost adds bass frequency distortion to most, if not all, of the audio analogue output.
    This happens for all audio sources: players, iTunes, videos, streaming audio/video, movies... any sound source available to the MacBook.  No-one with whom I have talked about this problem has ever heard of it.  Inelegant ways of reducing it are to use equalisers on players and iTunes.  iTunes even has a preprogrammed "bass reduce" setting on its equaliser, as if it knows already that this inherent bass-boost "feature" has been built into the MacBook and cannot be defeated by the user.
    The digital audio is of sufficiently high quality to play well on a very good to excellent sound-system;  so, it's a shame someone has mucked around with the frequency response of the analogue conversion, by designing the frequency distortion right into the computer.  This motherboard, (which includes the sound-section), has been replaced, along with the speakers, and everything sounds exactly the same as it was before.
    To get a flat frequency response and no frequency distortion, I listen on one receiver using AirTunes (it doesn't matter whether the output jack is wired analogue or connected via optical cable; the freq. response is flat), and I listen on a high-end stereo system using a USB output port to a D/A converter, passing the analogue result over long patch cords connected to the "line-in" jacks of the stereo.  The bummer is that most of my audio sources are not derived from iTunes, hence I cannot use AirTunes with AirPort Express for them.  I've tried using "Airfoil" without luck; the sound becomes distorted with sibilance and other frequency anomalies.
    Question:  has anyone else discovered this sound imperfection in any Mac product?  And, does anyone know if Apple is aware of it?  Finally, has anyone found an Apple fix for the problem;  or, at least, come up with better solutions than I have?
    Thanks heaps for any input and answers you may supply!!  junadowns

    Army wrote:
    This might not help you a lot, but if you want a stable system, try using packages from the repo wherever possible, look at the news before you update your system and don't mess things up (like bad configuration etc.).
    When it comes to performance, you won't gain much by compiling linux by yourself! Just use the linux package from [core] or if you want a bit more performance, install the ck-kernel from
    [repo-ck]
    Server = http://repo-ck.com/$arch
    (this has to go to the bottom of /etc/pacman.conf)
    (use that one which is best for your cpu (in your case this might be the package linux-ck-corex).
    Hmmm, Linux-ck-corex doesn't even load.. I am now trying to install the generic one. Hope it works.
    Edit: I will first try linux-lqx...
    Last edited by exapplegeek (2012-06-26 18:33:31)
