Average versus Total RMS power

Hi,
I have version 2.0 of Audition and am trying to figure out how Audition calculates the Average versus Total RMS power values shown in the amplitude statistics window. The Help section describes the two values as:
Average RMS Power - Shows the average amplitude. This value reflects perceived loudness.
Total RMS Power - Represents the total power of the entire selection.
But I'm looking for the details regarding how they are coming up with these numbers.
Any thoughts?
Thanks,
Ben

BWYH wrote:
I'll generate a few test files and see if I can figure it out myself.
We've all been there and tried that, and the chances of coming up with an absolutely correct answer are not good at all - to the point of vanishing. The problem is that the average value is computed something along the lines of an Leq calculation - but not quite. And even with absolutely fixed values in files, you will still find that variations in time make a significant difference to the results.
If you really want to know why you won't be able to do this just from a few files, it's because you will be trying to recreate the values in formulae like the ones here by working back from given results. I have an MSc. in acoustics, and I don't even fancy trying, quite frankly!
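For anyone curious what 'along the lines of Leq' means in practice: an Leq-style figure averages in the power domain first and only converts to dB at the end. A minimal sketch of that idea in Python - my own naming, assuming samples normalised to ±1.0 full scale, and emphatically not Audition's actual code:

    import numpy as np

    def leq_db(samples):
        # Leq-style level: average the squared samples (power) first,
        # then convert that single mean figure to decibels.
        x = np.asarray(samples, dtype=np.float64)   # normalised to +/-1.0 full scale
        mean_power = np.mean(np.square(x))
        return 10.0 * np.log10(mean_power + 1e-20)  # tiny guard against log(0)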

Similar Messages

  • How is TOTAL RMS different than AVERAGE RMS

    I create spoken-word recordings. One company that I work with specifies that the recording should be at a level that measures between -23 RMS and -18 RMS. It appears that audio editing programs label their RMS readings in various ways. I had been using Average RMS, but I think maybe this company is using a standard that comes closer to what Adobe calls TOTAL RMS. What attributes cause Audition to give one value as AVERAGE RMS and another value as TOTAL RMS?

    Total RMS used to be what you get if you add together all of the signal in your measuring time window (in Audition normally 50ms) and perform an RMS calculation on it, and that was a pretty standard way of measuring it. Average RMS in Audition is a little different, though. Reading through the somewhat cryptic help files about this reveals that, to all intents and purposes, the RMS measurements will vary at different levels, because some attention is being paid to either Noise Criteria or Noise Rating curves. The lower the level gets with these, the more they favour lower frequencies over higher ones; in other words, the figures reflect human annoyance factors rather than a specifically defined measurement criterion. And no, we can't tell you exactly how this is going to work, because the actual curves and weighting figures have never been made public, and it would take me several days to second-guess it from comparative readings. I simply don't have time...
    It's been like this forever. The original design of this was done by David Johnston, the program's original creator when it was Cool Edit, and I believe he's no longer associated even indirectly with Audition these days. Whether information about this is actually documented anywhere at all, I don't know. Questions about it come up periodically, and we can never answer them satisfactorily.
    The original Cool Edit manual was just as useless about this. The exact words it uses to describe the Average RMS measurement are "Average RMS Power represents the average power of the entire selection. This is a good measure of the overall loudness of the waveform selection." And it's the words "average power" and "overall loudness" that make the difference.
    What happens with Total RMS measurements now seems to be a little different, and you end up with a figure much closer to a LUFS reading (you get this accurately at the bottom of the statistics panel in CC). I know (because this is relatively recent) that the LUFS figure is calculated according to the BS1770-2 standard - and that takes the whole programme dynamics into account. What appears to happen with Total RMS in Audition is that it takes an average of all of the windowed measurements, and this in general turns out to be pretty close to the LUFS value as a rule.
    Now, judging by the numbers they've quoted, the chances are that the company you are working for is using the recent standard for what you're doing - so if you are using CC, then it's pretty easy - select the whole waveform and quote them the value shown at the bottom of the screen. If you aren't using CC then there are several free LUFS meters available as VST plugins.
    My personal view on this new standard, FWIW, is that it's a complete waste of time. It certainly doesn't achieve what it sets out to - or, failing that, all the broadcasters that use it have no idea what to do with it. And that very much includes the BBC, I'm afraid. The standard is supposed to keep overall programme dynamics within a certain loudness range, and save viewers/listeners having to adjust volume levels between programmes. And it fails at that pretty comprehensively. Quite frankly, when it comes to broadcast dynamics standards, they'd have been better off forcing everybody to use the same settings on an Orban Optimod; that would almost certainly have worked better!
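    A worked example makes the gap between the two kinds of 'average' concrete. Suppose a selection contains just two 50ms windows, one measuring -6 dB and one measuring -30 dB. Averaging the dB figures gives (-6 + -30) / 2 = -18 dB; averaging the powers first and converting afterwards gives 10 x log10((10^-0.6 + 10^-3) / 2), which is about -9.0 dB, because the louder window dominates the power sum. (These are illustrative numbers, not Audition's actual formula - but any 'average' statistic has to pick one of those two routes, which is why the figures can sit so far apart.)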

  • Average RMS power in Amplitude Statistics

    I am wondering how to calculate the average RMS power using the window width. How does the window get the sample information from the sound file?
    Thanks
    Leo

    >...but I could see that from a pure theory point of view it might be possible to "slide the window" over one sample at a time. That is how my professor (many years ago) in digital signal processing would have suggested it be done.
    I don't think he would have! Let's take the standard sample window size of 50ms. At 44.1k that contains 2,205 samples. So there would be absolutely no point in sliding a dirty great window like that over a single sample at all, would there?
    Disregarding that bit about dynamic range (which may well be BS), the basic idea is that you are calculating the square root of the mean of the signal squared - that's what RMS stands for. Because the signal is squared before it's averaged, the result is always going to be a positive value. And note that this is the mean over repetitions of the wave, not over a single sample - we are talking about the arithmetic average of all of the waves in the sample window, so all of the positive and negative excursions are taken account of.
    'Series' in this context means that the selected part of the wave being analysed is treated as a sequence of windows of whatever size you've chosen - sliding doesn't come into this at all. Statistically, Audition can give percentages of windows with low and high values, and calculate the average RMS value based on this.
    The reason that I think the dynamic range bit might be BS, or at best completely back to front, is that you'd need to use short windows to catch the quietest and loudest parts and have them treated as such, rather than lumped into a larger sum with either a higher or lower average value, which is what you'd get with longer windows. But I have to say that when I tested this with a range of different window sizes, it really didn't make a lot of difference.
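    To make the 'sequence of windows' idea concrete, here is a rough sketch of the procedure described above - consecutive, non-sliding windows, one RMS figure per window. This is my own code and naming, assuming samples normalised to ±1.0 full scale, not anything lifted from Audition:

        import numpy as np

        def window_rms_stats(x, sr, win_ms=50):
            # Split the selection into consecutive (non-sliding) windows
            # and compute one RMS figure per window, in dB.
            # Assumes the selection is at least one window long.
            n = int(sr * win_ms / 1000)            # 2,205 samples at 44.1k / 50ms
            db = []
            for i in range(0, len(x) - n + 1, n):  # any partial window at the end is dropped
                w = np.asarray(x[i:i + n], dtype=np.float64)
                db.append(10.0 * np.log10(np.mean(np.square(w)) + 1e-20))
            return min(db), max(db), float(np.mean(db))

    The minimum and maximum of that list are the natural candidates for the Minimum and Maximum RMS Power statistics; exactly what goes into the Average figure is the part nobody outside Adobe has pinned down.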

  • Creating Weighted Average Sub-Totals in Standard Apex Report

    Based upon the query below, I am trying to have a Standard SQL report produce Weighted Average sub-totals for 4 columns:
    SELECT   Pool
    , order_by
    , Pool_order
    , Perf_Non_Perf
    , CurrBal
    , LowVal
    , HighVal
    , LossHigh
    , LossLow
    , CalcCurrBal
    , DECODE(CurrBal,0,0,(LowVal/CurrBal * 100 )) AS LowPercent
    , DECODE(CurrBal,0,0,(HighVal/CurrBal * 100 )) AS HighPercent
    , DECODE(CalcCurrBal,0,0,(LossHigh/CalcCurrBal * 100)) AS LossHighPercent
    , DECODE(CalcCurrBal,0,0,(LossLow/CalcCurrBal * 100)) AS LossLowPercent
    From (
    SELECT   Pool
    , 1 as order_by
    , Pool Pool_order
    , Perf_Non_Perf
    , Sum(Current_Balance) AS CurrBal
    , Sum(Low_Value) AS LowVal
    , Sum(High_Value) AS HighVal
    , Sum(Calculated_Cumm_Loss_High) AS LossHigh
    , Sum(Calculated_Cumm_Loss_Low) AS LossLow
    , Sum(Current_Balance_to_Calc_Cumm_L) AS CalcCurrBal
    FROM Current_Data
    GROUP BY Pool, Perf_Non_Perf)
    ORDER BY Pool, Perf_Non_Perf DESC
    The columns in question are:
    LowPercent
    HighPercent
    LossHighPercent
    LossLowPercent
    Any ideas other than using a union to add the sub-totals in as a separate query? (Trying to keep this confined to one query if possible.)
    Thank you,
    Tony Miller
    Dallas, TX

    Tony,
    The PL/SQL forum may answer this better, but had you considered the ROLLUP or CUBE extensions to GROUP BY? ROLLUP would look like this in your query:
      SELECT pool,
             order_by,
             pool_order,
             perf_non_perf,
             currbal,
             lowval,
             highval,
             losshigh,
             losslow,
             calccurrbal,
             SUM (DECODE (currbal, 0, 0, (lowval / currbal * 100))) AS lowpercent,
             SUM (DECODE (currbal, 0, 0, (highval / currbal * 100))) AS highpercent,
             SUM (DECODE (calccurrbal, 0, 0, (losshigh / calccurrbal * 100)))
                AS losshighpercent,
             SUM (DECODE (calccurrbal, 0, 0, (losslow / calccurrbal * 100)))
                AS losslowpercent
        FROM (  SELECT pool,
                       1 AS order_by,
                       pool pool_order,
                       perf_non_perf,
                       SUM (current_balance) AS currbal,
                       SUM (low_value) AS lowval,
                       SUM (high_value) AS highval,
                       SUM (calculated_cumm_loss_high) AS losshigh,
                       SUM (calculated_cumm_loss_low) AS losslow,
                       SUM (current_balance_to_calc_cumm_l) AS calccurrbal
                  FROM current_data
              GROUP BY pool, perf_non_perf)
    GROUP BY rollup (pool,
                     order_by,
                     pool_order,
                     perf_non_perf,
                     currbal,
                     lowval,
                     highval,
                     losshigh,
                     losslow,
                     calccurrbal)
    ORDER BY pool, perf_non_perf DESC
    Jeff
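    As an aside, whatever the SQL ends up looking like: a weighted average subtotal is sum(weight x value) / sum(weight), which is not the same thing as summing or plainly averaging the per-row percentages. A tiny illustration in Python (illustrative numbers only):

        # A weighted average is not the plain mean of per-row percentages:
        # each row has to be weighted by its balance.
        rows = [(100.0, 10.0), (300.0, 60.0)]  # (CurrBal, LowVal): a 10% row and a 20% row
        total_pct = 100.0 * sum(v for _, v in rows) / sum(b for b, _ in rows)
        print(total_pct)  # 17.5 - not the 15.0 a plain mean of 10% and 20% would give

    In practice that means applying the percentage DECODEs to the rolled-up SUMs of the underlying columns, rather than summing the percentages themselves.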

  • Snow Leopard agent provides half of total CPU power?

    Dear All
    I am using a Snow Leopard server as an Xgrid controller.
    There are several agents.
    Some of them are Leopard machines and the others are Snow Leopard.
    Xgrid Admin on the controller shows the total power of each agent.
    I noticed that agents with Snow Leopard seem to provide only half of their total CPU power.
    For example,
    in the case of a MacPro with Leopard which has 8 cores at 3GHz,
    Xgrid Admin shows "Total CPU Power = 24.00 GHz".
    However, for a MacPro with Snow Leopard which has 8 cores at 2.26GHz,
    Xgrid Admin shows "Total CPU Power = 9.04 GHz", which is half of the total CPU power.
    Why??
    Is there any security-like limitation on Snow Leopard whereby the Xgrid controller can occupy an agent's CPU only up to half of its total CPU power?
    Thank you for any comments in advance.
    -oyasai

    Well, I don't know why it was changed, but this article should explain how to change it back to a one-to-one core-to-task ratio.
    http://support.apple.com/kb/HT4020
    -Curt.

  • What is the difference between Total RMS and Average RMS?

    There is a difference I am seeing between Average RMS in Audition 1.5 & Audition CS6. I inserted the same track into each version and selected the same segment, and below are my results. Thanks!

    Thanks for your response. Both versions have the square wave setting, a 50ms window, and the 'account for DC' box checked. Any other thoughts? Kind of a mystery.
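    For what it's worth, those two settings really do change the arithmetic, so it's worth double-checking they match. Here's a guess at what they plausibly do, sketched in Python - this is my interpretation, not Adobe's published formula:

        import numpy as np

        def rms_db(x, account_for_dc=True, ref="square"):
            x = np.asarray(x, dtype=np.float64)    # samples normalised to +/-1.0
            if account_for_dc:
                x = x - np.mean(x)                 # remove any DC offset before measuring
            db = 10.0 * np.log10(np.mean(np.square(x)) + 1e-20)
            if ref == "sine":
                db += 10.0 * np.log10(2.0)         # ~3.01 dB: a full-scale sine reads 0 dB
            return db                              # "square": a full-scale square reads 0 dB

    If both versions really are on identical settings, my next suspect would be how each version aligns its 50ms windows against the selection edges.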

  • Both Average and Totals for Cross Tab

    Hi, can I have both a total and an average for a cross-tab report?
    12/27/2009     100     200     300
    12/28/2009     200     300     400
    Total          300     500     700
    Average        150     250     350
    thanks in advance

    Look at the Total Wizard. You should be able to select the right options to have totals at different levels of the data. Also, give us an example of what you want to achieve.

  • 65 W versus 45 W power adapter?

    Hi,
    I recently moved from a TiBook 800 to a PBook Alu 1.5.
    The new one came with a 65W power adapter. I have an extra 45W power adapter that I used with the TiBook.
    What would be the consequence if I used the 45W adapter with my AluBook?
    Thanks for any advice on that topic
    Luc

    Let me firstly say that it is not recommended by Apple that you use a 45W adapter with a PowerBook that came with a 65W adapter as it can potentially do damage to both your hardware and battery.
    Now, I do every now and then use 45W adapters with my PowerBook and find that while it does work, it's not perfect. To be honest, 45W is not enough for the PowerBook at its peak demand, although for average use it will suffice. I cannot start my PowerBook when it is plugged into a 45W adapter, although once started it will do all the right things. My biggest concern with using a 45W adapter is what is happening when the battery is charging, as I really don't know what the missing 20W is doing to it.
    In general I would recommend that you use it only as a last resort. You are much better off buying yourself a new 65W adapter if your needs require it.

  • LIS List Method average the total line

    In LIS standard analysis, is it possible to create a list method routine that will show the total line value as an average rather than sum?

  • MARS Query 'Hit-Count' versus 'Total-Count'

    Hi, I have a question about MARS queries: I run queries using 'custom columns' and I continually hit over 5000 entries. I was wondering if there is a way to show the following:
    Custom Columns:
    - event type set
    - source IP address
    - destination IP address, port, and protocol
    - <NEW FIELD> 'Hit-count'
    The reason I posit the 'Hit-count' field is that this would help me see everything that happened on the first three columns without being limited when MARS says 'only the first 5000 entries will be displayed'.
    If there is any way to count the number of times it happened in a hit-count field, versus counting the number of times it happened and then limiting the displayed results, I would think that would be tremendously useful.
    Please let me know if there is already a way to do this, or if there are any plans to add this! Thanks!

    Don't know about queries, but you define 'Count' in MARS rules, so you could clone the built-in rule and perhaps modify the count value to suit your needs. I know this is not exactly what you are looking for but it might get you going in the right direction. You also have the following variables to play with to further suit your needs:
    ANY-(Default). Signifies that the IP address for each count is any IP address.
    SAME-Signifies that the IP address for each count is the same IP address. This variable is local to its offset.
    DISTINCT- Signifies that the IP address for each count is a unique IP address. This variable is local to its offset.
    $Target01 to $Target20-The same variable in another field or offset signifies that the IP address for each count is the same IP address.
    Have a look at:
    http://ciscosystems.com/univercd/cc/td/doc/product/vpn/ciscosec/mars/5_3/uglc/rules.htm#wp1054961
    Also, one strange idea, but it might work: in the "Maximum Number of Rows Returned" field, why don't you try putting 10000 - does MARS accept that? I seriously doubt it would work, but it's worth a try. I think they used to have an even lower limit (1000) in older versions.
    Regards
    Farrukh

  • ALV - Average in totals

    I made an ALV and I want to have the AVERAGE function at the end of it instead of SUM.
    Can I do this?

    I tried but it doesn't work ...
      PERFORM ALV_BUILD_FIELDCAT USING 'KONV' 'KWERT' 'ITAB' 'KWERT1'
      ' ' 'LME CASH CU' ' '    NA      NA    NA  'C' NA NA NA NA NA 'C601' NA.
    *     FORM alv_build_fieldcat                                       *
    FORM ALV_BUILD_FIELDCAT USING REF_TABNAME
                              REF_FIELDNAME TABNAME FIELDNAME
                              KEY FTEXT HOTSPOT OUTLEN
                              ICON JUST DOSUM DECS NOZERO UOM
                              INP CHK EMF EDT.
      ADD 1 TO COL_POS.
      FIELDCAT_LN-REF_TABNAME   = REF_TABNAME.
      FIELDCAT_LN-REF_FIELDNAME = REF_FIELDNAME.
      FIELDCAT_LN-TABNAME       = TABNAME.
      FIELDCAT_LN-FIELDNAME     = FIELDNAME.
      FIELDCAT_LN-KEY           = KEY.
      FIELDCAT_LN-COL_POS       = COL_POS.
      FIELDCAT_LN-NO_ZERO       = NOZERO.
      FIELDCAT_LN-SELTEXT_L     = FTEXT.
      FIELDCAT_LN-SELTEXT_M     = FTEXT.
      FIELDCAT_LN-SELTEXT_S     = FTEXT.
      FIELDCAT_LN-DDICTXT       = 'L'.     " (S)hort (M)iddle (L)ong
      FIELDCAT_LN-HOTSPOT       = HOTSPOT.
      FIELDCAT_LN-QFIELDNAME    = UOM.
      FIELDCAT_LN-OUTPUTLEN     = OUTLEN.
      FIELDCAT_LN-ICON          = ICON.
      FIELDCAT_LN-JUST          = JUST.
      FIELDCAT_LN-DO_SUM        = DOSUM.
      FIELDCAT_LN-CHECKBOX      = CHK.
      FIELDCAT_LN-EMPHASIZE     = EMF.
      FIELDCAT_LN-EDIT_MASK     = EDT.
      IF NOT DECS = SPACE.
        FIELDCAT_LN-DECIMALS_OUT  = DECS.
      ENDIF.
      APPEND FIELDCAT_LN TO FIELDCAT.
    ENDFORM.                    "alv_build_fieldcat

  • RMS terminology and power comparison standards

    I am doing my research in order to find the best of the best 2.1 speaker system for my PC. I already have an AUDIGY 2 SE soundcard and I would like to support it with an excellent sound output set. While comparing different speaker systems I have come up with the following two best of the best products: Creative I-Trigue L3800 (Price 99GBP) and Logitech Z-2300 (Price 91GBP). Even if these are the best of the best, it seems that there is a serious difference in power between them. Here are the RMS speaker power specifications:
    1. Creative I-Trigue L3800: 9 watts RMS per channel (2 channels), 30 watts RMS subwoofer
    2. Logitech Z-2300: Total RMS power: 200 watts RMS; Satellites: 80 watts RMS (40 watts x 2); Subwoofer: 120 watts RMS; Total peak power: 400 watts
    My conclusion from the above data is that the Logitech speakers are about 4 times more powerful than those from Creative (200 watts compared to 48 watts). Is my conclusion correct? What else do I have to compare in order to buy the best system?
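    As a rough sanity check on the '4 times more powerful' conclusion: assuming broadly similar speaker sensitivity (an assumption, since the spec sheets don't state it), the difference in maximum level between 200 watts and 48 watts is 10 x log10(200/48), or about 6.2 dB - clearly audible, but nowhere near four times as loud. Sensitivity (dB per watt at 1 metre), driver quality and placement usually matter at least as much as the wattage figure alone.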

    Clearly the " RMS " is the standard.
    They do clarify their goofy ratings
    in small print on some pages.
    Those LT's you mention are great deal.
    Very similar but in a revered brand - KLIPSCH -
    a similar system is available for about $30. more.
    That 'THX' certification is supposedly a big deal.
    Big plus with the L3800 is remote controls functions
    on Zen Vision M.
    good luck

  • Creative uses less wattage to power the systems?

    I directed my question to the support, but I'm not sure if they are going to reply to this.
    I have an Inspire P5800 (72W total RMS power) and the power adapter says 13,5 VAC, 5000mA, which gives 67,5W (5A x 13,5V). So how do they power a 72W system with a 67,5W adapter?
    5W is not that much, you say, but how about this:
    I was reading reviews of the T600 and I just found that it has the same adapter as the P5800 - 13,5 VAC, 5000mA, which again gives 67,5W - but the difference here is bigger, since these speakers have 6W more RMS power, which means more than 10% power loss due to the adapter.
    So here are my questions:
    How can Creative supply different speaker setups with different wattages from the same power adapter?
    Why didn't they mention anything about this?
    Why do they limit the systems by the power adapter? Trying to save money? Trying to cover up bad speaker quality?
    Also I'm wondering if there are other models affected by this issue too...

  • How is actual bit depth measured

    I am analyzing some recordings I made in 24-bit format in Audacity. Audacity can record true 24-bit integer files, which Audition 3.0.1 recognizes as such.
    After checking a couple of the files in Audition 3.0.1, I found that the meaning of "Actual bit depth" in the amplitude statistics is not entirely clear. It does seem to be based on the maximum peak in the file, but how the bit estimate is arrived at is not clear.
    For example, in one of the files if I select any portion that includes the highest peak and get amplitude statistics, the actual bit depth reported is 24.  Example of a short selection that includes the peak:
    Mono
    Min Sample Value:    -22003
    Max Sample Value:    26329
    Peak Amplitude:    -1.9 dB
    Possibly Clipped:    0
    DC Offset:    -.003
    Minimum RMS Power:    -44.45 dB
    Maximum RMS Power:    -17.58 dB
    Average RMS Power:    -30.18 dB
    Total RMS Power:    -25.98 dB
    Actual Bit Depth:    24 Bits
    Using RMS Window of 50 ms
    However, as far as I can tell, any selection in the same file that does not include the highest peak (but may include nearby close peaks) results in an actual bit depth of 16:
    Mono
    Min Sample Value:    -20082
    Max Sample Value:    22172
    Peak Amplitude:    -3.39 dB
    Possibly Clipped:    0
    DC Offset:    -.001
    Minimum RMS Power:    -54.14 dB
    Maximum RMS Power:    -19.96 dB
    Average RMS Power:    -36.26 dB
    Total RMS Power:    -32.95 dB
    Actual Bit Depth:    16 Bits
    Using RMS Window of 50 ms
    So it is unclear what level of peak amplitude distinguishes between 24- and 16-bit actual depth. If the bit-depth analysis is based on most-significant bits being zero, I would think that the trigger for identifying 16-bit actual depth in a 24-bit file would be to find that the 8 most-significant bits of the 24-bit samples are zero for all samples in the selection. So for a 24-bit integer file to have an actual bit depth of 16 bits for a selection, the greatest peak would be less than -48 dBFS. But in the example above, the distinction seems to be having a peak amplitude around -3.4 dB versus -1.9 dB.
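    One plausible test - a guess at the mechanism, since Audition's behaviour here isn't documented - looks at the least-significant bits rather than the most-significant ones: 16-bit data padded into a 24-bit container leaves the bottom 8 bits of every sample zero, at any peak level. A quick Python sketch of that idea:

        import numpy as np

        def actual_bit_depth(samples, container_bits=24):
            # Estimate real resolution from the trailing zero bits
            # shared by every sample value in the selection.
            s = np.asarray(samples, dtype=np.int64)
            nonzero = np.abs(s[s != 0])
            if nonzero.size == 0:
                return 0
            combined = int(np.bitwise_or.reduce(nonzero))
            trailing_zeros = (combined & -combined).bit_length() - 1
            return container_bits - trailing_zeros  # e.g. 24 - 8 = 16 for padded 16-bit data

    On that theory a selection reads as 16-bit whenever every sample in it happens to be a multiple of 256, and as 24-bit as soon as a single sample isn't - which would fit the -1.9 dB versus -3.39 dB observation better than a most-significant-bit test would.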

    >what actual difference does it make to anything?
    Hard to say what difference it makes to anything without knowing what "actual bit depth" actually measures.  It could be important, or could be useless.  In the past I have not paid much attention to it because it is poorly described.  It recently came to my attention because the files from a recent recording in 24-bit integer format were all reported as 16-bit "actual" bit depth.  This is in contrast to some previous recordings made in the same way which were identified as 24-bit "actual".  This implies there might be something different in the data formatting, the communication between the software and driver, between the driver and card, or something else.
    It is a bit surprising that no one got Synt. to explain it properly.
    >Oh, and the other thing about 24-bit int files is that they can lead you into a very false sense of security. If you decided, for instance, to reduce the amplitude of one by 48dB, then save it, and then decide to increase it again by that 48dB, you'd end up with a 24-bit file with just 16 bits of resolution - simply because it's an integer file. If you did the same thing with Audition's 32-bit floating point system, you'd lose no resolution at all.
    In my workflow, which produces original recordings in a 24-bit integer file format, the format is an efficient way of storing 24-bit integer data from a 24-bit card. Processing is another matter: I use the Audition preference to convert files automatically to 32-bit when opening.
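    The quoted 48dB round trip is easy to demonstrate numerically. A rough sketch (my own code, simulating what saving the attenuated file as 24-bit integer does to the sample values):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.integers(-2**23, 2**23, size=10_000, dtype=np.int64)  # fake 24-bit samples

        g = 10 ** (48 / 20)                        # 48 dB is a factor of about 251
        down = np.round(x / g).astype(np.int64)    # attenuate and store as integer
        back = np.round(down * g).astype(np.int64) # restore the 48 dB

        # The bottom ~8 bits never come back: rounding to integer quantised them away.
        print(int(np.max(np.abs(back - x))))       # on the order of 2**7, not 0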

  • Need work-around for 'amplitude statistics' bug

    Does anyone know how to get adobe audition 3 to report the correct "amplitude statistics" for a selected audio clip?
    It is obviously broken now (it worked fine in Cool Edit Pro days). To show this, simply do this:
    create a new file
    generate a one-second sine wave at 1004 Hz, -6 dB amplitude, no modulation
    check the "amplitude statistics" window
    The results (for all or any part of the constant sine wave) should read -6 dB for all the categories (Max, Min, Average, Total). In Audition 3 they typically read other (random?) numbers; however, if you select a section of the sine wave near the middle, with the selection length the same size as the amplitude statistics window, it is sometimes possible to get the correct readings. Just the fact that the readings change for various selections of a constant-level sine wave shows there is a bug.
    Fine... but does anyone know if there is some way to get a correct reading out of Audition 3? Could I turn off some other features, or select the audio section to report the RMS level for in a very particular way?
    Thank You.

    Here is a really simple example of how Adobe Audition 3 is reporting incorrect values for the amplitude statistics of a selection that only includes a CONSTANT AMPLITUDE SINE WAVE. Note that for a selection that includes only a single constant RMS level signal, the reported RMS levels for:
    MINIMUM RMS LEVEL
    MAXIMUM RMS LEVEL
    AVERAGE RMS LEVEL
    TOTAL RMS LEVEL
    should all be exactly the same (by definition!!!)
    How to create an example of the incorrect reporting:
    1) open Audition 3.0
    2) create a new file (for example use mono 16 kHz, 16 bit depth)
    3) generate 10 seconds of silence
    4) make sure "Preferences-Data-smoothing" are both not checked
    5) position cursor at 5 seconds (in middle of silence)
    6) generate a sine wave at -6 dB amplitude, 1 second long
    7) leave freshly generated sine wave selected
    8) open amplitude statistics window
    9) select "0 dB = FS sine wave" and default 50ms window width
    10) note the incorrect statistics reported:
    Minimum RMS power -inf dB
    Maximum RMS power -6 dB
    Average RMS power -8.33 dB
    Total RMS power -7.28 dB
    At no point in the selection does a window have 50ms of -inf samples.
    The maximum power is correct.
    The average power is incorrect; it should also be -6 dB (constant sine wave!).
    The total power should also read -6 dB, for a constant -6 dB sine wave.
    11) now close the amplitude statistics window and select the last 200ms of the sine wave (from second 5.8 to 6.0)
    12) open the amplitude statistics window again (still using 50 ms windows)
    13) note the (really) incorrect statistics displayed:
    Minimum RMS power -inf dB
    Maximum RMS power -inf dB
    Average RMS power -inf dB
    Total RMS power -inf dB
    I assert the Adobe Audition 3 amplitude statistics numbers are obviously wrong. It looks like the calculation is being done over an incorrect section of some data buffer.
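    For comparison, here is roughly what the statistics ought to read for this test signal, computed directly - a sketch using the '0 dB = FS sine wave' convention, and assuming a 1 kHz tone since the generate step above doesn't specify the frequency:

        import numpy as np

        sr = 16000
        t = np.arange(sr) / sr                              # 1 second of tone
        x = 10 ** (-6 / 20) * np.sin(2 * np.pi * 1000 * t)  # -6 dB sine wave

        n = int(0.050 * sr)                                 # 50ms window = 800 samples
        db = []
        for i in range(0, len(x) - n + 1, n):
            rms = np.sqrt(np.mean(np.square(x[i:i + n])))
            db.append(20 * np.log10(rms * np.sqrt(2)))      # sine-wave 0 dB reference

        print(min(db), max(db), float(np.mean(db)))         # all three: -6.0 dB

    Every 50ms window sits entirely inside the constant tone, so Minimum, Maximum, Average and Total should all agree at -6 dB - which is exactly the point being made above.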
