FFT using 16-bit precision numbers

Hi,
I was recently putting together a data acquisition program to log and process data continuously over a period of a week.
Hardware: cRIO, 9233 modules
Signal processing requirements: 50kS/s on 8 channels, performing FFTs on all channels with greater than 1Hz resolution (attempted to use 64k sample block size)
We had a fair few issues with the hardware hanging or just stopping without any explicit errors. The program appeared to run fine using a 32k sample block size. We assumed that it might be a RAM issue; however, we haven't confirmed this.
One consideration was that all the FFT processing is performed on double precision numbers, and it did not appear possible to perform the FFT operation on single or 16-bit precision. The few FFT VIs I had a look at all used Call Library Function Nodes that I could not edit.
Just interested to know if there is an easy way to edit these routines, or if anyone has created some VIs that work on varying precision numbers.
Again, not 100% sure this was the sole issue with the above program, but interested nonetheless.
Thanks for your time.
Cheers,
Mike

Hi Mike,
I've looked at the FFT VIs and it seems you are correct that all of them are designed to operate on double precision numbers. As you also noted, you cannot rewrite these VIs because you are using the Real-Time VIs, which use DLL calls. Your best bet for varying the precision of your FFTs would be designing your own, or soliciting other users for what you are looking for (as you have already done here). You can also do the processing on double precision data, but convert back to 16-bit precision after the FFT to conserve space. I would also recommend that you submit a product suggestion for this; I suspect it would be useful to a lot of people!
You may also want to check out the Real-Time System Manager which is a very useful tool for monitoring your Real-Time system's performance.
I hope this gives you some helpful information!
Brian A.
National Instruments
Applications Engineer
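Since the shipping FFT VIs are locked behind Call Library Function Nodes, the practical pattern Brian describes is: transform in double precision, then down-convert only the stored results. A rough Python sketch of that pattern (a toy pure-Python FFT stands in for the VI; the block size and waveform are purely illustrative):

```python
import cmath
from array import array

def fft(x):
    # Minimal radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddled[k] for k in range(n // 2)] + \
           [even[k] - twiddled[k] for k in range(n // 2)]

# One 1024-sample block of a toy waveform, transformed in double precision...
block = [float(k % 16) for k in range(1024)]
spectrum = fft(block)

# ...then only the magnitudes are kept, stored as 4-byte singles
# to halve the memory footprint of the logged results.
magnitudes = array('f', (abs(z) for z in spectrum))
print(magnitudes.itemsize)   # 4 bytes per bin instead of 8
```

The compute stays in doubles (where the library works), and only the long-lived buffers shrink, which is usually where a week-long acquisition runs out of RAM.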

Similar Messages

  • Formatting a string with time stamp and double precision numbers

    %s\t%f\r%f
    This is a format string that I have in old code that I've decided to change.  Problem is I cannot make sense of the string codes in my own old code! Let me explain what I want, and hopefully someone can explain how to do it.
I am using the Format Into String subvi to merge a time stamp (formatted as %m%d%Y%H%M%S%5u) and two different double precision numbers. This string is then wired into the Write Characters to File subvi so that I can record data as a .txt file and open it in either Matlab or Excel. There is a minor problem with the string format above: in Excel the first time stamp entry is blank, and the first loop only gives the two double precision numbers without the time stamp - the time stamp appears in the next loop (probably a looping issue and not due to the string format, but if you see differently please let me know). Now what I want to do is 1. potentially fix that problem and 2. add some more doubles.
    1. Is there a string format issue that is evident that I am not seeing that causes the time stamp to be formatted into the string after a carriage return?  Or should I be looking at looping issues?
    2. How do I add another one - three floating point numbers (double precision)?  Are the \'s marking different numbers in this string constant?  Or is it the %?  I can't find any information about the \'s, but I see that % begins the format specifier. 
    Ideally, I want these data in the following columns:  Date, Time(absolute), FP, FP, FP, carriage return for the next loop (FP is floating point double precision number).
    Thanks,
    Brad
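For what it's worth, the row layout Brad wants (Date, Time, then several doubles, one record per line) looks like this in text form. A hedged Python sketch with made-up values, using \t between columns and a single newline (rather than a bare \r) to end each record; the file name and values are illustrative:

```python
import time

rows = [(time.time(), 1.23, 4.56, 7.89)]   # (timestamp, FP, FP, FP)

with open("log.txt", "w") as f:
    for t, x, y, z in rows:
        # Date and Time as two tab-separated columns.
        stamp = time.strftime("%m/%d/%Y\t%H:%M:%S", time.localtime(t))
        # One record per line; ending with "\n" (not a lone "\r") keeps
        # spreadsheet imports from shifting columns between rows.
        f.write("%s\t%f\t%f\t%f\n" % (stamp, x, y, z))

print(open("log.txt").read())
```

The % codes are the same printf family LabVIEW's Format Into String uses, so the key point carries over: the backslash escapes (\t, \r, \n) are separators and line endings, while the % specifiers mark where each value is substituted.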

    Hi JonN,
There is no need for the String Concatenate function in your code; you can get the same result if you wire the output of Format Into String to a shift register, and wire the shift register into the "initial string" input of Format Into String.
    <<KUDOS ARE WELCOME>>
    ELECTRO SAM
    For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life.
    - John 3:16

  • ARM Luminary TCP debug double precision numbers

    Hi,
    I'm working with Labview and a LM3S8962 Luminary evaluation board.
I'm debugging over the TCP port, and when I use double precision float variables I see what look like random numbers on my front panel indicators and probe points (e.g. 1.5664325e34, -12.452244e128) for those variables, but at the same time I'm printing the numbers on the board's LCD display and they are shown OK. If I do all the operations in double precision and then convert the result to single precision to display it, then indicators and probes work fine. Also, if I debug with the USB ULINK JTAG, everything works OK in single, double and extended precision.
    Has anybody experienced something like that?
    Am I missing some TCP configuration, perhaps?
    Regards,
    Matthuz

    Hello Kevin, thank you for your reply.
    I'm attaching a little demo to show you what is going on. If I set the debug options to use ULink all works nice, but it doesn't with TCP.
    It could be something with the signal generator that I'm using?
    Attachments:
    Test_Double.zip ‏16 KB
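One classic cause of plausible-looking garbage doubles arriving over a byte stream (while the same values print fine locally) is a byte-order or framing mismatch between the target and the host. Whether that is what the TCP debug path does here is only an assumption, but the effect is easy to sketch:

```python
import struct

# A double sent as raw bytes is only meaningful if both ends agree on
# byte order. Pack 1.5 little-endian, then misread it as big-endian:
wire = struct.pack("<d", 1.5)
misread = struct.unpack(">d", wire)[0]

print(misread)         # a tiny denormal, nothing like 1.5
print(misread == 1.5)  # False
```

Eight swapped bytes turn an ordinary value into exactly the kind of wild exponent seen on the indicators, which is why narrower or differently-routed representations can appear to "fix" it.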

  • Big and high precision numbers

    Hello,
Is it possible to store and work on numbers in the range +- 10.000.000.000,00000001 (8 decimal places)? If so, how?
    Type p gives an overflow; type DECFLOAT34 skips the decimal part.
          v type DECFLOAT34 value '0.00000001',
          COMPUTE EXACT  v = v + '1000000000.00000001'.
    Trying to compile the above code results in an error (line 2): Incorrect statement: "=" missing.
    Cheers,
    Bart

    Hi,
    @ Chandravadan Jaiswal
Ah, a simple yet significant mistake; I forgot the
    v(length_here)
    part.
    tried:
    DATA : v type p DECIMALS 8.
    and got overflow,
    Thanks.
@ Sri:
    I was successfully able to obtain a number like this: 2.000.000.000.000,00000004
    Number of digits > 16?
    Edited by: Bartosz Bijak on Mar 5, 2010 9:45 AM
    Edited by: Bartosz Bijak on Mar 5, 2010 9:46 AM
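For scale: +-10,000,000,000.00000001 needs 19 significant digits, more than a 64-bit double's ~16, so some decimal type with at least 19 digits is required. Python's decimal module is used below purely as a stand-in to show the arithmetic itself is unremarkable once the type is wide enough (DECFLOAT34 carries 34 digits; the ABAP error above reads like a syntax issue rather than a capacity one, though that is my reading):

```python
from decimal import Decimal, getcontext

# 34 digits of working precision, mirroring DECFLOAT34's headroom.
getcontext().prec = 34

v = Decimal("0.00000001")
v = v + Decimal("1000000000.00000001")
print(v)   # 1000000000.00000002 -- all 8 decimal places survive
```

The inputs are given as strings so no binary-float rounding sneaks in before the decimal arithmetic starts.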

  • Is it possible to set the precision for the subtraction operator?

    Hi everyone.
I have two complex double precision numbers that are supposed to be equal. Each one is the result of a different set of equations.
    When I subtract one from the other, the result is not zero.
    I think this can be explained by the fact that they are quite large numbers (about 1E+101), causing the subtraction operator to drop (or round, or do something not legit with) the two numbers. Moreover, if I repeat the subtraction with different large numbers, I obtain the expected result (zero) about half of the time, and the rest of the time I obtain a value between 1E+84 and 1E+86.
    What I would like is to get the result right every time, so I thought about setting a precision on the subtraction operator. Is that possible? If it is, how am I supposed to do that, and if not, what's wrong with the subtraction operator?
    Thanks
    tonh

    Thought 1: 1e+101 is pretty big!  Do you *need* all your calcs to carry such a big exponent?  For example, can you run all the calcs on values that have been scaled by 1e-100, then at the end of all the calcs scale back by 1e+100? 
    Thought 2: If you just need to compare for approximate equality, you could divide the difference by the larger of the two values being subtracted.  If you subtract two virtually equal 1e+101 values, you may get a difference in the order of 1e+85.   The division will give you a ratio of ~1e-16.  Ratios in that realm will mean that the two original values are about as nearly equal as the floating point representation can express.   This type of method scales pretty well to work with both very small and very large floating point numbers.
    Thought 3: If the inputs to your calcs are in the order of 1e+85 or less, you may really have your work cut out for you.  You'll need to think carefully about floating point representation error at each step of the calculations to know where you can round, truncate, approximate, etc.  There may be places where rounding will be *necessary* and other places where it is *disastrous*.
    To summarize: you need to apply some of your error knowledge to your code.  The 1e+101 calculated values probably don't have more than 6-8 *significant* digits of accuracy, right?  (Most numbers come from some kind of measurement, or a rounded-off value for a scientific constant, etc.)  You'll need to analyze your values and calcs to understand about how many digits of the 1e+101 numbers are truly *significant*.  Then your code will need to treat values which differ by less than that amount as if they are truly equal.
    -Kevin P.
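Kevin's Thought 2 (judge the difference relative to the larger operand) is the standard relative-tolerance comparison. A small Python sketch, with an illustrative tolerance:

```python
def nearly_equal(x, y, rel_tol=1e-12):
    # Divide the difference by the larger magnitude, so the test scales
    # from tiny to huge doubles alike; treat two exact zeros as equal.
    big = max(abs(x), abs(y))
    return big == 0.0 or abs(x - y) / big <= rel_tol

a = 1.0e101
b = a * (1 + 1e-15)           # differs only in the last few bits

print(nearly_equal(a, b))     # True: equal to within representation error
print(nearly_equal(a, a * 2)) # False
```

A ratio around 1e-16 means the operands are as close as doubles can express, exactly as described above; Python's standard library offers the same idea as math.isclose.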

  • BUG: Large floating point numbers convert to the wrong integer

    Hi,
    When using the conversion "bullets" to convert SGL, DBL and EXT to integers there are some values which convert wrong. One example is the integer 9223370937343148030, which can be represented exactly as a SGL (and thus exactly as DBL and EXT as well). If you convert this to I64 you get 9223370937343148032 instead, even though the correct integer is within the range of an I64. There are many similar cases, all (I've noticed) within the large end of the ranges.
    This has nothing to do with which integers can be represented exactly as a floating point value or not. This is a genuine conversion bug mind you.
    Cheers,
    Steen
    CLA, CTA, CLED & LabVIEW Champion
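For reference, how an IEEE single treats a value in that range can be checked by round-tripping it through 4 bytes; a Python sketch using struct to emulate a SGL (the same 24-bit significand):

```python
import struct

n = 9223370937343148030

# Round-trip n through IEEE single precision, then truncate back to int.
as_sgl = struct.unpack("<f", struct.pack("<f", float(n)))[0]

print(int(as_sgl))       # 9223370937343148032: the nearest representable SGL
print(int(as_sgl) - n)   # 2
```

Near 2^63 the representable singles are spaced 2^39 apart, so neighbouring integers collapse onto the same SGL; any disagreement between tools in this range is worth checking against that spacing first.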

Yes, I understand the implications involved, and there definitely is a limit to how many significant digits can be displayed in numeric controls and constants today. I think that either this limit should be lifted or a cap should be put on the configuration page when setting the display format.
    I ran into this problem as I'm developing a new toolset that lets you convert all the numeric formats into any other numeric format, just like the current "conversion bullets". My conversion bullets have outputs for overflow and exact conversion as well, since I need that functionality myself for a Math toolset (GPMath) I'm also developing. Eventually I'll maybe include underflow as well, but for now just those two outputs are available. Example:
    I do of course pay close attention to the binary representation of the numbers to calculate the Exact conversion? output correctly for each conversion variation (there are hundreds of VIs in polymorphic wrappers), but I relied in some cases on the ability of the numeric indicator to show a true number when configured appropriately - that was when I discovered this bug, which I at first mistook for a conversion error in LabVIEW.
Is there a compliance issue with EXT?
    While doing this work I've discovered that the EXT format is somewhat misleadingly labelled as "80-bit IEEE compliant" (it says so here), but that statement should be read with some suspicion IMO. The LabVIEW EXT is not simply IEEE 754-1985 compliant anyway, as that format would imply the x87 80-bit extended format. An x87 IEEE 754 extended precision float has only a 63-bit fraction and a 1-bit integer part. That 1-bit integer part is implicit in single and double precision IEEE 754 numbers, but it is explicit in x87 extended precision numbers. LabVIEW EXT seems to have an implicit integer part and a 64-bit fraction, and is thus not straight IEEE 754 compliant. Instead I'd say that the LabVIEW EXT is an IEEE 754r extended format, but still a proprietary one that deserves a bit more detail in the available documentation. Since it's mentioned in several places in the LabVIEW documentation that the EXT is platform independent, your suspicion should already be high though. It didn't take me many minutes to verify the apparent format of the EXT in any case, so no real problem here.
    Is there a genuine conversion error from EXT to U64?
The integer 18446744073709549568 can be represented exactly as EXT using this binary representation (mind you that the numeric indicators won't display the value correctly, but instead show 18446744073709549600):
    EXT-exponent: 100000000111110b
    EXT-fraction: 1111111111111111111111111111111111111111111111111111000000000000b
    --> Decimal: 18446744073709549568
    The above EXT value converts exactly to U64 using the To Unsigned Quad Integer "bullet". But then let's try to flip the first of the fraction's trailing zero bits (highlighted in blue in the original post) from 0 to 1, making this value:
    EXT-exponent: 100000000111110b
    EXT-fraction: 1111111111111111111111111111111111111111111111111111100000000000b
    --> Decimal: 18446744073709550592
    The above EXT value is still within U64 range, but the To Unsigned Quad Integer "bullet" converts it to U64_max, which is 18446744073709551615. Unless I've missed something, this must be a genuine conversion error from EXT to U64?
    /Steen
    CLA, CTA, CLED & LabVIEW Champion

  • Writing binary to file with 24 or 32-bit numbers

I am using an NI 4472 DAQ to sample some analog data at 24 bits and I want to write the data to disk. However, LabVIEW only has a VI to write 16-bit data to disk. Is there a way to write 24- or 32-bit binary numbers to a file?

The VI you are looking at is probably one of the "Easy VIs" that is set up for a specific application. You can create more general programs to write a binary file with any data type you desire. I would recommend taking a look at the Write Binary File example that ships with LabVIEW. It shows a more general approach to writing data to a binary file. In this example they write double precision numbers, but you could easily replace the data with I32s.
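The general approach, independent of LabVIEW, is just to pack each sample into a fixed-width integer and write the raw bytes. A Python sketch (the sample values and file name are made up; a 24-bit reading fits losslessly in an I32):

```python
import struct

# Hypothetical 24-bit ADC readings, sign-extended into ordinary ints.
samples = [-8388608, -1, 0, 8388607]

# Pack each reading as a little-endian I32 and write the raw bytes.
with open("samples.bin", "wb") as f:
    f.write(struct.pack("<%di" % len(samples), *samples))

# Read the file back to verify the round trip.
with open("samples.bin", "rb") as f:
    raw = f.read()
restored = list(struct.unpack("<%di" % (len(raw) // 4), raw))
print(restored == samples)   # True
```

The only decisions are width (I32 here) and byte order; whoever reads the file back must agree on both.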

  • Do numerical indicators display extended precision floats correctly?

    I'm using windows XP sp2 on a new computer with a new intel processor, nothing weird. I'm displaying an extended precision floating point number using a numeric indicator that is set to display an extended data type with thirty digits of precision. I expect to see at least 19 or 20 significant digits out of my extended precision float, but the numeric indicator only ever displays 17 significant digits before going to a trail of zeros. Does the display routine that converts the float to a display string use double precision or what?
    global variables make robots angry

    Yes, I understand what you are saying and you are completely correct. The problem I have is not that I expect a mathematically perfect representation of a number, but rather that LabVIEW calculates and produces an 80-bit extended precision number on my computer and then appears to convert it to a 64-bit representation of that number before displaying it!
    If you convert the extended precision value into an unflattened string in order to attempt to access the binary representation of the data, you’ll find that it is represented by 80-bits. This is a 64-bit fraction plus a 15-bit exponent plus one bit for the sign. Delightfully, the flatten to string function appears to scramble the bits into “noncontiguous” pieces, so about all I can tell for certain is that we have, as expected, an 80-bit extended precision number in memory. The documentation for the other number-to-Boolean array and bit manipulation functions I looked at (even the exponent-mantissa function) all claim to only be able to handle a maximum input of a 64-bit number (double precision float max) -correct me if I’m wrong on this one, because I’d really like to be able to see the contiguous binary representation of 80-bit extended floats.
    It turns out though that what you said about not being able to tell whether we have twenty digits of precision without bit fiddling is not true at all. If you look at the program I wrote, you can prove with simple addition and subtraction that beyond the shadow of a doubt the extended numbers are being stored and calculated with twenty digits of precision on my computer yet being displayed with less precision.
    As you can plainly see in the previous example I sent:
    A =          0.1111111111
    B =         0.00000000001111111111
    A+B=C= 0.11111111111111111111
    We know that
    C-A=B
    The actual answer we get is
    C-A=0.00000000001111111110887672
    Instead of the unattainable ideal of
    C-A=0.00000000001111111111
    The first nineteen digits of the calculated answer are exactly correct. The remainder of the actual answer is equal to 88.7672% of the remainder of the perfect answer, so we effectively have 19.887672 digits of accuracy.
    That all sounds well and good until you realize that no individual number displayed on the front panel seems to be displayed with more than 16-17 significant digits of accuracy.
    As you see below, the number displayed for the value of A+B was definitely not as close to being the right answer as the number LabVIEW stores internally in memory.
    A+B=0.11111111111111111111 (the mathematically ideal result)
    A+B=0.111111111111111105     (what LabVIEW displays as its result)
    We know darned well that if the final answer of A+B-A was accurate to twenty digits, then the intermediate step of A-B did not have a huge error in the seventeenth or eighteenth digit! The value being displayed by LabVIEW is not close to being the value in the LabVIEW variable because if it were then the result of the subtract operation would be drastically different!
    0.11111111111111110500       (this is what LabVIEW shows as A+B)  
    0.11111111110000000000       (this is what we entered and what LabVIEW shows for A)
    0.00000000001111110500    (this is the best we can expect for A+B-A)
    0.00000000001111111110887672 this is what LabVIEW manages to calculate.
    The final number LabVIEW calculates magically has extra accuracy conjured back into it somehow! It’s more than 1000 times more accurate than a perfect calculation using the corrupted value of A+B that the display shows us – the three extra digits give us three orders of magnitude better resolution than should be possible unless LabVIEW is displaying a less accurate version of A+B than is actually being used!
    This would be like making a huge mistake at the beginning of a math problem, and then making a huge mistake at the end and having them cancel each other out. Except imagine getting that lucky on every answer on every question. No matter what numbers I plug into my LabVIEW program, the intermediate step of A+B has only about 16-17 digits of accuracy, but miraculously the final step of A+B-A will have 19-20 digits of accuracy. The final box at the bottom of the program shows why.
    If you convert the numbers to double and use doubles to calculate the final answer, you only get 16-17 digits of accuracy. That’s no surprise because 16 digits of accuracy is about as good as you’re gonna do with a 64-bit floating point representation. So it’s no wonder all the extended numbers I display appear to only have the same accuracy as a 64-bit representation because the display routine is using double precision numbers, not extended precision.
    This is not cool at all. The indicator is labeled as being able to accept an extended precision number and it allows the user to crank out a ridiculous number of significant digits. There is no little red dot on the input wire telling me, ‘hey, I’m converting to a less accurate representation here, ok!’ Instead, the icon shows me ‘EXT’ for ‘Hey, I’m set to extended precision!’
    The irony is that the documentation for the addition function indicates that it converts input to double. It obviously can handle extended.
    I’ve included a modified version of the vi for you to tinker with. Enter some different numbers on the front panel and see what I mean.
    Regardless of all this jazz, if someone knows the real scoop on the original question, please end our suffering: Can LabVIEW display extended floating point numbers properly, or is it converting to double precision somewhere before numerals get written to the front panel indicator?
    Message Edited by Root Canal on 06-09-2008 07:16 PM
    global variables make robots angry
    Attachments:
    numerical display maxes out at double precision 21.vi ‏17 KB
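The 17-digit ceiling Root Canal suspects in the display path is exactly the behaviour of 64-bit doubles themselves, which can be seen in any language. A Python sketch (Python floats are IEEE doubles; whether LabVIEW's indicator routine converts EXT to double is the open question here, not something this demonstrates):

```python
from decimal import Decimal

a = 0.1111111111
b = 0.00000000001111111111
c = a + b                    # computed in 64-bit double precision

# repr() prints the shortest decimal string that round-trips the double:
# at most 17 significant digits, which is all a double can distinguish.
print(repr(c))
print(float(repr(c)) == c)   # True

# The exact binary value behind c has many more digits, but they carry
# no extra information about the intended sum:
print(Decimal(c))
```

If an indicator never shows more than 17 meaningful digits regardless of the configured precision, that is the signature of a double somewhere in the chain, which is the crux of the complaint above.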

  • Number Precisions

    Hi,
I tried to parse a String and store the double/float value in the database. But for very large numbers (e.g. 22,4, i.e. 16 integer digits and 4 decimal digits), the precision is lost. Can somebody suggest a way I can retain the precision?
    Thanks in advance,
    Srinath K P

    See the classes in java.math - BigInteger and BigDecimal. They allow operation on arbitrary precision numbers.
Example: what's 12 to the power of 74?
    import java.math.*;
    public class BigIntegers {
      public static void main(String[] args) {
        BigInteger a = new BigInteger("12");
        BigInteger b = a.pow(74);
        System.out.println(b);
      }
    }
    Result:
    72345614109462974751442001673415239440886790091234732113689383539734402998206464
    See the documentation for further details:
    http://java.sun.com/j2se/1.3/docs/api/java/math/package-summary.html

  • Need a single-precision/2-byte conversion tool

Does anyone have a utility to convert 4-byte single precision numbers to and from a 2-byte representation?
    I only need 3 digits, with one fractional digit (-14.3, for example).
    Thanks...

    At one time I thought I wanted to do this, but never got around to it.  I did find this information useful, however.
    http://www.fox-toolkit.org/ftp/fasthalffloatconversion.pdf
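The 2-byte representation the linked paper describes is IEEE 754 binary16 (half precision), which comfortably covers 3 digits with one fractional digit. Python's struct module supports it directly via the "e" format (Python 3.6+), shown here as a sketch of the round trip:

```python
import struct

def to_half(x):
    # Pack a float into IEEE 754 binary16: 2 bytes.
    return struct.pack("<e", x)

def from_half(b):
    return struct.unpack("<e", b)[0]

h = to_half(-14.3)
print(len(h))        # 2
print(from_half(h))  # about -14.297: the nearest half-precision value
```

Half precision carries an 11-bit significand (~3 decimal digits), so for values like -14.3 the round-trip error stays a few thousandths, well inside the stated requirement.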

  • Massive battery life drop after Internet Recovery OS re-install

    Hey,
    A couple months ago, I made the brilliant decision to try to install Windows 7 with Boot Camp at 3am on a weekend after several days of no sleep (unfortunately, this is one of the least-poor poor decisions I've made recently, but I suppose that's for another forum). Long story short, Og turn on MacBook, Og open Disk Utility, Og click-click, SSD go bye-bye—formatted, every last bit of it (haha, me make pun-pun!). Luckily, ML ships virtually tard-proofed, so there's little I can do besides showering with Bernie (this pun courtesy of the 'lil sis; I'm innocent) to make him breathe his last. This little bit of Apple genius is irrefutable proof of God—praised be his noodly appendage. But ever since Internet Recovery blessed me with Bernie II (R'amen), I've noticed during countless nights lying with him in bed that his stamina has decreased significantly, versus his pre-IR days. I can't give you firm numbers of his endurance before the operation, but now, after even the least vigourous play, he's drained beyond further exertion after no more than 2-2.5 hours. I've tried various tweaks to increase battery life—making sure his mail parts are only shooting off when I work them manually, the same for receiving messages as well, and I've been monitoring his GPU usage; thanks to ML he's much more conscientious about being discrete, and I rarely have to limit him to using his integrated components. (Note: I'm using writing this as an excuse to start an essay, and its foundation really needs to get laid ASAP, so unfortunately the allusions/euphemisms stop here.) Again, I can't give precise numbers about its original battery life, but I was blown away by it, likely averaging at or in excess of the advertised 7-hours. I'm a geek and spent my childhood with eyes never further than a couple of inches from my CRT (and have the vision to prove it), but I really have no f'ing clue what would be causing this. 
I never doubted that the battery life decreased (and dramatically so), but now that I've started booting up into Windows regularly, any trace of doubt has vanished. Using Windows 8 set on Maximum Performance (and, as I'm sure most of you are aware, Windows only uses the discrete GPU, even when doing nothing graphically-intensive), the battery life is still significantly better than using ML. I've searched and searched and searched and can't find a single reference to this issue online. I'd appreciate any help you can offer. Just in case it's of any assistance, I'll put some might-be-relevant-maybe info below, including things I've thought of that could be the source/contributing to the problem (again, I really have no idea what I'm talking about when it comes to true geek know-how):
    - It's a rMBP 15" w/16GB RAM
    - When I formatted the HD, I wiped out everything. Yes, obviously there was enough on there for the laptop to launch into Internet Recovery, but I'm thinking that was all that was left. When I've been trying to think of why there's nobody online with this issue, something that's popped into my head is that anyone who knows what Disk Utility does is probably savvy enough (when not sleep-deprived) to not use it to format their HD, or at least to heed to the plenty of "Don't click yes, tardums" warnings that I'm sure popped up as I zoomed by them. And I can't think of many other ways for grandma-level Mac users to wipe their HD, so the lack of online info would make sense.
    - This was a couple months back, and I can't remember precisely the OS version when all this went down. I'm certain it was ML since I think my rMBP shipped with 10.8.0 (if not 10.8.1, if any did), but it could have been anything between 10.8.0 and 10.8.2. I'm pretty sure it wasn't 10.8.0 since I remember updating to 10.8.1 very soon after purchasing the laptop over the summer, but again, not certain.
    Alright, that's it. I'm sure this post doesn't meet forum standards for plethora of reasons, but I'm a poor little private liberal arts university student who is desolate now that his $3,000 laptop isn't getting top-notch battery life, so mods, please dig deep in your hearts and don't delete. Thanks, yous da best.
    -Zack

There were a number of posts from folks with various models about a downgrade in battery life after an upgrade to Mountain Lion - so you aren't alone. However, since you're still under warranty, I would take the machine to your local Apple Store or an AASP and have them diagnose the problem. Could be that your battery just isn't up to par (and I'm assuming that you've got all updates installed?)...
    Good luck,
    Clinton

  • APEX 4.1.1 Memory Leak in IE7

    Hi,
We're busy upgrading our APEX and DB from 3.0/10g to 4.1.1/11.2g and have noticed what appears to be a memory leak when using APEX. At one stage we had IE7 using over a gig of memory.
    When you load or refresh your page IE7 seems to grab on average 2-5MB of memory for each page load. At first we thought it may have been our apps or setup but this also happens when we go to app 4550 page 1 on apex.oracle.com.
    How to replicate:
    Open task manager to view the Memory Usage.
    Using IE7
    1. Go to http://apex.oracle.com/pls/apex/f?p=4550:1
2. Go back to Task Manager and note the readings once the CPU usage for iexplore.exe has stabilised to 0.
    3. Go back to IE7 and press F5.
    4. Repeat steps 2-3 and you will see the memory usage increase.
We think this may be due to a few jQuery UI memory leaks within IE7 and thought this bug ticket may be of interest: http://bugs.jqueryui.com/ticket/7666 (slightly different versions but similar experiences).
Could someone else confirm that they also experience the increasing memory usage, or have had similar problems and managed to resolve it?
TBH, it wouldn't be an issue to use another browser like Firefox to access the builder, but this also affects the applications if they include the standard APEX JavaScript and CSS.
    Thanking you in advance.
    Alistair
    Edited by: Alistair Laing on Jun 16, 2012 2:32 PM
    Added Tags

    Alistair Laing wrote:
[original post quoted above - snipped]
    Anecdotally, yes. I don't have exact steps for replication or precise numbers, but I have noticed this in passing. On the junk that my client considers a PC suitable for web development, the typical IE7 memory footprint with the APEX 3.0 builder and several other tabs running is about 52MB. Add APEX 4.1.1 and it climbs constantly until I have to pull the plug when it gets north of 150MB, as the PC can't take it.
Just as well that I also have Firefox, and that 4.1.1 is still experimental at that site...
    At the moment I don't have to resolve it and if I did the only option I'd propose is the replacement of IE7.
    VC wrote:
    Look at this http://www.bbc.co.uk/news/technology-18440979
Alistair Laing wrote:
    lol @ VC - I dont shop online at work :-D
    I saw that earlier this week. I do agree with the concept though.
    So take appropriate action: charge extra for IE7 support.
    "The amount of work and effort involved in making our website look normal on IE7 equalled the combined time of designing for Chrome, Safari and Firefox." Is entirely accurate. If it's stated as a requirement, itemise it as an extra on the quote.
    Educate management and bean counters: show them the one line of standards-compliant CSS that's all that is necessary in Safari, Chrome, Firefox and Opera (and just possibly in IE8/9/10), how it isn't supported in IE7, and the tortuous hacks and workarounds that are required to get something equivalent working there.

  • Looking for a particular number, possible?

    Hi there,
I need to get a particular number, similar to my business name.
    Example: my business name is abcdef.
    Now, I am looking for a USA number that at the end will have (*** 222 333) -> (*** abc def).
    Can anyone point me to where I can get help choosing this kind of particular number? Or is it not possible at all?
    Thank you.

    Hi, Smahi, and welcome to the Community,
You would need to search through the lists of numbers presented when you arrive at the step to choose your Skype Number in order to determine if the precise numbering sequence is available. Skype Customer Service is not able to start a subscription, nor can they search the Skype Number databases to check for available numbers.
    Good luck!
    Regards,
    Elaine

  • LV 8.2 Error in opening vi saved with LV 8.5 in previous version

    Hey,
    I'm currently using LabVIEW 8.5 at my laptop to program, but at my lab we have LV 8.2. I'm having the following problem:
I save a VI with LV 8.5 in a previous version (8.2), and it gives me a warning which says:
    "Fixed-point numbers are not supported in the previous version of LabVIEW. They have been converted to double precision numbers."
    When I try to open it with LV 8.2, I get the following error messages:
    Load Error: "Improper Format".
    Load Error: "Unknown Error".
    LabVIEW Generic Error: An error occurred loading "file.vi". LabVIEW load error code 24: This VI cannot be loaded because it is broken and it has no block diagram.
    Thanks in advance,
    Moritz 
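    For context on the warning: a fixed-point number is stored as a scaled integer, so converting it to a double is a simple rescaling. The sketch below illustrates the idea in Python; the 16-bit word with 12 fractional bits is an illustrative assumption, not necessarily what this VI uses:

```python
def fixed_to_double(raw, frac_bits):
    """Interpret the integer `raw` as a fixed-point value with
    `frac_bits` fractional bits, returning its double equivalent."""
    return raw * 2.0 ** -frac_bits

# With 12 fractional bits, the raw value 6144 represents 1.5,
# since 6144 / 2**12 == 1.5.
print(fixed_to_double(6144, 12))   # 1.5
print(fixed_to_double(-4096, 12))  # -1.0
```

    This conversion is lossless in value, but the resulting wires and controls are doubles, which is why the 8.2 VI no longer matches what 8.5 saved.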

    Hello.
    Could you send us your VI (v8.5) please? Thanks.
    Regards.
    Romain D.
    National Instruments France

  • Update Operation Fatal Error when installing Technical Preview Build 10041 Upgrade

    I posted this in the Microsoft Community forums and was redirected here.
    Below is the original post (URL: http://answers.microsoft.com/en-us/insider/forum/insider_wintp-insider_install/update-operation-fatal-error-when-installing/6eaff1b9-0223-4357-afc8-884382590e82 )
    Hello,
    In trying to perform an upgrade install of the Windows 10 Technical Preview Build 10041 (which is the latest available for direct download as of this writing), I am running into a problem I cannot find mentioned anywhere else on the web.
    I fall into two categories with regard to this build: A) a product tester who actually needs to test application performance under Windows 10 on bare metal, and B) a semi-crazy techie who loves using (usable) beta software as his primary environment ;)
    So it is rather important to me that I get this working on my machine in some non-virtualized respect. I am reluctant to do a full/fresh install if installation problems are happening, because my Windows 8 product key has been f***ed up from the
    start (likely some random database corruption) and I've had to utilize phone support to get around an "unknown error" the last two times I've had to use it. So, for fear of that key completely crapping out on me, I don't want to move to Win10 unless
    I'm confident it will install and I can stick with it (for better or worse) through to the end. Problems within the Preview after I install it I can deal with.
    So, I first tried to install the Preview through the standard Windows Update method. The installer took about 7 hours (but from reading lots of internet discussions, stupidly long install times in that range seem to be a common problem with
    this build, separate from it actually failing). During the reboot between the "Setting up Devices" (or is it "Setting up Drivers"? I forget) and "Taking Care of a Few More Things" (again, possibly paraphrased) stages, for about a split second, underneath
    the Windows logo and throbber, an error like this appeared:
    Fatal Error Performing Update Operation # of ##: ERROR CODE
    It only appeared for a fraction of a second, and I had no chance of reading the precise numbers or error.
    However, the installer then seemed to continue, and went until the main circle was at 100%. As soon as 100% hit, however, the screen went black for something like 30min. Then, I briefly saw "Attempting to recover installation" before "Restoring
    your previous version of Windows." And I was, quite impressively considering how far along that installer was, back in Windows 8.1 completely unharmed.
    I tried again by burning an ISO and doing a disc upgrade install. I let that one run overnight and was asleep for the error, but I was back in Win8.1 in the morning, so I can only assume a similar thing happened there.
    As for my system specs, I'm running on a MacBook Pro 9,1 under Boot Camp. I am upgrading from Windows 8.1.2 Pro with Media Center. I have found other online accounts of people quite successfully installing Windows 10 on Macs, so that isn't the issue.
    Does anyone have any clue as to what this error might have been/be, and how I might fix it? Or at least have it on good authority that a fresh installation would be unaffected (meaning it's software-related)? If not, I can try installing to a VHD,
    which would at least let me product-test on bare metal, but which wouldn't have the hard drive space to be my daily driver and would probably only get used occasionally.
    Thanks in advance to anyone who can help!
    So far, I have the yet-to-be-tried idea of a clean boot prior to installation.
    If anyone here has any more specific ideas, lemme hear 'em.
    Thanks!

    To the individual who proposed this as an answer to my problem: It's not even applicable. I specifically stated that I was trying to avoid doing a clean installation (at least without knowing more about the problem at hand). An answer saying to do the thing
    you're trying to avoid doing is not an answer. You can see my last reply for the current status of this issue. 6-8 hour blocks of time in which I can't use my computer (as is required to install build 10041) aren't super common for me, but I haven't abandoned
    this thread. There have simply been no more updates. If your motivation as a mod was that you simply don't like there being unanswered threads on this forum, then perhaps you could attempt to contribute rather than arbitrarily marking the first reply as an
    answer.
    I will continue to update this thread as I try new things and get more information.
    Thank you.
