Speech Recognition stops paying attention.

hey all--
I was having trouble with my speech recognition (as well as a few other things), so out of frustration I finally erased and reinstalled my system. There are a lot of commands among the voice commands that I don't want or use, so I took some of them out or changed their wording, and I'm also adding some of my own.
My main trouble is that after a little while, or 4-5 spoken commands, it stops listening to me. The green indicators show that it is hearing me just fine, but suddenly I can say "computer" all I want and it won't do a darn thing. If I open Speech preferences and turn Speakable Items off and back on, that fixes it for another 4-5 commands. I am not in a noisy environment. It is very irksome!!
~G

OK, I have one more question now; it seems to be the same as this question, which was never answered years ago:
Speakable Commands>Application Switching -- how to delete items?
When I click on the triangle on the speech command widget and open Speech Commands, there is a TON of stuff there that I'll never ever use, but these things do not appear in the Library/Speech/Speakable Items folder, where I might be able to delete them. For example, one item I'd like to turn off is just the word "Home", and the computer keeps thinking that I said "Home" instead of whatever else I meant to say. Another keeps opening iChat or Mail when I say "open new window" or something like that. I tried clicking, Control-clicking, and Option-clicking; nothing will highlight a command in the Speech Commands window so that I might delete it.
The "Home" command only shows up when I'm using Word, in the "front window" list. However, the Application Speakable Items/Microsoft Word folder is empty. Is there some other way to get into the speakable commands list and weed out the unwanted stuff?

Similar Messages

  • Speech Recognition stops working after a minute of nonuse

    I'm a grad student trying to keep from exacerbating my carpal tunnel, and I just figured out how to use the voice commands on my MacBook. So far they work great: it hears my voice, it does what I tell it to do, and I'm not constantly hitting apple+tab and apple+c and apple+v and so on, which really helps my wrists!
    BUT. If I go for more than a minute or two without using a voice command, it stops working. I can't pinpoint anything else that triggers it other than my not using it for a little bit while I'm reading a paper or something. The microphone is calibrated and working, and the little black arrows and the blue and green bars flash on the widget like it hears me, but then it won't give me the sound that says the command was recognized, and it doesn't perform the command any time I try to use it again after a few minutes of non-use. It's like the recognition part goes to sleep and won't wake up.
    If I go to System Preferences and toggle Speakable Items off and then on again, it starts working again until another minute or two of non-use. If I use spoken commands constantly it will stay on for a good long while, but if I don't, that's the end of speech recognition. I really can't stop to tell it some unneeded command just to keep it awake all the time! Eventually, if I turn Speakable Items on and off too many times from System Preferences, the computer gets confused about whether it's turned on or off, or the widget disappears while the System Preferences panel thinks Speakable Items is on.
    I read somewhere that other people with a newer OS have a similar problem if they don't use the default speaking voice, so I switched to Alex and restarted, and the same thing is still happening.
    I'm running 10.5.8, and using Safari, Word 2011, Preview, nothing too crazy. My speech recognition is in "Listen only while key is pressed" mode, and, like I said, it works great until I stop using it for a minute or so. I'm using the internal microphone on my MacBook (circa 2008). Any tips on how I can actually keep my Speech Recognition useful? Thanks in advance!


  • How do you get the speech recognition to stop on my macbook pro?

    How do I get the speech recognition to stop on my macbook pro?

    From "More Like This" section.
    https://discussions.apple.com/message/21322465#21322465
    Best.

  • Speech Recognition Dictation Stopped Working In Most Applications

    In the past I have dictated into the text boxes of many different programs on several computers, but now dictation appears to work only in some programs, such as Notepad and Internet Explorer, on one of them. Yet for me it is more important that it work in Windows Live Mail and Miranda IM. EDIT: Attempts to dictate in these programs now produce only "What was that?". Exiting and restarting WSR and WSR Macros does not help.
    Simulating keystrokes with the press command and WSR Macros continues to work as expected.

    Hi,
    According to your description, this problem is probably caused by the microphone; it would be worth reinstalling its driver to test whether that fixes the problem.
    Please also check the microphone's properties and restore its settings as a test.
    In addition, Speech Recognition has multiple settings to improve its performance; please check whether any of these settings help with your problem.
    Roger Lu
    TechNet Community Support

  • Has anyone figured out how to get speech recognition working with sticky keys enabled on mountain lion?

    I'm trying to use speech recognition to input text on my iMac running the latest mountain lion, 10.8.3.
    I have sticky keys enabled.
    When I try to start speaking by pressing the function key twice nothing happens. I can only get it to work if I disable sticky keys.
    The same problem occurs with all the other modifier keys as shortcut, they do not work with sticky keys.
    When I try to select a different shortcut, I am unable to select a two key combination, but am limited to one.
    If I select the F6 key, or any other single key, I am able to start speech recognition. However the second time that I press the key, it does not stop recognition and process my words. Instead, it restarts the recognition.
    Has anyone figured out how to get speech recognition working with sticky keys enabled?
    Or a way to get an individual key shortcut to start on the first press and process it on the second?
    Or a way to get key combinations to work, as specified by the help:
    Dictation On and Off
    To use Dictation, click On.
    When you’re ready to dictate text, place the insertion point where you want the dictated text to appear and press the Fn (function) key twice. When you see the lighted microphone icon and hear a beep, the microphone is ready for you to speak your text.
    Shortcut
    By default, you press the Fn (Function) key twice to start dictation. If you like, you can choose a different shortcut from the menu.
    To create a shortcut that’s not in the list, choose Customize, and then press the keys you want to use. You can press two or more keys to create your shortcut.

    I noticed with version 10.8.4 of OS X that I am now able to select F6 to activate, and the Return key to complete, the speech recognition. This is still different from the description of how these should function that's included in the help, but at least it's an improvement.

  • Speech recognition problem with null pointer exception

    I'm trying to run the simple Hello program that came with IBM Speech for Java.
    I've put the speech.jar file in the right folder, and it compiles fine, but when I run it I get this error:
    locale is en_US
    java.lang.NullPointerException
    at Hello.main(Hello.java:171)
    but I don't know enough about Java Speech to figure out what is null and what shouldn't be null...
    thx
    nate
    P.S.
    Here is the program code:
    import java.io.*;
    import java.util.Locale;
    import java.util.ResourceBundle;
    import java.util.StringTokenizer;
    import javax.speech.*;
    import javax.speech.recognition.*;
    import javax.speech.synthesis.*;

    public class Hello {
        static RuleGrammar ruleGrammar;
        static DictationGrammar dictationGrammar;
        static Recognizer recognizer;
        static Synthesizer synthesizer;
        static ResourceBundle resources;

        // Handles results from the command (rule) grammar.
        static ResultListener ruleListener = new ResultAdapter() {
            int i = 0;
            String eh[] = null;

            public void resultAccepted(ResultEvent e) {
                try {
                    FinalRuleResult result = (FinalRuleResult) e.getSource();
                    String tags[] = result.getTags();
                    if (tags[0].equals("name")) {
                        String s = resources.getString("hello");
                        for (int j = 1; j < tags.length; j++)
                            s += " " + tags[j]; // was "tags": the array must be indexed
                        speak(s);
                    } else if (tags[0].equals("begin")) {
                        // Switch from command mode to dictation mode.
                        speak(resources.getString("listening"));
                        ruleGrammar.setEnabled(false);
                        ruleGrammar.setEnabled("<stop>", true);
                        dictationGrammar.setEnabled(true);
                        recognizer.commitChanges();
                    } else if (tags[0].equals("stop")) {
                        dictationGrammar.setEnabled(false);
                        ruleGrammar.setEnabled(true);
                        recognizer.commitChanges();
                    } else if (tags[0].equals("bye")) {
                        speak(resources.getString("bye"));
                        if (synthesizer != null)
                            synthesizer.waitEngineState(Synthesizer.QUEUE_EMPTY);
                        Thread.sleep(1000);
                        System.exit(0);
                    }
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }

            public void resultRejected(ResultEvent e) {
                if (eh == null) {
                    String s = resources.getString("eh");
                    StringTokenizer t = new StringTokenizer(s);
                    int n = t.countTokens();
                    eh = new String[n];
                    for (int j = 0; j < n; j++)
                        eh[j] = t.nextToken(); // was "eh = t.nextToken()": must index
                }
                if (((Result) (e.getSource())).numTokens() > 2)
                    speak(eh[(i++) % eh.length]);
            }
        };

        // Echoes dictation tokens as they are recognized.
        static ResultListener dictationListener = new ResultAdapter() {
            int n = 0; // number of tokens seen so far

            public void resultUpdated(ResultEvent e) {
                Result result = (Result) e.getSource();
                for (int i = n; i < result.numTokens(); i++)
                    System.out.println(result.getBestToken(i).getSpokenText());
                n = result.numTokens();
            }

            public void resultAccepted(ResultEvent e) {
                Result result = (Result) e.getSource();
                String s = "";
                for (int i = 0; i < n; i++)
                    s += result.getBestToken(i).getSpokenText() + " ";
                speak(s);
                n = 0;
            }
        };

        static RecognizerAudioListener audioListener = new RecognizerAudioAdapter() {
            public void audioLevel(RecognizerAudioEvent e) {
                System.out.println("volume " + e.getAudioLevel());
            }
        };

        static EngineListener engineListener = new EngineAdapter() {
            public void engineError(EngineErrorEvent e) {
                System.out.println("Engine error: " + e.getEngineError().getMessage());
            }
        };

        static void speak(String s) {
            if (synthesizer != null) {
                try {
                    synthesizer.speak(s, null);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            } else {
                System.out.println(s);
            }
        }

        public static void main(String args[]) {
            try {
                if (args.length > 0) Locale.setDefault(new Locale(args[0], ""));
                if (args.length > 1) Locale.setDefault(new Locale(args[0], args[1]));
                System.out.println("locale is " + Locale.getDefault());
                resources = ResourceBundle.getBundle("res");

                // createRecognizer returns null if no matching engine is registered.
                recognizer = Central.createRecognizer(null);
                recognizer.allocate();
                recognizer.getAudioManager().addAudioListener(audioListener);
                recognizer.addEngineListener(engineListener);
                dictationGrammar = recognizer.getDictationGrammar(null);
                dictationGrammar.addResultListener(dictationListener);

                String grammarName = resources.getString("grammar");
                Reader reader = new FileReader(grammarName);
                ruleGrammar = recognizer.loadJSGF(reader);
                ruleGrammar.addResultListener(ruleListener);
                ruleGrammar.setEnabled(true);
                recognizer.commitChanges();
                recognizer.requestFocus();
                recognizer.resume();

                synthesizer = Central.createSynthesizer(null);
                if (synthesizer != null) {
                    synthesizer.allocate();
                    synthesizer.addEngineListener(engineListener);
                }
                speak(resources.getString("greeting"));
            } catch (Exception e) {
                e.printStackTrace();
                System.exit(-1);
            }
        }
    }

    Yes, the problem seems to be with creating the engine: the NullPointerException means Central.createRecognizer(null) returned null (no matching engine was found), so the first call on recognizer throws. But I don't know exactly why no recognizer is found... I have already installed IBM ViaVoice and Speech for Java, and have set the supposedly correct paths...
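One way to make this failure mode obvious is to fail fast with a descriptive message whenever an engine factory hands back null, instead of letting the NPE surface later. This is only a sketch, not part of the original demo; the class and method names (EngineCheck, requireEngine) are made up for illustration:

```java
// Hypothetical helper: fail fast with a clear message when an engine
// factory (e.g. Central.createRecognizer) returns null, instead of an NPE.
public class EngineCheck {
    public static <T> T requireEngine(T engine, String what) {
        if (engine == null)
            throw new IllegalStateException(
                what + " is null -- no matching engine is registered. "
                + "Check that speech.properties is on the classpath or in "
                + "user.home / java.home, and that the engine paths are correct.");
        return engine;
    }

    public static void main(String[] args) {
        // With a real engine the reference passes through unchanged:
        System.out.println(EngineCheck.requireEngine("dummy", "recognizer"));
        // A null engine now produces a descriptive error instead of an NPE:
        try {
            EngineCheck.requireEngine(null, "recognizer");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In the demo one would write `recognizer = requireEngine(Central.createRecognizer(null), "recognizer");` so the misconfiguration is reported at its source.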

  • Speech recognition and manual user input

    I  have a main GUI VI that contains two listboxes.  Both listboxes are selectable by the user.  Here is how I want them to interact.
    When the VI is initialized, the first listbox will be filled with string phrases.  If the user double clicks a cell in the first listbox, the second listbox will be populated by sorting the string phrases from the first listbox to only include phrases starting with same first letter.  Next, I want the user to be able to double click one of the phrases in the second listbox or 'say' the phrase into a microphone for speech recognition.  The phrase that is double clicked in the second listbox or identified through speech recognized will then be displayed on the main GUI VI.
    I've got my main GUI VI working such that the first listbox will populate the second listbox after the user double clicks a cell in the first listbox.  I've also got it working such that I'm able to double click the second listbox and display the phrase.  I'm using an event structure to do this thanks to some previous help from Dennis shown here.  I've also got a decent VI working based on this example that will recognize a phrase spoken based on an array of possible input phrases (grammar builder).  So I've got the pieces working independently of each other...
    My question is how to have both the speech recognition listening and the second listbox waiting for a double click at the same time? I have an event structure that waits for a double click to determine which phrase from the second listbox goes into the displayed phrase (I get that part), but how do I use an event structure for speech recognition too? Do I need a button to click to start the speech recognition VI? I don't think I want the speech recognition VI running in the background, just listening in some endless loop, do I?
    Thanks,
    Mike

    Hi,
    It is a bit unclear what you are actually after. Yes, you could run the speech recognition in a separate loop as a daemon/callback or something, but what is to stop you coding both bits in the same event structure? You could run them in serial or in parallel just by the way you wire them using dataflow. Block-diagram space definitely shouldn't be a constraint, as you can always create subVIs. You can handle one of the events, then create a user-defined event to deal with the next process immediately after the first.
    So, to reiterate: I am unsure exactly what you are after, but I gave you my best guess.
    Craig
    Message Edited by craigc on 26-01-2010 07:56 AM
    LabVIEW 2012
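Craig's point about handling both input paths in one place can be sketched outside LabVIEW as a producer-consumer queue: the double-click handler and the speech-recognition callback both push onto one queue, and a single consumer loop sees phrases from either source in arrival order. This is an illustration in Java, not LabVIEW code; the class and method names here are invented for the sketch:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two producers (UI double-click, recognizer callback), one consumer.
public class PhraseEvents {
    private final BlockingQueue<String> phrases = new LinkedBlockingQueue<>();

    // Called from the UI thread on a listbox double-click.
    public void onDoubleClick(String phrase) { phrases.add(phrase); }

    // Called from the recognizer's callback thread on a match.
    public void onSpeechRecognized(String phrase) { phrases.add(phrase); }

    // Single consumer: blocks until either source delivers a phrase.
    public String nextPhrase() throws InterruptedException {
        return phrases.take();
    }

    public static void main(String[] args) throws InterruptedException {
        PhraseEvents ev = new PhraseEvents();
        ev.onDoubleClick("alpha");       // user clicked a cell
        ev.onSpeechRecognized("bravo");  // recognizer fired
        System.out.println(ev.nextPhrase()); // alpha
        System.out.println(ev.nextPhrase()); // bravo
    }
}
```

The LabVIEW analogue would be a user event or queue fed by both the event structure and the speech VI, consumed in one loop, so neither source needs to poll the other.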

  • Speech Recognition just won't load in WIN 8.1?

    I'm new to WIN 8.1 and can't get Speech Recognition to load/run. It will not run from Control Panel, from Charms (searching for Speech Recognition and clicking Windows Speech Recognition), from the icon in the taskbar, or from the START screen. BUT it will run/load after a reboot (though it often won't recognise speech correctly; "What was that?" appears), as it then starts automatically at bootup, even though it would not run/load in the previous session before the reboot!
    I'm running an HP Spectre i5, 4 GB RAM, 128 GB storage. This failure is very frustrating; there are lots of other reports of it occurring, but I can't find any solutions specific to WIN 8.1. PLEASE HELP - I use dictation a lot, as my typing is abysmal!!

    Hi.. yep, have seen the video, thank you.. the problem is NOT knowing how to use it.. (I have been in computers and using speech recognition, mainly Nuance DNS Professional, for ages..)
    The problem I have is in LOADING the application… sometimes it will load on the click of the mouse (or touch of the screen on this HP SPECTRE), be it from the taskbar icon that I set up, from the Windows Speech Recognition command via the CHARMS, or even from the START tiles.. other times it will not!! Most frustrating (I will have to buy a new Nuance DNS Professional, as Windows 8.1 is not a platform for DNS Version 10, which I have used for ages!! Or maybe go back to WIN 7??)
    A couple of things I have noticed that may help others: the programme may not load if one clicks the icon too quickly.. I find that hovering the mouse over the icon and waiting until "Windows Speech Recognition" appears, THEN clicking the icon, works at times.. after the small Speech Recognition mode (grey) icon is closed by pressing the red "close" button, it is hard to get the programme to load again.. then one must go to Task Manager and stop the Speech Recognition programme by clicking the End Task button..
    So.. it's the initial loading and running of the programme that concerns me.. perhaps the bods at MS could look at how memory is allocated to this programme, and at why, after closing the Speech Recognition programme from the "grey" speech recognition mode button, it is subsequently "hard" to reload it again and the programme seems to still run in Task Manager after the close button has been pressed…?? (But I won't hold my breath)…
    Cheers
    Cheers

  • Speech recognition server using all kinds of memory!

    I'm running a 27" iMac with 8 GB of memory and have been doing some video work lately. Watching things with Activity Monitor, I noticed that the speech recognition server process was using 800 MB+ of memory. Normally I keep speech recognition running, but with voice activation suppressed via the Escape key.
    I have already found that using speech commands prevents the computer display from going to sleep, like this discussion - https://discussions.apple.com/message/11395271?searchText=speech%20recognition%20server#11395271
    I reported this as a bug to Apple, and they admitted there is a problem, and that maybe they will fix it, but no guarantee.
    I created a workaround for the display sleep that seems to alleviate both situations: I "wrote" an AppleScript to restart Speakable Items. It's pretty simple, but you have to wait for the processes to go away, around 30 seconds or so.
    Has anyone else seen or reported the excessive memory use with speech recognition?

    That's wishful thinking if you think an update will be made overnight by Apple because of a complaint by an individual. 
    10.7.3 reportedly only updates:
    • Add Catalan, Croatian, Greek, Hebrew, Romanian, Slovak, Thai, and Ukrainian language support
    • Address issues when using smart cards to log into OS X
    • Resolve issues authenticating with directory services
    • Address compatibility issues with Windows file sharing
    10.7.2 reportedly only updates:
    iCloud stores your email, calendars, contacts, Safari bookmarks, and Safari Reading List and automatically pushes them to all your devices. Back to My Mac provides remote access to your Mac from another Mac anywhere on the Internet. Find My Mac helps find a missing Mac by locating it on a map and allows you to remotely lock the Mac or wipe all its data. Getting started with iCloud is easy. After installing the update, OS X will automatically present an iCloud setup panel. Simply enter an existing Apple ID or create a new one and then follow the on screen instructions. To learn more about iCloud visit http://www.apple.com/icloud.
    The 10.7.2 update also includes Safari 5.1.1 as well as fixes that:
    • Allow reordering of desktop spaces and full screen apps in Mission Control
    • Enable dragging files between desktop spaces and full screen apps
    • Address an issue that causes the menu bar to not appear in full screen apps
    • Improve the compatibility of Google contact syncing in Address Book
    • Address an issue that causes Keynote to become temporarily unresponsive
    • Improve VoiceOver compatibility with Launchpad
    • Address an issue that causes a delay in accessing the network after waking from sleep
    • Enable booting in to Lion Recovery from a locally attached Time Machine backup drive
    • Resolve an issue that causes screen zoom to stop working
    • Improve Active Directory integration
    10.7.1:
    • Address an issue that may cause the system to become unresponsive when playing a video in Safari
    • Resolve an issue that may cause system audio to stop working when using HDMI or optical audio out
    • Improve the reliability of Wi-Fi connections
    • Resolve an issue that prevents transfer of your data, settings, and compatible applications to a new Mac running OS X Lion
    • Resolve an issue in which an admin user account could be missing after upgrading to OS X Lion
    This is the information from the various update pages. While Apple has been known to include unreported fixes in their version updates, the closest to what's desired here was the VoiceOver improvement in 10.7.2. That, though, concerns reading text aloud when you put the cursor over it.
    If they are running 10.7.3 and have not seen improvement, then my advice from my other post stands.
    I would also suggest indicating in one's profile that one is running 10.7.3, so it's clear no applicable updates are missing.

  • Speech recognition working only sometimes

    I've got an iPhone 4S. I use the speech recognition regularly in order to dictate a text message (in German, mostly).
    This has worked pretty well.
    Usually, I dictate a sentence, then wait for the iPhone to transcribe it, then dictate the next sentence (i.e., click the microphone button, dictate, click "done").
    But some time ago (I can't remember when) it started to work only ever so often.
    The first sentence usually works fine. The next ones don't work any more: it doesn't come back with a transcription :-(
    When I dictate another sentence a few hours later, same procedure.
    Sometimes I get a few sentences in a row, but eventually it stops working.
    Does anyone have any idea what might be causing this?
    What could I do to improve the speech recognition performance?
    Best,
    Gabriel.

    I couldn't have asked that any better - I have been wondering the same thing since Thursday. I have looked around and can't figure it out either. Anyone with advice - I'd love some direction.

  • Speech recognition works momentarily :(

    MacBook Pro 2011
    I can use speech recognition about 5 times (5 commands), and then it doesn't recognize any commands and I can't calibrate it (it doesn't recognize anything there either). There is no arrow pointing to the mic in the speech recognition "bubble". Please help.

    I have the same issue, except it seems to work just once. After that it is hearing my voice, as the "lines are moving upwards", but there are "no arrows pointing to the mic". After logging out and in, it works only once.
    The issue is that it stops listening for commands.

  • Palaver speech recognition app packaged for Arch

    Palaver (formerly Ubuntu-Speech-Recognition), has been packaged for Arch:
    https://aur.archlinux.org/packages/palaver-git/
    The git repo is located here:
    https://github.com/JamezQ/Palaver
    A great video demo of what is possible can be found here:
    http://www.techdrivein.com/2013/02/ubun … -demo.html
    This is shaping up to be an interesting project, and as long as it keeps on a good development track, could become the Siri of Linux (don't laugh, it could!)
    The current code is beta, and will be going through restructuring changes as it moves from git to launchpad, so expect a lot of changes in the near future.
    The beta can actually do quite a lot at the moment, especially if you add your own dictionary (which is very easy to do BTW).
    Oh, and just a warning regarding privacy: the application uses Google's speech recognition and requires a network connection to work. Your voice command is recorded locally and deciphered on Google's servers.
    Cheers.

    Xyne wrote:
    Padfoot wrote: Oh, and just a warning regarding privacy: the application uses Google's speech recognition and requires a network connection to work. Your voice command is recorded locally and deciphered on Google's servers.
    Oh well. Aside from the privacy concerns* I am also disappointed that it is just a wrapper around a web service. A local speech recognition engine would be even more impressive.
    Thanks for the privacy warning.
    * Seriously, sending speech samples to Google so that they can store and analyse them is crazy to me. Do you really want to live in a future of interactive advertisements that can identify you by voice alone and associate it with everything else that you have ever done online? A conversation with a friend at a bus stop may one day trigger targeted ads that reveal things about you that you consider private. I do not understand how so many people can be completely ok with having their private lives catalogued for companies and governments just to get some non-essential services in return. Beyond that having such tools lying around when your government eventually becomes oppressive will ensure its longevity at everyone's expense. Open, democratic societies have an unpatched memory leak that requires a hard reboot every so often.
    I completely understand, and while a local engine would be terrific, the only one I can think of on Linux with any potential is Sphinx. Unfortunately (last time I checked), it's not in an easily usable state yet. I should check the pace of development on that project. Of course, I would be delighted to be proven wrong on this.
    And while this is in no way intended to dispel any privacy concerns, or to justify any possible motives of the companies providing online deciphering, I am guessing this is exactly what Siri does on Apple products. Also, as Palaver is currently targeting Ubuntu (while still being agnostic enough to easily work on any distro), it needs to have a small footprint, considering the push at Ubuntu towards mobile devices. Unfortunately, mobile devices do not lend themselves to local storage of the many samples in multiple languages required to perform the deciphering.
    Cheers.
    [EDIT] While Palaver is a wrapper around an online service, the wrapper is limited to sending the voice sample and getting a string of text back, the application performs the task of deciphering the meaning of the text and taking the appropriate action based on local dictionaries and plugins.[/EDIT]
    Last edited by Padfoot (2013-03-23 22:21:13)

  • How to Switch Off Speech Recognition

    I activated speech recognition just to see what it can do and got the round microphone icon in my screen. As I press Esc I can give commands.
    I want to get rid of this round thing in my screen though and switch off Speech Recognition completely because I never use it.
    I cannot find where to switch it off. Does anyone know how?

    I have Snow Leopard 10.6.8.  This has happened to me many times.  I don't know why the circle surfaces, but the common response is false - in my preferences, Speakable items is set to OFF.  Turning any of the Speech settings on or off does not make the circle window disappear.  The only way I can get rid of it is to reinstall the operating system from scratch (not an update), which is an incredible pain.  I've done this twice to get rid of the circle, and it's back again!  This is clearly a Snow Leopard bug, since all the responses say when you go into preferences you should see that Speakable items is set to ON, and it's NOT. 
    Seriously, if ANYONE knows how it gets into this state, and how to get rid of it when it's not responding to preferences, please share. Force Quit doesn't see it. Maybe there's a low-level Console way to kill the app - I'd do anything to get rid of that circle...
    GUYS - SNOW LEOPARD USERS!  I SOLVED IT!  I SOLVED IT!  I still don't know what triggers it, but here's how you get rid of it without a reinstall:
    1) Launch Activity Monitor in flat mode
    2) Look for a running task called something like SpeechRecognitionServer and select it.
    3) Hit the Quit Process button on the upper left!
    Voila, that annoying circle is gone!!! 
    YIPPIE!
    - Jeff
    P. S. I just noticed that this bug still wasn't solved in Lion, so hopefully my solution will help Lion users as well.  This bug definitely didn't exist in Leopard.
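Jeff's three Activity Monitor steps amount to terminating the SpeechRecognitionServer process, which can also be done from code or a Terminal (`killall SpeechRecognitionServer`). A minimal sketch, staying in this thread's Java, that assumes a Unix-like system with `killall` on the PATH and takes the process name from Jeff's post (it may differ between OS X versions):

```java
import java.io.IOException;

// Sketch: quit the speech recognition server process without Activity Monitor.
public class QuitProcess {
    // Runs an external command and returns its exit status.
    static int run(String... cmd) throws IOException, InterruptedException {
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        int status = run("killall", "SpeechRecognitionServer");
        System.out.println(status == 0
            ? "process terminated"
            : "no such process (exit " + status + ")");
    }
}
```

`killall` exits 0 when it signalled a matching process and non-zero otherwise, so the status tells you whether the circle's process was actually running.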

  • Speech recognition for a programmer with RSI (not how to do in Java)

    Guys, someone that I know has developed RSI and has almost had to drop out of programming.
    Has any other programmer here been faced with this and been able to work around it? In particular, have you found a decent speech recognition system that allowed you to continue to work?
    Note that I am not interested in learning how to implement speech recognition in a Java program, which is what [most of the posts here|http://forum.java.sun.com/thread.jspa?threadID=5310499&tstart=0] are concerned with.
    I am aware of Dragon Naturally Speaking, but there are some terrible reviews of it which have made me nervous. But if any of you give a glowing review, I will reconsider.
    Since this is a java forum, and one of the best academic research projects in this field is [Sphinx-4|http://cmusphinx.sourceforge.net/sphinx4/] which is based on Java, does anyone know of an end user application comparable to Dragon Naturally Speaking which uses Sphinx-4?

    pm_kirkham wrote:
    No, we just get an intern and pair-program.
    That's the first good reason to pair-program that I have heard! (I like most of the rest of XP, but PP normally drives me insane...)

  • I wanted to buy a Dragon speech recognition programme for Mac, but it says it's for Lion, and I have Snow Leopard. Would it have worked on Snow Leopard?

    I wanted to buy a Dragon speech recognition programme for Mac, but it says it's for OS Lion, and I have OS Snow Leopard. Would it have worked on Snow Leopard?

    The simplest method would be to ask them for their Snow Leopard version and say you can't upgrade to 10.7 or 10.8. I'm almost sure they will sell you a copy.
    Generally, if your machine can only upgrade to 10.7 Lion and not directly to 10.8 Mountain Lion, then in my opinion it's best left on 10.6.8, as you're going to lose too much other software and slow down your machine in the process.
    This rapid OS X upgrade cycle has caused plenty of problems for users and developers alike, so I suspect you will find a sympathizer in the developer.
    In fact, I'm not recommending Macs to anyone anymore, because Apple has simply lost touch with reality.
