10.3.9 Speech Recognition problem

Hi all. This is my first post.
My question: I was trying to use Speech Recognition to visit saved webpages. The problem is that after I confirm making a page speakable, the page isn't being saved; I can't find it under Speakable Items, or under Safari in the speakable applications. Also, could anyone point me to a good link on making my own speech commands using AppleScript?
Thanks

Restart your computer with the X key held down, or go into the Startup Disk pane of System Preferences and select your hard disk as soon as you can. If the issue persists, your N key might be stuck, or a setting in the computer's NVRAM might be causing it to act like the N key was being pressed at startup. In the second case, hold down the Option, Command, P, and R keys at startup and release the keys when you hear the second startup chime to clear the setting.

Similar Messages

  • Speech recognition problem with null pointer exception

    I'm trying to run the simple Hello program that came with IBM Speech for Java.
    I've put the speech.jar file in the right folder and it compiles fine, but when I run it I get this error:
    locale is en_US
    java.lang.NullPointerException
    at Hello.main(Hello.java:171)
    but I don't know enough about Java Speech to figure out what is null and what shouldn't be null...
    thx
    nate
    P.S.
    here is the program code
    import java.io.*;
    import java.util.Locale;
    import java.util.ResourceBundle;
    import java.util.StringTokenizer;
    import javax.speech.*;
    import javax.speech.recognition.*;
    import javax.speech.synthesis.*;

    public class Hello {
        static RuleGrammar ruleGrammar;
        static DictationGrammar dictationGrammar;
        static Recognizer recognizer;
        static Synthesizer synthesizer;
        static ResourceBundle resources;

        static ResultListener ruleListener = new ResultAdapter() {
            public void resultAccepted(ResultEvent e) {
                try {
                    FinalRuleResult result = (FinalRuleResult) e.getSource();
                    String tags[] = result.getTags();
                    if (tags[0].equals("name")) {
                        String s = resources.getString("hello");
                        for (int i = 1; i < tags.length; i++)
                            s += " " + tags[i];
                        speak(s);
                    } else if (tags[0].equals("begin")) {
                        speak(resources.getString("listening"));
                        ruleGrammar.setEnabled(false);
                        ruleGrammar.setEnabled("<stop>", true);
                        dictationGrammar.setEnabled(true);
                        recognizer.commitChanges();
                    } else if (tags[0].equals("stop")) {
                        dictationGrammar.setEnabled(false);
                        ruleGrammar.setEnabled(true);
                        recognizer.commitChanges();
                    } else if (tags[0].equals("bye")) {
                        speak(resources.getString("bye"));
                        if (synthesizer != null)
                            synthesizer.waitEngineState(Synthesizer.QUEUE_EMPTY);
                        Thread.sleep(1000);
                        System.exit(0);
                    }
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }

            int i = 0;
            String eh[] = null;

            public void resultRejected(ResultEvent e) {
                if (eh == null) {
                    String s = resources.getString("eh");
                    StringTokenizer t = new StringTokenizer(s);
                    int n = t.countTokens();
                    eh = new String[n];
                    for (int i = 0; i < n; i++)
                        eh[i] = t.nextToken();
                }
                if (((Result) (e.getSource())).numTokens() > 2)
                    speak(eh[(i++) % eh.length]);
            }
        };

        static ResultListener dictationListener = new ResultAdapter() {
            int n = 0; // number of tokens seen so far

            public void resultUpdated(ResultEvent e) {
                Result result = (Result) e.getSource();
                for (int i = n; i < result.numTokens(); i++)
                    System.out.println(result.getBestToken(i).getSpokenText());
                n = result.numTokens();
            }

            public void resultAccepted(ResultEvent e) {
                Result result = (Result) e.getSource();
                String s = "";
                for (int i = 0; i < n; i++)
                    s += result.getBestToken(i).getSpokenText() + " ";
                speak(s);
                n = 0;
            }
        };

        static RecognizerAudioListener audioListener = new RecognizerAudioAdapter() {
            public void audioLevel(RecognizerAudioEvent e) {
                System.out.println("volume " + e.getAudioLevel());
            }
        };

        static EngineListener engineListener = new EngineAdapter() {
            public void engineError(EngineErrorEvent e) {
                System.out.println("Engine error: " + e.getEngineError().getMessage());
            }
        };

        static void speak(String s) {
            if (synthesizer != null) {
                try {
                    synthesizer.speak(s, null);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            } else {
                System.out.println(s);
            }
        }

        public static void main(String args[]) {
            try {
                if (args.length > 0) Locale.setDefault(new Locale(args[0], ""));
                if (args.length > 1) Locale.setDefault(new Locale(args[0], args[1]));
                System.out.println("locale is " + Locale.getDefault());
                resources = ResourceBundle.getBundle("res");

                recognizer = Central.createRecognizer(null);
                recognizer.allocate();
                recognizer.getAudioManager().addAudioListener(audioListener);
                recognizer.addEngineListener(engineListener);
                dictationGrammar = recognizer.getDictationGrammar(null);
                dictationGrammar.addResultListener(dictationListener);

                String grammarName = resources.getString("grammar");
                Reader reader = new FileReader(grammarName);
                ruleGrammar = recognizer.loadJSGF(reader);
                ruleGrammar.addResultListener(ruleListener);
                ruleGrammar.setEnabled(true);
                recognizer.commitChanges();
                recognizer.requestFocus();
                recognizer.resume();

                synthesizer = Central.createSynthesizer(null);
                if (synthesizer != null) {
                    synthesizer.allocate();
                    synthesizer.addEngineListener(engineListener);
                    speak(resources.getString("greeting"));
                }
            } catch (Exception e) {
                e.printStackTrace();
                System.exit(-1);
            }
        }
    }

    Yes, the problem seems to be with allocating the engine: the NullPointerException at recognizer.allocate() suggests Central.createRecognizer(null) returned null, but I don't know exactly why... I have already installed IBM ViaVoice and Speech for Java, and have set the supposedly correct paths...
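    A common reason Central.createRecognizer(null) ends up null is that the JSAPI classes are visible at compile time but not at run time (speech.jar on the compile path, missing from the run path). Before debugging the engine itself, a quick stdlib-only probe can rule that out. This is just a diagnostic sketch; the class names it checks are the usual JSAPI entry points, not anything specific to this poster's setup:

```java
public class ClasspathProbe {
    // Returns true if the named class can be loaded by the current classloader.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // javax.speech.Central is the JSAPI entry point shipped in speech.jar.
        String[] needed = {
            "javax.speech.Central",
            "javax.speech.recognition.Recognizer"
        };
        for (String c : needed)
            System.out.println(c + (isOnClasspath(c)
                ? "  [found]" : "  [MISSING from runtime classpath]"));
    }
}
```

    If both classes load but createRecognizer still returns null, the problem is more likely in the engine registration (the speech.properties lookup) than in the classpath.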

  • Speech Recognition problem (gramcomp)

    I'm trying to add speech recognition to my program. I've tried to compile my grammar using the following command: gc /o grammar.cfg /h grammar.h grammar.xml. I get a "compilation successful" message, but the header file is empty (the .cfg file is OK). Is there something I have to do to get the header file created?

    Hello AlanWK1947,
    >> I've tried to compile my grammar using the following 
    gc /o grammar.cfg /h grammar.h grammar.xml.
    It seems that you are using a command-line tool to compile your grammar, but it is not clear which one. Could you please give more details?
    >> I'm trying to add speech recognition to my program
    If your program is based on .NET, you could check this article:
    https://msdn.microsoft.com/en-us/magazine/dn857362.aspx
    It shows how to add speech recognition to a .NET project.
    Regards.

  • Windows speech recognition problem

    I am trying out windows speech recognition but it seems to slow down Firefox? Scrolling becomes very slow and jumpy and selecting things is slow also. This doesn't happen on Internet Explorer or anywhere else for that matter.
    I've disabled my addons to no effect.

    Hi techpadawan1,
    Windows Speech Recognition files are saved in C:\Users\<username>\AppData\Local\Microsoft\Speech\Files, and also under C:\Users\<username>\AppData\Roaming\Microsoft\Speech.
    The training files are located in C:\Users\<username>\AppData\Local\Microsoft\Speech\Files, so you can see that this is not a roaming folder.
    I also found a tool, the "Speech Recognition Profile Manager Tool". You can use it to import/export a profile to a specific folder, and it only takes a few seconds.
    Warm reminder: roaming large files will cause slow user logon, and sometimes it can take minutes to reach the user interface.
    Regards

  • JSAPI : Problems with the speech recognition HelloWorld example

    I just started off with JSAPI. And I immediately ran into problems. These are the configurations and the stuff I did:
    My system:
    OS: WinXP
    JDK: JDK 6 Update 10
    IDE: Netbeans 6.0
    I downloaded FreeTTS.zip from the FreeTTS site [http://freetts.sourceforge.net/docs/index.php]. I expanded it, and in the lib folder I ran jsapi.exe; as a result, jsapi.jar was generated.
    I included this jar in my project lib.
    Then I copy-pasted the HelloWorld speech recognition example code given on the site: [http://java.sun.com/products/java-media/speech/forDevelopers/jsapi-guide/Recognition.html#11644]
    First it gave a compile-time error that rec.deallocate(); throws an exception which needs to be handled. Accordingly, I surrounded it with a try/catch block and caught the exceptions in the manner given below:
       try {
           rec.deallocate();
           System.exit(0);
       } catch (EngineException ex) {
           Logger.getLogger(HelloWorld.class.getName()).log(Level.SEVERE, null, ex);
       } catch (EngineStateError ex) {
           Logger.getLogger(HelloWorld.class.getName()).log(Level.SEVERE, null, ex);
       }
    Now the code was compiling without problems.
    When I executed the code, a runtime exception (NullPointerException) was thrown at the line rec.allocate(); in public static void main(String args[]).
    Can anybody please explain to me where I am going wrong? Or is there something more that needs to be added in the code?
    Anxiously waiting to hear from your end.
    Thanks and regards,

    Hi All,
    I am using following code
    import javax.speech.*;
    import javax.speech.synthesis.*;
    import java.util.Locale;

    public class HelloWorld {
        public static void main(String args[]) {
            try {
                // Create a synthesizer for English
                Synthesizer synth = Central.createSynthesizer(
                    new SynthesizerModeDesc(Locale.ENGLISH));
                //Synthesizer synth = Central.createSynthesizer(null);
                // Get it ready to speak
                synth.allocate();
                synth.waitEngineState(Synthesizer.ALLOCATED);
                synth.resume();
                // Speak the "Hello world" string
                synth.speakPlainText("Hello, world!", null);
                // Wait till speaking is done
                synth.waitEngineState(Synthesizer.QUEUE_EMPTY);
                // Clean up
                synth.deallocate();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    and I am also getting the following error:
    java.lang.NullPointerException
            at HelloWorld.main(HelloWorld.java:18)
    My classpath is set to:
    jsapi.jar (extracted from FreeTTS 1.2.1)
    freetts.jar
    I didn't find any solution in past replies, so please help me solve this...
    Thanking You....
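    With a FreeTTS setup like the one above, the NullPointerException usually means Central.createSynthesizer(...) returned null rather than that allocate() itself failed: the JSAPI Central class only finds engines listed in a speech.properties file in the user's home directory or in JAVA_HOME/lib, and FreeTTS ships a sample speech.properties that has to be copied there by hand. A small stdlib-only check of the two standard lookup locations (nothing here is specific to this poster's machine) might look like this:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SpeechPropsCheck {
    // JSAPI's Central class looks for engine registrations in these two files.
    static List<File> candidateFiles() {
        List<File> files = new ArrayList<File>();
        files.add(new File(System.getProperty("user.home"), "speech.properties"));
        files.add(new File(new File(System.getProperty("java.home"), "lib"),
                           "speech.properties"));
        return files;
    }

    public static void main(String[] args) {
        boolean found = false;
        for (File f : candidateFiles()) {
            System.out.println(f + (f.exists() ? "  [present]" : "  [missing]"));
            found |= f.exists();
        }
        if (!found)
            System.out.println("No speech.properties found: "
                + "Central.createSynthesizer(...) will likely return null.");
    }
}
```

    If both files are missing, copying the speech.properties that comes with the FreeTTS distribution into the home directory, then adding a null check on the synthesizer before calling allocate(), should turn the NullPointerException into a clear diagnostic.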

  • Speech recognition window problem

    My speech recognition window (the little circle thing) disappears once in a while for extended periods of time. I have done all the things suggested in the help including:
    1. going to system preferences and turning speech off and on
    2. restarting and turning off and on / turning off, restarting, and then on
    3. deleting the com.apple.speech.recognition.feedback.prefs.plist, emptying trash, and turning off/on, turning off doing the same thing and on, off, deleting, restarting, on, etc.
    4. copying all related speech recognition prefs from another admin's library into mine.
    I once thought it might be some overheating issue, as it once came back after I had shut down and left the machine off for a while; however, the most recent time it came back without shutting down.
    Sometimes it's gone for hours at a time, or 30 minutes at the shortest. It usually disappears when I turn it off to let my wife log in to her setup, or when I turn it off because I know I won't be using it for a while or am going to play a memory-intensive game. My wife's login seems to have no problem, but she doesn't use it anywhere near as much as I do.
    i have set up a few of my own commands but nothing critical. things like CNN, Visualizer, etc.
    any ideas?
    for the record, it's back now......

    hi,
    I removed all preferences stored in <your home>/Library/Preferences/com.apple.speech.*
    and switched speech recognition on/off/on, and the window magically appeared (use rm carefully).
    Make sure that you also kill the speech processes (kill -9) before removing these files.
    Quite happy to find this; it has been a real p..n in the *. Apple guys, please clean this up asap.
    btw: I have an Intel Mac mini, single CPU, v10.4.6.
    mac mini   Mac OS X (10.4.6)  

  • Speech Recognition stops working after a minute of nonuse

    I'm a grad student trying to keep from exacerbating my carpal tunnel and just figured out how to use the voice commands on my macbook. So far, they work great, they hear my voice, it does what I tell it to do, and I'm not constantly hitting apple+tab and apple+c and apple+v and so on, which really helps my wrists!
    BUT. If I go for more than a minute or two without using a voice command, it stops working. I can't pinpoint anything else that triggers it other than if I stop using it for a little bit while I'm reading a paper or something. The microphone is calibrated and working, the little black arrows and the blue and green bars flash on the widget like it hears me, but then it won't give me the sound that says the command was recognized and it doesn't perform the command any time I try to use it again after a few minutes of not being used. It's like the recognition part goes to sleep and won't wake up.
    If I go to System Preferences and toggle Speakable Items off and then on again, it starts working again until another minute or two of non-use. If I use spoken commands constantly, it will stay on for a good long while, but if I'm not using it constantly, that's the end of speech recognition. I really can't stop to tell it some unneeded command just to keep it awake! Eventually, if I turn Speakable Items off and on too many times from System Preferences, the computer gets confused about whether it's on or off, or the widget disappears while the System Preferences panel thinks Speakable Items is on.
    I read somewhere that other people with a newer OS have a similar problem if they don't use the default speaking voice, so I switched to Alex and restarted, and the same thing is still happening.
    I'm running 10.5.8, and using safari, word 2011, preview, nothing too crazy. My speech recognition is in "Listen only while key is pressed" mode, and like I said, works great until I stop using it for a minute or so. I'm using the internal microphone on my MacBook (dob circa 2008). Any tips on how I can actually keep my Speech Recognition useful? Thanks in advance!

    OK, I have one more question now, seems to be the same as this question which was never answered years ago:
    Speakable Commands>Application Switching -- how to delete items?
    When I click on the triangle from the speech command widget and open speech commands, there is a TON of stuff there that I'll never ever use, but these things do not appear in the Library/Speech/Speakable Items folder so that I might be able to delete them. For example, one item I'd like to turn off is just the word "Home" and the computer keeps thinking that I said "Home" instead of whatever else I meant to say. Another keeps opening iChat or Mail, when I say "open new window" or something like that. I tried clicking, control clicking, option clicking, nothing will highlight a command in the Speech Commands window so that I might delete it.
    The "Home" command only shows up when I'm using Word, in the "front window" list. However, the Application Speakable Items/Microsoft Word folder is empty. Is there some other way to get into the speakable commands list and weed out unwanted stuff?

  • Has anyone figured out how to get speech recognition working with sticky keys enabled on mountain lion?

    I'm trying to use speech recognition to input text on my iMac running the latest mountain lion, 10.8.3.
    I have sticky keys enabled.
    When I try to start speaking by pressing the function key twice nothing happens. I can only get it to work if I disable sticky keys.
    The same problem occurs with all the other modifier keys as shortcut, they do not work with sticky keys.
    When I try to select a different shortcut, I am unable to select a two key combination, but am limited to one.
    If I select the F6 key, or any other single key, I am able to start speech recognition. However the second time that I press the key, it does not stop recognition and process my words. Instead, it restarts the recognition.
    Has anyone figured out how to get speech recognition working with sticky keys enabled?
    Or a way to get an individual key shortcut to start on the first press and process it on the second?
    Or a way to get key combinations to work, as specified by the help:
    Dictation On and Off
    To use Dictation, click On.
    When you’re ready to dictate text, place the insertion point where you want the dictated text to appear and press the Fn (function) key twice. When you see the lighted microphone icon and hear a beep, the microphone is ready for you to speak your text.
    Shortcut
    By default, you press the Fn (Function) key twice to start dictation. If you like, you can choose a different shortcut from the menu.
    To create a shortcut that’s not in the list, choose Customize, and then press the keys you want to use. You can press two or more keys to create your shortcut.

    I noticed with version 10.8.4 of OS X that I am now able to select F6 to activate, and the return key to complete the speech recognition. This is still different than the description of how these should function that's included in the help, but at least it's an improvement.

  • I wanted to buy the Dragon speech recognition program for Mac, but it says it's for Lion, and I have Snow Leopard. Would it work on Snow Leopard?

    I wanted to buy the Dragon speech recognition program for Mac, but it says it's for OS X Lion, and I have OS X Snow Leopard. Would it have worked on Snow Leopard?

    The simplest method would be to ask them for their Snow Leopard version and say you can't upgrade to 10.7 or 10.8. I'm almost sure they will sell you a copy.
    Generally, if your machine can only upgrade to 10.7 Lion and not directly to 10.8 Mountain Lion, then in my opinion it's best left on 10.6.8, as you're going to lose too much other software and slow down your machine in the process.
    This rapid OS X upgrade cycle has caused plenty of problems for users and developers alike, so I suspect you will find a sympathizer in the developer.
    In fact, I'm not recommending Macs to anyone anymore, because Apple has simply lost touch with reality.

  • Speech Recognition not working when tested in IE on Windows 8.1

    Hi,
    I have adapted the example code from the Speech Recognition documentation to fire up speech recognition in Visual Studio Express 2013 for Web on Windows 8.
    The speech synthesizer works fine when I fire up the app in IE and produce text-to-speech, but when I try the speech recognition code, all I get is "Waiting for Localhost" messages and no result.
    Is there a problem with speech recognition with Web development using browsers? (I tried and failed in Opera as well)
    I am keen to use speech recognition for a webapp, ultimately serving mobile devices with the power of server-based processing.
    Anyone help please?

    Hi simulsys,
    I am moving your question to the moderator forum ("Where is the forum for..?"). The owner of the forum will direct you to the right forum.
    If it is related to the Windows Speech Recognition, this forum would be better:
    http://archive.msdn.microsoft.com/wsrmacros/Thread/List.aspx
    Or it is the Speech Server issue:
    http://social.msdn.microsoft.com/Forums/en-US/3ae80ba0-9a35-4457-8d08-a069e1a54506/windows-speech-recognition?forum=whatforum
    Best Regards,

  • Best speech recognition software for Mac?

    What is the best speech recognition software available for a Mac? I want to be able to control my computer by voice while I lecture. I am aware of Dictate but hoping there is something better on the market.
    Thanks,
    todd

    The problem is a matter of market. Good speech recognition software is expensive to develop, and it's a small market. Even smaller for Macs. MacSpeech has been providing speech recognition products for several years, now, and Dictate is their newest product having been licensed from Dragon. This is the best speech recognition software available on either platform.

  • Speech recognition not working properly!

    I just started trying out the speech recognition tool in Snow Leopard and I seem to be having problems with it. When I calibrate it (using the internal mic) it works fine and understands those phrases perfectly, but when I actually say the exact same thing outside of the calibration process, the computer doesn't seem to understand. I know it's picking up my voice because the little widget animates as it does when it hears sound, and the "Speakable Items" option is set to on. It actually worked once yesterday when I tried it for the first time: I told it to switch to an app and it did, but that was the end of it.
    What should I do? I haven't updated the software to 10.6.1; would that perhaps solve the issue? I don't know if it worked before I upgraded to Snow Leopard, because this is the first time I've tried it.

    Had the same problem.  Could only resolve by going back to Snow Leopard.

  • Speech Recognition Not Working. Why?

    I have a 6-month old Macbook (OSX 10.5.5), and I want to try out the built in speech recognition software. For some reason, it won't accept any input, no matter the configuration I put it in. When I click calibrate, it's slow to open, and no matter what I do with the slider, it won't recognize my commands. I've tried it in a new account, but without success. It appears to be system-wide. Also, there are no items in the speakable items folder, and a feedback box does not appear on the desktop. Any ideas?

    I'm assuming you've used the Sound pane to check that the built-in mic is working, and that you've remembered to click the "Speakable Items" radio button.
    There are a couple of fixes that normally work in cases where switching to another user account helps, but it is possible for two user accounts to share the same problem!
    1)
    Have you got any items in Address Book that are marked as "Company" but have no company name in them? This is an old problem that was supposed to have been fixed with Leopard, but I've heard people complaining about it occasionally since. If there are, try going through them and putting in the company names, then log out and back in again.
    2)
    Try deleting the preference files
    com.apple.speech.recognition.recognizer.prefs.plist
    com.apple.speech.recognition.AppleSpeechRecognition.prefs.plist
    com.apple.speech.recognition.feedback.prefs.plist
    all in ~/Library/Preferences/
    Failing those (and the missing Speakable Items suggests that's not the fault), try downloading and reapplying the latest Combo updater.
    If that doesn't do the trick, you may have to reinstall the OS.
    Sorry!
    Archie

  • Speech Recognition Not Working Well

    No matter how hard I try to get speech recognition to work on my MacBook Pro, it is very temperamental! Even when I am in a quiet space it will respond at first, but after a while it won't. I've tried calibrating it, etc., but that doesn't make a huge difference.
    What makes me think there is a problem is that when speech recognition is activated, lights are supposed to light up on the speech recognition widget (as I have noticed in various other MacBook Pro users' YouTube videos). On mine, only the bottom, blue light lights up, and no matter how loud I shout at it, the blue light doesn't go on to the green light. However, when I calibrate, my speech is recognised as being in the 'green area', so I am speaking at the right level.
    I appreciate your help!


  • Does anyone know of a good speech recognition program for Mountain Lion?

    I purchased a speech recognition program in 2008, MacSpeech Dictate. I only used it for a short period of time, but problems with my hands have started up again. MacSpeech Dictate does not work on OS X 10.8. I wondered if there are any speech recognition programs available for this operating system. I found "Dragon Express" in the App Store, but it was rated low and apparently crashes on 10.8. I cannot believe that Apple would forget about those with disabilities. I have activated the speech commands window and have experimented with it, but it seems limited for browsing the web. Is there a way to create my own commands using the Speech Command application in Mountain Lion? Any ideas, anyone? Have I missed something with Speech Commands on 10.8?

    The built-in dictation is pretty good. The only real downside is that the conversion is done server-side, so you have to dictate, then wait a bit, then dictate, then wait a bit.
    The gold standard is Dragon Dictate. I think it's a few hundred bucks, but it's fantastic software. I believe the Nuance engine underneath it is also used by the Mountain Lion dictation feature, though.
