JSAPI JS_ValueToString??

Hi, I tried to change the dllcomputesum example for CS3 because I need to send a string from JSFL to the DLL, but I'm stuck at this JS_ValueToString:
unsigned short *myString;
myString = JS_ValueToBytes(cx, argv[0], &lengthReal); // lengthReal is the (for now) hardcoded length of the string
but this produces the C++ warning:
warning C4133: ':': Incompatible types - from 'char *' to 'unsigned short *'
Can someone help me, or does anyone have an example for me? I can't find any samples, only one from Dreamweaver where JS_ValueToString takes different arguments.
Or could someone explain to me why I get a pointer to the string back, but need to use a pointer to an unsigned short for myString?
Here is what JS_ValueToString should return:
Returns
A pointer that points to a null-terminated string if successful, or to a null value on failure. The calling routine must not free this string when it finishes.
I'm confused. Can someone help me?
Greetz

Has no one ever used JS_ValueToString? Or even C-level extensibility?
Hm.
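A guess at the type question (an assumption; this goes from the warning text, not from the Flash C-level docs): the API hands strings back as 16-bit UTF-16 code units, i.e. unsigned short values, while a char * points at 8-bit bytes, so the compiler refuses to mix the two pointer types. Java's char is the same 16-bit code unit, so a small sketch can show what those units look like:

```java
public class Utf16Units {
    public static void main(String[] args) {
        // "h(pi)i" -- the Greek pi is code unit 960, too big for an 8-bit char
        String s = "h\u03c0i";
        for (int i = 0; i < s.length(); i++) {
            // Java's char is a 16-bit UTF-16 code unit (0..65535),
            // the same width as the unsigned short the JSAPI deals in.
            System.out.println((int) s.charAt(i));
        }
    }
}
```

The 960 in the middle cannot fit in an 8-bit unit at all, which is why a wide-string API returns unsigned short * rather than char *; assigning a char * result to an unsigned short * variable (or vice versa) triggers exactly a C4133-style incompatible-types warning.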

Similar Messages

  • JSAPI - other languages than English; MIDlet - settings loading; JSAPI - al

    Hello!
    I need to create application which uses speech recognition. At first I thought about using CMU Sphinx (PocketSphinx or, possibly, Sphinx4). Later I thought about JSAPI. (Sphinx somehow uses JSAPI but I don't know what is the difference between JSAPI and Sphinx).
    I have read almost all (up to 6.7.9) of the following tutorial: http://java.sun.com/products/java-media/speech/forDevelopers/jsapi-guide.pdf. Unfortunately I couldn't've found one important thing, i.e. how to create acoustic model for other language than English?
    Thanks in advance for your answers :-)!
    PS There are some other things which I'd like to know:
    1) How to load some settings from file (I guess nowadays configuration files are created with the use of XML but I dunno)?
    2) How to maintain algorithm which is used by MIDlet which involves JSAPI? I mean there are some different things which my MIDlet needs to do. I guess it is good habit to divide different goals into separate parts of code (due to object-oriented programming).
         In my case there are some different things:
         a) speech recognition of audio input, i.e. changing input audio stream into output text string
         b) analysis of that text string and according to this string choosing the proper transition in my algorithm
         In general I have written my algorithm on sheet of paper and it takes about ten A4 sheets of paper. Because of it I thought there should be some way to write this algoritm maybe outside the code, in some kind of file which would contain this algorithm. Maybe there is other good way to implement this algorithm, not necesarilly in the code.
         c) sending of results through httpconnection with the use of POST method
         d) receiving in on TomCat on server
    3) Which method should I use to receive the recognized speech? I found these:
         a) FinalRuleResult, b) Result -> getBestToken, c) getSpokenText, d) ResultToken of RuleGrammar
    4) Can you give me any full examples of JSAPI usage? (Not just short parts of code like in this JSAPI guide)?
    Greetins :-)!


  • Steps to compile and deploy files on jsap

    Steps to compile and deploy files on jsas:
    the classpath that has to be set
    where to put the .class files
    (where to find the JNDI tab)

    jsas - Sun Java Application Server
    I'd like to deploy servlet files on this server, but it requires Cloudscape (a database server) to be started, and I don't have a single entry by the name of Cloudscape.

  • Classpath Problem (Using JSAPI)

    Hi,
    I'm using IBM ViaVoice as the implementation of the JSAPI on my system. I've downloaded the Speech for Java pack from IBM alphaWorks and put it in the directory
    c:\ibmjs
    This gives you access to the speech packages for use in your programs, so I should be able to import javax.speech in my programs. However, the system doesn't see these classes.
    Here's my autoexec...
    path=%path%;c:\jdk1.3.1_01\bin;c:\ibmjs\lib;
    CLASSPATH=%CLASSPATH%;c:\ibmjs\lib\ibmjs.jar;
    I presume the whole thing is just a classpath problem. Does anybody have any idea, from the above paths, where I am going wrong?

    I guess you have to put some of IBM's DLL files that come with Speech for Java in the PATH (not the CLASSPATH); it's not sufficient to have the directories in the classpath. Anyway, an installation manual comes with Speech for Java which explains all that.

  • JSAPI, speech output to file ???

    Hi all,
    I am not new to Java, but I am using JSAPI for the first time. I have downloaded example code which takes an ASCII string and speaks it; the result can be heard on headphones. This code is working fine.
    Now I would like to know how I can get the spoken output into a file. I mean, I want to store the voice generated by the system to a file (WAV or any format).
    Looking for your response,

    You'll have a large amount of reading to do; there is a lot on the topic, but Java provides an example program with source code that has audio recording capabilities. Once you sort out the audio capture, you could add it as a method and record your speech output:
    http://java.sun.com/products/java-media/sound/samples/JavaSoundDemo/
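    For the file-writing half of the problem, the JDK's own javax.sound.sampled classes can wrap raw PCM bytes and write them out as a WAV. A sketch under assumptions: the sine tone only stands in for whatever audio your engine produces (the sketch does not touch JSAPI at all), and out.wav is an arbitrary name:

    ```java
    import javax.sound.sampled.*;
    import java.io.ByteArrayInputStream;
    import java.io.File;

    public class WavSketch {
        public static void main(String[] args) throws Exception {
            // One second of a 440 Hz tone: 16 kHz, mono, 16-bit signed PCM.
            float rate = 16000f;
            byte[] pcm = new byte[(int) rate * 2];
            for (int i = 0; i < (int) rate; i++) {
                short sample = (short) (Math.sin(2 * Math.PI * 440 * i / rate) * 8000);
                pcm[2 * i] = (byte) (sample & 0xff);          // little-endian low byte
                pcm[2 * i + 1] = (byte) ((sample >> 8) & 0xff); // high byte
            }
            AudioFormat fmt = new AudioFormat(rate, 16, 1, true, false);
            AudioInputStream ais = new AudioInputStream(
                    new ByteArrayInputStream(pcm), fmt, pcm.length / 2);
            // AudioSystem.write produces a complete WAV file, header included.
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("out.wav"));
            System.out.println("wrote out.wav");
        }
    }
    ```

    If you can get a byte[] (or stream) of PCM out of your synthesizer, substituting it for the sine buffer above is all that's left.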

  • JSAPI Recognition won't run twice

    I'm working on an implementation of JSAPI's recognition. The 'listen' method is invoked through a web service, and it runs fine the first time. When I run it a second time, it looks like I have two engines allocated (though I deallocate them), because everything is displayed twice. Then the app crashes and blocks a port. When it is run a third time, it displays everything three times and an exception is thrown due to the blocked port. I tried both using the same Recognizer object for every 'listen' call and creating a new object for every invocation (the output below is the second case). Using different ports for the other invocations doesn't help either. When I restart the server, it again works only once. So what I think the problem is: I am not deallocating or closing resources properly. I think it is a JSAPI problem, but it could also be RTP. Please help.
    I use Cloud Garden's JSAPI implementation.
    Here is the code:
    package asr;
    import javax.speech.*;
    import javax.speech.recognition.*;
    import com.cloudgarden.audio.*;
    import com.cloudgarden.speech.*;
    import javax.media.protocol.*;
    import javax.media.*;
    import javax.media.format.*;
    public class RecognitionEngine {

         public RecognitionEngine(String recognitionEngineName) {
         }

         /*
          * TODO: synchronize result?
          * @return
          */
         public String listen(String destination, String port) {
              ++counter;
              System.out.println("Times called: " + counter);
              result = null;
              try {
                   String rtpRecoUrl = "rtp://" + destination + ":" + port + "/audio";
                   reco = Central.createRecognizer(new EngineModeDesc(null, null,
                             java.util.Locale.ENGLISH, null));
                   reco.addEngineListener(new TestEngineListener());
                   reco.allocate();
                   reco.waitEngineState(reco.ALLOCATED);
                   CGAudioManager recoAudioMan = (CGAudioManager) reco.getAudioManager();
                   recoAudioMan.addAudioListener(new TestAudioListener());
                   recoAudioMan.addTransferListener(new TransferListener() {
                        public void bytesTransferred(TransferEvent e) {
                             System.out.print("R" + e.getLength() + " ");
                        }
                   });
                   boolean gramExists = false;
                   for (RuleGrammar r : reco.listRuleGrammars()) {
                        if (r.getName().equals("gram1"))
                             gramExists = true;
                   }
                   if (!gramExists) {
                        RuleGrammar gram = reco.newRuleGrammar("gram1");
                        Rule r = gram
                                  .ruleForJSGF("all your base are belong to us {BASE} | hello world {HELLO} | goodbye computer {QUIT} | what time is it {TIME} | what date is it {DATE}");
                        gram.setRule("rule1", r, true);
                        gram.setEnabled(true);
                   }
                   // must be called before recognizer is resumed (for SAPI5 engines)!
                   reco.getRecognizerProperties().setResultAudioProvided(true);
                   recoDataSink = recoAudioMan.getDataSink();
                   reco.commitChanges();
                   reco.requestFocus();
                   reco.resume();
                   reco.waitEngineState(reco.LISTENING);
                   reco.addResultListener(new ResultAdapter() {
                        public void resultUpdated(ResultEvent ev) {
                             System.out.println("updated " + ev);
                        }
                        public void resultAccepted(ResultEvent ev) {
                             System.out.println("accepted " + ev);
                             FinalResult res1 = (FinalResult) ev.getSource();
                             FinalRuleResult frr = (FinalRuleResult) res1;
                             String[] tags = frr.getTags();
                             if (tags == null)
                                  return;
                             String tag = tags[0];
                             // Uncomment this to hear what the recognizer heard.
                             // try {
                             //      reco.pause();
                             //      frr.getAudio().play();
                             //      reco.resume();
                             // } catch (Exception e) {
                             //      e.printStackTrace();
                             // }
                             if (tag.equals("QUIT")) {
                                  System.out.println("Someone said goodbye computer");
                             } else if (tag.equals("HELLO")) {
                                  System.out.println("Someone said hello world");
                             } else if (tag.equals("TIME")) {
                                  System.out.println("Someone said what time is it");
                             } else if (tag.equals("DATE")) {
                                  System.out.println("Someone said what date is it");
                             } else if (tag.equals("BASE")) {
                                  System.out.println("ALL YOUR BASE ARE BELONG TO US!!");
                             }
                             result = tag;
                             close();
                        }
                   });
                   javax.sound.sampled.AudioFormat fmt = recoAudioMan.getAudioFormat();
                   FileTypeDescriptor cd = new FileTypeDescriptor(FileTypeDescriptor.RAW);
                   Format[] outFormats = { new AudioFormat(AudioFormat.LINEAR,
                             fmt.getFrameRate(), fmt.getSampleSizeInBits(),
                             fmt.getChannels(), AudioFormat.LITTLE_ENDIAN,
                             AudioFormat.SIGNED) };
                   System.out.println("DataSink = " + recoDataSink + " rtpRecoUrl="
                             + rtpRecoUrl);
                   ProcessorModel pm = new ProcessorModel(
                             new MediaLocator(rtpRecoUrl), outFormats, cd);
                   listening = true;
                   recoProc = Manager.createRealizedProcessor(pm);
                   System.out.println("RTP connection established");
                   recoDataSink.setSource(recoProc.getDataOutput());
                   recoProc.start();
                   recoDataSink.open();
                   recoDataSink.start();
                   System.out.println("listening");
                   reco.waitEngineState(reco.DEALLOCATED);
                   System.out.println("EXITING");
              } catch (Exception e) {
                   e.printStackTrace();
              } finally {
                   listening = false;
              }
              return result;
         }

         public void close() {
              System.out.println("closing");
              try {
                   recoProc.stop();
                   recoProc.close(); // mine // unblocks port
                   //recoProc.deallocate(); // mine
                   //recoDataSink.stop(); // mine
                   recoDataSink.close();
                   reco.deallocate();
                   reco.waitEngineState(Engine.DEALLOCATED);
                   Thread.sleep(2000);
              } catch (Exception e2) {
                   e2.printStackTrace();
              }
              System.out.println("closed!");
         }

         public boolean isListening() {
              return listening;
         }

         private int counter = 0;
         private boolean listening = false;
         private String result;
         private Recognizer reco;
         private Processor recoProc;
         private DataSink recoDataSink;
    }

    output:
    //server starting
    INFO: Server startup in 6153 ms
    INIT!
    Recognizer jest null!
    Times called: 1
    CloudGarden's JSAPI1.0 implementation
    Version 1.7.0
    Implementation contained in files cgjsapi.jar and cgjsapi170.dll
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineAllocatingResources
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineAllocated
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 changesCommitted
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 focusGained
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    DataSink = com.cloudgarden.audio.CGDataSink@800aa1 rtpRecoUrl=rtp://192.168.56.1:12346/audio
    RTP connection established
    listening
    R1536 R6000 R144 R2304 R2304 R2304 R2304 R2304 R2304 Speech started
    R2304 R2304 R2304 R2304 com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerProcessing
    updated javax.speech.recognition.ResultEvent[source=Grammar:gram1, Conf:3, Tokens{}]
    R2304 R2304 R2304 R2304 R2304 R2304 R2304 R2304 R2304 R2304 updated javax.speech.recognition.ResultEvent[source=Grammar:gram1, Conf:3, Tokens{hello,world}]
    R2304 R2304 Speech stopped
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    updated javax.speech.recognition.ResultEvent[source=Grammar:gram1, Conf:3, Tokens{hello,world}, Tags{HELLO}]
    accepted javax.speech.recognition.ResultEvent[source=Grammar:gram1, Conf:3, Tokens{hello,world}, Tags{HELLO}]
    Someone said hello world
    closing
    R0 R-1 com.cloudgarden.speech.CGRecognizer@1fbc355 focusLost
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineDeallocatingResources
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineDeallocated
    EXITING
    closed!
    Speech started
    Speech stopped
    INIT!
    Recognizer jest null!
    Times called: 1
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineAllocatingResources
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineAllocatingResources
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineAllocated
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineAllocated
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 changesCommitted
    com.cloudgarden.speech.CGRecognizer@1fbc355 changesCommitted
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 focusGained
    com.cloudgarden.speech.CGRecognizer@1fbc355 focusGained
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    DataSink = com.cloudgarden.audio.CGDataSink@800aa1 rtpRecoUrl=rtp://192.168.56.1:12346/audio
    RTP connection established
    listening
    R-1 R-1
    INIT!
    Recognizer jest null!
    Times called: 1
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 changesCommitted
    com.cloudgarden.speech.CGRecognizer@1fbc355 changesCommitted
    com.cloudgarden.speech.CGRecognizer@1fbc355 changesCommitted
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerSuspended
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 engineResumed...
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    com.cloudgarden.speech.CGRecognizer@1fbc355 recognizerListening
    DataSink = com.cloudgarden.audio.CGDataSink@800aa1 rtpRecoUrl=rtp://192.168.56.1:12346/audio
    Cannot create the RTP Session: Can't open local data port: 12346
    javax.media.CannotRealizeException
         at javax.media.Manager.blockingCall(Manager.java:2005)
         at javax.media.Manager.createRealizedProcessor(Manager.java:794)
         at asr.RecognitionEngine.listen(RecognitionEngine.java:124)
         at service.IVRService.recognize(IVRService.java:190)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:194)
         at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:102)
         at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
         at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:100)
         at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:176)
         at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:275)
         at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:133)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
         at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
         at java.lang.Thread.run(Unknown Source)
    Edited by: mr_b on Aug 17, 2010 3:11 AM
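    On the blocked-port side specifically, the trace's "Can't open local data port" is the ordinary symptom of a socket that was never closed before a rebind is attempted. A plain-JDK sketch (no JMF or JSAPI involved; it just picks a free UDP port to play the role of 12346) reproduces the effect:

    ```java
    import java.net.DatagramSocket;
    import java.net.SocketException;

    public class PortBlockSketch {
        public static void main(String[] args) throws Exception {
            // Grab any free UDP port; in the app above, 12346 plays this role.
            DatagramSocket first = new DatagramSocket(0);
            int port = first.getLocalPort();
            try {
                // While `first` is still open, a second bind on the same
                // port fails -- the "Can't open local data port" symptom.
                new DatagramSocket(port);
                System.out.println("unexpectedly bound twice");
            } catch (SocketException e) {
                System.out.println("port still blocked");
            }
            first.close();
            // Once the first socket is closed, the port binds again.
            DatagramSocket second = new DatagramSocket(port);
            System.out.println("rebound after close");
            second.close();
        }
    }
    ```

    One thing to check against the code above: close() is only reached from resultAccepted, so an invocation where nothing is ever recognized leaves the processor and data sink holding the port.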

  • JSAPI : Problems with the speech recognition HelloWorld example

    I just started off with JSAPI, and I immediately ran into problems. These are my configuration and the steps I took:
    My system:
    OS: WinXP
    JDK: JDK 6 Update 10
    IDE: NetBeans 6.0
    I downloaded FreeTTS.zip from the FreeTTS site [http://freetts.sourceforge.net/docs/index.php]. I expanded it, and in the lib folder I ran jsapi.exe; as a result, jsapi.jar was generated.
    I included this jar in my project libraries.
    Then I copy-pasted the HelloWorld speech recognition example code given on the site: [http://java.sun.com/products/java-media/speech/forDevelopers/jsapi-guide/Recognition.html#11644]
    First it gave a compile-time error that
    rec.deallocate(); throws an exception which needs to be handled. Accordingly, I surrounded it with a try-catch block and caught the exceptions in the manner given below:
       try {
            rec.deallocate();
            System.exit(0);
       } catch (EngineException ex) {
            Logger.getLogger(HelloWorld.class.getName()).log(Level.SEVERE, null, ex);
       } catch (EngineStateError ex) {
            Logger.getLogger(HelloWorld.class.getName()).log(Level.SEVERE, null, ex);
       }
    Now the code compiled without problems.
    When I executed it, a runtime exception (NullPointerException) was thrown at the line
       rec.allocate();
    in main(String[] args).
    Can anybody please explain to me where I am going wrong? Or is there something more that needs to be added to the code?
    Anxiously waiting to hear from your end.
    Thanks and regards,

    Hi All,
    I am using the following code:
    import javax.speech.*;
    import javax.speech.synthesis.*;
    import java.util.Locale;

    public class HelloWorld {
         public static void main(String args[]) {
              try {
                   // Create a synthesizer for English
                   Synthesizer synth = Central.createSynthesizer(
                        new SynthesizerModeDesc(Locale.ENGLISH));
                   synth.waitEngineState(Synthesizer.ALLOCATED);
                   //Synthesizer synth = Central.createSynthesizer(null);
                   // Get it ready to speak
                   synth.allocate();
                   synth.resume();
                   // Speak the "Hello world" string
                   synth.speakPlainText("Hello, world!", null);
                   // Wait till speaking is done
                   synth.waitEngineState(Synthesizer.QUEUE_EMPTY);
                   // Clean up
                   synth.deallocate();
              } catch (Exception e) {
                   e.printStackTrace();
              }
         }
    }
    and I am also getting the following error:
    java.lang.NullPointerException
            at HelloWorld.main(HelloWorld.java:18)
    My classpath is set to:
    jsapi.jar (extracted from FreeTTS 1.2.1)
    freetts.jar
    I didn't find any solution in past replies, so please help me solve this...
    Thanking you....

  • J2ME and JSAPI 2.0

    Hi everyone,
    How do I use JSAPI 2.0 in J2ME? If possible, show me some examples.
    Thanks in advance.

    Currently JSAPI does not work well with J2ME.

  • JSAPI and MBROLA

    I'm playing with a speech synthesizer program and I've got it working with FreeTTS just fine.
    Of course I want to try some better voices so I'm trying to get MBROLA working and that's where I run into problems. I'm trying to test it using the Player demo program that comes with FreeTTS. I invoke it with this command line:
    java -Dmbrola.base="C:\Program Files\Mbrola Tools" -cp Player.jar;jsapi.jar;freetts.jar Player
    The player comes up and the following voices are displayed in the Voice ComboBox:
    kevin
    kevin16
    mbrola_us1
    mbrola_us2
    mbrola_us3
    As long as I only select kevin or kevin16 it all works fine but if I select any of the MBROLA voices I get this stack trace:
    Exception in thread "Thread-4" java.lang.Error: No audio data read
    at de.dfki.lt.freetts.mbrola.MbrolaCaller.processUtterance(MbrolaCaller.java:134)
    at com.sun.speech.freetts.Voice.runProcessor(Voice.java:557)
    at com.sun.speech.freetts.Voice.processUtterance(Voice.java:402)
    at com.sun.speech.freetts.Voice.speak(Voice.java:284)
    at com.sun.speech.freetts.jsapi.FreeTTSSynthesizer$OutputHandler.outputItem(FreeTTSSynthesizer.java:705)
    at com.sun.speech.freetts.jsapi.FreeTTSSynthesizer$OutputHandler.run(FreeTTSSynthesizer.java:632)
    Obviously I'm running on Windows but according to what I've read this is supposed to work on Windows.
    Any ideas?

    [email protected] wrote:
    > Hi,
    >
    > I am using the following book to learn Eclipse 3.5;
    >
    > Professional Eclipse 3 for Java Developers by Berthold Daum.
    >
    > I get the following error when I compile an example that illustrates the
    > use of FreeTTS and the JSAPI:
    >
    > System property "mbrola.base" is undefined. Will not use MBROLA voices.
    >
    > How would I define "mbrola.base" for FreeTTS through Eclipse. The book
    > did not mention anything about setting up mbrola.
    > Please help.
    > Thank you.
    I don't know anything about that book or about MBROLA; you might want to contact
    the author of the book.
    But to set a system property when running a Java app, you typically pass a "-D"
    argument on the command line, or in the Eclipse launch configuration. For
    instance, -Dmbrola.base or -Dmbrola.base=whatever.
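    A minimal sketch of checking that the property actually arrived, with the caveat that the C:/mbrola fallback path is invented for illustration (point it at your real MBROLA install), and that setting it in code is done here only so the sketch runs on its own:

    ```java
    public class MbrolaBaseCheck {
        public static void main(String[] args) {
            // Normally the property arrives from the command line or the
            // Eclipse launch configuration, e.g.:
            //   java -Dmbrola.base="C:\Program Files\Mbrola Tools" ...
            // If set programmatically instead, it must happen before any
            // FreeTTS class reads the property.
            if (System.getProperty("mbrola.base") == null) {
                System.setProperty("mbrola.base", "C:/mbrola"); // hypothetical path
            }
            System.out.println("mbrola.base = " + System.getProperty("mbrola.base"));
        }
    }
    ```

    If this prints null-free output but FreeTTS still warns that mbrola.base is undefined, the property is being set after FreeTTS has already looked for it.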

  • JSAPI speech synthesizer and recognizer

    Hello friends, has anyone of you done a project on a speech synthesizer and recognizer using JSAPI and FreeTTS?

    (contd. from previous thread, step 5)
    // testing
    You can find many examples on the net, though I am giving one below.
    The class name is Main.java; the code follows:
    package tts; // my package

    import java.util.Locale;
    import javax.speech.Central;
    import javax.speech.synthesis.Synthesizer;
    import javax.speech.synthesis.SynthesizerModeDesc;
    import javax.speech.synthesis.Voice;
    import java.io.*;

    /**
     * @author Administrator
     */
    public class Main {

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) {
            System.out.println("args==" + args.toString());
            String voiceName = (args.length > 0)
                    ? args[0]
                    : "kevin16";
            voiceName = new String("kevin16");
            System.out.println("Voice Name is :" + voiceName);
            //System.out.println("Using voice: " + voiceName);
            try {
                System.out.println("---1---");
                /* Find a synthesizer that has the general domain voice
                 * we are looking for. NOTE: this uses the Central class
                 * of JSAPI to find a Synthesizer. The Central class
                 * expects to find a speech.properties file in user.home
                 * or java.home/lib.
                 *
                 * If your situation doesn't allow you to set up a
                 * speech.properties file, you can circumvent the Central
                 * class and do a very non-JSAPI thing by talking to
                 * FreeTTSEngineCentral directly. See the WebStartClock
                 * demo for an example of how to do this.
                 */
                SynthesizerModeDesc desc = new SynthesizerModeDesc(
                        null,      // engine name
                        "general", // mode name
                        Locale.US, // locale
                        null,      // running
                        null);     // voice
                System.out.println("--2--");
                Synthesizer synthesizer = Central.createSynthesizer(desc);
                /* Just an informational message to guide users that didn't
                 * set up their speech.properties file.
                 */
                System.out.println("--3--");
                if (synthesizer == null) {
                    System.out.println("--3(1)--");
                    // System.err.println(noSynthesizerMessage());
                    System.exit(1);
                }
                System.out.println("--4--");
                /* Get the synthesizer ready to speak */
                synthesizer.allocate();
                synthesizer.resume();
                /* Choose the voice. */
                desc = (SynthesizerModeDesc) synthesizer.getEngineModeDesc();
                Voice[] voices = desc.getVoices();
                Voice voice = null;
                System.out.println("Length :" + voices.length);
                for (int i = 0; i < voices.length; i++) {
                    if (voices[i].getName().equals(voiceName)) {
                        voice = voices[i];
                        System.out.println("voice " + voice);
                        break;
                    }
                }
                if (voice == null) {
                    System.err.println(
                            "Synthesizer does not have a voice named " + voiceName + ".");
                    System.exit(1);
                }
                synthesizer.getSynthesizerProperties().setVoice(voice);
                System.out.println("going 2 play");
                /* Tell the synthesizer to speak and wait for it to
                 * complete.
                 */
                //synthesizer.speakPlainText("Hello world! I am vijay testing FreeTTS", null);
                String str = new String("hello world");
                System.out.println("String is ....: " + str);
                synthesizer.speakPlainText(str, null);
                synthesizer.waitEngineState(Synthesizer.QUEUE_EMPTY);
                /* Clean up and leave. */
                synthesizer.deallocate();
                System.exit(0);
            } catch (Exception e) {
                e.printStackTrace();
            }
            // TODO code application logic here
        }

        @Override
        protected Object clone() throws CloneNotSupportedException {
            return super.clone();
        }

        @Override
        public boolean equals(Object obj) {
            return super.equals(obj);
        }

        @Override
        protected void finalize() throws Throwable {
            super.finalize();
        }

        @Override
        public int hashCode() {
            return super.hashCode();
        }

        @Override
        public String toString() {
            return super.toString();
        }
    }
    Here we are using kevin16 as the voice type.
    If you still have any problem while running it, let me know.

  • Jsapi supports phones?

    What are the phones that JSAPI can support? I really need your help, guys. I'm working on a voice command application using Java, and I want to know which API will support my project. Thanks in advance, guys.

    Currently JavaFX is not so stable that mobile companies would keep it. I think JavaFX 2.0 may be a sign of good stability for JavaFX.
    Thanks.
    narayan

  • Help with JSAPI and JSGF

    Hi, I'm working with JSAPI using a JSGF dictionary. Speech recognition works fine with the words in the dictionary, but when I say a word different from those saved in the dictionary, JSAPI recognizes it as the dictionary word that most closely matches, and I need it to simply not be recognized. Thanks a lot for the help.

    Hello,
    The dictionary has many, many words in it.
    You should edit your grammar in order to limit the answers you get as the best-match result.
    You could also add your own words to the dictionary by creating the extra words. To do that, you are limited to the sounds as they are already written in the dictionary (e.g. there is no OO sound, but there is OW). I am not sure if you can add your own extra sounds, but you could try it :)
    Hope this helps,
    Marios

  • Simple jsapi (speech) application

    Hello!
    I read the tutorial at http://java.sun.com/products/java-media/speech/forDevelopers/jsapi-guide/index.html and tried to create a recognizer with this code:
    import javax.speech.*;
    import javax.speech.recognition.*;
    import java.io.FileReader;
    import java.util.Locale;

    public class HelloWorld2 extends ResultAdapter {
        static Recognizer rec;

        public static void main(String args[]) {
            try {
                // Create a recognizer that supports English.
                rec = Central.createRecognizer(new EngineModeDesc(Locale.ENGLISH));
                // Start up the recognizer
                rec.allocate();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    It compiles, but at runtime a NullPointerException occurs at the line "rec.allocate();".
    Why is that so?
    Thank you.

    Oh, sorry, it seems I didn't read well enough.
    Hmm, the Speech SDK 5.1 for Windows from MS is "JSAPI-compliant", isn't it?
    Is there any free recognizer available that supports the German language?
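For what it's worth, the NullPointerException above is the classic symptom of Central.createRecognizer returning null when no matching engine is installed. A defensive sketch (assuming some JSAPI-compliant engine is registered; the German locale and class name are only illustrative):

```java
import java.util.Locale;

import javax.speech.Central;
import javax.speech.EngineModeDesc;
import javax.speech.recognition.Recognizer;

public class SafeRecognizer {
    public static void main(String[] args) throws Exception {
        // createRecognizer returns null (it does not throw)
        // when no engine matches the requested mode
        Recognizer rec = Central.createRecognizer(new EngineModeDesc(Locale.GERMAN));
        if (rec == null) {
            System.err.println("No JSAPI recognizer available for this locale.");
            return;
        }
        rec.allocate();
        // ... use the recognizer here ...
        rec.deallocate();
    }
}
```

This cannot run without a JSAPI implementation (the javax.speech packages are not part of the JDK), so treat it as a sketch of the null check rather than a working program.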

  • JSAPI: How can I edit a button symbol?

    Hi,
    I have written an export script for Flash CS3 using the
    Javascript API, and now I want to port it to Fireworks CS3. The
    first problem I ran into was that I could not access a button
    symbol's state frames via script - the closest I get is up to the
    Instance object, but I could not access the frames or layers
    inside. I used a for-each-loop to trace all of the Instance
    object's properties but did not find anything that could have
    helped me.
    Then I had another idea: I noticed that unlike Flash,
    Fireworks opens button symbols as a new document when I edit them.
    And the Fireworks Document object does indeed have a property
    called "isSymbolDocument" that returns true if a document "is a
    symbol-editing window."
    But once again I stumbled upon a problem, namely that I did
    not manage to find a "symbol editing command" (the equivalent to
    double-clicking it) in the Javascript API. :(
    I'd be grateful for any hints, maybe there is even a much
    easier way to access button symbol states?
    Greetings,
    Jochen

    I'm not sure if I fully understand your question, but I'll take a shot at what I think it is. If you want a button that can be pressed so that its states cycle through "on, off, red, on, off, red", etc., you will have to use a bit of trickery. In this case it is best to use a picture ring to give the control the images you want and to place a boolean button OVER the ring control. Make the boolean control completely transparent, resize it to cover the entire picture ring, and make sure the boolean is ABOVE the ring by using "move to front". You will then have to have a loop checking the boolean button for a press, then modify the picture ring accordingly and take your "control value" off of the picture ring. I'd suggest finding a workaround if you can, as this is a lot of work to go through.

  • Question about JSAPI

    Hello everybody. I'm trying to develop a new text-to-speech application using the Java Speech API. I was wondering whether it is possible to integrate new voices into one of the available implementations, because I want to add support for the Romanian language to my app. If somebody has done it, I'd like to receive some details about how he or she did it. Thanks in advance.
    Regards,
    Bogdan.

    Hi,
    I am using FreeTTS and want to do the same thing you described in your post. I have tried freettsVoice.setAudioPlayer(new SingleFileAudioPlayer()); many times; however, it does not produce any audio files.
    By contrast, I can easily produce audio files from the command line using -dumpAudio.
    Do you have an example of how to do that, and where do you write the file name and path when using SingleFileAudioPlayer()?
    Any help will be very much appreciated.
    E
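If I recall the FreeTTS API correctly, the file name and path go into the SingleFileAudioPlayer constructor as a base name (the player appends the extension), and nothing is flushed to disk until the player is closed. A hedged sketch (the base name "hello" and the kevin16 voice are illustrative; requires the FreeTTS jars):

```java
import javax.sound.sampled.AudioFileFormat;

import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;
import com.sun.speech.freetts.audio.SingleFileAudioPlayer;

public class DumpToWav {
    public static void main(String[] args) {
        Voice voice = VoiceManager.getInstance().getVoice("kevin16");
        voice.allocate();

        // Base name without extension; the player writes "hello.wav"
        SingleFileAudioPlayer player =
                new SingleFileAudioPlayer("hello", AudioFileFormat.Type.WAVE);
        voice.setAudioPlayer(player);

        voice.speak("Hello world");

        // The audio file is only written when the player is closed --
        // forgetting this is a common reason for "no audio files"
        player.close();
        voice.deallocate();
    }
}
```

The missing close() call would explain the behavior described above: the synthesized audio is buffered but never written out.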

Maybe you are looking for

  • Fullscreen doesn't work

    Hi, I prepared a footer for a website and have the following issue: There are two buttons on footer bar that interests us. One is fullscreen - clicked activates fscommand function. But fscommand works only when I debug my fla. Otherwise, nothing happ

  • Drag and Drop in ALV

    Hi All, I have the latest version configured. In the ALV, I have new feature to drag and drop the column in the ALV table so that it's position can be changed. Because of this my columns settings are getting disturbed. How to disable this feature. I

  • Re: Portege R600-10Q- Toshiba DVD Player has stop working

    Hello, Please help me, I can't use the Toshiba DVD Player, every time I open it, appears the windows windows saying that the app has stop working... When I see the error report, appears 'app crash'.... I've already gone to the Toshiba site to make th

  • Change task status in notification

    is it possible to change the task status like the notification status (in process again). A completed task -> in process again. I can't find this function and also it isn't possible to delete a completed task. Best Regards Bernd

  • Special Ledger and FI balance sheet do not reconcile

    Hi, I have issues with reconciling special ledger and FI balance sheet accounts. The month-end process GWUL (currency translation for SPL) was executed and business users did not execute F.05 for FI, probably this was missed out more than one month.