A-law / mu-law WAV (8-bit, 8 kHz G.711)

I have Final Cut and I completed an audio project for a client. They requested MP3 (which I know how to compress) and an A-law or mu-law WAV (8-bit, 8 kHz G.711). How do I compress my project to A-law and mu-law WAV files? I don't even know what the file format is used for. Please help.

A-law and mu-law are commonly used in telephony systems, such as VoIP. For example, we record voice messages - such as welcome greetings, prompts, and progress announcements - in A-law, a format that Asterisk supports natively.
Here is a wiki link for your convenience http://en.wikipedia.org/wiki/G.711
iMac (Intel)   Mac OS X (10.4.8)  
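For context, G.711 mu-law maps each 16-bit linear PCM sample to 8 bits on a logarithmic scale (eight segments of sixteen steps), which is how telephony audio fits in 64 kbit/s at 8 kHz. A minimal sketch of the classic encoder (the Sun reference algorithm with its 0x84 bias), in Python:

```python
import math

# mu-law constants from the G.711 reference implementation
BIAS = 0x84    # bias so every segment has a non-zero floor
CLIP = 32635   # clip so that sample + BIAS never overflows 15 bits

def linear_to_ulaw(sample):
    """Encode one 16-bit signed PCM sample as an 8-bit G.711 mu-law byte."""
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    # find the segment (exponent): highest set bit above bit 7
    exponent = 7
    mask = 0x4000
    while exponent > 0 and not (magnitude & mask):
        exponent -= 1
        mask >>= 1
    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    # mu-law transmits the bitwise complement of sign|exponent|mantissa
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

# e.g. the first few 8 kHz samples of a 1 kHz tone:
tone = [linear_to_ulaw(int(10000 * math.sin(2 * math.pi * 1000 * t / 8000)))
        for t in range(8)]
```

This is a sketch for illustration, not a drop-in for any particular tool; in practice a converter such as sox or ffmpeg does the same mapping for you.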

Similar Messages

  • Need to export as an 8-bit, 8 kHz PCM mu-law WAV file

    I have an MP3 that I need to export as an 8-bit, 8 kHz, PCM mu-law WAV file (it's for a phone system that is kind of old).
    I've found programs that will do mu-law, but not at 8-bit, etc. QuickTime will do the 8-bit, 8 kHz WAV, but not PCM mu-law (it only does linear). Ironically, mu-law (u-law) is an option for sound compression for AVI, but not WAV. It's there; I just need to know how to tap into it.

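A WAV file signals mu-law through format tag 7 (WAVE_FORMAT_MULAW) in its fmt chunk, so one workaround when a tool refuses to export it is to wrap already-encoded G.711 bytes in the container yourself. A sketch (the function name is mine, not from the thread), assuming mono mu-law samples at 8 kHz:

```python
import struct

def write_ulaw_wav(f, ulaw_bytes, rate=8000):
    """Wrap already-encoded G.711 mu-law bytes (mono, 8-bit) in a WAV container."""
    n = len(ulaw_bytes)
    # fmt chunk: tag 7 = WAVE_FORMAT_MULAW, 1 channel, 1 byte per sample,
    # plus the cbSize=0 field that non-PCM formats are expected to carry
    fmt = struct.pack("<HHIIHHH", 7, 1, rate, rate, 1, 8, 0)
    fact = struct.pack("<I", n)          # fact chunk: number of samples
    riff_size = 4 + (8 + len(fmt)) + (8 + len(fact)) + (8 + n)
    f.write(struct.pack("<4sI4s", b"RIFF", riff_size, b"WAVE"))
    f.write(struct.pack("<4sI", b"fmt ", len(fmt)))
    f.write(fmt)
    f.write(struct.pack("<4sI", b"fact", len(fact)))
    f.write(fact)
    f.write(struct.pack("<4sI", b"data", n))
    f.write(ulaw_bytes)
```

Usage would be along the lines of `write_ulaw_wav(open("prompt.wav", "wb"), encoded)`, where `encoded` holds bytes that have already been through a mu-law encoder.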

  • Exporting CCITT a-law wav files

    I need to export voice recordings as 8-bit A-law WAV files. I'm using CS5.5 on a Mac. It was straightforward with 3.0, but CS5.5 seems to offer only four basic export formats. I've checked Adobe's document 'Adobe Audition features replaced or not implemented in CS5.5' and there's no mention. Can anybody help?
    Many thanks, Victoria

    Suggest you read some of this post from the developers, especially "So what's missing in Audition CS5.5".
    http://forums.adobe.com/message/3614935#3614935

  • How to make a PCM file that is 8-bit A-law compressed, mono, 8 kHz?

    Hi folks! I desperately need help. I use Audition 3.0 and I need to make a file that must meet the following requirements:
    Format: PCM
    Bit Rate: 8-bit
    Channels: 1 (Mono)
    Sample Rate: 8 kHz
    Sample Size: 8-bit A-law compressed
    I tried to do it my way but something seems to be wrong. Maybe the requirements are not clear, or maybe I don't understand them the right way. Please tell me how you would do it?

    I think most of us are a bit confused by your requirement and description.
    All I can think of is that you mean "A-law" encoding (otherwise known as G.711 A-law) which is a coding algorithm fairly commonly used in telephony.
    If that's it, try the "A/mu-law .wav" option near the top of the list of file types in the Save As menu.
    (A-law and mu-law are different variants of basically the same algorithm.)
    Bob
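To expand on Bob's point: A-law differs from mu-law mainly in its segment layout and in XOR-ing the output byte with the 0x55 toggle pattern, where mu-law complements all bits. A sketch of the standard A-law encoder in Python, assuming 16-bit signed input:

```python
# segment upper bounds for the 13-bit A-law magnitude
SEG_END = [0x1F, 0x3F, 0x7F, 0xFF, 0x1FF, 0x3FF, 0x7FF, 0xFFF]

def linear_to_alaw(sample):
    """Encode one 16-bit signed PCM sample as an 8-bit G.711 A-law byte."""
    pcm = sample >> 3                 # A-law operates on 13-bit samples
    if pcm >= 0:
        mask = 0xD5                   # sign bit set, plus the 0x55 toggle
    else:
        mask = 0x55
        pcm = -pcm - 1
    # find the segment this magnitude falls in
    seg = next((i for i, end in enumerate(SEG_END) if pcm <= end), 8)
    if seg >= 8:                      # out of range: clamp to maximum
        return 0x7F ^ mask
    aval = seg << 4
    aval |= (pcm >> 1) & 0x0F if seg < 2 else (pcm >> seg) & 0x0F
    return aval ^ mask
```

Again just an illustration of the algorithm; Audition's "A/mu-law .wav" save option performs this encoding internally.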

  • 24-Bit, 96kHz Microsoft PCM WAVs result in no audio in any media player

    Hi guys,
    I have some downmixes saved by other programs as 24-bit, 96 kHz stereo WAV files that play fine in all media players. However, when I save these, or any other upconverted 16-bit files, in Audition 3.0 as "Microsoft PCM WAV" (24-bit, 96 kHz, stereo), they do not play in any of my media players. The file appears to be playing but there is no sound. The only difference I can see in the file attributes is a new "Audio Format" attribute whose value is "PCM" - this attribute does not exist on the other (working) files. However, if I save the file as a "32-bit float" Microsoft PCM WAV in Audition, it plays fine! Here is an image to explain:
    http://www.aotplaza.com/Files/Audition%20Problem.PNG
    I would have thought it was a codec problem, but I didn't think WAV files required third-party codecs in Windows XP. Any ideas?

    Never mind, I realised that the problem lies in the WAV format used. According to some websites I have visited, high resolution and/or surround sound WAV files can only be played back natively if they are in WAVE EXTENSIBLE format and the files I was saving were in PCM format. I solved this by setting ffdshow to handle all WAV files using DirectShow instead and the files now play back fine. :)
    Out of curiosity, I see no WAVEX save format in Audition although it exists in Audacity (an open source audio editor) - is there a way to save WAV files in this format?
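The behaviour described above matches the WAV spec: players key off the format code in the fmt chunk, and high-resolution or multichannel files are supposed to use 0xFFFE (WAVE_FORMAT_EXTENSIBLE) rather than plain PCM. One quick way to inspect which code a file carries (a sketch, not from the thread):

```python
import struct

def wav_format_tag(data):
    """Return the fmt-chunk format code of a RIFF/WAVE byte string.
    1 = PCM, 3 = IEEE float, 6 = A-law, 7 = mu-law, 0xFFFE = EXTENSIBLE."""
    if data[:4] != b"RIFF" or data[8:12] != b"WAVE":
        raise ValueError("not a RIFF/WAVE file")
    pos = 12
    while pos + 8 <= len(data):
        cid, size = struct.unpack("<4sI", data[pos:pos + 8])
        if cid == b"fmt ":
            return struct.unpack("<H", data[pos + 8:pos + 10])[0]
        pos += 8 + size + (size & 1)  # RIFF chunks are word-aligned
    raise ValueError("no fmt chunk found")
```

Running this over one of the silent 24-bit files would show whether Audition wrote tag 1 (plain PCM) where the player expected 0xFFFE.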

  • How can I change the oval-shape applet to plot a waveform?

    Hi, this is not my code; the original code is freeware from www.neuralsemantics.com and is Copyright 1989 by Rich Gopstein and Harris Corporation.
    The site grants permission to play with the code or amend it.
    How could I modify the applet to display (plot) several cycles of the audio signal instead of the elliptical shape? The amplitude and period of the waveform should change in accordance with the moving sliders.
    Here is the code from www.neuralsemantics.com/applets/jazz.html
    import java.applet.*;
    import java.awt.*;
    import java.awt.event.*;
    public class JazzMachine extends Applet
                 implements Runnable, AdjustmentListener, MouseListener {
      // program name
      final static String TITLE = "The jazz machine";
      // Line separator char
      final static String LSEP = System.getProperty("line.separator");
      // Value range
      final static int MIN_FREQ = 1;   // min value for barFreq
      final static int MAX_FREQ = 200; // max value for barFreq
      final static int MIN_AMPL = 0;   // min value for barVolume
      final static int MAX_AMPL = 100; // max value for barVolume
      // Sun's mu-law audio rate = 8KHz
      private double rate = 8000d;      
      private boolean audioOn = false;     // state of audio switch (on/off)
      private boolean changed = true;      // change flag
      private int freqValue = 1;           // def value of frequency scrollbar
      private int amplValue = 70;          // def value of volume scrollbar
      private int amplMultiplier = 100;    // volume multiplier coeff
      private int frequency, amplitude;    // the requested values
      private int soundFrequency;          // the actual output frequency
      // the mu-law encoded sound samples
      private byte[] mu;
      // the audio stream
      private java.io.InputStream soundStream;
      // graphic components
      private Scrollbar barVolume, barFreq;
      private Label labelValueFreq;
      private Canvas canvas;   
      // flag for frequency value display
      private boolean showFreq = true;
      // width and height of canvas area
      private int cw, ch;
      // offscreen Image and Graphics objects
      private Image img;
      private Graphics graph;
      // dimensions of the graphic ball
      private int ovalX, ovalY, ovalW, ovalH;
      // color of the graphic ball
      private Color ovalColor;
      // default font size
      private int fontSize = 12;
      // hyperlink objects
      private Panel linkPanel;
      private Label labelNS;
      private Color inactiveLinkColor = Color.yellow;
      private Color activeLinkColor = Color.white;
      private Font inactiveLinkFont = new Font("Dialog", Font.PLAIN, fontSize);
      private Font activeLinkFont = new Font("Dialog", Font.ITALIC, fontSize);
      // standard font for the labels
      private Font ctrlFont;
      // standard foreground color for the labels
      private Color fgColor = Color.white;
      // standard background color for the control area
      private Color ctrlColor = Color.darkGray;
      // standard background color for the graphic ball area
      private Color bgColor = Color.black;
      // start value for the time counter
      private long startTime;
      // maximum life time for an unchanged sound (10 seconds)
      private long fixedTime = 10000;
      // animation thread
      Thread runner;
    //                             Constructors
      public JazzMachine() { }
    //                                Methods
      public void init() {
        // read applet <PARAM> tags
        setAppletParams();
        // font for the labels
        ctrlFont = new Font("Dialog", Font.PLAIN, fontSize);
        // convert scrollbar values to real values (see below for details)
        amplitude = (MAX_AMPL - amplValue) * amplMultiplier;
        frequency = (int)Math.pow(1.2d, (double)(freqValue + 250) / 10.0);
        setLayout(new BorderLayout());
        setBackground(ctrlColor);
        setForeground(fgColor);
        Label labelVolume = new Label(" Volume ");
        labelVolume.setForeground(fgColor);
        labelVolume.setAlignment(Label.CENTER);
        labelVolume.setFont(ctrlFont);
        barVolume = new Scrollbar(Scrollbar.VERTICAL, amplValue, 1,
                         MIN_AMPL, MAX_AMPL + 1);
        barVolume.addAdjustmentListener(this);
        // assign fixed size to the scrollbar
        Panel pVolume = new Panel();
        pVolume.setLayout(null);
        pVolume.add(barVolume);
        barVolume.setSize(16, 90);
        pVolume.setSize(16, 90);
        Label labelFreq = new Label("Frequency");
        labelFreq.setForeground(fgColor);
        labelFreq.setAlignment(Label.RIGHT);
        labelFreq.setFont(ctrlFont);
        barFreq = new Scrollbar(Scrollbar.HORIZONTAL, freqValue, 1,
                      MIN_FREQ, MAX_FREQ);
        barFreq.addAdjustmentListener(this);
        // assign fixed size to the scrollbar
        Panel pFreq = new Panel();
        pFreq.setLayout(null);
        pFreq.add(barFreq);
        barFreq.setSize(140, 18);
        pFreq.setSize(140, 18);
        // show initial frequency value
        labelValueFreq = new Label();
        if (showFreq) {
          labelValueFreq.setText("0000000 Hz");
          labelValueFreq.setForeground(fgColor);
          labelValueFreq.setAlignment(Label.LEFT);
          labelValueFreq.setFont(ctrlFont);
        }
        Panel east = new Panel();
        east.setLayout(new BorderLayout(10, 10));
        east.add("North", labelVolume);
        Panel pEast = new Panel();
        pEast.add(pVolume);
        east.add("Center", pEast);
        Panel south = new Panel();
        Panel pSouth = new Panel();
        pSouth.setLayout(new FlowLayout(FlowLayout.CENTER));
        pSouth.add(labelFreq);
        pSouth.add(pFreq);
        pSouth.add(labelValueFreq);
        south.add("South", pSouth);
        linkPanel = new Panel();
        this.composeLink();
        Panel west = new Panel();
        // dummy label to enlarge the panel
        west.add(new Label("      "));
        add("North", linkPanel);
        add("South", south);
        add("East", east);
        add("West", west);
        add("Center", canvas = new Canvas());
      }
      private void composeLink() {
        linkPanel.setLayout(new FlowLayout(FlowLayout.CENTER, 0, 5));
        linkPanel.setFont(inactiveLinkFont);
        linkPanel.setForeground(Color.yellow);
        Label labelName = new Label(TITLE + " \u00a9");
          labelName.setForeground(inactiveLinkColor);
          labelName.setAlignment(Label.RIGHT);
        labelNS = new Label(" Neural Semantics   ");
          labelNS.setForeground(inactiveLinkColor);
          labelNS.setFont(inactiveLinkFont);
          labelNS.setAlignment(Label.LEFT);
        linkPanel.add(labelName);
        linkPanel.add(labelNS);
        // link to Neural Semantics website
        String h = getDocumentBase().getHost();
        if ((h.length() > 4) && (h.substring(0, 4).equals("www.")))
          h = h.substring(4);
        if ((h != null) && (! h.startsWith("neuralsemantics.com"))) {
          // create a hand cursor for the hyperlink area
          Cursor linkCursor = new Cursor(Cursor.HAND_CURSOR);
          linkPanel.setCursor(linkCursor);
          labelName.addMouseListener(this);
          labelNS.addMouseListener(this);
        }
      }
      private void switchAudio(boolean b) {
        // switch audio to ON if b=true and audio is OFF
        if ((b) && (! audioOn)) {
          try {
            sun.audio.AudioPlayer.player.start(soundStream);
          } catch(Exception e) { }
          audioOn = true;
        }
        // switch audio to OFF if b=false and audio is ON
        if ((! b) && (audioOn)) {
          try {
            sun.audio.AudioPlayer.player.stop(soundStream);
          } catch(Exception e) { }
          audioOn = false;
        }
      }
      private void getChanges() {
        // create new sound wave
        mu = getWave(frequency, amplitude);
        // show new frequency value
        if (showFreq)
          labelValueFreq.setText((new Integer(soundFrequency)).toString() + " Hz");
        // shut up !
        switchAudio(false);
        // switch audio stream to new sound sample
        try {
          soundStream = new sun.audio.ContinuousAudioDataStream(new
                            sun.audio.AudioData(mu));
        } catch(Exception e) { }
        // listen
        switchAudio(true);
        // Adapt animation settings
        double prop = (double)freqValue / (double)MAX_FREQ;
        ovalW = (int)(prop * cw);
        ovalH = (int)(prop * ch);
        ovalX = (int)((cw - ovalW) / 2);
        ovalY = (int)((ch - ovalH) / 2);
        int r = (int)(255 * prop);
        int b = (int)(255 * (1.0 - prop));
        int g = (int)(511 * (.5d - Math.abs(.5d - prop)));
        ovalColor = new Color(r, g, b);
        // start the timer
        startTime = System.currentTimeMillis();
        // things are fixed
        changed = false;
      }
    //                               Thread
      public void start() {
        // create thread
        if (runner == null) {
          runner = new Thread(this);
          runner.start();
        }
      }
      public void run() {
        // infinite loop
        while (true) {
          // Volume or Frequency has changed ?
          if (changed)
            this.getChanges();
          // a touch of hallucination
          repaint();
          // let the children sleep. Shut up if inactive during more
          // than the fixed time.
          if (System.currentTimeMillis() - startTime > fixedTime)
            switchAudio(false);
          // let the computer breath
          try { Thread.sleep(100); }
          catch (InterruptedException e) { }
        }
      }
      public void stop() {
        this.cleanup();
      }
      public void destroy() {
        this.cleanup();
      }
      private synchronized void cleanup() {
        // shut up !
        switchAudio(false);
        // kill the runner thread
        if (runner != null) {
          try {
            runner.stop();
            runner.join();
            runner = null;
          } catch(Exception e) { }
        }
      }
    //                     AdjustmentListener Interface
      public void adjustmentValueChanged(AdjustmentEvent e) {
        Object source = e.getSource();
        // Volume range : 0 - 10000
        // ! Scrollbar value range is inverted.
        // ! 100 = multiplier coefficient.
        if (source == barVolume) {
          amplitude = (MAX_AMPL - barVolume.getValue()) * amplMultiplier;
          changed = true;
        }
        // Frequency range : 97 - 3591 Hz
        // ! Scrollbar value range represents a logarithmic function.
        //   The purpose is to assign more room for low frequency values.
        else if (source == barFreq) {
          freqValue = barFreq.getValue();
          frequency = (int)Math.pow(1.2d, (double)(freqValue + 250) / 10.0);
          changed = true;
        }
      }
    //                     MouseListener Interface
      public void mouseClicked(MouseEvent e) { }
      public void mouseEntered(MouseEvent e) {
        // text color rollover
        labelNS.setForeground(activeLinkColor);
        labelNS.setFont(activeLinkFont);
        showStatus("Visit Neural Semantics");
      }
      public void mouseExited(MouseEvent e) {
        // text color rollover
        labelNS.setForeground(inactiveLinkColor);
        labelNS.setFont(inactiveLinkFont);
        showStatus("");
      }
      public void mousePressed(MouseEvent e) {
        try {
          java.net.URL url = new java.net.URL("http://www.neuralsemantics.com/");
          AppletContext ac = getAppletContext();
          if (ac != null)
            ac.showDocument(url);
        } catch(Exception ex){ }
      }
      public void mouseReleased(MouseEvent e) { }
    //                              Painting
      public void update(Graphics g) {
        Graphics canvasGraph = canvas.getGraphics();
        if (img == null) {
          // get canvas dimensions
          cw = canvas.getSize().width;
          ch = canvas.getSize().height;
          // initialize offscreen image
          img = createImage(cw, ch);
          graph = img.getGraphics();
        }
        // offscreen painting
        graph.setColor(bgColor);
        graph.fillRect(0, 0, cw, ch);
        graph.setColor(ovalColor);
        graph.fillOval(ovalX, ovalY, ovalW, ovalH);
        // canvas painting
        if (canvasGraph != null) {
          canvasGraph.drawImage(img, 0, 0, canvas);
          canvasGraph.dispose();
        }
      }
    //                          Sound processing
      // Creates a sound wave from scratch, using predefined frequency
      // and amplitude.
      private byte[] getWave(int freq, int ampl) {
        int lin;
        // calculate the number of samples in one sinewave period
        // !! change this to multiple periods if you need more precision !!
        int nSample = (int)(rate / freq);
        // calculate output wave frequency
        soundFrequency = (int)(rate / nSample);
        // create array of samples
        byte[] wave = new byte[nSample];
        // pre-calculate time interval & constant stuff
        double timebase = 2.0 * Math.PI * freq / rate;
        // Calculate samples for a single period of the sinewave.
        // Using a single period is no big precision, but enough
        // for this applet anyway !
        for (int i=0; i<nSample; i++) {
          // calculate PCM sample value
          lin = (int)(Math.sin(timebase * i) * ampl);
          // convert it to mu-law
          wave[i] = linToMu(lin);
        }
        return wave;
      }
      private static byte linToMu(int lin) {
        int mask;
        if (lin < 0) {
          lin = -lin;
          mask = 0x7F;
        } else {
          mask = 0xFF;
        }
        if (lin < 32)
          lin = 0xF0 | 15 - (lin / 2);
        else if (lin < 96)
          lin = 0xE0 | 15 - (lin-32) / 4;
        else if (lin < 224)
          lin = 0xD0 | 15 - (lin-96) / 8;
        else if (lin < 480)
          lin = 0xC0 | 15 - (lin-224) / 16;
        else if (lin < 992)
          lin = 0xB0 | 15 - (lin-480) / 32;
        else if (lin < 2016)
          lin = 0xA0 | 15 - (lin-992) / 64;
        else if (lin < 4064)
          lin = 0x90 | 15 - (lin-2016) / 128;
        else if (lin < 8160)
          lin = 0x80 | 15 - (lin-4064) / 256;
        else
          lin = 0x80;
        return (byte)(mask & lin);
      }
    //                             Applet info
      public String getAppletInfo() {
        String s = "The jazz machine" + LSEP + LSEP +
                   "A music synthetizer applet" + LSEP +
                   "Copyright (c) Neural Semantics, 2000-2002" + LSEP + LSEP +
                   "Home page : http://www.neuralsemantics.com/";
        return s;
      }
      private void setAppletParams() {
        // read the HTML showfreq parameter
        String param = getParameter("showfreq");
        if (param != null)
          if (param.toUpperCase().equals("OFF"))
            showFreq = false;
        // read the HTML backcolor parameter
        bgColor = changeColor(bgColor, getParameter("backcolor"));
        // read the HTML controlcolor parameter
        ctrlColor = changeColor(ctrlColor, getParameter("controlcolor"));
        // read the HTML textcolor parameter
        fgColor = changeColor(fgColor, getParameter("textcolor"));
        // read the HTML fontsize parameter
        param = getParameter("fontsize");
        if (param != null) {
          try {
            fontSize = Integer.valueOf(param).intValue();
          } catch (NumberFormatException e) { }
        }
      }
      private Color changeColor(Color c, String s) {
        if (s != null) {
          try {
            if (s.charAt(0) == '#')
              c = new Color(Integer.valueOf(s.substring(1), 16).intValue());
            else
              c = new Color(Integer.valueOf(s).intValue());
          } catch (NumberFormatException e) { e.printStackTrace(); }
        }
        return c;
      }
    }
    Thanks, LIZ
    PS: If you can help, how do I give the Duke Dollars to you?

    http://www.google.ca/search?q=java+oval+shape+to+plot&hl=en&lr=&ie=UTF-8&oe=UTF-8&start=10&sa=N
    Ask the guy who made it for more help

  • How to: IT Virtualization career with VMware as rock-solid foundation

    Hello all,
    I'm considering exactly what the title says. I've spent the better part of the past month reading/watching about the topic. I mainly researched the VMware product line (really, an ecosystem), along with Certification paths. I'm leaning towards the main DCV dish, with a side of CMA as needs come by. I am now building some sort of 'plan' for myself, contemplating the next 2-5 years or so with a VCP6-DCV goal in mind. There are a number of questions I need to ask to the actual pros, though. (highlighted in bold for anyone wishing to skip the following wall of text)
    First of all, I'm heavily questioning my background. I wish it were enough to dive deep into vSphere right now, but is it? I mean, beyond VCA... I like to think "I can do this, I'm good admin material, I know it", but would an employer see it through the same lens? Probably not. My question is, would you hire ─ at the bottom of the IT/VMware ladder of course ─ this individual? Here's a quick, honest-to-god profile (please skip if you wish as none of this italic part speaks of VMware).
    I'm 32, male, I live in France (but plan to move abroad to a more dynamic region, in terms of technology and the economy in general).
    My 'official' pro background in IT is fairly limited ─ back in 2005 I trained & worked (at the same time) for about a year and a half on general admin/network. So, mostly windows 2000-2003-XP, general IT concepts; we had an HQ with about 30 users and 30 distant sites (car rental company present all over Paris), everything VNC'ed to the server room which hosted a dozen Dell Edge something (distant profiles etc.). Applications such as MS Office or credit card payments (Wynid) were served virtually through Citrix, and I had the opportunity to even run it at home in my lab to learn. It was back in 2005, but the basic concepts of networking, directory management or virtualization seem pretty clear to me, even as of 2015.
    At that time I was young. Well, younger, at least. When I realized that what needed fixing weren't really the computers but the users (j/k... or am I?), I cut short my IT experience and did other, unrelated jobs, even spending some time back at the university to learn law and a bit of sociology (political theory, social interactions, things like that). Enriching, and I can use all of this to fuel my vision of technology, but now I want back in IT as a professional. It's an obvious vocation I've been ignoring for too long.
    Ever since my teens, I've been dabbling in computers.
    Learned Dreamweaver 3 with a bible in just over 3 months and made my first website back in 2000.
    Since 2005 I've always had a windows server at home ─ it seems like it's not really my home unless it's a domain. I've been sharing coffee daily with ADDS ever since.
    I like starting from scratch, designing and building entirely new IT systems (my home is my lab) every other year.
    All my hobbies, at some point, got "computerized". For instance I learned everything I could about metadata for media/content files, from the Dublin Core principles to real-world popular DBs schemas (MusicBrainz notably) passing by actual implementations on my system, just because I need my music and video files to be properly sorted and browsed "smartly", you know... Also built a few home audio/video studios setups for friends...
    Many people around me come to me for advice on technology, of any kind ─ I like science and tech, always reading up the latest and greatest, and apparently able to research what I have yet to know. I get good feedback on my advice, more often than not.
    These are just a few examples of a trend in my life that says "I don't work in IT, but on a daily basis I spend a few hours, often having actual fun, with geeky-IT tasks, even just reading".
    I'm extremely thorough by nature. Reading and carefully understanding, checking and applying a Best Practices whitepaper is often a paradoxical treat to me ─ it may be somewhat boring come page 14, but it's just beautiful when a system runs close to perfection, I often find a deep motivation for these ironing-out/optimization tasks. I'm also very practical, down to earth. I love ideas, I get my good share of it, but ultimately what matters to me are actual results ─ though I never rush things unless I have to, I'm infinitely patient with a project. I spend hours designing, testing (seeking real-world proofs-of-concept) so I don't mess up in actual production. I like things neat, sober and efficient, from variables to comments and names passing by external documentation ─ which I highly value, probably stems from my research-oriented mind. I do document my home setups to some degree!
    Do these things matter? Do they help me, along with these certifications, or is my professional IT inexperience, at 32, the end of that road?
    From what I've read, I'm fairly confident that with the right self-traning, hands on, I have a shot at VCA6-DCV. Practical questions:
    I can get a good book, I can get online training, I probably can build a decent lab for virtualization at home. But how can I get hands-on with the software?
    Evaluation doesn't seem like an option since it's limited in time.
    Unless, maybe, one can get several trials as long as everything is freshly reinstalled. Being a lab we obviously don't care to break everything every 60 days or so, but does it work like this with VMware trials?
    Is there any way I can run a non-productive fully-featured VMware ecosystem at home, just for the sake of training? (note: I'm willing to pay for that right, just can't or rather won't afford $2000 licences of course, I'd rather spend that on training classes)
    I've discovered the fabulous VMware Hands-on Lab.
    Is it really free, unlimited?
    If yes, I suppose that's the recommended way to get hands-on VMware products?
    Should it be the only way, how can I verify that my practices are good in the long run (days, months, years...) if I can't run the systems for more than a few hours? My understanding of IT is that making things work at first is only the prologue to a long day-to-day care and management of systems and apps, and I wouldn't want to miss that kind of understanding with mission-critical infrastructure such as virtualization. Seems like the mother of all critical-to-monitor and analyze/report/optimize etc. That doesn't happen in a few hours. I find myself puzzled by that apparent contradiction between training and real work.
    Now, let's assume I pass my VCA6. Then what? How do I go from there to VCPx?
    With my rather unconventional background, as seen from IT, can I land a level 1 vmware admin job if I get a VCA6-DCV? (I'm doubting, thus also considering maybe VCA6-CMA as well, to demonstrate *more* motivation and skills). I mean, are these VCA's even remotely enough, or should I brush up, say, on my Windows server/system admin and get a basic certification there before I aim at VCPx VMware certifs?
    Assuming I manage, eventually, to have a job involving VMware datacenter products (ESXi cluster, vSphere, vCenter etc.), I understand that 2-5 years worth of pro experience are recommended to consider VCP-DCV. I assume a motivated individual, training at home on top of work, getting involved socially with the community and so on, would probably fall in the lower end of that range. From your experience, is being hands-on at work 'enough' to truly understand and master (read: work skillfully with) VMware products? (again, with VCP in sight) I'm asking since, more often that not, you don't touch critical settings, unless you absolutely have to, in a prod environment (for instance, if my company, back in 2005, hadn't allowed me to run citrix at home for a few months for training's sake, I just wouldn't have gotten my hands dirty enough with it to be able to administer it, later on this job). Which leads us back to my previous questions about hands-on in a lab environment. Which lab, where, and how?
    And importantly enough, about how much would it cost me, overall, from VCA to VCP, including one mandatory class, and the necessary long-term hands-on training?
    Sorry for such a long post but I wanted to hear back from the community with enough understanding of my situation. Please be frank and don't hold back, I need to hear the truth about my chances at this, and the sooner I realize how to proceed, the better. Currently, I'm beginning to train on ESXi and vCenter Server (awesome products, btw!) Any help in reaching my VCP endgame will be greatly appreciated, so thank you in advance for any light you might shed on my path.
    PS: I apologize for any English mistakes; it's not my mother tongue.

    Thank you very much for your answer.
    I think I do hear you. Assuming I can clear that first step you suggest (get a job supporting a company's IT infrastructure, probably at the desktop), I then need to ramp up towards servers and eventually, it's somehow up to me to either help bring VMware to that company, or join another one where VMware is already present. And, as usual in business, ROI/TCO is the sinews of war. I suppose VMware, as a global IT leader, is somehow built on that premise. All of this makes a lot of sense to me. I think it will help me when making future choices. Very practical ideas are already popping up in my head.
    I think, for now, I'll focus on
    getting that first job in IT infrastructure,
    while training for my VCA at home.
    Then see from there. Regardless of my job, I can probably keep on self-training towards VCP ─ it never hurts to learn, and ultimately it may help me make the best out of a required class  training.
    Just a follow-up question, specifically about VMware and security. I'm digging this way because, thinking of your suggestions, here's one aspect of IT in particular that seems a very good bet to me. Here's the context in France (and several European countries):
    Studies show that small and medium-sized French businesses (SMBs, up to 250 employees) have mostly missed the digital revolution of the last decade or so. The root causes go deep, as there are about 100,000 IT jobs available in France because of a lack of skilled workers, and the general population shows very poor technological culture overall. Thus, there's a lot to do in the backend to ease that necessary transition (cue for newcomers like me, willing to enter this market, I suppose).
    More specifically, there is a huge need for security, which is, too often here, nowhere near acceptable standards. It would be easy to blame IT departments, but they in turn blame their CEOs for not understanding the issue ─ whose main reasons for neglecting security are often misconstrued or outdated perceptions: increased cost, tediousness of the end-user experience; but more often than not, it's just blatant unawareness of the risks.
    I think VMware delivers very good solutions to this issue.
    Correct me if I'm wrong, but I understand that virtualization brings a layer of abstraction that makes it inherently superior, from a security perspective, to physical equivalents. And I think it's obvious the end-user experience, should it be different, is better virtualized than not. Therefore ─ you see where I'm going with this: if I had to show a company how VMware is interesting, not only for saving costs or ease of management/deployment, but above all in terms of security, what would be the main selling points, specifically? Also when training myself, which products, which specific designs or implementations or features, in the vast VMware ecosystem, are key to security if I want to bring that to my company? I'd like to have my eyes on that from the beginning. Basically knowing what to say when, one fine day, a manager/recruiter asks me: "how would using VMware be better for our security?" or "how will sponsoring you for VCP-DCV help us in this regard"? (again, rather from a small or medium-sized business perspective)
    I'm thinking SSO, vShield and vSECR off the top of my head, but I confess the whole set of VMware server products and features is still a bit overwhelming to me. Obviously I'll do the bulk of the research myself, I just need a few pointers.
    Thanks again for the inspiration and support!

  • Migrating E&M immediate start from 2821 VGW to AS5350XM but problems with signalling

    Hi,
    We currently have 2821 VGW platforms running E&M Immediate start. The T1s go into the DACS and the IP interface connects to the SIP server with a simple number plan.
    So, when a T1 DS0 signals, the VGW talks to the SIP server, the SIP server does a database lookup and talks back to the VGW and passes the connection details.
    However we are now migrating to the AS5350XM platform and replicating the config from the 2821 to the AS5350XM is causing signalling issues.
    Here is a sample of the existing 2821 config:
    voice rtp send-recv
    voice service pots
    voice service voip
     sip
      bind control source-interface GigabitEthernet0/0
      bind media source-interface GigabitEthernet0/0
      rel1xx disable
      min-se 1700
    voice class codec 1
     codec preference 1 g711ulaw
     codec preference 2 g729r8
    controller T1 1/0
     ds0-group 0 timeslots 1 type e&m-immediate-start
     ds0-group 1 timeslots 2 type e&m-immediate-start
     ds0-group 2 timeslots 3 type e&m-immediate-start
    voice-port 1/0:0
     define Tx-bits idle 1111
     define Tx-bits seize 0000
     define Rx-bits idle 1111
     define Rx-bits seize 0000
     timeouts wait-release 1
     connection plar 1000
    voice-port 1/0:1
     define Tx-bits idle 1111
     define Tx-bits seize 0000
     define Rx-bits idle 1111
     define Rx-bits seize 0000
     timeouts wait-release 1
     connection plar 1001
    voice-port 1/0:2
     define Tx-bits idle 1111
     define Tx-bits seize 0000
     define Rx-bits idle 1111
     define Rx-bits seize 0000
     timeouts wait-release 1
     connection plar 1002
    dial-peer voice 1000 pots
     destination-pattern 1000
     port 1/0:0
    dial-peer voice 1001 pots
     destination-pattern 1001
     port 1/0:1
    dial-peer voice 1002 pots
     destination-pattern 1002
     port 1/0:2
    dial-peer voice 1 voip
     preference 1
     destination-pattern .T
     session protocol sipv2
     session target ipv4:192.168.150.2:5060
     session transport udp
     voice-class codec 1
     voice-class sip options-keepalive retry 1
    sip-ua
     retry invite 2
     timers trying 150
     sip-server ipv4:192.168.150.2:5060
    Here is the same config on the AS5350XM:
    voice rtp send-recv
    voice service pots
    voice service voip
     sip
      bind control source-interface GigabitEthernet0/0
      bind media source-interface GigabitEthernet0/0
      rel1xx disable
      min-se 1700 session-expires 1700
    voice class codec 1
     codec preference 1 g711ulaw
     codec preference 2 g729r8
    controller T1 3/0:1
     framing ESF
     ds0-group 0 timeslots 1 type e&m-immediate-start
     ds0-group 1 timeslots 2 type e&m-immediate-start
     ds0-group 2 timeslots 3 type e&m-immediate-start
     cas-custom 0
      define-abcd loop-open 1 1 1 1
      define-abcd loop-closure 0 0 0 0
     cas-custom 1
      define-abcd loop-open 1 1 1 1
      define-abcd loop-closure 0 0 0 0
     cas-custom 2
      define-abcd loop-open 1 1 1 1
      define-abcd loop-closure 0 0 0 0
    voice-port 3/0:1:0
     timeouts wait-release 1
     connection plar 1000
    voice-port 3/0:1:1
     timeouts wait-release 1
     connection plar 1001
    voice-port 3/0:1:2
     timeouts wait-release 1
     connection plar 1002
    dial-peer voice 1000 pots
     destination-pattern 1000
     port 3/0:1:0
    dial-peer voice 1001 pots
     destination-pattern 1001
     port 3/0:1:1
    dial-peer voice 1002 pots
     destination-pattern 1002
     port 3/0:1:2
    dial-peer voice 1 voip
     preference 1
     destination-pattern .T
     session protocol sipv2
     session target ipv4:192.168.150.2:5060
     session transport udp
     voice-class codec 1  
     voice-class sip options-keepalive retry 1
    sip-ua
     retry invite 2
     timers trying 150
     sip-server ipv4:192.168.150.2:5060
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    What I did notice was the following:
    1) Defining the custom loop-open and loop-closed bit patterns is different: on the 2821 you do it on the voice port, while on the AS5350 it is not supported on the voice port but appears to be configurable via the T1 controller. I'm not sure if the AS5350 needs any further E&M signalling configuration, as the problem lies in the signalling: when the line is cleared down, the AS5350 appears to start ringing it again instead of following the signalling pattern.
    2) A "show voice port" on the AS5350XM does not show the E&M signalling details, whereas it does on the 2821:
    AS5350XM:
    5350VGW#sh voice port
    DS0 Group 3/0:1:0 - 3/0:1:0
     Type of VoicePort is CAS
     Operation State is DORMANT
     Administrative State is UP
     No Interface Down Failure
     Description is not set
     Noise Regeneration is enabled
     Non Linear Processing is enabled
     Non Linear Mute is disabled
     Non Linear Threshold is -21 dB
     Music On Hold Threshold is Set to -38 dBm
     In Gain is Set to 0 dB
     Out Attenuation is Set to 0 dB
     Echo Cancellation is enabled
     Echo Cancellation NLP mute is disabled
     Echo Cancellation NLP threshold is -21 dB
     Echo Cancel Coverage is set to 128 ms
     Echo Cancel worst case ERL is set to 6 dB
     Playout-delay Mode is set to adaptive
     Playout-delay Nominal is set to 60 ms
     Playout-delay Maximum is set to 1000 ms
     Playout-delay Minimum mode is set to default, value 40 ms
     Playout-delay Fax is set to 300 ms
     Connection Mode is plar
     Connection Number is 1000
     Initial Time Out is set to 15 s
     Interdigit Time Out is set to 10 s
     Call Disconnect Time Out is set to 60 s
     Ringing Time Out is set to 180 s
     Wait Release Time Out is set to 1 s
     Spe country is not configured
     Region Tone is set for US
     Station name None, Station number None
     Translation profile (Incoming):
     Translation profile (Outgoing):
     lpcor (Incoming):
     lpcor (Outgoing):
     DS0 channel specific status info:
                                          IN      OUT
    PORT            CH  SIG-TYPE    OPER STATUS   STATUS    TIP     RING
    =============== == ============ ==== ======   ======    ===     ====
    2821:
    2821VGW#sh voice port
    recEive and transMit Slot is 0, Subslot is 0, Sub-unit is 0, Port is 0
     Type of VoicePort is E&M
     Operation State is DORMANT
     Administrative State is UP
     No Interface Down Failure
     Description is not set
     Noise Regeneration is enabled
     Non Linear Processing is enabled
     Non Linear Mute is disabled
     Non Linear Threshold is -21 dB
     Music On Hold Threshold is Set to -38 dBm
     In Gain is Set to 0 dB
     Out Attenuation is Set to 0 dB
     Echo Cancellation is enabled
     Echo Cancellation NLP mute is disabled
     Echo Cancellation NLP threshold is -21 dB
     Echo Cancel Coverage is set to 128 ms
     Echo Cancel worst case ERL is set to 6 dB
     Playout-delay Mode is set to adaptive
     Playout-delay Nominal is set to 60 ms
     Playout-delay Maximum is set to 1000 ms
     Playout-delay Minimum mode is set to default, value 40 ms
     Playout-delay Fax is set to 300 ms
     Connection Mode is plar
     Connection Number is 1000
     Initial Time Out is set to 15 s
     Interdigit Time Out is set to 10 s
     Call Disconnect Time Out is set to 60 s
     Ringing Time Out is set to 180 s
     Wait Release Time Out is set to 1 s
     Companding Type is u-law
     Rx  A bit no conditioning set
     Rx  B bit no conditioning set
     Rx  C bit no conditioning set
     Rx  D bit no conditioning set
     Tx  A bit no conditioning set
     Tx  B bit no conditioning set
     Tx  C bit no conditioning set
     Tx  D bit no conditioning set
     Rx Seize ABCD bits = 0000 Custom pattern
     Rx Idle ABCD bits = 1111 Custom pattern
     Tx Seize ABCD bits = 0000 Custom pattern
     Tx Idle ABCD bits = 1111 Custom pattern
     Ignored Rx ABCD bits =  BCD
     Region Tone is set for US
     Analog Info Follows:
     Currently processing none
     Maintenance Mode Set to None (not in mtc mode)
     Number of signaling protocol errors are 0
     Station name None, Station number None
     Translation profile (Incoming):
     Translation profile (Outgoing):
     lpcor (Incoming):
     lpcor (Outgoing):
     Voice card specific Info Follows:
     Operation Type is 2-wire
     E&M Type is 1
     Signal Type is immediate
     Dial Out Type is dtmf
     In Seizure is inactive
     Out Seizure is inactive
     Digit Duration Timing is set to 100 ms
     InterDigit Duration Timing is set to 100 ms
     Pulse Rate Timing is set to 10 pulses/second
     InterDigit Pulse Duration Timing is set to 750 ms
     Clear Wait Duration Timing is set to 400 ms
     Wink Wait Duration Timing is set to 200 ms
     Wait Wink Duration Timing is set to 550 ms
     Wink Duration Timing is set to 200 ms
     Minimum received Wink Duration Timing is set to 140 ms
     Maximun received Wink Duration Timing is set to 290 ms
     Minimum seizure Timing is set to 50 ms
     Delay Start Timing is set to 300 ms
     Delay Duration Timing is set to 2000 ms
     Dial Pulse Min. Delay is set to 140 ms
     Percent Break of Pulse is 60 percent
     Auto Cut-through is disabled
     Dialout Delay is 300 ms
     Hookflash-in Timing is set to 480 ms
     Hookflash-out Timing is set to 400 ms
     DS0 channel specific status info:
                                          IN      OUT
    PORT            CH  SIG-TYPE    OPER STATUS   STATUS    TIP     RING
    =============== == ============ ==== ======   ======    ===     ====
    0/0/0:0          01  e&m-imd     dorm idle     idle                       
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Can anyone advise if further configuration is required on the AS5350XM in order to pass the E&M signalling correctly?

    Well, for me the forms were OK to open with 11g; I just had to correct the differences and recompile to run.
    That script is a ton of help! Well, you forgot to mention the Output_File parameter, but I could find that myself now that I knew what to look at. Now if I could also set it not to produce .err files when everything went fine, and set the output names so that the old compiled forms get overwritten...
    The SRW.* calls turned out to be things used in Reports, so it was hopeless to compile them in Forms, but they were also not needed.
    As for ENABLE_ITEM and DISABLE_ITEM, I found definitions for these in a library, and that proved sufficient:
    procedure enable_item
    ( p_menuName     in varchar2
    , p_menuItemName in varchar2
    ) is
      v_menuItem menuitem;
    begin
      v_menuItem := find_menu_item(p_menuName || '.' || p_menuItemName);
      if (not id_null(v_menuItem))
      and (get_menu_item_property(v_menuItem, visible) = 'TRUE')
      then
        set_menu_item_property(v_menuItem, enabled, property_true);
      end if;
    end;
    procedure disable_item
    ( p_menuName     in varchar2
    , p_menuItemName in varchar2
    ) is
      v_menuItem menuitem;
    begin
      v_menuItem := find_menu_item(p_menuName || '.' || p_menuItemName);
      if (not id_null(v_menuItem))
      and (get_menu_item_property(v_menuItem, visible) = 'TRUE')
      then
        set_menu_item_property(v_menuItem, enabled, property_false);
      end if;
    end;
    Some forms are still playing with me, but the compilation issues there don't seem to be connected with this topic... and now I have issues with Reports >.< but that requires a topic elsewhere. Thanks all!

  • [UCCx] Change volume of an audio file (java code)

    Hello guys,
    Thanks to the many examples I gathered on the subject, I was able to create a script that mixes two audio WAV files into a third one. Basically the goal is to mix a first audio file containing some speech with a second one containing some music, these files being encoded identically (8-bit, 8 kHz, mono WAV files). The resulting file must be encoded in the same format as the initial ones.
    The mixing operation is performed thanks to the MixingAudioInputStream library (it can be found here).
    It is not the most beautiful Java code (I am no developer), but it works:
    Document doc1 = (Document) promptFlux1;
    Document doc2 = (Document) promptFlux2;
    Document docFinal = (Document) promptFinal;
    javax.sound.sampled.AudioFormat formatAudio = null;
    java.util.List audioInputStreamList = new java.util.ArrayList();
    javax.sound.sampled.AudioInputStream ais1 = null;
    javax.sound.sampled.AudioInputStream ais2 = null;
    javax.sound.sampled.AudioInputStream aisTemp = null;
    javax.sound.sampled.AudioFileFormat formatFichierAudio = javax.sound.sampled.AudioSystem.getAudioFileFormat(new java.io.BufferedInputStream(doc1.getInputStream()));
    java.io.File fichierTemp = java.io.File.createTempFile("wav", "tmp");
    ais1 = javax.sound.sampled.AudioSystem.getAudioInputStream(doc1.getInputStream());
    formatAudio = ais1.getFormat();
    aisTemp = javax.sound.sampled.AudioSystem.getAudioInputStream(doc2.getInputStream());
    byte[] bufferTemp = new byte[(int)ais1.getFrameLength()];
    int nbOctetsLus = aisTemp.read(bufferTemp, 0, bufferTemp.length);
    java.io.ByteArrayInputStream baisTemp = new java.io.ByteArrayInputStream(bufferTemp);
    ais2 = new javax.sound.sampled.AudioInputStream(baisTemp, formatAudio, bufferTemp.length/formatAudio.getFrameSize());
    audioInputStreamList.add(ais1);
    audioInputStreamList.add(ais2);
    MixingAudioInputStream mixer = new MixingAudioInputStream(formatAudio, audioInputStreamList);
    javax.sound.sampled.AudioSystem.write(mixer, formatFichierAudio.getType(), fichierTemp);
    return fichierTemp;
    The only downside to this is that the music can be a little loud compared to the speech. So I am now trying to use the AmplitudeAudioInputStream library to adjust the volume of the second file (it can be found here).
    Here are the additional lines I wrote to do this:
    ais2 = new javax.sound.sampled.AudioInputStream(baisTemp, formatAudio, bufferTemp.length/formatAudio.getFrameSize());
    org.tritonus.dsp.ais.AmplitudeAudioInputStream amplifiedAudioInputStream = new org.tritonus.dsp.ais.AmplitudeAudioInputStream(ais2, formatAudio);
    amplifiedAudioInputStream.setAmplitudeLinear(0.2F);
    audioInputStreamList.add(ais1);
    audioInputStreamList.add(amplifiedAudioInputStream);
    MixingAudioInputStream mixer = new MixingAudioInputStream(formatAudio, audioInputStreamList);
    javax.sound.sampled.AudioSystem.write(mixer, formatFichierAudio.getType(), fichierTemp);
    return fichierTemp;
    The problem is I always get the following exception when executing the code:
    could not write audio file: file type not supported: WAVE; nested exception is: java.lang.IllegalArgumentException: could not write audio file: file type not supported: WAVE (line 30, col:2)
    The error is on the last line (the write method), but after many hours of tests and research I cannot understand why this is not working... so I have added some "debugging" information to the code:
    System.out.println("file1 audio file format: " + formatFichierAudio.toString());
    System.out.println("file1 file format: " + ais1.getFormat().toString());
    System.out.println("file2 file format: " + ais2.getFormat().toString());
    System.out.println("AIS with modified volume file format: " + amplifiedAudioInputStream.getFormat().toString());
    System.out.println("Mixed AIS (final) file format: " + mixer.getFormat().toString());
    AudioFileFormat.Type[] typesDeFichiers = AudioSystem.getAudioFileTypes(mixer);
    for (int i = 0; i < typesDeFichiers.length; i++) {
        System.out.println("Mixed AIS (final) #" + (i + 1) + " supported file format: " + typesDeFichiers[i].toString());
    }
    System.out.println("Is WAVE format supported by Mixed AIS (final): " + AudioSystem.isFileTypeSupported(AudioFileFormat.Type.WAVE, mixer));
    System.out.println("Destination file format: " + (AudioSystem.getAudioFileFormat((java.io.File) f)).toString());
    AudioInputStream aisFinal = AudioSystem.getAudioInputStream(f);
    System.out.println("Is WAVE format supported by destination file: " + AudioSystem.isFileTypeSupported(AudioFileFormat.Type.WAVE, aisFinal));
    try {
        // Write the resulting stream to a file
        javax.sound.sampled.AudioSystem.write(mixer, formatFichierAudio.getType(), fichierTemp);
        return fichierTemp;
    } catch (Exception e) {
        System.err.println("Caught Exception: " + e.getMessage());
    }
    Which gives the following result during execution:
    file1 audio file format: WAVE (.wav) file, byte length: 146964, data format: ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame, , frame length: 146906
    file1 file format: ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame,
    file2 file format: ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame,
    AIS with modified volume file format: ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame,
    Mixed AIS (final) file format: ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame,
    Mixed AIS (final) #1 supported file format: WAVE
    Mixed AIS (final) #2 supported file format: AU
    Mixed AIS (final) #3 supported file format: AIFF
    Is WAVE format supported by Mixed AIS (final): true
    Destination file format: WAVE (.wav) file, byte length: 146952, data format: ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame, , frame length: 146906
    Is WAVE format supported by destination file: true
    So everything tends to show that the format should be supported and the mixed AIS should be written to the file... but I still get the error.
    I am really confused here, if someone could help it would be great.
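    In the meantime, a fallback I am considering is to skip AmplitudeAudioInputStream entirely: decode the u-law stream to 16-bit linear PCM with the stock converters, scale the samples myself, and re-encode to u-law before mixing. Here is a minimal standalone sketch of that idea (the class and method names are mine, and I am assuming the default javax.sound.sampled ULAW/PCM converters accept these formats; I have not tested this inside UCCX):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class UlawGain {

    // Decode u-law to 16-bit linear PCM, scale every sample by 'gain',
    // then re-encode to u-law so the result matches the original format.
    public static AudioInputStream withGain(AudioInputStream ulawIn, float gain)
            throws IOException {
        AudioInputStream pcmIn =
                AudioSystem.getAudioInputStream(AudioFormat.Encoding.PCM_SIGNED, ulawIn);
        AudioFormat pcmFmt = pcmIn.getFormat();   // 16-bit signed, same sample rate
        byte[] pcm = pcmIn.readAllBytes();        // fine for short prompt files
        boolean big = pcmFmt.isBigEndian();
        for (int i = 0; i + 1 < pcm.length; i += 2) {
            int hi = big ? i : i + 1;             // index of the high-order byte
            int lo = big ? i + 1 : i;
            int sample = (pcm[hi] << 8) | (pcm[lo] & 0xFF);
            sample = Math.max(-32768, Math.min(32767, Math.round(sample * gain)));
            pcm[hi] = (byte) (sample >> 8);
            pcm[lo] = (byte) sample;
        }
        AudioInputStream scaled = new AudioInputStream(
                new ByteArrayInputStream(pcm), pcmFmt,
                pcm.length / pcmFmt.getFrameSize());
        // Back to u-law so it can be mixed with the other (unmodified) stream.
        return AudioSystem.getAudioInputStream(AudioFormat.Encoding.ULAW, scaled);
    }
}
```

    The returned stream has the same ULAW 8 kHz mono format as the input, so in theory it could replace amplifiedAudioInputStream in the audioInputStreamList.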
    Thanks in advance!
    PS: I have attached print screens of the actual script, without the "volume adjustment" code.

    Hi,
    well I started writing a similar solution but it did not work either so I just put it on hold.
    I also tried to get hold of the streaming "device" abstraction of UCCX to adjust the volume while "playing" but that was a dead end, too, unfortunately.
    Sorry about my previous comment on your StackOverflow post; at the time I thought it was kind of out of context, but I believe you just wanted to ask about this issue on all available forums.
    G.

  • Non-java midi conversion

    I have a bunch of MIDI files that I want to be able to play in my java program. The problem is, I composed them using a program called Midisoft Recording Session, which plays them back with the sound bank provided by my hardware. When I play them in a java program they sound completely different (sometimes just plain awful). I know that I could just have my program play them back with my hardware's sound bank, but then I would lose some cross-platform functionality. Is there a place where I could find a java sound bank that sounds like a typical sound card's bank? Or is there a better solution, like converting to RMF first?? My current solution is to record my MIDIs to WAV format and compress them into u-law or A-law WAV files, but this results in some quality reduction. (CD-quality WAV files are just too big.)
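    For the WAV-to-u-law step of my current workaround, plain javax.sound.sampled seems able to do the re-encode itself, assuming the recorded WAV is already 16-bit PCM at the target sample rate (as far as I know the stock converters don't resample, so the rate reduction has to happen when recording). A rough sketch, with the class and file names made up by me:

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.UnsupportedAudioFileException;
import java.io.File;
import java.io.IOException;

public class ToUlawWav {

    // Re-encode a 16-bit PCM WAV (already at the target sample rate)
    // as an 8-bit u-law WAV with the same rate and channel count.
    public static void convert(File pcmWav, File ulawWav)
            throws IOException, UnsupportedAudioFileException {
        try (AudioInputStream pcm = AudioSystem.getAudioInputStream(pcmWav)) {
            AudioFormat src = pcm.getFormat();
            // Target format: u-law, 8 bits per sample, 1 byte per frame per channel.
            AudioFormat ulaw = new AudioFormat(AudioFormat.Encoding.ULAW,
                    src.getSampleRate(), 8, src.getChannels(),
                    src.getChannels(), src.getSampleRate(), false);
            try (AudioInputStream out = AudioSystem.getAudioInputStream(ulaw, pcm)) {
                AudioSystem.write(out, AudioFileFormat.Type.WAVE, ulawWav);
            }
        }
    }
}
```

    AudioSystem.write with Type.WAVE accepts the resulting ULAW stream, so the output is a u-law WAV rather than a linear PCM one.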

    I've tried the shadow option as some suggest but this trick doesn't seem to work on *captions in Photo Gallery pages*.
    You cannot change the caption's font style on a Photos page, or force text-to-graphic conversion, because the caption (and most everything else on the Photos, Albums, Blog and Podcast pages) is rendered by a JavaScript widget.
    I also tried the font mapping option with the plist, but my non-standard "fancy" font doesn't show in the list. I tried adding it manually by hand, but it still didn't work, perhaps because I'm not referencing the font name correctly. Do I need to add the .ttf or .dfont extension? Does it matter that my fancy font is in my home directory's font library folder rather than in the System font library folder?
    You need to use the font name that shows in 'Font Book' for FontMapping.plist to work.
    Font mapping won't work for the Photos page caption, as mentioned above, but you may be able to substitute another font.

  • Best (legal) way to record/convert audio

    hi,
    i'm developing an application that i hope to offer commercially.
    i'm interested in finding the best (and least expensive) *legal* method for recording audio using flash media server 3 (whatever version) and then converting it to WAV (e.g., 8 kHz, 8-bit mu-law) or a similar format.
    i have found a program called ffmpeg, which works very nicely. however, i cannot find info on whether this is a legal method for decoding audio in a .flv file for use in a commercial application. (i plan to manipulate the audio on the server side, then stream the result back in mp3 format or something similar.)
    are there features in the latest flash clients that i can take advantage of in order to eliminate (or reduce) any license fees? (i can force clients to upgrade their flash clients if necessary.) if not, what company do i have to deal with for licensing, and what's the best approach?
    any advice would be greatly appreciated.
    tom

    On Tue, 26 Feb 2008 21:29:06 +0000 (UTC), "tomjohnson3" <[email protected]> wrote:
    > i'm interested in finding the best (and least expensive) *legal* method for recording audio using flash media server 3 (whatever version) and then converting it to WAV (e.g., 8kHz, 8-bit mulaw) or a similar format.
    The recorded audio stream is encoded with the proprietary Nellymoser codec. The only legal way to decode it is to buy the Nellymoser ASAO SDK. It's worth doing if your estimated income from this project will be around $1M or so :)
    > i have found a program called ffmpeg, which works very nicely. however, i can not find info on whether this is a legal method for decoding audio in a .flv file for use in a commercial application.
    See above. And FFMPEG has an appropriate disclaimer on its homepage. It's very difficult to miss...

  • Streaming audio to MIDP device

    Hi folks.
    I'm new to the world of MIDlet programming...I'm trying to get a phone to play a stream of G711 (aka u-law) audio from an external location.
    I've successfully written a MIDlet that can grab a u-law wav file from a server (via http) and play it, but what I need to do is have the MIDlet play audio that is streaming from a location. Such that when an http request is made of the external location, it simply starts streaming u-law audio, indefinitely.
    When I capture packets on the wire, I can see the stream of audio being sent to the phone, but it doesn't play any audio.
    Is this possible? Any help would be appreciated.
    Thanks!
    Edited by: feh on May 12, 2008 1:20 PM

    Hello, I also need to play audio from a server via http, but using bluetooth connections. I can play from the localhost using the emulator but i don't know how to do it from the phone using bluetooth.
    Thanks.

  • VOIP can receive calls but can't start any

    I'm implementing VoIP on a Cisco 3662 with one NM-HDV-30 and a VWIC-1MFT-E1. In "show voice port summary" all the ports are idle and dorm. When I initiate a call, "show voice port summary" shows the port status as clearbk and there is no tone, and after about 1 minute the call disconnects. When the other side calls in, I can receive the call and it works well.
    When I run "debug voip ccapi inout" and "debug cch323 h225", an outgoing call displays only two lines of messages, without the number of the called party,
    but an incoming call displays a lot of messages with the calling number.

    The devices involved are as follows:
    CISCO3662
    IOS Cisco 3660 Series IOS IP PLUS IPSEC 56
    NM-HDV-1E1-30
    the debug commands I entered are
    debug cch323 h225
    debug voip ccapi inout
    the output message is
    01:44:11: cc_handle_periodic_timer: Calling the callback, ccTimerctx - 0x626C8750
    01:44:11: ccTimerStart: ccTimerctx - 0x626C8750
    when I enter
    sh voice port summary
    PORT CH SIG-TYPE ADMIN OPER STATUS STATUS EC
    2/0:0 01 r2-digital up dorm idle idle y
    2/0:0 02 r2-digital up dorm idle idle y
    show voice port
    sbsm3662# sh voice port
    R2 Slot is 2, Sub-unit is 0, Port is 0
    Type of VoicePort is R2
    Operation State is DORMANT
    Administrative State is UP
    No Interface Down Failure
    Description is not set
    Noise Regeneration is enabled
    Non Linear Processing is enabled
    Non Linear Mute is disabled
    Non Linear Threshold is -21 dB
    Music On Hold Threshold is Set to -38 dBm
    In Gain is Set to 0 dB
    Out Attenuation is Set to 3 dB
    Echo Cancellation is enabled
    Echo Cancellation NLP mute is disabled
    Echo Cancellation NLP threshold is -21 dB
    Echo Cancel Coverage is set to 8 ms
    Playout-delay Mode is set to default
    Playout-delay Nominal is set to 60 ms
    Playout-delay Maximum is set to 200 ms
    Playout-delay Minimum mode is set to default, value 40 ms
    Playout-delay Fax is set to 300 ms
    Connection Mode is normal
    Connection Number is not set
    Initial Time Out is set to 10 s
    Interdigit Time Out is set to 10 s
    Call Disconnect Time Out is set to 60 s
    Ringing Time Out is set to 180 s
    Wait Release Time Out is set to 30 s
    Companding Type is A-law
    Rx A bit no conditioning set
    Rx B bit no conditioning set
    Rx C bit no conditioning set
    Rx D bit no conditioning set
    Tx A bit no conditioning set
    Tx B bit no conditioning set
    Tx C bit no conditioning set
    Tx D bit no conditioning set
    Region Tone is set for US
    Station name None, Station number None
    Voice card specific Info Follows:
    Line Signalling Type is r2-digital
    Register Signalling Type is r2-compelled
    Country setting is china
    Answer Signal is group-b 1
    Category is set to 1
    NC Congestion is set to 4
    KA is set to 0
    KD is set to 0
    Caller Digits is set to 1
    Request Category is set to 0
    End of DNIS is set to False
    DNIS Digits min is 0 and max is 0
    ANI Digits min is 0 and max is 0
    Group A Callerid End is set to False
    Metering is off
    Release Ack is set to False
    Unused ABCD Bits Mask configured: 0 1 1 1
    Inverting ABCD Bits Mask configured: 0 0 0 0
    Debounce Time is set to 40ms
    Release Guard Time is set to 2000ms
    Seizure Ack Time is set to 100ms
    Answer Guard Time is set to 0ms
    ANI Timeout is set to 0s
    DS0 channel specific status info:
    IN OUT
    PORT CH SIG-TYPE OPER STATUS STATUS TIP RING
    when issue
    test dsp 2
    1
    Dsp firmware version: 3.6.15
    Maximum dsp count: 15
    On board dsp count: 9
    Jukebox available
    Total dsp channels available 36
    Total dsp channels allocated 30
    Total dsp free channels 6
    Quering dsp status......
    sbsm3662#
    01:43:51: dsp 6 is ALIVE
    01:43:51: dsp 7 is ALIVE
    2
    Voice port 2/0:0 has 30 time slots
    time slot 1: dsp_id=6, signal_channel=128(OPEN), voice_channel=0(IDLE)
    when I make a call
    and show voice port summary
    that is
    2/0:0 01 r2-digital up up idle clearbak y
    2/0:0 02 r2-digital up dorm idle idle y
    when I test dsp 2
    2
    what displayed is
    Voice port 2/0:0 has 30 time slots
    time slot 1: dsp_id=6, signal_channel=128(OPEN), voice_channel=0(ACTIVE)
    time slot 2: dsp_id=7, signal_channel=128(OPEN), voice_channel=0(IDLE)
    sbsm3662#
    When we call from another site to the local side, no problem appears.
    When I run "show diag" on the 3662, I can see all the NM modules and the VWIC-1MFT-E1, and there does not appear to be any problem with them.

  • Is FC Studio too much application for hobbyist?

    I visited my son at his college today. They have the academic version of the latest final cut studio for sale in the student store for $299. That's a steal compared to what it sells for in the Apple Store.
    My MBP 2.0 is a little over 3 years old and I'm not sure if it can handle the demands of FCS. I have about 20 gigs of space on my internal HD and a few external drives with about 1 TB of free space.
    I edit using FC express now and do ok. But the $299 price is almost too hard to pass up.
    I don't mind paying the $299 but I don't want to do it if my MBP would not be able to handle it.
    I'd appreciate your thoughts and comments.
    Thanks.
    Maurice

    C. L. wrote:
    but why should that stop someone who wants to use it, and pay money for it from doing so.
    Umm, because how you're suggesting they go about it, is a breach of the License Agreement? i.e. ILLEGAL ?
    Sorry, but the licenses are NOT open to our interpretation. Just because you think we should be able to act however we want, doesn't make it right. In doubt? Ask an attorney to clarify it for you. See, there is no gray area here. If you suppose Studio X is being strict, well that's fine: that's the only way to be. You either keep it 100% legit and legal. Or you don't! There is no middle-of-the-road here! Sorry.
    This is just like the people who ASSUME that it's A-Okay to use any music that they want in their videos because it's only for "private" use. Again, we don't have the luxury to interpret the law however we want. Despite our infinite ability to rationalize it.
    So how do you try the software? Go to an Apple Store and have an employee show you as much as possible with FCE and/or FCP. Play around with them for a couple hours. Insufficient? Too bad.
    And who says Apple is the ONLY way to go? Don't like the fact they don't have trial versions? Go to adobe.com and download their 30 day trial for Premiere! Tools are tools! Do you want to be an editor or a button-pusher? Point is: a great carpenter is great despite whose name is stamped on his hammer. Craftsman, Stanley - irrelevant. Sure you have to commit to "someone's buttons" to start pushing in order to get the editing experience - I get it! But FCP doesn't help you make editing decisions. Neither does Premiere, or AVID, or Vegas, Quantel, Smoke....
    I'm not so naive to think that there aren't 1,000's of people using FCP (& the Suite) illegally. However, when we get folks on the "fringe", like our OP here, it's our duty to bring them into the fold of 100% legitimacy. We should NOT be encouraging anything but this. Bending the rules IS breaking the rules!
    If we can guide our "peers", is it best to have them steal the software outright, bootleg it? Completely and blatantly break the law? Or is it better to encourage them to break the law a LITTLE bit, by spending some money, while still breaking the license agreement? Some money is better than nothing, right? Sounds like you're condoning the latter.

  • N82 Black using Garmin XT - No proximity alerts - ...

    Hi,
    I have the N82 black with Garmin XT installed.
    The maps for my country are detailed (unlike Nokia Maps), and the GPS works fine apart from not making any alert sound for proximity to POI's which I've installed.
    I have tried everything possible (www.POI-factory.com), including changing CSV files to GPX and MP3 alerts to WAV (44100 Hz), and even using the default alert setting - nothing!
    The POI's display with no problem, as do my BMP pictures, all in the right location.
    I am at the end of my tether now after spending several weeks trying to get the solution.
    This is very important, as where I'm located, there are dead-end tracks, jungle (YES Jungle!) and rivers and lakes appear from nowhere, so correct turns could mean the difference between life and death - literally!
    If anyone has got the Proximity alerts working on a N82 black, using Garmin XT, I would be very interested and grateful if they could take me through how they achieved it!
    I have posted this request all over different GPS - POI sites, but no-one seems familiar with either the phone or the Garmin Mobile XT program.
    The version I'm using is ****.60.
    Thanks in advance!
    Paul in SE Asia.
    Moderator note: Removed email address
    Message Edited by concordia on 25-Jul-2008 04:01 PM

    Have you tried contacting garmin support?
    It's clearly an issue with the program, not the phone.
