Multi-Channel Application

The following error occurs when trying to execute a multi-channel application in Openwave 6.2:
Request timed out.

Could you provide more details? What platform and version of Oracle9iAS Wireless are you using? The latest version should also support XHTML pages.
Rgds,
Thomas

Similar Messages

  • How to Get Multi-Channel Instruments to work... e.g. Battery 2, Ultrabeat

Hello there. I bought Battery 2 and haven't had problems using it as a stereo instrument, but I am having problems doing multi-channel applications. This also includes Ultrabeat.
I have heard of Battery 2 being buggy and such for Logic, but maybe I am the one who is buggy. What is the process of setting up a multi-channel instance? This is what I have done so far.
I have an environment with 32 Aux channels, 32 Busses, 75 Audio, 50 Instruments, and the appropriate Outs and Input Monitoring. I understand then that I choose Ultrabeat as my multi-channel instrument, then assign the aux channels' inputs as Battery 2 Output 3, 4, 5, and so forth. How do I get audio coming out of the aux channels? Or do I notate the notes in the aux channels, or in the multi-channel instrument? I am lost here and don't know what to do.
    Maybe I am going about this entirely the wrong way and maybe someone could make it more clear for me? Thanks peepz!
    ~trevor

    Vice-Versa to Phillip I have Battery2 but not Logic - I use DP, but I would imagine Logic must operate with a similar philosophy:
One instance of Battery2 as an AU or VST Instrument Track within DP declares 16 available MIDI targets and 32 (or 16 stereo, or any combination) available audio sources to DP; the first two audio channels are the stereo output from the Battery2 Instrument track, and the remaining 30 are available as sources for any Auxiliary Tracks subsequently created. Within Battery2, each Cell can be assigned to any of the available mono or stereo outputs; the number of outputs available is actually governed by the 'Outputs' settings under 'File>Options' within Battery2. I've noticed that DP always lists 15 stereo and 30 mono sources from B2 regardless of the number requested here, but you'll only be able to route Cells to what you've asked B2 for.
So you raise as many separate MIDI tracks as suits your inputting/editing requirements; I've never had cause to use more than one MIDI channel within my drum parts (it might be useful if using multiple MIDI input devices for your drum programming, I suppose), so all of mine are routed to the Battery2 Instrument track Ch.1 (which is the default input channel for all Cells). Then route the individual Cells within Battery2 to as many separate outputs as suits your mixing requirements; bring those separate outputs in via auxiliary tracks as Phillip describes, and you've 'multi-mic'd' your Battery2.
You can of course do a lot of sound tailoring per Cell within Battery2 (EQ/dynamics/filtering et al.), but I find it easier and more CPU-efficient to do that on the Aux tracks within DP; B2 can be a bit of a cycle hog if you ask it to do much more than trigger the sample, in my experience.
I've taken three times the space to say what Phillip's already said, essentially, but thought maybe you'd missed the principle of assigning the Cells to separate outputs within Battery2.
    G5 Dual 2.7, MacMini, iMac 700; P4/XP Desk & Lap.   Mac OS X (10.4.8)   mLan:01x/i88x; DP 5.1, Cubase SX3, NI Komplete, Melodyne.

  • An application for multi-channel measurements

    Does NI have a software solution for multi-channel measurements? I mean systems for measurements, tests and monitoring which contain numerous DAQ devices with thousands of sensors.
    I suppose the software for such system should have the following features:
    Instrument control
    Sensor management (type, s/n, accuracy, calibration data, next calibration date, measurement limits, etc.)
    Data acquisition
    Storing data in databases
    Data visualisation and analysis
    Report generation
    Tools for creating custom user interfaces / data visualisations for monitoring
As far as I know, DIAdem is great for data analysis, visualisation and report generation, but it's not suitable for the other tasks. With LabVIEW you can do anything, but it's not an "out-of-the-box" solution.
    Just to clarify what I'm talking about, here's an application that seems to fit the description. It's the HBM catman. Maybe someone worked with it? Do you know any analogues for it?

    Just to add to Hooovahh's comments.
NI has flat out stated that they do not want to make turnkey solutions. That would take away from their ability to make tools for people to create the solutions. That is why they have Alliance Partners. These partners take the tools made by NI and make really cool stuff. My latest project was a software package that helped a technician build a jet engine correctly so that the turbine blades do not come out and destroy the engine (just slightly important). I have also done some test systems for spacecraft avionics.
    So if you are really serious about this, I highly recommend finding an Alliance Partner to help you out.  If you want, give me a PM and I can work on getting you and a few people on my side to discuss your requirements and proceed from there.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

Multi-channel access in an SSO environment

    Hi,
Can someone tell me whether OID and SSO Server can support multi-channel access?
I want users coming from the Internet to have to do strong authentication (with a security certificate), but I want intranet users to be able to log in with light authentication (only username and password). Is it possible?

    Hi Andreas,
in the document they speak about different applications. But what if the application is the same, with a different reference for each channel? Do you think that would work fine? If I assign to application A ip=x.x.x.x for intranet users and define MediumSecurity_AuthPlugin, and for the same application A I assign a virtual host ip=y.y.y.y for internet users and define HighSecurity_AuthPlugin?
Another question: in the manual they refer to a partner application. Is it the same for an External application?
Thanks for your attention
    Regards
    Stefano

  • X-Fi: cannot get multi-channel sound or SPDIF passthrough

I've owned an X-Fi XtremeMusic since it first came out, and have been running it with an MCE 2005 system. I've had it hooked up to a home theater receiver via SPDIF and 5.1 analog, but have only had stereo speakers until today. So, right now I'm trying to get multi-channel sound to work. I'm using the latest drivers (have also tried the original drivers) and have all Microsoft patches applied.
First, I cannot seem to get it out of stereo mode. Using the analog outputs to the 5.1 analog input on my receiver and setting the X-Fi to 5.1 speakers refuses to produce sound from anything except front left/right. I've used the Speaker Connection Wizard, the THX Console, the Speaker & Headphone console... and no matter what speaker icon I click on, the sound comes out of the front left or right as if it is downmixing. I've also tried 4.0, 6.1 and 7.1 modes - no improvement. Also, multi-channel sound does not work in MCE setup, games, etc.
    Quite by accident I discovered that if I enable CMSS I can get *some* sort of sound from the rear speakers, but only in the sense of an effect. Also, with CMSS enabled, when testing the front center I get the word "front" out of the left/right speakers which then fades out before "center". If I test it again I get nothing (except a slight effect from the rear).
Finally, I cannot get the SPDIF passthrough to work (it DOES work just fine passing out stereo 48 kHz sound). I've enabled SPDIF on the flexi-jack, I've set Dolby Digital and DTS to "external decoder required", and NONE of my applications present me with an option to use the SPDIF. This includes MCE, VLC and the latest version of CyberLink PowerDVD. Basically it seems that they can't see that there is a SPDIF on the system to use.
    Anyone else experienced this? I've spent 5 hours testing everything I can think of and trying different combinations of settings and I am completely at a loss!

    Well, I guess I'll answer my own problem in the hopes that it might help someone else.
    It seems that hour 8 of frustrated troubleshooting turned up the cure: all of this boils down to Logitech's stupid Quickcam Fusion drivers. With Logitech's "AEC" echo cancellation mode enabled for the built-in microphone, their drivers automatically turn audio acceleration down to second from the bottom. This results in the X-Fi performing nearly no hardware processing (later testing with DXDiag also said I had no hardware buffer). This results in no multi-channel audio, and no SPDIF passthrough.
    There are a couple of solutions...
1) Disable "AEC" mode from the Logitech Quickcam control panel applet's Audio tab (and reboot). This worked for me.
2) If this doesn't work, go into the Device Manager and disable the Quickcam's microphone (and reboot). This worked for me too, but then I tried #1, which also worked and still let me use the microphone.
    3) Logitech's site (which mentions this problem/solution but only in the context of it causing BSOD crashes, not with it messing up multi-channel sound cards) mentions renaming one of their Audio directories to disable the microphone, although I don't see why this would be better than #2 (see http://logitech-en-amr.custhelp.com/...p?p_faqid=4066 for more).
    4) Unplug the camera.
    So, I'm back in business - or rather, I'm only NOW in business!
Interesting that the Creative diagnostics utility never turned up anything amiss. It claimed that everything was hunky dory, when the card was obviously in a mode where it could not possibly work correctly.

Panning of effects in multi-channel encoder?

I would love to be able to pan effects in the multi-channel encoder. For example, I may want the dry sound in the front speakers and the reverb of that sound in the rear speakers. I know that I can duplicate a track to another track and achieve this, but it seems like an unnecessary step.
    J. D.

    you are welcome
    The same keystroke usually works on all Apple Pro applications (including SoundTrack and Logic)
    G

  • How can I use all tracks from multi-channel source audio with Multicam clips?

    I cannot figure this out. Working in Premiere I'm prepping for a 2 camera show, with typically 4 to 6 track dual-system audio recorded by the sound guy. I can sync, and make multicam clips any number of ways–whether sync by timecode, sync by audio, or eyeballing–but whenever I edit using the MC clip, Premiere mixes down all the audio channels into a stereo single track, and I cannot pick and choose which audio track from the original to use. With 2-5 people mic'd on lavs, I need to be able to choose which audio is either cut in or heard at any given shot.
Is there any way to have control in multicam clips over which track of audio gets edited into the timeline, or even have all tracks of audio get cut in? It's very important for the editors to be able to see the waveform (which also disappears for some reason in the mixdown) and pick and choose which audio is heard.
    I'm sure this might be a simple fix, but I cannot get my head around it for workflow purposes for my assistant editors. We need to use multicam, but this audio thing is driving all of us crazy. None of us can figure out how to have a multicam clip and use it with control over what audio is cut in.

    True, but what I need to be able to do is have my multicam sequence in the source monitor, and cut into my working sequence with either just the audio I need from the sound guy's multi-channel WAV, OR cut in all the sound and from then I can just enable or disable what I need or don't need. But also be able to match back to the Multicam source from my working sequence.
    I don't want to have to open the synced sequence, and razor blade the things I need and have to copy and paste into my working sequence in order to have the full spectrum of my multi-channel mono lav WAV files...with their waveforms visible.
    Essentially, I want my Multicam clips to load in the source monitor, and the ability to cut them into the working sequence without it mixing down the audio into a single track with no visible waveform–which is what it is doing currently. Can I choose to edit in either ALL of the tracks of the WAV file along with the camera shot I want, or control which audio gets cut in?

  • Important question: I have got a PXI-4472 and I am able to do single-channel acquisition. How does multi-channel acquisition work?

Look at the VI: it is quite elaborate, but the crucial points are Data Acquisition and Trigger & Gate (and, consequently, the Write to File operation).
I can add plots on the Waveform Graph DATA; this is rather easy, simply adding the channel numbers in the channel control (e.g. writing 0,3,7 will collect data from those three channels).
    First question: is this operation of adding plots correct?
Second question: does the sampling rate dwindle when I consider multi-channel acquisition? I mean: 1 channel --> 100 kHz; 2 channels --> 50 kHz?
Moreover, and MORE IMPORTANT: is it possible to set different trigger conditions for different channels? How can I control this operation?
Third: can you have a look at the VI as a whole (the appearance, the functionality, the logical sequence...)? I am looking forward to getting your advice. In particular:
Look at the front panel: what do you suggest to make it "smaller"? More tab controls?
Look at the Block Diagram: do I have to connect ERROR IN and ERROR OUT to every subVI or function that makes this connection possible (such as Trigger & Gate and Write to File)?
    Attachments:
Start&StopTrig_SpectralMeas_style+image.vi 700 KB

    First: yes, the operation is correct.
    Second: yes again. When you consider a multi-channel acquisition, your sampling rate must be shared among the channels.
    Third: when you specify a list of channels you want to acquire from, the channel considered for analog triggering is the first you put in the list.
Connect the error clusters whenever you can, in particular when you are dealing with I/O operations.
    Bye!
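The rate-sharing in the second answer boils down to simple division. Here is a minimal sketch (an illustration only, assuming a board whose maximum aggregate sample rate is multiplexed evenly across the scanned channels, as the answer describes; names are made up):

```java
public class SharedSampleRate {
    // Per-channel rate when one converter's maximum aggregate rate is
    // shared among the scanned channels.
    static double perChannelRateHz(double maxAggregateRateHz, int nChannels) {
        return maxAggregateRateHz / nChannels;
    }

    public static void main(String[] args) {
        System.out.println(perChannelRateHz(100_000, 1)); // 100000.0
        System.out.println(perChannelRateHz(100_000, 2)); // 50000.0
    }
}
```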

  • Maintaining default locale in multi-lingual application

    Hello,
    I have a multi-lingual application where the language can be changed at runtime.
    To make the following code work properly, the default locale has to be set each
    time the language changes. Why?
    Since I need to check sometimes the platforms original locale (I do this with
    "Locale.getDefault()"), I am looking for a way not to change the default locale,
    but still to change the resourceBundle.
    In the API I read under "ResourceBundle, Cache Management" that the bundles are cached. Is this the reason why redefining the resourceBundle has no effect? And if yes, how can it be avoided?
    import java.awt.*;
    import java.awt.event.*;
    import java.util.*;
    import javax.swing.*;

    public class Y extends JFrame {
      boolean toggle;
      Locale currentLocale;
      JButton b;
      ResourceBundle languageBundle;
      String country, userLanguage;

      public Y() {
        setSize(300, 300);
        setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
        Container cp = getContentPane();
        b = new JButton();
        b.addActionListener(new ActionListener() {
          public void actionPerformed(ActionEvent evt) {
            toggle = !toggle;
            if (toggle)
              country = "DE";
            else
              country = "GB";
            setUserLanguage(country);
          }
        });
        cp.add(b, BorderLayout.SOUTH);
        setUserLanguage("GB");
        setVisible(true);
      }

      public static void main(String args[]) {
        java.awt.EventQueue.invokeLater(new Runnable() {
          public void run() {
            new Y();
          }
        });
      }

      void setUserLanguage(String country) {
        if (country.equals("DE"))
          userLanguage = "de";
        else
          userLanguage = "en";
        currentLocale = new Locale(userLanguage, country);
    //    System.out.println(currentLocale); // The locale changes ...
    //    Locale.setDefault(currentLocale); // Remove comment slashes and it works.
    //    languageBundle.clearCache(); // No effect.
        languageBundle = ResourceBundle.getBundle("MyBundle", currentLocale);
        System.out.println(languageBundle); // ... but the resourceBundle does not change.
        b.setText(languageBundle.getString("ButtonText"));
      }
    }
    The resource bundle files:
    MyBundle.properties
    ButtonText= Just a button
    MyBundle_de_DE.properties
ButtonText= Nur ein Knopf
Edited by: Joerg22 on 18.08.2008 13:26

What's your default locale? If your default locale is de_DE, that's the expected behavior. The reason is that the fallback mechanism searches the default locale's bundle before falling back to the base bundle; i.e., when searching for the en_GB bundle, the search order is:
    en_GB
    en
    de_DE
    de
    (base)
So it will choose MyBundle_de_DE.
If you do not want this default-locale fallback, you can specify a ResourceBundle.Control instance, as returned from the ResourceBundle.Control.getNoFallbackControl() method, in your getBundle() call. Or, if you do not use JDK 6, you could copy the base bundle to MyBundle_en, which is ugly but should work.
    Naoto
    Edited by: naoto on Aug 18, 2008 1:05 PM
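To see the fallback Naoto describes in isolation, here is a minimal, self-contained sketch (class-based ListResourceBundles stand in for the .properties files, and the class names are illustrative, not from the original post). With the default locale set to de_DE, a plain getBundle() lookup for en_GB lands on the German bundle; with getNoFallbackControl() it lands on the base bundle:

```java
import java.util.*;

public class FallbackDemo {
    // Stand-ins for MyBundle.properties and MyBundle_de_DE.properties
    public static class Msgs extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {{"ButtonText", "Just a button"}};
        }
    }
    public static class Msgs_de_DE extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {{"ButtonText", "Nur ein Knopf"}};
        }
    }

    static final String BASE = FallbackDemo.class.getName() + "$Msgs";

    static String lookupWithFallback() {
        // en_GB and en bundles are missing, so the search tries the
        // default locale (de_DE) before reaching the base bundle.
        return ResourceBundle.getBundle(BASE, new Locale("en", "GB"))
                .getString("ButtonText");
    }

    static String lookupNoFallback() {
        // Same lookup, but the default-locale step is skipped entirely.
        return ResourceBundle.getBundle(BASE, new Locale("en", "GB"),
                ResourceBundle.Control.getNoFallbackControl(
                        ResourceBundle.Control.FORMAT_DEFAULT))
                .getString("ButtonText");
    }

    public static void main(String[] args) {
        Locale.setDefault(new Locale("de", "DE"));
        System.out.println(lookupWithFallback()); // Nur ein Knopf
        ResourceBundle.clearCache(); // avoid serving the cached de_DE result
        System.out.println(lookupNoFallback()); // Just a button
    }
}
```

The clearCache() call matters because getBundle() caches results per base name, locale, and class loader, which is exactly the caching the original question stumbled over.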

  • Multi-entity application

    Hi there,
I'm developing a multi-entity application and I'm not sure how to optimize the database. Let's say the big tables for each entity will have 15,000 rows per year and we are expecting at least 150 entities, so we are talking about 2,250,000 rows per year. It's a web application and we want an optimized response; there are complicated queries with a lot of JOINs, so we did some tests and it's a bit slow.
Well, we thought of two options:
1.- a column in each table to indicate the entity: tables with 2,250,000 rows each year.
2.- each entity connects as a different Oracle user which contains its own tables (the big ones): 15,000 rows per year, but 150 users.
I asked about this, and someone told me that 150 users wouldn't be efficient because Oracle would spend too much memory managing them (I don't know).
    Any answer will be welcome
    Thanks!
PS: sorry about my English, I'm Spanish ;-)

So, wouldn't you suggest using a different user for each entity?
Imagine you have 10 tables, 5 of them with around 15,000 rows per year, and there will be 150 entities like that. We thought of using a different user for each one just for the heavy tables, and keeping the rest of the tables in a common schema. So entity A accesses with user A (all the schemas have synonyms to the common schema).
Ah, it's a web app.

  • Multi Currency application

    Hi All.
I have created a multi-currency application with only two currencies, USD and Euro. But when I try to enter data into the data forms against USD or Euro, I am not able to.
But when I select Local it allows me to enter the data. I never created a Local member and it's not there in the dimension, so what is the significance of this Local currency?
Also, I created some currency conversion scripts and ran them successfully, but when I look at the data across Euro or USD it is the same data I entered into the Local currency.
    Please guide as I am very confused.

    Have a read of this post :- Re: Read/Write functionality in Planning Data Forms
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Multi-Thread application and common data

I am trying to make a multi-threaded application. All the threads will update some common data.
How could I access the variable "VALUE" from the thread in the following code:
    public class Demo {
        private static long VALUE;

        public Demo(long SvId) {
            VALUE = 0;
        }

        public static class makeThread extends Thread {
            public void run() {
                VALUE++;
            }
        }

        public static long getVALUE() {
            return VALUE;
        }
    }
The goal is to get the "VALUE" updated by the thread with "getVALUE()".
    Thanks for your reply
    Benoit

    That code is so wrong in so many ways......
    I know you're just experimenting here, learning what can and can't be done with Threads, but bad habits start early, and get harder to kick as time goes on. I am going to give a little explanation here about what's wrong, and what's right.. If you're going to do anything serious though, please, read some books, and don't pick up bad habits.
Alright, the "answer" code. You don't use Thread.sleep() to wait for threads to finish. That's just silly; use the join() method. It blocks until the thread's execution is done. So if you have a whole bunch of threads in an array, and you want to start them up and then do something once they finish, do this:
    for (int k = 0; k < threads.length; k++) {
        threads[k].start();
    }
    for (int k = 0; k < threads.length; k++) {
        threads[k].join(); // throws InterruptedException; declare or catch it
    }
    System.out.println("All Threads Done");
Now that's the simple problem. No tears there.
On to the Java memory model. Here's where the eye water starts flowing. The program you have written is not guaranteed to do what you expect it to do, that is, increment VALUE some number of times and then print it out. The program is not "thread safe".
    Problem 1) - Atomic Operations and Synchronization
Incrementing a 'long' is not an atomic operation per the JVM spec. In fact, even a plain read or write of a long is not guaranteed atomic, whereas reads and writes of an int are; and note that 'VALUE++' is a read-modify-write sequence, so it is not atomic even for an int. For any shared update like this, or any method with more than one operation that must complete without another thread entering, you must learn how to use the synchronized keyword.
Problem 2) - Visibility
To get at this problem you have to understand low-level computing terms. The variable VALUE will NOT be written out to main memory every time you increment it. It will be stored in the CPU's cache. If you have more than one CPU, and different CPUs get those threads you are starting up, one CPU won't know what the other is doing. You get memory overwrites, and nothing you expect. If you solve problem 1 by using a synchronized block, you also solve problem 2, because updating a variable under a lock makes the change fully visible. However, there is another keyword in Java, "volatile": a field modified with this keyword will always have its changes visible.
This is a very short explanation, barely scratching the surface. I won't even go into performance issues here. If you want to know more, here are the resources.
    Doug Lea's book
    http://java.sun.com/docs/books/cp/
    Doug Lea's Site
    http://g.cs.oswego.edu
    -Spinoza
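Putting Spinoza's two fixes together, here is one hedged way to write the counter (a sketch, not the original poster's code; it uses java.util.concurrent's AtomicLong instead of a synchronized block, which solves both the atomicity and the visibility problems, and join() instead of sleep()):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SafeCounter {
    // Shared counter; AtomicLong gives an atomic read-modify-write
    // increment and cross-thread visibility of the result.
    private static final AtomicLong VALUE = new AtomicLong();

    static long runThreads(int nThreads, int incrementsPerThread) {
        VALUE.set(0);
        Thread[] threads = new Thread[nThreads];
        for (int k = 0; k < threads.length; k++) {
            threads[k] = new Thread(() -> {
                for (int i = 0; i < incrementsPerThread; i++) {
                    VALUE.incrementAndGet(); // atomic, unlike VALUE++
                }
            });
            threads[k].start();
        }
        for (Thread t : threads) {
            try {
                t.join(); // wait for each worker, as the post recommends
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return VALUE.get();
    }

    public static void main(String[] args) {
        // With a plain long and VALUE++, this total would be unpredictable.
        System.out.println(runThreads(8, 100_000)); // always 800000
    }
}
```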

  • How to display multi-channel image in the 'proxy'?

There are many examples showing how to display composite channels in the 'proxy', but I can't find any example showing how to display a multi-channel image in the 'proxy'. I found that I can use PSPixelOverlay to display alpha channel data like this:
    int nSpotChannel = gChannelCount - 4;
    PSPixelOverlay* overlay = new PSPixelOverlay[nSpotChannel];
    for (int i = 0; i < nSpotChannel; i++) {
        if (i != (nSpotChannel - 1))
            overlay[i].next = overlay + i + 1;
        else
            overlay[i].next = NULL;
        overlay[i].data = gChannelData + (4 + i) * nPlaneBytes;
        overlay[i].rowBytes = gProxyRect.Width() * gDocDesc->depth / 8;
        overlay[i].colBytes = 1;
        overlay[i].r = 230;
        overlay[i].g = 161;
        overlay[i].b = 174;
        overlay[i].opacity = 255;
        overlay[i].overlayAlgorithm = kStandardAlphaOverlay;
    }
    pixels.pixelOverlays = overlay;
Then, looking at the highlighted part, a new problem arises: how can the plug-in itself get the color value of the alpha channel? It seems that no channel color value info is in FilterRecord.
    If you have other solution, please tell me. Many thanks!

    This is what I've been doing - was just curious if there was a way to see a more cohesive image.
    If the individual EQ plugins are in fact the answer, is there any way to smooth how the Analyzer displays? The image I posted above, all of the tonal curves are very smooth. The analyzer tool shows a lot of peaks and valleys within the overall curve and it's hard to pinpoint each instrument's "sweet spot." Vocals for example are very hard to spot.
    - Morgan

  • This is a very simple question about multi-channel audio playback

I have an mp4 file that I made with 7.1 surround sound, and I'm pretty sure the 7.1 surround works, as it can be played in VLC. I'm using NetStream to load my files now. How do I make Flash play back all 8 channels of sound? I suppose kglad would know the answer.

    Flash doesn't support multi-channel audio (yet?)

  • Can PCM multi-channel wav (not DD or DTS) be sent over SPDIF

Can PCM multi-channel wav (not DD or DTS) be sent over SPDIF? --- X-Fi Elite Pro SB055A and Audio Creation Mode ---
If I understand correctly, the SPDIF format can support uncompressed MULTICHANNEL wav (within the limitations of its bandwidth) as well as the more usual DD- or DTS-encoded format on SPDIF.
One of the neat things about Creative's Audio Creation Mode in the Recorder section is the ability to actually record to a multi-channel wav format (though the sample size seems to be fixed at 24 bit). This means it is quite simple to have several mixer inputs and record a mixed multi-channel wav file in one go. Great, and this works as expected with a 5.1 24-bit/48 kHz recorded test file.
My question is: when one plays this recorded multi-channel wav file back, the mixer monitor (playback) shows the various 5.1 (say) channels with their separate content. Can this multi-channel *uncompressed* wav data be sent over the SPDIF output as-is, to preserve the PCM quality (so no DD or DTS encoding used)? I have tried this, and I ONLY seem to be able to get a STEREO (2-channel mixed-down) version received at another SPDIF-in sound card (my Audigy 2ZS PCMCIA). I tried almost all selections at both ends involving SPDIF.

In answer to my own question (and I think I had known this a while back): the SPDIF standard only supports transmission of either 2-channel PCM (uncompressed) stereo, with up to 24-bit resolution per channel and up to as much as 96 kHz sampling frequency, or compressed (DD or DTS) multichannel audio up to 5.1.
Since each frame of SPDIF is 64 bits in length (a single sample of the L and R channels plus extra status bits in each frame), the rate at 48 kHz works out to about 3 Mbit/s, and at the 96 kHz maximum to about 6 Mbit/s. However most SPDIF implementations don't support the 96 kHz rate (which at 24 bit would be the stereo rate for DVD-Audio) for various copyright protection reasons.
Here is a short demonstration of SPDIF transmission of 24-bit/96 kHz between two sound cards, with and without "bit accurate for playback" enabled:
http://www.jensign.com/spdif/
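The frame arithmetic above can be checked with a tiny sketch (an illustration only: one S/PDIF frame is two 32-bit subframes, carrying one sample for each of the two channels, so the line rate is 64 bits per sample period):

```java
public class SpdifBitrate {
    // Bit rate in Mbit/s for a 64-bit-per-frame S/PDIF stream at the
    // given sampling rate (one frame per sample period).
    static double mbitPerSecond(int sampleRateHz) {
        return 64.0 * sampleRateHz / 1_000_000.0;
    }

    public static void main(String[] args) {
        System.out.println(mbitPerSecond(48_000)); // 3.072
        System.out.println(mbitPerSecond(96_000)); // 6.144
    }
}
```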
