Recording RTMFP P2P stream by sending simultaneous stream to FMS (and other things...)

Only just getting started on this whole domain of learning, so go easy!
If I set up a P2P video/audio chat (similar to the sample VideoPhone application on the Cirrus site), can I get the stream from both parties to send to a server at the same time so that I can record it? If so, would I have to use an FMS to stream it to and perform the recording (and if so, which version could I get away with)? Are there any (preferably free, or just tutorialised) solutions for the recording side of things?
Currently it seems like the only option for doing the P2P thing is to use Stratus/Cirrus unless I use FMS4 Enterprise. Is that correct? 
Is there any example code out there that can help? Has anyone got any experience of how effective this kind of situation can be, in terms of quality of the stream and recording? Does any of this make sense?
The Enterprise version of FMS is "Call for price": can anyone give me ball-park figures?
Would appreciate any advice or pointers on these very general questions.

can I get the stream from both parties to send to a server at the same time so that I can record it?
     I do not know of a built-in way to do that - you would essentially be creating a separate stream to the server and replicating the same data you already send to the peer. I am not sure how useful that would be, since it adds back the server load that P2P is meant to avoid.
If so, would I have to use an FMS to stream it to and perform the recording (and if so which version could I get away with)?
     I think the answer above covers this. I am not an expert on P2P, so someone can correct me if I am completely off.
Are there any (preferably free, or just tutorialised) solutions for the recording side of things?
     Not something I am aware of.
Currently it seems like the only option for doing the P2P thing is to use Stratus/Cirrus unless I use FMS4 Enterprise. Is that correct?
     Yes - as of now, only the FMS 4 Enterprise edition supports P2P.
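     If you do want a copy on the server for recording, the usual workaround is to publish the same camera and microphone twice from each client: once on the RTMFP P2P stream and once over plain RTMP to an FMS application that records it. A rough, untested AS3 sketch of that idea follows; the Cirrus key, server URL, application name and stream names are placeholders, and note that it doubles each client's upstream bandwidth (which is the extra load mentioned above):

     import flash.media.Camera;
     import flash.media.Microphone;
     import flash.net.NetConnection;
     import flash.net.NetStream;

     var cam:Camera = Camera.getCamera();
     var mic:Microphone = Microphone.getMicrophone();

     // RTMFP connection for the P2P leg (as in the Cirrus VideoPhone sample).
     var p2pNC:NetConnection = new NetConnection();
     p2pNC.connect("rtmfp://p2p.rtmfp.net/YOUR-DEV-KEY/");       // placeholder developer key

     // Ordinary RTMP connection to an FMS application that records on the server.
     var recNC:NetConnection = new NetConnection();
     recNC.connect("rtmp://yourserver/recorder");                // hypothetical recording app

     // Once both connections report NetConnection.Connect.Success:
     var p2pStream:NetStream = new NetStream(p2pNC, NetStream.DIRECT_CONNECTIONS);
     p2pStream.attachCamera(cam);
     p2pStream.attachAudio(mic);
     p2pStream.publish("media");                                 // peers play this by peer ID

     var recStream:NetStream = new NetStream(recNC);
     recStream.attachCamera(cam);
     recStream.attachAudio(mic);
     recStream.publish("userA", "record");                       // server writes streams/userA.flv

     The "record" publish mode needs an edition that allows server-side recording; the free FMS Development edition is enough for testing this.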

Similar Messages

  • Archiving live stream at FMS and injecting metadata: VP6 good, H.264 not

    When I record a live stream at FMS, one in which I've injected metadata in my main.asc file, the archived file plays back fine. The metadata plays back too. I'm able to retrieve it just fine - if I encode VP6.
    If I encode H.264 the file plays back but the metadata does not. The fact that the archived file is created and plays back tells me things are wired correctly. The only thing I changed is the format.
    According to FMS docs (http://help.adobe.com/en_US/FlashMediaServer/3.5_SS_ASD/WS5b3ccc516d4fbf351e63e3d11a11afc95e-7e42.html#WS5b3ccc516d4fbf351e63e3d11a11afc95e-7f35)
    ..."The recording format is determined by the filename you pass to the Stream.get()
    method."
    So my record code looks like the following:
    application.onPublish = function(client, stream) {
        trace("onPublish");
        s = Stream.get("mp4:streamname.f4v");
        if (s) {
            s.record();
        }
        this.doRepublish(this.nc, stream);
    };
    My code that injects the data into the stream looks like this:
    Client.prototype.sendDataEvent = function(data) {
        trace("Call to sendDataEvent...");
        this.newStream = Stream.get("mp4:streamname.f4v");
        this.newStream.send("onTextData", data);
    };
    All must be wired correctly because the metadata comes through during the live stream. On playback of the archive though, the metadata doesn't appear to be there.
    Any thoughts?
    Thanks

    My apologies on the s.play() confusion.  I had been trying different versions of the code and posted the one without it.
    Whether I include s.play() or not the file gets created.  Here are the various versions of the onPublish() function I've tried (differences in red):
    1.
    application.onPublish = function(client, stream) {
        trace("onPublish");
        s = Stream.get("mp4:streamname.f4v");
        if (s) {
            s.record();
            s.play("mp4:streamname.f4v");
        }
        this.doRepublish(this.nc, stream);
    };
    2.
    application.onPublish = function(client, stream) {
        trace("onPublish");
        s = Stream.get("mp4:streamname.f4v");
        if (s) {
            s.record();
            s.play(stream);
        }
        this.doRepublish(this.nc, stream);
    };
    3.
    application.onPublish = function(client, stream) {
        trace("onPublish");
        s = Stream.get("mp4:streamname.f4v");
        if (s) {
            s.record();
        }
        this.doRepublish(this.nc, stream);
    };
    All produce the same result - an archived file called mp4:streamname.f4v in my streams folder.  This file plays back fine but does not play back the commands.
    On your other question, about things working fine for VP6, it works fine for FLV.  A file called streamname.flv is produced.  This file plays back fine and does indeed play back commands baked into the file as well.  This is what makes me believe the code is not the problem.  If it works perfectly for one format, there would seem to be very little I could do in my code to break things for the other.
    Can you try this using the record() code snippets in the live docs Stream.record() section?
    http://help.adobe.com/en_US/FlashMediaServer/3.5_SS_ASD/WS5b3ccc516d4fbf351e63e3d11a11afc95e-7e42.html#WS5b3ccc516d4fbf351e63e3d11a11afc95e-7f35
    All you'd need is the code snippets there to record your live stream and another server side function to inject commands into that live stream. Here is that function:
    Client.prototype.sendDataEvent = function(data) {
        trace("Call to sendDataEvent...");
        this.newStream = Stream.get("mp4:streamname.f4v");
        this.newStream.send("onTextData", data);
    };
    Do something simple like call onTextData and pass some text in the data parameter.  Then on the client side viewer, handle the onTextData method.  It will receive the text.  Display it in a text area or something.
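    On the client side, that handler can be as simple as the following (untested AS3 sketch; nc is your connected NetConnection and textArea is whatever display object you use):

    var playNS:NetStream = new NetStream(nc);
    playNS.client = {
        onTextData: function(data:Object):void {
            // "data" is whatever the server passed to Stream.send("onTextData", data)
            textArea.text += String(data) + "\n";
        }
    };
    playNS.play("mp4:streamname.f4v");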
    If you record while injecting this text into your stream, the text should display on playback of the archived file.  It will if you encode VP6/FLV, but not if you encode H.264/F4V.
    Let me know what you discover.

  • When iCloud updates does it sometimes send your pictures and other things to your email or phone?

    When iCloud updates, does it sometimes send pictures and other things on it to your email or phone randomly?

    Start Firefox in [[Safe Mode]] to check if one of your add-ons is causing your problem (switch to the DEFAULT theme: Tools > Add-ons > Themes).
    See [[Troubleshooting extensions and themes]] and [[Troubleshooting plugins]]

  • How do I stop other devices on my network from receiving my iMessages? I send an iMessage to someone and everyone on my network sees my conversation.

    How do I stop other devices on my network from receiving my iMessages?
    I send a friend an iMessage and other people on my network get to see the conversation.

    Are you sending a group MMS? If so, that is why. Also, if others are using the same Apple ID as you then they will see all your iMessage texts. You need to change the Apple ID on your device.

  • Can I simultaneously play audio and record a different stream?

    Hi!
    I've got a Mac Pro and am contemplating purchase of a Macbook Pro. In short, I have a radio show, and I play much of my music off of a laptop.
    The station has a Lexicon Alpha connected to our console. One typically plays music via iTunes through a USB connection. Thus, the Lexicon handles digital->analog conversion.
    I'd actually like to record my show from the console to my laptop. I would likely purchase something like an Apogee Duet (http://www.apogeedigital.com/products/duet.php) that would handle analog->digital conversion. I presume I'd use something like Logic Pro (or perhaps free software like GarageBand might be fine for this purpose) to record/save the stream. Note: the Duet has a FireWire interface.
    Is this possible on the Macbook Pro?
    I'm guessing that it is possible, simply because:
    1) No digital-analog or analog-digital conversion is being performed onboard, thus the computer is simply handling 2 digital data streams.
    2) Further, the devices are not transmitting a signal on the same bus (USB or Firewire) but, rather, using one of each. Perhaps that doesn't make any difference.
    Thanks in advance for confirming this for me. It will help in a purchasing decision.

    AG,
    Since I have both types of devices, an older USB Tascam US-122 and a PreSonus FireStudio, I tried a quick test. I played Van Morrison's "Brown Eyed Girl" in iTunes via the Tascam USB output and recorded a stereo guitar track in Logic via the FireStudio. It worked fine on this older G5 desktop.
    pancenter-

  • When FMLE is stopped, remote RTMP stream to FMS 4.5 with RTMFP?

    When FMLE is stopped, remote RTMP stream to FMS 4.5 with RTMFP?
    edit  "applications/multicast/main.asc" ?
    HELP ME !!! THANKS!!!
    /*
     * File: main.asc
     * The server-side portion of the multicast sample application.
     * This app accepts publish and unpublish requests from FMLE, and republishes
     * the live stream from FMLE into a target Flash Group.
     */

    // General Constants

    // "Constants" representing multicast event types.
    var TYPE_FUSION = 1;
    var TYPE_IP = 2;
    var TYPE_P2P = 3;

    // StreamContext Description, Constants and Functions

    /*
     * Type: StreamContext
     * This application tracks the context for live streams published to the server
     * that are being republished into a Flash Group. The StreamContext "type" used
     * for this is just an Object containing the following members:
     *   client         - The encoding/publishing client.
     *   streamName     - The source Stream name as published by the client.
     *   type           - The multicast event type.
     *   groupspec      - The groupspec identifying the Flash Group and capabilities.
     *   address        - IP multicast address (optional for pure P2P events).
     *   netConnection  - A loopback NetConnection used for the mcastNetStream.
     *   mcastNetStream - The NetStream used to republish the source Stream into
     *                    the Flash Group.
     *   netGroup       - An (optional) NetGroup handle for the target Group.
     *                    Only present for Fusion or P2P events.
     *   state          - One of the state constants defined immediately below
     *                    this comment.
     */
    var STATE_INIT            = 0; // Starting state for a StreamContext.
    var STATE_CONNECTING      = 1; // Establishing loop-back connection.
    var STATE_CONNECTED       = 2; // Connection established.
    var STATE_PUBLISH_PENDING = 3; // Attempting to publish.
    var STATE_REPUBLISHING    = 4; // Actively republishing to multicast.
    var STATE_UNPUBLISHING    = 5; // Shutting down multicast republish.
    var STATE_UNPUBLISHED     = 6; // Unpublished successfully.
    var STATE_DISCONNECTING   = 7; // Shutting down loopback connection.
    var STATE_DISCONNECTED    = 8; // Connection shut down. Done.
    /*
     * Registers a source Stream published by the specified client, along with the
     * context for the multicast event, as a StreamContext Object.
     * @param client - The Client publishing the stream.
     * @param streamName - The source Stream name.
     * @param params - The parameters resulting from parsing the source Stream's
     *                 query string.
     * @return The new StreamContext Object for the registered Stream.
     */
    function registerStream(client, streamName, params)
    {
        var streamContext = { "client": client,
                              "streamName": streamName,
                              "type": params["fms.multicast.type"],
                              "groupspec": params["fms.multicast.groupspec"] };
        if (params["fms.multicast.interface"])
            streamContext["interfaceAddress"] = params["fms.multicast.interface"];
        if (params["fms.multicast.address"])
            streamContext["address"] = params["fms.multicast.address"];
        streamContext.state = STATE_INIT;
        updateStreamContextLookups(streamContext);
        trace("Registered multicast context for source stream: " + streamName);
        return streamContext;
    }
    /*
     * Updates the indexed lookups installed for the passed StreamContext Object
     * with the application.
     * @param streamContext - The StreamContext Object to (re)index.
     */
    function updateStreamContextLookups(streamContext)
    {
        application.streamTable[streamContext.streamName] = streamContext;
        if (streamContext.netConnection)
            application.netConnTable[streamContext.netConnection] = streamContext;
        if (streamContext.mcastNetStream)
            application.mcastNetStreamTable[streamContext.mcastNetStream] = streamContext;
        if (streamContext.netGroup)
            application.netGroupTable[streamContext.netGroup] = streamContext;
    }

    /*
     * Provides access to the StreamContext Object for a registered source Stream
     * by name.
     * @param streamName - A registered source Stream name.
     * @return The associated StreamContext Object; undefined if the source Stream
     *         name is not registered.
     */
    function getStreamContextForSourceStream(streamName)
    {
        return application.streamTable[streamName];
    }

    /*
     * Provides access to the StreamContext Object for a given server-side
     * NetConnection hosting a multicast NetStream.
     * @param netConnection - A server-side NetConnection.
     * @return The associated StreamContext Object; undefined if the passed
     *         NetConnection is not indexed to a StreamContext.
     */
    function getStreamContextForNetConnection(netConnection)
    {
        return application.netConnTable[netConnection];
    }

    /*
     * Provides access to the StreamContext Object for a given multicast NetStream.
     * @param netStream - A multicast NetStream.
     * @return The associated StreamContext Object; undefined if the passed
     *         NetStream is not indexed to a StreamContext.
     */
    function getStreamContextForMulticastNetStream(netStream)
    {
        return application.mcastNetStreamTable[netStream];
    }

    /*
     * Provides access to the StreamContext Object for a given NetGroup associated
     * with a multicast NetStream.
     * @param netGroup - A NetGroup.
     * @return The associated StreamContext Object; undefined if the passed
     *         NetGroup is not indexed to a StreamContext.
     */
    function getStreamContextForNetGroup(netGroup)
    {
        return application.netGroupTable[netGroup];
    }

    /*
     * Unregisters the StreamContext from the application.
     * @param streamContext - The StreamContext Object to unregister.
     */
    function unregisterStreamContext(streamContext)
    {
        if (streamContext.netConnection)
            delete application.netConnTable[streamContext.netConnection];
        if (streamContext.mcastNetStream)
            delete application.mcastNetStreamTable[streamContext.mcastNetStream];
        if (streamContext.netGroup)
            delete application.netGroupTable[streamContext.netGroup];
        trace("Unregistered multicast context for source stream: " +
              streamContext.streamName);
    }
    // Application callback functions

    /*
     * Initializes global StreamContext lookup tables.
     */
    application.onAppStart = function()
    {
        application.streamTable = {};
        application.netConnTable = {};
        application.mcastNetStreamTable = {};
        application.netGroupTable = {};
    };

    /*
     * Handles a publish event for the application by validating the request
     * and bridging the published stream into a target Flash Group. Invalid
     * publish requests are ignored and the publishing client's connection
     * is closed.
     * @param client - The publishing client.
     * @param stream - The published stream.
     */
    application.onPublish = function(client, stream)
    {
        //trace("Handling publish request for source stream: " + stream.name);
        var params = parseQueryString(stream.publishQueryString);
        if (!validateStreamParams(params))
        {
            application.disconnect(client);
            return;
        }

        var prevContext = getStreamContextForSourceStream(stream.name);
        if (prevContext)
            forceCloseStreamContext(prevContext);

        // Register source Stream, and kick off the async process that will
        // eventually wire-up the associated multicast NetStream.
        var streamContext = registerStream(client, stream.name, params);
        openMulticastConnection(streamContext);
    };

    /*
     * Handles an unpublish event for the application by shutting down
     * any associated multicast NetStream.
     * @param client - The unpublishing client.
     * @param stream - The source stream being unpublished.
     */
    application.onUnpublish = function(client, stream)
    {
        trace("Handling unpublish request for source stream: " + stream.name);
        var streamContext = getStreamContextForSourceStream(stream.name);
        if (streamContext && (streamContext.state <= STATE_REPUBLISHING))
            destroyStreamContext(streamContext);
    };
    // Callback functions for NetConnection and multicast NetStream/NetGroup wiring.

    /*
     * First step in setting up a republished multicast NetStream; open the loopback
     * connection it requires.
     * @param streamContext - The StreamContext Object for the publish event.
     */
    function openMulticastConnection(streamContext)
    {
        var nc = new NetConnection();
        nc.onStatus = netConnectionStatusHandler;
        streamContext.netConnection = nc;
        updateStreamContextLookups(streamContext);
        streamContext.state = STATE_CONNECTING;
        nc.connect(resetUriProtocol(streamContext.client.uri, "rtmfp"));
    }

    /*
     * Status event handler for the loopback NetConnection used by the multicast
     * NetStream. Advances setup upon successful connection, or triggers or advances
     * tear-down as a result of connection loss or an unpublish and clean shutdown.
     * @param info - The status info Object.
     */
    function netConnectionStatusHandler(info)
    {
        var streamContext = getStreamContextForNetConnection(this);
        trace("Multicast NetConnection Status: " + info.code +
              (streamContext ? ", Source stream: " + streamContext.streamName : ", Not associated with a source stream."));
        if (streamContext)
        {
            switch (info.code)
            {
            case "NetConnection.Connect.Success":
                streamContext.state = STATE_CONNECTED;
                // If event type is Fusion or P2P, wire up a NetGroup for neighbor
                // bootstrapping and maintenance ahead of (re)publishing the stream.
                var type = streamContext.type;
                if (type == TYPE_FUSION || type == TYPE_P2P)
                    initNetGroup(streamContext);
                else
                    initMulticastNetStream(streamContext);
                break;
            case "NetConnection.Connect.Failed":
            case "NetConnection.Connect.Rejected":
            case "NetConnection.Connect.AppShutdown":
                trace("MULTICAST PUBLISH ERROR: Failed to establish server-side NetConnection for use by multicast NetStream. " +
                      "Status code: " + info.code + ", description: " + info.description + ", Source stream: " +
                      streamContext.streamName);
                streamContext.state = STATE_DISCONNECTED;
                destroyStreamContext(streamContext);
                break;
            case "NetConnection.Connect.Closed":
                if (streamContext.state < STATE_DISCONNECTING)
                {
                    trace("MULTICAST PUBLISH ERROR: Unexpected server-side NetConnection close. " +
                          "Status code: " + info.code + ", description: " + info.description + ", Source stream: " +
                          streamContext.streamName);
                }
                streamContext.state = STATE_DISCONNECTED;
                destroyStreamContext(streamContext);
                break;
            default:
                // Ignore.
                break;
            }
        }
    }
    /*
     * Initializes the multicast NetGroup following a successful connection of its
     * underlying loopback NetConnection. This hook is optional and only runs for
     * event types of Fusion and pure P2P.
     * @param streamContext - The StreamContext Object for the multicast publish.
     */
    function initNetGroup(streamContext)
    {
        var ng = null;
        try
        {
            ng = new NetGroup(streamContext.netConnection, streamContext.groupspec);
        }
        catch (e)
        {
            trace("MULTICAST PUBLISH ERROR: Failed to construct NetGroup. Error: "
                  + e.name + (e.message ? " " + e.message : "") +
                  ", Source stream: " + streamContext.streamName);
            destroyStreamContext(streamContext);
            return;
        }
        ng.onStatus = netGroupStatusHandler;
        streamContext.netGroup = ng;
        updateStreamContextLookups(streamContext);
    }

    /*
     * Status event handler for the multicast NetGroup. Advances to initializing the
     * multicast NetStream upon successful NetGroup connect. Otherwise, triggers
     * shut down.
     * @param info - The status info Object.
     */
    function netGroupStatusHandler(info)
    {
        var streamContext = getStreamContextForNetGroup(this);
        trace("Multicast NetGroup Status: " + info.code +
              (streamContext ? ", Source stream: " + streamContext.streamName : ", Not associated with a source stream."));
        if (streamContext)
        {
            switch (info.code)
            {
            case "NetGroup.Connect.Success":
                initMulticastNetStream(streamContext);
                break;
            case "NetGroup.Connect.Failed":
            case "NetGroup.Connect.Rejected":
                trace("MULTICAST PUBLISH ERROR: Failed to connect multicast NetGroup. " +
                      "Status code: " + info.code + ", description: " + info.description +
                      ", Source stream: " + streamContext.streamName);
                destroyStreamContext(streamContext);
                break;
            case "NetGroup.MulticastStream.UnpublishNotify":
                // At this point, multicast publishers will be notified;
                // continue shut down.
                destroyStreamContext(streamContext);
                break;
            default:
                // Ignore.
                break;
            }
        }
    }
    /*
     * Initializes the multicast NetStream following a successful connection of its
     * underlying loopback NetConnection.
     * @param streamContext - The StreamContext Object for the multicast publish.
     */
    function initMulticastNetStream(streamContext)
    {
        var ns = null;
        try
        {
            ns = new NetStream(streamContext.netConnection, streamContext.groupspec);
        }
        catch (e)
        {
            trace("MULTICAST PUBLISH ERROR: Failed to construct multicast NetStream. Error: " +
                  e.name + (e.message ? " " + e.message : "") +
                  ", Source stream: " + streamContext.streamName);
            destroyStreamContext(streamContext);
            return;
        }

        var type = streamContext.type;
        if (type == TYPE_FUSION || type == TYPE_IP)
        {
            var iAddr = (streamContext.interfaceAddress) ? streamContext.interfaceAddress : null;
            try
            {
                trace("Multicast NetStream will publish to IP address: " + streamContext.address +
                      " on interface address: " + ((iAddr) ? iAddr : "default") +
                      ", Source stream: " + streamContext.streamName);
                ns.setIPMulticastPublishAddress(streamContext.address, iAddr);
            }
            catch (e2)
            {
                trace("MULTICAST PUBLISH ERROR: Failed to assign IP multicast address and port for publishing. Address: "
                      + streamContext.address + " on interface address: " + ((iAddr) ? iAddr : "default") +
                      ", Source stream: " + streamContext.streamName);
                destroyStreamContext(streamContext);
                return;
            }
        }

        ns.onStatus = netStreamStatusHandler;
        streamContext.mcastNetStream = ns;
        updateStreamContextLookups(streamContext);
        streamContext.state = STATE_PUBLISH_PENDING;
    }

    /*
     * Status event handler for the multicast NetStream. Advances state upon successful
     * connect and publish, or upon successful unpublish. Triggers tear-down if we fail
     * to attach to a source Stream to republish.
     * @param info - The status info Object.
     */
    function netStreamStatusHandler(info)
    {
        var streamContext = getStreamContextForMulticastNetStream(this);
        trace("Multicast NetStream Status: " + info.code +
              (streamContext ? ", Source stream: " + streamContext.streamName : ", Not associated with a source stream."));
        if (streamContext)
        {
            switch (info.code)
            {
            case "NetStream.Connect.Success":
                if (!this.attach(Stream.get(streamContext.streamName)))
                {
                    trace("MULTICAST PUBLISH ERROR: Failed to attach multicast NetStream to source. Source stream: " +
                          streamContext.streamName);
                    destroyStreamContext(streamContext);
                    //var stream;
                    //stream = Stream.get("liveStream");
                    //return;
                }
                else
                {
                    this.publish(streamContext.streamName, "live");
                }
                break;
            case "NetStream.Publish.Start":
                streamContext.state = STATE_REPUBLISHING;
                break;
            case "NetStream.Unpublish.Success":
                streamContext.state = STATE_UNPUBLISHED;
                // Wait for unpublish notify event if the context has a NetGroup;
                // otherwise continue shut down now.
                if (!streamContext.netGroup)
                    destroyStreamContext(streamContext);
                break;
            default:
                // Ignore.
                break;
            }
        }
    }
    /*
     * The common tear-down hook. Other functions that manage or shut down
     * the StreamContext Object delegate to this function upon detecting a fatal
     * error or during shut down.
     * @param streamContext - The StreamContext Object for the source Stream and
     *                        (potentially wired-up) multicast NetStream.
     */
    function destroyStreamContext(streamContext)
    {
        // Unregister by Stream name immediately; lookups by NetConnection, NetGroup
        // and multicast NetStream remain in place until tear-down is complete.
        delete application.streamTable[streamContext.streamName];

        switch (streamContext.state)
        {
        case STATE_REPUBLISHING:
            streamContext.mcastNetStream.attach(false);
            streamContext.mcastNetStream.publish(false);
            streamContext.state = STATE_UNPUBLISHING;
            return;
        case STATE_CONNECTING:
        case STATE_CONNECTED:
        case STATE_PUBLISH_PENDING:
        case STATE_UNPUBLISHED:
            // Delete status handler callbacks and clean up in case we arrived here
            // as a result of a force close.
            if (streamContext.netGroup)
                delete streamContext.netGroup.onStatus;
            if (streamContext.mcastNetStream)
            {
                streamContext.mcastNetStream.attach(false);
                delete streamContext.mcastNetStream.onStatus;
            }
            streamContext.netConnection.close();
            streamContext.state = STATE_DISCONNECTING;
            return;
        default:
            // Fall-through.
            break;
        }

        // At this point, we either never got to the republishing state or we've
        // proceeded through the clean shut down steps above. Everything for this
        // StreamContext can go away.
        unregisterStreamContext(streamContext);
    }

    /*
     * Utility function used to force close a StreamContext in the event that we
     * start handling a republish of a source Stream before the context for its
     * prior incarnation has been torn down.
     * @param streamContext - The StreamContext Object for the source Stream.
     */
    function forceCloseStreamContext(streamContext)
    {
        trace("Force closing previous multicast context for source stream: " + streamContext.streamName);
        streamContext.state = STATE_UNPUBLISHED;
        destroyStreamContext(streamContext);
    }
    // Client callback functions

    /*
     * A no-op. Answers the RPC in the fashion expected by encoders, but the real
     * work happens in application.onPublish.
     * @param streamName - The name of the stream being published.
     */
    Client.prototype.FCPublish = function(streamName)
    {
        this.call("onFCPublish",
                  null,
                  {code:"NetStream.Publish.Start", description:streamName});
    };

    /*
     * A no-op. Answers the RPC in the fashion expected by encoders, but the real
     * work happens in application.onUnpublish.
     * @param streamName - The name of the stream being unpublished.
     */
    Client.prototype.FCUnpublish = function(streamName)
    {
        this.call("onFCUnpublish",
                  null,
                  {code:"NetStream.Unpublish.Success", description:streamName});
    };

    /*
     * If the client invoker's IP matches what was captured for a currently publishing
     * stream, assume it's the same client and reset the stream. Otherwise, ignore.
     * @param streamName - The name of the stream being released.
     */
    Client.prototype.releaseStream = function(streamName)
    {
        var streamContext = getStreamContextForSourceStream(streamName);
        if (streamContext &&
            (streamContext.client.ip == this.ip) &&
            (streamContext.state <= STATE_REPUBLISHING))
        {
            // Only tear-down an orphaned stream if it's not
            // already shutting down (see state check above).
            destroyStreamContext(streamContext);
        }
    };
    // Helper functions

    /*
     * Validates that a newly published stream has correct metadata (e.g. query
     * string parameters) to republish into a Flash Group. This function also
     * writes a message to the application log for any validation failures.
     * @param params - The query string parameters for the source Stream.
     * @return true if valid; otherwise false.
     */
    function validateStreamParams(params)
    {
        var empty = true;
        for (var param in params)
        {
            empty = false;
            break;
        }
        if (empty)
        {
            trace("MULTICAST PUBLISH ERROR: Stream query string is empty.");
            return false;
        }

        if (!params["fms.multicast.type"])
        {
            trace("MULTICAST PUBLISH ERROR: Stream query string does not specify a 'fms.multicast.type'.");
            return false;
        }
        var type = params["fms.multicast.type"];
        if (type != 1 && type != 2 && type != 3)
        {
            trace("MULTICAST PUBLISH ERROR: 'fms.multicast.type' has invalid value: " + type);
            return false;
        }

        if (!params["fms.multicast.groupspec"])
        {
            trace("MULTICAST PUBLISH ERROR: Stream query string does not specify a 'fms.multicast.groupspec'.");
            return false;
        }

        // Fusion and IP require an address:port.
        if ((type == 1 || type == 2) &&
            !params["fms.multicast.address"])
        {
            trace("MULTICAST PUBLISH ERROR: Stream query string does not specify a 'fms.multicast.address'.");
            return false;
        }

        // No obvious validation issues.
        return true;
    }

    /*
     * Parses the supplied query string, and if valid, returns an Object populated
     * with the name-value pairs contained in the query string. The simple processing
     * here does not preserve multiple name-value pairings having the same name; the
     * last value seen wins. Parameters with no value are mapped to "" (empty String)
     * in the returned Object.
     * @param queryString - A query string portion of a URI, not including the leading
     *                      '?' character.
     * @return An Object containing a key-value mapping for each name-value parameter
     *         defined in the query string; Object is empty if the query string is
     *         invalid.
     */
    function parseQueryString(queryString)
    {
        var result = {};
        var decoded = "";
        try
        {
            decoded = decodeURIComponent(queryString);
        }
        catch (e) // Invalid URI component; return empty result.
        {
            return result;
        }
        if (decoded.length)
        {
            var params = decoded.split('&');
            for (var i in params)
            {
                var pair = params[i];
                var sepIndex = pair.indexOf('=');
                if (sepIndex != -1)
                {
                    var name = pair.substr(0, sepIndex);
                    result[name] = pair.substr(sepIndex + 1);
                }
                else
                {
                    result[pair] = "";
                }
            }
        }
        return result;
    }

    /*
     * Utility function used to swap out the protocol (scheme) portion
     * of a given URI with an alternate.
     * @param uri - The full URI.
     * @param desiredProtocol - The replacement protocol.
     * @return The URI with its protocol replaced.
     */
    function resetUriProtocol(uri, desiredProtocol)
    {
        var sepIndex = uri.indexOf("://");
        return desiredProtocol + uri.substr(sepIndex);
    }

    HELP ME !!! THANKS!!!

  • What is delivery quality for data frames in p2p stream?

    Hi,
    I am planning to replace a shared object solution by sending data frames over a p2p stream. Since this runs over UDP, I imagine there is a risk of partial delivery, non-delivery, or out-of-order delivery? (Admittedly, out-of-order delivery is probably also possible with shared objects?)
    Can you give an indication what I may expect?
    - Frans

    in Flash Player 10.0, data sent P2P with NetStream.send() is fully reliable.  we disclosed at MAX that Flash Player 10.1 will allow you to explicitly and individually set (from ActionScript) whether or not video, audio, or data is sent fully reliably or with partial reliability over direct connection P2P NetStreams.  NetStream.send()s are delivered in the order they were sent.
    -mike
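    For reference, the per-channel switches mentioned above look like this in ActionScript (a hedged sketch for Flash Player 10.1+; nc and the stream name are illustrative):

    var sendNS:NetStream = new NetStream(nc, NetStream.DIRECT_CONNECTIONS);
    sendNS.publish("media");
    sendNS.dataReliable  = true;    // NetStream.send() data: fully reliable (the 10.0 behavior)
    sendNS.audioReliable = false;   // audio may be sent with partial reliability
    sendNS.videoReliable = false;   // video may be sent with partial reliability
    sendNS.send("onChat", "hello"); // still delivered in the order it was sent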

  • Bug? RTMFP publishing stream order within NetConnection causes DIRECT_CONNECTIONS to unpublish

    The following is from my post at: http://forums.adobe.com/thread/998333
    Using RTMFP, I make a connection to the server with NetConnection.
    I publish NetStream A directly to the server (like you would for RTMP).
    I publish NetStream A_direct with DIRECT_CONNECTIONS so that RTMFP P2P peers can play it.
    These NetStreams share the same NET_STATUS handler function.
    I tried this with debug Flash player for 10.3, 11.0, and 11.2.
    One thing I tried was flipping the order of publishing A and A_direct. If I publish A_direct first instead, publishing A unpublishes it and event.target is set to the proper stream when A connects.
    Is that expected behavior? I didn't see anything in the docs about that.
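    For reference, a minimal AS3 reconstruction of the setup described above (names and the shared onStatus handler are illustrative, not the original code):

    var nc:NetConnection = new NetConnection();
    nc.connect("rtmfp://yourserver/app");                        // single RTMFP NetConnection

    var a:NetStream = new NetStream(nc);                         // server-bound publish
    a.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
    a.publish("A");

    var aDirect:NetStream = new NetStream(nc, NetStream.DIRECT_CONNECTIONS); // P2P-playable
    aDirect.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
    aDirect.publish("A_direct");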

    bump.
    Any one?
    thanks
    Matt

  • Trying to view the recorded or live stream while recording the live stream doesn't work

    The workflow:
    I push a live feed to the rtmp url rtmp://localhost/my-app/tester where my-app is a copy of the live app in AMS but with the following addition of code in main.asc:
    application.onPublish = function( p_client, p_stream) {
       s = Stream.get("my_hello5");
       trace( s.name + " = name, " + s.publishQueryString + " = pqs");
       if(s) {
         s.record();
         trace("recording...");
         s.play(p_stream.name);
       }
    };
    application.onUnpublish = function( p_client, p_stream) {
       s = Stream.get("my_hello5");
       trace("getting stream my_hello5...");
       if(s) {
         trace("stopping recording...");
         s.record(false);
       }
    };
    And I am facing multiple issues:
    * The file my_hello5.flv gets generated in applications/my-app/streams/_definst_/. Once the recording is complete, if I open it using the URL below in a media player on Linux (mine is the Totem movie player on Ubuntu), it plays. But if I open the same URL in the OSMF Flash media playback setup page (http://www.osmf.org/configurator/fmp/#), it always shows "buffering".
    The FLV URL I used is: rtmp://localhost/my-app/my_hello5
    * If I try to open either the FLV URL or the live feed URL inside the OSMF page while the recording is still going on, then I am not able to view either. It just shows "buffering" all the time.
    Your help is valuable and much appreciated.

    As a good practice you should not play a file while it is still being recorded. On the other hand, you can programmatically create chunks of the recording at periodic intervals and use those chunks for playback, or copy them to a different folder served by a different application and make them available to subscribers for RTMP streaming.
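    A rough Server-Side ActionScript sketch of that chunking idea (untested; chunk length, stream and file names are assumptions, and a single publisher is assumed):

    var CHUNK_MS = 60000;                        // one-minute chunks
    var chunkIndex = 0;
    var currentChunk = null;

    function startNextChunk(sourceName)
    {
        if (currentChunk)
            currentChunk.record(false);          // close the previous chunk so it can be copied or played
        chunkIndex++;
        currentChunk = Stream.get("my_hello5_part" + chunkIndex);
        if (currentChunk)
        {
            currentChunk.record();
            currentChunk.play(sourceName);       // copy the live feed into the new chunk
        }
    }

    application.onPublish = function(client, stream)
    {
        startNextChunk(stream.name);
        this.chunkTimer = setInterval(startNextChunk, CHUNK_MS, stream.name);
    };

    application.onUnpublish = function(client, stream)
    {
        clearInterval(this.chunkTimer);
        if (currentChunk)
            currentChunk.record(false);
    };

    Finished chunks (my_hello5_part1.flv, my_hello5_part2.flv, ...) can then be played back or copied to another application's streams folder while capture continues.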

  • Can I record a live streaming concert (audio

    Is there a way to record a live streaming concert (audio & video) with Quicktime on my iMac?

    Sure.  The built-in iSight camera of all Intel iMacs supports both video and audio.  The following link shows how to use Quicktime to record video and audio with your iSight:
    http://support.apple.com/kb/HT4024
    If you wish to append that video with examples on the computer, SnapZ Pro is a great tool for screen capture:
    http://www.ambrosiasw.com/utilities/snapzprox/
    Note, SnapZ Pro is probably the easiest to record an existing webinar.  Note, when recording any webinar, you should be licensed to record the said video, or allowed to by the professor/school offering it for the use of your own instruction.

  • P2p streaming in PXIe cards from development pc

    Dear community,
    Is it possible to perform a p2p streaming communication between two PXIe FlexRIO cards executing from the development computer (and not from the PXIe controller)?
    Both cards are placed in a 1082 chassis with a 8108 RT controller. The problem is that one of the adapter modules has some issues with RT OS, so we need to use it under Windows.
    The proposed solution is to add both FPGA targets to "My Computer" (development pc, windows) in the LabVIEW project, and execute the host which configures the p2p stream there.
    At this point, a -63193 error saying "The requested feature is not supported" is displayed in the "P2PReader.Get Peer To Peer Reader" and "P2PWriter.Get Peer To Peer Writer" blocks.
    Thank you in advance.
    P.S. executing the same code under the RT controller there is no p2p error (but the FPGA target does not work properly)

    All picture enhancement modes are off. I do not use them.
    55WL768G does not offer local dimming.
    After some additional months of testing I can tell you that the picture black outs of my 55WL768G only happen when connected to a PC (always HDMI connection NOT VGA).
    I tried different picture modes: "Games", "PC", "Hollywood 2" - no change.
    I tried different HDMI ports - no change. I tried a Nvidia 9500GT and also a AMD 4350 as VGA Card - no success. Picture still going away as if signal was lost.
    Happens most often if big white areas appear on screen. Example: change from Windows 7 desktop to an empty browser window with only the toolbar and a big white blank page.
    No drop-outs using LG BDP, DVB-S2 receiver or my Dune HD 301 media player.
    This is more than annoying. It virtually makes use of this TV with a computer impossible.
    As far as I can see there is also no firmware update available beyond the version I already use.

  • ID help for basic rtmfp video stream app

    In the article:
    http://www.adobe.com/devnet/flashplayer/articles/rtmfp_cirrus_app.html
    I need help understanding the following section:
    "Now, create the receiving NetStream:
    private var recvStream:NetStream;
    recvStream = new NetStream(netConnection, id_of_publishing_client);
    recvStream.addEventListener(NetStatusEvent.NET_STATUS, netStreamHandler);
    recvStream.play("media");
    At this point, you hear audio and you can create a Video object to display video. In order to create the receiving NetStream, you must know the 256-bit peer ID of the publisher (id_of_publishing_client). In order to receive audio/video, you must know the name of the stream being published."
    I do not understand what "id_of_publishing_client" is supposed to be.
    Where do I get it from, or how do I produce it? Can this be a string that the user enters in a text field on the Flash object, so that when the two strings match it produces a connection between the two?
    Is there a hash system I have to pass the string through to generate a 256-bit peer ID from the input string, or do I use the one given to me through Adobe Labs?
    Is there any relation to creating a NetGroup?
    I have the dev_key and Cirrus address to access the Cirrus service provided by Adobe; I'm just stuck on the ID part that I can't fully understand from the article. Do I require a database to store and manage the IDs in a simple app?
    I'm trying to develop a very basic program that uses the RTMFP examples provided in the article, but I'm stuck here and would appreciate anyone's help in the matter. Additionally, I have looked at the Cirrus phone example that uses the CGI file to manage IDs, but I find it even more troublesome to work with, so I have resorted to making an extremely basic example that avoids creating an ID database or using my own FMS.
    Thank you in advance for your help.

    the "id_of_publishing_client" is the peer ID of the publisher.  a peer ID is a unique cryptographic identifier that every RTMFP instance has. it's the NetConnection.nearID of an RTMFP NetConnection object. the idea is you use some method (such as a web service database or copy-and-paste between two windows on your development computer) to transfer the peer ID of a publisher (a peer with a NetStream in DIRECT_CONNECTIONS mode) to a subscriber for use in its NetStream constructor.
    a Flash Player's NetConnection peer ID is the SHA256 hash of the RTMFP certificate, which contains (among other things) a Diffie-Hellman public key.  the Diffie-Hellman public/private key pair is made fresh each time you make a new RTMFP NetConnection from your computer's cryptographic pseudorandom number source (example: /dev/urandom on Mac/Unix/Linux computers), and those keys are used (with other data) to establish the symmetric encryption & decryption keys for AES, which is used to encrypt all communication between RTMFP peers.  a client's peer ID is unforgeable* because you must have the private key that goes with the public key in order to generate the same AES keys that your peer is using. without the private key, the communication just won't work. *performing the discrete logarithm of a 1024-bit DH public key to get the private key or finding a collision in SHA256 are both "hard enough" to claim that peer IDs are unforgeable.
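    Putting that together, a minimal AS3 sketch (the stream name and the way you exchange the ID are up to you):

    // Publisher side
    var pubNS:NetStream = new NetStream(netConnection, NetStream.DIRECT_CONNECTIONS);
    pubNS.publish("media");
    var myPeerID:String = netConnection.nearID;   // the publisher's peer ID (64 hex characters)
    // ...hand myPeerID to the other client via your own mechanism (web service, chat, copy/paste)...

    // Subscriber side: id_of_publishing_client is the string received from the publisher
    var recvStream:NetStream = new NetStream(netConnection, id_of_publishing_client);
    recvStream.addEventListener(NetStatusEvent.NET_STATUS, netStreamHandler);
    recvStream.play("media");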

  • How should i develop my p2p streaming system

    I have tried the following approaches over half a year of development, but it still does not work as I wish.
    h1. 1.
    h3. main idea:
    Use jmf RTP api to link each peers.
    h3. implement:
    RTPManager( transmitter) ---> RTPManager (receiver )
    receiver create a merged CloneableDataSource (2 tracks) to RTPManager for tramsmit on receiver.
    And create SendStream to next peer on each RTPManager for tramsmit .
    h3. result:
    The first receiver have good quality. But the next peer's result is bad and lag. The third peer is worse that second peer.
    There is video/audio synchronous problem sometimes. Maybe fail recovery can't be perfect done due to new stream realize time.
    h1. 2.
    h3. main idea:
    Use custom DataSource to buffer a few seconds.
    h3. implement:
    RTPManager/custom rtp sender( transmitter) ---> DataCatcher ---(sychronized queue)--> PullBufferDataSource ----> Player (create by Manager)
    h3. result:
    Data reached PullBufferDataSource and buffer for 5 seconds(timestamp subtraction). if player starts, all buffer in PullBufferDataSource will be eated by the player. Screen displayed by the player is quickly go through. Does player do not support auto time synchronization?
    h1. 3.
    h3. main idea:
    Direct transmit data in disk to peer. Use SeekableStream to play.
    h3. implement:
    Custom data chunk sender( transmitter) ---> catcher(write to data) /////--> ByteDataSource(seekable read from not complete data)
    h3. result:
    it's good for playing from start to end. But if data chunk sender doesn't sends from first position, player remains sound and video stoped. if sended data play in KMplayer, it works.
    Sorry for my poor English - any suggestions?
    Thank you in advance.

    h1. 1.
    h3. main idea:
    Use jmf RTP api to link each peers.
    h3. implement:
    RTPManager( transmitter) ---> RTPManager (receiver )
    receiver create a merged CloneableDataSource (2 tracks) to RTPManager for tramsmit on receiver.
    And create SendStream to next peer on each RTPManager for tramsmit .
    h3. result:
    The first receiver have good quality. But the next peer's result is bad and lag. The third peer is worse that second peer.
    There is video/audio synchronous problem sometimes. Maybe fail recovery can't be perfect done due to new stream realize time.
    There is a lot of processing involved in doing it this way, most of it unnecessary. That'll add delay, which will result in dropped packets.
    A better design would be to implement the RTP forwarding at the network level.
    Suggestion: [http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/RTPConnector.html]
    Write a custom RTP connector (sender and receiver) such that the receiver will not only receive the RTP packets, but also forward them to whoever it forwards them to. This will minimum the processing on each peer, which will reduce the delay and thus minimize packet loss.
    h1. 2.
    h3. main idea:
    Use custom DataSource to buffer a few seconds.
    h3. implement:
    RTPManager/custom rtp sender( transmitter) ---> DataCatcher ---(sychronized queue)--> PullBufferDataSource ----> Player (create by Manager)
    h3. result:
    Data reached PullBufferDataSource and buffer for 5 seconds(timestamp subtraction). if player starts, all buffer in PullBufferDataSource will be eated by the player. Screen displayed by the player is quickly go through. Does player do not support auto time synchronization?
    RTP systems have a built-in mechanism to prevent the player from going too fast, which is simply, the packets come in real-time, so the player's job is to play them as fast as it can. If you're dropping 30% of your packets, and buffering the packets, you'd expect the player to play the data 30% faster than it should...because those missing packets aren't replaced with blank packets or duplicate packets or anything, they're just simply not in the stream...
    Suggestion: Use a PullBufferDataSource that will manage how fast data is given to the player based on the timestamps of the available frames.
    For instance, if your player is playing packets at 30 FPS, but you're only getting every other frame (50% frame loss), then you'd want to deliver data one frame at a time with a delay equal to one frame... so read, sleep, read, sleep...etc...
    With a PushBufferDataSource, the player will not read data until told to do so with a call to transferData...
    h1. 3.
    h3. main idea:
    Direct transmit data in disk to peer. Use SeekableStream to play.
    h3. implement:
    Custom data chunk sender( transmitter) ---> catcher(write to data) /////--> ByteDataSource(seekable read from not complete data)
    h3. result:
    it's good for playing from start to end. But if data chunk sender doesn't sends from first position, player remains sound and video stoped. if sended data play in KMplayer, it works.
    If you're doing it directly from disk, then you're not including the file header that JMF needs to figure out how to play the file.
    Suggestion: If you want to send from some point other than the middle, just grab the file header and append it to the beginning of the stream...

  • Recording Live Streams on FMS remotely

    Hi,
    We're using FMSS 3 to stream live feeds from cameras.
    However, we'd also like to have the ability to (via an
    administration site) record these live streams and allow the user
    to view pre-recorded streams instead of the live ones.
    Does anyone know of an example of how I can tell FMS to start
    recording a stream (via a web service or AS), and then how to stop
    it? Just for clarification - we don't want FMS to record a stream
    from the user's own camera. We want to record the live streams that
    are already streaming on FMS.
    Thanks,
    Filip Stanek
    bloodforge.com

    You can create an app folder in the FMS folder, which is inside of your setup folder;
    for details you can mail [email protected]
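    For what it's worth, here is the kind of server-side hook the original question asks about: methods an admin client can call over a NetConnection to start and stop recording an already-live stream. This is an untested sketch (names are illustrative, and you should add your own authentication); note that server-side scripts and recording need the Interactive or free Development edition rather than the Streaming edition:

    // In the application's main.asc
    Client.prototype.startRecord = function(liveName)
    {
        var rec = Stream.get("rec_" + liveName);   // archive file: streams/.../rec_<liveName>.flv
        if (rec)
        {
            rec.record();
            rec.play(liveName);                    // copy the live feed into the recording
            this.recStream = rec;
            return true;
        }
        return false;
    };

    Client.prototype.stopRecord = function(liveName)
    {
        if (this.recStream)
        {
            this.recStream.record(false);
            this.recStream.play(false);
            this.recStream = null;
        }
    };

    The admin site would then connect with a NetConnection and call nc.call("startRecord", null, "camera1") and nc.call("stopRecord", null, "camera1").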

  • Problem with recording a live stream

    hi,
              I implemented an application for recording a live stream. It was working well locally, but now I am connecting to the client's communication server. The broadcaster works well, but the recording portion does not - it shows the error "NetStream.Record.NoAccess". Can anybody help me avoid this problem? Also, I am now unable to connect to the local FMS (developer version); when I try to connect the same application to the local FMS, it shows errors like NetConnection.Connect.InvalidApp and NetConnection.Connect.Closed. Can anybody let me know why these errors appear?
    Regards,
             Sreelash

    I've just played around with the current developer edition of the Flash Media 2 server, and had a strange experience.
    We are still using Flash Communication Server MX 1.5.3 on our production server, as I've noted before, because we only have a license for that version.
    If I record a video with our app on Flash Media 2 server, the problems I've described are gone...
    ...BUT...
    Flash Media 2 seems to cut the corrupted part automatically out of the recording.
    I recorded a video which is exactly 30 seconds long (made with a timeout script). On the old FlashCom the first 7 seconds have the failures described. If I record the same stream on Flash Media 2 there are no failures in the recording, but the recording is only 23 seconds long...
    This is very, very bad, Adobe :(
    Or is the problem in my code? If there is anybody who can help, I would be very happy!
