Transcoding AMF

I want to transmit some data encoded in Action Message Format (AMF3) over a socket to and from Flash. I need to know two things:
1) How to read AMF3 data off a Flash Socket object. I'm assuming it will come through as an array of bytes, and that array of bytes must be decoded into actual ActionScript Strings/Arrays/Objects or whatever.
2) How to turn ActionScript vars into AMF-encoded binary data.
Surely there is some built-in Flash object which can do this? AMFPHP encodes data into AMF for transmission, and its ActionScript client code looks like the attached example. That code appears to be AS2 and relies on the remoting and rpc classes.
The other example is from Zend_Amf (a framework) that also uses AMF; it offers some AS3 code that uses NetConnection and Responder objects.
It would appear that these built-in Flash primitives are built for Remote Procedure Calls and expect return values to be communicated. I don't see any actual transcoding of objects from/into AMF, which is what I really want. Any help would be much appreciated.
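For reference, the NetConnection/Responder RPC pattern described above looks roughly like this. This is a minimal sketch only; the gateway URL, service name, and callback names are placeholders rather than anything from a real project:

    import flash.net.NetConnection;
    import flash.net.ObjectEncoding;
    import flash.net.Responder;

    var nc:NetConnection = new NetConnection();
    nc.objectEncoding = ObjectEncoding.AMF3;            // AMF3 on the wire
    nc.connect("http://example.com/gateway");           // hypothetical AMF gateway endpoint

    // Each call names a remote service method; the Responder callbacks
    // receive results already decoded into native ActionScript objects.
    nc.call("EchoService.echo", new Responder(onResult, onFault), {msg: "hello"});

    function onResult(result:Object):void {
        trace("server returned: " + result);
    }
    function onFault(fault:Object):void {
        trace("call failed: " + fault);
    }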

Responses to AMF calls get decoded into native objects automatically, and AMFPHP has AS3/AMF3 support as well in recent versions.
But you can also do your own encoding in ActionScript. Check out the ByteArray class: set the AMF encoding version (objectEncoding) and use writeObject() to encode and readObject() to decode.
The encoding version is either AMF0 (AS1 and AS2) or AMF3 (AS3).
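A minimal sketch of that ByteArray approach, round-tripping an object through AMF3 (the sample object and variable names here are just for illustration):

    import flash.net.ObjectEncoding;
    import flash.utils.ByteArray;

    var payload:Object = {name: "test", values: [1, 2, 3]};   // hypothetical data

    // Encode: serialize an ActionScript object into AMF3 bytes.
    var bytes:ByteArray = new ByteArray();
    bytes.objectEncoding = ObjectEncoding.AMF3;   // use ObjectEncoding.AMF0 for AS1/AS2 peers
    bytes.writeObject(payload);

    // Decode: rewind and read the bytes back into a native object.
    bytes.position = 0;
    var decoded:Object = bytes.readObject();
    trace(decoded.name, decoded.values);

The Socket class works the same way for the wire transfer: set its objectEncoding and call writeObject()/readObject() directly, or read raw bytes into a ByteArray with readBytes() and decode from there.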

Similar Messages

  • Transcoding udp video

    Hi, I have a DVB-S gateway that is connected to a satellite dish, where it gets its signal and outputs the signals as UDP video (a feed string looks like this: udp://224.10.10.1:6000). My question is: can Adobe Media Server transcode and re-stream the UDP video signals to another format (RTMP or other formats) so that I can view them over the internet?
    Or do I need another Adobe program for doing this?

    Here's a general dive into RTMP. You can see the packet structure contains metadata in the header about the content that will follow. The NetStream link I put above talks about it as well when using appendBytes(). The byte-array parser understands FLV files with a header. After the header is parsed (again, see RTMP and what the server sends (0x12) before invoking a play), it expects all future calls to appendBytes() to be a continuation of that file (or stream). In other words, the very first call to appendBytes() needs to supply that header.
    This is where I rely on FMS for the most part. I know it knows to send a header upon connection. If you're implementing your own streaming setup, you're going to need to supply this just like an FMS RTMP connection would (in AMF). I haven't needed to do anything this custom, but if you get the structure of an initial RTMP connection (sniff it with something like Fiddler or find a resource on the structure), encode it in a ByteArray, and send it to appendBytes() first with the correct information about the video you're sending, it should work.
    Otherwise it makes sense that appendBytes() is failing: it's just getting a chunk of binary without a header (the instructions on what exactly the binary is and how to use it).
    Strictly per the documentation, On2/VP6 should be fine; I use it for all my old F4V projects. I'm not sure about MPGA/V.
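    If it's useful, the appendBytes() flow described above looks roughly like this. This is a minimal sketch only; 'video' is assumed to be an existing Video instance on the stage, and 'flvBytes' a ByteArray you've filled with an FLV header followed by tag data from your own transport:

        import flash.net.NetConnection;
        import flash.net.NetStream;
        import flash.net.NetStreamAppendBytesAction;
        import flash.utils.ByteArray;

        var nc:NetConnection = new NetConnection();
        nc.connect(null);                        // null connection = local "data generation" mode

        var ns:NetStream = new NetStream(nc);
        ns.client = {};                          // receives onMetaData/onXMPData callbacks
        video.attachNetStream(ns);               // 'video' is assumed to exist already
        ns.play(null);                           // required before calling appendBytes()

        ns.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);
        ns.appendBytes(flvBytes);                // must start with the FLV header, then tag data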

  • Dynamic link in Encore to get AE project, bluray, after transcoding, file still shows untranscoded

    Hi,
    I have an After Effects file. I have several comps that are 1920x1080 at 29.97 drop frame. When I go to Encore, I use Import After Effects Composition and import several compositions from my AE project. I've tried using the Automatic transcode settings and I've tried setting the settings myself. When I transcode, Encore goes through the motions, but when it's done, the file still shows "Untranscoded". I have checked the project folder and confirmed that the transcoded files are indeed there, but Encore still shows them as untranscoded. What is going on?
    Thanks,
    Stan

    Hi,
    Transcode your comp in AME. Choose H.264 Blu-ray if you have more than 2 hours of video.
    Bye
    Steph

  • Can I trim without doing ANY transcoding?

    I have video files in MOV and M4V which are H.264. Those files play beautifully from a website, but they are too long. I have QuickTime Player Pro 7.6.9, so I'm intending to use it to edit them (by which I mean just trim them - no fiddling with anything else).
    These MOV and M4V files have already been transcoded. When I "Export" them from QTP the quality seems to degrade. I've read that transcoding more than once may degrade the video quality. It looks as if every option available to Export them from QTP involves more transcoding of some sort.
    Is there a way that I can edit (ie just trim) them in QTP but have it export the files exactly as they are now, without touching the coding at all?

    Currently the file sizes are large. I was hoping that trimming to the few minutes I want would reduce the file size.
    If you trim the clip properly before copying the trimmed portion to a new file container, only the desired content will be contained in the new MOV file making the file size smaller. How much smaller, of course, depends on how much of the clip you trim and remove.
    If I'm understanding you correctly, QTP will mark the In and Out points, but when I use "Save As..." it will leave the full file size intact (??) and still upload to the server the full 15 minutes' worth of file (??), and visitors to the website who are using mobiles may still view the full 15 minutes???
    No. What I said was that if you "mark" the content you wish to keep and then use the "Trim to Selection" option to create a "timeline" only containing the segment of the file you wish to keep, then, when you use the "Save As..." command, only the trimmed content will be copied to the new MOV file.
    On the other hand, if you just mark the content without physically trimming it, then the entire movie will be copied to the new file along with the markers, which only the QT Player, iTunes, etc. will then play correctly, while apps like VLC will continue to play the whole file. In addition, if you use the "Save" command to keep the data in its original file container, then only the markers are saved and the file still contains all of the original data plus the newly added markers.
    I repeat, to create a "smaller" file with only the trimmed data in it, you must mark, trim, and write the desired data to a new MOV file container. Thereafter you simply replace the current online files with the new one(s) you've created. You can then decide whether or not you want to keep or delete the original file and simply keep the new file.

  • Compressor problem.  transcode failed

    Hi,
    I'm using Compressor to transcode H.264 footage from a Canon DSLR to Apple ProRes 422 LT. Dragged several clips into Compressor, added settings and destination. Hit submit. Went to sleep. Came back in the morning and the transcode had FAILED. Tried again with a very short clip. No progress was made in the "time remaining" window -- no time even came up, and the progress bar did not start turning blue at all. Finally I cancelled the operation.
    What am I doing wrong?
    Thank you,
    Eve

    If you continue to get the 3X Crash error after following the advice you've got in this thread, my suggestion would be to download another one of Digital Rebellion's products – Pro Maintenance Tools. PMT has a Crash Analyzer that reports causes and recommendations. It's a paid app (suite of apps) but you can download a short trial.
    The error means that the Compressor Transcoder has failed (ya think?) and it's not one of the more obvious ones to sort. The one time I had it, it was fixed by uninstalling Perian; I must have read about a conflict somewhere.
    Russ

  • ProRes files act crazy and transcoding to Animation codec gets ugly

    So, allegedly, the Animation codec is lossless. However, when I take something encoded with Apple ProRes 422 HQ and transcode to Animation within the Quicktime Pro software, I see noticeable differences. Including: color change and increased aliasing.
    The same is true when attempting to export stills from ProRes, even in the highest, most lossless formats. I am wondering if ProRes for PC is actually not quite ready for prime time? When I import a ProRes file into After Effects, I notice the same quality drop associated with trying to transcode it to a lossless format. It seems as though ONLY QT Pro has the capability to display ProRes files properly. If this is the case, this is largely useless, since I can't then USE these ProRes files for anything. What the heck guys? I wouldn't have had clients deliver me ProRes encoded stuff if I knew that it didn't actually maintain its integrity.
    Please help out as soon as possible. Clients with deadlines are waiting on a solution that doesn't involve me spending 4000 grand on a new machine and FCP.

    As a test, try nesting the problem sequence in a new empty sequence then export that new sequence using QuickTime. Let us know whether or not that succeeds.
    Cheers
    Eddie

  • Any way to transcode more than one file at a time?

    I've noticed that Final Cut Pro X, when transcoding (for creating Optimized or Proxy media), only processes a single file at a time (whereas it can handle processing up to 5 audio files at once), and is only using about 70% of my processor(s) -- I have a dual-core Core i5 processor, 8GB of RAM, and about 500MB used by FCPX during the processing. So this thing should have PLENTY of processor and memory headroom to use, and it should certainly be able to transcode more than one file at a time. Yet it isn't nearly taking full advantage of the processors, or the RAM. Is there any solution to this problem?
    --Andy H.

    Try the 'add folder to library' option under File in the main menu.

  • Do I need to transcode .MTS (AVCHD) files before editing?

    I am quite new to all this, so forgive me if my question is unclear or just plain dumb.  I am shooting footage with a Panasonic GH2 which outputs clips as AVCHD. I understand that this is highly compressed.  I am perfectly able to edit these short clips (less than 1 min) in PP CS5 and export them as mp4 or .MOV files.  They seem to play back just fine in the Windows Media Player.  However, I am shooting footage destined for online stock companies and I am concerned that, without some form of decompression prior to editing, I will be degrading the footage too much. Is this the case? If so, what should I transcode to before importing to PP?  I am using Adobe Media Encoder as part of CS5 Production Premium.

    You are mixing a few things up.
    CS5.5 can edit AVCHD natively.
    For what MPE can do for you, read the link:
    http://blogs.adobe.com/premiereprotraining/2011/02/cuda-mercury-playback-engine-and-adobe-premiere-pro.html
    Put another way, I guess it's not good news if I have to do any color/exposure correcting, etc. in PrPro or After Effects, as I'll be degrading the footage further. Is that correct?
    Yes, but a certified card won't change that.

  • FCPX Project Render Settings - Can you edit in h.264 and Transcode/render only used clips on timeline to Prores during render?

    I have a question on the PROJECT RENDER settings in FCP X. It seems to me that one could theoretically import and edit entirely with original h.264 video files without needing to transcode to ProRes 422. Once you're done with your edit and want to get the added benefits of COLOR GRADING in the ProRes 422 color space, it seems that FCPX will automatically render your edit in ProRes 422 according to these preferences. In that case, a color grade could be applied to the whole edit and be automatically transcoded/rendered into ProRes 422 during the render process. After rendering, what would show up in the viewer and what would EXPORT would be the rendered ProRes files and not the original h.264 files. This saves a lot of the time and space of transcoding ALL your media, and in theory should enable you to edit NATIVE video formats like h.264 with the automatic benefits of ProRes during render. I'm assuming the render may take longer because FCPX is having to convert h.264 video files to ProRes 422 while rendering; this may be one drawback. But will your color grade actually use the 4:2:0 color space of the h.264 native media, or will it utilize 4:2:2 color space, since the render files are set to render to ProRes 422? Can anyone please confirm that this theory is correct and optimal for certain workflows? Thanks!

    Thanks Wild. That's what I thought - in that the render files would be converted to ProRes422 codec. So do you or anyone else think that there is an advantage to having the 4:2:0 original file be processed in a 4:2:2 color space?
    Yes there is an advantage, any effects and grading will look better than in a 4:2:0 space.
    Most professionals online seem to think so. Also - will rendering of heavy effects and color grading take longer using this method because it's having to convert h.264 media to ProRes during render?
    Yes, it will definitely take longer.
    Can anyone verify from a technical standpoint whether editing and color grading in this workflow will see the same benefits as having transcoded the h.264 media to ProRes in the first place?
    Same benefits from a final-product viewpoint; you lose on rendering time, though, and if you have lots of effects things will seem slow, as it will have to render from the h264 file rather than a ProRes file every time you make a change. This may be fine on a higher-end Mac, but I'm sure it just pummels an older, lower-end Mac to the point of being almost useless.

  • How can I edit the audio in a QT file without having to transcode it first?

    I have some 320x256 QT files that feature some background noises that I want to edit out and dub over with some new audio. Is there an app that will let me do this while retaining the files at their native resolution?
    The only method I've found so far is to import them into iMovie, which transcodes them to 720x576 DV files from which I can extract the audio, edit and re-export at 320x256, but I'd really like to avoid the transcoding step.

    QuickTime Pro can "extract" the sound track. This will create a copy you can then "export" to a suitable sound editing format (.aiff).
    Edit the .aiff file using your sound editing software, then open the file with QT Pro and export it to the original format used in the source movie. Add it "scaled" back into your original source movie and "delete" the older sound track. Use "Save As" and your new file will be complete.

  • Is it possible to change the "/messagebroker/amf/", from the view of the webserver?

    I have BlazeDS integrated into a J2EE application.  All method invocations from the Flash client flow through an Apache webserver until they reach their destination of the J2EE application.  The Flash client exercises several distinct methods through this BlazeDS service.
    My question is, from an Apache webserver perspective, is there a way for Apache to distinguish one method from another?  Right now, Apache views everything as a POST to the URI of "../messagebroker/amf".  The options seem to be:
    1)  Dig into the POST to find the method signature.  This seems scary given the POST has binary content.
    2)  Try to change the URI of "../messagebroker/amf" for each method.  I have no idea how to go about this.
    3)  Maybe include a method signature in the http header?  Again, not sure how to do such a thing in Flash/Flex.
    If anyone has addressed this requirement in the past, I'd be very grateful for some advice.

    Hi. I haven't heard of anyone trying to do this in the past. I'm just curious, why does the Apache server need to know what method is being called?
    I would think you could handle something like option 2) mostly on the Apache side. Just brainstorming here, but in your client application(s) you could manually create a ChannelSet with a channel that has the method name in the URI, so for example "../messagebroker/amf/method1" and "../messagebroker/amf/method2". The Apache server could then do whatever it needed to do for the individual methods and redirect all requests to "../messagebroker/amf".
    Here is a link to the BlazeDS documentation that deals with channels and endpoints. That has some information on manually creating a ChannelSet and channels on the client in case you decide to go that route.    
    http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=lcconfig_2.html
    Hope that helps.
    -Alex
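    A rough sketch of what that client-side piece might look like; the channel id, URI, destination name, and method name below are all placeholders that would have to match your own BlazeDS configuration and Apache rewrite rules:

        import mx.messaging.ChannelSet;
        import mx.messaging.channels.AMFChannel;
        import mx.rpc.remoting.RemoteObject;

        // One channel per method-specific URI, so Apache can tell the calls apart
        // before redirecting them all back to ../messagebroker/amf.
        var cs:ChannelSet = new ChannelSet();
        cs.addChannel(new AMFChannel("amf-method1",
            "http://yourserver/app/messagebroker/amf/method1"));

        var ro:RemoteObject = new RemoteObject("yourDestination");
        ro.channelSet = cs;        // calls through this RemoteObject hit .../amf/method1
        ro.method1();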

  • Transcoding for a vertical display

    I've been assigned the task of transcoding 16:9 for a flatscreen that is to be turned on its side.  To tackle this problem I've rotated the image to fit the "portrait" format and exported to ProRes.  Here's the kicker: the client has requested DVDs.  When I transcode from Compressor to the DVD format, I get terrible interlacing.
    I understand that when creating a DVD, de-interlacing is not necessary; however, with that coupled with the flat screen being on its side, I'm not really sure how to tackle this.  Is it because I've rotated the image that I'm getting this?  Any help is very much appreciated.
    With respect,
    Josh

    https://discussions.apple.com/message/22940201#22940201

  • Looks like a pulldown issue, duplicated frames are baked into transcoded files...

    Have a bit of a messy project atm...this started with a client call on my way back from a festival where I ran large format video screens as a VJ & camera switcher/editor (Sasquatch in NW USA) so I was tired from 5 solid days of work.  I dropped by their studio to examine an example clip…tired and thinking they had a good test case pulled up & ready to look at.
    Well, the first clip we analyzed ONLY had shutter speed issues.  Checking the source file/.mov it was apparent it was shot at a shutter speed closer to 1/1000 than 1/60, adding a bit of pixel motion blur in AE solved that...  In retrospect his export from FCP7 had the pulldown problems in the EXPORTED version of the clip, but we hadn't checked that (we watched the source clip play back and he sent the reference clip with me so I could match the edit).  So the issues that have been in almost all subsequent clips were not present in the initial analyzed file unfortunately!   When I got back to my workspace & the files were ingested it became obvious that there's a bad pulldown at work in most of the transcoded footage.  The client had mentioned it looking 'choppy' and rather than just being motion blur it was a case of duplicate frames.  Bad pulldown removal... 
    MOST of the footage as I have it is ProRes 4:2:2 LT 29.97p  with every 3rd frame duplicated (1 out of 3 duplicated rather than 1 out of 5).   The ProRes format didn't come from a digital recorder in this case, it was transcoded from the source media using a software tool to organize & collect the source media and compressor to transcode for editing.  Since these 2 episodes were left in the editing pile for months, the source files are obviously *no longer around*….
    In the past I've dealt with reverse pulldown issues from 24p & 24pA (where the frames are either interlaced or duplicated to achieve the pulldown) but these all give a 5 frame cadence.  Also when done correctly they don't destroy good frames and leave frame dupes.  What we have here seems more like improperly handled PsF (via HDMI or SD output from 1080p cameras) or pulldown incorrectly applied to 29.97 over 60i/59.94i, or...?   In any case these are destined for 60i broadcast, so simply interlacing them again to create the 3rd frame is out of the question, and frame blending isn't much better than the dupe frames.
    Removing every 3rd frame (since each 3rd frame is a copy of the 2nd) obviously gives a visual "jump" over the now-removed dupe frame, and leaves footage that has to be converted from 20p to 29.97, which looks horrible regardless of how it's done (conformed or interpolated).  Removing every 3rd frame from 29.97 is actually not too hard; as 2/3rds of the frames are still 'good', it's easy to apply a conform to the file headers bringing the rate down to 19.98p, bring that into AE and create a comp that's set to 'preserve frame rate when nested' under Advanced, then put that into a 29.97 comp. That works relatively well, but the missing frames STILL need to be replaced somehow...
    To make matters worse, some clips have a cadence that doesn't stay consistent (the repeated frames will 'jump' from every 3rd periodically), OR worse yet there are sometimes 4-6 duplicated frames in a row (???), and then other sequences have only occasional frames repeated (meaning they need to be identified manually by stepping frame by frame).  Some of this is potentially due to rounding errors in the pulldown, but there's got to be something more amiss as is evident by the other inconsistencies...very odd.
    In any case what do to about the 3rd frames that need to be 'repaired' or 'created'?  This is the crux of my issue, and software interpolation alone isn't going to cut it...  Since a simple pass on the entire clip(s) causes vast sections get time stretched (not just every 3rd frame synthesized but rather the whole thing interpolated), I tried going in and setting timestretch keyframes manually (so I could manually stretch the area where the 3rd frame needs to be 'created).  As the software applies motion techniques that obviously assumes the distance between frames in both directions is equal...well it just replaces 1 visual problem (dupe frames) with another (visual distortions).   This has been the case with ALL the software tried so far...
    I've tried AE's native higher-quality mode (rather than just frame blending, which will look awful), Twixtor (demo) in AE, Smoke (Autodesk), DaVinci Resolve (Lite version, but I have access to the GPU-accelerated version as well) & Apple's Motion.  All do a horrible job (FootageFixing directory examples in Dropbox) of handling this automatically, which I expected.  However, I had originally intended to simply 'replace' every duplicated frame using the interpolated footage; this resulted in WORSE matching to surrounding frames than frame blending, not better, due to the poor interpolation not matching up well on many frames (portions of the frame 'jump' out of alignment with both the previous & following frames; after analysis it acts like a spline with a midpoint that is not parallel to a line drawn between the start & end points).
    So dropping 29.97 to 20p then synthesizing every 3rd frame (until cadence jumps) is where I'm at.  Once done synthesized frames have to be checked and a huge % of them (when there's motion on a person in the scene or camera pans etc) have elements…or whole frames…that are incorrectly synthesized.  So pieces of surrounding frames have to be roto'd & composited into selective areas to 'patch' the portions that get 'squashed', 'stretched' or 'rippled' into position (VERY VERY noticeable).
    Lastly, due to a variety of shutter speeds used the motion blur across frames is also an issue, so once a shot has been correctly re-synthesized for the missing frames it must be evaluated for whether motion blur is required, which brings us back to the rendering process that was assumed at the start of this (in other words final output of each clip may need motion blur in post before delivery).
    At this point I'm left with doing the synthesis of the 3rd frame and 'repairing' the areas of that which distort too heavily, or duplicate (some of the camera pans are done to track high speed motion, birds flying through frame etc)...
    Anyone have any interesting techniques I could try, beyond telling the studio I'm working with that they're hosed (that discussion is coming but I'm going to show best efforts and the timeframe required to achieve that across all shots/edits)....

    Oh and the easier way to removing the dupe frames was just to drop it to 19.984, set (nested) comp advanced settings to 'preserve framerate when nested' and watch for the occasional rounding error when back in a 29.97 container composition.  It would be nice to be able to specify 29.976...
    In any case the missing frame needs to be replaced, not removed.  The visual jump in that gap actually looks to be closer to 2 frames when the motion of objects is estimated by hand.
    Also (as I think I mentioned) other clips seem to have even further oddities.
    The reality is that I can actually create smoother motion by doing a lot of hand-massaging of the footage (scaling previous frames in simple pans, hand-rotoscoping/masking/compositing in sections of frames around missing ones to synthesize motion in onscreen talent & background objects, things tracked across the screen by the camera, and so on).
    But that's a heck of a lot of work and probably beyond what's achievable at present I suspect...

  • Zend AMF Data Service Return Problem

    Hi Folks,
    I am working with FB4 and Zend AMF/PHP and MySQL.  I began integrating the PHP stuff using the great article by Mihai Corlan called 'Working in Flash Builder 4 with Flex and PHP'.  I followed all the steps exactly, aside from creating my own app-specific PHP classes and functions, etc.  I 'hooked up' the Zend stuff just like the article, created a text datagrid, just like the article, and voila, it worked.  I then tweaked it a bit and interwove it into my 'real' component.  So far, so good.
    Then I created a second PHP class with a different 'get data' type of function.  It queries a different table in MySQL, but is essentially the 'same' as the query/function in the initial PHP class.
    In FB, in the Data Services window, I choose the 'Connect to Data/Services' function, just like the first time.  I then find/select my PHP class file and FB 'interrogates it' enough to show me the function that exists in the class.  I 'finish' the operation and it adds a new 'service' to the list of services in that window.  Again, so far, so good.
    The problem comes when I try to 'test' the service or 'configure return types' (which basically requires a 'test' operation anyway).  I can enter the 'input' params just fine, but when I try to execute the call, I get the following error:
    InvocationTargetException:There was an error while invoking the operation. Check your operation inputs or server code and try invoking the operation again.
    Reason: An error occured while reading response sent by server. Try encoding the response suitably before sending it. e.g. If a database column contains UTF-8 characters then use utf8_encode() to encode its value before returning it from the operation.
    I don't know where to go after this.  Again - the 2nd PHP class is essentially identical to the 1st.  The function in it is essentially identical, differing only by the input params, the name of the function and the actual SQL it sends to MySQL.  There is no special text, no special characters, no image stuff, nothing.  I do not 'encode' the results of the function in the first class - in fact the code in the second class is practically identical to the first.  I do not know what the error is talking about.  My guess is that it's more of a generic message.
    I can debug the PHP code just fine from within a separate instance of Eclipse.  The function runs/returns just fine - an array of PHP-defined objects (simple strings).
    Any insights or advice would be welcomed.   Thank you,
    -David Baron

    Thank Jorge, but that was not the issue, though, it may be related.
    I checked the mySQL my.ini file, and there was already an entry for:
    [mysql]
    default-character-set=utf8
    I added the 'default-collation=utf8_unicode_ci', like you suggested, but that didn't do anything.
    I checked the Apache httpd.conf file, and added the following line 'under' the "DefaultType text/plain" line:
    AddDefaultCharset UTF-8    but that did not do anything.
    I checked my mySQL database, all the tables involved.  They were already at UTF-8 (default).  However, some of the 'varchar' columns were defined as 'latin 1-default collation'.   I changed them all to utf-8 (default table collation), but that did not help either.
    Finally, I found the problem, though I don't really know if it is "my" problem, or ZendAMF's problem, or Adobe's problem.
    It turned out that 'some' of my data had a 'bad' character in it.  Specifically, I had 'copied and pasted' some data from MS Word into MySQL Workbench.  Some of the data included the 'ellipsis' character - you know, when you type "..." (dot dot dot) in MS Word, it replaces the three periods with a single ellipsis character.  Although PHP could easily query and assemble this data into a nice object array, I noticed that that character showed up (in PHP's debugger) as a 'box' character, meaning "bad character".  Thus, I guess, Zend AMF and/or Flash Builder could not 'bring over' and/or deal with this type of character.  As soon as I replaced the few instances of that character with three periods, everything began to work perfectly.
    So... what to do about this?  I thought I was through with silly encoding/decoding of data when I left JavaScript and HTML behind in moving to FlashBuilder technology.  Am I really going to have to worry about this kind of thing?  Or might this be a bug/deficiency somewhere in the stack?
    Thanks for your help,
    -David

  • Design Choices and is LiveCycle needed? Best practices for using RTMP/AMF over HTTP/XML communication

    Hi,
    I am new to Flex/RIA. I am exploring different design choices, especially in client-server communication. On the client side we will be using a Flash-based RIA (using ActionScript).
    There will be some simple forms (for login, registration, payments, etc.) and some simple reports, including several graphs and charts. Each chart might have 1000 to 1500 data points. There is no video or audio content as such. On the server side we have Servlets, a Java API and some EJBs to provide the business logic and real-time prices/content/data (a price update is usually every 10 seconds). Some of the content will be static as well.
    I have following questions in my mind. Is it worth it to use RTMP/AMF channels for the followings?
    1. For simple forms processing (Mapping Actions scripts classes to Java classes). Like to display/retrieve/update data for/from registration forms.
    a. If yes, why? Am I going to be stuck with LCDS? Is it worth it? What could be the cons for heavy usage/traffic scenarios
    b. If not, what are the alternatives? Should I create web services? Or are servlets alone sufficient (i.e. an HTTP+Java based server side with no LCDS+RTMP+AMF)? All forms need to communicate on a secure channel.
    2. For pushing the real-time prices/content which we may need to update every 15 seconds on the user interface using graphs and charts. Can I do it in some standard J2EE/JMS way with an RIA (Flex) front-end? I.e. the Flash application will keep pulling data from some topic. Data can be updated after a few seconds or a few minutes, which can't be predicted.
    3. Are there any scalability issues for using RTMP? What happens if concurrent users increase 10 times within a year?
    4. What are the real advantages of using RTMP/AMF instead of simple HTTP/HTTPS probably using xml based objects
    5. Do I need to use LCDS if I am using AMF only on the client side? Basically, I mean if I am sending an object in the form of XML from a servlet, can some technology in Flash (probably AMF) on the client side map it to an ActionScript object?
    6. What are the primary advantages of using LCDS in a system? Is there any alternate solutions? Can I use some standard solutions for data push technologies?
    I would like that my server side implementation can be used by multiple types of clients e.g. RIA browser based, mobile based, third party software (any technology) etc.
    I appreciate if you can kindly refer me to some reading materials which can help me deciding the above. If this is not the right place to post this message then please do refer me to the place where I can post such questions.
    Thanks and Kind regards,
    Jalal

    Hi Jalal,
    Let me see if I can help with some of your questions
    1. Yes, you can use LCDS for simple forms processing. Any time you want to
    move data between the Flex client and the server, LCDS (or its free Open
    source cousin BlazeDS) is going to help. I would expect you would use the
    mx:RemoteObject MXML tag to invoke server side code, passing it the form
    data input by the application user.
    2. If you need to push near real-time data, LCDS gives you the RTMP channel
    which can scale quite nicely. You can then use the mx:Consumer MXML tag to
    subscribe the clients to the messages, which can come from almost anywhere,
    including JMS topics or queues.
    3. RTMP (included in LCDS) is the best option for scaling to tens of
    thousands of users, and the LCDS servers can be clustered to provide better
    scaling.
    4. The AMF3 protocol used over the RTMP channels performs much faster than
    simple XML over HTTP. See this blog posting for some tests:
    http://www.jamesward.org/census/.
    5. If you are sending a Flex application XML, then I would recommend using
    the E4X API to work with the XML. This is a pretty nice and powerful way to
    work with XML. If you want Actionscript objects (and probably better
    performance), then using AMF serialization to Actionscript objects is the
    way to go.
    6. Primary advantages? There are many, but mainly you can avoid thinking
    about the plumbing and concentrate on solving your application and business
    logic problems.
    Hope this helps you a little
    Tom Jordahl
    Adobe
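    For illustration, the ActionScript equivalents of the two tags mentioned in points 1 and 2 look roughly like this; the destination names, method name, and form data are placeholders that would need to match your LCDS/BlazeDS configuration:

        import mx.messaging.Consumer;
        import mx.messaging.events.MessageEvent;
        import mx.rpc.remoting.RemoteObject;

        // 1. RPC-style call to server-side code (the mx:RemoteObject tag in ActionScript form).
        var registrationService:RemoteObject = new RemoteObject("registrationDestination");
        registrationService.register({user: "jalal", plan: "basic"});   // hypothetical method and data

        // 2. Subscribe to pushed price updates (the mx:Consumer tag in ActionScript form),
        //    e.g. messages fed from a JMS topic through the RTMP channel.
        var priceFeed:Consumer = new Consumer();
        priceFeed.destination = "priceTopic";
        priceFeed.addEventListener(MessageEvent.MESSAGE, onPrice);
        priceFeed.subscribe();

        function onPrice(e:MessageEvent):void {
            trace("new price data: " + e.message.body);
        }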
