FX Room and Interlaced Footage...

How important is it to create a separate node tree for each field of interlaced footage in the FX Room?
I have run some tests and found that sometimes it works well without doing this - what is the deciding factor as to whether a single node tree will be successful with interlaced footage?
The Color manual specifies that a node tree must be created for each field, but I'm just wondering if and when this can be avoided.
Is this a bug with Color or is all interlaced footage usually handled this way when working with node trees?
Thanks.

The node tree is borrowed from Shake's topology, as it interfaces nicely with scripting,
although I have seen Shake forget a line here or there. And yes, you have to deal with interlacing on a more discrete level in applications that give you real power, instead of plugins. It's the first thing you deal with on FileIn import, again using Shake as an example.
Computers are notoriously bad with interlacing/deinterlacing -- it's because they have no sense of time or sequence. All the frames exist at once, so what differentiates interlaced from progressive? Nothing. But it's not like selecting "render out progressive" is going to help, either... that would all happen post-ColorFX, and the damage will have been done. The Shake manual has a great section on the subject.
In the ColorFX room it is pretty much mandatory. There is a good illustration of a deinterlace / process / re-interlace tree in the manual: pages 247-248 of the Color 1.0 user manual PDF make short but interesting reading. You will be able to get a single node tree to work, as long as its I/O is sorted out.
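To visualize what that tree is doing, here is a minimal sketch of the deinterlace / process / re-interlace round trip, written in Python with NumPy arrays standing in for frames. It is only an illustration of the concept from the manual, not Color's node graph or render code, and the field order (upper vs. lower first) is an assumption you would match to your footage.

import numpy as np

def split_fields(frame):
    """Return the (upper, lower) fields of an interlaced frame (H x W x C)."""
    return frame[0::2], frame[1::2]

def weave_fields(upper, lower):
    """Re-interlace two fields back into a full-height frame."""
    out = np.empty((upper.shape[0] + lower.shape[0],) + upper.shape[1:], dtype=upper.dtype)
    out[0::2], out[1::2] = upper, lower
    return out

def grade(field):
    """Stand-in for the per-field 'process' stage (here just a simple gain)."""
    return np.clip(field * 1.1, 0.0, 1.0)

frame = np.random.rand(576, 720, 3)                  # one PAL-sized interlaced frame
upper, lower = split_fields(frame)                   # deinterlace into two fields
result = weave_fields(grade(upper), grade(lower))    # process each field, then re-interlace
assert result.shape == frame.shape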
jPo

Similar Messages

  • 1080i and 720p footage going to 576i DVD?

    Hi.
    Just getting my head around the best way to attack this.
    I have a pile of 1080i and 720p footage which needs to be edited together with the final destination being a 576i DVD.
    Now, being in a country that uses interlacing, and as at least half of my footage is interlaced, it seems a bit of a waste to convert the 1080i to 720p and then convert it all back to interlaced again for the DVD encoding. So I'm wondering about the merits of an intermediate 720i timeline: the same settings as the 1080i but with 1280x720 dimensions.
    Is this a good/bad/stupid idea? Any better way to do this?
    Thanks a lot

    1.) They will be the same aspect ratio (16:9, square pixels), but each will take up more or less room within the frame than the other.
    2.) If you start a 720p sequence, then yes, anything larger than 1280x720 will need to be reduced to fit entirely within the frame.
    3.) You can select "Scale to Frame Size" by right clicking on the video clip, or manually resize the video through the source video's Effect Controls box. In addition, with the advent of CS4, Adobe has added a nifty little feature that was absent from earlier editions of Premiere, including CS3, and that is "Maximize Render Quality." This feature is checked from within the sequence settings dialog box, or selected from your export settings, and ensures that any footage that is scaled to fit the native resolution of the sequence will be handled appropriately so that it maintains maximum visual sharpness and resolution.
    Keep in mind that there are different flavors of 720p and 1080i, so in future posts it would be helpful to designate which one you are referring to. For example, full-raster HD is 1920x1080 or 1280x720, with a 16:9 aspect ratio and a pixel aspect ratio of 1.0, or 1:1. Panasonic DVCPROHD uses non-square pixels and is actually 1280x1080 (1.5 pixel aspect ratio) or 960x720 (1.33 pixel aspect ratio), and Sony's HDCAM/XDCAM HD is 1440x1080 (1.33 pixel aspect ratio). Knowing exactly what format you are shooting in and what native resolution you will be editing at will help maximize your results within the editing software and ensure that your video is not visually distorted. It will also give you realtime playback direct from the timeline without the need to render, presuming your system is up to spec.
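    Just to make those numbers concrete, here is the arithmetic behind the pixel aspect ratios above (displayed width = stored width x PAR), sketched in a few lines of Python; the format labels are only shorthand for the cases mentioned, and 4/3 is used where the post rounds to 1.33.

    # Displayed width = stored width multiplied by the pixel aspect ratio (PAR).
    formats = {
        "Full-raster 1080 (square pixels)": (1920, 1080, 1.0),
        "Full-raster 720 (square pixels)":  (1280, 720, 1.0),
        "DVCPROHD 1080":                    (1280, 1080, 1.5),
        "DVCPROHD 720":                     (960, 720, 4 / 3),
        "HDCAM / XDCAM HD 1080":            (1440, 1080, 4 / 3),
    }
    for name, (w, h, par) in formats.items():
        print(f"{name}: stored {w}x{h}, displays as {round(w * par)}x{h} (16:9)")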

  • What project preset do I use? AVCHD and DSLR footage being used in same sequence

    I am using a Canon DSLR T3i and a Panasonic HC-X920. They can both shoot native 24p, which is how I will be recording. When I set up my CS6 project, what settings should I use? I want to mix the two sources on the same timeline, so I'm confused about how to do this, because in the sequence settings I can only choose either AVCHD or DSLR. I want to do simultaneous multi-camera editing on the same timeline, so it's not like I can set up two different sequences with different settings, because I want to be able to cut back and forth within the same sequence while using both cameras (the problem is... two different types of footage).
    Thank you - Mark

    Thank you so much for your input. Yes, I should have been more specific when it comes to the frame rate; it is indeed 23.976 for the T3i as well as the Panasonic. I have mixed T3i footage recorded at 23.976fps and then tried to mix in a Canon HF M41 (AVCHD) camcorder, which records in 60i "supposedly", although Premiere says it is 29.97 when I bring it into the editor. Anyway, when I would mix these two formats it looked like crap. The preset was set up for the DSLR, and the HF M41 played like it was in slow motion even after rendering and exporting. I would get the red line above my M41 footage because it didn't match the preset settings. I would then render it thinking it would look right, but... nope. I just bought the Panasonic HC-X920 today from B&H; it has a 30-day return policy, so I will see how the T3i and Panasonic work together within Premiere. I purchased the Panasonic because the Canon M41 footage looked so crappy when it was mixed with the T3i in the timeline (because of the different fps and interlacing). I used Adobe Media Encoder but got the same crappy slow-motion, jittery footage from the M41 when I tried to convert it to a 23.976fps .MOV file. Since the Panasonic shoots native 23.976, the two "should" work better together. My only concern is the difference in codecs, because the frame rate should be identical. I don't want to get that red render bar when I bring the Panasonic footage into a timeline that was set up for a DSLR, or vice versa. If I do, I'm afraid of getting the slow-motion effect again. I must have the autofocus from a camcorder when I'm shooting on a jib or other tough moving shots like a steadicam.

  • Interlaced footage not rendering correctly

    Hi Everyone,
    I have made a series of 10-second ads that all share the same graphical ending, which contains a logo and text that flies in and then settles. The ending was generated purely in After Effects but is a little time-consuming to render, so I created a pre-rendered QuickTime movie in ProRes 4444, as the ending needs an alpha channel. This pre-render was rendered as a PAL interlaced, upper-field-first clip; I have re-imported the render, which is correctly interpreted as upper field first, and replaced the source in my new comps. When I render out a new ad as interlaced, the end frames after the logo has settled look as if interlaced footage had been interpreted in a progressive comp, and give me jaggy edges instead of smooth anti-aliased ones around the logo and text, although the preceding motion seems smooth.
    I have had to revert to the original un-rendered composition thus drastically increasing my render times but the output is correct.
    To check whether this is a codec issue I re-rendered the ending to the Animation codec, but this produces the same results.
    Has anyone else experienced anything like this ?
    I am using After Effects CC 2014 on Mac OS X 10.9.4
    Cheers
    Nic

    OK, I have partially solved this issue. I had forgotten that I had sped up the duration of the ending by 50% so it would fit correctly. When I use the original comp (sped up in the master comp) it renders correctly, but when I use the original-length interlaced pre-rendered file (which is then sped up by 50% in the master comp) I get the error I described in the OP.
    What this seems to mean is that After Effects only uses a single field from each frame of an interlaced movie if you speed it up with AE's time stretch, effectively halving the vertical resolution of the frame and producing the jaggy effect visible on the curves of the text.
    Is that how AE has always handled this, and is there a way around it with frame blending or the like? Although the clip I rendered out that was sped up before rendering now works correctly when I re-import it, I am concerned about using time stretch on other interlaced footage in the future.
    cheers
    Nic

  • Interlaced footage in secondary preview - wrong field order.

    I just got CS5 Production Premium and an EVGA GTX 465, which is basically a GTX 470 with fewer CUDA cores (352 instead of 448). I added the card to the list in the Premiere CS5 txt file (the so-called hack) and everything seems to work perfectly except for one thing. When I play back interlaced footage that I shot with either of my two AVCHD cameras (Canon HF100 and Panasonic AG-HMC40), the footage on the TV set (which is set up as my second monitor) will play with the fields in the right order, but after a few seconds it will start playing as if it were progressive, or as if the fields were in the wrong order. Then, eventually, it will go back to playing the fields in the right order, then the wrong order, and keep going back and forth like that. It switches every ten or twenty seconds. This happens both when the footage is played back in the source window and in the timeline.
    Thanks to the GPU acceleration, it always plays smoothly, even with added effects and with different tracks at different opacity levels (I tried three tracks, one at 25%, another at 50% and the other at 100%, and it played without skipping a frame), but I don't understand why it doesn't always send the right field order. I wonder if it has anything to do with the cable that goes to the TV set, since it's DVI on the end that connects to the video card and HDMI on the other end. Regardless, I purchased the cable from Monoprice and it's a very thick, well-shielded cable. The card comes with a mini HDMI output, but the cable that comes with it is not long enough to reach the TV. Still, I'm not sure that would make a difference.
    Is this happening to any of you? Could this be a problem with the card itself, with my footage, or with Premiere?
    Note: I had switched the "Multi-display/mixed-GPU acceleration" to "Compatibility mode" in the Nvidia control panel and it didn't make a difference in anything related to CS5, whether it's this particular problem or anything else.

    JSS1138 wrote:
    We should probably just get the details, instead of speculation on what hardware is being used.
    Oh, the system is pretty fast, but Premiere has always taken a huge toll when external preview is enabled; it was like that in CS3 and CS4 as well. But here are my system specs just in case:
    AMD 1090T @3.8 Ghz (stable)
    16 GB of DDR3 G.Skill RAM
    Gigabyte GA-890FXA-UD5
    Western Digital 1.5 TB Black Edition (as one of the video drives; the OS drive is a standard WD)
    EVGA GTX465 1 GB
    So while this may not be the fastest computer in the world, it's more than fast enough for HD, with 6 cores and 16 GB of RAM.
    This may be something that's wrong with Premiere's design, but maybe there's a workaround, which is what I'm trying to get to. I've been trying many things in the "Manage 3D settings" section of the Nvidia control panel, but nothing seems to work. VSync on or off, triple buffering, etc., etc.; the problem is still there.
    Obviously, if I right-click on the monitor window and choose either the first or second field instead of "Both Fields", I get just that, but I don't see why I should have to edit interlaced video with just one field instead of both.

  • Just when I thought I knew how progressive and interlaced work....

    Hey everyone.
    I thought I knew the difference between progressive and interlaced, but I've still got a few questions. For the most part... I get it. Progressive is like one solid picture. Interlaced breaks the picture up into interlaced lines and two fields, odd and even. Great, that much I've got.
    Here's my issues.
    This is actually a compressor question but it starts off with Final Cut.
    I shot some footage on Mini DV NTSC 29.97 interlaced.
    When I edit it in Final Cut things look pretty good. I see a little interlacing in the preview window, but that's OK, it's the preview window. I know there is a filter to de-interlace my footage, but what is the Field Dominance option in my sequence settings? There are options for Lower, Even and None. Lower and Even, I assume, deal with interlacing fields; I assumed None meant progressive. What's the point of needing a filter if I can change the field dominance within my sequence settings? Does it do the same thing?
    If I export this footage to compressor I use the H.264 codec from inside compressor.
    I noticed that I can turn on the Frame Controls tab of my setting. I then have access to the resizing controls. Resize Filter... not sure what that is. Then Output Fields; I change that to Progressive, which I assume deinterlaces everything... but then there is another option right below that to Deinterlace...? Didn't I just do that by saying there are no fields? On a final note, after it exports and I go to look at the .mov, the footage is still interlaced.
    One last thing with Compressor. The above issues are all part of my settings inside the inspector for the actual "codec" I have attached to the project. If I click on the project itself (not the settings of the codec, but the project itself), the information inside my inspector window changes. Now instead of Codec Settings I have A/V Attributes... and there is an option there for Native Field Dominance? Which one do I change? How many field dominances are there? I have one in Final Cut, another in the A/V Attributes of my project in Compressor, and a third in the codec settings.
    I hope these questions make sense. They are confusing issues to me so I'm not quite sure how to ask them.
    Thanks a lot.
    -Drew

    It's not surprising that you're confused. It's a pretty confusing subject. Hopefully, I can help you get some sense out of it.
    Most of your questions stem from not quite understanding the definitions of "deinterlacing", "progressive frame rate", and "field dominance". Let's start with the last since it pertains to your first question.
    When you have interlaced footage, you have two fields (as you noted) per frame. One field is drawn on the odd (or upper) scan lines of the TV: 1, 3, 5, 7, and so on through 479. The other field takes the even (or lower) scan lines: 2, 4, 6, 8, through 480. Together they are woven into a single frame.
    The field dominance in your sequence settings determines which field is drawn first. DV, and NTSC in general, is lower-field dominant, and if your footage is DV, you should not change this. Interlaced HD is typically upper-field dominant; again, under most circumstances, this should not be changed. If you do change it, what will likely occur is a "tearing" of the image in movement, particularly noticeable in pans or fast action. None draws each line successively, which is how progressive footage needs to be interpreted.
    However, progressive does NOT mean deinterlaced, nor does deinterlaced mean progressive. If you use the deinterlace filter in FCP on your DV footage, you merely remove one field from your footage and REPLACE it with a copy of the field you kept. In other words, if you set the deinterlace for Upper, then the clip will copy the image in the Lower field and paste it into the Upper field. The footage will still be interlaced.
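    To make that concrete, here is a rough Python/NumPy sketch (purely an illustration, not FCP's actual filter) of what a duplicate-field "deinterlace" does to a single frame: one set of scan lines is overwritten with a copy of the other, so motion tearing disappears but the frame keeps only half its vertical detail.

    import numpy as np

    def deinterlace_by_duplication(frame, keep="lower"):
        """Keep one field and paste a copy of it over the other field's scan lines."""
        out = frame.copy()
        if keep == "lower":
            out[0::2] = out[1::2]    # overwrite the upper-field lines with the lower field
        else:
            out[1::2] = out[0::2]    # overwrite the lower-field lines with the upper field
        return out

    frame = np.random.rand(480, 720, 3)                        # one NTSC DV-sized frame
    flattened = deinterlace_by_duplication(frame, keep="lower")
    print(np.array_equal(flattened[0::2], flattened[1::2]))    # True: both fields are now identical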
    In Compressor, changing the output to Progressive in the Frame Controls will make your interlaced footage progressive. This should always be done when the final viewing medium is a computer. You see, computer monitors always display things progressively; they have trouble with interlaced frames. They are not built like TVs. If you are going to TV, then you want to keep it interlaced (unless you're going for the progressive look), as that is how standard-def TVs are designed to display video.
    The deinterlace option in Compressor is the same as the filter in FCP, but as you know now, that's not the same thing as making a file progressive.
    Finally, the A/V settings for the clip you placed in Compressor are the same as the sequence settings in FCP. They help the program determine how the file should be displayed. If you have DV footage and you switch the dominance in that setting or in FCP, then you risk having your footage look jittery and generally no good.
    I hope this clears things up for you a bit.
    Andy

  • Mixing 720p and 1080i footage in a sequence

    What's the best choice for editing a movie composed of clips shot in 720p30 and clips shot in 1080i60? "Upscale and interlace" the 720p footage, or "downscale and deinterlace" the 1080i footage?
    Using Easy Setup, one has the choice to create either a 720p or a 1080i project, and when adding a clip to a sequence in this project, the clip is automatically converted to the project's format. Therefore, this is something that needs to be decided ahead of time. Or is it possible to change the project's settings after attempting to export the sequence, if the result is disappointing?

    All right, but assuming that I have the choice of output format (and I do), what is most likely to give the best overall result? Interpolating (720p output) or extrapolating (1080i)? Intuitively, I would think that interpolating is likely to produce fewer artifacts than extrapolating. So on this measure, a 720p output of a mixture of 720p and 1080i footage would be more advisable than a 1080i output.
    The good news is that you are saying I can change this in my sequence at any time.

  • Using 24p footage and 60i footage together?

    Hello,
    I am working on a project that will combine 24p footage and 60i footage. What would be the best way to go about combining these two? It's for a short that is shot in a style similar to Cloverfield. I want to use 60i footage for the handheld footage shot by the actors who are capturing their experience. I'd prefer to have that footage shot in 24p as well and degrade the quality in post, but I don't want to run the risk of them dropping my DVX100a and destroying it. I don't have any experience with 60i, or with combining two kinds of footage, so advice is much appreciated!
    Thanks!

    > that they are not true 30p but really 60i rendered to 30p
    Well, I certainly can't speak for all of these cameras, but I have played around with some of this "30p over 60i" footage -- though most of it was from HDV cameras, not AVCHD. Avoid it if you can. Basically, the luma is progressive and the chroma is interlaced.
    I have a function (it's actually part of the dv2film and hd2sd functions) that will deinterlace only the chroma, so you can recover largely true progressive output... but this is an unmitigated pain in the *** to deal with all the time, and you certainly lose at least a little chroma resolution. Avoid :)
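    For anyone wondering what "deinterlace only the chroma" looks like in practice, here is a minimal Python/NumPy sketch assuming planar YUV frames. It is not the dv2film/hd2sd code, just the general idea: the already-progressive luma passes through untouched while the chroma planes are blended across field lines.

    import numpy as np

    def deinterlace_chroma_only(y, u, v):
        """Average each chroma line with the one below it; luma passes through unchanged."""
        def blend_lines(plane):
            below = np.roll(plane, -1, axis=0)
            below[-1] = plane[-1]                      # repeat the last line at the bottom edge
            return (plane.astype(np.float32) + below.astype(np.float32)) / 2.0
        return y, blend_lines(u), blend_lines(v)

    # Full-resolution planes for simplicity; subsampled chroma works the same way.
    y = np.random.rand(480, 720).astype(np.float32)
    u = np.random.rand(480, 720).astype(np.float32)
    v = np.random.rand(480, 720).astype(np.float32)
    y2, u2, v2 = deinterlace_chroma_only(y, u, v)
    assert np.array_equal(y, y2)                       # luma is untouched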

  • Slow-mo interlaced footage troubles

    I use an HV30, which can shoot 60 FPS interlaced footage to tape. I want to play it back at half speed, at 30 FPS. When I tried putting it at half speed, it just played back at 15 FPS, still interlaced. When I deinterlaced it, it still played back at 15 FPS. I guess when you deinterlace, it just cuts out all the odd scan lines instead of making separate frames from the even and odd scan lines. Does anybody know how I can make separate frames from the even and odd scan lines so that my 60 FPS footage plays back at half speed like it should, at 30 FPS? Thanks much!

    No no, see, it definitely captures 60 images every second, but it combines each consecutive even and odd pair into one interlaced frame, which it does 30 times per second. Every time you look at one interlaced frame, the even lines show one image, the odd lines another. I want to separate the even-numbered lines into one frame and the odd-numbered lines into another. I will then have 60-frames-per-second video. I just need to know how I can do that; then I can slow it down to half speed, achieving 30-frames-per-second slow motion. Does that make sense?
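    What's being described is usually called "bob" deinterlacing: split each interlaced frame into its two fields and line-double each field, so 30 interlaced frames become 60 progressive ones, and a half-speed conform then plays back at a smooth 30 FPS. A minimal Python/NumPy sketch of the idea (the upper-field-first order and the 1440x1080 HDV frame size are assumptions chosen to match the HV30):

    import numpy as np

    def bob_deinterlace(frames):
        """Yield two line-doubled progressive frames for every interlaced frame."""
        for frame in frames:
            for field in (frame[0::2], frame[1::2]):   # upper field, then lower field
                yield np.repeat(field, 2, axis=0)      # double the lines back to full height

    interlaced = [np.random.rand(1080, 1440, 3) for _ in range(30)]   # one second of interlaced frames
    progressive = list(bob_deinterlace(interlaced))
    print(len(progressive))                            # 60 progressive frames from 30 interlaced ones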

  • Limitations of editing interlaced footage

    I have a program timeline full of RED and P2 footage in both 29.97fps and 23.976fps for a program that needs to be delivered in 1080i. I have been advised to bring all of my footage up to 29.97i by adding 3:2 pulldown in AE; I was told that AME does not do a good job of adding the pulldown. I would then edit this interlaced footage to a final in Premiere.
    It was also said that "I would then be subject to all of the limitations of editing interlaced footage."
    I don't know what that means. Can anybody help?
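    For reference, this is the field cadence that 3:2 (strictly 2:3) pulldown adds, sketched in a few lines of Python; the frame letters and cadence phase are illustrative only, and the real conversion in AE also involves field order and render settings.

    def pulldown_fields(frames):
        """Spread each group of four progressive frames across ten fields (2:3:2:3 cadence)."""
        cadence = [2, 3, 2, 3]                       # fields contributed by frames A, B, C, D
        fields = []
        for frame, count in zip(frames, cadence * (len(frames) // 4)):
            fields.extend([frame] * count)
        # weave consecutive fields back into interlaced video frames
        return list(zip(fields[0::2], fields[1::2]))

    print(pulldown_fields(list("ABCD")))
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]  -- five video frames from four film frames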

    Ask your question on the Premiere Pro, AME or AE forums, respectively. There's no point whatsoever posting it here; this forum only deals with general suite issues and pre-sales questions.
    Mylenium

  • What is the best way to import and export footage from the 5D Mark II?

    Hello,
    I've just finished shooting what I am considering to be my directorial masterpiece.  Shot it on the Canon 5D (1080p, 24fps), and the footage looks amazing.  Now I am ready to start editing and have been using Premiere lately, but I have yet to figure out the proper pipeline.  I want to know the best way to retain resolution before I delve into this project.
    My questions:
    1)  What is the best way to start a new project and import the footage without having to render whilst editing, so as to retain all resolution and originality of the source footage?
    2)  What is the best way/ codec/ format to export this same footage once editing is complete so as to retain that crisp 1080p for which the 5D is so recognized?
    3)  What is the best way/ codec/ format to import and export/ render between Premiere and After Effects?  I am speaking mostly of VFX and color correction.  I also have some 30fps footage that I intend to slow down in AE and then import into Premiere.
    I know this is pretty broad, but as a solo filmmaker I really need someone's guidance.  I rarely ever finish my films with the same, crisp look as the footage.  I need pipeline help, and really appreciate it!

    1. Follow the advice above. Also, use the Media Browser to import the footage, in case you have spanned media files.
    2. It largely depends on what you wish to output to: Blu-ray, web, etc. This FAQ gives the best answer: What are the best export settings?
    3. Use the Replace with Adobe After Effects Composition function.

  • Can I have 2 HD DVRs in the same room, and control them separately?

    New FIOS TV being installed in two weeks, making the switch from Dish Network. 
    If I have two DVRs in the same room, and two remotes, can the remotes be set up to work with each DVR separately, or will one remote cause both DVRs to respond?  With my Dish Network DVRs, I can set a remote code for each; can I do the same with the two DVRs I'll be receiving?
    Thanks!

    DallasTX wrote:
    Replace one of your Verizon DVRs with a TiVo.  I've had a Verizon 6416 and a TiVo HD with an upgraded 500 GB hard disk for about two years now.  I prefer the TiVo, especially for the program guide information.
    The TiVo doesn't do Video-on-Demand, but you would still have the Verizon DVR for that.
    Since you can control multiple TiVos from a single remote in the same room, you could add a second TiVo if you needed three DVRs.
    As it seems that all the shows I want to watch are on at the same time, having two DVRs (4 tuners) would resolve the "what to record" problem I keep running into.  The fact that you can control two TiVo DVRs with one remote is a great plus.  However, how did you hook both of your DVRs to one set?

  • How can I transfer video taken with my iPhone to my iPad WITHOUT using a computer or iTunes? I'm going to Europe for a month and want to use my iPhone to take pics and video, then come back to the hotel room and use my iPad to edit/store each evening.

    How can I transfer video taken with my iPhone to my iPad WITHOUT using a computer or iTunes? I'm going to Europe for a month and want to use my iPhone to take pics and video, then come back to the hotel room and use my iPad to edit/store each evening. I don't want to use my iPad to take the video...too large/bulky. And I WON'T have a computer with me...I purchased the iPad to take with me so I could use the RDC to my home computer and avoid taking my computer at all. Is this possible?

    Here is a cheaper solution than the camera kit, and a lot easier if you have WiFi available.
    I use the PhotoSync app (I think it's $1.99); it will transfer videos and photos over WiFi to any other iOS device or even a PC/Mac. It's a great app, since I like shooting video and photos on my iPhone and transferring them to my iPad 2 without syncing through a computer or using cloud-based storage.
    You can even send photos/videos from your Mac/PC to your iOS devices that way too. Makes it so much easier.
    Here is the link:
    http://www.photosync-app.com/

  • I have a hard drive for CD storage that needs to connect to the Ethernet router. Since my router is not in this room but in another room, I want to use my Mac as a router for the drive and share the WiFi. How do I do this?

    I have a hard drive for CD storage that needs to connect to the Ethernet router. Since my router is not in this room but in another room, I want to use my Mac as a router for the drive and share the WiFi. How do I do this? I have tried System Preferences -> Sharing and shared the internet connection to Ethernet, but I can't see the device in Finder.

    Djembe wrote:
    UEFI (Unified Extensible Firmware Interface) boot requires a GUID Partition Table (GPT) as opposed to the older Master Boot Record (MBR). If your existing drive is formatted as MBR, you will need to adjust the BIOS settings to enable legacy boot in order for it to work properly.
    Is there a performance difference between GPT and MBR? If GPT is better, I do not mind formatting the drive with it.
    5. No special drivers are needed.
    Thanks. What about the thunderbolt port?
    7. I think Lenovo estimates 6 hours.
    Lenovo says 6 hours with the 6-cell battery on its website.
    BrendaEM wrote:
    Hi,
    There was a serious BIOS/UEFI problem with that SSD. Perhaps this thread will save you some headaches. Someone is recommending shutting off Rapid Boot in the setup, which would probably mean little with an SSD anyway.
    I read through this, and it looks like the problem was fixed in a BIOS update, which I plan to do. However, it also seems like Intel Rapid Start is not even worth it in the first place, as sleep consumes almost no power at all.
    W540: i7-4700mq, K2100m, 8 GB DDR3L, 512 GB SSD
    T510: i7-620m, NVS 3100m, 8 GB DDR3, 512 GB SSD

  • Thoroughly angry and frustrated. I've run out of room and need to make more to add more songs. Once and for all, how do I, if I even CAN, delete music from my iPod Nano WITHOUT losing them from iTunes?

    Thoroughly angry and frustrated. I've run out of room and need to make more to add more songs. Once and for all, how do I, if I even CAN, delete music from my iPod Nano WITHOUT losing them from iTunes?

    You should take the time to familiarize yourself with the documentation that is available.  You can "uncheck" a song in iTunes, and then do a manual update.
