Clusters with Compressor 2

Hi,
I'm trying to get a cluster set up to render some long files with Compressor & am not having too much success. I've read some of the discussions & would just like to know if it's really possible to get set up so it actually works properly.
Thanks for any assistance and/or tips!

Well, I seem to have everything set properly. Both machines have Qmaster installed & Sharing is turned on. Right now I have "Share this computer as" set to QuickCluster with services checked. I've also tried with the "Services and cluster controller" button checked, but nothing seems to happen. When I drag a file into the Compressor window, select my settings and click on "Submit", nothing happens. I opened Activity Monitor on both machines to see if there was any action, but nothing.
You mention "Once all of the nodes are sharing they should show up in the Qmaster Admin window" but where is that window? Is it the "Services" window where the Share buttons for Compressor and Rendering are checked?
Thanks for the reply - I'll keep on trying!
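A quick sanity check from Terminal on each machine (a minimal sketch, assuming a stock Qmaster install; other-mac.local is a placeholder for your second machine's Bonjour name):
# is the Qmaster daemon actually running on this machine?
ps aux | grep -i [q]masterd
# can the two machines see each other over the network?
ping -c 3 other-mac.local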

Similar Messages

  • Using clusters with compressor

    Anyone know where there is a guide or video demo showing how this is done, as I don't fully understand it.
    I have tried but it failed.
    I have an 8GB RAM, 8-core Mac with FCP7, and while doing an export I asked it to use clusters, gave it a name and hit go. Not a lot happened. Can someone give me an idiot's guide to make this happen, please?
    regs
    daz

    Yes indeed, thanks for this, although I have just been watching a rather good video of this on YouTube.
    They showed me step by step how to do it and monitor it, which is good, but two things here with my system...
    When I monitor Qmaster it shows the eight cores set, but only the top cluster core goes green; the remaining cores in the list below stay grey and do not show a capture status or go green, which makes me think it's only using one core somehow.
    In the batch monitor it shows lots of work going on, but not really faster. The processor says it's going at 90%, but that could be one processor going all out.
    Somewhere in my Mac is a core monitor which I have yet to find. If the status in the Apple Qmaster list went green and said capture, I would think it was working correctly, but they do not.
    Please help,
    Daz
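    A note on the core monitor Daz is after: Activity Monitor's floating CPU window (Window menu > CPU Usage) shows one bar per core. From Terminal, a rough equivalent (a sketch, not gospel) is to sort processes by CPU and check whether more than one compressord process is actually busy:
    top -o cpu
    # with a working cluster you should see several compressord processes
    # consuming CPU, not just a single one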

  • Render Benchmarks: GPU vs Quicktime X vs Compressor Quick Clusters vs Compressor distributed

    I've been using Final Cut since the early days when Apple acquired the original technology and began bundling apps to create Final Cut Studio. Along the way I have used Compressor, and now have the latest version installed. I don't use Compressor much because I've always felt its performance and interface lagged behind third-party apps, and now FCPX's GPU rendering.
    I've been producing wedding videos for a while.  Due to the long form nature of the ceremonies and the various formats I provide (web/h264, DVD, Blu-ray), I'm even more eager to optimize my rendering times.
    I have tried setting up QMaster clusters multiple times over the years but found they were characteristically unreliable and didn't want to spend a lot of time troubleshooting and nursing them.  With the addition of quick clusters in Compressor, I decided to give it a go again this past week. 
    I have 3 Macs on a gigabit LAN: 2008 Mac Pro 2.8 (8 threads), 2011 iMac i7 3.4 (8 threads), 2011 Mac Mini i5 (4 threads).  All have the same/latest version of ML, FCPX and Compressor.  The Mac Pro is my primary workstation: 24GB RAM, SSD boot drive, RAID0 Media drive for Projects and Events, ATI 5770 GPU.
    I have a 50 min 1080 30p timeline with some multicam clips, titles, multiple audio tracks, CC, NR, etc. After allowing FCPX to background render the entire timeline (original media/high quality playback), I choose the Share button and send a 3 min segment to a PR422 1080 30p Master destination.
    From FCPX shared to PR422 Master: 1:09
    Then the same segment:
    From FCPX timeline shared to the default Compressor 1080p Apple Devices (10Mb) destination (not opening Compressor): 14:52
    From FCPX timeline shared to Apple Devices 1080p Best Quality destination: 9:21
    From Finder using the 3Min PR422 Master Segment file and the "Encode Selected Files" Service at 1080p: 3:50
    From Quicktime X using the 3min PR422 Master segment file and 1080p Output format: 3:46
    From Quicktime 7 Pro using the 3min PR422 Master segment file and 1080p Output format: 10:44
    From Compressor using Apple Devices 1080p (10Mbps): 7:19
    Adobe Media Encoder CS6 using the 3Min PR422 Master segment file w/ H264 2-pass VBR 10 target / 12 max, 192Kb audio, 1080 30p: 6:54
    This segment was too short to get a reliable test using Compressor with my "This Computer+" distributed rendering but my tests showed that even with all available threads, the distributed rendering is much slower than GPU rendering or QTX encoding.  I have included screen composites here of the various output files, settings, CPU loads and utilized threads: https://www.dropbox.com/sh/6tgamkjs5z2r6zv/Zg6zcyn3Ya
    I realize this IS NOT scientific in that most of the tests have small variations in compression bit rates and 32 vs 64bit processes.  Regardless, I'm confident that for most circumstances, Compressor is slower than most other workflows.  I couldn't find a situation where I would want to use it. 
    Fastest hands-on workflow: Output a Master PR file then right click the file in Finder and encode to 1080p, about 5min total.
    As a side note: despite setting the quick clusters up by the book, I could never get the nodes to start rendering. Interesting, because I had no problems with "This Computer+" using the available nodes on my LAN.
    Observations? Recommendations?
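    (For anyone repeating these tests: wall-clock timings like the above can be reproduced from Terminal - a minimal sketch. avconvert ships with Mountain Lion, but preset names vary by OS release, so list them first with avconvert --help; the file names here are placeholders.)
    time avconvert --preset PresetAppleM4V1080pHD \
        --source master-3min.mov --output test-1080p.m4v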

  • Multiple Clusters with same computers

    Is there a way to set up multiple clusters with the same computers?
    For example
    Cluster A has computers 1, 2
    Cluster B has computers 1, 2, 3, 4, 5
    The reason behind this is that computers 1 and 2 are always available, but 3, 4, 5 are used during the day. I'd like to have a day cluster and a night cluster that I can just select as needed.

    Hi Jake, I was playing with this type of thing when FCS1 first came out using some old powerbooks.
    Yes, you can kind of do this using UNMANAGED SERVICES, though I stand to be corrected for COMPRESSOR3.app.
    In FCS2 / Compressor 3, simply put: if you use *MANAGED SERVICES*, with the service nodes made available through a defined, specific cluster (you would have used the Apple Qmaster utility.app), then I believe those service nodes are dedicated to that cluster.
    Sure, you can make "INSTANCES" of Compressor, for example, and dedicate the "renderer" instances to a particular cluster. Using this as an example... let's say...
    HOST1 = MACPRO8CORE with 4 instances of compressor (as C1, C2, C3, C4 via 4 virtual clusters) and 8 instances of renderer (R1, R2, ... R7 & R8).
    HOST2 = MACBOOKPRO with 2 instances of compressor (as C1 & C2 via 2 virtual clusters) and 2 instances of renderer (R1, R2).
    HOST3 = PowerbookG4 with 1 instance of compressor (as C1) and 1 instance of renderer (R1).
    Only as an example:
    ClusterA: Host1[C1, C2, C3, R1, R2, R4, R5], Host2[C2]
    ClusterB: Host1[C4, R3, R6, R7], Host2[C2, R1, R2], Host3[C1, R1]
    In fact I just tried it...
    However, as I thought, for managed services be assured that you cannot SHARE a service node between two or more clusters. I stand to be corrected.
    Messy, but it seems to work - though useless with the Powerbook, you'd agree.
    However, depending on your workflow and commercial needs (say, business priorities for a specific client), for best results use *UNMANAGED SERVICES*...
    Simply hook everything up over GbE, with all the usual tweaks such as "NEVER COPY SOURCE" and MOUNTING all source targets and compressor work files over HFS on the GbE subnet (and many other tweaks), and treat the setup as a huge bucket.
    Use priority on the batch submission. It's not too smart, but at least you have some manipulation over the queues.
    The resource manager in QMASTER is not so smart, so despite a service node going idle, it does not seem to redirect work from one node to another for load balancing.
    I have only tried this with SEGMENTED TRANSCODING.
    Rendering (Shake) works great and seems simpler. My time is with multipass segmented transcoding, where I want H.264 from DVCPROHD and don't want to wait all day for it, especially where I have tweaked the timing in Compressor a bit.
    Try it out.
    BTW, as many attest on this forum, QMASTER/COMPRESSOR can be a bugger to fix if it plays up, and it has for me as well.
    post your results.
    HTH.
    w

  • How to share a job with Compressor 4.1?

    Can anyone explain how to set up Compressor on two or more computers to share a coding job? I was never successful, neither with the old versions nor the new one. I have connected two computers running Mavericks via Ethernet. They appear in the preferences list of Compressor as inactive and can be selected (with a tick). Starting a job produces no error. Only the little network monitor window shows some activity: "inactive" (in white or yellow), sometimes: "not found" (in red). The computer which sends the job waits endlessly.
    I deactivated the firewall, connected the computer with DHCP or fixed IP but no success. What else do I have to do?

    Hi Steffen, hats off to you for gathering this valuable information!  I'm going to title this post:
    Setup Distributed Node Processing for Distributed Segmented MULTIPASS Transcoding in Compressor.app V4.1 (2013 version)
    Summary:
    A quick look at those logs of yours: Qmaster is having trouble accessing its cluster storage, and probably your transcode source and target elements too.
    This is a bit of a giveaway - looks like the part-time helpers at Apple didn't look at it hard enough:
    msg="CSwampService::startupServer: servicecontroller:com.apple.stomp.transcoderx couldn't advertise server, error = error: DNSServiceRegister failed: CRendezvousPublisher::_publish"/>
    <mrk tms="412112937.953" tmt="01/22/2014 20:48:57.953" pid="1975" kind="begin" what="log-session"/>
    <mrk tms="412113195.964" tmt="01/22/2014 20:53:15.964" pid="1975" kind="begin" what="service-request" req-id="D6BAF26C-DD43-4F29-BD72-81BC9CF25753:1" msg="Processing."></mrk>
    <log tms="412113209.037" tmt="01/22/2014 20:53:29.037" pid="1975" msg="Shared storage mount failed: exception = CNFSSharedStorage::_subscribe: command [/sbin/mount 127.0.0.53:/Users/steffen/Library/Application Support/Compressor/Storage/21D262F0-BF7EC314/shared] failed, error = 61"/>
    Let’s look at this and then propose a clean method of establishing and consolidating your cluster.
    Simply the Bonjour service is having a hard time trying to find you and also qmaster been running ragged trying to mount your Cluster 21D262F0-BF7EC314 storage.
    Let's fix it.
    Basics for the above with Compressor v4.1 and Qmaster:
    Much has been abstracted from the user to ease implementation and use. This is good, me thinks!
    - avoid ticking every option that is available on each host; such facilities aggravate and add to the complexity of your workflow environment
    - isolate the network subnets to be used for all host access, file system paths, communication, diagnosis, monitoring and, finally, data transfer (see later)
    - use source objects that will develop segments to be distributed for processing. A 3 minute clip generally won't segment enough to cause distribution.
    - review any workflow gains from distributed transcoding: a slow node holds up the process, and assembling the QT segments takes additional time. A cluster dedicated to an iMac or Mac Pro can often be faster. (Have several clusters defined and submit accordingly - long, medium and short!)
    - all elements/objects used in the source, and any target folders, SHOULD (not must) be mounted and accessible by each Qmaster node. You can use sym links, I recall. For reasons of efficiency and ease of diagnosis.
    So... I'd propose you try and test your setup as follows.
    Try this and start from the beginning. Do your best to perform these work instructions, and try not to deviate if you can.
    Simple Architecture Approach:
    Your main MacBook Pro or main work mac (referred to by you as "master") shall be designated the qmasterd controller that services batch submissions AND also provides transcode services.
    The other macs (the service or "slave" nodes) will ONLY perform transcoding services and will NOT accept batch submissions. The slave/service nodes will not be able to send their own jobs to your master controller for transcoding, for example.
    Keep it simple, and please follow these steps.
    Step 1: Quiesce your clusters and Qmaster
    In Compressor.app v4.1 / Preferences / Shared Computers, on all hosts (both your macs), stop/disable automatic file sharing - tick it OFF (it causes the issue you have). More later.
    In Compressor.app v4.1 / Preferences / My Computer, on all hosts (both your macs), stop allowing others to add batches to your host. Slide to OFF.
    On all hosts, quit or force-quit Compressor.app v4.1.
    On all hosts (macs), use Activity Monitor.app or the unix ps command to Force Quit (or kill) any currently running qmasterd task, and any compressord tasks if you can.
    On all hosts, purge | clean out | delete the Qmaster and Compressor structures. This is documented by several of us on this forum, but fundamentally you want to preserve your settings and destination templates and delete the rest. Do these sub-steps on all hosts where you intend to deploy Compressor/Qmaster for your distributed transcode processing:
    a. Navigate to /Users/Shared/Library/Application Support/ and delete the Compressor folder if it exists. By all means use the OS X Finder to put it in the trash, or just use the faithful unix rm command to delete it immediately, bypassing the Trash: rm -rf "/Users/Shared/Library/Application Support/Compressor" (note the quotes - the path contains spaces)
    b. Navigate to your home directory ~/Library/Application Support/Compressor and move or copy any personalised values to your desktop so we can reinstate them later. Copy these two folders if they exist:
    Settings
    Layouts
    Also copy/move any customised destination templates that you used. These are files ending in ".template".
    Now, using the Finder or the unix command line, delete your ~/Library/Application Support/Compressor folder and all its objects: rm -rf ~/"Library/Application Support/Compressor"
    c. Dismount (⌘+E, or drag into the trash) any file systems you have manually or automatically shared between your hosts (your two macs). Turn off any auto-mounts you may have set up in login items for Qmaster and for your source and target libraries.
    d. After you have done steps 1a - 1c on all your hosts,
    then RESTART
    and log back into your hosts,
    and attempt to empty the trash on each.
    e. Check Activity Monitor and confirm there are no compressord sub-tasks running. Qmasterd might be running; that's OK.
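    (A scripted version of the purge in sub-steps 1a and 1b, to run on each host - a minimal sketch; note the quoting, since the paths contain spaces, and skip the cp lines if the folders don't exist:)
    # keep personal Settings and Layouts somewhere safe first
    mkdir -p ~/Desktop/compressor-backup
    cp -R ~/"Library/Application Support/Compressor/Settings" ~/Desktop/compressor-backup/
    cp -R ~/"Library/Application Support/Compressor/Layouts" ~/Desktop/compressor-backup/
    # then purge both Compressor state folders
    rm -rf "/Users/Shared/Library/Application Support/Compressor"
    rm -rf ~/"Library/Application Support/Compressor"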
    Step 2: set up your dedicated network for your transcoding cluster... important!
    In this step you will isolate your cluster network to a dedicated subnet. BTW, no DNS is needed unless you get exotic with many nodes!
    You will:
    - use the mac's Wifi network as your primary network for NON-TRANSCODING work such as email, internet, iChat, iCloud and Bonjour (.local). I'm assuming you have this in place.
    - use the Ethernet on your macs as your dedicated Qmaster cluster subnet.
    For this procedure we will make a 1.1.1.x IP address range subnet and manually assign the IP addresses to each host. Of course you could use a smart DHCP router if you have one, or a simple switch plus OS X Server.app on 10.9 (HK$190, €19) on your MacBook Pro... the latter is for another time.
    a) using System Preferences/Network on your controller mac ("master"), configure the built-in Ethernet to a manual IP address of 1.1.1.1. Yes, yes, DHCP would be nice if we also had a DNS for this subnet to identify each host (machine name); however, we don't. Leave the subnet mask at the default 255.255.255.0, and the router at 1.1.1.1... it's not escaping anywhere! (A very private LAN for your cluster!)
    b) repeat step 2a on the other macs - the "slave" / service-node-only macs - setting their built-in Ethernet to 1.1.1.2, 1.1.1.3 and so on
    c) connect these hosts' (macs') Ethernet ports together via a dedicated hub / zoned switch, or, if there are only two macs, just a cat5/cat6 Ethernet cable
    d) on each host (mac), using System Preferences/Network, check for a green light against the built-in Ethernet
    e) on each host (mac), use System Preferences/Network to make a new network configuration, so that you can fall back in case of error:
    - edit the Location listbox and DUPLICATE the current location
    - select and overtype the name, change it to "qmaster work" or some name you like, then save and close the dialogue
    - back in System Preferences/Network, select your new location "qmaster work" from the Location listbox and click "Apply"
    - now click the gear-wheel icon at lower left and reorder the network interfaces so that Wifi is top (first), followed by built-in Ethernet
    - click "Apply"
    - confirm Ethernet still shows green status
    Do this on each host (mac)... the slave/service nodes too.
    f) on each host (mac), verify that you can reach each mac over your new subnet. There are many ways to do it, but do it simply via /Applications/Utilities/Terminal.app.
    From mac #1, whose IP address is 1.1.1.1,
    enter:
    traceroute 1.1.1.2 - press return, and one line should come back.
    ping 1.1.1.2 - continuous lines appear with packet and time-in-ms statistics. Watch 3-4, then use control+C to stop.
    Do the same to the other nodes you may have, such as 1.1.1.3, etc.
    Repeat the above from the other hosts. For example, from one of the service (slave) macs, say 1.1.1.2,
    test the network path back to your main mac 1.1.1.1: using Terminal.app from that slave,
    enter:
    traceroute 1.1.1.1 - press return, and one line should come back.
    ping 1.1.1.1 - continuous lines appear with packet and time-in-ms statistics. Watch 3-4 lines, then use control+C to stop.
    At this point you should have a solid network path between your hosts over Ethernet on the subnet 1.1.1.x
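    (If you prefer scripting steps 2a and 2b from Terminal, a sketch - the service may be called "Ethernet" or "Built-in Ethernet" depending on the mac; networksetup -listallnetworkservices will tell you:)
    # on the master: manual IP, subnet mask and router as above
    sudo networksetup -setmanual "Ethernet" 1.1.1.1 255.255.255.0 1.1.1.1
    # on each slave, the same with 1.1.1.2, 1.1.1.3, ...
    sudo networksetup -setmanual "Ethernet" 1.1.1.2 255.255.255.0 1.1.1.1
    # then verify the path, as above
    ping -c 4 1.1.1.1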
    Step 3: mount, over the Ethernet 1.1.1.x subnet, all filesystems that are to be used for the transcoding source (input | read) and target (output | to be written).
    Simplicity is important at this stage, to make sure you know what is being accessed. This is one reason for disabling all the automatic Compressor settings.
    You will use the Finder's "Connect to Server" (⌘+K) from each slave (service) node to access the source and target filesystems on your master mac for the transcoding.
    These can be saved as favourites in the "Connect to Server" dialogue.
    Do this:
    A) locate the volumes/filesystems and folders on your master mac where your source objects are contained. Do the same for where the final distribution transcode is to be written, with your user access ("steffen").
    B) on each slave mac, use the Finder's "Connect to Server" dialogue to MOUNT those folders as network volumes on your slave macs:
    - mount the source folder: Finder / Go / Connect to Server, or ⌘+K
    enter afp://steffen@1.1.1.1/Users/steffen/Movies/my-fcpx-masters (choose your source directory path)
    click Connect, give the password, and use the "+" sign to save it as a favourite
    - mount the target folder: Finder / Go / Connect to Server, or ⌘+K
    enter afp://steffen@1.1.1.1/Users/steffen/Movies/my-fcpx-transcodes (choose your target directory path). Click Connect, give the password, and use the "+" sign to save it as a favourite
    Do this for all your slave macs. Remember you are specifying file paths over the 1.1.1.x subnet.
    Obviously, make sure your slaves have read and write access. Yes, you could also mark these folders as Shared in the Finder's info panel so everyone can see them... your choice.
    C) verify your access: on each slave, use the Finder to create a test folder in those recently mounted volumes, then delete the test folder.
    Now all your network and workflow folders are mounted and accessible by your slave hosts and their user.
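    (The same mounts can be made from Terminal on each slave - a sketch assuming the master's folders are shared over AFP under the share names from step 3B; mount_afp ships with OS X, and PASSWORD is a placeholder:)
    # mount the master's source folder on a slave
    mkdir -p /Volumes/my-fcpx-masters
    mount_afp "afp://steffen:PASSWORD@1.1.1.1/my-fcpx-masters" /Volumes/my-fcpx-masters
    # confirm the volume really is mounted over the 1.1.1.x subnet
    mount | grep 1.1.1.1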
    Step 4: set up Compressor v4.1 and Qmaster
    Care is initially required here NOT to click needlessly on options.
    Recall that you purged most of the Compressor.app v4.1 state information in step 1?
    Compressor.app v4.1 will appear new when you start it.
    On the master mac 1.1.1.1, launch Compressor.app v4.1
    open Compressor v4.1 preferences (command+,)
    using Compressor.app v4.1 preferences:
    My Computer tab: turn OFF "Allow others to process on this computer"
    Shared Computers tab: UNTICK (disable) "Automatically share files"
    Advanced tab: in the "When Sharing My Computer" listbox, select Ethernet (the 1.1.1.x interface) as the preferred network interface. Don't use all interfaces - trouble!
    Do not specify additional instances yet! Less to troubleshoot.
    On each slave mac 1.1.1.2 - 1.1.1.x:
    launch Compressor.app v4.1
    open Compressor v4.1 preferences (command+,)
    using Compressor.app preferences:
    My Computer tab: turn ON (yes, ON) "Allow others to process on this computer", so others can process their stuff on this slave
    Shared Computers tab: UNTICK (disable) "Automatically share files"
    Advanced tab: in the "When Sharing My Computer" listbox, select Ethernet (the 1.1.1.x interface) as the preferred network interface. Don't use all interfaces - trouble!
    Do not specify additional instances yet! Less to troubleshoot!
    On the master mac, 1.1.1.1:
    using Compressor.app v4.1, select Destinations and add a new CUSTOM destination, navigating the dialogue to the target folder you specified in step 3B (~/Movies/my-fcpx-transcodes, as an example).
    Use this custom destination on the batch later.
    In Compressor.app v4.1 Preferences / Shared Computers, click the plus "+" sign in the bottom left corner to create a new cluster called "unnamed".
    - click in the name and change it to "Steffenscluster" (I'm not connected to my network as I write this...)
    - tick the slaves 1.1.1.2 to 1.1.1.x to add them to your new cluster... I assume these appear in a list on the right.
    Your cluster "Steffenscluster" is now active!
    Careful, careful! One more thing to do. You SHOULD make the cluster storage of master 1.1.1.1 available to all your slaves. This is important for this test!!
    On the master 1.1.1.1, use the Finder to verify that you have these directories, built by Compressor.app v4.1:
    /Users/Shared/Library/Application Support/Compressor
    and, in your own home directory: ~/Library/Application Support/Compressor
    Dig deeper for the /Storage folder in each to see the hexadecimal-named folder that represents this cluster "Steffenscluster"!
    These should be manually mounted on each slave. Recall we DISABLED automatic file sharing.
    On each slave mac 1.1.1.2 - 1.1.1.x, mount the master's cluster storage file systems. Do this to verify access from each cluster slave.
    On each slave mac, use the Finder's "Connect to Server" dialogue to MOUNT those folders as network volumes on your slave macs:
    - mount the shared Qmaster cluster storage folder:
    use Finder / Go / Connect to Server, or ⌘+K
    enter afp://steffen@1.1.1.1/Users/Shared/Library/Application Support/Compressor/Storage
    click Connect, give the password, and use the "+" sign to save it as a favourite
    - mount the user's Qmaster cluster storage folder:
    use Finder / Go / Connect to Server, or ⌘+K
    enter afp://steffen@1.1.1.1/Users/steffen/Library/Application Support/Compressor/Storage
    click Connect, give the password, and use the "+" sign to save it as a favourite
    Thus you should have 4 new network volumes (file systems) mounted on each slave mac over your dedicated 1.1.1.x subnet!
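    (A quick check from Terminal on each slave that all four really took:)
    mount | grep 1.1.1.1
    # expect the source and target folders plus the two cluster Storage volumes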
    Step 5: submit a job
    On the master mac 1.1.1.1, launch the new Compressor.app v4.1 "Network Encoder Monitor" with ⌘+E - new in Compressor.app v4.1.
    You should see all your nodes and all cluster service points for your master and slaves. (Here you'd see just one - this MacBook Air!)
    On each host (mac), launch Activity Monitor.app and filter on compressord... there they are, on each host!
    Nearly there.  
    On the mac that's the master 1.1.1.1 (controller),
    submit a job:
    Use the Finder to move some footage that runs longer than 15 mins, for example, into your source directory folder from step 3B (e.g. ~/Movies/my-fcpx-masters)
    In Compressor.app v4.1, add it to the batch with ⌘+I
    Drag your custom destination onto the batch
    Set your encoding setting (Apple Devices Best)
    Open the inspector in Compressor.app, select "Video" and make sure MULTIPASS is checked. Then change the frame controls at your leisure. Better means slower.
    Select the General tab and make sure JOB SEGMENTING is ticked!
    Now cross your fingers and submit it (or ⌘+B)
    Step 6: monitoring the workflow progress
    In Compressor.app v4.1, use the Active tab to watch the progress.
    Open the disclosure triangle and see the segments.
    Unfortunately, you can't really see which node is processing. (No more Share Monitor... BTW, for those who care, it's buried now in /Applications/Compressor.app/Contents/PlugIns/Compressor/CompressorKit.bundle/Contents/EmbeddedApps/Share Monitor.app/Contents/MacOS/Share Monitor)
    Look at the Network Encoder Monitor (⌘+E) to see the instances processing your work.
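    (And from Terminal on any host, a one-shot top sample shows whether a node is actually chewing on segments - a sketch:)
    top -l 1 -o cpu | grep -i compressord
    # several busy compressord processes = this node is transcoding segments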
    Lots of small and over-detailed steps here, Steffen, but it's worth working through.
    Simply, all these things need to be available to get your cluster to work EVERY TIME.
    I might update this and post a more detailed dialogue/transcript on my blog, and post it here.
    Epilogue:
    I, for one, rather like the new Compressor.app v4.1. Qmaster is buried, and works well when not teased or unintentionally fooled.
    I would like the ability to:
    - specify the location of the Qmaster cluster storage, rather than have it in the root system's file system - I used to have it on my disk array
    - have Compressor be AppleScriptable
    Post your results for others to see.
    Warwick
    Hong Kong

  • How do I create an MPG2 with Compressor that is neither 16:9 nor 4:3?

    I have a quicktime movie, exported from FCP which is 1280X800. I need to create an MPG2 also at 1280X800.
    How do I do this?
    Compressor only seems to create a 16:9 or 4:3 MPG2 regardless of the original source file.
    Thanks.

    A program Stream.
    It's for a BrightSign video player (HD210) http://www.brightsign.biz/products/hd210.php
    The Program Stream MPG2s that I create with Compressor work fine with the player but I just can't find a way of changing the dimensions to 1280X800.
    Thanks.
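    If Compressor won't budge from the broadcast frame sizes, one workaround outside Compressor is the free ffmpeg tool, which will write an MPEG-2 program stream at arbitrary dimensions (a hedged sketch - the bit rates are placeholders, and BrightSign playback should be tested):
    ffmpeg -i source-1280x800.mov \
        -c:v mpeg2video -b:v 12M \
        -c:a mp2 -b:a 224k \
        -f vob output-1280x800.mpg
    # -f vob wraps the video and audio in an MPEG-2 program stream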

  • SAP ECC 6.0 installation in windows 2008 clustering with db2 ERROR DB21524E

    Dear Sir,
    I am installing SAP ECC 6.0 on Windows 2008 clustering with DB2.
    I got one error in the phase "Configure the database for MSCS". The error is: DB21524E 'FAILED TO CREATE THE RESOURCE DB2 IP PRD' - THE CLUSTER NETWORK WAS NOT FOUND.
    DB2_INSTANCE=DB2PRD
    DB2_LOGON_USERNAME=iil\db2prd
    DB2_LOGON_PASSWORD=XXXX
    CLUSTER_NAME=mscs
    GROUP_NAME=DB2 PRD Group
    DB2_NODE=0
    IP_NAME = DB2 IP PRD
    IP_ADDRESS=192.168.16.27
    IP_SUBNET=255.255.0.0
    IP_NETWORK=public
    NETNAME_NAME=DB2 NetName PRD
    NETNAME_VALUE=dbgrp
    NETNAME_DEPENDENCY=DB2 IP PRD
    DISK_NAME=Disk M::
    TARGET_DRVMAP_DISK=Disk M
    Please help me, since I am already running late with this installation, to get the db2mscs utility to create the resource.
    Best regards,
    Manjunath G

    Hello Manjunath.
    This looks like a configuration problem.
    Please check if IP_NETWORK is set to the name of your network adapter and
    if your IP_ADDRESS and IP_SUBNET are set to the correct values.
    Note:
    - IP_ADDRESS is a new IP address that is not used by any machine in the network.
    - IP_NETWORK is optional
    If you still get the same error debug your db2mscs.exe-call:
    See the answer from  Adam Wilson:
    Can you run the following and check the output:
    db2mscs -f <path>\db2mscs.cfg -d <path>\debug.txt
    I suspect you may see the following error in the debug.txt:
    Create_IP_Resource fnc_errcode 5045
    If you see fnc_errcode 5045:
    Error 5045 is a Windows error that means ERROR_CLUSTER_NETWORK_NOT_FOUND. It occurs because Windows couldn't find the "public network" indicated by IP_NETWORK.
    That is, Windows couldn't find an MSCS network called "public network". The IP_NETWORK parameter must be set to an MSCS network, so run the Cluster Admin GUI and expand Cluster Configuration->Networks to view all available MSCS networks and check whether "public network" is one of them.
    However, the IP_NETWORK parameter is optional and can be commented out. In that case, the first MSCS network detected by the system is used.
    Best regards,
    Hinnerk Gildhoff

  • Ingested videos not playing in stereo with Quicktime or after exporting with compressor

    Why does it seem like every time I try to do something just mildly complex I'm stymied by some bug or problem with the software?
    Here's the deal:
    0) I have footage that was shot with a stereo camcorder, a JVC GR-DV3000 (circa 2003), with built-in stereo mics.
    1) I ingest the footage from the Mini DV tape and save it to a camera archive using Final Cut Pro X (latest version).
    2) I use Final Cut Pro to pull the events out of archive.
    3) When the events are played from within Final Cut Pro X, I hear stereo, and the audio meters indicate the audio is in stereo (left and right channels have different volume levels).
    4) When the events are played with the open source software, VLC, I hear stereo, too.
    5) When I export the files for playing on an Apple device using Final Cut Pro, the audio is in stereo as you would expect.
    BUT:
    5) When I play the event clips with QuickTime from my hard drive after step 2 above, I get no audio from the right channel. Weird, right?
    But most annoying of all:
    6) When I use Compressor 4 (instead of Final Cut) to output the event clips to "SD for Apple Devices" using the default audio settings, I get no audio from the right channel, either.
    7) I've got hundreds, if not thousands, of clips. I'm not about to save each clip manually with FCPX. I need to batch convert these files with Compressor.
    8) I've tried various combinations of the audio import settings ("Analyze and fix audio problems," "Separate mono and group stereo audio," and "Remove silent channels"). These options have not solved the issue for me. The clips still play in mono in QuickTime.
    When I look at the properties of the file in QuickTime, here's what it shows:
    DV/DVCPRO - NTSC, 720 x 480 (640 x 480)
    Linear PCM, 16 bit little-endian signed integer, 32000 Hz, Left
    Linear PCM, 16 bit little-endian signed integer, 32000 Hz, Unused
    Linear PCM, 16 bit little-endian signed integer, 32000 Hz, Unused
    Linear PCM, 16 bit little-endian signed integer, 32000 Hz, Unused
    Why it's telling me the other 3 channels are unused, I have no idea. As I mentioned, the footage was shot in stereo and even Final Cut plays it in stereo.
    Here's what VLC says about the very same file:
    Stream 0:
    Type: Audio
    Codec:PCM S16 LE (sowt)
    Channels: Mono
    Sample Rate: 32000 Hz
    Bits per sample: 16
    Stream 1:
    Type: Audio
    Codec:PCM S16 LE (sowt)
    Channels: 1
    Sample Rate: 32000 Hz
    Bitrate: 512 kb/s
    Stream 2:
    Type: Audio
    Codec:PCM S16 LE (sowt)
    Channels: 1
    Sample Rate: 32000 Hz
    Bitrate: 512 kb/s
    Stream 3:
    Type: Audio
    Codec:PCM S16 LE (sowt)
    Channels: 1
    Sample Rate: 32000 Hz
    Bitrate: 512 kb/s
    Stream 4:
    Type: Video
    Codec: DV Video (dvc)
    Resolution: 720 x 480
    Frame Rate: 29.970030
    Decoded Format: Planar 4:1:1 YUV
    If anyone can point me in the right direction, I'd much appreciate it.
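    In case it helps anyone reproduce this: the per-track channel layout can also be dumped from Terminal with the afinfo tool that ships with OS X (the file name is a placeholder):
    afinfo clip.mov
    # prints each audio track's format, sample rate and channel layout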

    I ended up just using iMovie to capture video from the camera. The clips now play in stereo in QuickTime perfectly.
    It's interesting to note that iMovie saves the clips as .dv file types, not .mov file types like FCPX does.
    And it turns out the iMovie capture is *much* smoother. It actually "just works." Ingesting tapes with FCPX, I had to cross my fingers, spin on my head and pray three times to a statue of Steve Jobs before it would work (and even then it would do weird stuff).
    The only downside of using iMovie is that it doesn't have the camera archive feature.
    I really don't understand why, with $120 billion in the bank, Apple seems to be churning out software as an afterthought. No, software is not where they make their money, but they are ruining their reputation. Ah, well, first world problems.

  • Problem with compressor 2.01

    Hello everyone.
    I have a big problem with Compressor 2.01 and Final Cut Pro 5.04.
    When I try to export a movie from Final Cut to Compressor, both applications crash.
    Everything works fine when I export the movie with QuickTime.
    This is the first time this has happened to me since I started using Final Cut.
    The second problem is that when I try to reinstall Compressor from the original CD, Compressor is grey and not black.
    I've already tried every possibility, including the DiskWarrior application, without success.
    Please help me. Thank you

    You need to update to Compressor 2.3.1 for Leopard compatibility.
    Use Software Update, under the Apple menu item at top left of your screen, or look here: http://www.apple.com/finalcutstudio/finalcutpro/download/

  • Exporting data clusters with type version

    Hi all,
    let's assume we are saving some ABAP data as a cluster to the database using the EXPORT ... TO DATABASE functionality, e.g.
    EXPORT VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    Some days later, the data can be imported:
    IMPORT VBAK TO LS_VBAK VBAP TO LT_VBAP FROM DATABASE INDX(QT) ID 'TEST'.
    Some months or years later, however, the IMPORT may crash: Since it is the most normal thing in the world that ABAP types are extended, some new fields may have been added to the structures VBAP or VBAK in the meantime.
    The data are not lost, however: Using method CL_ABAP_EXPIMP_UTILITIES=>DBUF_IMPORT_CREATE_DATA, they can be recovered from an XSTRING. This will create data objects apt to the content of the buffer. But the component names are lost - they get auto-generated names like COMP00001, COMP00002 etc., replacing the original names MANDT, VBELN, etc.
    So a natural question is how to save the type info ( = metadata) for the extracted data together with the data themselves:
    EXPORT TYPES FROM LT_TYPES VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    The table LT_TYPES should contain the meta type info for all exported data. For structures, this could be a DDFIELDS-like table containing the component information. For tables, additionally the table kind, key uniqueness and key components should be saved.
    Actually, LT_TYPES should contain persistent versions of CL_ABAP_STRUCTDESCR, CL_ABAP_TABLEDESCR, etc. But it seems there is no serialization provided for the RTTI type info classes.
    (In an optimized version, the type info could be stored in a separate cluster, and being referenced by a version number only in the data cluster, for efficiency).
    In the import step, the LT_TYPES could be imported first, and then instances for these historical data types could be created as containers for the real data import (here, I am inventing a class zcl_abap_expimp_utilities):
    IMPORT TYPES TO LT_TYPES FROM DATABASE INDX(QT) ID 'TEST'.
    DATA(LO_TYPES) = ZCL_ABAP_EXPIMP_UTILITIES=>CREATE_TYPE_INFOS( LT_TYPES ).
    assign lo_types->data_object('VBAK')->* to <LS_VBAK>.
    assign lo_types->data_object('VBAP')->* to <LT_VBAP>.
    IMPORT VBAK TO <LS_VBAK> VBAP TO <LT_VBAP> FROM DATABASE INDX(QT) ID 'TEST'.
    Now the data can be recovered with their historical types (i.e. the types they had when the export statement was performed) and processed further.
    For example, structures and table-lines could be mixed into the current versions using MOVE-CORRESPONDING, and so on.
    My question: Is there any support from the standard for this functionality: Exporting data clusters with type version?
    Regards,
    Rüdiger

    The IMPORT statement works fine if the target internal table has all the fields of the source internal table, plus some additional fields at the end - something like an append structure of VBAK.
    Here is the snippet used.
    TYPES:
      BEGIN OF ty,
        a TYPE i,
      END OF ty,
      BEGIN OF ty2.
        INCLUDE TYPE ty.
    TYPES:
      b TYPE i,
      END OF ty2.
    DATA: lt1 TYPE TABLE OF ty,
          ls TYPE ty,
          lt2 TYPE TABLE OF ty2.
    ls-a = 2. APPEND ls TO lt1.
    ls-a = 4. APPEND ls TO lt1.
    EXPORT table = lt1 TO MEMORY ID 'ZTEST'.
    IMPORT table = lt2 FROM MEMORY ID 'ZTEST'.
    I guess the IMPORT statement would behave fine if the current VBAK has more fields than the older VBAK.

  • Problem with clustering with JBoss server

    Hi,
    Its a HUMBLE REQUEST TO THE EXPERIENCED persons.
    I am new to clustering. My objective is to attain clustering with load balancing and/or failover in JBoss server. I have two JBoss servers running on two different IP addresses, which form my cluster. I could successfully perform farm (all/farm) deployment in my cluster.
    I believe that if clustering is enabled and one of the servers (s1) goes down, then the other (s2) will serve the requests coming to s1. Am I correct? Or is that true only in the case of "failover clustering"? If it is correct, what are all the things I have to do to achieve it?
    As I am new to the topic, can anyone explain how a simple application (say, getting a value from a user and storing it in the database - assume everything is in a WAR file) can be deployed with load balancing and failover support, rather than going into clustering EJB or anything else difficult to understand?
    Kindly help me in this matter. At least give me some hints and I'll learn from them, because I couldn't find a step-by-step procedure explaining which configuration files have to be changed (and how) to achieve this. I also couldn't find books explaining this beyond the usual theoretical concepts.
    Thanking you in advance
    with respect
    abhirami

    Hi,
    In this scenario you can use a load balancer instead of failover clustering.
    I would suggest you create an Apache proxy to redirect requests across your JBoss instances.
    Rgds
    kathir
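    (To make kathir's suggestion concrete, a minimal sketch of an Apache reverse proxy balancing two JBoss instances - Debian-style commands, and the IPs and ports are examples, not your real ones:)
    # enable the Apache proxy/balancer modules
    sudo a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests
    # balancer definition (goes in /etc/apache2/conf-available/jboss-balancer.conf):
    #   <Proxy balancer://jbosscluster>
    #       BalancerMember http://192.168.1.11:8080
    #       BalancerMember http://192.168.1.12:8080
    #   </Proxy>
    #   ProxyPass        / balancer://jbosscluster/
    #   ProxyPassReverse / balancer://jbosscluster/
    sudo a2enconf jboss-balancer
    sudo service apache2 restart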

  • Quality Issues with Compressor 2.0.1

    Has anyone had any contact with Apple engineers about the problems folks are having with macro-blocking artifacts with Compressor since 2.x appeared?
    Is Apple aware of this serious step-back in reliability and quality?
    We too are experiencing this and it is very troublesome. Really takes the fun out of doing this type of work.
    I would like hear from those who have never had the problem or who had the problem and resolved without reverting to older versions.
    I think the one common factor among those experiencing the problem is that we are all working in DV25 (NTSC or PAL).
    Comments?
    thanks,
    MACK

    Jamie, here is a link to a more detailed discussion on the problem with Compressor's (v2) 2-pass MPEG2 VBR encodings.
    Waymen, "MPEG-2 conversion issues" #8, 06:15pm Sep 13, 2005 CDT
    The problem is somewhat random and can be hard to reproduce or detect. However, since it usually (always?) occurs in sequences that contain at least some difficult to compress material it's possible that very clean video sources will suffer less from this defect (since they won't be "wasting" bits to encode noise or image shortcomings that were already present in the original source).
    Some users have complained about poor fades to black, but probably the worst example I've seen was a slow cross-fade between two clips that both contained a fair amount of motion. In the latter case, Compressor 1.2.1 handled the transition much better even when using the same bit rate and quality settings. However, when Compressor 2 fails the image degenerates into some fairly large, very low detail compression blocks for maybe one or two frames. Then it recovers, everything will look fine until you see another random occurrence of the problem. You may only see two bad frames out of several hundred, but when you see it you know immediately that something is wrong.
    I doubt that this is a configuration problem or something that is hardware specific since I've been able to duplicate the exact same failure on two systems that had absolutely nothing in common as far as installation or software setup. The only commonality was the source video clip (which looks fine) and Compressor 2. However, this same clip looks fine when encoded with Compressor 1.2.1.
    Note, also, that I'm using only Compressor 2 presets on Compressor 2, this is not the problem caused by using "old" presets imported from Compressor 1.X.
    Anyway, I've already sent Apple a bug report, so there is little that I can do at this point other than to hope for a fix from Apple. That and restricting myself to one-pass encodings (as discussed in my earlier post).

  • Can I Encode to These Settings with Compressor??

    Greetings,
    I need some help from you good people.
    I have been tasked with preparing our video program for an SD Television Broadcast.
    In my many years of using Final Cut, I have never had to export for an official SD Broadcast.  So I declare up front that this is a new world for me from web and DVD creation. 
    But I seem to be having a hard time matching all of these settings using an MPEG-2 Transport Stream in Compressor.  So I want to know if I can use Compressor to match these settings or will I miss some of what the station wants?
    I've tracked down most of the major settings in Compressor.  I highlighted below in bold and red what I am having trouble with.
    I will also add that I was trying to work with the GOP I- and P-frame settings, but do not see a choice for selecting 1 frame for each.
    Most of my other issues are with the audio settings. I went to "Extras" and selected Multiplex, but don't have any other control over the audio.
    So any help would be much appreciated. Thanks!
    These are the requirements sent from the station:
    (BOLD AND RED are what I need help with)
    VIDEO:
    MPG TYPE: MPG-2
    STREAM TYPE: PROGRAM (VIDEO + AUDIO)
    VIDEO RESOLUTION: 720x480
    FRAME RATE: 29.97 (NON-DROP) 4:3 DISPLAY
    FIELD ENCODING: TOP FIELD FIRST
    DEINTERLACING: USE TOP FIELD
    BITRATE TYPE: VARIABLE (If they say a variable type, why would they state a Constant Bit Rate below??)
    RATE CONTROL MODE: MODE 1
    CBR BITRATE: 8000Kbps
    GOP STRUCTURE:
    I FRAMES: 1
    P FRAMES:  1
    PROFILE ID: 4:2:2
    LEVEL ID: MAIN LEVEL
    AUDIO: MPEG 1
    AUDIO MODE: MPEG LAYER 2
    AUDIO FREQUENCY: 48000 HZ STEREO
    BITRATE: 256 KBPS* VARIABLE BITRATE EMBEDDED
    MODE: STEREO
    NO DE-EMPHASIS
    MULTIPLEXING TYPE: MPG-2
    Thanks in advance!
    -Al

    Disclosure: I haven't prepared a show for SD Broadcast either, but have been working with Compressor/DVDStudio Pro for a decade.
    Looking at the specs, there are only a couple of unclear areas...  the big thing is, they're asking for a Program Stream rather than Elementary Streams.
    I looked into that codec and it seems capable of handling the task once you clear up the VBR/CBR and interlace/de-interlace questions.
    Good luck!
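    As a cross-check outside Compressor, here is roughly how the station's spec maps onto the free ffmpeg tool's MPEG-2 encoder (a hedged sketch, not a tested broadcast deliverable - in particular the GOP and 4:2:2 settings should be verified with the station):
    ffmpeg -i program.mov \
        -s 720x480 -r 30000/1001 -aspect 4:3 \
        -c:v mpeg2video -pix_fmt yuv422p \
        -b:v 8000k -minrate 8000k -maxrate 8000k -bufsize 1835k \
        -g 2 -bf 0 \
        -flags +ildct+ilme -top 1 \
        -c:a mp2 -b:a 256k -ar 48000 -ac 2 \
        -f vob broadcast.mpg
    # -g 2 -bf 0 yields a GOP of one I and one P frame; -top 1 = top field first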

  • 3 Node hyper-V 2012 R2 Failover Clustering with Storage spaces on one of the Hyper-V hosts

    Hi,
    We have 3x Dell R720s, each with 5x 600GB 3.5" 15K SAS and 128 GB RAM. I was wondering if I could set up a failover Hyper-V 2012 R2 cluster with these 3, with the shared storage for the CSV being provided by one of the Hyper-V hosts with Storage Spaces installed (is Storage Spaces supported on Hyper-V?). Or I could use a 2-node failover cluster and run the third box as standalone Hyper-V, or as Server 2012 R2 with Hyper-V and Storage Spaces.
    Each server comes with a quad-port 1G and a dual-port 10G NIC, so I can dedicate the 10G NICs to iSCSI.
    We don't have a SAN or a 10G switch, so it would be a crossover cable connection between the servers.
    Most of the VMs would be non-HA. Exchange 2010, SharePoint 2010 and SQL Server 2008 R2 would be the only VMs running as HA-VMs. The CSV for the Hyper-V failover cluster would be provided by Storage Spaces.

    I thought I was trying to do just that, with 8x 600GB RAID-10 using the H/W RAID controller (on the 3rd server) and creating CSVs out of that space, so as to provide better storage performance for the HA-VMs.
    1. Storage Server: 8x 600GB RAID-10 (for CSVs to house all HA-VMs running on the other 2 servers). It may also run some local VMs that have very little disk I/O.
    2. Hyper-V-1: will act as the primary Hyper-V host for the 2x Exchange and Database Server HA-VMs (the VHDXs would be stored on the Storage Server's CSVs on top of the 8x600GB RAID-10). May also run some non-HA VMs using the local 2x600 GB in RAID-1.
    3. Hyper-V-2: will act as the Hyper-V host when the above HA-VMs fail over to it (when Hyper-V-1 is down for any reason). May also run some non-HA VMs using the local 2x600 GB in RAID-1.
    The single point of failure for the HA-VMs (the non-HA VMs are non-HA, so it's OK if they are down for some time) is the Storage Server. The Exchange servers here are DAG peers of the Exchange servers at the head office, so if the Storage Server's mainboard goes down (disk failure is mitigated using RAID; other components such as RAM and mainboard may still go, but their failure rate is relatively low), the local Exchange servers would be down, but Exchange clients would still be able to do their email-related tasks using the HO Exchange servers.
    Also, they are under 4hr mission-critical support, including entire server replacement within the 4-hour period.
    If you're OK with your shared storage being a single point of failure, then sure, you can proceed the way you've listed. However, you'll still route all VM-related I/O over Ethernet, which is obviously slower than running VMs from DAS (with or without a virtual SAN LUN-to-LUN replication layer), as DAS has higher bandwidth and lower latency. Also, with your scenario you exclude one host from your hypervisor cluster, so you'd be running VMs on a pair of hosts instead of three, which gives you much worse performance and resilience: with 1 of 3 physical hosts lost, the cluster would still be operable, whereas with 1 of 2 lost, all the VMs booted on a single node could give you inadequate performance. So make sure your hosts run well under capacity, as every single node should be able to handle the ENTIRE workload in case of disaster. Good luck and happy clustering :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.
