Qmaster

Hello,
I want to connect 2 computers so they can share compression work in Compressor with Qmaster. How do I cable both computers?
Thanks

If both Macs are relatively new, you can just connect them with a Cat5e (Ethernet) cable. If they're older Macs, you may need a crossover cable. If they're not Macs, you're on your own...

Similar Messages

  • QMASTER hints for usual trouble (QM not running / clustered nodes / networks etc.)

    All, I just posted this with some hints & workarounds for very common issues people have on this forum and keep asking about concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've hit many of them over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 as of May 2007 (now). If not, here are some rules of thumb I used for FCP to Compressor via a QMASTER cluster, for example. They're in no special order, but they might help someone get around the usual stuff with QMASTER v2.3, FCP v5.1.4 and Compressor.app v2.3.
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid, with some "EASY SETUP" stuff. I hope it has been reworked underneath. I guess I will know soon if it has.
    For most FCP, COMPRESSOR, SHAKE and MOTION work:
    • provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the Apple file system (via NFS) using cmd+K or Finder/Go/Connect to Server, OR use a shared SAN file system such as XSAN™ where the file systems are all shared over FC rather than the network (a command-line sketch of the mount and static-IP setup follows this list). You will notice the CPUs go very busy for a short while; this is the Apple file system task - I guess it's doing "Spotlight stuff" - and it goes away after a few minutes.
    • set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times spent COPYING material back and forth, in some cases undermining the pleasure gained from clustering in the first place (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means stick with either ETHERNET or FIREWIRE (or another interface such as AirPort, which will generally be way too slow to be useful); logical means keep all nodes on the SAME subnet. You can do this simply by setting it up in System Preferences/QMASTER/Advanced tab under "Use Network Interfaces". On my current QUAD I set this to use BUILT-IN ETHERNET 1, and on the MacBook Pros I set it to their BUILT-IN ETHERNET.
    • LOGICAL NETWORKS (subnet): simply HARDCODE an IP address on the Ethernet interface (for example) for your cluster nodes and the service controller, for example 3.1.1.x, and it will all connect fine (see the sketch after this list).
    • Physical networks: as above, (1) DON'T MIX FireWire (IPoFW) and Ethernet (IPoE). (2) If you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10-port GbE hub for about HK$400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) FWIW, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those disks that were FW400 (all gone now), which showed this was not stable overall.
    • for the cluster controller node, MAKE SURE you set the CLUSTER STORAGE (System Preferences/QMASTER/shared cluster storage) so that the CLUSTER CONTROLLER NODE'S storage IS ON A SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in Console.app [see below]). If you have a SAN file system like XSAN™, then just put this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead, EXPORT the SEQUENCE FROM the BROWSER for consistent results.
    • FCP - "media missing " messages on EXPORT to COMPRESSOR.. seems a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary trry in the FCP PROJECT BROWSER. Simply if you have browser/bin A contains(Bin B (contains Bin C (contains sequence X))) this will FAIL (wont work) for "EXPORT TO COMPRESSOR" if you use EXPORT to COMPRESSOR in a FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT and select the SEQUENCE you want and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): a few things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG-2 .M2V encode USING A CLUSTER causes errors in the placement of the chapter markers when it is imported into DVDSP3. In fact, CONSISTENTLY, ALL the chapter markers are PLACED AT THE END of the TRACK in DVDSP3 - somewhat useless. This also seems to happen when the source is an FCP reference movie, although inconsistently. A simple workaround, if you have the machines, is to TURN OFF SEGMENTING in the COMPRESSOR ENCODER inspector and let each .M2V transcode run on the same service node. For the jobs at hand, just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterB, herclusterC) for each transcode job. Anyway, for me, in the time spent resolving all this I could have TRANSCODED everything on my QUAD and it would all have been done sooner! (LOL)
    • CONSOLE logs: if QMASTER fails, I would suggest your first port of call for diagnosis should be /Library/Logs/Qmaster. In there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others including service controller.com.apple.qmaster.executorX.log (one for each CPU/core and node) and qmasterca.log. All of these are worth a look; for me they helped solve 90% of my Qmaster errors and failures.
    • MOTION 3 - FWIW, EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME; it seems MOTION is writing stuff out to /var/spool/qmaster.
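    As a rough sketch of the volume-mounting and hardcoded-IP points above (the share name, server address and IP values are placeholders, and mount_afp is just one command-line equivalent of cmd+K in the Finder):

        # Mount the volume holding source and target media on every node
        # (the command-line version of Finder > Go > Connect to Server).
        mkdir -p /Volumes/MediaShare
        mount_afp -i "afp://controller.local/MediaShare" /Volumes/MediaShare

        # Hard-code a static IP on the interface the cluster will use
        # (adjust the network service name and addresses to suit).
        sudo networksetup -setmanual "Built-in Ethernet" 3.1.1.10 255.255.255.0 3.1.1.1

        # Confirm each node can see the controller on the same subnet.
        ping -c 3 3.1.1.1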
    TROUBLESHOOTING QMASTER: if QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting your machines.
    Go read the TROUBLESHOOTING sections in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING", and search these forums CAREFULLY - the answer is usually there somewhere.
    Otherwise, try these steps...
    You'll know that QMASTER is in trouble when you
    • see that the QMASTER ICON at the top of the screen says "NO SERVICES" even though that node is started, and
    • see that the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an "APPLY" (like minutes with a SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see "undefined" nodes in your cluster (meaning that one was shut down or had a network failure). All of this means it's going to get worse and worse, so DON'T submit any more work to QMASTER - best count your gains and follow this list next.
    (a) in COMPRESSOR.app, RESET BACKGROUND PROCESSES (it's under the COMPRESSOR menu) and see if things get kick-started, but you will lose all the work that has been done up to that point in COMPRESSOR.app.
    (b) if that's no good, then on EACH node in that cluster, STOP QMASTER (System Preferences/QMASTER/Setup; set 0 minutes in the prompt and OK). Then, when STOPPED, RESET the shared services by OPTION+CLICKING the "START" button to reveal "RESET SERVICES", then click "START" on each node to start the services again. This has the effect of REMOVING - or, where the CLUSTER CONTROLLER node is RESET, of terminating - the cluster that's under its control. If so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it, then restart your cluster.
    (c) if step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using Console.app) for any FILE MISSING, FILE NOT FOUND or FILE ERROR messages (a command-line sketch follows these steps). Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty; other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages - often these are NEVER written (or I can't find them) in /var/logs. Try to resolve any issues you can see (mostly VOLUME or FILE path issues, in my experience).
    (d) if still no joy, then try removing all the 'dead' cluster files from /var/tmp/qmaster, /var/spool/qmaster, and also the directory that you specified above for the controller to share the clustering. For SHAKE issues, do the same (note also where the SHAKE shared cluster file path is - it can also be specified in the RENDER FILEOUT node's prompt).
    (e) if all this DOESN'T help, it's time to get the BIG hammer out. Simply STOP all nodes if not stopped (if the status/mode is "STOPPING", then QMASTER is truly buggered), DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the QMASTERD tasks. Yes, sure, you can go in and SUDO restart them, but that is dodgy at best because they never seem to terminate cleanly; kill -9 or FORCE QUIT is what one ends up doing, and then you STILL have to restart.
    (f) after the restart, perform the steps from (b) again and it will usually (but not always) be right after that.
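    A rough command-line sketch of steps (c) to (e); the grep pattern is only an example, the paths are the ones named above, and the cleanup should only be run once Qmaster has been stopped on every node:

        # (c) scan the Qmaster logs on this node for file/path errors
        grep -iE "file (missing|not found|error)" /Library/Logs/Qmaster/*.log

        # (d) remove stale cluster files once Qmaster is stopped everywhere
        sudo rm -rf /var/tmp/qmaster/* /var/spool/qmaster/*

        # (e) as a last resort, list any qmasterd processes that refuse to die
        ps aux | grep [q]masterd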
    Lastly, here are some posts I have made that may help others with QMASTER 2.3 (and not the NEW QMASTER as of May 2007)...
    Topic "qmasterd not running" - how this happened and what we did to fix it. - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    Lastly, spend some DEDICATED time using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR, MOTION and QMASTER forums.
    Hope that helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCS2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan

  • Having trouble setting up Distributed Processing / Qmaster

    Hey Guys,
    I was able to (at one point) use Compressor 3 from Final Cut Studio with Qadministrator to allow distributed processing, either via managed clusters or a QuickCluster - both worked on both Macs (a 2007 2.6GHz Core 2 Duo iMac and a 2009 3.06GHz Core 2 Duo iMac).
    I just upgraded to Final Cut Pro X and Compressor 4. As far as I can tell, the Qmaster settings now reside in the Compressor application itself, so I tried setting it up as best I could using both the QuickCluster and managed cluster options (very similar to the older Qmaster), but no dice. I can see my controller's cluster from the secondary iMac, but it always displays submissions as "Not Available" and it does not help with processing. I've tried everything I can think of - I tried using FCS Remover for the older version of Final Cut, I tried looking around via the Terminal to see if there are any residual files & settings left over from before the FCPX install, and I've tried following as many instructions as I could find (including Apple's official documentation on setting up a cluster in Compressor 4), but NOTHING seems to work. I'm at a loss!!
    Unfortunately, any documentation or references to issues with Qmaster / distributed processing relate to older versions of Compressor and whatnot.
    Can anyone help, or does anyone have any suggestions? I have no idea how to get this working and I am having trouble finding anything useful online during my research.
    Perhaps someone is familiar with this and can help me set it up correctly? I'm verrrry new to Final Cut in general, so I apologize in advance if I'm a bit slow, but I'll try to keep up!
    Thanks,

    In spite of all Apple's hype I'm not sure distributed processing is actually working.
    First I ran into the problem with permissions on the /Users/Shared/Library/Application Support folder.  There's some info about that in this discussion (a sketch of the kind of fix involved follows the link).  You'll need to fix it on each computer you're trying to use as a node.
      https://discussions.apple.com/thread/3139466?start=0&tstart=0
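    For reference, a minimal sketch of that sort of permissions repair; the exact mode here is an assumption, so check the linked thread before applying it, and repeat it on every node:

        # Assumed fix: make the shared Application Support folder readable and
        # writable for all local users (X adds execute only where appropriate).
        sudo chmod -R a+rwX "/Users/Shared/Library/Application Support"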
    Then I finally found some decent documentation on Compressor 4 here
      http://help.apple.com/compressor/mac/4.0/en/compressor/usermanual/#chapter=29%26section=1
    However, no matter what I tried, I could not get the compression to spread across more than one computer.  I tried managed clusters, QuickClusters, and even "This Computer Plus".  I was testing on a Mac Pro, a Mac mini, and a MacBook Air.  I could disable the Mac Pro and the processing would move to the mini; disable the mini and it would move to the MacBook Air.  No matter what I do, though, it won't run on multiple machines.
    I'm also having trouble doing any kind of compositing in FCPX and getting it to compress properly.  I see this error
    7/20/11 11:07:42.438 PM ProMSRendererTool: [23:07:42.438] <<<< VTVideoDecoderSelection >>>> VTSelectAndCreateVideoDecoderInstanceInternal: no video decoder found for 'png '
    and then I end up with a hung job in the Share Monitor that if I try to cancel just sits there forever saying "canceling".
    I'm seeing a bunch of Adobe AIR Encrypted Local Storage errors in the log too.  Don't know what that has to do with FCPX but I'll have a look and try and figure it out.

  • Stuck on grey screen at start up... apparent Qmaster bug.

    So I found what appears to be a bug. At least for me... and at least in 10.6. Not sure if this affects other OS versions.
    I was having an issue with start-up getting stuck on the grey screen. The symptom is that eventually the start-up spinner would stop spinning, leaving the machine hanging there on the grey screen.
    There are Apple docs and ample Google hits on this subject. But I have found nothing pointing to Qmaster as a possible cause.
    It took me a while to figure it out, and it was not an obvious issue.
    I thought it might be helpful to post this here so it comes up in a Google search for someone who may be having the same problem.
    My issue:
    I'm doing image-sequence Maya renders. However, I doubt it is Maya-related.
    If I pause the render in Batch Monitor and shut down, something about Qmaster still being active hangs my startup. It is repeatable behavior.
    I have to start up holding the Shift key for safe mode.
    You will note, if you have 'Show Qmaster service status in menu bar' enabled under the Advanced button of Qmaster's System Preferences pane, that the green light is on.
    That green light being on gave me the clue that it was something to do with Qmaster.
    Clicking Stop Sharing and restarting clears the start-up hang.
    It appears that whatever state Qmaster is left in with a paused render is somehow preventing normal start-up.
    Since I have read that menu bar items may be one possible cause of start-up issues, it's also possible, and therefore worth mentioning, that unchecking 'Show Qmaster service status in menu bar' may resolve the problem. I have not tested that.

    Issues like this are often caused by startup jobs that are taking far too long to finish. It definitely looks like a serious bug so it's worth reporting it to Apple at either of the following places:
    http://www.apple.com/feedback/compressor.html
    http://bugreport.apple.com

  • Error when submitting job to Qmaster cluster

    Hi all,
    I'm new to working with the Qmaster cluster, but I created a cluster (at least I think I did it right) using the distributed processing Apple document from the Help menu. Everything looks right... I have an active cluster with two machines. What is a little weird is that the cluster I can choose in Compressor when submitting a job has a format like "ThisComputer.RScomputer.local:50411" instead of the name of the cluster I made (I called it Zeus Cluster).
    So, I choose this long cluster name and submit the job but I get this error:
    Error: An internal error occurred: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out'.
    Has anyone seen this error? What could I be doing wrong? Apple Qadministrator shows the cluster as active and both machines are sharing fine.
    Any help would be appreciated. Thank you!

    Have you looked in /Library/Logs/Qmaster for any specific detail? (Use /Applications/Utilities/Console.app.)
    There's usually some detail in there that will give you an insight. If you see something of significance in there, by all means post it here so we can examine it and make suggestions.
    I have had this before, and in my case it was related to the cluster setup I had.
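    As a rough sketch, you can also scan the whole log folder from the command line (the search string is just the exception name from the error above; the logs may word it differently):

        # Look for the port timeout exception in every Qmaster log on this node.
        grep -R "NSPortTimeoutException" /Library/Logs/Qmaster/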

  • Qmaster...here we go again!

    Qmaster services... Compressor and all related things... have been a nightmare from the beginning... after all that pain, Tiger had finally become quite reliable...
    Then, because Final Cut Server needs Leopard, Leopard came along, bringing back the nightmares... "qmasterd not running" - what a ****!!
    The geniuses at Apple like to joke!
    I have to reinstall FCS several times a day; that's the only way I can make it work.
    How is that possible??? It works for a while, then suddenly... boom!! A time bomb.
    The cluster is gone, no more proxies for assets.
    "qmaster not running"
    Qmaster Preferences makes System Preferences crash; the Console states "Exited abnormally: Bus error".
    There's no way to restart Qmaster in any known manner; nothing works except reinstalling the package again and again.
    Any help out there?
    Thanks.
    Eri

    I had a conversation about Qmaster and its lack of reliability with a real Apple sales engineer yesterday. I can't provide any concrete answers, but maybe I can help set expectations a little.
    As a result of my tinkering with Qmaster and this conversation, I'm convinced that Qmaster is ultimately only consistently reliable, in most cases, for people who are running it on a set of servers that are always left on and that have fixed IP addresses, assigned domain names and properly configured firewalls. The closer I've moved my Qmaster environment to that setup, the more reliably it has worked for me. When I set up two student computer labs as nodes in the cluster, things go sideways within a day or so and I'm back to having to reset Qmaster services on those lab machines.
    Once I've got Qmaster running on a server-class machine, I've been most successful by doing the following (a rough command-line sketch follows the list):
    1. Make sure the machines are always on. No sleeping. No restarting.
    2. Make sure the machine has a fixed IP address (no DHCP) and an assigned domain name.
    3. Make sure your firewall is configured correctly. Qmaster, unlike most apps, uses a range of ports, so it gets tricky. And don't forget Qmaster uses NFS.
    4. Be patient. Once I submit a job, I wait ten minutes before checking Batch Monitor to see what's happening. If the job hasn't started running by then, I take action. Remember: it takes time to chop a video file into segments, pass them out to a set of machines, start processing and get useful feedback on the status of those jobs.
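    As a rough sketch of items 1 and 2 on a given node (the addresses and service name are placeholders, and how you verify the firewall will depend on your environment):

        # 1. Keep the node awake: never sleep the machine or its disks.
        sudo pmset -a sleep 0 disksleep 0

        # 2. Give it a fixed IP on the interface the cluster uses.
        sudo networksetup -setmanual "Built-in Ethernet" 192.168.1.50 255.255.255.0 192.168.1.1

        # 3. Check whether the application firewall is globally enabled (0 = off).
        defaults read /Library/Preferences/com.apple.alf globalstate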
    I've had the best luck setting up one server as a QuickCluster and all the others as unmanaged Compressor nodes. When I submit a job, I use the QuickCluster and check the "Include unmanaged services on other computers" box. I'm just getting into applying my Qmaster knowledge to Final Cut Server, so I'm not sure how all this applies to FCS just yet.
    Finally, the Apple sales engineer raised a question that has lingered with me as I've used Qmaster to distribute compression tasks to multiple machines: in many cases the overhead introduced when a video file has to be broken into pieces, copied across the network, processed and reassembled is not worth the effort, i.e. encoding would ultimately go faster on one machine with a whole bunch of cores. Now, if you have lots of long projects to encode to something processor-intensive like H.264, that argument probably goes out the window somewhat - especially if, like me, you have 40+ Core 2 Duo iMacs sitting in computer labs and faculty offices unused all night.

  • Can qmaster be told to work on one thing at a time?

    Greetings all,
    I have Qmaster set to operate on my systems with managed resources and a controller. I originally experimented with running Qmaster on two or three files at a time. This worked fine, and now I'm ambitious: I want it to work with 10+ files, which I'll set to run overnight.
    I'm running into two problems. First, when I set it to work with around 56 files (for a total of ~70 targets), it seemed to freak out and just stopped doing anything. I presume that overloads it, and I'm fine with doing ~10 jobs a night - that's not the focus of this post (but if anyone knows of a fix, advice would be appreciated). The real problem is that Qmaster seems to insist on copying the source file of every single file in the queue to all computers involved. Each file is 10-20 GB, and I'm finding that the systems' HDs will often fill up and cause the encodes to fail.
    All of the systems are linked through a gigabit switch, and the source media is on an external pair of RAID 0 HDs connected to the controller via FireWire 800. All systems have the HDs involved mounted over the network, but of course they insist on copying the file to their internal HD. What I'd really like to do is to somehow have Qmaster copy files only as they're being worked on, and to have the systems work on one video at a time. I suppose this could theoretically be accomplished by splitting the batch up into smaller batches and submitting them at different priority levels... I don't mean to sound like a whiner, but what a hassle. Does anyone have any ideas? Thanks very much in advance!

    Yes, I realize my solution needs the idea that followed it to be feasible in order to work properly.  My bad!
    I did wonder about using a combination like that, but I hoped it could all be done with one object.  Ah well - just means adding an extra little bit of code.  No probs.
    Thank you anyway.
    Never say "Oops." Always say "Ah, interesting!"

  • Disc will no longer burn - QMaster error

    Yesterday (and on many other days) I burned a Blu-ray Disc out of FCP7 (using Share).
    Today when I follow the exact same procedure as I always do, I get the "Share Failure. An Internal error occurred: Apple Qmaster File Agent not found." error message.
    I don't want to use Compressor -- I want to use Share out of FCP7, using the same menus I'm used to. What's wrong?

    Open Compressor 3.5 after you have exported a self-contained, current-settings QT movie out of FCP.
    Under File, choose "New Batch from Template" and choose Create Blu-ray.
    Drag the self contained movie file into the image well where the down arrow is represented.
    In the Inspector, choose Job Action, and select the menu you want.
    Enter a title, encode and burn.

  • Leopard - QMaster and Virtual Cluster problem

    Hi guys,
    Up until yesterday I had my Mac Pro Octo running under 10.4, where I successfully set up a virtual cluster using 4 instances for Compressor. It worked like a charm and my Mac Pro was doing its job perfectly.
    Today I made a bootable backup of my 10.4 install and installed 10.5 using the erase and install option (clean install). I installed all my software again and tried setting up my virtual cluster again, using the same settings I had under 10.4. Sadly, I can't seem to get it working.
    In the Qmaster preference pane, I have the QuickCluster with Services option checked. For the Compressor entry in the services list I have the Share option checked and use 4 instances for the selected service. The QuickCluster received a decent name and the option to include unmanaged services from other computers is checked.
    I have the default options set in the Advanced tab (nothing checked except "Log service activity to log file" and "Show Qmaster service status in the menu bar"). I then started the cluster using the Start Sharing button.
    Now I open up Compressor and add a file to process (QT encode to iPod), but when I hit the Submit button, my virtual cluster doesn't show up in the cluster drop-down. If I leave the Compressor GUI open for 5 minutes, it will eventually show up in the list and I can pick it. Sadly, picking it from the list at that point and hitting the Submit button makes Compressor hang.
    I checked my logs, but the only thing concerning Compressor I could find is this:
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:41 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'], server [tcp://10.0.1.199:49167]
    4/12/07 20:12:41 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'
    4/12/07 20:17:55 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488705.075513 218488675.075652) 1'], server [tcp://10.0.1.199:49167]
    I tried stopping and then restarting sharing, and I noticed the following entries in my log:
    4/12/07 20:23:26 compressord[210] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[211] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[213] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterca[269] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterqd[199] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:27 QmasterStatusMenu[178] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489009.603992 218489007.604126) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'
    Batch Monitor immediately detects the cluster being active again, but Compressor doesn't, leaving me with only This Computer available in the cluster drop-down when submitting a batch.
    In Activity Monitor, I notice that CompressorTranscoder is not responding (the 4 CompressorTranscoderX processes are fine) and the ContentAgent process isn't responding either.
    Does anyone have any clue what I could check next or how I could fix my problems?
    Thanks a lot in advance,
    Stefaan

    Bah, this is crazy - today it doesn't work anymore. Yesterday my cluster was showing up in the drop-down, I could submit a batch to it, and it got processed over my virtual cluster.
    Today, after finishing the second part of my movie, I tried it again. I didn't change anything in my settings, my machine hasn't even rebooted (it just recovered from sleep mode), and my cluster isn't showing up at all anymore. Even the Qmaster menu doesn't show it.
    Guess I'll have to wait until it appears again, or try a few things out.
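    Given the "can't refresh cache" messages above, one quick thing to check is whether the services cache file named in those log lines exists and is a valid plist (just a sketch; whether this is actually the culprit here is an assumption):

        # Does the Qmaster services cache exist, and is it well-formed?
        ls -l "/Library/Application Support/Apple Qmaster/"
        plutil -lint "/Library/Application Support/Apple Qmaster/qmasterservices.plist"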

  • Does Qmaster actually make a performance difference that's NOTICEABLE??

    I have Apple Qmaster 2.3.1 installed on a Dual 2.7 PowerMac and a new Intel-based i7 MacBook Pro. The FCP suite is all installed on the PowerMac tower, and Compressor doesn't allow you to use the same serial on a second computer on the network (the documentation doesn't state it MUST be on all machines, only Qmaster).
    I tried this setup with an Ethernet cable as well as with AirPort networking (computer-to-computer setup, no firewall turned on, internet access turned off for security).
    According to the "getting started quickly" segment of the 'Apple Qmaster and Compressor 2' PDF that came with my FCP suite, I have installed it correctly.
    I can see and submit to the cluster that originates on the PowerMac and see the active service (a Compressor cluster only allows one core of the MacBook Pro to be used, according to the documentation, and sadly only 3 kinds of render software can make use of all 4, but NOT ANY Compressor or FCP rendering? LAME... or please correct me if I'm wrong).
    I did a transcode (a resize and change of codec) and timed it, first using just This Computer and then through the cluster...
    Over a 22 min. encode in both setups, there was barely a 5% difference between the trial encodings... is this thing actually working? Is there a BETTER way to test the cluster to see if I would actually benefit from keeping the laptop within cable length of the tower? (i.e. is there a verifiably FAST encode I could try to see ANY difference; .mov to .mpv or something?)
    I then downloaded and tried to set up a cluster using Pooch, but that breaks shortly after submission with an app crash... so I can't even get a performance figure to compare with Qmaster...
    Seriously... if a 5 to 10% increase in encoding speed is what Apple thinks is QUALITY performance when using a virtual 6-core system via Qmaster, I am underwhelmed... therefore I'm thinking something must be wrong instead. I tested the cable with a high-bandwidth video iChat and a few other things; it seems to have no problem with throughput, so the cable doesn't look like the culprit.
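    For a more objective check of the link itself (independent of Qmaster), timing a plain file copy between the two machines gives a rough throughput number. Just a sketch - the hostname and file path are placeholders, and it assumes Remote Login (SSH) is enabled on the other Mac:

        # Copy a known-size file across the cable and note the transfer rate scp reports.
        time scp ~/Movies/test.mov user@powermac.local:/tmp/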

    Well, on the i7 I also had the firewall turned off (hoping this opens all available ports), the Qmaster prefs pane set to 'Services only' as per the client/cluster rules in the guidelines, the Managed setting turned off, the Network choice under the Advanced tab set to All Interfaces, and a working cable confirmed by doing a machine-to-machine connectivity test via an iChat audio/video connection.
    On the client machine (the tower, serving the submission to the cluster) the setup was the same, only with the pref pane set to QuickCluster with services, Services with Managed unchecked, the QuickCluster option set to 'include unmanaged services from other computers', and the rest all identical settings.
    I just heard a Pick Our Brains segment on the DP Buzz podcast that said H.264 does not multithread, which may be affecting things even if transcoding to a different size (re: re-encoding perhaps after resizing?), so maybe I will try a QT file output using the x264 codec, which the author loosely suggests IS multithreaded (dunno, but will try).
    Furthermore, I re-read the manual and on pg. 15 it is conflicting: "NOTE: the compressor 2... limited to computers that have either FCP Studio or DVDSP 4 installed". Well, that's not possible without a multi-license release, as it checks and warns you if you try installing the same software using the same license number... it also conflicts with the message later on in the install instructions (below).
    Later on the same page the restrictions are different: "Each computer in the network will require Apple QMaster AND/OR Compressor 2...", then it's muddled again on pg. 16 (installation) by saying "make sure the client software (meaning Qmaster, I think) is on at least one computer" and further down "software (either Compressor or Apple Qmaster)".
    SOOOOOO, what I gathered from the majority of the conflicting specs is that ONE machine should at least have ALL of FCP and Qmaster installed on it, and the rest at least have Qmaster installed... when it says either/or, I went with the 'or' boolean, meaning I was allowed to install only Qmaster without the installer warning me about things.
    (I am not doing Dolby Digital Pro audio, so thankfully I don't have to install FCP/DVDSP as a result... because I don't see buying another license just to test out improved cluster processing.)
    Wouldn't mind knowing if anyone has found Pooch to be a better solution, and more flexible, as it supposedly will work with a number of processes, let alone with a QT codec for exporting a number of different formats... anyone tried Pooch with their iMovie or FCP or DVDSP?

  • Using Qmaster with DVD Studio Pro OR Qmaster requires FCP on all machines?

    Greetings everyone,
    After a few days of tinkering I've finally managed to set up a managed cluster. However, I have a problem (surprise, surprise). When I submit a job to the cluster, Final Cut Pro suddenly opens up on all of the computers being used. I have Final Cut Studio 2 under an academic license, and all copies report that there is another copy running with the same serial number and that they must shut down. They render a few frames before suddenly quitting the job, regardless of whether I leave the message up or OK it away. I've done some reading from this thread which makes it sound like all systems must have a copy of FCP installed, or to use a slightly different method:
    http://discussions.apple.com/thread.jspa?threadID=375923&tstart=195
    It's a bit upsetting as I'd originally read that systems could aid in rendering with just QMaster installed, and that they didn't need FCP installed. Hence, having FCP open up was unexpected, and I get the feeling that uninstalling FCP from those systems wouldn't help me.
    But let me explain what I do normally, and perhaps some of you can guide me.
    My work has me converting hundreds of tapes to DVD. Without QMaster, I would import the videos in Final Cut Pro, set chapter markers and in and out points, and then export the video (with barely any processing - an export typically takes 4 minutes on a dual 800 MHz G4). I'd then import it into DVD Studio Pro, handle the menus, and then do a build and format. That process would take ~2-3 hours per DVD, and that is the part that I'd like to optimize.
    The setup is such that I have one dual G5 wired to our hard drives (all Firewire 800), and then four other systems that connect to the G5 to access the HDs through a gigabit network switch. This has worked quite nicely in that it allows systems that only have Firewire 400 to work at Firewire 800 speeds regardless.
    I'd originally thought that DVD Studio Pro 4 would automatically be able to process jobs through Qmaster, but I couldn't find a setting for it and it didn't seem to do it on its own. Instead, I was planning to redo my work flow such that the encoding of the video (the most time-consuming part of DVD Studio Pro's build procedure) would be done in Final Cut Pro. Or more specifically, it would go through Compressor. However, as I mentioned above, attempting to work this way causes Final Cut Pro to open up on all nodes within the cluster. As they all have the same serial number, they become unhappy under such conditions. I can uninstall them from the systems as I only ever use one copy at a time anyway, but as I mentioned from the link above, it seems that rendering through FCP -> Compressor will require FCP to be installed on all participating systems.
    Does anyone have any advice for what to do? Is there a way to get around having FCP activate on all participating systems, or is there a way to do it directly through DVD Studio Pro (ideally in a manner that wouldn't cause DVD Studio Pro to activate on all participating systems)?
    Any advice is much appreciated. The distributed encoding is really the only reason why we upgraded to Final Cut Studio 2, so I'm going to feel rather foolish if this doesn't work out.

    A QCluster network isn't designed to distribute rendering from FCP; rather, it's designed to distribute encoding tasks.
    As mentioned in that thread you linked to, the best workflow I can think of in your situation is to export Quicktime reference movies from FCP, then bring those reference files into Compressor for distribution to the QCluster for the transcode to MPEG 2 for DVD. There is a wealth of good information in that thread.
    In short, if you attempt to use a Cluster to export from FCP, then all of your nodes will need to have separate licenses of FCP. But if you use the reference movie method, FCP is not required for the encoding.
    One question for clarification on your workflow: are you logging & capturing these clips and then sending them directly to DVD, or are you editing them in a timeline prior to export to DVD?
    If you're just using FCP as your Log & Capture tool, then you can skip the reference movie step and just grab the captured clips in your Capture Scratch folder and send those directly to Compressor.

  • Is there a recommended maximum number of jobs QMaster can handle?

    In our setup we need to render some 4,000 - 8,000 short video segments (only a few seconds each). Right now we are submitting them all one after the other via the command line to a cluster (QAdministrator has been set to allow 10,000 batches in the queue).
    Has anyone experienced problems with queues this large? When rendering, QMaster randomly dies - mostly at around 1,000 batches. The batches remain in the spool but I need to reinstall the QMaster Service Node package to get everything off and running again.
    I don't like the idea of having to babysit the queue, and submitting each batch only after the previous one finishes negates the whole idea of using clusters.
    I've thought about adding more equipment to clear out the queue faster, but I don't want to recommend this route if it doesn't work, as that would be a huge expense wasted.
    It's almost as if QMaster has a memory leak but that's just a guess.
    Any thoughts or wisdom would be welcome.

    I would imagine most people submitting such a large number of jobs are using something more robust like Qube:
    http://www.pipelinefx.com/products/qube-film.php
    Qmaster is very buggy and would need a complete rewrite before it could be qualified for use in such an environment.
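    Short of switching tools, one low-tech way to keep the queue from ever holding thousands of batches is to throttle the existing command-line submissions into chunks with a pause between them. This is only a sketch: submit_segment.sh stands in for whatever command you are already using per segment, and the chunk size, pause and paths are arbitrary placeholders:

        #!/bin/sh
        # Submit segments in chunks so the cluster queue stays well below the
        # ~1,000-batch point where Qmaster has been dying.
        CHUNK=200
        i=0
        for clip in /Volumes/Renders/segments/*.mov; do
            ./submit_segment.sh "$clip"      # your existing per-segment submission command
            i=$((i + 1))
            if [ $((i % CHUNK)) -eq 0 ]; then
                sleep 1800                   # give the cluster time to drain before the next chunk
            fi
        done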

  • ECC / Parity errors but 'only' when running qmaster / Compressor ?!!

    I have a Mac Pro that came with 4GB of (Apple) RAM. I recently installed 4 additional GB (to fill both risers, for 8GB total) of branded, supposedly Apple-certified RAM from one of the major vendors we have all come to love and trust.
    Anyway, all seems fine - but now I am running Qmaster/Compressor, which in fact wires (or at least pages) so much RAM as to make good use of my 8GB: I have only 80MB free - but hey, that's good, the memory is being used.
    Here's the odd thing: when doing Compressor/Qmaster tasks (this machine is both the cluster server and a service node), I now get ECC parity errors:
    - Of my 8 chips, 3 have thrown ECC parity errors (all corrected), ranging from 1 on one chip to now 35 on another. This is in a 24h period.
    - This is on the APPLE ORIGINAL RAM - none of the 'new' chips have any errors?!!!
    - Errors show up in the system report as well as the console log - but I ran the EXTENDED Hardware Test twice, and both times the memory (whole system) came up clean!
    - ECC errors don't show up, as far as I can tell, if I don't 'extremely' tax the system - i.e. running email, Safari, iTunes and maybe WoW keeps it 'clean', but that also leaves between 3-6GB of memory 'free'.
    - The errors occur on 2 separate risers.
    So here's my dilemma - I can replicate this every time I use Compressor/Qmaster. However, I have never (yet) had a related crash or other issues.
    It started with 'just' one chip; now it's 3 - again, all the original Apple RAM.
    My memory is installed correctly (i.e. both Apple pairs together and both OWC sticks together).
    I would LOVE to get some suggestions. Logic would say this is related to the new memory, BUT I had also never run a Compressor/Qmaster render farm before now. I already opened a case with Apple and they are sending me one new chip (I opened the case when only one chip was having the errors).
    So - normal? Low enough, even at high volume, that I shouldn't worry? Routine with Compressor and Qmaster? Or time to panic?
    THANKS!
    Dan

    This is very interesting. We, too, recently experienced a hard RAM Parity Panic/Crash on a Qmaster/Compressor encoder node. About two weeks earlier, we experienced a RAM Parity Panic/Crash on a system that has nothing to do with Qmaster.
    Both machines are dual G5 Xserves with factory-installed RAM. The encoder had six 512MB sticks while the non-encoder server had two 1GB sticks.
    The encoder died, but when rebooted kept working by ignoring the bad RAM and working off only 2GB of the good RAM (must be in pairs).
    Apple replaced both sticks under warranty.
    As for Qmaster/Compressor causing the errors, I have no insight. If that were the case, I assume we'd see this on most if not all our encoders and cluster controllers. And that hasn't happened.
    I'd say, you got bad RAM and it needs to be fixed.
    The panic.log should give you insight into which dimm slot threw the error.
    Good luck!

  • Managed Services NOT showing up in Qmaster Service Browser

    This seems to be a strange one. The typical issue is people who have not set their Qmaster preferences to Managed Services; when this happens, the node does show up in the Qmaster Service Browser section of Qadministrator but cannot be dragged into the cluster.
    This is NOT my issue. What I'm seeing (or rather not seeing) is my node in the Qmaster Service Browser when I set it to Managed Services. If I uncheck this box it shows up just fine (though of course it can't then be used in the cluster); when I check Managed, it disappears. I have two other nodes in this cluster that work just fine. This one worked, but disappeared when we moved it across the room (attached with a 25' Cat6 cable).
    I've never seen this before. If anyone has any ideas I would appreciate the input!!
    Thanks!

  • Make sure Compressor 3.5.3 uses Qmaster cluster

    How do you make sure Compressor 3.5.3 uses the Qmaster cluster?  I know you have to set up Qmaster, but I don't know what the settings are supposed to be. My MacPro 2,1 has 8 cores.
    Also, how do you tell whether your version of Compressor is running as 32-bit or 64-bit, and how do you change it from one to the other?
    I have two identical MacPros but one takes three times longer to process the same files in Compressor. I'd like to get them on even ground. They have the same separate raids as well and they both test equally with AJA and Black Magic speed tests.

    Steve Garman wrote:
    I have two identical MacPros but one takes three times longer to process the same files in Compressor. I'd like to get them on even ground. They have the same separate raids as well and they both test equally with AJA and Black Magic speed tests.
    Also, just to point out that you have three threads on this question.
    Russ
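    On the 32-bit versus 64-bit question, a quick check from the command line is to ask which architectures the Compressor binary actually contains (just a sketch; whether a given Compressor release can be switched between modes at all depends on the version, and where the toggle exists it is the "Open in 32-bit mode" checkbox in the Finder's Get Info window):

        # List the CPU architectures built into the Compressor executable.
        file /Applications/Compressor.app/Contents/MacOS/Compressor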

  • MBP Pro Qmaster Start Sharing Greyed Out

    I have FCP Studio 2, a MacBook Pro and a dual G5. On the G5, in the System Preferences pane for Qmaster, none of it is greyed out and sharing is on. On the MBP it is all greyed out and I cannot select any options or click Start. Any ideas?
    On the G5, in Activity Monitor, I see qmasterd running. On the MBP it is not running; only QmasterStatusMenu is.
    The G5 has FCP Studio 1 on it and the MBP has FCP Studio 2.
    Thanks!
    Powerbook g4   Mac OS X (10.4.2)  

    Hi D May,
    Qmaster from FCS 1 is not compatible with the Qmaster from FCS 2. You need to have your Qmaster apps on the same version for this to work (a quick way to compare the installed versions is sketched below).
    Hope that helps!
    Cheers!
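    A rough way to compare the installed Qmaster versions on each machine (the prefPane path is a guess at where the FCS installer puts it; use the mdfind search if it lives elsewhere on your system):

        # Locate Qmaster components, then read the bundle version of the pref pane.
        mdfind -name "Apple Qmaster"
        defaults read "/Library/PreferencePanes/Apple Qmaster.prefPane/Contents/Info" CFBundleShortVersionString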
