Restore Cluster in Compressor

This isn't Compressor 4, but I don't see any other place to post. It's a Mac Pro Quad with 16 GB of RAM running 10.6.8. I used Compressor to change formats on 5 files today and it worked fine.  I forgot a 6th file, so I went to do that the exact same way, only this second time "Submit" was grayed out because the only choice for cluster was "None."  I went to Qmaster and it said it wasn't running. How could that be? How do you make it run? I rebooted the machine, but no help.  I only want a simple cluster on the one machine. Is there any way I can get this going again? Why did it stop running in the first place?  Help me get it going again. Please! I'm running FCP Studio 3 (7.0.3).

Quick cluster with services is checked
Services
X   X  Compressor
Options for selected services (grayed out)
Identify this quick cluster as (all grayed out)
Security  (grayed out)
Reset services (did that)
Start Sharing (did this) and it said, "Unable to start services because qmasterd is not running. Please consult your operation manual."
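For anyone hitting the same "qmasterd is not running" message: a quick way to confirm whether the daemon is actually alive is to look for it in the process list. This is just a diagnostic sketch (the daemon name qmasterd comes from the error dialog quoted above; everything else here is illustrative):

```python
import subprocess

def is_process_running(name: str) -> bool:
    """Return True if a process whose command name contains `name` shows up in ps."""
    out = subprocess.run(["ps", "axo", "comm"],
                         capture_output=True, text=True).stdout
    return any(name in line for line in out.splitlines())

# If this prints False after "Start Sharing", the daemon never launched,
# which matches the error dialog above.
print("qmasterd running:", is_process_running("qmasterd"))
```

If it isn't running, resetting services and rebooting (as described above) is the usual first step; clearing stale files under /var/spool/qmaster (mentioned further down this page) is another common fix.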

Similar Messages

  • How to restore cluster to different servers

    Our 2-node cluster is being included in DR test this year. Our site provider is being asked to prepare servers, network and storage to host the cluster. We will restore from EMC BCV backup and then try to bring up the cluster.
    OS = solaris, storage is asm on emc san, only oracle clusterware for management. DB is 11.1.0.7.
    Questions:
    1) do the servers need to have the same host names as what we have today in production ? (Network will be isolated to DR site). Can we restore the cluster from backup if the servers have different host names?
    2) do the storage device names need to have the same names as on our local san - disk groups internally are expecting certain device names I believe?
    Would appreciate replies from anyone who has performed similar disaster recovery exercises before using EMC BCV backups (which are a type of hot backup).
    Thanks.

    Use Oracle Data Guard for the Oracle database DR setup. The server names can then be different, and you can easily fail over to the DR site and switch back to production after testing the failover.

  • Setting up multi-Mac cluster for Compressor is not working

    Hi
    I have a Mac Pro 8-core Intel Xeon (master) and a MacBook Pro Intel Core 2 Duo (slave).
    I am using FCS 3.
    I tried to set up a cluster for both of them. Everything is fine up to the point when it starts sharing jobs between computers. The MacBook Pro doesn't take any action. I can see in Batch Monitor that the job is segmented for all cores (we have 10 cores total; if I assign all of them in the cluster I get 20 segments in Batch Monitor), but all jobs run on my master.
    I called AppleCare, but they told me to reinstall Compressor, and that doesn't help.
    What should I do?

    Thank you for closing your thread with a workable solution. Hardly anyone does that around here. It is a valuable and gracious act.
    Congratulations on figuring it out on your own. I do not believe any of us would have suggested a complete reinstallation of both the Mac OS and FCS to solve that problem, because you did not include the single most vital factor: you had migrated your FCS system to a new Macintosh using Migration Assistant.
    bogiesan

  • Exchange 2010 - Clustering & DAG problems after restoring CLUSTER from domain

    Hi there!
    SITE 1
    Primary EXCHANGE server (2010SP3 with latest CU) with all the roles installed
    SITE 2
    Secondary Exchange server (2010 SP3 with latest CU) with only the mailbox role, for DAG purposes.
    SITE 1 and SITE 2 are connected with site-to-site-vpn.
    Both servers are on 2008 r2 ENT.
    About 3-4 months ago we accidentally deleted the DAG node from the domain. We managed to restore it using AD restore and checked that the DAG is a member of all the required Exchange groups in the domain.
    Now we are having some big problems: if the site-to-site VPN drops, our primary Exchange server in SITE 1 stops working.
    If the VPN drops between the sites, OWA becomes unavailable, as if the Exchange servers think the Exchange server in SITE 2 is the primary.
    Please advise us how to track down and repair the root of the problem.
    With best regards,
    bostjanc

    Running command:
    Get-MailboxDatabaseCopyStatus –Server "exchangesrvname" | FL MailboxServer,*database*,Status,ContentIndexState
    Gives output showing that all the databases are healthy:
    Example of 1 database report:
    MailboxServer      : ExchangeSRVname
    DatabaseName       : DatabaseName1
    ActiveDatabaseCopy : exchange2010
    Status             : Mounted
    ContentIndexState  : Healthy
    Running command:
    Test-ReplicationHealth –Server "exchange2010.halcom.local" | FL
    Also gives output that everything is fine.
    We still need to solve this issue, so we will be unmarking the thread as answered.
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ClusterService
    CheckDescription : Checks if the cluster service is healthy.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ReplayService
    CheckDescription : Checks if the Microsoft Exchange Replication service is running.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ActiveManager
    CheckDescription : Checks that Active Manager is running and has a valid role.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : TasksRpcListener
    CheckDescription : Checks that the Tasks RPC Listener is running and is responding to remote requests.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : TcpListener
    CheckDescription : Checks that the TCP Listener is running and is responding to requests.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ServerLocatorService
    CheckDescription : Checks that the Server Locator Service is running and is responding to requests.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : DagMembersUp
    CheckDescription : Verifies that the members of a database availability group are up and running.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ClusterNetwork
    CheckDescription : Checks that the networks are healthy.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : QuorumGroup
    CheckDescription : Checks that the quorum and witness for the database availability group is healthy.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : FileShareQuorum
    CheckDescription : Verifies that the path used for the file share witness can be reached.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    bostjanc

  • Will A Group of iMacs Be A Good Cluster for Compressor?

    Sorry for the double post, but it was suggested I post this question here as well.
    OK, so we have an FCS2 and an FCS3 station here and over 15 new iMacs (i3 and i5). Question is, will the iMacs provide enough of a boost to make it worthwhile to try and set up a cluster of some kind? I'm gonna have to read up on setting one up since, up to now, I've been a one-man, one-station deal.
    Thanks for any opinions.
    P.S. I know I would need two versions of Compressor.

    I have another question on this topic. If I set up other Macs on my network to help with rendering, how smart is the software in terms of not disrupting the normal use of that remote computer?
    In other words, will it only take advantage of unused resources and scale as necessary? I also use Net Render from Maxon, and it is an intelligent client that only uses what is not in use.
    Thanks.

  • Compressor won't work when submitted to Cluster

    I've set up my Qmaster preferences to open 4 instances of Compressor 3 on my Mac Pro about a month ago. Now, out of the blue, I'm getting a strange error message when I choose the local cluster and Compressor will not submit the batch. I have not updated any software or changed any Qmaster settings.
    The error message is: "Error: An internal error occurred: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out'
    When I submit to "This Computer" Compressor works fine.
    In the preferences pane, my Qmaster settings are
    Share this computer as: Services only
    Services: Compressor
    Share CHECKED
    Managed UNCHECKED
    Options: 4 Instances (on a dual Quad-core)
    Services: Rendering
    All UNCHECKED
    Everything else set to defaults.
    I've also noticed in Activity Monitor that compressord shows as (Not Responding)--I'm guessing that's not good!
    Like I said, I had applied the 3.0.1 update when it first came out and it seemed to work fine since then.
    I think my next step is going to be to wipe the HD and reinstall everything, but I'm hoping there is a quicker way. Any suggestions?
    MacPro 3.0 Ghz 2x Quad
    Mac OS 10.4.10
    Quicktime 7.1.6
    Final Cut Studio 2 (all the x.0.1 updates)

    Just had the same problem.
    Compressor was stuck on "cancelling" and never completed cancelling.
    Other jobs just did not start because cancelling was in progress.
    Quitting and restarting all programs did not work.
    I finally killed all batch processes using compressor/reset background processing menu item.
    Then I encountered this NSPortTimeoutException problem.
    I threw away all compressor settings but that did not do much.
    Quit/restart still no effect
    Then I set the Apple QMaster prefs to 'services and cluster controller'.
    Did not do much either.
    Then I quit & restart all processes again. This time it seems to work again as it should.
    No idea what really was the problem though...

  • Compressor 4.1 shared cluster computer not appearing - but slave can see master

    On my main "master" machine, I cannot see one of my shared cluster machines. However, on the particular shared cluster machine (slave) that my "master" cannot see, the slave can see the master in the list of shared computers. So, in other words, I suppose I could launch the Compressor job from the "slave" machine [since the source file is on a network drive that all machines can access], but it doesn't make any sense.
    Why can't the "master" see the "slave"?
    I've tried restarting the master computer, tried turning off wifi and ethernet on the master computer, tried manually inputting the IP address and host name on the master computer in the list of shared computers by hitting the "plus" button. Also tried setting the "Use network interfaces" option to either both, ethernet only, or wi-fi only, also tried resetting the compressor queue, and trashing the Compressor preferences.
    Nothing works. What's strange is that this master computer was able to see the slave computer two weeks ago, using Compressor 4.1. Nothing has changed. They are both on the same 10.0.0.x network and subnet. The master machine can ping the slave machine.
    Any ideas?
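    One thing worth noting: a successful ping only proves basic IP reachability, while Bonjour discovery and Compressor's services use their own ports. A small sketch like this can tell you whether a specific TCP port on the slave actually answers (the port number you test is an assumption; check which ports your Qmaster/Compressor setup is actually using):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means something is listening on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example against a slave at 10.0.0.5:
# print(port_reachable("10.0.0.5", 2100))
```

    If ping works but the service port doesn't answer, the problem is the service (or a firewall), not the network.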

    Just posted this in this discussion forum. It might be helpful even though it's specifically for Compressor V4.0.
    Run through some of the procedure if you are on Compressor V4.0 or earlier.
    https://discussions.apple.com/thread/5781288?tstart=0
    As you're using Compressor V4.1, check the cluster and Compressor logs for network and file-system permission errors:
    ~/Library/Logs/Compressor (new in V4.1) to see what the jobcontroller and service nodes are doing
    check /var/log (system log)
    Otherwise, Compressor V4.1 seems sensitive to hardware configurations now.
    Might be of some help.
    w

  • How to restore Multiband Compressor?

    All of a sudden, in the middle of an editing session [Audition CC 2014/Yosemite], Multiband Compressor stopped working. When I click on it, nothing happens. It doesn't crash Audition, as some have reported, but the effect just doesn't launch.
    I searched this site and the web, no joy.
    How do I restore this important tool?
    I tried restarting, resetting PRAM/SMC...everything I can think of.
    If I have to unin/reinstall, how do I make sure my Favorites, etc are saved first? I found files & folders in the /7.0 folder and copied them, but don't want to re-do everything.
    Thanks in advance.

    If you have copied all the settings and files from 7.0 folder as a backup to somewhere safe you could try deleting the original 7.0 folder. If Audition doesn't find the folder when it is restarted it will re-create it with all the default settings. You can then try to see if that restores the Multiband Compressor effect.
    If all is working correctly you can then re-import all your settings from the backed up folder.
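    The back-up-then-delete approach above can be scripted so nothing is lost. A minimal sketch (the paths shown are assumptions; point it at wherever your Audition 7.0 settings folder actually lives):

```python
import shutil
from pathlib import Path

def backup_settings(src: Path, backup_root: Path) -> Path:
    """Copy the whole settings folder into backup_root before touching the original."""
    dest = backup_root / src.name
    shutil.copytree(src, dest)  # raises if dest already exists, so no overwrite
    return dest

# Hypothetical example:
# backup_settings(Path.home() / "Documents/Adobe/Audition/7.0",
#                 Path.home() / "Desktop")
```

    Once the backup is verified, deleting the original folder and letting Audition re-create defaults (as described above) is reversible.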

  • Can't get remote cluster machine to encode video - getting 'Media Server application unexpectedly quit'

    Hi
    Using Compressor 4 / Apple Qmaster Admin etc., I have set up a cluster and it works well with Share > Export Using Compressor Settings.  That took some doing, and I'm happy that it works.
    If I do all my processing in the foreground, or alternatively in the background on my FCPX computer (i.e., without running or using any cluster in the background), it all goes well.  Quick and error-free.
    If I activate the cluster and only put my local machine's compressor services into it, and then send my compressor batch to the cluster, it works perfectly well.  But that defeats the point of having a cluster.  I want the remote machine to do all the work so my local machine doesn't slow down.
    Unfortunately, if I add the remote machine's services into the cluster, the encoding always fails with 'Media Server application unexpectedly quit' in the error log.  I know that the cluster is distributing segments of the file to the remote machine - this can be seen in the Share Monitor, but they never get processed and sit there waiting until I get the error message.
    The remote machine is a modern 4G core 2 duo MacBook. It's never been used for this before.  It has Compressor 4 and the Pro Apps Update installed and both appear to work.  Both machines are running 10.6.8.  The remote machine has Compressor services initialised in Apple QMaster sharing.  The QMasterAdmin sees these services.  They are accessible over Bonjour and appear in the Cluster.  Compressor services from both the local and the remote machine are configured exactly the same way.  The same QT codecs exist on both machines.  I've restarted, shift-restarted, etc etc.
    So I am at a loss here.  The remote machine just won't compress anything it is sent. 
    Any ideas?  Must both have FCPX installed?
    Anyone actually got a remote machine in a cluster to work with Compressor 4?  I can't figure it out.
    Chris.

    OK... finally sorted it out.
    It's a bug, as far as I can tell. 
    Any job entered directly into Compressor 4 in the normal Compressor manner will be successfully rendered by any working cluster from any machine that can access the cluster.  That's good.  It means that the underlying distributed processing model works well.
    HOWEVER - any job forwarded to a Compressor 4 cluster that includes non-local compressor services (ie compressor services not resident on the same machine), using Share > Export Using Compressor Settings direct from FCPX will fail.
    To confirm this bug, I made a cluster on a remote machine.  It was a dual core machine, so I enabled 2 compressor services on that machine, and that's all the cluster was.  Simple.
    I then manually entered a video file (ProRes 422 720p) into Compressor on the remote machine.  I did this by physically setting up a new job using the Compressor user interface.  A bog-standard ProRes 422 720p file rendered fine this way on the remote machine.  As did anything else I gave it.  Good.
    Then on my main machine, I opened Compressor and made a job based on the same file and settings, and sent it to the remote machine's cluster.  No problem at all!  Great!
    So now I know that both Compressor versions, and the clustering model, are working fine.  In fact I can send all sorts of files to the cluster, from any other version of compressor, and have them processed on the remote machine, and get the result back on my desktop later on.  Excellent.
    But if I try to put this same file into an FCPX timeline and go Share > Export Using Compressor Settings... and select a cluster with remote (non-local) compressor services, it does not work.  Rendering the video segments on the remote machine times out and fails, every time.  It doesn't matter what file format I use, it just fails.
    So it's a bug. 
    From what I can tell, Final Cut Pro X somehow messes up the Share > Export Using Compressor settings where the cluster includes non-local compressor services, causing all jobs to fail.  The same Share > Export Using Compressor Settings will work quite happily if all the services on the cluster are on the same machine as FCPX, or if the job is sent to This Computer in the background.  But any attempt to send files to a cluster using any remote services will fail.
    I hope this saves some people from wasting as much time as I have!
    One workaround is to export to ProRes, then put this file manually into Compressor, sending the job to the remote cluster.  This is a two-step process with a large intermediary ProRes file.  If one goes Export as QuickTime Movie, generating the intermediary file prevents further FCPX work from being done.
    To get the intermediary in the background, one could use Share > Export Using Compressor Settings via either This Computer or a cluster using only local services.  Then, once complete, manually add it to a Compressor job.
    So this is a FCPX bug as far as I can tell.
    Chris.

  • Unable to add remote system into cluster using osx 10.5.2

    About a month ago, I had a Qmaster-managed Compressor cluster set up with three (3) systems. I was running FCP 6.0 on one system, with Qmaster on that system managing the cluster. Compressor, Qmaster, and QuickTime were installed on the other systems. All systems were running OS X 10.5. The FCP 6.0 suite tools were installed on one system. One of the systems was an Intel, and I had two (2) instances set up as well as a virtual cluster on the Intel. All worked perfectly.
    All machines were upgraded to the latest Qmaster, Compressor, QuickTime, and OS X 10.5.2 with the Leopard graphics updates.
    Now I can no longer join the remote systems into the cluster. On these systems I have Share and Managed set, for both rendering and Compressor. Yet in Qmaster they only show up as rendering nodes. If I remove the Shared option, then the nodes appear as an unmanaged Compressor service. But they are greyed out and cannot be added to a cluster.
    Before the update, they would display in Qmaster as both rendering and Compressor services and could be added to a managed Compressor cluster.
    Did the updates break something, or is there a new requirement that I am missing?
    thanks

    I'm having exactly the same problem on multiple machines, both Intel octocore and G5 quadcore. I'm running 10.5.4 with all the latest updates on all machines. Everything was working, now we can't drag any of the machines into a cluster to make a new one. Like you say, they only appear if Managed is unchecked (on the machine providing the QMaster service), and then are still greyed out, and not draggable. And you can't save a cluster without specifying the cluster controller, which you can't because nothing can be dragged in. The nodes appear to be unlocked (although the icon isn't very obvious), but even if they're locked, there is no password entry that pops up when clicked, and none have a password set in their QMaster System Preferences.
    To test, I did a totally 100% fresh, pristine Leopard install on a dual G5, ran all OS upgrades, then did a fresh FCP Studio 2 install, ran upgrades again, and repaired permissions just for good measure. No dice. Exactly the same problem as on the other machines. This is a brand new install and it doesn't work!
    Very frustrating problem and I can't believe more people aren't seeing it. Totally fresh install, what else can be done? Well, time to call AppleCare, I guess.

  • Trouble with Qmaster, Compressor

    Hello all!
    I am trying to compress a 48 Minute long HDV sequence to HD-DVD using the H.264 60 Min encode settings, and I am having problems.
    I have read how to set up Qmaster to use multiple cores, so I have made a 4-core quick cluster in Qmaster with the following settings:
    "Quick Cluster with Services" - Checked
    Under the services box, I have "Share" checked for Compressor
    I have 4 instances selected (out of 16 possible)
    "Include unmanaged services from other computers" - Checked.
    All other options unchecked.
    On Advanced menu, only "Log service activity to file" was checked.
    What happens is that when I submit the job in compressor (I select the 4 cluster when I submit), I see all the instances in Activity monitor, but one of them is red and non-responding. I can't get all the instances to do work at the same time, as the cpu utilization is only indicating one instance doing work at a time. I thought they were all supposed to be utilized.
    In the end, I waited 36 hours, and the encode still wasn't finished.
    Any ideas what could be going on? Am I set up correctly? Let me know any thoughts!
    Thanks,
    Ken

    Hi Ken, here are some ideas. I use Compressor/Qmaster all the time. I too have a brand new Nehalem MACPRO 2.93 @ 12GB.
    Firstly, for distribution, *compressor is your best friend*... see why...
    Some diagnostic tips first:
    • Go check out /var/log or Utilities/Console.app (leave the latter in the Dock). Look under the various directories for QMASTER. There is usually a wealth of information in there.
    • COMPRESSOR.app/prefs: not always needed, but worth it to MOUNT the shared cluster storage you set in System Prefs/Qmaster/Advanced: shared cluster storage.
    • COMPRESSOR.app: look at HISTORY.
    • Batch Monitor.app: check the "i" status for each node running a segment in the multi-pass segmented transcode (segment=TICKED in the video inspector in Compressor). Look at the "status" and "logs" in the pane.
    • Turn off ALL network devices (AirPort, EN & FW) until you get it working. QMASTER seems to go looking everywhere for other nodes... so don't let it.
    • COMPRESSOR.app/prefs: set "NEVER COPY SOURCE TO CLUSTER" - we'll worry about this when you start to use other nodes outside your MACPRO.
    • COMPRESSOR.app/reset background processes - to start.
    OK, when you have all this done... let's look at some ways to fire up Compressor using a Qmaster cluster.
    • Assume a QUICKCLUSTER is set up; tick "managed services" if you like.
    • *System Prefs/Qmaster:* set SERVICES: for the number of instances I am using 14! (*yes, 14 instances*) on this MACPRO, and it rides all 16 cores and I can edit fine in FCP at the same time. Plenty of headroom in these new 2009 MACPROs (+doubters, go buy a Dell p.o.s.+). *CAVEAT: MAKE SURE YOU HAVE ENOUGH RAM* - not sure if 6GB will cut it, so trial and error; it works great for me, though, and I have 8GB of my 12GB unused! Note: in the pre-Nehalem MACPRO era the rule of thumb was one instance of Compressor per core; however, since I have been using this baby over the last few days I have cranked it to 14 instances and it runs fine, with all cores at nearly 100% usage and a very responsive machine. The transcodes are at least 3-4 times faster than on my 8-core 2007 MACPRO!
    • *System Prefs/Qmaster/Advanced:* tick the port ranges on.
    OK .. now you're ready to rock!
    1) *System Prefs/Qmaster:* Setup: OPTION-click the Start button to RESET any transcodes that were running.
    2) Set up a job in Compressor.app, making sure "job segmenting" is ticked in the ENCODER inspector.
    3) Submit, and select your cluster in the list box.
    4) Sit back while your new MACPRO 2009 earns its money! - watch all the cores light up GREEN in Activity Monitor!
    5) Watch Console.app ("All Messages") for errors from QMaster.
    OK.. this is only a summary.
    Post your results or PM me if you need help.
    I love compressor and Qm especially on this NEW MAC PRO.. it just smokes anything else I've used for transcoding.
    Oh: *and one other tip for your workflow*, which you can take or leave, regarding HDV. Yep, it's not on my favourite codec list, but here are some good tips for getting your composition ready for distribution, as you are trying to do, from when I owned a SONY Z1P HDV some years back.
    This will cut tens of hours off what you are seeing. (Well, maybe not that magnitude.)
    With the edit prepared and not rendered:
    • Take ALL the footage from the HDV tape that you ingested into the CAPTURE folder and use COMPRESSOR.app to transcode it to PRORES422 only (same resolution etc.)... Simply try this out, please. Use copy and BLOCK SELECT + paste for all the codec+targets information in the COMPRESSOR window... it's simple... submit it to your QMASTER and come back in, say, 10 minutes... it should all be done. (What you have done is make a mezzanine instance of the Long-GOP HDV footage as iFRAME PRORES422 - no quality has been lost or gained.)
    • In your FCP project, DISCONNECT all the media from your sequences and then RECONNECT the media to the same file names that are now the new PRORES 422 (conforming) in FCP.
    • Now your edit sequences are pointing to the PRORES versions of your clips. Edit at will here and watch the playhead and edits slice through PRORES422 like the proverbial +hot knife through butter+ (assuming you have a fast disk system underneath... my MACBOOKPRO does this in its sleep).
    *When you're ready to make a distribution*, try this from within the FCP timeline/sequence:
    • Export "QUICKTIME MOVIE" as a reference movie (untick "self contained") and call the movie client_ref.mov (etc.). *This step is very important*. And +DON'T EXPORT TO COMPRESSOR+; this is nuts at the moment. Wait for FCS V3, soon I hope.
    • Last step: make the distribution using COMPRESSOR, with the reference clip client_ref.mov as input.
    And as they say... that's it!
    Oh, one more cool thing... while it is transcoding and all your cores are ringing out at 99% each, try setting up a COMPRESSOR DROPLET. Put it on your desktop, drag and drop client_ref.mov onto it... and watch Compressor and Qmaster do their work.
    Yes.. compressor is your best mate when you set him up properly.
    HTH
    w
    sorry for the typos, I'm in a hurry...
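    To put the instance-count advice above into something concrete: the old rule of thumb was one Compressor instance per core, capped by how much RAM you can give each instance. This little sketch is purely illustrative (the per-instance RAM figure is my assumption, not an Apple number):

```python
def suggested_instances(cores: int, ram_gb: int, gb_per_instance: int = 1) -> int:
    """One instance per core, but never more than RAM allows (assumed 1 GB each)."""
    return max(1, min(cores, ram_gb // gb_per_instance))

# A 16-core MACPRO with 12 GB of RAM under this heuristic:
print(suggested_instances(16, 12))
```

    As the reply above says, trial and error on your own machine beats any formula.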

  • Where are Cluster Files Sent Emergency?!

    I tried setting up a render cluster using Compressor 2 with 10 video files totalling over 60 GB. I set the cluster to the shared cluster, not "This Computer," so it started copying the source files. Then I got an error that my system disk was full, so I tried deleting the batch job, but it just froze up... so now I have 145 MB free on my boot drive and all the problems that creates. How do I delete the cluster files? Where are they? They don't seem to appear anywhere... I'm assuming they're invisible, so any insight into how to purge them would be greatly appreciated. Thanks!

    I found the solution. I looked in the Qmaster control panel under Advanced and saw the storage was in /var/spool/qmaster. I went in with the command line and rm'd all the files... space problem solved.
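    Before rm'ing anything under /var/spool/qmaster, a dry run that just measures what is sitting in there is a safer first step. A small sketch (the path comes from the Qmaster control panel, as described above):

```python
import os

def dir_size_bytes(path: str) -> int:
    """Sum the sizes of all regular files under `path` without deleting anything."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

# e.g. print(dir_size_bytes("/var/spool/qmaster") / 1e9, "GB")
```

    If the number matches the space you lost, you've found the culprit before deleting anything.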

  • FCP to Compressor w/cluster problem - FCP tmp files

    So I have no problem submitting single video to my cluster.
    I use "Never copy source to Cluster" from compressor.
    I keep source media on an NFS share on my desktop client.
    I am submitting to a 6 machine cluster of G5s.
    I recently started attempting to tackle submitting FCP projects through Compressor. I have read that there are issues submitting with File->Export->Compressor, and thus I have been attempting to submit right from the sequence file in the browser. The problem that I am having is this:
    <mrk tms="254678829.330" tmt="01/26/2009 11:07:09.330" pid="1736" kind="end" what="service-request" req-id="7A5BF634-78FA-478E-B15D-4AB049CA04BB:1" msg='Preprocessing job request error (exception=Source file "/private/var/folders/Jq/JqzDuvOaH6yDxo3jBmBOVk+TI/TemporaryItems/44D666EA-870F-4B47-92A8-5A.fcp" not accessible on the cluster and remote copy is disabled).'></mrk>
    FCP seems to be creating a tmp file of the project on my local desktop, which is NOT part of the cluster. This tmp file is not being read by the cluster. I have even mounted the /private/var/folders/Jq/JqzDuvOaH6yDxo3jBmBOVk+TI/TemporaryItems/ location on my desktop machine to all the cluster nodes, in addition to already having mounted the root location of the drive that contains the media for the project.
    Any ideas would be more than welcome.

    It is still possible to submit FCP sequences to Compressor; however, the batch will fail when you try to send that batch to a cluster. What QMaster does is "open separate copies" of a program per instance. For example, if you have a cluster where service nodes are broken up into multiple instances each, QMaster will expect to open Final Cut Pro (since that's where the source footage is located) on every single instance.
    If you want to use your cluster for encoding, export a QuickTime file from FCP and submit that file to Compressor.
    Some better explanations:
    http://support.apple.com/kb/TS1099
    http://discussions.apple.com/message.jspa?messageID=8870118#8870118
    http://discussions.apple.com/thread.jspa?messageID=8834921&#8834921

  • Extreme render time in Compressor, ProRes (LT) to H.264. Need help.

    I have a 24-minute project that I'd like to export from FCP7 using Compressor 3.
    Currently, the estimated time to completion is 12 hours and 36 minutes and increasing,
    and (according to Activity Monitor) Compressor is using ~375% of the CPU.
    I am frustrated.
    I shot with a Canon Rebel T2i @ 720p/60.
    All video files were converted to ProRes 422 (LT) before editing.
    Using a 27" iMac w/ 2.93 GHz i7.
    I will be uploading the compressed file to Vimeo, whose recommended Compressor settings I have used, as follows:
    extension: .mov
    Audio encoder: AAC, stereo, 44.100 kHz
    Video Encoder:
    format: QT
    W: 1280
    H:720
    Pixel aspect ratio: square
    crop: none
    padding: none
    Frame rate: (100% of source)
           selected: 60
    frame controls: off
    codec type: H.264
    multi-pass: on, frame reorder: on
    pixel depth: 24
    spatial quality: 75
    min. spatial quality: 75
    key frame interval: 30
    temporal quality: 50
    min. temporal quality: 25
    average data rate: 5.12 mbps
    Every time I try to export with these settings, it takes upwards of 12 hours, and the only time a project has completed, the video file was over 5 hours long and more than 5 GB.
    I would be delighted if anyone could provide insight and direction on how I can more effectively prepare my video for uploading to the internet.
    Thanks,
    andrew
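    As a sanity check on those settings: output size follows almost directly from the average data rate, so at 5.12 Mbps a 24-minute file should land near 920 MB (plus a little for audio). A result over 5 GB and over 5 hours long suggests the duration or frame-rate handling went wrong, not the bitrate. The arithmetic, as a sketch:

```python
def estimated_size_mb(avg_mbps: float, duration_min: float) -> float:
    """Video-only size in MB: megabits/s * seconds / 8 bits-per-byte."""
    return avg_mbps * duration_min * 60 / 8

# 5.12 Mbps for 24 minutes:
print(estimated_size_mb(5.12, 24), "MB")
```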

    Have you tried setting up a cluster using Qmaster in System Preferences and submitting to the cluster in Compressor?  That can speed things up enormously.  Check out the manual and do some searching here on how to set it up.  If you can't figure it out, post back and I'll try to help.
    If you need to do this regularly, I think Matrox makes a hardware solution that speeds up H.264 exports.
    http://www.matrox.com/video/en/home/

  • Cluster Storage on Xsan - Constant Read/Write?

    Wondering if anyone else has seen this. I have a Qmaster cluster for Compressor jobs, and if I set the Cluster Storage location to somewhere on the xsan instead of my local drive, throughout the whole compression job my disks will be reading and writing like mad - 40megabytes/sec of read and write, constantly. This obviously hurts the performance of the compression job. If I set the Cluster Storage to the internal disk (/var/spool), I get better performance and virtually no disk activity until it writes out the file at the end. It's especially bad when using Quicktime Export Components.
    Has anyone seen this? I've opened a ticket with Apple, but I'm wondering if it's just me?

    Is your Compressor preference set to Never Copy? That's what it should be set to. I personally haven't seen this behavior, and I have 3 clusters (3 x 5 nodes) connected to the same SAN.
    It's also possible your SAN volume config has something to do with it. If the transfer size (block size and stripe breadth) is too low, then I could imagine something like this happening.
