Distributed processing FCPX 10.0.3

I have FCP X installed on an iMac and a MacBook.  Can I use them together to speed up transcoding video files at import, do I need to buy Compressor, or is this not possible in FCP X (please don't reply "it used to be in FCP7" or similar, I just want to know what I can do with what I have).
Thanks

Compressor will not import the files into FCP X. You have to manually transcode to a ProRes format in Compressor. FCP X does its analysis during import, not Compressor. Compressor is meant more for exporting after finishing a project in FCP X. That said, I have used Compressor to transcode H.264 camera files to FCP-native ProRes with great results.

Similar Messages

  • Compressor won't do distributed processing with FCPX.

    Running 4.1.3 with FCPX and Send to Compressor has all of the distributed processing groups greyed out in the selection drop down. I can only run compressor on This Computer.
    If I export the file from FCPX and then drop that into Compressor from the finder it works fine. Is there a reason it is not working when directly connected?
    I have deleted and reinstalled both apps but get the same results.

    The thought is that the Send to Compressor menu item should have an option to use Compressor with distributed processing. Why can't it export to a temp location for me and then hand that file to Compressor, instead of requiring me to export it myself, open the Finder, and drop it in manually? There could be a second menu item, "Send to Compressor (Distributed Encoding)", with a user-defined temp location in preferences.
    Not a huge deal to click a few extra things, but would be nice to have things connected a little better.
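    Until something like that exists, the manual round trip can at least be scripted. This is only a sketch assuming Compressor 4.1's command-line interface; the computer group name, paths, and setting file below are placeholders, and you should run `Compressor -help` to confirm the flags on your version.

    ```shell
    #!/bin/sh
    # Sketch only: submit an already-exported FCPX file to Compressor from the
    # shell so that a distributed computer group can be used. "My Cluster", the
    # paths, and the .setting file are placeholders -- adjust for your setup,
    # and check `Compressor -help` for your Compressor version (4.1 assumed).
    COMPRESSOR="/Applications/Compressor.app/Contents/MacOS/Compressor"

    "$COMPRESSOR" \
      -computergroup "My Cluster" \
      -batchname "fcpx export" \
      -jobpath "$HOME/Temp/export.mov" \
      -settingpath "$HOME/Temp/ProRes.setting" \
      -locationpath "$HOME/Movies/export-out.mov"
    ```

    You still have to export from FCPX by hand first, but the Finder step goes away.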
    Side:
    I got a 4k camera and was about get a 5k iMac when I realized by chance that the new 5k model does not support TDM and I cannot cycle through my screenless Macs. Big bummer.

  • Having trouble setting up Distributed Processing / Qmaster

    Hey Guys,
    I was able to (at one point) use Compressor 3 via Final Cut Pro Studio with Qadministrator to allow Distributed processing either via managed clusters or QuickCluster - both worked on both macs (a 2007 2.6Ghz Core 2 Duo iMac and a 2009 3.06Ghz Core 2 Duo iMac).
    I just upgraded to Final Cut Pro X and Compressor 4. As far as I can tell, the Qmaster settings now reside in the Compressor application itself, so I tried setting it up as best as I could using both the QuickCluster and managed cluster options (very similar to when using the older Qmaster) but no dice. I can see my controller's cluster from the secondary iMac, but it always displays submissions as "Not Available" and it does not help with processing. I've tried everything I can think of - I tried using FCS Remover for the older version of Final Cut, I tried looking around via terminal to see if there are any residual files & settings prior to the FCPX install, I've tried following as many instructions as I could find (including apple's official documentation on Compressor 4 on setting up a cluster) but NOTHING seems to work. I'm at a loss!!
    Unfortunately, any documentation or references to issues with Qmaster / distributed processing relate to older versions of Compressor and whatnot.
    Can anyone help, or does anyone have any suggestions? I have no idea how to get this working and I'm having trouble finding anything useful online in my research.
    Perhaps someone is familiar and can help me set it up correctly? I'm very new to Final Cut in general, so I apologize in advance if I'm a bit slow, but I'll try to keep up!
    Thanks,

    In spite of all Apple's hype I'm not sure distributed processing is actually working.
    First I ran into the problem with permissions on the /Users/Shared/Library/Application Support folder.  There's some info about that in this discussion.  You'll need to fix it on each computer you're trying to use as a node.
      https://discussions.apple.com/thread/3139466?start=0&tstart=0
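    For what it's worth, the permissions fix can be checked and applied from Terminal. This is a sketch only - the exact ownership and mode Qmaster expects is discussed in the linked thread - and it demonstrates on a scratch directory; on each node you'd run the chmod against the real path with sudo.

    ```shell
    #!/bin/sh
    # Sketch of the /Users/Shared permissions fix discussed above. The mode
    # chosen (user+group read/write) is an assumption -- see the linked thread.
    fix_perms() {
      dir=$1
      ls -ld "$dir"                   # inspect the current owner and mode first
      chmod -R u+rwX,g+rwX "$dir"     # restore read/write for user and group
    }

    # Real usage on each node (admin rights required):
    #   sudo sh -c 'chmod -R u+rwX,g+rwX "/Users/Shared/Library/Application Support"'
    fix_perms "$(mktemp -d)"          # harmless demo on a temp directory
    ```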
    Then I finally found some decent documentation on Compressor 4 here
      http://help.apple.com/compressor/mac/4.0/en/compressor/usermanual/#chapter=29%26section=1
    However, no matter what I tried, I could not get the compression to spread across more than one computer.  I tried managed clusters, QuickClusters, and even "this computer plus".  I was testing on a Mac Pro, a Mac mini, and a MacBook Air.  I could disable the Mac Pro and the processing would move to the mini; disable the mini and it would move to the MacBook Air.  No matter what I do, though, it won't run on multiple machines.
    I'm also having trouble doing any kind of compositing in FCPX and getting it to compress properly.  I see this error
    7/20/11 11:07:42.438 PM ProMSRendererTool: [23:07:42.438] <<<< VTVideoDecoderSelection >>>> VTSelectAndCreateVideoDecoderInstanceInternal: no video decoder found for 'png '
    and then I end up with a hung job in the Share Monitor that if I try to cancel just sits there forever saying "canceling".
    I'm seeing a bunch of Adobe AIR Encrypted Local Storage errors in the log too.  Don't know what that has to do with FCPX but I'll have a look and try and figure it out.

  • What's the best network for distributed processing?

    Hey guys,
    I have a network of three computers... I have distributed processing all set up on Compressor, and it runs really fast through the ethernet connection I have set up. 
    What's the absolute best way to set up this network?
    The macbooks both have a firewire input, but that's not being used to transfer the data in this case... it's all ethernet. 
    The mac pro has 2 firewire ports... leaving one potentially open for a network. 
    Do you think it would be beneficial to set up a firewire network for these three computers, or is there any reason why a simple ethernet network is preferable?
    Thanks in advance for all of your help...
    Oh!  One more question!
    If I was going to set up a firewire network, how would I do it?
    ---Trav

    Hi MrWizard.. do you have Compressor.app V4.1 (Dec 2013) installed, or an older one? According to your signature you're on the older OS X 10.7.5.
    The idea you post is sound, HOWEVER... it's somewhat fickle and troublesome. I'm assuming the old V3.4 Compressor/Qmaster or later...
    Hardware:
    You can CERTAINLY use the FireWire ports as network NICs using IP over FireWire. It certainly works between TWO Macs, as I've used it years back.
    Assuming you have a FW hub, use the single FW hub in the MACPRO... or scout around for a FW800 switch (if they exist!), or just use FW800-to-400 cables and a cheap legacy FW400 hub. I saw one once in Taipei!
    Simply configure the FireWire INTERFACES on each machine in System Preferences / Network (+ add interface, etc.), then apply.
    Either
    hard-code (manual) an IP address (1.1.1.x) for each, or
    install and deploy OS X SERVER on your MACPRO (IP 1.1.1.1) and set up a simple DNS and domain "mrwizards.cool_network" for that FW subnet (1.1.1.x), then a DHCP server for that subnet so that when the two MACBOOK PROS connect they get an IP address such as 1.1.1.2-253 and are assigned machine names. Set the DNS search on each machine to add 1.1.1.1 (your MACPRO) to resolve machine names. More work, however easier to manage.
    Test your local DNS between the machines, or TRACEROUTE / PING the machines from each other, and verify the IP path over the FW NICs works OK.
    The more certain you are that IP traffic moves around your 1.1.1.x subnet, the better.
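    The manual-IP-plus-ping verification can be sketched in Terminal. The service name "FireWire" and the 1.1.1.x addresses are taken from this post and may differ on your machines - check `networksetup -listallnetworkservices` first.

    ```shell
    #!/bin/sh
    # Sketch only. On each Mac, hard-code an address on the FireWire service,
    # e.g. (service name and addresses are assumptions from the post above):
    #   sudo networksetup -setmanual "FireWire" 1.1.1.2 255.255.255.0

    # Then, from any node, ping every machine once and report what answers.
    # (-W 1 is the Linux-style one-second wait; on OS X use `ping -c 1 -t 1`.)
    check_nodes() {
      for ip in "$@"; do
        if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
          echo "$ip reachable"
        else
          echo "$ip UNREACHABLE"
        fi
      done
    }

    check_nodes 1.1.1.1 1.1.1.2 1.1.1.3
    ```

    Don't go near the Qmaster prefs until every node reports reachable.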
    Once you configure these as a separate NETWORK you will still need some way to provide FireWire; however, this is only the start of your headaches. I'm assuming you are on the old Compressor/Qmaster V3.5 with the legacy FCS 3 suite... true/false?
    The configuration takes patience, trial and error, and some clear understanding of what you are trying to achieve, and of whether it's really worth the effort.
    TRANSCODING over a nodal cluster, AKA DISTRIBUTED SEGMENTED MULTIPASS TRANSCODING!
    On paper you'd imagine the throughput would be worth it. Sadly the converse is true! It often takes LONGER to use a bunch of nodes than to submit on a SINGLE machine using a CLUSTER on that machine, because:
    the elapsed time for the submission relies on the processing speed of the slowest machine and its network connection
    final assembly of the QuickTime pieces for the object takes time
    Hmmmm... often it's quicker just to use a single CLUSTER on the MACPRO!
    You MAY want to consider this option, given the hardware you have mentioned:
    what about this instead of FireWire NICs?
    USE Gbit ETHERNET all around.
    Steps:
    Set the WIFI on the host to use the WIFI-only network for your usual internal activities and Bonjour (.local) etc. Assume low IP traffic - all works OK.
    OPTION: on the MACPRO, dedicate EN0 (the first Ethernet NIC) to the WIFI router. Turn off WIFI on the MACPRO.
    Utilise the ETHERNET (EN0 - built-in Ethernet) on the MACBOOKS (older ones? ... will be very slow, BTW).
    Connect the MACPRO's second ETHERNET and the MACBOOK (Pros?) Ethernets to a cheap 8-port Ethernet hub/switch ($HK150 / €15 - they are all switches these days). This gives you an isolated subnet... no router needed. Plug in the hub/switch and power on!
    Look at step 3 above. QUICK START: just hard-code IP addresses as follows in each SYSTEM PREFS / Network:
    MAC PRO: IP 1.1.1.1
    MACBOOK (Pro) #2: IP address 1.1.1.2
    MACBOOK (Pro) #3: IP address 1.1.1.3
    test the 1.1.1.1-1.1.1.3 network... (ping or traceroute)
    For V3.5 Compressor and Qmaster with ye old FCS, dig into the QMASTER PREFS as follows:
    MACPRO: enable services and ALSO MAKE CONTROLLER. Set NETWORK INTERFACE to the second Ethernet interface (not "all"). Set instances to more than one, but NOT the max! Tick MANAGED SERVICES.
    MACBOOK (Pro)s: enable services only. Set NETWORK INTERFACE to the Ethernet interface (not "all"). Set instances to more than one, but NOT the max! Tick MANAGED SERVICES.
    Launch APPLE QADMINISTRATOR.app and make a managed cluster - easy. (DON'T waste your time with QuickCluster! Just don't.)
    IMPORTANT: make sure the SOURCE and TARGETS of ALL the objects the transcodes will use are AVAILABLE (mounted and accessible) on ALL the machines that you want to participate.
    Set up the batch, make sure SEGMENTED is ticked in the inspector, and submit the job.
    See how you go.
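    Incidentally, the "available on ALL the machines" precondition is scriptable. A minimal per-node sketch, with placeholder paths for your own batch source and destination:

    ```shell
    #!/bin/sh
    # Sketch: run on EVERY machine in the cluster to confirm this node can
    # read the batch source and write the destination. Paths are placeholders.
    check_node() {
      src=$1; dest=$2
      ok=yes
      [ -r "$src" ]  || { echo "cannot read source: $src"; ok=no; }
      [ -w "$dest" ] || { echo "cannot write destination: $dest"; ok=no; }
      if [ "$ok" = yes ]; then
        echo "this node can see the batch files"
      fi
    }

    check_node "/Volumes/Media/source.mov" "/Volumes/Media/out"
    ```

    A node that fails this check is exactly the one that will leave your segments stuck in "waiting".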
    However, if you have Compressor V4.1 (Dec 2013), then just set your network up and it should just work without all the voodoo in the previous steps.
    Post your results for others to see.
    Warwick
    Hong Kong

  • Distributed Processing Error...

    I am trying to run distributed processing through compressor and I keep running into the same error each time. I get "Quicktime Error: -120" and it gives me a different HOST each time. I did a little troubleshooting to narrow down where the problem is and it allows me to use distributed processing on the appropriate cluster with everything EXCEPT H.264.
    I can do MPEG-2, AIFF, Dolby Digital, etc. very quickly and easily with no problems at all. As soon as I try to do a compression to H.264, it gives me the Quicktime Error: -120 message and says it fails.
    Does anyone know what I can do to fix this problem?
    I have Mac OS X (10.4.9), Final Cut Studio 2, with Compressor 3 and QMaster 3. Thank you for your help.

    I am getting the same error, but I am trying to encode on my own machine, not another machine.
    I am encoding with the following settings...
    Name: 16.9 for Web
    Description: Web Compression
    File Extension: mov
    Estimated file size: 439.45 MB/hour of source
    Audio Encoder
    AAC, Stereo (L R), 48.000 kHz
    Video Encoder
    Format: QT
    Width: 640
    Height: 360
    Pixel aspect ratio: Square
    Crop: None
    Padding: None
    Frame rate: (100% of source)
    Frame Controls: Automatically selected: Off
    Codec Type: H.264
    Multi-pass: On, frame reorder: On
    Pixel depth: 24
    Spatial quality: 75
    Min. Spatial quality: 25
    Key frame interval: 24
    Temporal quality: 50
    Min. temporal quality: 25
    Average data rate: 1.024 (Mbps)
    Fast Start: on
    Compressed header
    requires QuickTime 3 Minimum
    Here is a copy of the log that was created.
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <services>
    <service displayName="POWERMACG5" hostName="tim-saffords-computer.local" type="servicecontroller:com.apple.stomp.transcoder" address="tcp://127.0.0.1:49164">
    <logs tms="208896850.332" tmt="08/15/2007 11:54:10.332" pnm="compressord">
    <mrk tms="209929152.114" tmt="08/27/2007 10:39:12.114" pid="296" kind="begin" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:1" msg="Preflighting."></mrk>
    <mrk tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" kind="begin" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <log tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" msg="Cluster storage URL = null"/>
    <log tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" msg="Not subscribing, null cluster storage."/>
    <mrk tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" kind="end" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <mrk tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" kind="begin" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.178" tmt="08/27/2007 10:39:12.178" pid="296" msg="Source file /private/var/tmp/folders.501/TemporaryItems/F2131D24-40B1-461B-BD70-F6.fcp is directly accessible."/>
    <log tms="209929152.179" tmt="08/27/2007 10:39:12.179" pid="296" msg="Source file can be opened."/>
    <log tms="209929152.179" tmt="08/27/2007 10:39:12.179" pid="296" msg="Source file can be read."/>
    <mrk tms="209929152.179" tmt="08/27/2007 10:39:12.179" pid="296" kind="end" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.334" tmt="08/27/2007 10:39:12.334" pid="296" msg="Enabling post processing due to streaming options: 2"/>
    <mrk tms="209929152.350" tmt="08/27/2007 10:39:12.350" pid="296" kind="end" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:1" msg="Preflighting service request end."></mrk>
    <mrk tms="209929152.458" tmt="08/27/2007 10:39:12.458" pid="296" kind="begin" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:3" msg="Preprocessing."></mrk>
    <mrk tms="209929152.471" tmt="08/27/2007 10:39:12.471" pid="296" kind="begin" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <log tms="209929152.472" tmt="08/27/2007 10:39:12.472" pid="296" msg="Cluster storage URL = null"/>
    <log tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" msg="Not subscribing, null cluster storage."/>
    <mrk tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" kind="end" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <mrk tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" kind="begin" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" msg="Source file /private/var/tmp/folders.501/TemporaryItems/F2131D24-40B1-461B-BD70-F6.fcp is directly accessible."/>
    <log tms="209929152.481" tmt="08/27/2007 10:39:12.481" pid="296" msg="Source file can be opened."/>
    <log tms="209929152.494" tmt="08/27/2007 10:39:12.494" pid="296" msg="Source file can be read."/>
    <mrk tms="209929152.495" tmt="08/27/2007 10:39:12.495" pid="296" kind="end" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.507" tmt="08/27/2007 10:39:12.507" pid="296" msg="preProcess for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov"/>
    <log tms="209929152.533" tmt="08/27/2007 10:39:12.533" pid="296" msg="Enabling post processing due to streaming options: 2"/>
    <log tms="209929152.543" tmt="08/27/2007 10:39:12.543" pid="296" msg="done preProcess for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov"/>
    <mrk tms="209929152.544" tmt="08/27/2007 10:39:12.544" pid="296" kind="end" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:3" msg="Preprocessing service request end."></mrk>
    <mrk tms="209929152.632" tmt="08/27/2007 10:39:12.632" pid="296" kind="begin" what="service-request" req-id="BF170344-CFF9-4F61-A97A-502B6D3039FF:1" msg="Processing."></mrk>
    <mrk tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" kind="begin" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <log tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" msg="Cluster storage URL = null"/>
    <log tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" msg="Not subscribing, null cluster storage."/>
    <mrk tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" kind="end" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <mrk tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" kind="begin" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" msg="Source file /private/var/tmp/folders.501/TemporaryItems/F2131D24-40B1-461B-BD70-F6.fcp is directly accessible."/>
    <log tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" msg="Source file can be opened."/>
    <log tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" msg="Source file can be read."/>
    <mrk tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" kind="end" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.648" tmt="08/27/2007 10:39:12.648" pid="296" msg="starting _processRequest for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov-1"/>
    <log tms="209929152.648" tmt="08/27/2007 10:39:12.648" pid="296" msg="Writing transcode segment: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov-1"/>
    <log tms="209929152.769" tmt="08/27/2007 10:39:12.769" pid="296" msg="QuickTiime Transcode, rendering in YUV 8 bit 444"/>
    <log tms="209929153.993" tmt="08/27/2007 10:39:13.993" pid="296" msg="Time for QuickTime transcode: 0.524838 seconds. status = -120"/>
    <log tms="209929154.300" tmt="08/27/2007 10:39:14.300" pid="296" msg="Done _processRequest for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov-1"/>
    <mrk tms="209929154.366" tmt="08/27/2007 10:39:14.366" pid="296" kind="end" what="service-request" req-id="BF170344-CFF9-4F61-A97A-502B6D3039FF:1" msg="Processing service request error: QuickTime Error: -120"></mrk>
    </logs>
    </service>
    </services>

  • Distributed Processing

    I have a design question. I am looking to do some distributed processing on my data set (which is basically an object with six integer fields and a string). However, the processing will involve more than one object at a time (e.g. each request needs to work with all objects with attribute = value), which is potentially a large number. I think that rules out entry processors...unless I can run a filter within an entry processor. There is however also the scalability issue in that I will eventually have about 50 million objects minimum.
    As an invocable, I think I'll run into re-entrancy issues but assuming I can run queries on the backing map, I believe there is still the issue that there is no guarantee that all data will be processed and fail-overs etc. And from the documentation, I get the impression that invocables are meant more for management related tasks than raw data processing. Is there a way to guarantee fail-overs and complete processing? Should I really be doing heavy-duty querying on the backing map?
    I am also unable to partition some of the data off to another cache service as although an object A will have many related objects B, an object B can also have many related A (n:n relationship). Objects are related if they have the same value x for any attribute y.
    Is there any other option for processing this data? Result set sizes can run into millions.
    Please feel free to ask me more details. I have only attempted to give an overview of my problem here.
    Example
    Cache contains 10 million objects of type A, which define many-to-many relationship between a and b
    Cache contains 10 million objects of type A, which define many-to-many relationship between b and c
    Cache contains 10 million objects of type A, which define many-to-many relationship between c and d
    Same type objects are used since nature of relationship is the same in all cases.
    Challenge is to find the set <a,d>, where the <a,b>, <b,c> and <c,d> relationships hold true. Sets a, b, c and d are not necessarily distinct, in that values in d could also be in a, for example.
    Another example is to find set <d, b> where d values in set (x, y, z).
    Thanks in advance for your advice.

    Hi,
    user11218537 wrote:
    Thanks Robert. Very helpful comments.
    The way I have been doing intersections is by using the InFilter, with the appropriate extractors.
    For example, running with the same example you have put down:
    1. Retrieve a.x1...a.x6 from a. Let's call this S1.
    2. Retrieve the union of all b.x(i)-s for b-s where b.x1...b.x6 intersected with S1 is not empty. Let's call this union S2.
    3. Retrieve the union of all c.x(i)-s for c-s where c.x1...c.x6 intersected with S2 is not empty. Let's call this union S3.
    4. Retrieve all d.id-s where d.x1...d.x6 intersected with S3 is not empty.
    The thing is that in the above case, S1 runs into millions, as do S2, S3, S4. I have indexes defined on all six fields so retrieving S1 is not the issue, say with extractor getProperty1()
    In this case, I may have misunderstood what you had to do, but in my example S1 would be the up-to-six different integer values. You could possibly provide a bit more detailed information on what you need to do.
    But say S1 is 1 million strong. Defining then an InFilter from these values to retrieve matching S2 causes problems with delays and outofmemory exceptions. I am currently doing something like InFilter(S1, getProperty3()) for example, saying S2 contains objects where property 3 has any of the values in S1.
    Do you mean I should be doing the intersection manually, rather than cascading down the set of values via the InFilter? To resolve the outofmemory issues (sometimes with heap space, sometimes with "PacketReceiver"), I chop down the sets into 1000-10,000 strong InFilter and run the multiple smaller instances. Takes a long time though. Basically, what is the correct way to get 1 million objects with matching values from say a 50 million strong set? I'm guessing InFilter is inherently parallel in its implementation.
    My point is that you don't get them on the client side. You just work with the property values in a parallel aggregator to join up the values to S2.
    I'll look at the ParallelAwareAggregator to see if I could do the same in parallel. So instead of doing millions, each node is doing 10-100,000 from the set S1, and a proportional number from S2, S3 and S4. I'm guessing the logic will be the same, where InFilters are used to do the intersection.
    And you don't need to return the full set of matching cache entries to the client side at all, only the next partial result (which you still need to union up on the client side in aggregateResults).
    I will look at whether reverse maps are possible, but if I used a reverse map, would that be faster since I do have already have relevant indexes defined. I.e. is cache.get(keys) faster than cache.keySet(new Filter(keys, getProperty1()), if getProperty1() is indexed?
    Thanks.
    The reverse cache, among other things:
    - gives you a single node to go for the set of cache keys having that particular x(i) value for the entire cache not only for that particular node, so it makes it more scalable than having to go for it to all nodes... also, since it is materialized in a cache, it possibly has a lower memory footprint as well (single serialized set, instead of one set per node and objects for each backing map key in the reverse index value)
    - gives you some control on the amount of data you retrieve from the cache at the same time, if you bump into OutOfMemory problems (you don't need to come up with the full matching set of keys in one go).
    You can also access the reverse cache from an aggregator to cut down on the latency for multiple reverse cache lookups if the reverse cache values likely have a large common part...
    Best regards,
    Robert

  • Distributed processing for Photoshop?

    Is there a way to link together several computers to process large documents more quickly?  I have seen programs like Qmaster and Compressor for Final Cut users–is there anything like that for processing in Photoshop?

    And what specifically would you need or hope to achieve? In order to process data on multiple cores/computers you would have to slice it up first, which in itself introduces considerable overhead, and then you would also need ways of verifying integrity, dealing with seam blending of separately processed "tiles", and a million other things. Batch processing multiple documents across a distributed network - perhaps one day. Processing single large documents distributed? Extremely unlikely, as it would require rewriting PS and its algorithms from the ground up.
    Mylenium

  • Does distributed processing work best on a wired network?

    Tried setting up Qmaster (QuickCluster) today with two MBAs over a wireless network; the submitted batch job simply went into waiting mode, and I noticed a lot of network traffic (at a slow rate) to my client system. I expected wireless to be slower than wired, but I also expected some sort of asynchronous behavior from Compressor/clustering to start the encoding/compression process while data was being transferred... but the CPU idled on both nodes. Anyone?

    Location Services (including Maps and "Find My Mac") relies on a database of known Wi-Fi access points. You don't necessarily have to be connected to a wireless network, but Wi-Fi does have to be turned on and you have to be within range of one of those access points.

  • Distributed Processing in LR

    Hi there,
    are there any plans to offer a plugin to use more than one computer for larger batch-processing jobs on images, e.g. from raw to TIFF?
    In my office there is a total of four computers, and even if these are not the newest machines, together they are probably faster than the newest hexacore computers.
    These computers are connected via Gbit Ethernet, which gives transfer speeds of about 100 MByte/s, so there shouldn't be any performance problems due to a slow connection.
    This could be done with a main server running Lightroom or Bridge, and one or more host computers running a host program controlling the Camera Raw plugin. Each host computer gets a raw file transferred to it, processes it, and transfers the TIFF file back to the main server.
    As far as I can guess, it should be fairly easy to program such software, and it could probably be sold as a plugin. For most hobbyists and small businesses, it would be a no-brainer to buy such a plugin instead of upgrading their computers, because the price for a comparable performance gain is much less.
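    As a toy illustration of the proposed scheme (host names are hypothetical, and the actual transfer/convert commands are only hinted at in comments), the main server's job distribution could be a simple round-robin deal-out:

    ```shell
    #!/bin/sh
    # Toy sketch of the proposed plugin's job distribution: deal raw files out
    # to worker machines round-robin. Host names are hypothetical; the real
    # transfer and conversion steps are left as comments.
    round_robin() {
      h1=$1; h2=$2; h3=$3; shift 3
      i=0
      for f in "$@"; do
        case $(( i % 3 )) in
          0) host=$h1 ;;
          1) host=$h2 ;;
          2) host=$h3 ;;
        esac
        echo "$f -> $host"
        # e.g. scp "$f" "$host:work/" && ssh "$host" "raw_to_tiff work/$f"
        i=$(( i + 1 ))
      done
    }

    round_robin mac1.local mac2.local mac3.local img1.raw img2.raw img3.raw img4.raw
    ```

    The hard part, as the reply below notes, isn't the file shuffling - it's keeping one catalog consistent.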

    This is a user-to-user forum, and - although occasionally an Adobe staff member will post here - we are just Lr users like you. As users we don't have any knowledge of Adobe's plans for the future.
    Personally, I don't think it will be easy to accomplish what you want - if it is possible at all. Also, I don't think a plug-in could do it.
    Lr saves everything you do in its database (= catalog). Currently this database requires that only one instance of Lr has it opened and works with it.
    What you want would require that several instances of Lr work with the same database. This would be very difficult because you would have to prevent conflicting data: you have to prevent computer A from editing an image while computer B edits the same image.
    Since Lr does not open only single images but you can select multiple images - even across different folders - it would be very difficult, if not impossible to prevent conflicting data.
    Or, if you would create a "first come first served" prevention, users would often find that images they want to work with are blocked.
    The way Lr is designed, it doesn't seem practical even if it is technically possible.
    That is probably one of the reasons that the Lr catalog cannot be stored on a network.
    But: there is a way that more than one person can work on the images. It would require a meticulous workflow that has to be adhered to strictly by all members.
    You would have your photos on a network drive / server. Each computer would create its own Lr catalog - let's call them sub-catalogs. At the end of the day all the sub-catalogs have to be combined into one (1) master catalog: all computers send their sub-catalog over to the one computer storing the master catalog, and then in the master catalog you do >File >Import from Another Catalog to import one sub-catalog after the other.

  • Can anyone make distributed processing work?

    I have two high-spec iMacs, connected via Thunderbolt and ethernet. I go through the Compressor set up for group transcoding, tried many different settings combinations - it NEVER uses the second machine in the transcode.
    Episode Pro on same machines does it no problem. Has anyone got this working in Compressor?

    I haven't tried with the 4.1 versions because the only machine I could test it with is running an earlier OS. It's purposely frozen in that state, and it will be some time before it's upgraded. I have considered installing 10.9 on an external drive with FCP X and Compressor and booting from that. If I have the time I'll try it and post the results.
    In the meantime, I hope someone else will  have some theories on why your processing setup isn't working and chime in.
    Finally, just to add I think a lot of folks have taken the simpler route and opted for multiple instances to speed up their work.
    Russ

  • Distributed processing using a PC

    Does anyone know of a way I could use my PC as a node of sorts for powering plug-ins on my PowerBook with Logic? I am looking for a method that does not require MIDI syncing or having two soundcards.
    Any ideas welcome, thanks!

    I found a little app which does almost the same thing. It's called Wormhole, and it works very well. There is considerable latency in my setup, but it's great to be able to use PC plug-ins in a Logic session.
    Would it be preferable to use Ethernet or FireWire to get maximum network bandwidth?

  • Distributed Processing:  Licenses for All Machines?

    I'm only just now looking at Compressor 2, for my first video project. There's a chance that using Compressor's compute-cluster feature may be worthwhile.
    However, in a quick scan of the docs and of this discussion group, I haven't been able to locate the answer to the most basic question before I'd even think about setting up a cluster: does each machine in the cluster need its own separate Compressor 2 license?
    If I need to buy more licenses, I probably won't bother trying to add my 1.5GHz PowerBook to the already more powerful dual-2GHz critter I have without a cluster. But if one license works, perhaps it'll be worthwhile.

    Check out the OS X VNC FAQ: http://www.redstonesoftware.com/osxvnc/OSXvnc.html
    When MacOS X starts up on older systems it will disable certain video functions when no monitor is plugged in. We are looking at ways to solve this in OSXvnc, but for the time being you can purchase a dongle that makes the MacOS X machine think that a monitor is plugged in.
So I guess it will work on more recent systems, but you will have to try it out. The same is true of any OS X VNC software, so this should apply here as well.

  • Distributed Audio Processing & Massive G5 Issues

    Hi All,
I've recently purchased Logic Studio to run on my dual 2GHz G5 with 1GB of RAM. Logic Studio doesn't seem to like the machine, to say the very least. I can have 6 audio instruments running, 4 audio tracks and a few plug-ins, say 10 in total, and it goes into total shutdown. The CPU meter's off the hook, I'm getting some sort of digital crackling/distortion and loads of "cannot process audio in time" errors.
Could the issue be that there's only 1GB of RAM? If that's the case, I'll get 4GB tomorrow. There's still 95GB left on the system drive.
I've also just purchased a MacBook Pro and I'm trying to set up distributed processing to see if that makes a difference. I've connected both machines via Ethernet, loaded Logic onto both machines, launched the nodes and configured the rack view settings to toggle DAP on/off.
When I turn DAP on, I get a delay on the tracks that the other machine is meant to be handling. Has anyone ever had this issue? It's pointless using DAP if there's going to be a noticeable delay.
    ANY help with this is greatly appreciated.
    Thanks

I'm running 512 at the moment. Any suggestions as to what I should be running?
512 should be fine - you should be able to get down to 128, really. Is there any improvement at 1024? Can you play back more reliably? At 512 I have nary a glitch.
How much RAM are you using?
4GB. More than 1GB will help - but personally I think you have a more fundamental problem. I think you need to sort out the G5 performance issues before you look at DAP. Your system is definitely underperforming.
What interface are you using? Have you tried it with internal audio? Any change?
Yes, the I/O safety buffer is off. Should it be on?
Leave it off.
    Regards
    Stephen
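To put rough numbers on the buffer sizes being discussed: here is a back-of-the-envelope sketch of the delay each I/O buffer setting adds, assuming a 44.1 kHz sample rate (interface drivers and plug-ins add latency on top of this, so treat it as a lower bound):

```python
# Approximate latency contributed by the I/O buffer alone, assuming a
# 44.1 kHz sample rate. Lower buffers mean less delay but more CPU strain.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44_100) -> float:
    """Milliseconds of delay one audio buffer of the given size introduces."""
    return buffer_samples / sample_rate_hz * 1000.0

for size in (128, 256, 512, 1024):
    print(f"{size:>5} samples -> {buffer_latency_ms(size):4.1f} ms")
# 512 samples works out to roughly 11.6 ms; 1024 to roughly 23.2 ms.
```

This is why 512 is usually unnoticeable for playback, while dropping to 128 only matters when monitoring live input through the computer.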

  • Very slow processing in Compressor

    Compressor is taking a huge amount of time to output my 18-minute video.
    As a newbie to Compressor I took the easy, quick-start way and "Sent" the file to Compressor from Final Cut and chose HD720p video sharing for YouTube as my batch setting.
    That was an hour ago and it's not even halfway through.
Using an iMac with OS X 10.7.5 and 16GB of RAM. The original video is in HD and seems to work normally in Final Cut. I exported the file directly from FC as a .mov file; it was about 3GB, but took only about 25 minutes to export.
Is there a better alternative? Does Adobe Premiere have a more efficient workflow? I find Compressor extremely kludgy and un-Mac-like.
    Thanks

    stufromhalifax wrote:
    That was an hour ago and it's not even halfway through.
The good news is the progress bar is not linear, and halfway is usually more like two thirds.
I don't think everyone agrees, but I and others have found that the fastest and most reliable workflow - whatever the version of FCP and Compressor - is to export a self-contained QT movie (Share > Master File in FCP X), bring that into Compressor, and apply one of the video sharing presets or a custom one of your own. Then upload using the website's uploader. And for FCP X, my experience is that even more time is saved overall if I don't render anything that I don't have to, so I keep background rendering off - or set to a really high time interval.
My 3-year-old iMac without hyperthreading will take slightly more than 2 hours to encode an 18-minute QT movie to H.264. I would expect it to take about 80-90 minutes if I used my QuickCluster (which I would), and well under an hour with one-pass encoding.
As mentioned on a number of similar threads, processing can also be sped up two ways: by distributed processing (including QuickClusters that take advantage of the multiple cores in your iMac) and/or by changing the encoding to single-pass.
I have used older versions of Premiere quite a bit and prefer FCP, but I've not used CS6, which certainly has a lot of fans. They do offer a 30-day trial.
    Good luck.
    Russ
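For what it's worth, the "export a master, then encode" workflow above can also be scripted, since Compressor ships a command-line tool inside its app bundle. The sketch below is only illustrative: the paths, the setting file, and the cluster name "MyCluster" are placeholders, and the exact flag names vary between Compressor versions, so check the output of the binary's -help on your own machine before relying on it.

```python
# Sketch: submitting a batch to Compressor's command-line tool so the encode
# runs on a named QuickCluster rather than "This Computer". All paths and the
# cluster name are placeholders - adjust them for your setup.

import os
import subprocess

COMPRESSOR = "/Applications/Compressor.app/Contents/MacOS/Compressor"

cmd = [
    COMPRESSOR,
    "-clustername", "MyCluster",            # the QuickCluster to target
    "-batchname", "YouTube 720p encode",
    "-jobpath", os.path.expanduser("~/Movies/master.mov"),  # self-contained export
    "-settingpath", os.path.expanduser(
        "~/Library/Application Support/Compressor/Settings/YouTube720p.setting"),
    "-locationpath", os.path.expanduser("~/Movies/master-youtube.mp4"),
]

if os.path.exists(COMPRESSOR):
    subprocess.run(cmd, check=True)         # hand the batch off to Compressor
else:
    # Not on a machine with Compressor installed: just show the command.
    print("Would run:", " ".join(cmd))
```

Once submitted this way, progress shows up in Share Monitor just as it does for batches started from the GUI.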

  • Authorizations setting for running the process chain

Hi,
I am planning to run a process chain for loading data into an ODS, but I don't have authorization for it.
What authorizations do I need to run the process chain in my system, and how can I assign all those authorizations to my user ID? I have all authorization rights.
Please let me know.
kumar

    Hi,
    Authorizations for Process Chains
    Use
    You use authorization checks in process chain maintenance to lock the process chain, and the processes of the chain, against actions by unauthorized users.
- You control whether a user is allowed to perform specific activities.
- You control whether a user is allowed to schedule the processes in a chain.
The authorization check for the processes in a chain takes place upon scheduling or during synchronous execution, and is performed in display mode. The check is performed for each user who schedules the chain; it is not performed for the user who executes the chain. The user who executes the chain is usually the BI background user, which automatically has the required authorizations for executing all BI process types. In attribute maintenance for the process chain, you can determine which user is to execute the process chain.
See also: Display/Maintenance of Process Chain Attributes → Execution User.
    Features
    For the administration processes that are bundled in a process chain, you require authorization for authorization object S_RS_ADMWB.
    To work with process chains, you require authorization for authorization object S_RS_PC. You use this authorization object to determine whether process chains can be displayed, changed or executed, and whether logs can be deleted. You can use the name of the process chain as the basis for the restriction, or restrict authorizations to chains using the application components to which they are assigned.
    Display/Maintain Process Chain Attributes
    Use
    You can display technical attributes, display or create documentation for a process chain, and determine the response of process chains during execution.
    Features
    You can display or maintain the following attributes for a process chain:
Process Chain → Attribute → ...
    Information
    Description
(Rename)
    You can change the name of the process chain.
    Display Components
Display components are the evaluation criteria in process chain maintenance. Assigning process chains to display components makes it easier to access the chain you want.
    To create a new display component, choose Assign Display Components in the input help window and assign a technical name and description for the display component in the Display Grouping dialog box that appears.
    Documents
    You can create and display documents for a process chain.
    For more information, see Documents.
    Last Changed By
    Displays the technical attributes of the process chain:
- When it was last changed and by whom
- When it was last activated and by whom
- Object directory entry
    Evaluation of Process Status
If you set this indicator, all incorrect processes in this chain, and the overall status of the run, are evaluated as successful, provided you have scheduled a successor process "upon error" or "always".
    The indicator is relevant when using metachains: Errors in the processes of the subchains can be evaluated as “unimportant” for the metachain run. The subchain is evaluated as successful, despite errors in such processes of the subchain. If, in the metachain, the successor of the subchain is scheduled upon success, the metachain run continues despite errors in “unimportant” processes of the subchain.
    Mailing and alerting are not affected by this indicator and are still triggered for incorrect processes if they have an upon error successor.
    Polling Indicator
    With this indicator you can control the response of the main process for distributed processes. Distributed processes, such as the load process, are characterized as having different work processes involved in specific tasks.
    With the polling indicator you determine whether the main process needs to be kept until the actual process has ended.
    By selecting the indicator:
- A high level of process security is guaranteed, and
- External scheduling tools can be provided with the status of the distributed processes.
However, the system uses more resources, and a background process is required.
    Monitoring
    With the indicator in the dialog box Remove Chain from Automatic Monitoring?, you can specify that a process chain be removed from the automatic monitoring using CCMS.
    By default CCMS switches on the automatic process chain monitoring.
    For more information about the CCMS context Process Chains, see the section BW Monitor in CCMS.
    Alerting
    You can send alerts using alert management when errors occur in a process chain.
    For more information, see Send Alerts for Process Chains.
    Background Server
    You can specify here on which server or server group all of the jobs of a chain are scheduled. If you do not make an entry, the background management distributes the jobs between the available servers.
    Processing Client
    If you use process chains in a client-dependent application, you can determine here in which client the chain is to be used. You can only display, edit, schedule or execute the chain in this client.
    If you do not maintain this attribute, you can display, edit, schedule or execute the process chain in all clients.
    Process variants of type General Services that are contained in a process chain with this attribute set will only be displayed in the specified client.
This attribute is transported. You can change it by specifying an import client during import. For import post-processing, you must create a destination to the client set here in the target system (transaction RSTPRFC). The chain is activated after import and scheduled, if necessary, in this client.
    Execution User
    In the standard setting a BI background user executes the process chain (BWREMOTE).
    You can change the default setting so that you can see the user that executes the process chain and therefore the processes, in the Job Overview. You can select the current dialog user who schedules the process chain job, or specify a different user.
    The setting is transported.
    The BI background user has all the necessary authorizations to execute all BI process types. Other users must assign themselves these authorizations so that authorization errors do not occur during processing.
    Job Priority
    You use this attribute to set the job priority for all of the jobs in a process chain.
    Hareesh
