Final Cut Server Error Result Too Large

In Final Cut Server, when trying to check out a project or add files to a project, I get the following error:
Error: could not read block 5469422 of relation 1663/16385/16653: Result too large
I have seen similar posts but no resolution.
Final Cut Server on an Xserve running Mac OS X 10.5.8
Xsan 2.2
The error is present on all 12 client machines

Does this occur only on specific projects or on all projects?  If it only happens on a specific project, it's likely an invalid asset, in which case deleting and re-cataloging it would be the simplest solution.  If this happens with any asset you try to check out, there may be a larger database issue and you may consider restoring from a backup.
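The "could not read block ... of relation" text is a PostgreSQL error, and Final Cut Server keeps its catalog in a PostgreSQL database, so one hedged way to confirm a database-wide problem before restoring is to force a full read of every table. A sketch - the role and database name here are placeholders (list the real one with psql -l):
# pg_dump reads every table block sequentially, so a damaged block raises
# the same "could not read block" error without writing anything.
sudo -u postgres pg_dump fcsvr > /dev/null && echo "catalog readable"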

Similar Messages

  • "ERROR: Could not read block 64439 of relation 1663/16385/16658: Result too large"

    Hi,
    I've already archived a lot of assets in my Final Cut Server, but since a week ago a message appears when I click on an asset and choose "Archive". The pop-up says: "ERROR: Could not read block 64439 of relation 1663/16385/16658: Result too large"
    Does anyone know what the problem is and/or have any suggestions to solve it? I can't archive anymore since the first appearance of this message.
    What happened before?
    -> I archived some assets via FCS and then transferred the original media to an offline storage medium. That system worked fine for the last months and my normal server stays quite small in storage use. But now, after I added some more new productions and let FCS generate the assets, it doesn't work anymore...
    It's not about the file size - I tried even the smallest file I found in some productions.
    It's not a particular production - I tried several different productions.
    It's not about the storage - there's a lot of storage left on my server.
    So, if someone knows how to get this server back on the road - let me know.
    THNX!
    Chris

    I would really appreciate some advice re: recent FCS search errors.
    We're having similar issues to C.P.CGN's 2-year-old post; it's only developed for us in the last few weeks.
    Our FCS machine is running Mac OS X 10.6.8 and Final Cut Server 1.5.2 with the latest OS 10.6.x updates.
    FCS is still usable for 6 of 8 offliners, but on some machines, searching assets presents "ERROR: could not read block 74012 of relation 1663/16385/16576: Input/output error."
    Assuming the OS and/or data drives on the FCS machine were failing, I cloned the database drive today and will clone the OS drive tomorrow night, but after searching the forums and seeing similar error messages I'm not so sure.
    FCS has been running fine for the last 4 years, minus the recent Java security issues.
    Thanks in advance, any ideas appreciated!
    cheers,
    Aaron Mooney,
    Post Production Supervisor.
    Electric Playground Daily, Reviews On The Run Daily, Greedy Docs.
    epn.tv
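    Since this second error is an Input/output error rather than "Result too large", a hedged first check before cloning drives is whether the disk itself is failing. A sketch - the disk identifier and volume name are placeholders (see diskutil list):
    # Check the SMART status of the data disk, then verify the volume.
    diskutil info disk1 | grep -i smart
    diskutil verifyVolume /Volumes/FCSData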

  • Setting kern.ipc.maxsockbuf above 4MB errors with "Result too large"

    Hi
    I'm running Snow Leopard 10.6.4 on a machine that has 16GB of memory.
    When I attempt to increase kern.ipc.maxsockbuf above 4MB I get an error that states "Result too large". I don't have this problem on my 10.5 machines. Anyone know where this limitation comes from? Is it some hard limit in Snow Leopard?
    Regards - Tim
    # setting to 4MB works fine.
    sudo sysctl -w kern.ipc.maxsockbuf=4194304
    kern.ipc.maxsockbuf: 500000 -> 4194304
    # setting to 1 above 4MB starts the error.
    sudo sysctl -w kern.ipc.maxsockbuf=4194305
    kern.ipc.maxsockbuf: 4194304
    sysctl: kern.ipc.maxsockbuf: Result too large
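    A hedged way to confirm where the ceiling sits is to probe values around it; the 4MB cap appears to be a compile-time limit in the kernel (the sysctl is backed by sb_max in xnu), so anything above it should be rejected. A sketch:
    # Probe ascending buffer sizes; on this Snow Leopard build everything
    # above 4194304 should fail with "Result too large" (errno ERANGE).
    for size in 1048576 2097152 4194304 4194305 8388608; do
      sudo sysctl -w kern.ipc.maxsockbuf=$size >/dev/null 2>&1 \
        && echo "accepted: $size" || echo "rejected: $size"
    done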

    Firstly, there's no such thing as Apache 9.3; there's Apache 1 (and subversions) and Apache 2 (and subversions). Your error message -
    Oracle-HTTP-Server/1.3.28
    - shows you're using Apache 1.3.28.
    Secondly, I'm confused by your comment -
    I do not have Apache 9.3 or higher but I think oracle should offer this in its companion CD
    Oracle does offer the Apache server; if you're saying you didn't get it from Oracle, then where did your Apache server come from?
    Thirdly, I notice from your config file -
    ErrorLog "|E:\oracle\product\10.1.0\Companion\Apache\Apache\bin\rotatelogs logs/error_log 43200"
    - that you're piping the logs through rotatelogs; are you sure the logfiles haven't just been renamed?

  • "result too large" error when accessing files

    Hi,
    I'm attempting to make a backup copy of one of my folders (using tar from the shell). For several files, I got the error message "Read error at byte 0, reading 1224 bytes: Result too large". It seems those files are unreadable; whatever application attempts to access them fails with the same error.
    The files reside on a volume that I created a day ago. It's a non-journaled HFS+ volume on an external hard drive. They are part of an Aperture Vault that I wanted to archive and store offsite. Aperture was closed (not running) when I was creating the archive.
    This means two things. The onsite backup of my photos is broken, obviously (some of the files are unreadable). My offsite backup is broken, since it doesn't contain those files.
    I've searched the net and found a couple of threads on some mailing lists describing the same problem, but no answer. A couple of folks on those mailing lists suggested it might point to a full disk. However, in my case, there is some 450GB of free space on the volume I was getting read errors on (the destination volume had about 200GB free, and the system drive had about 50GB free, so there was plenty of space all around the system too).
    File system corruption?
      Mac OS X (10.4.9)  

    Here's the tar command with the output:
    $ tar cf /Volumes/WINNIPEG\;TOPORKO/MacBackups/2007-05-27/aperture.tar Alex\ -\ External\ HD.apvault
    tar: Alex - External HD.apvault/Library/2003.approject/2007-03-24 @ 08\:17\:52 PM - 1.apimportgroup/IMG0187/Thumbnails/IMG0187.jpg: Read error at byte 0, reading 3840 bytes: Result too large
    tar: Alex - External HD.apvault/Library/2006.approject/2007-03-24 @ 08\:05\:07 PM - 1.apimportgroup/IMG2088/IMG2088.jpg.apfile: Read error at byte 0, reading 1224 bytes: Result too large
    tar: Alex - External HD.apvault/Library/Jasper and Banff 2006.approject/2007-03-25 @ 09\:41\:41 PM - 1.apimportgroup/IMG1836/IMG1836.jpg.apfile: Read error at byte 0, reading 1224 bytes: Result too large
    tar: Alex - External HD.apvault/Library/Old Scanned.approject/2007-03-24 @ 12\:42\:55 AM - 1.apimportgroup/Image04_05 (1)/Info.apmaster: Read error at byte 0, reading 503 bytes: Result too large
    tar: Alex - External HD.apvault/Library/Old Scanned.approject/2007-03-24 @ 12\:42\:55 AM - 1.apimportgroup/Image16_02/Info.apmaster: Read error at byte 0, reading 499 bytes: Result too large
    tar: Alex - External HD.apvault/Library/Vacation Croatia 2006.approject/2007-03-25 @ 09\:47\:17 PM - 1.apimportgroup/IMG0490/IMG0490.jpg.apfile: Read error at byte 0, reading 1224 bytes: Result too large
    tar: Error exit delayed from previous errors
    Here's the "ls -l" output for one of the files in question:
    $ ls -l IMG_0187.jpg
    -rw-r--r-- 1 dijana dijana 3840 Mar 24 23:27 IMG_0187.jpg
    Accessing that file (or any other from the above list) gives the same or a similar error. The wording differs from command to command, but it's basically the same thing (read error, or result too large, or both combined). For example:
    $ cp IMG_0187.jpg ~
    cp: IMG_0187.jpg: Result too large
    The console log doesn't show any related errors.
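    Given the suspicion of file system corruption, a hedged first step is to verify the HFS+ structures on the external volume; the volume name matches the tar output above:
    # Read-only check of the volume's catalog and extents structures.
    diskutil verifyVolume "/Volumes/WINNIPEG;TOPORKO"
    # If errors are reported, repair:
    diskutil repairVolume "/Volumes/WINNIPEG;TOPORKO"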

  • After installing the Final Cut Server client on OS X 10.6.8, error: Apple QuickTime or the QuickTime Java component is not installed.

    After installing the Final Cut Server client on OS X 10.6.8 I get the error: Apple QuickTime or the QuickTime Java component is not installed.
    I know this error from Windows machines but cannot find a solution for OS X.

    I have fixed this by installing the latest combo update.

  • Error when executing IDB: "bind(): Result too large"

    I'm trying to use USB debugging in the iPad, as per this guide:
    http://help.adobe.com/en_US/air/build/WS901d38e593cd1bac7b2281cc12cd6bced97-8000.html
    But when I try to execute "idb.exe -forward 7936 7936 1" (1 being my iPad handle), I get the error message:
    "bind(): Result too large"
    What's happening?
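    No fix appears in the thread, but one generic, hedged first check is whether something else already holds local port 7936 before idb tries to bind it:
    # List any socket already using the forward port (on the Windows host
    # where idb.exe runs, the equivalent is: netstat -an | findstr 7936).
    netstat -an | grep 7936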


  • QT Error -8961 generated when exporting from Final Cut Server

    Not sure about this one.  We have a user who is trying to export a clip through QuickTime via Final Cut Server, but they are getting error -8961.
    Has anyone heard of this error before?

    Yes, go into the Admin window in the Java client (as opposed to the System Preferences pane) and go to Transcode Settings. Find the setting you wish to be able to export with, double-click to open it, and add the "Export" device to its list of destination devices. Save and close. You may need to log out and back into FCSvr to see the changes reflected in your Export window.

  • Final Cut Server Launch Error

    Hi Folks,
    I wanted to run this by the community: I cannot seem to get Final Cut Server running on the client side. Every time I attempt to access the media management (http://mydomain/finalcutserver), Java begins to load the software, but subsequently fails with the error message "Unable to launch application". Has anyone encountered this before? Is this a Final Cut software error? If so, do you have any suggestions as to how I may go about fixing it?
    Basically what happens is that Java throws an exception (see below):
    Exception:
    CouldNotLoadArgumentException[ Could not load file/URL specified: C:\Documents and Settings\olejnikp\Local Settings\Temporary Internet Files\Content.IE5\6Z9XEFXU\Final Cut Server[1].jnlp]
    at com.sun.javaws.Main.launchApp(Unknown Source)
    at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
    at com.sun.javaws.Main$1.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Wrapped Exception:
    java.io.FileNotFoundException: C:\Documents and Settings\olejnikp\Local Settings\Temporary Internet Files\Content.IE5\6Z9XEFXU\Final Cut Server[1].jnlp (The system cannot find the file specified)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.io.FileInputStream.<init>(Unknown Source)
    at com.sun.javaws.jnl.LaunchDescFactory.buildDescriptor(Unknown Source)
    at com.sun.javaws.Main.launchApp(Unknown Source)
    at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
    at com.sun.javaws.Main$1.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

    I had a similar error with one of my Mac clients. I found the following info that cleared it up for OS X users.
    Here's how to reliably reinstall the Java Client on a Mac.
    Delete the Java Applet, usually found on the Desktop. (Sometimes an end-user has placed this in their Applications folder.)
    Delete the following folders (directories):
    ~/Library/Caches/Java/
    ~/Library/Caches/com.apple.finalcutserver/
    Delete the following file:
    ~/Library/Preferences/com.apple.finalcutserver.plist
    (Note: ~ means the user's home folder, i.e. /Users/<username>)
    Now, go back to the URL of your Final Cut Server box:
    http://<ip or domain here>/finalcutserver
    ...and the reinstall should go well.
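    The same cleanup written as shell commands (a sketch; the paths are exactly the ones listed above):
    # Remove the cached Java Web Start artifacts and the FCSvr preferences,
    # then revisit http://<ip or domain here>/finalcutserver to reinstall.
    rm -rf ~/Library/Caches/Java/
    rm -rf ~/Library/Caches/com.apple.finalcutserver/
    rm -f ~/Library/Preferences/com.apple.finalcutserver.plist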

  • Final Cut Server Client Error won't start Application

    Hello, I am new to Final Cut Server 1.1. I have set up the server on an Intel-based Mac Pro with 8GB of memory and 2 x 3.2GHz quad-core processors. The server is running fine and I am able to connect to it from another workstation (Windows and Mac) through the web browser, but when the application starts, this is the exception I get on the Mac client:
    java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.sun.javaws.Launcher.executeApplication(Launcher.java:1812)
    at com.sun.javaws.Launcher.executeMainClass(Launcher.java:1750)
    at com.sun.javaws.Launcher.doLaunchApp(Launcher.java:1532)
    at com.sun.javaws.Launcher.run(Launcher.java:135)
    at java.lang.Thread.run(Thread.java:637)
    Caused by: java.lang.UnsatisfiedLinkError: quicktime.QTSession.Gestalt(I[I)S
    at quicktime.QTSession.Gestalt(Native Method)
    at quicktime.QTSession.gestalt(QTSession.java:935)
    at quicktime.QTSession.open(QTSession.java:641)
    at quicktime.QTSession.open(QTSession.java:608)
    at com.apple.FinalCutServer.javaui.quicktime.QTInit.open(QTInit.java:37)
    at com.apple.FinalCutServer.javaui.FinalCutServer.main(FinalCutServer.java:263)
    ... 9 more
    I have checked that all Apple updates have been applied but can't get around this error. Does anyone have any suggestions? We are very eager to get this server running.
    Thank you!

    Thank You
    That solution worked like a charm on the Mac workstations, but it still does not work on the Windows XP workstations. Is there anything that addresses that issue?
    Newbie
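    For anyone hitting the same trace: the UnsatisfiedLinkError on quicktime.QTSession.Gestalt suggests the QuickTime for Java native library isn't loadable. A hedged check - the path below is an assumption typical of QTJava installs of that era:
    # If this native half of QuickTime for Java is missing, reinstalling
    # QuickTime usually restores it.
    ls -l /System/Library/Java/Extensions/libQTJNative.jnilib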

  • Storage issues for Proxies using Final Cut Server

    Hi there,
    we have a fairly high amount of material that is just being put into Final Cut Server to be archived again.
    I don't mind the Xserve being busy creating these proxy files, but they tie up too much space!
    (Maths: 500 GB / 40 h of DVCAM footage results in more than 100 GB of proxy files.)
    We have those 40 h running through more than once a month, plus a whole LOT of material from the last 2 years - and our Xsan is only 7 TB in size.
    Although we could theoretically buy another fiber RAID, this solution is not really future-proof - it just pushes the time when we have to buy the next one a couple of months forward. On top of that, I cannot afford to have expensive, fast Fibre Channel storage used for proxies of files that are long archived and have only very limited use (and IF we need them, stick in the archive device and done).
    Any ideas how to get rid of proxy files from archived assets?
    I don't really want to take pen and paper and delete the proxies of the files by hand from the bundle... don't think FCSvr will like this either.
    thanks for any advice
    tobi

    So I'm not sure how your math adds up to 100GB of proxy files.
    Are you creating Versions and/or Edit Proxies of everything?
    I ask because using the default Clip Proxy setting gives you file sizes similar to the ones below. These numbers aren't exact, because the default Transcode setting uses Variable Bit Rate (VBR) encoding for both video and audio, but assuming you had a relatively constant 800kbps stream, here's how large your Proxies.bundle file should be:
    800kbps * 30secs = 2.4MB
    800kbps * 60secs = 4.8MB
    4.8MB per minute * 60min = 288MB per hour
    288MB per hour * 40h = 11.5GB
    Also note that deleting an asset from FCSvr doesn't delete the proxy files, so you could have a lot of proxies left over from a few historical scans.
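    The reply's back-of-envelope arithmetic as a quick shell check (a sketch; it keeps the rough convention above of ~10 stored bits per payload byte, which is what makes 800kbps come out at 2.4MB per 30 seconds):
    # Estimate Proxies.bundle growth for a given bitrate and footage length.
    kbps=800; hours=40
    mb_per_hour=$(( kbps * 3600 / 10 / 1000 ))   # = 288 MB per hour
    total_gb=$(( mb_per_hour * hours / 1000 ))   # = ~11 GB for 40 hours
    echo "${mb_per_hour} MB/hour, ~${total_gb} GB for ${hours} h"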

  • Long HDV and XDCAM clips through Final Cut Server

    Hi,
    I know that I could post this in the Compressor and/or QuickTime and/or Leopard/Tiger forums, but as this problem is keeping my Final Cut Server from working well, I think it makes sense to post here.
    To summarize: HDV and XDCAM clips (all in the original codec, only XDCAM transferred to .mov in XDCAM Transfer) with a file size bigger than exactly 4GB always FAIL in "Final Cut Server" / "Compressor alone on ANY machine installed from scratch" / "Anything that uses QuickTime", with source and destination local or networked. DV or DV50 can get bigger than that; I've never seen this error with the DV codecs.
    I had seen this problem before I bought Final Cut Server, when I was transcoding our interviews for transcription. We always worked with BETA SP and captured in DV or DV50. We were starting to go to HDV and now to XDCAM, and this error has started to appear.
    I have a "proxy" made a year ago from an HDV clip whose original is 8GB. I tried this file again and, as I expected, it didn't work - on ANY computer, on any installation with only the latest updates. I remembered that I was using Tiger at that time, and QuickTime/Tiger/ProApps were older versions.
    My two cents is that some update to Tiger/Leopard/QuickTime/ProApps made this happen.
    So ANY HDV/XDCAM file bigger than 4GB is FAILING on ANY system (Tiger or Leopard, with or without FCP and QT Pro, installed from scratch and with everything updated).
    Some tests I've made:
    -Compressor works with these two codecs to lower res in TIGER with QuickTime version 7.4.0 and LOWER. (This still has to be tested, but I think that was my version of QT when I managed to make the proxy of these two codecs.)
    -Compressor doesn't work on any system with QT above 7.4.0.
    -CATDV (http://www.squarebox.co.uk/) version 6 can make proxy versions of these two codecs in TIGER ONLY! They already know of this problem and CATDV 7 works on both.
    -Compressing in Compressor with H.264 (same codec as the FCSvr proxy) BUT without changing the resolution, size, etc. works OK in both Tiger and Leopard.
    -Exporting through QT works on ANY system BUT I lose TC, so it's not an option (and I can't put this workflow in FCSvr without using Compressor).
    -Exporting from FCP as QuickTime Movie with H.264 doesn't work on both (Tiger and Leopard).
    -Exporting from FCP as QuickTime Conversion with H.264 works, but as with exporting from QT I lose TC and can't put this into the FCSvr workflow.
    The error messages are always "QuickTime error = -50" or "QuickTime error: corrupted data" (and of course it's not corrupted, because it's used - even in FCP to edit - and opened and viewed, and CATDV can make a proxy and everything).
    I'm surprised not to see anyone freaking out about this issue... I don't think it's only me.
    These are some problems I came across while trying to find a solution:
    http://www.paulescandon.com/blog/?p=39
    http://discussions.apple.com/thread.jspa?messageID=4912265&#4912265
    http://www.squarebox.co.uk/faq155.html
    http://discussions.apple.com/thread.jspa?messageID=5513003&#5513003
    http://discussions.apple.com/thread.jspa?threadID=1903644&tstart=0
    http://discussions.apple.com/thread.jspa?threadID=1780228
    http://discussions.apple.com/thread.jspa?threadID=1853507
    As you can see, this is not the first time I've posted about this issue.
    I'm sending a bug report as well.
    Thank you all in advance.
    Regards

    Hi Chará!
    We should, but I think the others will be disappointed eheheheh.
    It doesn't matter what version of Leopard I'm using (right now I'm using Leopard Server 10.5.6, but this issue has happened even with 10.4 if QT is updated, and with all updates of 10.5, as I've mentioned above).
    What file size are those clips that are longer than 10min?
    I really don't get how it's possible that just the two of us have HDV and XDCAM files bigger than usual. Maybe people that do interviews don't like to record the entire interview, and the clips they capture stay under this file size limit? Or maybe we are the only ones in the whole world who transcode longer HDV and XDCAM clips (it's funny that it doesn't even need to be FCSvr; with only ONE machine and Compressor this issue happens all the time).
    I'm convinced that I have to install a Tiger version in a virtual machine with Compressor and not update anything, to test this further. But I'm certain that with TIGER and CATDV this works OK! And I think that CATDV uses the QT engine to do these transcodes.
    This is too much of a mystery to me. Maybe it's time to spend 24+ hours with Apple on an international call and probably not get anywhere.
    I'm only posting this here in the FCSvr forum because this issue is much worse when you have hundreds of clips affected, and if you bought the package it doesn't matter whether it's a problem with QT+OSX or whatever; Apple has to solve it for FCSvr use.
    Do you reckon?
    Let's get in touch through email: lucas at rwcine.com.br
    (at = @ and delete spaces).
    Regards
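    Until the underlying QuickTime issue is fixed, a hedged workaround is to flag oversize sources before they ever reach Compressor/FCSvr. A sketch - the media path is a placeholder, and the size is 4GB expressed in 512-byte blocks so it also works with older BSD find:
    # List QuickTime movies over the 4GB threshold described above
    # (8388608 blocks * 512 bytes = 4 GB).
    find /Volumes/Media -name '*.mov' -size +8388608 -print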

  • Final Cut Server on edit suite?

    We're thinking about purchasing Final Cut Server, but I'm not clear about whether we'll need to put it on a dedicated system or if we could run it on an edit suite. We're rarely going to have more than 3 users at a time using it, and probably 75% of the media is already on one Mac Pro in a combination of internal and external drives and an Xsan. It would be a big advantage in terms of both cost and convenience to be able to run Final Cut Server on that computer, but is that a reasonable thing to do? Would it have a significant impact on editing? Would it use up too much CPU time and memory?

    FCS uses a lot of processing power when you add things to the library, because it is doing several things at once. I've maxed out a 4-core 2.6GHz machine easily, especially when there are errors - the whole thing will redline. We're actually thinking about using an 8-core instead. We're also offloading a lot of the compression to the server to free up the editing machines.

  • Final Cut Server Edit Proxy

    Hi,
    I need to do remote editing (with FCP7) over a low-bandwidth WAN (1Mbps), so I configured the Final Cut Server (v1.5.1) Edit Proxy to use a QuickTime H.264 codec with the following parameters:
    File Extension: mov
    Estimated size: unknown
    Audio Encoder
    AAC, Stereo (L R), 48.000 kHz
    Video Encoder
    Format: QT
    Width: (100% of source)
    Height: (100% of source)
    Pixel aspect ratio: Default
    Crop: None
    Padding: None
    Frame rate: (100% of source)
    Frame Controls: Automatically selected: Off
    Codec Type: H.264
    Multi-pass: Off, frame reorder: On
    Pixel depth: 24
    Spatial quality: 0
    Min. Spatial quality: 25
    Temporal quality: 50
    Min. temporal quality: 25
    The workflow is:
    The remote editor chooses in Final Cut Server the FCP project he has to work on and does a Check Out of the project onto his local Final Cut Pro 7 station, selecting "use: edit proxy" and "keep media with project". The project and associated footage are then transferred from the server to the remote FCP7 station with no problem.
    When the transfer is complete, the editor simply opens the project file to start editing on his station. This workflow was working all right until January: now, when opening the project, FCP7 doesn't reconnect the media, giving the error message:
    "one or more of the updates requested by Final Cut Server could not be applied. (one or more replacement(s) had unexpected track setting.)" and the media files are Offline! When trying to reconnect manually by pointing to the proper path for the missing files, I have to uncheck "Matched Name and reel only" to be able to select the edit proxy files. It looks like the project was still linked to the original media...
    Looking at the project file, the paths for the proxy files are the proper ones...
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>fileSubs</key>
    <array>
    <dict>
    <key>newurl</key>
    <string>file://localhost/Users/XXXXX/Desktop/testbdw10/media/_FCS__XXX.XXX.XXX.XXX/dev/3/1670_179_004301.mov</string>
    <key>originalurl</key>
    <string>file://localhost/Macintosh%20HD/Users/XXXXX/Documents/Final%20Cut%20Pro%20Documents/Capture%20Scratch/Untitled%20Project%201/179004301.mov</string>
    </dict>
    <dict>
    <key>newurl</key>
    <string>file://localhost/Users/XXXXX/Desktop/testbdw10/media/_FCS__xxx.xxx.xxx.xxx/dev/3/1674_179_003601.mov</string>
    <key>originalurl</key>
    <string>file://localhost/Macintosh%20HD/Users/XXXX/Documents/Final%20Cut%20Pro%20Documents/Capture%20Scratch/Untitled%20Project%201/179003601.mov</string>
    </dict>
    </array>
    <key>version</key>
    <integer>0</integer>
    <key>versionMin</key>
    <integer>0</integer>
    </dict>
    </plist>
    If I change the Edit Proxy setting to ProRes 422 (Proxy) in the Final Cut Server edit proxy setting, then the reconnection works OK. I cannot use this workaround, as the files are then far too big to be transferred over a 1Mbps WAN.
    The setup with the QuickTime H.264 edit proxy was working fine after, it seems, the installation of the Pro Applications Update 2009-01 issued at the end of October 2009, but since January 2010 it does not work anymore.
    Final Cut Server is version 1.5.1 with Mac OS X Server 10.5.8.
    Editing stations are running Final Cut Pro 7.0.1 and Mac OS X 10.5.8.
    What could be wrong? Has anybody encountered this type of problem?
    Any suggestions welcome!
    Thanks.

    I'm dealing with this issue right now (FCSvr 1.5.1 with the latest pro app updates, 10.6.2, FCP7). I was under the impression that you could make edit proxies that were 25% resolution (as opposed to just using an efficient codec), but Final Cut Pro is having a rough time getting it right. I too am only getting success with ProRes Proxy, audio passthrough, and 100% res/everything else. I tried using the Apple Intermediate Codec with default "100%" settings and audio passthrough but had no luck; it's way heavier than ProRes Proxy anyway.
    The only suggestion I can make is to try your H.264 solution with audio passthrough (or Linear PCM) instead of AAC. If that makes a difference, we're on to something. Some people started having this issue after an iTunes update, so maybe AAC is the source of the problem.
    The edit proxy workflow really needs tweaking from Apple to cater to internet-based solutions rather than fiber/SAN environments where bandwidth is a non-issue.
    I'll report back later after some more testing.
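    To test the AAC theory, one hedged check is to compare the Spotlight codec metadata of a source clip against its edit proxy; the file names below are the ones from the plist above:
    # kMDItemCodecs lists the video/audio codecs indexed for each movie;
    # a mismatch in the audio entry would point at the AAC encoder.
    mdls -name kMDItemCodecs 179004301.mov
    mdls -name kMDItemCodecs 1670_179_004301.mov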

  • Final Cut Server + Media Manager Best Practices

    Is anyone using Final Cut Server to organize FCP Media Manager?
    I don't want MM and its individual contents cataloged by FCS, only a "bundle" of the MM directory (which has the project and a sub-directory called Media with all media from that project).
    I've tried different solutions but always get stuck in some way.
    1 - Put the directory from MM on a Device being watched by a scan production. A production with the name of the directory is created, and the project and media are cataloged as well.
    But I don't want those media and projects cataloged by FCS. I've tried to set the recursion limit to 0 or 1, but then the production is not created, and neither is the bundle (from the directory) asset. Furthermore, you can't archive productions (with MatrixStore you can), so a workaround would be to archive only the bundle inside the production (if you managed to get only the directory as a bundle inside the production). But to make things worse, if you update the directory as a bundle (regardless of whether it's inside a production or not) and your archive device is an FTP device, it says "ERROR: E_NOTSUPP Sorry, copying a bundle to an ftp server is not supported".
    2 - Zip the MM directory with "ditto" (because of the 2GB problem with normal zip) and put it on a device being watched. This won't be scanned by a scan production to create a production with the name of its contents, because it's not a directory, it's a zip file. And even if I forget about productions and only upload the zip files, creating each MM directory as a zip takes too long and too much CPU power, so it's not feasible with many MM directories (which is the case). And to make things worse (always Murphy), before I deleted those directories I checked the integrity of the zip files, and to my surprise some of them were corrupted. So I don't trust compressing anything for my backup. I've tried tar, ditto, and archive from the Finder (which is the same as ditto), but it either takes too long or breaks the file.
    The bottom line: I just want an MM directory as a bundle in FCS (preferably inside an automatically created production) AND to be able to archive it to an FTP device.
    Thanks in advance
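    For reference, the ditto-based zip step described in option 2 would look something like this sketch (paths are placeholders; -c -k writes a PKZip archive, which sidesteps the classic 2GB zip limit):
    # Archive one Media Manager directory, preserving resource forks.
    ditto -c -k --sequesterRsrc "/Volumes/XSAN/Show1/MM" "/Volumes/Watch/Show1_MM.zip"
    # Test the archive before deleting the source - the poster found some
    # archives corrupted, so always verify first.
    unzip -t "/Volumes/Watch/Show1_MM.zip"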

    Well it sounds like you have two problems: getting a successful scan production to work and archiving. I think I can help with the first one.
    When you set up your device, make sure that it is one directory level up from each folder you want turned into a production. For example, with XSAN/Show1/MM/ - if you make XSAN the device and set the recursion limit to 0, then Show1 becomes a production (and Show2, Show3, etc.). Set the recursion limit to 1 and MM becomes a production (and Show2/MM/, Show3/MM/, etc.).
    Here's what I would do. Device 1 = XSAN/Show1 with recursion = 0. In the production info section of the scan production response, set the title = Show1_[0]. The [0] will automatically fill in the name of the directory, i.e. MM. This way each folder within Show1 becomes a production, and the name is preceded by Show1 for easy sorting. This way you can isolate your MM folder and treat it differently.
    That should get your productions scanning correctly. I'm not sure about archiving to an FTP device, zipping bundles, or automatically adding bundled assets to productions with a scan. As far as I know, the only way to add a bundled asset is to manually upload a folder. I might be wrong about this, though.
    Hope this helps even a little.

  • Quotacheck: searchfs Result too large: Result too large

    Aside from a 2006 post regarding this issue, I'm unsure how to resolve my scenario. We're using OS X Server's Time Machine AFP goodies, but we needed to enable quotas for users. Simple? Maybe, but not Mac style... so you head into the Terminal, read some old posts on outdated forums... use repquota, quotacheck, and quotaon...
    And everything seemed to work, until you add a user (through edquota) whose quota isn't in fstab and who can't be found in repquota...
    sigh...
    I turned off quota checking and tried starting from scratch... and what do I get but an error whose last mention on this forum is from 2006:
    sudo quotacheck -a -v
    * Checking user and group quotas for /dev/rdisk4 (/Volumes/ColdStorage)
    34
    quotacheck: searchfs Result too large: Result too large
    Any ideas of ways around this? The 2006 posts seem to indicate that after attempting variations of quotacheck, I might eventually break through!

    Hello,
    I've run into the same issue on our setup as well. (Xserve G5, 10.4.8; the data is on an Xserve RAID, RAID level 5, 1TB, used for home directories.) I'm working with Apple to see if there is a solution to this issue or if it is a bug. In the meanwhile, they recommended running quotacheck with the path to the share rather than -a:
    sudo quotacheck -v /Volumes/OURDATA
    Using the command this way seems to work about half of the time for me, the other half still giving the same error message. I'm hoping this is a cosmetic issue with quotacheck, and not a hint of a problem with our setup.
    I'll be sure to post if I find anything else out.
    Matt Bryant
    ACTC
    Husson College and the New England School of Communications
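    For completeness, a hedged sketch of the marker-file setup Mac OS X quotas expect: per-volume quotas are switched on by empty .quota.ops.* files at the volume root rather than by fstab alone (the volume path matches the question above):
    # Create the user-quota marker, rebuild the quota database, then enable.
    sudo touch /Volumes/ColdStorage/.quota.ops.user
    sudo quotacheck -v /Volumes/ColdStorage
    sudo quotaon /Volumes/ColdStorage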
