Big files support

Please add big-file support. I want to use Lightroom to manage pictures of about 15000x10000 pixels (from drum scans).

I also had issues opening AVCHD files on case-sensitive filesystems.
You can try these steps to solve the issue:
1. Right-click on the AVCHD file and select 'Show Package Contents'.
2. Right-click on the BDMV file and select 'Show Package Contents'.
3. Rename index.bdm to INDEX.BDM (it seems that all files in the BDMV folder need to be in UPPERCASE for QuickTime to open the AVCHD file).
4. Go back twice and try double-clicking the AVCHD file again (QuickTime should open a window showing the multiple clips, as expected).
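If the BDMV folder contains many files, a small throwaway program can do the uppercasing in one pass. Below is a minimal sketch in Java; the volume path is a made-up example, and the recursive rename simply assumes the AVCHD/BDMV layout described above.

import java.io.File;

// Uppercase every file and folder name inside the BDMV folder of an AVCHD
// package. The path is a made-up example; point it at your own bundle.
public class UppercaseBdmv {

    public static void main(String[] args) {
        renameToUpper(new File("/Volumes/CAMERA/PRIVATE/AVCHD/BDMV"));
    }

    private static void renameToUpper(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return; // not a directory, or not readable
        }
        for (File f : entries) {
            if (f.isDirectory()) {
                renameToUpper(f); // recurse into STREAM, CLIPINF, PLAYLIST, ...
            }
            String upper = f.getName().toUpperCase();
            if (!upper.equals(f.getName())
                    && !f.renameTo(new File(f.getParentFile(), upper))) {
                System.err.println("Could not rename " + f);
            }
        }
    }
}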

Similar Messages

  • Not enough space on my new SSD drive to import my data from a Time Machine backup; how can I import my latest backup minus some big files?

    I just got a new 256GB SSD drive for my Mac. I want to import my data from a Time Machine backup, but it's larger than 256GB since it used to be on my old optical drive. How can I import my latest backup while leaving some big files out on the external drive?

    Hello Salemr,
    When you restore from a Time Machine backup, you can tell it not to transfer folders like Desktop, Documents, Downloads, Movies, Music, Pictures and Public. Take a look at the article below for the steps to restore from your backup.
    Move your data to a new Mac
    http://support.apple.com/en-us/ht5872
    Regards,
    -Norm G. 

  • Moving big files (600MB) with FTP Adapter: error "The IO operation failed"

    Hi everybody, I have the following problem:
    I need to move big files from one server to another remote server over the FTP protocol. All the configuration is correct and I am able to move small files with no problem, but when I move big files the server shows the following error:
    Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'readEBS'
    failed due to: The IO operation failed. The IO operation failed. The "OPER[NOOP][NONE]" IO operation for "/tmp/TestLogSOA/DetalleCostos3333333.dvd"
    failed. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution.
    java.sql.SQLException: Unexpected exception while enlisting XAConnection java.sql.SQLException: XA error: XAResource.XAER_NOTA start() failed on
    resource 'SOADataSource_ohsdomain': XAER_NOTA : The XID is not valid oracle.jdbc.xa.OracleXAException at
    oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1532) at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:321) at
    weblogic.jdbc.wrapper.VendorXAResource.start(VendorXAResource.java:51) at weblogic.jdbc.jta.DataSource.start(DataSource.java:722) at
    weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1228) at
    weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1161) at
    weblogic.transaction.internal.XAServerResourceInfo.enlist(XAServerResourceInfo.java:297) at
    weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:507) at
    weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:434) at
    weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1592) at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1496) at
    weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:439) at weblogic.jdbc.jta.DataSource.connect(DataSource.java:396) at
    weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:355) at
    oracle.integration.platform.xml.XMLDocumentManagerImpl.getConnection(XMLDocumentManagerImpl.java:623) at
    oracle.integration.platform.xml.XMLDocumentManagerImpl.insertDocument(XMLDocumentManagerImpl.java:208) at
    sun.reflect.GeneratedMethodAccessor1534.invoke(Unknown Source) at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at
    org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307) at
    org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182) at
    org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149) at
    org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106) at
    org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at
    org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy285.insertDocument(Unknown Source) at
    oracle.integration.platform.instance.store.MessageStore.savePayload(MessageStore.java:244) at
    oracle.integration.platform.instance.store.MessageStore.savePayloads(MessageStore.java:99) at
    oracle.integration.platform.instance.InstanceManagerImpl.persistPayloads(InstanceManagerImpl.java:773) at
    oracle.integration.platform.instance.InstanceManagerImpl.persistReferenceInstanceBean(InstanceManagerImpl.java:1106) at
    oracle.integration.platform.blocks.adapter.AbstractAdapterBindingComponent.createAndPersistBindingInstance(AbstractAdapterBindingComponent.java:502)
    at oracle.integration.platform.blocks.adapter.AdapterReference.createAndPersistBindingInstance(AdapterReference.java:356) at
    oracle.integration.platform.blocks.adapter.AdapterReference.request(AdapterReference.java:171) at
    oracle.integration.platform.blocks.mesh.SynchronousMessageHandler.doRequest(SynchronousMessageHandler.java:139) at
    oracle.integration.platform.blocks.mesh.MessageRouter.request(MessageRouter.java:179) at
    Thanks!!!

    Hi idavistro,
    You can try setting the XA Transaction Timeout for SOADataSource.
    1. Log in to the WebLogic Admin Console.
    2. In the left tree, select Services -> Data Sources -> SOADataSource -> Transaction.
    3. Select Set XA Transaction Timeout.
    4. Set XA Transaction Timeout to 0.
    5. Restart the server and check if the error still appears.
    Regards,
    Neeraj Sehgal

  • Broken FTP connection and big files problem

    I have a problem with downloading big files.
    Does anybody know how to resume a download over an FTP connection?
    Or how can I read bytes from a file over an FTP connection using something like random access, to avoid restarting the download from the beginning?
    "InputStream" does not support "seek"-like methods.

    From RFC 959
    RESTART (REST)
    The argument field represents the server marker at which
    file transfer is to be restarted. This command does not
    cause file transfer but skips over the file to the specified
    data checkpoint. This command shall be immediately followed
    by the appropriate FTP service command which shall cause
    file transfer to resume.
    You should also be aware of RFC 959 Section 3.4.2 on BLOCK MODE transfers, which is what allows FTP to REST a connection and "skip" n bytes of a file.
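    As an illustration, here is a minimal Java sketch of a REST-based resume using Apache Commons Net's FTPClient (host, login and paths are placeholders, not anything from this thread). setRestartOffset() makes the client send REST with the given offset before the next RETR, and the local file is opened in append mode:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    // Resume a partial FTP download: send REST <size of local partial file>
    // before RETR, then append the remaining bytes to the local file.
    public class FtpResume {

        public static void main(String[] args) throws IOException {
            File local = new File("bigfile.iso");      // partial local copy
            FTPClient ftp = new FTPClient();
            ftp.connect("ftp.example.com");            // placeholder host
            ftp.login("anonymous", "me@example.com");  // placeholder login
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            ftp.enterLocalPassiveMode();

            long offset = local.exists() ? local.length() : 0L;
            ftp.setRestartOffset(offset);              // sent as REST <offset>

            InputStream in = ftp.retrieveFileStream("/pub/bigfile.iso");
            if (in == null) {
                throw new IOException("RETR refused: " + ftp.getReplyString());
            }
            try (InputStream is = in;
                 FileOutputStream out = new FileOutputStream(local, true)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = is.read(buf)) != -1) {
                    out.write(buf, 0, n);              // append after the offset
                }
            }
            ftp.completePendingCommand();  // wait for the 226 completion reply
            ftp.logout();
            ftp.disconnect();
        }
    }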

  • Keeping "CS Web Service session" alive while uploading big files.

    Hi.
    I have a problem when I'm uploading big files whose transfer takes longer than the session timeout value, causing the upload to fail.
    As you all know, uploading a file is a three-step process:
    1). Create a new DocumentDefinition Item on the server as a placeholder.
    2). Open an HTTP connection to the created placeholder and transfer the data using the HTTPConnection.put() method.
    3). Create the final document using the FileManager by passing in the destination folder and the document definition.
    The problem is that step 2 takes so long that the "CS Web Service Session" times out, and thus step 3 cannot be completed. The Developer Guide gives a utility method for creating an HTTP connection for step 2, and it states the following: "..you must create a cookie for the given domain and path in order to keep the session alive while transferring data." But this only keeps the session of the HTTP connection alive, and not the "CS Web Service Session". In my case step 2 completes successfully, and the moment I perform step 3 it throws an ORACLE.FDK.SessionError:ORACLE.FDK.SessionNotConnected exception.
    How does one keep the "CS Web Service Session" alive?
    Thanks in advance
    Regards.
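    For what it's worth, the data-transfer step (step 2) with a session cookie attached might look like the minimal sketch below using plain HttpURLConnection; the URL and the cookie name/value are hypothetical placeholders rather than the actual Content Services API. As noted above, this keeps only the HTTP connection's session alive, not the "CS Web Service Session".

    import java.io.FileInputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Step 2 of the upload: stream the file to the placeholder document's
    // HTTP endpoint with a session cookie attached. The URL and the cookie
    // name/value are hypothetical; substitute whatever your server issues.
    public class UploadStep2 {

        public static void main(String[] args) throws Exception {
            URL target = new URL("http://cs.example.com/content/placeholder/12345");
            HttpURLConnection conn = (HttpURLConnection) target.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setChunkedStreamingMode(64 * 1024); // don't buffer the whole file
            conn.setRequestProperty("Cookie", "SESSION_ID=abc123"); // hypothetical

            try (FileInputStream in = new FileInputStream("bigfile.bin");
                 OutputStream out = conn.getOutputStream()) {
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
            conn.disconnect();
        }
    }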

    Okay, even a thread that pushes dummy stuff through once in a while doesn't help. I'm getting the following when the keep-alive thread kicks in while uploading a big file.
    "AxisFault
    faultCode: {http://xml.apache.org/axis/}HTTP
    faultSubcode:
    faultString: (409)Conflict
    faultActor:
    faultNode:
    faultDetail:
    {}:return code: 409
    <HTML><HEAD><TITLE>409 Conflict</TITLE></HEAD><BODY><H1>409 Conflict</H1>Concurrent Requests On The Same Session Not Supported</BODY></HTML>
    {http://xml.apache.org/axis/}HttpErrorCode:409
    (409)Conflict
         at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:732)
         at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:143)
         at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
         at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
         at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
         at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
         at org.apache.axis.client.Call.invokeEngine(Call.java:2765)
         at org.apache.axis.client.Call.invoke(Call.java:2748)
         at org.apache.axis.client.Call.invoke(Call.java:2424)
         at org.apache.axis.client.Call.invoke(Call.java:2347)
         at org.apache.axis.client.Call.invoke(Call.java:1804)
         at oracle.ifs.fdk.FileManagerSoapBindingStub.existsRelative(FileManagerSoapBindingStub.java:1138)"
    I don't understand this: the exception talks about "Concurrent Requests On The Same Session", but if there is already a request going on, why is the session timing out in the first place?!
    I must be doing something really stupid somewhere. Aia ajay jay, what an unproductive day...
    Any help? It will be greatly appreciated...

  • OVM Manager 3.1.1 CLI clone big files error

    Hello,
    We use the OVM Manager CLI for backup purposes. During a clone operation on a big file, after about 10 to 12 minutes we get an error:
    Error Msg: com.oracle.odof.exception.PermissionException: Exchange is not connected
    But the clone operation actually continues and completes successfully.
    So from the client's point of view the operation failed, but on the server it succeeded.
    Clone command is as follows:
    ssh admin@ovmm -p 10000 "clone VirtualDisk id=$VM_FILE_ID target=$VM_FILE_REPO_ID cloneType=Sparse"
    The whole message:
    Command: clone VirtualDisk id=0004fb00001200004d032c969c42095d.img target=0004fb000003000072340b1e2eb70904 cloneType=Sparse
    Status: Failure
    Time: 2013-01-18 14:42:58.955
    Error Msg: com.oracle.odof.exception.PermissionException: Exchange is not connected
    Fri Jan 18 14:42:58 CET 2013
    OVM> Failed to complete command(s), error happened. Connection closed.
    We blamed timeouts, but after setting higher values nothing changed.
    Thank you in advance
    Gregory

    This really sounds like that nasty timeout issue which had its first appearances in the early builds of OVM 3.0.3 and prevented e.g. the creation of rather large storage repositories over iSCSI when using a standard 1 GBit connection. The command would simply time out on the OVMM after 120 secs, rather than waiting for the ovs-agent to report back…
    In OVM 3.1.1 (build 368) this timeout had been upped to 10 mins, but I knew that this would only last for a few months… and I told Oracle Support so, but eh… you know… ;)

  • Big File vs Small File Tablespace

    Hi All,
    I have a doubt and just want to confirm which is better: using one big datafile (a bigfile tablespace) or many small datafiles for a tablespace. I think it is better to use a bigfile tablespace.
    Kindly help me out as to whether I am right or wrong, and why.
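    (For context, the two variants under discussion look roughly like the sketch below, issuing the DDL over JDBC; the connection URL, credentials, file paths and sizes are illustrative only.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // The two tablespace styles being compared. The JDBC URL, credentials,
    // file paths and sizes are illustrative only.
    public class TablespaceDemo {

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "password");
                 Statement st = con.createStatement()) {

                // Bigfile: exactly one datafile, which may grow very large
                // (up to 32 TB with an 8 KB block size).
                st.execute("CREATE BIGFILE TABLESPACE big_ts "
                         + "DATAFILE '/u01/oradata/big_ts01.dbf' "
                         + "SIZE 10G AUTOEXTEND ON");

                // Smallfile (traditional): a tablespace made of many datafiles.
                st.execute("CREATE TABLESPACE small_ts "
                         + "DATAFILE '/u01/oradata/small_ts01.dbf' SIZE 2G, "
                         + "'/u01/oradata/small_ts02.dbf' SIZE 2G");
            }
        }
    }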

    GirishSharma wrote:
    Aman.... wrote:
    Vikas Kohli wrote:
    With respect to performance i guess Big file tablespace is a better option
    Why ?
    If you allow me to post, I would like to paste the text below from the doc link in my first reply:
    "Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile."
    Regards
    Girish Sharma
    Girish,
    I find it interesting that I've never found any evidence to support the performance claims - although I can think of reasons why there might be some truth to them and could design a few tests to check. Even if there is some truth in the claims, how significant or relevant might they be in the context of a database that is so huge that it NEEDS bigfile tablespaces?
    Database opening: how often do we do this - does it matter if it takes a little longer - will it actually take noticeably longer if the database isn't subject to crash recovery? We can imagine that a database with 10,000 files would take longer to open than a database with 500 files if Oracle had to read the header blocks of every file as part of the database open process - but there's been a "delayed open" feature around for years, so maybe that wouldn't apply in most cases where the database is very large.
    Checkpoints: critical in the days when a full instance checkpoint took place on the log file switch - but (a) that hasn't been true for years, (b) incremental checkpointing made a big difference to the I/O peak when an instance checkpoint became necessary, and (c) we have had a checkpoint process for years (if not decades) which updates every file header when necessary rather than requiring DBWR to do it.
    DBWR processes: why would DBWn handle writes more quickly? The only idea I can come up with is that there could be some code path that has to associate a file id with an operating system file handle of some sort, and that this code does more work if the list of files is very long: very disappointing if that's true.
    On the other hand, I recall many years ago (8i time) crashing a session when creating roughly 21,000 tablespaces for a database, because some internal structure relating to file information reached the 64MB hard limit for a memory segment in the SGA. It would be interesting to hear if anyone has recently created a database with the 65K+ limit for files - and whether it makes any difference whether that's 66 tablespaces with about 1,000 files each, or 1,000 tablespaces with about 66 files each.
    Regards
    Jonathan Lewis

  • Sudhir Choudhrie - Not Able to Download Big Files

    Hi, this is Sudhir Choudhrie. I've chosen Firefox for my daily browsing and downloading needs because it has a function which lets you pause a running download and resume it whenever you want, even the next day, so you don't need any third-party trial download manager. But nowadays I am finding it difficult to download big files: whenever I choose to resume a paused download, I can't download that file again. Please tell me if it's a website error or some error in Firefox. Thanks,

    Hi,
    Thank you for your question. I do not know the answer to this, but I would be happy to test a download URL. If this is an issue with downloads, it is also possible to troubleshoot: https://support.mozilla.org/en-US/kb/cant-download-or-save-files

  • [REQ] gpac with Large File Support (solved)

    Hi All
    I'm getting desperate.
    I'm using mp4box from the gpac package to mux x264 and aac into mp4 containers.
    Unfortunately, no version of mp4box supports files bigger than 2GB.
    There is already a discussion going on at the bugtracker
    http://sourceforge.net/tracker/index.ph … tid=571738
    I tried all those methods with gpac 4.4 and the CVS version,
    but it still breaks up after 2GB when importing files with mp4box.
    So... anybody got an idea how to get a build on Arch which supports big files?
    thanks
    Last edited by mic64 (2007-07-16 17:16:44)

    OK, after looking at this stuff with patience I got it working.
    You have to use the CVS version AND the patch AND the extra flags from the link above.
    After that, files >2GB work.
    Last edited by mic64 (2007-07-16 17:27:33)

  • Validate and reject checkin for big files???

    Is there a way to validate and reject checkin for big files?
    On the client side, it sounds like a custom check-in policy won't work: if a user overrides the policy at check-in, that still opens up another "backdoor" for them to check in.
    On the server side, I tried a TFS plugin, but that doesn't work either. The CheckinNotification event doesn't fire until after the fact, once the check-in has already been committed, and by then it's too late.
    Any other suggestion?
    Thx.

    Hi Garynguyen, 
    Thanks for your post.
    I think you need to create a check-in policy and a server-side plugin to enforce it; please refer to the solution in this article:
    https://binary-stuff.com/post/how-to-enforce-check-in-policies.

  • Can Server 2012 FSRM send an email notification to the Administrator while a user is copying a big file (e.g. 100MB) onto the server?

    Can FSRM send an e-mail notification to the Administrator while a user is copying a big file (e.g. 100MB) onto the server?
    Environment:
    Windows Server 2012 Standard 64-bit
    FSRM installed, and test e-mail works
    Quota management and file screen path: D:\Data
    Thanks

    Hi,
    You can create quotas to limit the space allowed for a volume or folder and generate notifications when the quota limits are approached or exceeded.
    For more detailed information, please refer to the articles below:
    Working with Quotas
    http://technet.microsoft.com/en-us/library/cc770989(v=ws.10).aspx
    The best feature you've never heard of...
    http://blogs.technet.com/b/seanearp/archive/2008/03/11/the-best-feature-you-ve-never-heard-of.aspx
    Regards,
    Mandy

  • Problems uploading big files via FTP and downloading files

    I've been having problems uploading big files like video files (.mov, 12MB) via FTP to my website (small files like .html or .doc I can upload, but it takes longer than usual). I'm using Fetch 4.0.3 as my FTP client. I have the same problems when downloading files via BitTorrent. I recently moved to Spain, and since then I seem to have the problem, but my roommate with a PC doesn't have it. Connecting to the internet with an Ethernet cable also didn't resolve the problem. I also tested it from a Starbucks, connecting to the internet from there, but I still couldn't upload that 12MB file to the FTP server. The security settings for the firewall are set to "allow all incoming connections". I didn't change any of my settings, so I don't know what the problem could be. I have a MacBook Pro, Mac OS X (10.5.7). Any suggestions? Thanks!

    Welcome to Apple Discussions!
    Much of what is available on BitTorrent is not legal, or consists of beta or improperly labelled versions. If you want public domain software, see my FAQ*:
    http://www.macmaps.com/macosxnative.html#NATIVE
    for search engines of legitimate public domain sites.
    http://www.rbrowser.com/ has a light mode that supports binary without SSH security.
    http://rsug.itd.umich.edu/software/fugu/ has ssh secure FTP.
    I find both to be quick and a lot more reliable than Fetch. I know Fetch used to be the staple FTP program, but it grew too big for my use.
    - * Links to my pages may give me compensation.

  • BCC FlexUI big file import timeout.

    Hi mates.
    We are facing an issue with BCC when importing a big file with cross-sells. During the import process for the file, the first screen is shown (Step 1 of 2) and, as the file takes a long time to import (2-3 minutes for a large number of assets), the second screen is never shown even though the process ends successfully. We are using ATG 10.0.3.
    Have you faced any similar problem? We have wrapped projectAssets.jsp with transaction marks, but it didn't work. Is there any FlexUI timeout configuration we could set up to get the second screen even when the file takes a long time to import?
    Thank you very much.
    Regards.

    Hi Joel.
    Thank you very much for your prompt response. We have monitored the process from the moment the BCC user clicks the Import button, to make sure we don't have the problem you describe. The import process finishes successfully in the back end after a couple of minutes, and the transaction timeout is set up properly in WebLogic to allow long-running transactions in the customer's environment, so we are not hitting connection timeouts. The thing is, the callback to the FlexUI is missing, probably because the interface listens for a callback only for a period shorter than the time the file needs to import; thus the interface doesn't update its state and hangs at Step 1 even though the import has completed and it should have reached Step 2. We opened an SR about this case (3-6766177811) and they recommended we apply a change to projectAssets.jsp, wrapping it in a transaction. We've tested the change and it doesn't work; we are still facing the same issue when importing big files.
    I am completely new to Flex, so I don't know very much about how this behaves. Is there any parameter related to a listener timer, or something similar, that could be set up in FlexUI to get callbacks from long-running transactions?
    Thank you very much for your support.
    Kind Regards.
    Felix Rodriguez.

  • Photoshop CC slow performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to CS6 Master Collection last year.
    Since my OS broke down some weeks ago (the RAM failed), I gave Photoshop CC a try. At the same time I moved into new rooms and couldn't get my hands on the DVD of my CS6, resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files. File size is between 2GB and 7.5GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for the last few days it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or by some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm at 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file, it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it doesn't depend on the brush size as much as you might expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks on and off (gradient maps, selective color or levels) takes ages to be displayed, sometimes more than 3 or 4 seconds.
    The same goes for panning around in the picture, zooming in and out, or moving layers.
    It's nearly impossible to work on these files in a reasonable time.
    I've never seen this in CS6.
    Now I wonder if there's something wrong with PS or the OS. But: I've never worked with files this big before.
    In March I worked on some 5GB files with 150-200 layers in CS6, and it worked like a charm.
    System specs:
    i7-3930K (3.8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600MHz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    Win 7 Ultimate 64
    all system updates
    newest drivers
    PS CC
    System and PS are running on the first SSD, scratch is on the second. Both are set to be used by PS.
    79% of RAM is allocated to PS, the cache level is set to 5 or 6, and history states are set to 70. I also tried different cache tile sizes from 128K to 1024K, but it didn't help a lot.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just slow drawing, or is actual computation (image size, rotate, GBlur, etc.) also slow?
    If the slowdown is drawing, then the most likely culprit would be the video card driver. Update your driver from the GPU maker's website.
    If the computation slows down, then something is interfering with Photoshop. We've seen some third party plugins, and some antivirus software cause slowdowns over time.

  • Adobe Photoshop CS3 crashes each time it loads a big file

    I was loading a big batch of photos from iPhoto on my iMac into Adobe Photoshop CS3 and it kept crashing, yet each time I reopen Photoshop it loads the photos again and crashes again. Is there a way to stop this cycle?

    I don't think that too many users here actually use iPhoto (even the Mac users).
    However, Google is your friend. A quick search came up with some other non-Adobe forum entries:
    .... but the golden rule of iPhoto is NEVER EVER MESS WITH THE IPHOTO LIBRARY FROM OUTSIDE IPHOTO. In other words, anything you might want to do with the pictures in iPhoto can be done from *within the program,* and that is the only safe way to work with it. Don't go messing around inside the "package" that is the iPhoto Library unless you are REALLY keen to lose data, because that is exactly what will happen.
    .....everything you want to do to a photo in iPhoto can be handled from *within the program.* This INCLUDES using a third-party editor, and it saves a lot of time and disk space if you do it this way:
    1. In iPhoto's preferences, specify a third-party editor (let's say Photoshop) to be used for editing photos.
    2. Now, when you right-click (or control-click) a photo in iPhoto, you have two options: Edit in Full Screen (ie iPhoto's own editor) or Edit with External Editor. Choose the latter.
    3. Photoshop will open, then the photo you selected will automatically open in PS. Do your editing, and when you save (not save as), PS "hands" the modified photo back to iPhoto, which treats it exactly the same as if you'd done that stuff in iPhoto's own editor and updates the thumbnail to reflect your changes. Best of all, your unmodified original remains untouched so you can always go back to it if necessary.
