Microstate accounting in large file compilation

I am trying to enable microstate accounting by using the following code:

#include <procfs.h>
#include <fcntl.h>

     char procname[64];
     int  ctlfd;
     long ctl[2];

     sprintf(procname, "/proc/%d/ctl", (int)getpid());
     ctlfd = open(procname, O_WRONLY);
     ctl[0] = PCSET;       /* control operation: set mode flag(s) */
     ctl[1] = PR_MSACCT;   /* the flag: microstate accounting */
     write(ctlfd, ctl, 2*sizeof(long));

My application needs to be compiled with the large file compilation environment flags, but the compiler (Sun Studio 8 on Solaris 8) gives me an error:
"Cannot use procfs in the large file compilation environment".
Is there any way to get around this problem?
Or is there any other way to enable microstate accounting that would work with large file compilation?

Your question is not about C++, but about Solaris process accounting.
Inspection of the procfs header shows that it requires a consistent ILP32 (32-bit int, long, and pointer) or LP64 (64-bit long and pointer) model. It will not work with the ILP32 model modified for 64-bit file access; the error message comes from the procfs header itself.
If you can build your application as a 64-bit program, using -xarch=v9, it should compile. Your application must be 64-bit clean, however, with no hidden ILP32 assumptions.
Lint provides options for identifying C code that is not 64-bit clean, and the C++ compiler has an -xport64 option for the same purpose. Refer to the C and C++ User's Guides for details.
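For what it's worth, here is a minimal sketch of the same PCSET/PR_MSACCT sequence as a complete, 64-bit-clean program; the error handling and the compile line in the comment are illustrative assumptions, not something from the original post:

     /*
      * Sketch only: enable microstate accounting for the current process.
      * Assumes the Solaris /proc control interface discussed above; build
      * as a 64-bit program, e.g. "cc -xarch=v9 msacct.c", so <procfs.h>
      * no longer conflicts with large file support.
      */
     #include <stdio.h>
     #include <fcntl.h>
     #include <unistd.h>
     #include <procfs.h>

     int main(void)
     {
         char procname[64];
         long ctl[2];
         int  ctlfd;

         (void) snprintf(procname, sizeof(procname),
                         "/proc/%d/ctl", (int)getpid());
         ctlfd = open(procname, O_WRONLY);
         if (ctlfd == -1) {
             perror("open /proc/<pid>/ctl");
             return 1;
         }
         ctl[0] = PCSET;      /* control operation: set a mode flag */
         ctl[1] = PR_MSACCT;  /* the flag: microstate accounting */
         if (write(ctlfd, ctl, sizeof(ctl)) != (ssize_t)sizeof(ctl)) {
             perror("write PCSET/PR_MSACCT");
             (void) close(ctlfd);
             return 1;
         }
         (void) close(ctlfd);
         return 0;
     }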

Similar Messages

  • IdcApache2Auth.so Compiled With Large File Support

    Hi, I'm installing UCM 10g on a Solaris 64-bit platform with Apache 2.0.63. Everything went fine until I updated the configuration in the httpd.conf file. When I query server status it seems to be ok:
    ./idcserver_query
    Success checking Content Server idc status. Status: Running
    but in the Apache error_log I found the following error description:
    Content Server Apache filter detected a bad request_rec structure. This is possibly a problem with LFS (large file support). Bad request_rec: uri=NULL;
    Sizing information:
    sizeof(*r): 392
    [int]sizeof(r->chunked): 4
    [apr_off_t]sizeof(r->clength): 4
    [unsigned]sizeof(r->expecting_100): 4
    If the above size for r->clength is equal to 4, then this module
    was compiled without LFS, which is the default on Apache 1.3 and 2.0.
    Most likely, Apache was compiled with LFS, this has been seen with some
    stock builds of Apache. Please contact Support to obtain an alternate
    build of this module.
    When I searched My Oracle Support for suggestions on how to solve my problem, I found a thread which basically says that the Oracle ECM support team could give me a copy of IdcApache2Auth.so compiled with LFS.
    What do you suggest?
    Should I ask the ECM support team for help? (If yes, please tell me how.)
    Or should I update the Apache web server to version 2.2 and use IdcApache22Auth.so, which is compiled with LFS?
    Thanks in advance, I hope you can help me.

    Hi ,
    The easiest approach would be to use Apache 2.2 and the corresponding IdcApache22Auth.so file.
    Thanks
    Srinath
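    As background, the diagnostic quoted above is essentially a sizeof check: a module built without LFS sees a 4-byte apr_off_t, while an LFS-built Apache sees an 8-byte one, so the two disagree about the request_rec layout. The standalone C sketch below (an illustration, not the actual filter code) shows how the large file compile flags change the size being checked:

        /*
         * Illustration only, not the UCM filter's real code: shows how the
         * LFS compile flags change sizeof(off_t), which is what the
         * sizeof(r->clength) check in the diagnostic above detects.
         * On a 32-bit build:
         *   cc probe.c && ./a.out                                  -> 4 bytes
         *   cc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 probe.c  -> 8 bytes
         */
        #include <stdio.h>
        #include <sys/types.h>

        int main(void)
        {
            /* A module and its host server must agree on this size, or any
               structure containing an off_t field will not line up. */
            printf("sizeof(off_t) = %u\n", (unsigned)sizeof(off_t));
            return 0;
        }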

  • Added code to existing source file, compiled it, and class file shrunk

    Another newbie here. Fortunately, my classpath is ok, so I'm able to compile a .java file.
    I added one line of code (System.out.println) to write the value of a variable to a log. After compiling with javac, I noticed that the resulting [new] class file was smaller than the existing class file. I looked at each of the class files with Textpad. It's gibberish, but I quickly saw that a large block of code was missing in the new class file, even though the size of the source file had been increased.
    There is a difference, however, between how the two class files were created. The existing class file was compiled (along with many others) by exporting an .EAR file from a development environment (WSAD) to the WebSphere Administrator Console. Conversely, I am now compiling the same source file with javac on my machine.
    I suspect that this is the reason why I can add code to a .java file, compile it, and have the resulting class file actually lose code. Even if I am correct, I don't know what to do about it.
    Does anyone have an idea?
    Regards,
    Daniel T.

    Thank you both for your replies. I've read many posts over the past few months, and I know how important it is to provide as much info as possible, when asking a question here. That said, I have another tasty tidbit...
    After replacing the existing (larger) class file with the new (smaller) class file, my application now produces this:
    "*Error 500: LinkageError while defining class*..." [name of class]
    *"...(Unsupported major.minor version 50.0) This is often caused by having a class defined at multiple locations within the classloader hierarchy. Other potential causes include compiling against an older or newer version of the class that has an incompatible method signature. Dumping the current context classloader hierarchy: ==> indicates defining classloader ==>[0] com.ibm.ws.classloader.CompoundClassLoader@6bd156d5 Local ClassPath:"*
    ...[the entire classpath]...
    Original exception--- java.lang.UnsupportedClassVersionError:
    I'm guessing that my focus should mostly be on the 'Original exception', and maybe I need to revisit the JRE or JDK or JVM (these terms are somewhat nebulous to me, so please forgive me using them interchangeably) on my machine. For now, I'll just keep trying stuff. Thanks again for the replies!
    Regards,
    Daniel T.

  • Multiple files as book vs. single large file

    I am beginning two projects with a coworker. Both are multi-page publications (large newsletter and an annual report). We've run into the problem before of both needing to be in the same file at the same time and having to go back and forth opening and closing the files over and over, which quickly becomes a huge hassle. We don't have InCopy or any of the software that would allow us to both work on the same file.
    So my question is should I create a file for each spread in the publications and then compile them together in a master ID file as a "book" or just deal with it and create two larger files?

    I do a magazine every two months, and every article is a separate file compiled in a book.
    I design, and someone else edits.
    So I'm all for the Book, it really works.

  • UTL_FILE.get_line won't read large files ?

    I am trying to read a large fixed-length flat file. If I cut the file down to something really small, it will read it, but it reads it as a single line. If I try to read a larger file (>32K) I get a READ_ERROR. I am pretty sure it has to do with the end-of-line marker, but I saw nothing about that in the UTL_FILE documentation. This is on UNIX, with a newline character after each record in the file - a standard UNIX flat file.
    Any ideas on what to do?
    Thanks in advance
    Matt
    [email protected]
    my code:
    DECLARE
        -- The posted fragment omitted its declarations; these names and
        -- types are inferred from how they are used below.
        std_file     UTL_FILE.FILE_TYPE;
        hdr_text     VARCHAR2(32767);
        tran_text    VARCHAR2(32767);
        std_rowcount NUMBER := 0;
    BEGIN
        -- OPEN STD FILE
        BEGIN
            std_file := UTL_FILE.FOPEN('&4', '&1', 'r', 32767);
        EXCEPTION
            WHEN UTL_FILE.INVALID_PATH THEN
                RAISE_APPLICATION_ERROR(-20011, 'Invalid Path for STD file, &4/&1');
            WHEN OTHERS THEN
                RAISE_APPLICATION_ERROR(-20014, 'Other Error trying to open STD file, &4/&1');
        END;
        IF UTL_FILE.IS_OPEN(std_file) = FALSE THEN
            RAISE_APPLICATION_ERROR(-20015, 'Could not open STD file, &4/&1');
        END IF;
        -- READ STD FILE HEADER
        BEGIN
            UTL_FILE.GET_LINE(std_file, hdr_text);
        EXCEPTION
            WHEN UTL_FILE.INVALID_FILEHANDLE THEN
                RAISE_APPLICATION_ERROR(-20017, 'STD read file handle not valid');
            WHEN UTL_FILE.INVALID_OPERATION THEN
                RAISE_APPLICATION_ERROR(-20018, 'STD read invalid operation error');
            WHEN UTL_FILE.READ_ERROR THEN
                RAISE_APPLICATION_ERROR(-20019, 'STD read error');
            WHEN NO_DATA_FOUND THEN
                RAISE_APPLICATION_ERROR(-20020, 'STD read no data found');
            WHEN VALUE_ERROR THEN
                RAISE_APPLICATION_ERROR(-20021, 'STD read value error');
        END;
        -- PROCESS TRANSACTIONS
        LOOP
            BEGIN
                tran_text := NULL;
                UTL_FILE.GET_LINE(std_file, tran_text);
            EXCEPTION
                WHEN NO_DATA_FOUND THEN EXIT; -- EOF
                WHEN VALUE_ERROR THEN
                    RAISE_APPLICATION_ERROR(-20010, 'STD record too long.');
            END;
            std_rowcount := std_rowcount + 1;
        END LOOP;
        UTL_FILE.FCLOSE(std_file);
    EXCEPTION
        WHEN NO_DATA_FOUND THEN
            RAISE_APPLICATION_ERROR(-20001, 'No Data Found.');
        WHEN UTL_FILE.INVALID_PATH THEN
            RAISE_APPLICATION_ERROR(-20002, 'Invalid Path');
        WHEN UTL_FILE.INVALID_MODE THEN
            RAISE_APPLICATION_ERROR(-20003, 'Invalid Mode');
        WHEN UTL_FILE.INVALID_OPERATION THEN
            RAISE_APPLICATION_ERROR(-20004, 'Invalid Operation');
    END;

    We are still hung up on this. I tried implementing the code from Steve's XML book but still haven't resolved it.
    The CLOB is being created via XSU; see below. The new char[8192] appears to force the output file to 8K, with trailing characters on small CLOBs, and adds a carriage return every 8K on larger ones.
    As usual, any input from all is appreciated. Does anyone know of a good Java forum like this one?
    Thanks
    PROCEDURE BuildXml(v_return OUT INTEGER, v_message OUT VARCHAR2, string_in VARCHAR2, xml_CLOB OUT NOCOPY CLOB) IS
        queryCtx   DBMS_XMLQuery.ctxType;
        Buffer     RAW(1024);               -- unused in this fragment
        Amount     BINARY_INTEGER := 1024;  -- unused in this fragment
        Position   INTEGER := 1;            -- unused in this fragment
        sql_string VARCHAR2(2000) := string_in;
    BEGIN
        v_return := 1;
        v_message := 'BuildXml completed successfully.';
        queryCtx := DBMS_XMLQuery.newContext(sql_string);
        xml_CLOB := DBMS_XMLQuery.getXML(queryCtx);
        DBMS_XMLQuery.closeContext(queryCtx);
    EXCEPTION
        WHEN OTHERS THEN
            v_return := 0;
            v_message := 'BuildXml failed - ' || SQLERRM;
    END BuildXml;
    create or replace and compile java source named sjs.write_CLOB as
    import java.io.*;
    import java.sql.*;
    import java.math.*;
    import oracle.sql.*;
    import oracle.jdbc.driver.*;
    public class write_CLOB extends Object {
        public static void pass_str_array(oracle.sql.CLOB p_in, java.lang.String f_in)
                throws java.sql.SQLException, IOException {
            File target = new File(f_in);
            FileWriter fw = new FileWriter(target);
            BufferedWriter out = new BufferedWriter(fw);
            Reader is = p_in.getCharacterStream();
            char buffer[] = new char[8192];
            int length;
            while ((length = is.read(buffer)) != -1) {
                // Write only the characters actually read; writing the whole
                // 8K buffer each time is what produces the trailing characters.
                out.write(buffer, 0, length);
            }
            // Close the streams once, after the loop, and let close() flush.
            is.close();
            out.close();
        }
    }
    /

  • Large file with fstream with WS6U2

    I can't read large files (>2GB) with the STL fstream. Can anyone do this, or is this a limitation of the WS6U2 fstream classes? I can read large files with low-level C functions.

    I thought that WS6U2 meant Forte 6 Update 2. As for more information: the OS is SunOS 5.8, and the file system is NFS-mounted from an HP-UX 11.00 box and is largefile enabled. My belief is that fstream does not implement access to large files, but I can't be sure.
    Anyway, I'm not sure what you mean by the compiler's access to the OS support for largefiles, but as I mentioned before, I can read more than 2GB with open() and read(). My problem is with fstream. My belief is that fstream must be largefile enabled. Any idea?
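    For comparison, here is a minimal sketch of the low-level path that does work; the file name is a placeholder, and the flags in the comment (those reported by "getconf LFS_CFLAGS") are the standard transitional large file compilation environment, which makes off_t 64-bit in a 32-bit program:

        /*
         * Sketch: sequential read of a >2GB file from a 32-bit program.
         * Compile with the large file environment, e.g.
         *   cc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 bigread.c
         * The path below is a placeholder.
         */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/types.h>

        int main(void)
        {
            char buf[65536];
            ssize_t n;
            off_t total = 0;       /* 64-bit under _FILE_OFFSET_BITS=64 */

            int fd = open("/export/data/bigfile.dat", O_RDONLY);
            if (fd == -1) {
                perror("open");
                return 1;
            }
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                total += n;        /* no wraparound at 2GB */
            if (n == -1)
                perror("read");
            printf("read %lld bytes\n", (long long)total);
            (void) close(fd);
            return 0;
        }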

  • Along the lines of How To Read Large Files

    I have some mainframe extract files loaded onto a Solaris drive that are between 1 and 4 GB to be used in an initial load of a data warehouse. I can't even open a file with file sizes that large. (We're running JDK 1.2.2 - not sure if that matters.) I'm using this statement -
    bufReader = new BufferedReader(new InputStreamReader(new FileInputStream(fileName)));
    This is the error I get on that statement -
    java.io.FileNotFoundException: /wrhs_data/export.13.aug02 (Value too larg)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:68)
    at com.cofinity.importer.MyFileReader.open(MyFileReader.java:40)
    at com.cofinity.importer.MyFileReader.main(Compiled Code)
    The statement works fine on files less than 220 MB. It breaks somewhere between 220 and 804 MB.
    From the error message it seems that the underlying Native call can't handle opening such a large file. I've searched for the "value to larg" sub-message and found nothing. I tried eliminating the BufferedReader and just using the InputStreamReader, but I received the same error.
    Does anyone know how Java can read large files in the 1 to 4 GB range? (I suppose I could use something like Informatica to split the files up, but our disk space is at a premium.) Any help would be greatly appreciated.
    Thanks,
    Steve

    Well, it appears to fail in open(). I tried your code on a binary file of size 25739135241 bytes (23.9+ gibibytes) on AIX and it did just fine, so it may be something in the runtime. Try upgrading to a newer JDK/SDK; failing that, use the OS to stream in your data:

        BufferedReader br = new BufferedReader(
                new InputStreamReader(System.in)
        );

    and just pipe/redirect your file to your Java process's standard in.

  • Axis large file uploads

    Hi,
    I am running Axis 1.3 on WebSphere z/OS and I am uploading large files. I am getting the following errors.
    Error 1:
    1) I tried to upload a file of size 30MB; initially the upload succeeded.
    2) I tried to upload 30MB again, but this time I get the error below:
    Exception in thread "main" AxisFault
    faultCode: {http://xml.apache.org/axis/}HTTP
    faultSubcode:
    faultString: (0)null
    faultActor:
    faultNode:
    faultDetail:
    {}:return code: 0
    {http://xml.apache.org/axis/}HttpErrorCode:0
    (0)null
    at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:744)
    at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:144)
    at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
    at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
    at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
    at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
    at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
    at org.apache.axis.client.Call.invoke(Call.java:2767)
    at org.apache.axis.client.Call.invoke(Call.java:2443)
    at org.apache.axis.client.Call.invoke(Call.java:2366)
    at org.apache.axis.client.Call.invoke(Call.java:1812)
    at gov.ssa.ere.client.EREWSSOAPStub.submitBulk(Unknown Source)
    at gov.ssa.ere.client.StubClient.main(Unknown Source)
    The same error occurs when I try to upload more than 30MB.
    Error 2:
    Sometimes my request reaches the server, and the server even processes it, but in the meantime my client gets the error below (I have also set the client timeout to 100000):
    Exception in thread "main" AxisFault
    faultCode: {http://xml.apache.org/axis/}HTTP
    faultSubcode:
    faultString: (0)null
    faultActor:
    faultNode:
    faultDetail:
    {}:return code: 0
    {http://xml.apache.org/axis/}HttpErrorCode:0
    (0)null
    at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:744)
    at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:144)
    at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
    at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
    at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
    at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
    at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
    at org.apache.axis.client.Call.invoke(Call.java:2767)
    at org.apache.axis.client.Call.invoke(Call.java:2443)
    at org.apache.axis.client.Call.invoke(Call.java:2366)
    at org.apache.axis.client.Call.invoke(Call.java:1812)
    at gov.ssa.ere.client.EREWSSOAPStub.submitBulk(Unknown Source)
    at gov.ssa.ere.client.StubClient.main(Unknown Source)

    I would imagine that this has to do with you reading the entire file into memory to send it via HTTP. This would be a limitation of the WebKit implementation that currently exists. I don't know if it has been fixed in the latest AIR 2.0 beta that just got pushed out onto Labs.
    Ultimately, you may want to take one of these OSS projects you found (the as3httpclientlib looks most promising) and compile a very small Flash app that co-exists with your JS AIR app. You would be able to call it directly from JS and use the functions you expose just like the rest of the APIs that exist in the AIR framework.

  • [BUG] Performance Issue When Editing Large FIles

    Are other people experiencing this on the latest JDevelopers (11.1.1.4.0 and the version before it) ?
    People in the office have been experiencing extremely slow performance happening seemingly at random while editing Java files. We have been using JDev for almost 10 years and this has only become an issue for us recently. Typically we only use the Java editing and the database functionality in JDev.
    I have always felt that the issue was related to network traffic created by ClearCase and have never really paid attention, but a few days ago, after upgrading to the latest version of JDev, I started having slowdowns that affect my speed of work for the first time, so I decided to look into it.
    The main symptom is the editor hanging for an unknown reason in the middle of typing a line (even in a comment) or immediately after hitting the carriage return. All PCs in the office have 2GB or more of RAM and are well within the recommended spec.
    I've been experimenting for a few days to try and determine what exactly is at the root of the slowness. Among the things I have tried:
    o cutting ClearCase out of the equation; not using it for the project filesystem; not connecting to it in JDev
    o Not using any features other than Java editing in JDev (no database connections)
    o never using split panes for side by side editing
    o downloading the light version of JDev
    o Increasing speed of all pop-ups/dynamic helpers to maximum in the options
    o disabling as many helpers and automations as possible in the options
    None of these have helped. Momentary freezes of 3-5 seconds while editing are common. My basic test case is simply to move the cursor from one line to another and to type a simple one-line comment that takes up most of the line. I get the freeze most often right after typing the "//" to open the comment - it happens almost 100% of the time.
    I have, however, noticed a link to file size/complexity.
    If I perform my test on a small or medium-sized file of about 1000 lines (20-30 methods), performance is always excellent.
    If I perform it on one of our larger files of 10000 lines (more than 100 methods), the freezes while editing almost always occur.
    It looks like there is some processor-intensive work going on (which cannot be turned off via the options panel) that is taking control of the code editor and not completing in a reasonable amount of time on large Java files. I suspect it is somehow related to the gutter on the right-hand side, which shows little red and yellow marks for run-time reports of compile errors and warnings; I haven't found any way to disable it so I can check.

    Just a small follow-up....
    It looks like the problem is happening in only a single Java file in our product! Unfortunately, it happens to be the largest and most often updated file in the project.
    I'm still poking around to figure out why JDev is consistently choking on this one particular file and not on any of the others. Its size/complexity is not much greater than that of the next largest file, which can be edited without problems. The problem file is a little unusual in that it contains a large number of static functions and members.
    Nice little mystery to solve.

  • BT Cloud - large file ( ~95MB) uploads failing

    I am consistently getting upload failures for any files over approximately 95MB in size.  This happens with both the Web interface, and the PC client.  
    With the Web interface the file upload gets to a percentage that would be around the 95MB amount, then fails showing a red icon with a exclamation mark.  
    With the PC client the file gets to the same percentage equating to approximately 95MB, then resets to 0%, and repeats this continuously.  I left my PC running 24/7 for 5 days, and this resulted in around 60GB of upload bandwidth being used just trying to upload a single 100MB file.
    I've verified this on two PCs (Win XP, SP3), one laptop (Win 7, 64 bit), and also my work PC (Win 7, 64 bit).  I've also verified it with multiple different types and sizes of files.  Everything from 1KB to ~95MB upload perfectly, but anything above this size ( I've tried 100MB, 120MB, 180MB, 250MB, 400MB) fails every time.
    I've completely uninstalled the PC Client, done a Windows "roll-back", reinstalled, but this has had no effect.  I also tried completely wiping the cloud account (deleting all files and disconnecting all devices), and starting from scratch a couple of times, but no improvement.
    I phoned technical support yesterday and had a BT support rep remote-control my PC, but he was completely unfamiliar with the application and, after fumbling around for over two hours, had no suggestion other than waiting longer to see if the failure would clear itself!
    Basically I suspect my Cloud account is just corrupted in some way and needs to be deleted and recreated from scratch by BT.  However I'm not sure how to get them to do this as calling technical support was futile.
    Any suggestions?
    Thanks,
    Elinor.
    Solved!
    Go to Solution.

    Hi,
    I too have been having problems uploading a large file (362MB) for many weeks now, and as this topic is marked as SOLVED, I wanted to let BT know that it isn't solved for me.
    All I want to do is share a video with a friend and thought that BT cloud would be perfect!  Oh, if only that were the case :-(
    I first tried web upload (as I didn't want to use the PC client's Backup facility) - it failed.
    I then tried the PC client Backup.... after about 4 hrs of "progress" it reached 100% and an icon appeared.  I selected it and tried to Share it by email, only to have the share fail and no link.   Cloud backup thinks it's there but there are no files in my Cloud storage!
    I too spent a long time on the phone to Cloud support during which the tech took over my PC.  When he began trying to do completely inappropriate and irrelevant  things such as cleaning up my temporary internet files and cookies I stopped him.
    We did together successfully upload a small file and sharing that was successful - trouble is, it's not that file I want to share!
    Finally he said he would escalate the problem to next level of support.
    After a couple of weeks of hearing nothing, I called again and went through the same farce again with a different tech.  After which he assured me it was already escalated.  I demanded that someone give me some kind of update on the problem and he assured me I would hear from BT within a week.  I did - they rang to ask if the problem was fixed!  Needless to say it isn't.
    A couple of weeks later now and I've still heard nothing and it still doesn't work.
    Why can't Cloud support at least send me an email to let me know they exist and are working on this problem.
    I despair of ever being able to share this file with BT Cloud.
    C'mon BT Cloud surely you can do it - many other organisations can!

  • TC and WD500 External drive - Locks up when copying large files

    I have a TC with a Western Digital 500GB My Book connected to the USB port on the TC that I use for network storage. Everything works fine until I try to copy large files or directories from the Finder-mounted share. After the copy freezes, the external drive can no longer be seen by AirPort Utility. If I look at disks it sees the disk but no partition. I have to power-cycle the external drive to have the TC see the drive again. Then I disconnect the drive, plug it directly into the Mac, and run Disk Utility, and this repairs the journal. I have to do the repair every time it locks up. Should the external drive be formatted differently, or does it have to be connected to a USB hub? Or is this just a bug with the TC and AFP?

    I have this very same problem.
    I tried to copy a 4GB+ folder from the Macbook to the 500GB MyBook attached to my TC and it froze.
    Now the files on the MyBook are invisible when attached to the TC ("0 Items") but are available when the drive is connected directly to the Macbook. I have run Disk Utility but it hasn't helped. The disk is FAT32-formatted and the files on the MyBook are still invisible over the network.
    When the drive attached to the Macbook directly I can see a 'phantom' file with the same title as the 4GB+ folder I'd tried to copy. This file is "ZERO KB" and it cannot be deleted (error code -43).
    Anyone have a clue what is happening here?

  • Unable to transfer large files from MB to External HDs

    Hi,
    This may be an unusual one. My 1st edition Macbook won't let me transfer large files to my external hard-drives, either by USB, Ethernet or wirelessly through my home wi-fi network.
    I've started video editing so I'm importing DVcam tapes through firewire from the camera. A full DV tape translates to about 8-12gb I think.
    Using iMovie and saving the import to my Freecom 3.5-inch network hard drive via USB or Ethernet, it fails saying 'Unexpected error, error code 1309'.
    When I try quicktime pro instead to import the tape to the external HD, I get 'Operation could not be completed, An attempt to add a resource to the file failed'. This happens too with my WD 2.5inch external drive.
    However, there is no such problem when I import to the MB's internal hard drive. When I then try to shift the large file off the laptop to an external drive, via USB, Ethernet or wirelessly, for editing and safe keeping, it fails halfway through.
    I have the same problem when I try back-up my 17gb virtual windows machine I use with VMware fusion off the laptop to an external drive.
    I am currently burning an 18GB iMovie project from the MB's internal HD over 5 DVDs using Toast's disc-spanning, and will reimport them to the external drive, which is really time-consuming.
    Also of note: Final Cut Express doesn't have these problems, as it seems to break up what would be an 18GB movie file into several smaller files, which my MB is quite happy to let go onto an external drive. However, the problem re-emerges in Final Cut Express when I try to import an HDV tape. Possibly FCE isn't breaking them up into small enough pieces for my MB to handle?
    Any ideas why this is happening and what I can do to fix it? Or does this mean a new laptop?

    Irish Apple Fan wrote:
    Hi,
    This may be an unusual one. My 1st edition Macbook won't let me transfer large files to my external hard-drives, either by USB, Ethernet or wirelessly through my home wi-fi network.
    I've started video editing so I'm importing DVcam tapes through firewire from the camera. A full DV tape translates to about 8-12gb I think.
    Using iMovie and saving the import to my Freecom 3.5-inch network hard drive via USB or Ethernet, it fails saying 'Unexpected error, error code 1309'.
    When I try quicktime pro instead to import the tape to the external HD, I get 'Operation could not be completed, An attempt to add a resource to the file failed'. This happens too with my WD 2.5inch external drive.
    I'm a bit late to the party, but specifically there's a 4GB file size limit in the FAT32 format that is standard on many external hard drives. I've run across this problem before, and usually it copies over most of the file and then quits when it reaches the limit.

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger-than-default file downloads, but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows, SharePoint or IIS optimizations? The files will often be downloaded from the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files; it is a stateless HTTP system like any other website in that regard. Given that your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted warning against doing it, as if large files are going to cause your SharePoint server and SQL to crash.
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded files are held in RAM.

  • In making a "highlights" movie, using clips from different imported iMovie events, can I delete the larger iMovie event file from the Events browser and still work w the smaller clips in the Projects browser w/o having the larger files still loaded?

    I have successfully imported 150 Sony digital 8mm movies (each one hour in length) into iMovie as 150 iMovie events. I have since successfully converted them from their original 13GB (.dv) files to exported, smaller 1.3GB (.m4v "large file") movies that I am happy with, using the iMovie Projects browser. So now I have 150 ".m4v" movies on my internal HD, as well as about half of my original raw-data ".dv" movies on my internal hard drive.
    Due to their large size (over 2 Tb), I do not have all the larger raw data (.dv) files on my 2Tb internal drive, just about half of them. What I want to do now is to create a new project in the Projects browser for each of my kids, and reload, starting with Tape #1, each of these larger files and do a highlights movie for each of my kids, wherein I pick out smaller clips from each 1 hour .dv iMovie event and paste them into the appropriate kid's Highlights project in the Projects browser.
    Here's my question: if I load the first 5 large files back onto my internal HD and paste various shorter clips into each of my kids' Highlights projects, and then delete those first 5 large files (they are backed up on 2 other 3TB external HDs), can I keep doing this (reloading the next 5 large .dv files to work with), and ultimately take each kid's Highlights project and export it as a .m4v movie EVEN THOUGH the earlier large .dv files are no longer on my internal HD? Or does my iMac need to have all these larger files loaded on my internal HD for me to eventually export each kid's Highlights project to a .m4v movie?
    I have a 2011 era 27" iMac desktop w 2Tb HD internal and 250 Gb flash drive, and Lion OSX and iMovie 11.

    Thanks. I tested it out and you were correct. I loaded 2 .dv movies from my external HD back onto my internal HD, got them re-imported into iMovie, took a few short clips from each of the 2 iMovie events, and pasted them into a new project in the Project browser. Then I deleted those 2 "source clips" from my internal HD, closed iMovie, re-opened it, and found that iMovie would NOT export the smaller clips for a "highlights" .m4v movie without the "source clips" being available.
    I read your link on QuickTime. It talks mostly about trimming, which is what I did with each iMovie event before I took each one as a project to export as a smaller .m4v file. But if I use QuickTime (do I need QT Pro or basic?), what advantage is there for me in using QT over iMovie (I must admit I am a novice at iMovie and have never used QT or QT Pro as a tool)? Will it then convert any edits I make to a .m4v movie, or do I need iMovie to do that?
    Does QT allow trimming multiple segments out of a movie during one edit session, or can you only do one at a time? By that I mean that, for example, when I use Sony's Picture Motion Browser for my .mts movies, you can only set one Start and one End point for each edit/trim you do: it does not allow you to set multiple Start/Stop points like iMovie allows in its Event or Project browser. You can only do one "trim" at a time, save it, and then reopen the file to do another trim. Not very useful.

  • If you delete a large file from the mac hard drive does it also remove it from time machine on external hard drive

    if you delete a large file from the mac hard drive does it also delete it from time machine on the external hard drive ?

    As the others say, no, the backup copy won't be deleted immediately, but it will eventually.
    If you want to delete all backups of a selected item, such as for space or security reasons, see #12 in Time Machine - Frequently Asked Questions.
