Large exp file direct to TSM

How can we direct exp to write directly to TSM, so that it doesn't fill up the disk?
Any example or Metalink note would help...

I'd probably suggest getting with your sysadmins or backup team on that. Usually what I have done in the past is take the dmp file on the file system and then ask the backup team or sysadmin to push it to tape (TSM). That's the best bet. If you have a disk space issue, try compressing your export dmp file.
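
If you only have room for the compressed dump, one way to keep disk usage down is to export through a named pipe so that only the compressed file ever lands on disk, and then hand that file to TSM. Below is a minimal sketch, assuming the TSM backup-archive client (dsmc) is installed and the node is registered with your TSM server; the staging directory /backup/exp is just a placeholder, so confirm the exact archive options with your backup team:

#!/bin/ksh
# Sketch only: compress the export on the fly through a named pipe,
# then archive the compressed dump to TSM.
export ORACLE_SID=MYDB
STAGE=/backup/exp                 # hypothetical staging directory

rm -f $STAGE/full_exp.pipe
mkfifo $STAGE/full_exp.pipe

# compress reads from the pipe in the background while exp writes to it
cat $STAGE/full_exp.pipe | compress > $STAGE/full_exp.dmp.Z &

exp system/password file=$STAGE/full_exp.pipe full=y consistent=y statistics=none log=$STAGE/full_exp.log

# push the compressed dump to TSM, then clean up the local pipe
dsmc archive $STAGE/full_exp.dmp.Z
rm -f $STAGE/full_exp.pipe

There is a longer writeup of the pipe/compress trick further down this page, under "IMPORT & EXPORT from/to compressed files directly".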

Similar Messages

  • Sending exp dmp file direct to tape

    I am wondering how to send an exp dmp file directly to tape; if someone could give me an example, it would help...

    10.2.0.3 on AIX TL07
    I am trying to exp my db directly to tape because of its big size, and I'm wondering if this is the right command to do that; I will replace /dev/rmt/0m according to our setup...
    exp user/password file=/dev/rmt/0m/sid.dmp volsize=50G full=y consistent=y direct=y statistics=none log=/oracle/fullexp.log
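
    A hedged sketch of the same idea (untested here; it assumes /dev/rmt/0m really is the raw tape device on your box, and it reuses the volsize value from your command). Note that exp writes to the device node itself, not to a file underneath it, so the file= value would normally be just the device:

    exp user/password file=/dev/rmt/0m volsize=50G full=y consistent=y direct=y statistics=none log=/oracle/fullexp.log

    If writing straight to the drive is awkward, another common pattern is to export to a named pipe and let dd copy the pipe to tape, which also gives you control over the tape block size:

    mkfifo /tmp/fullexp.pipe
    dd if=/tmp/fullexp.pipe of=/dev/rmt/0m bs=1024k &
    exp user/password file=/tmp/fullexp.pipe full=y consistent=y statistics=none log=/oracle/fullexp.log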

  • HT201302 Transferring large video file from Iphone

    My video file is extremely large and won't transfer by click and drag to hard drive. I have tried every way possible.  Any suggestions?

    Windows XP cannot import video files larger than 500 MB using Camera and Scanner Wizard. You can transfer these files directly from your device by selecting them in My Computer and dragging or copying the movie files to your computer's hard drive. If that doesn't work, you'll have to try third-party software, like this:
    http://www.wideanglesoftware.com/touchcopy/index.php

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and am facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 minutes on an HP-UX machine (4 GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the track too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is:
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse") STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • How to split large video files into multiple individual clips by date

    Hello. I'm looking for a freeware/shareware program (if one exists, or willing to pay for one if it does this well) that will:
    1) Analyze a large AVI file, identifying the start/stop points of each segment of video and the dates the video segments were shot.
    2) Then allow me to organize (for example, join 2 or more segments together if it's a video with a lot of start/stop points during the same shot) the segments identified in step 1 into logical clips that could be split apart from the 1 large AVI file.
    3) Lastly, it would split my one large AVI file into the multiple clips or files that I identified in step 2.
    A piece of software called FootTrack does a great job analyzing AVI files as I describe in steps 1 & 2, but it doesn't allow you to split or export the one avi file into multiple avi files.
    I really don't want to have to do this one clip at a time, I've got to believe there's a program out there that will automate this process for me.
    Thanks

    I don't know of a way to automate it. I just did about 15 years' worth. It wasn't too hard. I use an application called MPEG Streamclip. It's very easy and free to download, either from the Apple site or directly from its own site.
    http://www.squared5.com/
    Basically you set the in and out points with the i and o keys and export.

  • IMPORT & EXPORT from/to compressed files directly...

    A tip for newbies: how to import (export) directly to (from) compressed files using pipes. It's for Unix-based systems only, as I am not aware of any pipe-type functionality in Windows. The biggest advantage is that you can save lots of space, since a dump file grows almost 5 times or more when uncompressed (a compressed file of 20 GB may become around 100 GB uncompressed). As a newbie I faced this problem, so I thought about writing a post.
    Let's talk about export first. The method is: create a pipe, have exp write to the pipe (i.e. the file in the exp command is the pipe we created), and at the same time read the contents of the pipe, compress it (in the background) and redirect it to a file. Here is the script that achieves this:
    export ORACLE_SID=MYDB
    rm -f ?/myexport.pipe
    mkfifo ?/myexport.pipe
    cat ?/myexport.pipe |compress > ?/myexport.dmp.Z &
    sleep 5
    exp file=?/myexport.pipe full=Y log=myexport.log
    Same way for import: we create a pipe, zcat from the dmp.Z file, redirect it to the pipe and then read from the pipe:
    export ORACLE_SID=MYDB
    rm -f ?/myimport.pipe
    mkfifo ?/myimport.pipe
    zcat ?/myexport.dmp.Z > ?/myimport.pipe &
    sleep 5
    imp file=?/myimport.pipe full=Y show=Y log=?/myimport.log
    In case there is any issue with the script, do let me know :)
    Experts, please have a look... is it fine? (Actually I don't have Oracle installed on my laptop (though I have Fedora 6), so I couldn't test the scripts.)
    I posted the same on my blog too, just as a bookmark ;)
    Sidhu
    http://amardeepsidhu.blogspot.com

    Actually, only the compression runs in the background; the rest proceeds as normal. Just instead of giving a normal file, we use the pipe as the file.
    nice article about named pipes
    Sidhu

  • Large swap file (900MB), but no pageouts

    My MacBook Air 13" with 4GB of RAM accumulates a large swap file over time, but pageouts are 0 (page ins: 1.1million). Anyone know why? Is it possible to see which process/application currently have pages stored in the swap file?

    Sorry to dig this old thread up, but I am seeing identical behavior to the original poster, and I just wanted to say: you did an excellent job of explaining how page-ins can be very large with no page-outs, but I don't think this explains the real mystery, which is that there is a large amount of swap space, and a large amount the system says is used, but there are no page-outs. You have not explained how a swap file can grow in usage with no page-outs, and if I understand things correctly, this should not be possible.
    I'm having the same issue on my new MacBook Pro with Retina display. I have 16GB of RAM and for the most part I don't use more than 4-6GB of that—I bought it for the occasional times I need to do a lot of VM testing, but I haven't needed to do that yet. I consistently see my swap usage grow to be as large as 2-3GB with a total size for all the swapfiles in /var/vm being 3-4GB.
    I don't need the space, and the system isn't slow or anything. I just want to know how this is possible. I have been using Mac OS X for 10 years now, and working on linux servers for 5 years or so. I've never seen swap usage be more than 0KB when there are no page outs.
    I've attached some screenshots of what I am seeing:
    Screen capture from Activity Monitor.
    Screen capture from Terminal executing 'du -hsc /var/vm/swapfile*' to tally the total size of the swapfiles.
    I should note that it tends to take a day or two of use to start to see this, over a series of sleep cycles here and there. I put my laptop to sleep at night as well as to and from work, etc. It probably sleeps/wakes 5-7 times a day in all. I tend to notice that the usage creeps up, starting around 50 MB, then I will notice it being a few hundred some time later. It really makes me wonder if this has to do with some kind of integrated vs. discrete graphics switching or something, perhaps a very low-level operation that is somehow avoiding getting counted by the system's resource tracking facilities. I have no idea, but I would love it if someone out there could explain it or point me in the right direction.
    Thanks for your time.

  • How do I compress a large pdf file to fit in an email?

    how do I compress a large pdf file to fit in an email?

    No
    Sent from Molto for iPad
    From: pwillener
    Sent: Thursday, February 12, 2015, 07:14 AM
    To: René Allamelle
    Subject: how do I compress a large pdf file to fit in an email?
    But generally it is never a good idea to send e-documents as email attachments.  Better use a file sharing service (Acrobat.com, Dropbox, Google Drive, Microsoft OneDrive, ..), upload the document, then send the shared download link via email.

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: If the image is the same and only the text is different why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4Ghz, 1GB Ram, 256MB Video Card) I have taken a postcard pdf (the backside) placed it in a document, then I draw a text box. Then I select a data source and put the fields I wish to print (Name, address, zip, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created with four postcards per page and the mailing data on each card, I go to print. When I print the file it spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes FOREVER. So again my question is... if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, and still the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because the Graphics Device Interface (GDI) does not compress raster data when it generates and processes EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print processor-based features, such as the following:
    N-up
    Watermark
    Booklet printing
    Driver collation
    Scale-to-fit
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • Opening large photo files produces black screen

    When opening a small (less than a meg) photo, it works fine; I'm able to move to the next file. If I open a large file (8-10 megs) it sometimes works fine, however if I use the arrow to move to the next file in the folder I get a black screen. Also, if I have a folder of many large photo files I can see all the thumbnail photos at a small size; if I try to increase the view size to large, some will be black, however if I click on the file many times it sometimes opens fine, sometimes not. It was working fine, and just recently started to act up. Using Adobe Photoshop I can open each file, so I know they are there and are not corrupt files.

    Hello @JohnBruin,
    Welcome to the HP Forums, I hope you enjoy your experience! To help you get the most out of the HP Forums I would like to direct your attention to the HP Forums Guide First Time Here? Learn How to Post and More.
    I have read your post on how opening large photo files produces a black screen on your desktop computer. I would be happy to assist you in this matter!
    If you boot your system into Safe Mode, are you able to open large photographs? Is this a recent or a reoccurring issue? 
    In the meantime, I recommend following the steps in this document on Computer Locks Up or Freezes (Windows 8). This should help prevent your system from defaulting to a black screen. 
    I would also encourage you to post the product number for your computer. Below is an HP Support document that will demonstrate how to find your computer's product number.
    How Do I Find My Model Number or Product Number?
    Please re-post with the results of your troubleshooting, as well as the requested information above. I look forward to your reply!
    Regards
    MechPilot
    I work on behalf of HP

  • Re: Compiling large pgf file?

    Hi Gilbert,
    I didn't find any solution to this problem. Forte's take was that it's Microsoft's compiler problem, and Microsoft's take was that we are doing something abnormal, i.e. "Why are you compiling such a big project?", which doesn't sound good. Anyway, we have now switched to the Sun Solaris platform and I think it's no longer an issue.
    --Shahzad
    Gilbert Hamann wrote:
    Shazad,
    Back in October of 1997 you posted a request for help to the
    Forte-Users
    list regarding the compilation of a large (12MB) .pgf file. Following
    the
    thread, I didn't see if you had any success.
    We are following the same path, and are trying to compile a 22 MB .pgf
    file.
    For the future we are rearchitecting our application to make this more
    manageable, but for now, getting this to compile would help us a great
    deal.
    The same as you, we are getting:
    fatal error LNK1141, failure during build of exports error during
    compilation, aborting
    Did you manage to work around this problem?
    Any help would be much appreciated.
    Thanks,
    -gil
    Gilbert Hamann
    Team Lead, Energy V6 Performance Analysis
    Descartes Systems Group, Inc.
    [email protected]

    Thanks folks.
    I would like to try the suggested workaround at a future point in time.
    One of our initial attempts was similar, but consisted of splitting the .obj
    files into two libraries (using lib.exe) and then linking these two
    libraries to main.obj. It might have worked but we then had another problem
    (our own) which I didn't solve until later.
    The main solution for us seems to be to use MSVC 6.0 (instead of 5.0). When
    there are no other problems in our code, setup, or environment, the compile
    and link works flawlessly, albeit slowly.
    -gil
    -----Original Message-----
    From: Fiorini Francesco [mailto:[email protected]]
    Sent: October 19, 1998 9:43 AM
    To: 'cshahzad'; Gilbert Hamann; [email protected]
    Subject: R: Compiling large pgf file?
    Hello chaps,
    presently here in Ds we have a workaround for that problem
    here are the steps
    1) when fcompile goes belly up, it produces a linkopt.lrf file containing
    the link command
    2) save it (e.g. as linkopt.old), just to have a backup copy
    3) edit the original linkopt.lrf file by adding the /VERBOSE option to the link statement. That will tell the linker to generate more verbose output.
    4) from the command line, issue a
            link @linkopt.lrf > mylink.log
    5) edit mylink.log, and copy the statements following the lib.exe statement into a new file called, e.g., lib.lrf. Make sure the lib.exe line itself is not included in the file
    6) do, from the prompt,
            lib @lib.lrf
    Once done, you should have a .LIB and a .EXP file in the codegen directory.
    At that point, re-issue the
    link @linkopt.old
    command.
    You should get the executable.
    Bye
    Francesco Fiorini
    sys.management dept.
    ds data systems s.p.a
    43100 Parma - Italy
    tel +39-05212781

  • How to parse large xml file

    I need to parse a large XML file which contains the following tags. The size of the file is up to 10MB-50MB or more.
    <departments>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_253">
            <bss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
              </attributes>
            </bss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_254">
            <mss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_254">
            <mss_depart id="255">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="125">
          <b_depart id="Bss_254">
            <mss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
    </departments>
    I want to get the information from that XML file, like mss_depart id=233, by building an XPath dynamically for every id and loading it using dom4j, which is very, very slow.
    Is there any other solution, i.e. reading the data using a SAX parser only?
    I want to execute the XPath / retrieve the data in the following way:
    //a_depart/@id ------> all the ids of the a_depart tags; if it returns 3 values, say 123, 124, 125,
    after that I want to execute
    //a_depart[@id='123']/b_depart/@id and so on, to retrieve the values at all the levels...
         I am executing the following XPath for every unique id at all levels.
         List l = doc.selectNodes(xPathForID);
         List l1 = doc.selectNodes(xPathForAttributes+attributes.get(j)+"/text()");
    But it is very slow and takes a lot of time.
    Is there any other way to solve this problem? If so, please mail me; it is urgent.
    I am using jdk1.4 and jdk1.5
    Is there any support for a SAX parser to execute XPath in jdk1.5 directly, without using dom4j?
    Thanks in advance....

    I doubt you will find a preexisting solution to your problem.
    SAX is usually recommended for processing big files (where "big" is undefined). It works on big files by avoiding the messy problem of storing the data -- that is left as an exercise to you.
    DOM (and its variants) works by building a Document object as the head of the tree of objects for the entire contents. With DOM, you can then use XPath, because there is something to search that is already in memory. To use XPath, you seem to have two choices: build a DOM-ish tree, or find an XPath processor (I'm not sure one exists) that can process the XML file directly, but that will be slow, since you are looking for "all" occurrences of an attribute, and this means you have to read the entire file each time.
    It might be worth exploring a hybrid approach -- use SAX to get some information, and build your own objects to store the data. Maybe a HashMap as the main index. But that will keep you from using XPath, since you do not have the data structures it expects.
    A third alternative would be to look at JAXB. It builds Java code from a schema of your data, and then when you import the data, it creates the necessary objects and fills in values. But I don't think XPath will work there either.
    Dave Patterson

  • How to insert large xml file to XMLType column?

    Hi,
    I have a table with one column as XMLType (Binary XML storage option and free-text indexing). When I try to insert a large XML, as long as 8 KB, I'm getting the error ORA-01704: string literal too long.
    Insert into TEST values(XMLTYPE('xml HERE'));
    How to insert large XML values to XMLType column?
    Regards,
    Sprightee

    For a large XML file, you basically have two options - you can load the string directly as an XMLType, or you can load the string as a CLOB and cast it on the database side to an XMLType.
    If you decide to load the XML as XmlType client-side, then you may be interested to know that versions of Oracle after 11.2.0.2 support the JDBC 4.0 SQLXML standard. See the JDBC driver release documentation here:
    http://docs.oracle.com/cd/E18283_01/java.112/e16548/jdbcvers.htm#BABGHBCC
    If you want to load as a CLOB, then you'll need to use PreparedStatement's setClob() method, or allocate an oracle.sql.clob object.
    For versions before 11.2.0.2, you can create an XMLType with a constructor that includes an InputStream or byte[] array.
    HTH

  • About to buy Panasonic HDC-SD9: Advice on handling large AVCHD files

    I'm getting married soon and want to buy a hi-def camcorder to use during the wedding ceremony and honeymoon. I like the Panasonic HDC-SD9 but have some basic questions. Please excuse my ignorance!
    a) I understand that the HDC-SD9 records in AVCHD, which on conversion to AIC in IMovies 8 greatly increases the file size. The harddrive on my Macbook is only 120GB and a converted one-hour AVCHD video is likely to make a serious dent into this. What is the best way to handle this?
    b) Should I use Disk Utility to make a copy of the SD card and delete the files from the SD card? How do I resurrect this when I need to use the raw footage? Can I play directly from the disk copy?
    c) Should I convert the large AIC file into another form to save it on my harddrive? What is the best format to reduce file size? How do I do this?
    d) Ideally, we want to give our wedding guests a copy of the wedding footage on a standard def DVD. Can we use IDVD to make a DVD from the AIC file? How much footage can be stored on a normal DVD?
    e) What programmes can be used to play the AVCHD file? Can I play it back on Quicktime 7.5.5?
    Many thanks

    If you go the AVCHD route, my suggestion for most of the questions is to buy an external hard drive (formatted to Mac OS Extended), load your footage onto that, and keep the drive. Then guard it with your life.
    IMO: tape is still an option that has advantages. The tape is a backup, and tapes are inexpensive.
    Most NLEs work with the format, and the file sizes are smallish: AVCHD can bloat to nearly 50 GB an hour; DV is no more than 13.5 GB.
    Al

  • Extracting multiple clips from large .mov file

    Hi
    For some reason this summer our camcorder has amalgamated all our clips into one large 6 GB .mov file called "Private.mov". I can't import the separate clips into iMovie straight from the SD card as previously.
    I can open the file in QuickTime, open each clip separately, export it to a separate file and then import it into iMovie. This works, but as there are over 300 clips it will take forever. I can open multiple clips, but can't do more than 10 at a time or it gets confusing on screen as to what has been saved, etc.
    Is there any way of automating this and doing it in one batch, or
    What has gone wrong with the camera/the file this time?
    Thanks for your help
    Sarah

    Why don't you import the file directly into iMovie, which will create individual clips from the original file?
