Audio extraction from 400+ QT files with between 1-8 channels

Hi there,
I need to extract all the audio channels individually from more than 400 QT files, with the number of channels ranging from 1 to 8.
Do you know of any script or Automator action out there that could help me? Ideally I want to extract the audio as AIFF.
I've seen these, but as far as I can tell they won't extract each track/channel on its own:
http://developer.apple.com/samplecode/ExtractMovieAudioToAIFF/index.html
http://www.deepniner.net/xtract2wave/
Thanks for any help.
Macbook Pro   Mac OS X (10.4.8)  
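
For anyone tackling this today: a small script that shells out to ffmpeg can do the per-channel split in bulk. The helper below is a sketch, not a tested pipeline — the file names are illustrative, and you should verify the channel layout ffprobe reports for your files before trusting the pan-filter mapping:

```python
from pathlib import Path

def aiff_split_commands(movie: Path, channels: int) -> list:
    """Build one ffmpeg command per channel, each writing a mono AIFF.

    The pan filter "pan=mono|c0=cN" routes input channel N to the single
    output channel. Run each command with subprocess.run(); the channel
    count per file can be read beforehand with ffprobe.
    """
    cmds = []
    for ch in range(channels):
        out = movie.with_suffix(f".ch{ch}.aiff")
        cmds.append(["ffmpeg", "-y", "-i", str(movie),
                     "-af", f"pan=mono|c0=c{ch}", str(out)])
    return cmds
```

Wrapped in a loop over `Path(".").glob("*.mov")`, this covers the whole 400-file batch.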

I have brought my export back into PP and I don't see audio. As you suspect, my export settings were incorrect. I failed to set Channels to 8 Channel; it was set to Stereo. I am creating a new test file for the broadcaster.
Thank you!

Similar Messages

  • Audition jumps time code forward on audio extracted from a .mov file

    I have a video file with a starting time code of 10:41:17:23.   I open this file in Audition, apply a noise reduction effect, and export the file out as a new .wav file.
    However, instead of the audio file having a starting time code of  10:41:17:23, the time code on the file starts at  10:41:34:20. 
    So, the time stamp has jumped forward 16 seconds, 21 frames.   And, the jump amount is not consistent.  Another file had a 37 second plus jump.
    Since we are talking about the STARTING time stamp, in all instances, I believe frame rate, codec, etc. are immaterial unless someone can convince me otherwise.
    If I was dealing with different frame rates/codecs/etc., I could understand the time code being out of sync later in the file, but I cannot understand why it would be out of whack at the very start of the file.
    Anyone got any ideas on possible cause/cure?   It's a pain having to manually search for the audio file for the correct location instead of being able to just punch in the same time code the video uses.

    OK. I double checked and the timecode showed 23.976 for my test .mov file in Premiere Pro, with a sample rate of 48,000 Hz, 32-bit.
    I made sure my timecode default was set to SMPTE 23.976 in Preferences/Time Display in Audition.  I then opened the .MOV file in Audition, let it split the audio out, and moved the audio to the editor panel.
    The timecode in the audition editor panel shows 23.976.  The preview panel also shows 23.976. 
    For what it's worth, the "time code" block of blue numbers in the lower left of the Audition preview panel shows 00:00:00:00. I believe this is because the primary "Time Code Start" field in the Canon 70D .mov files is empty and the camera is putting the time code in the Alternate Time Code fields.
    I then opened the Effects drop down menu and applied Noise Reduction/Restoration, option Adaptive Noise Reduction, using the preset "Light Noise Reduction."
    After the apply completed I saved the modified audio file with a format of Wave PCM, leaving the Sample type at the default of 48000 HZ Stereo, 32-bit and Format settings of Wave Uncompressed 32-bit floating point (IEEE).
    I left the box checked next to "Include Markers and Other Metadata."  And, clicked OK.
    I then went to Premiere Pro and opened the just-saved audio file. The starting time code on the file is 10:38:52:22. This does not match the starting time code on the original MOV file, which is 10:38:14:10.
    So, based on my limited understanding of both Premiere Pro and Audition, I am getting the exact same sample rate settings and time code settings all the way through the process in both Premiere Pro and Audition, up to the point of saving the modified file out of Audition. It is at the point of saving that things get changed.
    Charles asked me to post screenshots and a sample file somewhere.  I am working on doing those tonight.  I just have to figure out where to post them.  My original test files were quite large, but I got the same results with a 2 second file in my second test, so I'll post those files someplace.

  • How to extract one of several audio tracks from a .vob file?

    Hi guys,
    hope you can help me; I've been looking for a solution for hours now but can't find one.
    I would like to know how to extract one audio track from a .vob file when there are different tracks/languages on the DVD. Media Encoder does it very well, but I can't find the option for choosing the right audio track/language for extraction. Please help.
    Kind regards
    Alex

    This is because the OMF specification requires mono audio files
    The best answer really depends on what you are trying to do with the two tracks, as far as 'grouping' goes
    You could select all the clips on both tracks and right mouse click and select 'group'
    You could send both tracks to a bus
    You could export the two tracks to a stereo file and bring that back into the session
    Lots more things I'm sure but a little more detail as to what you are trying to do would help

  • Problem parsing XML with schema when extracted from a jar file

    I am having a problem parsing XML with a schema, both of which are extracted from a jar file. I am using ZipFile to get InputStream objects for the appropriate ZipEntry objects in the jar file. My XML is encrypted, so I decrypt it to a temporary file. I am then attempting to parse the temporary file with the schema using DocumentBuilder.parse.
    I get the following exception:
    org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element '<root element name>'
    This was all working OK before I jarred everything (i.e. when I was using standalone files, rather than InputStreams retrieved from a jar).
    I have output the retrieved XML to a file and compared it with my original source and they are identical.
    I am baffled because the nature of the exception suggests that the schema has been read and parsed correctly but the XML file is not parsing against the schema.
    Any suggestions?
    The code is as follows:
      public void open(File input) throws IOException, CSLXMLException {
        InputStream schema = ZipFileHandler.getResourceAsStream("<jar file name>", "<schema resource name>");
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = null;
        try {
          factory.setNamespaceAware(true);
          factory.setValidating(true);
          factory.setAttribute(JAXP_SCHEMA_LANGUAGE, W3C_XML_SCHEMA);
          factory.setAttribute(JAXP_SCHEMA_SOURCE, schema);
          builder = factory.newDocumentBuilder();
          builder.setErrorHandler(new CSLXMLParseHandler());
        } catch (Exception builderException) {
          throw new CSLXMLException("Error setting up SAX: " + builderException.toString());
        }
        Document document = null;
        try {
          document = builder.parse(input);
        } catch (SAXException parseException) {
          throw new CSLXMLException(parseException.toString());
        }
      }

    I was originally using getSystemResource, which worked fine until I jarred the application. The problem appears to be that resources returned from a jar file cannot be used in the same way as resources returned directly from the file system. You have to use the ZipFile class (or its JarFile subclass) to locate the ZipEntry in the jar file and then use ZipFile.getInputStream(ZipEntry) to convert this to an InputStream. I have seen example code where an InputStream is used for the JAXP_SCHEMA_SOURCE attribute but, for some reason, this did not work with the InputStream returned by ZipFile.getInputStream. Like you, I have also seen examples that use a URL but they appear to be URL's that point to a file not URL's that point to an entry in a jar file.
    Maybe there is another way around this but writing to a file works and I set use File.deleteOnExit() to ensure things are tidied afterwards.
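
The workaround described — pull the entry out of the archive and spill it to a temporary file so file-oriented APIs can consume it — looks like this in outline (shown with Python's zipfile for brevity; the function name and suffix handling are illustrative):

```python
import os
import tempfile
import zipfile

def entry_to_tempfile(archive: str, name: str) -> str:
    """Copy one archive entry to a temp file and return its path.

    Mirrors the jar workaround above: read the entry as bytes via the
    zip API, then hand downstream code a real file it can reopen.
    """
    with zipfile.ZipFile(archive) as zf:
        data = zf.read(name)
    fd, path = tempfile.mkstemp(suffix=os.path.splitext(name)[1])
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path
```

As in the Java version, remember to delete the temporary file when you are done with it.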

  • Error message "No such interface supported" when playing Audio/Media from a USB thumb disk with a Metro app like Xbox

    There is an error message, "No such interface supported", when playing Audio/Media from a USB thumb disk with a Metro app like Xbox in Windows. How can I solve this problem? If I use a desktop player like Windows Media Player, there is no such error.
    Would you please give some solution for this error? Thanks!

    The OS is Windows 10, and the Metro app is Xbox or MultiMedia 8; any video format will produce the error, only with the following steps:
    a. Put Audio / Media / Photo files into USB thumb disk or Micro SD card.
    b. Connect USB thumb disk / Micro SD Card  to Platform.
    c. Open 'File Explorer', click USB thumb disk.
    d. Open audio file with metro app: Music. / Open Media file with metro app: Video./Open Photo file with metro app.

  • 2nd Audio track from merged .m4v files does not play on iPod Touch

    I used Handbrake to encode two .m4v files of a concert DVD that spans two DVDs. I then combined those two files in QuickTime Pro to create a single .m4v file. The file plays fine in QT, but when I transfer the file to my iPod, only the first audio track (from the first file) plays. Once the video from the second file plays, the audio is silent. Here's what the movie properties look like:
    Enabled Name Start Time Duration Format
    ====================================================
    Y Video Track -- 0:00:00:00 -- 2:21:53:90 -- H.264
    Y Audio Track1 -- 0:00:00:00 -- 2:21:53:90 -- AAC
    Y Audio Track2 -- 1:27:28:87 -- 0:54:24:02 -- AAC
    N Text Track -- 0:00:00:00 -- 2:21:53:90 -- Text
    Any suggestions?
    Thanks,
    Dan

    So basically, anytime I combine more than one mp4 file in QT7 Pro, it will always create multiple audio tracks when doing a simple "save as" (vs. export as)
    Not exactly. QT has its own set of rules. As you noted, the video track had no problem merging into a single track. This would not have happened if you had merged video compressed differently or added an image/stretched image sequence. Not knowing your specific workflow or settings, I hesitate to make any claims here. Frankly, when merging audio, I normally prefer to use GarageBand when I want to create a single master audio track and still retain control of the other tracks for mixing purposes. However, I could just as well have exported the audio in the original file using QT Pro to a single audio file and used that audio track to replace both original tracks, keeping both the original video and chapter tracks. It is all a matter of which workflow you consider easiest for a particular job or project.
    If I've exported my chapter file as a plain text file, how do I go about reintegrating that within the final mp4 file using QT?
    http://www.apple.com/quicktime/tutorials/chaptertracks.html
    The above link is a somewhat old tutorial explaining how to create a chapter text track from scratch. Since you already have the chapter track, just refer to the steps indicating how to add the text file back to your main movie file. (Ignore the step that says to set a "Preload" option as this pop-up has been removed.)

  • I am trying to create a print ready PDF from a word file with unacceptable results.

    I am trying to create a print ready PDF from a word file with unacceptable results.
    The word file has a trim size of 6” x 9”. It has been set to mirror margins with the inner, top and bottom margins set to 0.75”, the outer margin is set to 0.5” and the gutter to 0.14”.
    It doesn’t matter whether I create the PDF from inside Word or open Acrobat Pro 11.0.9 and click Create From File: while the resulting document size is correct and the odd-numbered pages reflect the correct margins, the even-numbered pages do not. This results in some text near the outer margin, as well as the page numbers, being omitted.
    Does anyone know how to correct this?
    I just noticed that some of the odd numbered pages' text is also cropped. Apparently Acrobat is refusing to set side margins to smaller than 1" (@ 3cm).

    I'm away from my printer, so I'll try it later. Even so, the proposed test is irrelevant. I operate a small publishing house and am trying to upload certain novels to Ingram, the largest book distributor in the world. The specifications I've set are the specifications they've asked for. Since they've said that the results I'm obtaining are unacceptable, and since they demand submission in PDF form, this renders Acrobat Pro for Mac completely unacceptable for anyone in the publication industry. As far as I can tell, Adobe has a serious bug here that it needs to fix—and at once.

  • I can't use Thunderbird 31.2.0 to send an attachment from a PDF file with Acrobat Pro XI

    I can't use Thunderbird 31.2.0 (Mozilla) to send an attachment from a PDF file with Acrobat Pro XI. Each time, Acrobat replies (in French): "il n'y a pas de client pour la messagerie par défaut" ("there is no default mail client"). Thunderbird is indicated as the default in the list of programs in Windows.
    Even if I type my e-mail address, etc. in the Acrobat account settings, it doesn't change. Each time, I have to save the PDF file to a folder and return to Thunderbird to send it as an attachment. It is too long a process compared with Windows XP and the Acrobat Reader or Acrobat 5 I had before.
    Would you please help me understand the process? My OS is Windows 8.1.
    Thank you

    Thank you for your answer.
    What I want to do: send PDF files as attached documents in a message
    generated by Thunderbird from my e-mail address [email protected] or
    another one I have.
    When I enter my e-mail address under Edit > Preferences > Email
    Accounts, Acrobat asks for the e-mail account, the password, the IMAP for
    incoming messages (and I use POP) and SMTP for outgoing messages. Even if I
    enter all the data, it doesn't change; Acrobat is unable to send
    the message. And the process is not convenient, because I need all my
    outgoing messages to be documented inside Thunderbird.
    So, I repeat my request: how can I use Thunderbird as the default program
    from Acrobat or any other software?
    Thank you for your next message.
    Jean-Luc Rongé

  • How can i make a picture from a video file with final cut pro x?

    how can i make a picture from a video file with final cut pro x?

    Go to the "share" menu, select "save current frame"

  • How to extract Attribute Value from a DBC file with LabWindows and NI-XNET library

    Hi all,
    For my application, I would like to feed my LabWindows/CVI test program with data extracted from a *.dbc file (created by another team under Vector CANdb++).
    These files contain all the CAN frame definitions,
    and also some extra information added at:
    Message level,
    Signal level,
    Network level
    This extra information is set by using specific ATTRIBUTE DEFINITIONS - FUNCTIONALITY under Vector CANdb++.
    Opening the database works under the NI-XNET Database Editor as well as in LabWindows using: nxdbOpenDatabase ( ... )
    No attribute seems to be displayable under the NI-XNET Database Editor (that's not a problem for me).
    Now, how can I extract these specially created attributes using the NI-XNET API and CVI?
    Thanks in advance.
    PS : In attached picture, a new attribute called Test_NI, connected to a message
    Attachments:
    EX1.jpg ‏36 KB

    Hi Damien, 
    To answer your question on whether the XNET API on LabWindows/CVI allows you to gain access to the custom attributes in a DBC file, this is not a supported feature. The DBC format is proprietary from Vector. Also, custom attributes are different for all customers and manufacturers. Those two put together make it really difficult for NI to access them with an API that will be standard and reliable.
    We do support common customer attributes for cyclic frames. This is from page 4-278 in the XNET Hardware and Software Manual : 
    "If you are using a CANdb (.dbc) database, this property is an optional attribute in the file. If NI-XNET finds an attribute named GenMsgSendType, that attribute is the default value of this property. If the GenMsgSendType attribute begins with cyclic, this property's default value is Cyclic Data; otherwise, it is Event Data. If the CANdb file does not use the GenMsgSendType attribute, this property uses a default value of Event Data, which you can change in your application. "
    Link to the manual : http://digital.ni.com/manuals.nsf/websearch/32FCF9A42CFD324E8625760E00625940
    Could you explain to us the goal of this attribute, and why you need it in your application?
    Thanks,
    Christophe S.
    FSE East of France І Certified LabVIEW Associate Developer І National Instruments France
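
Since the API will not expose the custom attributes, one pragmatic route is to read them straight out of the .dbc file, which is plain text. A minimal sketch, assuming message-level attribute-value lines of the form `BA_ "Name" BO_ <message-id> <value>;` (verify the exact shape your CANdb++ export produces before relying on it):

```python
import re

# Matches message-level attribute-value lines, e.g.:
#   BA_ "Test_NI" BO_ 291 "hello";
BA_LINE = re.compile(
    r'^BA_\s+"(?P<name>[^"]+)"\s+BO_\s+(?P<msg>\d+)\s+(?P<value>.+);$')

def message_attributes(dbc_text: str) -> dict:
    """Map message id -> {attribute name: value} from raw .dbc text."""
    attrs = {}
    for line in dbc_text.splitlines():
        m = BA_LINE.match(line.strip())
        if m:
            attrs.setdefault(int(m.group("msg")), {})[m.group("name")] = \
                m.group("value").strip().strip('"')
    return attrs
```

Signal- and network-level attributes use slightly different line shapes (`BA_ "Name" SG_ …` and `BA_ "Name" <value>;`), so extend the pattern as needed.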

  • Extracting file from a TAR file with java.util.zip.* classes

    Is there a way to extract files from a .TAR file using the java.util.zip.* classes?
    I tried in some ways but I get the following error:
    java.util.zip.ZipException: error in opening zip file
    at java.util.zip.ZipFile.<init>(ZipFile.java:127)
    at java.util.zip.ZipFile.<init>(ZipFile.java:92)
    Thank you
    Giuseppe

    download the tar.jar from the above link and use the sample program below
    import com.ice.tar.*;
    import java.util.zip.GZIPInputStream;
    import java.io.*;
    public class untarFiles {
         public static void main(String args[]) {
              try {
                   untar("c:/split/20040826172459.tar.gz", new File("c:/split/"));
              } catch (Exception e) {
                   e.printStackTrace();
                   System.out.println(e.getMessage());
              }
         }
         private static void untar(String tarFileName, File dest) throws IOException {
              //assuming the file you pass in is not a dir
              dest.mkdir();
              //create tar input stream from a .tar.gz file
              TarInputStream tin = new TarInputStream(new GZIPInputStream(new FileInputStream(new File(tarFileName))));
              //get the first entry in the archive
              TarEntry tarEntry = tin.getNextEntry();
              while (tarEntry != null) {
                   //create a file with the same name as the tarEntry
                   File destPath = new File(dest.toString() + File.separatorChar + tarEntry.getName());
                   if (tarEntry.isDirectory()) {
                        destPath.mkdir();
                   } else {
                        FileOutputStream fout = new FileOutputStream(destPath);
                        tin.copyEntryContents(fout);
                        fout.close();
                   }
                   tarEntry = tin.getNextEntry();
              }
              tin.close();
         }
    }
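
For reference, the reason java.util.zip fails here is that tar is not a zip format at all, which is why a third-party tar reader is needed on the Java side. In Python the standard library covers it directly; a minimal equivalent of the helper above:

```python
import tarfile

def untar(tar_gz: str, dest: str) -> list:
    """Extract a .tar.gz archive into dest and return the member names."""
    with tarfile.open(tar_gz, "r:gz") as tf:
        tf.extractall(dest)  # on Python 3.12+, consider filter="data" too
        return tf.getnames()
```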

  • Slow extraction in big XML-Files with PL/SQL

    Hello,
    I have a performance problem with the extraction of attributes from big XML files. I tested with a size of ~30 MB.
    The XML file is the response of a web service. This response includes some metadata of a document and the document itself. The document is embedded inline as Base64. Here is an example of an XML file I want to analyse:
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
       <soap:Body>
          <ns2:GetDocumentByIDResponse xmlns:ns2="***">
             <ArchivedDocument>
                <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
                   <Metadata archiveDate="2013-08-01+02:00" documentID="123">
                      <Descriptor type="Integer" name="fachlicheId">
                      <Value>123</Value>
                      </Descriptor>
                      <Descriptor type="String" name="user">
                         <Value>***</Value>
                      </Descriptor>
                      <InternalDescriptor type="Date" ID="DocumentDate">
                         <Value>2013-08-01+02:00</Value>
                      </InternalDescriptor>
                      <!-- Here some more InternalDescriptor Nodes -->
                   </Metadata>
                   <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
                      <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
                   </RepresentationDescription>
                </ArchivedDocumentDescription>
                <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
                   <Data fileName="20mb.test">
                      <BinaryData>
                        <!-- Here is the BASE64 converted document -->
                      </BinaryData>
                   </Data>
                </DocumentPart>
             </ArchivedDocument>
          </ns2:GetDocumentByIDResponse>
       </soap:Body>
    </soap:Envelope>
    Now I want to extract the filename and the Base64-converted document from this XML response.
    For the extraction of the filename i use the following command:
    v_filename := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    For the extraction of the binary data i use the following command:
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    My problem is the performance of this extraction. Here is a summary of the start and end times for the commands:
    Start Time                    | End Time                      | Difference      | Command
    10.09.13 - 15:46:11,402668000 | 10.09.13 - 15:47:21,407895000 | 00:01:10,005227 | v_filename_bcm := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    10.09.13 - 15:47:21,407895000 | 10.09.13 - 15:47:22,336786000 | 00:00:00,928891 | v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    As you can see, the extraction of the filename is slower than the document extraction. For the extraction of the filename I need ~01:10 min.
    I wondered about it and started some tests.
    I tried to use an exact, non-dynamic filename. So I have these commands:
    v_filename := '20mb_1.test';
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Under these conditions the time for the document extraction soars. You can see this in the following table:
    Start Time                    | End Time                      | Difference      | Command
    10.09.13 - 16:02:33,212035000 | 10.09.13 - 16:02:33,212542000 | 00:00:00,000507 | v_filename_bcm := '20mb_1.test';
    10.09.13 - 16:02:33,212542000 | 10.09.13 - 16:03:40,342396000 | 00:01:07,129854 | v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    So I'm looking for a faster extraction out of the XML file. Do you have any ideas? If you need more information, please ask me.
    Thank you,
    Matthias
    PS: I use Oracle 11.2.0.2.0

    Although using an XML schema is a good advice for an XML-centric application, I think it's a little overkill in this situation.
    Here are two approaches you can test :
    Using the DOM interface over your XMLType variable, for example :
    DECLARE
      v_xml    xmltype := xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> 
           <soap:Body> 
              <ns2:GetDocumentByIDResponse xmlns:ns2="***"> 
                 <ArchivedDocument> 
                    <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***"> 
                       <Metadata archiveDate="2013-08-01+02:00" documentID="123"> 
                          <Descriptor type="Integer" name="fachlicheId"> 
                             <Value>123</Value> 
                          </Descriptor> 
                          <Descriptor type="String" name="user"> 
                             <Value>***</Value> 
                          </Descriptor> 
                          <InternalDescriptor type="Date" ID="DocumentDate"> 
                             <Value>2013-08-01+02:00</Value> 
                          </InternalDescriptor> 
                          <!-- Here some more InternalDescriptor Nodes --> 
                       </Metadata> 
                       <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream"> 
                          <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/> 
                       </RepresentationDescription> 
                    </ArchivedDocumentDescription> 
                    <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0"> 
                       <Data fileName="20mb.test"> 
                          <BinaryData> 
                            ABC123 
                          </BinaryData> 
                       </Data> 
                    </DocumentPart> 
                 </ArchivedDocument> 
              </ns2:GetDocumentByIDResponse> 
           </soap:Body> 
        </soap:Envelope>');
      domDoc    dbms_xmldom.DOMDocument;
      docNode   dbms_xmldom.DOMNode;
      node      dbms_xmldom.DOMNode;
      nsmap     varchar2(2000) := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns2="***"';
      xpath_pfx varchar2(2000) := '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/';
      istream   sys.utl_characterinputstream;
      buf       varchar2(32767);
      numRead   pls_integer := 1;
      filename       varchar2(30);
      base64clob     clob;
    BEGIN
      domDoc := dbms_xmldom.newDOMDocument(v_xml);
      docNode := dbms_xmldom.makeNode(domdoc);
      filename := dbms_xslprocessor.valueOf(
                    docNode
                  , xpath_pfx || 'ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName'
                  , nsmap
                  );
      node := dbms_xslprocessor.selectSingleNode(
                docNode
              , xpath_pfx || 'ArchivedDocument/DocumentPart/Data/BinaryData/text()'
              , nsmap
              );
      --create an input stream to read the node content :
      istream := dbms_xmldom.getNodeValueAsCharacterStream(node);
      dbms_lob.createtemporary(base64clob, false);
      -- read the content in 32k chunk and append data to the CLOB :
      loop
        istream.read(buf, numRead);
        exit when numRead = 0;
        dbms_lob.writeappend(base64clob, numRead, buf);
      end loop;
      -- free resources :
      istream.close();
      dbms_xmldom.freeDocument(domDoc);
    END;
    Using a temporary XMLType storage (binary XML) :
    create table tmp_xml of xmltype
    xmltype store as securefile binary xml;
    insert into tmp_xml values( v_xml );
    select x.*
    from tmp_xml t
       , xmltable(
           xmlnamespaces(
             'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
           , '***' as "ns2"
           )
         , '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/ArchivedDocument/DocumentPart/Data'
           passing t.object_value
           columns filename    varchar2(30) path '@fileName'
                 , base64clob  clob         path 'BinaryData'
         ) x;
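
To make the two lookups concrete outside the database, here is the same pair of extractions sketched with Python's ElementTree. Only the SOAP wrapper is namespaced in the sample, so the inner elements can be found without prefixes; this is an illustration of the XPath targets, not a replacement for the PL/SQL:

```python
import xml.etree.ElementTree as ET

def extract_document(xml_text: str):
    """Return (fileName, Base64 payload) from the response shown above."""
    root = ET.fromstring(xml_text)
    desc = root.find(".//DocumentPartDescription")   # carries @fileName
    data = root.find(".//DocumentPart/Data/BinaryData")
    return desc.get("fileName"), (data.text or "").strip()
```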

  • Extracting from an xml file..

    Hi, I am required to write a program that extracts and performs certain calculations with stock numbers. I have been able to complete the calculations part but need a little help (ideas, guidelines) with extracting stuff from an XML file that has six lines:
    <?xml version="1.0"?>
    <Portfolio>
    <Investment><Stock symbol="RY"></Stock><Qty>15</Qty><Comment>this is a good one; esp. the BookValue</Comment><BookValue>58.5</BookValue></Investment>
    <Investment><Comment>this is not "bad"</Comment><Stock symbol = "NT"></Stock><Qty>10</Qty> <BookValue>108</BookValue></Investment>
    <Investment><Qty>2</Qty><BookValue>45.75</BookValue><Comment>not sure about this; >= last time</Comment><Stock symbol= "BMO" > </Stock></Investment>
    </Portfolio>
    I'm not sure if this post will display properly but the correct format is this..
    Line 1: <?xml version=...
    Line 2: <Portfolio...
    Line 3: <Investment...
    Line 4: <investment...
    Line 5: <investment...
    Line 6: </Portfolio.....
    The stuff i'm supposed to extract are..
    RY 15 58.50
    NT 10 108.00
    BMO 2 45.75
    I am new to Java and am still beginning to realize the many methods that enable me to do this. If anyone has any ideas, please let me know. Thank you for your time.
    (I'm not requesting actual coding.) Thanks again.
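
For the record (the course rules out a real parser, but this makes the target output concrete), the extraction is only a few lines with one. A Python sketch over the posted portfolio file:

```python
import xml.etree.ElementTree as ET

def portfolio_rows(xml_text: str) -> list:
    """Return (symbol, qty, book value) for each <Investment> element."""
    rows = []
    for inv in ET.fromstring(xml_text).iter("Investment"):
        rows.append((inv.find("Stock").get("symbol"),
                     int(inv.find("Qty").text),
                     float(inv.find("BookValue").text)))
    return rows
```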

    Hey kk, thanks for helping out, but unfortunately I can't use a parser like you, since I'm taking an introductory course and we're limited in the number of classes we can use (we're not supposed to know much; we have to work with what we have). This is the code I have so far:
    import java.util.StringTokenizer;

    public class Test
    {    public static void main(String[] args)
         {    YorkReader reader = new YorkReader("pfxml.a1.xml");
              String line = reader.readLine();
              String tag = line.substring(0,7);
              String investTag = "<Invest";
              while (!tag.equalsIgnoreCase(investTag))
              {    York.println(line);
                   line = reader.readLine();
                   tag = line.substring(0,7);
              }
              while (tag.equalsIgnoreCase(investTag))
              {    York.println(line);
                   line = reader.readLine();
                   StringTokenizer st = new StringTokenizer(line, "Stock");
                   String line1 = st.nextToken();
                   StringTokenizer stSymbol = new StringTokenizer(st.nextToken(), "\"");
                   String symbolS = stSymbol.nextToken();
                   York.println();
                   York.println(line1);
                   York.println();
                   tag = line.substring(0,7);
              }
              reader.close();
         }
    }
    I've managed to get the program to skip lines that don't match the string "Invest", and once I get to Invest I'm planning to use a loop and extract the stuff I need out of the line. But for some reason (IT'S DRIVING ME CRAZY), when I try to use the Tokenizer class to parse out what I need, it doesn't work. You see, in the code I used the string "Stock" as the delimiter, but when I check whether the code is working, it's not returning what I'm asking for.
    When I ask it to print out the first token it returns this:
    <Inves
    and when I ask for the second it gives me:
    men
    What is going on... argh. I thought I had a great idea for this too. What I was planning was: use "Stock" as the delimiter and get the string
    symbol="RY"></
    then use the tokenizer again with " as the delimiter to get the RY string, which is what I want. But for some reason it's not doing that. If you have time, could you take a look at my code and let me know what's wrong? Thanks a lot for your help; I really appreciate it.
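
The culprit is that `StringTokenizer(line, "Stock")` does not split on the word "Stock": the second argument is a *set* of delimiter characters, so the line is split on any of S, t, o, c, k. That is exactly what produces `<Inves` and `men` out of `<Investment>`. The same behavior, mimicked with a regex character class:

```python
import re

line = '<Investment><Stock symbol="RY"></Stock><Qty>15</Qty>'
# Java's StringTokenizer(line, "Stock") splits on ANY of the characters
# S, t, o, c, k -- equivalent to this character class:
tokens = [t for t in re.split(r"[Stock]+", line) if t]
```

On the Java side, `line.split("Stock")` or `indexOf("Stock")` treats the argument as a whole word and avoids the problem.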

  • Login / out history extraction from 2008R2 Event Logs with a PowerShell script?

    Hi folks,
    I think I'm asking something similar to a few other posts, but instead of hijacking their threads, I thought I'd start my own.
    As the subject suggests, I'm trying to extract from a 2008R2 server's Event logs a table of users and their respective login / out events. Not just asking AD for their last login times, but a list of login / out events.
    So far, I'm using:
    Get-EventLog -logname security -Newest 1000 | where {$_.eventID -eq 4624 -or 4634 }
     but the list is long and contains host authentication connections as well as users. I believe I need something like the ability to filter on "user is a domain user" or "user is not a computer", or similar, and then pipe it to Export-CSV,
     but the data isn't CSV; it's more like free text, i.e.:
    Index : 87290035
    EntryType : SuccessAudit
    InstanceId : 5156
    Message : The Windows Filtering Platform has permitted a connection.
    Application Information:
    Process ID: 1688
    Application Name: \device\harddiskvolume2\windows\system32\dns.exe
    Network Information:
    Direction: %%14592
    Source Address: 192.168.xx.xx
    Source Port: 53
    Destination Address: 192.168.xx.xx
    Destination Port: 44242
    Protocol: 17
    Filter Information:
    Filter Run-Time ID: 66055
    Layer Name: %%14610
    Layer Run-Time ID: 44
    Category : (12810)
    CategoryNumber : 12810
    ReplacementStrings : {1688, \device\harddiskvolume2\windows\system32\dns.exe, %%14592, 192.168.xx.xx...}
    Source : Microsoft-Windows-Security-Auditing
    TimeGenerated : 28/01/2011 4:46:35 PM
    TimeWritten : 28/01/2011 4:46:35 PM
    UserName :
    Why is that even coming up as a result?
    Ideally, I would like a CSV file containing these columns:
    User,timestamp,computer,logon/off
    I've thought about adding a script to the Group Policy where it runs on local machines and appends details to a file on a network share, but I think I would prefer to run this locally, perhaps periodically as a script.
    -- Ebor Administrator

     Thanks Matthew for the links. While I initially thought that looked rather complicated and my solution simplistic in comparison, I'm finding (with no surprise, really) that things can get complicated quickly. If only parsing were easier
     (or if only they used normal Strings instead of "Here-Strings"... </grumble>), as it's now at almost ten lines (mostly for readability).
    In short, I'm now looking at:
     Get-ADUser -Filter * -SearchBase "OU=Users,OU=Ebor Computing,DC=Ebor,DC=Local" | Sort-Object | ForEach-Object -Process {
         $UserName = $_.SamAccountName
         $MsgQuery = "*" + $UserName + "*"
         Get-EventLog -LogName Security -Message $MsgQuery | where {$_.EventID -eq 4624 -or $_.EventID -eq 4634} | ForEach-Object -Process {
             $EventID = $_.EventID
             $SrcAddr = "Unknown"
             $idx = $_.Message.IndexOf("Source Network Address:")
             if ($idx -gt 0) { $SrcAddr = $_.Message.Substring($idx + 23, 15).Trim() }
             $UserName + "," + $SrcAddr + "," + $EventID + "," + $_.TimeGenerated | Out-File -FilePath ($UserName + "_login_events.csv") -Append
         }
     }
     Either way, this takes a very long time, but it gives a separate file for each user and goes back the entire length of the Event Log's history for reporting purposes.
     Note that because I had to query AD for the users, this has to run from the AD PowerShell rather than a normal PS session, as I don't know the appropriate module load command to get a normal PS session to work with AD. Keeping that limitation in mind, I think it works, but it needs
     some tweaking for formatting and output.
     I'm tempted to create an RODC for this to run on, but what else does the DC do, really? May as well warm up the CPU for an hour or so ;-) I guess one improvement could be to determine whether the cycles are being taken up by poor string parsing or
     by the AD querying. Another would be to add some comments... ;-)
    -- Ebor Administrator
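     The core of the message parsing in the script above is a substring cut at the "Source Network Address:" label (the 23 in `Substring($idx+23, 15)` is that label's length). The same idea, illustrated here in Java with a hypothetical 4624-style message body; slicing to the end of the line is slightly more robust than a fixed 15-character width:

```java
public class SrcAddrExtract {
    public static void main(String[] args) {
        // Hypothetical logon-event message body (4624 events carry a
        // "Source Network Address:" field inside their Message text)
        String message = "An account was successfully logged on.\n"
                + "\tSource Network Address:\t192.168.10.21\n"
                + "\tSource Port:\t50312\n";

        // Find the label, skip past it, take everything up to the end
        // of the line, and trim surrounding whitespace.
        String label = "Source Network Address:";
        String srcAddr = "Unknown";
        int idx = message.indexOf(label);
        if (idx >= 0) {
            int start = idx + label.length();
            int end = message.indexOf('\n', start);
            if (end < 0) {
                end = message.length();
            }
            srcAddr = message.substring(start, end).trim();
        }
        System.out.println(srcAddr); // 192.168.10.21
    }
}
```

     A fixed-width cut like the PowerShell one works for typical IPv4 addresses but can truncate longer values (e.g. IPv6); cutting at the line break avoids that.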

  • How Can I produce a Pro_Res files with 5.1 (6 channels of audio)

     I need to screen a feature film in a digital theater, and I want to make sure the Dolby Digital mix is heard the way the film was mixed, not as the two-channel mixdown FCP produces.
     I have the timeline set up with the 6 channels, but when I go to export the file to deliver one QT to the exhibitor, it won't let me export the 5.1 mix.
     I've read several posts and exported the file with Compressor, but when I check the channels my audio was not exported the right way.
    Any ideas?

     There is a tutorial online at Creative Cow from Shane Ross:
    http://library.creativecow.net/articles/ross_shane/multi-audio/video-tutorial
    Are they screening from the QT or from tape?
