Buggy AAC Conversion process drops File Properties?

Hi,
I just decided (apparently crazily) to convert my MP3s, which had been ripped from CDs, over to AAC (m4a) format. After multiple passes it finally completed.
Now, having looked at my files, they have lost all of their file summary information, which means that I can't index them any more on Windows and, worse, when I transferred them to my phone, they are all marked as "unknown".
Does anyone have some software that I can use to restore all the file properties information? (Note: getting the CD information using iTunes doesn't seem to do anything at all.)
tx
P

You should always, whenever possible, rip from the original CD.
Don't convert from one lossy format to another lossy format.
>> Fair enough... but nevertheless, it was a feature of iTunes to convert my MP3s, which had all their file properties in Windows, and after being converted they lost them.
What did you convert them with?
>> iTunes
Now having looked at my files, they have lost all of their file summary information.
Do you mean the ID3 tag info?
>> No. The ID3 tags are there, but the file properties are missing (Summary, album info, etc.), which means that my phone and other devices (that don't read ID3 tags, but do read file properties) no longer work right :(
getting the CD information using iTunes doesn't seem to do anything at all.
You have to have ripped the CD in iTunes for this to work.
>> ic... well.. sigh

Similar Messages

  • FVU file conversion process related issues

    Hi,
    I am in the process of converting a text file into an FVU file for TDS purposes.
    When I get the text file through J1INQEFILE, it's not containing the TAN of the deductor.
    I checked the configuration (section code place, business place, and even SM30 J_1I_SECCODE) and the TAN is there in all three places.
    I have three requirements now
    1. How to update the TAN in the text file
    2. I want to get the mobile number of the TDS deductor in the text file
    3. Where to update the BSR code, and I want it to be updated in the text file as well.
    I expect some of the experts who have already done this for their own requirements to reply. It's urgent, please...

    Hi,
    Issue resolved. The problem was that the TAN number, the mobile number of the TDS deductor, and the BSR code were not being updated in the text file that we generate through J1INQEFILE.
    I put these into the text file manually and uploaded it, and I got the FVU file.
    For the TAN update, SAP told us to apply a note.
    So the issue is resolved now.
    Thanks

  • File Adapter content conversion delimited/positional file format.

    Hi,
    I have the following file-to-JDBC scenario, but I am having some issues with the file content conversion due to the file structure.
    Example:
    =======
    000038A020301
    000038A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049572=BN01 =BOMETLSS_ML_STD_30A7
    000038A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992427=AH10
    OLRENDZZZZ
    Example 2:
    ========
    000040A020301
    000040A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049570=BN01 =BOMETLSS_ML_STD_30A7
    000040A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992425=AH10
    000041A020301
    000041A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049571=BN01 =BOMETLSS_ML_STD_30A7
    000041A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992426=AH10
    000042A020301
    000042A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049572=BN01 =BOMETLSS_ML_STD_30A7
    000042A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992427=AH10
    000043A020301
    000043A020104=AA1=AC1=AD200619=AG1=AH1=AI1=AK3049568=BN01
    000043A020200=AA73=AB001=AC3700.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000043A020200=AA73=AB002=AC5500.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000043A020200=AA73=AB003=AC1800.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000043A020200=AA73=AB004=AC5000.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000044A020301
    000044A020104=AA1=AC1=AD200619=AG1=AH1=AI1=AK3049569=BN01
    000044A020200=AA73=AB001=AC3700.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    000044A020200=AA73=AB002=AC5500.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    000044A020200=AA73=AB003=AC2500.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    000044A020200=AA73=AB004=AC5000.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    OLRENDZZZZ
    Example Explained:
    ==============
    Position 1-9 is a "Transactional number".
    Position 10-11 is "Record type".
    Position 12-13 is "Line Item count".
    Three record types exist, plus an EoF marker:
    03 = Location header
    01 = Transactional Header
    02 = Line Item
    OLRENDZZZZ = EoF marker.
    The equal sign "=" is a field separator/delimiter.
    In each delimited field (after the first equal sign in the record), the first two characters are a field qualifier/field name tag/identifier, and only after that does the data begin, running until the next delimiter.
    Each record is terminated with a CRLF ('nl').
    The file is built up incrementally and is not locked; it is only complete once the application writes the EoF marker "OLRENDZZZZ" as the last record of the file.
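    To make the layout concrete, here is a minimal stand-alone C# sketch (my own illustration, not part of the XI configuration; the class and variable names are made up) that splits one record into its key part and its qualifier-tagged fields:

    // Parse one record of the format described above.
    using System;

    class RecordDemo
    {
        static void Main()
        {
            string line = "000038A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992427=AH10";

            string[] parts = line.Split('=');
            string key = parts[0];
            Console.WriteLine("Transactional number: " + key.Substring(0, 9));  // positions 1-9
            Console.WriteLine("Record type:          " + key.Substring(9, 2));  // positions 10-11
            Console.WriteLine("Line item count:      " + key.Substring(11, 2)); // positions 12-13

            for (int i = 1; i < parts.Length; i++)
            {
                // The first two characters are the field qualifier; the data
                // runs from there to the next "=" delimiter (and may be empty).
                string qualifier = parts[i].Substring(0, 2);
                string data = parts[i].Substring(2);
                Console.WriteLine(qualifier + " -> " + data);
            }
        }
    }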
    My solution so far:
    =============
    Record Structure: row,*
    Record Sequence: Ascending
    row.fieldNames: field1,field2,field3,etc.
    row.fieldSeparator: =
    row.endSeparator: 'nl'
    row.keyFieldInStructure: ignore
    ignoreRecordsetName: true
    This brings the file into the Integration Server as XML, as follows:
    ============================================
    <?xml version="1.0" encoding="utf-8"?>
    <ns:SAPtoFuelFACS xmlns:ns="urn:engenoil-com:i_fuel_facs_sap">
         <row>
              <field1>000038A020301</field1>
         </row>
         <row>
              <field1>000038A020101</field1>
              <field2>AA1</field2>
              <field3>AC1</field3>
              <field4>AD</field4>
              <field5>AG1</field5>
              <field6>AH1</field6>
              <field7>AI1</field7>
              <field8>AK3049572</field8>
              <field9>BN01</field9>
              <field10>BOMETLSS_ML_STD_30A7</field10>
              <field11>BP0003049572</field11>
         </row>
         <row>
              <field1>000038A020200</field1>
              <field2>AA96</field2>
              <field3>AB001</field3>
              <field4>AC17000.000</field4>
              <field5>AD1200</field5>
              <field6>AF13021537</field6>
              <field7>AE</field7>
              <field8>AG8005992427</field8>
              <field9>AH10</field9>
         </row>
         <row>
              <field1>OLRENDZZZZ</field1>
         </row>
    </ns:SAPtoFuelFACS>
    So far, so good.
    The problem I am having is that I have to check for the EoF marker "OLRENDZZZZ" to be present before picking up the file; otherwise the file is not complete.
    I have tried a script that renames files in message pre-processing in the channel, but the problem is that the file channel has to be triggered, and the original file mask is necessary for that, yet this mask is then a valid pickup file mask. So it seems to me the only way to do this is either during the content conversion process, so that files without an EoF "OLRENDZZZZ" record are not picked up and are ignored until it is present, or completely independently with a batch job.
    If someone has a more elegant way to solve this problem using just the file channel configuration, where everything is pretty much apparent, I would greatly appreciate your assistance.
    Regards
    Willie Hugo

    The problem I am having is that I have to check for the EoF marker "OLRENDZZZZ" to be present before picking up the file; otherwise the file is not complete.
    I suggest a script.
    Say the files are dropped in FolderA. Have a script transfer a file to FolderB only if it finds the EoF marker in the file, as in the sketch below. FolderB is then what XI will poll, and it will always have complete files.
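    A rough C# sketch of such a script (the folder paths are my own assumptions, and a production version would also handle files that are still being written):

    // Move only complete files, i.e. those whose last record is the EoF
    // marker "OLRENDZZZZ", from FolderA (drop folder) to FolderB (polled by XI).
    using System;
    using System.IO;
    using System.Linq;

    class EofMover
    {
        static void Main()
        {
            const string folderA = @"C:\FolderA"; // assumption: drop folder
            const string folderB = @"C:\FolderB"; // assumption: folder polled by XI

            foreach (string path in Directory.GetFiles(folderA))
            {
                // The last non-empty line must be the EoF marker.
                string lastLine = File.ReadAllLines(path)
                                      .LastOrDefault(l => l.Trim().Length > 0);
                if (lastLine != null && lastLine.StartsWith("OLRENDZZZZ"))
                {
                    File.Move(path, Path.Combine(folderB, Path.GetFileName(path)));
                }
            }
        }
    }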
    Hope this sounds good!!!

  • Extracting "File Properties" metadata from Photoshop CS3

    After processing scanned images in Photoshop CS3, I am uploading them to a DAM solution that has the capability to extract embedded metadata using custom defined keys. I would like to know the syntax/structure of the embedded metadata labels that display in the "File Properties" template of Bridge CS3 (e.g. File Size, Dimensions, Color Space) so that I can extract that metadata.
    Thank you!

    I have this printer and use Leopard, but I don't have this problem. I am using the 6.2 driver. There are two drivers on their website; I am using only the driver that came with the machine. I would re-install the driver. Secondly, and this caught me up a bit at first: you open the CS print manager, press print, and then the Epson driver emerges; there is no direct access from CS3 to the driver. Also, in the Epson driver there are two or three sets of drop-down menus, and the one that permits setting the paper profile is not exactly where you expect it, so just have a look through all the menus (I am not sitting in front of my Mac right now, so I am of limited help). Good luck.

  • Processing Multiple Files for more than 100 Receive Locations - File Size - 25 MB each file, file type DML

    Hi Everybody
    Please suggest.
    For one of our BizTalk interfaces, we have around 120 receive locations. We are just moving (*.dml) files from the source to the destination without doing any processing.
    We receive lots of files in different receive locations, and in a few cases the file size will be around 25 MB; while moving these large files, the CPU usage varies between 10% and 90+%, and the time consumed for a single huge file is around 10 to 20 minutes. This solution was already in place and was designed by the previous vendor for moving the files to their clients. Each client has 2 receive locations, and they have around 60 clients. Is there any best solution for implementing this within BizTalk or outside BizTalk? Please suggest.
    I am also looking for how to control the number of files that get picked up from the BizTalk receive location. For example, if we have say 1000 files in a receive location and we want to pick up only 50 files at a time (a batch of 50), is that possible? Currently it picks up all the files available in the source location, and one of the processes is dropping thousands of files into the source location, so we want to control the number of files getting picked up (or even the number of KBs). Please guide us on how we can control the number of files.

    Hi Rajeev,
    25 MB per file, 1000 files. You certainly ought to revisit the reason for choosing BizTalk.
    "the time consumed for a single huge file is around 10 to 20 minutes"
     - This is a problem.
    You could consider other file transfer options like XCopy or Robocopy if you want to transfer to another local/shared drive, or you could consider using SSIS, which comes with many adapters to send to the destination system depending on its transfer protocol.
    But in your case, you have some of the advantages that come with BizTalk. For your scenario, you have many source systems (many receive locations); with BizTalk it's always easier to manage these configurations: you can easily enable and disable them when a need arises, easily configure tracking, configure host instances based on load, and so on. So you can consider the following design for your requirement. This design would suit you well, since you're not processing the message, just passing it through from source to destination:
    1. Use a custom pipeline component in the receive locations which receives the large file.
    2. Store the received file on disk and create a small XML metadata message that contains the information about where the large file is stored.
    3. Publish the small XML message into the MessageBox DB instead of the large file. Let the metadata message also carry the same context properties as the received file.
    4. In the send port, use another custom pipeline component that processes the metadata XML file, retrieves the location on disk where the file is stored, accesses the file, and sends it to the destination.
    Read the following article on this design:
    http://www.codeproject.com/Articles/180333/Transfer-Large-Files-using-BizTalk-Send-Side
    This way you don't need to publish the whole message into the MessageBox DB, which would considerably reduce the processing time and lets the host instances process more files. You still get the advantages of BizTalk while processing large files.
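    For instance, the small metadata message might look something like this (the element names and values are made up for illustration; the actual schema is whatever your pipeline component defines):

    <LargeFileMetadata>
        <OriginalFileName>client42.dml</OriginalFileName>
        <StoredPath>\\fileshare\staging\client42.dml</StoredPath>
        <FileSizeBytes>26214400</FileSizeBytes>
        <ReceivedUtc>2015-06-12T10:15:00Z</ReceivedUtc>
    </LargeFileMetadata>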
    And regarding your question of restricting the receive location to pick up only a set number of files: no, it's not possible.
    Regards,
    M.R.Ashwin Prabhu
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • How can I restart the conversion process?

    When I installed iTunes, the program started converting my WMA files to the AAC format. I let it run all day, but at the end of the day I had to shut down the computer, so I stopped the process, figuring it would restart when I restarted the program the next day. It does not do that. The only way I can find to do the conversion is one album at a time. Is there a way to restart the automatic conversion process?
    Compaq Presario v5000   Windows XP  

    I don't know of an automated way to do it, but it is easy enough to do it yourself in one large batch. Note I use a Mac version of iTunes and still use iTunes 6, but the following should be more or less the same anyway.
    Right-click the headings in your library and you will get a pop-up of all the different headings. Turn on "Type" by checking it. Type is a column showing the encoding method (e.g. AAC, MP3, AIFF, WAV, etc.). Now you can click on this heading to sort your library by encoding type, so all the WAV files will be grouped together. You can now easily select the entire batch of WAV files, right-click, and pick "Convert to xxx", where xxx is the encoder you currently have set in your preferences for importing files. So if it is not the format you want, go to Preferences and change it. Once you pick the Convert function it will convert all the selected files to the new format.
    Note that converting makes a copy, leaving the original in place. So the other advantage of being able to sort and select a large number of files by Type is that you can also select all the WAV files and delete them in one step after you have converted them all.
    Patrick

  • Processing one file at a time in Biztalk

    We are changing the way we're processing files: where we used to receive hundreds of small files throughout the day, we are now going to have these files batched into larger files that get dropped into our incoming BizTalk directory. I'm concerned about having the server pick up multiple large files at one time. Is there a way to tell a BizTalk application to only process one file at a time, and not to pick up the next file until the previous one has finished? Note, I do not have a way to build this into the orchestration, as this is a BizTalk application that is actually installed by our Trizetto (QNXT) product. It uses BizTalk for XML serialization.
    Thanks.

    I wouldn't worry about the size of the messages unless they are approaching 1 GB. BizTalk can process roughly 400 msgs/sec. Also, if you're worried about processing multiple large files, try load balancing. With BizTalk it is quite "easy" to load balance: in a nutshell, you have to have the DLL of the application installed on all the servers you want to be a part of load balancing, and BizTalk takes care of the rest. Your large files would more than likely be split up and processed on different servers.
    http://msdn.microsoft.com/en-us/library/aa578057(v=bts.80).aspx
    The link above is for high availability but can also be used to get a better understanding of load balancing. Also, if you use de-batching to split up the 3000-4000 records, that would help with processing them. If an orchestration receives a file with 4000 records in it, that instance alone processes the whole file, whereas if you split up the messages, each message gets its own instance of the orchestration, which leads to better performance. The only drawback is that it takes more resources, so your environment needs to be fairly large. Also, if one server is busy, another server with the same DLL can take the de-batched messages and process them.
    Here is a link to my blog post about de-batching. It is very basic but gives a good base for de-batching.
    http://camartin.azurewebsites.net/post/BizTalk-Debatching

  • SSIS 2012 Script Task to Get File Properties

    Hello,
    I researched how to grab file properties such as file size and file modified date, and I came across the following link:
    I followed the exact steps, and when I went to execute the package, I got the following error:
    Below is the code:
    // C# code
    // Fill SSIS variables with file properties
    using System;
    using System.Data;
    using System.IO; // Added to get file properties
    using System.Security.Principal; // Added to get file owner
    using System.Security.AccessControl; // Added to get file owner
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;
    namespace ST_cb8dd466d98149fcb2e3852ead6b6a09.csproj
    {
        [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
        public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
        {
            #region VSTA generated code
            enum ScriptResults
            {
                Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
                Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
            }
            #endregion

            public void Main()
            {
                // Lock SSIS variables
                Dts.VariableDispenser.LockForRead("User::FilePath");
                Dts.VariableDispenser.LockForWrite("User::FileAttributes");
                Dts.VariableDispenser.LockForWrite("User::FileCreationDate");
                Dts.VariableDispenser.LockForWrite("User::FileExists");
                Dts.VariableDispenser.LockForWrite("User::FileInUse");
                Dts.VariableDispenser.LockForWrite("User::FileIsReadOnly");
                Dts.VariableDispenser.LockForWrite("User::FileLastAccessedDate");
                Dts.VariableDispenser.LockForWrite("User::FileLastModifiedDate");
                Dts.VariableDispenser.LockForWrite("User::FileOwner");
                Dts.VariableDispenser.LockForWrite("User::FileSize");

                // Create a variables 'container' to store variables
                Variables vars = null;

                // Add variables from the VariableDispenser to the variables 'container'
                Dts.VariableDispenser.GetVariables(ref vars);

                // Fill fileInfo variable with file information
                FileInfo fileInfo = new FileInfo(vars["User::FilePath"].Value.ToString());

                // Check if file exists
                vars["User::FileExists"].Value = fileInfo.Exists;

                // Get the rest of the file properties if the file exists
                if (fileInfo.Exists)
                {
                    // Get file creation date
                    vars["User::FileCreationDate"].Value = fileInfo.CreationTime;

                    // Get last modified date
                    vars["User::FileLastModifiedDate"].Value = fileInfo.LastWriteTime;

                    // Get last accessed date
                    vars["User::FileLastAccessedDate"].Value = fileInfo.LastAccessTime;

                    // Get size of the file in bytes
                    vars["User::FileSize"].Value = fileInfo.Length;

                    // Get file attributes
                    vars["User::FileAttributes"].Value = fileInfo.Attributes.ToString();
                    vars["User::FileIsReadOnly"].Value = fileInfo.IsReadOnly;

                    // Check if the file isn't locked by another process
                    try
                    {
                        // Try to open the file. If it succeeds, set variable to false and close stream
                        FileStream fs = new FileStream(vars["User::FilePath"].Value.ToString(), FileMode.Open);
                        vars["User::FileInUse"].Value = false;
                        fs.Close();
                    }
                    catch (Exception ex)
                    {
                        // If opening fails, it's probably locked by another process
                        vars["User::FileInUse"].Value = true;

                        // Log the actual error to SSIS to be sure
                        Dts.Events.FireWarning(0, "Get File Properties", ex.Message, string.Empty, 0);
                    }

                    // Get the Windows domain user name of the file owner
                    FileSecurity fileSecurity = fileInfo.GetAccessControl();
                    IdentityReference identityReference = fileSecurity.GetOwner(typeof(NTAccount));
                    vars["User::FileOwner"].Value = identityReference.Value;
                }

                // Release the locks
                vars.Unlock();

                Dts.TaskResult = (int)ScriptResults.Success;
            }
        }
    }
    Eventually I am looking to just grab the Modified Date from the Windows Explorer folder and insert it into a table. Any suggestions? Thank you in advance!
    Sanjeev Jha

    Hi SSISJoost,
    I am so glad you responded to this thread. You are absolutely right. I copied the entire code including the project name (guid) and that solved the error problem.
    Now, what did you do to get the message box? I added the watch and I could see the values, but how do I get these values into a table? If I remember correctly, in your blog you mentioned something about using derived columns. I am familiar with Derived Columns, but how do I do that? I appreciate your response.
    Thank you.
    Sanjeev Jha
    I used a second script task to show all variable values. It has a MessageBox in it, and between all variables I added a newline to make it more readable...
    But with an Execute SQL Task and parameters you can also put these values in a table... or you can read the file in a Data Flow Task and add those variables (as metadata) to each record with a Derived Column.
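    For example, the body of Main in that second Script Task might look like this (a sketch, not the blogger's exact code; it assumes the variables are listed in the task's ReadOnlyVariables and that System.Windows.Forms is referenced, as in the first task):

    // Show a few of the collected file properties, one per line.
    string msg = "FileSize: " + Dts.Variables["User::FileSize"].Value.ToString()
               + Environment.NewLine
               + "FileLastModifiedDate: " + Dts.Variables["User::FileLastModifiedDate"].Value.ToString()
               + Environment.NewLine
               + "FileOwner: " + Dts.Variables["User::FileOwner"].Value.ToString();
    MessageBox.Show(msg);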
    Please mark the post as answered if it answers your question | My SSIS Blog: http://microsoft-ssis.blogspot.com | Twitter

  • Aim to process all files in folders on desktop to run through photoshop and save in multiple locations

    Part one:-
    Gather information from desktop to get brand names and week numbers from the folders
    Excluding folders on desktop beginning with "2" or "Hot"
    Not sure about the list of folders, but I have got this bit to work with:
    set folderPath to "Hal 9000:Users:matthew:Desktop:DIVA_WK30_PSD" --<<this would be gained from the items on the desktop
    set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
    set folderName to last text item of folderPath
    set my text item delimiters to "_WK"
    set FolderEndName to last text item of folderName
    set brandName to first text item of folderName
    set my text item delimiters to "_PSD"
    set weekNumber to first text item of FolderEndName
    set my text item delimiters to oldTID
    After running this I have enough information to create folders in multiple locations (I need to know where they are so that Photoshop can later save files into those multiple locations).
    So I need the following folders created
    Locally
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_LR" --(Set path for Later)PathA
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_HR"--(Set path for Later)PathB
    Network
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathC
    Volumes:GEN:Website_Images --(no need to create folder just set path)PathD
    FTP (Still as a normal Volume) So like another Network
    Volumes:impulse:"Week" + weekNumber
    Volumes:impulse:"Week" + weekNumber:Brand
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathE
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_HR"  --(Set path for Later)PathF
    I like to think that is the end of Part 1.
    Part 2
    Take the images (PSDs) from the folders relevant to the brand, then possibly run more AppleScript that opens, flattens, and then saves each image in the locations above.
    For example….
    An image in folder DIVA_WK30_PSD will then run an AppleScript in Photoshop, let's call it DivaProcessImages, within which we then save to PathA, PathB, PathC, PathD, PathE, PathF. The folder path for PathC should therefore look like this:
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:DIVA_WK30_LR, and of course the image is saved under its original filename.
    Then from the next folder:
    An image in folder Free_WK30_PSD will then run an AppleScript in Photoshop, let's call it FreeProcessImages, within which we then save to PathA, PathB, PathC, PathD, PathE, PathF. The folder path for PathC should therefore look like this:
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:Free_WK30_LR, and of course the image is saved under its original filename.
    The Photoshop AppleScript will, I'm hoping, be easier, as it should be a clearer step-by-step process without any ifs and buts.
    Now for the coffee!!

    Hi,
    MattJayC wrote:
    Now to the other part: where each folder was created (and those that already existed), how do I set them as variables?
    For example,
    set localBrandFolder_High_Res to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", localBrandFolder)
    This line was used to create more than one folder as it ran through the folders on the desktop. The next part is that I will need to reference them in order to save files to them.
    You can use a list of records.
    Examples:
    If you want the path of localBrandFolder_High_Res of "Diva", and "Diva" is the second folder of the Desktop, you get the path with this: localBrandFolder_High_Res of record 2 of myRecords
    If you want the path of localWeekFolder in the first folder of the Desktop, you get the path with this: localWeekFolder of record 1 of myRecords
    Here is the script
    set myRecords to {}
    set dtF to paragraphs of (do shell script "ls -F ~/Desktop | grep '/' | cut -d'/' -f1")
    repeat with i from 1 to number of items in dtF
        set this_item to item i of dtF
        if this_item does not start with "2_" and this_item does not start with "Hot" then
            try
                set folderPath to this_item
                set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
                set folderName to last text item of folderPath
                set my text item delimiters to "_WK"
                set FolderEndName to last text item of folderName
                set brandName to first text item of folderName
                set my text item delimiters to "_PSD"
                set weekNumber to first text item of FolderEndName
                set my text item delimiters to oldTID
            end try
            try
                set this_local_folder to "Hal 9000:Users:matthew:Pictures:2011-2012"
                set var1 to my getFolderPath("WK" & weekNumber, this_local_folder)
                set var2 to my getFolderPath(brandName, var1)
                set var3 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var2)
                set var4 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var2)
                --set up names for the destination folders and create them over the network, including an already existing folder
                set this_Network_folder to "DCKGEN:Brands:Zoom:Brand - Zoom:Upload Photos:2012:"
                set var5 to my getFolderPath("WK" & weekNumber, this_Network_folder)
                set var6 to my getFolderPath(brandName, var5)
                set var7 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var6)
                set website_images to "DCKGEN:Website_Images:"
                --set up names for the destination folders and create them over the network for FTP collection (based on a mounted drive)
                set this_ftp_folder to "Impulse:"
                set var8 to my getFolderPath("Week" & weekNumber, this_ftp_folder)
                set var9 to my getFolderPath(brandName, var8)
                set var10 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var9)
                set var11 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var9)
                set end of myRecords to ¬
      {localWeekFolder:var1, localBrandFolder:var2, localBrandFolder_Low_Res:var3, localBrandFolder_High_Res:var4, networkWeekFolder:var5, networkBrandFolder:var6, networkBrandFolder_Low_Res:var7, ftpWeekFolder:var8, ftpBrandFolder:var9, ftpBrandFolder_Low_Res:var10, ftpBrandFolder_High_Res:var11}
            end try
        end if
    end repeat
    localBrandFolder_High_Res of record 2 of myRecords -- get full path of localBrandFolder_High_Res in the second folder of Desktop
    on getFolderPath(tName, folderPath)
        tell application "Finder" to tell folder folderPath
            if not (exists folder tName) then
                return (make new folder at it with properties {name:tName}) as string
            else
                return (folder tName) as string
            end if
        end tell
    end getFolderPath

  • Problem with file content conversion in receiver file adapter

    Hi All
    I have a problem with file content conversion in receiver file adapter.
    This is my recordset structure: Header_Record,1,Claim_Record,*,Check_Rec,1
    These are the content conversion parameters:
    Header_Record.fieldSeparator = ,
    Header_Record.endSeparator = 'nl'
    Claim_Record.fieldSeparator = ,
    Claim_Record.endSeparator = 'nl'
    Check_Rec.fieldSeparator = ,
    Check_Rec.endSeparator = 'nl'
    In SXMB_MONI, I can see that the data is correctly extracted from the proxy and correctly mapped to the receiver message, and I see a checkered flag (success).
    But the adapter status is RED with the following error message:
    Conversion initialization failed: java.lang.Exception: java.lang.Exception: Error(s) in XML conversion parameters found: Parameter '1.fieldFixedLengths' or '1.fieldSeparator' is missing
    In communication channel monitoring, I get the following error message:
    Message processing failed. Cause: com.sap.aii.af.ra.ms.api.RecoverableException: Channel has not been correctly initialized and cannot process messages
    What is going wrong here? Can anyone please tell me?
    Thanks
    Chandra

    Posted in the incorrect forum.
    Posted again in the Process Integration forum.

  • Problem Validating & Processing Transformation file in NW 7.0 version

    I am trying to validate and process the transformation file against my data file, and I am getting the error message shown below. When I validate the conversion files, I see the creation of the corresponding .CDM files. I even deleted the .CDM files and recreated them.
    So my question is: why is it giving the "Sheet does not exist (CONVERSION)" warning, for which it is rejecting the records inside the data file?
    DATA FILE:
    C_Category,Time,R_ACCT,R_Entity,InputCurrency,Amount
    ACTUAL,2007.DEC,AVG,GLOBAL,USD,1
    ACTUAL,2007.DEC,END,GLOBAL,USD,1
    ACTUAL,2007.DEC,AVG,GLOBAL,JPY,110
    ACTUAL,2007.DEC,END,GLOBAL,JPY,110
    ACTUAL,2007.DEC,HIST,GLOBAL,JPY,110
    ACTUAL,2007.DEC,HIST,GLOBAL,USD,1
    ACTUAL,2008.MAR,AVG,GLOBAL,USD,1
    ACTUAL,2008.MAR,END,GLOBAL,USD,1
    ACTUAL,2008.MAR,AVG,GLOBAL,JPY,107.5
    ACTUAL,2008.MAR,END,GLOBAL,JPY,105
    ACTUAL,2008.MAR,HIST,GLOBAL,JPY,110
    ACTUAL,2008.MAR,HIST,GLOBAL,USD,1
    TRANSFORMATION FILE:
    FORMAT = DELIMITED
    HEADER = YES
    DELIMITER = ,
    AMOUNTDECIMALPOINT = .
    SKIP = 0
    SKIPIF =
    VALIDATERECORDS=YES
    CREDITPOSITIVE=YES
    MAXREJECTCOUNT=
    ROUNDAMOUNT=
    SPECIFICMAPPING=YES
    *MAPPING
    C_Category=*col(1)
    Time=*col(2)
    R_ACCT =*col(3)
    R_Entity=*col(4)
    InputCurrency=*col(5)
    Amount=*col(6)
    *CONVERSION
    C_Category=[COMPANY]C_Category.xls!CONVERSION
    Time=[COMPANY]Time.xls!CONVERSION
    R_ACCT=[COMPANY]R_ACCT.xls!CONVERSION
    R_Entity=[COMPANY]R_Entity.xls!CONVERSION
    InputCurrency=[COMPANY]InputCurrency.xls!CONVERSION
    ERROR
    [Start validating transformation file]
    Validating transformation file format
    Validating options…
    Validation on options was successful
    Validating mappings…
    Validation on mappings was successful
    Validating conversions…
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Validation on conversions was successful
    Creating the transformation xml file; wait a moment
    Transformation xml file saved successfully
    Connecting to server...
    Begin validate transformation file with data file…
    [Start test transformation file]
    Validate has successfully completed
    ValidateRecords = YES
    Task name CONVERT:
    No 1 Round:
    Record count: 12
    Accept count: 0
    Reject count: 12
    Skip count: 0
    Error: All records are rejected

    *CONVERSION
    C_Category=C_Category.xls
    Time=Time.xls
    R_ACCT=R_ACCT.xls
    R_Entity=R_Entity.xls
    InputCurrency=InputCurrency.xls
    On validating with the above format, I am still getting the same error:
    [Start validating transformation file]
    Validating transformation file format
    Validating options…
    Validation on options was successful
    Validating mappings…
    Validation on mappings was successful
    Validating conversions…
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Sheet does not exist  (CONVERSION)
    Validation on conversions was successful
    Creating the transformation xml file; wait a moment
    Transformation xml file saved successfully
    Connecting to server...
    Begin validate transformation file with data file…
    [Start test transformation file]
    Validate has successfully completed
    ValidateRecords = YES
    Task name CONVERT:
    No 1 Round:
    Record count: 12
    Accept count: 0
    Reject count: 12
    Skip count: 0
    Error: All records are rejected

  • Automatically add files to iTunes not processing mp4 files on Windows 8.1

    I have iTunes 12.1.1.4 installed on a Windows 8 (x64) system. I have been using iTunes for quite some time, and my library is located at c:\Music. On earlier versions (version 10 of iTunes, for example) I would drop an MP4 file into the "c:\Music\Automatically Add to iTunes" directory and iTunes would process the file and add it to either the Movies or TV Shows section.
    I have not used this feature in some time but now have started to. Currently, if I place the MP4 in that directory, iTunes does not do anything with the video. I have resorted to using "Add Folder to Library", and iTunes will then process the directory. I do have "Keep iTunes Media folder organized" and "Copy files to iTunes Media folder when adding to library" checked.
    How can I get iTunes to recognize that a file has been placed in this directory and process it?

    Two possibilities:
    iTunes has processed the file but has categorized it as "Home Movies" rather than another video kind; in this case you'll be able to find it under that heading in iTunes. Note that there's a bug in iTunes 12 where, even if you then change the media kind (for example, to "TV Show" or "Movie"), it will be shown in the new category in the iTunes UI but the files will still be in the "Home Movies" folder under iTunes Media.
    If iTunes has not been able to process the file, you should find it within a "Not Added" folder under "Automatically Add to iTunes". The most likely cause of this is that the MP4 files use a codec that is not compatible with iTunes.

  • Adobe Bridge CS 4 missing file properties

    I'm working with Photoshop files (CS4), and a few seem to have dropped some of the metadata from the file properties in the Bridge browser. Dimensions, Resolution, Bit Depth, and Color Mode are missing, and the Color Space is listed as Untagged, even though it is in fact tagged as Adobe 1998 in the Photoshop file. It's as if I inadvertently hit a quick key to dump the information, as it has only occurred on a couple of files. The thumbnail appears smaller too. Any ideas?

    I tried all of the above to no avail. I saved a copy in another folder, and the file still had the diminished thumbnail and missing data. I added another layer (the files contain many layers and adjustment layers) to the file, cropped, and re-saved, and it picked up the missing data on the new version. I made some benign changes on my original, but alas, it didn't work.
    When I get a chance I'll do some more testing on this. It may have something to do with how I import certain layers. I have been flowing layers in from both Aperture and Bridge, and from the latter sometimes as smart objects. So maybe there is something going on there.
    Except for this minor glitch, and some more serious PS stability issues, I am preferring the Bridge/Photoshop workflow to the Aperture/Photoshop workflow, and I'm thinking of jettisoning Aperture altogether in favor of the "browser" method of managing files.
    I appreciate all the suggestions and will get back to this problem soon. But for now I have to meet a deadline.

  • Extracting "File Properties" metadata

    After processing scanned images in Photoshop CS3, I am uploading them to a DAM solution that has the capability to extract embedded metadata using custom defined keys. I would like to know the syntax/structure of embedded metadata labels that display in the "File Properties" template of Bridge CS3 (e.g. File Size, Dimensions, Color Space) so that I can extract that metadata.
    Thank you!

    In Preferences > Metadata there are two choices for dimensions, inches and cm; check the box for your poison.

  • Author metadata separated by semi-colons truncated in file properties and "Get Info"

    I'm using Acrobat Professional 9.0 (CS3) for Mac to edit the metadata for a collection of PDFs to be made available on the web. When I enter the data, I am inputting a list of authors separated by commas, like this: Smith J, Watson C, Brown J. If I click on "Additional metadata", the data I've already entered is transposed into the various XMP fields. And the commas separating the author names are changed to semi-colons. I gather that this happens because XMP wants to separate multiple authors with semi-colons, and Acrobat wants the metadata in XMP fields to match the metadata stored for the file properties. Fine.
    However, if I save such a PDF and then use Get Info on my Mac (OSX 10.4) to look at the file properties, the list of authors is now truncated where the first semi-colon appears. The list is also truncated in Windows XP if I right-click and select properties. The list is also truncated when I look at the file properties in Preview on my Mac, or if I look at file properties using FoxIT, or using Adobe Acrobat Reader 7 or earlier. The only way a site visitor will actually be able to view the full list of authors in a file saved this way is to use Adobe Reader version 8 or later.
    I would like to preserve XMP/Dublin Core/etc metadata in the proper format in the XMP code, but would also like users of standard, popular file viewers to be able to access the full list of authors. Is there a way to do this with Acrobat 9?
    Also, once I've saved a file and the XMP metadata has been altered, Acrobat seems to permanently change the way that the authors are listed in the file properties. I cannot manually change those settings any longer without Acrobat overriding my changes and converting commas to semi-colons, or surrounding the entire list of authors in quotation marks. Is there a way to get around these Acrobat overrides and manually take control of my metadata again?
    Does Windows Vista read the authors list correctly in the file properties if it is separated by semi-colons?
    It seems to me that in an attempt to get XMP metadata working smoothly across the entire CS line, Adobe has jumped the gun somewhat and is now forcing Acrobat users to use "file properties" metadata that is really only fully compatible with Adobe products. Is there a way I can get some backwards compatibility on this?
    Thanks for any suggestions or insight anyone can provide to this vexing issue.
    Phil.

    Bridge has some pretty powerful and helpful features. However, I am unable to figure out how to access the non-XMP "file properties" fields through Bridge, and if I add metadata via Bridge, then I run into the same problem regarding the use of semi-colons to separate authors.
    If I had more time, or a larger set of files I might investigate the use of ExtendScript to import all my metadata from an Excel file (where it already exists) into the PDF file properties and XMP metadata.
    The best solution for my case, though, appears to be to use Acrobat 9 and do the double-edit process for each file. I should be able to just cut-and-paste the metadata from the Excel file, and if I leave the Authors list until the end, I can simply paste it once into the XMP field (through the Advanced metadata button) and then return to the regular file properties page and paste it again there, where Acrobat will add quotes around it.
    Lastly, if anyone else happens to find this post and is looking for similar information, I would recommend searching in the Bridge forum as well as the Acrobat forum.
    Phil.
