Manually adding BOM to UTF-16LE file?

Hi,
I have a bash script that needs to perform something on a string from standard input, save it in a file, and convert the file to UTF-16LE with a BOM for further processing by another application.
I use iconv to convert the text file to UTF-16LE, but iconv actually creates a little-endian file WITHOUT the BOM (converting to UTF-16 creates a big-endian file WITH a BOM).
I see no way of creating LE with a BOM using iconv, so I thought maybe I could simply add the byte-order mark (FF FE) to the beginning of the Unicode file. How can I do that?
Many thanks in advance,
tench

If you want to do everything from within the bash script, then you can use something like
{code}
#!/bin/bash
# xpg_echo makes the echo builtin interpret backslash escapes such as \xFF and \c
shopt -s xpg_echo
cat > infile
# assume the input is in UTF-8; \c suppresses the trailing newline after the BOM
(echo '\xFF\xFE\c'
iconv -f UTF-8 -t UTF-16LE infile) > outfile
{code}
Of course, the use of infile can be omitted if you don't need it.

Similar Messages

  • Probs with read-in of UTF-16LE file

    hi all,
    the following code works only on a small UTF-16LE file, but not on a file of, say, > 100 KB... with such a file the first isr.read() causes the program to hang! Wrapping with BufferedReader does not solve the problem...
    InputStreamReader isr = new InputStreamReader( new FileInputStream("filename"), "UTF-16LE");
    for( int i = 0; i < 50; i++ ) {
        int ch = isr.read();
        System.out.println( "char " + i + ": " + ch );
    }
    FLUMMOXED... PLS HELP!!

    Simple example
    try {
        // note: FileReader uses the platform default encoding
        BufferedReader in = new BufferedReader(new FileReader("FileName.txt"));
        String str;
        while ((str = in.readLine()) != null) {
            // do whatever with str
        }
        in.close();
    } catch (IOException e) {
        // Handle exception...
    }
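    Since the file in this thread is UTF-16LE rather than the platform default, a safer variant is to pass the charset to InputStreamReader explicitly. A minimal sketch (the file name is just a placeholder):
    {code}
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class ReadUtf16le {
        public static void main(String[] args) throws IOException {
            // "UTF-16LE" decodes little-endian text that has no BOM;
            // use "UTF-16" instead if the file starts with FF FE or FE FF.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("FileName.txt"), "UTF-16LE"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
    {code}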

  • Byte Order Mark (BOM) not found in UTF-8 file download from XI

    Hi Guys,
    I am facing difficulty downloading a file from XI in UTF-8 format with a byte order mark.
    The receiver file adapter has been configured to download the file in UTF-8 format, but the byte order mark is missing. The same works fine for UTF-16: I can see the byte order mark "FEFF" at the beginning of the file for UTF-16BE (Unicode big endian).
    As per SAP help, UTF-8 is supposed to be the default encoding for the TEXT file type.
    Configuring the Receiver File/FTP Adapter in the SAP help link:
    http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/frameset.htm
    Could you please advise on how to get a BOM into the UTF-8 file, as it is very important for the outbound file to be loaded into our vendor system.
    Thanks.
    Best Regards
    Thiru

    Hi!

    Had the same problem. But here we create a "CSV" file which must have the BOM, otherwise it will not be recognized as UTF-8.

    Therefore I've done the following:
    Created a simple destination structure which represents the CSV and did the mapping with the graphical mapper. The destination structure looks like:

    <?xml version="1.0" encoding="UTF-8"?>
    <ONLYLINES>
         <LINE>
              <ENTRY>Hello I'm line 1</ENTRY>
         </LINE>
         <LINE>
              <ENTRY>and I'm line 2</ENTRY>
         </LINE>
    </ONLYLINES>

    As you can see, the "ENTRY" element holds the data.

    Now I've created the following Java mapping and added that mapping within the interface mapping as a second step after the graphical mapping:

    ---cut---
    package sfs.biz.xi.global;

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Map;

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    import com.sap.aii.mapping.api.StreamTransformation;
    import com.sap.aii.mapping.api.StreamTransformationException;

    public class OnlyLineConvertAddingBOM implements StreamTransformation {

        public void execute(InputStream in, OutputStream out) throws StreamTransformationException {
            try {
                byte BOM[] = new byte[3];
                BOM[0] = (byte) 0xEF;
                BOM[1] = (byte) 0xBB;
                BOM[2] = (byte) 0xBF;
                String retString = new String(BOM, "UTF-8");
                Element ServerElement;
                NodeList Server;

                DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder docBuilder = docBuilderFactory.newDocumentBuilder();
                Document doc = docBuilder.parse(in);
                doc.getDocumentElement().normalize();
                NodeList ConnectionList = doc.getElementsByTagName("ENTRY");
                int count = ConnectionList.getLength();
                for (int i = 0; i < count; i++) {
                    ServerElement = (Element) ConnectionList.item(i);
                    Server = ServerElement.getChildNodes();
                    retString += Server.item(0).getNodeValue().trim() + "\r\n";
                }

                out.write(retString.getBytes("UTF-8"));

            } catch (Throwable t) {
                throw new StreamTransformationException(t.toString());
            }
        }

        public void setParameter(Map arg0) {
            // TODO Auto-generated method stub
        }

        /*
        public static void main(String[] args) {
            File testfile = new File("c:\\instance.xml");
            File testout = new File("C:\\testout.txt");
            FileInputStream fis = null;
            FileOutputStream fos = null;
            OnlyLineConvertAddingBOM myFI = new OnlyLineConvertAddingBOM();
            try {
                fis = new FileInputStream(testfile);
                fos = new FileOutputStream(testout);
                myFI.setParameter(null);
                myFI.execute(fis, fos);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        */

    }
    --cut---

    This mapping searches for all "ENTRY" tags within the XML structure and builds one big string that starts with the UTF-8 BOM and then appends each ENTRY element, separated by CR/LF.

    We use this as the payload for a mail adapter (sending via SMTP), but it should also work with the file adapter.

    Hope it helps.
    Rene

    Besides: could someone tell SAP that this editor is the worst editor I've ever seen. Maybe these guys should copy something from Wikipedia :-((

  • Encoding issue - having UTF-16LE BOM "FF FE"

    Hello Experts,
    The scenario is as follows:
    SAP sends IDOCs -> SAP PI (collects IDOCs & creates IDOC XML) -> content conversion is done at the receiver CC -> a pipe-delimited text file is placed in an FTP location.
    Requirement:
    Currently SAP R/3 is sending Balkan and Cyrillic characters to PI. Both SAP R/3 and PI are Unicode compliant. The SAP PI version being used is SAP PI 7.1. A BPM is used to collect the IDOCs based on time.
    When SAP PI converts the IDOC to IDOC XML, the header says "encoding=UTF-8".
    The text file that is created at the FTP location is an ANSI file. (If you open the text file with the EDITPLUS (ver 3) tool, you can see that the file type is ANSI.) We need to change this to UTF-16LE.
    In the receiver CC, in the first Target tab, we have maintained Transfer Mode as "Binary" in the FTP connection parameters.
    In the Processing tab, we have maintained the File Type as Text and Encoding as "UTF-16LE".
    We also switched between Binary-Binary, Binary-Text and Text-Text in both tabs, but the file that is written is still an ANSI file in the FTP location.
    In PI all the characters come through correctly. But at the time of creating the file, the file is created as ANSI.
    We need to have the file type as UTF-16 with a BOM (byte order mark) of "FF FE". If you open it in the EDITPLUS text editor, it should show as UTF-16.
    Please, if any of you experts have come across a solution for this issue, let me know the steps. It is an issue in production and we need your help asap.
    Points will be awarded to the best answer and any answer that helps us solve the problem.
    Thanks.
    Deb.

    From another discussion in the SDN forum, I have learned that PI does not add a BOM.
    UTF-16LE and UTF-16BE do not have a BOM, as the byte order is clear from the declaration.
    So you have to add the BOM with an OS script.
    When you put UTF-16LE in the receiver channel, the target file should be in UTF-16LE. If this does not work, check whether UTF-16LE is installed on the server where PI is running; if it were missing, an error message would appear in channel monitoring.
    You have to check the encoding of the file with a hex editor. You cannot verify this with Notepad or any other text editor.
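    As an illustration of the "add the BOM afterwards" idea from the reply above (which suggests an OS script), here is a minimal Java sketch that does the same job; the file names are placeholders:
    {code}
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class PrependUtf16leBom {
        public static void main(String[] args) throws IOException {
            try (FileInputStream in = new FileInputStream("payload_utf16le.txt");
                 FileOutputStream out = new FileOutputStream("payload_with_bom.txt")) {
                // UTF-16LE byte order mark
                out.write(0xFF);
                out.write(0xFE);
                // copy the original bytes unchanged after the BOM
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }
    {code}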

  • Read in a file in UTF-16LE

    hi all,
    the following code works only on a small UTF-16LE file, but not on a file of, say, > 100 KB... with such a file the FIRST call to isr.read() causes the program to hang! Wrapping with BufferedReader does not solve the problem...
    InputStreamReader isr = new InputStreamReader( new FileInputStream("filename"), "UTF-16LE");
    for( int i = 0; i < 50; i++ ) {
        int ch = isr.read();
        System.out.println( "char " + i + ": " + ch );
    }
    FLUMMOXED... PLS HELP!!

    the file is a normal file with normal line lengths...
    it can be read into something like TextPad and viewed...
    so somehow that app is able to decode it OK...
    The question is what exactly is happening between BufferedReader, InputStreamReader and FileInputStream, and how to buffer the whole process so that manageable chunks can be decoded...

    I have used BufferedReader the way you have, but on (UTF-8) log files with lengths in excess of 100 MB, without any problem like this. I have to believe that BufferedReader is not able to detect the end of line.
    I would create a wrapped Reader that prints (in hex?) the characters it is reading before passing them to the BufferedReader.
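    A minimal sketch of such a wrapped Reader (class and file names are just placeholders): it logs every decoded character in hex before handing it on, which makes it easy to see where the stream stalls and what the line terminators actually are:
    {code}
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.FilterReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.Reader;

    // Reader wrapper that prints every character it reads in hex.
    class HexLoggingReader extends FilterReader {
        HexLoggingReader(Reader in) {
            super(in);
        }

        @Override
        public int read() throws IOException {
            int ch = super.read();
            if (ch != -1) {
                System.out.printf("%04x ", ch);
            }
            return ch;
        }

        @Override
        public int read(char[] cbuf, int off, int len) throws IOException {
            int n = super.read(cbuf, off, len);
            for (int i = 0; i < n; i++) {
                System.out.printf("%04x ", (int) cbuf[off + i]);
            }
            return n;
        }
    }

    public class DebugUtf16Read {
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new HexLoggingReader(
                    new InputStreamReader(new FileInputStream("filename"), "UTF-16LE")))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println();
                    System.out.println("line: " + line);
                }
            }
        }
    }
    {code}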

  • PNG files sent to iPhone Photo app do not appear under 'date taken' (manually added in Windows Explorer)

    I have over 1000 PNG files that do not have an EXIF 'date taken'. They only have an EXIF 'date created or modified'.
    The 'date created/modified' is not the actual date that I took the screenshots.
    I added the 'date taken' manually in Windows Explorer. However, it seems that this does not write the data into the EXIF 'date taken', which remains blank.
    When I transfer a PNG file from the PC to the iPhone Photo app, it does not appear under the 'date taken' (that I added manually) but under the EXIF 'date created or modified'.
    Interestingly, when I do the same as above but with a JPG file, it does show up correctly based on the 'date taken' (that I added manually in Windows Explorer).
    So I converted the PNG file to JPG using one of the online tools. However, when I do this, the converted file does not retain the 'date taken'.
    I can fix the above by:
    1. Using Paint to save the PNG file as JPG; this retains the 'date taken', but it is not the solution I am looking for, as I would need to do this for each file (there is no bulk 'save as' in Paint).
    2. Using a file conversion tool to bulk convert from PNG to JPG; but again this is not a solution, as I would have to manually add the 'date taken' in Windows Explorer to each JPG file.
    Alternatively, I tried to rename the file name to include the 'date taken' and then use an EXIF date changer app to set the EXIF dates based on the file name.
    However, I was not able to find any software that would allow me to do this. The software that I have seen only picks up the 'date taken' from the file's EXIF data. But in my case there is no such data in the file, only the 'date taken' that I manually added in Windows Explorer.
    Any advice is appreciated. All I want to do is transfer PNG files from the PC to the iPhone Photo app and have these files appear in the Photo app based on the 'date taken' (the one I manually added in Windows, not the date taken in the EXIF data).

    Have you touched the "More" button (on the iPhone).
    then gone to Audio Books.
    Are they there?
    I had an audio book still on my iphone, it survivedf the iOS5 update.
    To get rid of it, I plugged iPhone into Mac,
    Go to itunes.
    Click on Iphone in iTunes.
    Go to Books tab,
    Scroll down, there is audio books.
    Choose sync selected audio books,
    And untick the ones I don;t want.
    Dows this work for you?

  • Reading UCS-2LE or UTF-16LE from file

    Hello,
    I am writing a program to compile some simple statistics from an exported iTunes library .txt file. I have determined that when iTunes exports the file, it exports it in either the UCS-2LE or UTF-16LE encoding. I have viewed these files in a hex editor and the first two bytes are FF FE.
    I would like to read text from this file into my program, but I can't because of the encoding. Right now I have to "Save as" the file to a different encoding (usually ASCII) to be able to read it.
    This is the code I am currently using to read the input:
    FileReader freader = new FileReader(file_name);
    BufferedReader input_file = new BufferedReader(freader);
    I am fairly new to Java, so any help is appreciated, and please try to be as specific as possible.
    Thanks.

    Hello,
    I am writing a program to compile some simple statistics from an exported iTunes library .txt file. I have determined that when iTunes exports the file, it exports it in either the UCS-2LE or UTF-16LE encoding. I have viewed these files in a hex editor and the first two bytes are FF FE.
    iTunes also exports an XML file at the library root, in case that would be a more attractive option. I think there's a parsing library here, but it may have been retired:
    http://www.macdevcenter.com/pub/a/mac/2003/09/03/mytunes.html
    There's also an iTunesFileChooser that parses the XML file quite well and returns arrays of Song and Playlist objects. I've used it with iTunes 7, and it works for me.
    http://www.robbiehanson.com/iTunesJava.html
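    For reference, the exported .txt can also be read directly once the encoding is given explicitly. A minimal sketch, assuming the file really is UTF-16 with a BOM (Java's "UTF-16" charset reads the FF FE / FE FF marker and picks the byte order from it); the file name is a placeholder:
    {code}
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class ReadItunesExport {
        public static void main(String[] args) throws IOException {
            // "UTF-16" consumes the BOM and decodes with the byte order it declares
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("Library.txt"), "UTF-16"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
    {code}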

  • Files manually added to the TC when Time Machine fills the disk?

    I've ordered a Time Capsule and have tried to find the answer to this question, but can't get a definitive answer.
    On a 1TB drive, say I add 200GB of files manually (used as a NAS) alongside the Time Machine backup. When the Time Machine backups reach 800GB, would Time Machine stop at 800GB and delete the oldest backups to make room for new backups continually, or would it delete the files I manually added to make room for more Time Machine backups?

    Hi,
    I think many are waiting for a clear answer on that very relevant question. Anyway, I am.

  • Encoding Problem - can't read UTF-8 file correctly

    Windows XP, JDK 7, same with JDK 6
    I can't read a UTF-8 file correctly:
    Content of File (utf-8, thai string):
    เม็ดเลือดขาว
    When opened in Editor and copy pasted to JTextField, characters are displayed correctly:
    String text = jtf.getText();
    text.getBytes("utf-8");
    -32 -71 -128 -32 -72 -95 -32 -71 -121 -32 -72 -108 -32 -71 -128 -32 -72 -91 -32 -72 -73 -32 -72 -83 -32 -72 -108 -32 -72 -126 -32 -72 -78 -32 -72 -89
    Read file with FileReader/BufferedReader:
    line = br.readLine();
    buffs = line.getBytes("utf-8"); //get bytes with UTF-8 encoding
    -61 -65 -61 -66 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes(); // get bytes with default encoding
    -1 -2 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Read file with:
    FileInputStream fis...
    InputStreamReader isr = new InputStreamReader(fis,"utf-8");
    BufferedReader brx = new BufferedReader(isr);
    line = br.readLine();
    buffs = line.getBytes("utf-8");
    -17 -65 -67 -17 -65 -67 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes();
    63 63 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Does anybody have an idea? The file seems to be UTF-8 encoded. What could be wrong here?

    akeiser wrote:
    Windows XP, JDK 7, same with JDK 6
    I can't read a UTF-8 file correctly:
    Content of File (utf-8, thai string):
    เม็ดเลือดขาว
    When opened in Editor and copy pasted to JTextField, characters are displayed correctly:
    String text = jtf.getText();
    text.getBytes("utf-8");
    -32 -71 -128 -32 -72 -95 -32 -71 -121 -32 -72 -108 -32 -71 -128 -32 -72 -91 -32 -72 -73 -32 -72 -83 -32 -72 -108 -32 -72 -126 -32 -72 -78 -32 -72 -89
    These values are the bytes of your original string "เม็ดเลือดขาว" utf-8 encoded with no BOM (Byte Order Marker) prefix.
    >
    Read file with FileReader/BufferedReader:
    line = br.readLine();
    buffs = line.getBytes("utf-8"); //get bytes with UTF-8 encoding
    -61 -65 -61 -66 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes(); // get bytes with default encoding
    -1 -2 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Read file with:
    FileInputStream fis...
    InputStreamReader isr = new InputStreamReader(fis,"utf-8");
    BufferedReader brx = new BufferedReader(isr);
    line = br.readLine();
    buffs = line.getBytes("utf-8");
    -17 -65 -67 -17 -65 -67 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes();
    63 63 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    These values are the bytes of your original string UTF-16LE encoded with a UTF-16LE BOM prefix.
    This means that there is nothing wrong with the code (the String has been read correctly) and that your default encoding is UTF-16LE.
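    When the byte dumps get this confusing, it can help to look at the raw bytes of the file itself rather than at strings that have already been decoded. A minimal sketch (the file name is a placeholder) that prints the first bytes in hex, so a BOM such as EF BB BF (UTF-8) or FF FE (UTF-16LE) is immediately visible:
    {code}
    import java.io.FileInputStream;
    import java.io.IOException;

    public class DumpLeadingBytes {
        public static void main(String[] args) throws IOException {
            try (FileInputStream in = new FileInputStream("input.txt")) {
                byte[] buf = new byte[16];
                int n = in.read(buf);
                for (int i = 0; i < n; i++) {
                    // mask to 0..255 so negative byte values print correctly
                    System.out.printf("%02X ", buf[i] & 0xFF);
                }
                System.out.println();
            }
        }
    }
    {code}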

  • Why are newer versions of Firefox having problems with UTF-16le (Windows default unicode set)

    I have a website that has multiple languages on it, and so I've been using UTF-16le for it. Everything was working well on multiple browsers until the last few months, when only Firefox stopped displaying it properly. I can force the page into UTF-16le, but then some of my graphical links no longer work and I cannot navigate through the pages unless I force every single page to UTF-16le EVERY SINGLE TIME. This problem is not unique to my computer, either, as this has happened with every computer I have tried in the last few months.

    As answered before a few weeks back [[/questions/770955 *]]: the server sends the pages as UTF-8 and that is what Firefox uses to display the pages. You need to reconfigure the server and make them send the pages with the correct content type (UTF-16) or with no content type at all if you want Firefox to use the content type (BOM) in the file.
    A good place to ask questions and advice about web development is at the mozillaZine Web Development/Standards Evangelism forum.
    The helpers at that forum are more knowledgeable about web development issues.
    You need to register at the mozillaZine forum site in order to post at that forum.
    See http://forums.mozillazine.org/viewforum.php?f=25
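    The poster's server stack isn't stated, so purely as an illustration of "send the correct content type", here is a minimal JDK-only sketch that serves a UTF-16LE page and declares the charset in the Content-Type header; the port, path and markup are placeholders:
    {code}
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class Utf16leServer {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
            server.createContext("/", exchange -> {
                byte[] body = "<html><body>hello</body></html>".getBytes("UTF-16LE");
                // declare the encoding explicitly so the browser does not guess
                exchange.getResponseHeaders().set("Content-Type", "text/html; charset=UTF-16LE");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }
    {code}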

  • Why do manually added mp3s in the Podcasts app all show up in the same folder?

    Let me explain!
    I like to download lectures and talks and audio books from other sources than iTunes.
    I like to add these mp3 files to iTunes, change the Media type of them from Music to Podcast, and transfer them to my iPod Touch. This is so I will more easily know which ones have been listened to and can be removed.
    This worked (and works) well in the Music app. Everything was separated into folders based on what album name I had manually assigned to the ID3 tags of the files.
    But in the Podcast app, all of these files are now lumped into one folder.
    I currently have eight un-listened-to episodes of Philip K. Dick's Valis on my iPod Touch. But the folder shows a count of 106 unheard episodes in relation to this audio book, because it lumps all of my other manually added podcast media tracks into the same folder.
    Why is this?
    Why does the Podcasts app refuse to let me separate manually added files into individual folders based on album information?
    Had the My Stations feature let me populate its playlists in a more rational manner, and also allowed me to manually change the running order of the tracks (something which only seems to be possible in the On-The-Go playlist), I would not think that this omission was such an annoyance.
    I have been searching for months for mentions of this problem, but I have not seen a single one, and so I have finally decided to make my own thread.
    Let me know if the nature of my problem remains unclear.
    Joakim

    That doesn't do anything.
    When I plug my phone in and attempt to sync it with iTunes and click on "Steven's iPhone" under "Devices" the display to the right just says "loading..." and it stays like that for hours. Nothing happens.
    As far as I can tell, there is literally no way for me to delete the photos from my iPhone and I therefore, cannot upgrade to the newest software, download new apps, etc.

  • Need help to read and write using UTF-16LE

    Hello,
    I am in need of your help.
    In my application I am using UTF-16LE to export and import the data when I run it immediately.
    Sometimes I also need to do the import on a schedule, i.e. the export and import happen at a specified time.
    In my application, when doing a scheduled import, the URL class is used to build the URL for that file and copy the data to a temp file for the event to be processed later.
    The import file is in UTF-16LE format and I need to write the code for that encoding.
    The problem is that for a scheduled import I need to copy the data of the file into a temp location and then do the import.
    When copying the data from the file to the temp file I can't apply the UTF-16LE encoding through the URL. And if I get the path from the URL and create the reader and writer directly, it throws a FileNotFoundException.
    Here is the existing code:
    protected void copyFile(String rootURL, String fileName) {
        URL url = null;
        try {
            url = new URL(rootURL);
        } catch (java.net.MalformedURLException ex) {
        }
        if (url != null) {
            BufferedWriter out = null;
            BufferedReader in = null;
            try {
                out = new BufferedWriter(new FileWriter(fileName));
                in = new BufferedReader(new InputStreamReader(url.openStream()));
                String line;
                do {
                    line = in.readLine();
                    if (line != null) {
                        out.write(line, 0, line.length());
                        out.newLine();
                    }
                } while (line != null);
                in.close();
                out.close();
            } catch (Exception ex) {
            }
        }
    }
    Here String rootURL is the real file name from which I have to get the data, and that file is in UTF-16LE format. String fileName is the temp file name, and it is a logical one.
    I think I have managed to describe the problem.
    Please, anyone, help me.
    Thanks in advance.

    Hello,
    thanks for your reply...
    I did as per your words using a StreamWriter, but the problem is I need a temp file name to create the writer to write into.
    But that name is a logical one, not a real file, so if I create the StreamWriter on it, it throws a FileNotFoundException.
    The other problem is that the existing code is built using URL, and I can't change all the lines; it is very difficult because it handles a vast amount of data.
    Is there any other way to solve this issue?
    Once again, thanks.
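    One way to sidestep the encoding problem in the copy step is not to decode the data at all and instead copy the raw bytes from the URL to the temp file; the UTF-16LE content (including any BOM) then arrives unchanged. A minimal sketch, assuming the URL and temp file name shown are placeholders:
    {code}
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URL;

    public class CopyRawBytes {
        // Copies the resource at rootURL byte-for-byte into fileName,
        // so the UTF-16LE encoding (and BOM, if any) is preserved.
        static void copyFile(String rootURL, String fileName) throws IOException {
            URL url = new URL(rootURL);
            try (InputStream in = url.openStream();
                 OutputStream out = new FileOutputStream(fileName)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }

        public static void main(String[] args) throws IOException {
            copyFile("file:///tmp/export_utf16le.txt", "/tmp/import_temp.txt");
        }
    }
    {code}
    The actual import can then open the temp file with an InputStreamReader using "UTF-16LE", since the bytes are identical to the source file.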

  • When creating a new table in a SQLite db via Flex it becomes encoded as "utf-16le"

    Hi Guys,
    I have an annoying problem with my AIR application.
    The application is communicating with a local DB (SQLite).
    As part of the initial installation I check whether the db exists.
    If it does not, then:
    I create one (file),
    create the relevant tables inside,
    and populate them.
    For some reason, on the table creation step the SQLite db becomes encoded as UTF-16le instead of UTF-8.
    The question is: how can I make the table creation step leave the db as UTF-8?
    Thanks in advance for your help.
    This is my creation code.
    The "connection" is of type flash.data.SQLConnection.
    The "file" contains the following information:
    <sql>
    <statement>
    CREATE TABLE IF NOT EXISTS MYTABLE (
        MYTABLE_VERSION        NUMBER NOT NULL,
        MYTABLE_INSERT_DATE    DATE NOT NULL
    )
    </statement>
    </sql>
    Below is the relevant code:
    var stream:FileStream = new FileStream();
    stream.open(file, FileMode.READ);
    var xml:XML = XML(stream.readUTFBytes(stream.bytesAvailable));
    stream.close();
    var statement:XML = null;
    try
    {
        connection.begin(lockType);
        for each (statement in xml.statement)
        {
            var stmt:SQLStatement = new SQLStatement();
            stmt.sqlConnection = connection;
            stmt.text = statement;
            stmt.execute();
        }
    }
    catch (err:Error)
    {
        connection.rollback();
        throw err;
    }
    connection.commit();

    It doesn't look like you're using the DBSequence domain for the OrderLinesId attribute. If you are, then you do not need to fill in the sequence as you've done in the create method.
    Getting back to the create issue, you may want to set the 'order' id (foreign key) values before calling super, and then call the getOrder() (or getXXX, where XXX is the order accessor in this entity) method to verify that the order with the given ID exists/is found in the cache.
    By the way, are you also using a similar create() in Order with DBSequence as the type for the PK, and forcing a sequence value on top of it via setAttribute?
    Yes, this is the create method inside CrpOrderLinesImpl.java
    protected void create(AttributeList attributeList) {
        super.create(attributeList);
        SequenceImpl s = new SequenceImpl("CRP_ORDER_LINES_ID_SEQ", getDBTransaction());
        setAttribute("OrderLinesId", s.getSequenceNumber());
    }
    Thanks,
    Brad

  • HT1386 I can no longer see anything in the "Manually Added Songs" field when managing music on my iPhone4.

    I can no longer see anything in the "Manually Added Songs" field when managing music on my iPhone4. why?

    If you are using iTunes Match, your songs are long gone.
    iTunes Match is now a streaming service, with no regards to letting you download individual files.
    Otherwise, I'm sorry for wasting your time.

  • Manually added tracks put my iPhone over capacity. Now I can't sync. Help!

    I have a number of audiobooks I tried to sync to my phone.  I could not find anywhere in the iPhone settings to choose to sync these (anyone know where?).  So I manually added them by dragging them from iTunes onto the iPhone.
    However they are very large files and put my iPhone "over capacity".  Now whenever I try to sync I get an error message saying:
    "The iPhone “Mr Shiney” cannot be synced because there is not enough free space to hold all of the selected items (additional 7.38 GB required)."
    But I can't unselect these files!  They don't appear in the "Manually Added Songs" section of the Music tab.  My phone is stuck in this weird state where I can no longer sync.
    I have tried deleting the songs from iTunes.  But this now just says the songs couldn't be found so I still can't sync.
    Should I restore from backup?  Won't I lose all my app content?  Any ideas what to do?
    My iPhone is a 3G on iOS 4.2.1. iTunes is 10.5.

    Have you touched the "More" button (on the iPhone).
    then gone to Audio Books.
    Are they there?
    I had an audio book still on my iphone, it survivedf the iOS5 update.
    To get rid of it, I plugged iPhone into Mac,
    Go to itunes.
    Click on Iphone in iTunes.
    Go to Books tab,
    Scroll down, there is audio books.
    Choose sync selected audio books,
    And untick the ones I don;t want.
    Dows this work for you?
