Triple-byte Unicode to UTF-16

I need to convert a triple byte unicode value (UTF-8) to UTF-16.
Does anyone have any code to do this? I have tried some code like:
String original = new String("\ue9a1b5");
byte[] utf8Bytes = original.getBytes("UTF8");
byte[] defaultBytes = original.getBytes();
but this does not seem to process the last byte (b5). Also, when I try to convert the hex values to UTF-16, the result is far off.
-Lou

Good question. The answer is: it does.
Oops, sorry, I think I left my brain in the kitchen :)
I was somehow thinking that "hmmm, e is not a hexadecimal digit so that must result in an error"... but of course it is...
Am I representing the triple-byte unicode character wrong? How do I get a 3-byte unicode character into Java (for example, the UTF-16 value 9875)?
It's simply "\u9875".
If you have byte data in UTF-8 encoding, this is what you can do:
try {
    byte[] utf8 = {(byte) 0xE9, (byte) 0xA1, (byte) 0xB5};
    String stringFromUTF8 = new String(utf8, "UTF-8");
} catch (UnsupportedEncodingException uee) {
    // UTF-8 is guaranteed to be supported everywhere
}
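Going the other way, a minimal sketch (the class name is arbitrary) that also answers the original question: U+9875 is a single Java char, and encoding it produces exactly the three bytes E9 A1 B5:
import java.io.UnsupportedEncodingException;

public class TripleByteDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String page = "\u9875";                    // one Java char, not three
        byte[] utf8Bytes = page.getBytes("UTF-8");
        for (byte b : utf8Bytes) {
            System.out.printf("%02X ", b);         // prints: E9 A1 B5
        }
    }
}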

Similar Messages

  • Unicode or UTF-8?

    Hi, all,
    I'm developing a JSP application that will work with international characters, both displaying them on webpages and storing them in a MySQL database. I'm a bit confused about whether I should use Unicode or UTF-8 for those character strings. (I've read up on both of these encodings and they appear to be very similar in many respects.)
    Can anyone give me any suggestions as to which I should use and why?
    Thanks,
    Dmitri.

    UTF-16 uses 2 bytes for all characters.
    UTF-8 generally uses anywhere from 1 to 6 bytes to...
    Actually, UTF-16 uses 16-bit tokens and represents characters with one or more tokens, like all UTF encodings.
    Generally, the encodings 'UTF-N' use N-bit tokens, and encode the 32-bit UNICODE scalar values (character set) with one or more tokens. Typically, the lower values in the encoding represent the UNICODE scalar values directly.
    UNICODE defines 'UTF-8', 'UTF-16', and 'UTF-32', the latter two in big- and little-endian forms as well as self-specifying forms (using initial bytes of a file or stream). The 'UTF-32' encoding just uses the UNICODE scalar values directly.
    There is also 'UTF-7', which is used in MIME encoding to get through 7-bit character sets. There are also unofficial 'UTF-6' etc. for specialist use.
    UTF-8 has the advantage that its multi-byte sequences do not contain null (zero-valued) bytes, which means that it works transparently with code expecting to see one-byte characters (assuming the legacy code doesn't try to manipulate multi-byte characters!).
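    As a quick Java illustration of those size differences, here is a minimal sketch (the class name is arbitrary; the byte counts follow directly from the encoding definitions):
    import java.io.UnsupportedEncodingException;

    public class EncodingSizes {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String[] samples = {"A", "\u00e9", "\u9875"};   // ASCII, accented Latin, CJK
            for (String s : samples) {
                System.out.println("U+" + Integer.toHexString(s.charAt(0)).toUpperCase()
                    + ": UTF-8 = " + s.getBytes("UTF-8").length
                    + " bytes, UTF-16BE = " + s.getBytes("UTF-16BE").length + " bytes");
            }
        }
    }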

  • Conversion of byte array in UTF-8 to GSM character set.

    I want to convert a byte array in UTF-8 to the GSM character set. Please advise how I can do that.

    String s = new String(byteArrayInUTF8, "utf-8");
    This will convert your byte array to a Java UNICODE UTF-16 encoded String, on the assumption that the byte array represents characters encoded as utf-8.
    I don't understand what GSM characters are so someone else will have to help you with the second part.
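    For the second part, assuming a charset provider for the GSM 03.38 alphabet is installed (the JDK does not ship one, and the charset name "X-Gsm7Bit" below is purely hypothetical), the encode step would mirror the decode step:
    String s = new String(byteArrayInUTF8, "UTF-8");  // decode: UTF-8 bytes -> String
    byte[] gsmBytes = s.getBytes("X-Gsm7Bit");        // encode: String -> GSM bytes (hypothetical charset name)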

  • Truncate byte array of UTF-8 characters without corrupting the data?

    Hi all,
    I need to be able to determine if a byte array, truncated from the original byte array representing a UTF-8 string, contains a corrupted character. Knowing whether the byte array contains a corrupted character allows me to remove it from the truncated array.
    As in the sample code below, truncating the string to 16 bytes displays OK. However, truncating to 17 bytes corrupts the last character. Is there a way to check whether the character is corrupted so that it can be removed from the truncated byte array?
    Thanks in advance,
    Phuong
    PS: The Japanese characters I chose it randomly from Unicode charts. I don't know their meaning so if it is offensive, please forgive me.
    import java.awt.BorderLayout;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.io.UnsupportedEncodingException;
    import javax.swing.BoxLayout;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JPanel;
    import javax.swing.JScrollPane;
    import javax.swing.JTextArea;
    import javax.swing.SwingUtilities;

    public class TestTruncateUTF8 extends JFrame {
        private static final long serialVersionUID = 1L;
        private JTextArea textArea = new JTextArea(5, 20);
        private JLabel japanese = new JLabel("Japanese: " + "\u65e5\u672c\u3041\u3086\u308c\u306e");

        /**
         * @param args
         */
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                @Override
                public void run() {
                    JFrame frame = new TestTruncateUTF8();
                    frame.setVisible(true);
                }
            });
        }

        public TestTruncateUTF8() {
            super("Test Truncated");
            JButton truncate17Button = new JButton("Truncate 17 bytes");
            truncate17Button.addActionListener(new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    truncates(17);
                }
            });
            JButton truncate16Button = new JButton("Truncate 16 bytes");
            truncate16Button.addActionListener(new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    truncates(16);
                }
            });
            JPanel panel1 = new JPanel();
            panel1.setLayout(new BoxLayout(panel1, BoxLayout.Y_AXIS));
            panel1.add(japanese);
            panel1.add(truncate16Button);
            panel1.add(truncate17Button);
            panel1.add(new JScrollPane(textArea));
            this.setLayout(new BorderLayout());
            this.add(panel1, BorderLayout.CENTER);
            this.pack();
            this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        }

        private void truncates(int numOfBytesToTruncate) {
            try {
                byte[] bytes = japanese.getText().getBytes("UTF-8");
                byte[] newBytes = new byte[numOfBytesToTruncate];
                System.arraycopy(bytes, 0, newBytes, 0, numOfBytesToTruncate);
                TestTruncateUTF8.this.putTextInsideJTextArea(bytes, newBytes);
            } catch (UnsupportedEncodingException e1) {
                e1.printStackTrace();
            }
        }

        private void putTextInsideJTextArea(byte[] original, byte[] truncated) {
            try {
                textArea.append("\nOriginal String:  " + new String(original, "UTF-8"));
                textArea.append("\nTruncated String: " + new String(truncated, "UTF-8"));
                textArea.append("\n*****************************\n");
            } catch (UnsupportedEncodingException e) {
                e.printStackTrace();
            }
        }
    }

    Since the byte array is in UTF-8, you can easily examine whether it is corrupt or not by taking a look at the last 4 bytes (at most). That is because the bit distribution of each byte (1st, 2nd, 3rd, and 4th) in UTF-8 encoding is well defined in its spec.
    BTW, a Japanese Hiragana/Kanji character typically takes 3 bytes in UTF-8, so truncating at 16 or 17 bytes will not necessarily produce a correct truncation.
    HTH,
    Naoto
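    Following Naoto's description, here is a minimal sketch of such a check, assuming the bytes up to the cut are well-formed UTF-8 (the lead-byte bit patterns come straight from the UTF-8 spec):
    // Returns a copy of 'bytes' with any incomplete trailing UTF-8 sequence removed.
    static byte[] trimIncompleteUtf8(byte[] bytes) {
        int end = bytes.length;
        int i = end - 1;
        // Walk back over trailing continuation bytes (bit pattern 10xxxxxx).
        while (i >= 0 && (bytes[i] & 0xC0) == 0x80) i--;
        if (i >= 0 && (bytes[i] & 0x80) != 0) {
            // bytes[i] is a lead byte; its top bits say how long the sequence must be.
            int needed = (bytes[i] & 0xE0) == 0xC0 ? 2
                       : (bytes[i] & 0xF0) == 0xE0 ? 3
                       : (bytes[i] & 0xF8) == 0xF0 ? 4 : 1;
            if (end - i < needed) end = i;  // the sequence was cut off: drop it
        }
        byte[] result = new byte[end];
        System.arraycopy(bytes, 0, result, 0, end);
        return result;
    }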

  • Japanese characters, outputstreamwriter, unicode to utf-8

    Hello,
    I have a problem with OutputStreamWriter's encoding of japanese characters into utf-8...if you have any ideas please let me know! This is what is going on:
    static public String convert2UTF8(String iso2022Str) {
       String utf8Str = "";
       try {
          // convert string to byte array stream
          ByteArrayInputStream is = new ByteArrayInputStream(iso2022Str.getBytes());
          ByteArrayOutputStream os = new ByteArrayOutputStream();
          // decode iso2022Str byte stream with iso-2022-jp
          InputStreamReader in = new InputStreamReader(is, "ISO2022JP");
          // reencode to utf-8
          OutputStreamWriter out = new OutputStreamWriter(os, "UTF-8");
          // get each character c from the input stream (will be in unicode) and write to output stream
          int c;
          while ((c = in.read()) != -1) out.write(c);
          out.flush();
          // get the utf-8 encoded output byte stream as string
          utf8Str = os.toString();
          is.close();
          os.close();
          in.close();
          out.close();
       } catch (UnsupportedEncodingException e1) {
          return e1.toString();
       } catch (IOException e2) {
          return e2.toString();
       }
       return utf8Str;
    }
    I am passing a string received from a database query to this function, and the string it returns is saved in an XML file. Opening the XML file in my browser, some Japanese characters are converted, but some, particularly hiragana characters, come up as ???. For example:
    屋台骨田家は時間目離れ拠り所那覇市矢田亜希子ナタハアサカラマ楢葉さマヤア
    shows up as this:
    屋�?�骨田家�?�時間目離れ拠り所那覇市矢田亜希�?ナタ�?アサカラマ楢葉�?�マヤア
    (sorry that's absolute nonsense in Japanese but it was just an example)
    To note:
    - i am specifying the utf-8 encoding in my xml header
    - my OS, browser, etc... everything is set to support japanese characters (to the best of my knowledge)
    Also, I ran a test with a string, looking at its characters' hex values at several points and comparing them with iso-2022-jp, unicode, and utf-8 mapping tables. Basically:
    - if I don't use this function at all...write the original iso-2022-jp string to an xml file...it IS iso-2022-jp
    - I also looked at the hex values of "c" being read from the InputStreamReader here:
    while ((c = in.read()) != -1) out.write(c);
    and have verified (using a character value mapping table) that, in a problem string, all characters are still being properly converted from iso-2022-jp to unicode
    - I checked another table (http://www.utf8-chartable.de/) for the unicode values received and all of them have valid mappings to a utf-8 value
    So it appears that when characters are written to the OutputStreamWriter, not all characters can be mapped from Unicode to utf-8 even though their Unicode values are correct and there should be utf-8 equivalents. Instead they are converted to (hex value) EF BF BD 3F EF BF BD which from my understanding is utf-8 for "I don't know what to do with this one".
    The characters that are not working: most hiragana (though not all) and a few kanji characters. I have yet to find a pattern/relationship between the characters that cannot be converted.
    If I am missing something, or someone has a clue... oh, and I am developing in Eclipse but really don't have a clue about it beyond setting up a project, editing it and hitting build/run. Is it possible that I may have missed some needed configuration?
    Thank you!!

    It's worse than that, Rene; the OP is trying to create a UTF-8 encoded string from a (supposedly) iso-2022 encoded string. The whole method would be just an expensive no-op if it weren't for this line:
    utf8Str = os.toString();
    That converts the (apparently valid) UTF-8 encoded byte array to a string, using the system default encoding (which seems to be iso-2022-jp, BTW). Result: garbage.
    @meggomyeggo, many people make this kind of mistake when they first start dealing with encodings and charset conversions. Until you gain a good understanding of these matters, a few rules of thumb will help steer you away from frustrating dead ends.
    * Never do charset conversions within your application. Only do them when you're communicating with an external entity like a filesystem, a socket, etc. (i.e., when you create your InputStreamReaders and OutputStreamWriters).
    * Forget that the String/byte[] conversion methods (new String(byte[]), getBytes(), etc.) exist. The same advice applies to the ByteArray[Input/Output]Stream classes.
    * You don't need to know how Java strings are encoded. All you need to know is that they always use the same encoding, so phrases like "iso-2022-jp string" or "UTF-8 string" (or even "UTF-16 string") are meaningless and misleading. Streams and byte arrays have encodings, strings do not.
    You will of course run into situations where one or more of these rules don't apply. Hopefully, by then you'll understand why they don't apply.
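    Applied to the code above, a minimal sketch of the corrected flow (the method and file names are made up for illustration): the string coming back from the database is already just a Java string, so the only encoding step happens where the bytes leave the program:
    import java.io.*;

    public class XmlWriterSketch {
        // Encode exactly once, at the boundary: String -> UTF-8 bytes in the file.
        static void writeAsUtf8(String fromDatabase, File xmlFile) throws IOException {
            Writer out = new OutputStreamWriter(new FileOutputStream(xmlFile), "UTF-8");
            out.write(fromDatabase);
            out.close();
        }
    }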

  • Perform unicode to UTF-8 conversion on F110 bacs payment file in ABAP

    Hi,
    I am facing a conversion issue for the UK BACS payment files.
    The payment run tcode F110 creates a payment file, but the file, when created on the application server, has some sort of code conversion. If I remove the # values, I can read most of the data.
    The data example is as below-
    #V#O#L#1#0#0#1#5#8#8# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #2#4#3#3#0#9#
    #H#D#R#1#A#2#4#3#3#0#9#S# # #1#2#4#3#3#0#9#0#0#0#0#0#2#0#0#0#1#0#0#0#1# # # # # # # #1#0#1#1#2#
    #H#D#R#2#F#0#2#0#0#0#0#0#1#0#0# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #U#H#L#1# #1#0#1#1#3#9#9#9#9#9#9# # # # #0#0#0#0#0#0#0#0#1# #D#A#I#L#Y# # #0#0#0# # # # # # # #
    This is then transferred to the bank via the FTP UNIX script, but only after the conversion, which happens as:
    #Perform unicode to UTF-8 conversion on bacs file
    $a = "iconv -f UNICODE -t UTF-8 $tmpUNI > $tmpASC";
    The need going forward is to bring the details via the interface and then make an upload.
    The ABAP code should be able to make the conversion, remove the additional characters and then send the file across.
    I have searched everywhere, but I am not able to find out how to make the same conversion in ABAP.
    We are on ECC6.
    Can someone please help me?
    Regards,
    Archana

    Hi Archana,
    can you please check SAP notes 1064779 and 1365764 (including the attachment) and see if this helps you?
    Best regards,
    Nils Buerckel
    SAP AG

  • Handling Multi-byte/Unicode (Japanese) characters in Oracle Database

    Hello,
    How do I handle the Japanase characters with Oracle database?
    I have a Java application which retrieves some values from the database; makes some changes to these [ex: change value of status column, add comments to Varchar2 column, etc] and then performs an UPDATE back to the database.
    Everything works fine for English. But NOT for the Japanese language, which uses multi-byte/Unicode characters. The Japanese characters are garbled after performing the database UPDATE.
    I verified that Java by default uses UTF16 encoding. So there shouldn't be any problem with Java/JDBC.
    What do I need to change at #1- Oracle (Database) side or #2- at the OS (Linux) side?
    I tried changing the NLS_LANG value in the OS and the NLS_SESSION_PARAMETERS settings in the database and tried a test insert from SQL*Plus. But SQL*Plus converts all Japanese characters to a question mark (?), so I could not test it via SQL*Plus on my XP (English) edition.
    Any help will be really appreciated.
    Thanks

    Hello Sergiusz,
    Here are the values before & after Update:
    --BEFORE update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,65,74,61,6c,69,6e,6b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    --AFTER Update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,45,54,41,4c,49,4e,4b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    So the values BEFORE & AFTER Update are the same!
    The problem is that sometimes, the Japanese data in VARCHAR2 (abstract) column gets corrupted. What could be the problem here? Any clues?

  • Character encoding (unicode to utf-8) conversion problem

    I have run into a problem that I can't seem to find a solution to.
    My users are copying and pasting from MS-Word. My DB is Oracle with its encoding set to "UTF-8".
    Using Oracle's thin driver, it automatically converts to the DB's default character set.
    When Java tries to encode Unicode to UTF-8 and it runs into an unknown character (typically a character in the high-ASCII range), it substitutes it with '?' or some other weird character.
    How do I prevent this?

    my users are copying and pasting from MS-Word. My DB is Oracle with its encoding set to "UTF-8".
    Pasting where? Into the database? If they are pasting into the database (however they might do that) and getting bad results, then that's nothing to do with Java.
    Using Oracle's thin driver it automatically converts to the DB's default character set.
    Okay, I will assume that is correct.
    When Java tries to encode Unicode to UTF-8 and it runs into an unknown character (typically a character that is in the High Ascii range) it substitutes it with '?' or some other weird character.
    This is false. When converting from Unicode to UTF-8 there are no "unknown characters". I don't know what you mean by the "High Ascii range", but if your users are pasting MS stuff into your Java program somehow, then a conversion from something into Unicode is done at that time. If "something" isn't the right encoding, then you have the problems already, before you try to write to the DB.
    How do I prevent this.
    First identify the problem. You have input coming from somewhere, then you are writing to the database. Two different steps. Either of them could have a problem. Test them separately so you know which one is the problem.
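    A quick way to convince yourself that the Unicode-to-UTF-8 leg is lossless is a round trip; every Java string survives it unchanged:
    import java.io.UnsupportedEncodingException;

    public class RoundTrip {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String s = "caf\u00e9 \u65e5\u672c\u8a9e";           // mixed Latin and CJK
            String back = new String(s.getBytes("UTF-8"), "UTF-8");
            System.out.println(s.equals(back));                  // true: no "unknown characters"
        }
    }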

  • VIM Plugin VJDE, Ruby Error: invalid byte sequence in UTF-8

    Hello
    I'm trying to install the vim VJDE Plugin for java syntax highlighting.
    wget tarball
    tar xvzf tarball
    makepkg -s
    pacman -U ...
    No problems here.
    When I run vim foo.java it shows me this message:
    Error detected while processing /usr/share/vim/vim73/plugin/vjde/vjde_template.vim:
    line   18:
    ArgumentError: invalid byte sequence in UTF-8
    Code on line 18:
    ruby Vjde::VjdeTemplateManager.load_all(VIM::evaluate('g:vjde_install_path'))
    So... I'm no Ruby programmer, but I don't see any non-UTF-8 character in it.
    When I comment it out, the error does not show.
    I couldn't google anything about it. Maybe it's just a bug in the current version of Ruby.
    Would be nice if anyone can help me.
    Regards, Archdove
    Last edited by Archdove (2011-09-23 18:21:35)

    Hi,
    It's an encoding problem. I wrote to the author about this problem. He uses a UTF-8 locale, but some files have an unrecognized encoding. Enconv can't convert them to UTF-8.
    $ find -type f -a ! -name readtags -a ! -name '*.class' -a ! -name '*.jar' | xargs enconv
    enconv: Cannot convert `./src/previewwindow.cpp' from unknown encoding
    enconv: Cannot convert `./src/wspawn.cpp' from unknown encoding
    enconv: Cannot convert `./src/tipswnd.lex' from unknown encoding
    enconv: Cannot convert `./src/vjde/completion/ClassInfo.java' from unknown encoding
    enconv: Cannot convert `./src/vjde/completion/Completion.java' from unknown encoding
    enconv: Cannot convert `./src/tipswnd.c' from unknown encoding
    enconv: Cannot convert `./plugin/vjde/vjde_java_completion.vim' from unknown encoding
    enconv: Cannot convert `./plugin/vjde/project.vim' from unknown encoding
    enconv: Cannot convert `./plugin/vjde/vjde_tag_loader.vim' from unknown encoding
    enconv: Cannot convert `./plugin/vjde/tlds/java.vjde' from unknown encoding
    I'm looking into how to convert to utf8. Try opening a file, e.g. src/previewwindow.cpp, in vim with fencs=gbk,utf8,default. Vim detects fenc cp936. Line 644 contains Chinese characters(?): /* 另一个回调函数 */ ("another callback function").
    Any idea?

  • Sql Plus and Unicode (or utf-8) characters.

    Hello,
    I have a problem with SQL*Plus and Unicode files. I want to execute START {filename}, where {filename} is a file in Unicode format (the file has to contain German and Polish characters). But I receive an error message. Is it possible to read from a Unicode (or UTF-8) file and execute commands from it?
    Thanks in advance.
    Pawel Przybyla

    What is your client operating system characterset?

  • GUI_UPLOAD and UNICODE or UTF-8

    Hello,
    I wanted to convert some files in SAP, i.e.:
    - data is processed in MS Excel and saved as a Unicode file
    - I read the file into SAP, process the strings and write it back
    The problem is that there are some special Polish characters in the file. I read the file using the GUI_UPLOAD FM, but I didn't manage to avoid having these special characters replaced (by # by default).
    Anyone came across this problem before?
    Regards,
    Michal

    Hello Satya,
    Actually, I forgot to write that I used the CODEPAGE parameter... I tried two cases:
    1. Save as Unicode (in Excel or Notepad) and then use CODEPAGE = "4103"
    2. Save as UTF-8 in Notepad and use CODEPAGE = "4110"
    In both cases the Polish characters are replaced by '#'.
    In fact, I tried many other code pages (all from tcp00a), but all other combinations return an error.
    Regards,
    Michal

  • The use of CL_ABAP_CONV_OUT_CE to create an unicode-16 (UTF-16) file

    Hello,
    I have to create a file with normal text in UTF-16 format. In ABAP the creation of a UTF-8 file is very easy (open dataset for output in UTF-8).
    However, UTF-16 is barely documented, and the normal open dataset does not support UTF-16.
    The only thing I could find out is that you have to use class CL_ABAP_CONV_OUT_CE for it and open the file as BINARY.
    But I don't know how to do it. Could someone help? A small example would be perfect.
    Thanks in advance.
    Regards, Frank

    Hi,
    Please check this piece of code:
    DATA conv TYPE REF TO cl_abap_conv_in_ce.
    DATA buffer(4) TYPE x.
    DATA text(100) TYPE c.

    buffer = '41424344'.
    conv = cl_abap_conv_in_ce=>create( encoding = 'UTF-8' ).
    conv->convert(
      EXPORTING input = buffer
      IMPORTING data  = text ).
    WRITE: / text.

    Example for class cl_abap_conv_out_ce:
    DATA: text(100) TYPE c VALUE 'ABCD',
          conv   TYPE REF TO cl_abap_conv_out_ce,
          buffer TYPE xstring.

    conv = cl_abap_conv_out_ce=>create( encoding = 'UTF-8'
                                        endian   = 'L' ).
    CALL METHOD conv->write( data = text n = 4 ).
    buffer = conv->get_buffer( ).
    WRITE: / buffer.
    Also
    you do not need to replace TRANSLATE ... TO UPPER/LOWER CASE in Unicode systems.
    You just need to take care that the arguments fit:
    The arguments of these instructions must be single fields of type C, N, D, T or STRING or structures of character-type only.
    Regards
    Hiren K.Chitalia

  • Converting Unicode to UTF-8 character set through Oracle Forms (10g)

    Hi,
    I am working on Oracle Forms (10g), where I need to load files containing a Unicode character set (multilingual characters) into the database,
    but while loading the file, junk characters are getting inserted into the database tables.
    While reading the file through Forms, I am using the utl_file.fopen_nchar and utl_file.get_line_nchar functions to read the Unicode characters...
    The application server and database server charactersets are set to the American UTF8 characterset.
    In fact, when I change the text file characterset to UTF-8 through an editor (Notepad++, etc.), the data is getting inserted into the database properly (at least it works for English characters), but not with Unicode...
    Any guidance in this regard are highly appreciated
    Thank you in advance
    Sanu

    Hi,
    please check out the following link:
    http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
    sarah

  • Java text to unicode or UTF-16?

    Hi all,
    I get a string inputted by the user, say &#12345;. I want to add this to an internal string, but as a Unicode-formatted string. Otherwise, when I write this string out using an XML formatter, it encodes the ampersand character as &amp;
    Does anybody know how to do this? Is there a text -> unicode class or method available?
    thanks,
    Justin

    String str = "&#12345;";
    char c = (char) Integer.parseInt(str.substring(str.indexOf('#') + 1, str.lastIndexOf(';')));
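    One caveat: a single char only covers the BMP. For code points above U+FFFF the same idea needs Character.toChars, as in this sketch (the helper name is made up):
    // Decode a decimal numeric character reference such as "&#12345;" into a String.
    // Character.toChars yields a surrogate pair when the code point is above U+FFFF.
    static String decodeDecimalRef(String ref) {
        int codePoint = Integer.parseInt(ref.substring(ref.indexOf('#') + 1, ref.lastIndexOf(';')));
        return new String(Character.toChars(codePoint));
    }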

  • Unicode characters longer than 2 bytes

    It seems that Flex 3 only handles double-byte Unicode characters.  Unicode has characters outside the BMP (Basic Multilingual Plane), which have codes of 2^16 or greater and cannot be encoded in two bytes, but can be encoded in UTF-8.  Will such characters be supported in the future, e.g. in Flex 4?
    Thanks,
    Francisco

    How to tell whether a "character" (really a UTF-16 code unit) in an AS String is actually part of a surrogate pair:
    D800..DBFF: high surrogate
    DC00..DFFF: low surrogate
    everything else: a character
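    That answer is about ActionScript, but the same ranges apply to Java's UTF-16 strings; a minimal sketch of the classification:
    // Classify a UTF-16 code unit using the surrogate ranges above.
    static String classify(char c) {
        if (c >= '\uD800' && c <= '\uDBFF') return "high surrogate";
        if (c >= '\uDC00' && c <= '\uDFFF') return "low surrogate";
        return "character";
    }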
