How to preserve 2-byte characters when using "Edit Link"?

Keynote 6.2.2 removes any 2-byte characters present in an object's hyperlink.
The previous version did not have this problem.
Since internationalized domain names are becoming popular, I hope Apple fixes the bug soon.


Similar Messages

  • How to display double byte characters with system.out.print?

    Hi, I'm a newbie Java programmer having trouble using Java locales with system I/O in DOS console mode.
    Platform is WinXP, JVM 1.5.
    File structure is:
    C:\myProg <-root
    C:\myProg\test <-package
    C:\myProg\test\Run.java
    C:\myProg\test\MessageBundle.properties <- default properties
    C:\myProg\test\MessageBundle_zh_HK.properties <- localized properties (written in Notepad and saved as Unicode; Windows Notepad adds a BOM)
    inside MessageBundle.properties:
    test = Hello
    inside MessageBundle_zh_HK.properties:
    test = 喂 //hello in Big5 encoding
    run.java:
    package test;
    import java.util.*;
    public class Run{
      public static void main(String[] args){
        Locale locale = new Locale("zh","HK");
        ResourceBundle resource =
            ResourceBundle.getBundle("test.MessageBundle", locale);  // note the capital B in getBundle
        System.out.println(resource.getString("test"));
      }//main
    }//class
    When I run this program, it keeps displaying "hello" instead of the localized character...
    Then, when I try running the native2ascii tool against MessageBundle_zh_HK.properties, it starts displaying garbage characters instead.
    I'm trying to figure out what I did wrong and how to display double-byte characters on the console.
    Thank you.
    P.S.: While googling, some said DOS can only display ASCII. To demonstrate that the DOS console is capable of displaying double-byte characters, I wrote another hello-world in Chinese using Notepad, in C#, and compiled it with "csc hello.cs"; sure enough, Console.Write in C# let me display the character I was expecting. Since the DOS console can print double-byte characters, I must be missing something important in this Java program.

    After googling a bunch, I learned that javac (and hence java.exe) does not support a BOM (byte order mark).
    I had to use a different editor to save my text file as Unicode without a BOM in order for native2ascii to convert it into an ASCII file.
    Even with the properties file in ASCII format, I'm still having trouble displaying those characters in the DOS console. In fact, I just noticed that I can use System.out.println to display double-byte characters if I embed the characters themselves in the Java source file:
    import java.io.UnsupportedEncodingException;
    import java.util.Locale;
    import java.util.ResourceBundle;
    public class Run {
        public static void main(String[] args) {
            String msg = "中文";    // double-byte characters
            try {
                System.out.println(new String(msg.getBytes("UTF-8"), "UTF-8") + " new string");  // this displays fine
            } catch (UnsupportedEncodingException e) { /* UTF-8 is always supported */ }
            Locale locale = new Locale("zh", "HK");
            ResourceBundle resource = ResourceBundle.getBundle("test.MessagesBundle", locale);
            System.out.println(resource.getString("Hey"));      // this displays weird characters
        }
    }
    So it seems to me that I must have done something wrong in the process of creating the properties file from the Unicode text file...
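    For what it's worth, the round trip that native2ascii and the properties loader perform can be sketched in plain Java. This is only an illustration (the class name EscapeDemo and the helper toAscii are made up): non-ASCII characters are escaped to \uXXXX, and Properties.load() decodes them back.

```java
import java.io.StringReader;
import java.util.Properties;

public class EscapeDemo {
    // Mimic what native2ascii does: escape every non-ASCII char as \uXXXX
    public static String toAscii(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c < 128) sb.append(c);
            else sb.append(String.format("\\u%04x", (int) c));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String escaped = toAscii("Hey = 中文");   // "Hey = \u4e2d\u6587"
        // Properties.load() decodes the \uXXXX escapes back to the original chars
        Properties p = new Properties();
        p.load(new StringReader(escaped));
        System.out.println(p.getProperty("Hey"));  // the value decodes back to 中文
    }
}
```

    If this round trip works but the console still shows garbage, the remaining problem is the console's own code page, not the properties file.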

  • How to check double byte characters

    Hi
    My requirement: I have to accept a string (which may include double-byte characters and special characters) and check whether the string contains any special characters (like %, &, ...); if so, I should display an error message.
    My solution: At first I tried using the ASCII values, but my code is dividing the double-byte characters into two characters.
    Code:
    package JNDI;
    public class CharASCIIValues {
         public static void main(String[] args) {
              String s = args[0];
              char ch[] = s.toCharArray();
              for (int i = 0; i < ch.length; i++) {      // was i <= ch.length: off-by-one
                   System.out.println(" " + ch[i] + "=" + (int) ch[i]);   // was (int) ch
              }
         }
    }
    I ran with some double characters (japanese)
    But the output I got was: ?=63 ?=63 ?=63 ?=63 ?=63 ?=63 1=49 2=50 3=51 h=104 e=101 l=108 l=108 o=111
    The ?s are the double-byte characters.
    Queries:
    Do I need to change any Java settings to support double-byte characters?
    Please help me get past this problem... any help/information will be appreciated.

    First of all, Java strings are encoded in UTF-16, which uses 2 bytes per code unit; that is, the char datatype is equivalent to an unsigned short.
    Second, what exactly do you mean by double byte? Whether a character ends up encoded in two bytes or not depends on the encoding used (UTF-8, UTF-16 (both Unicode), Big5, GB2312 (both Chinese), ISO-8859-1 (Latin-1), ASCII, etc.). This means that there are no "double-byte characters" as such; there are only "characters that take two bytes when encoded in <your encoding>".
    Where does your string come from? What encoding are you using to read the string in the first place? Are you sure you are creating the string using the right encoding?
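    Following the replier's point, a check like this is easier over code points than over bytes: CJK letters then pass while ASCII punctuation is rejected. A minimal sketch (the class name SpecialCharCheck and the particular rejected set "%&<>$" are illustrative assumptions, not from the thread):

```java
public class SpecialCharCheck {
    // Reject a handful of ASCII special characters; letters and digits in any
    // script (including CJK) are allowed. The rejected set here is an example.
    public static boolean hasSpecial(String s) {
        return s.codePoints().anyMatch(cp -> "%&<>$".indexOf(cp) >= 0);
    }

    public static void main(String[] args) {
        System.out.println(hasSpecial("日本語123hello")); // false
        System.out.println(hasSpecial("abc%def"));       // true
    }
}
```

    Because the check works on the decoded String, it never splits a character into bytes; the ?=63 output in the question comes from decoding the input with the wrong charset before the check even runs.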

  • How do you add a Comment when editing document in Word for ipad?

    How do you add a Comment when editing a document in Word for ipad. I've subscribed to Office, so have full editing functions. Can Track changes etc in Review, but can't see how to add a marginal Comment.

    Comments, as they exist in Numbers for OS X, are not really supported in iOS. Normally they rely on a mouse-over to read them; since there is no mouse on the iPad, it stands to reason they are not there.
    I believe Eric Ross's comments are more geared toward using said Numbers docs on iWork.com.

  • How to maintain good photo resolution when editing

    When editing photographs, does anyone have a trick for keeping them at their highest resolution for output to HiDef DVD?

    Start with a High Definition codec such as DVCProHD and keep it there from output to encoding.
    BTW, what HD player are you going to play it on?

  • How far does iPhoto preserve the RAW file when editing?

    I'm trying to get a better understanding of how iPhoto handles RAW photos and hope you'll answer a humble, possibly misguided question:
    How far through the editing process does iPhoto preserve RAW data?
    Here's what I mean by this: I understand that iPhoto cannot "edit" a RAW file as such, but there are a couple of ways it can look as if it does. Let's take a WB adjustment as an example.
    Option 1: "Convert on import." iPhoto converts the RAW to a JPG when I import the file (or begin to edit it). Then adjusting the WB will be a filter applied to a JPG file (meaning there wasn't much advantage to shooting RAWs in the first place)!
    Option 2: "Fully RAW." iPhoto imports the RAW file and then "processes" it whenever you display or print it. When you edit it (e.g., adjusting the white balance), it keeps track of the new parameters (e.g., WB, or contrast, or cropping, etc.) and then uses these parameters the next time it processes the RAW for display. This would mean, for example, that there would be no difference in quality between
    A. Taking a freshly-imported RAW and setting the white balance to X, and
    B. Taking a freshly-imported RAW, setting the white balance to Y, closing iPhoto, re-opening iPhoto then editing the picture again and changing the WB to X.
    Option 3: "Somewhere in-between." Files are imported as RAW, but then converted to JPG at the end of the first editing. Subsequent edits are treated as JPG edits (including the resulting loss in image quality with each edit).
    Can anyone clarify this? How does the answer differ for different types of edits (e.g., WB vs. image rotation vs., expanding the highlights, etc.)? Is the answer different for Aperture vs. iPhoto?
    Thanks!
    -Mike

    How about a 4th option... none of the above.
    Option 3 is closest. As the Help says:
    When you edit a RAW-format photo and click Done, iPhoto applies your changes to the RAW image data and stores the data as a JPEG file (the original RAW file remains unchanged).
    It can also be saved as a TIFF if you prefer.
    Here's where it's different from your option 3:
    Subsequent edits are treated as JPG edits (including the resulting loss in image quality with each edit)
    There is no loss of image quality with each edit because of iPhoto's _[Non-Destructive Editing Feature|http://docs.info.apple.com/article.html?path=iPhoto/7.0/en/11464.html]_ .
    You can, of course, re-process the RAWs if you choose. This will bring you right back to the Original file.
    Aperture is quite different. It works like your option 2.
    Regards
    TD

  • How to support non alphanumeric characters when using WORLD_LEXER?

    BASIC_LEXER has a printjoins attribute through which we can declare non-alphanumeric characters to be treated as normal alphanumeric characters in queries and included with the token. However, WORLD_LEXER doesn't have this attribute. So in order to use some non-alphanumeric characters, such as ><$, and have them treated as alphanumeric characters in an Oracle Text index, what should I do?
    Thanks in advance for any help.

    I use WORLD_LEXER to create Oracle Text Index to support UTF-8.
    Below is the script to create table and index:
    REM Create table
    CREATE TABLE my_test
    ( id VARCHAR2(32 BYTE) NOT NULL,
    code VARCHAR2(100 BYTE) NOT NULL,
         CONSTRAINT "my_test_pk" PRIMARY KEY ("id"));
    REM create index
    exec ctx_ddl.create_preference('stars_lexer','WORLD_LEXER');
    exec ctx_ddl.create_preference('stars_wordlist', 'BASIC_WORDLIST');
    exec ctx_ddl.set_attribute('stars_wordlist','substring_index','TRUE');
    exec ctx_ddl.set_attribute('stars_wordlist','PREFIX_INDEX','TRUE');
    -- create index for Table corrosion level
    CREATE INDEX my_test_index
    ON my_test(code)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS ('LEXER stars_lexer STOPLIST stars_stop WORDLIST stars_wordlist SYNC(EVERY "SYSDATE+5/1440") TRANSACTIONAL');
    INSERT INTO my_test (id, code) VALUES ('1', 'English word1');
    INSERT INTO my_test (id, code) VALUES ('2', '违违');
    INSERT INTO my_test (id, code) VALUES ('3', '违违&^|违违');
    When I query:
    select * from corrosion_levels r where contains(r.CORROSION_LEVEL, '{%违${|违%}') > 0
    ID CODE
    3 违违$(|违违
    2 违违
    Actually, the result what I want is: 3 违违$(|违违
    So the requirement is that all non-alphanumeric characters should be treated as normal alphanumeric characters. Please tell me how to implement this.

  • Special characters when editing

    In a text area that others will be editing, I've used &#8209; for a non-breaking hyphen in the original source created in Dreamweaver. I'm wondering how an editor will enter a non-breaking hyphen. I use a Mac, but I've read that on a PC it can be entered with Alt+2011. Will that hold through future loads and saves of that page? I've seen what InContext does to the code when a page is edited and saved. It seems it's just rearranged, but I wonder whether those special characters will hold.

    I solved my problem with SAP Help.
    They sent me: check the charset settings on the client and the J2EE engine as follows.
    On the client side: Internet Explorer menu > Tools -> Internet Options -> Advanced;
    please select 'Always send URLs as UTF-8'.
    On the J2EE engine: it is important to check that the parameter exists for all server nodes in the Config Tool:
    -Dhtmlb.useUTF8=X
    -Dfile.encoding=ISO-8859-1
    -Dfile.encoding=UTF8

  • How to convert muti-byte characters from US7ASCII database to UTF-8

    Hi Guys,
    We have a source database with a character set of US7ASCII and our target database has a character set of UTF-8. We have the "©" symbol in the source database, and when we insert this value into our target database it is converted to "¿".
    How can I make sure that the "©" symbol is inserted in the target database? Both databases are on version 10.2 but have different character sets. The Oracle documentation mentions that this happens if the target database character set is not a superset of the source database character set, but in our case UTF-8 is a superset of US7ASCII.
    Thanks,
    Ramu Kalvakuntla
    Edited by: user11905624 on Sep 15, 2009 2:58 PM

    user11905624 wrote:
    When I tried DUMP(column, 1016), this is what I got:
    Typ=96 Len=1 CharacterSet=US7ASCII: a9
    Considering the 7-bit ASCII standard character set, the code 0xA9 is invalid.
    This has likely happened due to a pass-through scenario. See [NLS_LANG FAQ|http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm] (example of wrong setup). E.g. the Windows 125x code pages all define a 'copyright sign' character with encoding A9.
    If proper character set conversion took place, I would expect the (illegal) codes 0x80-FF to be caught and converted to the replacement character (like U+FFFD).
    Going back to the issue, how exactly are you transferring data, or retrieving and inserting it, from the source to the target database?
    Edited by: orafad on Sep 17, 2009 10:56 PM
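    The effect the replier describes can be sketched in Java (the class name CopyrightDemo and the helper decode are made up for illustration): the same stored byte 0xA9 is either invalid or the copyright sign, depending on which character set you decode it with.

```java
import java.nio.charset.Charset;

public class CopyrightDemo {
    // Decode raw column bytes with an explicit character set
    public static String decode(byte[] bytes, String charsetName) {
        return new String(bytes, Charset.forName(charsetName));
    }

    public static void main(String[] args) {
        byte[] raw = { (byte) 0xA9 };  // the byte DUMP found in the US7ASCII column

        // Strict 7-bit ASCII has no 0xA9, so the decoder substitutes U+FFFD
        System.out.println(decode(raw, "US-ASCII"));     // the replacement character

        // Decoded as windows-1252 (a typical pass-through client encoding),
        // the very same byte is the copyright sign
        System.out.println(decode(raw, "windows-1252")); // ©
    }
}
```

    This is why a pass-through setup can look fine on the original client while producing "¿" once real conversion to UTF-8 takes place.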

  • How to not create Lightroom duplicate when editing in Photoshop

    When I choose to Edit in Photoshop from within Lightroom 3, the edited copy is added to Lightroom. This might sometimes be useful, but I want to choose whether or not the edited copy is added to lightroom.
    How can I make this choice?
    David

    That's not my experience - maybe I've got some setting wrong somewhere.
    I have two categories of images in Lightroom: DNG files originating from digital camera Raw files, and files which are PSD that were scanned from old photos.
    Opening the DNG files in Photoshop and then selecting "Save As" does not make the duplicate in Lightroom PROVIDED I have "As a copy" checked. But then the copy is not the file that's currently open, so I have to go find the copy and open it to do any work on it.
    Opening PSD files from Lightroom in Photoshop, the duplicate is created as soon as the file opens in Photoshop unless I select the "Edit original" option. But even then, if I subsequently use Save As within Photoshop, the copy appears in Lightroom.
    My version of Lightroom is 3.4.1
    My version of Photoshop is CS5.1
    I'm on a Mac running Snow Leopard

  • How to preserve custom page numbers when combining PDF's?

    When combining multiple PDFs into one, the custom page numbers are lost.
    Example: As four separate files (one page each), the pages are numbered as follows: AC-77, AC-78, AC-79, AC-80. When combining them into a single PDF, the page numbers become 1, 2, 3, 4.
    How can I preserve the original page numbers?

    What are 'custom page numbers'? How did you create the page numbers?

  • How to preserve a background color, when generating a PDF file

    I am trying to create a PDF file from an application. Please note that the picture in this application has a black background. I invoke the Print command and set the printer to "Adobe PDF". As a result, I generate a brilliant PDF file of my picture, but on a WHITE background. When selecting the Adobe PDF printer, I looked through all its settings (in the Adobe PDF settings I found several tabs: General, Images, Fonts, Color, Advanced, PDF/X); none of them preserved the original background color.
    So how can I generate a PDF file having the original background color (black, in my case)?
    Oleg

    Now I feel that the background definition in the Adobe PDF printer and the background definition in the application the Adobe PDF printer is invoked from are two different things. So my question becomes: how can I define a black background in the Adobe PDF printer? I cannot find such a setting.

  • How to automatically load function parameters when editing c/c++ DLL call?

    I built a DLL in CVI that is used in TestStand. I want this DLL to have the ability to automatically load function parameters after I specify the module pathname and function name in the "Edit C/C++ DLL Call" dialog. How can I do that?

    Starting with CVI 7.1 and TestStand 3.1, you do not need a type library for TestStand to load parameters automatically. Just make sure you declare your exported functions with the __declspec(dllexport) modifier, as in the following code:
    __declspec(dllexport) void fun(void)

  • How to retain new line characters when using Node.getNodeValue()

    Hi,
    I have XML as shown below:
    *<DETAILS>*
    this is line one
    this is line two
    this is line three
    *</DETAILS>*
    When I parse this using DOM (Node.getNodeValue()), the value I am getting is as shown below:
    this is line onethis is line twothis is line three
    but I want it as it is in my XML, i.e.,
    this is line one
    this is line two
    this is line three
    ( to show that as is in HTML page)
    can some body help me ??
    Thanks in advance .

    Hi,
    You could try this:
    replace(your_data, chr(10), '')
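    Note that replacing chr(10) with '' strips the newlines rather than preserving them. For the HTML-display case, one Java sketch (the class name NewlineDemo and helper toHtml are made up): DOM itself keeps the newlines in the text node, and it is HTML rendering that collapses them, so converting them to <br> tags is one way to show each line.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class NewlineDemo {
    // Turn element text into HTML line breaks so a browser shows each line
    public static String toHtml(String text) {
        return text.trim().replace("\n", "<br>");
    }

    public static void main(String[] args) throws Exception {
        String xml = "<DETAILS>line one\nline two</DETAILS>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        String text = doc.getDocumentElement().getTextContent();
        // The DOM text still contains the newline; only the HTML page collapses it
        System.out.println(toHtml(text));  // line one<br>line two
    }
}
```

    If the newlines really are gone before this point, check whether something earlier in the pipeline normalizes whitespace.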

  • How do I reveal the cut points when editing Adobe Connect recording?

    Since the roll-out of AC 9.1, when I edit the timeline of a recording I see the marker for my cursor position ON TOP OF the cut-point markers, so I can't see where I'm about to cut a recording. How do I see the cut points when editing the timeline?
    Thanks,
    --Kevin

    Hi Kevin,
    There is a known bug where the cursor marker overlaps the cut-point marker on the timeline.
    But we have a trick to help you do your editing effectively. Follow the steps mentioned below and you will be able to edit your recording with precision:
    1. Pause the recording from where you want to start your editing.
    2. Double Click on the LEFT marker. It will take both the markers to that point on the recording timeline.
    3. Play the recording and pause the recording again at the point till where you want to edit the recording.
    4. Double-click on the RIGHT marker. It will take the right marker to the end point of the section you want to keep.
    5. Cut and Save.
    Do let us know if this helps!
    Thanks
    Sameer Puri
