Writing Unicode characters to scripting parameters on Windows

I am trying to read/write a file path that supports Unicode characters to/from scripting parameters (PIDescriptorParameters) with an Export plug-in. This works fine on OS X by using an AliasHandle together with the "typeAlias" resource type in the "aete" section of the plug-in resource file.
On Windows I am having trouble getting Photoshop to correctly display paths with Unicode characters. I have tried:
- Writing a null-terminated char* (Windows-1252) into a "typePath" parameter -- this works but obviously does not support Unicode.
- Writing a null-terminated wchar* (UTF-16) into a "typePath" parameter -- this causes the saved path in the Actions palette to be truncated to the first character, because of the null bytes in UTF-16. It appears PS does not understand UTF-16 in this case?
- Creating an alias record with sPSAlias->WinNewAliasFromWidePath and storing it in a "typePath" or "typeAlias" parameter -- this causes the Actions palette to show "txtu&", which does not make sense to me at all.
The question is: what is the correct scripting parameter resource type (typePath, typeAlias, ...?) for file paths on Windows, and how do I write to it in such a way that Photoshop will correctly display Unicode characters in the Actions palette?
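For context on the truncation seen with the wchar* attempt: in UTF-16 every ASCII character carries a zero byte, so any code that treats the buffer as a null-terminated char* stops after the first character. A quick illustration of the byte layout (in Java rather than the plug-in's C++, and the path is a made-up example):

```java
import java.nio.charset.StandardCharsets;

public class Utf16Truncation {
    public static void main(String[] args) {
        // Encode a short path as UTF-16LE and inspect the first bytes
        byte[] utf16 = "C:\\tmp".getBytes(StandardCharsets.UTF_16LE);
        // 'C' encodes as 43 00 -- the 00 looks like a C-string terminator,
        // which would explain the path truncating to its first character.
        System.out.printf("%02X %02X %02X %02X%n",
                utf16[0] & 0xFF, utf16[1] & 0xFF, utf16[2] & 0xFF, utf16[3] & 0xFF);
    }
}
```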

Hi
Skip the first 4 or 6 characters and you'll get the Unicode value.
regards
Bartek

Similar Messages

  • Writing unicode characters on JSwing components

    I'm new to computers in general. How can I write text onto a JTextArea? I need to type into one text area in Spanish and in English in the other. Can I set up a text area to be able to write words that include Spanish characters? I've read through most of the forums, but most don't mention this! Please help!

    Hi all,
    If we're talking input (as in entering different languages into textfields), it's pretty much down to your operating system if you don't want to spend any money. Windows 2000 is about the only OS I'm aware of that will let you enter non-English languages into textfields off the bat. And even then, you need a foreign version. For instance, the Chinese version of Win2K will let you enter Chinese directly into Java textfields (albeit dodgily due to some pretty bad bugs in the JDK!)
    If you're trying to switch input languages between textfields, I doubt the OS could handle it. It sounds to me like it could only be handled by an input method (for those yet to discover this, there is a page or two in the JDK documentation on the Java Input Method Framework). Java supplies only the framework to allow you to enter text in different languages in text components. The problem is that Sun has never supplied its own input methods (I imagine it could blow the JDK's size out quite nicely if they ever tried!). The only input methods out there are sold by third-party developers, but although they cost money, they are good and reliable. SlangSoft make an excellent one called Spirus which really covers all bases. You can get a thirty-day trial of it at www.slangsoft.com.
    Hope that covers it!
    Martin Hughes

  • Terminal.app and the European Unicode characters?

    Does anyone have the unicode characters working properly in Terminal.app?
    If I try to write "örrör" in GNU nano 1.2.4, for instance, it translates into:
    (one empty line)
    örr
    ör
    which certainly isn't right. This is especially awkward when editing a Unicode text file, where the text then easily becomes more or less garbled. Usually more.
    It doesn't seem to make any difference whether I use the Finnish extended (Unicode) keyboard layout or the conventional one in nano. If the Terminal.app window preferences are set to UTF-8, it says:
    ?rr
    ?r
    which looks even more garbled.
    In plain bash the characters print like this:
    å = \345
    ä = \344
    ö = \366
    so my mighty apple translates the example string "örrör" as "\366rr\366r".
    Any ideas, anyone?
    PowerBook G4 @ 1.5 GHz   Mac OS X (10.4.4)   1.25 GB DDR SDRAM
    Debian Sarge 3.1 as a slave fetchmail server.
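    For reference, those octal escapes are exactly the ISO-8859-1 (Latin-1) byte values of the characters, which suggests the terminal is exchanging Latin-1 bytes rather than UTF-8. A quick check (a Java sketch of my own, not from the thread):

```java
import java.nio.charset.StandardCharsets;

public class OctalCheck {
    public static void main(String[] args) {
        for (char c : "åäö".toCharArray()) {
            // The single Latin-1 byte for each character, shown in octal
            byte b = String.valueOf(c).getBytes(StandardCharsets.ISO_8859_1)[0];
            System.out.printf("%c = \\%o%n", c, b & 0xFF);
        }
    }
}
```

    The octal values 345, 344 and 366 match the \345, \344 and \366 quoted above.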

    Hi solarflare,
       My first (and essentially only) language is English as well. However enough folks have asked that I have experimented with multibyte characters. There are so many apps and options involved, it's difficult to get consistent results. However, I'll recount as many settings as I can recall.
       To begin with, you are right about the LC settings. It helps many apps to have:
    export LC_ALL=en_US.UTF-8
    export LANG=en_US.UTF-8
    set in your shell startup scripts. Then the system should be set to produce unicode when you type. In the "Input Menu" tab of the "International" pane of "System Preferences", you should select a unicode keyboard layout, such as U.S. Extended.
       To configure the Terminal, you need to open the "Terminal Inspector" by selecting "Window Settings..." in the "Terminal" menu. To type many multibyte characters, you need the option key. To use it, you must have the "Use option key as meta key" checkbox unchecked, although I find the meta key too important in UNIX to leave that unchecked. In the dropdown menu in the "Display" pane of the "Terminal Inspector", you should set the "Character Set Encoding" to "Unicode (UTF-8)". In the "Emulation" pane of the same window, you must uncheck the "Escape non-ASCII characters" checkbox. That is important as I've read that it is checked by default and that can produce some pretty strange results.
       Now it's helpful to use a very modern shell. For instance, the latest beta version of zsh-4.3 has the best unicode support of all versions of zsh. After you've chosen a good shell, you're at the mercy of the application that you're using. As I gather you've noticed, vim has excellent unicode support and picks up on the LC settings. I have no idea about nano but it is meant to be a minimal text editor.
       I know that my settings allow me to type extended characters and the "Character Palette" lets me insert more. As far as other command line utilities go, the best you can do is to choose well and keep your apps as up-to-date as possible. Fink or Darwin Ports can often help in that regard.
    Gary
    ~~~~
       This generation doesn't have emotional baggage.
       We have emotional moving vans.
             -- Bruce Feirstein

  • GlyphID reverse lookup to get Unicode characters

    In my plug-in I have a GlyphID extracted from an IPMFont*, but the glyph does not have a Unicode value because it is a ligature - a combination of several Unicode characters. Is there a way I can query the IPMFont* object to find out which Unicode characters need to be used to produce this ligature? In a TrueType font this information would be held in the 'GSUB' (Glyph SUBstitution) table.
    A simple example of this would be:
    'f' + 'f' + 'i' = 'ffi'
    '1' + '/' + '2' = '½'
    So the glyphID I have would be the 'ffi' or the '½' glyph and I need to find out the 3 unicode characters which, when used in combination, would cause that glyph to be used.
    This is then extended to Arabic and Hindi fonts where the Ligatures are highly important in drawing the script correctly.
    Using Utils<IGlyphUtils>->GlyphToCharacter(font, glyph, &userAreaChar) does not work, as the glyph has no Unicode character representation, so the function just returns 0.
    Likewise, Utils<IGlyphUtils>->GetUnicodeForGlyphID(font, glyph) gives the same result.

    IGlyphUtils.h might be useful but I have yet to discover a routine that gives me the information I need.
    Do I have to use glyphUtils->GetOTFAttribute and iterate through it to find which combination of unicode characters result in a particular glyphID? And what parameters should I use for GetOTFAttribute to get the ligature table?

  • AAT Apple Advanced Typography for the writing systems of world scripts

    Thomas Gewecke writes:
    If I had to choose one problem which does exist and causes considerable practical difficulties for a lot of people, it would be that lack of full OpenType support in OS X (and the resulting requirement for rare AAT fonts) makes it impossible for Mac users to do everything they might want in a number of important scripts, or to do anything at all in quite a few others.
    This is a frequently asked question, so perhaps the simplest solution is to address it in a separate thread. It is probably preferable to repeat what Apple has published on the Unicode mailing list on the subject of writing systems in world scripts. A link to a supplier of AAT fonts for lesser-used languages is included in the references (Bassa, Brahmi, Burmese, Cambodian, Georgian, Inuktitut, Kannada, Laotian, Lepcha, Limbu, Malayalam, N'ko, Osmanya, Sinhala, Tai Le, Tamil, Telugu, Tibetan ...). The most advanced Arabic implementation is Mishafi from Diwan in London - it has earned praise on Typophile. There are several independent software publishers (aside from Apple iWork) that support authoring with AAT Apple Advanced Typography.
    According to the Apple Unicode Liaison, Deborah Goldsmith, as of OS X 10.2 it is possible for the small type maker to support a writing system in a world script through the optional Apple MORX Metamorphosis Extended tables in the SFNT Spline Font file format. Dropping an SFNT and an input method into the operating system adds the shaping for the writing system. And according to the Apple Unicode Liaison, as of OS X 10.4 the optional Apple MORX tables for complex composition and the optional Microsoft GSUB tables for complex composition may peacefully cohabit in the selfsame SFNT Spline Font file (leaving aside the issue of whether this is sound advice, or whether sound advice should say that an SFNT should contain either TrueType or Type 1, either MORX or GSUB - not both in either case).
    Hope this helps,
    Henrik
    References :
    http://www.mail-archive.com/[email protected]/msg13047.html
    http://lists.apple.com/archives/carbon-dev/2006/Nov/msg00579.html
    http://www.xenotypetech.com/
    http://www.diwan.com/mishafi/main.htm
    http://www.typophile.com/node/16858
    http://www.typophile.com/node/18098

    Please pardon any speedwriting in the following - it's off the cuff :
    From the little I understand about the technical details of the differences between AAT and OpenType, I'd guess AAT to be the superior system, from the user's (or font designer's) point of view.
    The issue is the business model. Apple TrueType 2 and Apple ColorSync 2 were developed to provide very, very, very highend character:glyph transforms and colour:colourant transforms in an application-independent manner.
    The application model was Java and OpenDoc and while OpenDoc is defunct, as is Taligent, Java within which AAT is embedded is alive and kicking. In the application model the idea was that the small developer did not have to independently do elements outside the scope of the application.
    Similarly, a graphics library was available to avoid the problem that PostScript is inherently unreliable as it is a programming model that can be used to extend the PostScript graphics model, causing PostScript programs to crash at critical times.
    Software publishers in the nineteen-nineties published software for the standalone personal computer with its suite of standalone software. And the standalone software had its own Application Programming Interface that locked XTensions, Plug-ins and more into one and only one suite.
    Adobe did NOT want QuickDraw GX and Adobe still has a 'white' paper in which the company states that the idea of the SFNT Spline Font file format as an application-independent product that takes over large parts of line layout is objectionable.
    The pendulum does not stand still, however. In the 1970s there were terminals for time-shared centralised computing. In the 1980s and 1990s there were decentralised 'personal' computers in local area networks with their own storage and with graphic displays and printers.
    The growth of infrastructure, both in terms of distributed networking and in terms of an international character set, permits a blend of time-shared computing and 'personal' graphics computing, which was intended e.g. with the Apple MessagePad and with Java (Amelio married the ideas).
    Apple bled external and internal developers with the late lamented GX and the application model was ten years ahead of its time. Modern imaging models are founded on the ideas implemented in GX and absolutely NOT on the ideas implemented in PostScript and PDF.
    Jonathan Seybold told Apple to do an application for pagination, since an application for pagination is the sine qua non for a composition model, a separation model, and a document model (the Apple Portable Digital Document model).
    Apple did not do that as it would have caused increased commercial conflict with Adobe, Quark and Macromedia at a time when hardcopy production was still the high end of the Apple product portfolio, and so while ColorSync survived TrueType as prosumer and pro solution suffered.
    Ironically, XTT's developer did manage at one time to make his primary Tibetan font usable in both Mac and Windows environments -- by somehow combining AAT and OpenType elements within it -- but his considerable effort was then torpedoed, I gather, by some (unannounced, as usual) changes Apple made in font implementation in 10.4.
    Deborah Goldsmith gave bad advice on the Unicode mailing list, I'm sorry but it was not sound technically. Dov Isaacs, Adobe's technical quality manager, gave sound advice in saying that Type 1 splines and TrueType splines should not be housed in the selfsame SFNT Spline Font file. I read what the Xenotype developer posted, and Apple bungled as Apple has bungled other important things like supporting a decent international default separation for ColorSync. There is no excuse whatsoever - if Apple cannot use Apple software with Apple defaults to do a decent job then Apple needs to find out whether it is working for Apple customers or is working for itself.
    I don't know what might be necessary to make it possible for Mac users to employ OpenType fonts for complex scripts, but I can't believe that this goal is simply beyond the capabilities of Apple's engineers. Nor do I understand why Apple seems to keep, well, stalling on this issue.
    The answer lies in the document model, not in the internationalisation model or in the SFNT imaging model. I am not an expert on Indic scripts (I don't speak or write any of them), but as I understand the matter Indic calligraphic scripts are simulated typographically using a feature called insertion that splits one character code into two glyph codes for vowels.
    This type of typographic simulation does not pose problems if you are authoring with the aim of archiving and accessing hardcopy, since your audience is not interacting with the character string, but if you are authoring, archiving, and accessing softcopy then simulation methods such as insertion and bidirectionality may pose problems.
    Specifically, if in the process of producing your softcopy pagination you lose the source character stream and the mapping of said character stream to the reshaped and reordered glyph stream, then you have to try to synthesise the character stream. And the more complex the reshaping and reordering, the less likely you are to get a successful simulation.
    This is the issue between Adobe PDF produced from PostScript and Microsoft XPS, which retains both the source character stream and the mapping of the character stream to the glyph stream. Adobe PDF, by contrast, is basically a viewable graphic of the glyph stream - Adobe PDF does not even retain semantics for the reshaping of formal Danish typography.
    Hope this helps,
    Henrik
    References:
    http://www.freepatentsonline.com/y2007/0136660.html

  • Printing unicode characters in Java - help

    Hi there,
    I want to print out Unicode characters from the Java programming language on a Windows system. For example, I want to print Devanagari characters. I found out that '\u0900' to '\u0975' represent Devanagari characters, so I tried the following:
    out = new PrintStream(System.out, true, "UTF-8");
    out.println('\u0911');
    but it prints characters like ��� and not the actual Devanagari characters. Just to be clearer, the Devanagari script is used by Hindi, Nepali and similar languages.
    If you knew about it and could give any suggestions, that would be very helpful.
    Thanks in advance!

    priyankabhar wrote:
    I am not sure, it is just a Windows system and I am trying to print to the command line. Please suggest how I can find out if my console supports it.

    Use the CHCP command to find out what code page your console uses. And as already suggested, Google is a good resource if you don't know what a "code page" is.
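    To underline the reply above: a UTF-8 PrintStream does emit the correct UTF-8 bytes, so the ��� usually comes from the console's code page failing to map them. A small sketch (class name is my own) that captures what the stream actually writes:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class DevanagariBytes {
    public static void main(String[] args) throws Exception {
        // Capture the bytes the UTF-8 PrintStream writes, instead of
        // sending them to a console that may not understand them
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        PrintStream out = new PrintStream(buf, true, "UTF-8");
        out.print('\u0911'); // DEVANAGARI LETTER CANDRA O
        out.flush();
        byte[] b = buf.toByteArray();
        // U+0911 is a three-byte sequence in UTF-8: E0 A4 91
        System.out.printf("%02X %02X %02X%n", b[0] & 0xFF, b[1] & 0xFF, b[2] & 0xFF);
    }
}
```

    If those bytes come out correct but the console still shows junk, the fix is on the console side (e.g. chcp 65001 plus a font containing Devanagari glyphs), not in the Java code.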

  • Oracle Discoverer Desktop Report output showing unicode characters

    Hi,
    Oracle Discoverer Desktop 4i report output is showing the Unicode characters below:
    kara¿ah L¿MAK HOLD¿NG A.¿
    We ran the same query in SQL and at that time the data displayed correctly.
    Please let me know: are there any language settings / NLS settings that need to be set?
    Thanks in Advance.

    Hi
    Let me give you some background. In the Windows registy, every Oracle Home has a setting called NLS_LANG. This is the variable that controls, among other things, the numeric characters and the language used. The variable is made up of 3 parts. These are:
    language_territory.characterset
    Notice how there is an underscore character between the first two variables and a period between the last two. This is very important and must not be changed.
    So, for example, most American settings look like this: AMERICAN_AMERICA.WE8MSWIN1252
    The second variable, the territory, controls the default date, monetary, and numeric formats and must correspond to the name of a country. So if I wanted to use the Greek settings for numeric formatting, editing the NLS_LANG for Discoverer Desktop to this setting will do the trick:
    AMERICAN_GREECE.WE8MSWIN1252
    Can you please check your settings? Here's a workflow:
    a) Open up your registry by running Regedit
    b) Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE
    c) Look for the Oracle Home corresponding to where Discoverer Desktop is installed. It's probably called KEY_BIToolsHome_1
    d) Clicking on the Oracle Home will display all of the variables
    e) Take a look at the variable called NLS_LANG - if it is correct, exit the registry
    f) If it's not correct, right-click on it and from the pop-up select Modify
    g) Change the variable to the right setting
    h) Click the OK button to save your change
    i) Exit the registry
    Best wishes
    Michael

  • CRVS2010 Beta - Cannot export report to PDF with unicode characters

    My report has some Unicode data (Chinese). It can be previewed properly in the Windows Forms report viewer; however, if I export the report document to a PDF file, the Unicode characters in the exported file are all displayed as squares.
    In Crystal Reports 2008 R2 it can export the Chinese characters to PDF when I select a Chinese font in the report, but the VS2010 beta cannot export the Chinese characters even when a Chinese font is selected.

    Barry, what is the specific font you are using?
    The below is a reformatted response from Program Management:
    Using a non-Chinese font (Arial) in the Unicode character field, the issue is reproducible. After changing the field's font to SimSun (a Chinese font, named 宋体 in the report), the problem is solved in both Cortez and CR.
    Ludek

  • How to display special characters in Script...

    hi all,
    Can anyone tell me how to display special characters in a script?
    How do I write them in a text element?
    thanks in advance,
    prashant

    Hi Prashant,
    What special characters would you like to include?
    There is a set of characters/icons/symbols that can be included in a script. To find them, open a window in edit mode; in the menu there will be an option called Insert, where you can find a lot of characters/symbols that can be included.
    Regards,
    Arun

  • How do I get unicode characters out of an oracle.xdb.XMLType in Java?

    The subject says it all. Something that should be simple and error free. Here's the code...
    String xml = new String("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<x>\u2026</x>\n");
    XMLType xmlType = new XMLType(conn, xml);
    conn is an oci8 connection.
    How do I get the original string back out of xmlType? I've tried xmlType.getClobVal() and xmlType.getString() but these change my \u2026 to 191 (question mark). I've tried xmlType.getBlobVal(CharacterSet.UNICODE_2_CHARSET).getBytes() (and substituted CharacterSet.UNICODE_2_CHARSET with a number of different CharacterSet values), but while the unicode characters are encoded correctly the blob returned has two bytes cut off the end for every unicode character contained in the original string.
    I just need one method that actually works.
    I'm using Oracle release 11.1.0.7.0. I'd mention NLS_LANG and file.encoding, but I'm setting the PrintStream I'm using for output explicitly to UTF-8 so these shouldn't, I think, have any bearing on the question.
    Thanks for your time.
    Stryder, aka Ralph

    I created an analogous test case and executed it with DB 11.1.0.7 (Linux x86), and it seems to work fine.
    Please refer to the execution procedure below:
    * I used an AL32UTF8 database.
    1. Create simple test case by executing the following SQL script from SQL*Plus:
    connect / as sysdba
    create user testxml identified by testxml;
    grant connect, resource to testxml;
    connect testxml/testxml
    create table testtab (xml xmltype) ;
    insert into testtab values (xmltype('<?xml version="1.0" encoding="UTF-8"?>'||chr(10)||'<x>'||unistr('\2026')||'</x>'||chr(10)));
    -- chr(10) is a linefeed code.
    commit;
    2. Create QueryXMLType.java as follows:
    import java.sql.*;
    import oracle.sql.*;
    import oracle.jdbc.*;
    import oracle.xdb.XMLType;
    import java.util.*;
    public class QueryXMLType
    {
         public static void main(String[] args) throws Exception, SQLException
         {
              DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
              OracleConnection conn = (OracleConnection) DriverManager.getConnection("jdbc:oracle:oci8:@localhost:1521:orcl", "testxml", "testxml");
              OraclePreparedStatement stmt = (OraclePreparedStatement) conn.prepareStatement("select xml from testtab");
              ResultSet rs = stmt.executeQuery();
              OracleResultSet orset = (OracleResultSet) rs;
              while (rs.next())
              {
                   XMLType xml = XMLType.createXML(orset.getOPAQUE(1));
                   System.out.println(xml.getStringVal());
              }
              rs.close();
              stmt.close();
              conn.close();
         }
    }
    3. Compile QueryXMLType.java and execute QueryXMLType.class as follows:
    export PATH=$ORACLE_HOME/jdk/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export CLASSPATH=.:$ORACLE_HOME/jdbc/lib/ojdbc5.jar:$ORACLE_HOME/jlib/orai18n.jar:$ORACLE_HOME/rdbms/jlib/xdb.jar:$ORACLE_HOME/lib/xmlparserv2.jar
    javac QueryXMLType.java
    java QueryXMLType
    -> Then you will see that the U+2026 character (horizontal ellipsis) is output properly.
    My Java code came from "Oracle XML DB Developer's Guide 11g Release 1 (11.1) Part Number B28369-04" with some modification of:
    - Example 14-1 XMLType Java: Using JDBC to Query an XMLType Table
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb11jav.htm#i1033914
    and
    - Example 18-23 Using XQuery with JDBC
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb_xquery.htm#CBAEEJDE

  • Unicode characters not displayed in text property

    I am developing a web application with Flex Builder. I write the text for each label using a font called Dhivehi, which is written from left to right, and then copy the text and paste it into the label property called text.
    However, in the source code view the text property of the label shows
    text=""
    The issue is that when rendered the text is reversed. So I want to run a function once the application is loaded to reverse the text in the label, so that the text will appear in its original way.
    Any help will be very much appreciated.

    Hi,
    I have a strange problem here with Windows.Forms.RichTextBox: when I assign the .ToString() value of a StringBuilder to a rich text box's .Rtf property, the Unicode characters contained in the StringBuilder get converted to ???? symbols in the .Rtf property of the rich text box.
    Could you please let me know whether the rich text box's .Rtf property can hold Unicode characters, or whether there is any other way to store Unicode characters in a rich text box?
    Thanks & Regards,
    Tabarak
    Hello,
    To clarify and help you get a proper solution, I would recommend you share an RTF string, or even a simple sample that reproduces the issue, with us.
    We will base our help on that sample.
    Regards,
    Carl
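    Background that may explain the ???? symbols (an assumption on my part, not a confirmed diagnosis): the RTF format itself is ANSI text, and non-ASCII characters are conventionally written as \uN control words, where N is the signed 16-bit decimal code unit followed by a fallback character. A hypothetical escaping helper, sketched in Java for illustration only:

```java
public class RtfEscape {
    // Hypothetical helper: escape non-ASCII characters as RTF \uN control
    // words. N is the signed 16-bit decimal code unit; the '?' after it is
    // the fallback character for readers without Unicode support.
    static String toRtfUnicode(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c < 128) sb.append(c);
            else sb.append("\\u").append((short) c).append('?');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // U+0936 (DEVANAGARI LETTER SHA) has decimal code unit 2358
        System.out.println(toRtfUnicode("A\u0936")); // prints: A\u2358?
    }
}
```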

  • What table column size is needed to accommodate Unicode characters

    Hi guys,
    I have encountered something which I don't understand, and I hope the gurus here will shed some light on it.
    I am running a non-Unicode database and I decided to port the data over to a Unicode database.
    So
    1) i export the schema out --> data.dmp
    2) then i create the unicode database + create a user
    3) then i import the schema into the database
    During the imp I can see that character conversion will take place.
    While importing the data into the Unicode database I encountered an error saying a column size is too small, so I went to check the row whose column value was too large to fit in the table.
    I realised it has some [][][][] data, so I went to the live non-Unicode database and found the row. Indeed it has some [][][][] rubbish data, which makes me think someone inserted a language other than English into the database.
    Regardless, I modified the column to a larger size, and now the row can be accommodated. However, the data is still [][][].
    q1) Why so? Since my database is now Unicode, this column data [][][] should have been converted to Unicode during the import, but I still have trouble seeing what language it is.
    q2) Why could the [][][] data fit into the table column on the non-Unicode database, while on the Unicode database the same column size needs to be increased?
    q3) While doing more research on Unicode, I read that a Unicode character takes up 2 bytes per character. A lot of my table data is exactly the same length as the table column, e.g.
    Name VARCHAR2(5);
    value - 'Peter'
    If converting to Unicode takes 2 bytes per character instead of 1, isn't 'Peter' going to take up 10 bytes? Why is it that I can still accommodate the data in the table column?
    q4) Now with the Unicode database up, I will be supporting characters from different languages around the world. How big should I set my column sizes? The longest a name can get, or what?
    Thanks guys!

    /// does oracle automatically "look" at the each and individual characters in a word and determine how much byte it should take.
    Characters usually originate from a keyboard, which has an associated keyboard layout and an associated character set encoding (a.k.a. code page, a.k.a. encoding). This means the keyboard driver knows that when a key with a letter "á" on it is pressed on a French keyboard, and the associated character set encoding is MS Code Page 1252 (Oracle name WE8MSWIN1252), then one byte with the value 225 is generated. If the associated character set encoding is UTF-16LE (standard internal Windows encoding), two bytes 225 and 0 are generated. When the generated bytes travel through APIs, they may undergo character set conversions from one encoding to another encoding. The conversion algorithms use translation tables to find out how to translate a given byte sequence from one encoding to another encoding. In case of translation from WE8MSWIN1252 to AL32UTF8, Oracle will know that the byte sequence resulting from conversion of the code 225 should be 195 followed by 161. For a Chinese character, for example when converting it from ZHS16GBK, Oracle knows the resulting sequence as well, and this sequence is usually 3 bytes.
    This is how AL32UTF8 data gets into a database. Now, when Oracle processes a multibyte string, and needs to look at individual characters, for example to count them with LENGTH, or take a substring with SUBSTR, it uses information it has about the structure of the character set. Multibyte character sets are of two type: fixed-width and variable-width. Currently, Oracle supports only one fixed-width multibyte character set in the database: AL16UTF16, which is Oracle's name for Unicode UTF-16BE encoding. It supports this character set for NCHAR/NVARCHAR2/NCLOB data types only. This character set uses two bytes per each character code. To find the next code, 2 is simply added to the string pointer.
    All other Oracle multibyte character sets are variable-width character sets, including AL32UTF8. In most cases, the length of each character code can be determined by looking at its first byte. In AL32UTF8, the number of 1-bits in the most significant positions in the first byte before the first 0-bit tells how many bytes a character has. 0 such bits means 1 byte (such codes are identical to 7-bit ASCII), 2 such bits mean two bytes, 3 bits mean 3 bytes, 4 bits mean four bytes. Exactly one such bit (i.e. the bit sequence 10) marks each second, third or fourth byte of a code.
    In other ASCII-based multibyte character sets, the number of bytes is usually determined by the value range of the first byte. Bytes below 128 means a one-byte code, bytes above 128 begin a two- or three-byte sequence, depending on the range.
    There are also EBCDIC-based (mainframe) multibyte character sets, a.k.a shift-sensitive character sets, where a sequence of two-byte codes is introduced by inserting the SO character (code 14=0x0e) and ended by inserting the SI character (code 15=0x0f). There are also character sets, like ISO-2022-JP, which use more complicated byte sequences to define the length and meaning of byte sequences but Oracle supports them only in limited number of places.
    /// e.g i have a word with 4 character. the 3rd character will be a chinese character..the rest are ascii character
    /// will oracle use 4 byte per character regardless its ascii(english) or chinese
    No.
    /// or it will use 1 byte per english character then 3 byte for the chinese character ? e.g.total - 6 bytes taken
    It will use 6 bytes.
    Thnx,
    Sergiusz
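    The rules Sergiusz describes can be written down directly. A small sketch (helper name is mine) that derives a character's byte length from its AL32UTF8 lead byte and reproduces the 5- and 6-byte examples:

```java
import java.nio.charset.StandardCharsets;

public class Utf8Lengths {
    // Byte length of an AL32UTF8 (UTF-8) character, read off its lead byte
    static int utf8CharLen(int leadByte) {
        if ((leadByte & 0x80) == 0)    return 1; // 0xxxxxxx: 7-bit ASCII
        if ((leadByte & 0xE0) == 0xC0) return 2; // 110xxxxx
        if ((leadByte & 0xF0) == 0xE0) return 3; // 1110xxxx
        if ((leadByte & 0xF8) == 0xF0) return 4; // 11110xxx
        throw new IllegalArgumentException("not a lead byte");
    }

    public static void main(String[] args) {
        // 'Peter' is all ASCII, so it stays 5 bytes in AL32UTF8
        System.out.println("Peter".getBytes(StandardCharsets.UTF_8).length);     // 5
        // Three ASCII letters plus one Chinese character: 1 + 1 + 3 + 1 = 6 bytes
        System.out.println("ab\u4E2Dc".getBytes(StandardCharsets.UTF_8).length); // 6
    }
}
```

    This also answers q3 above: 'Peter' still fits in VARCHAR2(5) with byte-length semantics because ASCII characters keep their single-byte encoding in AL32UTF8.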

  • Unicode characters appear as      (boxes) in AIR for IOS

    I am developing an application for iPhone and it has an input Text box. (Placed on stage from flash CS6). In the Desktop emulator everything works fine. But, on the actual phone, everything I type appears as       . But surprisingly, when I am inputting the text, (i.e. the textbox is in edit mode) the characters are displayed correctly. But as soon as I leave the box it appears as       . Is there someway I can display the text the same way it is displayed during the edit mode?
    I am trying to input Devanagari देवनागरी characters.
    P.S. I did some research on this and found that if I specify the font for the textbox as "DevanagariSangamMN" there is some improvement.
    I say improvement because although the boxes are replaced by actual characters, they aren't correctly formatted.
    For e.g. during edit mode the text appears as this: कार्यकर्ता
    But as soon as I leave the textbox it appears as this: कार्यकर्ता (I typed this here by adding special unicode characters ZWNJ so that characters won't join)
    Anyway, I don't like the idea of having to specify font names. What if some user would like to input Chinese? How would I know what Chinese font to use?
    Isn't there some way to let iOS handle things, just like it does while I am inputting text?
    Thanks.

    Thanks to everyone who replied.
    The conclusive answer is that there are only 2 ways to display H.264 video in AIR for iOS
    (more info here: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#play%28%29)
    1. Progressive download
    2. HLS format (slight caveat: in my tests at least, OSMF 1.6.1 doesn't handle this, but if you use the NetStream directly with StageVideo enabled it works)
    The updated matrix is:
    FMS 4.5 H.264 streaming test matrix

                                  RTMP   HDS   HLS   HTTP progressive download
    AIR for Android               Yes    Yes   No    Yes
    AIR on Windows (Desktop)      Yes    Yes   No    Yes
    AIR on iOS                    No     No    Yes   Yes
    Safari browser on iOS         No     No    Yes   No

  • Oracle Receiver JDBC Adapter - Handling Unicode Characters

    We have an IDOC to JDBC scenario.
    In this scenario the IDoc is sending data like 10/14’/P7: after the 4 there is a special character coming from SAP (it is not a plain single quote).
    The mapping goes through OK, but the data gets saved in the Oracle database as 10/14&#x19;/P7, i.e. with the literal sequence &#x19;.
    I came across the following solution in the forums and in an SAP Note.
    I am not sure how to modify the Oracle JDBC URL to handle Unicode characters properly.
    Or is there any other approach we can follow to achieve this?
    Any input is really appreciated.
    Q: I am inserting Unicode data into a database table or selecting Unicode data from a table. However, the data inserted into or retrieved from the table appears garbled. Why doesn't the JDBC Adapter handle Unicode correctly?
    A: While the JDBC Adapter is Unicode-aware, many JDBC drivers and/or database management systems aren't by default and need a codepage or Unicode-awareness to be configured explicitly. For the respective JDBC drivers, this codepage setting is often configured via the driver URL. For details, refer to the documentation of your JDBC driver or database management system.
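As an illustration of such a driver-URL codepage setting (the host and database names below are hypothetical, and the parameter names shown are MySQL Connector/J's; other drivers use different switches, and the Oracle thin driver negotiates the character set from the database NLS settings rather than from the URL):

```java
public class JdbcUrlExample {
    public static void main(String[] args) {
        // Hypothetical connection URL: useUnicode/characterEncoding are
        // MySQL Connector/J properties -- check your own driver's docs
        // for the equivalent codepage switch.
        String url = "jdbc:mysql://dbhost:3306/mydb"
                + "?useUnicode=true&characterEncoding=UTF-8";
        System.out.println(url);
    }
}
```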

    Hi Simona,
    1. To start the Visual Administrator, execute the "go" file:
    On Windows: run \usr\sap\<SAPSID>\JC<xx>\j2ee\admin\go.bat
    On UNIX: run /usr/sap/<SAPSID>/JC<xx>/j2ee/admin/go
    2. Supply the credentials to log in to the Visual Administrator.
    3. Under the "Cluster" tab, select the server node.
    4. You will find the Log Viewer under "Services".
    Since you are new, I recommend you get help from your BASIS team.
    Hope it helps!
    Hi Alwin,
    Just a quick clarification.
    I used the URL you have mentioned, when we were on SP5. After that we upgraded to SP9.
    From SP9, if you try to use the URL http://XISERVER:50000/AdapterFramework, it automatically redirects to a new webpage with a link to the URL I have mentioned.
    Regards,
    Sridhar

  • Exporting unicode characters to PDF using JRC not working.

    We have a requirement to support Unicode characters (Russian) in our reports. We are using the JRC with the R2 release. When I view the report in the viewer, the characters are correct, but when I export to PDF, they show as ???'s. Is this a bug? When I export from the report designer to PDF, they show correctly, but I have heard it uses a different reporting engine from the JRC.

    The solution is quite simple; don't worry too much about it.
    The JRC PDF export engine only supports the windows-1252 encoding scheme. If your character set uses an encoding scheme other than windows-1252, you will get a bunch of ????. There is a simple way to convert between encoding schemes in Java.
    As an example, Arabic text uses the windows-1256 encoding scheme, and we can convert it to the JRC-supported windows-1252 by:
    JRCSupportedCharacterString = new String(InputCharacterString.getBytes("windows-1256"), "windows-1252");
    InputCharacterString - windows-1256 encoded
    JRCSupportedCharacterString - windows-1252 encoded (JRC supported)
    Now JRC will correctly process your character string.
    Note: make sure to set the font of the fields in your report template to the relevant font (e.g. Arabic, Chinese, or whatever applies).
    Java encoding names and more information about conversion are available at
    http://mindprod.com/jgloss/encoding.html#CONVERSION
    Happy coding............
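The conversion one-liner above can be wrapped into a small self-contained shim; a sketch, where the sample Arabic string, class name, and method name are illustrative:

```java
import java.io.UnsupportedEncodingException;

public class JrcEncodingShim {
    // Re-encode a windows-1256 (Arabic) string so that a PDF exporter
    // that assumes windows-1252 passes the original byte values through
    // unchanged; the Arabic font set on the report field then renders
    // those bytes correctly.
    static String toJrcString(String input) throws UnsupportedEncodingException {
        byte[] raw = input.getBytes("windows-1256"); // original byte values
        return new String(raw, "windows-1252");      // reinterpret, don't convert
    }

    public static void main(String[] args) throws Exception {
        String arabic = "\u0645\u0631\u062d\u0628\u0627"; // "مرحبا"
        String shimmed = toJrcString(arabic);
        // One windows-1252 character per original windows-1256 byte
        System.out.println(shimmed.length()); // 5
    }
}
```

Note that this is a deliberate reinterpretation, not a real transcoding: the resulting string looks like mojibake until the correct Arabic font is applied in the report template.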
