XDK support for UTF-16 Unicode

Hi,
Does the Oracle Java XDK, specifically the XSQL servlet and API,
support UTF-16 Unicode?
Presumably, .xsql files have to be stored, read, and queries
executed in a Unicode-compliant format for this to work. Is this
currently possible?
Thanks,
- Manish

If you are using XDK 9.0.1 or later with JDK 1.3, the combination
supports UTF-16. XSQL inherits this support "for free" in that case.
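For reference, here is a minimal .xsql page sketch (the connection name and query are placeholders, not from the original post). The encoding declared in the XML prolog must match the encoding the file is actually saved in:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<page connection="demo" xmlns:xsql="urn:oracle-xsql">
  <xsql:query>
    SELECT ename, deptno FROM emp
  </xsql:query>
</page>
```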

Similar Messages

  • Does photoshop cs5 sdk support file names with unicode characters?

    Hi,
    For the export module, does the windows sdk have support for filenames with unicode characters?
    The ExportRecord struct in PIExport.h declares its filename attribute as a char array,
    so I doubt whether filenames with Unicode characters will be supported in the export module.
    Thanks in advance,
    Senthil.

    SPPlatformFileSpecificationW is not available in ReadImageDocumentDesc.
    It has SPPlatformFileSpecification_t which is coming as null for export module.
    I am using the Photoshop CS5 SDK for Windows.
    Thanks,
    Senthil.

  • Call upon even better support for Unicode

    Hello
    Following some messages I have posted regarding problems I encountered while developing a non-English web application, I would like to call upon an even better support for Unicode. Before I describe my call, I want to say that I consider Berkeley DBXML a superb product. Superb. It lets us develop very clean and maintainable applications. Maintainability is, in my view, the keyword in good software development practices.
    In this message I would like to remind you that the US-ASCII 8-bit set of characters only represents 0.4% of all characters in the world. It is also true to say that most of our software comes from efforts of American developers, for which I am of course very grateful.
    But problems with non-US-ASCII characters are very, very time-consuming to solve. To start with, our operating systems need to be configured especially for Unicode, our servers too, our development tools too, our source code too and, finally, our data too. That's a lot of configuring, isn't it? Believe me, as a Flemish French-speaking, Danish-speaking developer who is currently developing a new application in Portuguese, I know what I am talking about.
    Have you ever tried to write a Java class called Ação.java that loads an XML instance called Ação.xml containing something like <?xml version="1.0" encoding="utf-8"?><ação variável="descrição"/>? It takes at least twice as long to get all this working right in a web application on a Linux server than it would take to write an Acao.java that loads Acao.xml containing <?xml version="1.0" encoding="us-ascii"?><acao variavel="descricao"/> (which is clearly something we do not want in Portugal).
    I have experienced a problem while using the dbxml shell to load documents that have UTF-8 encoded names. See "difficulties retrieving documents with non ascii characters in name". The workaround is not to use the dbxml shell, with which I am of course not very happy.
    So, while trying not to be arrogant and while trying to express my very very great appreciation for this great product, I call upon even better support for Unicode. After all, when the rest of us, that use another 65279 characters in our software, will be able to use this great product without problems, will it not contribute to the success of Berkeley DBXML?
    Thank you
    Koen
    Edited by: koenheene on 29/Out/2009 3:09

    Hello John and thank you for replying,
    You are completely correct that it is a shell problem. I investigated and found solutions for running dbxml in a Linux shell. On Windows, as one could expect, no solution so far.
    Here is an overview of my investigation, which I hope will be useful for other developers who also persist in writing code and XML in their own language.
    difficulties retrieving documents with non ascii characters in name
    I was wondering, though, if it would not be possible to write the dbxml shell in such a way that it becomes independent of the encoding of the shell. Surely that must be possible, no? Rewrite dbxml in Java? Any candidates :-) ?
    Thanks again for the very good work,
    Koen

  • Unicode-UTF8-Support for MaxDB?

    Hello,
    according to general information, MaxDB just supports UTF-16. If one is facing the decision to migrate an Oracle 9.2 non-Unicode DB to either a Unicode UTF-16 MaxDB or a Unicode UTF-8 Oracle DB, the answer probably would be to go for the UTF-8 Oracle DB, because the SAN costs should be approximately half of the MaxDB SAN costs.
    Is UTF8-Support soon ready for MaxDB?
    Thanks for your reply.
    br
    Chris

    Hi Chris,
    yeah, that note needs a small change (UTF-16 changed into UCS-2).
    Indeed, a 2TB DB under UTF-8 needs about 3TB under UCS-2. So yes, one would need more space for MaxDB, but the TCO is not only dependent on the cost of the SAN. Easy administration and being reorganisation-free are only a few of the things which can drastically lower the TCO of a system. That, added to the low cost of the MaxDB software itself, <i><b>can</b></i> make a difference.
    Because of other topics, which I cannot go into and currently have a higher priority, the UTF-8 support is planned long-term.
    I'm not sure if this is allowed, but this (german) article is quite an interesting read:
    <a href="http://www.computerwoche.de/produkte_technik/storage/581295/">http://www.computerwoche.de/produkte_technik/storage/581295/</a>
    With the Unicode / ASCII mixture I just meant that, for example, a Unicode WebAS contains ASCII data as well. Another example: in MDM there's an explicit distinction between the ASCII and Unicode data. Our interfaces support all three (ASCII, UTF-8 and UCS-2).
    Regards,
    Roland
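The 2TB-versus-3TB estimate above is easy to sanity-check with a quick byte count. The sketch below (sample strings are illustrative) uses the utf-16-le codec as a stand-in for UCS-2, which is byte-identical for BMP characters:

```python
def byte_sizes(text: str) -> tuple[int, int]:
    """Return (utf8_bytes, ucs2_bytes) for a string of BMP characters."""
    utf8 = len(text.encode("utf-8"))
    ucs2 = len(text.encode("utf-16-le"))  # UCS-2: 2 bytes per BMP code point
    return utf8, ucs2

ascii_sample = "CUSTOMER_NAME" * 100    # pure ASCII: 1 byte/char in UTF-8
german_sample = "Größenänderung" * 100  # mostly ASCII, a few 2-byte chars

print(byte_sizes(ascii_sample))   # UTF-8 needs half the bytes of UCS-2
print(byte_sizes(german_sample))  # UTF-8 still well under UCS-2
```

Mostly-ASCII data approaches half the UCS-2 size; the more multi-byte characters, the narrower the gap, which is consistent with the 2TB-to-3TB estimate above.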

  • Support for Unicode Strings

    Hi
    I was wondering if Berkeley DB supports storing and retrieving Unicode Strings. (I guess it should be supporting it but just wanted to confirm) and if yes what encoding does it use.
    Thanks,
    KarthikR

    Again, I'm not an authority, but I believe so, yes. From what I've read, BDB only knows about "data," which basically means a sequence of bytes. Thus, in C you could define a struct that contains your specific schema for keys and values, but you'd just write the raw data behind the struct to BDB, which doesn't care or know about your particular format.
    So you'd probably have to convert your unicode string to a byte sequence in some way or another. UTF-8 is the first option that comes to mind, but maybe UTF-16 or UTF-32 (if you really don't care about space) would be simpler to implement.
    Hope this helps,
    Daniel
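The encode-on-write, decode-on-read pattern described above can be sketched like this (a plain dict stands in for the DB handle; the real Berkeley DB bindings take the same byte strings via put/get):

```python
store: dict[bytes, bytes] = {}  # stand-in for a Berkeley DB handle

def put_string(key: str, value: str) -> None:
    # BDB stores opaque bytes, so encode explicitly before writing.
    store[key.encode("utf-8")] = value.encode("utf-8")

def get_string(key: str) -> str:
    # Decode with the same encoding on the way back out.
    return store[key.encode("utf-8")].decode("utf-8")

put_string("greeting", "すごく好きな世界")
assert get_string("greeting") == "すごく好きな世界"
```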

  • Unicode support for QT text tracks?

    Does Quicktime NOT support unicode text? Is it not possible to create a text track with Chinese, Korean or other subtitles? I have a unicode file, but the text gets garbled when I try to open it in QT Pro. I have tried the language descriptor {language:23}, but no luck, and putting {textencoding:256} does not help either. That is for UTF-16, but it does not display correctly. There does not seem to be a descriptor for UTF-8. I tried {textencoding:64} but no luck either.

    hi,
    Use the atinav.com driver. I hope it will solve your problem.
    regards,
    Shaan

  • Unicode Support for Brio 8

    <p>Does Brio 8 (or any of the subsequent Hyperion versions) provide support for Unicode characters? If not, how do we tackle reporting from databases containing non-English characters like German, Japanese, etc.? Thanks. Cheers.</p>

    It's not what you describe, but here is more detail on what I'm doing.
    This an example of the value string I'm storing. It's a simple xml object, converted to a unicode string, encoded in utf-8:
    u'<d cdt="1267569920" eml="[email protected]" nm="\u3059\u3053\u3099\u304f\u597d\u304d\u306a\u4e16\u754c" pwd="2689367b205c16ce32ed4200942b8b8b1e262dfc70d9bc9fbc77c49699a4f1df" sx="M" tx="000000000" zp="07030" />'
    The nm attribute is Japanese text: すごく好きな世界
    So when I add a secondary index on nm, my callback function is an xml parser which returns the value of a given attribute:
    Generically it's this:
    def callbackfn(attribute):
        """Define how the secondary index retrieves the desired attribute from the data value"""
        return lambda primary_key, primary_data: xml_utils.parse_item_attribute(primary_data, attribute)
    And so for this specific attribute ("nm"), my callback function is:
    callbackfn('nm')
    As I said in my original post, if I add this to the db, I get this type error:
    TypeError: DB associate callback should return DB_DONOTINDEX/string/list of strings.
    But when I do not place a secondary index on "nm", the type error does not occur.
    So that's consistent with what Sandra wrote in the other post, i.e.:
    "Berkeley DB never operates on the value part of a record. Values are simply payload, to be stored with keys and reliably delivered back to the application on demand."
    My guess is that I need to add an additional UTF-8 encoding or decoding to the callback function, or else define a custom comparison function so the callback will know what to do with the nm attribute value, but I'm not sure what exactly.
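A minimal sketch of that guess, assuming the fix is simply to return bytes rather than a unicode string from the callback (make_callback here is a hypothetical stand-in for the xml_utils-based callbackfn, and DB_DONOTINDEX handling is omitted):

```python
import xml.etree.ElementTree as ET

def make_callback(attribute):
    """Build a secondary-index callback that extracts one attribute from
    the XML value and returns it UTF-8 encoded: Berkeley DB expects a
    byte string, and returning a unicode object raises the TypeError."""
    def callback(primary_key, primary_data):
        elem = ET.fromstring(primary_data)
        return elem.get(attribute).encode("utf-8")  # bytes, not unicode
    return callback

record = '<d nm="すごく好きな世界" zp="07030" />'.encode("utf-8")
cb = make_callback("nm")
assert cb(b"key1", record) == "すごく好きな世界".encode("utf-8")
```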

  • What is current CommSuite support for non-ASCII passwords?

    Hello all,
    Some of our users managed to change their passwords to non-ASCII strings (via replication from MSAD by ISW) and no longer have access to their communications services.
    While replicating the problem, I have set a (UTF-8 non-ASCII) string as my password in DSEE directly, and *can* log in to Convergence with this password. However, if I change the working password to a non-ASCII string from Convergence itself - it is accepted during the secondary password check, there is no error returned, SOME password is apparently saved into the LDAP directory, but neither of the original non-ASCII plaintext strings can be used for login back into Convergence. Restoration of access is only doable by admin at this point.
    Checking email by IMAP from Thunderbird no longer works with a changed non-ASCII password (including the state when it still works for Convergence).
    Delegated Admin has an explicit check for non-ASCII characters in the password and refuses to set a misbehaving one.
    I see that among the standards supported by CommSuite there is IMAP4rev1, and RFC 5255 notes that non-ASCII passwords and usernames are for now not supported, though this is expected to be a temporary state of things, and software can prepare for the future by implementing checks for valid UTF-8 strings as well.
    https://wikis.oracle.com/display/CommSuite/Messaging+Server+Supported+Standards
    http://tools.ietf.org/html/rfc5255
    5.1.  Unicode Userids and Passwords
       IMAP4rev1 currently restricts the userid and password fields of the
       LOGIN command to US-ASCII.  The "userid" and "password" fields of the
       IMAP LOGIN command are restricted to US-ASCII only until a future
       standards track RFC states otherwise.  Servers are encouraged to
       validate both fields to make sure they conform to the formal syntax
       of UTF-8 and to reject the LOGIN command if that syntax is violated.
       Servers MAY reject the LOGIN command if either the "userid" or
       "password" field contains an octet with the highest bit set.
       When AUTHENTICATE is used, some servers may support userids and
       passwords in Unicode [RFC3490] since SASL (see [RFC4422]) allows
       that.  However, such userids cannot be used as part of email
       addresses.
    So, the main question at this point is: does or does not all of the CommSuite stack support non-ASCII passwords?
    If no - please confirm, so we can instruct the users to not create problems for themselves (and maybe manage to set up some policy to not accept non-ASCII passwords to MSAD/DSEE in the first place).
    If yes - what should be done to enable support in Convergence/IMAP/SMTP/XMPP/WCAP/WABP/... services - perhaps, setting the LANG/LC_ALL locale environment variables or equivalent JVM flags for UTF-8 in server startup scripts, etc.? (I know that DSEE ldapsearch requires either envvars or a command-line flag for charset encoding of values, so I figure similar quirks may be relevant for some other software)
    Thanks in advance for either response,
    //Jim Klimov

    I can't respond for the suite, but the Messaging Server product should work with UTF-8 usernames and passwords as long as the standard SASL authentication mechanisms that are documented to use UTF-8 are used (e.g. SASL PLAIN). IMAP LOGIN may work fine with UTF-8 as well even though that's non-standard. We do not implement SASLprep, however, so the strings provided by the client to the server must be identical UTF-8 strings for authentication to succeed. If they are provided in a different decomposition, different canonical form or non-standard charset, that's not supported and will fail. We don't test this scenario extensively, so you may encounter bugs (that we'd have to prioritize and fix as with other bugs). Messaging Server recently implemented a restricted option (broken_client_login_charset) for a customer who was stuck with broken clients that sent ISO-8859-1 for the IMAP login command arguments.
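The two failure modes described above (non-UTF-8 octets, and valid UTF-8 in different decompositions) can be sketched in a few lines. login_field_ok is a hypothetical helper following the RFC 5255 advice quoted earlier, not CommSuite code:

```python
import unicodedata

def login_field_ok(octets: bytes, reject_high_bit: bool = False) -> bool:
    """Check a LOGIN userid/password field: it must be well-formed UTF-8,
    and a server MAY also reject any octet with the highest bit set."""
    if reject_high_bit and any(b & 0x80 for b in octets):
        return False
    try:
        octets.decode("utf-8")
    except UnicodeDecodeError:
        return False
    return True

# The same visible password in composed (NFC) and decomposed (NFD) form:
# both are valid UTF-8, yet they differ byte-for-byte, so a server that
# compares bytes without SASLprep normalization rejects one of them.
nfc = unicodedata.normalize("NFC", "pässwörd")
nfd = unicodedata.normalize("NFD", "pässwörd")

assert login_field_ok(nfc.encode("utf-8"))
assert login_field_ok(nfd.encode("utf-8"))
assert nfc.encode("utf-8") != nfd.encode("utf-8")
assert not login_field_ok(b"\xff\xfe")  # not valid UTF-8
```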

  • OpenJPA support for MaxDB?

    I'm not finding anything on the internet that shows that MaxDB will work with OpenJPA. If it isn't supported, is there a persistence abstraction like OpenJPA (or Hibernate, which unfortunately does not have an acceptable license, so we can't use it)?
          Thanks,
          Harold Shinsato


  • Multilingual support for BI Publisher reports

    Hi all,
    We are currently using BI Publisher APIs to generate reports. Our aim is to enable multi-language support for these reports. The translatable strings, based on the locale, are stored in XLIFF files. Using jEdit, UTF-8 encoding is chosen as the default encoding for these XLIFF files. It works fine with French and some other single-byte languages, but with double-byte languages like Chinese and Japanese it throws the error below.
    Interestingly, with the same type of encoding (UTF-8), the BI Publisher standalone server successfully supports the multi-byte feature.
    java.io.UTFDataFormatException: Invalid UTF8 encoding.
    at oracle.xml.parser.v2.XMLUTF8Reader.checkUTF8Byte(XMLUTF8Reader.java:160)
    at oracle.xml.parser.v2.XMLUTF8Reader.readUTF8Char(XMLUTF8Reader.java:187)
    at oracle.xml.parser.v2.XMLUTF8Reader.fillBuffer(XMLUTF8Reader.java:120)
    at oracle.xml.parser.v2.XMLByteReader.saveBuffer(XMLByteReader.java:450)
    at oracle.xml.parser.v2.XMLReader.fillBuffer(XMLReader.java:2363)
    at oracle.xml.parser.v2.XMLReader.tryRead(XMLReader.java:1087)
    at oracle.xml.parser.v2.XMLReader.scanXMLDecl(XMLReader.java:2922)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:519)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:283)
    at oracle.apps.xdo.template.FOProcessor.mergeTranslation(FOProcessor.java:1922)
    at oracle.apps.xdo.template.FOProcessor.createFO(FOProcessor.java:1706)
    at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:1027)
    at glog.server.report.BIPReportRTFCoreBuilder.format(BIPReportRTFCoreBuilder.java:61)
    at glog.server.report.BIPReportCoreBuilder.build(BIPReportCoreBuilder.java:152)
    at glog.server.report.BIPReportCoreBuilder.buildFormatDocument(BIPReportCoreBuilder.java:77)
    at glog.webserver.oracle.bi.BIPReportProcessServlet.process(BIPReportProcessServlet.java:279)
    at glog.webserver.util.BaseServlet.service(BaseServlet.java:614)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:654)
    at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:445)
    at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:379)
    at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:292)
    at glog.webserver.report.BIPReportPreProcessServlet.getDocument(BIPReportPreProcessServlet.java:57)
    at glog.webserver.util.AbstractServletProducer.process(AbstractServletProducer.java:79)
    at glog.webserver.util.BaseServlet.service(BaseServlet.java:614)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at glog.webserver.session.ParameterValidation.doFilter(ParameterValidation.java:29)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at glog.webserver.screenlayout.ClientSessionTracker.doFilter(ClientSessionTracker.java:59)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at glog.webserver.util.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:44)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
    at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
    at java.lang.Thread.run(Thread.java:595)
    <Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
    oracle.apps.xdo.XDOException
    java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at oracle.apps.xdo.common.xml.XSLT10gR1.invokeNewXSLStylesheet(XSLT10gR1.java:641)
    at oracle.apps.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:247)
    at oracle.apps.xdo.common.xml.XSLTWrapper.transform(XSLTWrapper.java:181)
    at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:1151)
    at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:275)
    at oracle.apps.xdo.template.FOProcessor.createFO(FOProcessor.java:1809)
    at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:1027)
    at glog.server.report.BIPReportRTFCoreBuilder.format(BIPReportRTFCoreBuilder.java:61)
    at glog.server.report.BIPReportCoreBuilder.build(BIPReportCoreBuilder.java:152)
    at glog.server.report.BIPReportCoreBuilder.buildFormatDocument(BIPReportCoreBuilder.java:77)
    at glog.webserver.oracle.bi.BIPReportProcessServlet.process(BIPReportProcessServlet.java:279)
    at glog.webserver.util.BaseServlet.service(BaseServlet.java:614)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:654)
    at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:445)
    at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:379)
    at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:292)
    at glog.webserver.report.BIPReportPreProcessServlet.getDocument(BIPReportPreProcessServlet.java:57)
    at glog.webserver.util.AbstractServletProducer.process(AbstractServletProducer.java:79)
    at glog.webserver.util.BaseServlet.service(BaseServlet.java:614)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at glog.webserver.session.ParameterValidation.doFilter(ParameterValidation.java:29)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at glog.webserver.screenlayout.ClientSessionTracker.doFilter(ClientSessionTracker.java:59)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at glog.webserver.util.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:44)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
    at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: oracle.xdo.parser.v2.XMLParseException: Start of root element expected.
    at oracle.xdo.parser.v2.XSLProcessor.reportException(XSLProcessor.java:806)
    at oracle.xdo.parser.v2.XSLProcessor.newXSLStylesheet(XSLProcessor.java:571)
    Thanks,
    Anuradha .

    Hi Klaus,
    Even Notepad does not solve the issue!
    Is there any other text editor with reliable encoding handling, with which we can save as UTF-8?
    Thanks
    Kosstubh
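One way to sidestep unreliable editors is to verify and re-save the file with a small script; this is a generic sketch, not specific to any editor. Reading with utf-8-sig strips a byte-order mark if one was added (a BOM ahead of the XML declaration can confuse some XML parsers), and a UnicodeDecodeError means the file was not actually saved in the expected encoding:

```python
import os
import tempfile

def resave_as_utf8(path: str, source_encoding: str = "utf-8-sig") -> None:
    """Re-save a text file as plain UTF-8 (no BOM)."""
    with open(path, "r", encoding=source_encoding) as f:
        text = f.read()
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

# Demo: a file saved "as UTF-8" by an editor that prepends a BOM.
fd, path = tempfile.mkstemp(suffix=".xlf")
os.close(fd)
with open(path, "wb") as f:
    f.write("\ufeff<target>日本語</target>".encode("utf-8"))
resave_as_utf8(path)
with open(path, "rb") as f:
    assert f.read() == "<target>日本語</target>".encode("utf-8")
os.remove(path)
```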

  • Support for Asian Languages

    Well, it looks like support for Asian Languages in Dynamic
    Textfields wasn't included in this version as well. I have a simple
    flash file embeded in an html file with 2 text boxes. Here's the
    code I'm using:
    var strJapanese = "あらわせて";
    var strChinese = "我想你";
    japanese_txt.text = strJapanese;
    chinese_txt.text = strChinese;
    I currently have these fonts installed in the \Windows\Fonts\
    directory:
    mingliu.ttc
    msgothic.ttc
    but all I see is a bunch of squares in the text boxes. I have
    verified that my pocket pc can use these fonts in other programs.
    Any ideas?
    Thanks,
    BackBayChef

    I used both options but neither helps. Check the images to understand the problem. I tried Arial Unicode MS as another font, but it does not give correct visibility of the font.
    Please suggest another option.

  • Support for Malayalam language

    I wish to buy a Mac. Is there any support for the Malayalam language? If so, which Unicode version does it use?

    Good News! While reading the features of the upcoming OS X Lion, I found extended language support, including Malayalam! The fonts are probably the same as those by nickshanks, but they might be different since the original fonts were made for iOS devices.
    Expanded language support
    Twenty new font families for document and web display of text provide support for the most common languages in the Indian subcontinent, including Bengali, Kannada, Malayalam, Oriya, Sinhala, and Telugu. Devanagari, Gujarati, Gurmukhi, Urdu, and Tamil have been expanded. And three new font families support Lao, Khmer, and Myanmar.
    http://www.apple.com/macosx/whats-new/features.html

  • Problem with the support of certain characters unicode in Crystal Reports

    Hello,
    Crystal Reports seems to have problems in the treatment of the following Unicode characters: Ethiopic digits and numbers U+1369…U+137C.
    As soon as one of these characters appears in the viewing of a Formula Field with Format > Paragraph > Text interpretation set to "RTF" or "HTML", the CrystalReportViewer component displays the following dialog box
    (Failure of the Report Application Server)
    Any help would be appreciated…
    Regards,
    Claude

    What version of CR are you using?
    Any CR Service Packs applied?
    What version of CR was the report created in?
    What version of .NET are you using?
    What OS?
    Looking at this blog:
    http://blogs.msdn.com/b/michkap/archive/2005/02/01/364376.aspx
    I note that Ethiopic digits would not be supported until VS 2005 at the earliest:
    As such, the update will not be seen in Windows until Longhorn or in the .NET Framework until the version after Whidbey.
    So, if you're using .NET 2005 or later the point is probably moot..., but my other questions do need to be answered.
    - Ludek
    Senior Support Engineer AGS Product Support, Global Support Center Canada
    Follow us on Twitter
    Got Enhancement ideas? Try the SAP Idea Place
    Share Your Knowledge in SCN Topic Spaces

  • RH for Word and Unicode

    With RoboHelp 7, does RoboHelp for Word now support Unicode,
    and therefore most non-Western languages, i.e. Chinese, Japanese,
    Russian, etc.?
    In the online Help of RoboHelp for Word 7(trail version), the
    following statement can be found under "What's new in Adobe
    RoboHelp 7":
    Support for Unicode
    Create content in multiple languages, in the same RoboHelp
    project. Topic, index, TOC, and glossary can have content in
    multiple languages. You can import, open, save, and publish
    Unicode-encoded content.
    I am not sure if this statement also refers to the WinHelp
    output of RoboHelp for Word, as it is known that Microsoft's
    WinHelp compiler (HCW: Help Compiler Workshop), which RoboHelp for
    Word possibly still uses, has not been changed for 10 years. HCW
    supports DBCS but definitely not Unicode.
    It would of course be a nice surprise if Adobe has actually
    managed to make the Unicode encoding work in RoboHelp for Word.
    Could you give more details on this?
    I have performed some tests with a Japanese WinHelp 4 project
    and the result looks good. I think you have to install RH7 on a
    Japanese Windows system. I find running RH7 on an English Windows
    with the Japanese Language files installed isn't enough.

    Interesting question.
    I am certainly one of those who would go the HTML route as
    the HTML code created there is much cleaner and less prone to
    issues than HTML created by Word. Also you can get at the HTML and
    do things in the help that are not possible when working with Word.
    However, you have hit the nail on the head when you mention future
    authors in your company. Increasingly I believe that being able to
    work with an HTML tool, which is not the same as needing to have a
    good understanding of HTML, will become a basic expectation of
    anyone seeking a technical authoring role, if it is not already.
    But there are many organisations where the job is part time and
    knowledge of Word is all that can be expected. Personally I would
    go the HTML route and force the issue. It should not take too much
    to get someone else to the point where they can follow on.
    When I first created help using RH HTML, I started with Word
    as the editor. I hit some problems and was persuaded to use
    the HTML editor. I got help along the way and now I would not
    consider anything else. If I can do it, anyone can.
    If you are going to import from a project created using Word,
    do take a look at the topic on my site about importing from Word.

  • ANN: New XDK release for 9.0.2B and 9.0.1.1.0A available

    New XDK releases for the 9.0.2B Beta version and the 9.0.1.1.0A Production version are online at:
    http://technet.oracle.com/tech/xml/xdkhome.html
    What's new:
    - Oracle9i XDK for C and C++ released on Linux
    - Oracle TransX Utility aids loading data and text. This release is part of the XDK for Java 9.0.2 Beta.
    - Oracle SOAP APIs added to the XDK for Java
    Support for the SOAP services has been added to the XDK for Java 9.0.1.1.0A Production.
    - XML Schema Processor for Java supports both LAX Mode and STRICT Mode Validation
    The XML Schema Processor for Java now supports both LAX Mode and STRICT Mode validation. Compared with STRICT Mode, where every element from the root on down is required to be defined and validated, LAX Mode schema validation gives developers the ability to limit the XML Schema validation to subsections of the XML document.
    This simplifies XML Schema definitions and the validation process by skipping ignorable content. Furthermore, with LAX Mode, developers can divide their XML documents into small fragments and carry out the XML Schema validation in a parallel or progressive way. This results in a much more flexible and productive schema validation process.
    - SAX2 Extension support in the Java XML Parser
    The XML Parser for Java now supports two new handlers - LexicalHandler and DeclHandler.
    XML documents now can be parsed through Oracle XML Parser SAX APIs with full access
    to the DTD declarations and the lexical items like comments and CDATA sections.
    This ensures that the complete content model is preserved.
    - JAXP 1.1 support is now added to the XDK for Java
    - New Differ Bean in the XDK for JavaBeans
    A new bean has been added to the XDK for JavaBeans which analyses the differences between two XML documents and outputs the XSL stylesheet that will convert one into the
    other. This is extremely useful when converting an XML document retrieved from a SQL query into an XHTML page on the web.
    Stylesheets can now be automatically created for use in XSQL pages.
    - XML Compression now supported in the Java XML Parser
    Now developers can take advantage of a compressed XML stream when serializing their DOM and SAX outputs. This new functionality significantly reduces the size without losing any information. Both DOM and SAX compressed streams are fully compatible and the SAX stream can be used to generate a
    corresponding DOM tree. These compressed streams can also be stored in Oracle as a CLOB for efficient storage and retrieval.
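Oracle's compressed XML stream format itself is proprietary, but the general idea (serialize the tree, compress it losslessly, store the bytes in a LOB, rebuild the tree on retrieval) can be illustrated generically; gzip here is only an analogy for the compression step, not the XDK's actual format:

```python
import gzip
import xml.etree.ElementTree as ET

# Build a small document, serialize it, and round-trip it through
# compression, as one would before storing the bytes in a CLOB/BLOB.
doc = ET.Element("order", id="42")
ET.SubElement(doc, "item").text = "widget"

raw = ET.tostring(doc)        # serialized document as bytes
packed = gzip.compress(raw)   # compressed stream for storage

restored = gzip.decompress(packed)
assert restored == raw        # nothing lost in the round trip
assert ET.fromstring(restored).find("item").text == "widget"
```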

    Bump for the east coast.
