Re: Japanese Database

Hello Billy,
When we added support for double-byte characters to
Ambassador, we had similar concerns. We installed
Forte and Ambassador on a PC running the Japanese
version of MS Windows and tested Kanji support by
storing Kanji text in the same MS Access database we
use for storing Western-language text. This worked fine.
I expect it will also work fine with Sybase.
If you have Forte and the Japanese version of
MS Windows, I suggest you download the demo version
of the Ambassador Translation Wizard, and you can
see this in action.
You can find the demo version at:
http://www.lindhard.com/forte/ambassador.html
Good luck!
Kerry
We have been contacted by a company that would
like us to develop a database that contains
information from Japan. We are in the very early
stages of conversation; however, I think they want
us to store names, addresses, etc. in their native
language. We would convert all survey answers etc.
into English and store them that way. I was curious
whether anyone out there has experience using the
Kanji character set in Forte and/or Sybase. We are
just in the feasibility phase now, and any help
would be appreciated.
Thanks,
Billy Haynes
Chief Information Officer
[email protected]
Addressing Your Needs, Inc.
The Sales and Marketing Database Specialists
Visit us on the web @ www.ayn.com
- Sales Lead Management
- Relationship Marketing
- Database Design & Development
- Fulfillment
- Telemarketing
- Response Focused Marketing
- Direct Marketing Creative
- Automated Fax and Voice Systems
2324 Ridgepoint Drive, Suite A
Austin, TX 78754
512/349-3100 (voice)
800-525-6170 (toll free)
512/349-3101 (fax)
--
Kerry Bellerose, Senior Consultant      [email protected]
Lindhard Forte Solutions                direct: 45 45 94 01 03
Datavej 52                              desk:   45 45 82 21 21
3460 Birkeroed, Denmark                 fax:    45 45 82 21 22
http://www.lindhard.com/forte

The answer, as best I can discover, is that FileMaker are aware of this issue and have released v4 of FileMaker 11 to fix it for Lion, and have told everyone using version 10 (which I believe I only bought last year) that they can go jump.
Lovely bunch, FileMaker. I'd definitely urinate on them if they were on fire.

Similar Messages

  • Help needed in setting up Japanese Database

    Hi there,
    Help needed in setting up Japanese Database.
    I created database with UTF8 character set on Sun Solaris O/S.
    Oracle version 8.1.7.
    I am accessing the DB through SQL*Plus (Windows client).
    I downloaded the Japanese font on the client side and also set the NLS_LANG environment variable to Japanese_Japan.UTF8. Still, I am not able to view Japanese characters. The O/S on the client side is Windows 2000 Professional (English). Does the client O/S need to be a Japanese O/S? When I try to retrieve sysdate, it displays in Japanese, but not all characters are in Japanese. Can anyone help me with how to set up the client, and are there any parameters to be set up on the server side? I also tried to insert Japanese characters into a table through the client, but they display as "?????" characters. Any help in this regard is appreciated.
    Thanks in advance,
    -Shankar

    lol
    your program is working just fine.
    do you know what accept does? if not, read below.
    serversocket.accept() is where Java stops and waits for a (client) socket to connect to it.
    only after a socket has connected will the program continue.
    try putting the accept() in its own little thread and let it wait there while your program continues in another thread
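
    For posterity on the original question: a first check from SQL*Plus is to compare the database character set with what the session reports. This is only a sketch; even with NLS_LANG set correctly, an English Windows console or SQL*Plus GUI generally cannot render Japanese glyphs, so correct data can still display as "?????".
    SELECT parameter, value
    FROM   nls_database_parameters
    WHERE  parameter LIKE '%CHARACTERSET';
    SELECT parameter, value
    FROM   nls_session_parameters
    WHERE  parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');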

  • How does world_lexer index special characters in Japanese database?

    Does anyone have any information on how world_lexer indexes special characters (such as $, #, &, -, _, etc.) in a Japanese database?
    Your help will be highly appreciated.

    We worked this out via e-mail - the short answer for posterity is that the special characters are not added to the token list and printjoins is not an attribute for the world_lexer.
    -Ron
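
    To make that concrete, here is a minimal, hypothetical sketch of indexing with WORLD_LEXER (Oracle Text 10g and later); the table, column, and preference names are made up. As noted above, unlike BASIC_LEXER it has no printjoins attribute, so characters such as $, #, & and _ are handled by its built-in rules rather than by configuration:
    CREATE TABLE docs (id NUMBER PRIMARY KEY, text VARCHAR2(2000));
    BEGIN
      ctx_ddl.create_preference('my_world_lexer', 'WORLD_LEXER');
    END;
    /
    CREATE INDEX docs_text_idx ON docs(text)
      INDEXTYPE IS ctxsys.context
      PARAMETERS ('LEXER my_world_lexer');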

  • How to install Japanese database in English OS.

    Hi,
    I am trying to install a Japanese DB on an English OS by setting 'NLS_LANG=JAPANESE_JAPAN.JA16EUC'. After creating the DB, when I check NLS_DATABASE_PARAMETERS, it still shows AMERICAN_AMERICA.JA16SJIS. Can anybody help me out?
    Thanks,
    Prince

    > I am trying to install an Japanese DB on English OS
    What does this mean exactly? What are you installing, how, and what do you mean by "Japanese DB"?
    > by setting 'NLS_LANG =JAPANESE_JAPAN.JA16EUC'. After
    This is a client-side setting (even if you are using it on a database host). Details can be found in the NLS_LANG FAQ, and a relevant section in your case might be the one about the priority of NLS parameters.
    > creating the DB when I check the DB
    > NLS_DATABASE_PARAMETER still it shows as
    > AMERICAN_AMERICA.JA16SJIS. Can anybody help me out?
    Where do you see this? Can you post the output verbatim?
    To find out the character sets of a database, try
    select * from nls_database_parameters where parameter like '%CHARACTERSET';
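
    For completeness: the database character set is fixed by the CREATE DATABASE statement (or by the character set chosen in the Database Configuration Assistant), not by NLS_LANG, which only describes the client. A heavily abbreviated, hypothetical sketch; a real CREATE DATABASE statement also needs DATAFILE/LOGFILE and related clauses unless Oracle-Managed Files are configured:
    CREATE DATABASE jpdb
      CHARACTER SET JA16EUC
      NATIONAL CHARACTER SET UTF8;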

  • How to Insert Chinese characters in Japanese Database

    Hi all,
    I am having following characteristics on my computer
    Machine OS --Windows Server 2003
    OS language --Japanese
    Oracle
    Oracle9i Release 9.2.0.1.0 - Production
    NLS_LANGUAGE     JAPANESE
    NLS_CHARACTERSET     JA16SJIS
    Now, I want to insert Chinese characters into the database. Please guide me on how to do the following.
    How do I insert Chinese characters on the local machine, and how do I insert them into the remote database (I cannot create a database link to the remote database)? I have to send a batch file or SQL file and they will execute it on their side.
    If I use this command
    alter session set nls_language = "SIMPLIFIED CHINESE"
    and then insert the records and revert back to the Japanese setting afterwards, is this the correct way?
    Thanks in advance,
    Pal

    As dombrooks has pointed out, unless all the Chinese characters you are trying to store can be represented in the Shift-JIS character set (which seems unlikely, though I'm not an expert on East Asian languages and I believe there are some glyphs shared between the various languages), you're not going to be able to store this data in CHAR or VARCHAR2 columns in this database.
    Depending on the national character set, you may be able to store the data in NCHAR/NVARCHAR2 columns, though using these data types can substantially increase application complexity, since various languages and libraries don't support NCHAR/NVARCHAR2 columns or require you to jump through some hoops to use them. Your applications would also have to support both character sets, so they would most likely all have to be Unicode-enabled, which is certainly possible but may not be a trivial change.
    Justin
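
    If the national character set (NLS_NCHAR_CHARACTERSET) is AL16UTF16 or UTF8, a sketch along these lines might work; the table and column names are hypothetical, and UNISTR has the useful property that the SQL file sent to the remote site stays pure ASCII:
    CREATE TABLE mixed_names (
      id       NUMBER PRIMARY KEY,
      name_ja  VARCHAR2(100),     -- database character set (JA16SJIS): Japanese only
      name_any NVARCHAR2(100)     -- national character set: can also hold Chinese
    );
    INSERT INTO mixed_names (id, name_any)
    VALUES (1, UNISTR('\4E2D\6587'));   -- U+4E2D U+6587, the word for "Chinese"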

  • Japanese UTF-8 data

    Hi,
    I have installed the Oracle (8.1.6) Japanese client on a Windows 2000 Japanese workstation, and using DBA Studio I am looking at a UTF-8 Japanese database hosted on another Oracle 8i Windows 2000 machine.
    My problem is that I am getting junk data, despite the fact that I can view the data from a browser.
    Thanks
    ppsr

    Did you specify the correct NLS_LANG parameter?
    Take a look at the Globalization Support FAQ at
    http://technet.oracle.com/products/oracle8i/ for environment configuration information.
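
    When the same data looks fine in a browser but garbled in DBA Studio, the bytes stored in the database are probably correct and only the client-side conversion is off. A quick, hypothetical check of the raw bytes (table and column names are made up):
    SELECT DUMP(title, 1016) AS stored_bytes
    FROM   articles
    WHERE  ROWNUM <= 5;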

  • Display problem when reading Traditional Chinese in Crystal Report 2008

    Hi All,
    I have tried to connect to MS SQL 6.5 to build a report. However, I encounter a problem when reading Traditional Chinese: all Chinese characters turn into abnormal characters. I wonder whether it is a problem with SQL Server or some setting that I may not know about. In addition, I am able to read Chinese in the application programs ISQL / Query Analyzer. Please help with ideas and suggestions to fix this. Thanks
    SQL Server: MS SQL Server  6.5
    Client OS: Window XP ( Traditional Chinese Version)
    Server OS: Window NT and Window Server 2000
    Stsyou

    Hi
    If you are using a Chinese-language database, the English build of Crystal Reports might not display the characters properly, as non-English databases store one-byte characters as ASCII code characters and double-byte characters as their own language code characters.
    Traditional Chinese, Simplified Chinese, Japanese (Kanji, Hiragana, and Katakana), Korean and Vietnamese use double-byte characters.
    However, a possible workaround to this issue is to verify that the encoding on the database client is configured according to the examples below. Also, you will need to install the language pack in the English environment.
    CONFIGURATION EXAMPLES:
    Shift-JIS for a Japanese database
    Big5 for a Traditional Chinese database
    GB2312 for a Simplified Chinese database
    iso-2022-kr for a Korean database
    Windows 1258 for a Vietnamese database
    EXAMPLE SCENARIO:
    An Oracle database saves non-English language data such as Japanese.
    The Windows operating system is English.
    STEP TO CONFIGURE THE CHARACTER CODE IN THE ENVIRONMENT SETTING
    (This step is based on the details of the EXAMPLE SCENARIO.)
    1. Configure the system environment on the database client side to 'Shift-JIS' as the character code set in the Environment Setting.
    This results in the NLS_LANG variable being set to Japanese_Japan.JA16SJIS under the registry key HKEY_LOCAL_MACHINE\Software\Oracle\HOME0.
    STEPS TO INSTALL LANGUAGE PACK IN AN ENGLISH ENVIRONMENT
    (These steps are based on the details of the EXAMPLE SCENARIO.)
    1. Add languages to the computer system by clicking:
    Control Panel > Regional Options > General
    2. Select the check box for 'Traditional Chinese', 'Simplified Chinese', 'Japanese', 'Korean' or 'Vietnamese'.
    3. Click 'Apply'.
    4. When you are prompted, insert the Windows CD-ROM to install the language pack.
    5. Restart the computer.
    Upon completing these steps, you are able to display the language's characters in the Crystal Reports Designer in a Windows English environment.
    ==========
    NOTE:
    If the database table and field names use non-English language characters, Data Explorer in CR will not correctly display these names. However, when you preview the report, the non-English data displays correctly.
    ==========
    Configuring the database client according to the examples and installing the language pack will display the characters successfully. However, there are cases when this workaround does not resolve the issue.
    For further information about CR and double-byte languages, refer to knowledge base articles, c2008083 and c2008349.
    Hope this helps!!!
    Regards
    Sourashree

  • Arabic text showing garbage in old version report

    I have an Arabic report created in old version of Crystal Report and used in VB 6 application.
    Its working well from the VB 6 application.
    Now I want to make slight changes to the report, but when I open it in any of the Crystal Reports versions 8.0, 9.0, Visual Studio 2005, Visual Studio 2008, etc., it shows the Arabic text as garbage boxes in both data and labels instead of Arabic text.
    I have tried everything to resolve this problem, including several different versions of Crystal Reports, but it is not solved. I am not sure which version the report was originally created in, but I do know the version of the ActiveX controls the application is using, i.e. 8.0.
    Any clue how to solve this problem? Has anyone faced a similar problem?
    Thank you very much in advance.

    Hi Adil,
    Please try the following:
    This behavior has been resolved in CR 9 as CR 9 supports UNICODE. For versions earlier than CR 9, proceed to the solution provided in this article.
    ====================
    In order to display double-byte characters correctly in the English Build of Crystal Reports, follow these rules:
    · Use an operating system that is able to handle the appropriate language set.
    For example:
    To show Thai characters in the Crystal Reports designer, a Thai operating system must be used.
    · Use an appropriate font for each field.
    For example:
    To show traditional Chinese characters (Big5 encoding) in the Crystal Reports designer, select a font such as Mingliu.
    · If you are using a Japanese database client, ensure the encoding on the client is set to Shift-JIS. Please note that we do not fully support UNICODE.
    NOTE ======
    The English version of Crystal Reports has not been fully tested with non-English characters or on non-English operating systems.
    Crystal Reports is localized into the following languages: Japanese, French, German, Spanish, Italian, and Portuguese. Customers using data stored in these languages are recommended to use these versions of Crystal Reports as they fully support the respective language and operating system.
    ============
    Known limitations of using non-English data with the English build of Crystal Reports are:
    · Exporting to Microsoft Word and Excel 8.0 yields corrupted characters. This is a limitation of the export dlls.
    · Non-English characters within the Crystal Reports formula editor appear corrupted, however, the output of the formula appears correctly when previewing the report.
    · Database field names and table names in non-English characters may appear corrupted in the field explorer or may be unavailable for selection.
    NOTE: The same steps can be applied for languages such as the European languages French, German, Italian, Spanish, and Portuguese, as well as Greek, Thai, Arabic, Hebrew, and Russian.
    ============
    Hope this helps,
    Shraddha

  • Inserting JAPANESE characters in a database

    Can somebody please let me know how to insert JAPANESE (Kanji) characters into the database.
    Database Version: Oracle 9.2.0.1.0
    Parameter Settings:
    NLS_CHARACTERSET - UTF8
    NLS_NCHAR_CHARACTERSET - UTF8
    Server OS: Win2K
    Client OS: Win2K

    I'm not sure what your overall requirements are from an application support standpoint, but a simple way would be to use UNISTR.
    Here is a description:
    UNISTR takes as its argument a string and returns it in the national character set. The national character set of the database can be either AL16UTF16 or UTF8.
    UNISTR provides support for Unicode string literals by letting you specify the Unicode encoding value of characters in the string. This is useful, for example, for
    inserting data into NCHAR columns.
    The Unicode encoding value has the form '\xxxx' where 'xxxx' is the hexadecimal value of a character in UCS-2 encoding format. To include the backslash in
    the string itself, precede it with another backslash (\\).
    For portability and data preservation, Oracle Corporation recommends that in the UNISTR string argument you specify only ASCII characters and the Unicode
    encoding values.
    Examples
    The following example passes both ASCII characters and Unicode encoding values to the UNISTR function, which returns the string in the national character
    set:
    SELECT UNISTR('abc\00e5\00f1\00f6') FROM DUAL;
    UNISTR
    abcåñö
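
    A hypothetical follow-on example for Japanese: the code points U+65E5, U+672C and U+8A9E spell the word for "Japanese language" (the table and column names are made up):
    CREATE TABLE unistr_demo (txt NVARCHAR2(20));
    INSERT INTO unistr_demo (txt) VALUES (UNISTR('\65E5\672C\8A9E'));
    SELECT txt, DUMP(txt, 1016) FROM unistr_demo;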

  • Handling Multi-byte/Unicode (Japanese) characters in Oracle Database

    Hello,
    How do I handle the Japanase characters with Oracle database?
    I have a Java application which retrieves some values from the database; makes some changes to these [ex: change value of status column, add comments to Varchar2 column, etc] and then performs an UPDATE back to the database.
    Everything works fine for English, but NOT for Japanese, which uses multi-byte/Unicode characters. The Japanese characters are garbled after performing the database UPDATE.
    I verified that Java by default uses UTF16 encoding. So there shouldn't be any problem with Java/JDBC.
    What do I need to change at #1- Oracle (Database) side or #2- at the OS (Linux) side?
    /* I tried changing the NLS_LANG value in the OS and the NLS_SESSION_PARAMETERS settings in the database, and tried a 'test' insert from SQL*Plus. But SQL*Plus converts all Japanese characters to a question mark (?), so I could not test it via SQL*Plus on my XP (English) edition. */
    Any help will be really appreciated.
    Thanks

    Hello Sergiusz,
    Here are the values before & after Update:
    --BEFORE update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,65,74,61,6c,69,6e,6b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    --AFTER Update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,45,54,41,4c,49,4e,4b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    So the values BEFORE & AFTER Update are the same!
    The problem is that sometimes, the Japanese data in VARCHAR2 (abstract) column gets corrupted. What could be the problem here? Any clues?
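
    One way to narrow down an intermittent problem like this is to look for rows where the Japanese text has already been flattened to question marks, the usual fingerprint of a lossy character set conversion by some client; this is only a sketch, with names taken from the posts above:
    SELECT tar_sid
    FROM   table_name
    WHERE  INSTR(col_name, '??') > 0;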

  • Japanese and English together in database table?

    Hi all, on to my next challenge... ;-)
    I'm wondering if there are any issues with database tables that contain both Japanese and English (Roman) characters. I'm in the process of establishing an online language-learning site that will target (mostly) customers in Japan. As part of the registration process, I need to collect names in kanji (Japanese characters) along with other information in roman script (email, username, password, etc.).
    Right now I have these fields set to utf8_unicode_ci collation. Is this correct? Am I confusing character set with collation? I doubt very many of you have experience in this area, but if you do I'd love to hear from you.
    David

    Hi,
    I don´t know if my answer is of help in your particular situation -- however, I once made a CMS with ADDT where form pages had to contain both English & Russian input fields, and I succeeded in having them display their respective chars correctly this way:
    1. setting the page´s "content-type" to UTF-8
    2. no need to apply something to the "english" aka roman form fields, but...
    3. adding the following bold stuff to the "value" of a russian field, example for PHP:
    " size="100" maxlength="240" />
    I actually didn´t define any "collation" for the MySQL database, so I can´t say something about that.
    Günter Schenk
    Adobe Community Expert, Dreamweaver
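
    For the MySQL side of the original question: the character set controls how the text is stored, the collation only controls comparison and sorting, and the connection itself must also be UTF-8. A minimal, hypothetical sketch (table and column names are made up):
    CREATE TABLE students (
      id         INT AUTO_INCREMENT PRIMARY KEY,
      name_kanji VARCHAR(100),
      email      VARCHAR(255)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
    -- issued by the application right after connecting:
    SET NAMES utf8;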

  • Should be able to enter both Japanese data and English data into the database without

    Scenario 1:
    Database Char Set: UTF8
    National Char Set: UTF8
    String Type: Varchar2
    Problem: Unable to enter more than 1/3rd of the field length specified when entering Kanji (Japanese) data
    Scenario 2:
    Database Char Set: JA16EUC/JA16SJIS
    National Char Set: JA16EUCFixed/JA16SJISFixed
    String Type: NVarchar2
    Problem: Unable to enter/retrieve English data written into the database, but works fine with Japanese
    Scenario 3:
    Database Char Set: UTF8
    National Char Set: JA16EUCFixed
    String Type: NVarchar2
    Problem: Unable to enter/retrieve English data written into the database, but works fine with Japanese

    You will not be able to display the process form, or edit those values from the view profile screen.
    You would need to create custom fields that are mapped from the process form to the user's profile. Then you would need to create user form update triggers from the User Defined Fields that when the user changes them, they get pushed to the target process form by adding those task names into the provisioning process definition. This would then trigger the updates to the target resource.
    -Kevin
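
    Scenario 1 above is the classic byte-versus-character length question: with a UTF8 database character set, most Kanji take three bytes, so a VARCHAR2(30) column declared with the default byte semantics holds only about a third as many Japanese characters. A sketch using explicit character semantics (Oracle9i and later; the table and column names are hypothetical):
    CREATE TABLE survey_answers (
      answer_ja VARCHAR2(30 CHAR)   -- 30 characters, whatever their byte length
    );
    -- or switch the default for the session before creating tables:
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;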

  • US7ASCII database and Japanese data

    Hi,
    I am trying to use 9.0i OCCI to build a simple client app which reads Japanese messages out of our database. The database has US7ASCII char set ( I ran SELECT * FROM NLS_DATABASE_PARAMETERS to determine it. )
    I have set NLS_LANG to JAPANESE_JAPAN.JA16SJIS in the registry on my client machine, but the data comes out as garbage. It comes out all right via the Remedy client application. What do I need to do in order to read it with OCCI?
    Please help.
    Thank you.

    Thank you very much Justin for your reply.
    I managed to fix my app by setting the char set to US7ASCII on my client, but you confirmed my suspicion. It appears that Oracle indeed uses 8 bits for data even if you specify a 7-bit char set, but as you pointed out, the disaster may come when you least expect it. I read the Oracle documentation, and it seems that the optimal solution would be to upgrade the char set to UTF8 by running
    ALTER DATABASE CHARACTER SET UTF8
    in which case the data will be stored in Unicode, and there will be virtually no storage space increase or performance penalty.
    I guess my further question is - what will be the best way to build my C++ client in that case? Logically it seems that I should use wchar_t for simplicity (I am not sure about performance). My understanding is that data coming from Oracle will be in Unicode, provided that I set my client's NLS_LANG to UTF8. But I guess there is a difference between UTF8 and the 16-bit encoding used by Windows. Will OCCI handle that if I simply build my client with the _UNICODE setting, or do I have to do some transformations?
    Another issue is that we currently have a lot of C++ char-based code, which manipulates that data and then stores it in different files. There are a lot of char strings embedded in the code. To convert it all to wchar_t might be a quite sizable effort, so I might need to resort to multi-byte strings instead of Unicode. If I set my client's NLS_LANG to JA16SJIS, will I be getting the correct data out of Oracle?
    Thank you again.
    I appreciate your help.
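
    One caution worth adding to the plan above: ALTER DATABASE CHARACTER SET only changes the metadata, it does not convert existing data. Text that was stored "pass-through" (client character set equal to the database character set) keeps its original Shift-JIS bytes, and after switching the declared character set to UTF8 those bytes would be interpreted as UTF8 and come out as garbage. It is worth looking at the raw bytes first; a hypothetical sketch (table and column names are made up):
    SELECT DUMP(summary, 1016) AS stored_bytes
    FROM   remedy_tickets
    WHERE  ROWNUM <= 20;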

  • What's going on -- when saving Japanese characters to the database?

    From this Oracle Forum discussion, I learned that when you get
    the input value (from an HTML form), you have to do a 'translation'
    because of servlet design behavior, for example:
    translatedParameter = new String(request.getParameter("param").getBytes("ISO-8859-1"), "Shift_JIS");
    I tried this, and it does work well: the Japanese characters are saved properly to the Oracle database.
    But this works only when I use the JRun servlet engine + Oracle database. If I use Jakarta Tomcat, I do NOT need to do any "translation" coding to save to the Oracle database properly. What's more, if I use a SQL database, I also do not need to do this kind of decoding no matter which servlet engine is used (it does not work if I do the "translation" stuff).
    I am confused: since the reason for doing the "translation" is the servlet design behavior, it should be required for every type of servlet engine and database, but that is not the case. Does anyone know what's going on?

    EDITOR=nano
    sudo visudo
    export EDITOR=nano && sudo visudo
    They both open with vi. So how the heck do I edit my sudoers file then ?
    PS:
    I'm sure vi is a powerful editor (in the right hands) but it's waaaaay off the KISS principle.
    Nano is like 30 times easier. I couldn't even find the QUIT command in vi, lol.
    Last edited by DSpider (2010-11-07 14:23:06)

  • Multilingual word game/database with new words (japanese)

    Hi there!
    I guess the subject may be a bit confusing, but let me tell you what I'm looking for.
    I've started learning the Japanese language, and I found the option to use Japanese on my MacBook Pro. I want to create a database of the new words that I learn, using Japanese (Hiragana) and Polish. But I don't want it to be just a normal text file or a normal database. I want it, first of all, to be able to search words, and then it could have the option to "play" with new words in a way that would help me remember them (something like flash cards). Does anyone know anything like this?
    I could write it in Java using MySQL, but it would take so much time...
    Anyway, one more thing - is there a shortcut, or how do I create one, to switch languages in Tiger 10.4?
    Thanks a lot
    Adam M

    > I want it first of all to be able to search
    > words and then it could have the option to "play"
    > with new words in a way it would help me remember new
    > words (something like flash cards). Does anyone know
    > anything like it?
    Try doing a google search for "os x language flash cards".
    > Anyways, one more thing - is there or how to create a
    > shortcut to shitch languages is Tiger 10.4?
    Command + Space. You can see the info at the bottom of system prefs/international/input menu (input menu shortcuts).
