[SOLVED] Finch and special characters?

EDIT: Solved it by setting my SSH clients to send the locale at connect. It turned out the locales were fine on the server and on all my computers, but the locale changed to POSIX when I ssh'd into the server. The easy fix was to set ssh_config to send the locale environment variables on connect, and sshd_config to accept them.
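For anyone hitting the same thing, the relevant OpenSSH options look roughly like this (a sketch; exact file locations can vary by distribution):
# Client side, in /etc/ssh/ssh_config (or ~/.ssh/config):
# pass the locale environment variables to the server at connect
SendEnv LANG LC_*
# Server side, in /etc/ssh/sshd_config:
# accept the locale variables the client sends
AcceptEnv LANG LC_*
After restarting sshd, new SSH sessions inherit the client's locale instead of falling back to POSIX.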
Hi everyone!
Today I have been playing around a bit with my torrent/file server after buying a new router. After changing some configs I can't get special characters like åäö to work in Finch.
Finch works fine with special characters on my desktop and laptop (also running Arch Linux) but not on the server. I usually run a screen with Finch 24/7 on the server, which I can ssh to; that's why I want it to work there, too.
I have tried both en_US.utf8 and sv_SE.utf8 as locales. Does anyone have any idea how I can fix this? It worked before I started messing around, but I can't figure out what I have changed... The only thing is that I'm now using netcfg instead of setting the IP in rc.conf, but that seems to be irrelevant...
Thanks for answering!
Here is my rc.conf:
# /etc/rc.conf - Main Configuration for Arch Linux
# LOCALIZATION
# LOCALE: available languages can be listed with the 'locale -a' command
# HARDWARECLOCK: set to "UTC" or "localtime", any other value will result
# in the hardware clock being left untouched (useful for virtualization)
# TIMEZONE: timezones are found in /usr/share/zoneinfo
# KEYMAP: keymaps are found in /usr/share/kbd/keymaps
# CONSOLEFONT: found in /usr/share/kbd/consolefonts (only needed for non-US)
# CONSOLEMAP: found in /usr/share/kbd/consoletrans
# USECOLOR: use ANSI color sequences in startup messages
LOCALE="sv_SE.utf8"
HARDWARECLOCK="localtime"
TIMEZONE="Europe/Stockholm"
KEYMAP="us"
CONSOLEFONT=
CONSOLEMAP=
USECOLOR="yes"
# HARDWARE
# MOD_AUTOLOAD: Allow autoloading of modules at boot and when needed
# MOD_BLACKLIST: Prevent udev from loading these modules
# MODULES: Modules to load at boot-up. Prefix with a ! to blacklist.
# NOTE: Use of 'MOD_BLACKLIST' is deprecated. Please use ! in the MODULES array.
MOD_AUTOLOAD="yes"
#MOD_BLACKLIST=() #deprecated
MODULES=()
# Scan for LVM volume groups at startup, required if you use LVM
USELVM="no"
# NETWORKING
# HOSTNAME: Hostname of machine. Should also be put in /etc/hosts
HOSTNAME="servzor"
# Use 'ifconfig -a' or 'ls /sys/class/net/' to see all available interfaces.
# Interfaces to start at boot-up (in this order)
# Declare each interface then list in INTERFACES
# - prefix an entry in INTERFACES with a ! to disable it
# - no hyphens in your interface names - Bash doesn't like it
# DHCP: Set your interface to "dhcp" (eth0="dhcp")
# Wireless: See network profiles below
# Enable these network profiles at boot-up. These are only useful
# if you happen to need multiple network configurations (ie, laptop users)
# - set to 'menu' to present a menu during boot-up (dialog package required)
# - prefix an entry with a ! to disable it
# Network profiles are found in /etc/network.d
# This now requires the netcfg package
NETWORKS=(main)
# DAEMONS
# Daemons to start at boot-up (in this order)
# - prefix a daemon with a ! to disable it
# - prefix a daemon with a @ to start it up in the background
DAEMONS=(syslog-ng net-profiles netfs crond sshd acpi)
Last edited by Ginux (2010-04-03 11:17:12)

Since you're apparently using utf-8 for editing, your setting for inputenc should reflect that:
\usepackage[utf8]{inputenc}
I tried your sample with this and it worked here.
EDIT: explanation: the inputenc package converts a (supported) outside encoding (e.g. utf-8, latin1, cp1250, il2, and many more) into TeX's internal encoding, so that TeX assigns the appropriate glyphs to the letters coming in from your file.
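For instance, a minimal UTF-8 preamble might look like this (a sketch; the fontenc line is an addition here, commonly paired with inputenc for correct output of accented glyphs):
% Minimal sketch of a UTF-8 encoded LaTeX source.
% [utf8]{inputenc} must match how the .tex file is actually saved.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc} % assumption: T1 output encoding for accented letters
\begin{document}
åäö and other accented letters now map to the right glyphs.
\end{document}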
Last edited by bender02 (2008-11-04 23:36:03)

Similar Messages

  • Remove spaces and special characters from a form field

    Hi,
    I am tragically new to all of this, but am trying to create a form in Adobe Acrobat 9. I am trying to use a Custom Format script to take inputs in a form field and automatically remove spaces and special characters (hyphens specifically). For example, if a user inputs "RAN-99 06" I would like it to change to "RAN9906." I found this script that will not let users input special characters:
    if (!event.willCommit) {
        event.change = event.change.replace(/[\$#~%\*\*\^\-\(\)\+=\[\]\{\};\"\<\>\?\|\\\!]/g, "");
    }
    And that's okay, but I can't figure out how to disallow spaces. Also, the preference would be for a script to allow users to input the data as they like, but to clean it up after they leave the text field.
    Thanks in advance!

    From the description, I assume that the script is currently in the Keystroke event. In fact, that would be a most logical way to have it; simply ignore anything unwanted when entered.
    If you want to allow the user to enter anything, but "clean it up" when done, you would place your code in the Validate event. You will have to adjust your Regular Expression so that it works globally, but that's the whole difference. This will change the value.
    Note that you can also enter the code into the Format event. However, that would only change the visual representation of the value, but internally, the value would remain as entered.
    Hope this can help.
    Max Wyss.
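    Following Max's suggestion, a Validate-event version might look like the sketch below (the character class is borrowed from the keystroke script above, with \s added to cover spaces; adjust it to taste):
    // Custom Validate script (sketch): runs when the user commits the
    // field and rewrites the committed value, stripping spaces and the
    // other unwanted characters globally.
    if (event.value) {
        event.value = event.value.replace(/[\s\$#~%\*\^\-\(\)\+=\[\]\{\};\"\<\>\?\|\\\!]/g, "");
    }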

  • Is it possible to search for strings containing spaces and special characters?

    In our RoboHelp project, there are figures with text labels such as Figure 1, Figure 3-2, etc.
    When I search for "Figure 3" I get all pages containing "Figure" and "3", even if I surround it in quotes.  Similarly, searching for "3-2" treats the '-' character as a space and searches for all pages containing '3' or '2'.
    Is there a way to search for strings containing spaces and special characters?

    In that case I think the answer is no if you are using the standard search engine. However I believe that Zoom Search does allow this type of searching. Check out this link for further information.
    http://www.grainge.org/pages/authoring/zoomsearch/zoomsearch.htm
      The RoboColum(n)
      @robocolumn
      Colum McAndrew

  • Validating text field based on the combination of alphabets and special characters

    Hi Everyone,
    I am using Oracle Apex version 4.2. I want to do validation for a text box where it should accept all alphabets, numbers and special characters (abc12#$, zbc, 123, nd12, 23_6!, @%77).
    But it should NOT accept special characters only (@#$#!).
    Please do help if anyone knows this.
    Thanks in advance,
    Nikhil.

    Hi Nikhil,
    Here is one way that could work.
    CREATE TABLE t (x VARCHAR2 (30));
    INSERT ALL
    INTO t
    VALUES ('XYZ123')
    INTO t
    VALUES ('XYZ 123')
    INTO t
    VALUES ('xyz 123')
    INTO t
    VALUES ('X1Y2Z3')
    INTO t
    VALUES ('123123')
    INTO t
    VALUES ('abc12#$')
    INTO t
    VALUES ('@%77')
    INTO t
    VALUES ('!@#$')
    INTO t
    VALUES ('~%^&*()_+')
    INTO t
    VALUES ('23_6!')
    INTO t
    VALUES ('zbc')
    INTO t
    VALUES ('123*456')
    SELECT * FROM DUAL;
    SELECT x
    FROM   t
    WHERE  ( Regexp_like (x, '[[:alpha:]]') -- include alpha characters
              OR Regexp_like (x, '[[:digit:]]') -- include numbers
              OR Regexp_like (x, '[[:punct:]]') ) -- include special character
           AND ( Regexp_like (x, '^[^%]*$')
                 AND Regexp_like (x, '^[^*]*$') ); -- exclude special characters % and *
    Jeff
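    One caveat with the query above: because [[:punct:]] alone satisfies the OR, an all-special string such as '!@#$' still passes the first condition. If the requirement is simply "at least one letter or digit somewhere in the value", a single predicate may be closer to the intent (a sketch against the same table):
    SELECT x
    FROM   t
    WHERE  Regexp_like (x, '[[:alnum:]]'); -- at least one alphanumeric character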

  • Accents and special characters - how to manage

    Hi,
    please could you explain how it is possible to handle accents and other special characters in Business Objects XI 3.1 (Universes and Reports)? What are the parameters/settings I need to configure for:
    Database (ORACLE)
    Application Server (AIX)
    Universe
    WebI
    I want to display Italian accents ò à è ù ì and some special characters like '@'. Even though they are correctly stored in the DB, in WebI I retrieve the characters without accents and a '?' for '@'.
    Any suggestions?
    One more thing... I noticed that when creating a new WebI report, characters with accents appear without their accents; but if I create a report with Rich Client and publish it, the accents are correctly displayed, even when opening the report with Web Intelligence. I have the application server on AIX, but Rich Client runs on a WIN server...
    Thanks a lot
    Edited by: Tube Girl on Mar 22, 2010 4:25 PM

    Hi,
    I had almost the same problem (only a different language) and managed to solve it after "google-ing" these links:
    http://www.forumtopics.com/busobj/viewtopic.php?p=658760&sid=7fd77de133ee32213d987cb1550ae568
    https://oraclespin.wordpress.com/2008/05/01/setting-nls_lang-for-oracle/
    http://docs.oracle.com/html/B13804_02/gblsupp.htm
    FYI, system is AIX 5.3 with Oracle 10g.
    I've edited .profile for the boxi user and added the appropriate export of the NLS_LANG variable, then restarted BO, and now the reports are correct.
    I guess you should try adding this line to the .profile of your boxi user:
    export NLS_LANG=ITALIAN_ITALY.WE8MSWIN1252
    then restart and see results.
    Hope this helps,
    regards!

  • PDFs using BI publisher and special characters - pound sterling

    I have a table with character strings which include the £ pound sterling sign.
    If I create an xml file from the table using the escape sequence for the pound sign & # 163; and load it into BI publisher, the pdf renders correctly using an rtf template.
    If I include the pound sign in the rtf template itself the pdf renders correctly and if I use a report query in Apex based on the table, and an rtf template then the pdf renders correctly.
    However there is a limit to the number of columns that can be sent to BI via the report query - we have found this to be 119 columns. For more than this we have been using a stored function to return xml data as a clob and use the clob to generate the pdf with the rtf report layout (using get_print_document). This method works fine but the pound sign is not rendered correctly (appears as "?").
    The NLS_LANG is set to english_united kingdom in the database and in BI publisher. The locale in bi publisher is en_GB. BI will create the pdf correctly with the pound sign if using a standard xml file (with escape seq for pound) so I am sure it is not a language issue in BI/Apex.
    Any ampersands in the xml cause the pdf to fail completely and we cannot represent special characters with a sequence containing the ampersand for this reason. I have tried "& # 163;" and "& amp;#163;" and variations and have also tried using CDATA, none of which gives the correct pdf output. I have also tried to switch the encoding to Windows-1252 or ISO-8859-1 in the header of the xml (xml generated by stored procedure so can control this), but this gives incorrect results too.
    We also need to send other special characters to BI via get_print_document and apex (bullet points, ampersands, dashes ) but cannot use the escape sequences because of the ampersand problem.
    Has anyone had any success with this?
    Is there any plan ( for a future version of Apex) to increase the number of columns that can be used in report query using Apex/BI publisher?
    Thanks
    Kathryn

    Hi Kathryn
    I've had exactly the same problem as you've mentioned.
    Firstly, I've also found that I cannot select more than 119 columns from a view using a report query and have opted to use the stored function to return the xml (same as you).
    After conducting a lot of searching and experimenting, I've found that you can use the following escape characters in REPLACE in your stored function:
    '£' can be replaced with CHR(194)||CHR(163)
    '%' can be replaced with '%25'
    '&' can be replaced with '%26amp;'
    I found the above escape characters from the xml file or by opening the xml file in Wordpad. Therefore I'm sure you'll be able to find the escape characters for the other symbols that you mentioned i.e. bullet points, dashes etc.
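    Inside the stored function, those replacements might look something like this sketch (l_xml is a hypothetical CLOB variable holding the generated XML):
    -- Hypothetical sketch: escape problem characters while building the XML CLOB.
    l_xml := REPLACE(l_xml, '£', CHR(194) || CHR(163)); -- UTF-8 byte pair for the pound sign
    l_xml := REPLACE(l_xml, '%', '%25');
    l_xml := REPLACE(l_xml, '&', '%26amp;');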
    I hope this has helped. Good luck and let me know if you get any developments with the limitation on the number of columns that can be selected from a report query as this would save a lot of trouble (as I'm sure you're aware).
    Thanks
    Natalie

  • A problem with regex and special characters

    Hello,
    I am using regex in my application, but I have a problem with special characters. Here is an explanation of what I am doing:
    I have a certain piece of text that I want to parse, replacing every occurrence of a given word with a tag that has the found word inside it,
    so that: go Going Go to gOschool by bus and to learn and to play GO Go
    with the word "go" replaced (case insensitive and only at word boundaries) should become:
    <start>go<end> Going <start>Go<end> to gOschool by bus and to learn and to play <start>GO<end> <start>Go<end>
    Consider the following code, and call the method with the parameter "go?".
    The Matcher finds a weird match at the word "G?oing" consisting of only the letter G!
    It also ignores the "?" in the pattern completely.
    Any clue as to what is happening would be very much appreciated...
    // requires: import java.util.regex.*;
    private static String replaceMatches(String strToFind) {
        String resultArticle = "";
        String article = " " + "go? G?oing Go? to gOschool by bus and to learn and to play GO? Go?*" + " ";
        strToFind = "\\b" + strToFind + "\\b";
        String linkPart1 = "<start>";
        String linkPart2 = "<end>";
        Pattern p = null;
        try {
            p = Pattern.compile(strToFind, Pattern.CASE_INSENSITIVE);
            Matcher m = p.matcher(article);
            String[] res = p.split(article);
            int i = 0;
            // System.out.println("result of split: " + res.length);
            while (m.find()) {
                resultArticle += (res[i] + " ");
                resultArticle += linkPart1;
                resultArticle += m.group().trim();
                resultArticle += (linkPart2 + " ");
                i++;
            }
            if (i < res.length)
                resultArticle += res[i]; // was "res" (the array itself), a bug
            // System.out.println("result of match: " + i);
            System.out.println(article);
        } catch (PatternSyntaxException ex) {
        }
        return resultArticle.trim();
    }
    Thanks

    tarek.mamdouh wrote:
    > because split will not work when trying to replace the first word if i don't append a space at the beginning.
    Split doesn't work anyway. And my question wasn't why you add spaces (which you really don't need to do), but why you do it with " " + "go" rather than just " go".
    > replaceAll will replace all the occurrences in the text with only one word, without taking into consideration the case of the word i need to replace.
    No.
    > If i use replaceAll(article, strToFind) the output will be:
    > <start>go?<end> G?oing <start>go?<end> to gOschool by bus and to learn and to play <start>go?<end> <start>go?<end>
    No. I showed you the actual output of an actual replaceAll.
    > which is not what i want as i need to keep the case of the words i am replacing
    The replaceAll I showed you does that.
    Please study the examples given and read the docs carefully rather than making claims based on inaccurate guesses.
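    The replaceAll being referred to isn't quoted in this digest, but for the stated goal (tag each standalone "go", case-insensitively, keeping each match's original case) it presumably looks something like the sketch below. The weird "G?oing" match above happens because the unescaped '?' makes the 'o' optional in the regex; Pattern.quote() treats the search term literally.
    import java.util.regex.Pattern;
    public class TagWords {
        public static void main(String[] args) {
            String article = "go Going Go to gOschool by bus and to learn and to play GO Go";
            String strToFind = "go";
            // (?i) = case-insensitive; Pattern.quote() escapes regex
            // metacharacters in the term; $0 re-inserts the matched text,
            // preserving its original case.
            String result = article.replaceAll(
                    "(?i)\\b" + Pattern.quote(strToFind) + "\\b", "<start>$0<end>");
            System.out.println(result);
            // -> <start>go<end> Going <start>Go<end> to gOschool by bus and
            //    to learn and to play <start>GO<end> <start>Go<end>
        }
    }
    Note that for a term ending in a non-word character, like "go?", the trailing \b no longer works as intended ('?' is not a word character), so the boundary handling would need rethinking; that is separate from the optional-'o' problem fixed by quoting.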

  • Apple, please address this issue!!! Searching and special characters

    I've found at least one other topic here regarding this issue, but Apple has yet to address it, or even confirm that it's in fact an issue.
    A big portion of my library consists of international artists, many who have special/nonstandard characters in their names (e.g. "Tiësto"). In previous versions of iTunes, I could search for "Tiesto" (note the absence of the umlaut over the 'e') and all of those tracks would appear. However, after upgrading to iTunes 9, I'm forced to search for "Tiësto" in order to view those tracks.
    Thinking about this issue, it would seem that this is normal behavior, as an artist named "Pàz" isn't necessarily the same as "Paz". But the reality is that NO ONE wants to take the time to search for the alt-code of a special character, and then type it in. Additionally, most people (including me) have a number of reasons for not renaming the artist to conform to standard characters.
    Clearly, something has changed between iTunes 8 and 9. And it's incredibly frustrating. A big chunk of my library is virtually inaccessible to me now, unless I search for terms that don't contain special characters.
    Apple, **PLEASE** address this issue. I'm not the only one with the problem and it's become a huge pain in my side to have to work around this.

    I don't see how it's sloppy or why there should be any reason that it shouldn't have given you both.
    Even putting aside that Gracenote and others don't have consistent spelling in their databases, and that when you download from the iTunes Store who knows what you're going to get (so one's library could be all over the place without one even noticing), accents and other markings don't change the fact that a u is the same letter as ú.
    Yes, it's pronounced different. Yes, the difference is very important. But it's the same base letter. And even if it's considered a different letter in another language, in English we have 26 letters and all the accents and tildes and umlauts don't make them a new letter, which is what we're talking about here.
    It's a huge pain when I look for an album that has "Zürich" in the field, type in "Zurich", and can't find it. I have to think about whether I really do have that album, maybe I deleted it by accident, etc. And then I think to type "rich" and then it pops up.
    That's way too many steps. I don't know keyboard shortcuts and I don't want to learn them. In English, it should be letters, not characters, that count.

  • ESB or BPEL file adapter and special characters

    Hi,
    We have a scenario where we import rows from .csv file through an ESB project into a database. We use the file adapter for this. There appears to be a problem with special characters (like é). Both in the ESB control (with variable tracking) and in the database, they appear as upside down questionmarks (¿). I've tried doing the same with a BPEL project (file adapter as client PL) and in the BPEL console, I also see strange characters instead of the expected special characters (diamond shaped characters, like ♦ to be precise).
    I can't find anything about character sets or character set conversions in the documentation. What am I missing?
    Regards,
    Arjan

    see
    http://download-west.oracle.com/docs/cd/B31017_01/integrate.1013/b28994/nfb.htm#CIAEFBHH
    I've looked into the properties mentioned. They are set when you go through the wizard. Everything is set to UTF-8, which should provide me with all the special characters I need.
    BPEL does the exact same thing, so I'm starting to believe that the problem really is with the file adapter.
    Regards,
    Arjan

  • Multi languages and Special Characters in PI

    Hi gurus
    Data in different languages will be coming from the source xml file, and PI has to handle that data and send it to the ECC system through the IDoc receiver adapter.
    Our scenario is MDM to R/3. The PI file adapter picks up the xml file from the source directory path.
    The file encoding we have used is "ISO-8859" with file type "Text"; we have also tried "Binary", but we are still facing the issue.
    The special characters which are showing up are Å#ó
    I was trying to look at the blog below, but it is no longer available on SDN.
    http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/9420 [original link is broken]
    Thanks in advance

    > I tried as you suggested, but I do not have access to the RFC destination in the R/3 server.
    I mean the RFC destination in PI, not in R/3.
    > I have a doubt here
    > In the xml file received by PI, the entire data will be in English; only for two particular fields will the data be in foreign languages (Russian, Greek or any other language). The fields are maintained under the International Version segment.
    You have to split the XML, create different IDocs by language, and choose an IDoc receiver channel with the corresponding RFC logon language.
    See SAP note 745030.
    The customer should consider using a Unicode R/3 system if they want to use descriptions in different languages.
    In a non-Unicode system the texts from other languages cannot be read; for example, the Greek characters cannot be displayed when logging on in Russian. In a Unicode system there is no such restriction.

  • Have a problem in Numbers with entering data directly into a cell when a table reaches a certain size and special characters are used?

    I am creating a list of words with special characters in some of them. I get to a point where I cannot enter data directly into the cell. I have to use the data entry bar at the top in the toolbar. Any solutions available?
    This is the table I am creating...
    From row 25 on, I am only able to enter data into the cell through the toolbar at the top and not directly into the cell itself. I believe that this problem originates from the special characters I have inserted in the previous records because if I do not use special characters, then the spreadsheet acts flawlessly. Have there been issues in Numbers with data entry, the size of a table, and using special characters?

    I would try removing the rows with "special" characters one at a time to see which one is causing the problem.

  • File Content Conversion (receiver) and special characters

    Hi all,
    I have a scenario that has a file receiver channel with content conversion. The record structure in the flat file is field-width delimited (hence no field separator) and the parameter 'fieldLengthTooShortHandling' has the value 'Cut' because the receiving system needs only specific widths for the fields. Hence if the field value exceeds the length permitted, the extra characters are clipped.
    I observed that some characters are not handled properly while creating the text file. For example, one of the fields contained a "minus" character (not the hyphen). The flat file was created successfully. I opened the file in Notepad and found that the "minus" character appeared correctly and the column count in that record was as expected. However, when the same file was opened in TextPad, the minus character was displayed as â | | ('a' with caret, bar, bar), so all the fields after this field were shifted ahead by 2 characters and hence the total column count of the record went beyond the actual one.
    All this started due to the error reported by the receiver system which processes the flat file. Due to the shift of characters in the flat file, the processing failed. Moreover, that system cannot process the special characters (like minus or non-Latin accented characters, etc.). So although there is no issue in the XI interface as such, I just want to know if anyone has more information on why the characters are displayed differently as mentioned above.
    Regards,
    Shankar

    Define the data type like:
    order_recordset
      order_row 1..unbounded
        f1
        f2
    Everything else stays the same except the communication channel configuration:
    Message Protocol: select File Content Conversion; below it you get additional parameters.
    There you fill in:
    Document Name : your sender message type
    Document Namespace : your scenario's namespace
    Recordset Name : order_recordset (as mentioned in the data type)
    Recordset Structure : order_row, *
    Then the Name/Value parameters:
    order_recordset.fieldSeparator : 'nl'
    order_row.fieldSeparator : ,
    order_row.endSeparator : 'nl'
    Fill in the above parameter values based on your text file.

  • Dynamic text, system fonts, and special characters...

    This one is boggling my mind, so if someone can help, please do.
    I've got a dynamic text area pulling text from a MySQL database via AMFPHP. The text includes special characters such as accents, umlauts, etc. (multi-language site platform). Most of them work fine, and ALL of them work fine when I'm on a Mac client. However, if I use a Windows client machine in either Firefox or IE6, there are a couple of characters that for whatever reason just don't seem to show up; instead I get the [] box character.
    The only characters I've found that seem to be affected like this are European quote characters like &#146; &#147; and &#148; (character codes 146, 147, 148). I'm using the familiar ampersand-pound-number-semicolon escape sequence for them. And like I said, they all display fine on Mac/Firefox and Mac/Safari. Why are all my other special characters (umlauts, accents, etc.) working fine and just these failing? It's also worth noting that they look fine if I dump them out to a PHP file and pull it up in a browser...
    Please, if anyone can help... I've been bashing my head against this for the better part of the day!!

    Hi,
    did you take a look at the "gateway.php"? There you can define charsets. Maybe a western European charset will help you out, since French uses a lot more accents and such.
    This is what you can find in the gateway.php of amfphp. You just have to set the right one ;)

  • XmlSaveDom unresolved symbol, and special characters in output...

    I have a couple of issues with XmlSaveDom() that I'm hoping someone can help with...
    Firstly, under windows I'm having difficulty locating the library that it lives in. I'm linking to:
    oraxml10.lib (from the 10.1.0.2 XDK, the latest version I can find) and oci.lib (from the 10.2 client)...but getting an unresolved symbol on XMLSaveDom().
    I'm using the XDK because under Windows I only have a client install, which does not appear to have any of the XML headers or libraries in it. I'm using
    10.1.0.2 because it's the latest version I can find to download from technet.
    Secondly, under Solaris I'm just linking to libclntsh.so from 10.1.0.2 server home, and this resolves XMLSaveDom() ok. However when I use XMLSaveDom() I find unusual special characters in the output periodically:
    ÿ¾Ý <SOME_TAG>some_value</SOME_TAG>
    The strange character sequence is always the same.
    Any help with either of these would be greatly appreciated.

    The DOM is not required to escape the characters, so it is correct that you get the literal ampersand characters when you ask the DOM for a getNodeValue().
    When an XML document is serialized -- using, for example, the XMLDocument.print() method -- it is when this external form of the document is produced that escaping occurs.
    You can always call XMLNode.print() to serialize the value of a node and its children into a PrintWriter that wraps a StringWriter to get the string equivalent of the properly escaped values.
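    In code, that suggestion might look like the following sketch (assuming the oracle.xml.parser.v2 Java API; the helper class here is hypothetical):
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.io.StringWriter;
    import oracle.xml.parser.v2.XMLNode;
    public class NodeSerializer {
        // Serialize a node and its children to a String; escaping of
        // characters such as '&' happens during this externalization step.
        public static String serialize(XMLNode node) throws IOException {
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            node.print(pw);
            pw.flush();
            return sw.toString();
        }
    }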

  • XMLParser and Special Characters

    Hi,
    I'm trying to read an XML Document from a stream (e.g. a file) using XMLParser. The document contains German text (i.e. lots of special characters, like the umlauts ä, ö, ü and others).
    If I read this stream into a text string, all these special characters are handled perfectly (i.e. an ä looks like an ä, etc.).
    However, if I import the stream into an XMLParser.Document using ImportDocument, the umlauts seem to be scrambled. If the imported document is exported again to a stream without any changes (using ExportDocument), the umlauts are not displayed correctly anymore.
    Example stream:
    <?xml version="1.0" encoding="iso-8859-1" ?>
    <UserID>Müller</UserID>
    If this stream is imported into an XMLParser.Document and then exported again, it contains a scrambled version of the umlaut:
    <UserID>M��ller</UserID>
    I'm using the correct XML encoding, iso-8859-1, which is for western European languages, and I guess it should not be a Forte locale issue since simple string handling of the stream works fine.
    Thanks for any hints,
    Daniel

    Let's start at the basics. Right now you are quite limited by your database character set as US7ASCII. You need to migrate to something that will support Latin and Greek characters at least. Maybe EL8ISO8859P7, or UTF-8. Please look at documentation Scanner Utility, available for Oracle 8.1.6 and above to make sure migration is safe before doing any import/export. The title of paper is: Database Character Set Migration, at: http://technet.oracle.com/products/oracle8i/listing.htm#nls
    UTF-8 will give you more versatility in the languages that your customer supports now or in the future. There is some performance overhead in using Unicode, but how much depends on the situation. I would base a large part of the Unicode decision on how likely it is that other languages will need to be supported in the future, and on special character support.
    The special characters that your customer would like to support may already exist in Unicode. If they don't, or if you choose another character set, your customer will need to look at the National Language Support Guide, Appendix B, "Customizing Locale Data".
    Are you running Greek Windows? Otherwise, how will you enter Greek characters? If you are using Greek Windows, you probably need to set your client NLS_LANG to EL8MSWIN1253.
    On your Forms questions, you might want to take a look at the following:
    1. Chapter 4 of "Oracle Forms Developer and Reports Developer Release 6i: Guidelines for Building Applications" discusses how to design multilingual applications.
    http://otn.oracle.com/docs/products/forms/doc_index.htm
