Spanish dictionary and problems with Spanish special characters

I need a Spanish dictionary with all Spanish words. I know that your priorities are French, Spanish, German and Italian. However, the component has problems with Spanish special characters; this problem was reported before.
Do you have something ready? A new version?
Thank you for your time.

For the special character issue, you can check the reference:
http://forums.adobe.com/message/2430501

Similar Messages

  • Problem with Icelandic special characters on Mac

    Hello
    I am working on a Flash publication for students, and I want it to run on Mac as well as PC. Everything goes fine, except a problem with three special characters in my language, Icelandic. I am working on a registration and login page where I am using text boxes and text input boxes. Everything looks correct on PC, but on Mac the characters Þ Ð Ý are lost.
    I have tried different fonts etc.
    Any idea what is wrong?
    Jónas Helgason

    Hello Jónas,
    Did you ever figure this out?
    I have a similar problem, except only with two letters (both upper and lower case). These two Icelandic letters can't be entered into a Flex TextInput box in the Flex apps I am creating when they are loaded on a Mac. The letters are known as &Eth;, &eth;, &Thorn; and &thorn; in HTML terminology. Typing these characters on the keyboard results in the following: { [ ? /
    However I can copy the characters in question from some other app like TextEdit and paste them into a TextInput box in my Flex app and all is well, they show up correctly.
    This happens regardless of the Mac browser used and the Flash plugin version used (have tried both 9 and 10) and also happens in the standalone Flash Player application.
    Does anyone have any idea how to fix this, or is this a bug in Flash Player? This is really annoying as it makes text input into Flex apps on Icelandic Macs very difficult.
    There must be something wrong with the mapping of keyboard key codes into character codes on the Mac that is causing this.
    Btw, I just heard from a friend that this problem does not exist in MacOS 10.6.  I am running 10.4 and have tested this on 10.5 and it exists on both of those OS versions.
    Rgds,
    Hordur Thordarson
    Lausn hugbunadur
    http://lausn.is

  • Problems with using special characters in Interactive Report Search

    Hi!
    I am currently developing an application on Application Express 3.1.2.00.02 that includes a page with an Interactive Report, and I am facing the problem that I cannot use special German characters in the search bar.
    So if I try to find a name like 'Schröder', the created filter shows 'Schröder' but I don't get any valid search results. By the way, the rest of the application supports these special characters, for example in buttons or any other page elements.
    Does anyone have a clue how to fix this problem? It's driving me nuts ;)
    Thanks in advance
    Philipp
    Edited by: philipp_m on 10.06.2009 11:15

    Does nobody have a clue how to solve this problem? I tried to find out where the problem occurs. The Ajax request looks like this:
    f01     contains
    f01     Schröder
    f01     15
    p_flow_id     100
    p_flow_step_id     50
    p_instance     3176950818119673
    p_request     APXWGT
    p_widget_action     QUICK_FILTER
    p_widget_action_mod     ADD
    p_widget_mod     ACTION
    p_widget_name     worksheet
    p_widget_num_return     15
    x01     14175446766823030
    x02     14176526259823035
    So I guess it has to be inside the Javascript file (apex_ns_3_1.js). I hope someone can help me.
    Bye
    Philipp
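
    Not part of the original thread, but a minimal Java sketch of the likely mechanism: the filter value is percent-encoded by the browser under one charset and decoded by the server under another, so the 'ö' arrives garbled. The class name is hypothetical.
    import java.net.URLEncoder;

    public class EncodingDemo {
        public static void main(String[] args) throws Exception {
            String name = "Schröder";
            // How the same value looks on the wire under two charsets:
            System.out.println(URLEncoder.encode(name, "UTF-8"));      // Schr%C3%B6der
            System.out.println(URLEncoder.encode(name, "ISO-8859-1")); // Schr%F6der
            // If the server decodes %C3%B6 as ISO-8859-1 (or vice versa),
            // the filter value no longer matches the data in the table.
        }
    }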

  • Problems with Turkish special characters

    Hello!
    We are producing Oracle Help with WebWorks Publisher 2003 Professional for FrameMaker. There are some problems with the Turkish version of our online help: the Turkish special characters (for example the dotless i) aren't displayed correctly in the navigation panes (TOC and index). They are replaced by other characters (for example by a "y").
    Does anyone have an idea how to solve this problem?
    Thank you very much.
    Kind regards,
    Miriam Rassenhofer

    Miriam,
    What encoding are the TOC and index XML files being generated in? You should use UTF-8 for the minimum of problems. I presume WebWorks has an option for this. Other than that, make sure the top of those XML files has the proper XML declaration with the encoding:
    <?xml version="1.0" encoding="UTF-8"?>
    You may want to try opening the XML files in an XML-aware text editor to ensure they look right there (JDeveloper is one such editor).
    If all of that is working, post back and we can talk offline about getting a snippet of one of those XML files for us to look at.
    -brian
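
    Not from the original thread: a minimal Java (StAX) sketch for checking which encoding a generated XML file actually declares in its prolog; "toc.xml" is a hypothetical stand-in for one of the WebWorks output files.
    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamReader;

    public class XmlEncodingCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical file name; point this at the generated TOC or index file.
            FileInputStream in = new FileInputStream("toc.xml");
            XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(in);
            // Encoding from the <?xml ... encoding="..."?> declaration, or null if absent.
            System.out.println(reader.getCharacterEncodingScheme());
            reader.close();
            in.close();
        }
    }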

  • Problem with German special characters in APEX

    Hi,
    we have a problem with all the German special characters in our application.
    APEX version 3.1.0.00.32 is installed on an Oracle database 9.2.0.6.0.
    The nls_characterset of the database is: American_America.WE8ISO8859P1
    We have modified the wdbsvr.app file on our HTTP Server as shown in the installation guide for APEX and have set the nls_lang parameter to American_America.AL32UTF8.
    If I look at the source code of the html-pages of our application, there are already the following settings in the header of every page:
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
    With these settings, the German special characters like ä ö ü are not shown correctly in the browser.
    What can I do so that the pages are shown correctly?
    Thanks for help!

    Hi Petra,
    ok, so my guess was correct. So the solution is to set the charset attribute in the HTML header to "charset=UTF-8". This is actually the way it should be. But I'm wondering why it is not in your case? Are you using a custom template for the page(s) in question where the charset attribute is set to a custom value? The meta tags in the HTML header are usually set by/through the #HEAD# substitution string in the header definition section of the page template, cp. one of the page templates in Shared Components --> Templates. And as far as I know you are not able to change this substitution string, you can only switch the inclusion on/off with the option "Include Standard CSS and JavaScript" in each page definition. (I might be mistaken, though, I'm quite new to APEX...)
    Regards
    Frank

  • MySQL problem with German special characters

    hi,
    I wrote a piece of software and it worked quite well, but after I installed it on a new machine with J2SE 1.4 I have problems with the German special characters.
    This code works fine on the old machine (JDK 1.3.1) and prints the wanted characters like ä, ü, ö:
    Class.forName("org.gjt.mm.mysql.Driver").newInstance();
    java.sql.Connection conn;
    conn = DriverManager.getConnection("jdbc:mysql://localhost/testdb?user=testuser&password=xxxx");
    Statement s = conn.createStatement();
    ResultSet r = s.executeQuery("select something from testtb where id='1'");
    r.first();
    System.out.println( r.getString(1) );
    but on the new machine (j2se 1.4) I only receive the character ?.
    I updated my org.gjt.mm.mysql to the current MySQL Connector/J 3.0.9 and added
    conn = DriverManager.getConnection("jdbc:mysql://localhost/testdb?user=testuser&password=xxxx&useUnicode=true&characterEncoding=ISO-8859-1");
    but I still have the same problem.
    Thanks in advance
    Markus

    with "wanted characters like �,�,�"
    I meant: like &#x00E4;,&#x00FC;,&#x00F6;
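
    Not from the thread, but a minimal sketch for narrowing the problem down: print the Unicode code points of the retrieved string. If they are already 0x3F ('?'), the driver/connection charset mangled the data; if they are correct, only the console output encoding is at fault. The connection URL is the one from the post; characterEncoding=UTF-8 is an assumption and must match the table's actual character set.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CharsetProbe {
        public static void main(String[] args) throws Exception {
            // On older JDK/driver combinations, load the driver first with Class.forName(...) as in the post.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/testdb?user=testuser&password=xxxx"
                    + "&useUnicode=true&characterEncoding=UTF-8"); // assumed encoding
            Statement s = conn.createStatement();
            ResultSet r = s.executeQuery("select something from testtb where id='1'");
            if (r.next()) {
                String value = r.getString(1);
                for (int i = 0; i < value.length(); i++) {
                    // U+003F everywhere means the umlauts were already lost on the driver side
                    System.out.printf("U+%04X ", (int) value.charAt(i));
                }
                System.out.println();
            }
            conn.close();
        }
    }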

  • Problems with transforming special characters

    Hi,
    I develop a small educational application ( http://sourceforge.net/projects/pauker/ ). I work with JDK-1.4.0 on Mandrake Linux 8.2. At first I used serialized objects to save the lessons to a file. This worked well until I wanted to change some public members of the involved classes. That's why I switched over to the new and shiny XML. Now I have a different problem!
    Pauker saves its lessons in gzipped XML files. Users from all over the world can create lessons containing very different characters: European characters such as umlauts and other accented letters, as well as Asian characters. Loading these lessons on a system with a different encoding works fine. Saving such a lesson on a system with a different encoding can destroy the lesson.
    Example:
    On a German system a user creates a lesson with an umlaut character on a card side and saves it. A different user working on an English system loads this lesson. The character is displayed correctly. The English user then saves the lesson, and the character is replaced by a question mark in the XML file. The next time the English user loads the lesson she will not see the original character but "?" on the display.
    Here is a little example program that does the transformation in exactly the way Pauker does. Please test it out.
    import java.io.*;
    import javax.xml.parsers.*;
    import javax.xml.transform.*;
    import javax.xml.transform.dom.*;
    import javax.xml.transform.stream.*;
    import org.w3c.dom.*;

    public class XMLTest {

        public XMLTest() {
            try {
                // create document
                DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder documentBuilder = documentBuilderFactory.newDocumentBuilder();
                Document document = documentBuilder.newDocument();
                // fill document
                Element element = document.createElement("Element");
                document.appendChild(element);
                // stand-in characters; the non-ASCII sample text of the original post was lost
                element.appendChild(document.createTextNode("äöüÄÖÜß"));
                // transform to XML
                TransformerFactory transformerFactory = TransformerFactory.newInstance();
                Transformer transformer = transformerFactory.newTransformer();
                //transformer.setOutputProperty(OutputKeys.ENCODING, "ISO-8859-1");
                transformer.setOutputProperty(OutputKeys.INDENT, "yes");
                transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
                DOMSource source = new DOMSource(document);
                ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                StreamResult result = new StreamResult(outputStream);
                transformer.transform(source, result);
                System.out.println("original: äöüÄÖÜß");
                System.out.println(outputStream.toString());
            } catch (Exception e) {
                e.printStackTrace();
                System.out.println();
            }
        }

        public static void main(String[] args) {
            new XMLTest();
        }
    }
    So what do I have to do to fix this problem? I have a lot of new features waiting in the CVS, but this bug is still open and under discussion. I don't want to release a new version with such a gaping hole in it...
    Thanks a lot!
    If you prefer to reply personally, please use Ronny.Standtke at gmx.de

    I don't see how you can say that there's a problem with saving your XML files when the code you post doesn't actually save it to a file. Your transform is being written to a ByteArrayOutputStream, which isn't used except for this statement, which I assume is for debugging: System.out.println(outputStream.toString()); Of course that is useless for debugging the problem you describe, for two reasons:
    1. the toString() method uses your system's default encoding, which may be ISO-8859-1 but is definitely not UTF-8. You could write toString("UTF-8") but that is a waste of time because:
    2. You use System.out.println() to examine the data, which writes it to a console that also probably does not use UTF-8. I don't know what encoding it does use, but UTF-8 is unlikely.
    So, save your files using the UTF-8 encoding as robadmin suggested. And to test the result, make sure you use a tool that understands the UTF-8 encoding.
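
    A minimal sketch (my own, not from the thread) of saving the transform result to a file with an explicit UTF-8 encoding, so the output no longer depends on the platform's default encoding. The method and file name are hypothetical; gzipping is omitted for brevity.
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;

    public class SaveUtf8 {
        static void save(Document document, String fileName) throws Exception {
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8"); // the key line
            transformer.setOutputProperty(OutputKeys.INDENT, "yes");
            OutputStream out = new FileOutputStream(fileName);
            try {
                // Passing a stream (not a Writer) lets the transformer apply the declared encoding itself.
                transformer.transform(new DOMSource(document), new StreamResult(out));
            } finally {
                out.close();
            }
        }
    }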

  • Problem with Norwegian 'special characters' (æøå) in LaTeX

    Having installed just about every LaTeX package in the repos, I still can't get this to work. This is what I have in my .tex file:
    \documentclass[12pt,norsk,a4paper]{article}
    \usepackage[norsk]{babel}
    \usepackage[latin1]{inputenc}
    \usepackage[T1]{fontenc}
    \begin{document}
    Testing æøå - ÆØÅ
    \end{document}
    Any suggestions?

    What characters are you trying to put into math mode?
    Generally, with regular LaTeX, you can't put special characters in math mode. You need to use the LaTeX symbol commands instead. For example, use \rightarrow and not →. The only exception would be if you were using XeLaTeX rather than regular (pdf)LaTeX, which generally has better Unicode support, especially if you had loaded the unicode-math package: but that's still considered in beta status, I believe.
    You can also escape to text mode inside math with the \text command from the amsmath package, e.g., $\text{ÆØÅ}$ should work.
    Last edited by frabjous (2010-09-17 04:12:07)
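
    A hedged LaTeX sketch tying the two posts together: a common cause of æøå failing is a mismatch between the encoding the .tex file is actually saved in and the inputenc option (utf8 is assumed here; keep latin1 if the file really is Latin-1), and the amsmath \text command covers the math-mode case from the reply.
    \documentclass[12pt,norsk,a4paper]{article}
    \usepackage[norsk]{babel}
    \usepackage[utf8]{inputenc}  % must match the encoding the file is saved in
    \usepackage[T1]{fontenc}
    \usepackage{amsmath}         % provides \text for escaping to text mode inside math
    \begin{document}
    Testing æøå - ÆØÅ            % works in text mode once inputenc matches the file encoding
    $\text{ÆØÅ}$                 % inside math mode, wrap the letters in \text
    $a \rightarrow b$            % use symbol commands rather than literal arrows in math
    \end{document}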

  • Problems with German special characters on DOC export.

    Hello,
    I have a problem when I export a PDF to DOC via the Adobe cloud feature. Everything works fine except for the special German characters. And since I want to use this tool to convert my LaTeX PDF to Word in order to use the spell checker, the entire workflow is broken for me. Do I need to switch my text files to a special encoding for this to work? Can you fix this somehow?
    Cheers
    nenTi

    I use Arial as the font, generated by MiKTeX, so I doubt it's a font problem. It also detects all other characters without a problem; only the German special characters äöüÄÖÜß are a problem for the system.
    Try it yourself:
    "Hier ist mein scheiß überfordertes konvertierungs script und rödel vor sich hin ohne ännähernd befriedigendes Ergebniß."
    You won't have the special characters editable in the doc. They are visible, but you can't edit them because they are converted to something strange.

  • af:inputFile problems with encoding, special characters

    Hi all,
    I searched around the web and in this forum and found this topic:
    af:inputFile encoding file name is not known
    Like the author, I'm getting the same issue with characters like âáàíí and similar ones
    in my language (Brazilian Portuguese).
    Immediately after I choose a file, the name shows a � in place of the characters I mentioned before.
    I changed everything to UTF-8 like my template and jspx file:
    <jsp:directive.page contentType="text/html;charset=UTF-8"/>
    I also changed my web.xml header:
    <?xml version = '1.0' encoding = 'UTF-8'?>
    And everything in JDeveloper is set to UTF-8, including the IDE and compiler preferences.
    Added this lines to weblogic.xml too:
    <charset-params>
    <input-charset>
    <resource-path>/*</resource-path>
    <java-charset-name>UTF-8</java-charset-name>
    </input-charset>
    </charset-params>
    None of this seems to work :(
    Using JDeveloper 11.1.1.3 / Windows XP SP3 Portuguese
    Developing with ADF/EJB3/JPA
    Regards,
    Renan.

    Please check that the compiled pages are UTF-8 encoded. Open the project properties, select 'Compiler' and check that 'Character Encoding' is set to UTF-8. Next select the 'JSP' node under 'Compiler' and check the encoding there.
    Timo
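
    For what it's worth, a minimal Java sketch (not from the thread; the file name is made up) of the mechanism behind the �: the browser sends the file name as UTF-8 bytes, and decoding them with a different charset, or decoding single-byte bytes as UTF-8, produces junk or replacement characters.
    import java.nio.charset.StandardCharsets;

    public class FileNameMangling {
        public static void main(String[] args) {
            String original = "relatório_ação.pdf"; // hypothetical upload name
            byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);

            // UTF-8 bytes decoded as ISO-8859-1: each accented letter becomes two junk characters.
            System.out.println(new String(utf8Bytes, StandardCharsets.ISO_8859_1)); // relatÃ³rio_aÃ§Ã£o.pdf

            // The reverse mistake (ISO-8859-1 bytes decoded as UTF-8) yields the � replacement character.
            byte[] latin1Bytes = original.getBytes(StandardCharsets.ISO_8859_1);
            System.out.println(new String(latin1Bytes, StandardCharsets.UTF_8)); // relat�rio_a��o.pdf
        }
    }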

  • [SOLVED]Gnumeric and problem with non-US characters

    I am trying to get Gnumeric to recognize UTF-8 or ISO-8859-15 characters, but there doesn't even seem to be a locale option in the menus. Googling didn't turn up anything useful.
    I tried this:
    LC_ALL=fr_FR gnumeric &
    but got an error from command line:
    (gnumeric:7559): Gtk-WARNING **: Locale not supported by C library.
            Using the fallback 'C' locale.
    And everything is still in the US format, even how dates are represented.

    Thanks, rdoggsv - with your advice I got the menus in my language in Gnumeric and, as a bonus, the euro sign now works.
    But I still don't have characters like ä, ö, õ, ü in Gnumeric. I get them nicely in KWrite, Konsole, Konqueror, OpenOffice and Opera.
    This missing-characters issue seems to be GNOME-specific. For example, I can't get those characters to show up in GIMP, graveman or galculator. In AbiWord the characters are visible in the editor's text field, but not when, for example, I try to save a file.

  • Problem with reading special characters in unix

    Hi,
    I am trying to read the data from a file with the following code:
    FileInputStream inputFile = new FileInputStream(xx);
    InputStreamReader reader = new InputStreamReader(inputFile);
    BufferedReader bufferedReader = new BufferedReader(reader);
    String s= bufferedReader.readLine();
    One of the lines in the file contains a non-ASCII character.
    It works fine on Windows, but on Unix I get two garbled characters instead of one.
    I tried the InputStreamReader constructor that also accepts a charset name, passing Cp1252, latin1 etc., but I am not able to get around this problem.
    Any ideas, please?

    Getting two characters instead of one suggests to me that your input file is encoded in UTF-8. But that's only a guess; you need to find out for sure. Asking the person who produced the file would be the most reliable way. When you do find out, specify that encoding as the second parameter of the InputStreamReader constructor.
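
    A minimal sketch of the suggested fix, under the assumption that the file turns out to be UTF-8 (the file path is a placeholder):
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.InputStreamReader;

    public class ReadWithCharset {
        public static void main(String[] args) throws Exception {
            FileInputStream inputFile = new FileInputStream("data.txt"); // placeholder path
            // "UTF-8" is an assumption; use whatever encoding the file was actually written in.
            InputStreamReader reader = new InputStreamReader(inputFile, "UTF-8");
            BufferedReader bufferedReader = new BufferedReader(reader);
            String s = bufferedReader.readLine();
            System.out.println(s);
            bufferedReader.close();
        }
    }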

  • UTL_SMTP and problem with French language characters

    Hello,
    I am using utl_smtp to send email.
    Here is part of the proc:
    First try:
    mail_conn := utl_smtp.open_connection(mailhost, 25);
    utl_smtp.helo(mail_conn, mailhost);
    utl_smtp.mail(mail_conn, sender);
    utl_smtp.rcpt(mail_conn, recipient);
    utl_smtp.open_data(mail_conn);
    utl_smtp.write_data(mail_conn, 'MIME-Version: 1.0' ||CHR(13)||
    CHR(10)||'Content-Type: text/plain; charset=WE8ISO8859P1' ||
    CHR(13)||CHR(10)|| 'Content-Transfer-Encoding: 8bit' || CHR(13)||CHR(10) || message);
    utl_smtp.close_data(mail_conn);
    utl_smtp.quit(mail_conn);
    This gives the output:
    .... adresse par element X doit etre ....
    which is wrong. It should be:
    .... adressé par élément X doit être ....
    I then made the following modification:
    mail_conn := utl_smtp.open_connection(mailhost, 25);
    utl_smtp.helo(mail_conn, mailhost);
    utl_smtp.mail(mail_conn, sender);
    utl_smtp.rcpt(mail_conn, recipient);
    utl_smtp.open_data(mail_conn);
    /*This is the modif */
    utl_smtp.write_data(mail_conn, 'MIME-version: 1.0' || utl_tcp.CRLF);
    utl_smtp.write_data(mail_conn, 'Content-Type: text/plain; charset=WE8ISO8859P1'||utl_tcp.CRLF);
    utl_smtp.write_data(mail_conn, 'Content-Transfer-Encoding: 8bit' ||utl_tcp.CRLF);
    utl_smtp.write_raw_data(mail_conn,utl_raw.cast_to_raw(utl_tcp.CRLF || message));
    /* end of modif */
    utl_smtp.close_data(mail_conn);
    utl_smtp.quit(mail_conn);
    This gives the output:
    FROM: [email protected]
    SUBJECT: Demande pour XYZ
    TO: [email protected]
    .... adressé par élément X doit être ....
    which is the intended result, except for the FROM, SUBJECT and TO lines.
    Of course, I can make those disappear by tweaking the modification. However, I want to know why the original procedure is not working even though I followed the utl_smtp specs.
    Note:
    message := 'FROM:'||V_SENDER||CHR(13)||CHR(10)||
               'SUBJECT: Demande pour XYZ'||CHR(13)||CHR(10)||
               'TO:'||V_COURRIEL||CHR(13)||CHR(10)||CHR(13)||CHR(10)||V_CORPS_MESSAGE;
    Database:
    NLS_CHARACTERSET : WE8ISO8859P1
    NLS_LANGUAGE : FRENCH
    NLS_NCHAR_CHARACTERSET : AL16UTF16
    Database version: Oracle9i Release 9.0.1.3.0
    Thanks

    Hello,
    Issue was resolved.
    ==============================================
    PROCEDURE Send_Mail (sender IN VARCHAR2,recipient IN VARCHAR2,message IN VARCHAR2, myheader IN varchar2) IS
    mailhost VARCHAR2(50) :='mail.mystmpexample.com'; -- get host name;
    mail_conn utl_smtp.connection ;
    BEGIN
    mail_conn := utl_smtp.open_connection(mailhost, 25);
    utl_smtp.helo(mail_conn, mailhost);
    utl_smtp.mail(mail_conn, sender);
    utl_smtp.rcpt(mail_conn, recipient);
    utl_smtp.open_data(mail_conn);
    utl_smtp.write_data(mail_conn, myheader || CHR(13)||CHR(10));
    utl_smtp.write_data(mail_conn, 'MIME-version: 1.0' || CHR(13)||CHR(10));
    utl_smtp.write_data(mail_conn, 'Content-Type: text/plain; charset=WE8ISO8859P1'|| CHR(13)||CHR(10));
    utl_smtp.write_data(mail_conn, 'Content-Transfer-Encoding: 8bit' || CHR(13)||CHR(10));
    utl_smtp.write_data(mail_conn, CHR(13)||CHR(10));
    utl_smtp.write_raw_data(mail_conn,utl_raw.cast_to_raw(message));
    utl_smtp.close_data(mail_conn);
    utl_smtp.quit(mail_conn);
    EXCEPTION
    WHEN OTHERS THEN
    utl_smtp.quit(mail_conn);
    RAISE_APPLICATION_ERROR(-20000,'Impossible d''envoyer le courrier du a l''erreur suivante : ' || SQLERRM);
    END; -- procedure Send_Mail
    ==============================================
    Please note that the line:
    utl_smtp.write_data(mail_conn, CHR(13)||CHR(10));
    is very important since it separates the header from the message itself. This allows the message body to come out on its own in the email.
    The variable myheader contains the From, To and Subject lines, and each needs to be followed by CHR(13)||CHR(10).
    Thanks

  • ### Problem in retrieving special characters with Oracle 9i JDBC drivers

    hi,
    We are having a problem retrieving special characters (accented letters) from the database.
    Our application is using JDK1.3.1 with Oracle 9i at the back end(Version: 9.0.1.0.0). We are using oracle 9i thin drivers (classes12.zip) for database interaction.
    To retrieve the data from the database we are using PreparedStatement in two ways:
    1. Creating a PreparedStatement from the Connection object without any extra parameters and then retrieving the
    data using it. This returns the results in the correct format, i.e. the special characters come through intact.
    2. Create the preparedstatement by passing the following parameters.
    i) ResultSet.TYPE_SCROLL_INSENSITIVE
    ii) ResultSet.CONCUR_READ_ONLY
    In this case we are not able to retrieve the special character correctly. Instead, the ResultSet
    returns 'h'.
    I think this is a problem with the Oracle drivers. Does anyone have any information about this issue?
    rgds

    I don't know exactly (because I am using JDK 1.4 with ojdbc14.jar where these problems seem to be rare...) but you may consider this:
    1. Add nls_charset12.zip to your classpath to ensure that the encoders are present (may or may not help)
    2. Switch to JDK 1.4, and do this:
    Instead of String s = getString(column)
    use
    byte[] bytes = getBytes(column);
    ByteBuffer bb = ByteBuffer.wrap(bytes);                   // java.nio.ByteBuffer
    CharBuffer cb = Charset.forName("ISO-8859-x").decode(bb); // java.nio.charset; substitute the actual ISO-8859 variant for "x"
    String s = cb.toString();
    The latter method allows you to perform the encoding/decoding manually.
    3. Change the character encoding in the database to unicode upon database setup.
    4. Try playing with NLS parameters (alter session ...)

  • Firefox doesn't reconvert special characters in the file name when downloading a file with special characters in its name

    Locking duplicate thread.
    Please continue here: /questions/815207
    If I try to download a file with any special characters in the file name (e.g. File_Name.pdf), Firefox doesn't convert them back after the "sanitize URL" step and downloads the file with an incorrect name (e.g. File%5FName.pdf).
    This is really annoying.
    Thank you for your patience.

    Start Firefox in [[Safe Mode]] to check if one of the extensions is causing the problem (switch to the DEFAULT theme: Firefox (Tools) > Add-ons > Appearance/Themes).
    * Don't make any changes on the Safe mode start window.
    * https://support.mozilla.com/kb/Safe+Mode
    * [[Troubleshooting extensions and themes]]
