No Umlauts returned

Hi all,
Querying a table with ctx_doc.snippet doesn't return the German umlauts, so the results are not displayed correctly.
The datastore is a multi-column datastore; the current lexer is ctxsys.default_lexer.
The target of the query is a CLOB column that contains mostly HTML documents.
Is there a way to keep the umlauts from being hidden?
regards
Franz Langner

Hello Barbara,
please excuse the late reply,
I've overlooked this thread several times.
Here's a snippet of the output from ctx_report.create_index_script:
begin
ctx_ddl.create_preference('"IDX_DOCS_VOLLTEXT_DST"','DIRECT_DATASTORE');
end;
/
begin
ctx_ddl.create_preference('"IDX_DOCS_VOLLTEXT_FIL"','AUTO_FILTER');
end;
/
begin
ctx_ddl.create_section_group('"IDX_DOCS_VOLLTEXT_SGP"','AUTO_SECTION_GROUP');
end;
/
begin
ctx_ddl.create_preference('"IDX_DOCS_VOLLTEXT_LEX"','BASIC_LEXER');
ctx_ddl.set_attribute('"IDX_DOCS_VOLLTEXT_LEX"','MIXED_CASE','NO');
end;
/
begin
ctx_ddl.create_preference('"IDX_DOCS_VOLLTEXT_WDL"','BASIC_WORDLIST');
ctx_ddl.set_attribute('"IDX_DOCS_VOLLTEXT_WDL"','STEMMER','GERMAN');
ctx_ddl.set_attribute('"IDX_DOCS_VOLLTEXT_WDL"','FUZZY_MATCH','GERMAN');
end;
/
begin
ctx_ddl.create_stoplist('"IDX_DOCS_VOLLTEXT_SPL"','BASIC_STOPLIST');
...
I've compared this with another index and found that some lexer attributes are different there:
ctx_ddl.set_attribute('"IDX_ZVVDATEN_LEX"','COMPOSITE','GERMAN');
ctx_ddl.set_attribute('"IDX_ZVVDATEN_LEX"','MIXED_CASE','YES');
ctx_ddl.set_attribute('"IDX_ZVVDATEN_LEX"','ALTERNATE_SPELLING','GERMAN');
end;
/
I'll try changing the malfunctioning index to match.
Thanks for your hint,
regards
Franz

Similar Messages

  • German Umlauts OK in Test Environment, Question Marks (??) in production

    Hi Sun Forums,
I have a simple Java application that uses a JFrame for a window and a JTextArea for console output. While running my application in test mode (that is, run locally within the Eclipse development environment), the software properly handles all German umlauts in the JTextArea (I also use Log4J to write the same output to a file -- that too is OK). In fact, the application is flawless from this perspective.
However, when I deploy the application to multiple environments, the umlauts are displayed as ??. Deployment is destined for Mac OS X (10.4/10.5) and Windows-based computers (XP, Vista), with a requirement of Java 1.5 at minimum.
On the test computer (Mac OS X 10.5), the test environment is OK, but when running the application as a runnable jar, German umlauts become question marks (??). I use Jar Bundler on the Mac to produce an application object, and Launch4J to build a Windows executable.
I am setting the default encoding to UTF-8 at the start of my app. Other international characters are handled OK after deployment (e and a with accents). The failures seem to be limited to German umlaut characters.
I have encoded my source files as UTF-8 in Eclipse. I am having a hard time understanding what the root cause is. I suspect it is the default encoding on the computer the software is running on. If this is true, then how do I force the application to honor German umlauts?
    Thanks very much,
    Ryan Allaby
    RA-CC.COM
    J2EE/Java Developer
    Edited by: RyanAllaby on Jul 10, 2009 2:50 PM

So you start with a string called "input"; where did that come from? As far as we know, it could already have been corrupted.
ByteBuffer inputBuffer = ByteBuffer.wrap( input.getBytes() );
Here you convert the string to a byte array using the default encoding. You say you've set the default to UTF-8, but how do you know it worked on the customer's machine? When we advise you not to rely on the default encoding, we don't mean you should override that system property; we mean you should always specify the encoding in your code. There's a getBytes() method that lets you do that.
CharBuffer data = utf8charset.decode( inputBuffer );
Now you decode the byte[] that you think is UTF-8, as UTF-8. If getBytes() did in fact encode the string as UTF-8, this is a wash; you just wasted a lot of time and ended up with the exact same string you started with. On the other hand, if getBytes() used something other than UTF-8, you've just created a load of garbage.
ByteBuffer outputBuffer = iso88591charset.encode( data );
Next you create yet another byte array, this time using the ISO-8859-1 encoding. If the string was valid to begin with, and the previous steps didn't corrupt it, there could be characters in it that can't be encoded in ISO-8859-1. Those characters will be lost.
    byte[] outputData = outputBuffer.array();
return new String( outputData );
Finally, you decode the byte[] once more, this time using the default encoding. As with getBytes(), there's a String constructor that lets you specify the encoding, but it doesn't really matter. For the previous steps to have worked, the default had to be UTF-8. That means you have a byte[] that's encoded as ISO-8859-1 and you're decoding it as UTF-8. What's wrong with this picture?
    This whole sequence makes no sense anyway; at best, it's a huge waste of clock cycles. It looks like you're trying to change the encoding of the string, which is impossible. No matter what platform it runs on, Java always uses the same encoding for strings. That encoding is UTF-16, but you don't really need to know that. You should only have to deal with character encodings when your app communicates with something outside itself, like a network or a file system.
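For reference, this is roughly what "always specify the encoding" looks like in code (a minimal, illustrative sketch; the class name and sample string are mine, not from the original post):
    public class CharsetDemo {
        public static void main(String[] args) throws java.io.UnsupportedEncodingException {
            String input = "Grüße";                    // text containing umlauts
            // Encode: name the charset explicitly instead of relying on the platform default.
            byte[] utf8Bytes = input.getBytes("UTF-8");
            // Decode: name the charset again; this round-trips losslessly.
            String decoded = new String(utf8Bytes, "UTF-8");
            System.out.println(decoded.equals(input)); // true
        }
    }
The same rule applies to InputStreamReader and OutputStreamWriter, which also accept an explicit charset.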
    What's the real problem you're trying to solve?

  • German special Characters ( umlaut) not displaying in WebI

    Hi There,
I'm working on BO XI R2 SP3 installed on Solaris.
I have problems with special German characters (umlauts) like ü, ö, ä. They are not displayed correctly in WebI; they are displayed following this logic: ü = u, ö = o, ä = a.
The problem is that when I use a string containing one of these in the filter conditions, the generated query returns empty results because it does not match the data in the database.
Is there any way to display umlauts in the report results and in the LOV as they are in the database?
    Please any suggestion is more than welcomed.
    Pierluca

    Hi,
    Could you please follow the below steps:
1>     Check whether the international language support pack is installed on the BO server. Check whether Arialuni.ttf is present in the Fonts folder of the Windows directory on the BO server.
2>     Specify the Arialuni.ttf file name in fontaliases.xml under <bo installation>\fonts\
    3>     Connectivity modification
    Oracle 9
    a> Modify oracle.sbo file under
    Windows
“\BOInstalledfolder\dataAccess\RDBMS\connectionServer\oracle”
    Under the corresponding target database engine, add the Unicode parameter with the UTF8 value as specified below;
            <DataBase Active="Yes" Name="Oracle 9">
              <Parameter Name="Library">dbd_oci9</Parameter>
              <Parameter Name="Unicode">UTF8</Parameter>
            </DataBase>
    You can do the modification under the DEFAULT section. This applies for all target databases.
    b>     Windows: Modify NLS_LANG setting in Registry
    Under Oracle/HOME0 folder, you can find NLS_LANG definition
    Default setting is(example in UK English):
     ENGLISH_UNITED KINGDOM.WE8ISO8859P15
    Changed to:
         ENGLISH_UNITED KINGDOM.UTF8
    MS SQL Server 2000
    a>     Modify odbc.sbo file under
“\BOInstalledfolder\dataAccess\RDBMS\connectionServer\odbc”
    Under the corresponding target database engine,
•     add the Unicode parameter with the UCS2 value as specified below;
•     check that the Library parameter is set to the correct Unicode library name (see the Connection Server release notes for more information).
        <DataBase Active="Yes" Name="MS SQL Server 2000">
              <Parameter Name="Family">Microsoft</Parameter>
              <Parameter Name="Version">rdbms_mssqlserverodbc.txt</Parameter>
              <Parameter Name="SQL External File">sqlsrv</Parameter>
              <Parameter Name="SQL Parameter File">sqlsrv</Parameter>
              <Parameter Name="Array Bind Available">True</Parameter>
              <Parameter Name="Library">dbd_wmssql</Parameter>
              <Parameter Name="Unicode">UCS2</Parameter>
              <Parameter Name="Driver Level">31</Parameter>
                </DataBase>
    You can do the modification under the DEFAULT section. This applies for all target databases.
    DB2 UDB
    a>     Modify db2.sbo file under
“\BOInstalledfolder\dataAccess\RDBMS\connectionServer\db2”
    Under the corresponding target database engine, add the Unicode parameter with the UTF8 value as specified below;
            <DataBase Active="Yes" Name="DB2 UDB v8">
              <Parameter Name="Binary Slice Size">30000</Parameter>
              <Parameter Name="Max Rows Available">True</Parameter>
              <Parameter Name="Unicode">UTF8</Parameter>
               </DataBase>
    You can do the modification under the DEFAULT section. This applies for all target databases.
    b>  Define the Environment Variable DB2CODEPAGE with the value 1208.
    Teradata V2R5
    a>  Modify teradata.sbo file under
“\BOInstalledfolder\dataAccess\RDBMS\connectionServer\teradata”
    Under the corresponding target database engine, add the Unicode parameter with the UTF8 value as specified below;
            <DataBase Active="Yes" Name="Teradata V2 R5">
              <Parameter Name="Unicode">UTF8</Parameter>
            </DataBase>
    You can do the modification under the DEFAULT section. This applies for all target databases.
4> Universe parameters: set UNICODE_STRINGS to Yes
    Please let me know if this works for you.
    Thanks,
    Madhu.

  • Loading a CSV file with Umlaut characters (àáä)

Hi,
We are uploading a CSV file through a custom JSP page built on the Oracle JTF framework.
The JSP page is loading the data into the FND_LOBS table using a JTF object, oracle.apps.jtf.amv.ServletUploader.
The CSV file is stored properly in the FND_LOBS table, with the umlaut characters intact.
Now the JSP page invokes a Java object to read and parse the data. We first select the data into a BLOB object and then use an InputStreamReader to get the data.
    Here is the sample code:
    oraclepreparedstatement = (OraclePreparedStatement)oracleconnection.prepareStatement(" SELECT FILE_DATA FROM FND_LOBS WHERE FILE_ID = :1 ");
    oraclepreparedstatement.defineColumnType(1, 2004);
    oraclepreparedstatement.setLong(1, <file id>);
    oracleresultset = (OracleResultSet)oraclepreparedstatement.executeQuery();
    blob = (BLOB)oracleresultset.getObject(1);
    InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream());
lineReader = new LineNumberReader(inputstreamreader);
    lCSVLine = lineReader.readLine();
I tried printing the character set used by the InputStreamReader and it returned ASCII.
I then tried setting different character sets to read the umlaut characters (German chars), but nothing has worked.
    InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream(),"UTF-8");
    Can someone please let me know where and how to set the Character Set to accept the Umlaut characters like àáä?
    Thanks,
    Anji

    Thank you for the quick response.
    Requirement:
I need to retrieve the BLOB object with umlaut characters from the database, parse the data into strings using a comma delimiter, and store them in the database.
I am viewing the umlaut data from the database table using the TOAD utility tool.
I tried the same code example provided above, but it is not working as expected. The umlaut characters are translated to 'ýýý'.
    CODE EXAMPLE:
    Input:
    test_umlaut (sno NUMBER, col1 VARCHAR2(100), col3 BLOB);
insert into test_umlaut(sno,col3) values(200, utl_raw.cast_to_raw('äöüÄÖÜ'));
    Note: Verified that the database is showing the umlaut characters on selecting the col3 and storing in a flat file
    --- code
OraclePreparedStatement oraclepreparedstatement10 = null;
OracleResultSet rs = null;
oraclepreparedstatement10 = (OraclePreparedStatement)oracleconnection.prepareStatement(" SELECT col3 FROM test_umlaut WHERE sno = 200 ");
oraclepreparedstatement10.defineColumnType(1, 2004);
rs = (OracleResultSet)oraclepreparedstatement10.executeQuery();
while (rs.next()) {
    BLOB b = (BLOB)rs.getObject(1);
    InputStream is = b.getBinaryStream();
    InputStreamReader r = new InputStreamReader(is, "UTF-8");
    BufferedReader br = new BufferedReader(r);
    String line;
    while ((line = br.readLine()) != null) {
        System.out.println(line);
        OraclePreparedStatement oraclepreparedstatement12 = null;
        OracleResultSet oracleresultset12 = null;
        oraclepreparedstatement12 = (OraclePreparedStatement)oracleconnection.prepareStatement(" INSERT INTO test_umlaut(sno,col1) VALUES (300,?) ");
        oraclepreparedstatement12.setString(1, line);
        oraclepreparedstatement12.executeUpdate();
    }
    br.close();
    r.close();
    is.close();
}
    Output: Verified the output from the database table which is inserted in the loop above.
    select col1 from test_umlaut where sno=300
    ýýý
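One thing worth checking here (an assumption on my part, since the thread doesn't show the database character set): utl_raw.cast_to_raw stores the bytes in the database character set, not necessarily UTF-8, so the reader has to be created with the encoding the bytes are actually in. A hedged sketch, assuming the database character set is WE8MSWIN1252 (called "windows-1252" in Java):
    // Illustrative only: the charset name must match the character set that
    // actually produced the BLOB bytes (assumed here to be WE8MSWIN1252).
    static String readBlobText(java.io.InputStream blobStream) throws java.io.IOException {
        java.io.BufferedReader br = new java.io.BufferedReader(
                new java.io.InputStreamReader(blobStream, "windows-1252"));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = br.readLine()) != null) {
            sb.append(line).append('\n');
        }
        br.close();
        return sb.toString();
    }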

  • Problems with German umlaut in OCI query

    Hello,
I have some problems getting results when I search for data containing a German umlaut like "ä", "ö" or "ü".
In SQL*Plus the following query returns one row:
    SQL> SELECT  SYSBE.ID AS SYSBE_ID
      2          , SYSBE.NACHNAME AS SYSBE_NACHNAME
      3          , SYSBE.VORNAME AS SYSBE_VORNAME
      4       FROM  SYS_BENUTZER SYSBE
      5          LEFT JOIN SYS_ABTEILUNGEN SYSAB ON SYSAB.ID = SYSBE.ABTEILUNG_ID
      6        WHERE  SYSBE.STATUS = 'aktiv'
      7           AND ((SYSAB.KUERZEL <> 'SYS') OR (SYSAB.KUERZEL IS NULL))
      8           AND SYSBE.ID NOT IN (SELECT PMPD.MITARBEITER_ID
      9                 FROM  PM_PROJEKT_MITARBEITER PMPD
    10                 WHERE  PMPD.PROJEKT_ID = 26 AND
    11                   PMPD.MITARBEITER_ID = SYSBE.ID)
    12           AND (REGEXP_LIKE(SYSBE.NACHNAME, 'hö', 'i')
    13              OR REGEXP_LIKE(SYSBE.VORNAME, 'hö', 'i'))
    14        ORDER BY SYSBE.NACHNAME, SYSBE.VORNAME;
      SYSBE_ID SYSBE_NACHNAME
    SYSBE_VORNAME
            52 Höfling
Alexander
If I execute this from PHP via the oci8.dll, no rows are returned. Yesterday the DBA helped me to trace the query and it looks exactly the same as above.
    Can anyone help?
    Regards,
    Stefan

    Are your NLS environment variables the same on both systems? Were they set prior to starting up Apache/IIS?

  • Converting to BLOB in AL32UTF8 destroys German Umlaut characters

    Test Case.
Two databases:
    (A) NLS_CHARACTERSET= WE8MSWIN1252
    (B) AL32UTF8
    I have used the following function to convert CLOB data to BLOB
    create or replace function clob_to_blob (p_clob_in in clob)
    return blob
    is
    v_blob blob;
    v_offset integer;
    v_buffer_varchar varchar2(32000);
    v_buffer_raw raw(32000);
    v_buffer_size binary_integer := 32000;
    begin
      if p_clob_in is null then
        return null;
      end if;
      DBMS_LOB.CREATETEMPORARY(v_blob, TRUE);
      v_offset := 1;
      FOR i IN 1..CEIL(DBMS_LOB.GETLENGTH(p_clob_in) / v_buffer_size)
      loop
        dbms_lob.read(p_clob_in, v_buffer_size, v_offset, v_buffer_varchar);
        v_buffer_raw := utl_raw.cast_to_raw(v_buffer_varchar);
        dbms_lob.writeappend(v_blob, utl_raw.length(v_buffer_raw), v_buffer_raw);
        v_offset := v_offset + v_buffer_size;
      end loop;
      return v_blob;
end clob_to_blob;
If I input ÄÖÜ to the function, in WE8MSWIN1252 the returned BLOB looks OK, but in AL32UTF8 the BLOB's characters come out like ÄÖÜ.
Now if I save the values (CLOB and BLOB) in both cases, the CLOB is stored without problems, but the BLOB is saved incorrectly only in the AL32UTF8 environment.
The only difference between the two databases is the character set.
Is this a known limitation, or am I doing something wrong?
    PS. I have seen similar behaviour when using dbms_lob.converttoblob
    thank you

    When describing the problem you missed one very important point: how do you look at the content of the BLOB to tell if it is OK or not? Remember that observation usually influences the experiment results ;-)
    The sequence like ÄÖÜ looks like AL32UTF8 encoding of umlauts viewed with a WE8MSWIN1252 viewer. If you look with a viewer that expects WE8MSWIN1252 at a BLOB that contains AL32UTF8 text (which is the case with an AL32UTF8 database), then this is what you should expect.
    -- Sergiusz
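The effect Sergiusz describes is easy to reproduce in a few lines of Java (illustrative only; the exact garbage characters depend on the viewer's code page):
    public class MojibakeDemo {
        public static void main(String[] args) throws Exception {
            String umlauts = "ÄÖÜ";
            // Encode as UTF-8: each umlaut becomes a two-byte sequence.
            byte[] utf8Bytes = umlauts.getBytes("UTF-8");
            // Decode those same bytes as windows-1252, which is what a WE8MSWIN1252
            // viewer effectively does when it displays AL32UTF8 data from a BLOB.
            String viewedWrongly = new String(utf8Bytes, "windows-1252");
            System.out.println(viewedWrongly); // two characters per umlaut, e.g. Ã followed by another symbol
        }
    }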

  • German umlaute and file download

    hi to all!
    hope someone can help me.
I have a directory, let's say test, with a WEB-INF directory in it. An ordinary web app...
Within "test" there is a file called glühwürmchen.pdf. I want to download this file. When I use a link
Glühwürmchen, a 404 status code is returned. Tomcat 4.1.18 doesn't find the resource, but it is DEFINITELY in there.
I know that this is no bug, I just don't understand something. How can I tell Tomcat 4.1.18 to return files with German umlauts???
    please help...
    ciaou,
    wendigo

Sorry, that's not the solution to the problem. I have proved that my file is in the correct directory. It's NOT a context path problem. Other files are downloaded...
    test.txt
glühwürmchen.pdf
    gluehwuermchen.pdf
    The JSP Source Code.....
    <%@page contentType="text/html;charset=ISO-8859-1"%>
    <html>
    <head><title>JSP Page</title></head>
    <body>
    Textdatei
    Gluehwuermchen.pdf Datei</br>
Glühwürmchen.pdf Datei</br>
    </body>
    </html>
When I click the first or second link, the files are transmitted, but the third file is NOT... The filename is written correctly. Apache Tomcat 4.1.18 just doesn't find the file. REMEMBER!!! The web server FINDS the two other files in THE SAME DIRECTORY, BUT NOT the one containing German umlauts. WHY????
    hope you can help me...
    wendigo

  • Umlaut {or dieresis} for the letter "n"

How do you get that on an OS X Yosemite keyboard?  And for the letter "n"?

Well found - another of Apple's cleverly hidden facilities. Maddeningly, there seems to be no way to bring this up except by locating the letter from elsewhere. So: copy this -

    and paste it into the search field in Character Viewer. Hit return and the result will be n ¨ . Select the ¨ and click 'Add to Favorites'.
    Now you can go to Favorites in the sidebar to show the umlaut. Then hit the n key followed by double-clicking the umlaut:  n̈
Note that the Favorite only appears in the program in which you set it (sigh) - if you want to use this in another program you will have to go through the entire procedure again. At least once it's done you don't need to start by finding the character on the Wikipedia page.
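Under the hood, "n̈" is just the letter n followed by U+0308 COMBINING DIAERESIS, which is why it has to be assembled rather than typed with a single key. A tiny Java illustration (mine, not from the thread):
    public class CombiningDiaeresis {
        public static void main(String[] args) {
            // "n" followed by U+0308 COMBINING DIAERESIS renders as n with a diaeresis.
            System.out.println("n\u0308");
        }
    }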

  • German Umlaute and SQLite3?

I'm struggling with SQLite 3 and German umlauts (ä, ö, ü, etc.).
It seems as if I cannot do WHERE x LIKE y clauses with German umlauts recognized. Here's a code excerpt:
    const char *sql = "SELECT zip, city, number, mobile, active, remarks FROM rawData WHERE city LIKE ?";
if (sqlite3_prepare_v2(database, sql, -1, &searchCityStatement, NULL) != SQLITE_OK) {
    NSAssert1(0, @"Error: Failed to prepare statement with message '%s'.", sqlite3_errmsg(database));
}
sqlite3_bind_text(searchCityStatement, 1, [c UTF8String], [c length], SQLITE_TRANSIENT);
So the idea here is that I search for rows WHERE city LIKE <parameter c>. It works fine without umlauts; e.g. rows like Berlin, London, etc. are returned. However, when I search e.g. for Lünen, it does not work. I believe that [c UTF8String] strips out the umlauts, but just using c does not work, either.
    Any help?

    SQLite does not properly handle accented characters. From the SQLite documentation:
    The LIKE operator is not case sensitive and will match upper case characters on one side against lower case characters on the other. (A bug: SQLite only understands upper/lower case for 7-bit Latin characters. Hence the LIKE operator is case sensitive for 8-bit iso8859 characters or UTF-8 characters. For example, the expression 'a' LIKE 'A' is TRUE but 'æ' LIKE 'Æ' is FALSE.).
    Although it does not explicitly state it, 'Lünen' LIKE 'Lunen' is FALSE.

  • Desktop.open() fails when file contains German umlauts

    Hello,
    I want to open a file using the Desktop-API:
// check whether opening a file is supported or not
if (!desk.isSupported(Desktop.Action.OPEN)) {
  // display error message box
  JOptionPane.showMessageDialog(getFrame(), getResourceMap().getString("errLinkUnsopportedMsg"),
         getResourceMap().getString("errLinkUnsopportedTitle"), JOptionPane.ERROR_MESSAGE);
  return;
}
desk.open(linkfile);
I'm catching all possible exceptions here:
             catch (IOException e)
            catch (IllegalArgumentException e)
            catch (SecurityException e)
            catch (UnsupportedOperationException e)
            catch (URISyntaxException e)
But when I want to open a file that contains German umlauts, my application throws the following exception:
    Exception in thread "AWT-EventQueue-0" java.lang.RuntimeException:
    Non-Java exception raised, not handled!
    (Original problem: *** -[NSCFArray initWithObjects:count:]:
    attempt to insert nil object at objects[0])
    at apple.awt.CDesktopPeer._lsOpen(Native Method)
    at apple.awt.CDesktopPeer.lsOpen(CDesktopPeer.java:53)
    at apple.awt.CDesktopPeer.open(CDesktopPeer.java:33)
    at java.awt.Desktop.open(Desktop.java:254)
    at zettelkasten.ZettelkastenView.eventHyperlinkActivated(ZettelkastenView.java:9609)
    at zettelkasten.ZettelkastenView.access$6000(ZettelkastenView.java:119)
at zettelkasten.ZettelkastenView$14.hyperlinkUpdate(ZettelkastenView.java:7014)
The file definitely exists. I chose it with a file chooser, and I debugged the source step by step. Before calling the desktop.open() command, I check whether the file exists or not (File.exists()).
When I open any file without umlauts, everything is fine. Only files with umlauts in their filename seem to cause trouble.
    My OS is:
    Mac OS X 10.5.5, running the latest Java 6, using NetBeans 6.5
    Is there any solution, or at least a workaround?
    Thanks in advance!
    Daniel
    Edited by: DnlLdck on Jan 29, 2009 6:23 AM

The same thing happened to me, but I was trying to browse a URI that contains non-English characters. I haven't tried it on Windows yet. Maybe it does not support non-English characters on the Mac.

  • UTF 8 + umlauts

    hi!
I'm new to Java and have to handle German umlauts as input params, but I don't really know how to get them into UTF-8 - can anybody help me and post some code?
And is it possible to filter '\n' out of incoming text?
    thanks
    michael

Solution for replacing the string '\n' with the char '\10':
private String replaceNewLine(String Ps_message) {
    String Fs_message;
    StringBuffer F_stringBuffer = new StringBuffer();
    int i = 0, j = 0;
    int Fi_lengthOldNewLine = OLDPRAEFIX_NEWLINE.length();
    while (j > -1) {
        j = Ps_message.indexOf(OLDPRAEFIX_NEWLINE, i);
        if (j > -1) {
            F_stringBuffer.append(Ps_message.substring(i, j));
            F_stringBuffer.append(NEWLINE);
            i = j + Fi_lengthOldNewLine;
        }
    }
    F_stringBuffer.append(Ps_message.substring(i, Ps_message.length()));
    Fs_message = F_stringBuffer.toString();
    return Fs_message;
}
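For what it's worth, on Java 5 and later the same substitution can be written in one call to java.lang.String.replace. A sketch that assumes the same OLDPRAEFIX_NEWLINE and NEWLINE constants as the snippet above (both assumed to be Strings):
    // Equivalent to the loop above, using String.replace(CharSequence, CharSequence).
    private String replaceNewLine(String Ps_message) {
        return Ps_message.replace(OLDPRAEFIX_NEWLINE, NEWLINE);
    }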

  • ZipEntry with umlaut

The following code crashes if the zip file contains a file with an umlaut character in its name.
    ZipEntry zipEntry = (ZipEntry) enum.nextElement();
    InputStream instrm = zipFile.getInputStream(zipEntry);
File unzippedFile = new File(file.getParent(), zipEntry.getName()); // getName() returns "?" in place of the umlaut
    FileOutputStream outstrm = new FileOutputStream(unzippedFile);
    int bytesRead = instrm.read(buf, 0, 1024);//Null pointer exception...
Does somebody have an idea how to avoid this?
The problem only occurs with ZipEntry. Normally I can work with files that contain umlauts in their names.
    Thanks!!!

Sorry, I know it's not much help...
It's a problem with the JDK; look here:
    http://developer.java.sun.com/developer/bugParade/bugs/4092784.html
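As a side note (not from the original thread, and only applicable on Java 7 or later): java.util.zip.ZipFile can be opened with an explicit charset for the entry names, which avoids the "?" substitution when that charset matches the one the archive was created with. A sketch with an assumed file name and charset (many zip tools store entry names in Cp437 or Cp850):
    import java.io.File;
    import java.nio.charset.Charset;
    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class ZipUmlautDemo {
        public static void main(String[] args) throws Exception {
            // Charset of the entry names; Cp437 is a common choice for zips created on Windows.
            ZipFile zipFile = new ZipFile(new File("archive.zip"), Charset.forName("Cp437"));
            Enumeration<? extends ZipEntry> entries = zipFile.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                System.out.println(entry.getName()); // umlauts survive if the charset matches
            }
            zipFile.close();
        }
    }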

  • Cannot renew code signing certificate - maybe bug with german Umlaut?

    Hello!
For a month now I have been getting a message that I should renew my code signing certificate, and today I thought it was time to stop this message.
Because I could not find anything about renewing the certificate in Mountain Lion, I used the KB article that describes the process for Lion.
http://support.apple.com/kb/HT5358
After that I entered this in my terminal:
    sudo /Applications/Server.app/Contents/ServerRoot/usr/sbin/certadmin --recreate-CA-signed-certificate 'myserver.domain.de Signierungszertifikate für Code' 'IntermediateCA_MYSERVER.DOMAIN.DE_1' 7D3E2458
    when I press return I get this:
    /Applications/Server.app/Contents/ServerRoot/usr/sbin/certadmin Cannot find the certificate: myserver.domain.de Signierungszertifikate für Code
I checked it again and again - I cannot find any typo or anything like that - so maybe Mountain Lion wants to renew the certificate in a different way, or certadmin cannot cope with German "Umlaute" ("für", in English "for") - but I did not give it this name; it was given by the system when I set up the server a year ago.
    Every hint is welcome, bye
    Christoph

    I am stupid - I read the KB article again and there it says
    "When entering the hexadecimal serial number, ensure that all letters are entered in lower case."
    I retyped the command with lower case hex numbers and everything was fine
    Bye,
    Christoph

  • JSR-75 on PalmOS - umlaut in category names

Does anyone have a suggestion for how to handle categories with umlauts in JSR-75 on PalmOS (Garnet)?
    Detailed description of the problem:
    I get the list of the categories defined for a PIMList (ContactList). The call to getCategories returns an array of String. Some of the entries contain an umlaut. The umlaut is correctly "converted" into the Java string object.
Now I create a new entry in the PIMList (createContact). I want to assign the newly created contact item to a category with an umlaut in it. I use the string returned by the getCategories method. The call to the isCategory method of the PIMList returns false. Why? I got the exact name of this category before with the call to getCategories.
    I can create a new category with the name I get from getCategories. This category is created on the Palm. But instead of the umlaut I get a "two byte character".
    I suppose that the name of the category is converted to utf-8 when calling getCategories. But there is no "backward conversion" when calling isCategory / addToCategory.
    Does someone know this problem or even help me? Thanks!
    Regards
    Sebastian

    The properties you mention are not defined in the JSR-75 specification so you cannot count on them being available in other devices.
    Nokia is, however, specification lead of a new interesting JSR called Mobile Service Architecture for CLDC (JSR-248). If you take a look at the draft available for public review (at www.jcp.org) you can see that the properties you mention (and some others) are defined in this JSR.
    Devices implementing this new JSR will likely have the properties available.

  • Yet another terminal/iterm umlaut irritation

    Hi all
    I'm back again with umlaut issues.
    However I'll begin by describing the set up.
    I have GNU bash, version 3.2.17(1)-release (powerpc-apple-darwin8.9.0) installed.
    I also use vim version 7.1b.1 BETA
    My .bashrc contains the following
    PATH=$PATH:/usr/local/bin/:/Users/duffy/'Java Arkiv'/Glassfish_Arkiv/glassfish/bin/
export LC_ALL=sv_SE.UTF-8
    export LANG=sv_SE.UTF-8
    export JAVA_HOME=/Library/Java/Home
    export TERM=xterm-16color
    alias mysql=/usr/local/mysql/bin/mysql
    alias vim=/opt/local/bin/vim
    alias vimdiff=/opt/local/bin/vimdiff
    SHELL=/opt/local/bin/bash
    my .bash_profile contains
    if [ -f ~/.bashrc ]; then
    source ~/.bashrc
    fi
    my .inputrc contains
    set input-meta on
    set output-meta on
    set convert-meta off
    set meta-flag on
    My terminal settings are as follows
    I login with usr/bin/login
    use xterm-color
    screensettings -> utf-8 encoding, wide characters count as 2
    And in my .vimrc I have set the encoding to utf-8
    That about does it for my set up.
    Now - here goes.
The Terminal command window displays umlauts correctly, and appears to display umlauts correctly when typing. If I do an ls on a directory and see a file called, say, påsk.txt, the file name looks fine. If I type ls på* I get No such file or directory; however, if I mark the file name and do a cut and paste in Terminal, i.e. I cut and paste "på" into the ls, then I get ls på* and it works fine. In other words, the keyboard input and the terminal output don't seem to be the same. I can't figure out why this is. On top of that, when I start vim I can type å ä ö, but every time I do, it uses two spaces, so the word påsk looks like på sk - this despite the fact that the terminal uses UTF-8 and the vim encoding is set to utf-8.
Maybe I've missed some info here, but I've pretty much put in all the info I think is relevant.
I await words from the great gurus out there.

    If i type ls på* I get No such file or directory
In HFS+, which is Mac OS X's default file system, accented characters in file names are stored in "decomposed form". For example, "å" is stored as "a" + "COMBINING RING ABOVE". The file names output by the ls command are decomposed:
    ls p*.txt | od -t x1 -c
    If you enter "å" from your keyboard, on the other hand, it is in pre-composed form, i.e., a single Unicode character:
    echo å | od -t x1 -c
    If you type "ls -w -l påsk.txt", the "å" is pre-composed and bash passes this pre-composed form to some system call (such as stat(2)). But the system call seems to internally convert it into decomposed form, so it works.
    If you type "ls -w på*", on the other hand, bash gets filenames (which are decomposed) in the current directory and compares them with "på" (which is pre-composed), resulting in no match. A workaround is to use "ls -w pa*", i.e., without the accent.
    I also use vim version 7.1b.1 BETA
    Where did you get this? I guess it has been built without the multibyte support. In vim, type
    :version<Return>
and search for the string "multi_byte". If it has "-" in front of it, then multibyte support is not included in that vim.
    The ones in
    http://macvim.org/OSX/index.php (vim7.0) or
    http://code.google.com/p/macvim/ (vim7.1)
    will support multibyte.
    Or you can build vim7.1 by yourself (if you have Xcode Tools installed).
