Accents and special characters - how to manage

Hi,
please could you explain how to handle accents and other special characters in Business Objects XI 3.1 (Universes and Reports)? Which parameters/settings do I need to configure for:
Database (ORACLE)
Application Server (AIX)
Universe
WebI
I want to display Italian accents (ò à è ù ì) and some special characters like '@'. Even though they are stored correctly in the database, in WebI I retrieve the characters without accents and a '?' in place of '@'.
Any suggestions?
One more thing: I noticed that when I create a new WebI report, accented characters appear without accents, whereas if I create the report with Rich Client and publish it, the accents are displayed correctly even when the report is opened in Web Intelligence. The application server is on AIX, but Rich Client runs on a Windows server...
Thanks a lot
Edited by: Tube Girl on Mar 22, 2010 4:25 PM

Hi,
I had almost the same problem (only a different language) and managed to solve it after "google-ing" these links:
http://www.forumtopics.com/busobj/viewtopic.php?p=658760&sid=7fd77de133ee32213d987cb1550ae568
https://oraclespin.wordpress.com/2008/05/01/setting-nls_lang-for-oracle/
http://docs.oracle.com/html/B13804_02/gblsupp.htm
FYI, system is AIX 5.3 with Oracle 10g.
I edited the .profile of the boxi user, added the appropriate export of the NLS_LANG variable, then restarted BO, and now the reports are correct.
I suggest you try adding this line to the .profile of your boxi user:
export NLS_LANG=ITALIAN_ITALY.WE8MSWIN1252
then restart and see the results.
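For reference, here is a minimal sketch of the whole procedure. The install path and the stopservers/startservers script names are assumptions based on a typical XI 3.1 Unix install, so adjust them to your environment:
# run as the boxi user; the paths below are assumptions and may differ on your system
echo 'export NLS_LANG=ITALIAN_ITALY.WE8MSWIN1252' >> ~/.profile
. ~/.profile                        # reload the profile in the current session
cd /opt/bobje/bobje                 # assumed BusinessObjects XI 3.1 install directory
./stopservers && ./startservers     # restart the servers so they pick up NLS_LANG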
Hope this helps,
regards!

Similar Messages

  • Accents and special characters disabled in Safari

    I recently installed Mac OS Lion and since then I have had some issues typing special accents and characters on my Spanish keyboard.  Does anyone have a hint as to why this happens?
    Thanks.

  • Accents and special characters

    Hi,
    I'm migrating from 6i to 11g, and in Portugal we have several words with accents. In 6i all my forms display the text correctly, but when I convert them all the accents are gone. How do I make them appear again?

    Does it have to do with BIDI digit substitution? I tried changing it, but everything stays the same. Do I have to change some WebLogic parameter? Can anyone help, please?

  • Remove spaces and special characters from a form field

    Hi,
    I am tragically new to all of this, but am trying to create a form in Adobe Acrobat 9.  I am trying to use a Custom Format script to take input in a form field and automatically remove spaces and special characters (hyphens specifically).  For example, if a user inputs "RAN-99 06" I would like it to change to "RAN9906."  I found this script that will not let users input special characters:
    if (!event.willCommit) {
        // strips the listed special characters as they are typed; adding a space inside the character class would strip spaces too
        event.change = event.change.replace(/[\$#~%\*\^\-\(\)\+=\[\]\{\};\"\<\>\?\|\\\!]/g, "");
    }
    And that's okay, but I can't figure out how to disallow spaces.  Also, the preference would be for a script that allows users to input the data as they like, but cleans it up after they leave the text field.
    Thanks in advance!

    From the description, I assume that the script is currently in the Keystroke event. In fact, that would be a most logical way to have it; simply ignore anything unwanted when entered.
    If you want to allow the user to enter anything, but "clean it up" when done, you would place your code in the Validate event. You will have to adjust your Regular Expression so that it works globally, but that's the whole difference. This will change the value.
    Note that you can also enter the code into the Format event. However, that would only change the visual representation of the value, but internally, the value would remain as entered.
    Hope this can help.
    Max Wyss.

  • Is it possible to search for strings containing spaces and special characters?

    In our RoboHelp project, there are figures with text labels such as Figure 1, Figure 3-2, etc.
    When I search for "Figure 3" I get all pages containing "Figure" and "3", even if I surround it in quotes.  Similarly, searching for "3-2" treats the '-' character as a space and searches for all pages containing '3' or '2'.
    Is there a way to search for strings containing spaces and special characters?

    In that case I think the answer is no if you are using the standard search engine. However I believe that Zoom Search does allow this type of searching. Check out this link for further information.
    http://www.grainge.org/pages/authoring/zoomsearch/zoomsearch.htm
      The RoboColum(n)
      @robocolumn
      Colum McAndrew

  • Validating text field based on the combination of alphabets and special characters

    Hi Everyone,
    I am using Oracle Apex 4.2. I want to validate a text box so that it accepts any combination of alphabets, numbers and special characters (abc12#$, zbc, 123, nd12, 23_6!, @%77).
    But it should NOT accept values made up of special characters only (e.g. @#$#!).
    Please help if anyone knows how to do this.
    Thanks in advance,
    Nikhil.

    Hi Nikhil,
    Here is one way that could work.
    CREATE TABLE t (x VARCHAR2(30));
    INSERT ALL
      INTO t VALUES ('XYZ123')
      INTO t VALUES ('XYZ 123')
      INTO t VALUES ('xyz 123')
      INTO t VALUES ('X1Y2Z3')
      INTO t VALUES ('123123')
      INTO t VALUES ('abc12#$')
      INTO t VALUES ('@%77')
      INTO t VALUES ('!@#$')
      INTO t VALUES ('~%^&*()_+')
      INTO t VALUES ('23_6!')
      INTO t VALUES ('zbc')
      INTO t VALUES ('123*456')
    SELECT * FROM dual;
    SELECT x
    FROM   t
    WHERE  (    regexp_like (x, '[[:alpha:]]')   -- include alphabetic characters
             OR regexp_like (x, '[[:digit:]]')   -- include numbers
             OR regexp_like (x, '[[:punct:]]') ) -- include special characters
      AND  (    regexp_like (x, '^[^%]*$')
            AND regexp_like (x, '^[^*]*$') );    -- exclude the special characters % and *
    Jeff

  • How do I remove spaces and special characters from the file name during rendering?

    I understand that I can set LR_renamingTokensOn to true, but I would like to replace all spaces in the file name with an underscore and remove characters not in the range A-Z and 0-9. What's the easiest way to achieve this?

    local photo = catalog:getTargetPhoto()
    local sesn = LrExportSession {
        photosToExport = { photo },
        exportSettings = {
            -- ... (determine from export preset) - whatever you want, just be sure you set the export directory: LR_export_destinationPathPrefix
            LR_tokens = "{{custom_token}}",
            -- strips spaces and control characters from the original file name;
            -- gsub( "%s", "_" ) followed by gsub( "[^%w_]", "" ) would instead replace spaces with underscores and drop anything outside A-Z, a-z, 0-9
            LR_tokenCustomString = LrPathUtils.removeExtension( photo:getFormattedMetadata( 'fileName' ) ):gsub( "[ %c]", "" ),
        },
    }
    sesn:doExportOnNewTask()

  • Apple, please address this issue!!! Searching and special characters

    I've found at least one other topic here regarding this issue, but Apple has yet to address it, or even confirm that it's in fact an issue.
    A big portion of my library consists of international artists, many of whom have special/nonstandard characters in their names (e.g. "Tiësto"). In previous versions of iTunes, I could search for "Tiesto" (note the absence of the umlaut over the 'e') and all of those tracks would appear. However, after upgrading to iTunes 9, I'm forced to search for "Tiësto" in order to view those tracks.
    Thinking about this issue, it would seem that this is normal behavior, as an artist named "Pàz" isn't necessarily the same as "Paz". But the reality is that NO ONE wants to take the time to search for the alt-code of a special character, and then type it in. Additionally, most people (including me) have a number of reasons for not renaming the artist to conform to standard characters.
    Clearly, something has changed between iTunes 8 and 9. And it's incredibly frustrating. A big chunk of my library is virtually inaccessible to me now, unless I search for terms that don't contain special characters.
    Apple, **PLEASE** address this issue. I'm not the only one with the problem and it's become a huge pain in my side to have to work around this.

    I don't see how it's sloppy or why there should be any reason that it shouldn't have given you both.
    Even putting aside that Gracenote and others don't have consistent spelling in their databases, and that when you download from the iTunes Store you never know what you're going to get (so one's library could be all over the place without one even noticing), accents and other markings don't change the fact that a u is the same letter as ú.
    Yes, it's pronounced differently. Yes, the difference is very important. But it's the same base letter. And even if it's considered a different letter in another language, in English we have 26 letters, and all the accents and tildes and umlauts don't make them a new letter, which is what we're talking about here.
    It's a huge pain that looking for an album that has "Zürich" in the field and I type in "Zurich" I can't find it. I have to think about if I really do have that album, maybe I deleted it by accident, etc. And then I think to type "rich" and then it pops up.
    That's way too many steps. I don't know keyboard shortcuts and I don't want to learn them. In English, it should be letters, not characters, that count.

  • File Content Conversion (receiver) and special characters

    Hi all,
    I have a scenario that has a file receiver channel with content conversion. The record structure in the flat file is field-width delimited (hence no field separator) and the parameter 'fieldLengthTooShortHandling' has the value 'Cut' because the receiving system needs only specific widths for the fields. Hence if the field value exceeds the length permitted, the extra characters are clipped.
    I observed that some characters are not handled properly while creating the text file. For example, one of the fields contained a "minus" character (not the hyphen). The flat file was created successfully. I opened the file in Notepad and found that the "minus" character appeared correctly and the column count in that record was as expected. However, when the same file was opened in TextPad, the minus character was displayed as â | | ('a' with caret, bar, bar). So all the fields after this field were shifted ahead by 2 characters, and hence the total column count of the record had gone beyond the actual one.
    All this started due to the error reported by the receiver system which processes the flat file. Due to the shift of characters in the flat file, the processing failed. Moreover, that system cannot process the special characters (like the minus or non-Latin accented characters etc.). So although there is no issue in the XI interface as such, I just want to know if anyone has more information on why the characters are displayed differently as mentioned above.
    Regards,
    Shankar

    Define the data type like:
    order_recordset
    order_row 1..unbounded
    f1
    f2
    Everything else stays the same except the communication channel configuration:
    Message Protocol: select File Content Conversion; the additional parameters then appear below.
    There you fill in:
    Document Name: your sender message type.
    Document Namespace: your scenario's namespace.
    Recordset Name: order_recordset (as defined in the data type)
    Recordset Structure: order_row, *
    Name : Value
    order_recordset.fieldSeparator : 'nl'
    order_row.fieldSeparator : ,
    order_row.endSeparator : 'nl'
    Fill in the above parameter values based on your text file.

  • Dynamic text, system fonts, and special characters...

    This one is boggling my mind, so if someone can help, please do.
    I've got a dynamic text area pulling text from a MySQL database via AMFPHP. The text includes special characters such as accents, umlauts, etc. (multi-language site platform). Most of them work fine, and ALL of them work fine when I'm on a Mac client. However, if I use a Windows client machine in either Firefox or IE6, there are a couple of characters that for whatever reason just don't seem to show up -- instead I get the [] box character.
    The only characters I've found that seem to be affected like this are European quote characters like &#146; &#147; and &#148; (character codes 146, 147, 148). I'm using the familiar ampersand-pound-number-semicolon escape sequence for them. And like I said, they all display fine on Mac/Firefox and Mac/Safari. Why are all my other special characters (umlauts, accents, etc.) working fine and just these failing? It's also worth noting that they look fine if I dump them out to a PHP file and pull it up in a browser...
    Please, if anyone can help... I've been bashing my head against this for the better part of the day!!

    Hi,
    did you take a look at the "gateway.php"? There you can define charsets. Maybe a Western European charset will help you out, since the French language uses a lot more accents and so on.
    This is what you can find in the gateway.php of amfphp. You just have to set the right one ;)

  • XMLParser and Special Characters

    Hi,
    I'm trying to read in an XML document from a stream (e.g. a file) using XMLParser. The document contains German text (i.e. lots of special characters like the umlauts ä, ö, ü and others).
    If I read this stream into a text string, all these special characters are handled perfectly (i.e. an ä looks like an ä, etc.).
    However, if I import the stream into an XMLParser.Document using ImportDocument, the umlauts seem to be scrambled. If the imported document is exported again to a stream without any changes (using ExportDocument), the umlauts are no longer displayed correctly.
    Example Stream:
    <?xml version="1.0" encoding="iso-8859-1" ?>
    <UserID>Müller</UserID>
    If this stream is imported into an XMLParser.Document and then exported again it contains
    <UserID>MÃ¼ller</UserID>
    I'm using the correct XML encoding, iso-8859-1, which is for Western European languages, and I guess it should not be a Forte locale issue since simple string handling of the stream works fine.
    Thanks for any hints,
    Daniel

    Let's start at the basics. Right now you are quite limited by your database character set, US7ASCII. You need to migrate to something that will support Latin and Greek characters at least, maybe EL8ISO8859P7 or UTF-8. Please look at the documentation for the Scanner Utility, available for Oracle 8.1.6 and above, to make sure the migration is safe before doing any import/export. The title of the paper is: Database Character Set Migration, at: http://technet.oracle.com/products/oracle8i/listing.htm#nls
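    As a rough illustration only (the connect string and the scanner flags are placeholders, not a tested command line), you could check the current character set from the server shell and then run the Character Set Scanner mentioned above before attempting any migration:
    # check what the database currently uses (substitute your own credentials)
    echo "SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';" | sqlplus -s "/ as sysdba"
    # then run the Character Set Scanner against the target character set, for example:
    csscan "/ as sysdba" FULL=Y TOCHAR=UTF8 LOG=scan CAPTURE=Y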
    UTF-8 will give you more versatility in the languages that your customer supports now or in the future. There is some performance overhead in using Unicode, but how much depends on your situation. I would base a large part of the Unicode decision on how likely it is that other languages would need to be supported in the future, and on special character support.
    The special characters that your customer would like to support may already exist in Unicode. IF they don't or you choose another character set then your customer will need to look at the National Language Support Guide, Appendix 'B' "Customizing Locale Data"
    Are you running Greek windows? Otherwise how will you enter Greek characters? If you are using Greek windows you probably need to set your client NLS_LANG to EL8MSWIN1253.
    On your Forms questions you might want to take a look at the following :
    1. Chapter 4 of "Oracle Forms Developer and Reports Developer Release 6i: Guidelines for Building
    Applications" discusses How to design MultiLingual Applications.
    http://otn.oracle.com/docs/products/forms/doc_index.htm

  • [SOLVED] Finch and special characters?

    EDIT: Solved it by setting my SSH clients to send the locale at connect. I found out that the locales were fine on the server and on all computers, but the locale changed to POSIX when I SSH'd into the server. The easy fix was to set ssh_config to send the locale on connect and sshd_config to accept it.
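    A rough sketch of that fix follows; the file locations and the rc.d restart command are assumptions for a 2010-era Arch setup, so adjust as needed:
    # on each client: let ssh send the locale variables along with the connection
    echo 'SendEnv LANG LC_*' >> /etc/ssh/ssh_config
    # on the server: let sshd accept them, then restart the daemon
    echo 'AcceptEnv LANG LC_*' >> /etc/ssh/sshd_config
    /etc/rc.d/sshd restart
    # also make sure the locale named in rc.conf is actually generated on the server
    grep sv_SE /etc/locale.gen    # uncomment the sv_SE.UTF-8 line if needed, then run locale-gen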
    Hi everyone!
    Today I have been playing around a bit with my torrent/file server after I bought a new router. After I changed some configs I can't get special characters, like åäö, to work in Finch.
    Finch works fine with special characters on my desktop and laptop (also running Arch Linux) but not on the server. I usually run a screen with Finch 24/7 on the server, which I can SSH into; that's why I want it to work there, too.
    I have tried both en_US.utf8 and sv_SE.utf8 as locales. Does anyone have any idea how I can fix this? It worked before I started messing around, but I can't understand what I have changed... The only thing is that I'm now using netcfg instead of setting the IP in rc.conf, but that seems to be irrelevant...
    Thanks for answering!
    Here is my rc.conf:
    # /etc/rc.conf - Main Configuration for Arch Linux
    # LOCALIZATION
    # LOCALE: available languages can be listed with the 'locale -a' command
    # HARDWARECLOCK: set to "UTC" or "localtime", any other value will result
    # in the hardware clock being left untouched (useful for virtualization)
    # TIMEZONE: timezones are found in /usr/share/zoneinfo
    # KEYMAP: keymaps are found in /usr/share/kbd/keymaps
    # CONSOLEFONT: found in /usr/share/kbd/consolefonts (only needed for non-US)
    # CONSOLEMAP: found in /usr/share/kbd/consoletrans
    # USECOLOR: use ANSI color sequences in startup messages
    LOCALE="sv_SE.utf8"
    HARDWARECLOCK="localtime"
    TIMEZONE="Europe/Stockholm"
    KEYMAP="us"
    CONSOLEFONT=
    CONSOLEMAP=
    USECOLOR="yes"
    # HARDWARE
    # MOD_AUTOLOAD: Allow autoloading of modules at boot and when needed
    # MOD_BLACKLIST: Prevent udev from loading these modules
    # MODULES: Modules to load at boot-up. Prefix with a ! to blacklist.
    # NOTE: Use of 'MOD_BLACKLIST' is deprecated. Please use ! in the MODULES array.
    MOD_AUTOLOAD="yes"
    #MOD_BLACKLIST=() #deprecated
    MODULES=()
    # Scan for LVM volume groups at startup, required if you use LVM
    USELVM="no"
    # NETWORKING
    # HOSTNAME: Hostname of machine. Should also be put in /etc/hosts
    HOSTNAME="servzor"
    # Use 'ifconfig -a' or 'ls /sys/class/net/' to see all available interfaces.
    # Interfaces to start at boot-up (in this order)
    # Declare each interface then list in INTERFACES
    # - prefix an entry in INTERFACES with a ! to disable it
    # - no hyphens in your interface names - Bash doesn't like it
    # DHCP: Set your interface to "dhcp" (eth0="dhcp")
    # Wireless: See network profiles below
    # Enable these network profiles at boot-up. These are only useful
    # if you happen to need multiple network configurations (ie, laptop users)
    # - set to 'menu' to present a menu during boot-up (dialog package required)
    # - prefix an entry with a ! to disable it
    # Network profiles are found in /etc/network.d
    # This now requires the netcfg package
    NETWORKS=(main)
    # DAEMONS
    # Daemons to start at boot-up (in this order)
    # - prefix a daemon with a ! to disable it
    # - prefix a daemon with a @ to start it up in the background
    DAEMONS=(syslog-ng net-profiles netfs crond sshd acpi)
    Last edited by Ginux (2010-04-03 11:17:12)

    Since you're apparently using utf-8 for editing, your setting for inputenc should reflect that:
    \usepackage[utf8]{inputenc}
    I tried your sample with this and it worked here.
    EDIT: explanation: inputenc package is there to convert a (supported) outside encoding (eg. utf-8, latin1, cp1250, il2, ... many many) into the internal tex encoding, so that tex assigns appropriate glyphs to appropriate letters coming in from your file.
    Last edited by bender02 (2008-11-04 23:36:03)

  • Image Capture and special characters in file names.

    Image Capture hung up yesterday: we tried to enter a date in the file name of the document we were scanning. It proceeded normally, but then did not save the file, and Image Capture would not scan again until the OS was restarted.
    The problem repeated until we discovered a slash in the file name box. We changed the file name and everything works now.
    Software should either prevent user from entering special characters in the file name box, or properly handle the error and not hang up the application.

    Hi,
    Would you mind providing a screenshot of how your Content Query Web Part renders in your environment?
    In my environment, it displays the file name which contains periods.
    I would suggest you create a new page and insert a Content Query Web Part to perform the test again.
    Feel free to reply with the test result or if there is any progress.
    Thanks
    Patrick Liang
    TechNet Community Support

  • Where are the Symbols and special characters?

    Dears, 
    suddenly the special characters have been crushed and no longer exist.
    Please check the image for more clarification. What is the reason, and how can I fix this issue?
    Mohammad Yousri - http://mohammad-yousri.blogspot.com

    What font do you need? You may need to install Windows fresh, and next time do not mess with the fonts.
    I have a lot of fonts installed that came with various Adobe products.

  • PDFs using BI publisher and special characters - pound sterling

    I have a table with character strings which include the £ pound sterling sign.
    If I create an xml file from the table using the escape sequence for the pound sign & # 163; and load it into BI publisher, the pdf renders correctly using an rtf template.
    If I include the pound sign in the rtf template itself the pdf renders correctly and if I use a report query in Apex based on the table, and an rtf template then the pdf renders correctly.
    However there is a limit to the number of columns that can be sent to BI via the report query - we have found this to be 119 columns. For more than this we have been using a stored function to return xml data as a clob and use the clob to generate the pdf with the rtf report layout (using get_print_document). This method works fine but the pound sign is not rendered correctly (appears as "?").
    The NLS_LANG is set to english_united kingdom in the database and in BI publisher. The locale in bi publisher is en_GB. BI will create the pdf correctly with the pound sign if using a standard xml file (with escape seq for pound) so I am sure it is not a language issue in BI/Apex.
    Any ampersands in the xml cause the pdf to fail completely and we cannot represent special characters with a sequence containing the ampersand for this reason. I have tried "& # 163;" and "& amp;#163;" and variations and have also tried using CDATA, none of which gives the correct pdf output. I have also tried to switch the encoding to Windows-1252 or ISO-8859-1 in the header of the xml (xml generated by stored procedure so can control this), but this gives incorrect results too.
    We also need to send other special characters to BI via get_print_document and apex (bullet points, ampersands, dashes ) but cannot use the escape sequences because of the ampersand problem.
    Has anyone had any success with this?
    Is there any plan ( for a future version of Apex) to increase the number of columns that can be used in report query using Apex/BI publisher?
    Thanks
    Kathryn

    Hi Kathryn
    I've had exactly the same problem as you've mentioned.
    Firstly, I've also found that I cannot select more than 119 columns from a view using a report query, and I have opted to use the stored function to return the XML (same as you).
    After conducting a lot of searching and experimenting, I've found that you can use the following escape characters in REPLACE in your stored function:
    '£' can be replaced with CHR(194)||CHR(163)
    '%' can be replaced with '%25'
    '&' can be replaced with '%26amp;'
    I found the above escape characters from the XML file, or by opening the XML file in WordPad. Therefore I'm sure you'll be able to find the escape characters for the other symbols that you mentioned, i.e. bullet points, dashes, etc.
    I hope this has helped. Good luck and let me know if you get any developments with the limitation on the number of columns that can be selected from a report query as this would save a lot of trouble (as I'm sure you're aware).
    Thanks
    Natalie
