Inbound MQ with extended character sets

Hi
We are trying to send data containing Swedish characters to PI, in both XML and non-XML payloads.
The message is placed on a WebSphere MQ queue (version 6.0.2.3) with a JMS header that specifies a CCSID of 1208.
The PI adapter is configured as JMS | WebSphereMQ (non-JMS) | JMS Compliant, and the payload module is set to
AF_Modules/MessageTransformBean | Plain2XML | Transform.ContentType | text/xml;charset=utf-8
The received characters are not displaying correctly, which has been a theme on several threads in the past, but I've been unable to determine the solution.
I am more familiar with the MQ side, so please excuse my bias. I already send extended character sets to other applications using JMS over MQ, and we've tried using the same values on the MQ side to no avail.
In MQ we set the MQ message header to the queue manager default, but there is a JMS-specific additional header preceding the payload that specifies that the payload is UTF-8.
From my perspective I can't see that PI is reading the JMS header at all (in fact, removing it has no effect), but we want it there in order to set some extended metadata properties.
When I look at the data on my queue as it leaves MQ, it looks correct both in the viewer and in hex.
How do I get PI to recognise the JMS properties I've specified (known in MQ as the MQRFH header)?
Any advice, guidance, documentation to a PI novice would be most welcome.
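For context, here is a minimal sketch of the kind of JMS put involved; the queue manager, host, channel, queue name and property name are placeholders rather than our real values, and it assumes the IBM MQ classes for JMS. The targetClient=0 destination option is what keeps the MQRFH2 (JMS) header on the message, and CCSID=1208 asks for the text body in UTF-8.

import javax.jms.*;
import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;

public class PutUtf8Message {
    public static void main(String[] args) throws JMSException {
        // Placeholder connection details for the queue manager.
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setQueueManager("QMGR");
        cf.setHostName("mqhost");
        cf.setPort(1414);
        cf.setChannel("SYSTEM.DEF.SVRCONN");
        cf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);

        QueueConnection conn = cf.createQueueConnection();
        try {
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

            // targetClient=0 keeps the MQRFH2 (JMS) header; CCSID=1208 means UTF-8 text.
            Queue queue = session.createQueue("queue:///PI.INBOUND.QUEUE?targetClient=0&CCSID=1208");

            TextMessage msg = session.createTextMessage("Payload with Swedish characters: åäö ÅÄÖ");
            // Custom JMS properties travel in the usr folder of the MQRFH2.
            msg.setStringProperty("SourceSystem", "LEGACY_APP");

            QueueSender sender = session.createSender(queue);
            sender.send(msg);
        } finally {
            conn.close();
        }
    }
}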
Tim

Thank you for the replies Sarvesh and Stefan.
I had read your previous replies on this subject, but was still stuck.
The delay in replying is because we were waiting for a response from the SAP support team.
They have now acknowledged that there may be a fault in the MessageTransformBean.
It is still only a "may", but at the moment all of your other suggestions have been tried without success.
I'll update again when I get further information.
Tim

Similar Messages

  • Avant Garde Gothic - Extended character sets?

    I am working on a project for a client in Poland. We have been given regular and bold weights of Avant Garde Gothic with extended character sets that cover the Polish language. Now however, we want to use the medium weight as well, but have no idea where one goes to find a special extended character set. I have tried numerous type websites with no luck. Any help would be great :)

    There's an Avant Garde CE Gothic Demi, available from Linotype. It
    appears to be Adobe's with the Central European character set added.
    (I don't understand that at all!)
    URW may also have a Medium version. I have one whose font name may be
    AvantGarGotItcTEEMed. I don't know if it's still available.
    - Herb

  • Extended character set

    I've just had the results of CS3 pages on a PC, packaged and sent to be opened using the ID2Q plug-in on a Mac running Quark 6.
    The multiplication symbol 0215 (× if you can see it) has come across as a tall skinny diamond.
    As the files were packaged my fonts have been used, so I'm guessing that the Mac didn't like my use of the extended character set.
    Can anyone shed light, and is this likely to happen with all extended characters?
    k

    I can get a multi sign (ALT 0215) on Quark 4, so I don't think this is a
    Unicode issue. AFAIK, there is no diamond in the extended (or regular)
    set, which leads me to believe ID2Q decided your multi sign should be
    formatted in a different font, Symbol maybe?
    Are you sure they used *your* fonts (not their fonts which they think
    are exactly like your fonts)?
    Kenneth Benson
    Pegasus Type, Inc.
    www.pegtype.com

  • Query distributed database with different character sets.

    Hello experts, this is my situation:
    I have two databases, A and B, both version 11.1.0.7 on the same OS (SUSE Linux Enterprise 10) but with different character sets: A has WE8MSWIN1252 while B has AL32UTF8. Database B is my XML DB repository, so there I have some XMLType tables. I need to query these tables from database A using a dblink, and in fact I have done that, but the XML content is transformed due to the different character sets between the databases. Sometimes there is data loss and sometimes there is a data mismatch.
    Is there any way to query the tables stored in database B without problems? I do not know if the following is correct: maybe I can set the character set for the session in database A while it queries database B, that is, change the character set on the fly at session level.
    Do you have any special suggestion?
    I hope you can help me, thank you in advance.

    The Globalization Support Guide for 11.1.0.7 has a chapter on character set migration that should be helpful. AL32UTF8 is a superset of WE8MSWIN1252, but it is not a strict superset; that is, it does not meet the second prong of the test in the documentation:
    The new character set is a strict superset of the current character set if:
    Each and every character in the current character set is available in the new character set.
    Each and every character in the current character set has the same code point value in the new character set. (For example, many character sets are strict supersets of US7ASCII.)
    Exporting the data from A, changing the character set (or creating a new database with the AL32UTF8 character set), and then importing the data may be the easiest approach in your case.
    Justin
    Edited by: Justin Cave on Jan 13, 2011 12:08 PM
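    To see the second prong concretely, here is a minimal Java sketch, using the Java charsets windows-1252 and UTF-8 purely as stand-ins for WE8MSWIN1252 and AL32UTF8: the Euro sign exists in both, but it is encoded with different byte values, which is exactly what breaks the strict-superset test.

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class SupersetCheck {
        public static void main(String[] args) {
            Charset win1252 = Charset.forName("windows-1252"); // stand-in for WE8MSWIN1252
            Charset utf8 = StandardCharsets.UTF_8;             // stand-in for AL32UTF8

            String euro = "\u20AC"; // the Euro sign, available in both character sets

            byte[] inWin1252 = euro.getBytes(win1252); // one byte: 0x80
            byte[] inUtf8 = euro.getBytes(utf8);       // three bytes: 0xE2 0x82 0xAC

            System.out.printf("windows-1252: %d byte(s), first byte 0x%02X%n",
                    inWin1252.length, inWin1252[0] & 0xFF);
            System.out.printf("UTF-8:        %d byte(s), first byte 0x%02X%n",
                    inUtf8.length, inUtf8[0] & 0xFF);
            // Prong one holds (the character is available), prong two fails
            // (the encoded value differs), so conversion is unavoidable.
        }
    }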

  • Searching with different character sets

    Hello,
    we have a problem with Intermedia 8.1.6. running on Solaris.
    The table contains text in different character sets, and that is the problem. The user submits a query in his character set and Intermedia sometimes does not find the data.
    The idea is to create the index using flat ASCII characters and to search in ASCII ... but how?
    Can anybody help me?
    Thanks.
    Zozzi

    Originally posted by Zozzi ([email protected]): "sorry, wrong email in the prev msg ... this one is correct"
    Hello,
    Did you solve it?
    If yes, how did you do it? I am interested in knowing.
    Many thanks

  • Find/Replace Extended Character Set characters in filenames in one pipeline

    Hello all,
    I have to work with some very bored people. Instead of putting a dash (hex 2d) into a filename, they opt for something from this
    set of extended characters, which makes my regular expressions fail. Is there a way I can efficiently find and replace anything outside the standard character set
    in one pipeline, without finding and replacing one character at a time?
    So, I'd like something like:
    Get-ChildItem * | Where-Object { $_.Name -match '\x99' } | Rename-Item -NewName { $_.Name -replace '\x99', '=' }
    but covering hex 80 to hex FF, rather than a for-each.
    Thanks.

    The answer depends on how you want to do the replacement. It is easier if you want to replace any character in the set with a single chosen character:
    $Name = -join (180..190 | % { [char]$_ })   # create a test file whose name is made of extended characters
    New-Item -ItemType File -Name $Name
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            '_'                                 # every matched character becomes the chosen character
        )
    } -WhatIf
    But if you want something more elaborate, you can do that too, e.g. by defining a hashtable that is used to replace individual characters:
    $Replacer = @{}
    foreach ($Char in (180..190 | % { [char]$_ })) {
        $Replacer.Add(
            [string]$Char,
            (echo _, -, =, . | Get-Random)      # pick a random replacement for each character
        )
    }
    $Replacer
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            { $Replacer[$args[0].Value] }       # script block used as the MatchEvaluator
        )
    } -WhatIf
    Using this syntax makes it possible to include some logic in the replacement. E.g. you could easily use switch to decide what to do with a given character:
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            {
                switch ($args[0].Value) {
                    'º' { '0' }
                    'µ' { 'u' }
                    '¹' { '1' }
                    '¸' { ',' }
                    Default { '_' }
                }
            }
        )
    } -WhatIf

  • Transport tablespaces between databases with different character sets

    Hi everyone:
    I have two 10R2 databases on the same hp-ux 64bit server, 1st one with NLS_CHARACTERSET=US7ASCII, 2nd one with
    NLS_CHARACTERSET=AL32UTF8.
    NLS_NCHAR_CHARACTERSET on both databases is AL16UTF16.
    Can I transfer tablespaces from the 1st one to the 2nd? The data could be in English, French and Spanish.
    If not, what are my options?
    Thanks in advance.

    First off, if you are storing French and Spanish data in database 1, where the character set is US7ASCII, you've got some serious problems. US7ASCII doesn't support non-English characters (accents, tildes, etc.). If you're storing data this way, you've introduced data corruption that you'd have to resolve before copying the data over to another machine.
    Second, technically, the source and target character sets have to be identical. Since AL32UTF8 is a strict binary superset of US7ASCII, you could theoretically transport a US7ASCII tablespace to an AL32UTF8 database. In your case, though, since the data is not really US7ASCII, you'd end up with corruption.
    Any of the Oracle built-in replication options is going to require that you resolve the corruption issue. Assuming that you can figure out what character set the source database really is, you could potentially dump the data to flat files (taking care not to allow character set conversion to take place) and SQL*Loader them into the destination system by identifying the proper character set in your control file. That's obviously going to be a rather laborious process, though.
    Justin
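    As a rough way of spotting the rows in question, here is a small Java sketch (an illustration only, assuming the column values have already been fetched as Java strings): it flags any value containing a code point above 0x7F, i.e. data that cannot legally live in a US7ASCII database.

    public class AsciiCheck {
        // True only if every code point fits in 7-bit ASCII, which is all that US7ASCII can hold.
        static boolean isSevenBitAscii(String value) {
            return value.chars().allMatch(c -> c < 0x80);
        }

        public static void main(String[] args) {
            String[] samples = { "Bonjour", "Déjà vu", "Mañana" };
            for (String s : samples) {
                System.out.printf("%-10s -> %s%n",
                        s, isSevenBitAscii(s) ? "pure US7ASCII" : "needs a wider character set");
            }
        }
    }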

  • Can't Create Database with SJISYEN character set

    I am trying to create a new database in Oracle 10g using the SJISYEN character set. The summary page displayed by the Database Configuration wizard shows the following:
    Character Sets
    Database Character Set JA16SJISYEN
    National Character Set AL16UTF16
    The database gets created successfully with no errors, but when I query the V$NLS_PARAMETERS view, the NLS_CHARACTERSET is reported as US7ASCII
    Should I be able to create a database with the SJISYEN character set with Oracle 10g?

    Can you please run the following SQL statement and see what you get:
    select * from nls_database_parameters
    where parameter like '%CHARACTERSET';

  • Migrating from 8.1.7 to 9.2 with different character set

    Are there going to be issues when I try to migrate from 8.1.7 to 9.2, where the character set in the 8i database was the default Western European, to UTF8 in 9i?
    I ran the 8i character set scan utility to see if there would be issues importing into an 8i db with a different character set, and it said that there would.

    Hi Jim,
    As your first step, I would recommend reading the white paper Character Set Migration Best Practices for Oracle9i on the Globalization Support home page - http://otn.oracle.com/tech/globalization/index.html
    Since you mentioned that CSSCAN has already reported potential issues (I guess you have exceptional/lossy data?), I would strongly recommend splitting the UTF8 character set change and the database version upgrade into two distinct phases: probably moving to UTF8 on 8.1.7 first, then upgrading to 9.2 later.
    BTW, there is a separate Globalization Support and NLS discussion forum on OTN for further discussion of character set related issues.
    Nat

  • OWB 11.1.0.6.0 with database character set AL32UTF8 is not working

    Hi ,
    we are working for a project for Turkey.
    If we insert Turkish characters in the database, in SQL Developer we are able to see the correct data, but when I load a file from the preprocessor in an OWB Process Flow, the Turkish characters get changed to different characters in the database. Our database character set is AL32UTF8. Could you please throw some light on this?
    Many thanks,
    kiranmai.

    hi,
    yes, we are using the correct data set in the preprocessor configuration. Actually it was a problem with OWB only;
    I have changed the database character set to WE8ISO8859P9, and now I am able to see the correct Turkish characters in the database. I think it was an SR for Oracle.

  • NLS settings for a database link between DBs with different character sets

    I am using a database link to move data from one database to another and I am seeing some strange data problems. The databases have different character sets and different NLS settings. I wonder if this could be causing my problem.
    Here are the NLS parameters for the database where the database link exists. (the SOURCE database)
    1     NLS_CALENDAR     GREGORIAN
    2     NLS_CHARACTERSET     WE8MSWIN1252
    3     NLS_COMP     BINARY
    4     NLS_CURRENCY     $
    5     NLS_DATE_FORMAT     DD-MON-RR
    6     NLS_DATE_LANGUAGE     AMERICAN
    7     NLS_DUAL_CURRENCY     $
    8     NLS_ISO_CURRENCY     AMERICA
    9     NLS_LANGUAGE     AMERICAN
    10     NLS_LENGTH_SEMANTICS     BYTE
    11     NLS_NCHAR_CHARACTERSET     AL16UTF16
    12     NLS_NCHAR_CONV_EXCP     FALSE
    13     NLS_NUMERIC_CHARACTERS     .,
    14     NLS_SORT     BINARY
    15     NLS_TERRITORY     AMERICA
    16     NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    17     NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    18     NLS_TIME_FORMAT     HH.MI.SSXFF AM
    19     NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    Here are the NLS parameters for the database that the database link connects to. (the TARGET database)
    1     NLS_CALENDAR     GREGORIAN
    2     NLS_CHARACTERSET     AL32UTF8
    3     NLS_COMP     BINARY
    4     NLS_CURRENCY     $
    5     NLS_DATE_FORMAT     DD-MON-RR
    6     NLS_DATE_LANGUAGE     AMERICAN
    7     NLS_DUAL_CURRENCY     $
    8     NLS_ISO_CURRENCY     AMERICA
    9     NLS_LANGUAGE     AMERICAN
    10     NLS_LENGTH_SEMANTICS     BYTE
    11     NLS_NCHAR_CHARACTERSET     AL16UTF16
    12     NLS_NCHAR_CONV_EXCP     FALSE
    13     NLS_NUMERIC_CHARACTERS     .,
    14     NLS_SORT     BINARY
    15     NLS_TERRITORY     AMERICA
    16     NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    17     NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    18     NLS_TIME_FORMAT     HH.MI.SSXFF AM
    19     NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    The SOURCE database version is 10g Release 10.2.0.3.0 - Production
    The TARGET database version is 11g Release 11.1.0.6.0 - 64bit Production
    Do I need to modify the NLS settings in the SOURCE database before executing a script to insert data into the TARGET database?
    Thanks, Jack

    The difference in settings is not a problem by itself, especially since only the NLS_CHARACTERSET matters here. Of course, this difference may lead to certain issues if not taken into consideration.
    Please describe the symptoms of your problems.
    -- Sergiusz

  • Running instances with different character set WE8MSWIN1252 and AL32UTF8

    We have DB instances running the AL32UTF8 character set and our applications are built around it; however, we have a request to create instances using the WE8MSWIN1252 character set for another application, without bringing in new h/w (server).
    What are the ways to implement this?
    Edited by: raygear on Aug 23, 2012 7:06 PM

    What is the problem? Run DBCA in the advanced mode and specify the required database character set for the new database.
    -- Sergiusz

  • Problems with oracle character set

    Hi,
    We use WebLogic SP2 and an Oracle OCI driver to connect to a database. When we tried to store a text file in the database, we found that all the '£' signs in the file were converted to question marks.
    Looking at the NLS_DATABASE_PARAMETERS table in the database suggests that we are using AL32UTF8 as the character set. I have since tried setting the NLS_LANG variable on the application server to use the same character set, but this doesn't seem to solve the problem.
    Any experts out there know how I can fix this problem?
    Thanks!
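    To illustrate the symptom: when a character is converted into a character set that cannot represent it, the converter substitutes a replacement character, which for single-byte targets is usually '?'. A minimal Java sketch (illustrative only, not the application code), using US-ASCII as an example of a character set that lacks the pound sign:

    import java.nio.charset.StandardCharsets;

    public class PoundSignDemo {
        public static void main(String[] args) {
            String pound = "£";

            // getBytes substitutes the default replacement byte for unmappable characters.
            byte[] ascii = pound.getBytes(StandardCharsets.US_ASCII);

            System.out.println(new String(ascii, StandardCharsets.US_ASCII)); // prints ?
            System.out.printf("byte value: 0x%02X%n", ascii[0] & 0xFF);       // 0x3F, i.e. '?'
        }
    }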

    Hi Yash,
    Can you tell me how to insert values into an nchar/nvarchar column in Indian language scripts?
    I have set the database character set to 'IN8ISCII', but while inserting values into the table I am getting a 'character set mismatch' error.
    I have tried it using the UTF8 character set as well.
    As far as your problem is concerned, I think you have to set the nls_characterset value$ in the props$ table,
    and the same for the nls_nchar_characterset column also.
    Thanks in advance
    Manoj Mehta

  • Trouble with national character set - HELP!

    I've got a problem with Croatian characters like: h, f, , , p. They cannot be displayed in Portal.
    It does not matter whether it is a display name or a label...

    I think you've got me closer to the problem, whatever it is:
    [lars@laptop ~]$ xterm
    Warning: locale not supported by Xlib, locale set to C
    [lars@laptop ~]$ xterm -u8
    Warning: locale not supported by Xlib, locale set to C
    [lars@laptop ~]$ uxterm
    Warning: locale not supported by Xlib, locale set to C
    At least it seems to indicate a problem with the locale, which might be causing my problem - I don't know...
    [lars@laptop ~]$ locale
    LANG=en_DK.utf8
    LC_CTYPE="en_DK.utf8"
    LC_NUMERIC="en_DK.utf8"
    LC_TIME="en_DK.utf8"
    LC_COLLATE=C
    LC_MONETARY="en_DK.utf8"
    LC_MESSAGES="en_DK.utf8"
    LC_PAPER="en_DK.utf8"
    LC_NAME="en_DK.utf8"
    LC_ADDRESS="en_DK.utf8"
    LC_TELEPHONE="en_DK.utf8"
    LC_MEASUREMENT="en_DK.utf8"
    LC_IDENTIFICATION="en_DK.utf8"
    LC_ALL=

  • Issue with MView between DBs with different character sets

    The source system character set is ISO-8859-15.
    BUT - there is some data in the db that is outside the range of ISO: the values are between 128 and 159 decimal. The data is valid in the Windows-1252 charset, but it is not valid ISO.
    The target is UTF-8.
    The current issue is a value in the source system of x84 (decimal 132). This is the lower double quotation mark in Windows-1252, but not a valid ISO character.
    After the mview replicates the value, it is xC284, which is not the correct UTF-8 encoding for the character.
    The correct value in UTF-8 would be xE2809E (Unicode 201E).
    Any suggestions on how to get the mview to recognize Windows-1252 characters, even though the character set of the database is ISO?
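    For reference, a small Java sketch reproduces both byte sequences, using the Java charsets ISO-8859-15, windows-1252 and UTF-8 purely as stand-ins for the database character sets: decoding byte x84 as ISO-8859-15 gives the control character U+0084, which re-encodes to xC284 in UTF-8, while decoding it as windows-1252 gives the low double quotation mark U+201E, which re-encodes to xE2809E.

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class QuoteMarkDemo {
        public static void main(String[] args) {
            byte[] source = { (byte) 0x84 };  // the byte as stored in the source database

            // What replication effectively does: decode as ISO-8859-15, re-encode as UTF-8.
            String asIso = new String(source, Charset.forName("ISO-8859-15"));         // U+0084
            printHex("ISO-8859-15  -> UTF-8:", asIso.getBytes(StandardCharsets.UTF_8)); // C2 84

            // What the data really is: Windows-1252, where 0x84 is the low double quote.
            String asWin = new String(source, Charset.forName("windows-1252"));         // U+201E
            printHex("windows-1252 -> UTF-8:", asWin.getBytes(StandardCharsets.UTF_8)); // E2 80 9E
        }

        private static void printHex(String label, byte[] bytes) {
            StringBuilder sb = new StringBuilder(label);
            for (byte b : bytes) {
                sb.append(String.format(" %02X", b));
            }
            System.out.println(sb);
        }
    }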

