Unicode in Reports60
I had enabled Unicode for Dev.6 with
"NLS_LANG"="AMERICAN_AMERICA.UTF8"
Now my reports can display Unicode fonts, but this also
made a mess of both the old reports and the new ones. The fields
are VARCHAR2, and it seems that while displaying them
Reports60 loses the end-of-field marker, because it prints the
field plus some characters from other records. For example:
the field is "Mr. John Smith" but it prints "Mr. John SmithAnn".
I don't think the problem is the font, because I see
the same behavior in an old report that uses
Courier New, and if I disable Unicode the reports are fine.
Nabil
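The symptom described above is consistent with byte-versus-character length confusion (an assumption, not a confirmed diagnosis): under UTF8 a non-ASCII character occupies more than one byte, so code that sizes a field buffer in characters but scans it in bytes (or vice versa) can run past the string end into neighboring data. A minimal Java sketch of the mismatch:

```java
import java.nio.charset.StandardCharsets;

public class ByteVsChar {
    public static void main(String[] args) {
        // A name with one non-ASCII character: 6 characters, but 7 bytes in UTF-8,
        // because 'ü' encodes as two bytes. A byte-counted buffer sized for 6
        // would misplace the end of the field.
        String s = "Müller";
        System.out.println(s.length());                                   // 6
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 7
    }
}
```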
Sorry, I don't have any answers; however, I have been looking
for information on Unicode and how it could help us
improve our multilingual system development. Any chance you
could point me in the right direction?
Darren
Similar Messages
-
Hi All,
I am getting the following error while running a report in Answers tool.
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46115] No Unicode translation is available for some of the input characters for MultiByteWideChar(). (HY000)
I am seeing this error on my local Windows desktop. The database is Oracle 10g.
The error is not very consistent: sometimes it works and then it fails with the above error. Not sure what is causing it.
thx,
Bejoy
Edited by: bejoy.nair on Sep 1, 2009 9:00 AM

I am also receiving the same error, but from within Answers. It occurs occasionally, depending on which fields I select. I am trying to narrow it down to a field and field type. I am using OBIEE 10.1.3.4.0 and Oracle 11g.
Any help would be appreciated.
Thanks,
Anne -
Unicode filename in download box
I need to upload/download files with Unicode names from a file hosting service. Everything is going fine except one thing: Internet Explorer.
The problem is that IE doesn't recognize the file name I'm sending as Unicode, showing me an encoded string in the download box instead. The page displays the file name fine; the download box doesn't. The problem happens only if the Unicode file name has no extension. With an ASCII extension, the name displays fine. With a Unicode extension, the name part appears correct but the extension itself is garbled. Firefox works like a charm.
What I'm basically doing is checking for the browser. If IE, I encode the filename and set the Content-Disposition header and so on. If Firefox, I do it the Firefox way (mark the filename field in the Content-Disposition with a * just before the equals sign). I then send the file data into a servlet output stream.
As a temporary solution, I'm appending a .NoExtension extension to extension-less filenames. That will have to do for now, unless anybody here has a better idea...
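For reference, the "Firefox way" mentioned above is the RFC 5987 `filename*` form of Content-Disposition. A minimal sketch (the value is built with URLEncoder, which approximates but is not exactly RFC 5987 percent-encoding; older IE versions do not understand `filename*`, which is why the thread branches on the browser):

```java
import java.net.URLEncoder;

public class ContentDispositionUtf8 {
    // Builds a Content-Disposition header using the RFC 5987 filename* form.
    // URLEncoder targets application/x-www-form-urlencoded, so spaces come out
    // as '+' and must be rewritten to %20 for a header value.
    public static String header(String fileName) throws Exception {
        String encoded = URLEncoder.encode(fileName, "UTF-8").replace("+", "%20");
        return "attachment; filename*=UTF-8''" + encoded;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(header("résumé"));
        // attachment; filename*=UTF-8''r%C3%A9sum%C3%A9
    }
}
```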
please? : ]

Take a look at this article:
http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/index.html -
Logical Database PNP. HR and Unicode
Hi,
currently we are checking all programs to make them Unicode compliant. Using the logical database PNP, a lot of macros are loaded automatically. One of them is
rp_provide_from_last (or rp_provide_from_frst), which gets the last record in a specified time interval. The existence is indicated by a variable named pnp-sw-found; it is 0 if no record was found and 1 if one exists.
Checking the program (normal syntax check and extended syntax check) leads to the warning that variable names with a hyphen are no longer allowed in Unicode programs unless they are structures (and pnp-sw-found is not). The program runs fine, and transaction UCCHECK does not mention this error/warning at all. Does someone have experience with this issue and perhaps a solution?
cu
Rainer

Thanks for the answers so far. Using PNPCE does not solve my problem, because we have a lot of reports of our own and I want to avoid changing them all.
Besides, using PNPCE does not remove the need to use pnp-sw-found; it still exists and still triggers the warning that it is not Unicode compliant.
Switching off the Unicode flag is not a good idea if we want to go for Unicode.
Anyone else with experience in unicode in the HR Area? -
Hi,
I'm working on a project to convert several hundred thousand life sciences articles into EPUB format, and we have run into a problem with character entities.
Since these are scientific articles, the characters come from a wide range of Unicode charts and are essential to transmitting the meaning of the data.
The problem is that in my EPUB, a character entity inside a table data cell renders with the @font-face font correctly, but inside any other HTML element the character renders as an empty box on our iPad 2s.
I've placed pre tags in hopes that the Unicode will not be rendered in your browser here. The code point in this example is U+1D542, just in case.
So inside the div we see boxes; inside the td, the character renders properly.
<pre>
<div class="stix">Let 𝕂 be a field, which will be either the complex numbers ℂ or the finite field 𝔽</div>
<table id="t31" rules="all">
<tr>
<td>𝕂</td>
<td class="stix">𝕂</td>
<td>U+1D542 MATHEMATICAL DOUBLE-STRUCK CAPITAL K </td>
</tr>
</table>
</pre>
My CSS looks like this:
<pre>
@font-face {
    font-family: 'STIX';
    src: url('STIX-Regular.otf') format('opentype');
    font-weight: normal;
    font-style: normal;
    unicode-range: U+02B0-02FF, U+07C0-07FF, U+0900-097F, U+0F00-0FD8, U+1D00-1D7F, U+1D80-1DBF, U+1D400-1D7FF, U+1E00-1EFF, U+1F00-1FFE, U+2000-206F, U+20A0-20B8, U+20D0-20F0, U+2300-23FF, U+25A0-25FF, U+2600-26FF, U+27C0-27EF, U+27F0-27FF, U+2900-297F, U+2A00-2AFF, U+2B00-2B59, U+2C60-2C7F;
}
@font-face {
    font-family: 'STIX-Math';
    src: url('STIXMath-Regular.otf') format('opentype');
    font-weight: normal;
    font-style: normal;
    unicode-range: U+02B0-02FF, U+07C0-07FF, U+0900-097F, U+0F00-0FD8, U+1D00-1D7F, U+1D80-1DBF, U+1D400-1D7FF, U+1E00-1EFF, U+1F00-1FFE, U+2000-206F, U+20A0-20B8, U+20D0-20F0, U+2300-23FF, U+25A0-25FF, U+2600-26FF, U+27C0-27EF, U+27F0-27FF, U+2900-297F, U+2A00-2AFF, U+2B00-2B59, U+2C60-2C7F;
}
.stix {
    font-family: "STIX", "STIX-Math", sans-serif;
}
</pre>
Is it possible that this is a rendering bug, because the character is rendering in the table cell, but not in other elements?
Have I missed something obvious?
Thanks,
Abe

I assume you are including the STIX font as part of your EPUB files?
Perhaps the folks who do this blog might be able to help -- they have done some work with font embedding:
http://www.pigsgourdsandwikis.com/2011/04/embedding-fonts-in-epub-ipad-iphone-an d.html -
Not able to display data in different columns using Unicode encoding
Hi,
I am using Unicode encoding in my Java application to support Japanese characters while downloading a CSV report. But with Unicode encoding, all the data lands in the first column of the Excel sheet.
Please let me know how to display the data in separate columns in Excel when using Unicode encoding.

Hi Venkat,
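One common workaround, not taken from this thread (a hedged sketch): Excel reliably splits a Unicode "CSV" into columns when the file is written as UTF-16LE with a byte order mark and tab separators instead of commas. The file name below is a hypothetical example.

```java
import java.io.*;

public class UnicodeCsv {
    public static void main(String[] args) throws IOException {
        File out = new File("report.csv"); // hypothetical output path
        try (Writer w = new OutputStreamWriter(new FileOutputStream(out), "UTF-16LE")) {
            w.write('\uFEFF');       // BOM: lets Excel detect the encoding
            w.write("名前\t金額\n"); // tab-separated: Excel splits tabs into columns
            w.write("東京\t42\n");
        }
    }
}
```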
After extracting data into DSO check the request whether active or not.
Check data in DSO in contents.
If is there any restrictions on info providers in Queries.
Let us know status clearly.......
Reg
Pra -
Not able to display data in separate columns using Unicode encoding
Hi,
I am using Unicode encoding in my Java application to support Japanese characters while downloading a CSV report. But with Unicode encoding, all the data lands in the first column of the Excel sheet.
Please let me know how to display the data in separate columns in Excel when using Unicode encoding.
This is an urgent need. Please help me out.

Hi,
I have no problem with item :P15_EV_LCL; it has a value. My problem here is that I am using an inline HTML span to display the value in a different color based on a condition, in a CASE expression,
eg:
select
  case when TRUNC((
         (NVL(Z."AEWP",0) - NVL(Z."BEWP_Final",0)) / DECODE(Z."BEWP_Final",0,NULL,Z."BEWP_Final") ) * 100
       ,2) = :P15_EV_LCL
  then '<span style="background-color:lightgreen">'
    || TRUNC((
         (NVL(Z."AEWP",0) - NVL(Z."BEWP_Final",0)) / DECODE(Z."BEWP_Final",0,NULL,Z."BEWP_Final") ) * 100
       ,2) || '%' || '</span>'
  else '<span style="background-color:yellow">'
    || TRUNC((
         (NVL(Z."AEWP",0) - NVL(Z."BEWP_Final",0)) / DECODE(Z."BEWP_Final",0,NULL,Z."BEWP_Final") ) * 100
       ,2) || '%' || '</span>'
  end "Effort"
from actuals Z
If I don't use this <span style="background-color:...">, I am able to generate data in the Excel sheet; with the color coding, I am not able to get the data into the spreadsheet.
Please suggest
Thanks
Sudhir
Edited by: Sudhir_N on Mar 23, 2009 10:00 PM -
Unable to show Unicode Data in Oracle RESTful Service JSON
Hi Everyone.
I have stored Unicode data in an Oracle database, and when I retrieve it with a SQL query it displays correctly. But when I retrieve the data as JSON using an Oracle RESTful web service (GET), it comes back with unknown characters, as shown below.
next: {},$ref: "http://000.00.00.00:8085/ords/mobile/sch/loginm/?user=SURESH&pwd=123&page=1"
items: [
uri: {},$ref: "http://000.00.00.00:8085/ords/mobile/sch/loginm/41"
stud_id: 41,
stud_code: "1001",
stud_name: "அப்துல் ஜப்பார்"
My Database Setup as below:
SQL> SELECT name,value$ FROM sys.props$;
NAME VALUE$
DICT.BASE 2
DEFAULT_TEMP_TABLESPACE TEMP
DEFAULT_PERMANENT_TABLESPACE USERS
DEFAULT_EDITION ORA$BASE
Flashback Timestamp TimeZone GMT
TDE_MASTER_KEY_ID
DBTIMEZONE -07:00
DST_UPGRADE_STATE NONE
DST_PRIMARY_TT_VERSION 11
DST_SECONDARY_TT_VERSION 0
DEFAULT_TBS_TYPE SMALLFILE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET AL32UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 11.2.0.1.0
GLOBAL_DB_NAME MOBILE
EXPORT_VIEWS_VERSION
SQL> select DECODE(parameter, 'NLS_CHARACTERSET', 'CHARACTER SET',
2 'NLS_LANGUAGE', 'LANGUAGE',
3 'NLS_TERRITORY', 'TERRITORY') name,
4 value from v$nls_parameters
5 WHERE parameter IN ( 'NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY');
NAME VALUE
LANGUAGE AMERICAN
TERRITORY AMERICA
CHARACTER SET AL32UTF8
Awaiting your solution.
-- Abdul Jabbar

Kumar,
FTPing the PG.xml to the MDS folder will not help the page to go to the MDS directory.
You have to import the file using XMLImporter.
I understand you have done the import, but it was not successful.
Could you please post the script you used to import the PG.xml,
and the output you got once you ran it?
May be you can refer the URL for the scripts
http://apps2fusion.com/at/61-kv/331-oa-framework-scripts
With regards,
Kali.
OSSI. -
How can I create files in unicode format without "byte order mark"?
Hello everyone,
I have to export files in UTF-8 format and send them to a partner system running Linux.
I have tried the different possibilities for creating files with ABAP, but nothing works 100% the way I want.
Some examples:
1.)
OPEN DATASET [filename] FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
If I create a file this way and download it from the application server to my local system, the file format shown in a Unicode text editor like Notepad is "ANSI as UTF-8". This means there is no BYTE ORDER MARK inside.
But it is also possible that the file format is plain ANSI if the file contains no special characters, isn't it?
In my test cases I create 3 files: 2 of them have the format "ANSI as UTF-8", and one plain "ANSI".
After transfer to Linux, the file formats are reported as UTF-8 twice and ASCII once.
2.)
OPEN DATASET [filename] FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE ORDER MARK.
With this syntax the result in the local editor looks OK; the format really is "UTF-8".
But I get problems with the system which receives the files.
All files have the file format UTF-8 on Linux, but the interface/script cannot read a file with a BYTE ORDER MARK.
This is a very big problem for me.
Does anybody know whether it is possible to force creation of UTF-8 without a BYTE ORDER MARK?
This means more or less the first example, but all files should have UTF-8 format!
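If the sender cannot omit the BOM, the receiving side can strip it before processing; a small sketch (in Java, since the receiving script's language isn't stated in the thread; the same three bytes EF BB BF can also be removed with sed on Linux):

```java
import java.util.Arrays;

public class StripBom {
    private static final byte[] BOM = { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF };

    // Returns the content without a leading UTF-8 byte order mark, if present;
    // content without a BOM is returned unchanged.
    public static byte[] strip(byte[] content) {
        if (content.length >= 3
                && content[0] == BOM[0] && content[1] == BOM[1] && content[2] == BOM[2]) {
            return Arrays.copyOfRange(content, 3, content.length);
        }
        return content;
    }

    public static void main(String[] args) {
        byte[] withBom = { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'A', 'B' };
        System.out.println(new String(strip(withBom))); // prints "AB"
    }
}
```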
Thanks in advance
Christian

This means it is not possible to create a pure Unicode file without the byte order mark?
You wouldn't happen to know how a file with a byte order mark should be read on a Linux system?
Or whether this is possible at all?
Regards
Christian -
Error during upgrade of SCM 5.0 to SCM 5.1 Non-Unicode System
Hi,
I am upgrading an SCM 5.0 Non-Unicode system to SCM 5.1 Non-Unicode on Windows 2003 Server with an MSS 2005 database. During the initial PREPARE (EXTRACTKRN_PRE) phase I am getting an error like
init
2010/06/15 07:03:45 PREPARE: START OF PHASE EXTRACTKRN_PRE
ERROR: Cannot find file 'C:\SCM_2007_Upgrade_Media\51033508_8\NW_7.0_SR3_Kernel_WINDOWS__LNX_X86\KN_WINDOWS_X86_64_AUPG
KERNEL.TOC'
SEVERE ERROR(S) OCCURRED IN EXTRACTKRN_PRE !!!
The file name it is looking for does not exist in any of the kernel versions I have checked. If I try to skip this step, it asks for a PASSWORD. How can I get this password? (Only from SAP Support, or from a note?)
What is the Solution for the above ERROR. Please help.
Thanks,
Ajay.

> I am not posting the question in the same group. I am posting in different groups due to the confusion about which group to post in, and of course to get a faster response. All people can't check all the groups to answer. Moreover, I am not keeping the questions open. As soon as I get the answer I provide it in every group and immediately close the question.
> So what is the problem with that? I do not think this is creating any inconvenience to anyone, just increasing the number of questions in the group.

This is not wanted, as written in the "Rules of Engagement". This is a public forum, not a help desk.
> OK, as you said: if I post only in a single group, do you guarantee that you will check all the groups every time to answer/find my question?
Again, this is a forum where the question are answered on a volunteered basis. "Demanding" help is not the way to go, you can ask for help but there's no guarantee that you'll get an answer. If you urgently need help use the official support channels or get yourself a consultant on-site who can answer all your question instantly.
> And please paste the URL of those rules that say this is wrong, since I didn't read that.
It's the first entry in every forum, it contains the "Rules of Engagement" which say:
Please do not Cross post.
Post your question in the most appropriate forum, not multiple forums. This is bad netiquette and might only aggravate potential repliers.
Just imagine if everyone like you would paste questions in several forums.
Markus
Edited by: Markus Doehr on Jun 15, 2010 5:29 PM
Edited by: Markus Doehr on Jun 15, 2010 5:30 PM -
BSI Tax Factory on ECC 6.0 Unicode
I'm looking for some help with using BSI Tax Factory on an ECC 6.0 Unicode System. Does anyone have any experience with using BSI Tax Factory on the Unicode version of ECC 6.0? Were there any special steps that were taken in order to make it work? I spoke with someone at BSI and they told me that Tax Factory was not Unicode compliant and that I should talk to someone at SAP to find out what steps needed to be done to make the system work. While I inquire further with both SAP and BSI I thought I would post something here as well to see if anyone could offer any assistance.
Thanks in Advance!
-Nick

Hi Nick,
BSI TaxFactory is not Unicode compliant. But in the past we have had customers who converted from non-Unicode to Unicode and had no problems with BSI TaxFactory.
Export the BSI-related data from the DB and import it back after the Unicode conversion, and everything works fine!
Regards,
Tarun
SAP ERP HCM -
Hi,
Can anybody help with the following code? I am converting a program to Unicode and getting the error 'Null space must be a data type C N D T'. Here is the code.
DATA NULL_SPACE(2) TYPE x VALUE '0020'.
TRANSLATE BDCDATA-FVAL USING NULL_SPACE.
Regards,
venkat.

Do something like below:
DATA: left_content  TYPE string,
      right_content TYPE string,
      xcontent      TYPE xstring.
DATA: w_longchar(20).
CONSTANTS: c_unknown(7) VALUE 'Unknown'.

xcontent = '0020'.

DATA: conv TYPE REF TO cl_abap_conv_in_ce.
conv = cl_abap_conv_in_ce=>create( input = xcontent ).
conv->read( IMPORTING data = left_content ).
- Cheers -
MS Access and Unicode (UTF8?)
Hi --
I've been able to insert Arabic data into an MS Access table programmatically,
using SQL and the \u notation. For example:
insert into MY_TABLE values ('\u0663'); // arabic character
Then, I can read this data out using ResultSet's getCharacterStream method. The data comes back out fine, and can be displayed in a simple JTextField as Arabic.
(This required opening the database connection using the "charSet = "UTF8" property in the call to DriverManager's getConnection method.)
My problem is that I have another Access table in which the data was entered manually -- having set the Control Panel Regional setting to Arabic, and using the MS Office Tool language Arabic. The data looks fine in the Access GUI (the Arabic characters show up as Arabic).
However, when I read the data using the same method in the first example, I get back question marks. I guess there's something different about the way the data was encoded? I read that Access stores all character data as Unicode, but I'm not sure if that implies a particular encoding (such as UTF8) or not.
Is there any way to figure out how the manually-entered data is encoded?
Or is there something else I'm doing wrong?
Thanks for any help.
-J

However, when I read the data using the same method
in the first example, I get back question marks. I
guess there's something different about the way the
data was encoded? I read that Access stores all
character data as Unicode, but I'm not sure if that
implies a particular encoding (such as UTF8) or not.
Is there any way to figure out how the
manually-entered data is encoded?
Please see the article here: http://office.microsoft.com/en-us/assistance/HP052604161033.aspx
It suggests that Access stores data in UTF-16 or UTF-8 depending on whether a "Unicode Compression" feature is selected. So, I'd say you should try retrieving data from the other db as UTF-16.
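If the driver is handing back stored bytes undecoded (one possible explanation for the question marks, not a confirmed one), decoding them explicitly as UTF-16LE can be sketched like this; the byte pair below is a hypothetical example for U+0663, the Arabic-Indic digit used earlier in the thread:

```java
import java.nio.charset.StandardCharsets;

public class DecodeUtf16 {
    public static void main(String[] args) {
        // Hypothetical raw column bytes: U+0663 stored little-endian,
        // as uncompressed Unicode text would be on Windows.
        byte[] raw = { 0x63, 0x06 };
        String s = new String(raw, StandardCharsets.UTF_16LE);
        System.out.println(Integer.toHexString(s.charAt(0))); // prints "663"
    }
}
```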
Regards,
John O'Conner -
How do I use an input file with Asian characters (Unicode)?
/* Ardor
 * Illiteraminator.java
 * Version beta 1.0
 * December 7, 2007
 * Main interfacing class
 */
import java.io.*;
import java.util.*;
public class Illiteraminator{
public static void main (String [] args){
// ArrayList<Word> dictionary = new ArrayList<Word>();
String fileName = "Mandrin.txt";
String character= "",definition = "",inputLine;
try{
Scanner fileScan = new Scanner (new File (fileName));
while (fileScan.hasNext()){
inputLine = fileScan.nextLine();
Scanner sc = new Scanner(inputLine);
character = sc.next();
while (sc.hasNext()){
definition = definition + " " + sc.next();
}//end while sc
// dictionary.add(new Word(character, definition));
//definition = "";
//character = "";
}//end while fileScan
} catch (FileNotFoundException e){
System.out.println("File not found, dig around for Mandrin.txt");
System.exit(1);
}//end catch
System.out.println(character);
System.out.println(definition);
}//end main
}//end class Illiteraminator

Hi, I'm a first-time programmer. Never touched programming until I took a Java class in university last semester. I am currently attempting to write a program to help me move away from my illiteracy in Mandarin. So, that's my code, and I am using DrJava while writing it. When I tested it out, the output looked something like this:
A p p l e
M o n k e y C a t D o n k e y
My input file is saved in Unicode. It contains letters that cannot be saved in ANSI. I tried UTF-8, but the interactions section showed no output...
Is this just a problem with DrJava? Will I encounter a similar problem when I turn this into a GUI?
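For what it's worth, a sketch of reading with an explicit charset (an assumption: files saved with Notepad's "Unicode" option are UTF-16LE with a BOM, and Java's "UTF-16" decoder honors that BOM, so pass "UTF-16" rather than the platform default):

```java
import java.io.*;
import java.util.Scanner;

public class UnicodeRead {
    // Reads the first line of a file, decoding with the given charset instead
    // of the platform default (which is what mangles the output in DrJava).
    public static String firstLine(File f, String charset) throws IOException {
        try (Scanner sc = new Scanner(f, charset)) {
            return sc.hasNextLine() ? sc.nextLine() : "";
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("dict", ".txt");
        // The "UTF-16" encoder writes a BOM, mimicking a Notepad "Unicode" save
        try (Writer w = new OutputStreamWriter(new FileOutputStream(f), "UTF-16")) {
            w.write("的 [de] grammatical particle\n");
        }
        System.out.println(firstLine(f, "UTF-16"));
        f.delete();
    }
}
```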
The following is a copy-and-pasted version of the txt file I used as input. It is saved in the Unicode format.
的[de] <grammatical particle marking genitive as well as simple and composed adjectives>; 我* wǒde my; 高* gāode high, tall; 是* shìde that's it, that's right; 是...* shì...de one who...; 他是说汉语*. Tā shì shuō Hànyǔde. He is one who speaks Chinese. [dì] 目* mùdì goal [dí] true, real; *确 díquè certainly
一(A壹) [yī] one, a little; 第* dì-yī first, primary; 看*看 kànyīkàn have a (quick) look at [yí] (used before tone #4); *个人 yí gè rén one person; *定 yídìng certain; *样 yíyàng same; *月 yīyuè January [yì] (used before tones #2 and #3); *点儿 yìdiǎnr a little; *些 yìxiē some {Compare with 幺(F么) yāo, which also means "one"}
是 [shì] to be, *不*? shìbushì? is (it) or is (it) not?; *否 shìfǒu whether or not, is (it) or is (it) not?

Sorry, but I do not understand this post at all... Can anyone explain it to me? Is he saying my IDE is running on something other than Unicode?
PS: I tried one of the Scanner constructors that takes a charset parameter. That fixed the odd output! However, every Chinese character has been replaced with a question mark. (It was a series of weird characters before i used the constructor with a charset parameter.) -
Which version of Weblogic on Solaris is compatible with Oracle 8.1.7 - Unicode?
Hi folks,
We want to upgrade WLS 4.5.1 to one of the latest versions of WLS, but we are also
planning to upgrade Oracle to version 8.1.7 and migrate the character set of the
database to UTF8 (Unicode),
so we need to know which versions of WLS are compatible with Oracle 8.1.7
and Unicode as the character set.
Thanks in advance.
Moises Moreno.

Hi Moises Moreno,
The latest version of WebLogic Server is 6.1 with Service Pack 1. This version
supports Oracle 8.1.7 on the major Unix platforms, viz. Solaris (2.6, 2.7, 2.8),
HP-UX (11.0, 11.0i), Linux 7.1, AIX 4.3.3, and on the Windows platforms, viz.
NT with SP5 and 2000.
BEA jDrivers have multibyte character set (UTF8) support.
Note: WebLogic Server 5.1 with SP10 also supports Oracle 8.1.7.
FMI : http://www.weblogic.com/platforms/index.html#jdbc
Thanks & Regards
BEA Customer Support