Unicode conversion
Hi all!
I have a problem with Unicode.
My application uses MS950 as its charset. The Chinese character "邨" submitted from an HTML form arrives as Unicode E473. My application also supports uploading a Unicode file. That file contains the same character "邨", but after inserting it into the database its code point is 90A8 instead of E473.
When querying from the application (which uses the MS950 charset), the character from the FILE (90A8) is displayed as "?".
Have I done something wrong? Please help!
Thanks a lot!
Edmond
Yes, I know the character from the HTML form is in the Private Use Area, but why?
I am just using a simple JSP, with the charset set to MS950.
If that's the case, can I convert all those Private Use Area characters to standard Unicode on the server side?
Thanks again!
Edmond
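Before attempting any server-side remapping, it helps to detect which code points actually fall in the Private Use Area. A full PUA-to-standard remap would need a vendor-specific mapping table for the exact MS950/Big5 variant in use, which is beyond a forum post; the sketch below only flags PUA characters:

```java
// Sketch: detect Private Use Area characters in a decoded string.
// Remapping them to standard code points needs a vendor-specific table
// (depends on the exact MS950/Big5 variant); this only identifies them.
public class PuaCheck {
    public static boolean isPrivateUse(char c) {
        return Character.UnicodeBlock.of(c) == Character.UnicodeBlock.PRIVATE_USE_AREA;
    }

    public static void main(String[] args) {
        System.out.println(isPrivateUse('\uE473')); // code point from the HTML form
        System.out.println(isPrivateUse('\u90A8')); // 邨 from the uploaded file
    }
}
```

Running this shows that U+E473 is in the PUA while U+90A8 is not, which matches the behavior described above.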
As you can check from the [url=http://www.unicode.org/charts/]Unicode standard[/url], the character U+E473 is in the "Private Use Area" block and clearly cannot be the character you want. U+90A8, on the other hand, is in "CJK Unified Ideographs" -- this must be your character.
So now you have one more problem: you get the wrong character from the HTML form :)
> When using enquire from an application that uses the MS950 charset, the character from the FILE (90A8) is displayed as "?".
Displayed by what? And is it only displayed as "?", or is it actually replaced by "?" in the text? Try displaying the character's hex code instead of the character itself:
char c = yourChar;
display(Integer.toHexString(c));
Similar Messages
-
SAP Business Connector 4.6 - post SAP ECC Unicode conversion
Hi,
after the Unicode conversion of our ECC 6.0 system, SAP Business Connector creates XML files with encoding UTF-16 (in the past it was iso-8859-1). Our custom applications that read these XML files cannot process them as UTF-16 until we manually change the encoding from UTF-16 to UTF-8 or iso-8859-1.
Now:
<?xml version="1.0" encoding="utf-16"?>
In the past:
<?xml version="1.0" encoding="iso-8859-1"?>
Can I set the right encoding in Business Connector?
Regards.
Hi,
> You can try to uncheck the Unicode setting in the RFC destination in SAP (transaction SM59) and see how the content arrives then. But if I remember correctly, this also does not produce the desired effect in BC (receiving in iso-8859-1).
I already tried it, without success.
> Apart from that, I see no other option than changing the encoding via a BC service or changing the encoding in the application which receives the FTP file.
OK.
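For the receiving application, re-encoding the file itself is not much work. A minimal sketch (file paths, class name, and the declaration rewrite are illustrative, not a BC API):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class XmlReencode {
    // Read a UTF-16 XML file (the BOM decides the byte order), rewrite
    // its encoding declaration, and write it back out as UTF-8.
    public static void utf16ToUtf8(Path in, Path out) throws IOException {
        String xml = new String(Files.readAllBytes(in), StandardCharsets.UTF_16);
        xml = xml.replaceFirst("encoding=\"utf-16\"", "encoding=\"utf-8\"");
        Files.write(out, xml.getBytes(StandardCharsets.UTF_8));
    }
}
```

Note that the declaration must be rewritten along with the bytes, otherwise parsers will trust the stale `encoding="utf-16"` attribute.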
Regards. -
Unicode conversion source=target
Hi,
I'm trying a Unicode conversion on a sandbox system just copied from the production system. I'm now in the export phase, on our Solaris 10 SPARC + Oracle 10.2.0.4 box.
For the import I must use the same server, same <SID> and system number... Is it necessary to uninstall the previous ASCII instance and then re-install in Unicode? ... or can I just switch the kernel from ASCII to Unicode? ... how?
Regards.
Ganimede Dignan.
Source system tablespaces:
PSAPBTABD
PSAPBTABI
PSAPCLUD
PSAPCLUI
PSAPDDICD
PSAPDDICI
PSAPDIMD
PSAPDIMI
PSAPDOCUD
PSAPDOCUI
PSAPEL700D
PSAPEL700I
PSAPES700D
PSAPES700I
PSAPFACTD
PSAPFACTI
PSAPLOADD
PSAPLOADI
PSAPODSD
PSAPODSI
PSAPPOOLD
PSAPPOOLI
PSAPPROTD
PSAPPROTI
PSAPSOURCED
PSAPSOURCEI
PSAPSTABD
PSAPSTABI
PSAPTEMP
PSAPUNDO
PSAPUSER1D
PSAPUSER1I
PSAPZACCTCRD
PSAPZACCTCRI
PSAPZACCTITD
PSAPZACCTITI
PSAPZAFVCD
PSAPZAFVCI
PSAPZAFVVD
PSAPZAFVVI
PSAPZAUSPD
PSAPZAUSPI
PSAPZBALDATD
PSAPZBALDATI
PSAPZBALHDRD
PSAPZBALHDRI
PSAPZBKPFD
PSAPZBKPFI
PSAPZBSIMD
PSAPZBSIMI
PSAPZBSISD
PSAPZBSISI
PSAPZCDHDRD
PSAPZCDHDRI
PSAPZCKISD
PSAPZCKISI
PSAPZCKITD
PSAPZCKITI
PSAPZCMFPD
PSAPZCMFPI
PSAPZCNVCLUD
PSAPZCNVCLUI
PSAPZCOEPD
PSAPZCOEPI
PSAPZCOSPD
PSAPZCOSPI
PSAPZCOSSD
PSAPZCOSSI
PSAPZEIPOD
PSAPZEIPOI
PSAPZEKBED
PSAPZEKBEI
PSAPZEKPOD
PSAPZEKPOI
PSAPZGLPCD
PSAPZGLPCI
PSAPZJESTD
PSAPZJESTI
PSAPZKONHD
PSAPZKONHI
PSAPZLIPSD
PSAPZLIPSI
PSAPZMARCD
PSAPZMARCI
PSAPZMARDD
PSAPZMARDI
PSAPZMBEWD
PSAPZMBEWI
PSAPZMCKALKWD
PSAPZMCKALKWI
PSAPZMDKPD
PSAPZMDKPI
PSAPZMDTBD
PSAPZMDTBI
PSAPZMSEGD
PSAPZMSEGI
PSAPZMSTAD
PSAPZMSTAI
PSAPZNASTD
PSAPZNASTI
PSAPZRESBD
PSAPZRESBI
PSAPZS022D
PSAPZS022I
PSAPZS026D
PSAPZS026I
PSAPZS027D
PSAPZS027I
PSAPZS033D
PSAPZS033I
PSAPZVBAPD
PSAPZVBAPI
PSAPZVBFAD
PSAPZVBFAI
PSAPZVBPAD
PSAPZVBPAI
PSAPZVBRPD
PSAPZVBRPI
SYSAUX
SYSTEM
But SAPinst shows me only the following in the advanced DB configuration:
PSAPSR3
PSAPSR3700
PSAPSR3FACT
PSAPSR3ODS
PSAPSR3USR
SYSAUX
PSAPUNDO
PSAPTEMP
SYSTEM
Why? -
Syscopy (during Unicode conversion): import on same tablespace layout
Hi forum,
we're working on a Unicode conversion of our ECC landscape, running on Solaris 10 SPARC and Oracle 10.2.0.4.
We use the procedure described in UC for SAP NetWeaver 7.0 SP 14 and higher. The export runs well, without errors or problems, on our DB (1.6 TB of used space).
During the import, SAPinst shows me a tablespace layout different from the source system:
PSAPSR3
PSAPSR3700
PSAPSR3FACT
PSAPSR3ODS
PSAPSR3USR
SYSAUX
PSAPUNDO
PSAPTEMP
SYSTEM
But my source system was:
PSAPBTABD
PSAPBTABI
PSAPCLUD
PSAPCLUI
PSAPDDICD
PSAPDDICI
PSAPDIMD
PSAPDIMI
PSAPDOCUD
PSAPDOCUI
PSAPEL700D
PSAPEL700I
PSAPES700D
PSAPES700I
PSAPFACTD
PSAPFACTI
PSAPLOADD
PSAPLOADI
PSAPODSD
PSAPODSI
PSAPPOOLD
PSAPPOOLI
PSAPPROTD
PSAPPROTI
PSAPSOURCED
PSAPSOURCEI
PSAPSTABD
PSAPSTABI
PSAPTEMP
PSAPUNDO
PSAPUSER1D
PSAPUSER1I
PSAPZ***********
PSAPZ***********
PSAPZ***********
SYSAUX
SYSTEM
We have already read note 425079. So, can we use the source system layout? Can SAPinst use our previous tablespace layout during the import? ... or do we have to do it manually?
Regards.
Hi,
> It is far better that you create the target database using the new layout. The old tablespace layout, with multiple tablespaces and data/index separation, is no longer adapted to modern environments, and a Unicode conversion gives you a unique opportunity to switch to a more appropriate structure for your DB.
So... to do it: I leave the standard tablespaces (without creating my old PSAPZ******* tablespaces) and then enlarge them enough to load our tables? ... Do I have to modify any other file/script for the import?
Regards. -
MacOS 7 cannot find Unicode Converter
Trying to launch a Classic application on MacOS 7.0, we get the message "Cannot start application. Cannot find Unicode Converter." Is it possible to obtain whatever it is the System is actually looking for? Or can the System be faked out on this issue? It is a graphics application; I don't really care about text issues.
Unicode was introduced to the Mac OS in a version of Mac OS 8, if my faulty memory recalls correctly. It will more than likely be a system extension, but also more than likely NOT one called "Unicode Converter".
If you can figure out what library file to use, you can put a copy into the extensions folder on your System 7 machine and see what happens when you start your application.
But I suspect that your system will simply crash if you try this. It may not even boot.
Gary -
Polish after Unicode conversion - language from a different codepage how-to
Hi,
we have just converted a sandbox system from ASCII (SAP codepage 1100) to Unicode (SAP ECC 6.0, Solaris 10 SPARC, Oracle 10.2.0.4, etc.).
In ASCII, the TCPDB table contained codepage 1100; now it is empty. Is that OK?
Now we would like to install the Polish language... So I load it with SMLT from client 000, apply the delta support packages, run supplementation and then add L to the instance profile. Is that OK? ... Must I also install the Polish locale in Solaris? ... Any other work?
Regards.
> Now we would like to install the Polish language... So I load it with SMLT from client 000, apply the delta support packages, run supplementation and then add L to the instance profile. Is that OK?
- execute program RSCPINST, add "PL" and activate the changes
- open RZ10, edit your profile, "Basic maintenance" and select PL, save and restart instance
- then add the language as described with SMLT (import, delta support package, supplementation and client distribution for your production client)
> ... Must I also install the Polish locale in Solaris? ... Any other work?
No. You are now on Unicode; the necessary libraries are the libicu* libraries in your kernel directory. Those are (if you want) the locales for the operating system, so nothing more is necessary.
To display Polish characters correctly in the SAPGUI you need to
- install the Polish codepage on the frontend system
- switch the default font from "Arial Monospace" to some other font that supports Latin-2 (such as Courier New)
Markus -
ZHT16DBT - Unicode conversion in Java
Please help me with how to convert strings between different character sets.
I have a database in the ZHT16DBT character set. How do I convert the ZHT16DBT data to Unicode in Java when running a query against that database?
The following is the code I tried, but it does not work:
// Decode the first column through the ZHT16DBT character set
int oracleID = CharacterSet.ZHT16DBT_CHARSET;
CharacterSet gb = CharacterSet.make(oracleID);
OracleResultSet rs = (oracle.jdbc.driver.OracleResultSet) st.executeQuery("select * from test");
rs.next(); // position on the first row before reading
oracle.sql.CHAR ch = (oracle.sql.CHAR) rs.getOracleObject(1);
ch = new oracle.sql.CHAR(ch.getBytes(), gb);
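If the Oracle-specific classes keep misbehaving, a plain-JDK alternative is to fetch the raw column bytes and decode them with a compatible Java charset. This sketch assumes ZHT16DBT data is Big5-compatible (the two sets are close but not identical, so verify against your data):

```java
import java.nio.charset.Charset;

public class Zht16Decode {
    // Decode raw column bytes (e.g. from rs.getBytes(1)) into a Unicode
    // Java String. Assumption: ZHT16DBT maps closely enough to Big5.
    public static String decode(byte[] raw) {
        return new String(raw, Charset.forName("Big5"));
    }
}
```

Once the bytes are decoded into a java.lang.String, it is Unicode (UTF-16 internally) and no further conversion is needed.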
Thanks for your help...
No - because whatever that char is, it's not an 'n'. Also, this has nothing to do with fonts, but with encoding, although not even that is the issue here. You want to do a mapping of some sort.
(And there are no functions in Java. The things one calls are named "methods".) -
Unicode conversion for the Czech language
Hi all,
my system is not Unicode and I have to convert it into a Unicode one, because we are planning a roll-out project for our Czech branch. We have an ECC 5.0 system with English, German, Italian and Spanish languages.
I have understood, more or less, what we have to do technically, but I would like to know more about the growth in hardware space requirements.
How much will my system reasonably grow after the Unicode conversion and Czech language installation? 10%, 20%?
Thanks a lot.
Hi,
Have a look at the link below which will help to answer the question.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10317ed9-1c11-2a10-9693-ec0d9a3bc537
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/589d18d9-0b01-0010-ac8a-8a22852061a2
If the Unicode encoding form is UTF-8, database size growth will be about 10% of the original size.
If it is UTF-16, it may grow by 60 to 70% of the original size.
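The difference between those two figures is easy to see from encoded string lengths; a quick sketch (the sample strings are arbitrary):

```java
import java.nio.charset.StandardCharsets;

public class GrowthDemo {
    public static void main(String[] args) {
        String ascii = "ABAP";  // plain ASCII text
        String czech = "Plzeň"; // contains one Latin-2 character
        // UTF-8: 1 byte per ASCII char, 2 bytes for ň -> modest growth
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);    // 4
        System.out.println(czech.getBytes(StandardCharsets.UTF_8).length);    // 6
        // UTF-16: 2 bytes per BMP char regardless -> ASCII data doubles
        System.out.println(ascii.getBytes(StandardCharsets.UTF_16LE).length); // 8
        System.out.println(czech.getBytes(StandardCharsets.UTF_16LE).length); // 10
    }
}
```

Since most of an SAP database is ASCII-heavy, UTF-8 stays close to the original size while UTF-16 roughly doubles the character data, which is where the 60-70% estimate comes from.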
Rgds
Radhakrishna D S -
Hi SCN,
I'm looking at the possibility of running a Combined Upgrade & Unicode Conversion (CUUC) in-situ; i.e. without a migration to new hardware.
However, in all my years of SAP Basis projects, every Unicode conversion I've seen has involved a migration to new hardware (which makes good sense, as the process is the same as an OS/DB migration with R3load/MigMon).
What I want to know is: has anyone ever run one in place, i.e. keeping the same server and running the CUUC entirely in place?
This seems logically problematic as you need a target system to import to.
I can imagine a process along these lines though...:
1. Take SAP layer offline
2. Run full backup of system
3. Export system
4. Drop database* (to make clean ready for new import)
5. Run SPUMG/SUMG/SAPINST to setup Unicode database ready to receive import...
6. ....and continue to run import into that database
7. Complete Unicode conversion process
However the big issue here for me is
(a) *dropping the live production system is far from attractive... I'm presuming everyone would do as I've always seen and combine this with a migration to new, improved hardware, so that a backout to the old system is simple
(b) I'm not sure at what point with the SAP tools (SPUMG/SUMG/SAPINST) this process would become convoluted with it all being in-situ on the same box, i.e. I'm not sure if it's something the SAP tools are really designed to cater for.
Any input/discussion on these points would be very welcome.
Regards, doonan_79
FYI Community: the feeling after reading and research is to use a temporary server to import the Unicode-converted system data into, then swing the disks back to the original host, making all required hostname updates during the process.
Allowing for parallel export/import, but going back to the original configured host to complete the process. -
Minimize downtime during UNICODE conversion
Hello forum !
I am looking at the Unicode conversion of a large(?) SAP Oracle database for ECC. The database itself is about 10 TB. Unicode-converting this in one single run will take too long to fit inside the company-defined tolerable downtime window.
When looking at the data, I find that the largest 50 tables make up about 75% of the total database size. Most of the 50 largest tables hold data going back several years, and I am able to partition these tables with one partition per year.
The good thing about this is that I can make the partitions with old data (more than one year old) READ-ONLY.
My assumption is that I will be able to Unicode-convert 60-70% of the data and create the indexes for these partitions without taking down the database to be converted.
Now, during the downtime phase I will be able to exclude the old partitions of the largest tables from the export, since I will have already created a new Unicode database with these objects, thereby drastically reducing the required downtime.
Is there anyone who has any experience with an approach like the one I have sketched out here?
Kjell Erik Furnes
Can you please be a bit more specific about your installation? How long is your possible downtime (export time should be about 2/3 of that)? How long is your current export estimate? Do you have suitable hardware (CPU/disk)?
What you absolutely need to have:
[1043380 - Efficient Table Splitting for Oracle Databases|https://service.sap.com/sap/support/notes/1043380] -> export tables with ROWID splitting
[855772 - Distribution Monitor|https://service.sap.com/sap/support/notes/855772] -> enables distributed and parallel export/import
So on the hardware side you will need a storage subsystem that is as fast as possible, with two copies of the disks (one for exporting and one for importing). A bunch of application servers with fast CPUs helps a lot, everything connected with at least gigabit ethernet.
If you knew all this already, sorry for the spam. But I achieved throughputs of about 400 GB/hour on medium-sized hardware; I think throughputs of 1 TB/hour should be possible (not talking about cluster tables).
Regards, Michael -
BI Unicode conversion impact on Microsoft Office tools
Hello Friends,
The BI version is BI 7.0, but we are using both BEx 3.x and BEx 7.0. BI 7.0 is being Unicode-converted.
Questions
Does Unicode conversion have any impact on running new/existing reports in 3.x and 7.x?
Does Unicode conversion have any impact on running new/existing WAD reports?
Does Unicode conversion have any impact on any of the Microsoft Office products used by BI?
Thanks!
Since Excel is the starting point for BEx Analyzer, Excel 2007 would need to be tested. In BEx Web, if you have Excel as a save-as option for queries, you will need to test that too. It also requires that the .NET Framework 3.0 be on the end-user machine.
See [OSS Note 1013201|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1013201] -
Error in unicode export...
Hi,
We are doing an in-place conversion of an ERP2005 SR2 system. We upgraded the system from 4.6C yesterday and are now performing the Unicode conversion.
Unfortunately we got an error in the Unicode export... see below:
From the SAPINST_DEV.LOG file:
ERROR 2007-08-25 17:08:03 [iaxxbdbld.cpp:1001]
CR3ldStep::startR3ldProcesses lib=iamodload module=CR3ldStep
MSC-01015 Process finished with error(s), check log file G:\usr\sap\SAPinst\CPC\EXP/SAPSDIC.log
From the SAPSDIC.LOG file:
/sapmnt/BNP/exe/R3load: START OF LOG: 20070825202939
/sapmnt/BNP/exe/R3load: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#13 $ SAP
/sapmnt/BNP/exe/R3load: version R7.00/V1.4
Compiled Aug 8 2007 00:22:00
/sapmnt/BNP/exe/R3load -datacodepage 4102 -e /usr/sap/SAPinst/CPC/EXP/SAPSDIC.cmd -l /usr/sap/SAPinst/CPC/EXP/SAPSDIC.log -inplace
DbSl Trace: CPFB755 occured. Refer to job log.
(DB) INFO: connected to DB
(EXP) INFO: check NameTab widths: Result=0.
(RC) WARNING: unexpected "ext:" entry at line 6 in file /usr/sap/SAPinst/CPC/EXP/SAPSDIC.cmd,
entry will be ignored
(RSCP) WARN: UMGCOMCHAR read check, skip: no data found; probably old SPUMG.
(RSCP) INFO: "/usr/sap/SAPinst/CPC/EXP/SAPSDIC004.xml" created.
(RSCP) INFO: I18N_NAMETAB_TIMESTAMPS not in env: checks are ON (Note 738858)
(RSCP) INFO: UMGSETTINGS nametab creation: ok.
(RSCP) INFO: Global fallback code page = 1100
(RSCP) INFO: Common character set is not 7-bit-ASCII
(RSCP) INFO: Collision resolution method is 'fine'
(RSCP) INFO: R3trans code pages = Normal
(RSCP) INFO: EXPORT TO ... code pages = Normal
(RSCP) INFO: Check for invalid language keys: active, by default
(RSCP) INFO: I18N_NAMETAB_NORM_ALLOW = 999999999
(RSCP) INFO: I18N_NAMETAB_NORM_LOG = 1000000002
(RSCP) INFO: I18N_NAMETAB_ALT_ALLOW = 10000
(RSCP) INFO: I18N_NAMETAB_ALT_LOG = 10003
(RSCP) INFO: I18N_NAMETAB_OLD_ALLOW = 0
(RSCP) INFO: I18N_NAMETAB_OLD_LOG = 500
(GSI) INFO: dbname = "BNPSAP001 "
(GSI) INFO: vname = "DB400 "
(GSI) INFO: hostname = "SAP001 "
(GSI) INFO: sysname = "OS400"
(GSI) INFO: nodename = "SAP001"
(GSI) INFO: release = "3"
(GSI) INFO: version = "5"
(GSI) INFO: machine = "006500039C6C"
(GSI) INFO: instno = "0020141614"
(EXP) INFO: starting NameTab check. Allow 999999999 misses in DDNTT, 10000 misses in DDNTT_CONV_UC, 0 outdated alternate NameTab entries according to CRTIMESTMP.
(EXP) INFO: /SSF/DHEAD missing in DDNTT_CONV_UC
(EXP) INFO: /SSF/DTAB missing in DDNTT_CONV_UC
(EXP) INFO: /SSF/PTAB missing in DDNTT_CONV_UC
(EXP) INFO: TIBAN_ACTIVE missing in DDNTT_CONV_UC
(EXP) ERROR: entry for COPABBSEG in DDNTT is newer than in DDNTT_CONV_UC: 20070825073714 > 20070825073318
(EXP) ERROR: entry for COPABBSEG_GLX in DDNTT is newer than in DDNTT_CONV_UC: 20070825073756 > 20070825073318
(EXP) ERROR: entry for COPACRIT in DDNTT is newer than in DDNTT_CONV_UC: 20070825073801 > 20070825073318
(EXP) ERROR: entry for COPAOBJ in DDNTT is newer than in DDNTT_CONV_UC: 20070825073804 > 20070825073318
(EXP) ERROR: entry for EE72_COPAKEY in DDNTT is newer than in DDNTT_CONV_UC: 20070825073756 > 20070825072528
(EXP) ERROR: entry for JBDCHARDERI in DDNTT is newer than in DDNTT_CONV_UC: 20070825073804 > 20070825071134
(EXP) ERROR: entry for JBDCHARPAFO in DDNTT is newer than in DDNTT_CONV_UC: 20070825073805 > 20070825071134
(EXP) ERROR: entry for JBD_STR_FO_PA_CHAROBJ in DDNTT is newer than in DDNTT_CONV_UC: 20070825073805 > 20070825071127
(EXP) ERROR: entry for JHF11_KOMP_STR in DDNTT is newer than in DDNTT_CONV_UC: 20070825073820 > 20070825071037
(EXP) ERROR: entry for JVKOMP in DDNTT is newer than in DDNTT_CONV_UC: 20070825073822 > 20070825070937
(EXP) ERROR: entry for KOMG in DDNTT is newer than in DDNTT_CONV_UC: 20070825073808 > 20070825070802
(EXP) ERROR: entry for KOMGF in DDNTT is newer than in DDNTT_CONV_UC: 20070825073823 > 20070825070802
(EXP) ERROR: entry for KOMGFNEW in DDNTT is newer than in DDNTT_CONV_UC: 20070825073836 > 20070825070802
(EXP) ERROR: entry for KOMGFOLD in DDNTT is newer than in DDNTT_CONV_UC: 20070825073838 > 20070825070802
(EXP) ERROR: entry for KOMP in DDNTT is newer than in DDNTT_CONV_UC: 20070825073814 > 20070825070755
(EXP) ERROR: entry for KOMPAKE in DDNTT is newer than in DDNTT_CONV_UC: 20070825073805 > 20070825070755
(EXP) ERROR: entry for OICIL in DDNTT is newer than in DDNTT_CONV_UC: 20070825073825 > 20070825070219
(EXP) ERROR: entry for OIRCPMITEM in DDNTT is newer than in DDNTT_CONV_UC: 20070825073826 > 20070825070148
(EXP) ERROR: entry for STR_KOMG in DDNTT is newer than in DDNTT_CONV_UC: 20070825073828 > 20070825063750
(EXP) ERROR: entry for SVVSC_COPA in DDNTT is newer than in DDNTT_CONV_UC: 20070825073757 > 20070825063738
(EXP) ERROR: entry for TRCON_CONTRACT_DATA in DDNTT is newer than in DDNTT_CONV_UC: 20070825073846 > 20070825053905
(EXP) ERROR: entry for TRCON_CONTRACT_DATA_MM in DDNTT is newer than in DDNTT_CONV_UC: 20070825073847 > 20070825053905
(EXP) ERROR: entry for TRCON_CONTRACT_DATA_SD in DDNTT is newer than in DDNTT_CONV_UC: 20070825073848 > 20070825053905
(EXP) ERROR: entry for TRCON_IT_KOMP in DDNTT is newer than in DDNTT_CONV_UC: 20070825073830 > 20070825062450
(EXP) ERROR: entry for TRCON_OUTP_DBDATA in DDNTT is newer than in DDNTT_CONV_UC: 20070825073848 > 20070825053905
(EXP) ERROR: entry for WB2_EKOMP in DDNTT is newer than in DDNTT_CONV_UC: 20070825073831 > 20070825060540
(EXP) ERROR: entry for WCB_COND_DATA in DDNTT is newer than in DDNTT_CONV_UC: 20070825073848 > 20070825053905
(EXP) ERROR: entry for WCB_COND_DISP in DDNTT is newer than in DDNTT_CONV_UC: 20070825073839 > 20070825053905
(EXP) ERROR: entry for WCB_KEY_CHANGE in DDNTT is newer than in DDNTT_CONV_UC: 20070825073843 > 20070825053905
(EXP) ERROR: entry for WCB_KOMG in DDNTT is newer than in DDNTT_CONV_UC: 20070825073832 > 20070825053905
(EXP) ERROR: entry for WCB_KOMG_HELP in DDNTT is newer than in DDNTT_CONV_UC: 20070825073834 > 20070825060526
(EXP) ERROR: entry for WRF_POHF_KOMP_STY in DDNTT is newer than in DDNTT_CONV_UC: 20070825073844 > 20070825060357
(EXP) INFO: NameTab check finished. Result=2 #20070825203013
(EXP) INFO: check for inactive NameTab entries: Result=0.
(DB) INFO: disconnected from DB
/sapmnt/BNP/exe/R3load: job finished with 1 error(s)
/sapmnt/BNP/exe/R3load: END OF LOG: 20070825203013
I have an OSS message at "very high" priority on this error, but they are not helping very much! SAP has proposed note 738858, where we set two environment variables. We have done this, and it did not help.
We have upgraded and Unicode-converted two test systems and one development system in the last two months without this error.
Do you have any ideas?
Best regards
Henrik Hviid
Hello!
I'm doing a CU&UC (as described in the CU&UC Guide), and I'm now in the phase of upgrading from 4.6C MDMP to ECC 6.0 non-Unicode (later, in another phase, I have to convert to Unicode). I've generated all the scans and launched SAPup (the upgrade process). The phase UPGRADE/SHADOW: START OF PHASE RUN_RADCUCNT_ALL of SAPup has created the job RADCUCNTUPG, which has been running for 20 hours. Is it possible to do this phase more quickly?
Otherwise, in chapter 3.2.2, phase RUN_RAD_CUCNT_NEW is going to regenerate the DDIC objects. So, in the additional preparation steps in SAP NW 7.0 non-UC (chapter 4.1 of the guide), will it be necessary to launch the program RADCUCNT again with the variant UNICODE-02, or do we only have to do the seven steps that appear in the guide?
I've seen notes 932779 and 837173 about this, but I don't see the solution. On pages 7 and 8 of OSS note 932779 it says that:
- during CU&UC the nametab is touched twice, once for the upgrade and once for the Unicode conversion (in phase RADCUCNT_ALL and in phase RADCUCNT_NEW);
- later it appears that during the Unicode conversion preparation (SPUM4/SPUMG) RADCUCNT runs again, twice.
So in chapter 4.1, about the additional preparation steps in NW 7.0 non-UC, this step is not listed (only 7 steps appear). Is it really necessary at this point in the CU&UC?
Thanks!
Alfredo. -
Unicode conversion causing OCRA, OCRB, and MICR lines to output as symbols
Hi,
I am testing the printing of SAPScript invoices in a Unicode-converted sandbox copy of our production system. When viewing the OTF output of an invoice, I noticed that the line for the postal barcode using the OCR-B font is producing # symbols instead of the numbers representing the postal zip code. This works in the non-Unicode environment on our production server.
The printer is an HPLJ8150. The device type for the printer that is used for the invoice, is ZHP8150, which uses the base device type of HPLJ8000. The character set is 1116, which is ISO 8859-1.
Can anyone tell me how to go about correcting this problem?
Any help will be greatly appreciated.
Thanks,
Mark
Hi Nils,
Thank you for the information. I have created a new message in the Form Printing forum. Since this is something that changed after the Unicode conversion, I think I will leave this message open a little while longer in case someone who has experienced this type of issue is looking in this forum.
Two things I should add. One, the font used for the device type is working in a non-Unicode production system at this time. Two, the OTF output of the spool is not printed on our printer but is instead converted to XML, and the resulting file is FTP'd to a third party for printing. So the zip code represented by the barcode font is just the numerical text in the OTF output, which is sent in the XML file.
Regards,
Mark -
I'm reading in a file that's encoded in UTF-8 and begins with the byte order mark EF BB BF. I'm curious to know why a byte order mark is needed for something encoded in UTF-8: aren't BOMs only used to figure out endianness, which isn't an issue with UTF-8, as some tutorials I've seen say? But then again, UTF-8 can use multiple bytes to specify a character, in which case endianness does matter, right?
My question is therefore: does endianness matter with UTF-8?
Also, with the online converter at http://macchiato.com/unicode/convert.html, why can you convert from FE FF in UTF-16 to EF BB BF in UTF-8, but not do the conversion the opposite way? Is it a problem with the converter, or something to do with the encodings themselves?
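For reference, the UTF-8 BOM carries no byte-order information at all: EF BB BF is the same byte sequence on any machine, and merely marks the stream as UTF-8. Readers that don't want it can simply strip it; a sketch:

```java
import java.util.Arrays;

public class BomStrip {
    // Drop a leading UTF-8 BOM (EF BB BF) if present; otherwise return as-is.
    public static byte[] stripUtf8Bom(byte[] data) {
        if (data.length >= 3
                && data[0] == (byte) 0xEF
                && data[1] == (byte) 0xBB
                && data[2] == (byte) 0xBF) {
            return Arrays.copyOfRange(data, 3, data.length);
        }
        return data;
    }
}
```

UTF-8's multi-byte sequences have a fixed byte order defined by the encoding itself, so endianness never enters into it.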
Thanks.
http://www.unicode.org/faq/utf_bom.html#25
As for that URL, it's not likely that anyone here would know what's wrong with it. It could be that it doesn't assume UTF-8 will have a BOM (as one isn't needed anyway), so it doesn't treat it as such and converts the bytes as if they were regular characters. -
How to load a file through Reader when the file name contains non-English characters
Hi,
I want to know how to load a file on an English machine through Reader when the file name contains non-English characters (e.g. 置顶.pdf), as LoadFile gives an error when passed the Unicode-converted file name.
Regards,
Arvind
You don't mention what version of Reader. And you are using the AcroPDF.dll, yes?
Sent from my iPad