Unicode and converter
Hi all, I'm Chrno, and I have a question. What I'm trying to do seems impossible to me. Hope you guys can help.
Okay, so this is the question:
I want to convert a string like this (in Vietnamese), "tá lả" or something like "chúng tôi luôn chào đón bạn", into a string like this: "tá l&#7843;"
I produce the first string as Unicode text and don't know how to convert it into the well-formed entity form of the second string.
Thanks for reading, and I hope you can help me out. :)
Edited by: ChrnoLove on Apr 24, 2009 9:41 AM
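No answer made it into this thread, but the conversion the poster asks about (replacing each non-ASCII character with its decimal numeric character reference, e.g. 'ả' becoming &#7843;) can be sketched in Java. The class and method names here are made up for illustration:

```java
public class EntityEncoder {
    // Replace every non-ASCII code point with its decimal numeric
    // character reference ("&#NNNN;"); ASCII passes through unchanged.
    static String toNumericEntities(String s) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            int cp = s.codePointAt(i);   // handles surrogate pairs
            if (cp < 128) {
                sb.append((char) cp);
            } else {
                sb.append("&#").append(cp).append(';');
            }
            i += Character.charCount(cp);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // 'á' is U+00E1 (225) and 'ả' is U+1EA3 (7843)
        System.out.println(toNumericEntities("tá lả")); // t&#225; l&#7843;
    }
}
```

If characters like á should stay literal (as in the example output above), raise the 128 threshold to whatever the target encoding actually supports.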
yet all their ID tags were edited in iTunes when I had them on my PC
I think the problem there is that on a PC they can have a legacy Japanese encoding while the Mac only accepts Unicode, and the Mac and PC also use slightly different forms of Unicode. But you are right: I don't see why, if all were OK in Windows, only some would be OK on the Mac.
There is a Japanese version of these forums where you might ask if you or a colleague knows Japanese well:
http://discussions.info.apple.co.jp/
Similar Messages
-
Cannot convert between unicode and non-unicode string datatypes
My source is having 3 fields :
ItemCode nvarchar(50)
DivisionCode nvarchar(50)
Salesplan (float)
My destination is :
ItemCode nvarchar(50)
DivisionCode nvarchar(50)
Salesplan (float)
But still I am getting this error :
Column ItemCode cannot convert between unicode and non-unicode string datatypes.
As I am new to SSIS , please show me step by step.
Thanks in advance.
Hi Subu,
There is an information gap: what is your source? Are there any transformations in between?
If it's a SQL Server source and destination and the data types are as you have mentioned, I don't think you should be getting such errors. To be sure, check the advanced properties of your source and check the metadata of your source columns.
Just test with a simple OLE DB source such as:
SELECT TOP 1 ItemCode = cast('111' as nvarchar(50)), DivisionCode = cast('222' AS nvarchar(50)), Salesplan = cast(3.3 As float) FROM sys.sysobjects
with the destination as you described - it should work.
Somewhere in your package the source column metadata is not right, and you need to convert it or fix the source.
Hope that helps
-- Kunal
-
Hello,
I am working on one project and there is need to extract Sharepoint list data and import them to SQL Server table. I have few lookup columns in the list.
Steps in my Data Flow :
Sharepoint List Source
Derived Column, with this expression:
SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1))
Data Conversion
OLE DB Destination
But I am getting the error of not converting between unicode and non-unicode string data types.
I am not sure what I am missing here.
In Data Conversion, what should be the Data Type for the Look up column?
Please suggest here.
Thank you,
Mittal.
You have a Data Conversion transformation. In the destination, are you assigning the results of the Derived Column transformation or of the Data Conversion transformation? To avoid this error you need to use the Data Conversion output.
You can eliminate the need for the data conversion with the following in the derived column (creating a new column):
(DT_STR,100,1252)(SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1)))
The 100 is the length and 1252 is the code page (I almost always use 1252) for interpreting the string.
Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com -
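For readers unsure what the DT_STR cast with code page 1252 actually does: converting a Unicode string to a single-byte code page is lossy for any character the code page cannot represent. This is a standalone Java sketch (not SSIS code) of the effect; the example strings are arbitrary:

```java
import java.nio.charset.Charset;

public class CodePageDemo {
    // Round-trip a string through windows-1252, the code page
    // referenced by (DT_STR,100,1252) in the expression above.
    static String through1252(String s) {
        Charset cp1252 = Charset.forName("windows-1252");
        return new String(s.getBytes(cp1252), cp1252);
    }

    public static void main(String[] args) {
        // All characters here exist in code page 1252 (even the euro
        // sign, at 0x80), so the round trip is lossless.
        System.out.println(through1252("héllo €"));

        // These characters have no 1252 representation; the encoder
        // substitutes '?' for each, so data is silently lost.
        System.out.println(through1252("日本")); // ??
    }
}
```

This is why picking the right destination code page matters: the conversion succeeds, but unmappable characters quietly become question marks.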
Column "A" cannot convert between unicode and non-unicode string data types
I am following the SSIS overview video-
https://secure.cbtnuggets.com/it-training-videos/series/microsoft-sql-server-2008-business-development/6143?autostart=true
I have a flat file whose contents I want to import into a SQL database.
I created a Data Flow task with a flat file source and an OLE DB destination.
I am getting the following error:
"column "A" cannot convert between unicode and non-unicode string data types"
In the origin file the data type comes through as string [DT_STR], and in the destination object it is "Unicode string [DT_WSTR]".
I used a Data Conversion object in between, but it doesn't work well.
Please help; what should I do?
I see this often.
Right Click on FlatFileSource --> Show Advanced Editor --> 'Input and Output Properties' tab --> Expand 'Flat File Source Output' --> Expand 'Output Columns' --> Select your field and set the datatype to DT_WSTR.
Let me know if you still have issues.
Thank You,
Jay -
Column cannot convert between unicode and non-unicode string data types
I am converting SSIS jobs from SQL Server 2005 running on a Windows 2003 server to 2008 R2 running on a Windows 2008 server. I have a data flow with an OLE DB Source which selects from an Oracle view. This of course worked fine in 2005. This OLE DB Source will not even read the data from Oracle without the error "Column "UWI" cannot convert between unicode and non-unicode string data types". The select is:
SELECT SOME_VIEW.UWI AS UWI,
CAST(SOME_VIEW.OIL_NET AS NUMERIC(9,8)) AS OIL_NET
FROM SOME_SCHEMA.SOME_VIEW
WHERE OIL_NET IS NOT NULL AND UWI IS NOT NULL
ORDER BY UWI
When I do "Show Advanced Editor" on this component, in the Input and Output Properties, I show the OLE DB External Column as DT_STR length 40 for the UWI column and for the Output Columns I see the UWI as the same DT_STR.
How can I get past this? I have tried doing a cast...cast(SOME_VIEW.UWI AS VARCHAR(40)) AS UWI and this gives the same error. The column in Oracle is a varchar2(40).
Any help is greatly appreciated. Thanks.
Please check the data type for UWI using the Advanced Editor for the OLE DB Source, under both External Columns and Output Columns. Are the data types the same? If not, try changing the data type under Output Columns to match the one shown under External Columns.
Nitesh Rai- Please mark the post as answered if it answers your question -
Cannot convert between unicode and non-unicode string data types.
I'm trying to copy the data from 21 tables in a SQL 2005 database to a MS Access database using SSIS. Before converting the SQL database from 2000 to 2005 we had this process set up as a DTS package that ran every month for years with no problem. The only way I can get it to work now is to delete all of the tables from the Access DB and have SSIS create new tables each time. But when I try to create an SSIS package using the SSIS Import and Export Wizard to copy the SQL 2005 data to the same tables that SSIS itself created in Access I get the "cannot convert between unicode and non-unicode string data types" error message. The first few columns I hit this problem on were created by SSIS as the Memo datatype in Access and when I changed them to Text in Access they started to work. The column I'm stuck on now is defined as Text in the SQL 2005 DB and in Access, but it still gives me the "cannot convert" error.
I was getting the same error while transferring data from SQL 2005 to Excel, but using the following methods I was able to transfer the data. Hopefully they help you too.
1) Using a Data Conversion transformation
The data type you need to select is DT_WSTR (Unicode, in SQL 2005 terms).
2) Using a Derived Column transformation
The expression you need is:
(DT_WSTR, 20) (note: 20 can be replaced by your character size)
Note:
Both methods above create a replica of your existing column (the default name will be "Copy of <column name>").
When mapping the data, do not map the actual column to the destination; instead select the column created by the transformation (the replicated column). -
How to fix "cannot convert between unicode and non-unicode string data types" :/
Environment: SQL Server 2008 R2
Introduction: Staging_table is the table where data from the source file is stored. Individual and ind_subject_scores are the destination tables.
Purpose: to load the data from a .csv source file into the destination tables while keeping their definitions, even though SSIS defines the table fields as varchar(50).
I'm getting the validation error "Cannot convert between a unicode and a non-unicode string data type" for all the columns.
Please help.
Hi,
NVARCHAR = DT_WSTR
VARCHAR = DT_STR
Try the links below:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/ed1caf36-7a62-44c8-9b67-127cb4a7b747/error-on-package-can-not-convert-from-unicode-to-non-unicode-string-type?forum=sqlintegrationservices
http://social.msdn.microsoft.com/Forums/en-US/eb0d1519-4be3-427d-bd30-ae4004ea9e8d/data-conversion-error-how-to-fix-this
http://technet.microsoft.com/en-us/library/aa337316(v=sql.105).aspx
http://social.technet.microsoft.com/wiki/contents/articles/19612.ssis-import-excel-to-table-cannot-convert-between-unicode-and-non-unicode-string-data-types.aspx
sathya - www.allaboutmssql.com ** Mark as answered if my post solved your problem and Vote as helpful if my post was useful **. -
Oracle 10g R2 - Unicode is converted to unknown ?����
Ok, I've read a thousand posts. Tried everything still stuck.
I am using Oracle 10g : 10.2.0.5.0
I'm on a Mac, using SQL Developer 3.1.07
When I enter "ė€£¥©", the insertion into the database converts it to: ?����
This is straight from a SQL INSERT command, in SQLDeveloper, no java code or anything.
I tried adding a file called SQLDeveloper.app/Resources/sqldeveloper/sqldeveloper/sqldeveloper.conf and added these lines:
-Doracle.jdbc.defaultNChar=true
-Doracle.jdbc.convertNcharLiterals=true
Didn't help.
I tried creating a test column with the NVARCHAR2 datatype; same results. A VARCHAR column shows the same corruption, but that's expected.
I tried N'©' - it doesn't work.
I tried INSERT INTO testchar (column1) VALUES ( unistr('©') );
The result is still: �
I checked NLS_NCHAR_CHARACTERSET and it's UTF8.
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET US7ASCII
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET UTF8
NLS_RDBMS_VERSION 10.2.0.5.0
I don't know what else to do. I am not a DBA; I'm just trying to fix a bug, or at least tell a customer what's wrong. Entering special characters is required. Can anyone help? Thank you!
I have read that the insertion into the database does its own conversion but I don't know if that applies to me since I'm in 10g R2
Edited by: 947012 on Aug 14, 2012 11:43 AM
My mac's locales are also :
$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=
Edited by: 947012 on Aug 14, 2012 11:45 AM
Edited by: 947012 on Aug 14, 2012 11:49 AM
NLS_NCHAR_CHARACTERSET specifies the encoding of NVARCHAR2 values, but not of SQL statements. SQL statements, when they arrive at the database server, are always encoded in the database character set (NLS_CHARACTERSET). This is true for the whole statement, including any character literals. Therefore, even if a literal is valid in SQL Developer (which works in Unicode), by the time the literal arrives at the database it has already been stripped of all characters that the database character set does not support. The characters are not actually removed; they are converted to a replacement character. In the case of the US7ASCII database character set, anything non-ASCII is lost.
The trick to avoid this problem is to:
- mark the literals that need to be preserved for NVARCHAR2 columns as NVARCHAR2 literals by prefixing them with "n".
- set the mentioned property to activate the client-side escaping mechanism, which basically does a rough parse of the statement and replaces all N-literals with U-literals. The undocumented U-literals are similar to UNISTR calls: they encode non-ASCII characters with Unicode escape sequences. For example, n'€' (U+20AC) is encoded as u'\20AC'. As u'\20AC' contains only ASCII characters, it arrives at the database unchanged. The SQL parser recognizes U-literals and converts (unescapes) them to NVARCHAR2 constants before further SQL processing, such as INSERT.
-- Sergiusz -
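Sergiusz's description of the N-literal to U-literal rewrite can be illustrated with a small standalone Java sketch. This is not the Oracle JDBC driver's actual code, only an approximation of the escaping it performs when oracle.jdbc.convertNcharLiterals is enabled:

```java
public class ULiteralDemo {
    // Rough illustration: rewrite the text of an N-literal as a
    // U-literal, escaping each non-ASCII UTF-16 unit as \XXXX so the
    // result is pure ASCII and survives character-set conversion.
    static String toULiteral(String text) {
        StringBuilder sb = new StringBuilder("u'");
        for (char c : text.toCharArray()) {
            if (c < 128) {
                sb.append(c);
            } else {
                sb.append(String.format("\\%04X", (int) c));
            }
        }
        return sb.append('\'').toString();
    }

    public static void main(String[] args) {
        System.out.println(toULiteral("€")); // u'\20AC'
    }
}
```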
Unicode and Non-Unicode Instances in one Transport Landscape
We have a 4.7 landscape that includes a shared global development system supporting two regional landscapes. The shared global development system is used for all ABAP/Workbench activity and for global customization used by both regional production systems. The two regional landscapes include primarily three instances - Regional Configuration, Quality Assurance, and Production. The transport landscape includes all systems with transport routes for global and regional.
A conversion to unicode is also being planned for the global development and one regional landscape. It is possible that we will not convert the other regional landscape due to pending discussions on consolidation. This means one of the regional landscapes will be receiving global transports from a unicode-based system.
All information I've located implies no actual technical constraints. Make sure you have the right R3trans versions, don't use non-Latin_1 languages, etc. Basic caveats for a heterogenous environment ....
Is anyone currently supporting a complete, productive landscape that includes unicode and non-unicode systems? If so, any issues or problems encountered with transports across the systems? (insignificant or significant)
Information on actual experiences will be greatly appreciated ....
Many thanks in advance.
Hi Laura,
Although I do not have live/practical experience, this is what I can share.
I have been working on a non-Unicode to Unicode conversion project. While we were in the discussion phase, one possible scenario was that part of the landscape would remain non-Unicode. Based on the research I did, by reading and by directly interacting with some excellent SAP consultants, I came to know there are absolutely no issues in transporting ABAP programs from a Unicode system to a non-Unicode system. In a Unicode system the ABAP code has already been checked and rectified against the stricter syntax checks, and these are downward compatible with ABAP code on lower ABAP versions and non-Unicode systems. Hence I believe there should not be any issues; however, as I mentioned, this is not from practical experience.
Thanks.
Chetan -
Difference between Unicode and non-Unicode
Hi everybody, I want to know the difference between Unicode and non-Unicode and what they are used for. Please explain briefly: what is the transaction code for this, how do I check the version, and how do I convert between Unicode and non-Unicode?
Thanks in advance,
Vishnuprasad.G
Hello Vishnu,
before Release 6.10, SAP software only used codes where every character is displayed by one byte, therefore character sets like these are also called single-byte codepages. However, every one of these character sets is only suitable for a limited number of languages.
Problems arise if you try to work with texts written in different incompatible character sets in one central system. If, for example, a system only has a West European character set, other characters cannot be correctly processed.
As of 6.10, to resolve these issues, SAP introduced Unicode. Each character is generally mapped using 2 bytes, which offers a maximum of 65,536 combinations.
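The single-byte versus two-byte distinction described above can be seen directly by counting bytes; Java is used here only as a neutral way to do that, and the example string is arbitrary:

```java
public class ByteVsChar {
    // Number of bytes the string occupies in the given encoding.
    static int byteLen(String s, String charsetName) {
        return s.getBytes(java.nio.charset.Charset.forName(charsetName)).length;
    }

    public static void main(String[] args) {
        String s = "München"; // 7 characters
        System.out.println(s.length());               // 7
        // a single-byte codepage spends exactly one byte per character
        System.out.println(byteLen(s, "ISO-8859-1")); // 7
        // UTF-8 needs two bytes for the non-ASCII 'ü'
        System.out.println(byteLen(s, "UTF-8"));      // 8
        // a two-byte Unicode encoding, as described above
        System.out.println(byteLen(s, "UTF-16BE"));   // 14
    }
}
```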
Thus, a Unicode-compatible ABAP program is one where all Unicode checks are in effect. Such programs return the same results in UC systems as in non-UC systems. To perform the relevant syntax checks, you must activate the "UC checks" flag in the screens of the program and class attributes.
With transaction /nUCCHECK you can check a program set for syntax errors in a Unicode environment.
Bye,
Peter -
This is driving me nuts.
Created a page with a mix of English and Chinese, used Unicode, and it worked fine. But then I created another page exactly the same, and now the Unicode is not being converted.
First link is fine
http://www.destinationcdg.com/Bonaparte/BonaparteC.cfm
But this link is all screwed up.
http://www.destinationcdg.com/Bonaparte/areaC.cfm
Any ideas please.
DW8.02 CFMX7 and Apache2
Hi guys,
I've just realised that the solution here isn't totally complete. If you are still interested in helping I would be really grateful.
Quick re-cap:
The problem was Java was mis-calculating the length of unicode strings.
e.g.:
String nihao = "??"; // should read 2 Chinese characters; may display here as ??
System.out.println(nihao.length());
... would print 6 or something, but not 2 as it should.
I was recommended to use a parameter when invoking javac, which fixed this problem:
javac -encoding UTF-8 ClassName.java
Now, this solved the problem so far.
However!!!! What I assumed would work, and didn't test until now, is this:
System.out.println(nihao);
But it doesn't work.
So, in a nutshell: if I have a class containing Unicode strings outside the usual Latin set, encode that source file as Unicode, and use the -encoding UTF-8 parameter when compiling, Java still prints ?? to the command line.
Is it my shell or is it Java?
I'm using the Bash shell.
If I had a file called ??.txt (should be 2 Chinese chars) and used ls, then the name (should be 2 Chinese chars) would not display properly; I would get ??.txt.
To get the file name to display properly I would need to use ls -v. The -v flag makes things work.
I've tried it with the java command but java doesn't like it.
This is really doing my head in. If anyone has any ideas please help.
Thanks.
Chinese characters don't seem to be uploading to this website so it makes this post difficult. Where you are supposed to see chinese I have said so. It might display as ??. There are places where I wanted to write ??.
I can't award Duke Dollars to this post as I did it already. I have posted a fresh version of this problem in the Java Programming forum. I have allocated Duke Dollars to that post so best to reply there if you have any ideas :)
Message was edited by:
stanton_ian -
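A footnote on the length-counting part of the thread above: Java's String.length() counts UTF-16 code units, not characters, so once the source file is compiled with the right encoding it returns 2 for two Chinese characters; the original 6 came from the compiler reading each UTF-8 byte as a separate character. A small sketch (the strings are written with \u escapes so the source-file encoding is irrelevant):

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // Two BMP characters (written as escapes; they render as Chinese)
        String nihao = "\u4F60\u597D";
        System.out.println(nihao.length());                          // 2
        System.out.println(nihao.codePointCount(0, nihao.length())); // 2

        // One code point outside the BMP: length() counts its two
        // UTF-16 units, while codePointCount() counts one character
        String emoji = "\uD83D\uDE00";
        System.out.println(emoji.length());                          // 2
        System.out.println(emoji.codePointCount(0, emoji.length())); // 1
    }
}
```

For counting user-visible characters, codePointCount is the safer tool; length() only happens to agree for BMP text.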
Hi experts,
can anybody tell me
what Unicode and non-Unicode mean, from an interview point of view. Just 2 or 3 sentences...
Thanks in advance.
Unicode provides multilingual capability in the SAP system. Apart from that, the important Unicode transaction code is UCCHECK: if you give it a report name, it lists the different error codes. In general we get errors for structure mismatches, obsolete statements, OPEN DATASET, DESCRIBE statements, and so on.
Moreover, you could say we remove all the obsolete function modules. Look at the following; it may help you.
Before the Unicode error
lt_hansp = lt_0201-endda+0(4) - lt_0002-gbdat+0(4).
Solution.
data :abc type i,
def type i.
move lt_0201-endda+0(4) to abc.
move lt_0002-gbdat+0(4) to def.
lt_hansp-endda = abc - def.
Before the Unicode error:
WRITE: /1 'CO:',CO(110).
Solution.
FIELD-SYMBOLS: <fs_co> type any.
assign co to <fs_co>.
WRITE: /1 'CO:',<fs_co>(110).
DESCRIBE 002 In Unicode, DESCRIBE LENGTH can only be used with the IN BYTE MODE or IN CHARACTER MODE addition.
Before the Unicode error:
describe field <tab_feld> length len.
Solution.
describe field <tab_feld> length len IN character mode.
Before the Unicode error:
DESCRIBE FIELD DOWNTABLA LENGTH LONG.
Solution.
DESCRIBE FIELD DOWNTABLA LENGTH LONG IN byte MODE.
DO 002 Could not specify the access range automatically. This means that you need a RANGE addition
Before the Unicode error:
DO 7 TIMES VARYING i FROM aktuell(1) NEXT aktuell+1(1)
Solution.
DO 7 TIMES VARYING i FROM aktuell(1) NEXT aktuell+1(1) RANGE aktuell .
Before the Unicode error:
DO 3 TIMES VARYING textfeld FROM gtx_line1 NEXT gtx_line2.
Solution.
DATA: BEGIN OF text,
gtx_line1 TYPE rp50m-text1,
gtx_line2 TYPE rp50m-text2,
gtx_line3 TYPE rp50m-text3,
END OF text.
DO 3 TIMES VARYING textfeld FROM gtx_line1 NEXT gtx_line2 RANGE text..
Before the Unicode error:
DO ev_restlen TIMES
VARYING ev_zeichen FROM ev_hstr(1) NEXT ev_hstr+1(1).
Solution.
DO ev_restlen TIMES
VARYING ev_zeichen FROM ev_hstr(1) NEXT ev_hstr+1(1) range ev_hstr.
MESSAGEG!2 IT_TBTCO and "IT_ALLG" are not mutually convertible. In Unicode programs, "IT_TBTCO" must have the same structure layout as "IT_ALLG", independent of the length of a Unicode character.
Before the Unicode error:
IT_TBTCO = IT_ALLG.
Solution.
IT_TBTCO-header = IT_ALLG-header.
MESSAGEG!3 FIELDCAT_LN-TABNAME and "WA_DISP" are not mutually convertible in a Unicode program
Before the Unicode error:
IF GEH_TA+15(73) NE RETTER+15(73).
Solution.
FIELD-SYMBOLS: <GEHTA> TYPE ANY,
               <RETTER> TYPE ANY.
ASSIGN: GEH_TA TO <GEHTA>,
        RETTER TO <RETTER>.
IF <GEHTA>+15(73) NE <RETTER>+15(73).
Before the Unicode error:
IMP_EP_R3_30 = RECRD_TAB-CNTNT.
Solution.
FIELD-SYMBOLS: <imp_ep_r3_30> TYPE X,
<recrd_tab-cntnt> TYPE X.
ASSIGN IMP_EP_R3_30 TO <imp_ep_r3_30> CASTING.
ASSIGN RECRD_TAB-CNTNT TO <recrd_tab-cntnt> CASTING.
<imp_ep_r3_30> = <recrd_tab-cntnt>.
Before the Unicode error:
and pernr = gt_pernr
Solution.
and pernr = gt_pernr-pernr
MESSAGEG!7 EBC_F0 and "EBC_F0_255(1)" are not comparable in Unicode programs.
Before the Unicode error:
IF CHARACTER NE LINE_FEED.
Solution.
IF CHARACTER NE LINE_FEED-X.
MESSAGEG!A A line of "IT_ZMM_BINE" and "OUTPUT_LINES" are not mutually convertible. In a Unicode program "IT_ZMM_BINE" must have the same structure layout as "OUTPUT_LINES" independent of the length of a Unicode character.
Before the Unicode error:
*data: lw_wpbp type pc206.
Solution.
data: lw_wpbp type pc205.
Before the Unicode error:
LOOP AT seltab INTO ltx_p0078.
Solution.
DATA: WA_SELTAB like line of SELTAB.
CLEAR WA_SELTAB.
MOVE-CORRESPONDING ltx_p0078 to wa_seltab.
move-corresponding wa_seltab to ltx_p0078.
MESSAGEG?Y The line type of "DTAB" must be compatible with one of the types "TEXTPOOL".
Before the Unicode error:
DATA BEGIN OF dtab OCCURS 100.
DATA: text(100).
INCLUDE STRUCTURE textpool.
DATA END OF dtab.
SET TITLEBAR '001' WITH dtab-text+9.
Solution.
The declaration keeps the included TEXTPOOL structure; refer to its ENTRY field instead of the offset:
DATA BEGIN OF dtab OCCURS 100.
DATA: text(100).
INCLUDE STRUCTURE textpool.
DATA END OF dtab.
SET TITLEBAR '001' WITH dtab-entry.
MESSAGEG@1 TFO05_TABLE cannot be converted to a character-type field.
Before the Unicode error:
WRITE: / PA0015, 'Fehler bei MODIFY'.
Solution.
WRITE: / PA0015+0, 'Fehler bei MODIFY'.
MESSAGEG@3 ZL-C1ZNR must be a character-type data object (data type C, N, D, T or STRING) .
Before the Unicode error:
con_tab TYPE x VALUE '09',
Solution.
con_tab TYPE string VALUE '09',
Before the Unicode error:
data: g_con_ascii_tab(1) type x value '09'.
Solution.
data: g_con_ascii_tab type STRING value '09'.
MESSAGEG@E HELP_ANLN0 must be a character-type field (data type C, N, D, or T).
Before the Unicode error:
WRITE SATZ-MONGH TO SATZ-MONGH CURRENCY P0008-WAERS.
WRITE SATZ-JAH55 TO SATZ-JAH55 CURRENCY P0008-WAERS.
WRITE SATZ-EFF55 TO SATZ-EFF55 CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_EREU TO SATZ-SOFE_EREU CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_ERSF TO SATZ-SOFE_ERSF CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_ERSP TO SATZ-SOFE_ERSP CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_EIN TO SATZ-SOFE_EIN CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_EREU TO SATZ-SOFE_EREU CURRENCY P0008-WAERS.
WRITE SATZ-ERHO_ERR TO SATZ-ERHO_ERR CURRENCY P0008-WAERS.
WRITE SATZ-ERHO_EIN TO SATZ-ERHO_EIN CURRENCY P0008-WAERS.
WRITE SATZ-JAH55_FF TO SATZ-JAH55_FF CURRENCY P0008-WAERS.
Solution.
DATA: SATZ1_MONGH(16),
SATZ_JAH551(16),
SATZ_EFF551(16),
SATZ_SOFE_EREU1(16),
SATZ_SOFE_ERSF1(16),
SATZ_SOFE_ERSP1(16),
SATZ_SOFE_EIN1(16),
SATZ_ERHO_ERR1(16),
SATZ_ERHO_EIN1(16),
SATZ_JAH55_FF1(16).
WRITE SATZ-MONGH TO SATZ1_MONGH CURRENCY P0008-WAERS.
WRITE SATZ-JAH55 TO SATZ_JAH551 CURRENCY P0008-WAERS.
WRITE SATZ-EFF55 TO SATZ_EFF551 CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_EREU TO SATZ_SOFE_EREU1 CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_ERSF TO SATZ_SOFE_ERSF1 CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_ERSP TO SATZ_SOFE_ERSP1 CURRENCY P0008-WAERS.
WRITE SATZ-SOFE_EIN TO SATZ_SOFE_EIN1 CURRENCY P0008-WAERS.
WRITE SATZ-ERHO_ERR TO SATZ_ERHO_ERR1 CURRENCY P0008-WAERS.
WRITE SATZ-ERHO_EIN TO SATZ_ERHO_EIN1 CURRENCY P0008-WAERS.
WRITE SATZ-JAH55_FF TO SATZ_JAH55_FF1 CURRENCY P0008-WAERS.
SATZ-MONGH = SATZ1_MONGH.
SATZ-JAH55 = SATZ_JAH551.
SATZ-EFF55 = SATZ_EFF551.
SATZ-SOFE_EREU = SATZ_SOFE_EREU1.
SATZ-SOFE_ERSF = SATZ_SOFE_ERSF1.
SATZ-SOFE_ERSP = SATZ_SOFE_ERSP1.
SATZ-SOFE_EIN = SATZ_SOFE_EIN1.
SATZ-ERHO_ERR = SATZ_ERHO_ERR1.
SATZ-ERHO_EIN = SATZ_ERHO_EIN1.
SATZ-JAH55_FF = SATZ_JAH55_FF1.
MESSAGEG-0 VESVR_EUR must be a character-type data object (data type C, N, D, T or STRING).
Before the Unicode error:
TRANSLATE vesvr_eur USING '.,'.
TRANSLATE espec_eur USING '.,'.
TRANSLATE fijas_eur USING '.,'.
Solution.
data: vesvreur(16),
especeur(16),
fijaseur(16).
vesvreur = vesvr_eur.
especeur = espec_eur.
fijaseur = fijas_eur.
TRANSLATE vesvreur USING '.,'.
TRANSLATE especeur USING '.,'.
TRANSLATE fijaseur USING '.,'.
vesvr_eur = vesvreur.
espec_eur = especeur.
fijas_eur = fijaseur.
MESSAGEG-D A line of "LT_0021" cannot be converted: the line type must have the same structure layout as "LT_0021" regardless of the length of a Unicode character.
Before the Unicode error:
data: lt_0021 like p0021 occurs 0 with header line.
Solution.
DATA: LT_0021 LIKE PA0021 OCCURS 0 WITH HEADER LINE.
Before the Unicode error:
append sim_data to p0007.
Solution.
DATA:wa_p0007 type p0007.
move-corresponding sim_data to wa_p0007.
append wa_p0007 to p0007.
MESSAGEG-F The structure "CO(110)" does not start with a character-type field. In Unicode programs in such cases, offset/length declarations are not allowed
Before the Unicode error:
TRANSFER COBEZ+8 TO DSN.
Solution.
FIELD-SYMBOLS: <fs_cobez> TYPE ANY.
ASSIGN COBEZ TO <fs_cobez>.
TRANSFER <fs_cobez>+8 TO DSN.
Before the Unicode error:
WRITE: /1 COBEZ+16.
Solution.
FIELD-SYMBOLS <F_COBEZ> TYPE ANY.
ASSIGN COBEZ TO <F_COBEZ>.
WRITE: /1 <F_COBEZ>+16.
MESSAGEG-G The length declaration "171" exceeds the length of the character-type start (=38) of the structure. This is not allowed in Unicode programs.
Before the Unicode error:
write: /1 '-->',
pa0201(250).
Solution.
field-symbols <fs_pa0201> type any.
ASSIGN pa0201 TO <fs_pa0201>.
write: /1 '-->',
<fs_pa0201>(250).
MESSAGEG-H The offset declaration "160" exceeds the length of the character-type start (=126) of the structure. This is not allowed in Unicode programs.
Before the Unicode error:
WRITE:/ SATZ(80),
/ SATZ+80(80),
/ SATZ+160(80),
/ SATZ+240(80),
/ SATZ+320(27).
Solution.
FIELD-SYMBOLS <FS_SATZ> TYPE ANY.
ASSIGN SATZ TO <FS_SATZ>.
WRITE:/ <FS_SATZ>(80),
/ <FS_SATZ>+80(80),
/ <FS_SATZ>+160(80),
/ <FS_SATZ>+240(80),
/ <FS_SATZ>+320(27).
MESSAGEG-I The sum of the offset and length (=504) exceeds the length of the start (=323) of the structure. This is not allowed in Unicode programs .
Before the Unicode error:
/5 PARAMS+80(80),
Solution.
FIELD-SYMBOLS: <PARAMS> TYPE ANY.
ASSIGN PARAMS TO <PARAMS>.
/5 <PARAMS>+80(80),
MESSAGEGWH P0041-DAR01 and "DATE_SPEC" are type-incompatible.
Before the Unicode error:
DO 5 TIMES VARYING I0008 FROM P0008-LGA01 NEXT P0008-LGA02.
Solution.
DO 5 TIMES VARYING I0008-LGA FROM P0008-LGA01 NEXT P0008-LGA02. "D07K963133
Before the Unicode error:
DO VARYING ls_data_aux FROM p0041-dar01 NEXT p0041-dar02.
Solution.
DO VARYING ls_data_aux-dar01 FROM p0041-dar01 NEXT p0041-dar02.
MESSAGEGY/ The type of the database table and work area (or internal table) "P0050" are not Unicode-convertible
Before the Unicode error:
select * from pa9705 client specified
into ls_9705
Solution.
select * from pa9705 client specified
into corresponding fields of ls_9705
Before the Unicode error:
select * from pa0202 client specified
into ls_0202
Solution.
select * from pa0202 client specified
into corresponding fields of ls_0202
OPEN 001 One of the additions "FOR INPUT", "FOR OUTPUT", "FOR APPENDING" or "FOR UPDATE" was expected.
Before the Unicode error:
OPEN DATASET FICHERO IN TEXT MODE.
Solution.
OPEN DATASET FICHERO IN TEXT MODE FOR INPUT ENCODING NON-UNICODE.
OPEN 002 IN... MODE was expected.
Before the Unicode error:
OPEN DATASET P_OUT FOR OUTPUT IN TEXT MODE.
Solution.
OPEN DATASET P_OUT FOR OUTPUT IN TEXT MODE ENCODING non-unicode.
OPEN 004 In "TEXT MODE" the "ENCODING" addition must be specified.
Before the Unicode error:
open dataset dat for output in text mode.
Solution.
open dataset dat for output in text mode ENCODING NON-UNICODE.
UPLO Upload/Ws_Upload and Download/Ws_Download are obsolete, since they are not Unicode-enabled; use the class cl_gui_frontend_services
Before the Unicode error:
move p_filein to disk_datei.
CALL FUNCTION 'WS_UPLOAD'
EXPORTING
filename = disk_datei
FILETYPE = FILETYPE
TABLES
DATA_TAB = DISK_TAB
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_READ_ERROR = 2.
Solution.
DATA: file_name type string.
move p_filein to file_name.
CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
EXPORTING
FILENAME = file_name
FILETYPE = 'ASC'
HAS_FIELD_SEPARATOR = 'X'
HEADER_LENGTH = 0
READ_BY_LINE = 'X'
DAT_MODE = SPACE
CODEPAGE = SPACE
IGNORE_CERR = ABAP_TRUE
REPLACEMENT = '#'
VIRUS_SCAN_PROFILE =
IMPORTING
FILELENGTH =
HEADER =
CHANGING
DATA_TAB = disk_tab[]
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_READ_ERROR = 2
NO_BATCH = 3
GUI_REFUSE_FILETRANSFER = 4
INVALID_TYPE = 5
NO_AUTHORITY = 6
UNKNOWN_ERROR = 7
BAD_DATA_FORMAT = 8
HEADER_NOT_ALLOWED = 9
SEPARATOR_NOT_ALLOWED = 10
HEADER_TOO_LONG = 11
UNKNOWN_DP_ERROR = 12
ACCESS_DENIED = 13
DP_OUT_OF_MEMORY = 14
DISK_FULL = 15
DP_TIMEOUT = 16
NOT_SUPPORTED_BY_GUI = 17
ERROR_NO_GUI = 18
others = 19
Before the Unicode error:
CALL FUNCTION 'WS_DOWNLOAD'
EXPORTING
filename = fich_dat
filetype = typ_fich
TABLES
data_tab = t_down.
Solution.
data: filename1 type string,
filetype1(10).
move fich_dat to filename1.
move typ_fich to filetype1.
CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_DOWNLOAD
EXPORTING
FILENAME = filename1
FILETYPE = filetype1
WRITE_FIELD_SEPARATOR = 'X'
CHANGING
DATA_TAB = t_down[].
Before the Unicode error:
*CALL FUNCTION 'UPLOAD'
TABLES
DATA_TAB = datos
EXCEPTIONS
CONVERSION_ERROR = 1
INVALID_TABLE_WIDTH = 2
INVALID_TYPE = 3
NO_BATCH = 4
UNKNOWN_ERROR = 5
GUI_REFUSE_FILETRANSFER = 6
OTHERS = 7
Solution.
DATA: file_table type table of file_table,
filetable type file_table,
rc type i,
filename type string.
CALL METHOD CL_GUI_FRONTEND_SERVICES=>FILE_OPEN_DIALOG
CHANGING
FILE_TABLE = file_table
RC = rc
EXCEPTIONS
FILE_OPEN_DIALOG_FAILED = 1
CNTL_ERROR = 2
ERROR_NO_GUI = 3
NOT_SUPPORTED_BY_GUI = 4
others = 5
READ table file_table into filetable index 1.
move filetable to filename.
CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
EXPORTING
FILENAME = filename
FILETYPE = 'ASC'
HAS_FIELD_SEPARATOR = 'X'
HEADER_LENGTH = 0
READ_BY_LINE = 'X'
DAT_MODE = SPACE
CODEPAGE = SPACE
IGNORE_CERR = ABAP_TRUE
REPLACEMENT = '#'
VIRUS_SCAN_PROFILE =
IMPORTING
FILELENGTH =
HEADER =
CHANGING
DATA_TAB = datos[]
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_READ_ERROR = 2
NO_BATCH = 3
GUI_REFUSE_FILETRANSFER = 4
INVALID_TYPE = 5
NO_AUTHORITY = 6
UNKNOWN_ERROR = 7
BAD_DATA_FORMAT = 8
HEADER_NOT_ALLOWED = 9
SEPARATOR_NOT_ALLOWED = 10
HEADER_TOO_LONG = 11
UNKNOWN_DP_ERROR = 12
ACCESS_DENIED = 13
DP_OUT_OF_MEMORY = 14
DISK_FULL = 15
DP_TIMEOUT = 16
NOT_SUPPORTED_BY_GUI = 17
ERROR_NO_GUI = 18
others = 19
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
Before the Unicode error:
CALL FUNCTION 'DOWNLOAD'
EXPORTING
filename = p_attkit
filetype = 'ASC'
TABLES
data_tab = tb_attrkit
EXCEPTIONS
invalid_filesize = 1
invalid_table_width = 2
invalid_type = 3
no_batch = 4
unknown_error = 5
OTHERS = 6.
Solution.
DATA : lv_filename TYPE string,
lv_filen TYPE string,
lv_path TYPE string,
lv_fullpath TYPE string.
DATA: Begin of wa_testata,
lv_var(10) type c,
End of wa_testata.
DATA: testata like standard table of wa_testata.
MOVE p_attkit TO lv_filename.
CALL METHOD cl_gui_frontend_services=>file_save_dialog
EXPORTING
*     WINDOW_TITLE =
*     DEFAULT_EXTENSION =
default_file_name = lv_filename
*     WITH_ENCODING =
*     FILE_FILTER =
*     INITIAL_DIRECTORY =
PROMPT_ON_OVERWRITE = 'X'
CHANGING
filename = lv_filen
path = lv_path
fullpath = lv_fullpath
*     USER_ACTION =
*     FILE_ENCODING =
EXCEPTIONS
cntl_error = 1
error_no_gui = 2
not_supported_by_gui = 3
OTHERS = 4.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
*     BIN_FILESIZE =
filename = lv_fullpath
filetype = 'ASC'
APPEND = ' '
WRITE_FIELD_SEPARATOR = ' '
HEADER = '00'
TRUNC_TRAILING_BLANKS = ' '
WRITE_LF = 'X'
COL_SELECT = ' '
COL_SELECT_MASK = ' '
DAT_MODE = ' '
CONFIRM_OVERWRITE = ' '
NO_AUTH_CHECK = ' '
CODEPAGE = ' '
IGNORE_CERR = ABAP_TRUE
REPLACEMENT = '#'
WRITE_BOM = ' '
TRUNC_TRAILING_BLANKS_EOL = 'X'
WK1_N_FORMAT = ' '
WK1_N_SIZE = ' '
WK1_T_FORMAT = ' '
WK1_T_SIZE = ' '
*   IMPORTING
*     FILELENGTH =
TABLES
data_tab = tb_attrkit
fieldnames = testata
EXCEPTIONS
file_write_error = 1
no_batch = 2
gui_refuse_filetransfer = 3
invalid_type = 4
no_authority = 5
unknown_error = 6
header_not_allowed = 7
separator_not_allowed = 8
filesize_not_allowed = 9
header_too_long = 10
dp_error_create = 11
dp_error_send = 12
dp_error_write = 13
unknown_dp_error = 14
access_denied = 15
dp_out_of_memory = 16
disk_full = 17
dp_timeout = 18
file_not_found = 19
dataprovider_exception = 20
control_flush_error = 21
OTHERS = 22.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
Unicode and non-unicode string data types Issue with 2008 SSIS Package
Hi All,
I am converting a 2005 SSIS package to 2008. I have a task with SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine on my local machine when I use the data conversion task to convert to DT_STR. But when I deploy the dtsx file on the server and try to run it from a SQL Agent job, it gives me the unicode and non-unicode string data types error for the field. I have checked the registry settings and they are the same on my local machine and the server. I tried both the Data Conversion task and the Derived Column task, but with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
Thanks.
What is Unicode and non-Unicode data formats?
Unicode:
A Unicode character takes more bytes to store in the database. Many global businesses want to grow worldwide, and to do so they widen their reach by supporting customers in different languages such as Chinese, Japanese, Korean, and Arabic. Many websites these days support international languages to do business and attract more customers, which makes life easier for both parties.
To store such customer data, the database must support a mechanism for storing international characters. Storing these characters is not easy, and many database vendors had to revise their strategies and come up with new mechanisms to support them. Big vendors like Oracle, Microsoft, IBM, and other database vendors started providing international character support so that the data can be stored and retrieved without hiccups while doing business with international customers.
The difference in storage between Unicode and non-Unicode character data depends on whether the non-Unicode data is stored using a double-byte character set. All non-East Asian languages and Thai store non-Unicode characters in single bytes, so storing these languages as Unicode uses twice the space of a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages store characters in double-byte character sets (DBCS), so for these languages there is almost no difference in storage between non-Unicode and Unicode.
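This storage difference can be sketched with a short Python check. The codec names cp1252 and shift_jis are illustrative picks here for a single-byte Western code page and a Japanese DBCS; the actual code pages SQL Server uses depend on the column's collation.

```python
def byte_sizes(ch: str, legacy_codec: str) -> tuple[int, int]:
    """Return (bytes in the legacy code page, bytes in UTF-16-LE)."""
    return len(ch.encode(legacy_codec)), len(ch.encode("utf-16-le"))

# A Western character: 1 byte in its code page, 2 bytes as Unicode (doubled).
print(byte_sizes("A", "cp1252"))      # (1, 2)

# A Japanese character: already 2 bytes in a DBCS, still 2 bytes as Unicode.
print(byte_sizes("あ", "shift_jis"))  # (2, 2)
```

This matches the paragraph above: the Western character doubles in size when stored as Unicode, while the DBCS character stays the same size.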
Encoding Formats:
Some common Unicode encoding formats, such as UCS-2, UTF-8, UTF-16, and UTF-32, have been made available by database vendors to their customers. For SQL Server 7.0 and higher versions, Microsoft uses the UCS-2 encoding format to store Unicode data. Under this mechanism, all Unicode characters are stored using 2 bytes.
Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLEDB
provider all internally represent Unicode data as UCS-2.
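To see how the same character takes a different number of bytes under different encodings, here is a small Python sketch. Python has no 'ucs-2' codec, but UTF-16-LE matches UCS-2 for characters in the Basic Multilingual Plane; U+1EA3 is just an illustrative Vietnamese letter.

```python
ch = "ả"  # U+1EA3, LATIN SMALL LETTER A WITH HOOK ABOVE
utf8_bytes = ch.encode("utf-8")      # variable width: 3 bytes for U+1EA3
ucs2_bytes = ch.encode("utf-16-le")  # UCS-2-style fixed width: 2 bytes
print(len(utf8_bytes), len(ucs2_bytes))  # 3 2
```

The bit patterns differ, but both represent the same Unicode code point, which is why a UTF-8 front end and a UCS-2 back end can interoperate once conversion is in place.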
The options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives Unicode data that is encoded as UTF-8 include:
For example, if your business is using a website supporting ASP pages, then this is what happens:
If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This Codepage
setting is not available on IIS 4.0 and Windows NT 4.0.
Sorting and other operations:
The effect of Unicode data on performance is complicated by a variety of factors, including:
1. The difference between Unicode and non-Unicode sorting rules
2. The difference between sorting double-byte and single-byte characters
3. Code-page conversion between client and server
Operations like >, <, and ORDER BY are resource intensive, and it is difficult to get correct results if code-page conversion between client and server is not available.
Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page, because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
Non-Unicode:
Non-Unicode is exactly the opposite of Unicode. With non-Unicode types it is easy to store a language like English, but not the Asian languages that need more bytes to store correctly; otherwise truncation occurs.
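The data-loss point can be demonstrated in Python by forcing Vietnamese text through a single-byte-oriented encoding (ASCII here, purely for illustration): characters the target encoding cannot represent are destroyed.

```python
text = "chào đón"  # Vietnamese: the accented characters are outside ASCII
# Encoding to a charset that cannot represent the characters loses them.
degraded = text.encode("ascii", errors="replace").decode("ascii")
print(degraded)  # ch?o ??n
```

With errors="strict" (the default) the encode would raise UnicodeEncodeError instead, which is the Python analogue of the conversion errors SSIS reports.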
Now let's look at some advantages of not storing the data in Unicode format:
1. It takes less space to store the data in the database, saving a lot of disk space.
2. Moving database files from one server to another takes less time.
3. Backup and restore of the database take less time, which DBAs appreciate.
Non-Unicode vs. Unicode Data Types: Comparison Chart
The primary difference between Unicode and non-Unicode data types is Unicode's ability to easily handle the storage of foreign-language characters, which also requires more storage space.
Non-Unicode (char, varchar, text):
- Stores data in fixed or variable length.
- char: data is padded with blanks to fill the field size. For example, if a char(10) field contains 5 characters, the system pads it with 5 blanks.
- varchar: stores the actual value and does not pad with blanks.
- Requires 1 byte of storage per character.
- char and varchar can store up to 8000 characters.
Unicode (nchar, nvarchar, ntext):
- Stores data in fixed or variable length, same as non-Unicode.
- nchar: same as char.
- nvarchar: same as varchar.
- Requires 2 bytes of storage per character.
- nchar and nvarchar can store up to 4000 characters.
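The 1-byte vs 2-byte storage rows above can be sanity-checked in Python. Here cp1252 stands in for a varchar single-byte code page and UTF-16-LE for SQL Server's nchar/nvarchar storage; both stand-ins are illustrative assumptions.

```python
s = "hello"
varchar_style = len(s.encode("cp1252"))      # 1 byte per character
nvarchar_style = len(s.encode("utf-16-le"))  # 2 bytes per character
print(varchar_style, nvarchar_style)  # 5 10
```

The doubling is also why the maximum lengths differ: 8000 single-byte characters and 4000 double-byte characters both fit the same underlying byte budget.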
Non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters."
Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
Thanks Shiven:) If Answer is Helpful, Please Vote -
Upgrade : SP, Unicode and OS/BD
Hi,
The project consists of applying the SPs, converting to Unicode, and then migrating the OS and DB versions.
What path can we take to make this upgrade while respecting the technical constraints?
OS : linux
Bd : oracle
ECC 6.0
Thank you in advance.
Hi,
So you want to upgrade your SPS level, followed by a Unicode conversion, and then migrate from Linux + Oracle, but to what OS/DB combination?
Basically, you can migrate from any OS/DB combination to any other, with only minor restrictions.
Check out this link: System Copy and Migration
If you have any questions please let me know.
Cheers
Vlad -
How to edit a Animated Gif file and convert to SWF
I am using Creative Cloud with Fireworks. I chose the free trial, with buying in mind if I saw it work properly. I simply want to upload an animated GIF file and then download it as a SWF file. I saw someone on YouTube do this. It's not that I wouldn't know what to do once I got to the correct page; I just cannot find how to get the GIF into the software to edit. It has simply put the files in the Creative Cloud folder, which opens them in IE. How do I make a GIF available to edit and convert to SWF, please? Thanks in advance.
You will likely get better program help in a program forum
The Cloud forum is not about using individual programs
The Cloud forum is about the Cloud as a delivery & install process
If you will start at the Forums Index https://forums.adobe.com/welcome
You will be able to select a forum for the specific Adobe product(s) you use
Click the "down arrow" symbol on the right (where it says All communities) to open the drop down list and scroll