Convert WE8MSWIN1252 to US7ASCII - Urgent
Could someone help me convert the character set of my Oracle 8i database from WE8MSWIN1252 to US7ASCII?
Is it possible in the first place? The error tells me that WE8MSWIN1252 is a superset!
Cheers - Aravind.
Actually, that's not possible; the opposite direction (US7ASCII to WE8MSWIN1252) would be, since Oracle only supports changing the character set to a superset.
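To see why the conversion is one-way, here is a minimal Python sketch (illustrative only, outside Oracle): Windows-1252 covers all of 7-bit ASCII plus accented characters in the 0x80-0xFF range, and those extra characters simply have no 7-bit representation.

```python
# Windows-1252 (Oracle's WE8MSWIN1252) is an 8-bit superset of 7-bit
# ASCII. Any character above 0x7F can be stored in Windows-1252 but
# has no US7ASCII encoding, so a lossless conversion is one-way.
text = "café"

cp1252 = text.encode("cp1252")      # fine: 'é' is byte 0xE9 in Windows-1252
try:
    text.encode("ascii")            # fails: 'é' has no 7-bit encoding
    lossless = True
except UnicodeEncodeError:
    lossless = False

print(lossless)   # False
```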
Similar Messages
-
Is it possible to convert the 'WE8MSWIN1252' character set to a Chinese character set?
Hi All,
Does anyone know how to convert the "WE8MSWIN1252" character set to a Chinese character set in order to display Chinese words in Oracle APEX?
My problem is that I can't display Chinese characters in Oracle APEX. The Chinese field shows up like °×ѪÇò¼ÆÊý. I'm using the WE8MSWIN1252 database character set.
Is it possible to show the Chinese text correctly?
I'd appreciate it if anyone has a good solution to share.
Thanks a lot in advance!
Edited by: Apex Junior on Jul 16, 2010 2:18 PM
WE8MSWIN1252 is a Western European character set. If you wish to store and access a globalized multibyte character set you must have a database that supports it, and you don't have one at the moment.
Given this is Apex I'd suggest you read the docs and reinstall.
Alternatively, you could try CSSCAN and CSALTER; perhaps you can make the change, but be very careful and have a good backup before you try.
http://www.morganslibrary.org/reference/character_sets.html -
Problem converting to upper case - urgent
hi,
I'm working on a module pool program.
On the initial screen there are two fields: one is a number and one is a name.
If I enter the name in the name field, it is automatically converted to upper case,
but then my SELECT query does not work. How do I keep the value exactly as I entered it?
Kindly give me a suggestion; it is an urgent issue and I have to deliver today.
Thanks,
Mohan.
Hi,
in a report we handle this situation with the LOWER CASE addition on the parameter:
PARAMETERS p_t TYPE char10 LOWER CASE.
I think the same is possible in module pool programs via the Properties of the field; check the field's properties, there could be an option.
hi
There is a checkbox called UPPER/LOWER CASE at the bottom of the field's properties; if you tick it, your problem will be solved.
Please close this thread when your problem is solved.
Reward if Helpful
Regards
Naresh Reddy K
Message was edited by:
Naresh Reddy -
Hello,
We have a database which is 9i and has a NLS_CHARACTERSET set to US7ASCII.
We created a new database (version 10g, 10.2.0.2.0) on a new server with NLS_CHARACTERSET set to UTF8. When we exported the 9i database and imported it into the 10g database, there was, because of the NLS_CHARACTERSET difference, an issue of data corruption (column widths increasing by 3 times, which is understandable). Is there a way to convert the UTF8 character set on the 10g database to US7ASCII, and then re-export the 9i database and import it into the 10g database? I know we can convert a subset to a superset. I want to find out if there is a way to convert a superset to a subset.
Or do I have to re-create the whole database again.
Thanks,
Kalyan
You shouldn't have a problem migrating US7ASCII to UTF8; UTF8 is a superset of US7ASCII.
The problem you are facing is that when your schema has a column defined as, for example, CHAR(20), a single-byte character can become a multi-byte character in UTF8, and you then have a data truncation issue. The same problem can also happen with VARCHAR2 columns.
You can't change from a superset to a subset, for obvious reasons, but you can convert your 9i database from US7ASCII to UTF8 if you like.
Also run the Character Set Scanner (csscan) before you make the conversion.
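The width expansion is easy to see outside the database; a minimal Python sketch:

```python
# A character that fits one byte in a single-byte Western charset
# needs more bytes in UTF8, which is why byte-sized CHAR/VARCHAR2
# columns can overflow after migration.
ch = "Ä"
print(len(ch.encode("latin-1")))   # 1 byte in an 8-bit Western charset
print(len(ch.encode("utf-8")))     # 2 bytes in UTF8
print(len("中".encode("utf-8")))   # 3 bytes for many Asian characters
```

Note that pure 7-bit ASCII data occupies the same single byte in UTF8; the 3x column-width growth on import is the worst-case allowance for multi-byte characters, not an expansion of existing ASCII data.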
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96529/ch11.htm#1005049 -
hi,
I'm currently doing a project which requires me to generate an SVG scatterplot graph. I already have the data stored in a Vector (in a servlet), but my supervisor says I need to convert my data to XML format; with that XML I can generate the SVG graph. I'm totally new to XML and don't know how to convert from Java to XML. Though I've read articles, I still can't get the idea of it. My data isn't very big. For the graph I need an x-axis (green) and a y-axis (red), and then I retrieve the data and plot the scatterplot. Can somebody help me? Very urgent... thanks
One quick way is to construct the XML yourself.
e.g.
StringBuffer xml = new StringBuffer("<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>");
xml.append("<ROOT>");
while (hasMoreElements) {
    xml.append("<ELEMENT>").append(data).append("</ELEMENT>");
}
xml.append("</ROOT>");
Rene -
Converting to horizontal display - urgent
Dear all,
I am displaying the values of material usage month-wise. These are currently displayed vertically; could you please tell me how to display them horizontally?
I also want to sum up the usage,
and to display the count, where count is the number of times the material has been used in the given period. I am also posting my code.
currently it is as follows:- (eg)
Material Period Quantity
Material1 02.07 2.000
Material1 04.07 1.000
Material2 05.07 3.000
Material2 08.07 1.000
Material2 09.07 2.000 and so on.....
I want is as follows
Material 02.07 04.07 05.07 08.07 09.07 TOTAL COUNT
Material1 2 1 0 0 0 3 2
Material2 0 0 3 1 2 6 3
(Note: if you can't see it properly above, try executing the code below. In the horizontal display, 2 is the quantity in 02.2007 and 1 is the quantity in 04.2007; their total is 3, and since the material was used twice in the 5-month period, the count is 2. Similarly for Material2.)
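The reshaping being asked for is a pivot: group by material, spread the periods into columns, and accumulate a total and a usage count per material. A language-neutral sketch in Python (illustrative only; the ABAP below does the equivalent with AT END OF group processing):

```python
from collections import defaultdict

rows = [  # (material, period, quantity), as in the vertical output
    ("Material1", "02.07", 2.0), ("Material1", "04.07", 1.0),
    ("Material2", "05.07", 3.0), ("Material2", "08.07", 1.0),
    ("Material2", "09.07", 2.0),
]

# One column per distinct period, initialized to zero for every material.
periods = sorted({p for _, p, _ in rows})
pivot = defaultdict(lambda: {p: 0.0 for p in periods})
for mat, per, qty in rows:
    pivot[mat][per] += qty

for mat, cols in sorted(pivot.items()):
    total = sum(cols.values())
    count = sum(1 for q in cols.values() if q)   # periods with usage
    print(mat, *(cols[p] for p in periods), total, count)
```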
*TYPES: BEGIN OF TY_MARC , "Plant Data for Material
MATNR TYPE MARC-MATNR, "Material Number
WERKS TYPE MARC-WERKS, "Plant
END OF TY_MARC .
*TYPES: BEGIN OF TY_MARA ,
MATNR TYPE MARA-MATNR, "Material Number
MTART TYPE MARA-MTART, "Material type
SPART TYPE MARA-SPART, "Division
WERKS TYPE MARC-WERKS, "Plant
END OF TY_MARA .
TYPES: BEGIN OF TY_ZV_OLR3_MARACKT , "Plant Material: Search Help with Short Text
MATNR TYPE MARA-MATNR , "Material Number
WERKS TYPE MARC-WERKS , "Plant
MTART TYPE MARA-MTART , "Material Type
MAKTX TYPE MAKT-MAKTX , "Material Description
SPART TYPE MARA-SPART , "Division
END OF TY_ZV_OLR3_MARACKT .
TYPES: BEGIN OF TY_MBEW , "Material Valuation
MATNR TYPE MBEW-MATNR , "Material Number
BWKEY TYPE MBEW-BWKEY , "Valuation area
BWTAR TYPE MBEW-BWTAR , "Valuation type
VERPR TYPE MBEW-VERPR , "Moving Average Price/Periodic Unit Price
END OF TY_MBEW .
*TYPES: BEGIN OF TY_MKPF , "Header: Material Document
MBLNR TYPE MKPF-MBLNR , "Number of Material Document
MJAHR TYPE MKPF-MJAHR , "Material Document Year
BUDAT TYPE MKPF-BUDAT , "Posting Date in the Document
END OF TY_MKPF .
*TYPES: BEGIN OF TY_MSEG , "Document Segment: Material
MBLNR TYPE MSEG-MBLNR , "Number of Material Document
MJAHR TYPE MSEG-MJAHR , "Document Segment: Material
ZEILE TYPE MSEG-ZEILE , "Item in Material Document
BWART TYPE MSEG-BWART , "Movement Type (Inventory Management)
MENGE TYPE MSEG-MENGE , "Quantity
MEINS TYPE MSEG-MEINS , "Base Unit of Measure
MATNR TYPE MSEG-MATNR , "Material Number
END OF TY_MSEG .
TYPES: BEGIN OF TY_WB2_V_MKPF_MSEG2 , "Data Selection from Material Documents
MBLNR TYPE MSEG-MBLNR , "Number of Material Document
MJAHR TYPE MSEG-MJAHR , "Document Segment: Material
BUDAT TYPE MKPF-BUDAT , "Posting Date in the Document
ZEILE_I TYPE MSEG-ZEILE , "Item in Material Document
BWART_I TYPE MSEG-BWART , "Movement Type (Inventory Management)
MENGE_I TYPE MSEG-MENGE , "Quantity
MEINS_I TYPE MSEG-MEINS , "Base Unit of Measure
MATNR_I TYPE MSEG-MATNR , "Material Number
WERKS_I TYPE MSEG-WERKS , "Plant
END OF TY_WB2_V_MKPF_MSEG2 .
TYPES: BEGIN OF TY_WB2_V_MKPF_MSEG2_COPY , "Copy of Data Selection from Material Documents
MATNR_I TYPE MSEG-MATNR , "Material Number
FYEAR(4) TYPE C , "Fiscal Year
MONTH(2) TYPE C , "Month
BWART_I TYPE MSEG-BWART , "Movement Type (Inventory Management)
MBLNR TYPE MSEG-MBLNR , "Number of Material Document
MJAHR TYPE MSEG-MJAHR , "Document Segment: Material
BUDAT TYPE MKPF-BUDAT , "Posting Date in the Document
ZEILE_I TYPE MSEG-ZEILE , "Item in Material Document
MENGE_I TYPE MSEG-MENGE , "Quantity
MEINS_I TYPE MSEG-MEINS , "Base Unit of Measure
WERKS_I TYPE MSEG-WERKS ,
COUNT TYPE I ,
USAGE TYPE MSEG-MENGE ,
END OF TY_WB2_V_MKPF_MSEG2_COPY .
Internal Tables Begin with IT_
Work Area Begin with WA_ *
*Internal Table Declaration.
*DATA: IT_MARC TYPE STANDARD TABLE OF TY_MARC ,
WA_MARC TYPE TY_MARC .
*DATA: IT_MARA TYPE STANDARD TABLE OF TY_MARA ,
WA_MARA TYPE TY_MARA .
DATA: IT_MATERIAL TYPE STANDARD TABLE OF TY_ZV_OLR3_MARACKT ,
WA_MATERIAL TYPE TY_ZV_OLR3_MARACKT .
DATA: IT_MBEW TYPE STANDARD TABLE OF TY_MBEW ,
WA_MBEW TYPE TY_MBEW .
*DATA: IT_MKPF TYPE STANDARD TABLE OF TY_MKPF ,
WA_MKPF TYPE TY_MKPF .
*DATA: IT_MSEG TYPE STANDARD TABLE OF TY_MSEG ,
WA_MSEG TYPE TY_MSEG .
DATA: IT_VIEW TYPE STANDARD TABLE OF TY_WB2_V_MKPF_MSEG2 ,
WA_VIEW TYPE TY_WB2_V_MKPF_MSEG2 .
DATA: IT_VIEW_COPY TYPE STANDARD TABLE OF TY_WB2_V_MKPF_MSEG2_COPY ,
WA_VIEW_COPY TYPE TY_WB2_V_MKPF_MSEG2_COPY ,
IT_VIEW_TMP TYPE STANDARD TABLE OF TY_WB2_V_MKPF_MSEG2_COPY ,
WA_VIEW_TMP TYPE TY_WB2_V_MKPF_MSEG2_COPY . "Work area used below; missing from the original paste
Data Declaration
Work field declaration - used in program
DATA: W_PERIOD TYPE S031-SPMON.
DATA: W_INDEX TYPE SY-TABIX .
DATA: W_QTY_TOT TYPE MENGE_D .
DATA: TOT_USE_TMP TYPE MENGE_D ,
TOT_USE TYPE MENGE_D ,
W_COUNT TYPE I .
RANGES: RO_DATE FOR SY-DATUM .
Parameters Begin with PR_ *
SELECTION-SCREEN BEGIN OF BLOCK blk1 WITH FRAME TITLE text-001.
PARAMETERS: PR_WERKS TYPE MARC-WERKS OBLIGATORY ,
PR_MTART TYPE MARA-MTART OBLIGATORY ,
PR_SPART TYPE MARA-SPART OBLIGATORY ,
PR_MATNR TYPE MARA-MATNR .
Select Options Begin with SO_ *
SELECT-OPTIONS: SO_SPMON FOR W_PERIOD .
SELECTION-SCREEN END OF BLOCK blk1 .
At selection-screen *
AT SELECTION-SCREEN ON VALUE-REQUEST FOR SO_SPMON-LOW.
PERFORM F4_HELP.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR SO_SPMON-HIGH.
PERFORM F4_HELP.
*& Form F4_HELP
Text-F4 Help for Month
-Taken from standard SAP program RMCS0F0M.
FORM F4_HELP .
DATA: BEGIN OF MF_DYNPFIELDS OCCURS 1 .
INCLUDE STRUCTURE DYNPREAD .
DATA: END OF MF_DYNPFIELDS .
DATA: MF_RETURNCODE LIKE SY-SUBRC ,
MF_MONTH LIKE ISELLIST-MONTH,
MF_HLP_REPID LIKE SY-REPID .
FIELD-SYMBOLS: <MF_FELD> .
*Worth reading screen
GET CURSOR FIELD MF_DYNPFIELDS-FIELDNAME.
APPEND MF_DYNPFIELDS.
MF_HLP_REPID = SY-REPID.
DO 2 TIMES.
CALL FUNCTION 'DYNP_VALUES_READ'
EXPORTING
DYNAME = MF_HLP_REPID
DYNUMB = SY-DYNNR
TABLES
DYNPFIELDS = MF_DYNPFIELDS
EXCEPTIONS
INVALID_ABAPWORKAREA = 01
INVALID_DYNPROFIELD = 02
INVALID_DYNPRONAME = 03
INVALID_DYNPRONUMMER = 04
INVALID_REQUEST = 05
NO_FIELDDESCRIPTION = 06
UNDEFIND_ERROR = 07.
IF SY-SUBRC = 3.
*Current screen is a range image
MF_HLP_REPID = 'SAPLALDB'.
ELSE.
READ TABLE MF_DYNPFIELDS INDEX 1.
*Underscores replaced by Blanks
TRANSLATE MF_DYNPFIELDS-FIELDVALUE USING '_ '.
EXIT.
ENDIF.
ENDDO.
IF SY-SUBRC = 0.
*The internal format conversion
CALL FUNCTION 'CONVERSION_EXIT_PERI_INPUT'
EXPORTING
INPUT = MF_DYNPFIELDS-FIELDVALUE
IMPORTING
OUTPUT = MF_MONTH
EXCEPTIONS
ERROR_MESSAGE = 1.
IF MF_MONTH IS INITIAL.
*Initial month => Proposal value of akt. Date deduced
MF_MONTH = SY-DATLO(6).
ENDIF.
CALL FUNCTION 'POPUP_TO_SELECT_MONTH'
EXPORTING
ACTUAL_MONTH = MF_MONTH
IMPORTING
SELECTED_MONTH = MF_MONTH
RETURN_CODE = MF_RETURNCODE
EXCEPTIONS
FACTORY_CALENDAR_NOT_FOUND = 01
HOLIDAY_CALENDAR_NOT_FOUND = 02
MONTH_NOT_FOUND = 03.
IF SY-SUBRC = 0 AND MF_RETURNCODE = 0.
CALL FUNCTION 'CONVERSION_EXIT_PERI_OUTPUT'
EXPORTING
INPUT = MF_MONTH
IMPORTING
OUTPUT = MF_DYNPFIELDS-FIELDVALUE.
COLLECT MF_DYNPFIELDS.
CALL FUNCTION 'DYNP_VALUES_UPDATE'
EXPORTING
DYNAME = MF_HLP_REPID
DYNUMB = SY-DYNNR
TABLES
DYNPFIELDS = MF_DYNPFIELDS
EXCEPTIONS
INVALID_ABAPWORKAREA = 01
INVALID_DYNPROFIELD = 02
INVALID_DYNPRONAME = 03
INVALID_DYNPRONUMMER = 04
INVALID_REQUEST = 05
NO_FIELDDESCRIPTION = 06
UNDEFIND_ERROR = 07.
ENDIF.
ENDIF.
ENDFORM. " F4_HELP
S T A R T O F S E L E C T I O N *
START-OF-SELECTION .
PERFORM CONVERT_MMYYYY_TO_DDMMYYY.
PERFORM GET_DATA.
*& Form CONVERT_MMYYYY_TO_DDMMYYY
Text-In Selection Screen we give date in MM.YYYY format
This cannot be passed to standard table.
Hence first convert it to DD.MM.YYYY format.
FORM CONVERT_MMYYYY_TO_DDMMYYY .
DATA: LW_VAR TYPE I . "LW-Local Workfield
DATA: LW_DATE1 TYPE SY-DATUM ,
LW_DATE2 TYPE SY-DATUM .
DATA: LW_STR_LOW TYPE STRING,
LW_STR_LOW1 TYPE STRING,
LW_STR_LOW2 TYPE STRING.
DATA: LW_STR_HIGH TYPE STRING,
LW_STR_HIGH1 TYPE STRING,
LW_STR_HIGH2 TYPE STRING.
LW_STR_LOW1 = SO_SPMON-LOW+0(4).
LW_STR_LOW2 = SO_SPMON-LOW+4(2).
CONCATENATE LW_STR_LOW1 LW_STR_LOW2 '01' INTO LW_STR_LOW.
LW_DATE1 = LW_STR_LOW.
*WRITE LW_DATE1.
LW_STR_HIGH1 = SO_SPMON-HIGH+0(4).
LW_STR_HIGH2 = SO_SPMON-HIGH+4(2).
IF LW_STR_HIGH2 = '01' OR
LW_STR_HIGH2 = '03' OR
LW_STR_HIGH2 = '05' OR
LW_STR_HIGH2 = '07' OR
LW_STR_HIGH2 = '08' OR
LW_STR_HIGH2 = '10' OR
LW_STR_HIGH2 = '12'.
CONCATENATE LW_STR_HIGH1 LW_STR_HIGH2 '31' INTO LW_STR_HIGH.
LW_DATE2 = LW_STR_HIGH.
WRITE LW_DATE2.
ELSEIF LW_STR_HIGH2 = '04' OR
LW_STR_HIGH2 = '06' OR
LW_STR_HIGH2 = '09' OR
LW_STR_HIGH2 = '11'.
CONCATENATE LW_STR_HIGH1 LW_STR_HIGH2 '30' INTO LW_STR_HIGH.
LW_DATE2 = LW_STR_HIGH.
WRITE LW_DATE2.
*Logic to check for leap year:
*For all years that are multiples of 100, we need to test divisibility by 400
*(instead of 4). If yes, then we can be sure that the year is a leap year.
ELSEIF LW_STR_HIGH2 = '02'.
*Begin:-To check for Leap Year when February.
LW_VAR = LW_STR_HIGH1 MOD 100.
IF LW_VAR EQ 0.
LW_VAR = LW_STR_HIGH1 MOD 400.
IF LW_VAR EQ 0.
CONCATENATE LW_STR_HIGH1 LW_STR_HIGH2 '29' INTO LW_STR_HIGH.
LW_DATE2 = LW_STR_HIGH.
WRITE LW_DATE2.
ELSE.
CONCATENATE LW_STR_HIGH1 LW_STR_HIGH2 '28' INTO LW_STR_HIGH.
LW_DATE2 = LW_STR_HIGH.
WRITE LW_DATE2.
ENDIF.
ELSEIF LW_VAR NE 0.
LW_VAR = LW_STR_HIGH1 MOD 4.
IF LW_VAR EQ 0.
CONCATENATE LW_STR_HIGH1 LW_STR_HIGH2 '29' INTO LW_STR_HIGH.
LW_DATE2 = LW_STR_HIGH.
WRITE LW_DATE2.
ELSE.
CONCATENATE LW_STR_HIGH1 LW_STR_HIGH2 '28' INTO LW_STR_HIGH.
LW_DATE2 = LW_STR_HIGH.
WRITE LW_DATE2.
ENDIF.
ENDIF.
*End:-To check for Leap Year when February.
ENDIF.
*Since we have low and high dates in two different strings,they directly
*cannot be passed to standard table (MSEG).So we first get them into RO_DATE.
RO_DATE-SIGN = 'I'.
RO_DATE-OPTION = 'BT'.
RO_DATE-LOW = LW_DATE1.
RO_DATE-HIGH = LW_DATE2.
APPEND RO_DATE.
ENDFORM. " CONVERT_MMYYYY_TO_DDMMYYY
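The hand-rolled month-length and leap-year logic in the form above can be cross-checked against a standard library implementation; a Python sketch:

```python
import calendar

# calendar.monthrange returns (weekday_of_first_day, days_in_month);
# it applies the same divisible-by-4 / by-100 / by-400 leap-year rule
# that the ABAP form implements by hand for February.
def last_day(year: int, month: int) -> int:
    return calendar.monthrange(year, month)[1]

print(last_day(2007, 4))   # 30
print(last_day(2000, 2))   # 29 (divisible by 400 -> leap year)
print(last_day(1900, 2))   # 28 (divisible by 100 but not 400)
```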
*& Form GET_DATA
Text
FORM GET_DATA .
**Select fired on MARC to get only those materials of whose plant has been
**given on Selection Screen.
IF PR_MATNR EQ ''.
SELECT MATNR WERKS
FROM MARC
INTO TABLE IT_MARC
WHERE WERKS EQ PR_WERKS.
ELSE.
SELECT MATNR WERKS
FROM MARC
INTO TABLE IT_MARC
WHERE MATNR EQ PR_MATNR AND
WERKS EQ PR_WERKS.
ENDIF.
**If records are found then get material type and division from MARA for
**materials got from MARC.
IF SY-SUBRC = 0.
SELECT MATNR MTART SPART
FROM MARA
INTO TABLE IT_MARA
FOR ALL ENTRIES IN IT_MARC
WHERE MATNR EQ IT_MARC-MATNR.
ENDIF.
**Filtering out those materials on the basis of material type and division
**that is given on Selection Screen.
LOOP AT IT_MARA INTO WA_MARA.
W_INDEX = SY-TABIX.
IF WA_MARA-MTART NE PR_MTART OR WA_MARA-SPART NE PR_SPART.
DELETE IT_MARA INDEX W_INDEX.
ELSE.
MOVE: PR_WERKS TO WA_MARA-WERKS.
MODIFY IT_MARA FROM WA_MARA TRANSPORTING WERKS.
CLEAR: WA_MARA.
ENDIF.
ENDLOOP.
*Select fired on view ZV_OLR3_MARACKT which results in above 3 steps together
IF PR_MATNR EQ ''.
SELECT MATNR WERKS MTART MAKTX SPART
FROM ZV_OLR3_MARACKT
INTO TABLE IT_MATERIAL
WHERE WERKS EQ PR_WERKS AND
MTART EQ PR_MTART AND
SPART EQ PR_SPART.
ELSE.
SELECT MATNR WERKS MTART MAKTX SPART
FROM ZV_OLR3_MARACKT
INTO TABLE IT_MATERIAL
WHERE MATNR EQ PR_MATNR AND
WERKS EQ PR_WERKS AND
MTART EQ PR_MTART AND
SPART EQ PR_SPART.
ENDIF.
SORT IT_MARA BY MATNR.
SORT IT_MATERIAL BY MATNR.
*Select fired on MBEW to get Rate (VERPR) on basis of material and plant
*in IT_MARA.
SELECT MATNR BWKEY BWTAR VERPR
FROM MBEW
INTO TABLE IT_MBEW
FOR ALL ENTRIES IN IT_MARA
WHERE MATNR EQ IT_MARA-MATNR AND
BWKEY EQ IT_MARA-WERKS.
SELECT MATNR BWKEY BWTAR VERPR
FROM MBEW
INTO TABLE IT_MBEW
FOR ALL ENTRIES IN IT_MATERIAL
WHERE MATNR EQ IT_MATERIAL-MATNR AND
BWKEY EQ IT_MATERIAL-WERKS.
IF SY-SUBRC = 0.
* Do nothing.
ENDIF.
**Select fired on MKPF to get Document Number on basis of period given on
**selection screen.
SELECT MBLNR MJAHR BUDAT
FROM MKPF
INTO TABLE IT_MKPF
WHERE BUDAT IN RO_DATE.
**If records are found then pass Document Number (MBLNR) to MSEG Table.
IF SY-SUBRC = 0.
SELECT MBLNR MJAHR ZEILE BWART MENGE MEINS MATNR
FROM MSEG
INTO TABLE IT_MSEG
FOR ALL ENTRIES IN IT_MKPF
WHERE MBLNR EQ IT_MKPF-MBLNR AND
BWART IN ('261','262','601','602').
SORT IT_MSEG BY BWART MATNR.
ENDIF.
*Select fired on View WB2_V_MKPF_MSEG2 to get required common fields of
*Table MKPF and MSEG. The commented code above is thus avoided.
SELECT MBLNR
MJAHR
BUDAT
ZEILE_I
BWART_I
MENGE_I
MEINS_I
MATNR_I
WERKS_I
FROM WB2_V_MKPF_MSEG2
INTO TABLE IT_VIEW
WHERE BUDAT IN RO_DATE AND
BWART_I IN ('261','262','601','602') AND
WERKS_I EQ PR_WERKS.
*If records are found then read table MARA for which existing records
*are moved to copy table.
IF SY-SUBRC EQ 0.
SORT IT_VIEW BY MATNR_I.
LOOP AT IT_VIEW INTO WA_VIEW.
W_INDEX = SY-TABIX.
READ TABLE IT_MARA INTO WA_MARA WITH KEY MATNR = WA_VIEW-MATNR_I.
READ TABLE IT_MATERIAL INTO WA_MATERIAL WITH KEY MATNR = WA_VIEW-MATNR_I.
IF SY-SUBRC NE 0.
DELETE IT_VIEW INDEX W_INDEX.
ELSE.
MOVE-CORRESPONDING: WA_VIEW TO WA_VIEW_COPY.
MOVE: WA_VIEW-BUDAT+0(4) TO WA_VIEW_COPY-FYEAR,
WA_VIEW-BUDAT+4(2) TO WA_VIEW_COPY-MONTH.
APPEND WA_VIEW_COPY TO IT_VIEW_COPY.
CLEAR WA_VIEW_COPY.
ENDIF.
CLEAR: WA_VIEW,WA_MARA.
CLEAR: WA_VIEW,WA_MATERIAL.
ENDLOOP.
ENDIF.
*Sorting on basis of material year and month,processing event will be triggered
*at end of month.
SORT IT_VIEW_COPY BY MATNR_I FYEAR MONTH.
*On basis of movement type (BWART),calculation of exact quantity of material
*used in a particular month.(Processing Event used here)
LOOP AT IT_VIEW_COPY INTO WA_VIEW_COPY.
MOVE WA_VIEW_COPY TO WA_VIEW_TMP.
IF WA_VIEW_COPY-BWART_I = '261' OR WA_VIEW_COPY-BWART_I = '601'.
W_QTY_TOT = W_QTY_TOT + WA_VIEW_COPY-MENGE_I.
ENDIF.
IF WA_VIEW_COPY-BWART_I = '262' OR WA_VIEW_COPY-BWART_I = '602'.
W_QTY_TOT = W_QTY_TOT - WA_VIEW_COPY-MENGE_I.
ENDIF.
AT END OF MONTH.
W_COUNT = W_COUNT + 1.
TOT_USE = TOT_USE + W_QTY_TOT.
MOVE: W_QTY_TOT TO WA_VIEW_TMP-MENGE_I,
W_COUNT TO WA_VIEW_TMP-COUNT,
TOT_USE TO WA_VIEW_TMP-USAGE.
APPEND WA_VIEW_TMP TO IT_VIEW_TMP.
CLEAR: WA_VIEW_TMP,W_QTY_TOT,W_COUNT."TOT_USE.
ENDAT.
CLEAR: WA_VIEW_COPY,WA_VIEW_TMP.
ENDLOOP.
SORT IT_VIEW_TMP BY MATNR_I.
IF PR_MTART EQ 'FERT'.
SELECT MATNR BWKEY BWTAR VERPR
FROM MBEW
INTO TABLE IT_MBEW
FOR ALL ENTRIES IN IT_VIEW_TMP
WHERE MATNR EQ IT_VIEW_TMP-MATNR_I AND
BWTAR EQ 'NEW VEH' AND
BWKEY EQ IT_VIEW_TMP-WERKS_I.
ELSEIF PR_MTART EQ 'HALB'.
SELECT MATNR BWKEY BWTAR VERPR
FROM MBEW
INTO TABLE IT_MBEW
FOR ALL ENTRIES IN IT_VIEW_TMP
WHERE MATNR EQ IT_VIEW_TMP-MATNR_I AND
BWTAR EQ 'M&M' AND
BWKEY EQ IT_VIEW_TMP-WERKS_I.
ELSEIF PR_MTART EQ 'ZHLB'.
SELECT MATNR BWKEY BWTAR VERPR
FROM MBEW
INTO TABLE IT_MBEW
FOR ALL ENTRIES IN IT_VIEW_TMP
WHERE MATNR EQ IT_VIEW_TMP-MATNR_I AND
BWTAR EQ 'LOCAL' AND
BWKEY EQ IT_VIEW_TMP-WERKS_I.
ENDIF.
DATA: STRING TYPE STRING.
IF SY-SUBRC = 0.
SORT IT_MBEW BY MATNR.
WRITE: 20 SY-VLINE,49 SY-VLINE,70 SY-VLINE COLOR 4.
WRITE: 05'MATNR' ,40'PERIOD',60'QUANTITY'.
WRITE:SY-ULINE.
LOOP AT IT_VIEW_TMP INTO WA_VIEW_TMP.
WRITE: 20 SY-VLINE, 49 SY-VLINE,70 SY-VLINE.
CONCATENATE WA_VIEW_TMP-MONTH '.' WA_VIEW_TMP-FYEAR INTO STRING.
TOT_USE_TMP = WA_VIEW_TMP-MENGE_I.
AT END OF MONTH.
TOT_USE = TOT_USE + TOT_USE_TMP.
W_COUNT = W_COUNT + 1.
WRITE: / WA_VIEW_TMP-MATNR_I,STRING,TOT_USE_TMP,W_COUNT.
CLEAR W_COUNT.
ENDAT.
AT END OF MATNR_I.
WRITE: SY-ULINE.
W_COUNT = W_COUNT + 1.
WRITE: / WA_VIEW_TMP-MATNR_I,STRING,TOT_USE,W_COUNT.
CLEAR TOT_USE.
ENDAT.
ENDLOOP.
ENDIF.
Create a new DB instance with the appropriate character set;
afterwards use the exp/imp utilities for the character set migration.
Regards
Singh -
Can't convert int to int[] URGENT!
HI list!
Here is the code part I have:
/** Holds the permutated groups which are arrays of integers. */
Vector v;
if(arg.length<3) System.exit (0); //Need at least 3 arguments
elements=new int[arg.length-1]; //Create array to hold elements
/* Copy the arguments into the element array. */
for(i=0;i<arg.length-1;i++) elements=Integer.parseInt(arg[i+1]);
groupsize=Integer.parseInt(arg[0]); //Get the number in each group.
calc(groupsize,elements.length); //Find out how many permutations there are.
v=permutate(groupsize,elements); //Do the permutation
for(i=0;i<v.size();i++) { //Print out the result
elements=(int[]) v.get(i);
System.out.println("");
for(j=0;j<elements.length;j++) System.out.print(elements[j]+" ");
System.out.println("\nTotal permutations = "+v.size());
and the error I get is:
Permutate.java:26: Incompatible type for =. Can't convert int to int[]. for(i=0;i<arg.length-1;i++) elements=Integer.parseInt(arg[i+1]);
elements=(int[]) v.get(i);
The above statement is illegal. It's not possible to use an array type in casting at all.
Basically you are trying to do a kind of multiple
casting, which is not possible.
You might want to store vectors in vector v instead of arrays in vector v. It's better to do this.
There's nothing wrong with the line
elements = (int[]) v.get(i);
assuming the returned object from the vector is in fact an array of int. Even then, it is a runtime exception condition not a compile-time error.
Using arrays is perfectly valid where you know the size of the array at creation and do not need to resize it during use... The arrays are fast and compact. This is however not a comment on the approach taken in this case but merely an observation that arrays are not always an inferior choice to one of the collection classes.
Now put up your dukes... ;) -
Don't know how to convert media format! Urgent!
Hey,
Here is my code to convert between format
try {
mainProcessor = Manager.createProcessor(new
MediaLocator("file:///c:\\j.wav"));
mainProcessor.configure();
mainProcessor.realize();
}catch (Exception e) {e.printStackTrace();}
DataSink sink;
MediaLocator dest = new MediaLocator
("file:///c:\\newfile.wav");
try{
sink = Manager.createDataSink
(mainProcessor.getDataOutput(), dest);
sink.open();
sink.start();
mainProcessor.start();
} catch (Exception err) { err.printStackTrace();}
I get a problem when I create the datasink object, the program give me such exception:
javax.media.NoDataSinkException: Cannot find a DataSink for: com.sun.media.multiplexer.RawBufferMux$RawBufferDataSource@1a75a2
Can anyone help me to correct the code, or tell me where can I get the code that can convert the format.
Thanks!
Nachi
Try this:
http://java.sun.com/products/java-media/jmf/2.1.1/solutions/Transcode.html -
I need to convert a mov file to mpeg1 for work. Does anybody know how to do this? Please help.
Nevermind.
-
Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster
In hopes that it might be helpful in the future, here's the procedure I followed to fix a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
BACKGROUND
Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
US7ASCII, of course, is a cheerful 7 bit character set, holding the basic ASCII characters sufficient for the English language.
However, it also has a handy quirk: character fields under US7ASCII will accept bytes with values > 127. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
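The substitution is mechanical: any byte outside the 7-bit range is not valid US7ASCII, so the conversion replaces it with the target set's default character. A hedged Python sketch of that behaviour (us7ascii_convert is a hypothetical stand-in for Oracle's CONVERT, not Oracle code):

```python
def us7ascii_convert(raw: bytes, replacement: int = 0xBF) -> bytes:
    # Bytes 0x00-0x7F are legal US7ASCII and pass through unchanged;
    # anything above is replaced. 0xBF is the upside-down question
    # mark in WE8ISO8859P1, matching the value 191 described above.
    return bytes(b if b < 0x80 else replacement for b in raw)

# 0xAE (174) is the registered-trademark sign in 8-bit Western sets.
print(list(us7ascii_convert(bytes([78, 67, 76, 69, 88, 174]))))
# [78, 67, 76, 69, 88, 191]
```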
Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. This has been replaced in newer versions by the Database Migration Assistant for Unicode (DMU), which is the recommended tool for 11.2.0.3+.
These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
FIXING THE PROBLEM
How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
(As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
alter system set global_names=false scope=memory;
CREATE PUBLIC DATABASE LINK OLD6
CONNECT TO DBUSERNAME
IDENTIFIED BY dbuserpass
USING 'restoreclone:1521/MYSID';
Testing the link...
SQL> select count(1) from users@old6;
COUNT(1)
454
Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
DUMP(TITLE)
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
By comparison, a dump of that row on PRODCLONE's my_contents gives:
PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(TITLE)
Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
Eventually, I located a clever workaround at this link:
https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
It works like this:
On RESTORECLONE you create a view, vv, with UTL_RAW:
RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
View created.
This turns the title to raw on the RESTORECLONE.
You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
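The trick generalizes: character set conversion only happens when the transport layer knows it is moving characters; move opaque bytes and nothing is touched. A Python analogy (illustrative only, not Oracle code):

```python
data = bytes([78, 67, 76, 69, 88, 174])   # contains the "illegal" 0xAE

# "Character" transfer: a decode/re-encode round trip through a
# 7-bit charset substitutes the byte in transit.
as_text = (data.decode("ascii", errors="replace")
               .encode("ascii", errors="replace"))
print(as_text == data)    # False: 0xAE was substituted in transit

# "Raw" transfer: the bytes are copied untouched, which is the role
# UTL_RAW.CAST_TO_RAW plays across the database link.
as_raw = bytes(data)
print(as_raw == data)     # True: the 0xAE survives
```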
Now that we have a method to move the data over, we have to identify which columns /tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
COUNT(1)
533
By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
COUNT(1)
10568
So 10568 rows have characters which were transformed into 191s as part of the original conversion.
As an aside, we can't use CONVERT() on LOBs; for them we will need another approach, outlined further below:
RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got CLOB
Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
create or replace procedure find_us7_strings
(table_name varchar2,
fix_col varchar2 )
authid current_user
as
orig_sql varchar2(1000);
begin
orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' != CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
-- Uncomment if debugging:
-- dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
And create a table to store the information as to which tables, columns, and rows have the bad characters:
drop table cnv_us7;
create table cnv_us7 (mytablename varchar2(50), myindx number, mycolumnname varchar2(50) ) tablespace myuser_data;
create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
--example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
set head off pagesize 1000 linesize 120
spool runme.sql
select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
where
data_type in ('CHAR','VARCHAR2')
and table_name in (select table_name from user_tab_columns where column_name='PK1' and table_name not in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
and char_length > 10
order by table_name,column_name;
spool off;
set echo on time on timing on feedb on serveroutput on;
spool output_of_runme
@./runme.sql
spool off;
Which eventually gives us the following inserted into CNV_US7:
20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
4 DESCRIPTION MY_FORUMS
21136 TITLE MY_CONTENTS
Out of 533 VARCHAR2 and CHAR columns, we only had five or six that needed fixing.
We create our views on RESTORECLONE:
create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
And then we can fix it directly via sql:
update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
where pk1 in (
select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
where taborig.pk1=tabnew.pk1
and myindx=tabnew.pk1
and mycolumnname='TITLE'
and mytablename='MY_CONTENTS'
and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
Note this part:
"and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there because if the users have changed TITLE -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
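In other words, a row is only overwritten when its current value is exactly the damaged (ASCII-squashed) form of the original. A small Python sketch of that guard (hedged: the 'ascii' codec with '?' replacement stands in for CONVERT to US7ASCII):

```python
def should_overwrite(original: str, current: str) -> bool:
    # Mirrors: convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE
    # Overwrite only when the current value equals the lossy US7ASCII
    # rendering of the original; anything else means the user edited it.
    damaged = original.encode("ascii", errors="replace").decode("ascii")
    return current == damaged

print(should_overwrite("caf\u00e9", "caf?"))    # True: safe to restore
print(should_overwrite("caf\u00e9", "coffee"))  # False: user changed it
```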
We can also create a stored procedure which will execute the SQL for us:
create or replace procedure fix_us7_strings
(TABLE_NAME varchar2,
FIX_COL varchar2 )
authid current_user
as
orig_sql varchar2(1000);
TYPE cv_type IS REF CURSOR;
orig_cur cv_type;
begin
orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
where pk1 in (
select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
where taborig.pk1=tabnew.pk1
and myindx=tabnew.pk1
and mycolumnname='''||FIX_COL||'''
and mytablename='''||TABLE_NAME||'''
and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
exec fix_us7_strings('MY_CONTENTS','TITLE');
commit;
To validate this before and after, we can run something like:
select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
Note that we face some extra difficulty here, not just because we are dealing with CLOBs, but because we are working in 9i, which offers far less CLOB-related functionality.
This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
create or replace procedure find_us7_clob
(table_name varchar2,
fix_col varchar2)
authid current_user
as
orig_sql varchar2(1000);
type cv_type is REF CURSOR;
orig_table_cur cv_type;
my_chars_read NUMBER;
my_offset NUMBER;
my_problem NUMBER;
my_lob_size NUMBER;
my_indx_var NUMBER;
my_total_chars_read NUMBER;
my_output_chunk VARCHAR2(4000);
my_problem_flag NUMBER;
my_clob CLOB;
my_total_problems NUMBER;
ins_sql VARCHAR2(4000);
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
open orig_table_cur for orig_sql;
my_total_problems := 0;
LOOP
FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
EXIT WHEN orig_table_cur%NOTFOUND;
my_offset :=1;
my_chars_read := 512;
my_problem_flag :=0;
WHILE my_offset < my_lob_size and my_problem_flag =0
LOOP
DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
my_offset := my_offset + my_chars_read;
IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
THEN
-- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
-- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
my_problem_flag:=1;
END IF;
END LOOP;
IF my_problem_flag=1
THEN my_total_problems := my_total_problems +1;
ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
execute immediate ins_sql;
END IF;
END LOOP;
DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
END;
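The loop's control flow -- read 512 characters at a time, stop at the first chunk that fails the round trip -- can be sketched outside the database like this (a Python illustration of the logic, not of DBMS_LOB itself):

```python
CHUNK_SIZE = 512  # same chunk size as the DBMS_LOB.READ call above

def clob_has_bad_chars(clob: str) -> bool:
    # Walk the text in 512-character chunks; flag the row and stop as
    # soon as one chunk does not survive the ASCII round trip.
    offset = 0
    while offset < len(clob):
        chunk = clob[offset:offset + CHUNK_SIZE]
        if chunk != chunk.encode("ascii", errors="replace").decode("ascii"):
            return True
        offset += len(chunk)
    return False

print(clob_has_bad_chars("x" * 2000))              # False
print(clob_has_bad_chars("x" * 2000 + "\u00e9"))   # True
```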
And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
exec find_us7_clob('MY_CONTENTS','DATA');
After completion, the CNV_US7 table looked like this:
RESTOREDB> set linesize 120 pagesize 100;
RESTOREDB> select count(1),mytablename,mycolumnname from cnv_us7
where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
where data_type='CLOB' )
group by mytablename,mycolumnname;
COUNT(1) MYTABLENAME MYCOLUMNNAME
69703 MY_CONTENTS DATA
On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
-- transforming CLOB to BLOB
l_off number default 1;
l_amt number default 4096;
l_offWrite number default 1;
l_amtWrite number;
l_str varchar2(4096 char);
begin
loop
dbms_lob.read ( p_clob, l_amt, l_off, l_str );
l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
utl_raw.cast_to_raw( l_str ) );
l_offWrite := l_offWrite + l_amtWrite;
l_off := l_off + l_amt;
l_amt := 4096;
end loop;
exception
when no_data_found then
NULL;
end;
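The procedure reads the CLOB 4096 characters at a time, casts each chunk to raw bytes, and appends it to the BLOB, relying on NO_DATA_FOUND to end the loop. A Python analogue of the same copy loop (assuming, for illustration, a Latin-1 style single-byte character set):

```python
def clob_to_blob(clob: str, encoding: str = "latin-1") -> bytes:
    # Read 4096 characters per pass, cast each chunk to raw bytes
    # (the utl_raw.cast_to_raw step), and append at the write offset.
    blob = bytearray()
    offset = 0
    while offset < len(clob):
        chunk = clob[offset:offset + 4096]
        blob += chunk.encode(encoding)
        offset += len(chunk)
    return bytes(blob)

print(clob_to_blob("caf\u00e9") == "caf\u00e9".encode("latin-1"))  # True
```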
We can test out the transformation of CLOBs to BLOBs with a single row like this:
drop table my_contents_lob;
Create table my_contents_lob (pk1 number,data blob);
DECLARE
v_clob CLOB;
v_blob BLOB;
BEGIN
SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
clob2blob (v_clob, v_blob);
END;
select dbms_lob.getlength(data) from my_contents_lob;
DBMS_LOB.GETLENGTH(DATA)
329
SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
UTL_RAW.CAST_TO_VARCHAR2(DATA)
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
create table my_contents_lob(pk1 number,data blob);
create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
create or replace procedure blob_conversion_my_contents
(table_name varchar2,
fix_col varchar2)
authid current_user
as
orig_sql varchar2(1000);
type cv_type is REF CURSOR;
orig_table_cur cv_type;
my_chars_read NUMBER;
my_offset NUMBER;
my_problem NUMBER;
my_lob_size NUMBER;
my_indx_var NUMBER;
my_total_chars_read NUMBER;
my_output_chunk VARCHAR2(4000);
my_problem_flag NUMBER;
my_clob CLOB;
my_blob BLOB;
my_total_problems NUMBER;
new_sql VARCHAR2(4000);
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
open orig_table_cur for orig_sql;
LOOP
FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
EXIT WHEN orig_table_cur%NOTFOUND;
new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
dbms_output.put_line(new_sql);
execute immediate new_sql;
-- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
-- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
-- dbms_output.put_line(new_sql);
select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
clob2blob(my_clob,my_blob);
END LOOP;
CLOSE orig_table_cur;
DBMS_OUTPUT.PUT_LINE('Completed program');
END;
exec blob_conversion_my_contents('MY_CONTENTS','DATA');
Verify that things work properly:
select dump( utl_raw.cast_to_varchar2(data)) from my_contents_lob where pk1=xxxx;
This should let you see characters > 150. Thus, the method works.
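As a rough analogue of what DUMP shows, listing each character's code point makes out-of-range values obvious (a Python sketch; 127 is the US7ASCII ceiling):

```python
def dump_codes(s: str) -> list:
    # Rough analogue of Oracle's DUMP(): one code point per character,
    # so anything above 127 (outside US7ASCII) stands out immediately.
    return [ord(c) for c in s]

print(dump_codes("caf\u00e9"))  # [99, 97, 102, 233] -- the 233 flags the é
```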
We can now take this data, export it from RESTORECLONE
exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
and import the data on prodclone
imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
For paranoia's sake, double check that it worked properly:
select dump( utl_raw.cast_to_varchar2(data)) from my_contents_lob;
On our 10g PRODCLONE, we'll use these stored procedures:
CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
L_BLOB BLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
DBMS_LOB.CONVERTTOBLOB(L_BLOB,
L_CLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
1,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_BLOB;
END;
CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
L_CLOB CLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
DBMS_LOB.CONVERTTOCLOB(L_CLOB,
L_BLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
1,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_CLOB;
END;
And now, for the pièce de résistance: we need a BLOB-to-CLOB conversion that assumes the BLOB data was originally stored as WE8ISO8859P1.
To find correct CSID for WE8ISO8859P1, we can use this query:
select nls_charset_id('WE8ISO8859P1') from dual;
Gives "31"
create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
L_CLOB CLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := 31; -- treat the BLOB bytes as WE8ISO8859P1 (CSID 31)
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX; -- language context is separate from the charset id
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
DBMS_LOB.CONVERTTOCLOB(L_CLOB,
L_BLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
L_BLOB_CSID,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_CLOB;
END;
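The point of pinning the CSID to 31 is that the same bytes decode very differently depending on the character set you claim they are in. In Python terms (a sketch; 'latin-1' plays the role of WE8ISO8859P1, 'utf-8' the role of an AL32UTF8 default):

```python
raw = bytes([0x63, 0x61, 0x66, 0xE9])  # 'café' stored as WE8ISO8859P1 bytes

# Decoding with the wrong character set fails outright ...
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    print("0xE9 is not valid UTF-8 here")

# ... while declaring the correct source set (the CSID 31 analogue) recovers it.
print(raw.decode("latin-1"))  # café
```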
select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
Now, we can compare these:
select dbms_lob.compare(blob2clob(old.data),new.data) from my_contents new,my_contents_lob old where new.pk1=old.pk1;
DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
0
0
0
Vs
select dbms_lob.compare(blob2clobasc(old.data),new.data) from my_contents new,my_contents_lob old where new.pk1=old.pk1;
DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
-1
-1
-1
update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
Confirms that we're now working properly.
To run across all the _LOB tables we've created:
[oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
[oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
And then on PRODCLONE we can import:
imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
create or replace procedure fix_us7_CLOBS
(TABLE_NAME varchar2,
FIX_COL varchar2 )
authid current_user
as
orig_sql varchar2(1000);
bak_sql varchar2(1000);
begin
dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
execute immediate bak_sql;
orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
where pk1 in (
select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
where a.pk1=b.pk1
and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
-- dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
set serveroutput on time on timing on;
exec fix_us7_clobs('MY_CONTENTS','DATA');
commit;
After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than to say that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
A summary:
1) We replaced the lossy characters by parsing a csscan output file
2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changing the character set, switched the semantics to char, etc).
3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
Our actual error message:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00210: expected '<' instead of '�Error at line 1
31011. 00000 - "XML parsing failed"
*Cause: XML parser returned an error while trying to parse the document.
*Action: Check if the document to be parsed is valid.
Error at Line: 24 Column: 15
This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
Please advise if more information is needed from my end. -
How do we convert the Extended ASCII character to UTF8 without using the ALTER DATABASE CHARACTER SET command
Is the CONVERT function what you are looking for? (http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/functions027.htm#i77037)
SQL> select convert('a','utf8','us7ascii') from dual;
C
a -
9.2 convert ASCII to UTF8 welsh language
hello
I have a 9.2 ASCII database that I can't convert to UTF8 yet.
1. For an output (util file) I need to convert an ASCII text string to UTF-8 on export.
2. I have two characters that are not supported by ASCII, ŵ and ŷ; the users will represent these by typing w^ and y^.
I tried using UNISTR but none of the characters below are correctly converted:
SELECT UNISTR(ASCIISTR('剔搙')) FROM DUAL;
How would you recommend converting an ASCII / Latin-1 extended string to UTF-8 for export?
Is it sensible to use the character-replacement plan above for ŵ and ŷ?
thanks
james

Probably the unconverted characters are not contained in the first charset.
If this is right.
http://en.wikipedia.org/wiki/Windows-1252
...there is no conversion for values outside the first charset.
But I may have made a mistake.
Are you sure Â, â, î, Ê, ê and ô are in the 1252 charset?
I am not able to see if there is a difference between the similar chars in the table on wikipedia and the ones you posted, that is why I asked.
Anyway this output seems to verify my indication.
Processing ...
SELECT convert ('Ââî€Êêô','WE8MSWIN1252','UTF8') FROM DUAL
Query finished, retrieving results...
CONVERT('¨âêô','WE8MSWIN1252','UTF8')
¨¨¨¨
1 row(s) retrieved
Processing ...
SELECT convert ('Ââî€Êêô','UTF8','UTF8') FROM DUAL
Query finished, retrieving results...
CONVERT('¨âêô','UTF8','UTF8')
¨âêô
1 row(s) retrieved
Processing ...
SELECT convert ('Ââî€Êêô','UTF8','WE8MSWIN1252') FROM DUAL
Query finished, retrieving results...
CONVERT('¨âêô','UTF8','WE8MSWIN1252')
¶¨Ç?¶îÇ?¶¨Ç?¶ô
1 row(s) retrieved
Processing ...
SELECT convert ('Ââî€Êêô','WE8PC858','UTF8') FROM DUAL
Query finished, retrieving results...
CONVERT('¨âêô','WE8PC858','UTF8')
1 row(s) retrieved
Processing ...
SELECT convert ('Ââî€Êêô','UTF8','WE8PC858') FROM DUAL
Query finished, retrieving results...
CONVERT('¨âêô','UTF8','WE8PC858')
ƒ??Ç?¶¯Ç?ƒ??ƒ??Ç?
1 row(s) retrieved

Some characters are not supported on my DB, so try these queries on yours to prove it.
SELECT convert ('Ââî€Êêô','WE8MSWIN1252','UTF8') FROM DUAL;
SELECT convert ('Ââî€Êêô','UTF8','UTF8') FROM DUAL;
SELECT convert ('Ââî€Êêô','UTF8','WE8MSWIN1252') FROM DUAL;
SELECT convert ('Ââî€Êêô','WE8PC858','UTF8') FROM DUAL;
SELECT convert ('Ââî€Êêô','UTF8','WE8PC858') FROM DUAL;

Bye Alessandro -
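The € in the test string above is exactly where WE8MSWIN1252 and plain Latin-1 part ways: CP1252 maps bytes 0x80-0x9F to printable characters, where ISO-8859-1 has only control codes. That gap can be checked quickly in Python (a sketch; cp1252 and latin-1 stand in for WE8MSWIN1252 and WE8ISO8859P1):

```python
# '€' exists in CP1252 (byte 0x80) but not in ISO-8859-1.
print("\u20ac".encode("cp1252"))   # b'\x80'
print(b"\x80".decode("latin-1"))   # U+0080, an invisible control character
print("\u00c2".encode("cp1252"))   # b'\xc2' -- Â exists in both charsets
```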
Using convert function on nvarchar2
Hi experts,
I am having a bit of a problem with the convert function. We use convert to compare street- and citynames and ignore any special characters, such as é ç etc.
Something like:
select ...
from ...
where convert(new_street_name, 'US7ASCII') = convert(existing_street_name, 'US7ASCII')
This works fine if the datatype is varchar2, for instance:
SQL> select convert('äàáâçëèéêïìíîöòóôüùúûÿ','US7ASCII') text from dual;
TEXT
aaaaceeeeiiiioooouuuuy
If the datatype if nvarchar2 however, the result is not as expected:
SQL> select convert(cast('äàáâçëèéêïìíîöòóôüùúûÿ'as nvarchar2(64)),'US7ASCII') text from dual;
TEXT
慡慡捥敥敩楩楯潯潵畵
The NLS character settings on our database (10.2.0.4) are:
NLS_CHARACTERSET AL32UTF8 Character set
NLS_NCHAR_CONV_EXCP FALSE NLS conversion exception
NLS_NCHAR_CHARACTERSET AL16UTF16 NCHAR Character set
I have tried several combinations... but no luck so far. Is it possible to use convert on an nvarchar2 to go from é to e?
Maybe it is better just to use the translate function and define each conversion explicitly. Convert seemed a nice option because it works without any additional parameters... on a varchar2 at least
Thanks!

The usage of convert is not encouraged by the docs, and in my opinion it is rather by accident that this works in your specific case than anything else.
What's going on?
Convert returns the char-datatype of the input.
We can use a simple to_char to use the convert function in the way you intend.
(You should take care when handling NCHAR data in SQL statements, especially in 10g environments. You have to set the environment variable ORA_NCHAR_LITERAL_REPLACE=TRUE and use the N-prefix. Take a look at the Globalization Support Guide for more details.)
CREATE TABLE "TESTNCHAR"
( "ID" NUMBER,
"STR" NVARCHAR2(30),
"STR2" VARCHAR2(300)
);
insert into testnchar values (1, n'ßäàáâçëèéêïìíîöòóôüùúûÿ','ßäàáâçëèéêïìíîöòóôüùúûÿ');
select
id
,str,str2
,dump(str,1010) dmp
,dump(str2,1010) dmp2
,dump(convert(str,'US7ASCII')) dc
,dump(convert(str2,'US7ASCII')) dc2
,convert(to_char(str),'US7ASCII') c
,convert(str2,'US7ASCII') c2
from testnchar
ID
STR
STR2
DMP
DMP2
DC
DC2
C
C2
1
ßäàáâçëèéêïìíîöòóôüùúûÿ
ßäàáâçëèéêïìíîöòóôüùúûÿ
Typ=1 Len=46 CharacterSet=AL16UTF16: 0,223,0,228,0,224,0,225,0,226,0,231,0,235,0,232,0,233,0,234,0,239,0,236,0,237,0,238,0,246,0,242,0,243,0,244,0,252,0,249,0,250,0,251,0,255
Typ=1 Len=46 CharacterSet=AL32UTF8: 195,159,195,164,195,160,195,161,195,162,195,167,195,171,195,168,195,169,195,170,195,175,195,172,195,173,195,174,195,182,195,178,195,179,195,180,195,188,195,185,195,186,195,187,195,191
Typ=1 Len=23: 63,97,97,97,97,99,101,101,101,101,105,105,105,105,111,111,111,111,117,117,117,117,121
Typ=1 Len=23: 63,97,97,97,97,99,101,101,101,101,105,105,105,105,111,111,111,111,117,117,117,117,121
?aaaaceeeeiiiioooouuuuy
?aaaaceeeeiiiioooouuuuy
We can see that this already fails for ß.
To give you an idea of alternative approaches:
create table test_comp (nid number, str1 varchar2(300), str2 varchar2(300))
insert into test_comp values (1, 'ßäàáâçëèéêïìíîöòóôüùúûÿ','ssäàáâçëèéêïìíîöòóôüùúûÿ')
insert into test_comp values (2, 'ßäàáâçëèéêïìíîöòóôüùúûÿ','säàáâçëèéêïìíîöòóôüùúûÿ')
select *
from test_comp
where
str2 like str1
no data found
select *
from test_comp
where
upper(str2) like NLS_UPPER(str1, 'NLS_SORT = XGERMAN')
NID
STR1
STR2
1
ßäàáâçëèéêïìíîöòóôüùúûÿ
ssäàáâçëèéêïìíîöòóôüùúûÿ -
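For the é-to-e goal of the nvarchar2 thread above, an alternative that avoids CONVERT entirely is Unicode decomposition: split each accented character into its base letter plus combining marks, then drop the marks. A Python sketch of the idea (note it shares CONVERT's blind spot: ß has no decomposition, so it passes through unchanged):

```python
import unicodedata

def strip_accents(s: str) -> str:
    # NFD splits 'é' into 'e' + a combining accent; dropping the
    # combining marks leaves only the base letters.
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(strip_accents("äàáâçëèéêïìíîöòóôüùúûÿ"))  # aaaaceeeeiiiioooouuuuy
print(strip_accents("ß"))                       # ß -- unchanged, as with CONVERT
```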
Reading Data from SQL Server 2000 Linked Servers
Hello,
I have a colleague who wants to read data from my FRENCH_FRANCE.US7ASCII Oracle 8i database through an OO4O-driver linked server.
He has trouble reading data with "é", "à", and so on: they are always replaced by "?".
Since I asked him to use SQL Developer on his server to check for configuration problems, his PL/SQL queries return "square figure" characters (not the usual "²") instead of "?".
For info, I told him to use CONVERT(column, 'FR8DEC', 'US7ASCII'), but it doesn't work either (I also tried with 'UTF8' and 'WE8MSWIN1252').
Is there a way with the SQL Server collation ???
How can I solve his problem ???

The select query returns the following infos:
PARAMETER VALUE
NLS_CHARACTERSET US7ASCII
NLS_NCHAR_CHARACTERSET US7ASCII
My colleague uses the 10.2.0.1.0 client with ODAC 10.2.0.2.20.
His SQL Server linked server is 2000 with the SQL_Latin1_General_CP437_BIN (he tried the use of the distant NLS_LANG option and the no set the NLS_LANG too with no results).
Thanks in advance,
Yours,
Mickaël -
How to change the NLS CHARACTER SET (Oracle 7)
Product: ORACLE SERVER
Date written: 2004-11-09
How to change the NLS CHARACTER SET (Oracle 7)
======================================
PURPOSE
This note describes how to change the NLS CHARACTER SET of an Oracle RDBMS server.
[ Oracle 7 only ]
The database character set is stored in the data dictionary table sys.props$.
SQL>desc sys.props$
Name Null? Type
NAME NOT NULL VARCHAR2(30)
VALUE$ VARCHAR2(2000)
COMMENT$ VARCHAR2(2000)
SQL>column c1 format a30
SQL>select name c1, value$ c1 from sys.props$;
C1 C1
DICT.BASE 2
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_DATE_FORMAT DD-MON-YY
NLS_DATE_LANGUAGE AMERICAN
NLS_CHARACTERSET US7ASCII
NLS_SORT BINARY
GLOBAL_DB_NAME NLSV7.WORLD
NLS_CHARACTERSET holds the current database character set; changing this value changes the character set of the database.
Here we walk through moving from US7ASCII to KO16KSC5601.
First, check that the target character set is supported, using the following command:
select convert('a','KO16KSC5601','US7ASCII') from dual;
If this SELECT raises ORA-01482, the specified character set is not supported; if no error occurs, the character set can be changed.
Before starting this work, back up the entire database just in case: if the character set is changed incorrectly, the database cannot be opened.
1. Run the following UPDATE to change the character set.
UPDATE sys.props$
SET value$ = 'KO16KSC5601'
WHERE name = 'NLS_CHARACTERSET';
If NLS_CHARACTERSET is accidentally set to an unsupported value, or a control character slips in, the database will not start after the next shutdown, so verify with the following query before committing:
select name, value$
from sys.props$
where value$ = 'KO16KSC5601';
If the SELECT returns the row correctly, COMMIT, then shut down and start up: the database now has the new character set. If the SELECT returns nothing, ROLLBACK and redo the UPDATE.
2. Change the NLS_LANG environment variable.
In .profile:
NLS_LANG=American_America.KO16KSC5601; export NLS_LANG
or in .cshrc:
setenv NLS_LANG American_America.KO16KSC5601
*** On Windows 95 and Windows NT clients, set the NLS_LANG value in the registry editor.
Caution !!!
Problems that can arise with the steps above:
1) If KO16KSC5601 or US7ASCII is misspelled during the UPDATE, the following error occurs after the database is restarted:
ora-12708
12708, 00000, "error while loading create database NLS parameter %s"
2) Comparison of KO16KSC5601 and US7ASCII
Character set: KO16KSC5601
===================================
A double-byte encoding scheme, so some functions count the 2 bytes of a Hangul character as length 1.
Hangul can also be used in table and column names without double quotes.
Example)
SQL> create table 시험1
2 (컬럼1 varchar(10));
가
a
홍길동
SQL> select length(컬럼1) from 시험1;
1
1
3
SQL> select lengthb(컬럼1) from 시험1;
2
1
6
Character set: US7ASCII
=================================
A single-byte, 7-bit encoding.
Table and column names in Hangul must be enclosed in double quotes.
Example)
SQL> create table "사원"
2 ("사원이름" varchar2(10));
가
a
홍길동
SQL> select length("사원이름") from "사원";
2
1
6
SQL> select lengthb("사원이름") from "사원";
2
1
6
With US7ASCII, the results of the VSIZE and LENGTHB functions are all identical to LENGTH.
cf) SUBSTR and SUBSTRB behave the same way as LENGTH and LENGTHB above.