XSLT, Call transformation, Character Set
Hi
Using CALL TRANSFORMATION (XSLT) to convert an XML file to HTML, the problem is that
when the HTML is displayed in SAP, the name 'Søren Fjællegård' is displayed as 'S#ren Fj#lleg#rd'.
What can be done? Any ideas for handling special character sets?
Hi Jon,
What is the XSLT program that you're using to transform the XML?
Can you try modifying the XSLT stylesheet by adding or adjusting the following output declaration?
<xsl:output encoding="UTF-8" indent="yes"/>
Regards,
Erwin
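For what it's worth, '#' characters are the classic sign of characters being replaced because they do not exist in the target code page. A minimal sketch of the mechanism in plain Python (not SAP code):

```python
# Sketch (plain Python, not SAP code): characters outside the output
# code page get replaced with a placeholder -- the same mechanism that
# turns 'Søren Fjællegård' into 'S#ren Fj#lleg#rd'.
name = "Søren Fjællegård"

# Lossy path: anything outside 7-bit ASCII becomes '#'.
lossy = "".join(c if ord(c) < 128 else "#" for c in name)
print(lossy)  # S#ren Fj#lleg#rd

# Lossless path: UTF-8 round-trips every character.
assert name.encode("utf-8").decode("utf-8") == name
```

So the goal is to keep the whole chain (XSLT output encoding, HTTP headers, display) on a Unicode encoding such as UTF-8.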
Similar Messages
-
XML file to Internal table through XSLT - Call Transformation
Hi Friends,
I am trying to work out a scenario where I have a simple XML file and I need to convert the data into an internal table. When I execute the XSLT separately, it works fine and I get the output, but when it is invoked through an ABAP program, I get the error "The called method START_XSLT_DEBUGGER of the class CL_WB_XSLT_DEBUGGER returned the exception CX_XSLT_FORMAT_ERROR".
I feel my XSLT program is not correct, but I am unable to find out what the issue is. Any help is really appreciated. I have gone through the SDN forum replies but could not figure out what is wrong with my program.
Below given are the details.
My XML File:
<?xml version="1.0" encoding="utf-8"?>
<List>
<ITEM>
<ITEMQUALF>ITEM1</ITEMQUALF>
<MATERIAL>MAT1</MATERIAL>
</ITEM>
<ITEM>
<ITEMQUALF>ITEM2</ITEMQUALF>
<MATERIAL>MAT2</MATERIAL>
</ITEM>
<ITEM>
<ITEMQUALF>ITEM3</ITEMQUALF>
<MATERIAL>MAT3</MATERIAL>
</ITEM>
</List>
My XSLT program:
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:sap="http://www.sap.com/sapxsl" xmlns:asx="http://www.sap.com/abapxml" exclude-result-prefixes="asx" version="1.0">
<xsl:strip-space elements="*"/>
<xsl:output encoding="utf-8" indent="yes" omit-xml-declaration="yes"/>
<!--xsl:template match="/"-->
<xsl:template match="List">
<asx:abap version="1.0">
<asx:values>
<T_ACTUAL>
<xsl:for-each select="*">
<ITEMQUALF>
<xsl:value-of select="ITEMQUALF" />
</ITEMQUALF>
<MATERIAL>
<xsl:value-of select="MATERIAL" />
</MATERIAL>
</xsl:for-each>
</T_ACTUAL>
</asx:values>
</asx:abap>
</xsl:template>
</xsl:transform>
In my ABAP program:
REPORT z_xslt_abap_2.
TYPES:
BEGIN OF ty_actual,
itemqualf TYPE char50,
material TYPE char50,
END OF ty_actual,
line_t(4096) TYPE x,
table_t TYPE STANDARD TABLE OF line_t,
ty_t_actual TYPE STANDARD TABLE OF ty_actual.
DATA:
t_actual TYPE ty_t_actual,
t_srctab TYPE table_t,
v_filename TYPE string.
DATA: gs_rif_ex TYPE REF TO cx_root,
gs_var_text TYPE string.
v_filename = 'D:\XML\xslt_test.xml'.
* Function call
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = v_filename
filetype = 'BIN'
TABLES
data_tab = t_srctab
EXCEPTIONS
OTHERS = 1.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
* Call Transformation
TRY.
CALL TRANSFORMATION (`ZXSLT_RAM`)
SOURCE XML t_srctab
RESULT t_actual = t_actual.
CATCH cx_root INTO gs_rif_ex.
gs_var_text = gs_rif_ex->get_text( ).
MESSAGE gs_var_text TYPE 'E'.
ENDTRY .
IF t_actual IS NOT INITIAL.
WRITE: 'Success'.
ENDIF.
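As an aside, the BIN upload hands CALL TRANSFORMATION the raw bytes of the file, and the XML parser then picks the codec from the XML declaration. A minimal sketch of that decoding step in plain Python (not SAP code; the byte string is a made-up example):

```python
# Sketch (plain Python, not SAP code): a BIN upload is just raw bytes;
# the encoding pseudo-attribute in the XML declaration says which codec
# to decode them with.
import re

raw = b'<?xml version="1.0" encoding="utf-8"?><List/>'

m = re.search(rb'encoding="([^"]+)"', raw)
codec = m.group(1).decode("ascii") if m else "utf-8"
text = raw.decode(codec)
print(codec)  # utf-8
```

This is why passing the binary table directly as SOURCE XML is usually safer than converting it to character lines yourself.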
When I run the XSLT program separately, this is the output that I get:
<asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
<asx:values>
<T_ACTUAL>
<ITEMQUALF>ITEM1</ITEMQUALF>
<MATERIAL>MAT1</MATERIAL>
<ITEMQUALF>ITEM2</ITEMQUALF>
<MATERIAL>MAT2</MATERIAL>
<ITEMQUALF>ITEM3</ITEMQUALF>
<MATERIAL>MAT3</MATERIAL>
</T_ACTUAL>
</asx:values>
</asx:abap>
I have been stuck on this for more than two days. If anyone can help me out, it would be really great. Please let me know where I am going wrong.
Thanks in advance.
Best Regards,
Ram.
Hi,
You can try this sample program; hopefully it will help you.
<a href="https://www.sdn.sap.com/irj/sdn/wiki?path=/display/snippets/readdatafromXMLfileviaXSLT+program&">Read Data From XML</a>.
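As a cross-check of what the transformation must produce: each <ITEM> should become one row of T_ACTUAL. A plain-Python sketch (not SAP code) of that mapping; note that in SAP's asXML format each table row normally needs its own wrapper element, which is worth verifying in the stylesheet above, since its for-each emits the fields without a per-row wrapper:

```python
# Sketch (plain Python, not SAP code): one <ITEM> element maps to one
# row with components ITEMQUALF and MATERIAL.
import xml.etree.ElementTree as ET

XML = """<List>
  <ITEM><ITEMQUALF>ITEM1</ITEMQUALF><MATERIAL>MAT1</MATERIAL></ITEM>
  <ITEM><ITEMQUALF>ITEM2</ITEMQUALF><MATERIAL>MAT2</MATERIAL></ITEM>
  <ITEM><ITEMQUALF>ITEM3</ITEMQUALF><MATERIAL>MAT3</MATERIAL></ITEM>
</List>"""

rows = [
    {"ITEMQUALF": item.findtext("ITEMQUALF"),
     "MATERIAL": item.findtext("MATERIAL")}
    for item in ET.fromstring(XML).findall("ITEM")
]
print(rows[0])  # {'ITEMQUALF': 'ITEM1', 'MATERIAL': 'MAT1'}
```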
Regards, -
XML to Internal table using XSLT by CALL TRANSFORMATION error
Dear experts,
I have to fetch the data from an XML file into internal tables using XSLT. The XML file is very big, as follows:
<?xml version="1.0" standalone="yes" ?>
<Shipment>
<shipmentID>25091203S000778</shipmentID>
<manifestDateTime>2009-12-03T20:16:52.00</manifestDateTime>
<shipmentFacilityNumber>025</shipmentFacilityNumber>
<shipmentFacilityAbbreviation>CHI</shipmentFacilityAbbreviation>
<shipmentFacilityAddress1>810 KIMBERLY DRIVE</shipmentFacilityAddress1>
<shipmentFacilityAddress2 />
<shipmentFacilityCity>CAROL STREAM</shipmentFacilityCity>
<shipmentFacilityState>IL</shipmentFacilityState>
<shipmentFacilityPostalCode>601880000</shipmentFacilityPostalCode>
<shipmentTruckCarrierCode>X150</shipmentTruckCarrierCode>
<shipmentSourceCode>T</shipmentSourceCode>
<userID>CAMPOSG</userID>
<Delivery>
<primaryCustomerNumber>954371</primaryCustomerNumber>
<primaryCustomerName>MIDWEST OFFICE SUPPLY</primaryCustomerName>
<primaryCustomerAddress1 />
<primaryCustomerAddress2>4765 INDUSTRIAL DR</primaryCustomerAddress2>
<primaryCustomerCity>SPRINGFIELD</primaryCustomerCity>
<primaryCustomerState>IL</primaryCustomerState>
<primaryCustomerPostalCode>627030000</primaryCustomerPostalCode>
<primaryCustomerPhoneNumber>2177535555</primaryCustomerPhoneNumber>
<shuttleStopFacilityNumber />
<billOfLadingNumber>25HZK99</billOfLadingNumber>
<carrierProNumber />
<shipmentTotalCartonCount>6</shipmentTotalCartonCount>
<shipmentTotalWeight>266</shipmentTotalWeight>
<order>
<orderNumber>25HZK99</orderNumber>
<subOrderNumber />
<dateProcessed>2009-12-03</dateProcessed>
<primaryOrderNumber />
<shipTruckCode>X150</shipTruckCode>
<shipTruckDescription>UDS - ADDISON</shipTruckDescription>
<shipTruckPriorityCode>01</shipTruckPriorityCode>
<shipTruckGroupCode>01</shipTruckGroupCode>
<shipTruckDepartureTime>20.00.00</shipTruckDepartureTime>
<shipTruckDockID>07</shipTruckDockID>
<ldpFacilityAbbreviation />
<shuttleAvailableIndicator>N</shuttleAvailableIndicator>
<shuttleMessageText />
<crossDockFacilityCode />
<crossDockTruckCode />
<crossDockID />
<subsidizedFreightTruckID />
<customerPurchaseOrderNumber>623559</customerPurchaseOrderNumber>
<headerTypeCode>P</headerTypeCode>
<orderTypeID>RG</orderTypeID>
<deliveryTypeID>DS</deliveryTypeID>
<deliveryMethodCode />
<customerBarCode />
<customerReferenceData>25HZK99</customerReferenceData>
<customerReferenceText />
<customerRouteData>ZNED UNTED</customerRouteData>
<customerRouteText>ROUTE</customerRouteText>
<endConsumerPurchaseOrderNumber />
<endConsumerPurchaseOrderText />
<endConsumerName>CHARLESTON TRANS. FACILITY</endConsumerName>
<endConsumerAddress1>HOMEWOOD DT PROGRAM DEPT. 3</endConsumerAddress1>
<endConsumerAddress2>17341 PALMER BLVD.</endConsumerAddress2>
<endConsumerAddress3 />
<endConsumerCity>HOMEWOOD</endConsumerCity>
<endConsumerState>IL</endConsumerState>
<endConsumerPostalCode>60430</endConsumerPostalCode>
<endConsumerCountryCode />
<fillFacilityNumber>025</fillFacilityNumber>
<shpFacilityNumber>025</shpFacilityNumber>
<homeFacilityAbbrCode>STL</homeFacilityAbbrCode>
<homeFacilityNumber>015</homeFacilityNumber>
<multiCartonIndicator>Y</multiCartonIndicator>
<primaryCustomerIndicator>Y</primaryCustomerIndicator>
<shipToCustomerNumber>954371001</shipToCustomerNumber>
<customerCompanyID>01</customerCompanyID>
<customerTruckID>U888</customerTruckID>
<customerTruckDescription>UDS - ADDISON</customerTruckDescription>
<customerTruckDockID>13</customerTruckDockID>
<thirdPartyBillCarrier />
<thirdPartyBillID />
<thirdPartyBillType />
<qualityCheckIndicator>N</qualityCheckIndicator>
<warehouseLaydownID />
<packListPosition>I</packListPosition>
<preferredPackingType>CTN</preferredPackingType>
<preferredPackingMaterial>PAPER</preferredPackingMaterial>
<preferedPackingInstructions />
<totalOrderCartonQty>6</totalOrderCartonQty>
<convertAddressIndicator>N</convertAddressIndicator>
<dealerInstructionIndicator>Y</dealerInstructionIndicator>
<dealerinstructions1>CPO#: 623559</dealerinstructions1>
<dealerinstructions2>ATTN: DANA GRIFFIN</dealerinstructions2>
<dealerinstructions3>INFO: 612</dealerinstructions3>
<dealerinstructions4>ROUTE: ZNED UNTED</dealerinstructions4>
<dealerinstructions5 />
<dealerinstructions6 />
<shippingInstructionsIndicator>N</shippingInstructionsIndicator>
<shippingInstructions1 />
<shippingInstructions2 />
<shippingInstructions3 />
<shippingInstructions4 />
<shippingInstructions5 />
<shippingInstructions6 />
<specialInstructionsIndicator>N</specialInstructionsIndicator>
<specialInstructions1 />
<specialInstructions2 />
<customeContainerDesc />
<carton>
<deliveryCartonID>253370905995</deliveryCartonID>
<shipIndicator>Y</shipIndicator>
<deliveryPalletID>X150</deliveryPalletID>
<consolidatedDeliveryCartonID />
<scanDateTime>2009-12-03T19:36:12.00</scanDateTime>
<cartonWeight>52</cartonWeight>
<dropShipFlag>1</dropShipFlag>
<carrierTrackingNumber />
<carrierZoneID>0</carrierZoneID>
<codAmount />
<customerPackageAmount />
<declaredValue />
<residentialDeliveryIndicator />
<serviceTypeCode>00</serviceTypeCode>
<ssccCode>006860244400829393</ssccCode>
<Item>
<shipPrefix>UNV</shipPrefix>
<shipStockNumber>21200</shipStockNumber>
<itemDescription>PAPER XERO/DUP WE LTR 20#</itemDescription>
<orderQuantity>1</orderQuantity>
<originalShipQuantity>1</originalShipQuantity>
<shipQuantity>1</shipQuantity>
<inventoryUnitCode>CT</inventoryUnitCode>
<inventoryWeightQuantity>52.000</inventoryWeightQuantity>
<upcNumber>00000000000000</upcNumber>
<upcRetailCode>087547212004</upcRetailCode>
<hazmatIndicator>N</hazmatIndicator>
<serialRequiredIndicator>N</serialRequiredIndicator>
<dealerMemoPO>S</dealerMemoPO>
<cartonLineNumber>1</cartonLineNumber>
<orderLineNumber>11</orderLineNumber>
<originalOrderPrefix>UNV</originalOrderPrefix>
<originalOrderStockNumber>21200</originalOrderStockNumber>
<reasonCode />
<Item_Serial>
<serialNumber />
</Item_Serial>
</Item>
</carton>
</order>
</Delivery>
</Shipment>
This is not the complete XML file; it exceeds the 15,000-character limit, so I can't post all of it here and have deleted much of it.
The hierarchy is as follows: Shipment->Delivery->Order->Carton->Item.
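For orientation, walking that hierarchy generically looks like this plain-Python sketch (not SAP code; the document here is a made-up miniature of the file above):

```python
# Sketch (plain Python, not SAP code): walking the
# Shipment -> Delivery -> order -> carton -> Item hierarchy.
import xml.etree.ElementTree as ET

XML = """<Shipment><shipmentID>S1</shipmentID>
  <Delivery><order><carton>
    <Item><shipStockNumber>21200</shipStockNumber></Item>
  </carton></order></Delivery>
</Shipment>"""

root = ET.fromstring(XML)
items = root.findall("./Delivery/order/carton/Item")
print(len(items))  # 1
```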
I have created an XSLT transformation for it, which works fine on its own.
But when I execute my report program, it raises CX_SY_XSLT_FORMAT_ERROR with the message:
Transformation error: Non-canonical structure of element name XML_OUTPUT.
Dear experts,
My report program is as following:-
*& Report Z_ASNTRNS
REPORT Z_ASNTRNS.
*& Report Z_ASNTRNS
TYPE-POOLS: abap, ixml.
TABLES: ZASN_SHIPMENT,ZASN_DELIVERY,ZASN_ORDER,ZASN_CARTON,ZASN_ITEM.
*CONSTANTS gs_file TYPE string VALUE 'C:\Documents and Settings\C5134126\Desktop\Rajesh_kandakatla\SampleASNFile.xml'.
* This is the structure for the data from the XML file
TYPES: BEGIN OF ts_item,
ZSHIPMENT LIKE ZASN_ITEM-ZSHIPMENT,
VBELN LIKE ZASN_ITEM-VBELN,
ORDER_NUMBER LIKE ZASN_ITEM-ORDER_NUMBER,
CARTON_ID LIKE ZASN_ITEM-CARTON_ID,
ITEM LIKE ZASN_ITEM-ITEM,
CARTON_LINE_NUM LIKE ZASN_ITEM-CARTON_LINE_NUM,
CARTON_LINE_NUMBER LIKE ZASN_ITEM-CARTON_LINE_NUM,
AEDAT(8),
AEZET(6),
ITEM_DESCRIPTION LIKE ZASN_ITEM-ITEM_DESCRIPTION,
ORD_QTY(16),
ORIGINAL_SHIP(16),
SHIP_QTY(16),
UPC_NUMBER LIKE ZASN_ITEM-UPC_NUMBER,
DEALER_MEMO_PO(5),
ORDER_LINE_NUM LIKE ZASN_ITEM-ORDER_LINE_NUM,
STATUS LIKE ZASN_ITEM-STATUS,
END OF ts_item.
TYPES: BEGIN OF ts_carton,
ZSHIPMENT LIKE ZASN_CARTON-ZSHIPMENT,
VBELN LIKE ZASN_CARTON-VBELN,
ORDER_NUMBER LIKE ZASN_CARTON-ORDER_NUMBER,
CARTON_ID LIKE ZASN_CARTON-CARTON_ID,
AEDAT(8),
AEZET(6),
SHIP_INDICATOR LIKE ZASN_CARTON-SHIP_INDICATOR,
TRACKING_NUMBER LIKE ZASN_CARTON-TRACKING_NUMBER,
ZZCARTON_WGT(18),
Item type ts_item,
END OF ts_carton.
TYPES: BEGIN OF ts_order,
ZSHIPMENT LIKE ZASN_ORDER-ZSHIPMENT,
VBELN LIKE ZASN_ORDER-VBELN,
ORDER_NUMBER LIKE ZASN_ORDER-ORDER_NUMBER,
AEDAT(8),
AEZET(6),
SUB_ORDER LIKE ZASN_ORDER-SUB_ORDER,
ORDER_DATE(8),
PRIMARY_ORDER LIKE ZASN_ORDER-PRIMARY_ORDER,
CUSTOMER_PO LIKE ZASN_ORDER-CUSTOMER_PO,
PRIMARY_ID LIKE ZASN_ORDER-PRIMARY_ID,
SHIP_TO LIKE ZASN_ORDER-SHIP_TO,
ANZPK(5),
carton type ts_carton,
END OF ts_order.
TYPES: BEGIN OF ts_delivery,
ZSHIPMENT LIKE ZASN_DELIVERY-ZSHIPMENT,
VBELN LIKE ZASN_DELIVERY-VBELN,
AEDAT(8) TYPE C,
AEZET(6) TYPE C,
PRIMARY_CUSTOMER LIKE ZASN_DELIVERY-PRIMARY_CUSTOMER,
BILL_OF_LADING LIKE ZASN_DELIVERY-BILL_OF_LADING,
CARTON_COUNT(5),
TOTAL_WEIGHT(18),
order type ts_order,
END OF ts_delivery.
TYPES: BEGIN OF ts_shipment,
ZSHIPMENT LIKE ZASN_SHIPMENT-ZSHIPMENT,
MANIFEST_DATE_TIME(25),
AEDAT(8) TYPE C,
AEZET(6) TYPE C,
SDATE(8) TYPE C,
STIME(6) TYPE C,
SFACILITY_NUMBER LIKE ZASN_SHIPMENT-SFACILITY_NUMBER,
ZZCARRIERCODE LIKE ZASN_SHIPMENT-ZZCARRIERCODE,
Delivery type ts_delivery,
END OF ts_shipment.
TYPES: BEGIN OF ts_shipment1,
ZSHIPMENT LIKE ZASN_SHIPMENT-ZSHIPMENT,
MANIFEST_DATE_TIME(25),
SFACILITY_NUMBER LIKE ZASN_SHIPMENT-SFACILITY_NUMBER,
ZZCARRIERCODE LIKE ZASN_SHIPMENT-ZZCARRIERCODE,
PRIMARY_CUSTOMER LIKE ZASN_DELIVERY-PRIMARY_CUSTOMER,
BILL_OF_LADING LIKE ZASN_DELIVERY-BILL_OF_LADING,
CARTON_COUNT(5),
TOTAL_WEIGHT(18),
ORDER_NUMBER LIKE ZASN_ORDER-ORDER_NUMBER,
SUB_ORDER LIKE ZASN_ORDER-SUB_ORDER,
ORDER_DATE(8),
PRIMARY_ORDER LIKE ZASN_ORDER-PRIMARY_ORDER,
CUSTOMER_PO LIKE ZASN_ORDER-CUSTOMER_PO,
PRIMARY_ID LIKE ZASN_ORDER-PRIMARY_ID,
SHIP_TO LIKE ZASN_ORDER-SHIP_TO,
CARTON_ID LIKE ZASN_CARTON-CARTON_ID,
SHIP_INDICATOR LIKE ZASN_CARTON-SHIP_INDICATOR,
TRACKING_NUMBER LIKE ZASN_CARTON-TRACKING_NUMBER,
ZZCARTON_WGT(18),
ITEM LIKE ZASN_ITEM-ITEM,
CARTON_LINE_NUM LIKE ZASN_ITEM-CARTON_LINE_NUM,
CARTON_LINE_NUMBER LIKE ZASN_ITEM-CARTON_LINE_NUM,
ITEM_DESCRIPTION LIKE ZASN_ITEM-ITEM_DESCRIPTION,
ORD_QTY(16),
ORIGINAL_SHIP(16),
SHIP_QTY(16),
UPC_NUMBER LIKE ZASN_ITEM-UPC_NUMBER,
DEALER_MEMO_PO(5),
ORDER_LINE_NUM LIKE ZASN_ITEM-ORDER_LINE_NUM,
END OF ts_shipment1.
TYPES: BEGIN OF t_xml_line,
data(256) TYPE x,
END OF t_xml_line.
* Type definition for Airplus
* READ THE DOCUMENTATION "LASG_XML_INVOICE_BTM"!!!
* Vars beginning with "a_" are ATTRIBUTES
DATA: l_ixml TYPE REF TO if_ixml,
l_streamfactory TYPE REF TO if_ixml_stream_factory,
l_parser TYPE REF TO if_ixml_parser,
l_istream TYPE REF TO if_ixml_istream,
l_ostream TYPE REF TO if_ixml_ostream,
l_document TYPE REF TO if_ixml_document,
l_node TYPE REF TO if_ixml_node,
l_xml TYPE REF TO cl_xml_document,
l_xmldata TYPE string.
DATA: l_xml_table TYPE TABLE OF t_xml_line,
l_xml_line TYPE t_xml_line,
l_xml_table_size TYPE i.
DATA: l_filename TYPE string.
DATA: xml_out TYPE string ,
size type i.
DATA: l_xml_x1 TYPE xstring.
DATA: l_len TYPE i,
l_len2 TYPE i,
l_tab TYPE tsfixml,
l_content TYPE string,
l_str1 TYPE string,
c_conv TYPE REF TO cl_abap_conv_in_ce.
* l_itab TYPE TABLE OF string.
DATA: BEGIN OF l_itab occurs 0,
data(256) type c,
end of l_itab.
TYPES : BEGIN OF TY_TEXT,
data(255) type C,
END OF TY_TEXT.
DATA: F_XML TYPE STRING.
DATA : LT_TEXT_OUT type table of TY_TEXT with header line.
* tables
DATA: it_shipment TYPE STANDARD TABLE OF ts_shipment,
wa_shipment TYPE ts_shipment.
*Errorvariables
DATA: xslt_err TYPE REF TO cx_xslt_exception,
err_string TYPE string.
PARAMETERS: pa_file TYPE localfile OBLIGATORY
DEFAULT 'C:\Documents and Settings\C5134126\Desktop\Rajesh_kandakatla\SampleASNFile.xml'.
START-OF-SELECTION.
* Creating the main iXML factory
l_ixml = cl_ixml=>create( ).
* Creating a stream factory
l_streamfactory = l_ixml->create_stream_factory( ).
PERFORM get_xml_table CHANGING l_xml_table_size l_xml_table.
* here we use the CALL TRANSFORMATION method which calls
* the XSLT program "z_asnfile"
TRY.
CALL TRANSFORMATION ('Z_ASNFILE')
SOURCE xml LT_TEXT_OUT[]
RESULT xml_output = it_shipment
* catch any error, very helpful if the XSLT isn't correct
CATCH cx_xslt_exception INTO xslt_err.
err_string = xslt_err->get_text( ).
WRITE: / 'Transformation error: ', err_string.
EXIT.
ENDTRY." setting a breakpoint to watch the workarea
* filled by the internal table "it_airplus"
break-point.
LOOP AT it_shipment INTO wa_shipment.
ENDLOOP.
*& Form get_xml_table
FORM get_xml_table CHANGING l_xml_table_size TYPE i
l_xml_table TYPE STANDARD TABLE.
l_filename = pa_file.
* upload a file from the client's workstation
CALL METHOD cl_gui_frontend_services=>gui_upload
EXPORTING
filename = l_filename
filetype = 'BIN'
IMPORTING
filelength = l_xml_table_size
CHANGING
data_tab = l_xml_table
EXCEPTIONS
OTHERS = 19.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
* Convert binary to text.
CALL FUNCTION 'SCMS_BINARY_TO_TEXT'
EXPORTING
INPUT_LENGTH = 70000
FIRST_LINE = 0
LAST_LINE = 0
APPEND_TO_TABLE = ' '
MIMETYPE = ' '
WRAP_LINES = 'X'
* IMPORTING
*   OUTPUT_LENGTH =
TABLES
BINARY_TAB = l_xml_table
TEXT_TAB = LT_TEXT_OUT
EXCEPTIONS
FAILED = 1
OTHERS = 2.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDFORM. "get_xml_table -
Hi
Using <b>XSLT</b> and <b>Call transformation</b> (XSLT scheme) to convert an XML file to HTML, the problem is that line breaks in the XML are ignored when passing through the call transformation.
<b>The note in the XML looks like:</b>
<com:Note><![CDATA[
Serie 87% 0,000000
Amount in this period 01-01-2006 - 01-07-2006 - 180 days
Currency 16.267.117,38 DKK
Loan DKK 14.332.700,00
Debt 7.358.534,23
Indexsfactor 226,230
]]></com:Note>
<b>When the HTML is displayed in SAP, the note is just one long string which continues off the screen. The note looks like:</b> Serie 87% 0,000000Amount in this period 01-01-2006 - 01-07-2006 - 180 daysCurrency 16.267.117,38 DKKDebt 7.358.534,23Indexsfactor 226,230
What to do? Any ideas?
Hi Jan,
Check out the link
http://www.topxml.com/code/default.asp?p=3&id=v20031025170911&ms=20&l=xsl&sw=categ
This link contains a file with an importable XSLT stylesheet containing two templates. The first is a text-wrap template that breaks text at carriage returns. You need to modify the XSLT.
<xsl:template name="text.wrap">
<xsl:param name="texttowrap"/>
<xsl:variable name="textlength" select="string-length($texttowrap)"/>
<!-- don't waste time if no text supplied or remaining from recursion-->
<xsl:if test="$textlength > 0">
<xsl:choose>
<xsl:when test="contains($texttowrap,$CR)">
<!-- get the text before the first instance of a carriage return character-->
<xsl:variable name="linebeforefirstbreak" select="substring-before($texttowrap,$CR)"/>
.................
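The recursive template boils down to splitting the string at carriage returns; an iterative plain-Python sketch (not SAP code) of the same idea:

```python
# Sketch (plain Python, not SAP code): the recursive text.wrap template
# breaks a string at carriage returns; iteratively that is just a split.
CR = "\r"
note = ("Serie 87% 0,000000\r"
        "Amount in this period\r"
        "Currency 16.267.117,38 DKK")

lines = [line for line in note.split(CR) if line]
print(len(lines))  # 3
```

Each resulting line can then be emitted followed by an HTML <br/> so the browser keeps the breaks.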
Hope this helps.
Regards,
Richa -
Problem in Call transformation - xslt program
Hi Experts,
For the below XML file i have declared the XSLT Program as described.
I am not able to get the data into the internal table.
Could you please help me see where I am going wrong?
Is it the XSLT declaration, or something else?
Data declarations
TYPES: BEGIN OF TY_BH,
RECORDTYPE(02), " Record Type
DOCTYPE(02), " Document type
REFERENCE(16), " Reference Document Number
DOCUMENTDATE(08), " Document Date in Document
POSTINGDATE(08), " Posting Date in the Document
COMPANYCODE(04), " Company Code
CURRENCY(03), " Currency Key
EXCHANGERATE(08), " Exchange rate
PARK(01), " Park document
ITEMNUMBER(03), " Number of line item
END OF TY_BH,
BEGIN OF TY_HH,
RECORDTYPE(02), " Record Type
SOURCE(04),
DESTINATION(04),
TIMESTAMP(14),
END OF TY_HH,
BEGIN OF TY_TT,
RECORDTYPE(02),
TOTALRECORDS(10) TYPE N,
TOTALVALUE(16),
END OF TY_TT.
TYPES: BEGIN OF TY_BL,
RECORDTYPE(02),
REFERENCE(16),
REFLINEITEM(03),
ACCTTYPE(01),
DRCRINDICATOR(01),
ACCOUNT(10),
AMOUNT(13),
VENDORNAME1(40),
VENDORNAME2(40),
VENDORNAME3(40),
VENDORNAME4(40),
STREET(40),
CITY(40),
POSTALCODE(10),
COUNTRY(02),
CONTACTPERSON(10),
ALTERNATEPAYEECODE(10),
ALTERNATEPAYEENAME1(40),
ALTERNATEPAYEENAME2(40),
ALTERNATEPAYEENAME3(40),
PAYMENTTERMS(04),
BASELINEDATE(08),
PAYMENTMETHODS(01),
ALLOCATION(18),
LINEITEMTEXT(50),
TAXCODE(02),
TAXAMOUNT(13),
WHTAXCODE(02),
WHTAXBASE(13),
FUND(10),
FUNDCENTER(16),
COSTCENTER(10),
INTERNALORDER(12),
TAXAUTOMATICALLY(01),
SPECIALGLINDICATOR(01),
END OF TY_BL.
DATA: GT_BH TYPE STANDARD TABLE OF TY_BH,
GS_BH TYPE TY_BH,
GT_BL TYPE STANDARD TABLE OF TY_BL,
GS_BL TYPE TY_BL,
GT_HH TYPE STANDARD TABLE OF TY_HH,
GS_HH TYPE TY_HH,
GT_TT TYPE STANDARD TABLE OF TY_TT,
GS_TT TYPE TY_TT.
DATA: GT_RESULT_XML5 TYPE ABAP_TRANS_RESBIND_TAB,
GS_RESULT_XML5 TYPE ABAP_TRANS_RESBIND.
DATA: GS_RIF_EX TYPE REF TO CX_ROOT,
GS_VAR_TEXT TYPE STRING.
DATA: BEGIN OF GT_XML,
HH TYPE TY_HH,
BH LIKE TABLE OF GT_BH,
BL LIKE TABLE OF GT_BL,
TT TYPE TY_TT,
END OF GT_XML.
DATA: GT_ITAB TYPE STANDARD TABLE OF CHAR2048,
GS_ITAB TYPE CHAR2048.
I have read the file below into internal table GT_ITAB.
XML File
<?xml version="1.0" encoding="utf-8"?>
<ABCInbound xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="C:\XYZSchema\ABCInbound.xsd">
<HH>
<RecordType>HH</RecordType>
<Source>ABC</Source>
<Destination>XYZ</Destination>
<TimeStamp>20050909220546</TimeStamp>
</HH>
<BH>
<RecordType>BH</RecordType>
<DocType>AB</DocType>
<Reference>2205516125</Reference>
<DocumentDate>20080909</DocumentDate>
<PostingDate></PostingDate>
<CompanyCode>ABC</CompanyCode>
<Currency>INR</Currency>
<ExchangeRate>1.0000</ExchangeRate>
<Park></Park>
<ItemNumber>2</ItemNumber>
</BH>
<BL>
<RecordType>BL</RecordType>
<Reference>2205516125</Reference>
<RefLineItem>1</RefLineItem>
<AcctType>K</AcctType>
<DrCrIndicator>H</DrCrIndicator>
<Account>01000003</Account>
<Amount>364.00</Amount>
<VendorName-1>TOM &amp; JERRY IS MY</VendorName-1>
<VendorName-2> NAME TO BE PAID</VendorName-2>
<VendorName-3>1987566Z</VendorName-3>
<VendorName-4>22</VendorName-4>
<Street>UCX STREET</Street>
<City>ROAD 4</City>
<PostalCode>515004</PostalCode>
<Country>IND</Country>
<ContactPerson></ContactPerson>
<AlternatePayeeCode></AlternatePayeeCode>
<AlternatePayeeName-1></AlternatePayeeName-1>
<AlternatePayeeName-2></AlternatePayeeName-2>
<AlternatePayeeName-3></AlternatePayeeName-3>
<PaymentTerms></PaymentTerms>
<BaselineDate></BaselineDate>
<PaymentMethods></PaymentMethods>
<Allocation></Allocation>
<LineItemText>item text</LineItemText>
<TaxCode></TaxCode>
<TaxAmount>0.00</TaxAmount>
<WHTaxCode></WHTaxCode>
<WHTaxbase>0.00</WHTaxbase>
<Fund></Fund>
<FundCenter></FundCenter>
<CostCenter></CostCenter>
<InternalOrder></InternalOrder>
<TaxAutomatically></TaxAutomatically>
<SpecialGLIndicator></SpecialGLIndicator>
</BL>
<TT>
<RecordType>TT</RecordType>
<TotalRecords>1</TotalRecords>
<TotalValue>222</TotalValue>
</TT>
</ABCInbound>
Call transformation as below
GET REFERENCE OF GT_XML INTO GS_RESULT_XML5-VALUE.
GS_RESULT_XML5-NAME = 'IABC'.
APPEND GS_RESULT_XML5 TO GT_RESULT_XML5.
TRY.
CALL TRANSFORMATION Z_XML_TO_ABAP5
SOURCE XML GT_ITAB
RESULT (GT_RESULT_XML5).
CATCH CX_ROOT INTO GS_RIF_EX.
GS_VAR_TEXT = GS_RIF_EX->GET_TEXT( ).
MESSAGE GS_VAR_TEXT TYPE 'E'.
ENDTRY.
When I check GT_XML, it is initial.
XSLT program Z_XML_TO_ABAP5 is as below.
Please let me know if my XSLT declaration is wrong.
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output encoding="iso-8859-1" indent="yes" method="xml" version="1.0"/>
<xsl:strip-space elements="*"/>
<xsl:template match="/">
<asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
<asx:values>
<IABC>
<xsl:apply-templates select="//ABCInbound"/>
</IABC>
</asx:values>
</asx:abap>
</xsl:template>
<xsl:template match="ABCINBOUND">
<item>
<RECORDTYPE>
<xsl:value-of select="RecordType"/>
</RECORDTYPE>
<SOURCE>
<xsl:value-of select="Source"/>
</SOURCE>
<DESTINATION>
<xsl:value-of select="Destination"/>
</DESTINATION>
<TIMESTAMP>
<xsl:value-of select="TimeStamp"/>
</TIMESTAMP>
<RECORDTYPE>
<xsl:value-of select="RecordType"/>
</RECORDTYPE>
<DOCTYPE>
<xsl:value-of select="DocType"/>
</DOCTYPE>
<REFERENCE>
<xsl:value-of select="Reference"/>
</REFERENCE>
<DOCUMENTDATE>
<xsl:value-of select="DocumentDate"/>
</DOCUMENTDATE>
<POSTINGDATE>
<xsl:value-of select="PostingDate"/>
</POSTINGDATE>
<COMPANYCODE>
<xsl:value-of select="CompanyCode"/>
</COMPANYCODE>
<CURRENCY>
<xsl:value-of select="Currency"/>
</CURRENCY>
<EXCHANGERATE>
<xsl:value-of select="ExchangeRate"/>
</EXCHANGERATE>
<PARK>
<xsl:value-of select="Park"/>
</PARK>
<ITEMNUMBER>
<xsl:value-of select="ItemNumber"/>
</ITEMNUMBER>
<RECORDTYPE>
<xsl:value-of select="RecordType"/>
</RECORDTYPE>
<REFERENCE>
<xsl:value-of select="Reference"/>
</REFERENCE>
<REFLINEITEM>
<xsl:value-of select="RefLineItem"/>
</REFLINEITEM>
<ACCTTYPE>
<xsl:value-of select="AcctType"/>
</ACCTTYPE>
<DRCRINDICATOR>
<xsl:value-of select="DrCrIndicator"/>
</DRCRINDICATOR>
<ACCOUNT>
<xsl:value-of select="Account"/>
</ACCOUNT>
<AMOUNT>
<xsl:value-of select="Amount"/>
</AMOUNT>
<VENDORNAME1>
<xsl:value-of select="VendorName-1"/>
</VENDORNAME1>
<VENDORNAME2>
<xsl:value-of select="VendorName-2"/>
</VENDORNAME2>
<VENDORNAME3>
<xsl:value-of select="VendorName-3"/>
</VENDORNAME3>
<VENDORNAME4>
<xsl:value-of select="VendorName-4"/>
</VENDORNAME4>
<STREET>
<xsl:value-of select="Street"/>
</STREET>
<CITY>
<xsl:value-of select="City"/>
</CITY>
<POSTALCODE>
<xsl:value-of select="PostalCode"/>
</POSTALCODE>
<COUNTRY>
<xsl:value-of select="Country"/>
</COUNTRY>
<CONTACTPERSON>
<xsl:value-of select="ContactPerson"/>
</CONTACTPERSON>
<ALTERNATEPAYEECODE>
<xsl:value-of select="AlternatePayeeCode"/>
</ALTERNATEPAYEECODE>
<ALTERNATEPAYEENAME1>
<xsl:value-of select="AlternatePayeeName1"/>
</ALTERNATEPAYEENAME1>
<ALTERNATEPAYEENAME2>
<xsl:value-of select="AlternatePayeeName2"/>
</ALTERNATEPAYEENAME2>
<ALTERNATEPAYEENAME3>
<xsl:value-of select="AlternatePayeeName3"/>
</ALTERNATEPAYEENAME3>
<PAYMENTTERMS>
<xsl:value-of select="PaymentTerms"/>
</PAYMENTTERMS>
<BASELINEDATE>
<xsl:value-of select="BaselineDate"/>
</BASELINEDATE>
<PAYMENTMETHODS>
<xsl:value-of select="PaymentMethods"/>
</PAYMENTMETHODS>
<ALLOCATION>
<xsl:value-of select="Allocation"/>
</ALLOCATION>
<LINEITEMTEXT>
<xsl:value-of select="LineItemText"/>
</LINEITEMTEXT>
<TAXCODE>
<xsl:value-of select="TaxCode"/>
</TAXCODE>
<TAXAMOUNT>
<xsl:value-of select="TaxAmount"/>
</TAXAMOUNT>
<WHTAXCODE>
<xsl:value-of select="WHTaxCode"/>
</WHTAXCODE>
<WHTAXBASE>
<xsl:value-of select="WHTaxbase"/>
</WHTAXBASE>
<FUND>
<xsl:value-of select="Fund"/>
</FUND>
<FUNDCENTER>
<xsl:value-of select="FundCenter"/>
</FUNDCENTER>
<COSTCENTER>
<xsl:value-of select="CostCenter"/>
</COSTCENTER>
<INTERNALORDER>
<xsl:value-of select="InternalOrder"/>
</INTERNALORDER>
<TAXAUTOMATICALLY>
<xsl:value-of select="TaxAutomatically"/>
</TAXAUTOMATICALLY>
<SPECIALGLINDICATOR>
<xsl:value-of select="SpecialGLIndicator"/>
</SPECIALGLINDICATOR>
<RECORDTYPE>
<xsl:value-of select="RecordType"/>
</RECORDTYPE>
<TOTALRECORDS>
<xsl:value-of select="TotalRecords"/>
</TOTALRECORDS>
<TOTALVALUE>
<xsl:value-of select="TotalValue"/>
</TOTALVALUE>
</item>
</xsl:template>
</xsl:transform>
I am able to get the data if I declare only BL and BH in separate XSLT transformations.
Regards,
Simha
Hello Mithun,
When you use the CALL TRANSFORMATION statement you have to specify the XSLT transformation to be used. As a first step you usually use the transformation with the name ID. This is a special transformation that produces the asXML representation of ABAP data. Unfortunately, if you look into this transformation you find the following:
<xsl:transform version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
>
<xsl:strip-space elements="*"/>
<xsl:template match="/">
<xsl:copy-of select="."/>
</xsl:template>
</xsl:transform>
If I remember correctly, when you use another transformation this will first call the ID transformation and afterwards the specified one. So it should not be possible to just copy the ID transformation and remove the line. I'll have to think again about how to avoid this behaviour.
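One more thing worth checking in the stylesheet quoted in the question: element names in XML and XPath are case-sensitive, and the stylesheet applies templates to "//ABCInbound" while the second template matches "ABCINBOUND", so that template may never fire. A plain-Python sketch (not SAP code) of the case-sensitivity point:

```python
# Sketch (plain Python, not SAP code): XML element names and XPath
# lookups are case-sensitive, so "ABCINBOUND" never matches "ABCInbound".
import xml.etree.ElementTree as ET

root = ET.fromstring("<ABCInbound><HH/></ABCInbound>")
assert root.find("HH") is not None  # exact case: match
assert root.find("hh") is None      # wrong case: no match
print(root.tag)  # ABCInbound
```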
Best Regards
Roman -
XSLT-ABAP using Call Transformation
Hello Friends,
I am new to XSLT-ABAP transformations. I went through the blogs and forums and got a fair idea of the topic. Now I am trying to create a simple program/XSLT transformation to test the scenario. Once this is successful, I need to implement it in our project.
I am not sure where and what I am doing wrong. Kindly check the XSLT/XML/ABAP program given below and correct me.
My XML File looks as given below:
<?xml version="1.0" encoding="utf-8" ?>
<List>
<ITEM>
<ITEMQUALF>ITEM1</ITEMQUALF>
<MATERIAL>MAT1</MATERIAL>
</ITEM>
<ITEM>
<ITEMQUALF>ITEM2</ITEMQUALF>
<MATERIAL>MAT2</MATERIAL>
</ITEM>
<ITEM>
<ITEMQUALF>ITEM3</ITEMQUALF>
<MATERIAL>MAT3</MATERIAL>
</ITEM>
</List>
My XSLT Transformation looks as given below:
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:sap="http://www.sap.com/sapxsl" version="1.0">
<xsl:strip-space elements="*"/>
<xsl:template match="*">
<List>
<xsl:for-each select="ITEM">
<xsl:element name="ITEM">
<xsl:element name="ITEMQUALF">
<xsl:value-of select="ITEMQUALF"/>
</xsl:element>
<xsl:element name="MATERIAL">
<xsl:value-of select="MATERIAL"/>
</xsl:element>
</xsl:element>
</xsl:for-each>
</List>
</xsl:template>
</xsl:transform>
My ABAP program looks as below:
REPORT ztest_ram.
TYPES:
BEGIN OF ty_test,
itemqualf TYPE char10,
material TYPE char10,
END OF ty_test,
ty_t_test TYPE STANDARD TABLE OF ty_test.
DATA:
l_xml TYPE REF TO cl_xml_document,
t_test TYPE ty_t_test,
wa_person TYPE LINE OF ty_t_test,
t_xml_out TYPE string,
v_retcode TYPE sy-subrc,
v_totalsize TYPE i.
DATA: gs_rif_ex TYPE REF TO cx_root,
gs_var_text TYPE string.
* Create object
CREATE OBJECT l_xml.
* Call method to import data from file
CALL METHOD l_xml->import_from_file
EXPORTING
filename = 'C:\xml\xml_test.xml'
RECEIVING
retcode = v_retcode.
* Call method to Render into string
CALL METHOD l_xml->render_2_string
IMPORTING
retcode = v_retcode
stream = t_xml_out
size = v_totalsize.
* Call Transformation
TRY.
CALL TRANSFORMATION (`ZXSLT_RAM`)
SOURCE XML t_xml_out
RESULT outtab = t_test.
CATCH cx_root INTO gs_rif_ex.
gs_var_text = gs_rif_ex->get_text( ).
MESSAGE gs_var_text TYPE 'E'.
ENDTRY.
When I run this ABAP program to fetch the data from the XML into the internal table, I get the error message:
Incorrect element List for XML-ABAP transformation
I am really not sure how to proceed further. Could anyone help me with this?
Note: Please do not paste the same links, as i have gone through most of them.
Thank you.
Best Regards,
Ram.
UPDATE: works now.
ABAP:
method IF_HTTP_EXTENSION~HANDLE_REQUEST.
*THIS METHOD IS AN HTTP INTERFACE FOR A
*SICF WEB SERVICE HANDLER. IT RECEIVES AN XML PAYLOAD,
*READS IT INTO AN XSTRING, THEN TRANSFORMS THE
*XSTRING INTO ABAP DATA USING AN ABAP XSLT
*TRANSFORMATION PROGRAM
*Process incoming xml Request
data: lxs_request TYPE xstring.
lxs_request = server->request->get_data( ).
*BUILD DATA TYPES
TYPES: BEGIN OF ccw_line,
field11 TYPE STRING,
field22 TYPE STRING,
END OF ccw_line.
TYPES: BEGIN OF ccw_head,
field1 TYPE STRING,
field2 TYPE STRING,
lines TYPE STANDARD TABLE OF ccw_line WITH DEFAULT KEY,
END OF ccw_head.
DATA: ccw_heads type STANDARD TABLE OF ccw_head,
xccw_heads TYPE ccw_head.
DATA: ccw_lines TYPE STANDARD TABLE OF ccw_line,
zccw_lines TYPE ccw_line.
DATA: lr_transformation_error TYPE REF TO cx_transformation_error.
DATA: err_text TYPE string.
*CALL TRANSFORMATION
TRY.
CALL TRANSFORMATION zccwpayload_prg
SOURCE XML lxs_request
RESULT OUTPUT = ccw_heads. "RESULT PARAMETER ("OUTPUT") NAME MUST EQUAL TRANSFORMED XML ROOT eg <OUTPUT>XML DATA...</OUTPUT>
* RESULT XML my_xml_result. "THIS CAN BE USED IF YOU WANT TO RETURN XML INSTEAD OF ABAP DATA
CATCH cx_xslt_exception INTO lr_transformation_error.
err_text = lr_transformation_error->get_text( ).
server->response->set_cdata( err_text ).
ENDTRY.
*SAVE TO DATABASE
*BUILD RESPONSE
call METHOD server->response->set_cdata
EXPORTING
DATA = err_text.
endmethod.
XML SOURCE:
<?xml version="1.0" encoding="ISO-8859-1"?>
<HEADS> <!--MATCH ON THIS IN XSLT!!!-->
<HEAD><!-- FOR-EACH ON THIS-->
<headval1>myHeader</headval1>
<LINES>
<Line>
<lineval1>myLine</lineval1>
</Line>
<Line>
<lineval1>myLine</lineval1>
</Line>
<Line>
<lineval1>myLine</lineval1>
</Line>
<Line>
<lineval1>myLine</lineval1>
</Line>
</LINES>
</HEAD>
<HEAD>
<headval1>myHeader</headval1>
<LINES>
<Line>
<lineval1>myLine</lineval1>
</Line>
<Line>
<lineval1>myLine</lineval1>
</Line>
<Line>
<lineval1>myLine</lineval1>
</Line>
</LINES>
</HEAD>
<HEAD>
<headval1>myHeader</headval1>
<LINES>
<Line>
<lineval1>myLine</lineval1>
</Line>
<Line>
<lineval1>myLine</lineval1>
</Line>
</LINES>
</HEAD>
</HEADS>
XSLT PROGRAM:
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:sap="http://www.sap.com/sapxsl" version="1.0">
<xsl:output encoding="UTF-8" indent="yes" method="xml"/>
<xsl:template match="/HEADS"><!--This should be the root name of your source XML eg <HEADS>xml data...</HEADS> if you don't have a single root match on "/" -->
<asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
<asx:values>
<OUTPUT><!--MUST be all caps, MUST match CALL TRANSFORMATION RESULTS parameter name (RESULTS OUTPUT = myABAPDataStructure), and MUST not contain an underscore!!!-->
<xsl:for-each select="HEAD">
<HEAD> <!--ALL CAPS!!!-->
<FIELD1>
<xsl:value-of select="headval1"/>
</FIELD1>
<FIELD2>
<xsl:value-of select="headval1"/>
</FIELD2>
<LINES>
<xsl:for-each select="LINES/Line">
<LINE>
<FIELD11>
<xsl:value-of select="lineval1"/>
</FIELD11>
<FIELD22>
<xsl:value-of select="lineval1"/>
</FIELD22>
</LINE>
</xsl:for-each>
</LINES>
</HEAD>
</xsl:for-each>
</OUTPUT>
</asx:values>
</asx:abap>
</xsl:template>
</xsl:transform>
SAMPLE OF TRANSFORMED XML (MATCHES ABAP DATA STRUCTURE):
IF YOU TEST YOUR TRANSFORMATION (IN XSLT_TOOL) WITH THE SAMPLE FILE AND IT DOESN'T LOOK LIKE THIS, YOUR TRANSFORMATION WILL FAIL. TAGS MUST BE ALL CAPS!
<?xml version="1.0" encoding="UTF-8"?>
<asx:abap version = "1.0" xmlns:asx = "http://www.sap.com/abapxml">
<asx:values>
<OUTPUT>
<HEAD>
<FIELD1>myHeader</FIELD1>
<FIELD2>myHeader</FIELD2>
<LINES>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
</LINES>
</HEAD>
<HEAD>
<FIELD1>myHeader</FIELD1>
<FIELD2>myHeader</FIELD2>
<LINES>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
</LINES>
</HEAD>
<HEAD>
<FIELD1>myHeader</FIELD1>
<FIELD2>myHeader</FIELD2>
<LINES>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
<LINE>
<FIELD11>myLine</FIELD11>
<FIELD22>myLine</FIELD22>
</LINE>
</LINES>
</HEAD>
</OUTPUT>
</asx:values>
</asx:abap> -
CALL TRANSFORMATION on XSLT creates leading 0 in RESULT XML
Hello,
calling the transformation in one step I got a cx_xslt_format_error-exception. I was reading in the forum to split the transformation into two calls:
CALL TRANSFORMATION z_bv_xml2abap
SOURCE XML gt_itab
RESULT XML lv_string.
CALL TRANSFORMATION z_bv_xml2abap
SOURCE XML lv_string
RESULT (gt_result_xml).
The result XML from the first transformation shows two things I don't understand. The first character is ¶ (0xB6). It is displayed as # and throws an error when I try to open the result in a tool like XMLSpy. So this could be the reason for the cx_xslt_format_error exception, but I don't know where it comes from.
The second confusing thing is that lv_string shows UTF-16 and not UTF-8. Where can I change this?
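Background that may explain both observations: an ABAP STRING is UTF-16 internally, so RESULT XML into a STRING is serialized as UTF-16, and the stray lead character is most likely a byte-order artifact showing up as #. Rendering into an XSTRING gives control over the encoding. A small Python sketch of the effect (illustrative only):

```python
# Serializing the same text as UTF-16 vs. UTF-8.
text = "<LOAN>0,000000</LOAN>"

utf16 = text.encode("utf-16")  # starts with a byte-order mark (FF FE or FE FF)
utf8 = text.encode("utf-8")    # starts directly with '<'

# A viewer that expects single-byte text renders the BOM bytes
# as '#' or another placeholder character.
bom = utf16[:2]
```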
Help would be really great,
Vanessa

Hello and have a good start into the week.
Has really nobody an idea about this?
Thx, vanessa -
CALL TRANSFORMATION - XSLT encoding not settable?
Hello fellow ABAPer,
I have a problem creating an XML file from a simple itab, using XSLT transformation (CALL TRANSFORMATION).
Here's what I'm doing: I have a simple itab it_person (a standard table of tt_person):
TYPES: BEGIN OF tt_person,
id(4) TYPE n,
firstname(20) TYPE c,
lastname(20) TYPE c,
END OF tt_person.
I'm filling it_person with test-data:
gs_person-id = '1'.
gs_person-firstname = 'John'.
gs_person-lastname = 'Smith'.
APPEND gs_person TO it_person.
Now I'm getting the reference of my itab for the CALL TRANSFORMATION command and I'm finally doing the actual transformation like this:
CALL TRANSFORMATION z_test_transformation
SOURCE (it_source_tab)
RESULT XML it_xml.
The Transformation (XSLT Program) looks like this:
http://uploading.com/files/775c1d31/trans.txt/
(sorry, I tried to post the transformation's code here, but that's not working, it screws the whole formating of this posting, that's why I had to upload it)
When I'm gui_downloading the XML file it looks like this:
<?xml version="1.0" encoding="iso-8859-1"?>
<CUSTOMERS>
<item>
<id>1</id>
<first_name>John</first_name>
<last_name>Smith</last_name>
</item>
</CUSTOMERS>
So everything is fine, until now: I need a different encoding, the other system's parser is not able to read ISO-8859-1 encoded files. So I need the first line of my XML to look like this:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
... and this is where I'm stuck right now, I can't get that to work.
I can change the line in my transformation to:
<xsl:output encoding="uft-8" indent="yes" method="xml" version="1.0" standalone="no"/>
but the resulting XML file is still ISO-8859-1. The "standalone" tag, that I need as well, doesn't work either.
So, what am I doing wrong? Is it not possible to create XML with an encoding other than ISO-8859-1 via the XSLT transformation?
Thanks a lot, any help would be highly appreciated.
AJ

I guess it_xml is an internal table containing chars?
i'm using
xxml TYPE xstring,
and
CALL TRANSFORMATION z_abap_to_xml_root
PARAMETERS mestyp = c_mestyp nsuri = c_namespace
SOURCE filename = fname table = it_out
RESULT XML xxml.
and i'm getting the xml encoding like
<?xml version="1.0" encoding="utf-8" ?>
Directly after your transformation insert the following call if your report can run interactively:
CALL FUNCTION 'DISPLAY_XML_STRING'
EXPORTING
xml_string = xxml
* TITLE =
* STARTING_X = 5
* STARTING_Y = 5
EXCEPTIONS
no_xml_document = 1
OTHERS = 2.
This displays the XML so you can check the encoding to be sure that the data is not tempered elsewhere.
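As a cross-check outside SAP: any well-behaved XML serializer emits a declaration that matches the actual byte encoding of its output. A hypothetical Python illustration (stdlib ElementTree, not the SAP renderer):

```python
import xml.etree.ElementTree as ET

# Rebuild the small CUSTOMERS document from the question.
root = ET.Element("CUSTOMERS")
item = ET.SubElement(root, "item")
ET.SubElement(item, "id").text = "1"
ET.SubElement(item, "first_name").text = "John"
ET.SubElement(item, "last_name").text = "Smith"

# Request UTF-8 bytes; the XML declaration then matches the real encoding.
data = ET.tostring(root, encoding="utf-8", xml_declaration=True)
```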
Downloading the XML string, even if it's an xstring, should be no problem. -
Call transformation XSLT and long note string in XML
Hi
Using call transformation (XSLT scheme) to convert XML file to html the problem is that line breaks in xml are ignored when passing the call tranformation.
<b>Note in xml look like:</b>
<com:Note><![CDATA[
Serie 87% 0,000000
Amount in this period 01-01-2006 - 01-07-2006 - 180 days
Currency 16.267.117,38 DKK
Loan DKK 14.332.700,00
Debt 7.358.534,23
Indexsfactor 226,230
]]></com:Note>
<b>When HTML displayed in sap, - note is just a long string which continue out of the screen. Note looks like:</b>
Serie 87% 0,000000Amount in this period 01-01-2006 - 01-07-2006 - 180 daysCurrency 16.267.117,38 DKKDebt 7.358.534,23Indexsfactor 226,230
<i>What to do ? Any ideas ?</i>

Hi Jon,
What is the XSLT program that you're using to transform the XML?
Can you try modifying the XSLT stylesheet and add or modify the following attribute.
<xsl:output indent="yes"/>
Regards,
Erwin -
Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster
In hopes that it might be helpful in the future, here's the procedure I followed to fix a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
BACKGROUND
Oracle has multiple character sets, ranging from US7ASCII to AL32UTF16.
US7ASCII, of course, is a cheerful 7 bit character set, holding the basic ASCII characters sufficient for the English language.
However, it also has a handy feature: character fields under US7ASCII will accept characters with values > 128. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
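The substitution behaviour described above is easy to reproduce with any codec layer: a byte above 127 is simply not valid 7-bit ASCII, so a converter must either fail or substitute. A Python analogue (Python codec names standing in for Oracle charset names):

```python
raw = b"\xae"  # 0xAE is '®' in ISO-8859-1 (WE8ISO8859P1), invalid in 7-bit ASCII

legal = raw.decode("latin-1")  # succeeds: the byte is legal in this charset

# Strict ASCII decoding rejects the byte outright...
try:
    raw.decode("ascii")
    strict_failed = False
except UnicodeDecodeError:
    strict_failed = True

# ...so lossy converters substitute a replacement character instead,
# much as Oracle substitutes chr(191) ('¿') during the charset change.
lossy = raw.decode("ascii", errors="replace")
```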
Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. This has been replaced in newer versions with the Database Migration Assistant for Unicode (DMU), which is the new recommended tool for 11.2.0.3+.
These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
FIXING THE PROBLEM
How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
(As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
alter system set global_names=false scope=memory;
CREATE PUBLIC DATABASE LINK OLD6
CONNECT TO DBUSERNAME
IDENTIFIED BY dbuserpass
USING 'restoreclone:1521/MYSID';
Testing the link...
SQL> select count(1) from users@old6;
COUNT(1)
454
Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
DUMP(TITLE)
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
By comparison, a dump of that row on PRODCLONE's my_contents gives:
PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(TITLE)
Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
Eventually, I located a clever workaround at this link:
https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
It works like this:
On RESTORECLONE you create a view, vv, with UTL_RAW:
RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
View created.
This turns the title to raw on the RESTORECLONE.
You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
The above works because oracle on PRODCLONE never knew that our TITLE string on RESTORE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
Now that we have a method to move the data over, we have to identify which columns /tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
COUNT(1)
533
By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
COUNT(1)
10568
So 10568 rows have characters which were transformed into 191s as part of the original conversion.
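The double-CONVERT comparison is essentially a round-trip test: a value is undamaged only if every character fits in 7-bit ASCII. The same predicate, sketched in Python:

```python
def survives_ascii_round_trip(s: str) -> bool:
    # Same idea as comparing title with its CONVERT round trip:
    # characters outside 7-bit ASCII do not survive the conversion.
    try:
        s.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False

rows = ["NCLEX-PN\u00ae Exam Style Questions", "Plain ASCII title"]
damaged = [r for r in rows if not survives_ascii_round_trip(r)]
```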
[ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below.
RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got CLOB
]
Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
create or replace procedure find_us7_strings
(table_name varchar2,
fix_col varchar2 )
authid current_user
as
orig_sql varchar2(1000);
begin
orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' != CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
-- Uncomment if debugging:
-- dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
And create a table to store the information as to which tables, columns, and rows have the bad characters:
drop table cnv_us7;
create table cnv_us7 (mytablename varchar2(50), myindx number, mycolumnname varchar2(50) ) tablespace myuser_data;
create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
--example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
set head off pagesize 1000 linesize 120
spool runme.sql
select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
where
data_type in ('CHAR','VARCHAR2')
and table_name in (select table_name from user_tab_columns where column_name='PK1' and table_name not in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
and char_length > 10
order by table_name,column_name;
spool off;
set echo on time on timing on feedb on serveroutput on;
spool output_of_runme
@./runme.sql
spool off;
Which eventually gives us the following inserted into CNV_US7:
20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
4 DESCRIPTION MY_FORUMS
21136 TITLE MY_CONTENTS
Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
We create our views on RESTOREDB:
create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
And then we can fix it directly via sql:
update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
where pk1 in (
select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
where taborig.pk1=tabnew.pk1
and myindx=tabnew.pk1
and mycolumnname='TITLE'
and mytablename='MY_CONTENTS'
and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
Note this part:
"and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there because if the users have changed TITLE -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
We can also create a stored procedure which will execute the SQL for us:
create or replace procedure fix_us7_strings
(TABLE_NAME varchar2,
FIX_COL varchar2 )
authid current_user
as
orig_sql varchar2(1000);
TYPE cv_type IS REF CURSOR;
orig_cur cv_type;
begin
orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
where pk1 in (
select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
where taborig.pk1=tabnew.pk1
and myindx=tabnew.pk1
and mycolumnname='''||FIX_COL||'''
and mytablename='''||TABLE_NAME||'''
and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
exec fix_us7_strings('MY_CONTENTS','TITLE');
commit;
To validate this before and after, we can run something like:
select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working with CLOBs in 9i, whose functions have less CLOB-related functionality.
This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
create or replace procedure find_us7_clob
(table_name varchar2,
fix_col varchar2)
authid current_user
as
orig_sql varchar2(1000);
type cv_type is REF CURSOR;
orig_table_cur cv_type;
my_chars_read NUMBER;
my_offset NUMBER;
my_problem NUMBER;
my_lob_size NUMBER;
my_indx_var NUMBER;
my_total_chars_read NUMBER;
my_output_chunk VARCHAR2(4000);
my_problem_flag NUMBER;
my_clob CLOB;
my_total_problems NUMBER;
ins_sql VARCHAR2(4000);
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
open orig_table_cur for orig_sql;
my_total_problems := 0;
LOOP
FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
EXIT WHEN orig_table_cur%NOTFOUND;
my_offset :=1;
my_chars_read := 512;
my_problem_flag :=0;
WHILE my_offset < my_lob_size and my_problem_flag =0
LOOP
DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
my_offset := my_offset + my_chars_read;
IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
THEN
-- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
-- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
my_problem_flag:=1;
END IF;
END LOOP;
IF my_problem_flag=1
THEN my_total_problems := my_total_problems +1;
ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
execute immediate ins_sql;
END IF;
END LOOP;
DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
END;
And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
exec find_us7_clob('MY_CONTENTS','DATA');
After completion, the CNV_US7 table looked like this:
RESTOREDB> set linesize 120 pagesize 100;
RESTOREDB> select count(1),mytablename,mycolumnname from cnv_us7
where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
where data_type='CLOB' )
group by mytablename,mycolumnname;
COUNT(1) MYTABLENAME MYCOLUMNNAME
69703 MY_CONTENTS DATA
On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
-- transforming CLOB to BLOB
l_off number default 1;
l_amt number default 4096;
l_offWrite number default 1;
l_amtWrite number;
l_str varchar2(4096 char);
begin
loop
dbms_lob.read ( p_clob, l_amt, l_off, l_str );
l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
utl_raw.cast_to_raw( l_str ) );
l_offWrite := l_offWrite + l_amtWrite;
l_off := l_off + l_amt;
l_amt := 4096;
end loop;
exception
when no_data_found then
NULL;
end;
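The procedure copies the CLOB in fixed-size chunks and relies on the NO_DATA_FOUND exception from DBMS_LOB.READ to end the loop. The same chunked-copy pattern, sketched in Python (illustrative; Python signals end-of-data with an empty read rather than an exception):

```python
import io

def clob2blob(src: io.StringIO, dst: io.BytesIO, chunk: int = 4096) -> None:
    # Chunked copy, mirroring the DBMS_LOB.READ / DBMS_LOB.WRITE loop.
    while True:
        piece = src.read(chunk)
        if not piece:  # end of data (PL/SQL version hits NO_DATA_FOUND here)
            break
        dst.write(piece.encode("latin-1"))

src = io.StringIO("x" * 10000)
dst = io.BytesIO()
clob2blob(src, dst)
```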
We can test out the transformation of CLOBs to BLOBs with a single row like this:
drop table my_contents_lob;
Create table my_contents_lob (pk1 number,data blob);
DECLARE
v_clob CLOB;
v_blob BLOB;
BEGIN
SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
clob2blob (v_clob, v_blob);
END;
select dbms_lob.getlength(data) from my_contents_lob;
DBMS_LOB.GETLENGTH(DATA)
329
SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
UTL_RAW.CAST_TO_VARCHAR2(DATA)
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
create table my_contents_lob(pk1 number,data blob);
create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
create or replace procedure blob_conversion_my_contents
(table_name varchar2,
fix_col varchar2)
authid current_user
as
orig_sql varchar2(1000);
type cv_type is REF CURSOR;
orig_table_cur cv_type;
my_chars_read NUMBER;
my_offset NUMBER;
my_problem NUMBER;
my_lob_size NUMBER;
my_indx_var NUMBER;
my_total_chars_read NUMBER;
my_output_chunk VARCHAR2(4000);
my_problem_flag NUMBER;
my_clob CLOB;
my_blob BLOB;
my_total_problems NUMBER;
new_sql VARCHAR2(4000);
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
open orig_table_cur for orig_sql;
LOOP
FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
EXIT WHEN orig_table_cur%NOTFOUND;
new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
dbms_output.put_line(new_sql);
execute immediate new_sql;
-- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
-- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
-- dbms_output.put_line(new_sql);
select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
clob2blob(my_clob,my_blob);
END LOOP;
CLOSE orig_table_cur;
DBMS_OUTPUT.PUT_LINE('Completed program');
END;
exec blob_conversion_my_contents('MY_CONTENTS','DATA');
Verify that things work properly:
select dump( utl_raw.cast_to_varchar2(data)) from my_contents_lob where pk1=xxxx;
This should let you see characters > 150. Thus, the method works.
We can now take this data, export it from RESTORECLONE
exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
and import the data on prodclone
imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
For paranoia's sake, double check that it worked properly:
select dump( utl_raw.cast_to_varchar2(data)) from my_contents_lob;
On our 10g PRODCLONE, we'll use these stored procedures:
CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
L_BLOB BLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
DBMS_LOB.CONVERTTOBLOB(L_BLOB,
L_CLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
1,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_BLOB;
END;
CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
L_CLOB CLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
DBMS_LOB.CONVERTTOCLOB(L_CLOB,
L_BLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
1,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_CLOB;
END;
And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes that the BLOB data is stored initially in WE8ISO8859P1.
To find correct CSID for WE8ISO8859P1, we can use this query:
select nls_charset_id('WE8ISO8859P1') from dual;
Gives "31"
create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
L_CLOB CLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := 31; -- treat blob as WE8ISO8859P1
V_LANG_CONTEXT NUMBER := 31; -- treat resulting clob as WE8ISO8859P1
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
DBMS_LOB.CONVERTTOCLOB(L_CLOB,
L_BLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
L_BLOB_CSID,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_CLOB;
END;
select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
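The key point of BLOB2CLOBASC is the source character-set id: passing 31 tells CONVERTTOCLOB that the raw bytes are WE8ISO8859P1, so byte 174 comes back as '®' instead of being mangled. The two interpretations, sketched in Python with latin-1 standing in for WE8ISO8859P1:

```python
# The first nine bytes from the earlier dump of TITLE: "NCLEX-PN" + 0xAE.
raw = bytes([78, 67, 76, 69, 88, 45, 80, 78, 174])

# Declaring the correct source charset recovers the special character...
with_correct_csid = raw.decode("latin-1")

# ...whereas a wrong assumption about the source bytes loses it.
with_wrong_csid = raw.decode("ascii", errors="replace")
```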
Now, we can compare these:
select dbms_lob.compare(blob2clob(old.data),new.data) from my_contents new,my_contents_lob old where new.pk1=old.pk1;
DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
0
0
0
Vs
select dbms_lob.compare(blob2clobasc(old.data),new.data) from my_contents new,my_contents_lob old where new.pk1=old.pk1;
DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
-1
-1
-1
update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
Confirms that we're now working properly.
To run across all the _LOB tables we've created:
[oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
[oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
And then on PRODCLONE we can import:
imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
create or replace procedure fix_us7_CLOBS
(TABLE_NAME varchar2,
FIX_COL varchar2 )
authid current_user
as
orig_sql varchar2(1000);
bak_sql varchar2(1000);
begin
dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
execute immediate bak_sql;
orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
where pk1 in (
select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
where a.pk1=b.pk1
and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
-- dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
set serveroutput on time on timing on;
exec fix_us7_clobs('MY_CONTENTS','DATA');
commit;
After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
A summary:
1) We replaced the lossy characters by parsing a csscan output file
2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changing the character set, switched the semantics to char, etc).
3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
Our actual error message:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00210: expected '<' instead of '�'
Error at line 1
31011. 00000 - "XML parsing failed"
*Cause: XML parser returned an error while trying to parse the document.
*Action: Check if the document to be parsed is valid.
Error at Line: 24 Column: 15
This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
Please advise if more information is needed from my end. -
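An LPX-00210 complaining about the very first character usually means the parser hit a byte that is not '<', for example leftover replacement characters ahead of the root element. Any strict XML parser fails the same way; a Python illustration:

```python
import xml.etree.ElementTree as ET

good = "<doc>ok</doc>"
bad = "\u00bf<doc>ok</doc>"  # stray chr(191) before the root element

parsed_text = ET.fromstring(good).text

# The stray leading character makes parsing fail at line 1.
try:
    ET.fromstring(bad)
    parse_error = None
except ET.ParseError as exc:
    parse_error = str(exc)
```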
CALL TRANSFORMATION on Unicode WebAS 6.20: No valid source context supplied
Hello,
in the last few days I was stuck on a strange problem. I had to develop a Web Service client on our Web AS 6.20 Unicode system. I followed the blog <a href="/people/durairaj.athavanraja/blog/2004/09/20/consuming-web-service-from-abap">Consuming Web Service from ABAP</a>. The problem was that my CALL TRANSFORMATION always threw the exception "No valid source context supplied". I've tested the transformation with a local copy of the XML the Web Service returned and it works quite well. I had a look into the documentation of CALL TRANSFORMATION and it says:
== Documentation Quote Begin ==
Addition 3a
... SOURCE XML sxml
Effect
Specification of the transformation source
Transfer the XML document sxml using addition 3a. The following three possibilities exist for specifying sxml:
The XML document can be in an ABAP variable sxml of the type STRING or XSTRING or in an internal standard table sxml of the elementary line type C.
== Documentation Quote End ==
So there should be no difference between STRING and XSTRING. But there is a difference! Here is my testcase which I've derived from my Blog <a href="/people/gregor.wolf3/blog/2006/06/27/geocode-business-partner-with-google-maps">Geocode Business Partner with Google Maps</a>:
<b>XSLT Transformation - ZGOOGLE_GEOCODE_TO_ABAP</b>
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:asx="http://www.sap.com/abapxml" xmlns:kml="http://earth.google.com/kml/2.0" version="1.0">
<xsl:template match="/">
<asx:abap version="1.0">
<asx:values>
<GEOCODE>
<LON>
<xsl:value-of select="substring-before(kml:kml/kml:Response/kml:Placemark/kml:Point/kml:coordinates,',')"/>
</LON>
<LAT>
<xsl:value-of select="substring-before(substring-after(kml:kml/kml:Response/kml:Placemark/kml:Point/kml:coordinates,','),',')"/>
</LAT>
<ALT>
<xsl:value-of select="substring-after(substring-after(kml:kml/kml:Response/kml:Placemark/kml:Point/kml:coordinates,','),',')"/>
</ALT>
</GEOCODE>
</asx:values>
</asx:abap>
</xsl:template>
</xsl:transform>
<b>ABAP Report</b>
REPORT z_gw_test_geocode.
DATA:
BEGIN OF geocode,
lon TYPE string,
lat TYPE string,
alt TYPE string,
END OF geocode.
DATA: client TYPE REF TO if_http_client,
url TYPE string,
c_xml TYPE string,
x_xml type xstring.
* Build URL to call Google Maps Geocoding
CONCATENATE 'http://maps.google.com/maps/geo?'
'q=Tacherting,+DE'
'&output=xml'
'&key=ABQIAAAA2WL-mG7HpdSjlxystL3uBhRvBuAcdiWwcJAQgt9kNvfse-yNqBQuxwHkHo31WjTJ_RzVPIhXNludVg'
INTO url.
****Create the HTTP client
CALL METHOD cl_http_client=>create_by_url
EXPORTING
url = url
IMPORTING
client = client
EXCEPTIONS
OTHERS = 1.
client->send( ).
client->receive( ).
****Get the response content in Character format
c_xml = client->response->get_cdata( ).
****Get the response content as Binary
x_xml = client->response->get_data( ).
****Transform XML as String to ABAP Values
DATA: xslt_err TYPE REF TO cx_xslt_exception,
error_text TYPE string.
WRITE: / 'Transformation with STRING'.
TRY.
CALL TRANSFORMATION zgoogle_geocode_to_abap
SOURCE XML c_xml
RESULT geocode = geocode.
CATCH cx_xslt_exception INTO xslt_err.
error_text = xslt_err->get_text( ).
WRITE: / error_text.
ENDTRY.
WRITE: / 'LON: ', geocode-lon.
WRITE: / 'LAT: ', geocode-lat.
WRITE: / 'ALT: ', geocode-alt.
****Transform XML as XString to ABAP Values
WRITE: / 'Transformation with XSTRING'.
TRY.
CALL TRANSFORMATION zgoogle_geocode_to_abap
SOURCE XML x_xml
RESULT geocode = geocode.
CATCH cx_xslt_exception INTO xslt_err.
error_text = xslt_err->get_text( ).
WRITE: / error_text.
ENDTRY.
WRITE: / 'LON: ', geocode-lon.
WRITE: / 'LAT: ', geocode-lat.
WRITE: / 'ALT: ', geocode-alt.
<b>Result</b>
This is the result on our 6.20 Unicode System:
Transformation with STRING
No valid source context supplied
LON:
LAT:
ALT:
Transformation with XSTRING
LON: 12.570504
LAT: 48.078269
ALT: 0
I've tried it on our 6.20 and 6.40 NON-Unicode systems and the result was:
Transformation with STRING
LON: 12.570504
LAT: 48.078269
ALT: 0
Transformation with XSTRING
LON: 12.570504
LAT: 48.078269
ALT: 0
Finally I've tried it on our Solution Manager 4.0, which runs on Web AS 7.00 and is also a Unicode installation. Here the result is correct:
Transformation with STRING
LON: 12.570504
LAT: 48.078269
ALT: 0
Transformation with XSTRING
LON: 12.570504
LAT: 48.078269
ALT: 0
So now what to do? I've found nothing in OSS regarding this behaviour. Any tips? I'll also try an OSS message.
Regards
Gregor
Hi,
Can you tell me about your project on short notes. For information.
Regards
R.Rajendran -
Problem inserting XML doc (character set)
Hi all,
I'm having trouble trying to insert XML, either by "posting" it (xsql) or by "putting" it (OracleXML putXML).
The error that I get: "not supported
oracle-character-set-174".
The environment is:
Oracle 8i 8.1.5
(NLS_CHARACTERSET EL8MSWIN1253 for greek)
JDK 1.2.2
Apache Web Server 1.3.11
Apache JServ 1.1
XSQL v 0.9.9.1 and
XMLSQL, XML parser v2 that comes with it.
I had dropped all java classes and reloaded
them using oraclexmlsqlload batch file.
But still getting the same error.
The thing is that I am able to insert an XML doc that was generated with an authoring tool called W4F, which extracts data from HTML pages and maps them to an XML document, even with Greek characters in it. But when the XML is generated using an editor or the servlet, like the following:
newschedule.xsql like
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="latestschedules.xsl"?>
<page connection="dtv" xmlns:xsql="urn:oracle-xsql">
<xsql:insert-request date-format="DD'/'MM'/'YYYY" table="schedule_details_view"
transform="request-to-newschedule.xsl"/>
<xsql:query table="schedule"
tag-case="lower" max-rows="5" rowset-element="latestschedules"
row-element="schedule">
select *
from schedules
order by schedule_id desc
</xsql:query>
</page>
request-to-newschedule.xsl like
<?xml version = '1.0'?>
<ROWSET xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xsl:version="1.0">
<xsl:for-each select="request/parameters">
<ROW>
<SCHEDULE_ID><xsl:value-of select="Schedule_id_field"/></SCHEDULE_ID>
<DESCRIPTION><xsl:value-of select="Description_field"/></DESCRIPTION>
<DETAILS>
<DETAILS_ITEM>
<STARTING_TIME><xsl:value-of select="Starting_Time_field_1"/></STARTING_TIME>
<DURATION><xsl:value-of select="Duration_field_1"/></DURATION>
</DETAILS_ITEM>
<DETAILS_ITEM>
<STARTING_TIME><xsl:value-of select="Starting_Time_field_2"/></STARTING_TIME>
<DURATION><xsl:value-of select="Duration_field_2"/></DURATION>
</DETAILS_ITEM>
<DETAILS_ITEM>
<STARTING_TIME><xsl:value-of select="Starting_Time_field_3"/></STARTING_TIME>
<DURATION><xsl:value-of select="Duration_field_3"/></DURATION>
</DETAILS_ITEM>
<DETAILS_ITEM>
<STARTING_TIME><xsl:value-of select="Starting_Time_field_4"/></STARTING_TIME>
<DURATION><xsl:value-of select="Duration_field_4"/></DURATION>
</DETAILS_ITEM>
<DETAILS_ITEM>
<STARTING_TIME><xsl:value-of select="Starting_Time_field_5"/></STARTING_TIME>
<DURATION><xsl:value-of select="Duration_field_5"/></DURATION>
</DETAILS_ITEM>
</DETAILS>
</ROW>
</xsl:for-each>
</ROWSET>
Hope that someone could help me on this ...
Any advice is highly appreciated.
Thanks in advance
Nicos Gabrielides
email: [email protected]
Hi,
How about applying an XSL to the existing XML doc to create another XML doc that filters out the table columns not found in the target DB table, so that all the columns match, and then using putXML to load?
Hope that helps.
OTN team@IDC -
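The column-filtering pass suggested above would normally be an XSL identity transform that copies only whitelisted elements. For illustration, here is the same idea sketched with Python's ElementTree instead of XSLT (the column names and the sample document are hypothetical; the real whitelist would come from the target table's definition):

```python
import xml.etree.ElementTree as ET

# Columns the target table actually has; everything else gets dropped.
TARGET_COLUMNS = {"SCHEDULE_ID", "DESCRIPTION"}

doc = ET.fromstring(
    "<ROWSET><ROW>"
    "<SCHEDULE_ID>1</SCHEDULE_ID>"
    "<DESCRIPTION>Movies</DESCRIPTION>"
    "<EXTRA_COLUMN>drop me</EXTRA_COLUMN>"
    "</ROW></ROWSET>"
)

# Remove any child of ROW whose tag is not a known column.
for row in doc.findall("ROW"):
    for child in list(row):
        if child.tag not in TARGET_COLUMNS:
            row.remove(child)

print(ET.tostring(doc, encoding="unicode"))
# <ROWSET><ROW><SCHEDULE_ID>1</SCHEDULE_ID><DESCRIPTION>Movies</DESCRIPTION></ROW></ROWSET>
```

After filtering, every remaining element maps to a real column, so the load no longer trips over unknown names.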
We developed an SSIS package to pull data from an Oracle source to SQL Server 2012. Here we used an ADO.NET source to pull the records, but we are getting the below error after pulling some 40K records.
[ADO NET Source [2]] Error: The ADO NET Source was unable to process the data. ORA-64203: Destination buffer too small to hold CLOB data after character set conversion.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.
The PrimeOutput method on ADO NET Source returned error code 0xC02090F5.
The component returned a failure code when the pipeline engine called PrimeOutput().
The meaning of the failure code is defined by the component,
but the error is fatal and the pipeline stopped executing.
There may be error messages posted before this with more
information about the failure.
Anything that we can do to fix this?
Hi,
Tried both....
* Having schema type as Nvarchar(max). - Getting the same error.
* Instead of the ADO.NET source, used an OLE DB source with the driver "Oracle Provider for OLE DB". Getting the error below.
[OLE DB Source [478]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on OLE DB Source returned error code 0xC0202009. The component returned a failure
code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the
failure.
Additional Info:
* Here the Source task is getting failed not the conversion or destination task.
Thanks,
Loganathan A. -
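ORA-64203 above is usually a size problem rather than a data problem: character-set conversion can multiply the byte length of a string, so a destination buffer sized for the source encoding overflows after conversion. A quick Python illustration of the expansion (sample text chosen for its non-ASCII characters):

```python
# One character can become several bytes after conversion, so byte-sized
# buffers that fit the source encoding can overflow in the target one.
text = "Fjællegård" * 100  # 1000 characters, 2 non-ASCII per word

in_single_byte = len(text.encode("latin-1"))  # 1 byte per character
in_utf8 = len(text.encode("utf-8"))           # non-ASCII chars take 2 bytes

print(in_single_byte)  # 1000
print(in_utf8)         # 1200
```

So sizing the destination CLOB buffer by source-side byte counts is not enough; it has to allow for the worst-case expansion of the target character set.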
Changing Character set in SAP BODS Data Transport
Hi Experts,
I am facing issue in extracting data from SAP.
Job details: I am using an ABAP data flow which fetches the data from SAP and loads it into an Oracle table using Data Transport.
It's giving me the below error while executing my job:
(12.2) 05-06-11 11:54:30 (W) (3884:2944) FIL-080102: |Data flow DF_SAP_EXTRACT_QMMA|Transform R3_QMMA_EXTRACT__AL_ReadFileMT_Process
End of file was found without reading a complete row for file <D:/DataService/SAP/Local/Z_R3_QMMA>. The expected number of
columns was <30> while the number of columns actually read was <10>. Please check the input file for errors or verify the
schema specification for the file format. The number of rows processed was <8870>.
Reason: when I analyzed this, I found the cause is the presence of special characters in the data. While generating the data file in the SAP working directory (available on the SAP application server), the SAP code page is 1100, due to which both the file's delimiter and the special characters are represented with #. So once the ABAP is executed and the data is read from the file, the # is treated as a delimiter, throwing the above error.
I tried to replace the special characters in the ABAP data flow, but the ABAP data flow does not support the replace_substr function. I also tried changing the code page value to UTF-8 in the SAP datastore properties, but this didn't work either.
Please let me know what needs to be done to resolve this issue. Is there any way we can change the character set while reading the generated data file in BODS, to convert code page 1100 to UTF-8?
Thanks in advance.
Regards,
Sudheer.
Unfortunately, I am no longer working on this particular project/problem. What I did discover, though, is that /127 actually refers to the character <control>+<backspace> (http://en.wikipedia.org/wiki/Delete_character).
In SAP this and any other unknown characters get converted to #.
The conclusion I came to at the time, was that these characters made their way into the actual data and was causing the issue. In fact I think it is still causing the issue, since no one takes responsibility for changing the records, even after being told exactly which records need to be updated ;-)
I think I did try to make the changes on the above mentioned file, but without success. -
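The '#' substitution described above is the standard fallback when a character has no representation in the target code page; only a round trip through a code page that actually contains the characters preserves them. A Python sketch of the effect (Python's replacement marker is '?' where SAP shows '#'):

```python
name = "Søren Fjællegård"

# A code page that contains the characters round-trips cleanly.
print(name.encode("latin-1").decode("latin-1"))  # Søren Fjællegård

# A code page that lacks them substitutes a marker for each one --
# the same effect as SAP code page 1100 rendering them as '#'.
print(name.encode("ascii", errors="replace").decode("ascii"))  # S?ren Fj?lleg?rd
```

Once the markers are in the file, the original characters are gone for good, which is why the fix has to happen at extraction time (or in the source records), not downstream.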
Character set migration error to UTF8 urgent
Hi
When we migrated from the AR8ISO8859P6 character set to UTF8, we started facing one error: when I try to compile one package through Forms, I get the error "program unit PU not found".
When I run the source code of that procedure directly from the database using SQL*Plus, it runs without any problem. How can I migrate these forms from AR8ISO8859P6 to UTF8? We migrated from an Oracle 8.1.7 database with AR8ISO8859P6 to an Oracle 9.2 database with character set UTF8 (Windows 2000); export and import completed without any error.
I am using Oracle 11i, which calls Forms 6i and Reports 6i.
with regards
ramya
1) This is a server-side program. When connecting with Forms I get the error; when I run this program using direct SQL it works, but when I compile it I get this error.
3) Yes, I am using 11i (11.5.10), which calls Forms 6i and Reports. Why is this giving a problem in Forms? Is there any setting to change in the Forms NLS_LANG?
with regards
Hi Ramya,
What I understand from your question is that you are trying to compile a procedure from a Forms interface on the client side.
If yes, you should check the code in the form that is calling the compilation package.
Does it contain strings that might be affected by the character set change?
Tony G.