Unicode (numerical) character conversion

What is the problem?
Some characters written as numeric Unicode references do not get transformed into their appropriate font character.
Example
Numerical version:    the numeric character reference &#8805; is printed literally (as "&#8805;")
String version:          the character ≥ is printed correctly
Both represent the same Unicode code point, yet LiveCycle DS will only convert the "string" version and not the numerical one.
Question
Is there a way to make LiveCycle convert the numerical Unicode representation of special characters as well? If so, how do I do this using LiveCycle Designer? I am working with text fields.
Workaround
I have written the following workaround in my Java project. Obviously I'd appreciate it if LC did this work for me, as I'm not willing to implement a full-scale Unicode table when LC already has one of its own (the string version works ...):
private String getNodeText(NodeList nl) {
    if (nl == null) {
        return null;
    }
    if (nl.getLength() == 0) {
        return null;
    }
    String unescapedString = StringEscapeUtils.unescapeXml(nl.item(0).getTextContent());
    return (unescapedString.equals("null") ? null : replaceUnicodeNumbers(unescapedString));
}

private String replaceUnicodeNumbers(String unescapedString) {
    return unescapedString.replace("&#8805;", "≥"); // I changed this code to something more usable
}
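
A more general variant of this workaround (not from the original post; just a sketch under the assumption that the text only contains well-formed decimal references of the form &#NNNN;) decodes every numeric character reference with a regular expression, so no per-character table is needed:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Matches decimal numeric character references such as &#8805;
private static final Pattern NUMERIC_REF = Pattern.compile("&#(\\d+);");

private String replaceUnicodeNumbers(String unescapedString) {
    Matcher m = NUMERIC_REF.matcher(unescapedString);
    StringBuffer sb = new StringBuffer();
    while (m.find()) {
        // Convert the decimal code point to the character(s) it denotes
        int codePoint = Integer.parseInt(m.group(1));
        String replacement = new String(Character.toChars(codePoint));
        m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
    }
    m.appendTail(sb);
    return sb.toString();
}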

You want the LPAD function
select LPAD( col1 , 9, ' ') from my_table;
Hi everybody !
Can someone solve this for me? I have a varchar2(9) variable, var, with the value '1234'.
Now I want to convert this to having leading blanks, i.e. ' 1234'.
How do I do this?
If I use the to_char(var,'fmt') function I get an error message that says there are too many declarations of to_char, I guess because the variable already is a varchar2.
Thanks in advance
Vrjan
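
(As an aside, not part of the original answer: if the same blank-padding is ever needed on the Java client side rather than in SQL, String.format gives an equivalent of LPAD with spaces. A minimal sketch:)

// Right-justify the value in a field of 9 characters, padding with blanks,
// which mirrors LPAD(col1, 9, ' ') on the database side.
String var = "1234";
String padded = String.format("%9s", var);   // "     1234"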

Similar Messages

  • Character conversion problems when calling FM via RFC from Unicode ECC 6.0?

    Hi all,
    I faced a Cyrillic character conversion problem while calling an RFC function from R/3 ECC 6.0 (initialized as a Unicode system - c.p. 4103). My target system is R/3 4.6C with default c.p. 1500.
    The parameter I used in my FM interface in the target system is of type CHAR10 (single-byte, obviously).
    I have defined the RFC connection (SM59) as an ABAP connection, and client/logon language/user/password are supplied.
    The problem I face is that Cyrillic symbols are transferred as '#' to the target system ('#' is set as the default symbol in the RFC destination definition in case a character conversion error is met).
    Checking conversions between c.p. 4103 and target c.p. 1500 in my source system using the tools of transaction i18n shows no errors - meaning the conversion passes OK. It seems the default character conversion executed by the source system within the scope of the RFC destination definition is doing something wrong.
    Further, I played with the MDMP & Unicode settings within the RFC destination definition with no success - perhaps due to a lack of documentation on how to set and manage these parameters.
    The question is: does anyone have experience with conversion between Unicode and non-Unicode systems via an RFC call (non-English target obligatory!!!), or can anyone share valuable information regarding this issue - what should be managed in the RFC destination in order to get character conversion working? Is it acceptable to use any character parameter in the target function module interface at all?
    Many thanks in advance.
    Regards,
    Ivaylo Mutafchiev
    Senior SAP ABAP Consultant

    hey,
    I had a similar experience. I was interfacing between 4.6 (RFC), PI and ECC 6.0 (ABAP Proxy). When data was passed from ECC to 4.6, RFC received it incorrectly. So I had to send trimmed strings from ECC and receive them as strings in RFC (especially for CURR and QUAN fields). Also the receiver communication channel in PI (between PI and RFC) had to be set as non-Unicode. This helped a bit. But I am still getting 2 issues: truncation of values and some additional digits! The above changes did resolve the unwanted-characters problem with "<" and "#". You can find a related post in my id. Hope this info helps.

  • Unicode character conversion

    Hello,
    From an external system we receive XML messages in UTF-8. The data are transferred from XI to the SAP WAS by the RFC adapter. The communication language is set to 'CS' (Czech). The data are saved into the database with no conversion to the configured code page (1401). The receiving system is not Unicode compatible.
    Unicode character references (e.g. &#x159;) are written into the database instead of the single characters of the Czech alphabet.
    Is there any way to force XI to perform a character conversion?
    Thanks for any feedback.
    Marian Morzol

    Please identify your database character set:
    SQL> select * from NLS_DATABASE_PARAMETERS;
    If the NLS_CHARACTERSET is WE8ISO8859P1, it is not capable of storing the Euro symbol (please do a Google search to find various references).
    To store the Euro symbol, you will most likely need to change the database character set to UTF8 - please see the MOS Docs mentioned in this thread for details - Adding Greek & German language to R12
    HTH
    Srini

  • Printing ZPL (Zebra) data to printer spooler without character conversion

    Hi all,
    We are printing shipping labels from UPS, with a process where we receive the ZPL label code directly from UPS, and we just need to pass the data to the printer to get the labels. We have already implemented this with FedEx and some custom labels, and it works perfectly. The problem with the UPS label data is that it contains non-printable characters (in the MaxiCode data field). When passed to the SAP printer spooler (see the code example below), the data gets corrupted because SAP interprets these non-printable characters as printer control codes.
    I have verified this by saving the ZPL data to a local file, before printing it through the SAP spooler. I then print this raw data and compare the output with the labels printed from the spooler. The MaxiCode (the big 2D barcode) is different in these labels. UPS has also tested the labels, and rejected them because of incorrect data in the barcode.
    For printing, we are using printers defined as type "PLAIN", but I also tried using the "LZEB2" device type with the same result. The error we see in the spooler entry is this:
    Print ctrl S_0D_ is not defined for this printer. Page 1, line 2, col. 2201
    Print output may not be as intended
    The printer ctrl code differs, depending on the label. I have examined the spooler data in "raw" mode, and there is always an ASCII character 28 (hex 1C) in front of the characters that SAP thinks are control codes, and this is why I think these non-printable characters are the reason for the problems.
    This is the function module I use to print the ZPL data (and as stated above, this works fine for Fedex and custom labels). The ZPL data is converted to binary format before passed to the function module, but I also tried to send the data in text format with another FM, but the result is the same. I have experimented with the "codepage" parameter, and this one gives the least amount of errors, and some labels actually get through without errors. But still at least 50% of the labels gets corrupted, with log entries like above.
    CALL FUNCTION 'RSPO_SR_WRITE_BINARY'
          EXPORTING
            handle           = lv_spool_handle
            data             = lv_label_line_bin
            length           = lv_len
            codepage         = '2010'
          EXCEPTIONS
            handle_not_valid = 1
            operation_failed = 2
            OTHERS           = 3.
    Does anyone know if there is a way to send data to the spooler without character conversion or interpretation of printer control codes? Or is there any other smart way to get around this problem?
    /Leif

    I do a more direct output to the spooler, to avoid any issues with the WRITE statement and SAP's report output processing. At the same time, I insert line breaks so that the output is easy to debug in the spooler if needed. Also included is the code to detect the escape code (ASCII #28) and to insert a control code ZZUPS in its place (you can skip this for FedEx). Here's a simplified example, but please note this is for a Unicode system; some minor changes are required in a non-Unicode system.
    CONSTANTS: lc_spcode TYPE c LENGTH 5 VALUE 'ZZUPS',
               lc_xlen TYPE i VALUE 5.
       DATA: lv_print_params TYPE pri_params,
             lv_spool_handle TYPE sy-tabix,
             lv_name TYPE tsp01-rq0name,
             lv_spool_id TYPE rspoid,
             lv_crlf(2) TYPE c,
             lv_lf TYPE c,
             lstr_label_data TYPE zship_label_data_s,
             lv_label_line TYPE char512,
             lv_label_line_bin TYPE x LENGTH 1024,
             lv_len TYPE i,
             ltab_label_data_255 TYPE TABLE OF char512,
             ltab_label_data TYPE TABLE OF x,
             lv_c1 TYPE i,
             lv_c2 TYPE i,
             lv_cnt1 TYPE i,
             lv_cnt2 TYPE i,
             lv_x(2) TYPE x.
       FIELD-SYMBOLS: <n> TYPE x.
       lv_crlf = cl_abap_char_utilities=>cr_lf.
       lv_lf = lv_crlf+1(1).
       lv_name = 'ZPLLBL'.
    CALL FUNCTION 'RSPO_SR_OPEN'
         EXPORTING
           dest                   = i_dest
           name                   = lv_name
           prio                   = '5'
           immediate_print        = 'X'
           titleline              = i_title
           receiver               = sy-uname
    *      lifetime               = '0'
           doctype                = ''
         IMPORTING
           handle                 = lv_spool_handle
           spoolid                = lv_spool_id
         EXCEPTIONS
           device_missing         = 1
           name_twice             = 2
           no_such_device         = 3
           operation_failed       = 4
           OTHERS                 = 5.
       IF sy-subrc <> 0.
         RAISE spool_open_failed.
       ENDIF.
    LOOP AT i_label_data INTO lstr_label_data.
         CLEAR ltab_label_data_255.
         SPLIT lstr_label_data-label_data AT lv_lf INTO TABLE ltab_label_data_255.
         LOOP AT ltab_label_data_255 INTO lv_label_line.
           IF lv_label_line NE ''.
             lv_len = STRLEN( lv_label_line ).
    *       Convert character to hex type
             lv_c1 = 0.
             lv_c2 = 0.
             DO lv_len TIMES.
               ASSIGN lv_label_line+lv_c1(1) TO <n> CASTING.
               MOVE <n> TO lv_x.
               IF lv_x = 28.
                 lv_cnt1 = 0.
                 lv_label_line_bin+lv_c2(1) = lv_x.
                 lv_c2 = lv_c2 + 1.
                 DO lc_xlen TIMES.
                   ASSIGN lc_spcode+lv_cnt1(1) TO <n> CASTING.
                   MOVE <n> TO lv_x.
                   lv_cnt2 = lv_c2 + lv_cnt1.
                   lv_label_line_bin+lv_c2(2) = lv_x.
                   lv_c2 = lv_c2 + 2.
                   lv_cnt1 = lv_cnt1 + 1.
                   lv_len = lv_len + 1.
                 ENDDO.
               ELSE.
                 lv_label_line_bin+lv_c2(2) = lv_x.
                 lv_c2 = lv_c2 + 2.
               ENDIF.
               lv_c1 = lv_c1 + 1.
             ENDDO.
    *       Print binary data to spool
             lv_len = lv_len * 2. "Unicode is 2 bytes per character
             CALL FUNCTION 'RSPO_SR_WRITE_BINARY'
               EXPORTING
                 handle                 = lv_spool_handle
                 data                   = lv_label_line_bin
                 LENGTH                 = lv_len
               EXCEPTIONS
                 handle_not_valid       = 1
                 operation_failed       = 2
                 OTHERS                 = 3.
             IF sy-subrc <> 0.
               RAISE spool_write_failed.
             ENDIF.
           ENDIF.
         ENDLOOP.
       ENDLOOP.
       CALL FUNCTION 'RSPO_SR_CLOSE'
         EXPORTING
           handle = lv_spool_handle.
       IF sy-subrc <> 0.
         RAISE spool_close_failed.
       ENDIF.

  • ORA-01858: a non-numeric character was found where a numeric was expected

    hi ,
    This is the code that shows the sales rep invoice amount and collected amount, but when running the report through the concurrent program it shows the following error:
    ORA-01858: a non-numeric character was found where a numeric was expected
    WHERE TO_CHAR ( TO_DATE ( PS.GL_DATE , 'DD/MON/YY' ) , 'MON-YYYY' ) BETWEEN TO_CHAR ( TO_DATE ( : ==> P_todate , 'YYYY/MM/DD' ) , 'MON-YYYY' ) AND TO_CHAR ( TO_DATE ( : P_todate , 'YYYY/MM/DD' ) , 'MON-YYYY' ) AND ps.customer_id = cust.custome
    The actual code is this:
    SELECT SUBSTR(SALES.name,1,50) salesrep_name_inv,
    --ps.CLASS,
    SUM(ABS(ps.acctd_amount_due_remaining)) acctd_amt,
    SUM(ABS(ps.amount_due_remaining)) amt_due_remaining_inv,
    SUM(ABS(ps.amount_adjusted)) amount_adjusted_inv,
    SUM(ABS(ps.amount_applied)) amount_applied_inv,
    SUM(ABS(ps.amount_credited)) amount_credited_inv,
              SALES.salesrep_id,
    NULL "REMARKS"
    -- ps.gl_date gl_date_inv,
    FROM ra_cust_trx_types ctt,
    ra_customers cust,
    ar_payment_schedules ps,
    ra_salesreps SALES,
    ra_site_uses site,
    ra_addresses addr,
    ra_cust_trx_line_gl_dist gld,
    gl_code_combinations c,
    ra_customer_trx ct
    WHERE TO_CHAR(TO_DATE(PS.GL_DATE,'DD/MON/YY'),'MON-YYYY')
    BETWEEN TO_CHAR(TO_DATE(:P_todate,'YYYY/MM/DD'),'MON-YYYY') AND TO_CHAR(TO_DATE(:P_todate,'YYYY/MM/DD'),'MON-YYYY')
    AND ps.customer_id = cust.customer_id
    AND ps.customer_trx_id = ct.customer_trx_id
    AND ps.cust_trx_type_id = ctt.cust_trx_type_id
    AND NVL(ct.primary_salesrep_id, -3) = SALES.salesrep_id
    AND ps.customer_site_use_id+0 = site.site_use_id(+)
    AND site.address_id = addr.address_id(+)
    AND TO_CHAR(TO_DATE(PS.GL_DATE_CLOSED,'DD/MON/YY'),'MON-YYYY')
    BETWEEN TO_CHAR(TO_DATE(:P_todate,'YYYY/MM/DD'),'MON-YYYY') AND TO_CHAR(TO_DATE(:P_todate,'YYYY/MM/DD'),'MON-YYYY')
    --AND    ps.gl_date_closed > TO_DATE(:P_todate,'MON-YYYY')
    AND ct.customer_trx_id = gld.customer_trx_id
    AND gld.account_class = 'REC'
    AND gld.latest_rec_flag = 'Y'
    AND gld.code_combination_id = c.code_combination_id
    AND sales.salesrep_id is not null and sales.name is not null
    -- and ps.payment_schedule_id+0 < 9999
    -- AND SALES.salesrep_id ='1001'
    GROUP BY SALES.name,
    --ps.CLASS,
    SALES.salesrep_id

    So the to_date function accepts a string as input and returns a date. When a date is passed in instead, it is implicitly converted to the required type of the function parameter, which is a string, so that to_date can convert it back to a date again.
    If you are lucky with the implicit conversion, you get the same date back; if you are not, you might get a different date or an error.
    From your query it appears that this conversion from a date, to a string, to a date, and then back to a string using to_char this time, is being done to remove the time or day part of the date. The actual range comparison is being done on strings rather than dates, which is dangerous as strings sort differently than dates.
    In this example if I sort by date, Jan 01 comes between Dec 00 and Feb 01 as you would expect.
    SQL> select * from t order by d;
    D
    12-01-2000
    01-01-2001
    02-01-2001
    When converted to strings, Feb 01 comes between Dec 00 and Jan 01, which is probably not the desired result:
    SQL> select * from t order by to_char(d,'DD-MON-YY');
    D
    12-01-2000
    02-01-2001
    01-01-2001
    If you want to remove the time and day parts of dates you should use the trunc function:
    trunc(d) removes the time, trunc(d,'mm') will remove the days to the start of the month.
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/functions201.htm#i79761
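
    To make the same point concrete from the client side, here is a small, hypothetical JDBC sketch (it reuses the example table t and column d from above, assumes an open Connection conn, and is not part of the original answer). It binds the cut-off date as a real DATE and lets TRUNC do the month comparison, so no date is ever compared as a string:

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // TRUNC(d, 'MM') reduces both sides to the first day of their month,
    // so the comparison happens between DATE values, never between strings.
    String sql = "SELECT d FROM t WHERE TRUNC(d, 'MM') = TRUNC(?, 'MM')";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setDate(1, java.sql.Date.valueOf("2001-01-15"));   // the month we are interested in
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getDate("d"));           // the 01-01-2001 row from the sample data
            }
        }
    }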

  • "character conversion error" while parsing xml files

    Hello,
    I'm trying to parse MusicXML (Recordare) files, but I'm getting an exception.
    I'm using the SAX parser (javax.xml.parsers.SAXParser).
    Here is the code I use to instantiate it:
    final javax.xml.parsers.SAXParserFactory saxParserFactory = javax.xml.parsers.SAXParserFactory.newInstance();
    final javax.xml.parsers.SAXParser saxParser = saxParserFactory.newSAXParser();
    final org.xml.sax.XMLReader parser = saxParser.getXMLReader();
    I'm using my own handler, but I get the same exception even if I use org.xml.sax.helpers.DefaultHandler.
    The error I get is:
    Character conversion error: "Illegal ASCII character, 0xc2" (line number may be too low).
    The first few lines of my xml files look like this:
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <!DOCTYPE score-partwise
    PUBLIC "-//Recordare//DTD MusicXML 0.6 Partwise//EN"
    "http://www.musicxml.org/dtds/partwise.dtd">
    <score-partwise>
    [...etc...]
    If I delete the <!DOCTYPE ...> line, then I don't get the exception anymore. But the MusicXML files I get (from some other program) always contain this line, and it would be quite some work to delete them from every file manually.
    So does anyone know if there is a way to avoid deleting that line in every file, while still being able to parse the xml files without exceptions?
    Or maybe does anyone know what the exact cause of the exception is? (because I don't know what exactly causes it)
    Thank you in advance.
    Greetz,
    Jipo

    "So does anyone know if there is a way to avoid deleting that line in every file, while still being able to parse the xml files without exceptions?"
    OK, this is side-stepping the real problem, but I've used this code to filter out DTD references for other reasons:
    public static InputStream filterOutDTDRef(InputStream in) throws IOException {
        // Read the whole document into memory and cut out the <!DOCTYPE ...> declaration
        BufferedReader iniReader = new BufferedReader(new InputStreamReader(in));
        StringBuffer newXML = new StringBuffer();
        for (String line = iniReader.readLine(); line != null; line = iniReader.readLine())
            newXML.append(line + "\n");
        in.close();
        int s = newXML.indexOf("<!DOCTYPE ");
        if (s != -1)
            newXML.replace(s, newXML.indexOf(">", s) + 1, "");
        return new ByteArrayInputStream(newXML.toString().getBytes());
    }
    It actually speeds up the parsing phase too (since the DTD references were on the web and the XML standard mandates that there is a fetch for each xml file parsed). You can feed the above into the InputSource constructor that takes an InputStream argument.
    Now for the real problem... 0xc2 is "LATIN CAPITAL LETTER A WITH CIRCUMFLEX" according to a Unicode chart - which is not an ASCII character (as the error message correctly reports). I'm not sure why the file is being parsed as ASCII, though. You could try passing a FileReader to the InputSource and hope it picks up the default character encoding of your system, and that that character encoding matches the file. Or you could try passing in a reader constructed with an explicit character encoding (e.g. "UTF8") and see if that does the trick?
    asjf
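
    For illustration only (not from the original reply, and the file name is a placeholder): the "explicit character encoding" suggestion above can be sketched by wrapping the file in an InputStreamReader before building the InputSource, so the parser never has to guess:

    import java.io.FileInputStream;
    import java.io.InputStreamReader;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.InputSource;
    import org.xml.sax.helpers.DefaultHandler;

    // Parse the MusicXML file with an explicitly declared encoding
    // (exception handling omitted for brevity).
    SAXParser saxParser = SAXParserFactory.newInstance().newSAXParser();
    InputSource src = new InputSource(
            new InputStreamReader(new FileInputStream("score.xml"), "UTF-8"));
    saxParser.parse(src, new DefaultHandler());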

  • Error while pulling data from an Oracle database. ORA-01858: a non-numeric character was found where a numeric was expected

    I'm trying to pull data from an Oracle database using SSIS. When I try to select a few fields from the source table, it returns the following error message:
        [OLE DB Source [47]] Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E14.
        An OLE DB record is available.  Source: "OraOLEDB"  Hresult: 0x80040E14  Description: "ORA-01858: a non-numeric character was found where a numeric was expected".
        An OLE DB record is available.  Source: "OraOLEDB"  Hresult: 0x80004005  Description: "ORA-01858: a non-numeric character was found where a numeric was expected".
    The source columns are a combination of numeric and text fields, and I've also tried selecting just one of them, which didn't work. I'm using Oracle client 11.2.0.1, and it works fine with any other data sources I have connected to so far. How can I resolve this error?

    Hi H.James,
    According to your description, the issue is that "a non-numeric character was found where a numeric was expected" while pulling data from an Oracle database in SSIS.
    Based on the error message, the issue should be that you are comparing a number column to a non-number column in a query, such as the query below (ConfID is a number, Sdate is a date):
     where C.ConfID in (select C.Sdate
                       from Conference_C C
                       where C.Sdate < '1-July-12')
    Besides, a default behavior of the Oracle OLE DB Provider that changes the NLS date format of the session to 'YYYY-MM-DD HH24:MI:SS' can also cause the issue. For more details, please refer to the following blog:
    http://blogs.msdn.com/b/dataaccesstechnologies/archive/2012/01/20/every-bug-is-a-microsoft-bug-until-proven-otherwise.aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Non-numeric character error

    This is my error:
    javax.servlet.ServletException: ORA-01858: a non-numeric character was found where a numeric was expected
    This is my code:
    ResultSet myRs = myStmt.executeQuery(
            "select count(*) dexCount from cs_dexia where userid = '" + vUserid + "'");
    int vCount = 0;
    while (myRs.next()) {
        vCount = myRs.getInt("dexCount");
    }
    out.println(vCount);
    if (vCount > 0) {
        response.sendRedirect("swapform.jsp");
    }
    What have I done wrong?

    I have changed that and am now getting:
    javax.servlet.ServletException: ORA-00904: invalid column name
    The query runs in my SQL tool with no problem; could it be something to do with the alias?
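
    (Not part of the original thread; just an illustrative sketch that assumes the same conn, vUserid, out and response objects as the code above, with the java.sql imports of the original snippet.) Whatever the USERID column's actual datatype turns out to be, binding the value instead of concatenating it into the SQL text removes the quoting guesswork and the injection risk:

    // Let the driver handle quoting and type conversion for the USERID column.
    String sql = "select count(*) dexCount from cs_dexia where userid = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, vUserid);
        try (ResultSet rs = ps.executeQuery()) {
            int vCount = rs.next() ? rs.getInt("dexCount") : 0;
            out.println(vCount);
            if (vCount > 0) {
                response.sendRedirect("swapform.jsp");
            }
        }
    }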

  • Error occurred during character conversion in SXMB_MONI

    Hello Experts,
    Good Day!
    I would like to seek your help here. When I use transaction SXMB_MONI to search for messages I get this error: "Error occurred during character conversion".
    So far there has been no problem with the program. It works for all other dates; just for one particular day and a specific time period I am getting this error.
    Does anyone know what this error means? Please reply.
    Thanks for your help.
    Looking forward to your replies.

    Hi Presheela,
    Basically this problem occurs when your payload contains special characters like '&', '>', etc., so you have to take care of how these characters are handled in XI.
    Refer the following documentation:
    How to Work with Character Encodings in Process Integration
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42
    Regards,
    Vinod.

  • Character conversion error

    Hi all,
    We are getting the following error when trying to parse an XML string resource: Character conversion error: "Illegal ASCII character, 0xc2" (line number may be too low). We have not been able to get around this. We have tried creating the InputSource in two different ways:
    reader = new StringReader(stringSource);
    src = new InputSource( reader );
    and
    src = new InputSource(new InputStreamReader(new ByteArrayInputStream(stringSource.getBytes())));
    The problem does appear to go away if we treat the DTD we are validating against as a file. If we set it as a URI, we get the above problem.
    Is anyone else experiencing this problem?
    Any help would be greatly appreciated.
    Thanks in advance,
    Greg

    Hi,
    2 possible solutions:
    1) try using the xerces parser instead of sun's parser
    2) look at the posting at the following URL and see whether the posted solution solves your problem: http://forums.java.sun.com/thread.jsp?forum=34&thread=67558
    Hope this helps,
    Kurt.

  • Problem in the character conversion

    Hi Guys,
    I am facing a problem with character conversion.
    I am posting data from SAP to a third-party system using XI, by converting the whole input message to a String. I am using the SOAP adapter to communicate from XI to the third-party system.
    The third-party system needs the String to be wrapped in CDATA so that it will not choke on the special characters. I did wrap the output string in CDATA using ABAP mapping, but when I do that, XI converts the angle brackets < and > into &lt and &gt. My assumption is that it is double encoding.
    Example:
    before mapping -  <AppSystemInfo>
    after mapping it is converted to -  <![CDATA[ &ltAppSystemInfo&gt]]>

    Did you try to see the actual output?
    If you are checking this in mapping testing, it will show it like this, because the conversion is for XML: XML will not do anything wrong with the special characters, so the special characters are displayed converted like that.
    Try running the interface end to end and look at the receiver side to see how the data actually looks.
    Thanks,
    Hetal

  • Replace Non-Numeric Characters with a Numeric Character in a String

    Hi Guys,
    I need to replace all the non-numeric characters (including embedded blanks and hyphens) in a string with the numeric character '1'.
    The trailing blanks should not be replaced.
    e.g. "P22233344455566" should be changed to "122233344455566"
    &    "49-1234567           " should be changed to "4911234567          "
    Please help.

    Use [replace|http://help.sap.com/abapdocu_70/en/ABAPREPLACE_IN_PATTERN.htm] with a regular expression to translate any non-numeric character (i.e. any character not between 0 and 9) to 1:
      REPLACE ALL OCCURRENCES OF REGEX '[^0-9]' IN value WITH '1'.
    Cheers, harald
    p.s.: In older releases [translate|http://help.sap.com/abapdocu_70/en/ABAPTRANSLATE.htm] would also do the trick, but is more lengthy, because one would need to specify each individual character that should be replaced, e.g.:
      TRANSLATE value TO LOWER CASE.
      TRANSLATE value USING
          ' 1_1-1a1b1c1d1e1f1g1h1i1j1k1l1m1n1o1p1q1r1s1t1u1v1w1x1y1z1'.
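
    For reference only (not from the original answer, and assuming a Java String named value holding the input): the same rule can be sketched in Java with a regular expression, splitting off the trailing blanks first so that they survive the replacement:

      // Replace every non-digit with '1', but leave the trailing blanks untouched.
      String head = value.replaceFirst("\\s+$", "");            // value without its trailing blanks
      String tail = value.substring(head.length());             // the trailing blanks themselves
      String result = head.replaceAll("[^0-9]", "1") + tail;
      // "49-1234567           "  ->  "4911234567           "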

  • Sqlldr with error: a non-numeric character was found where a numeric was expected

    Hi,
    I have been struggling with this problem for a long time and can't get anywhere.
    I am trying to use sqlldr to load a CSV file into a table. The table looks like this:
    AD_ID NUMBER(38)
    CNTCT_ID VARCHAR2(60)
    AD_FILE_NAME VARCHAR2(80)
    AD_TITLE VARCHAR2(300)
    AGCY_APRVL_DATE DATE
    CORE_APRVL_DATE DATE
    ENTR_CMNT CLOB
    IC_APRVL_DATE DATE
    PURP_TEXT CLOB
    RVW_BRD_APRVL_DATE DATE
    ACTIVE_FLAG VARCHAR2(1)
    .......................................more fields
    The control file looks like this:
    LOAD DATA
    INFILE "C:\ORACLE_IRTMB\IRPADS\SQL_DATA\ADS_T.CSV"
    BADFILE "C:\ORACLE_IRTMB\IRPADS\ADS_T.BAD"
    DISCARDFILE "C:\ORACLE_IRTMB\IRPADS\ADS_T.DSC"
    truncate INTO TABLE ADS_T
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
         AD_ID INTEGER ,
         CNTCT_ID char ,
         AD_FILE_NAME char ,
         AD_TITLE char nullif (AD_TITLE=BLANKS) ,
         AGCY_APRVL_DATE DATE "MM/DD/YYYY" nullif (AGCY_APRVL_DATE=BLANKS) ,
         CORE_APRVL_DATE DATE "MM/DD/YYYY" ,
         ENTR_CMNT CHAR(7000) nullif (ENTR_CMNT=BLANKS) ,
    PURP_TEXT CHAR(7000) nullif (ENTR_CMNT=BLANKS) ,
         RVW_BRD_APRVL_DATE DATE "MM/DD/YYYY" ,
         ACTIVE_FLAG char ,
         ....more fields
    The Data file looks like this:
    10132,simpsonl,PMSDHHStemplate.pdf,"Depression, Irritability, Mood Swings Sound Familiar?",1/13/2003,11/14/2002,,1/13/2003,"The NIMH is conducting research on premenstrual
    10133,jolkovsl,10133ClozapineDHHS ver 0.pdf,Mood Swings? Unpredictable Moods? Are These Moods hard to Treat?,1/28/2003,11/14/2002,,1/28/2003,"The NIMH is conducting a study to test the efficacy of ...
    --- and log file looks like this:
    Record 5: Rejected - Error on table ADS_T, column RVW_BRD_APRVL_DATE.
    second enclosure string not present
    Record 7: Rejected - Error on table ADS_T, column RVW_BRD_APRVL_DATE.
    second enclosure string not present
    Record 9: Rejected - Error on table ADS_T, column RVW_BRD_APRVL_DATE.
    second enclosure string not present
    Record 2: Rejected - Error on table ADS_T, column AGCY_APRVL_DATE.
    ORA-01858: a non-numeric character was found where a numeric was expected
    Record 4: Rejected - Error on table ADS_T, column AGCY_APRVL_DATE.
    ORA-01858: a non-numeric character was found where a numeric was expected
    Record 6: Rejected - Error on table ADS_T, column AGCY_APRVL_DATE.
    ORA-01858: a non-numeric character was found where a numeric was expected
    IF I use to_date in control file:
    LOAD DATA
    INFILE "C:\ORACLE_IRTMB\IRPADS\SQL_DATA\ADS_T.CSV"
    BADFILE "C:\ORACLE_IRTMB\IRPADS\ADS_T.BAD"
    DISCARDFILE "C:\ORACLE_IRTMB\IRPADS\ADS_T.DSC"
    truncate INTO TABLE ADS_T
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
         AD_ID INTEGER ,
         CNTCT_ID char ,
         AD_FILE_NAME char ,
         AD_TITLE char nullif (AD_TITLE=BLANKS) ,
         AGCY_APRVL_DATE "to_date(:AGCY_APRVL_DATE,'MM/DD/YYYY')" ,
         CORE_APRVL_DATE DATE "MM/DD/YYYY" ,
         ENTR_CMNT CHAR(7000) nullif (ENTR_CMNT=BLANKS) ,
         IC_APRVL_DATE DATE "MM/DD/YYYY" ,
         PURP_TEXT CHAR(10000) nullif (PURP_TEXT=BLANKS) ,
         RVW_BRD_APRVL_DATE DATE "MM/DD/YYYY" ,
    I got exactly the same error message as above...
    If I use to_char in the control file:
    LOAD DATA
    INFILE "C:\ORACLE_IRTMB\IRPADS\SQL_DATA\ADS_T.CSV"
    BADFILE "C:\ORACLE_IRTMB\IRPADS\ADS_T.BAD"
    DISCARDFILE "C:\ORACLE_IRTMB\IRPADS\ADS_T.DSC"
    truncate INTO TABLE ADS_T
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
         AD_ID INTEGER ,
         CNTCT_ID char ,
         AD_FILE_NAME char ,
         AD_TITLE char nullif (AD_TITLE=BLANKS) ,
         AGCY_APRVL_DATE "to_char(:AGCY_APRVL_DATE,'MMDDYYYY')" ,
    Then it says the value is not a valid number:
    Record 2: Rejected - Error on table ADS_T, column AGCY_APRVL_DATE.
    ORA-01722: invalid number
    Record 4: Rejected - Error on table ADS_T, column AGCY_APRVL_DATE.
    ORA-01722: invalid number
    Record 6: Rejected - Error on table ADS_T, column AGCY_APRVL_DATE.
    ORA-01722: invalid number
    Record 8: Rejected - Error on table ADS_T, column AGCY_APRVL_DATE.
    ORA-01722: invalid number
    someone, please help me out here.
    Thanks a lot.
    Wei

    Hello,
    please use to_date.
    If your session is set to the default date format of DD-MON-YY, execute the following and you will receive the error message:
    SQL> select to_date('20-JAN-2010', 'DD-MM-YYYY') from dual;
    ERROR:
    ORA-01858: a non-numeric character was found where a numeric was expected
    no rows selected
    When you are converting a string to a date, you have specified that the date is being passed in DD-MM-YYYY format, but you have passed the date in DD-MON-YYYY format. As Oracle expects the month as a number but you have passed characters, Oracle is unable to translate the string to a number.
    Do one of the following:
    SQL> select to_date('20-JAN-2010', 'DD-MON-YYYY') from dual
    2 /
    TO_DATE
    20-JAN-2010
    OR
    SQL> select to_date('20-10-2010', 'DD-MM-YYYY') from dual
    2 /
    TO_DATE
    20-OCT-2010
    or you can use alter session set nls_date_format='.....................';
    regards

  • ORA-01858: a non-numeric character found where a digit was expected

    Hi,
    I'll cut introductions short. We're trying to read data from a flat file (OWB ver.9.2. by the way). After successfully mapping, then generating (with two insignificant warnings), we execute.
    Although it says the execution was 'successful', more than a dozen errors are returned in the form of ORA-01858: a non-numeric character was found where a numeric was expected.
    a - Here is the error:
    Record 1: Rejected - Error on table "TOPSSTAGE"."FLTR_DCS_IRREG", column "OPERATIVE_FLIGHT_LEG_DATE".
    ORA-01858: a non-numeric character was found where a numeric was expected
    *it's actually repeated 20 times (Record 1, Record 2, and so on and so forth...) with the same exact error description and details.
    b - Here's the part of the OWB-generated script that defines the creation of that field/column:
    "OPERATIVE_FLIGHT_LEG_DATE" DATE "mm/dd/yyyy"
    c - And here's the data from the flat file (1 of 4 rows)
    "2/20/2007","I","0899","N","TPE","MNL","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","750","950","746","955"," P3231",,"04","32","32",,,,,,,,"758","946",,"A000","4","D000","5","000","0","0","0","0","D","P","0","0","0","0","0","0","0","0","0","0","0","2/22/2007","000","012","144","2/20/2007","2/20/2007"
    *the part where it is in bold is the one that is mapped from the column OPERATIVE_FLIGHT_LEG_DATE to the OPERATIVE_FLIGHT_LEG_DATE column (flat to physical, right?)
    it is set as DATE in all settings (mapping, file, etc..)
    Any form of help will be greatly appreciated :)
    Thanks!
    Regards,
    _lilim

    Hi Bharadwaj,
    Yes, I believe you are right. And yes that column is designated a primary key. The weird thing is, we do have null values being entered into the table from the flat file, but these values are far from the "OPERATIVE_FLIGHT_LEG_DATE" column (index-wise). Let me give you an example of our case:
    this is a row in the flat file: (sample data only)
    "2/20/2007","I","0899","N","TPE","MNL","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","750","950","746","955"," P3231",,"04","32","32",,,,,,,,"758","946",,"A000","4","D000","5","000","0","0","0","0","D","P","0","0","0","0","0","0","0","0","0","0","0","2/22/2007","000","012","144","2/20/2007","2/20/2007"
    as you can see, I've made the 'null' values bold, as well as the data for OPERATIVE_FLIGHT_LEG_DATE (which shows up, as you can see, at the end of the row).
    Now, in the OWB table, the OPERATIVE_FLIGHT_LEG_DATE column is set as a primary key (or foreign, I think, I'll check again) and thus is one of the first columns to receive data transferred from the flat file.
    We see nothing wrong with our mask (which is mm/dd/yyyy, by the way) or with our data type settings. It has to be the flat file, I guess. We're going to try putting dummy values in place of the null ones in the flat file. I'll get back to you asap. Thanks for helping out. :D
    Regards,
    _lilim

  • Character conversion error: Unconvertible UTF-8 character beginning..

    Hello,
    I'm using TrAX for XSLT transformations, and I'm having the following problem:
    Character conversion error: "Unconvertible UTF-8 character beginning with 0xa9" (line number may be too low).
    org.xml.sax.SAXParseException: Character conversion error: "Unconvertible UTF-8 character beginning with 0xa9" (line number may be too low).
            at org.apache.crimson.parser.InputEntity.fatal(InputEntity.java:1100)
            at org.apache.crimson.parser.InputEntity.fillbuf(InputEntity.java:1072)
            at org.apache.crimson.parser.InputEntity.isXmlDeclOrTextDeclPrefix(InputEntity.java:914)
            at org.apache.crimson.parser.Parser2.maybeTextDecl(Parser2.java:2795)
            at org.apache.crimson.parser.Parser2.externalParameterEntity(Parser2.java:2880)
            at org.apache.crimson.parser.Parser2.maybeDoctypeDecl(Parser2.java:1167)
            at org.apache.crimson.parser.Parser2.parseInternal(Parser2.java:489)
            at org.apache.crimson.parser.Parser2.parse(Parser2.java:305)
            at org.apache.crimson.parser.XMLReaderImpl.parse(XMLReaderImpl.java:442)
            at mlts.converter.XMLImport.outputESGML(XMLImport.java:311)
            at mlts.converter.Converter.processFile(Converter.java:312)
            at mlts.converter.Converter.Convert(Converter.java:229)
            at test.main(test.java:7)
    Following the source code, I've found that the exception is thrown when it reads the DTD. I tried to read the DTD using an InputSource in ASCII and in Latin-1, and my program reads it without any problem.
    I really appreciate any help,
    Thanks

    http://forum.java.sun.com/thread.jsp?forum=34&thread=254927
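
    One approach sometimes used in this situation (purely a sketch, not from the original thread; it assumes the DTD really is Latin-1 encoded - 0xa9 is the copyright sign in ISO-8859-1 but not a valid start byte of a UTF-8 sequence - and that xmlReader is the org.xml.sax.XMLReader in use, with "local-copy.dtd" as a placeholder for a locally stored copy of the DTD) is to install an EntityResolver that hands the DTD to the parser with its real encoding declared:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import org.xml.sax.EntityResolver;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;

    // Resolve the external DTD ourselves and declare it as Latin-1, so the
    // 0xa9 byte is no longer misread as the beginning of a UTF-8 character.
    xmlReader.setEntityResolver(new EntityResolver() {
        public InputSource resolveEntity(String publicId, String systemId)
                throws SAXException, IOException {
            if (systemId != null && systemId.endsWith(".dtd")) {
                return new InputSource(
                        new InputStreamReader(new FileInputStream("local-copy.dtd"), "ISO-8859-1"));
            }
            return null; // use the default resolution for everything else
        }
    });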
