Different codepages at FileUpload

I have a Web Dynpro that uploads a CSV file, transfers it into an internal table and works with it. This all works fine. But now a user tried to upload a UTF-8 encoded file and the result was garbled characters.
Is there any way to convert the file depending on the encoding?
My original coding looked like this:
  l_r_conv  =  cl_abap_conv_in_ce=>create( ).
  l_d_rest  = id_string.
  l_d_cr_lf  =   '0D0A'.
  l_d_cr_lf2 =   l_d_cr_lf.
  SPLIT l_d_rest AT l_d_cr_lf INTO TABLE l_t_file IN BYTE MODE.
  LOOP AT l_t_file INTO l_d_rest.
*    TRY.
    CALL METHOD l_r_conv->CONVERT
      EXPORTING
        INPUT           = l_d_rest
*        N               = -1
      IMPORTING
        DATA            = l_d_line.
*        LEN             =
*        INPUT_TOO_SHORT =
*     CATCH CX_SY_CONVERSION_CODEPAGE .
*     CATCH CX_SY_CODEPAGE_CONVERTER_INIT .
*     CATCH CX_PARAMETER_INVALID_TYPE .
*    ENDTRY.
    INSERT l_d_line INTO TABLE l_t_data.
  ENDLOOP.
  et_file  = l_t_data.
Thanks in advance
Dirk
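One possible direction (a sketch only, not an answer from this thread): cl_abap_conv_in_ce=>create also accepts an encoding parameter, so if the file's encoding is known, or detected from a UTF-8 byte order mark at the start of the upload, the converter could be created for that codepage explicitly. The variable names below are hypothetical; id_string is assumed to be the uploaded XSTRING from the coding above, and the BOM bytes themselves would still have to be skipped before the SPLIT.
  DATA: l_d_encoding TYPE abap_encoding VALUE 'DEFAULT',
        l_d_bom      TYPE xstring.
  "A UTF-8 byte order mark (EF BB BF) at the start of the file hints at UTF-8
  IF xstrlen( id_string ) >= 3.
    l_d_bom = id_string(3).
    IF l_d_bom = cl_abap_char_utilities=>byte_order_mark_utf8.
      l_d_encoding = 'UTF-8'.
    ENDIF.
  ENDIF.
  "Create the converter for the detected codepage instead of the default
  l_r_conv = cl_abap_conv_in_ce=>create( encoding = l_d_encoding ).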

DudeOnFire wrote:
Hello
After using iTunes for a few years, I found out that if I had an Artist/Song name with two different capitalizations in different places, it would create two different artists on my iPod. For example: Band of Horses and Band Of Horses. It looks the same and is pretty hard to spot while browsing my iTunes, but when I see it on my iPod, it irritates me. So I'm wondering if anyone knows a way to fix these quickly? Like some kind of mod/hack or program?
No quick fix that I know of - just scan through the Artists menu on the iPod looking for repeated artist names & fix them in iTunes. Now that you know it's a problem, set yourself style rules and stick to them in future. I Go For Title Case Myself. Tidying each album as you put it into iTunes is much easier than doing it after the fact.
Also, is there a way to make the duplicate searching less strict? Once I had a song that was spelled "Day N Nite" and the other one was "Day 'N' Nite", they were the exact same songs except iTunes didn't see it.
Nope...
For more on the first question see my previous post on Grouping Tracks Into Albums.
tt2

Similar Messages

  • Using different codepages in a csv file

    Hi guys,
    I'm working on a project where we load data from around 50 different Axapta systems into a DWH. The Axapta systems export the data, based on an exact schema, to a CSV file and send us this file. After that, we use a loop to run the same dataflow for each file (around 200 files).
    Some of these Axapta systems are not able to generate correct UTF-8 files. For most of the systems this is no problem, but we have a problem with some languages (Chinese, Thai, ...).
    The file format definition is set to UTF-8 and we cannot change it on the fly (the option is grayed out in the DataFlow). In that case we would have to create a DataFlow for each file we are sourcing. However, I'm not happy with this.
    Does anybody have an idea how we could solve this? Is it possible to change the codepage of the file format on the fly (in the loop)? Is it possible to use a variable to define the codepage of a DataFlow (it's not in the list)?
    Thanks for helping
    Christoph

    I would run a script at operating system level to convert all the files to the codepage you need before the file is uploaded into BW.

  • Polish after Unicode conversion - language from a different codepage how-to

    Hi,
    we have just converted a sandbox system from ASCII (SAP codepage 1100) to Unicode (SAP ECC 6.0, Solaris 10 SPARC, Oracle 10.2.0.4, etc.).
    In ASCII, the TCPDB table contained codepage 1100; now it is empty. Is that OK?
    Now we would like to install the Polish language, so I load it via SMLT from client 000, apply the delta support packages, run supplementation and then add L to the instance profile. Is that OK? Must I also install the Polish locale on Solaris? Any other work?
    Regards.

    > Now we would like to install the Polish language, so I load it via SMLT from client 000, apply the delta support packages, run supplementation and then add L to the instance profile. Is that OK?
    - execute program RSCPINST, add "PL" and activate the changes
    - open RZ10, edit your profile, "Basic maintenance" and select PL, save and restart instance
    - then add the language as described with SMLT (import, delta support package, supplementation and client distribution for your production client)
    > ... must I also install the Polish locale on Solaris? ... any other work?
    No. You are now on Unicode; the necessary libraries are the libicu* libraries in your kernel directory, which are, if you like, the locales for the operating system. So nothing more is necessary.
    To display the Polish characters correctly in the SAP GUI you need to
    - install the Polish codepage on the frontend system
    - switch the default font from "Arial Monospace" to some other font that supports Latin-2 (such as Courier New)
    Markus

  • Read data from an external Oracle-DB: Codepage problems

    Hi,
    I am trying to get data from an external Oracle DB which runs under NLS_CHARACTERSET
    WE8ISO8859P1. Russian texts are stored in this DB. If I read these texts via native SQL I obviously get wrong characters (e.g. Èíñòðóìåíòû äëÿ óãëîâîé øëèôîâàëüíîé ìàøèíû instead of Инструменты для угловой шлифовальной машины). If I save the text as an HTML file and then open it with IE, I can change the encoding and get the right view.
    Has anybody got an idea? (Maybe I can read the data in a different codepage, or maybe there is a possibility to convert the codepage in SAP after reading it from Oracle.)
    Thanks a lot !!!!
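    Not from this thread, but one direction sometimes tried on the ABAP side: since the text arrives decoded as Latin-1, it can in principle be encoded back to its Latin-1 bytes and then decoded again with a Cyrillic codepage. A rough sketch; lv_wrong_text is a hypothetical variable holding the garbled value, and the Cyrillic SAP codepage number (shown as '1504' for cp1251) is an assumption that would have to be verified.
    DATA: lr_out  TYPE REF TO cl_abap_conv_out_ce,
          lr_in   TYPE REF TO cl_abap_conv_in_ce,
          lv_raw  TYPE xstring,
          lv_text TYPE string.
    "Back to the original bytes (assumption: they were decoded 1:1 as Latin-1)
    lr_out = cl_abap_conv_out_ce=>create( encoding = '1100' ).  "SAP Latin-1
    lr_out->convert( EXPORTING data   = lv_wrong_text
                     IMPORTING buffer = lv_raw ).
    "Decode the same bytes with the Cyrillic codepage (assumed '1504')
    lr_in = cl_abap_conv_in_ce=>create( encoding = '1504' input = lv_raw ).
    lr_in->read( IMPORTING data = lv_text ).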

    The problem is solved.
    Many thanks !

  • Codepage conversion from multi-lingual table using JDBC

    Hello,
    I have an Oracle database that has been around since before any unicode was available. It is currently 9.2.0.7.0 and the charset is WE8ISO8859P1. I am trying to port all of the data in the 'Translation' table to a UTF-8 database. The problem is that each of the translated strings is stored in its own native MS Windows codepage. This table was populated over the years via OCI connections in C/C++, and this method worked. But now we need it moved to UTF-8 and are using JAVA. Obviously, all the cp1252 languages come over just fine, but all the other languages get converted to garbage by JDBC, since it is auto-converting them and assumes the source is 8859. I could do it with OCI, as I mentioned, but I am mandated to use JAVA and a web interface, so users can choose what data to move/convert to the new DB (on-demand) from a web page.
    Is there a way to force the Oracle connection to think the database is in a different codepage so that the conversion comes out correct? OR
    Is there a way to get the data from Oracle as unconverted chars, or generic binary, or something, and then have JAVA convert it to UTF-8 from the correct codepage before writing it to the new table?
    Any help is appreciated.
    Monte
    Message was edited by:
    Wookie

    ## Is there a way to force the Oracle connection to think the database is
    ## in a different codepage so that the conversion comes out correct?
    No.
    ## Is there a way to get the data from Oracle as unconverted chars,
    ## or generic binary, or something, and then have JAVA convert it
    ## to UTF-8 from the correct codepage before writing it to the new table?
    Yes. Use:
    SELECT UTL_RAW.CAST_TO_RAW(col) FROM tab;
    and retrieve the value with ResultSet.getBytes. Then convert the retrieved byte[] to java.lang.String using String(byte[] bytes, String charsetName) constructor. Note, charsetName is Java name. Then, you can use setString() on PreparedStatement containing the INSERT for the converted value.
    -- Sergiusz

  • Different Systems, same Configuration in ID, but different Result?!

    Hi guys,
    I'm working on an IDoc-to-file scenario which works in our development environment (DXI) but fails in our test environment (QXI).
    First of all, some information about our system landscape:
    We are using 3 XI systems for development (DXI), test (QXI) and production (PXI).
    For some reason we don't have one central SLD, but 3 SLDs (one for each XI system).
    So the Repository content can be transported via file, while the configuration in ID is environment specific (not transported) and has to be maintained manually.
    As I said above, the scenario is IDoc to file, more exactly 1 IDoc to 2 files.
    The result files of one IDoc have to follow the "same" file name pattern, for example
    file 1 = file2_TIMESTAMP.idoc
    file 2 = file1_TIMESTAMP.eds
    The order of the files is important: file 1 should be created first, file 2 afterwards.
    In this pattern, TIMESTAMP should be the same for both files, so I use the timestamp of the XI message of the IDoc
    (=> StreamTransformationConstants.TIME_SENT)
    The content of the files differs: while file 2 contains one text line with some control information (static plus some payload data), file 1 contains the IDoc XML, but with a different codepage (ISO-8859-1 instead of UTF-8).
    To get the right content and filename, 2 Java mappings are used, which are referenced in the interface determination. Furthermore the order of the files should be maintained, so that file 1 (IDoc XML) is created first and file 2 (control information) afterwards
    (filename mapping is done via Dynamic Configuration).
    As I said at the beginning, the whole scenario works fine in the development environment, but fails in test.
    In test the 2 files are created with the correct filenames, but the content of file 1 is wrong. It contains the same content as file 2 (control information).
    When I have a look at the ABAP monitoring (SXMB_MONI), the message content after the Java mapping is correct for both files,
    but in message monitoring (RWB) file 1 has the same content as file 2, which is incorrect.
    The Directory content of development and test is the same.
    Why is the result different?
    Does anybody have an idea?
    This error drives me crazy.
    Kind regards
    Jochen

    Hi Suddha,
    thanks for your help.
    Obviously something like that must be the reason for this behaviour.
    I compared the audit logs of a successful message (DXI) and a failed message (QXI).
    There was only one difference between these two messages:
    in QXI there was an entry like "process the underlying message of multi-message-ID xxxxxx".
    I have searched both the RWB and the integration server for this msg ID, but without success.
    I forgot one detail.
    There is one difference between the development environment and test.
    In test and production we have two Java nodes. In development we only have a single Java node.
    Could this have any influence on a scenario like this?
    Suddhasatta Guha wrote:
    Could this be a problem while retrieving the message from the receive queue... the application retrieving the wrong message before delivering it to the file channel?
    Since the IDoc message is split in the interface determination step - is there any parent message ID for these individual messages?
    In RWB the messages from the Integration Engine have a reference and a parent msg ID, but the Adapter Engine messages don't.
    In this point there is no difference between DXI and QXI.
    Maybe somebody has another idea how to get more information about this error.
    Kind regards
    Jochen

  • TRANSLATE LOWER-CASE statement returns different values in different systems

    Hi Experts,
         I encountered the following problem while debugging the print program of an Adobe Form, which has the below-mentioned line of code:-
    TRANSLATE v_var1 TO LOWER CASE.
    The variable v_var1 is of type MSEHT whose basic type is CHAR with length 10.
    The incoming variable v_var1 has the value 'ШT', which is the upper-case Russian equivalent of the Unit Of Measure - ST.
         In the Production system, after the execution of the statement, the value of v_var1 changes to 'шт'. But in the Development and Testing systems, which also have the same code as the Production system, the value of v_var1 changes to 'шt'. In all three systems I logged in with the same logon language ('E'). The same problem remains when I log in in Russian ('R'). The text gets reflected in the Adobe forms and there aren't any JavaScript/FormCalc conditions on the particular window that change the font/case. The user wants 'шт' to be displayed (as is being displayed now) in Prod. But since I'm not able to make the same appear in the testing system, they are not confident enough to transport the changes to Prod.
         In the help documentation of TRANSLATE, I found that the "text environment" is a factor that affects TRANSLATE statements and that it can be set by the statement
         SET LOCALE LANGUAGE lang.
         which basically changes the text environment language. There aren't any SET LOCALE LANGUAGE statements in the print program. I could hard-code and display the value, but I would like to know alternate solutions. Is it because of a system-specific font/text setting?
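     For illustration only (a hypothetical snippet, not code from the print program): the text environment could be switched temporarily around the statement and then restored, to test whether the locale is what makes Production behave differently.
     DATA lv_save_langu TYPE sy-langu.
     lv_save_langu = sy-langu.
     SET LOCALE LANGUAGE 'R'.            "Russian text environment
     TRANSLATE v_var1 TO LOWER CASE.     "lower-casing with the Russian locale rules
     SET LOCALE LANGUAGE lv_save_langu.  "back to the logon language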

    Hi David,
    perhaps the systems have different codepages. This may cause the difference.
    Just another suggestion.
    Regards
    Florian

  • How to set codepages in Data Integrator

    Data Integrator sets the codepage at the job server and the datastore layer. Each job server, which is the Data Integrator "engine" where all of the processing takes place, has a single codepage. In addition, each datastore has its own codepage. The datastore codepage is set via the "locale" of the datastore. Because Data Integrator allows each datastore to have its own codepage, you can build a single job which will read from and/or load into datastores with multiple different codepages.

    1st step: create your schema on your database; you can create one schema for both repositories (master and work) or one schema for each repository.
    2nd step: execute repcreate.bat
    3rd step: run topology and create one or more work repositories.
    You can also read the "Installation Guide"; let us know if you have any issues.
    Regards,

  • Codepage in OTF file

    Hi all,
    after a release update to ECC 6.0 our SAPscripts seem to use a different codepage in the OTF representation. The OTF file is sent to an external output management system which requires cp1100 instead of the new default cp4201.
    We already tried to change the codepage in the device type and even to send a print-control command from SAPscript, but it seems that when SAPscript starts to print text using the font HELVE it switches the codepage back to the default that is bound to HELVE.
    Any idea?
    Thanks and best regards
    Norman

    Hi,
    Thanks for feedback.
    Length is the issue. If I split the single variable into several so that each holds only 255 characters, it is possible not to lose part of the content.
    I tried to put the two variables (part1 and part2) on the same line in the SMARTFORM without any separator. This has exactly the same effect as using one variable which exceeds 255 characters: the content of the second one is lost in the OTF.
    I then inserted a blank between them, and I do not lose anything in the OTF.
    Unfortunately, due to the separator, the URL is also split into 2 parts in my final content, which is not really good.
    Splitting the label is not beautiful, but not dangerous. Adding a space to the URL may lead to a non-working link.
    I will try to handle this issue when preparing the content from the OTF file.
    If you have any hints on what to consider, I would appreciate it.
    Kind Regards,
    barbara

  • How Data Integrator supports multiple character sets in a single ETL transaction

    When using Data Integrator (DI) to process a mix of multi-byte and single-byte data, it is recommended that you use UTF-8 for the job server codepage. You can, however, use different codepages for the individual datastores.
    Imagine this situation: Great Big Company Inc. wants to create a global customer database. To do this, Great Big Company Inc. must read from a database of US customers, and a database of Korean customers. Great Big Company Inc. then wants to load both sets of customers into a single database.
    Can DI manage these requirements? Of course. The codepage is the thing.

    I've never seen this used the way you are using it. In my experience the only way to do this would be to execute a single SQL statement that returns multiple result sets - you are trying to append two SQL statements.
    You could define an in-line procedure wrapping your two select statements, or you could define a stored procedure to do the same thing. Then (either way) use CallableStatement to execute the call to the procedure.

  • How to change a Code Page in SAP SCRIPT ?

    I have a specific requirement and would need your help:
    There is a CODE PAGE which gets assigned to the SAPscript FORM. For example, 1100 is generally used for SAPscripts.
    I would like to know if there is a possibility to change the code page based on certain conditions.
    I would also like to know how and where a CODE PAGE and a SAPscript FORM are linked.
    I have found that a default code page can be set in the SAP Logon Pad, but that does not address the issues mentioned above.
    Kindly help me out with your valuable suggestions and solutions.
    Thank you
    Brijesh.

    Hi,
    MDMP means Multi Display Multi Processing. See the note system for details. The problem with MDMP is that the different languages use different codepages and not all characters can be displayed in every codepage.
    A solution would be a Unicode conversion of the system. In Unicode there is only one internal codepage, which contains all special characters of the individual languages. However, the migration needs several hours of downtime, and Unicode systems need more memory and disk space. I can't tell you a percentage for that. You need at least R/3 4.7 for Unicode.
    In a non-Unicode system I believe you can't change the codepage inside a SAPscript document directly. You would have to end form processing and start a new form. To change the language you can try the ABAP command SET LOCALE. Also see the language parameter of START_FORM.
    If that doesn't work I would try using RFC connections. Create RFC connections in your system which have the system itself as target system, but with the desired language filled in. You do not need to provide a user or password; those are taken from the logged-on user.
    Inside your printing program you put the actual printing into a function module. Your report can then call the function via RFC using a connection with the correct logon language.
    Greetings
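    A rough sketch of the RFC idea (the destination and function module names are made up for illustration): the destination points back to the same system but with the Polish logon language, and the printing logic is wrapped in a remote-enabled function module.
    "Destination 'OWN_SYSTEM_PL' is assumed to exist in SM59 with logon language PL
    CALL FUNCTION 'Z_PRINT_SAPSCRIPT_FORM'    "hypothetical RFC-enabled wrapper
      DESTINATION 'OWN_SYSTEM_PL'
      EXPORTING
        iv_formname           = 'Z_MY_FORM'
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        OTHERS                = 3.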

  • How to Pass a HEX-Value to AdobeLifeCycle (TA:SFP) .

    Hello all,
    how to pass a HEX-Value to print a BLACK RIGHT-POINTING TRIANGLE?
    I want to pass a HEX value to AdobeLifeCycle (TA:SFP).
    This is done as follows:
    *-- Variables
    DATA hex(2) TYPE x.
    SET BIT: 01 OF hex TO 0,
             02 OF hex TO 0,
             03 OF hex TO 1,
             04 OF hex TO 0,
             05 OF hex TO 0,
             06 OF hex TO 1,
             07 OF hex TO 0,
             08 OF hex TO 1,
             09 OF hex TO 1,
             10 OF hex TO 0,
             11 OF hex TO 1,
             12 OF hex TO 1,
             13 OF hex TO 1,
             14 OF hex TO 0,
             15 OF hex TO 1,
             16 OF hex TO 0.
    The hex value is '25BA' from codepage 4110 = BLACK RIGHT-POINTING TRIANGLE (use TA SPC to see it).
    I set the hex value before I call the PDF output:
    move hex to ls_frmglobal-hex. "(the field is defined as RAWSTRING)
    Then I call the output, i.e. the generated function module:
    CALL FUNCTION fm_name
      EXPORTING
        /1bcdwb/docparams = fp_docparams
        frmglobal         = ls_frmglobal
        frmisu            = ls_frmisu
        frminf            = ls_frminf
        connections       = connections
        t_sums            = t_sums
      EXCEPTIONS
        usage_error       = 1
        system_error      = 2
        internal_error    = 3
        OTHERS            = 4.
    The result is '25BA' instead of the BLACK RIGHT-POINTING TRIANGLE.
    Does anyone have an idea what is going wrong?
    I know how to print the triangle in the Designer; the question is how to pass the hex value from a different codepage than the standard codepage which we use.
    Thanks and regards
    Ibrahim
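    One hedged idea (not from this thread): instead of handing the form the raw bytes, the two bytes could first be decoded into the actual character and passed in an ordinary CHAR/STRING field of the interface. SAP codepage '4102' (UTF-16 big endian) is assumed here.
    DATA: lr_conv TYPE REF TO cl_abap_conv_in_ce,
          lv_xstr TYPE xstring,
          lv_char TYPE c LENGTH 1.
    lv_xstr = hex.                         "hex already holds x'25BA'
    lr_conv = cl_abap_conv_in_ce=>create( encoding = '4102'   "UTF-16BE, assumed
                                          input    = lv_xstr ).
    lr_conv->read( IMPORTING data = lv_char ).
    "lv_char now holds the triangle character itself and can be moved into a
    "normal character field of the form interface instead of the raw bytes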

    I am trying to match \xfa, which means: match fa as
    a hex value.
    All I want to know is how to use the RE class to check
    for a certain hex value.
    Correction:
    If the data is numeric, it can be matched using a hex
    or octal representation in the regular
    expression. For instance, the numeric value 6 will be
    matched with either of the following regexes (hex
    and octal, respectively). Read the API for
    the Pattern class if this doesn't make sense.
    "\06"
    "\006" (Should have checked first...)

  • GUI_DOWNLOAD problem in unicode system

    Hi Gurus,
    I am facing a problem with gui_download. We are doing Unicode remediation in one report. In the program, one internal table is declared as type c with length 255, and data is filled into it by importing the data from a cluster. After that, this internal table is used by the ws_download function module with file type 'BIN' to download it as a Word doc file. We replaced the function module with gui_download. It works fine in the non-Unicode system, but it does not download properly in the Unicode system.
    I am unable to find the cause. I tried giving different codepages at runtime; it does not solve my problem.
    << Moderator message - Everyone's problem is important. Please do not ask for help quickly. >>
    Thanks & Regards,
    Sastry R
    Edited by: Rob Burbank on Dec 13, 2010 9:39 AM

    Hi Clemens.
    I replaced the ws_download function module with gui_download.
    here is my code
    Earlier (before 6.0) the code was as follows:
    CALL FUNCTION 'WS_DOWNLOAD'
       EXPORTING
         bin_filesize            = data_len
         filename                = p_file
         filetype                = 'BIN'
       TABLES
         data_tab                = data_tab
       EXCEPTIONS
         file_open_error         = 1
         file_write_error        = 2
         invalid_filesize        = 3
         invalid_table_width     = 4
         invalid_type            = 5
         no_batch                = 6
         unknown_error           = 7
         gui_refuse_filetransfer = 8
         OTHERS                  = 9.
    IF sy-subrc <> 0 AND no_error_dlg = space.
       MESSAGE i002(sy) WITH text-i03.    "FILE OPEN ERROR
    ENDIF.
    I replaced the above with the following code:
      DATA:lv_fname TYPE string,
           lv_ftype(10) VALUE 'BIN',
           lv_codepage type abap_encod VALUE '4102'.
    CALL METHOD cl_gui_frontend_services=>gui_download
        EXPORTING
          bin_filesize            = data_len
          filename                = lv_fname
          filetype                = lv_ftype
          codepage                = lv_codepage
        CHANGING
          data_tab                = data_tab
        EXCEPTIONS
          file_write_error        = 1
          no_batch                = 2
          gui_refuse_filetransfer = 3
          invalid_type            = 4
          no_authority            = 5
          unknown_error           = 6
          header_not_allowed      = 7
          separator_not_allowed   = 8
          filesize_not_allowed    = 9
          header_too_long         = 10
          dp_error_create         = 11
          dp_error_send           = 12
          dp_error_write          = 13
          unknown_dp_error        = 14
          access_denied           = 15
          dp_out_of_memory        = 16
          disk_full               = 17
          dp_timeout              = 18
          file_not_found          = 19
          dataprovider_exception  = 20
          control_flush_error     = 21
          not_supported_by_gui    = 22
          error_no_gui            = 23
          OTHERS                  = 24.
      IF sy-subrc <> 0 AND no_error_dlg = space.
        MESSAGE i002(sy) WITH text-i03.    "FILE OPEN ERROR
      ENDIF.
    I tried all other codepages as well, like 4110/4103/1110/1100/1102. It is not working.
    It is giving a problem in the Unicode system. The file is downloaded, but not properly,
    and when I open the Word file it asks me to select an encoding type, from the available text encodings, to make the document readable.
    Please help me..
    Thanks & Regards,
    Sastry R
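    One idea to try (a sketch only, not confirmed as the fix): with FILETYPE = 'BIN' the codepage parameter is presumably not applied to the written bytes, so the character content could be converted explicitly to the codepage the old non-Unicode system used (assumed '1100' here) and that byte stream downloaded as binary. data_tab is the table from the code above; the other names are hypothetical.
    DATA: lr_out    TYPE REF TO cl_abap_conv_out_ce,
          lv_buffer TYPE xstring,
          lt_bin    TYPE solix_tab,
          lv_size   TYPE i,
          lv_line   TYPE c LENGTH 255.
    lr_out = cl_abap_conv_out_ce=>create( encoding = '1100' ).  "legacy codepage, assumed
    LOOP AT data_tab INTO lv_line.
      lr_out->write( data = lv_line ).
    ENDLOOP.
    lv_buffer = lr_out->get_buffer( ).
    "Turn the byte string into a table that gui_download can write with filetype 'BIN'
    CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
      EXPORTING
        buffer        = lv_buffer
      IMPORTING
        output_length = lv_size
      TABLES
        binary_tab    = lt_bin.
    "lt_bin and lv_size would then replace data_tab and data_len in the download call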

  • cl_gui_frontend_services=>gui_download - issue with German special characters

    Hello,
    we are using cl_gui_frontend_services=>gui_download to create an Excel file from an internal table.
    We face the issue that in this Excel file German special characters like Ä, Ü, Ö, ß are not displayed correctly.
    I think we need to use a different codepage. But which one?
    Could you please give us a short coding example of how to call cl_gui_frontend_services=>gui_download?
    Thanks a lot
    Kind regards
    Manfred

    Hi,
    Check whether the system is Unicode or non-Unicode. The codepage for a Unicode system is '4102' and for a non-Unicode system '1100'.
    Below is sample code for GUI_DOWNLOAD together with the class-based file dialog.
    DATA:  l_filename    TYPE string,
           l_filen       TYPE string,
           l_path        TYPE string,
           l_fullpath    TYPE string,
           l_usr_act     TYPE I.
    l_filename = SPACE.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>FILE_SAVE_DIALOG
      EXPORTING
        DEFAULT_FILE_NAME    = l_filename
      CHANGING
        FILENAME             = l_filen
        PATH                 = l_path
        FULLPATH             = l_fullpath
        USER_ACTION          = l_usr_act
      EXCEPTIONS
        CNTL_ERROR           = 1
        ERROR_NO_GUI         = 2     
        NOT_SUPPORTED_BY_GUI = 3
        others               = 4.
    IF sy-subrc = 0
          AND l_usr_act <>
          CL_GUI_FRONTEND_SERVICES=>ACTION_CANCEL.
    CALL FUNCTION 'GUI_DOWNLOAD'
      EXPORTING
        FILENAME                        = l_fullpath
       FILETYPE                        = 'DAT'
      TABLES
        DATA_TAB                        = T_DOWNL
    EXCEPTIONS
       FILE_WRITE_ERROR                = 1
       NO_BATCH                        = 2
       GUI_REFUSE_FILETRANSFER         = 3
       INVALID_TYPE                    = 4
       NO_AUTHORITY                    = 5
       UNKNOWN_ERROR                   = 6
       HEADER_NOT_ALLOWED              = 7
       SEPARATOR_NOT_ALLOWED           = 8
       FILESIZE_NOT_ALLOWED            = 9
       HEADER_TOO_LONG                 = 10
       DP_ERROR_CREATE                 = 11
       DP_ERROR_SEND                   = 12
       DP_ERROR_WRITE                  = 13
       UNKNOWN_DP_ERROR                = 14
       ACCESS_DENIED                   = 15
       DP_OUT_OF_MEMORY                = 16
       DISK_FULL                       = 17
       DP_TIMEOUT                      = 18
       FILE_NOT_FOUND                  = 19
       DATAPROVIDER_EXCEPTION          = 20
       CONTROL_FLUSH_ERROR             = 21
       OTHERS                          = 22.
    IF SY-SUBRC <> 0.
       MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
               WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDIF.
    Edited by: Sumodh P on May 11, 2010 5:24 PM
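    A hedged addition to the sample above: both GUI_DOWNLOAD and cl_gui_frontend_services=>gui_download also accept a CODEPAGE parameter (and the class method a WRITE_BOM flag), which is usually where the umlaut problem is addressed. A combination often suggested for Excel is UTF-16LE with a byte order mark (SAP codepage '4103') for a tab-separated file; whether Ä, Ö, Ü and ß then show up correctly still depends on how Excel opens the file. A sketch, reusing T_DOWNL and l_fullpath from the example:
    DATA lv_codepage TYPE abap_encod VALUE '4103'.  "UTF-16LE, assumed suitable here
    CALL METHOD cl_gui_frontend_services=>gui_download
      EXPORTING
        filename  = l_fullpath
        filetype  = 'DAT'                "tab-separated text
        codepage  = lv_codepage
        write_bom = abap_true            "byte order mark so Excel detects the encoding
      CHANGING
        data_tab  = T_DOWNL
      EXCEPTIONS
        OTHERS    = 1.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.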

  • Select with timestamp in the where clause doesn't work after importing data

    Hello,
    I have two databases (I call them db1 and db2; they have the same data structure) and exported some data from the table "rfm_meas" from db1. Later on I imported that dataset into the "rfm_meas" table of db2. The table contains a column with the datatype "timestamp(6)", and checking the success of the import looks fine:
    (executed on db2)
    SELECT
    id,acqtime
    from
    rfm_meas
    WHERE
    box_id=1
    AND id>145029878
    Returns two rows:
    ID ACQTIME
    145029883 01.06.10 10:30:00,000000000
    145029884 01.06.10 10:50:00,000000000
    These seem to be valid timestamps, as I expected.
    But if I now want to select all rows for box_id = 1 which are newer than, e.g., 25-May-2010, I would try this:
    SELECT
    id,acqtime
    from
    rfm_meas
    WHERE
    box_id=1
    AND acqtime>=to_timestamp('25-05-2010 17:10:00,000','DD-MM-YYYY HH24:MI:SS,FF3')
    And it returns ... nothing!? If I execute the same query on db1 it works correctly.
    I guess db1 and db2 have different codepages!?
    If I insert some rows into the "rfm_meas" table of db2 from a PL/SQL script, queries like the one above work fine. Therefore I guess something must have gone wrong during the import, so I can see the timestamp but can't use it.
    How can I fix that? Any ideas?
    If someone needs more details I will provide them.
    Regards
    Steffen

    Check this link out:
    Importing timestamp columns appears to use to_date instead of to_timestamp
